We propose a test for the identifying assumptions invoked in designs based on random assignment to one of many "judges." We show that standard identifying assumptions imply that the conditional expectation of the outcome given judge assignment is a continuous function, with bounded slope, of the judge's propensity to treat. This implication leads to a two-part test: one part generalizes the Sargan-Hansen overidentification test, and the other assesses whether the treatment effects implied across the range of judge propensities are feasible given the domain of the outcome. We show the asymptotic validity of the testing procedure, demonstrate its finite-sample performance in simulations, and apply the test in an empirical setting examining the effects of pre-trial release on defendant outcomes in Miami. When these assumptions are not satisfied, we propose a weaker average monotonicity assumption under which IV still converges to a proper weighted average of treatment effects.
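The bounded-slope implication has a simple population analogue: for an outcome whose range has width one (say, a binary outcome), the treatment effect implied by any pair of judges, the ratio of the difference in judge-level mean outcomes to the difference in judge propensities, must lie in [-1, 1]. A minimal sketch of that diagnostic (function names are illustrative, and this population-level check omits the sampling uncertainty the paper's formal test accounts for):

```python
import numpy as np

def implied_slopes(y, d, judge):
    """Pairwise implied treatment effects across judges.

    For each pair of judges (j, k), computes
    (mean outcome under j - mean outcome under k) / (propensity j - propensity k),
    which the identifying assumptions restrict to the outcome's range.
    Illustrative only: a formal test must account for estimation error.
    """
    judges = np.unique(judge)
    p = np.array([d[judge == j].mean() for j in judges])      # judge propensities
    ybar = np.array([y[judge == j].mean() for j in judges])   # judge mean outcomes
    i, k = np.triu_indices(judges.size, k=1)                  # all judge pairs
    dp = p[i] - p[k]
    keep = dp != 0                                            # drop ties in propensity
    return (ybar[i] - ybar[k])[keep] / dp[keep]
```

In finite samples these ratios can exceed the feasible range by chance even when the design is valid, which is why a formal test, rather than this raw comparison, is needed.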
The presence of multiple parameters complicates statistical inference. Ignoring multiplicity can result in misleading conclusions, while existing remedies often produce inferences too imprecise to be economically meaningful. This article proposes an approach that delivers valid simultaneous inference across all parameters while maintaining precision on the parameter or parameters of greatest interest. The approach allows inference to reflect the researcher's differing preferences for precision across parameters, yielding hypothesis tests that are more powerful and confidence regions with shorter projections on the parameters the researcher cares more about, while remaining jointly valid. A researcher using the procedure specifies in advance non-negative weights corresponding to the relative preference for precision across parameters; the procedure then chooses a confidence region that minimizes the weighted sum of its projections onto the parameter dimensions. A decision-theoretic framework presents axioms on researcher preferences under which the proposed procedure is optimal. An empirical example from a field experiment on charitable giving shows that the method offers substantial improvements in real-world settings.
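The simplest device in this spirit, though far cruder than the paper's optimal procedure, is a weighted Bonferroni correction: spend more of the overall alpha budget on the parameters with larger weights, shortening their projections while keeping joint coverage of at least 1 - alpha. A hedged sketch (function name and defaults are illustrative, not the paper's method):

```python
import numpy as np
from scipy.stats import norm

def weighted_bonferroni_ci(est, se, weights, alpha=0.05):
    """Weighted Bonferroni rectangles: allocate alpha_k = alpha * w_k / sum(w)
    to parameter k, so higher-weight parameters get shorter intervals.
    Joint coverage >= 1 - alpha follows from the union bound.
    A crude stand-in for an optimal weighted procedure, for intuition only.
    """
    est = np.asarray(est, float)
    se = np.asarray(se, float)
    w = np.asarray(weights, float)
    a = alpha * w / w.sum()          # per-parameter alpha, proportional to weight
    z = norm.ppf(1 - a / 2)          # per-parameter critical value
    return est - z * se, est + z * se
```

With equal weights this reduces to the ordinary Bonferroni rectangle; tilting the weights trades projection length across dimensions without sacrificing joint validity.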
Revise and resubmit at Journal of Political Economy
Using administrative data matching individual worker earnings to employers in a regression discontinuity design based on close union representation elections, this study presents new evidence on the impacts of unionization on establishment and worker outcomes. The paper first shows that close union elections are subject to nonrandom selection, with large discontinuities in pre-election characteristics at the majority threshold. Estimates accounting for this selection show, perhaps surprisingly, that unionization significantly and substantially decreases establishment-level payroll, employment, average worker earnings at the establishment, and the probability of establishment survival. The decreases in payroll and earnings are driven by union impacts on the composition of workers at unionizing establishments, with older and higher-paid workers more likely to leave and younger workers more likely to join or stay. Worker-level effects on the earnings of workers who stay are small. The contrast between the large negative establishment-level effects and the small worker-level effects is interpreted through a model of employer and employee selection into union jobs.
With Lars Lefgren
This article develops bounds on the distribution of treatment effects under plausible and testable assumptions on the joint distribution of potential outcomes, namely that potential outcomes are stochastically increasing, and shows how to test the empirical restrictions those assumptions imply. The resulting bounds substantially sharpen the classical bounds based on Fréchet-Hoeffding limits. We present an application bounding the distribution of the effects of attending a Knowledge Is Power Program (KIPP) charter school on student academic achievement.
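For reference, the classical bounds being sharpened, the Makarov bounds on the CDF of the treatment effect built from Fréchet-Hoeffding limits, can be computed directly from the two marginal distributions. A minimal sketch using empirical CDFs (illustrative; the paper's contribution is the tighter bounds under stochastic increasingness, not this baseline):

```python
import numpy as np

def makarov_bounds(y1, y0, deltas):
    """Classical Makarov bounds on F(d) = P(Y1 - Y0 <= d), using only the
    marginal empirical CDFs of the potential outcomes. These bounds use no
    information about the dependence between Y1 and Y0."""
    y1 = np.sort(np.asarray(y1, float))
    y0 = np.sort(np.asarray(y0, float))
    F1 = lambda x: np.searchsorted(y1, x, side="right") / y1.size
    F0 = lambda x: np.searchsorted(y0, x, side="right") / y0.size
    lower, upper = [], []
    for d in deltas:
        x = np.union1d(y1, y0 + d)                # grid where the CDFs jump
        diff = F1(x) - F0(x - d)
        lower.append(max(diff.max(), 0.0))        # sup_x max(F1(x) - F0(x - d), 0)
        upper.append(1.0 + min(diff.min(), 0.0))  # 1 + inf_x min(F1(x) - F0(x - d), 0)
    return np.array(lower), np.array(upper)
```

Restrictions on the joint distribution, such as the stochastically increasing assumption above, rule out the most adverse couplings and so tighten these limits.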
This paper describes a randomization-based estimation and inference procedure for the distribution or quantiles of potential outcomes with a binary treatment and instrument. The method imposes no parametric model for the treatment effect and remains valid with small samples, a weak instrument, or tail quantiles, settings in which conventional large-sample methods break down. The method is illustrated using simulations and data from a randomized trial of college student incentives and services.
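For orientation, the textbook starting point is the Fisher randomization test of a sharp null that the treatment effect is a constant for every unit; the procedure described above is precisely about dropping that parametric restriction. A minimal sketch of the constant-effect version (names and defaults are illustrative):

```python
import numpy as np

def fisher_pvalue(y, d, tau, n_perm=2000, seed=0):
    """Fisher randomization p-value for the sharp null Y(1) - Y(0) = tau
    for every unit. Imputes Y(0) under the null, then compares the observed
    difference in means against its randomization distribution.
    Finite-sample valid for any n, unlike large-sample approximations."""
    rng = np.random.default_rng(seed)
    y0 = y - tau * d                         # imputed control outcomes under the null
    def stat(assign):
        return y0[assign == 1].mean() - y0[assign == 0].mean()
    obs = abs(stat(d))
    hits = sum(abs(stat(rng.permutation(d))) >= obs for _ in range(n_perm))
    return (1 + hits) / (1 + n_perm)         # add-one correction keeps the test exact
```

Because validity comes from the random assignment itself rather than asymptotics, the p-value is exact for any sample size, which is what makes randomization inference attractive for the small-n and tail-quantile settings mentioned above.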