This course offers a first systematic approach to policy evaluation from the perspective of a practitioner. It provides the rationale for why evaluation may be used to inform and improve policy development, adoption, implementation, and effectiveness, and for how it builds the evidence base for policy interventions. We begin with experimental approaches, the gold standard in program evaluation. The main purpose of randomized evaluations is to determine whether a program has an impact and, more specifically, to quantify how large that impact is. Impact evaluations typically measure program effectiveness by comparing the outcomes of those who received the program against those who did not. We will learn a basic set of skills for designing and evaluating policy interventions, and then practice them immediately. The first lecture will be devoted to the goals and organization of program design before beginning our discussion of the experimental ideal. Each subsequent class will delve into a particular research tool used in evaluation to attempt to recover the experimental ideal (randomized controlled trials, survey experiments, regression discontinuity design, matching estimators, and difference-in-differences). Within each lecture, we will discuss the underlying assumptions, power estimations, and diagnostics for determining whether the tool is appropriate for the particular research question.
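To make the core idea concrete, the following is an illustrative sketch (not part of the course materials) of how a randomized evaluation quantifies impact: participants are randomly assigned to treatment or control, and the difference in mean outcomes between the two groups estimates the program's effect. All data here are simulated, and the effect size is an assumed value chosen for the example.

```python
# Sketch of a randomized evaluation: estimate a program's impact by
# comparing mean outcomes of a treatment group against a control group.
# Data are simulated; TRUE_EFFECT is an assumed value for illustration.
import random

random.seed(0)

N = 1000           # number of participants
TRUE_EFFECT = 2.0  # assumed effect of the program on the outcome

# Randomly assign each participant to treatment (1) or control (0).
treatment = [random.randint(0, 1) for _ in range(N)]

# Simulate outcomes: a noisy baseline plus the effect for treated units.
outcome = [random.gauss(10.0, 3.0) + TRUE_EFFECT * t for t in treatment]

treated = [y for y, t in zip(outcome, treatment) if t == 1]
control = [y for y, t in zip(outcome, treatment) if t == 0]

# Because randomization makes the two groups comparable on average,
# the difference in means estimates the average treatment effect.
impact_estimate = sum(treated) / len(treated) - sum(control) / len(control)
print(f"Estimated impact: {impact_estimate:.2f} (true effect: {TRUE_EFFECT})")
```

With a sample of this size, the estimate lands close to the assumed effect; the later lectures on power estimation address how large a sample is needed for the estimate to be reliably distinguishable from zero.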
The course is organized as a workshop. Recognizing the challenges of teaching econometrics without formulas, we have adopted a narrative-based approach that combines theory, in-class discussions, and computer applications. The course assessment is based on identifying a critical policy question that students are interested in and then designing the ideal evaluation for it. The final project will be a Pre-Analysis Plan, a specialized research design that lays out the specifics of how a new policy will be evaluated.