MPP2020-525

Policy Evaluation

Le Viet Phu, Edmund J. Malesky
Week 1 | 2 | 3 | 4 | 5 | 6
Wednesday, 19/06/2019, 15:15 - 16:45

Section 1: Fundamentals of Policy Evaluation [Le Viet Phu]

Lecture 1. What is Policy Evaluation and Causal Inference?

The importance of evaluation and how it differs from monitoring; counterfactuals and the potential outcomes framework. A short illustrative sketch follows the reading list below.

  • EGAP, "10 Strategies for Figuring Out if X Caused Y"
  • EGAP, "10 Things You Need to Know about Causal Inference"
  • Angrist and Pischke, Introduction.
  • KKS, pp. 3-20.
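The following minimal Python sketch illustrates the potential outcomes idea discussed in this lecture. It is not part of the assigned materials; all variable names and numbers are invented for exposition.

```python
# Illustrative sketch only: the potential outcomes framework in a few lines.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Each unit has two potential outcomes: y0 (untreated) and y1 (treated).
y0 = rng.normal(loc=50, scale=10, size=n)
y1 = y0 + 5                      # a constant treatment effect of 5, for illustration

# The average treatment effect is defined over both potential outcomes ...
ate = np.mean(y1 - y0)

# ... but in any dataset we observe only one of them per unit
# (the "fundamental problem of causal inference").
d = rng.integers(0, 2, size=n)   # randomized treatment assignment
y_obs = np.where(d == 1, y1, y0)

# Under randomization, the difference in observed means recovers the ATE.
diff_in_means = y_obs[d == 1].mean() - y_obs[d == 0].mean()
print(f"true ATE = {ate:.2f}, difference in means = {diff_in_means:.2f}")
```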
Thursday, 20/06/2019, 15:15 - 16:45

Section 2: Randomized Controlled Trials [Edmund Malesky]

Lecture 2. Introduction to Randomized Controlled Trials (RCTs)

Basics of field experiments and exploration of different designs

  • KKS, pp. 33-38.
  • EGAP, "10 Types of Treatment Effects You Should Know About".
  • EGAP, "10 Things to Know about External Validity".
Friday, 21/06/2019, 15:15 - 16:45

Lecture 3. Designing and Implementing an RCT

Random assignment, sampling, blocking/stratification, and power calculations. A sample-size sketch follows the reading list below.

  • KKS, pp. 39-50.
  • Angrist and Pischke, Chapter 1.
  • EGAP, "10 Things to Know about Randomization".

This course offers a first systematic approach to policy evaluation from the perspective of a practitioner. It provides the rationale for why evaluation can be used to inform and improve policy development, adoption, implementation, and effectiveness, and how it builds the evidence base for policy interventions. We begin with experimental approaches, the gold standard in program evaluation. The main purpose of randomized evaluations is to determine whether a program has an impact and, more specifically, to quantify how large that impact is. Impact evaluations typically measure program effectiveness by comparing the outcomes of those who received the program against those who did not. We will learn a basic set of skills for designing and evaluating policy interventions, and then practice them immediately. The first lecture will be devoted to the goals and organization of program design before we begin our discussion of the experimental ideal. Each subsequent class will delve into a particular research tool used in evaluation to approximate the experimental ideal (randomized controlled trials, survey experiments, regression discontinuity design, matching estimators, and difference-in-differences). Within each lecture, we will discuss the underlying assumptions, power estimations, and diagnostics for determining whether the tool is appropriate for the particular research question.
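To make the treated-versus-untreated comparison concrete, here is a short, purely illustrative Python sketch on simulated data; nothing here is drawn from the course's own applications, and the data-generating process is invented.

```python
# Illustrative only: impact estimated as a difference in mean outcomes between
# program recipients and non-recipients, with a conventional standard error.
import numpy as np

rng = np.random.default_rng(1)
n = 2_000
d = rng.integers(0, 2, size=n)                    # 1 = received the program
y = 10 + 2 * d + rng.normal(scale=5, size=n)      # outcome; true impact is 2

treated, control = y[d == 1], y[d == 0]
impact = treated.mean() - control.mean()
se = np.sqrt(treated.var(ddof=1) / treated.size + control.var(ddof=1) / control.size)
print(f"estimated impact = {impact:.2f} (SE = {se:.2f})")
```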

The course will take the organizational structure of a workshop. Mindful of the challenges of teaching econometrics without formulas, we have selected an approach that offers a balanced, narrative-based combination of theory, in-class discussion, and computer applications. The course assessment is based on identifying a critical policy question that students are interested in and then designing the ideal evaluation for it. The final project will be a Pre-Analysis Plan, a specialized research design that lays out the specifics of how a new policy will be evaluated.
