On Rigor

Lant Pritchett wrote a piece for the Building State Capacity blog about the notion of “rigorous evidence.” At the risk of putting words in his mouth, my sense is that his argument boils down to this: promoters of evidence-based policy overplay their hands by focusing exclusively on internal validity[1]. He says as much in his post:

Evidence would be “rigorous” about predicting the future impact of the adoption of a policy only if the conditions under which the policy was to be implemented were exactly the same in every relevant dimension as that under which the “rigorous” evidence was generated. But that can never be so because neither economics—nor any other social science—have theoretically sound and empirically validated invariance laws that specify what “exactly the same” conditions would be.

Pritchett raises an important point: our understanding of internal validity, and our methods for assessing it, are far more developed than their counterparts for external validity[2]. However, I can’t help but feel that Pritchett is overplaying his hand as well. We consider a study internally valid if our comparison groups are equivalent in expectation, not if they are exactly the same in every relevant dimension. This may seem like mincing words, but there is a real distinction between demanding exact equivalence and plausibly arguing the absence of bias. The latter is the standard to which we hold studies when assessing internal validity, and it is the standard we should apply to external validity as well. Still, the point remains that our understanding of external validity falls far short of even this weaker standard.

Link: Rigorous Evidence Isn’t

  1. Wikipedia’s entry on external validity vs. internal validity provides a nice overview of the tension for those unfamiliar with the concepts.  ↩

  2. A recent working paper by Hunt Allcott and Sendhil Mullainathan is an interesting foray into developing metrics for external validity. Unfortunately, these metrics seem to require a whole heap of data that is rarely available for a single intervention.  ↩

EZ does it

While discussing research workflows with colleagues, I’ve been surprised to hear how many get the full text of a journal article the long way around: coming across an article, navigating to their school library’s website, searching for the journal under E-Resources, clicking on the journal link, digging down to the article of interest, and downloading it.

It doesn’t have to be this way (usually).

In most cases, links to journal pages through your library’s website are just the ordinary link plus a prefix (an EZproxy URL) that validates your school credentials to check whether you have access. Thus, instead of:
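(For illustration, suppose the article lives at a hypothetical JSTOR address; any journal URL works the same way.)

```
https://www.jstor.org/stable/1234567
```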


Your library’s link will look something like:
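(Continuing with a hypothetical JSTOR article URL and the Harvard EZproxy prefix mentioned further down:)

```
http://ezp-prod1.hul.harvard.edu/login?url=https://www.jstor.org/stable/1234567
```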


Your browser can make this change through what’s called a ‘bookmarklet’: a bookmark that, when clicked, runs a bit of JavaScript against the URL currently in your browser. In my case, creating a bookmark with the following script as its content will redirect a journal’s site through my school library:
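Here is a minimal sketch of such a bookmarklet, assuming the Harvard EZproxy prefix listed below (swap in your own school’s prefix):

```
javascript:void(window.location = 'http://ezp-prod1.hul.harvard.edu/login?url=' + window.location.href)
```

If the article URL itself contains a query string, wrapping `window.location.href` in `encodeURIComponent()` keeps it intact through the redirect.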


Alternatively, you can simply drag this EZproxy link into your browser bar.

Here’s a list of different schools’ EZproxy URLs. Edit the code above by replacing http://ezp-prod1.hul.harvard.edu/login?url= with your school’s URL.
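The substitution itself is just string concatenation; as a sketch (the prefix is Harvard’s from above, and the article URL is a hypothetical placeholder):

```javascript
// Prepend a school's EZproxy prefix to an article URL.
function proxyUrl(prefix, articleUrl) {
  return prefix + articleUrl;
}

// Harvard's prefix; replace with your school's from the list above.
const prefix = 'http://ezp-prod1.hul.harvard.edu/login?url=';

// A hypothetical article URL for illustration:
console.log(proxyUrl(prefix, 'https://www.jstor.org/stable/1234567'));
// → http://ezp-prod1.hul.harvard.edu/login?url=https://www.jstor.org/stable/1234567
```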

In practice, this means that if you hit a paywall when trying to access a full PDF, one click of the bookmarklet reloads the page through your library’s proxy with full access.
Now, the caveat: this will not work for certain journals[1]. This may be because your school accesses that journal through some larger database, in which case you may have to go back to your library website, like an animal. But this works for me most of the time; hope you find it useful.

  1. Note that this may simply be because your school doesn’t have access to that journal. Ugh, I know.  ↩

Teaching Regression Discontinuity

A very enjoyable post on the regression discontinuity study design over at the must-read Development Impact blog. I was lucky enough to introduce this design to a room full of policy master’s students this semester, and I agree completely with all of Evans’ points. In addition to being a conceptually interesting design, RD is a great way to introduce students to the rationale behind quasi-experimental studies.