Contradictions of the MPH

But the more time I spent as a student of public health, the more my worries about impracticality gave way to a funny feeling of being left out. Our professors were trained as statisticians, economists, and sociologists; what was I being trained as? Was public health a discipline? An area of expertise? An employment category? After years as a quantitative researcher, I still hesitate to call myself a statistician or an econometrician; I suspect those who work in qualitative methods have similar identity crises with respect to anthropology and ethnography. My courses still adhered to the academic convention of training observers, not practitioners, yet my training was intentionally discipline-agnostic. As a result I never quite feel at home: too dispassionate to be a practitioner, too invested to be an academic.

Yours truly writing for the Brooklyn Quarterly.

Link: Digging Into Data

On Rigor 2

This is a follow-up to my previous post on external validity and rigor, and a further attempt to pretend that this blog is not just about productivity, apps, and hacks.

Jed Friedman has a great piece on the Development Impact blog about a working paper by Hunt Allcott (whom I cited in my previous post). Allcott describes a concept called “external unconfoundedness,” which perfectly articulates what I was trying to get at in my previous post: an attempt to bring statistical notions of unbiasedness to questions of external validity. Many of the conditions for external unconfoundedness concern the environment of the original study, and Allcott is particularly interested in site selection bias, the extent to which the setting for a study is chosen because of favorable conditions.

Both the working paper and the post are great reads.

Link: Toward a more systematic approach to external validity: Understanding site-selection bias

MOOC Money

Caroline M. Hoxby recently published an NBER working paper on the role of MOOCs in the future business models of both “selective” and “non-selective” postsecondary institutions. The bulk of the paper would appeal mainly to economists, but the end is where things get interesting.

Hoxby argues that the current way MOOCs are run (courses made available publicly) could align with the business model of non-selective institutions, in that students would pay to enroll in a course just as they would at a brick-and-mortar school. For “Highly Selective Postsecondary Education” (HSPE) institutions (schools that rely heavily on alumni donations), though, Hoxby presents an alternative:

Viable online education for HSPE must deal with two problems: (i) the selectivity necessary for offering advanced education and (ii) the experiences that build the beliefs and adherence that sustain the venture capital-like financial model.

Consider a system in which HSPE institutions created online versions of their courses that could be traded with other institutions whose students had similarly high aptitude and preparation. The exporting institution could maintain the advanced nature of the course by limiting enrollment to those outside students who were best prepared, by disallowing outside students whose home institutions had previously sent students who underperformed, or by insisting that the outside students receive support (interactions and assessment) from an instructor at their home institution who is trusted by the exporting faculty member. Exporting institutions might offer such courses at a sustainable cost. A student’s home HSPE institution would continue to set his degree requirements, grant his degree, and be responsible for all other aspects of his PE experience.

I haven’t thought enough about this model to consider its implications, but this is one of the more novel pieces I’ve come across that speaks directly to how higher education business models can incorporate these new modes of teaching. If you’re aware of others, please do let me know!

Link: The Economics of Online Postsecondary Education: MOOCs, Nonselective Education, and Highly Selective Education

On Rigor

Lant Pritchett wrote a piece for the Building State Capacity blog about the notion of “rigorous evidence.” At the risk of putting words in his mouth, my sense is that his argument boils down to this: promoters of evidence-based policy overplay their hands by focusing exclusively on internal validity[1]. He says as much in his post:

Evidence would be “rigorous” about predicting the future impact of the adoption of a policy only if the conditions under which the policy was to be implemented were exactly the same in every relevant dimension as that under which the “rigorous” evidence was generated. But that can never be so because neither economics—nor any other social science—have theoretically sound and empirically validated invariance laws that specify what “exactly the same” conditions would be.

Pritchett raises an important point: our understanding of internal validity, and our methods for assessing it, are far more developed than their counterparts for external validity[2]. However, I can’t help but feel that Pritchett is overplaying his hand as well. We consider a study internally valid if our comparison groups are equivalent in expectation, not if they are exactly the same in every relevant dimension. This may sound like mincing words, but there is a real distinction between demanding exact equivalence and plausibly arguing the absence of bias. The latter is the standard to which we hold studies when assessing internal validity, and it is the standard we should apply to external validity as well. Still, the point remains that our understanding of external validity falls short of even this weaker definition.
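To make the “equivalent in expectation” point concrete, here is a toy simulation of my own (not from any of the papers discussed): in any single randomization the treatment and control groups differ on a baseline covariate, but averaged over many randomizations the difference vanishes.

```python
import random

def covariate_gap(n=1000, seed=0):
    """Randomly split n units into treatment/control and return the
    difference in mean baseline covariate between the two groups."""
    rng = random.Random(seed)
    # A pre-treatment covariate drawn from the same distribution for everyone.
    covariate = [rng.gauss(0, 1) for _ in range(n)]
    assignment = [rng.random() < 0.5 for _ in range(n)]
    treat = [x for x, t in zip(covariate, assignment) if t]
    control = [x for x, t in zip(covariate, assignment) if not t]
    return sum(treat) / len(treat) - sum(control) / len(control)

# Any single randomization leaves some imbalance...
single_draw_gap = covariate_gap(seed=1)

# ...but averaging over many randomizations, the gap shrinks toward zero:
# the groups are equivalent in expectation, never identical in any one draw.
avg_gap = sum(covariate_gap(seed=s) for s in range(200)) / 200
```

That nonzero single-draw gap is exactly why “exactly the same in every relevant dimension” is the wrong standard: even a perfectly randomized trial never achieves it, yet the design still supports an unbiased comparison.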

Link: Rigorous Evidence Isn’t


  1. Wikipedia’s entry on external validity vs. internal validity provides a nice overview of the tension for those unfamiliar with the concepts.  ↩

  2. A recent working paper by Hunt Allcott and Sendhil Mullainathan is an interesting foray into developing metrics for external validity. Unfortunately, these metrics seem to require a whole heap of data that is rarely available for a single intervention.  ↩