View: Experimental economics not perfect but has potential
Randomised controlled trials (RCTs), a relatively new approach in economics, can make useful predictions and can help discover not just "what works" but "why things work".
One experiment on deworming schoolchildren in Kenya showed that the dewormed kids had not only better health (that was to be expected) but much better school attendance and marks, and ended up earning 20% more than those who were not dewormed. Since deworming pills cost just a few paise each, this was a remarkable bang for the buck. Surprisingly, parents did not spend the small sums needed for deworming themselves.
This experiment showed that major programmes could be evaluated accurately by RCTs; that they could unearth unexpected successes and failures and throw light on the reasons; and that people (parents in this case) often need an inducement to do what is in their self-interest anyway.
This last lesson is important. In another experiment, in Rajasthan, many parents did not bring their children to clinics for free immunisation, but did so if offered a free kilo of dal! Bribing people to do what is good for them anyway may sound silly. But changing habits is always hard, and people face hidden costs, such as losing a day's wages to take kids to an immunisation centre. Another experiment showed that families getting a free bednet (treated with insecticide to ward off mosquitoes) were more likely to buy a second bednet on their own. A small nudge can go a long way.
Since then, researchers working under the umbrella of J-PAL (Abdul Latif Jameel Poverty Action Lab) have conducted hundreds of RCTs in several countries. Shobhini Mukherji, head of J-PAL in India, says RCTs on issues ranging from police performance and gender equality to livelihoods and healthcare have provided firm experimental evidence that helped scale up programmes to reach over 400 million people.
However, while some admirers call RCTs the “gold standard” in evaluating programmes, other economists have been sharply critical. Nobel laureate Angus Deaton provided this sobering assessment: “RCTs would be more useful if there were more realistic expectations of them and if their pitfalls were better recognised. For example, contrary to many claims in the applied literature, randomisation does not equalise everything but the treatment across treatments and controls, it does not automatically deliver a precise estimate of the average treatment effect, and it does not relieve us of the need to think about (observed or unobserved) confounders. Estimates apply to the trial sample only, sometimes a convenience sample, and usually selected; justification is required to extend them to other groups...”
RCTs can play a role in building scientific knowledge and useful predictions but they can only do so as part of a cumulative programme, combining with other methods, including conceptual and theoretical development, to discover not “what works,” but “why things work.”
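Deaton's caveat about the average treatment effect can be made concrete with a toy simulation (purely illustrative; the sample size, effect size and variable names here are my own assumptions, not drawn from any study mentioned in this column). Across many hypothetical re-runs, randomisation makes the simple difference-in-means estimate unbiased on average, yet any single small trial can still miss the true effect, because randomisation does not exactly balance hidden confounders in that one sample:

```python
import random
import statistics

def run_trial(n, true_effect, seed):
    """One toy RCT: each subject has an unobserved baseline (a confounder);
    half are randomised into treatment; the estimate is the simple
    difference in mean outcomes between treated and control groups."""
    rng = random.Random(seed)
    baseline = [rng.gauss(0, 1) for _ in range(n)]   # hidden confounder
    treated = set(rng.sample(range(n), n // 2))      # the randomisation step
    treat_mean = statistics.mean(baseline[i] + true_effect for i in treated)
    ctrl_mean = statistics.mean(baseline[i] for i in range(n) if i not in treated)
    return treat_mean - ctrl_mean

# 500 hypothetical re-runs of a 50-subject trial whose true effect is 2.0.
estimates = [run_trial(n=50, true_effect=2.0, seed=s) for s in range(500)]
print(round(statistics.mean(estimates), 2))    # close to 2.0: unbiased on average
print(round(statistics.pstdev(estimates), 2))  # but any single trial can be well off
```

The spread across re-runs is exactly Deaton's point: the method is sound on average, but one trial in one place delivers one draw from that spread, which is why extending a single result to other groups requires justification.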
Banerjee, Duflo and Kremer would readily acknowledge this warning against excessive expectations. RCTs are not silver bullets, and clearly have limitations. However, they have proved useful in helping us learn what works and what does not in different contexts.
The deworming experiment that worked so well in Kenya was tried in other countries and failed to yield such great dividends. Results obtained in one sector or country will not necessarily be replicated in other sectors and countries. Outcomes will vary, depending on local political systems, macroeconomic policies, quality of governance, skill levels, infrastructure, culture, traditions and much else.
In another RCT, the researchers showed that the attendance of schoolteachers could be improved by requiring them to be photographed with a date-and-time-stamped camera at arrival and departure every day. But Banerjee and Duflo themselves mention in their seminal book Poor Economics that a similar attempt to monitor the attendance of nurses failed. Economist Lant Pritchett has shown that government staff will quickly sabotage attempts at monitoring by ensuring that the monitoring devices end up out of order or otherwise non-functional.
Some will call this a failure of RCTs. I disagree. RCTs have never guaranteed that what works in one place will work everywhere. They reveal grassroots truths, and glean lessons from failures no less than successes. Constant testing is needed to understand what is really happening at the field level, identify glitches, and suggest possible solutions. RCTs are new tools to help us move forward on this front.