To get a new drug approved in most developed countries, it is necessary to show that it works in a randomised trial. Yet to get a new policy approved, politicians need no evidence of efficacy. Consequently, while we can be confident that most pharmaceuticals work as intended, it is quite possible that some of our social policies do more harm than good.
To understand why medical scientists rely so heavily on randomised trials, we need to go back to the purpose of an evaluation.
In judging the effectiveness of any intervention, we want to know the counterfactual: what would have happened if we had not intervened? In the case of a new pharmaceutical, those who choose to take a drug are probably different from those who choose not to take it. Perhaps pill-poppers worry more about their health, or maybe they live closer to the doctor. If so, then those who choose not to take the drug make a poor comparison group for those who actually take it.
Enter the randomised trial. By assigning participants to the treatment and control groups with the toss of a coin, we ensure that the two groups differ only by chance at the start of the trial: on average, their characteristics are the same. So at the end of the experiment, any difference in outcomes too large to be explained by chance must be due to the intervention.
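To see that logic at work, here is a minimal simulation, written for this piece rather than drawn from any real trial, in which a drug's true benefit is fixed in advance and health-conscious people are more likely to take it. Every name and number in it is invented. The naive comparison badly overstates the benefit; the coin toss recovers it.

```python
# Illustrative simulation (invented for this article): why a randomised trial
# recovers the true effect of a drug when self-selection does not.
import random

random.seed(0)
TRUE_EFFECT = 5.0  # the improvement the drug actually causes

def outcome(health_conscious, treated):
    # Health-conscious people do better even without the drug (a confounder).
    base = 50.0 + (10.0 if health_conscious else 0.0)
    return base + (TRUE_EFFECT if treated else 0.0) + random.gauss(0, 2)

def mean(xs):
    return sum(xs) / len(xs)

people = [random.random() < 0.5 for _ in range(100_000)]  # who is health-conscious

# 1. Observational comparison: the health-conscious are likelier to take the drug.
treated, untreated = [], []
for h in people:
    takes_drug = random.random() < (0.8 if h else 0.2)
    (treated if takes_drug else untreated).append(outcome(h, takes_drug))
print("naive estimate:", round(mean(treated) - mean(untreated), 1))  # ~11: badly biased

# 2. Randomised trial: the coin toss ignores health-consciousness entirely.
treated, untreated = [], []
for h in people:
    assigned = random.random() < 0.5  # the coin toss
    (treated if assigned else untreated).append(outcome(h, assigned))
print("trial estimate:", round(mean(treated) - mean(untreated), 1))  # ~5: the true effect
```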
What works in the laboratory can also work in many areas of policy. Here, the power of randomised trials lies in two things. From a statistical standpoint, they are regarded as the “gold standard” of policy evaluation, beloved by policy wonks. And from a policymaking standpoint, they are the easiest evaluations to communicate, delivering compelling results in a single graph.
In the policy arena, the United States has conducted many more randomised trials than any other country. For example, one of the reasons that early childhood intervention is so high on the policy agenda is the results from the Perry Preschool program. For social researchers seeking to understand neighbourhood effects, there is no better source of evidence than the five-city Moving to Opportunity experiment. Many of the early insights about health insurance came from the RAND Health Insurance Experiment. And wage subsidy programs rapidly gained ground after the National Supported Work Demonstration was conducted.
Randomised policy trials can also show up policy failure. A randomised evaluation of the US Job Training Partnership Act found that job training for low-skilled youths did not make them more employable. Randomised evaluations of pre-licence driver education programs have found no evidence that such programs make youths into safer drivers. And DARE, a school-based anti-drugs program, was revised following randomised trials showing that the program did not deliver its promised results.
One excuse that Australian policymakers sometimes give for failing to conduct randomised trials is that they cannot face the ethical dilemma of denying some people a potentially beneficial new program. But here again, the policymakers can learn from medical researchers.
For the past two years, an NRMA CareFlight team, led by Alan Garner, has been running the Head Injury Retrieval Trial, which aims to answer two important questions: Are victims of serious head injuries more likely to recover if we can get a trauma physician onto the scene instead of a paramedic? And can we justify the extra expense of sending out a physician, or would the money be better spent in other parts of the health system?
To answer these questions, Garner’s team is running a randomised trial. When a Sydney 000 operator receives a report of a serious head injury, a coin is tossed. Heads, you get an ambulance and a paramedic. Tails, you get a helicopter and a trauma physician. Once 500 head injury patients have gone through the study, the experiment will cease and the results will be analysed.
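In code, the allocation rule Garner’s team is using amounts to something like the sketch below. This is only my rendering of the description above: the 50/50 split is implied by the coin toss, and the real protocol doubtless handles details, such as eligibility checks and interim monitoring, that this ignores.

```python
# A sketch of the HIRT allocation rule as described above: one coin toss per
# reported serious head injury, stopping once 500 patients are enrolled.
# This illustrates the description; it is not the trial's actual software.
import random

TARGET_ENROLMENT = 500  # the article's stated stopping point

def allocate():
    """Toss the coin: heads, ambulance and paramedic; tails, helicopter and physician."""
    return "ambulance + paramedic" if random.random() < 0.5 else "helicopter + trauma physician"

enrolled = [allocate() for _ in range(TARGET_ENROLMENT)]

print("paramedic responses: ", enrolled.count("ambulance + paramedic"))
print("physician responses: ", enrolled.count("helicopter + trauma physician"))
# With 500 tosses, the two arms will typically differ by only a couple of
# dozen patients: close enough to balanced for the comparison to be fair.
```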
Although he has spent over a decade working on the trial, even Garner himself admits that he doesn’t know what to expect from the results. “We think this will work”, he told me in a phone conversation last week, “but so far, we’ve only got data from cohort studies”. Indeed, he points out that “like any medical intervention, there is even a possibility that sending a doctor will make things worse. I don’t think that’s the case, but [until HIRT ends] I don’t have good evidence either way.”
For anyone who has heard policymakers confidently proclaim their favourite new idea, what is striking about Garner is his willingness to run a rigorous randomised trial, and listen to the evidence. Underlying the HIRT is a passionate desire to help head injury patients, a firm commitment to the data, and a modesty about the extent of our current knowledge. What area of Australian public policy could not benefit from a little more of this kind of thinking?