Should we beware the tyranny of the randomized controlled trial? | Association of Health Care Journalists – by Tara Haelle (@TaraHaelle) – Jan 2017
The intersection of scientific research, evidence and expertise can be a dicey one, particularly in an age in which evidence-based medicine is replacing the clinical expertise of practitioners.
In The New York Times Sunday Review, Jamie Holmes wrote about how the challenge of assessing the quality of evidence against expertise and less stringently conducted research can lead readers to confusion and frustration.
It can lead to a further distrust of science, Holmes suggested, noting the example of dental flossing in the wake of an Associated Press story that questioned the evidence in favor of the practice.
“In the case of flossing’s benefits, the supposedly weak evidence cited by the Associated Press was the absence of support in the form of definitive randomized controlled trials, the so-called gold standard for scientific research,” he writes.
So little RCT evidence exists because "the kind of long-term randomized controlled trial needed to evaluate flossing properly is hardly, if ever, conducted — because such studies are hard to implement," Holmes writes, and because "it's considered unethical to run randomized controlled trials without genuine uncertainty among experts regarding what works."
In other words, you may only randomize people to a placebo group if you honestly don’t yet know whether the trialed therapy/medication is effective.
But once you do know it’s effective, it is unethical to deliberately deprive people of a treatment that can relieve their suffering.
…unless we’re dealing with opioids.
Any extreme is unhelpful and problematic, and extremes in evidence-based medicine are no different. Yet it is easy as a journalist to get caught up in strictly adhering to “rules” in evidence-based medicine, such as something not being “true” unless supported by adequate RCTs.
In the process, journalists might become a bit like Javert in “Les Miserables,” so blinded by the “letter of the law” that they miss the substance and nuance in picking apart evidence, recommendations, clinical experience and other aspects of assessing the benefits and harms of various interventions, behaviors and practices.
Such substance and nuance are exactly what we used to get from our “real doctors” in the past and what no amount of artificial intelligence can replace.
Yet that’s exactly what “the practice of medicine” is supposed to do: combine evidence with previous knowledge and experience to treat patients.
Holmes paraphrases Dr. Mark Tonelli in stating that "what a patient prefers on the basis of personal experience; what a doctor thinks on the basis of clinical experience; and what clinical research has discovered" are each valuable in their own way.
What I found most salient in Holmes’ piece was the implication that extreme reliance on RCTs to explain and defend — or debunk — pretty much anything and everything has contributed to an erosion of trust in experts and expertise.
Author: Tara Haelle (@TaraHaelle) is AHCJ’s medical studies core topic leader, guiding journalists through the jargon-filled shorthand of science and research and enabling them to translate the evidence into accurate information.
Below is the abstract of the famous parachute study from PubMed:
Objective To determine if using a parachute prevents death or major traumatic injury when jumping from an aircraft.
Design Randomized controlled trial.
Setting Private or commercial aircraft between September 2017 and August 2018.
Participants 92 aircraft passengers aged 18 and over were screened for participation. 23 agreed to be enrolled and were randomized.
Interventions Jumping from an aircraft (airplane or helicopter) with a parachute versus an empty backpack (unblinded).
Main outcome measures Composite of death or major traumatic injury (defined by an Injury Severity Score over 15) upon impact with the ground, measured immediately after landing.
Results Parachute use did not significantly reduce death or major injury (0% for parachute v 0% for control; P>0.9). This finding was consistent across multiple subgroups.
Compared with individuals screened but not enrolled, participants included in the study were on aircraft at significantly lower altitude (mean of 0.6 m for participants v mean of 9146 m for non-participants; P<0.001) and lower velocity (mean of 0 km/h v mean of 800 km/h; P<0.001).
Conclusions Parachute use did not reduce death or major traumatic injury when jumping from aircraft in the first randomized evaluation of this intervention. However, the trial was only able to enroll participants on small stationary aircraft on the ground, suggesting cautious extrapolation to high altitude jumps.
When beliefs regarding the effectiveness of an intervention exist in the community, randomized trials might selectively enroll individuals with a lower perceived likelihood of benefit, thus diminishing the applicability of the results to clinical practice.
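The abstract's final point — that selective enrollment of low-risk participants can wash out a real effect — can be sketched with a toy simulation. The specific numbers below (a 99% injury risk without a parachute, 5% with one, and the altitude mix of each trial) are invented for illustration and are not taken from the study.

```python
import random

random.seed(42)

# Illustrative assumptions (not data from the study): a parachute cuts
# injury risk from 99% to 5%, but ONLY for jumps made from altitude;
# jumps from a parked aircraft never injure anyone.
def injured(high_altitude, parachute):
    if not high_altitude:
        return False
    risk = 0.05 if parachute else 0.99
    return random.random() < risk

def trial(p_high_altitude, n=20000):
    """Randomize n jumpers to parachute vs empty backpack and return
    the risk difference (control injury rate minus parachute injury rate)."""
    outcomes = {True: [], False: []}  # keyed by arm: True = parachute
    for _ in range(n):
        high = random.random() < p_high_altitude
        arm = random.random() < 0.5
        outcomes[arm].append(injured(high, arm))
    rate = lambda xs: sum(xs) / len(xs)
    return rate(outcomes[False]) - rate(outcomes[True])

# A trial that enrolls only ground-level jumpers finds no benefit at all,
# while a trial of genuinely high-altitude jumps finds a very large one.
print(round(trial(0.0), 2))  # prints 0.0
print(round(trial(1.0), 2))  # large risk reduction under these assumptions
```

The intervention "works" in both worlds; only the enrolled population differs — which is exactly why a null RCT result can mislead when the trial's participants don't resemble the people the question is actually about.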