Randomized Controlled Trials (RCTs) are the gold standard of medical research, considered the most rigorous method for elucidating cause-and-effect relationships in clinical research. Yet these trials are far from perfect. All too often, RCTs are unable to answer the primary clinical question they set out to address, or are plagued by methodological flaws that make them no more conclusive than their observational and retrospective counterparts. Despite these flaws, most professionals view RCTs as the final arbiter of science. But should we?
The power of RCTs
Non-randomized, non-blinded studies suffer from inherent selection, recall, and interviewer biases. Observational studies cannot establish causality; they show only associations between variables. When methodologically rigorous, RCTs can reliably establish causality with minimal bias.
RCTs have five major characteristics.1 RCTs…
- Are randomized
- Are blinded
- Have a similar clinical population across intervention groups
- Follow an intention-to-treat analysis
- Estimate the size of the difference via predefined outcomes
Randomization assures that the control and active groups are very nearly the same, something also addressed in item 3. Large patient populations further help assure an even distribution of characteristics between the two groups. Blinding prevents human biases from interfering with results. Intention-to-treat analysis avoids certain biases that may arise from the treatment itself (e.g. patients in the active arm leaving the study because of drug side effects). Finally, outcomes are predetermined so that they cannot be manipulated after the data are collected.
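As a toy illustration of how randomization keeps the arms balanced during accrual, here is a minimal permuted-block randomization sketch in Python. The function name and block size are illustrative only, not drawn from any cited trial protocol; real trials use validated allocation systems.

```python
import random

def block_randomize(n_participants, block=("A", "A", "B", "B"), seed=None):
    """Permuted-block randomization: shuffle each small block so that
    active (A) and control (B) counts stay balanced as patients enroll."""
    rng = random.Random(seed)  # seeded for reproducibility of this demo
    allocation = []
    while len(allocation) < n_participants:
        chunk = list(block)
        rng.shuffle(chunk)     # random order within the block
        allocation.extend(chunk)
    return allocation[:n_participants]

assignments = block_randomize(100, seed=1)
print(assignments.count("A"), assignments.count("B"))  # prints "50 50"
```

Because every complete block contributes exactly two A's and two B's, the arms can never drift more than two participants apart, which is the balance property the paragraph above describes.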
RCTs are not immune to real-world problems
In an ideal world, every clinical study would be a multi-institutional RCT, but we know this is not realistic, for many reasons. One obvious reason is the immense expense and time commitment RCTs require. It often takes years to recruit a sizeable sample, and when we add the criterion of multi-institutional or multinational cohorts, treatment, material, and service costs push the price tag upward. One estimate attributes half of the total $1.2B cost of new drug development to the associated RCTs.2 Another difficulty is participant recruitment. Some people actually make a living volunteering for phase I clinical trials, lying about their health and comorbidities to pass inclusion/exclusion screens and collect their participation check.3 Patients may not be eager to sign up for the placebo arm of phase II/III trials, feeling as though they are missing out on the “cure.” In addition, when an intervention is already widespread, patients and clinicians may be reluctant to agree to randomization when they can so easily get the therapy on their own.1 This may be one reason behind the scarcity of good RCTs evaluating complementary and alternative medicine (CAM) therapies.4
Another concern is the ethical implications of RCTs. RCTs are supposedly done when researchers are unsure about the best intervention for patients. Yet this is not entirely accurate: prior observational or retrospective studies often lead researchers to hypothesize about the benefit of an intervention, or the lack thereof. This presents an ethical dilemma: how can we justify depriving participants of a possibly useful or even lifesaving treatment? Indeed, there are multiple instances of ethics committees refusing RCT funding for this very reason. This is a valid concern given the sordid history of clinical research, which is the basis for the Nuremberg Code of Ethics, developed to protect research participants from wayward researchers who do not have participants’ or society’s best interests at heart.
On the other hand, the literature is filled with cases where RCTs provided valuable information or even saved patients from anecdotal treatments that caused real harm. A well-known case is the RCT evaluating high levels of oxygen therapy in premature infants.5 Without this critical work, neonates might still be harmed by overexposure to oxygen. Another example is the CAST trial, which had to be discontinued early because the “standard of care” antiarrhythmic therapy significantly increased mortality in participants with a recent myocardial infarction.6
When do we need RCTs?
So when can we justify the time, expense and (possible) patient harm associated with these “gold standard” trials? Experts suggest1 limiting RCTs to:
- Interventions that are well developed and “ready for prime time”
- Interventions that have good-quality observational studies suggesting a benefit. Researchers should also have an estimate of the expected effect size so that the required sample size (and costs) can be projected
Of course, these suggestions are not always easily carried out. Item one is a Catch-22: how do we know which interventions are ready for a clinical trial without doing a clinical trial? In truth, large pharmaceutical companies often make that decision because they are the only group willing to finance an interventional RCT. When trials are government-funded, egos, politics, and prestige play an underappreciated but outsized role in the decision.
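The link between expected effect size and required sample size in item two can be made concrete with the standard normal-approximation formula for a two-arm trial, n = 2(z₁₋α/₂ + z₁₋β)²/d², where d is the standardized effect size. A minimal Python sketch (the function name is illustrative):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate participants needed per arm for a two-sided,
    two-sample comparison, using the normal approximation:
    n = 2 * (z_{1-alpha/2} + z_{1-beta})^2 / d^2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # e.g. 1.96 for alpha=0.05
    z_beta = NormalDist().inv_cdf(power)           # e.g. 0.84 for 80% power
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

print(n_per_group(0.5))  # medium effect: 63 per arm
print(n_per_group(0.1))  # small effect: 1570 per arm
```

The inverse relationship with the square of the effect size is exactly why chasing a small clinical benefit demands enormous, and enormously expensive, cohorts.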
What can be done?
A satisfying answer may be found in the preclinical RCT. Traditionally, small observational proof-of-concept studies were done prior to clinical RCTs, often with low reproducibility and inconsistent results. An international consortium identified preclinical RCTs as a method to effectively bridge the gap between bench and clinical research; as such, preclinical RCTs may be an effective strategy to test the waters before taking on the costs and burdens of a clinical RCT.7 An unsuccessful preclinical RCT can save researchers the significant expense of pursuing a negative clinical RCT.7 Obviously, one would never know whether a larger pool of study participants would have provided the statistical power to produce significant results. On the other hand, if we are chasing a clinical benefit so small that detecting it requires testing 50,000 people, is it really worth testing in the first place?
The Consolidated Standards of Reporting Trials (CONSORT) statement, most recently updated in 2010, provides a simplified checklist to ensure the rigor of RCTs going forward.8 A vital mission it has championed is to increase the transparency of each RCT’s methodology and provide easy access to the raw data.8 This ensures that the statistical analysis is accurate and that the RCT can be replicated. Also, increased scrutiny of methodological design before submission and publication decreases the number of biased and inaccurate RCTs that confuse the medical literature.
RCTs remain the gold standard, but they are time-consuming, expensive, and fraught with challenges. Clinical RCTs are still vital for promising interventions that have a modest effect size and the possibility of changing the clinical management of patients for the better. The scientific community, however, must hold researchers to a high standard not only in study design, execution, and reporting, but also in determining which large RCTs get funded and which do not.
1. Sibbald B, Roland M. Understanding Controlled Trials. Why Are Randomised Controlled Trials Important? BMJ. 1998;316(7126):201.
2. Raftery J, Young A, Stanton L, et al. Clinical Trial Metadata: Defining and Extracting Metadata on the Design, Conduct, Results and Costs of 125 Randomised Clinical Trials Funded by the National Institute for Health Research Health Technology Assessment Programme. Health Technol Assess. 2015;19(11):1-138. doi:10.3310/hta19110
3. Devine EG, Waters ME, Putnam M, et al. Concealment and Fabrication by Experienced Research Subjects. Clin Trials. 2013;10(6):935-948. doi:10.1177/1740774513492917
4. Harlan WR, Jr. Research on Complementary and Alternative Medicine Using Randomized Controlled Trials. J Altern Complement Med. 2001;7 Suppl 1:S45-52.
5. Lanman JT, Guy LP, Dancis J. Retrolental Fibroplasia and Oxygen Therapy. J Am Med Assoc. 1954;155(3):223-226.
6. Cardiac Arrhythmia Suppression Trial (CAST) Investigators. Preliminary Report: Effect of Encainide and Flecainide on Mortality in a Randomized Trial of Arrhythmia Suppression after Myocardial Infarction. N Engl J Med. 1989;321(6):406-412. doi:10.1056/NEJM198908103210629
7. Llovera G, Liesz A. The Next Step in Translational Research: Lessons Learned from the First Preclinical Randomized Controlled Trial. J Neurochem. 2016;139 Suppl 2:271-279. doi:10.1111/jnc.13516
8. Schulz KF, Altman DG, Moher D, CONSORT Group. CONSORT 2010 Statement: Updated Guidelines for Reporting Parallel Group Randomized Trials. Ann Intern Med. 2010;152(11):726-732. doi:10.7326/0003-4819-152-11-201006010-00232