Why Most Clinical Research Is Not Useful

PLOS Medicine: Why Most Clinical Research Is Not Useful – by John P. A. Ioannidis –  June 21, 2016

Practicing doctors and other health care professionals will be familiar with how little of what they find in medical journals is useful.

The term “clinical research” is meant to cover all types of investigation that address questions on the treatment, prevention, diagnosis/screening, or prognosis of disease or enhancement and maintenance of health.

There are many millions of clinical research papers—approximately 1 million papers from clinical trials have been published to date, along with tens of thousands of systematic reviews—but most of them are not useful.

Waste across medical research (clinical or other types) has been estimated as consuming 85% of the billions spent each year [1].

I have previously written about why most published research is false [2] and how to make more of it true [3]. To be useful, clinical research should be true, but truth alone is not sufficient.

Here I describe the key features of useful clinical research (Table 1) and the current state of affairs and suggest future prospects for improvement.

Table 1: Features to consider in appraising whether clinical research is useful.


Problem Base

Solving problems with low prevalence but grave consequences for affected patients is valuable, and broadly applicable useful research may stem from studying rare conditions if the knowledge is also relevant to common conditions.

Conversely, clinical research confers actual disutility when disease mongering [4] creates a fictitious perception of disease burden among healthy people.

Data show only weak or modest correlations between the amount of research done and the burden of various diseases [5,6].

Moreover, disease mongering affects multiple medical specialties [4,7,8].

Context Placement and Information Gain

Useful clinical research procures a clinically relevant information gain [9]: it adds to what we already know.

This means that, first, we need to be aware of what we already know so that new information can be placed in context [10].

Second, studies should be designed to provide sufficiently large amounts of evidence to ensure patients, clinicians, and decision makers can be confident about the magnitude and specifics of benefits and harms.

Ideally, studies that are launched should be clinically useful regardless of their eventual results. If the findings of a study are expected to be clinically useful only if a particular result is obtained, there may be pressure either to obtain that result or to interpret the data as if it had been obtained.

The article then lists several common problems:

Most new research is not preceded or accompanied by systematic reviews

Interventions are often compared to placebos or usual care, even when effective interventions have already been demonstrated.

Sample-size calculations almost always treat each trial in isolation, ignoring other studies. Across PubMed, the median sample size for published randomized trials in 2006 was 36 per arm.

Nonvalidated surrogate outcomes lacking clinical meaning [13] and composite outcomes that combine outcomes of very different clinical portent [14] are often used so that authors can claim that clinical studies are well powered.

The value of “negative” results is rarely discussed when clinical studies are being designed.
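To put the median sample size of 36 per arm in perspective, a rough normal-approximation power calculation shows how underpowered such a trial is for realistic effect sizes. (This is my own illustrative sketch, not a calculation from the article; the function name and the chosen standardized effect sizes are assumptions.)

```python
from statistics import NormalDist

def power_two_sample(d, n_per_arm, alpha=0.05):
    """Approximate power of a two-sided, two-arm z-test for a
    standardized mean difference d with n_per_arm per group
    (ignores the tiny chance of significance in the wrong direction)."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)          # critical value, e.g. 1.96
    ncp = d * (n_per_arm / 2) ** 0.5             # noncentrality under the alternative
    return nd.cdf(ncp - z_alpha)

# Median published randomized trial in 2006: ~36 participants per arm
print(round(power_two_sample(0.5, 36), 2))  # medium effect: 0.56
print(round(power_two_sample(0.3, 36), 2))  # smaller effect: 0.25
```

Even for a medium standardized effect (d = 0.5), 36 participants per arm yields only about 56% power, far short of the conventional 80%; for smaller, more typical effects, power drops to roughly one in four.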


Research inferences should be applicable to real-life circumstances

A common misconception is that a trial population should be fully representative of the general population of all patients (for treatment) or the entire community (for prevention) to be generalizable.

Capturing real-life circumstances is possible, regardless of the representativeness of the study sample, by utilizing pragmatic study designs.

Pragmatism has long been advocated in clinical research [15], but it is rare.

Thousands of efficacy trials have been published that explore optimization of testing circumstances.

Studying treatment effects under idealized clinical trial conditions is attractive, but questions then remain over the generalizability of the findings to real-life circumstances.

Observational studies (performed in the thousands) are often precariously interpreted as able to answer questions about causal treatment effects [17].

The use of routinely collected data is typically touted as being more representative of real life, but this is often not true.

Most of the widely used observational studies deal with peculiar populations (e.g., nurses, physicians, or workers) and/or peculiar circumstances (e.g., patients managed in specialized health care systems or covered by specific insurance or fitting criteria for inclusion in a registry).

Ultimately, observational studies often substantially overestimate treatment effects [18,19].

Patient Centeredness

Useful research is patient centered [20]. It is done to benefit patients or to preserve health and enhance wellness, not for the needs of physicians, investigators, or sponsors.

There is currently a heightened interest in patient-centered research, as exemplified by the Patient-Centered Outcomes Research Institute (PCORI), which was launched in 2012 in the United States to foster research relevant to patient needs [21]

Patients and physicians are frequently bombarded with information that tries to convince them that surrogates or other unimportant outcomes are important—such short-cuts either have commercial benefits or facilitate fast publication and academic advancement.

Value for Money

Most methods for calculating value for money remain theoretical constructs.

Clinical research remains extremely expensive, even though an estimated 90% of the present cost of trials could be safely eliminated [26,27].

Reducing costs by streamlining research could do more than simply allow more research to take place. It could help make research better by reducing the pressure to cut corners, which leads to studies lacking sufficient power, precision, duration, and proper outcomes to convincingly change practice.


Feasibility of research can sometimes be difficult to predict up front, and there may be unwarranted optimism among investigators and funders.

Many clinical trials are terminated because of futility.

Transparency (Trust)

Utility decreases when research is not transparent, when study data, protocols, and other processes are not available for verification or for further use by others. Trust is also eroded when major biases occur in the design, conduct, and reporting of research.

Other Considerations


Some uncertainty may exist for each of the features of clinical research outlined above, even though it is less than the uncertainty inherent in blue-sky and preclinical investigation.

Uncertainty also evolves over time, especially when research efforts take many years. Questions can lose their importance when circumstances change.

Focusing on major journals

Out of the 730,447 articles labeled as “clinical trial” in PubMed as of May 26, 2016, only 18,231 were published in the major medical journals.

Most of the articles that inform guidelines and clinical practice are published elsewhere.

Studies in major general medical journals may do better in terms of addressing important problems, but given their visibility, they can also propagate disease mongering more widely than less visible journals.

The Lancet routinely requires that published trials place the research in context, and increasingly, major journals request full protocols for published trials.

Improving the Situation

The challenges and the problems to solve involve not only researchers but also institutions, funding mechanisms, the industry, journals, and many other stakeholders, including patients and the public. Joint efforts by multiple stakeholders may yield solutions that are more widely adopted and thus more successful [3].

Clinical Research Workforce and Physicians

Students, residents, and clinical fellows are often expected to do some research.

This exposure can be interesting, but trainees are judged on their ability to rapidly produce publications, a criterion that lends itself badly to the production of the large, long-term, team-performed studies that are often needed.

Other perverse incentives in clinical research include universities and other institutions simply asking for more papers (e.g., least publishable units) instead of clinically useful papers, and the fact that clinical impact is not a formal part of the publication metrics so often used to judge academic performance.

Instead of trying to make a prolific researcher of every physician, training physicians to understand research methods and evidence-based medicine may also help improve the situation by instilling healthy skepticism and critical thinking.

The Industry–Regulator Dipole and Academic Partners

Industry responds to regulatory requirements, and regulatory agencies increasingly act as both guardians of the common good and industry facilitators. This creates tension and ambiguity in mission.

Industry should be enabled to better champion useful clinical research, with regulators matching commercial rewards to clinical utility for industry products, thus helping good companies outperform bad ones and aligning the interests of shareholders with those of patients and the public.

Regulatory agencies may need to assume a more energetic role towards ensuring the conduct of large, clinically useful megatrials.

Current research funding incentivizes small studies of short duration that can be quickly performed and generate rapidly publishable results, while answering important questions may sometimes require long-term studies whose financial needs exceed the resources of most currently available funding cycles.

Funding Agenda for Blue-Sky, Preclinical, and Clinical Science

Discovery research without prespecified deliverables—blue-sky science—is important and requires public support.

However, a lot of “basic” investigation does have anticipated deliverables, like research into developing new drug targets or new tests. This research may best be funded by industry and those standing to profit if they deliver a product that is effective.

Our tax dollars pay for research that private companies then use to make money, like NIH drug investigation studies that are turned into cash by the pharmaceutical industry.

Much current public funding could move from such preclinical research to useful clinical research, especially in the many cases in which a lack of patent protection means there is no commercial reason for industry to fund studies that might nevertheless be useful in improving care.

1 thought on “Why Most Clinical Research Is Not Useful”

  1. david becker

    Ioannidis' work reflects what happens when the public lets researchers do as they please: they become careless, sloppy, wasteful, and unaccountable. Recently I was told to mind my own business when it comes to pain care by the NYS Education Department, and some of you can just imagine the response they will get from me.
    We all need to involve ourselves with the big and little issues of the day and not assume Big Brother is minding their business well.


