Uncritical Publication of a Biased Study: Careless or Unethical?

Uncritical Publication of a Biased Study Leads to Misleading Media Reports | Pain Medicine | Oxford Academic – Lynn R Webster, MD – 20 November 2018

Dr. Webster points out how much harm is done when sloppy and biased research results are handed to reporters, often with a dramatic positive spin.

On March 6, 2018, JAMA published a manuscript titled “Effect of Opioid vs Nonopioid Medications on Pain-Related Function in Patients With Chronic Back Pain or Hip or Knee Osteoarthritis Pain: The SPACE Randomized Clinical Trial,” by Krebs et al. …results of the study led to headlines in major news outlets touting proof that opioids were not effective for chronic noncancer pain. [Ironically, “Krebs Study” Shows Opioids are Safe]

The article is in the top 5% of all research outputs measured by Altmetric: As of this writing, 313 news stories from 191 outlets, 2,278 tweeters, 45 Facebook pages, nine blogs, and seven Redditors have reported on the study. [See Popular Article on Opioids is Misleading]

This much media reach could influence social and political policy for the better if the understanding of the research is valid. Unfortunately, the conclusions of the article were widely mischaracterized so that the extensive reporting could instead lead to harm.

Four Letters to the Editor of JAMA, along with a reply by Krebs et al., were published, demonstrating that others also had concerns about the manuscript [3–7].

We as researchers and reviewers of manuscripts can better help people to understand this type of complicated research on controversial topics. Here is my analysis of how the journal, authors, reviewers, and media got it wrong.

Pragmatic research arises from the idea that the real-world setting of patients does not match the strict inclusion/exclusion criteria used in a randomized controlled trial (RCT) study design. By greatly relaxing the inclusion/exclusion criteria, pragmatic trial findings are claimed to be more generalizable to a wider range of patients.

Here is a simple analogous illustration of how the pragmatic study design was compromised.

  • In Scenario A, a market research study with the strict inclusion criteria of an RCT is conducted among ice cream consumers to determine their preferred ice cream flavor: chocolate, vanilla, or no preference. The average result will almost surely be either chocolate or vanilla.
  • Scenario B, in contrast, is a pragmatic trial that would include all consumers, both those who eat ice cream and those who do not, so the inference to consumers as a whole can be made. If the majority of study participants do not even eat ice cream, the average result will almost surely be no preference.
  • A more extreme Scenario C would exclude ice cream consumers altogether, so the foregone conclusion is no preference. It is merely a poorly designed study with participant selection bias so extreme that it has no scientific validity.

If a consumer is going to buy ice cream (which will only happen with consumers who eat ice cream), which flavor will they choose? A reasonable person would realize that the pragmatic study approach is not useful in this situation.
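The selection effect in the three scenarios above can be sketched with a small simulation. All of the proportions here (40% of consumers eat ice cream, 70% of eaters prefer chocolate) are illustrative assumptions, not survey data:

```python
import random

random.seed(0)

# Illustrative assumptions only: 40% of the population eats ice cream,
# and 70% of eaters prefer chocolate over vanilla.
def make_consumer():
    if random.random() < 0.4:  # an ice cream eater
        flavor = "chocolate" if random.random() < 0.7 else "vanilla"
        return ("eater", flavor)
    return ("non-eater", "no preference")  # eats no ice cream at all

population = [make_consumer() for _ in range(10_000)]

def modal_answer(sample):
    """Most common answer in the sample."""
    counts = {}
    for _, answer in sample:
        counts[answer] = counts.get(answer, 0) + 1
    return max(counts, key=counts.get)

scenario_a = [p for p in population if p[0] == "eater"]      # strict RCT: eaters only
scenario_b = population                                      # pragmatic: everyone
scenario_c = [p for p in population if p[0] == "non-eater"]  # eaters excluded

print(modal_answer(scenario_a))  # chocolate
print(modal_answer(scenario_b))  # no preference (non-eaters are the majority here)
print(modal_answer(scenario_c))  # no preference, a foregone conclusion
```

Scenario C’s answer is fixed before a single participant is surveyed, which is exactly the point: a sufficiently biased sampling frame makes the “result” inevitable.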

What if after conducting the Scenario C study, the investigators did not point out the selection bias to the reader and the impact it had on the results (ice cream consumers were excluded, masking that chocolate is the more popular ice cream flavor)?

What if, further, the investigators did not report the additional finding that consumers reported that they prefer chocolate over vanilla in other foods, misleading by omission?

It would not be appropriate for a scientific publication to omit this type of information.

These omissions describe the errors in the JAMA article.

By specifically excluding patients who had tolerated and presumably benefitted from opioids (ice cream consumers), the investigators studied only participants

1) who had previously tried opioids and discovered they did not respond to them, and
2) who had never tried opioids because they had previously responded adequately to nonopioid medications.

In the words of the analogy, they studied only people who do not consume ice cream.[!!!]

Naturally, therefore, the JAMA study achieved the only finding possible: that both opioids and nonopioids reduced pain equally well in patients in whom opioids were not medically indicated.

A phrase describing such a significant exclusion criterion belongs right in the title. Almost all research studies indexed in PubMed include such qualifiers there, because these criteria limit the significance and applicability of the results.

Creating a title with such a glaring omission, so different from the standards used by most other research studies, suggests a desire to deceive.

Readers, the mass media, and policy makers did not recognize the selection bias and flawed study design, so they erred in concluding and writing that opioids are no more effective than nonopioids, or worse, that opioids do not even reduce pain but merely create euphoria.

The authors did include a statement in the Methods section [only in the full article] stating:

“Patients on long-term opioid therapy were excluded.”

This literally changes everything. A study to determine if opioids control pain cannot exclude people who currently use opioids to control pain!

I rarely read the full “Methods” section of studies because they include so many technical details that don’t seem critical. But I do remember seeing that statement, and now I’m kicking myself for having missed the implications of such an exclusion.

I carelessly assumed this meant they were only looking at “new” pain patients.

However, this information should also have been mentioned in the Limitations section and the Abstract (which is all many readers see), and the authors should have explained it there, qualifying what can be inferred from the conclusions.

This is the crime here: omitting unusual and notable exclusion criteria from the very sections where such glaring issues are normally addressed.

If you go to the study design article [11], which few readers are likely to do, you see a more complete diagram.

The investigators started out with 9,403 participants, who were identified by an electronic medical record (EMR) algorithm, then excluded 3,836 participants before arriving at 5,567 selected for phone screening. All patients with opioid prescriptions in their EMR were excluded in the first step.

The JAMA report omitted these numbers altogether and started with 4,485 patients with a prior-month diagnosis of back or lower extremity pain identified by the EMR algorithm.
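The two flow diagrams can be laid side by side using only the numbers quoted above; this is just a minimal arithmetic check, and the variable names are mine:

```python
# Participant flow reported in the study-design article [11]
identified_by_emr = 9_403    # participants flagged by the EMR algorithm
excluded_first_step = 3_836  # first-step exclusions, which removed every
                             # patient with an opioid prescription in the EMR
phone_screened = identified_by_emr - excluded_first_step
print(phone_screened)        # 5567, matching the design article

# Flow shown in the JAMA report, which omits the step above entirely
jama_starting_count = 4_485  # prior-month back or lower-extremity pain diagnosis
```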

This certainly looks like they were trying to hide something.

1,843 declined to participate. Let’s look more closely at who might have declined participation.

Now, if you are a veteran who needs an opioid because you have already found that nonopioids are not sufficiently effective, and you are invited to be in a study investigating opioids, you are simply going to decline to participate.

In this way, patients who were medically indicated for opioid use did not make it into the study; however, this is not mentioned in the study’s conclusions.

I would never participate in such a study either, knowing it might force me into placebo treatments. My pain is too severe to risk going without opioids for 3 months.

I can now see this means that no patients with serious pain will ever be studied unless researchers include current patients taking opioids. Such studies will recruit only those who can get by without opioids, the ones with less serious pain.

It is the responsibility of reviewers to help authors understand flaws in design and avoid making inappropriate conclusions.

I’m gratified that he points out the failure of the reviewers too.

We must recognize that consumers of politically charged research extend to those without scientific training, who might jump to unjustified, sensational conclusions unless we as reviewers and editors do our jobs without bias.

This is happening even with “consumers” who have extensive scientific and medical training, like doctors, and even with addiction specialists, like Dr. Kolodny. They will believe anything that fits their predetermined beliefs (just like the rest of us if we don’t make a conscious effort).

Other thoughts?
