Research Reporting Bias in Clinical Trials

As our healthcare system shifts to corporate ownership, the drive to generate profits is destroying healthcare (and all other social services).

When decisions are driven by finances instead of patient welfare, inappropriate standardization is imposed where human variability matters most: the fluctuating biochemistry of our individual bodies and how they react to interventions.

Here are three studies showing how financial motives corrupt research:

Frequency and reasons for outcome reporting bias in clinical trials: interviews with trialists – free full-text /PMC3016816/ – Jan 2011  

Objectives

To provide information on the frequency and reasons for outcome reporting bias in clinical trials.

Design

Trial protocols were compared with subsequent publication(s) to identify any discrepancies in the outcomes reported, and telephone interviews were conducted with the respective trialists to investigate more extensively the reporting of the research and the issue of unreported outcomes.  

Participants

Chief investigators, or lead or co-authors of trials, were identified from two sources:

  • trials published since 2002 and covered in Cochrane systematic reviews in which at least one trial analysed was suspected of being at risk of outcome reporting bias (issues 4, 2006; 1, 2007; and 2, 2007 of the Cochrane Library); and
  • a random sample of trial reports indexed on PubMed between August 2007 and July 2008.

Setting

Australia, Canada, Germany, the Netherlands, New Zealand, the United Kingdom, and the United States.

Main outcome measures

Frequency of incomplete outcome reporting—signified by outcomes that were specified in a trial’s protocol but not fully reported in subsequent publications—and trialists’ reasons for incomplete reporting of outcomes.

Results

268 trials were identified for inclusion (183 from the cohort of Cochrane systematic reviews and 85 from PubMed).

Initially, 161 of these investigators responded to our requests for interview, 130 (81%) of whom agreed to be interviewed.

However, failure to achieve subsequent contact, obtain a copy of the study protocol, or both meant that final interviews were conducted with 59 (37%) of the 161 trialists.

  • Sixteen trial investigators failed to report analysed outcomes at the time of the primary publication,
  • 17 trialists collected outcome data that were subsequently not analysed, and
  • five trialists did not measure a prespecified outcome over the course of the trial.

In almost all trials in which prespecified outcomes had been analysed but not reported (15/16, 94%), this under-reporting resulted in bias.

In nearly a quarter of trials in which prespecified outcomes had been measured but not analysed (4/17, 24%), the “direction” of the main findings influenced the investigators’ decision not to analyse the remaining data collected.

In 14 (67%) of the 21 randomly selected PubMed trials, there was at least one unreported efficacy or harm outcome.

More than a quarter (6/21, 29%) of these trials were found to have displayed outcome reporting bias.
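The counts and percentages quoted in these results can be cross-checked with a few lines of arithmetic. The sketch below is purely illustrative (not part of the study) and assumes the abstract rounds percentages half-up to the nearest integer:

```python
from fractions import Fraction

def pct(n, d):
    """Percentage n/d, rounded half-up to the nearest integer (exact arithmetic)."""
    return int(Fraction(100 * n, d) + Fraction(1, 2))

# (numerator, denominator, percentage reported in the abstract)
figures = [
    (130, 161, 81),  # investigators who agreed to be interviewed
    (59,  161, 37),  # final interviews actually conducted
    (15,  16,  94),  # analysed-but-unreported outcomes where this introduced bias
    (4,   17,  24),  # unanalysed outcomes influenced by direction of findings
    (14,  21,  67),  # PubMed trials with at least one unreported outcome
    (6,   21,  29),  # PubMed trials displaying outcome reporting bias
]

for n, d, reported in figures:
    assert pct(n, d) == reported, (n, d, reported)
print("all quoted percentages match their counts")
```

Every quoted percentage is consistent with its numerator and denominator, so the figures at least hold together internally.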

Conclusion

The prevalence of incomplete outcome reporting is high.

Trialists seemed generally unaware of the implications for the evidence base of not reporting all outcomes and protocol changes.

A general lack of consensus regarding the choice of outcomes in particular clinical settings was evident, and this affects trial design, conduct, analysis, and reporting.

Collaboration between academics and industry in clinical trials: cross sectional study of publications and survey of lead academic authors – free full-text /PMC6169401/ – Oct 2018

Objectives

To determine the role of academic authors, funders, and contract research organisations in industry funded trials of vaccines, drugs, and devices and to determine lead academic authors’ experiences with industry funder collaborations.

Design

Cross sectional analysis of trial publications and survey of lead academic authors.

Eligibility criteria for selecting studies

The most recent 200 phase III and IV trials of vaccines, drugs, and devices with full industry funding and at least one academic author, published in one of the top seven high impact general medical journals (New England Journal of Medicine, Lancet, JAMA, BMJ, Annals of Internal Medicine, JAMA Internal Medicine, and PLoS Medicine).

Results

  • Employees of industry funders co-authored 173 (87%) of the 200 publications;
  • 183 (92%) trials reported involvement of funders in design, and
  • 167 (84%) reported involvement of academic authors.
  • Data analysis involved the funder in 146 (73%) trials and the academic authors in 79 (40%).
  • Trial reporting involved the funder in 173 (87%) trials and academic authors in 197 (99%).
  • Contract research organisations were involved in the reporting of 123 (62%) trials.
  • Eighty (40%) of 200 lead academic authors responded to the survey.
  • Twenty nine (36%) of the 80 responders reported that academics had final say on the design.
  • Ten responders described involvement of an unnamed funder and/or contract research organisation employee in the data analysis and/or reporting.

Most academic authors found the collaboration with the industry funder beneficial, but 3 (4%) experienced delays in publication due to the industry funder, and 9 (11%) reported disagreements with the industry funder, mostly concerning trial design and reporting.
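These proportions can also be verified against their stated denominators (200 publications, 80 survey responders). A purely illustrative check, assuming half-up rounding, which the 173/200 → 87% figure implies:

```python
from fractions import Fraction

def pct(n, d):
    """Percentage n/d, rounded half-up (note 173/200 = 86.5 is reported as 87%)."""
    return int(Fraction(100 * n, d) + Fraction(1, 2))

figures = [
    (173, 200, 87),  # publications co-authored by funder employees
    (183, 200, 92),  # funder involved in design
    (167, 200, 84),  # academic authors involved in design
    (146, 200, 73),  # funder involved in data analysis
    (79,  200, 40),  # academic authors involved in data analysis
    (197, 200, 99),  # academic authors involved in reporting
    (123, 200, 62),  # contract research organisations involved in reporting
    (80,  200, 40),  # lead academic authors who responded to the survey
    (3,   80,  4),   # responders reporting publication delays
    (9,   80,  11),  # responders reporting disagreements with the funder
]

for n, d, reported in figures:
    assert pct(n, d) == reported, (n, d, reported)
```

Several of these fractions land exactly on a half (e.g. 183/200 = 91.5%, reported as 92%), which is what indicates the half-up rounding convention rather than round-half-to-even.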

Conclusions

Industry employees and academic authors are involved in the design, conduct, and reporting of most industry funded trials in high impact journals.

However, data analysis is often conducted without academic involvement.

Academics view the collaboration as beneficial, but some report loss of academic freedom.

Inappropriate Statistical Analysis and Reporting in Medical Research – Annals of Internal Medicine – Oct 2018

Wang and colleagues (1) present a sobering report of a national survey of nearly 400 consulting statisticians about requests from investigators to engage in inappropriate statistical practices.

Framed as an exploration of bioethical issues, the report implicitly adopts Doug Altman’s mantra: “Misuse of statistics is unethical” (2).

Although the survey did not ask statisticians whether they fulfilled these requests, the inappropriate methods described in this report are still used in the published literature, and thus contribute to the problem of nonreproducible research.

Practices like these are extraordinarily difficult to detect in published work; identification takes either unusual transparency or a time-consuming re-examination of the original research methods and data.

We cannot determine whether these requests arise largely from researchers’ inexperience or their response to academic and professional incentives that reward impressive-looking results in higher-profile publications.

 

Other thoughts?
