Tag Archives: statistics

Record 72,000 Overdose Deaths (from ALL drugs) 2017

Bleak New Estimates in Drug Epidemic: A Record 72,000 Overdose Deaths in 2017 – New York Times – Aug 2018

Remember, this number covers deaths from all drugs, not just opioids – and rarely opioids that were actually prescribed to the decedent, but rather illicit fentanyl added to all kinds of other drugs (cocaine, meth, heroin). You can read the usual misguided story about “innocent victims” and the horrors of any and all opioid use, but here are a couple of intelligent rebuttals from ATIP members:

Published comment by Richard A Lawhern PhD | Fort Mill SC

The opioid crisis wasn’t caused by prescribing to patients in pain and won’t be solved by restricting prescriptions. Drug prescribing is now at a ten-year low, while overdose deaths continue to climb. Continue reading

High-dose patients suffer greatest opioid cuts

Here are a series of tweets from Stefan Kertesz (@StefanKertesz Aug 2, 2018), explaining a recent BMJ study that shows no decline in the numbers of people receiving opioids.

1/A new @bmj_latest paper is seen by some as showing that 2 prior reports were incorrect in reporting major drops in opioid prescribing. Rather, this fine paper shows a problem in how US Rx reductions were achieved
https://www.bmj.com/content/362/bmj.k2833

2/the headline I would offer is: “large, well-documented US opioid Rx reductions have not involved changing a long-term American habit of sloppy short-term prescriptions, but instead reflect changes in care for a small number of sick people on long-term prescriptions”   Continue reading

Unhelpful Statistics Create Misleading Political Statements

Unhelpful Statistics Create Misleading Political Statements – 21st July, 2018 – By Lynn Webster, M.D.

Benjamin Disraeli reportedly said that there are three types of lies: “lies, damned lies, and statistics.” He may have been right.

The Washington Post recently published an article titled, “Companies shipped 1.6 billion opioids to Missouri from 2012 to 2017, report says.”

The story references a report released by Senator Claire McCaskill (D-Mo.) that says “drug distributors Cardinal Health, McKesson Corp. and Amerisource Bergen funneled the equivalent of about 260 opioid pills for every person in Missouri in the six-year period.”   Continue reading

Alcohol-related cirrhosis deaths skyrocket

Alcohol-related cirrhosis deaths skyrocket in young adults: Rapid rise in liver deaths highlights new challenges in treatment and prevention despite gains in fighting hepatitis C, researchers say — ScienceDaily – July 18, 2018

Alcohol has been, is now, and will be the most damaging and societally costly “drug” abused in the U.S. compared to all the rest.

In the early 20th century, alcohol prohibition only caused more problems by driving people to break laws and consume far more dangerous concoctions, just as is happening with opioids today. It’s really too bad our legislators cannot learn from history.

Deaths from cirrhosis rose in all but one state between 1999-2016, with increases seen most often among young adults, a new study shows. Continue reading

All randomized controlled trials produce biased results

Why all randomised controlled trials produce biased results – Mar 2018 – Alexander Krauss

Background: Randomised controlled trials (RCTs) are commonly viewed as the best research method to inform public health and social policy. Usually they are thought of as providing the most rigorous evidence of a treatment’s effectiveness without strong assumptions, biases and limitations.

Objective: This is the first study to examine that hypothesis by assessing the 10 most cited RCT studies worldwide.

Data sources: These 10 RCT studies with the highest number of citations in any journal (up to June 2016) were identified by searching Scopus (the largest database of peer-reviewed journals).

Results: This study shows that these world-leading RCTs that have influenced policy produce biased results by illustrating

  • that participants’ background traits that affect outcomes are often poorly distributed between trial groups,
  • that the trials often neglect alternative factors contributing to their main reported outcome and, among many other issues,
  • that the trials are often only partially blinded or unblinded.

The study here also identifies a number of novel and important assumptions, biases and limitations not yet thoroughly discussed in existing studies that arise when designing, implementing and analysing trials.

Conclusions: Researchers and policymakers need to become better aware of the broader set of assumptions, biases and limitations in trials. Journals need to also begin requiring researchers to outline them in their studies. We need to furthermore better use RCTs together with other research methods.

Key messages

  • RCTs face a range of strong assumptions, biases and limitations that have not yet all been thoroughly discussed in the literature.
  • This study assesses the 10 most cited RCTs worldwide and shows that trials inevitably produce bias.
  • Trials involve complex processes – from randomising, blinding and controlling, to implementing treatments, monitoring participants etc. – that require many decisions and steps at different levels that bring their own assumptions and degree of bias to results.

Below are excerpts from the full article:

Introduction

How well a given treatment may work can greatly influence our lives.

But before we decide whether to take a treatment we generally want to know how effective it may be. Randomised controlled trials (RCTs) are commonly conducted by randomly distributing people into treatment and control groups to test if a treatment may be effective.

Researchers in fields like medicine, psychology, and economics often claim that this method is the only reliable means to properly inform medical, social and policy decisions; that it is an ultimate benchmark against which to assess other methods; and that it is exempt (or as exempt as possible) from the strong theoretical assumptions, methodological biases and researcher influence that non-randomised methods are subject to.

This study assesses the hypothesis that randomised experiments estimate the effects of some treatment without strong assumptions, biases and limitations. In assessing this hypothesis, the 10 most cited RCT studies worldwide are analysed.

While these trials are related to the fields of general medicine, biology and neurology, the insights outlined here are as useful for researchers and practitioners using RCTs across any field including psychology, neuroscience, economics and, among others, agriculture.

This study shows that all of the 10 most cited RCTs assessed here suffer from several commonly known methodological issues that lead to biased results:

  • poor allocation of their participants’ background characteristics that influence outcomes across trial groups,
  • issues related to partially blinding and unblinding,
  • significant shares of participant refusal and
  • participants switching between trial groups, among others.

Some of these issues cannot be avoided in trials – and they affect their robustness and constrain their reported outcomes.

This study thereby contributes to the literature on the methodological biases and limits of RCTs. A number of meta-analyses of RCTs also indicate that trials at times face different biases, judged by common assessment criteria including randomisation, double-blinding, dropouts and withdrawals.

To help reduce biases, trial reporting guidelines have been important but these need to be significantly improved.

A critical concern for trial quality is that only some trials report the common methodological problems. Even fewer explain how these problems affect their trial’s results. And no existing trials report all such problems and explain how they influence trial outcomes.

Exacerbating the situation, these are only some of the more commonly reported problems. This study’s main contribution is outlining a larger set of important assumptions, biases and limitations facing RCTs that have not yet all been thoroughly discussed in trial studies.

Better understanding the limits of randomised experiments is very important for research, policy and practice.

Results and discussion

Assumptions, biases and limitations in designing RCTs

  • simple-treatment-at-the-individual-level limitation

To begin, a constraint of RCTs not yet thoroughly discussed in existing studies is that randomisation is only possible for a small set of questions we are interested in – i.e. the simple-treatment-at-the-individual-level limitation of trials.

Randomisation is largely infeasible for many complex scientific questions, e.g. on what drives overall good physical or mental health, high life expectancy, functioning public health institutions or, in general, what shapes any other intricate or large-scale phenomenon (from depression to social anxiety).

Topics related to the following are generally not amenable to randomisation:

  • genetics,
  • immunology,
  • behaviour,
  • mental states,
  • human capacities,
  • norms and practices.

Not having a comparable counterfactual for such topics is often the reason for not being able to randomise.

Trials are restricted in answering questions about how to achieve the desired outcomes within another context and policy setting: about what type of health practitioners are needed in which kind of clinics within what regulatory, administrative and institutional environment to deliver health services effective in providing the treatment.

But they cannot generally be conducted in cases with multiple and complex treatments or outcomes simultaneously that often reflect the reality of medical situations (e.g. for understanding how to increase life expectancy or make public health institutions more effective).

Researchers would, if they viewed RCTs as the only reliable research design, thus largely only focus on select questions related to simple treatments at the level of the individual that fit the quantifiable treatment–outcome schema (more to come on this later).

  • initial sample selection bias

Another constraint facing RCTs is that a trial’s initial sample, when the aim is to later scale up a treatment, would ideally need to be generated randomly and chosen representatively from the general population. But the 10 most cited RCTs at times use, when reported, a selective sample, which can limit scaling up results and can lead to an initial sample selection bias.

Some of these leading trials, as Table 1 indicates, do not provide information about how their initial sample was selected before randomisation, while others only state that “patient records” were used or that they “recruited at 29 centers”. Critical information is not provided, such as the quality, diversity or location of such centres and the participating practitioners, how the centres were selected, the types of individuals they tend to treat and so forth.

This means that we do not have details about the representativeness of the data used for these RCTs.

  • achieving-good-randomisation assumption

A foundational and strong assumption of RCTs (once the sample is chosen) is the achieving-good-randomisation assumption. Poor randomisation – and thus poor distribution of participants’ background traits that affect outcomes between trial groups – puts into question the degree of robustness of the results from several of these 10 leading RCTs.

All of these 10 RCTs randomised their sample, yet imbalances in background traits remain, showing that randomisation by itself does not ensure a balanced distribution – as we always have finite samples with finite randomisations.

As long as there are important imbalances we cannot interpret the different outcomes between the treatment and control groups as simply reflecting the treatment’s effectiveness.
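To see how often plain randomisation can leave the arms imbalanced on a background trait, here is a minimal simulation. It is illustrative only: the trait (age), sample size and imbalance threshold are assumptions, not figures from the paper.

```python
import random

random.seed(1)

def randomise_once(n_per_arm=50):
    """Randomly allocate 2*n participants; return the absolute
    difference in mean age (a background trait) between arms."""
    ages = [random.gauss(50, 10) for _ in range(2 * n_per_arm)]
    random.shuffle(ages)
    treat, control = ages[:n_per_arm], ages[n_per_arm:]
    mean = lambda xs: sum(xs) / len(xs)
    return abs(mean(treat) - mean(control))

# How often does a single randomisation leave the arms imbalanced
# by more than 3 years of mean age?
trials = 10_000
imbalanced = sum(randomise_once() > 3.0 for _ in range(trials))
print(f"{100 * imbalanced / trials:.1f}% of randomisations "
      "leave a >3-year mean-age gap between arms")
```

With 50 participants per arm, a noticeable share of randomisations end up imbalanced purely by chance – which is the point of the "finite samples with finite randomisations" caveat above.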

  • incomplete baseline data limitation

Another constraint that can arise in trials is when they do not collect baseline data for all relevant background influencers (but only some) that are known to alternatively influence outcomes – i.e. an incomplete baseline data limitation.

The common claim, that “an advantage of RCTs is that nobody needs to know all the factors affecting the outcome as randomising should ensure it is due to the treatment”, does not hold: randomisation alone cannot guarantee an even balance of influencing factors.

  • lack-of-blinding bias

Some of these 10 trials did not double-blind, while others initially double-blinded but later partially unblinded, or only partially blinded one arm of the trial – which reflects in relevant cases (while often unavoidable) a lack-of-blinding bias.

  • small sample bias

Beyond randomisation and blinding, a further constraint is that trials often consist of only a few hundred individuals – samples often too small to produce robust results – i.e. the small sample bias.
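A small simulation sketches why a few hundred participants can be too few: the scatter of the estimated effect shrinks only with the square root of the sample size. The effect size, outcome noise and sample sizes below are illustrative assumptions, not values from any trial.

```python
import random

random.seed(0)

TRUE_EFFECT = 2.0  # the treatment really improves the outcome by 2 points

def estimated_effect(n_per_arm):
    """Run one hypothetical trial and return its estimated effect."""
    treat = [random.gauss(10 + TRUE_EFFECT, 8) for _ in range(n_per_arm)]
    ctrl = [random.gauss(10, 8) for _ in range(n_per_arm)]
    return sum(treat) / n_per_arm - sum(ctrl) / n_per_arm

def spread(n_per_arm, reps=2000):
    """Standard deviation of the estimate across many repeated trials."""
    ests = [estimated_effect(n_per_arm) for _ in range(reps)]
    m = sum(ests) / reps
    var = sum((e - m) ** 2 for e in ests) / reps
    return var ** 0.5

for n in (25, 100, 400):
    print(f"n={n:4d} per arm: estimates scatter with SD ~ {spread(n):.2f}")
```

At 25 per arm, the scatter of the estimate is comparable to the true effect itself, so a single small trial can easily report a doubled, nil or negative effect.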

  • quantitative variable limitation 

Another issue facing RCTs not yet discussed in existing studies is the quantitative variable limitation: that trials are only possible for those specific phenomena for which we can create strictly defined outcome variables that fit within our experimental model and make correlational or causal claims possible.

The 10 most cited RCTs thus all use a rigid quantitative outcome variable. Some use a binary outcome variable (1 or 0) indicating whether participants died.

But this binary variable can neglect the multiple ways in which participants perceive the quality of their life while receiving treatment.

In fact, most medical phenomena (from depression, cancer and overall health, to medical norms and hospital capacity) are not naturally binary or amenable to randomisation and statistical analysis (this issue also affects other statistical methods, and its implications need to be discussed in studies).

Assumptions, biases and limitations in implementing RCTs

  • all-preconditions-are-fully-met assumption

An assumption in implementing trials that has not yet been thoroughly discussed in existing studies is the all-preconditions-are-fully-met assumption: that a trial treatment can only work if a broad set of influencing factors (beyond the treatment), which can be difficult to measure and control, are simultaneously present.

A treatment – whether chemotherapy or a cholesterol drug – can only work

  • if patients are nourished and healthy enough for the treatment to be effective,
  • if compliance is high enough in taking the proper dosage,
  • if community clinics administering the treatment are not of low quality,
  • if practitioners are trained and experienced in delivering it effectively,
  • if institutional capacity of the health services to monitor and evaluate its implementation is sufficient,
  • among many other issues.

Variation in the extent to which such preconditions are met leads to variation (bias) in average treatment effects across different groups of people. To increase the effectiveness of treatments and the usefulness of results, researchers need to give greater focus, when designing trials and when extrapolating from them, to this broader context.

Table 1. Research designs of the ten most cited RCTs worldwide

In these 10 leading RCTs, some degree of statistical bias arises during implementation through issues related to people initially recruited who refused to participate, participants switching between trial groups, variations in actual dosage taken, missing data for participants and the like.

Table 1 illustrates that, for the few trials in which the share of people unwilling to participate after being recruited was reported, it accounted at times for a large share of the eligible sample.

This implies a selection bias among those who

  • have time,
  • are willing,
  • find it useful,
  • view limited risk in participating and
  • possibly have greater demand for treatment.

Among this small share, 88% were then randomised into the trial. During implementation, 42% in the treatment group stopped taking the drug. Among all participants 4% had unknown vital status (missing data) and 3% died.

As a sample shrinks due to refusals, missing data and the like, it is likely not “average participants” who are lost but those who may differ strongly – issues that intention-to-treat analysis cannot necessarily address.
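A toy simulation can illustrate how non-random dropout makes completers look healthier than the full randomised sample. The severity model and dropout rule below are invented for illustration, not taken from any of the 10 trials.

```python
import random

random.seed(2)

n = 5000
# Each participant has a latent 'severity'; sicker people respond worse
# AND are more likely to stop taking the drug.
results = []
for _ in range(n):
    severity = random.random()          # 0 = healthy, 1 = very sick
    outcome = 10 - 5 * severity + 1.0   # treatment adds +1 for everyone
    dropped = severity > 0.7            # the sickest ~30% stop the drug
    results.append((outcome, dropped))

completers = [o for o, d in results if not d]
everyone = [o for o, d in results]

mean = lambda xs: sum(xs) / len(xs)
print(f"mean outcome, completers only : {mean(completers):.2f}")
print(f"mean outcome, all randomised  : {mean(everyone):.2f}")
```

Analysing completers alone inflates the apparent benefit, because the people who left were systematically the sickest – exactly the non-"average participants" described above.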

Assumptions, biases and limitations in analysing RCTs

  • unique time period assessment bias

In evaluating results after trial implementation, RCTs face a unique time period assessment bias that has not yet been thoroughly discussed in existing studies: that a correlational or causal claim about the outcome is a function of when a researcher chooses to collect baseline and endline data points and thus assesses one average outcome instead of another.

Treatments generally have different levels of decreasing (or at times increasing) returns. Variation in estimated results is thus generally inevitable depending on when we decide to evaluate a treatment – every month, quarter, year or several years.

No two assessment points are identical and we need to thus evaluate at multiple time points to improve our understanding of the evaluation trajectory and of lags over time (while this issue also affects other statistical methods).
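A hypothetical decay curve makes the point concrete: the measured effect depends entirely on when the endline assessment is scheduled. The curve below is an assumption for illustration, not data from any trial.

```python
import math

def effect_at(months):
    """Hypothetical treatment benefit that decays over time."""
    return 3.0 * math.exp(-months / 6.0)

# The 'result' of the trial varies with the chosen assessment point.
for months in (1, 3, 6, 12, 24):
    print(f"endline at month {months:2d}: measured effect = {effect_at(months):.2f}")
```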

  • background-traits-remain-constant assumption

Another strong assumption made in evaluating RCTs that has not yet been discussed is the background-traits-remain-constant assumption. Background traits can change during the trial, so we need to assess them not only at baseline but also at endline, as they can alternatively influence outcomes and bias results.

  • average treatment effects limitation

Another constraint is that trials are commonly designed to only evaluate average effects – i.e. the average treatment effects limitation. Yet average effects can be positive even when most participants are unaffected, or even negatively affected, by the treatment, so long as a minority experiences large positive effects.
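A tiny worked example shows how a positive average can coexist with harm to the majority; the numbers are invented for illustration.

```python
# 100 participants: 80 are slightly harmed (-1 point each),
# 20 respond strongly (+10 points each).
effects = [-1] * 80 + [10] * 20

average = sum(effects) / len(effects)
harmed = sum(e < 0 for e in effects) / len(effects)

print(f"average treatment effect: {average:+.2f}")    # prints +1.20
print(f"share of participants harmed: {harmed:.0%}")  # prints 80%
```

A trial reporting only the +1.2 average would conceal that four out of five participants were made slightly worse off.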

  • best results bias

A best results bias can also exist in reporting treatment effects, with funders and journals at times less likely to accept negligible or negative results.

  • funder bias

Another constraint in evaluating trials is that funders can have some inherent interest in the published outcomes that can lead to a funder bias.

  • placebo-only or conventional-treatment-only limitation

An associated constraint that arises in interpreting a trial’s treatment effects is related to a placebo-only or conventional-treatment-only limitation.

Combining the set of assumptions, biases and limitations facing RCTs

Pulling the range of assumptions and biases together that arise in designing, implementing and analysing trials (Figure 1), we can try to assess how reliable an RCT’s outcomes are.

Figure 1. Overview of assumptions, biases and limitations in RCTs (i.e. improving trials involves reducing these biases and satisfying these assumptions as far as possible). Source: Own illustration. Note: For further details on any assumption, bias or limitation, see the respective section throughout the study. This list is not exhaustive.

Yet is it feasible to always meet this set of assumptions and minimise this set of biases? Judging by these leading RCTs, the answer appears to be no.

The extent of assumptions and biases underlying a trial’s results can increase at each stage: from

how we

  • choose our research question and objective,
  • create our variables,
  • select our sample,
  • randomise, blind and control,

to how we

  • carry out treatments and monitor participants,
  • collect our data and conduct our data analysis,
  • interpret our results

and do everything else before, in between and after these steps.

We need to furthermore use RCTs together with other methods that also have benefits.

When a trial suggests that a new treatment can be effective for some participants in the sample, subsequent observational studies for example can often be important to provide insight into:

  • a treatment’s broader range of side effects,
  • the distribution of effects on those of different age, location and other traits and,
  • among others, whether people in everyday practice with everyday service providers in everyday facilities would be able to attain comparable outcomes as the average trial participant.

Conclusions

Randomised experiments require much more than just randomising an experiment to identify a treatment’s effectiveness.

They involve many decisions and complex steps that bring their own assumptions and degree of bias before, during and after randomisation.

Seen through this lens, the reproducibility crisis can also be explained by the scientific process being a complex human process involving many actors making many decisions at many levels when designing, implementing and analysing studies, with some degree of bias inevitably arising during this process.

And addressing one bias can at times mean introducing another bias (e.g. making a sample more heterogeneous can help improve how useful results are after the trial but can also reduce reliability in the trial’s estimated results).

Journals must begin requiring that researchers include a standalone section with additional tables in their studies on the “Research assumptions, biases and limitations” they faced in carrying out the trial.

Researchers need to furthermore better combine methods as each can provide insight into different aspects of a treatment. These range from RCTs, observational studies and historically controlled trials, to rich single cases and consensus of experts.

Finally, randomisation does not always even out everything well at the baseline and it cannot control for endline imbalances in background influencers.

No researcher should thus just generate a single randomisation schedule and then use it to run an experiment. Instead researchers need to run a set of randomisation iterations before conducting a trial and select the one with the most balanced distribution of background influencers between trial groups, and then also control for changes in those background influencers during the trial by collecting endline data.
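The rerandomisation procedure recommended here – generate many candidate schedules and keep the most balanced one – can be sketched as follows. The single balance criterion (mean age) and the candidate count are simplifying assumptions; a real trial would balance several background influencers at once.

```python
import random

random.seed(3)

def imbalance(allocation, ages):
    """Absolute difference in mean age between the two arms."""
    treat = [a for a, arm in zip(ages, allocation) if arm]
    ctrl = [a for a, arm in zip(ages, allocation) if not arm]
    mean = lambda xs: sum(xs) / len(xs)
    return abs(mean(treat) - mean(ctrl))

ages = [random.gauss(50, 10) for _ in range(60)]
base = [True] * 30 + [False] * 30  # 30 treatment, 30 control slots

# Generate many candidate randomisation schedules and keep the one
# with the most balanced distribution of the background trait.
candidates = []
for _ in range(500):
    alloc = base[:]
    random.shuffle(alloc)
    candidates.append(alloc)

best = min(candidates, key=lambda a: imbalance(a, ages))
print(f"worst candidate imbalance: {max(imbalance(a, ages) for a in candidates):.2f}")
print(f"chosen schedule imbalance: {imbalance(best, ages):.2f}")
```

Selecting among candidate schedules before the trial keeps the allocation random while discarding the unluckiest draws, which is the balance-then-randomise idea the conclusion argues for.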

 

An honest analysis of anti-opioid data

Are doctors bribed by pharma? An analysis of data – By Rafael Fonseca MD & John A Tucker MBA, PhD – Jul 21, 2018
A Critical Analysis of a Recent Study by Hadland and colleagues

Here’s an astute analysis of the statistics used to demonize doctors’ “prescribing behaviors”, especially when they prescribe opioids.

Association studies that draw correlations between drug company-provided meals and physician prescribing behavior have become a favorite genre among advocates of greater separation between drug manufacturers and physicians.

Recent studies have demonstrated correlations between acceptance of drug manufacturer payments and undesirable physician behaviors, such as increased prescription of promoted drugs.   Continue reading

Deadly Scope of America’s Fentanyl Problem

The True Deadly Scope of America’s Fentanyl Problem – JAMA April 2018 – by Nora Volkow, the director of the National Institute on Drug Abuse (NIDA)

Dr. Volkow seems to understand that our problem is with fentanyl, not prescription opioids. I wish she could convince other government agencies and bureaucrats of this truth.

A year ago I wrote on this blog about the escalating numbers of people dying from overdoses involving the extremely potent synthetic opioid fentanyl and its analogues. Continue reading

FDA Finds Errors in Opioid Data, calls for quality review

FDA finds errors in its opioid sales data, calls for quality review – by Meg Tirrell | @megtirrell – May 16, 2018

The Food and Drug Administration says it’s found mistakes in opioid sales data provided by industry researcher Iqvia that led to an overestimation of the amount of prescription fentanyl being used in the U.S.

This means that they assumed more of the fentanyl found on the streets and involved in overdoses was from prescriptions when instead it was illicit.

Because the larger number fed into the prevailing narrative that prescription opioids are fueling the “opioid crisis”, these numbers weren’t questioned closely.  Continue reading

First Look at Latest REAL Data on Opioid Overdoses

US Opioid Prescribing vs. Overdose Deaths and Hospital ER Visits
Implications for Public Policy
Richard A. Lawhern, PhD – May 2018

The ATIP advocacy group has sourced a new statistical analysis of the latest overdose data. Founding member Richard Lawhern has written a paper conclusively illustrating how the CDC presented deceptive views of the data to show a supposed link between opioid overdoses and opioid prescribing.

Abstract

An analysis has been performed on data for opioid prescribing, opioid-overdose related mortality, and hospital emergency room visits recorded for 1999-2016 by the US Centers for Disease Control and Prevention (CDC) and the US Agency for Healthcare Research Quality (AHRQ).   Continue reading

CDC Rx Opioid Overdoses Over-Reported by Half

CDC Opioid Overdose Death Rates Over-Reported by Half – Apr 2018 – A PPM Brief

Agency says inflated estimates were caused by blurred lines between prescription and illicit opioids.

The CDC is supposed to be an evidence and science-based agency, yet they published their opioid prescribing guidelines based on their own erroneous data.

Many laypeople and even their own authors have known for years that illicit fentanyl was causing overdoses, yet the CDC is just now starting to admit it.  

Continue reading