On March 6, 2018, the Journal of the American Medical Association published a 12-month randomized clinical trial (authors Erin E. Krebs, Amy Gravely, Beth DeRonne, Elizabeth Goldsmith, and others) comparing opioid with non-opioid medications for the treatment of moderate to severe osteoarthritis and back pain among 240 Veterans Administration patients.
In the days since publication, the study has been picked up by popular online magazines and blogs under blaring, but incorrect, headlines.
The trial supposedly “proves” that opioid pain relievers work no better than acetaminophen (Tylenol) or non-steroidal anti-inflammatory drugs (NSAIDs), and that the risks of opioids make them unacceptable in the treatment of these types of pain.
In reality, the trial proves no such thing.
To an informed reader, the study is profoundly flawed on several grounds.
The study addresses types of pain for which opioids have never been the treatment of choice; the first-line treatment for these conditions is NSAIDs.
The study protocols were flawed: the investigators set up two cohorts of patients in a “practical” trial that offered several medications in sequence until one was found that worked.
About 11% of the patients in the “non-opioid” leg of the trial eventually tried tramadol, an opioid that was not identified as such in the study.
Patients on “non-opioids” were switched between medications an average of four to five times during the year of the study before a medication was found that worked.
Patients on opioids either had successful therapy on the first medication tried or were switched only once. This difference was not detailed in the study results.
Observations of Stephen Nadeau, MD (a specialist in the treatment of chronic pain) in a private email to the author are also meaningful:
“The mean dose of opioid was 21 mg morphine equivalent (MEQ)/day and only 12.6% of patients randomized to the opioid group were on > 50mg MEQ/day.
The operational clinical range for opioids used in the treatment of chronic non-malignant pain is roughly 50-1000 mg MEQ/day [and yes, 1000 is not a typo].
There are excellent scientific data on this, even as the CDC and others have avidly promoted a “one size fits all” concept and advocated for daily dosages of <90 mg MEQ (resulting in untold suffering and many deaths among the 1.6 million people who have chronic pain and are on doses of >90 mg MEQ/day).”
The bottom line is that the study seems to have been set up to produce a predetermined result: to discredit opioids in favor of NSAIDs and Tylenol.
Any of these biases alone should have disqualified the study from publication in JAMA, but the fact that three of them are found in the same paper raises suspicions, and rightly so.
The fact that JAMA editors let this piece see the light of day should have us all questioning the integrity and/or standards of the journal.
This blatantly biased “study” is a real black eye for JAMA, which I once held in high regard.
The researchers probably knew that any study purporting to show that “opioids are evil” would be published these days, regardless of its lack of scientific rigor.
Renowned medical schools, like Harvard, and world-famous clinics, like the Cleveland Clinic, are peddling previously ridiculed “alternative” medicine under the meaningless banner of “Integrative Medicine.”
The scientific and medical communities, at least the parts that are visible to the public, have abdicated their responsibility to remain reality-based.
Author: Richard A. Lawhern, Ph.D., is a technically trained non-physician healthcare writer and social media moderator for chronic pain communities. He has over 20 years of hands-on experience as a patient advocate, with multiple publications concerning pain and public policy on medical issues. Dr. Lawhern is also an American Council advisor.
I posted the following comment on the original article: Sometimes the Journal of the AMA Gets It Wrong! And so do careless journalists. By Richard “Red” Lawhern
Thank you for pointing out the many serious mistakes in this clearly biased study.
Usually, researchers do a better job of hiding their anti-opioid bias, but this one was over the top, with so many unaccounted-for variables and dependencies that it is difficult to determine what exactly was measured. Deriving any firm conclusions from this vague mess of circumstances is impossible.