Reporting of Positive Results in Randomized Controlled Trials of Mindfulness-Based Mental Health Interventions – PLOS ONE – April 8, 2016 – Free full-text study
A large proportion of mindfulness-based therapy trials report statistically significant results, even in the context of very low statistical power.
We have the same problem with the studies used to support the CDC guidelines.
The objective of the present study was to characterize the reporting of “positive” results in randomized controlled trials of mindfulness-based therapy.
We also assessed mindfulness-based therapy trial registrations for indications of possible reporting bias and reviewed recent systematic reviews and meta-analyses to determine whether reporting biases were identified.
CINAHL, Cochrane CENTRAL, EMBASE, ISI, MEDLINE, PsycInfo, and SCOPUS databases were searched for randomized controlled trials of mindfulness-based therapy. The number of positive trials was described and compared to the number that might be expected if mindfulness-based therapy were similarly effective compared to individual therapy for depression. Trial registries were searched for mindfulness-based therapy registrations. CINAHL, Cochrane CENTRAL, EMBASE, ISI, MEDLINE, PsycInfo, and SCOPUS were also searched for mindfulness-based therapy systematic reviews and meta-analyses.
- 108 (87%) of 124 published trials reported ≥1 positive outcome in the abstract, and 109 (88%) concluded that mindfulness-based therapy was effective; this is 1.6 times the number of positive trials expected if the true effect size were d = 0.55 (expected number of positive trials = 65.7).
- Of 21 trial registrations, 13 (62%) remained unpublished 30 months post-trial completion.
- No trial registrations adequately specified a single primary outcome measure with time of assessment.
- None of 36 systematic reviews and meta-analyses concluded that effect estimates were overestimated due to reporting biases.
In other words, these studies did not follow the scientific standards established for such research.
The proportion of mindfulness-based therapy trials with statistically significant results may overstate what would occur in practice.
The main finding of this study was that of 124 MBT RCTs that were reviewed, almost 90% were presented as positive studies when published.
Furthermore, only 3 trials were presented unequivocally as negative, without alternative interpretations or caveats to mitigate the negative results and suggest that the treatment might still be effective.
The above is a sure sign of researcher bias, which runs rampant in “science” these days.
This is just like how all the studies of opioids make them seem detrimental because they do not account for their positive pain-relieving effects at all.
MBT is often administered in groups, by people who do not necessarily have professional mental health training, to treatment recipients without defined diagnoses or levels of symptoms, all of which likely reduce effect sizes.
Based on this reference point, there were 1.6 times as many positive MBT RCTs among the 124 RCTs we reviewed as would be expected if the true effect size were d = 0.55 in a relatively homogeneous group of trials.
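To make the arithmetic behind this reference point concrete, here is a minimal sketch of how an expected count of positive trials can be built up from per-trial statistical power at d = 0.55. The per-group sample sizes below are hypothetical placeholders rather than the sizes of the reviewed RCTs, and the normal approximation to the two-sample t-test is my own simplification, not the authors' method.

```python
# Minimal sketch: sum each trial's power to detect d = 0.55 to get the
# expected number of "positive" (statistically significant) trials.
# Sample sizes are hypothetical; the normal approximation is an assumption.
from scipy.stats import norm

def power_two_sample(n_per_group, d=0.55, alpha=0.05):
    """Approximate power of a two-sided two-sample test (normal approximation)."""
    z_crit = norm.ppf(1 - alpha / 2)
    noncentrality = d * (n_per_group / 2) ** 0.5
    return norm.cdf(noncentrality - z_crit)

# Hypothetical per-group sample sizes for a handful of small trials
sample_sizes = [15, 20, 25, 30, 40, 60]
expected_positive = sum(power_two_sample(n) for n in sample_sizes)
print(f"Expected positive trials: {expected_positive:.1f} of {len(sample_sizes)}")
```

Applied across 124 mostly small trials, this kind of calculation is what yields the paper's expected count of roughly 65.7 positive trials, far short of the 108 actually reported as positive.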
When we examined subgroups of only studies of MBCT (versus other MBTs), only studies with clinical populations (versus general population, employees, or students), and only studies that required mental health symptoms for enrollment, results were consistent.
Our review of trial registration records also suggests that reporting biases may have been an important factor.
Of the 124 RCTs reviewed, only 21 (17%) were registered prior to data collection, even though 80 of the eligible RCTs were published recently (since 2010).
This means they collected data first, then examined it for possible correlations, and only then decided which results they would use for the study.
Statistically, data can be “processed” under various hypothetical conditions with various combinations of data points to eventually yield the sought-after result.
When we examined trial registries, we identified 21 registrations of MBT trials listed as completed by 2010 and found that 13 (62%) remained unpublished 30 months after completion; of the published trials, all conveyed a positive conclusion.
Another technique is to simply abandon studies if the results don’t support the researchers’ hypotheses. Unfortunately, this is happening in many areas of scientific inquiry and is an acknowledged problem among scientists.
None of the 21 registrations, however, adequately specified a single primary outcome (or multiple primary outcomes with an appropriate plan for statistical adjustment), including the outcome measure, the time of assessment, and the metric (e.g., continuous, dichotomous).
In other words, most of these studies supporting “mindfulness” aren’t scientifically rigorous and only produce questionable results that don’t hold up under inspection – much like the current rash of negative opioid studies.
Yet these bogus studies are then widely cited to support the current propaganda and even become incorporated into government health guidelines like the CDC’s, which urge us to pursue exactly this kind of unscientific mumbo-jumbo.
When we removed the metric requirement, only 2 (10%) registrations were classified as adequate.
Thus, selective outcome reporting, as well as “data dredging” and selective reporting of analyses, may play important roles in the proportion of positive studies that we found among MBT RCTs in the present study.
If one assumes that there is some effect of MBT on mental health outcomes, albeit a smaller effect than reported in published studies, the ability to selectively publish from multiple outcome options or multiple analyses could easily lead to exaggerated effect estimates and a rate of positive trial reports that exceeds plausibility, as we found in our study.
Indeed, others have suggested that exaggerated effect sizes are problematic in trials that work with “soft” outcomes, as is typically the case in psychological or behavioral research, and that selective reporting of only some outcomes and analytical flexibility may be even larger problems than classic publication bias in psychological studies compared to “harder” sciences.
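As a rough illustration of that point, the sketch below simulates trials that measure several “soft” outcomes with only a small true effect and are declared positive whenever any single outcome reaches p < 0.05. All of the numbers (trials, outcomes, participants, effect size) are invented for illustration, and the simulation ignores the correlation that real outcomes would show.

```python
# Rough illustration (made-up parameters): if a trial can report whichever of
# several outcomes happens to be significant, the share of "positive" trials
# is inflated well beyond the power of any single pre-specified outcome.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_trials, n_outcomes, n_per_group, true_d = 1000, 5, 30, 0.2

positive = 0
for _ in range(n_trials):
    # Independent outcomes for simplicity; real outcomes are usually correlated.
    control = rng.normal(0.0, 1.0, size=(n_outcomes, n_per_group))
    treated = rng.normal(true_d, 1.0, size=(n_outcomes, n_per_group))
    p_values = [ttest_ind(t, c).pvalue for t, c in zip(treated, control)]
    if min(p_values) < 0.05:
        positive += 1

print(f"Share of trials with at least one significant outcome: {positive / n_trials:.0%}")
```

With a true effect of only d = 0.2 and 30 participants per arm, any single pre-specified outcome has modest power, yet the “report the best of five” rule pushes the positive rate several times higher.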
The very small number of trials that clearly declared negative results in the present study without caveats or “spin” also reminds us that when negative results are reported, they are often “spun” so that they appear to be equivocal or even positive findings.
…the failure to provide a clear statement of non-significance in the abstract may serve to distort understandings of results, since many readers base their assessment of trial results on what is reported in the abstract.
In the present study, we found that most existing evidence syntheses either did not evaluate reporting biases or concluded that they were not present.
A meta-analysis, which was published subsequent to our search period and not included in our analysis, for instance, did not assess publication or other forms of reporting bias with statistical methods, but did identify patterns of non-publication and likely selective outcome reporting by reviewing MBT trial registrations.
However, the proportion of positive trials that is reported, despite small sample sizes and low statistical power, is concerning.
These are exactly like the opioid studies on which the CDC based their prescribing guidelines. As they say, “garbage in, garbage out.”
Although we could not determine with certainty the degree to which reporting biases play a role in this, there was evidence that they may be a driving force.
Investigators who conduct trials of MBT and other non-pharmaceutical interventions to improve mental health should register their trials with enough information so that readers can verify whether published outcomes match the pre-specified outcomes.