Over the past decade, many scholars have questioned the credibility of research across a variety of scientific fields.
As psychologists specialising in clinical work (Alexander Williams) and methodology (John Sakaluk), we wondered what these concerns mean for psychotherapy.
Research was once a search for truth, but these days it often seems merely to support a predetermined conclusion. Bias infiltrates the design and methodology of studies, and clever statistical techniques are used to “discover” the desired result (see my numerous posts about research bias).
Over the past 50 years, therapy researchers have increasingly embraced the evidence-based practice movement.
And they are running into many of the same problems as the push for Evidence-Based Medicine (EBM):
- Misconceptions about Evidence-Based Medicine
- Does EBM adversely affect clinical judgment?
- Problems with Evidence-Based Medicine
- “Evidence-Based Medicine”: Corporate Corruption
- Systematic Pushback Against Evidence-Based Medicine
- …and more posts on EBM
Just as medicines are pitted against placebos in research studies, psychologists have used randomised clinical trials to test whether certain therapies (eg, ‘exposure therapy’, or systematically confronting what one fears) benefit people with certain mental-health conditions (eg, a phobia of spiders).
The treatment-for-diagnosis combinations that have amassed evidence from these trials are known as empirically supported treatments (ESTs).
We wondered, though:
- Is the credibility of the evidence for ESTs as strong as that designation suggests?
- Or does the evidence base for ESTs suffer from the same problems as published research in other areas of science?
Certainly, much of the recent research on opioids strains credibility:
- Truths, lies, and statistics
- NO DIFFERENCE between cancer and non-cancer pain
- Opioids Blamed for Consequences of Chronic Pain
This is what we (with our coauthors, the US psychologists Robyn Kilshaw and Kathleen T Rhyner) explored in our study published recently in the Journal of Abnormal Psychology.
The Society of Clinical Psychology – or Division 12 of the American Psychological Association – has done the arduous work since the 1990s of establishing a list of more than 70 ESTs.
Treatments | Society of Clinical Psychology: This site has an alphabetised list of 86 psychological treatments paired with diagnoses, each with a description, research support, clinical resources, and training opportunities.
We conducted a ‘meta-scientific review’ of these ESTs. Across a variety of statistical metrics, we assessed the credibility of the evidence cited by the Society for every EST on their list.
All told, we analysed more than 450 research articles. What we found is a study in contrasts.
- Around 20 per cent of ESTs performed well across a majority of our metrics (eg, problem-solving therapy for depression, interpersonal psychotherapy for bulimia nervosa, the aforementioned exposure therapy for specific phobias).
- We also found a ‘murky middle’: 30 per cent of ESTs had mixed results across metrics, performing neither consistently well nor poorly (eg, cognitive therapy for depression, interpersonal psychotherapy for binge-eating disorder).
- That leaves 50 per cent of ESTs with subpar outcomes across most of our metrics (eg, eye-movement desensitisation and reprocessing for PTSD, interpersonal psychotherapy for depression).
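To give a feel for the kind of credibility metric such a meta-scientific review can rely on, here is a minimal, hypothetical sketch of one common check: estimating the statistical power of a two-group trial. (This is illustrative only; it is not claimed to be one of the study's actual metrics, and the effect size and sample size below are invented for the example.)

```python
from math import sqrt
from statistics import NormalDist

def approx_power(d, n_per_group, alpha=0.05):
    """Approximate power of a two-arm trial via the normal approximation.

    d: assumed true effect size (Cohen's d).
    n_per_group: participants per arm.
    """
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)      # two-sided critical value
    ncp = d * sqrt(n_per_group / 2)        # expected size of the test statistic
    return z.cdf(ncp - z_crit)             # negligible lower tail ignored

# A 'medium' effect (d = 0.5) with 20 clients per arm -- sample sizes
# common in older therapy trials -- yields power of only about 0.35.
print(round(approx_power(0.5, 20), 2))  # → 0.35
```

When most trials in a literature are this underpowered, a long run of statistically significant results is itself a warning sign: the trials were unlikely to detect a true effect that often, which is one reason a body of “significant” findings can still lack statistical credibility.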
In other words, although these ESTs seemed to work based on the claims of the clinical trials cited by the Society of Clinical Psychology, we found the evidence from these trials lacked statistical credibility.
What does it mean, though, if the evidence behind the therapies thought to be best supported by research is not as strong as one would hope?
If some ESTs lack credible evidence that they are superior to simpler, less costly and less time-consuming forms of therapy, then shifting resources towards those simpler treatments could benefit therapy clients and everyone bearing the costs of mental-health care.
Given the credibility problems we found across many clinical trials, we contend that, in many cases, we simply do not know whether some therapies perform better than others.
Of course, this also means we do not know if the majority of therapies are equally effective, and, if such equality exists, whether it owes to common factors shared across treatments (such as the relationship between therapist and client).
Psychotherapy could be on the verge of a renaissance. Research on mental-illness treatment can benefit greatly from the lessons psychology has learned about credibility.
Ethical therapists can continue to engage in practice that is evidence-based, not eminence-based, rooting their therapies in scientific evidence rather than their own conjecture or that of senior colleagues.
They can also continue the routine outcome measurement many already employ:
- solicit therapy clients’ feedback early and often,
- be open to surprise about what’s working and what’s not, and
- adjust accordingly.
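One established way to put such routine outcome measurement on a statistical footing is the Jacobson-Truax reliable change index, which asks whether a client's change in score exceeds what measurement error alone could produce. A minimal sketch, assuming a hypothetical symptom questionnaire whose norming sample gives a standard deviation and test-retest reliability (the client numbers below are invented for illustration):

```python
from math import sqrt

def reliable_change_index(pre, post, sd, reliability):
    """Jacobson-Truax reliable change index: the pre-to-post change
    divided by the standard error of the difference between two scores."""
    s_diff = sd * sqrt(2 * (1 - reliability))
    # Positive values mean improvement when lower scores are better.
    return (pre - post) / s_diff

# Hypothetical client: symptom score drops from 30 to 20 on a measure
# with SD 7.5 and reliability .80 in its norming sample.
rci = reliable_change_index(30, 20, sd=7.5, reliability=0.80)
print(round(rci, 2), "reliable improvement" if rci > 1.96 else "no reliable change")
```

An index beyond 1.96 means the change is unlikely (at the conventional 5 per cent level) to be mere measurement noise, which gives therapist and client a shared, concrete basis for deciding whether to stay the course or adjust.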
Clients can ask their therapists upfront if they will offer the opportunity for such mutual assessment of their progress.
Therapy helps the vast majority of those who receive it.
I would agree with this. If nothing else, the focused attention and support provided by therapy can be healing.
Happily – if the discipline embraces reform in research, and cultivates a humble, flexible approach to therapy – it could help even more.
Alexander Williams is programme director of psychology and director of the Psychological Clinic, both at the University of Kansas, Edwards Campus.
John Sakaluk is assistant professor in psychology at the University of Victoria, British Columbia.