This is a collection of 5 articles I wandered through by following links from an initial post on the healthcare blog Alert & Oriented by Michel Accad, MD: excerpts from 3 linked blog entries and 2 linked PubMed articles.
The Statistical Alchemy of Meta-Analyses – Alert & Oriented – Michel Accad, MD – July 2011
…a remarkable article Alvan Feinstein wrote in 1995, “Meta-Analysis: Statistical Alchemy for the 21st Century.” In a few clearly written pages, the founding father of clinical epidemiology brilliantly identifies the wishful thinking underlying meta-analysis and exposes its methodological fallacies.
Feinstein begins by reminding the reader of the four necessary requirements for acceptable scientific evidence. Translated to clinical research, these become
1) that the population under investigation be identified reliably (“in a reproducible manner”);
2) that the relevant characteristics be homogeneous;
3) that comparisons performed between subgroups of the population be unbiased (internal validity);
4) that the evidence obtained be extrapolated to a broader population (external validity).
Because meta-analyses necessarily fail on one or more of these requirements, the wished-for results can never produce better information than the trials upon which they are constructed—hence the analogy with alchemy.
Among such shortcomings are
- the inevitable problem of publication bias,
- the difficulty of reconciling the diverse statistical methods used in the original studies (meta-analyses are usually content just to identify them), and
- the fact that meta-analyses necessarily obscure the descriptive details of the population under study, dealing a severe blow to external validity.
Furthermore, much of what happens in real life, such as mid-course changes or additions to the initial therapy, is usually “censored” by meta-analytical methods.
And meta-analyses are ill-equipped for studying “soft” endpoints such as functional performance and quality of life which have added relevance to clinical practice.
The most common issue, in Feinstein’s view, is the ubiquitous practice of expressing treatment effects as ratios (e.g., relative risk, odds ratio, or proportionate increment) without direct reference to the underlying event rate.
“Drug A is 80% more effective than Drug B” sounds great if you don’t know that Drug B was only 1% effective.
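To make the ratio trick concrete, here is a small sketch with made-up numbers (my own illustration, not Feinstein’s): a 1% event rate for Drug B and 1.8% for Drug A yields the headline-friendly “80% more effective” while the absolute gain is under one percentage point.

```python
# Illustrative numbers only (not from the article): how a relative-effect
# claim can hide a tiny absolute effect.

def relative_increase(rate_a, rate_b):
    """Relative improvement of A over B, e.g. 0.8 -> '80% more effective'."""
    return (rate_a - rate_b) / rate_b

def absolute_difference(rate_a, rate_b):
    return rate_a - rate_b

rate_b = 0.01    # Drug B works in 1% of patients
rate_a = 0.018   # Drug A works in 1.8% of patients

print(f"Relative: {relative_increase(rate_a, rate_b):.0%} more effective")       # 80%
print(f"Absolute: {absolute_difference(rate_a, rate_b):.1%} more patients helped")  # 0.8%
print(f"Number needed to treat: {1 / absolute_difference(rate_a, rate_b):.0f}")  # 125
```

The “number needed to treat” line is the clinically honest framing: roughly 125 patients must take Drug A instead of Drug B for one extra patient to benefit.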
Feinstein continues his indictment of meta-analyses by listing common sense principles that are virtually always ignored:
1) in order to draw any meaningful conclusion, variables must be selected because they reflect biologically relevant notions of homogeneity, not because they can be conveniently measured and classified;
2) meta-analyses cannot claim wider applicability merely because they study more heterogeneous populations;
3) if a meta-analysis is to hint at causality, it must give particular attention to a consistent effect among the individual studies. Inconsistencies of effect cannot be “buried in the statistical agglomeration.”
Yet rare is the meta-analysis that does not include studies going “both ways” before forcefully concluding that a cause-and-effect relationship was established.
Meta-analytic nonsense is now quoted in the mainstream media as factual truth by prominent academics to justify specific forms of health care rationing.
But should we really be surprised if the congressional sausage machine relies on statistical mishmash and academic hodgepodge to plan healthcare into the future?
This is what we’re stuck in now with the “opioid crisis”.
After all, cui bono? (literally “to whom is it a benefit?”)
A philosophical argument against evidence-based policy. – PubMed – NCBI – J Eval Clin Pract. 2017 Oct;
RATIONALE, AIMS AND OBJECTIVES:
Evidence-based medicine has two components.
- The methodological or ontological component consists of randomized controlled trials and their systematic review. This makes use of a difference-making conception of cause.
- But there is also a policy component that makes a recommendation for uniform intervention, based on the evidence from randomized controlled trials.
The policy side of evidence-based medicine is basically a form of rule utilitarianism.
But it is then subject to an objection from Smart that rule utilitarianism inevitably collapses.
If one assumes
(1) you should recommend the intervention that has brought most benefit (the core of evidence-based policy-making),
(2) individual variation (acknowledged by use of randomization) and
(3) no intervention benefits all (contingent but true), then the objection can be brought to bear.
A utility maximizer should always ignore the rule in an individual case where greater benefit can be secured through doing so.
In the medical case, this would mean that a clinician who knows that a patient would not benefit from the recommended intervention has good reason to ignore the recommendation.
This is indeed the feeling of many clinicians who would like to offer other interventions but for an aversion to breaking clinical guidelines.
Mark Scheid skillfully reviews a provocative book that challenges the theory and practice of population medicine (PM).
The book’s author—Michel Accad, a practicing cardiologist—attributes the healthcare community’s widespread embrace of PM to 3 factors:
- the economics,
- the science, and
- the ethics of health care.
This editorial focuses on the science of health care, highlighting the intimate ties between PM and evidence-based medicine (EBM).
Although much has been written about EBM, little has been said about its relation to PM. As a result, the striking similarities between the two entities are not widely appreciated. In fact, as I will show, PM and EBM can be identical.
Population medicine (variably called population health) evaluates the healthcare needs of a specific population and makes decisions for that population as a whole.
The recipient of care is the “population” itself, and the approach does not necessarily benefit any specific individual within that population.
Evidence-based medicine emphasizes the use of external evidence derived from randomized controlled trials (RCTs), meta-analyses, and systematic reviews—integrated with clinical expertise—to make decisions about the care of individual patients.
Opponents argue that RCTs are limited in their scope.
For example, RCTs seldom include in their databases important factors such as
- types and severity of symptoms;
- rates of progression of the illnesses;
- effects of comorbid conditions; and
- patients’ social support,
- genomic profiles,
- psychological states,
- expectations, and
- willingness or ability to cooperate. (see Sampling Bias in Pain Research)
In addition, RCTs are inappropriate or unethical in certain situations and are neither possible nor pertinent in making many clinical decisions.
Studies purporting to evaluate treatments of chronic pain run into these problems:
- The chronicity of pain cannot be studied in “normal people” without pain.
- It isn’t ethical to experimentally induce intractable pain.
- It isn’t ethical to leave a “control group” to suffer without effective relief for months on end.
More important, the primary outcome of an RCT is always an aggregate: the results represent the average effect in the treatment group, in contrast to that in the control group.
Therein lies the problem: applying aggregate measures to an individual patient.
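The gap between the aggregate and the individual can be shown with a toy simulation (my numbers, not the editorial’s): if only a minority of patients respond to a treatment, the trial’s reported average effect is positive even though most individual patients get nothing from it.

```python
# Toy simulation: a positive *average* treatment effect can coexist
# with most patients receiving no benefit at all.
import random

random.seed(0)

# Assume 20% of patients are "responders" who improve by 10 points,
# and the other 80% improve by 0. The trial reports only the mean.
n = 1000
effects = [10.0 if random.random() < 0.2 else 0.0 for _ in range(n)]

mean_effect = sum(effects) / n
share_benefiting = sum(e > 0 for e in effects) / n

print(f"Average treatment effect: {mean_effect:.1f} points")
print(f"Patients who actually benefited: {share_benefiting:.0%}")
```

The mean lands near 2 points, a result that describes the “population” well and no single patient accurately: responders gain 10, everyone else gains 0.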
Although EBM does enable better predictability in the management of disease, it can do so only at the aggregate, that is, the “population,” level. In that sense, EBM is no different from PM.
Old Ways Are Still Valid
Good doctors have always practiced medicine on the basis of the best available evidence. Those of us whose professional careers predated the current version of EBM faithfully pursued the best external evidence that we could find, albeit typically from sources other than RCTs.
We were ever mindful of the lessons that our patients taught us, lessons that helped shape our clinical decisions.
In caring for the individual patient, that approach was—and still is—a valid practice model.
Problems in the “evidence” of “evidence-based medicine”. – PubMed – Am J Med. 1997 Dec
The proposed practice of “evidence-based medicine,” which calls for careful clinical judgment in evaluating the “best available evidence,” should be differentiated from the special collection of data regarded as suitable evidence.
Although the proposed practice does not seem new, the new collection of “best available” information has major constraints for the care of individual patients.
Derived almost exclusively from randomized trials and meta-analyses, the data do not include many types of treatments or patients seen in clinical practice; and the results show comparative efficacy of treatment for an “average” randomized patient, not for pertinent subgroups formed by such cogent clinical features as severity of symptoms, illness, co-morbidity, and other clinical nuances.
The intention-to-treat analyses do not reflect important post-randomization events leading to altered treatment; and the results seldom provide suitable background data when therapy is given prophylactically rather than remedially, or when therapeutic advantages are equivocal.
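The intention-to-treat point is easy to see with hypothetical toy data (my own, not from the abstract): a patient randomized to the control arm who switches to the active treatment mid-course is still counted as “control” under ITT, so the reported arms no longer describe what anyone actually received.

```python
# Hypothetical toy data showing how intention-to-treat (ITT) analysis
# "censors" a post-randomization switch in therapy.

patients = [
    # (assigned_arm, actually_received, outcome: 1 = improved)
    ("treatment", "treatment", 1),
    ("treatment", "treatment", 1),
    ("treatment", "treatment", 0),
    ("control",   "treatment", 1),   # switched to treatment mid-course
    ("control",   "control",   0),
    ("control",   "control",   0),
]

def success_rate(rows):
    return sum(outcome for *_, outcome in rows) / len(rows)

# ITT groups by what was ASSIGNED, ignoring the mid-course switch.
itt_treat = success_rate([p for p in patients if p[0] == "treatment"])
itt_ctrl  = success_rate([p for p in patients if p[0] == "control"])

# An "as-treated" view groups by what was actually received.
at_treat = success_rate([p for p in patients if p[1] == "treatment"])
at_ctrl  = success_rate([p for p in patients if p[1] == "control"])

print(f"ITT:        treatment {itt_treat:.0%} vs control {itt_ctrl:.0%}")
print(f"As-treated: treatment {at_treat:.0%} vs control {at_ctrl:.0%}")
```

With these six invented patients the two analyses tell different stories (67% vs 33% under ITT, 75% vs 0% as-treated); neither is simply “correct,” which is precisely the ambiguity the abstract is pointing at.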
Randomized trial information is also seldom available for issues in etiology, diagnosis, and prognosis, and for clinical decisions that depend on pathophysiologic changes, psychosocial factors and support, personal preferences of patients, and strategies for giving comfort and reassurance.
The laudable goal of making clinical decisions based on evidence can be impaired by the restricted quality and scope of what is collected as “best available evidence.”
The authoritative aura given to the collection, however, may lead to major abuses that produce inappropriate guidelines or doctrinaire dogmas for clinical practice.
So the above article from 1997 predicted, more than 20 years in advance, the fiasco we’re now seeing from the CDC opioid guidelines.
Can EBM and clinical judgment be friends? – Alert & Oriented – Michel Accad, MD – March 15, 2016
…the following idea: “Indeed, EBM has serious limitations, we’ve gone too far, and we should make sure to use both EBM and clinical judgment when we are making our medical decisions.” In other words, art and science.
The problem is that by considering EBM and clinical judgment as proportionate partners in the pursuit of clinical excellence, one commits a conceptual error, because EBM and judgment are not on the same plane: the one provides data, the other is the actual decider.
Part of the confusion, I think, arises because we frequently employ the loose expression “to use judgment,” as if judgment were one of the inputs that go into the medical decision. The correct formulation is to exercise judgment, i.e., to make a decision.
The truth, then, is that EBM must be subservient to clinical judgment because, at the end of the day, EBM is just a data point among many points that the clinical judge must consider before acting.
If EBM could possibly be considered to operate on the same plane as clinical judgment, it would mean that, on occasion, EBM can compel a medical decision. But even the staunchest defenders of EBM methodology would concede that that could never be.
I doubt that most people (or even doctors) know this. They believe that EBM can “give an answer”, which is good enough for them, and they do not progress to “exercising judgment” which is required to determine if the “answer” is applicable under these particular circumstances.
EBM, then, is always the handmaiden to clinical judgment. Clinical judgment appreciates the services of EBM, and will use them as it sees fit.
Is medicine a scientific enterprise? – Alert & Oriented – by Michel Accad, MD – 2015
…the notion that medicine should be a scientific undertaking pervades, to varying degrees, the entire healthcare community.
Now, the idea that medicine should be a scientific enterprise–even to the slightest degree–is an erroneous idea. Medicine itself cannot be viewed as scientific for the simple reason that the aim of science is to acquire knowledge, whereas the aim of medicine is to heal.
These are two distinct ends.
Furthermore, a scientific enterprise is best carried out with dispassion: observation and experimentation. Healing, on the other hand, is best accomplished through personal involvement: caring.
Of course, this is not to say that scientific inquiry cannot inform doctors on the proper course of action. It certainly can, should, and does. But scientific inquiry can only be subordinate to medical care.
It is because one cares for the patient that one seeks the best material ways to cure or treat the body, and scientific knowledge provides valuable information in that regard.
But scientific knowledge and scientific methods cannot be the prime or deciding factors in clinical decision-making for at least two reasons.
- First, we should recognize that biomedical science is only scientific in a limited way.
When, at the dawn of the modern era, science separated itself from philosophy to take on a decidedly empirical cloak, the human observer could no longer properly be the subject of scientific inquiry, except in an indirect manner.
And where physics and chemistry have been able to uncover “laws” of nature, biomedical science generally limits itself to making tentative, statistical predictions on human data aggregates—populations.
- Secondly, to circumscribe medical care inside the realm of science limits the autonomy of the patient.
Perforce, “scientific medicine” in the Flexnerian sense separates the physician from the patient, because the latter becomes an object in (or a subject of) the scientific enterprise, and therefore, at some level, must be deemed incapable of judging the value of the care received: no one asks the falling apple if it would prefer to be considered under the law of universal gravitation or under the general theory of relativity.
The Flexnerian notion of scientific medicine, then, brings to the fore the “information asymmetry” in the doctor-patient relationship and justifies State intervention by way of licensing laws. In turn, licensing laws give credence to and materialize this asymmetry.
Patients, as objects of scientific medicine, can no longer freely choose their care as the State intervenes to ensure safety and efficacy according to objective, scientific norms.
The entire regulatory apparatus of the last 100 years emanates from the Flexnerian error of scientific medicine.
The apparatus either perpetuates the error (quality and safety movement) or attempts to counter it (patient autonomy movement). A more effective solution is to get rid of the error altogether.
Scientific medicine is truly nonsense.
After reading all this, I totally agree.
Especially when it comes to pain research, the science isn’t even close to representing our actual experience.
Plus, the latest studies all focus exclusively on milligrams of opioids–as though our pain and suffering were completely unrelated and even irrelevant.