Artificial Intelligence, Lower Back Pain, and the Cleveland Clinic – By Chuck Dinerstein — December 20, 2018
Two physicians from the Cleveland Clinic write almost breathlessly about how artificial intelligence will revolutionize the treatment of back pain – a highly remunerative area for physicians that has no single ascendant, best treatment regimen.
What is so troubling, at least to me, about the vaporware they are peddling is both their confidence in its application and the way cost pervades their view.
I’m glad someone else is pointing out what I’ve observed as well: all aspects of healthcare are more and more beholden to financial interests instead of medical ones.
The authors rightly point out that there is a great deal of variation in who treats spine problems, i.e., back pain, and how, across the three groups of specialists that consider the spine “their organ” –
- orthopedists, because it has bones,
- neurosurgeons, because it has nerves, and
- an assortment of interventional internists who can offer “less invasive” and equally efficacious care.
Research suggests that physician choice, especially when the best treatment is ambiguous, drives variation.
It seems the new “healthcare business” folks believe all medical care can be standardized if they only have enough data. They never consider that it must be tailored to the specific situations of each encounter.
The full impact of medical care consists of the interactions of all the nurses and doctors, all the treatments and medications, with each other and the patient.
So, a doctor’s effectiveness will depend on his specific blend of education, knowledge, and experience and will thus vary for patients presenting with different conditions. The patient’s response, in return, will vary by their own education, knowledge, and experience, allowing for an infinite variety of actions and interactions.
I think this is good and necessary to effectively treat the infinite variety of patients, symptoms, and complaints doctors are confronted with.
Trying to standardize human sickness, let alone human bodies and behavior, reeks of ignorance and hubris (they usually go together).
“While the spine community has a wealth of knowledge in peer-reviewed medical literature, it remains extremely difficult, if not impossible, for the practicing physician or surgeon to reconcile in real time all of the data that will ultimately determine the most cost-effective choice for a particular treatment.
This can include all aspects of
- a specific patient,
- the physician’s own expertise, and
- all relevant financial data pertaining to the patient and the proposed care.”
Is anyone concerned about the phrase “all relevant financial data pertaining to the patient?”
Yes, that screamed out at me right away.
These folks are so blinded by their financial desires that they don’t even realize how crass it is to consider a patient’s finances to be as important as the doctor’s expertise. It sounds so outrageous that I reread it, because at first I thought I’d misunderstood.
No, they really do place data about the patient’s finances at the same level of decision-making as the doctor’s expertise. I’m horrified that this is what they are teaching their AI machines!
These are the values that will guide the AI machine to make the “correct” (for whom?) decisions about my healthcare.
Does that refer to the impact of the disease on their finances or their ability to pay?
They are building a platform – another computing buzzword, one that brings to mind the image of an off-shore drilling platform – in which, by collecting historical data, they will more quickly, and with a “higher probability” of being correct, identify the decisions that will “lead to optimal patient outcomes within the most appropriate cost and reimbursement models.”
Again, why bring cost and reimbursement models into the discussion; won’t we all get the same care irrespective of our ability to pay?
As to the “higher probability” of better decisions, the authors cite a 50% improvement; yet the current generation of AI systems has been found to be as good as physicians, not necessarily better.
And while 50% sounds better, we are describing a system that doesn’t even have a published proof of concept.
Can an AI system provide physicians with pertinent information to make a judgment? Absolutely. But who is willing to let the algorithm decide?
For the authors, the real value of AI, in addition to the tantalizing suggestion that care will be more “cost-efficient,” is that needless variations in care will be removed, homogenized away.
“Needless” according to whom? Will the algorithm know what variations might be “needed” by an individual patient?
To the extent that it raises sub-standard care, that is all to the good. But in homogenizing variation, we inadvertently lose the innovation that improves the standard.