Artificial Intelligence, Back Pain, and the Cleveland Clinic

Artificial Intelligence, Lower Back Pain, and the Cleveland Clinic – by Chuck Dinerstein, December 20, 2018

Two physicians from the Cleveland Clinic write almost breathlessly about how artificial intelligence will revolutionize the treatment of back pain, a highly remunerative area for physicians that has no single ascendant, best treatment regimen.

What is so troubling, at least to me, about the vaporware they are peddling is both their confidence in its application and the way cost pervades their view.

I’m glad someone else is pointing out what I’ve observed as well: all aspects of healthcare are more and more beholden to financial interests instead of medical ones.  

The authors rightly point out that there is a great deal of variation in who treats spine problems (i.e., back pain) and how, among the three groups of specialists that consider the spine “their organ”:

  1. orthopedists, because it has bones,
  2. neurosurgeons, because it has nerves, and
  3. an assortment of interventional internists who can offer “less invasive” and equally efficacious care.

Research suggests that physician choice, especially when the best treatment is ambiguous, drives variation.

It seems the new “healthcare business” folks believe all medical care can be standardized if only they have enough data. They never consider that care must be tailored to the specific situation of each encounter.

The full impact of medical care consists of the interactions of all the nurses and doctors, all the treatments and medications, with each other and the patient.

So, a doctor’s effectiveness will depend on his specific blend of education, knowledge, and experience and will thus vary for patients presenting with different conditions. The patient’s response, in turn, will vary with their own education, knowledge, and experience, allowing for an infinite variety of actions and interactions.

I think this is good and necessary to effectively treat the infinite variety of patients, symptoms, and complaints doctors are confronted with.

Trying to standardize human sickness, let alone human bodies and behavior, reeks of ignorance and hubris (they usually go together).

“While the spine community has a wealth of knowledge in peer-reviewed medical literature, it remains extremely difficult, if not impossible, for the practicing physician or surgeon to reconcile in real time all of the data that will ultimately determine the most cost-effective choice for a particular treatment.

This can include all aspects of

  • a specific patient,
  • the physician’s own expertise, and
  • all relevant financial data pertaining to the patient and the proposed care.”

Is anyone concerned about the phrase “all relevant financial data pertaining to the patient”?

Yes, that screamed out at me right away.

These folks are so blinded by their financial desires that they don’t even realize how crass it is to consider a patient’s finances to be as important as the doctor’s expertise. It sounds so outrageous that I reread it, because at first I thought I’d misunderstood.

No, they really do place data about the patient’s finances on the same level of decision-making as the doctor’s expertise. I’m horrified that this is what they are teaching their AI machines!

These are the values that will guide the AI machine to make the “correct” (for whom?) decisions about my healthcare.

Does that refer to the impact of the disease on their finances, or to their ability to pay?

They are building a “platform,” another computing buzzword that brings to mind the image of an off-shore drilling platform. By collecting historical data, they will more quickly, and with a “higher probability” of being correct, identify the decisions that “lead to optimal patient outcomes within the most appropriate cost and reimbursement models.”

Again, why bring cost and reimbursement models into the discussion? Won’t we all get the same care irrespective of our ability to pay?

As to the “higher probability” of better decisions, the authors cite a 50% improvement; yet the current generation of AI systems has been found to be as good as physicians, not necessarily better.

And while 50% sounds better, we are describing a system that doesn’t even have a published proof of concept.

Can an AI system provide physicians with pertinent information to make a judgment? Absolutely. But who is willing to let the algorithm decide?

For the authors, the real value of AI, in addition to the tantalizing suggestion that care will be more “cost-efficient,” is that needless variations in care will be removed, homogenized away.

“Needless” according to whom? Will the algorithm know what variations might be “needed” by an individual patient?

To the extent that it raises the level of substandard care, that is all to the good. But in homogenizing variation, we inadvertently lose the innovation that improves the standard.

11 thoughts on “Artificial Intelligence, Back Pain, and the Cleveland Clinic”

  1. peter jasz

    (RE: “…Can an AI system provide physicians with pertinent information to make a judgment? Absolutely. But who is willing to let the algorithm decide?”)

    Who’s willing? For one, ME.

    For what do you believe the physician’s or surgeon’s procedures and course of action are based upon; a hunch? No: data collected over decades of research, experience, and observation (i.e., stats, analytics, “algorithms”).

    pj

    1. Zyp Czyk Post author

      The problem is with deciding how to weight the factors that go into an A.I. decision. This gets programmed in so deeply that it’s impossible for humans to understand exactly how A.I. systems reach their conclusions – that’s what scares me (and other people who understand how A.I. works programmatically).

      Humans know when to make exceptions, when some seemingly unrelated factor changes what would normally be done, and humans understand the very intense narrowness of the data’s applicability – these are mental processes that A.I. cannot replicate.
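
      To make this concrete, here’s a minimal sketch in Python (the feature names, weights, and scoring function are all invented for illustration, not taken from any real system) of why a learned, weighted score carries no explanation with it:

      # Hypothetical example: a "learned" weighted score for a treatment decision.
      # No human chose these weights, and no single weight maps cleanly onto a
      # clinical reason -- the "why" is smeared across all of them.
      learned_weights = {
          "age": 0.41,
          "mri_finding": 1.73,
          "prior_surgery": -0.88,
          "insurance_tier": 0.65,  # a cost factor buried among clinical ones
      }

      def surgery_score(patient: dict) -> float:
          """Return a single opaque number; no rationale is attached to it."""
          return sum(learned_weights[k] * patient[k] for k in learned_weights)

      patient = {"age": 0.6, "mri_finding": 1.0, "prior_surgery": 0.0, "insurance_tier": 1.0}
      print(surgery_score(patient))  # one number out, no explanation with it

      Every factor, including a cost-driven one, gets folded into a single number; nothing in the output says which weight drove the decision, or why.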

      1. peter jasz

        Let’s consider drug-interaction fatalities: scripts written by physicians unaware of the distinctive, complex human physiology and drug interactions and reactions (in which we all differ), with the tragic consequence of a fatal outcome. Does anyone know where that number stands today, and in the past?

        Perhaps pharmacogenetics could be used to warn us of the potential hazards of such a (deadly) scenario? I suspect it could. Is it part of “AI”? I don’t know. BUT, it is MUCH better than going without and saying “Let’s try this and see”!
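
        To make the idea concrete, a minimal sketch of such an automated pairwise warning (the drug names and interaction list here are invented for illustration; a real system would query a curated pharmacology database):

        # Hypothetical example: flag prescribed drug pairs with known interactions.
        HAZARDOUS_PAIRS = {
            frozenset({"warfarin", "aspirin"}),  # e.g., elevated bleeding risk
            frozenset({"drug_x", "drug_y"}),     # placeholder hazardous pair
        }

        def check_interactions(prescriptions):
            """Return every prescribed pair found in the hazard list."""
            flagged = []
            for i, a in enumerate(prescriptions):
                for b in prescriptions[i + 1:]:
                    if frozenset({a, b}) in HAZARDOUS_PAIRS:
                        flagged.append((a, b))
            return flagged

        print(check_interactions(["warfarin", "metformin", "aspirin"]))
        # -> [('warfarin', 'aspirin')]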

        The extraordinary power of computing; algorithms, patient data, stats and research, and world-wide comparisons of similar or identical “symptoms” could very well make small work of what was once believed nearly unsolvable or, worse, mysterious, untreated, or fatal.

        Naturally, we require intelligent human input. Together (along with what appears to be a dwindling amount of ‘(un)common sense’), great things can be achieved.

        pj

      2. peter jasz

        Good points. However, when you state:

        “…that it’s impossible for humans to understand exactly how A.I. systems reach their conclusions”

        I didn’t realize the “topic” centered around AI making crucial life/death decisions?

        pj

        1. Zyp Czyk Post author

          These decisions wouldn’t be so critical to start with, but they can start trends that humans don’t understand. This has happened with other A.I. applications and is one of the known drawbacks of this technology. It can make decisions, but it cannot tell you why, because too many factors are being used by the algorithm in ways the system has “learned” on its own.

          As with so much of tech, it’s incredibly useful, but also can lead to new problems. Also, as long as such a system is only used as a resource, it’s useful, but when it starts getting too far ahead of us and its programming, we can’t be sure how that will play out.

  2. Zyp Czyk Post author

    The following comment is by Richard A. “Red” Lawhern, PhD:

    As an engineering professional with 45 years’ experience, I am sharply aware of the concerns you voice in your EDS blog. Although AI programs have done some wonderful things in pattern recognition (particularly speech recognition and stress analysis), they also have significant limitations in problems where the outcome measures and relationships aren’t clear. Rule-based AI systems are particularly vulnerable to ambiguity. And medical practice has a lot of those.
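
    To illustrate that vulnerability, a minimal sketch (the rules, symptom strings, and referrals are invented for illustration) of a tiny rule-based triage and the two ways ambiguity breaks it:

    # Hypothetical example: a toy rule-based triage system.
    RULES = [
        (lambda s: "leg pain" in s and "numbness" in s, "refer to neurosurgery"),
        (lambda s: "leg pain" in s and "stiffness" in s, "refer to orthopedics"),
    ]

    def triage(symptoms: str) -> str:
        matches = [action for cond, action in RULES if cond(symptoms)]
        if len(matches) != 1:
            # Ambiguous (two rules fire) or uncovered (none fire): the rule
            # system has no principled answer, only a fallback.
            return "no clear rule -- human judgment required"
        return matches[0]

    print(triage("leg pain with numbness"))                # one rule fires
    print(triage("leg pain with numbness and stiffness"))  # both fire: ambiguous
    print(triage("back stiffness only"))                   # none fire: uncovered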

    I am also aware of the pitfalls of “cost” as a metric in human outcomes. Cost is driven by several variables, some of them quite intangible. That said, we have to understand that cost and affordability are constraining factors in many medical issues. The US has the highest per capita healthcare expense of any nation in the world, but it is unclear that our actual health outcomes are better than in other western nations. Thus cost reduction is a valid policy objective in both clinical practice standards and social insurance.

    Thanks for your continuing effort to cast light on the important dimensions of medical care for pain.

  3. canarensis

    “…reeks of ignorance and hubris (they usually go together).” So utterly true… yet it’s amazing to me how it not only is acceptable to be ignorant & full of hubris these days, but it’s positively celebrated (look at our government, for instance).

    And the whole idea of getting care/treatment based upon one’s ability to pay was supposed to be over with (yeah, like I believed that). They just mask it with obfuscatory verbiage & more layers of bureaucracy these days.

    If I believed that the AI systems were programmed by medically knowledgeable people who actually had patient care in mind, I’d almost prefer that over the sadistic sleazebags who are so prevalent in medicine these days…but I have as much faith in the programmers & their knowledge base as I do in the concept that it’s possible to standardize medical care across broad expanses of conditions, patient physiologies, etc…which is to say, none at all.

    1. Zyp Czyk Post author

      My sentiments, exactly.

      Money is bound to influence how the algorithms are designed and decision-paths are weighted due to the huge financial bounty at stake. People will be bribed or otherwise motivated to adjust weighting to favor their “sponsors”.

      And, a human still has to decide exactly what to measure in the first place, a choice already fraught with conflicts of interest.

  4. canarensis

    Besides which, the fact that the Cleveland Clinic was one of the first to trumpet proudly their “opioid-free” surgeries & other procedures doesn’t exactly encourage me to believe that they have patient interests in mind: they are dedicated to kowtowing to hysteria, media attention, and of course to the almighty dollar. “Screw the patient, make lotsa bucks!” seems like an appropriate motto for them.


Other thoughts?
