Artificial Intelligence: Accuracy versus Explainability

Artificial Intelligence and Black‐Box Medical Decisions: Accuracy versus Explainability – by Alex John London – February 2019

This article explores a big problem with “AI”, one that’s fundamental to its design and function, and one that presents thorny philosophical issues that must be confronted and decided before the technology decides them for us by default, allowing it to direct us instead of us directing it.

I suspect this is already starting to happen as more medical records are digitized and become available to be processed by an algorithm instead of a person (preferably one with medical knowledge, though these days more commonly a bureaucratic administrator following a procedural “algorithm” of their own).

To use AI effectively, you have to give it pretty much all the data you have, like all of a hospital’s EHRs, all imaging, all employee records, all income and purchase orders, even cafeteria menus. From that starting point, the AI algorithm may also seek out external linked data, like every patient’s credit report, insurance claims, and all their social media.

It then analyzes these vast volumes of data, looking for patterns and associations between bits of data drawn from different parts of different databases and combined in different ways, making connections (on a purely numerical basis) that we humans wouldn’t think of.

Then it sorts and presents its findings in whatever format you’d like.

The problem is that it can’t explain why it arrived at those findings or that outcome. It gives perfectly correct answers but cannot provide the justification that most humans want, or even need.

If a version of AI were used in the courtroom, it might be 100% accurate in determining which defendants are guilty, but it wouldn’t be able to tell you why. I wouldn’t feel comfortable entrusting any important decision to an algorithm that can’t explain itself.

As fallible as we humans are, we can usually explain how we reach a decision. This lets others understand what we perceive and value, and thus, how we arrived at our decision.

AI cannot show us the billions (literally) of tiny decision paths it traversed to finally arrive at its conclusion. We cannot know (in detail) what factors it considered, nor how it measured and valued them, and that’s the problem.

For example:

An AI-controlled vehicle might choose to deliberately run over 5 pedestrians on the side of the road rather than crash into a school bus full of 20 children and push it off a cliff. But it might calculate a different action if there were fewer children on the bus, or more pedestrians it had to run over.

Are we OK with letting an algorithm decide how to value human lives?

Abstract

Although decision‐making algorithms are not new to medicine, the availability of vast stores of medical data, gains in computing power, and breakthroughs in machine learning are accelerating the pace of their development, expanding the range of questions they can address, and increasing their predictive power.

Without an explanation in terms of reasons or a rationale for particular decisions in individual cases, some commentators regard ceding medical decision‐making to black box systems as contravening the profound moral responsibilities of clinicians.

I argue, however, that opaque decisions are more common in medicine than critics realize. Moreover, as Aristotle noted over two millennia ago, when our knowledge of causal systems is incomplete and precarious—as it often is in medicine—the ability to explain how results are produced can be less important than the ability to produce such results and empirically verify their accuracy.

Essay

Although decision‐making algorithms are not new to medicine, the availability of vast stores of medical data, gains in computing power, and breakthroughs in machine learning are accelerating the pace of their development, expanding the range of questions they can address, and increasing their predictive power.

In many cases, however, the most powerful machine learning techniques purchase diagnostic or predictive accuracy at the expense of our ability to access “the knowledge within the machine.”

This refers to the “why” of an output: “why this, and not that”.

Without an explanation in terms of reasons or a rationale for particular decisions in individual cases, some commentators regard ceding medical decision‐making to black box systems as contravening the profound moral responsibilities of clinicians.

Justification, Explanation, and Causation

Trust in experts is often grounded in their ability to produce certain results and to justify their actions.

“By justifications, we mean explanations that tell why an expert system’s actions are reasonable in terms of principles of the domain—the reasoning behind the system.”

Explanations of this form require the system, or the expert who relies on it, to reveal how a finding or a decision is grounded in two kinds of knowledge:

  1. a “domain model” in which causal relationships in the domain are captured, and
  2. “domain principles” that lay out the “how to” knowledge or the dynamics of the domain in question.

These requirements seem reasonable, in part, because they have a long intellectual pedigree.

The idea that experts should be able to justify their actions by marshaling knowledge of causal relationships in their domain of expertise also has a long intellectual history.

When explanations involve laws that track causal relationships, true explanations provide insight into how a domain works and, through that insight, enhance our ability to more effectively intervene in that system, where intervention is possible.

Another reason to expect computational systems to be able to marshal causal knowledge and provide explanations of this form is that this appears to be a mark of expertise in disciplines such as structural engineering. Building a bridge across a span requires a range of decisions.

Expert structural engineers know what factors affect the success of a bridge, such as properties of the location, features of materials to consider, and the tolerances of various designs and the stresses of various uses.

They also know how to assign values to these variables in particular cases and to simulate how particular structures will behave under the expected loads and stresses of a particular setting, to within practically relevant margins of error.

Detailed mathematical models of key causal relations enable structural engineers to make design decisions that incorporate stakeholder values, such as aesthetics and cost, into the construction of reliable structures.

Moreover, they can explain particular decisions by elaborating the functional and causal requirements that constrain or determine various choices, thereby helping nonexperts understand why certain decisions were made or why some constraints are negotiable while others are not.

Together, this causal knowledge and the explanations it supports can also guide interventions to improve a structure’s integrity.

These explanations thus help to foster social trust by expanding the ability of other stakeholders to understand what is at stake in various decisions.

This fosters accountability, since understanding why a decision was made enables stakeholders to evaluate its merits and hold experts accountable for avoidable error.

Explanations also help stakeholders see why expert decisions are not arbitrary and do not amount to an abuse of professional authority.

The Black Box of Deep Learning

Against this background, many of the properties of the most powerful machine learning systems appear suspect.

For example, deep learning systems are theory agnostic in the sense that their designers do not program into them a model that reflects their understanding of the causal structure of the problem to be solved.

Rather, programmers construct an architecture that “learns” a model from a large set of data.

In most cases, these systems learn when data whose classification is already established (for example, images of retinas that display or lack diabetic retinopathy) are fed into the system.

The systems classify images as displaying diabetic retinopathy or not, or assign a probability for a medical event, such as suicide or readmission, to a medical record.

Deep learning systems can be trained on millions of inputs, and their resulting predictions can be highly accurate.

Although their designers understand the architecture of these systems and the process by which they generate the models they use for classification, the models themselves can be inscrutable to humans.
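
To make that setup concrete, here is a minimal sketch of this kind of supervised learning. It is my own illustration, not anything from the article, and it uses synthetic numbers in place of real medical records: labeled examples go in, a specified architecture “learns” a model, and what comes out is a probability with no accompanying rationale, only arrays of learned weights.

```python
# A minimal, hypothetical sketch of the supervised-learning setup described
# above (synthetic data, not real medical records or any system from the
# article): records with established labels are fed in, and the system
# learns a model that assigns a probability to new records.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Hypothetical "records": 1,000 patients, 20 numeric features each, with a
# label (True = event occurred, e.g. readmission) that depends on an unknown
# combination of the features plus noise.
X = rng.normal(size=(1000, 20))
y = (X[:, 0] + 0.5 * X[:, 3] - X[:, 7] + rng.normal(scale=0.5, size=1000)) > 0

# The designer specifies only the architecture; the model itself is "learned".
net = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
net.fit(X, y)

# The system can assign a probability to a new record...
new_record = rng.normal(size=(1, 20))
print("Predicted probability of event:", net.predict_proba(new_record)[0, 1])

# ...but the learned model is just matrices of numbers, not reasons.
print("First-layer weight matrix shape:", net.coefs_[0].shape)
```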

A small permutation in a seemingly unrelated aspect of the data can result in a significantly different weighting of features.

Yet how it weighs different features, and what factors it uses to do so, are completely invisible to us.

Moreover, different initial settings can result in the construction of different models.
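
A short sketch of that instability, again my own and again synthetic: the same architecture trained on the same data, differing only in its random initialization, can settle on noticeably different internal weightings of the features even though its accuracy barely changes.

```python
# A hypothetical illustration of how different initial settings yield
# different models: identical data and architecture, different random
# initialization, similar accuracy, different learned weights.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 10))
y = (X[:, 0] - X[:, 4] + rng.normal(scale=0.5, size=500)) > 0

for seed in (0, 1, 2):
    net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=seed)
    net.fit(X, y)
    # Accuracy is similar across runs, but the internal weighting of the
    # input features is not the same.
    print(f"seed={seed}  train accuracy={net.score(X, y):.3f}  "
          f"weights into first hidden unit={np.round(net.coefs_[0][:, 0], 2)}")
```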

Despite the overwhelming attention paid to the fact that deep learning systems are unsuited to helping human users understand the phenomenon in question, a far more significant limitation is that they may not directly track causal relationships in the world.

Even when users limit the data fed into the system to variables believed to be causally relevant to the decision at hand, the resulting model only reflects regularities in data. How these associations relate to underlying causal relationships is unknown.

In contrast to the logical and accessible decision‐making embodied in the classical techne, machine learning systems stoke fears of unaccountability and domination by systems that arbitrarily restrict stakeholder autonomy and represent a conduit for experts to covertly impose arbitrary preferences on stakeholders.

Uncertainty and Incompleteness of Medical Knowledge

Although medicine is one of the oldest productive sciences, its knowledge of underlying causal systems is in its infancy; the pathophysiology of disease is often uncertain, and the mechanisms through which interventions work are either not known or not well understood. As a result, decisions that are atheoretic, associationist, and opaque are commonplace in medicine.

Lithium has been used as a mood stabilizer for half a century, yet why it works remains uncertain. Large parts of medical practice frequently reflect a mixture of empirical findings and inherited clinical culture.

The practical findings from rigorous empirical testing are frequently more reliable and reflective of causal relationships than the theoretical claims that purport to ground and explain them.

Even if the efficacy of a particular intervention for a given indication has been established in large randomized controlled trials (RCTs), patients in the clinic often differ from clinical trial populations.

For instance, those of us who have genetic defects like EDS.

Clinicians, therefore, frequently make judgments that go beyond validated medical evidence about how comorbidities, gender, ethnicity, age, or other factors might affect intervention efficacy and toxicity.

In these cases, it may not be clear what information clinicians draw on to make these judgments, whether the implicit or explicit models that support their judgments are valid or accurate, or whether equally qualified clinicians would arrive at the same conclusions in the face of the same data.

But this kind of uncertainty is a routine part of clinical practice, and the clinical judgment that it involves relies on an associationist model encoded in the neural network in the clinician’s head that is opaque and often inaccessible to others.

As counterintuitive and unappealing as it may be, the opacity, independence from an explicit domain model, and lack of causal insight associated with some of the most powerful machine learning approaches are not radically different from routine aspects of medical decision‐making.

Oh yes they are! A human will, by their very nature, be drawing on their own life experiences, whereas a computing machine cannot do that; it uses only and exclusively the set of data it was given to work with.

Our causal knowledge is often fragmentary, and uncertainty is the rule rather than the exception.

Responsible Medical Decision‐Making

One advantage of explicit computational systems over the neural networks inside the heads of expert clinicians is that the reliability and accuracy of the former can be readily evaluated and incrementally improved.
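
As a sketch of what that evaluation can look like (my own toy example with synthetic data, not a procedure described in the article), predictions can be scored on held-out records the model never saw during training, without ever asking the model to explain itself:

```python
# A minimal, hypothetical sketch of empirical validation: hold out data the
# model never saw and measure how accurate its predictions are. Synthetic
# data stand in for real records.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
X = rng.normal(size=(2000, 15))
y = (X[:, 1] - X[:, 6] + rng.normal(scale=0.7, size=2000)) > 0

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

# We still cannot say *why* any individual prediction was made, but we can
# quantify how reliable the predictions are on unseen data.
print("Held-out AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```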

It might be objected that explainability is too demanding a requirement since even simple associationist models are not capable of tracking causal relationships.

Nevertheless, defects in the data used by deep learning systems to construct decision models—such as biases stemming from the over‐ or underrepresentation of particular classes of individuals—can be inherited by these systems.

Without insight into how the models work, critics worry that the models may incorporate biases that are harmful enough to offset marginal gains in predictive power.

In order to ward off such possibilities, critics hold that machine learning systems must at least be interpretable to humans.

In a popular example, Rich Caruana and colleagues report that, although a neural net was more accurate than alternatives at predicting the probability of death from pneumonia, it ranked asthmatic patients as having a lower probability than the general population.

This finding is “counterintuitive” because patients with a history of asthma are typically admitted directly into the intensive care unit (ICU) for aggressive medical care; it is the added care that gives them a lower probability of death.

Without such aggressive care, asthmatic patients have a higher probability of death from pneumonia. Their score in the system is seen as misleading because it doesn’t reflect patients’ underlying medical need.

This prompted Caruana et al. to prefer less accurate but more transparent models in which they could adjust the weight assigned to “asthmatic” to reflect current medical knowledge.

If given more comprehensive information about treatments administered to individual patients, even a simple system would learn that, without ICU admission, asthma puts a patient at high probability of death.

In contrast, if the goal is to identify patients most at risk of dying given standard practice, then systems that rank asthmatics at lower risk are not biased.
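
The pattern is easy to reproduce in a toy simulation (my own hypothetical numbers, not Caruana et al.’s data or model): if asthmatic patients reliably receive ICU care that lowers their mortality, a model that sees only asthma status scores asthma as protective, while a model that also sees the treatment recovers the higher underlying risk.

```python
# A hypothetical, synthetic reconstruction of the pattern described above
# (not Caruana et al.'s data or model): asthma raises the underlying risk of
# death from pneumonia, but asthmatics are routinely admitted to the ICU,
# and the aggressive care more than offsets that risk.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 20_000

asthma = rng.random(n) < 0.15                 # 15% of patients are asthmatic
icu = asthma | (rng.random(n) < 0.05)         # asthmatics always get ICU care
p_death = 0.20 + 0.10 * asthma - 0.18 * icu   # assumed underlying causal story
death = rng.random(n) < p_death

# Model that sees only asthma status, as in the mortality score discussed above.
m1 = LogisticRegression().fit(asthma.reshape(-1, 1), death)
print("asthma coefficient, no treatment info:", m1.coef_[0][0])  # negative: looks "protective"

# Model that also sees the treatment actually administered.
m2 = LogisticRegression().fit(np.column_stack([asthma, icu]), death)
print("asthma coefficient, with ICU info:    ", m2.coef_[0][0])  # positive: higher underlying risk
```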

Rather than illustrating the need for interpretability, this example illustrates the importance of understanding the kind of judgments that a data set is likely to be able to support, clearly validating the accuracy of those specific decisions on real‐world data, and then restricting the use of associative systems to making the specific decisions for which their accuracy has been empirically validated.

This example also illustrates dangers inherent in mistaking the plausibility of associations in interpretable systems for causal relationships that can be exploited through intervention.

How odd that, in this case, AI emulates a common human fallibility: the tendency to assume association to be causation.

It is a mistake to expect such associations to track causal relationships in a way that we can exploit through intervention.

We saw earlier that one reason explanation is seen as the hallmark of expertise is that it involves communicating causal relationships in the relevant domain to stakeholders.

It is also unclear what interpretability amounts to. Human decisions are often interpretable in the sense that we can rationalize them after the fact. But such rationalizations don’t necessarily reveal why a person made the decision, since the same decision may be open to many different post‐hoc rationalizations.

Interpretability might mean, instead, that humans should be capable of simulating the model a system uses for decision‐making.

Most human decisions are not interpretable in this sense either.

Accountability and Nondomination

To promote accountability and to ensure that machine learning systems are not covert tools for arbitrary interference with stakeholder autonomy in medicine, regulatory practices should establish procedures that limit the use of machine learning systems to specific tasks for which their accuracy and reliability have been empirically validated.

To create such a system, greater emphasis should be placed on ensuring that data sets and analytical approaches are aligned with the decisions and uses they are intended to facilitate.

Much as we seek to clarify the indications for which a drug can be prescribed, the use cases to which a machine learning system is suited and for which its accuracy and reliability have been validated should be clearly designated.

Recommendations to prioritize explainability or interpretability over predictive and diagnostic accuracy are unwarranted in domains where our knowledge of underlying causal systems is lacking.

The danger here is that we believe we know all the relevant information to establish a cause, but we cannot know what we don’t know. Yet such unknown and unanticipated associations are precisely the factors an AI algorithm may find and use – and it cannot tell us what they are.

If we trust a decision made by AI, we are assuming the algorithm knows as well or better than we do what factors to consider and how to weigh them against each other to arrive at a decision.

This can be tremendously useful for specific purposes, but AI also holds tremendous unknowns that might alter its decision paths in ways we cannot anticipate.

I don’t feel I can trust such a mysteriously calculated decision for my medical care unless it is assessed and evaluated by a doctor who is knowledgeable about and experienced with humans like me.

1 thought on “Artificial Intelligence: Accuracy versus Explainability”

  1. Kathy C

    We only have to look at how they have used it already. Our regulatory agencies are not keeping up; they have been undermined by the corporations collecting this data. Medical data is designed to obfuscate, in certain areas. There was a saying back in the ’70s: Garbage In, Garbage Out. The only motivation here is profit, and the data is being gobbled up by the big tech corporations as they monopolize.

    We can already see the negative impacts and manipulative use of this data.
