Ethics of Using Facial Recognition Tech in Healthcare

What Are Important Ethical Implications of Using Facial Recognition Technology in Health Care? AMA Journal of Ethics – Nicole Martinez-Martin, JD, PhD

Abstract:

Applications of facial recognition technology (FRT) in health care settings have been developed to identify and monitor patients as well as to diagnose genetic, medical, and behavioral conditions.

The use of FRT in health care suggests the importance of informed consent, data input and analysis quality, effective communication about incidental findings, and potential influence on patient-clinician relationships.

Privacy and data protection are thought to present challenges for the use of FRT for health applications. 

Promises and Challenges of Facial Recognition Technology

Facial recognition technology (FRT) utilizes software to map a person’s facial characteristics and then store the data as a face template.1

Algorithms or machine learning techniques are applied to a database to compare facial images or to find patterns in facial features for verification or authentication purposes.2
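For the technically curious, here’s a minimal sketch of what that “face template” matching amounts to. The embed_face() function below is a hypothetical stand-in for the trained deep network a real system would use; once faces are reduced to vectors, verification and identification are just similarity comparisons against stored templates.

```python
import numpy as np

def embed_face(image: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a trained embedding network.

    Real systems map a face photo to a fixed-length vector (the
    "face template"); here we fake it with normalized raw pixels.
    """
    vec = image.astype(float).ravel()[:128]   # pretend 128-d template
    return vec / (np.linalg.norm(vec) + 1e-9)

def verify(probe_image: np.ndarray, stored_template: np.ndarray,
           threshold: float = 0.6) -> bool:
    """Verification: does this image match one known person's template?"""
    similarity = float(embed_face(probe_image) @ stored_template)
    return similarity >= threshold  # cosine similarity of unit vectors

def identify(probe_image: np.ndarray, database: dict) -> str | None:
    """Identification: search a whole template database for the best match."""
    probe = embed_face(probe_image)
    best_id, best_sim = None, -1.0
    for person_id, template in database.items():
        sim = float(probe @ template)
        if sim > best_sim:
            best_id, best_sim = person_id, sim
    return best_id if best_sim >= 0.6 else None
```

The distinction matters for what follows: verification asks “is this the patient we expect?”, while identification asks “who, out of everyone in the database, is this?” The second is where most of the privacy worry lives.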

FRT is attractive for a variety of health care applications, such as diagnosing genetic disorders, monitoring patients, and providing health indicator information (related to behavior, aging, longevity, or pain experience, for example).

FRT is likely to become a useful tool for diagnosing many medical and genetic conditions.

Machine learning techniques have already been used to assist in diagnosing a patient with a rare genetic disorder that had not been identified after years of clinical effort.

FRT is being developed to predict health characteristics, such as longevity and aging.

This raises extremely important privacy issues.

With all the money at stake, I’m sure that such predictive personal information will become accessible to (or be accessed by other means by) many agencies with nefarious financial agendas (life insurance, long-term loans, employment).

FRT is also being applied to predict behavior, pain, and emotions by identifying facial expressions associated with depression or pain, for example.

The phrase “predict behavior” was only casually mentioned, but this is where we all face a huge risk wherever FRT is deployed.

That could be anywhere a camera can be hidden because it’s generally not illegal to film people in public. But when these cameras are hooked into FRT with predictive abilities, we could all be broadcasting more information about ourselves than we intend or even realize.

Even more frightening, this could all be working “behind the scenes” already: who’s to stop someone from scanning a crowd and seeing every person’s “future”?

It’s a very, very messy issue, in many ways for many reasons. An example: if we could know that someone was harboring murderous intentions, wouldn’t we be obligated to report it to some authority? 

Another major area for FRT applications in health care is patient identification and monitoring, such as monitoring elderly patients for safety or attempts to leave a health care facility or monitoring medication adherence through the use of sensors and facial recognition to confirm when patients take their medications.
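To picture how such a medication-adherence check might be wired together, here’s a rough sketch. Every callable passed in (capture_frame, verify_identity, detect_pill_taken, log) is a hypothetical stand-in for a device camera, the face-template match from the earlier sketch, an ingestion sensor, and a record store; no real product’s API is being described.

```python
import datetime

def check_dose(patient_id: str, capture_frame, verify_identity,
               detect_pill_taken, log) -> bool:
    """One adherence check: confirm who is on camera, then confirm the dose.

    All four callables are hypothetical stand-ins for device-specific
    hardware and storage; only the control flow is the point here.
    """
    frame = capture_frame()                 # hypothetical camera read
    if not verify_identity(frame):          # face-template match
        log(patient_id, "identity_not_confirmed")
        return False
    if not detect_pill_taken(frame):        # hypothetical ingestion sensor
        log(patient_id, "dose_not_observed")
        return False
    log(patient_id, "dose_confirmed " + datetime.datetime.now().isoformat())
    return True
```

Note what the loop implies: every adherence check is also a timestamped surveillance record of the patient’s face and behavior.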

Such monitoring already sounds creepy. If the only motive were purely to benefit patients, it could truly revolutionize healthcare.

But we know that the development of such a purely patient-centric technology, device, or drug is a fantasy in a healthcare system dominated by huge corporations with purely financial motives (moral imperative link).

As with any new health technology, careful attention should be paid to the accuracy and validity of FRT used in health care applications, as well as to informed consent and to reporting incidental findings to patients.

FRT in health care also raises ethical questions about privacy and data protection, potential bias in the data or analysis, and potential negative implications for the therapeutic alliance in patient-clinician relationships.

Ethical Dimensions of FRT in Health Care

Informed consent.

FRT tools that assist with identification, monitoring, and diagnosis are expected to play a prominent role in the future of health care. Some applications have already been implemented.

This is exactly what I suspected and commented on above.

As FRT is increasingly utilized in health care settings, informed consent will need to be obtained not only for collecting and storing patients’ images but also for the specific purposes for which those images might be analyzed by FRT systems.

This is absurd; rules can’t control what a person looks at in a set of data that’s already available, especially when the data is as incredibly valuable as healthcare data is.

In particular, patients might not be aware that their images could be used to generate additional clinically relevant information.

While FRT systems in health care can de-identify data, some experts are skeptical that such data can be truly anonymized; from clinical and ethical perspectives, informing patients about this kind of risk is critical.
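The skeptics have a concrete point. Here’s a minimal sketch, assuming unit-length templates as in the earlier example, of how “anonymized” face templates could be linked back to names by anyone holding a labeled reference collection:

```python
import numpy as np

def reidentify(anon_templates: np.ndarray, ref_templates: np.ndarray,
               ref_names: list, threshold: float = 0.6) -> list:
    """Link 'anonymized' face templates back to named reference templates.

    anon_templates: rows are templates with identities stripped off.
    ref_templates:  rows are templates the attacker already has names for.
    Assumes unit-length vectors, so the dot product is cosine similarity.
    """
    sims = anon_templates @ ref_templates.T   # pairwise similarities
    matches = []
    for row in sims:
        j = int(np.argmax(row))
        matches.append(ref_names[j] if row[j] >= threshold else None)
    return matches
```

Stripping the name off a template doesn’t strip the face out of it; the template itself is the identifier.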

Because this data has so many uses, they will have to inform individuals that their data can, and probably will, be used against them by some entity in some way that’s not even fully known yet.

If I received such a warning, I would decline approval for FRT to be used, but I’d probably already have been filmed somewhere in the doctor’s office. I don’t believe for one minute that such data wouldn’t end up in the hands of someone trying to make a profit from what my facial features tell them.

I suppose that if I start getting advertisements for heart failure drugs while web browsing, it would be a hint that somebody saw that my facial features predict heart failure. As usual, it’s the patients, and only the patients, who will be kept in the dark. Just like they used to be with credit scores.

Implications: there would no longer be a therapeutic alliance because, just as during the opioid crisis, doctors and patients would be adversaries, not allies.

Some machine learning systems need continuous data input to train and improve the algorithms in a process that could be analogized to quality improvement research, for which informed consent is not regarded as necessary.
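To make “continuous data input” concrete, here’s a minimal sketch of the pattern, using a plain stochastic-gradient step on a linear risk score as a stand-in for whatever model a vendor actually trains. The point is the data flow, not the particular update rule: every new patient image keeps feeding the machine.

```python
import numpy as np

def online_update(weights: np.ndarray, template: np.ndarray,
                  label: float, lr: float = 0.01) -> np.ndarray:
    """One stochastic-gradient step on squared error for a linear risk score.

    Each new patient template nudges the weights a little, which is why
    'continuous data input' means the data collection never really ends.
    """
    prediction = float(weights @ template)
    gradient = (prediction - label) * template  # d/dw of 0.5*(pred - label)**2
    return weights - lr * gradient
```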

This only gets worse and scarier as more details of this technology come out.

To maintain trust and transparency with patients, organizations should consider involving relevant community stakeholders in implementing FRT and in decisions about establishing and improving practices of informing patients about the organization’s use of FRT.

Trust and transparency? This is one of the huge problems with the use of AI: it is absolutely *not* transparent in that an AI machine cannot give any information about how it reached the results it did, and trust is irrelevant when no single entity has full control over how the data will be used.

As FRT becomes capable of detecting a wider range of health conditions, such as behavioral or developmental disorders, health care organizations and software developers will need to decide which types of analyses should be included in a FRT system and the conditions under which patients might need to be informed of incidental findings.

It would be naive to think that a patient could be given assurance that their data will only be used for the patient’s benefit when these machines cannot account for their decision process and we cannot predict what the data will be used for in the future.

Bias.

One recent example that gained notoriety was an FRT system used to identify gay men from a set of photos that may have simply identified the kind of grooming and dress habits stereotypically associated with gay men. The developers of this FRT system did not intend it to be used for a clinical purpose but rather to illustrate how bias can influence FRT findings.

Thankfully, potential solutions for addressing bias in FRT systems exist.

I seriously doubt that.

These include efforts to create AI systems that explain the rationale behind the results generated.

This may not be possible due to the way AI is created. A computer doesn’t look for “logical” connections, just associations and correlations that occur without any necessary logical reason. It creates statistical models of probabilities that guide its “learning” and “decisions”.
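For what it’s worth, the “explanations” on offer today are mostly post-hoc probes, not the model accounting for itself. A common one is occlusion sensitivity, sketched below with a hypothetical predict() stand-in for any trained face-analysis model: it shows which pixels the model relied on, not why the correlation exists.

```python
import numpy as np

def occlusion_map(image: np.ndarray, predict, patch: int = 8) -> np.ndarray:
    """Occlusion sensitivity: gray out one patch at a time and record how
    much the model's score drops. High-drop patches are the regions the
    model 'relied on'. predict() is a hypothetical trained model.
    """
    base = float(predict(image))
    h, w = image.shape[:2]
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 128  # gray out one patch
            heat[i // patch, j // patch] = base - float(predict(occluded))
    return heat
```

A heat map of “where the model looked” is better than nothing, but it is exactly the kind of after-the-fact rationalization I’m complaining about.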

The corporations that run healthcare now make decisions based only on potential financial gain, and saying anything damaging about their own products would be completely against corporate ethics.

How can we trust these people to be honest about what intentions were behind their programming when the AI system itself cannot show how it reached a particular decision? Their designers can tell us anything they want and we would never have a way to prove them either wrong or right. 

Patient privacy.

FRT raises novel challenges regarding privacy. FRT systems can store data as a complete facial image or as a facial template.

The idea that a photo can reveal private health information is relatively new, and privacy regulations and practices are still catching up.

The Health Insurance Portability and Accountability Act (HIPAA) governs handling of patients’ health records and personal health information and includes privacy protections for personally identifiable information.

More specifically, it protects the privacy of biometric data, including “full-face photographs and any comparable images,” which are “directly related to an individual.” Thus, facial images used for FRT health applications would be protected by HIPAA.

Through the opioid crisis and the DEA’s seemingly unrestrained foray into medical practice, we patients have learned exactly how useless HIPAA is for us. It seems any employee or contractor of any agency or business can get access to our health records, and only we ourselves are truly restrained by the HIPAA privacy laws.

However, clinicians should advise patients that there may be limited protections for storing and sharing data when using a consumer FRT tool.

Because data exists mostly in the cloud where it is accessed by multiple entities for multiple reasons, true control is impossible.

Any restrictions would be porous too because we cannot predict what new technology might take advantage of loopholes (which could be discovered by using AI technology too).

Employers might also be interested in using FRT tools to predict mood or behavior as well as to predict longevity, particularly for use in wellness programs to lower employers’ health care costs.

Broader influence of FRT

There will need to be careful thought and study of the broader impact of FRT in health care settings. One potential issue is that of liability.

It is therefore important to weigh the relative benefits and burdens of specific FRT uses in health care and to conduct research into how patients perceive its use.

As considered here, numerous applications of FRT in health care settings suggest the

  • ethical, clinical, and legal importance of informed consent,
  • data input and analysis quality,
  • effective communication about incidental findings, and
  • potential influence on patient-clinician relationships.

Privacy and data protections are key to advancing FRT and making it helpful.

Such protections should be required before the technology is deployed, but it’s already too late for that.

If no such AI systems will be allowed in healthcare until privacy and data protection can be assured, then we have nothing to worry about, because such control is impossible.

Unfortunately, we also cannot prevent being filmed in public because those laws were made long before such technology was even imagined. It looks like our laws will need big updates in the years to come.

Other thoughts?
