Tag Archives: artificial-intelligence

Artificial Intelligence mining our digital data

Tech companies are using AI to mine our digital traces – STAT, by Mason Marks – Sep 2019

Companies routinely collect the digital traces we leave behind as we go about our daily lives.

Whether we’re buying books on Amazon (AMZN), watching clips on YouTube, or communicating with friends online, evidence of nearly everything we do is compiled by technologies that surround us: messaging apps record our conversations, smartphones track our movements, social media monitors our interests, and surveillance cameras scrutinize our faces.

What happens with all that data?

Machine-Learning for Predicting Opioid Overdose Risk

Evaluation of Machine-Learning Algorithms for Predicting Opioid Overdose Risk Among Medicare Beneficiaries With Opioid Prescriptions – JAMA Netw Open – Mar 2019

Question: Can machine-learning approaches predict opioid overdose risk among fee-for-service Medicare beneficiaries?

Findings:  In this prognostic study of the administrative claims data of 560 057 Medicare beneficiaries, the deep neural network and gradient boosting machine models outperformed other methods for identifying risk, although positive predictive values were low given the low prevalence of overdose episodes.

Meaning: Machine-learning algorithms using administrative data appear to be a valuable and feasible tool for more accurate identification of opioid overdose risk.

Abstract

Design, Setting, and Participants

A prognostic study was conducted between September 1, 2017, and December 31, 2018. Participants (n = 560 057) included fee-for-service Medicare beneficiaries without cancer who filled 1 or more opioid prescriptions from January 1, 2011, to December 31, 2015.

Beneficiaries were randomly and equally divided into training, testing, and validation samples.

Exposures 

Potential predictors (n = 268), including

  • sociodemographics,
  • health status,
  • patterns of opioid use, and
  • practitioner-level and regional-level factors,

were measured in 3-month windows, starting 3 months before initiating opioids until loss of follow-up or the end of observation.

Machine-learning methods, including a deep neural network (DNN) and a gradient boosting machine (GBM), were applied to predict overdose risk in the subsequent 3 months after initiation of treatment with prescription opioids.
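To make that pipeline concrete, here is a minimal, purely illustrative sketch in Python: synthetic stand-in data with invented column names (the real study used 268 claims-based predictors), the equal three-way split into training, testing, and validation samples described above, and a gradient boosting classifier producing a predicted overdose probability. This is not the authors' code, just the general shape of the modeling step.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for the claims data: one row per beneficiary,
# a handful of invented predictor columns, and a rare binary overdose outcome.
rng = np.random.default_rng(0)
n = 30_000
data = pd.DataFrame({
    "age": rng.integers(65, 95, n),
    "opioid_daily_dose": rng.gamma(2.0, 20.0, n),
    "num_prescribers": rng.poisson(1.5, n),
    "prior_ed_visits": rng.poisson(0.3, n),
})
risk = 0.0005 + 0.00002 * data["opioid_daily_dose"] + 0.001 * data["num_prescribers"]
data["overdose"] = rng.binomial(1, risk.clip(0, 1).to_numpy())

# Equal three-way split into training, testing, and validation samples.
train, rest = train_test_split(data, train_size=1/3, random_state=42)
test, valid = train_test_split(rest, test_size=0.5, random_state=42)

# Fit a gradient boosting classifier on the predictor columns.
features = [c for c in data.columns if c != "overdose"]
gbm = GradientBoostingClassifier(random_state=42)
gbm.fit(train[features], train["overdose"])

# Predicted probability of an overdose in the following 3-month window.
scores = gbm.predict_proba(valid[features])[:, 1]
print("validation C-statistic:", roc_auc_score(valid["overdose"], scores))
```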

Results

The DNN classified patients into

  • low-risk (76.2% [142 180] of the cohort),
  • medium-risk (18.6% [34 579] of the cohort), and
  • high-risk (5.2% [9747] of the cohort)

subgroups, with only 1 in 10 000 in the low-risk subgroup having an overdose episode.
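Mechanically, that stratification is just a matter of binning each patient's predicted risk score. Here is a tiny self-contained illustration; the scores and cutoffs are invented, since the study derived its own thresholds from the distribution of predicted risk.

```python
import pandas as pd

# Invented predicted risk scores for ten patients (not real model output).
scores = pd.Series([0.0004, 0.002, 0.03, 0.31, 0.0007, 0.08, 0.52, 0.004, 0.15, 0.01])

# Purely illustrative cutoffs between the low-, medium-, and high-risk groups.
groups = pd.cut(scores, bins=[0, 0.01, 0.10, 1.0], labels=["low", "medium", "high"])

print(pd.DataFrame({"score": scores, "risk_group": groups}))
```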

More than 90% of overdose episodes occurred in the high-risk and medium-risk subgroups, although positive predictive values were low, given the rare overdose outcome.

Conclusions and Relevance 

Machine-learning algorithms appear to perform well for risk prediction and stratification of opioid overdose, especially in identifying low-risk subgroups that have minimal risk of overdose.

End of Abstract – excerpts from the full article below:

Introduction

Health systems, payers, and policymakers have developed programs to identify, and intervene with, individuals at high risk of problematic opioid use and overdose.

It’s striking how risk stratification is being explored, decided, and implemented by these three powerful, profit-driven groups:

  • health systems, (medical groups, hospitals, medical labs, etc.)
  • payers, (AKA insurance companies)
  • policymakers (lobbyists for special interests, politicians, “expert witnesses”)

Our health and welfare are literally in the hands of entities that are bound by their corporate bylaws to maximize profit above all else.

Yet, the definition of high risk is variable, ranging from a high-dose opioid (defined using various cut points) to the number of pharmacies or prescribers that a patient visits.

These criteria, for example, determine how Medicare beneficiaries are selected into so-called lock-in programs, also called Comprehensive Addiction and Recovery Act (CARA) drug management programs.

Machine learning is an alternative analytic approach to handling complex interactions in large data, discovering hidden patterns, and generating actionable predictions in clinical settings. In many cases, machine learning is superior to traditional statistical techniques.

Our overall hypothesis was that a machine-learning algorithm would perform better in predicting opioid overdose risk compared with traditional statistical approaches.

The objective of this study was to develop and validate a machine-learning algorithm to predict opioid overdose among Medicare beneficiaries with at least 1 opioid prescription.

Based on the prediction score, we stratified beneficiaries into subgroups at similar overdose risk to support clinical decisions and improved targeting of intervention.

We chose Medicare because of the high prevalence of prescription opioid use and the availability of national claims data and because the program will require specific interventions targeting individuals at high risk for opioid-associated morbidity.

This study is ridiculously skewed from the start because Medicare patients are mostly “senior citizens”.

This demographic is the opposite of the people abusing opioids: they are sicker, in more pain, and don’t have the energy to pursue addictions.

  1. The oldest people have many more chronic and potentially painful diseases like arthritis or heart disease.
  2. The elderly suffer from all kinds of pains due to the unavoidable degradation of body tissues over time.
  3. Seniors have been the group found least likely to overdose.

The illicit drugs and combinations thereof that cause almost all of the overdoses are generally a young person’s game. You don’t see many seniors out scoring drugs on street corners.

Discussion

As expected in a population with very low prevalence of the outcome, the PPV of the models was low; however, these algorithms effectively segmented the population into 3 risk groups according to predicted risk score, with three-quarters of the sample in a low-risk group with a negligible overdose rate and more than 90% of individuals with overdose captured in the high- and medium-risk groups.
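The low PPV is a direct consequence of the arithmetic of rare outcomes, not necessarily a badly behaved model. A back-of-the-envelope calculation with invented sensitivity, specificity, and prevalence figures (the excerpt above does not report the study's actual numbers) shows how quickly PPV collapses when the outcome is rare:

```python
# Illustrative figures only; not taken from the study.
prevalence = 0.001     # 1 overdose per 1,000 beneficiaries in a window
sensitivity = 0.90     # the model flags 90% of true overdoses
specificity = 0.95     # the model clears 95% of beneficiaries who never overdose

true_positives = sensitivity * prevalence
false_positives = (1 - specificity) * (1 - prevalence)
ppv = true_positives / (true_positives + false_positives)

print(f"PPV = {ppv:.1%}")  # about 1.8%: most flagged patients would never overdose
```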

The ability to identify such risk groups has important potential for policymakers and payers who currently target interventions based on less accurate measures to identify patients at high risk.

It’s scary to think of “policymakers and payers” using artificial intelligence algorithms to make any decisions, let alone the ones that are most crucial to not just our wellbeing, but our very survival: health and money.

However, although opioid overdose represents a particularly important outcome, it is a rare outcome, especially in the Medicare population.

Well, I’m glad that was finally clarified, though only in the conclusion of this long article.

…our risk stratification strategies may more efficiently guide the targeting of opioid interventions among Medicare beneficiaries compared with existing measures.

Like they “efficiently guided the targeting” of doctors and patients to persecute? The military wording sounds like they’re describing a cruise missile, which gives an ominous hint about the true motivations at work here.

This strategy first excludes most (approximately 75%) prescription opioid users with negligible overdose risk from burdensome interventions like pharmacy lock-in programs and specialty referrals.

Targeting medium- and/or high-risk groups can capture nearly all (90%) overdose episodes by focusing on only 25% of the population, which greatly frees up resources for payers and patients.

Again, they’re using military words by “capturing” overdose episodes.

For those in the high- and medium-risk groups, although most will be false-positives for overdose given the overall low prevalence, additional screening and assessment may be warranted.

They state this so casually because they have no idea of the consequences to any patient who is given such a “false positive” risk score. It could prevent them from ever receiving opioids for pain relief again.

Although certainly not perfect, these machine-learning models allow interventions to be targeted to the small number of individuals who are at greater risk, and these models are more useful than other prediction criteria that have considerably more false-positives.

So now we find out that “other prediction criteria” have even “considerably more false-positives”?


Artificial Intelligence: Accuracy versus Explainability

Artificial Intelligence and Black‐Box Medical Decisions: Accuracy versus Explainability – The Hastings Center, Alex John London – Feb 2019

Although decision‐making algorithms are not new to medicine, the availability of vast stores of medical data, gains in computing power, and breakthroughs in machine learning are accelerating the pace of their development, expanding the range of questions they can address, and increasing their predictive power.

In many cases, however, the most powerful machine learning techniques purchase diagnostic or predictive accuracy at the expense of our ability to access “the knowledge within the machine.”

We humans usually want to understand why something is the way it is because that helps us generalize. But artificial intelligence operates completely without “reason” and is based only on finding patterns in some data set.

Artificial Intelligence: Accuracy versus Explainability

Artificial Intelligence and Black‐Box Medical Decisions: Accuracy versus Explainability – by Alex John London – February 2019

This article explores a big problem with “AI”, one that’s fundamental to its design and function, one that presents thorny philosophical issues that must be confronted and decided before technology decides them for us by default, allowing the technology to direct us instead of us directing the technology.

I suspect this is already starting to happen as more medical records are digitized and become available to be processed by an algorithm instead of a person (ideally one with medical knowledge, but these days more commonly a bureaucratic administrator following a procedural “algorithm” of their own).

Voice-recognition to automate doctors’ data entry

Voice-recognition system aims to automate data entry by doctors – STAT, by Casey Ross (@caseymross) – March 4, 2019

I think artificial intelligence (AI) in healthcare simply must happen, with so many people’s care sprawling across so many healthcare services (and billing companies). But using AI learning systems in healthcare makes a mockery of any kind of patient or doctor privacy. Even worse, these systems are dangerously prone to undetectable errors (people have died).

Still, we need these systems to cope with the ever-increasing amounts of data and knowledge, but we need them to serve humans, not to replace them.

Hands down, the one task doctors complain about most is filling out the electronic health record during and after patient visits. It is disruptive and time-consuming, and patients don’t like being talked to over the doctor’s shoulder.

Warnings of a Dark Side to A.I. in Health Care

Warnings of a Dark Side to A.I. in Health Care – By Cade Metz and Craig S. Smith – Mar 2019

A new breed of artificial intelligence technology is rapidly spreading across the medical field, as scientists develop systems that can identify signs of illness and disease in a wide variety of images, from X-rays of the lungs to C.A.T. scans of the brain. These systems promise to help doctors evaluate patients more efficiently, and less expensively, than in the past.

Similar forms of artificial intelligence are likely to move beyond hospitals into the computer systems used by

  • health care regulators,
  • billing companies and
  • insurance providers.

Just as A.I. will help doctors check your eyes, lungs and other organs, it will help insurance providers determine reimbursement payments and policy fees.

Ideally, such systems would improve the efficiency of the health care system. But they may carry unintended consequences, a group of researchers at Harvard and M.I.T. warns.

In a paper published on Thursday in the journal Science, the researchers raise the prospect of “adversarial attacks,” manipulations that can change the behavior of A.I. systems using tiny pieces of digital data.

By changing a few pixels on a lung scan, for instance, someone could fool an A.I. system into seeing an illness that is not really there, or not seeing one that is.

More likely is that doctors, hospitals and other organizations could manipulate the A.I. in billing or insurance software in an effort to maximize the money coming their way.

Because so much money changes hands across the health care industry, stakeholders are already bilking the system by subtly changing billing codes and other data in computer systems that track health care visits. A.I. could exacerbate the problem.

“The inherent ambiguity in medical information, coupled with often-competing financial incentives, allows for high-stakes decisions to swing on very subtle bits of information,” he said.

The new paper adds to a growing sense of concern about the possibility of such attacks, which could be aimed at everything from face recognition services and driverless cars to iris scanners and fingerprint readers.

An adversarial attack exploits a fundamental aspect of the way many A.I. systems are designed and built. Increasingly, A.I. is driven by neural networks, complex mathematical systems that learn tasks largely on their own by analyzing vast amounts of data.

By analyzing thousands of eye scans, for instance, a neural network can learn to detect signs of diabetic blindness.

This “machine learning” happens on such an enormous scale — human behavior is defined by countless disparate pieces of data — that it can produce unexpected behavior of its own.

The article gives 4 examples of how AI systems can be fooled:

  1. In 2016, a team at Carnegie Mellon used patterns printed on eyeglass frames to fool face-recognition systems into thinking the wearers were celebrities. When the researchers wore the frames, the systems mistook them for famous people, including Milla Jovovich and John Malkovich.
  2. Researchers have also warned that adversarial attacks could fool self-driving cars into seeing things that are not there. By making small changes to street signs, they have duped cars into detecting a yield sign instead of a stop sign.
  3. Late last year, a team at N.Y.U.’s Tandon School of Engineering created virtual fingerprints capable of fooling fingerprint readers 22 percent of the time. In other words, 22 percent of all phones or PCs that used such readers potentially could be unlocked.
  4. In their paper, the researchers demonstrated that, by changing a small number of pixels in an image of a benign skin lesion, a diagnostic A.I. system could be tricked into identifying the lesion as malignant. Simply rotating the image could also have the same effect, they found.
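For readers curious what “changing a small number of pixels” means in practice, here is a minimal sketch in the spirit of the fast gradient sign method, one standard way such adversarial perturbations are generated. The paper's own experiments are not reproduced here; the tiny random model and random “image” below are placeholders, not a real lesion classifier.

```python
import torch
import torch.nn as nn

# Stand-in classifier and stand-in "image"; in the paper's setting these would be
# a trained skin-lesion model and a real photograph of a benign lesion.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 2))
image = torch.rand(1, 1, 28, 28, requires_grad=True)
label = torch.tensor([0])  # the true class, e.g. "benign"

# Compute how the classification loss changes with each pixel.
loss = nn.CrossEntropyLoss()(model(image), label)
loss.backward()

# Nudge every pixel a tiny amount in the direction that increases the loss.
epsilon = 0.01
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

# The perturbed image looks the same to a human but may now be scored differently.
print(model(adversarial).softmax(dim=1))
```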

The implications are profound, given the increasing prevalence of biometric security devices and other A.I. systems.

As regulators, insurance providers and billing companies begin using A.I. in their software systems, businesses can learn to game the underlying algorithms.

Once A.I. is deeply rooted in the health care system, the researchers argue, businesses will gradually adopt behavior that brings in the most money.

As always, money trumps all other concerns.

Already doctors, hospitals and other organizations sometimes manipulate the software systems that control the billions of dollars moving across the industry.

Doctors, for instance, have subtly changed billing codes — describing a simple X-ray as a more complicated scan — in an effort to boost payouts.

But, she added, it’s worth keeping an eye on. 

“There are always unintended consequences, particularly in health care,” she said.

It seems most such unintended consequences involve the pursuit of profits. We can expect that financial interests will find a way to take advantage of any change in how medical payments travel through the monstrously complicated, innumerable layers of the healthcare industry.