Voice-recognition to automate doctors’ data entry

Voice-recognition system aims to automate data entry by doctors – STAT
By Casey Ross (@caseymross), March 4, 2019

I think artificial intelligence (AI) in healthcare simply must happen, with so many people’s care sprawling across so many healthcare services (and billing companies). But using AI learning systems in healthcare makes a mockery of any kind of patient or doctor privacy. Even worse, they are dangerously prone to undetectable errors (people have died).

Still, we need these systems to cope with the ever-increasing amounts of data and knowledge, but we need them to serve humans, not to replace them.

Hands down, the one task doctors complain about most is filling out the electronic health record during and after patient visits. It is disruptive and time-consuming, and patients don’t like being talked to over the doctor’s shoulder.   

Now, amid an intensifying race to develop voice technologies for health care, a Boston-based company is preparing to release one of the first products designed to fully automate this process, by embedding artificially intelligent software into exam rooms.

Nuance, a maker of speech recognition software, is testing an ambient listening system that, without need for mouse and keyboard, can transcribe a conversation between a doctor and patient and upload key portions of it into a medical record. Executives said they hope to begin selling it next year.

The product, a rectangular box fitted with 16 microphones and a motion-detection camera, is designed to be mounted on the wall of an exam room to record patient encounters and automatically load key details into corresponding fields within the medical record.

Nuance’s prototype was a hit at this year’s meeting of the Healthcare Information and Management Systems Society in Orlando, an influential technology conference where long lines of people waited between ropes to get a demonstration of the technology.

“It blew me away,” said Brian Lancaster, chief of information technology at University of Nebraska Medical Center, which is among a handful of U.S. hospitals testing the product. “It was the promise of technology that is truly invisible. It felt like looking into the future.”

This kind of “invisible” technology bothers me. I’ve heard of “seamless” interfaces between systems to make things easier, but never completely “invisible” technology that monitors everything all the time.

Electronic records, and the federal regulations that govern them, require doctors to document specific pieces of information on diagnosis, treatment plans, prescriptions, and so forth.

“There are voice recognition products where I can simply dictate, and then a paragraph appears in the medical record,” said John Halamka, a Boston-based health technology executive. “That’s fine, but it’s not sufficient. The dream is that the doctor and patient have dialogue, there is no keyboard in the room, and then at the end the clinician reviews the chart and makes any edits.”

“If done right, with the right safeguards,

There’s no way to have truly functional universal “safeguards” in a medical system that spans thousands of entities, both providers and patients, and hundreds of thousands of computers all over the United States.

this could give the provider real-time intelligence about what’s really going on” with a patient, said Kenneth Harper, a Nuance executive. “You can imagine how health care can be transformed when that [information] is there.”

So they’re saying that real-time intelligence would be better than the doctor at figuring out what’s really going on?

Will the technology know more about you than the doctor?

I think the answer will be “yes” because the technology has access to every bit of health information attached to your existence, while doctors are given only 15 minutes with you.

Doctors are limited to what they can read and discern from talking to you in those minutes, while a computer can work on finding, processing, filtering, and matching your information 24/7/365.

Perhaps the biggest challenge facing the field is ensuring accuracy, as errors in record keeping can lead to mistakes, or missed opportunities, in the delivery of care. A recent study of a different Nuance voice dictation product, Dragon Medical 360, found that seven in every 100 words contained errors, and many of the errors involved clinical information. Nearly all the errors were caught by follow-up review and editing, but the study authors emphasized that careful supervision remains crucial.

Joseph Petro, chief technology officer at Nuance, said the company is testing and refining its new product, which it refers to as ambient clinical intelligence, to improve accuracy levels and minimize editing time. “It all hinges on what the interface looks like and how easy it is to do the edits,” he said. “This is the real-world part of this problem at this point.”

Nuance executives declined to provide pricing information, but said the system would be sold on a subscription basis, similar to its existing products.

The company’s product is different in form and function than popular consumer devices such as Amazon Echo or Google Home, as it includes a motion-detection camera needed to track the movements of the patient and doctor during the examination as they focus on different areas of the body.

So even the doctor doesn’t know what this system is doing; it isn’t turned on and off, it just sits there until it detects motion and then automatically starts tracking.

The camera does not produce video footage of the sort that could be watched on a smart phone, but only tracks skeletal movements.

Oh geez, that makes me feel ever so much better…

Whether use of the technology in the exam room will be acceptable to patients remains to be seen. Petro said it is not emerging as a significant roadblock so far, as most patients have not objected to its use in testing.

Another key part of the product’s rollout will be informing patients about its use during their visits. Lancaster, the technology chief at University of Nebraska Medical Center, said the hospital is devising a system to inform patients at multiple points in the process of getting care.

When they check in physically,

…they are already being recorded, even as they check in, by voice technology that can also determine if they are depressed or have Alzheimer’s and who knows what else.

we will have a script to make sure they didn’t just blindly consent,

All this voice-recognition technology will be running all the time in the background.

Even if you do not consent to be recorded, there’s nothing to physically “turn off” for just your visit (tracking starts in the waiting room); they’re just promising you it won’t be recorded and stored… supposedly.

We know that technology can be used to spy on people.

Programmers can bury code deep into this kind of software to surreptitiously record and process any and all system input. Neither you nor your doctor (nor any customer) can know exactly how the system is programmed or to whom it might transmit your medical data.

Once it’s given access to your Electronic Health Record (EHR), you just have to trust that it’s only using it for legitimate purposes and not, for example, selling your data to people trying to sell medical supplies for your condition.

But from what I’ve seen, corporations cannot be trusted to behave ethically because of their profit motive. Their fundamental purpose is to make money for stockholders, and they will openly admit that a 400% price hike for drugs is a “moral requirement.”

but really understand that there is technology being used to capture” the encounter, he said.

[The technology] would allow for instantaneous documentation of patient visits and reduce the interference of computers with the doctor-patient relationship.

I’m leery of any system that does anything automatically and instantaneously (the way MS Windows automatically checks for updates and, if you don’t have the latest software, instantaneously starts endless downloads).

Nuance is one of several companies seeking to use voice technology to automate documentation and reduce technology burnout — a problem unlikely to be solved by any one firm or product.

While alluring to doctors, the technology poses thorny questions, including whether patients will be comfortable inviting a third-party company with a camera and microphone into a conversation with their doctor.

Nuance is training the system with hundreds of thousands of recordings of patient visits that the company is collecting through providers around the country — a trove that will grow bigger over time and help the company refine its product. Such systems do not typically require approvals from government regulators.

Several other companies are working on voice products for medical record keeping, including Microsoft, which last year unveiled an intelligent scribe called EmpowerMD; Sopris Health; Notable; and Seattle-based SayKara.

So it will become a “demolition derby”-style competition; the last company left standing gets the prize – a monopoly over that market.

SayKara, led by former Amazon engineers, is building a voice assistant also designed to automatically add information from patient visits into the medical record.

Executives at Nuance said they hope the underlying information collected by their product could help advance parallel efforts to use voice technology to improve care.

Harper mentioned potential partnerships with companies seeking to use biometric analysis of voice data to predict the onset of depression or Alzheimer’s disease.

Here we start with the privacy issue, and I have many questions about that:

  • Can you opt out of having your visit recorded, having your voice analyzed, and the results shared among whatever corporations are willing to pay for them so they can create better-targeted ads?
  • Can the doctor trust the system to collect and display all the right data for the right patient in the right situation?
  • Since the recording equipment is always present, how do we know if it’s recording or not?
  • Can we trust the system not to listen in and record if we don’t give permission? 

2 thoughts on “Voice-recognition to automate doctors’ data entry”

  1. canarensis

    This does nothing for my confidence level in the American health system. I spent nearly 1/2 an hour on the phone last week trying to pay my internet bill; they only have a voice recog system (you can’t get a human). No matter How. Slowly. And. Care-ful-ly. I spoke, it took multiple tries to get ANYTHING across. It asked me if I wanted to pay the full amount or past due. I said, very clearly, “past due.” Robo Barbie cheerfully said “Good! You said you want to pay four dollars, is that right?” (my past due, needless to say, was not 4 bucks).

    I can just imagine how VR mangles things like “hydrochlorothiazide” or the many other odd-sounding meds, names of anatomical structures, diseases, conditions, etc.

    The computerized system they have can’t even update my medicines; every doc I’ve been to, we go thru the exact same routine, every single appointment. They have me write down my meds. I give it to them. Theoretically they enter them. Then they ask me “are you still taking X?” & X is something I haven’t taken in years. I tell them that. Next appointment, same question: “are you still taking X?” & so on forever. My new doc asked me, astonished, if I was taking morphine sulfate, dilaudid, fentanyl, AND hydrocodone?? I stared at him blankly, then realized he’s part of the med group I went to years ago. The doc there actually tried me on different pain meds, to try & find one that worked (what a concept!).

    I spent an evening at a friends’ house & they have Alexa, which I assume is fairly state-of-the-art, at least for Amazon. She only got maybe 1/2 the verbalizations correct. Having a VR system input errors & then trying to (A) figure out which errors are there, & (B) trying to correct them so they actually get corrected sounds like an absolute nightmare. And if “careful supervision remains crucial” to guard against errors & doctors already have more to do than they have time for, who the hell’s going to do the fact-checking, & when? If a doc has to proof it on the spot, it’s even less time/attention for the patient. If after the fact, do we think docs are going to be sitting in their office at midnight, checking the records & trying to remember what was said by whom? Or is a non-doc gonna check it, & so will have no clues what the medical terms might have been anyway? This sounds like a recipe for further cataclysmic health “care” disasters.

    And you just know it’ll be programmed to red flag any addiction key words or ideas…I can just see them programming it to red-flag anyone that says, say, “street,” on the assumption that they’re talking about buying drugs there. So if you say “I exercise walking down my street” or something, WHAMMO.

    And I’d wager the answer to your last question is, “not a chance!”

    1. Zyp Czyk (post author)

      These times are horrible for healthcare: corporations are taking over, with financials placed at the pinnacle of “value” instead of patients’ welfare being the goal.

      AI & voice recognition will be used for both offense and defense as our care deteriorates into a “Wild West” of good guys and bad guys, with sometimes little difference between the two, dueling it out over access to dollars as they endlessly and profitably “process” (never heal) our damaged bodies.

