Voice-recognition system aims to automate data entry by doctors – STAT – By Casey Ross @caseymross – March 4, 2019
I think artificial intelligence (AI) in healthcare simply must happen, with so many people’s care sprawling over so many healthcare services (and billing companies). But using AI learning systems in healthcare makes a mockery of any kind of patient or doctor privacy. Even worse, they are dangerously prone to undetectable errors (people have died).
Still, we need these systems to cope with the ever-increasing amounts of data and knowledge, but we need them to serve humans, not to replace them.
Hands down, the one task doctors complain about most is filling out the electronic health record during and after patient visits. It is disruptive and time-consuming, and patients don’t like being talked to over the doctor’s shoulder.
Now, amid an intensifying race to develop voice technologies for health care, a Boston-based company is preparing to release one of the first products designed to fully automate this process, by embedding artificially intelligent software into exam rooms.
Nuance, a maker of speech recognition software, is testing an ambient listening system that, without need for mouse and keyboard, can transcribe a conversation between a doctor and patient and upload key portions of it into a medical record. Executives said they hope to begin selling it next year.
The product, a rectangular box fitted with 16 microphones and a motion-detection camera, is designed to be mounted on the wall of an exam room to record patient encounters and automatically load key details into corresponding fields within the medical record.
Nuance’s prototype was a hit at this year’s meeting of the Healthcare Information and Management Systems Society in Orlando, an influential technology conference where long lines of people waited between ropes to get a demonstration of the technology.
“It blew me away,” said Brian Lancaster, chief of information technology at University of Nebraska Medical Center, which is among a handful of U.S. hospitals testing the product. “It was the promise of technology that is truly invisible. It felt like looking into the future.”
This kind of “invisible” bothers me. I’ve heard of “seamless” interfaces between systems to make things easier, but never completely “invisible” technology that monitors everything all the time.
Electronic records, and the federal regulations that govern them, require doctors to document specific pieces of information on diagnosis, treatment plans, prescriptions, and so forth.
“There are voice recognition products where I can simply dictate, and then a paragraph appears in the medical record,” Halamka said. “That’s fine, but it’s not sufficient. The dream is that the doctor and patient have dialogue, there is no keyboard in the room, and then at the end the clinician reviews the chart and makes any edits.”
“If done right, with the right safeguards,
There’s no way to have truly functional universal “safeguards” in a medical system that spans thousands of entities, both providers and patients, and hundreds of thousands of computers all over the United States.
this could give the provider real-time intelligence about what’s really going on” with a patient, Harper said. “You can imagine how health care can be transformed when that [information] is there.”
So they’re saying that real-time intelligence would be better than the doctor in figuring out what’s really going on?
Will the technology know more about you than the doctor?
I think the answer will be “yes” because the technology has access to every bit of health information attached to your existence, while doctors are given only 15 minutes with you.
Doctors are limited to what they can read and discern from talking to you in those minutes, while a computer can work on finding, processing, filtering, and matching your information 24/7/365.
Perhaps the biggest challenge facing the field is ensuring accuracy, as errors in record keeping can lead to mistakes, or missed opportunities, in the delivery of care. A recent study of a different Nuance voice dictation product, Dragon Medical 360, found that seven in every 100 words contained errors, and many of the errors involved clinical information. Nearly all the errors were caught by follow-up review and editing, but the study authors emphasized that careful supervision remains crucial.
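To make that “seven in every 100 words” figure concrete: it is essentially a word error rate, i.e. mismatched words divided by the number of words in the reference text. Here is a rough sketch of that arithmetic — my own illustration, not the study’s actual methodology; the function name and example sentences are hypothetical:

```python
from difflib import SequenceMatcher

def word_error_count(reference: str, transcript: str) -> int:
    """Count word-level mismatches (substitutions, insertions,
    deletions) between a reference text and a dictated transcript."""
    ref = reference.split()
    hyp = transcript.split()
    errors = 0
    # get_opcodes() yields matching and non-matching spans between the
    # two word sequences; every non-"equal" span is an error region.
    for tag, i1, i2, j1, j2 in SequenceMatcher(a=ref, b=hyp).get_opcodes():
        if tag != "equal":
            errors += max(i2 - i1, j2 - j1)
    return errors

# Hypothetical dictation: one wrong word out of eight, and it is
# exactly the kind of clinical detail (a dosage) that matters most.
reference  = "patient takes 50 mg of metoprolol twice daily"
transcript = "patient takes 15 mg of metoprolol twice daily"
errors = word_error_count(reference, transcript)
rate = errors / len(reference.split())  # errors per reference word
```

At the study’s reported rate (0.07), a few hundred words of visit notes would contain a dozen or more errors — which is why the follow-up review and editing step matters so much.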
Joseph Petro, chief technology officer at Nuance, said the company is testing and refining its new product, which it refers to as ambient clinical intelligence, to improve accuracy levels and minimize editing time. “It all hinges on what the interface looks like and how easy it is to do the edits,” he said. “This is the real-world part of this problem at this point.”
Nuance executives declined to provide pricing information, but said the system would be sold on a subscription basis, similar to its existing products.
The company’s product is different in form and function from popular consumer devices such as Amazon Echo or Google Home, as it includes a motion-detection camera needed to track the movements of the patient and doctor during the examination as they focus on different areas of the body.
So even the doctor doesn’t know what this system is doing; it’s not turned on and off, but just sits there until it detects motion and then automatically starts tracking.
The camera does not produce video footage of the sort that could be watched on a smart phone, but only tracks skeletal movements.
Oh geez, that makes me feel ever so much better…
Whether use of the technology in the exam room will be acceptable to patients remains to be seen. Petro said it is not emerging as a significant roadblock so far, as most patients have not objected to its use in testing.
Another key part of the product’s rollout will be informing patients about its use during their visits. Lancaster, the technology chief at University of Nebraska Medical Center, said the hospital is devising a system to inform patients at multiple points in the process of getting care.
“When they check in physically,
…they are already being recorded, even as they check in, by voice technology that can also determine whether they are depressed or have Alzheimer’s and who knows what else.
we will have a script to make sure they didn’t just blindly consent,
All this voice-recognition technology will be running all the time in the background.
Even if you do not consent to be recorded, there’s nothing to physically “turn off” for just your visit (tracking starts in the waiting room), they’re just promising you it won’t be recorded and stored… supposedly.
We know that technology can be used to spy on people.
Programmers can bury code deep into this kind of software to surreptitiously record and process any and all system input. Neither you nor your doctor (nor any customer) can know exactly how the system is programmed or to whom it might transmit your medical data.
Once it’s given access to your Electronic Health Record (EHR), you just have to trust that it’s only using it for legitimate purposes and not, for example, selling your data to people trying to sell medical supplies for your condition.
But from what I’ve seen, corporations cannot be trusted to behave ethically because of their profit motive. Their fundamental purpose is to make money for stockholders, and they will openly admit that a 400% price hike for drugs is a “moral requirement.”
but really understand that there is technology being used to capture” the encounter, he said.
it would allow for instantaneous documentation of patient visits and reduce the interference of computers with the doctor-patient relationship.
I’m leery of any system that does anything automatically and instantaneously (like MS Windows automatically checks and if you don’t have the latest software it instantaneously starts endless downloads).
Nuance is one of several companies seeking to use voice technology to automate documentation and reduce technology burnout — a problem unlikely to be solved by any one firm or product.
While alluring to doctors, the technology poses thorny questions, including whether patients will be comfortable inviting a third-party company with a camera and microphone into a conversation with their doctor.
Nuance is training the system with hundreds of thousands of recordings of patient visits the company is collecting through providers around the country — a trove that will grow bigger over time and help the company refine its product. Such systems do not typically require approval from government regulators.
Several other companies are working on voice products in medical record keeping, including Microsoft, which last year unveiled an intelligent scribe called EmpowerMD; Sopris Health; Notable; and Seattle-based SayKara.
So it will become a “demolition derby”-style competition; the last company left standing gets the prize – a monopoly over that market.
That last firm, SayKara, led by former Amazon engineers, is building a voice assistant also designed to automatically add information from patient visits into the medical record.
Executives at Nuance said they hope the underlying information collected by their product could help advance parallel efforts to use voice technology to improve care.
Harper mentioned potential partnerships with companies seeking to use biometric analysis of voice data to predict the onset of depression or Alzheimer’s disease.
Here we start with the privacy issue, and I have many questions about that:
- Can you opt out of having your visit recorded, your voice analyzed, and the results shared with whatever corporations are willing to pay for them so they can create better-targeted ads?
- Can the doctor trust the system to collect and display all the right data for the right patient in the right situation?
- Since the recording equipment is always present, how do we know if it’s recording or not?
- Can we trust the system not to listen in and record if we don’t give permission?