Tag Archives: deception

Addiction Is Not Dependence

Addiction Is Not Dependence – practicalpainmanagement.com – Aug 2019

In this editorial, Jennifer P. Schneider, MD, PhD, digs into a common—and frustrating—misunderstanding in pain medicine terminology.

FDA approved the buprenorphine implant, branded as Probuphine, in 2016 “for the maintenance treatment of opioid dependence.”

  • Was it approved for the treatment of what we now call Opioid Use Disorder (OUD)?
  • Or was the intent to approve it for physical dependence, a condition found in most opioid-treated chronic pain patients as well as opioid addicts?

It is not clear from the language.

Continue reading

Cancer Pain and Non-Cancer Pain are Equivalent

The fabricated distinction between cancer pain and non-cancer pain is often used to argue that opioids are effective for the former but not for any other chronic pain.

This never made sense to me, so I researched it, trying to find the basis for the much-hyped difference between the two, and discovered that this distinction is a complete myth.

Below are four previous posts covering scientific articles (including NIH/PMC and Cochrane reviews) questioning the legitimacy of regulating and restricting the treatment of non-cancer pain differently from cancer pain.

Cancer vs Noncancer Pain: Shed the Distinction

Continue reading

Abandoning America’s Pain Patients

How Did We Come to Abandon America’s Pain Patients? – Filter Magazine, by Alison Knopf – July 2019

This is an excellent article pointing out exactly how pain patients have been neglected and dismissed by the medical system. Kudos to Alison Knopf for her exemplary work.

Overdoses—not those involving prescription opioids, but those involving heroin and illicit fentanyl, often combined with benzodiazepines—continue to go up. But opioid prescribing continues to go down.

Continue reading

Warnings of a Dark Side to A.I. in Health Care

Warnings of a Dark Side to A.I. in Health Care – The New York Times, by Cade Metz and Craig S. Smith – Mar 2019

A new breed of artificial intelligence technology is rapidly spreading across the medical field, as scientists develop systems that can identify signs of illness and disease in a wide variety of images, from X-rays of the lungs to C.A.T. scans of the brain. These systems promise to help doctors evaluate patients more efficiently, and less expensively, than in the past.

Similar forms of artificial intelligence are likely to move beyond hospitals into the computer systems used by

  • health care regulators,
  • billing companies and
  • insurance providers.

Just as A.I. will help doctors check your eyes, lungs and other organs, it will help insurance providers determine reimbursement payments and policy fees.

Ideally, such systems would improve the efficiency of the health care system. But they may carry unintended consequences, a group of researchers at Harvard and M.I.T. warns.

In a paper published on Thursday in the journal Science, the researchers raise the prospect of “adversarial attacks”: manipulations that can change the behavior of A.I. systems using tiny pieces of digital data.

By changing a few pixels on a lung scan, for instance, someone could fool an A.I. system into seeing an illness that is not really there, or not seeing one that is.

More likely is that doctors, hospitals and other organizations could manipulate the A.I. in billing or insurance software in an effort to maximize the money coming their way.

Because so much money changes hands across the health care industry, stakeholders are already bilking the system by subtly changing billing codes and other data in computer systems that track health care visits. A.I. could exacerbate the problem.

“The inherent ambiguity in medical information, coupled with often-competing financial incentives, allows for high-stakes decisions to swing on very subtle bits of information,” he said.

The new paper adds to a growing sense of concern about the possibility of such attacks, which could be aimed at everything from face recognition services and driverless cars to iris scanners and fingerprint readers.

An adversarial attack exploits a fundamental aspect of the way many A.I. systems are designed and built. Increasingly, A.I. is driven by neural networks, complex mathematical systems that learn tasks largely on their own by analyzing vast amounts of data.

By analyzing thousands of eye scans, for instance, a neural network can learn to detect signs of diabetic blindness.

This “machine learning” happens on such an enormous scale — human behavior is defined by countless disparate pieces of data — that it can produce unexpected behavior of its own.

The article gives four examples of how A.I. systems can be fooled:

  1. In 2016, a team at Carnegie Mellon used patterns printed on eyeglass frames to fool face-recognition systems into thinking the wearers were celebrities. When the researchers wore the frames, the systems mistook them for famous people, including Milla Jovovich and John Malkovich.
  2. Researchers have also warned that adversarial attacks could fool self-driving cars into seeing things that are not there. By making small changes to street signs, they have duped cars into detecting a yield sign instead of a stop sign.
  3. Late last year, a team at N.Y.U.’s Tandon School of Engineering created virtual fingerprints capable of fooling fingerprint readers 22 percent of the time. In other words, 22 percent of all phones or PCs that used such readers potentially could be unlocked.
  4. In their paper, the researchers demonstrated that, by changing a small number of pixels in an image of a benign skin lesion, a diagnostic A.I. system could be tricked into identifying the lesion as malignant. Simply rotating the image could also have the same effect, they found. (A sketch of how such pixel-level tampering works follows this list.)
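To make the pixel-tampering idea concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one standard way such adversarial perturbations are computed. This is my own illustration, not code from the Science paper; the toy model, image size, and epsilon value are placeholder assumptions.

```python
# Minimal FGSM sketch (assumes PyTorch). The tiny linear "classifier"
# is a stand-in for a trained diagnostic network; the mechanism is
# the same for real models.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Shift every pixel by +/- epsilon in the direction that most
    increases the loss; the change is nearly invisible to a human
    but can flip the model's prediction."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Hypothetical example: a 64x64 "scan" classified benign vs. malignant.
model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 2))
scan = torch.rand(1, 1, 64, 64)   # placeholder image, not a real scan
label = torch.tensor([0])         # 0 = "benign" in this toy setup
tampered = fgsm_perturb(model, scan, label)
print(model(scan).argmax().item(), model(tampered).argmax().item())
```

Against this untrained toy model the prediction flip is not guaranteed, but against a trained network even perturbations far too small to see reliably change the output, which is exactly the property the researchers flag as a fraud risk.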

The implications are profound, given the increasing prevalence of biometric security devices and other A.I. systems.

As regulators, insurance providers and billing companies begin using A.I. in their software systems, businesses can learn to game the underlying algorithms.

Once A.I. is deeply rooted in the health care system, the researchers argue, businesses will gradually adopt behavior that brings in the most money.

As always, money trumps all other concerns.

Already doctors, hospitals and other organizations sometimes manipulate the software systems that control the billions of dollars moving across the industry.

Doctors, for instance, have subtly changed billing codes, describing a simple X-ray as a more complicated scan, in an effort to boost payouts.

But, one researcher added, it’s worth keeping an eye on.

“There are always unintended consequences, particularly in health care,” she said.

It seems most of such unintended consequences involve the pursuit of profits. We can expect that financial interests will find a way to take advantage of any change in how medical payments travel through the monstrously complicated, innumerable layers of the healthcare industry.

Statisticians Asked to Commit Scientific Fraud

1 in 4 Statisticians Say They Were Asked to Commit Scientific Fraud – By Alex Berezow – October 30, 2018

This article definitely points toward a sad truth, but the sample of 390 out of 522 statisticians from whom the authors “received sufficient responses” doesn’t look representative at all.

Only someone who’s been in this situation themselves would bother answering a survey about “inappropriate requests.” Those who haven’t would simply check a box saying “it hasn’t happened to me,” and the rest of the survey would be pointless to fill out because it wouldn’t apply to them.

Without access to a full explanation of how they picked their sample, I wouldn’t quote these results. However…

Continue reading