The More Responsible Use of AI in Healthcare

Today I want to write about one of our current hot-button topics: artificial intelligence, better known as AI. First, I want to pose the question: “Is AI bad?” I think it probably is not inherently bad. But AI seems to be the most recent example of flawed implementation or misuse of technology by payors (and probably providers as well).

By now, many have no doubt read the ProPublica article about Cigna and are familiar with the 60,000 claims denied in a single month by Dr. Cheryl Dopke (a little over six claims per minute, assuming twenty eight-hour workdays). We’ve read the quote from an unnamed Cigna medical director: “[w]e literally click and submit … it takes all of 10 seconds to do 50 at a time.” Many are no doubt also familiar with the Cigna lawsuit resulting from these behaviors.

Similarly, many would be familiar with the suit against UnitedHealthcare (UHC) for using an AI algorithm that allegedly has a 90-percent error rate. Many are also familiar with UHC’s ED coding denials, based on use of an Optum product to code ED level of care.

But I have to ask: are these really different from Dr. Jay Iinuma’s testimony in an Aetna lawsuit, indicating that he denied most claims without ever opening a medical record? In the case at issue, Iinuma admitted he never read the plaintiff’s medical records and knew next to nothing about his disorder. Iinuma didn’t use AI; he used non-physician reviewers to abstract the record and make recommendations. It’s not high-tech, but it is a physician shortcut.

The differences between modern algorithms and Iinuma’s corporate-driven denial practices are twofold:

  • The AI algorithms are based on unknown training data sets with inherent built-in biases and undisclosed validation against human experts. These algorithms lack even the nursing-level decision-making upon which Iinuma claimed to rely.
  • Volume. The AI never sleeps. It doesn’t eat. It doesn’t collect overtime. In short, the AI is a full-time, automated edit of every claim. It can flag or deny claims faster than any medical director. Once a claim is flagged or denied, it would require a significant degree of certainty or professional integrity to override the AI denial.

But would medical directors actually ignore the good practice of medicine or established protocols to make adverse decisions? We need only look at a LinkedIn profile of a former UHC medical director. Frank Baumann’s profile notes: “I am a board-certified general surgeon who spent 10 years with the nation’s largest healthcare insurance company, denying level of care cases to hospitals.” He goes on to ask:

  • Why are we denying good care?
  • Why did we tell everyone that we were using national, evidence-based guidelines – but then we didn’t?

Medical directors like Iinuma and Baumann make it clear that such flawed decision-making exists and doesn’t require technology. I suspect it will persist. After all, technology will now make it easier to render denials, and some medical directors lack either the knowledge or professional integrity to do the right thing.

We should look at the history of some other technologies in medicine. We can start with Index Medicus. This bibliographic index originated in 1879, and over the years morphed into MEDLINE. In the early days, using the index was laborious. It required identification of potentially useful articles, then finding or requesting the articles at the library. Researchers would scrutinize the articles for both relevance and scientific validity. 

Digital conversion and computers allowed access to huge numbers of articles in multiple languages. Concomitantly, journal numbers exploded, presenting additional opportunities to publish. References in journal articles increased from several to sometimes several hundred. What was missing, however, was an index of flawed articles. Only recently has a database of retracted articles been developed. It is incomplete. These articles are typically retracted for one or more of three reasons:

  • Flawed method or analyses;
  • Ethical lapses; and
  • Fabricated or fraudulent data.

Our bibliographic systems are an excellent example of how technology has enabled errors to persist – or worse, propagate.

The next technology to consider is dictation and transcription. This was viewed initially as a time-saver for busy clinicians. But there have been unexpected results, regardless of whether the transcription is by a human or dictation software. This includes record entries such as:

  • “Both breasts are equal and reactive to light and accommodation.”
  • “Remnants of a soldier can be seen in the vagina.”
  • “Patient has chest pain if she lies on her left side for over a year.”
  • “The patient has left his white blood cells at another hospital.”
  • “The patient refused an autopsy.”

These may be humorous, but they add nothing to the medical record and do nothing to clarify the patient’s condition(s). To account for these errors, some providers add a “disclaimer” such as: “This note was created with (insert dictation service or software). Despite careful review, some errors may persist.” Providers rarely review or correct these notes. In retrospect, these transcriptions may be clearly recognized as errors, but few providers can actually recall what the correct entry should have been. In essence, the technology, and the lack of immediate review, allow for a misuse that may lead to patient detriment and adverse financial consequences for institutions.

The large language models (LLMs) upon which much AI is based take large volumes of electronic data and “train” the program. Without careful curation for quality and ongoing updates, the LLMs inevitably suffer from bias and are susceptible to errors. Despite these limitations, there’s good evidence that AI-generated diagnoses and documentation are comparable to those of their human counterparts – and in some cases, better.

You may be aware of the lawyers who were sanctioned for submitting an AI-generated brief to a court. The brief itself was, by many accounts, reasonably good. The problem arose when the AI “hallucinated” several court citations. Opposing counsel complained because the citations could not be found. As a result, in one such case, a federal judge in Texas required lawyers in cases before him to certify either that they did not use AI to draft their filings, or that a human had checked the filings for accuracy. While it would be comforting to believe that such a requirement of providers might result in improved documentation, the disappointing truth is that providers are unlikely to check the accuracy of AI-generated documentation. The dictation errors, as well as the behaviors of insurance company medical directors like Dopke, Baumann, and Iinuma, serve as painful examples.

So, what should organizations do, right now, to manage the use of AI? The first consideration is the internal use of AI. Institutions should:

  • First, develop a responsible policy for using AI in the medical record. This will be very hard to police, since providers could simply copy an AI-generated document into the medical record. It would probably go undetected. But an annual pledge on the part of medical staff with clear expectations would be an excellent start.
  • Second, providers should leverage AI to detect AI-generated documentation.
  • Next, providers should use AI to detect repetitive or non-contributory medical record entries as well as to flag high-risk diagnoses. These analytical algorithms already exist in many clinical documentation integrity (CDI) and coding software programs.

Institutions should also leverage AI to respond to payors:

  • Contracting is an ideal starting point. AI can review very large documents and flag problem areas for review by counsel. It can detect inconsistencies and contradictions that may later become points of dispute. It can also help analyze contractual differences between payors.
  • Denials management is another high-gain area. Allowing AI to categorize denials may be more accurate and consistent than human categorization. AI may be able to detect subtle changes in denial patterns or wording that portend nascent denial programs by payors.

The time to develop responsible in-house uses for AI is now.


John K. Hall, MD, JD, MBA, FCLM, FRCPC

John K. Hall, MD, JD, MBA, FCLM, FRCPC is a licensed physician in several jurisdictions and is admitted to the California bar. He is also the founder of The Aegis Firm, a healthcare consulting firm providing consultative and litigation support on a wide variety of criminal and civil matters related to healthcare. He lectures frequently on black-letter health law, mediation, medical staff relations, and medical ethics, as well as patient and physician rights. Dr. Hall hopes to help explain complex problems at the intersection of medicine and law and prepare providers to manage those problems.
