The More Responsible Use of AI in Healthcare

Today I want to write about one of our current hot-button topics: artificial intelligence, better known as AI. First, I want to pose the question: “Is AI bad?” I think it probably is not inherently bad. But AI seems to be the most recent example of flawed implementation or misuse of technology by payors (and probably providers as well).

By now, many have no doubt read the ProPublica article about Cigna and are familiar with the 60,000 claims denied in a single month by Dr. Cheryl Dopke (roughly six claims per minute, assuming an eight-hour workday). We’ve read the quote from an unnamed Cigna medical director: “[w]e literally click and submit … it takes all of 10 seconds to do 50 at a time.” Many are no doubt also familiar with the Cigna lawsuit resulting from these behaviors.

Similarly, many are familiar with the suit against UnitedHealthcare (UHC) for using an AI algorithm that allegedly has a 90-percent error rate. Many are also familiar with UHC’s ED coding denials, based on the use of an Optum product to code ED level of care.

But I have to ask: are these really different from Dr. Jay Iinuma’s testimony in an Aetna lawsuit, indicating that he denied most claims without ever opening a medical record? In the case at issue, Iinuma admitted he had never read the plaintiff’s medical records and knew next to nothing about his disorder. Iinuma didn’t use AI; he relied on non-physician reviewers to abstract the record and make recommendations. It’s not high-tech, but it is a physician shortcut.

The differences between modern algorithms and Iinuma’s corporate-driven denial practices are twofold:

  • The AI algorithms are based on unknown training data sets, with inherent built-in biases and undisclosed validation against human experts. These algorithms lack even the nurse-level decision-making upon which Iinuma claimed to rely.
  • Volume. The AI never sleeps. It doesn’t eat. It doesn’t collect overtime. In short, the AI is a full-time, automated edit of every claim, able to flag or deny claims faster than any medical director. Once a claim is flagged or denied, it takes a significant degree of certainty or professional integrity for a reviewer to override the AI.

But would medical directors actually ignore the good practice of medicine or established protocols to make adverse decisions? We need only look at the LinkedIn profile of a former UHC medical director, Frank Baumann. His profile notes: “I am a board-certified general surgeon who spent 10 years with the nation’s largest healthcare insurance company, denying level of care cases to hospitals.” He goes on to ask:

  • Why are we denying good care?
  • Why did we tell everyone that we were using national, evidence-based guidelines – but then we didn’t?

Medical directors like Iinuma and Baumann make it clear that such flawed decision-making exists and doesn’t require technology. I suspect it will persist. After all, technology will now make it easier to render denials, and some medical directors lack either the knowledge or professional integrity to do the right thing.

We should look at the history of some other technologies in medicine. We can start with Index Medicus. This bibliographic index originated in 1879, and over the years morphed into MEDLINE. In the early days, using the index was laborious. It required identification of potentially useful articles, then finding or requesting the articles at the library. Researchers would scrutinize the articles for both relevance and scientific validity. 

Digital conversion and computers allowed access to huge numbers of articles in multiple languages. Concomitantly, journal numbers exploded, presenting additional opportunities to publish. References in journal articles increased from several to sometimes several hundred. What was missing, however, was an index of flawed articles. Only recently has a database of retracted articles been developed. It is incomplete. These articles are typically retracted for one or more of three reasons:

  • Flawed method or analyses;
  • Ethical lapses; and
  • Fabricated or fraudulent data.

Our bibliographic systems are an excellent example of how technology has enabled errors to persist – or worse, propagate.

The next technology to consider is dictation and transcription. Initially, this was viewed as a time-saver for busy clinicians. But there have been unexpected results, regardless of whether the transcription is done by a human or by dictation software. These include record entries such as:

  • “Both breasts are equal and reactive to light and accommodation.”
  • “Remnants of a soldier can be seen in the vagina.”
  • “Patient has chest pain if she lies on her left side for over a year.”
  • “The patient has left his white blood cells at another hospital.”
  • “The patient refused an autopsy.”

These may be humorous, but they add little to the medical record and do nothing to clarify the patient’s condition(s). To account for these errors, some providers add a “disclaimer,” such as: “this note was created with (insert dictation service or software). Despite careful review, some errors may persist.” Providers rarely review or correct these notes. In retrospect, these transcriptions may be clearly recognized as errors, but few providers can actually recall what the correct entry should have been. In essence, the technology, and the lack of immediate review, allow for a misuse that may lead to patient detriment and adverse financial consequences for institutions.

The large language models (LLMs) upon which much AI is based take large volumes of electronic data and “train” the program. Without careful curation for quality and ongoing updates, LLMs necessarily suffer from bias and are susceptible to errors. Despite these limitations, there’s good evidence that AI-generated diagnoses and documentation are comparable to those of their human counterparts – and in some cases, better.

You may be aware of the lawyers who were sanctioned for submitting an AI-generated brief to a court. The brief itself was, by many accounts, reasonably good. The problem arose when the AI “hallucinated” several court citations. Opposing counsel complained because the citations could not be found. As a result, in one such case, a federal judge in Texas now requires lawyers appearing before him to certify either that they did not use AI to draft their filings, or that a human checked the accuracy of anything AI drafted. While it would be comforting to believe that a similar requirement for providers might result in improved documentation, the disappointing truth is that providers are unlikely to check the accuracy of AI-generated documentation. The dictation errors, as well as the behaviors of insurance company medical directors like Dopke, Baumann, and Iinuma, serve as painful examples.

So, what should organizations do, right now, to manage the use of AI? The first consideration is the internal use of AI. Institutions should:

  • First, develop a responsible policy for using AI in the medical record. This will be very hard to police, since providers could simply copy an AI-generated document into the medical record. It would probably go undetected. But an annual pledge on the part of medical staff with clear expectations would be an excellent start.
  • Second, providers should leverage AI to detect AI-generated documentation.
  • Next, providers should use AI to detect repetitive or non-contributory medical record entries as well as to flag high-risk diagnoses. These analytical algorithms already exist in many clinical documentation integrity (CDI) and coding software programs.
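
To make the repetitive-entry idea concrete, here is a minimal sketch – not any vendor’s actual algorithm, and with hypothetical note text and an arbitrary similarity threshold – of how near-duplicate (“copy-forward”) entries might be flagged using only Python’s standard library:

```python
from difflib import SequenceMatcher

def flag_repetitive_notes(notes, threshold=0.9):
    """Return (i, j, similarity) for pairs of notes that are near-duplicates.

    A crude proxy for copy-forward documentation; real CDI software uses far
    more sophisticated language models, but the principle is the same.
    """
    flagged = []
    for i in range(len(notes)):
        for j in range(i + 1, len(notes)):
            similarity = SequenceMatcher(None, notes[i], notes[j]).ratio()
            if similarity >= threshold:
                flagged.append((i, j, round(similarity, 2)))
    return flagged

# Hypothetical progress notes: the second is copied verbatim from the first.
notes = [
    "Patient resting comfortably. Lungs clear. Continue current antibiotics.",
    "Patient resting comfortably. Lungs clear. Continue current antibiotics.",
    "Patient ambulating in hall. Afebrile. Plan discharge tomorrow.",
]
print(flag_repetitive_notes(notes))  # only the first two entries are paired
```

Pairs above the threshold would be routed to a human reviewer, not auto-rejected; the point is to surface non-contributory entries, not to adjudicate them.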

Institutions should also leverage AI to respond to payors:

  • Contracting is an ideal starting point. AI can review very large documents and flag problem areas for review by counsel. It can detect inconsistencies and contradictions that may later prove to be disputed. It can also help analyze contractual differences between payors.
  • Denials management is another high-gain area. Allowing AI to categorize denials may be more accurate and consistent than human categorization. AI may also be able to detect subtle changes in denial patterns or wording that portend nascent denial programs by payors.
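
As an illustration of the categorization idea, here is a sketch in Python. The categories, keywords, and denial language below are hypothetical; a production system would work from payer remark codes and a trained classifier rather than keyword rules:

```python
from collections import Counter

# Hypothetical keyword rules; stand-ins for a trained classifier.
CATEGORIES = {
    "medical_necessity": ["not medically necessary", "level of care"],
    "authorization": ["prior authorization", "no authorization on file"],
    "coding": ["diagnosis code", "modifier"],
}

def categorize_denial(denial_text):
    """Assign a denial letter to the first category whose keywords it contains."""
    text = denial_text.lower()
    for category, keywords in CATEGORIES.items():
        if any(keyword in text for keyword in keywords):
            return category
    return "other"

def denial_mix(denials):
    """Tally categories so month-over-month shifts in the mix stand out."""
    return Counter(categorize_denial(d) for d in denials)

# Hypothetical denial language from two months of remittances.
march = [
    "Service not medically necessary per clinical review",
    "Prior authorization was not obtained",
]
april = [
    "Inpatient level of care not supported",
    "Level of care downgrade per review",
    "Missing modifier on claim line",
]
print(denial_mix(march))
print(denial_mix(april))
```

Comparing the two tallies would show level-of-care denials growing month over month – exactly the kind of nascent pattern worth escalating to managed-care and contracting staff.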

The time is now to develop responsible uses for AI in-house.

John K. Hall, MD, JD, MBA, FCLM, FRCPC

John K. Hall, MD, JD, MBA, FCLM, FRCPC is a licensed physician in several jurisdictions and is admitted to the California bar. He is also the founder of The Aegis Firm, a healthcare consulting firm providing consultative and litigation support on a wide variety of criminal and civil matters related to healthcare. He lectures frequently on black-letter health law, mediation, medical staff relations, and medical ethics, as well as patient and physician rights. Dr. Hall hopes to help explain complex problems at the intersection of medicine and law and prepare providers to manage those problems.
