The More Responsible Use of AI in Healthcare

Today I want to write about one of our current hot-button topics: artificial intelligence, better known as AI. First, I want to pose the question: “Is AI bad?” I think it probably is not inherently bad. But AI seems to be the most recent example of flawed implementation or misuse of technology by payors (and probably providers as well).

By now, many have no doubt read the ProPublica article about Cigna and are familiar with the 60,000 claims denied in a single month by Dr. Cheryl Dopke (that’s a little over six claims per minute, assuming eight-hour workdays). We’ve read the quote from an unnamed Cigna medical director: “[w]e literally click and submit … it takes all of 10 seconds to do 50 at a time.” Many are no doubt also familiar with the Cigna lawsuit resulting from these behaviors.

Similarly, many would be familiar with the suit against UnitedHealthcare (UHC) for using an AI algorithm that allegedly has a 90-percent error rate. Many are also familiar with UHC’s ED coding denials, based on use of an Optum product to code ED level of care.

But I have to ask: are these really different from Dr. Jay Iinuma’s testimony in an Aetna lawsuit, indicating that he denied most claims without ever opening a medical record? In the case at issue, Iinuma admitted he never read the plaintiff’s medical records and knew next to nothing about his disorder. Iinuma didn’t use AI; he relied on non-physician reviewers to abstract the record and make recommendations. It’s not high-tech, but it is a physician shortcut.

The differences between modern algorithms and Iinuma’s corporate-driven denial practices are twofold:

  • The AI algorithms are built on unknown training data sets with built-in biases, and their validation against human experts is undisclosed. These algorithms lack even the nursing-level decision-making upon which Iinuma claimed to rely.
  • Volume. The AI never sleeps. It doesn’t eat. It doesn’t collect overtime. In short, the AI is a full-time, automated edit of every claim. It can flag or deny claims faster than any medical director. Once a claim is flagged or denied, overriding the AI’s determination requires a significant degree of certainty or professional integrity.

But would medical directors actually ignore the good practice of medicine or established protocols to make adverse decisions? We need only look at the LinkedIn profile of a former UHC medical director. Frank Baumann’s profile notes: “I am a board-certified general surgeon who spent 10 years with the nation’s largest healthcare insurance company, denying level of care cases to hospitals.” He goes on to ask:

  • Why are we denying good care?
  • Why did we tell everyone that we were using national, evidence-based guidelines – but then we didn’t?

Medical directors like Iinuma and Baumann make it clear that such flawed decision-making exists and doesn’t require technology. I suspect it will persist. After all, technology will now make it easier to render denials, and some medical directors lack either the knowledge or professional integrity to do the right thing.

We should look at the history of some other technologies in medicine. We can start with Index Medicus. This bibliographic index originated in 1879, and over the years morphed into MEDLINE. In the early days, using the index was laborious. It required identification of potentially useful articles, then finding or requesting the articles at the library. Researchers would scrutinize the articles for both relevance and scientific validity. 

Digital conversion and computers allowed access to huge numbers of articles in multiple languages. Concomitantly, journal numbers exploded, presenting additional opportunities to publish. References in journal articles increased from several to sometimes several hundred. What was missing, however, was an index of flawed articles. Only recently has a database of retracted articles been developed. It is incomplete. These articles are typically retracted for one or more of three reasons:

  • Flawed method or analyses;
  • Ethical lapses; and
  • Fabricated or fraudulent data.

Our bibliographic systems are an excellent example of how technology has enabled errors to persist – or worse, propagate.

The next technology to consider is dictation and transcription. This was viewed initially as a time-saver for busy clinicians. But there have been unexpected results, regardless of whether the transcription is by a human or dictation software. This includes record entries such as:

  • “Both breasts are equal and reactive to light and accommodation.”
  • “Remnants of a soldier can be seen in the vagina.”
  • “Patient has chest pain if she lies on her left side for over a year.”
  • “The patient has left his white blood cells at another hospital.”
  • “The patient refused an autopsy.”

These may be humorous, but they add little to the medical record and do nothing to clarify the patient’s condition(s). To account for these errors, some providers add a “disclaimer” such as “this note was created with (insert dictation service or software). Despite careful review, some errors may persist.” Providers rarely review or correct these notes. In retrospect, these transcriptions may be clearly recognized as errors, but few providers can actually recall what the correct entry should have been. In essence, the technology, and the lack of immediate review, allow for a misuse that may lead to patient detriment and adverse financial consequences for institutions.

The large language models (LLMs) upon which much AI is based take large volumes of electronic data and “train” the program. Without careful curation for quality and ongoing updates, the LLMs inevitably suffer from bias and are susceptible to errors. Despite these limitations, there’s good evidence that AI-generated diagnoses and documentation are comparable to their human counterparts – and in some cases, better.

You may be aware of the lawyers who were sanctioned for submitting an AI-generated brief to a court. The brief itself was, by many accounts, reasonably good. The problem arose when the AI “hallucinated” several court citations. Opposing counsel complained because the citations could not be found. As a result, in one such case, a federal judge in Texas began requiring lawyers in cases before him to certify either that no portion of their filings was drafted by AI, or that any AI-drafted language had been checked for accuracy by a human. While it would be comforting to believe that such a requirement of providers might result in improved documentation, the disappointing truth is that providers are unlikely to check the accuracy of AI-generated documentation. The dictation errors, as well as the behaviors of insurance company medical directors like Dopke, Baumann, and Iinuma, serve as painful examples.

So, what should organizations do, right now, to manage the use of AI? The first consideration is the internal use of AI. Institutions should:

  • First, develop a responsible policy for using AI in the medical record. This will be very hard to police, since providers could simply copy an AI-generated document into the medical record. It would probably go undetected. But an annual pledge on the part of medical staff with clear expectations would be an excellent start.
  • Second, providers should leverage AI to detect AI-generated documentation.
  • Next, providers should use AI to detect repetitive or non-contributory medical record entries as well as to flag high-risk diagnoses. These analytical algorithms already exist in many clinical documentation integrity (CDI) and coding software programs.

Institutions should also leverage AI to respond to payors:

  • Contracting is an ideal starting point. AI can review very large documents and flag problem areas for review by counsel. It can detect inconsistencies and contradictions that may later prove to be disputed. It can also help analyze contractual differences between payors.
  • Denials management is another high-gain area. Allowing AI to categorize denials may be more accurate and consistent than human categorization. AI may be able to detect subtle changes in denial patterns or wording that portend nascent denial programs by payors.
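To make the denials-management point concrete, here is a minimal sketch of the rule-based starting point many organizations begin with. The category names and keyword patterns are hypothetical assumptions for illustration; a production system would likely use a trained classifier or an LLM, but even simple rules make denial trends countable and shifts visible.

```python
import re
from collections import Counter

# Hypothetical category patterns -- illustrative only, not a vendor taxonomy.
CATEGORIES = {
    "medical_necessity": re.compile(r"not medically necessary|medical necessity", re.I),
    "level_of_care": re.compile(r"level of care|inpatient criteria|observation", re.I),
    "coding": re.compile(r"\bcoding\b|\bCPT\b|ICD-10|\bDRG\b", re.I),
    "authorization": re.compile(r"prior auth|authorization", re.I),
}

def categorize(denial_text: str) -> str:
    """Assign a denial letter's text to the first matching category."""
    for name, pattern in CATEGORIES.items():
        if pattern.search(denial_text):
            return name
    return "other"

def trend(denials: list[str]) -> Counter:
    """Tally categories so month-over-month shifts in payor behavior stand out."""
    return Counter(categorize(d) for d in denials)
```

Comparing `trend()` counts across months is the simplest way to surface the “subtle changes in denial patterns” described above; an LLM-based version would replace `categorize()` while keeping the same trending logic.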

The time is now to develop responsible uses for AI in-house.

John K. Hall, MD, JD, MBA, FCLM, FRCPC

John K. Hall, MD, JD, MBA, FCLM, FRCPC is a licensed physician in several jurisdictions and is admitted to the California bar. He is also the founder of The Aegis Firm, a healthcare consulting firm providing consultative and litigation support on a wide variety of criminal and civil matters related to healthcare. He lectures frequently on black-letter health law, mediation, medical staff relations, and medical ethics, as well as patient and physician rights. Dr. Hall hopes to help explain complex problems at the intersection of medicine and law and prepare providers to manage those problems.
