AI in Auditing: Are We Auditing the Machines?

We’ve trained ourselves to audit documentation, audit codes, and audit teams – but now we need to add something else to the list:

Audit the AI.

That’s right. Artificial intelligence isn’t just something we use to assist documentation or streamline chart review. This may be new to some, but AI is shaping decisions in real time and quietly inserting itself into the audit trail.

And if no one’s reviewing those outputs before they move downstream, we’re not just working faster; we’re working blind.

We’ve all seen AI show up in documentation workflows, coding suggestions, and even chart prioritization.

But now that these tools are becoming embedded into operational systems, we have to move from asking “how can AI help?” to something more urgent:

How do we validate what AI is doing?

Because let’s be clear: these tools are not neutral.

They are scoring documentation risk, pre-filtering audit queues, and suggesting what deserves attention.

That’s useful, but it’s also powerful. And like anything powerful, it needs governance.

So, when a compliance team says “we audit 5 percent of discharges,” it’s time to ask: which 5 percent?

If that sample is based on AI flags, your audit pool is already filtered.

And unless your team knows how the tool flagged those encounters, you could be leaving behind entire categories of risk that simply didn’t make the list.
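
To make that concrete, here’s a minimal, hypothetical sketch in Python. The encounter data, the flag field, and the 5 percent target are illustrative assumptions, not any vendor’s actual logic; the point is how a flag-only sample differs from one that deliberately blends in non-flagged cases:

  import random

  # Hypothetical discharge encounters; "ai_flagged" stands in for whatever the tool marks.
  encounters = [{"id": i, "ai_flagged": random.random() < 0.15} for i in range(1, 1001)]

  sample_size = int(len(encounters) * 0.05)  # "we audit 5 percent of discharges"

  # Flag-only sampling: the audit pool is already pre-filtered by the tool.
  flagged = [e for e in encounters if e["ai_flagged"]]
  flag_only_sample = random.sample(flagged, min(sample_size, len(flagged)))

  # Blended sampling: half from the flags, half drawn at random from everything else,
  # so risk the tool never surfaces still has a chance to reach a reviewer.
  non_flagged = [e for e in encounters if not e["ai_flagged"]]
  blended_sample = (
      random.sample(flagged, min(sample_size // 2, len(flagged)))
      + random.sample(non_flagged, sample_size - sample_size // 2)
  )

  print(f"Flag-only sample: {len(flag_only_sample)} charts, all chosen by the tool")
  print(f"Blended sample: {len(blended_sample)} charts, half chosen outside the tool's flags")

Half of the blended sample comes from charts the tool never flagged – which is exactly where overlooked categories of risk get a chance to surface.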

And here’s where we need to pause and ask the bigger question:

As artificial intelligence becomes part of the audit trail, who’s reviewing the reviewers?

That doesn’t just mean spot-checking an output or nodding at a dashboard.

It means making sure there are real people and defined processes in place to regularly evaluate the logic, challenge questionable flags, and track unintended drift – before a payor or auditor points it out for you.

Now, most vendors won’t hand over the algorithm.

You may not get the full logic, and that’s expected.

But what you should be able to ask is:

  • What patterns are driving these flags?
  • When was this logic last reviewed?
  • Who’s monitoring it for drift, bias, or misalignment?
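
On that last question, you don’t need the vendor’s algorithm to watch for drift. Here’s a minimal sketch, assuming you can export the tool’s monthly flag rates (the figures below are invented for illustration), that compares each month against a validation baseline and escalates when the shift exceeds an agreed tolerance:

  # Hypothetical monthly flag rates exported from the tool (share of encounters flagged).
  baseline_rate = 0.12  # rate observed when the tool was validated
  monthly_rates = {"2025-04": 0.12, "2025-05": 0.13, "2025-06": 0.18, "2025-07": 0.19}

  tolerance = 0.04  # governance-defined shift that triggers a human review

  for month, rate in monthly_rates.items():
      shift = rate - baseline_rate
      if abs(shift) > tolerance:
          print(f"{month}: flag rate {rate:.0%} drifted {shift:+.0%} from baseline – escalate for review")
      else:
          print(f"{month}: flag rate {rate:.0%} within tolerance")

The exact numbers don’t matter; the discipline of watching them, and of writing down who acts when they move, does.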

That’s not just a workflow question; it’s an information governance responsibility.

And that perspective is now being backed by national and international guidance.

Frameworks from NIST and ISO, along with the European Union’s AI Act, all emphasize the importance of auditability, explainability, and human oversight, even if full transparency into proprietary systems isn’t possible.

At the same time, agencies such as the U.S. Department of Health and Human Services (HHS) Office for Civil Rights (OCR) and Office of Inspector General (OIG) have flagged concerns about automation that operates without clear policy alignment, particularly when it affects decision-making in healthcare.

And if you’ve built your own internal audit triggers or filtering logic, then that responsibility lives with your team – which means you have the ability to make it stronger.


Try this practice prompt:

“Act as a healthcare compliance auditor. Based on this documentation, would you escalate the chart for review? Why or why not?”

Here’s a fictional (but familiar) example:

“Patient admitted with fall and confusion. CT head negative. Provider notes ‘likely encephalopathy’ and starts antibiotics for pneumonia. No neuro consult. Discharge summary includes ‘encephalopathy resolved.’ Code billed: metabolic encephalopathy.”
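
If you want to run the exercise programmatically, here’s a minimal sketch. It assumes the OpenAI Python client and a placeholder model name; substitute whatever tool your organization has approved, and only ever feed it de-identified, fictional text like the case above – never real PHI:

  # Assumes the OpenAI Python client (openai >= 1.0) and a placeholder model name;
  # only send de-identified, fictional text like the case above – never real PHI.
  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  prompt = (
      "Act as a healthcare compliance auditor. Based on this documentation, "
      "would you escalate the chart for review? Why or why not?"
  )

  case = (
      "Patient admitted with fall and confusion. CT head negative. Provider notes "
      "'likely encephalopathy' and starts antibiotics for pneumonia. No neuro consult. "
      "Discharge summary includes 'encephalopathy resolved.' Code billed: metabolic encephalopathy."
  )

  response = client.chat.completions.create(
      model="gpt-4o",  # placeholder – use whatever model your organization has approved
      messages=[
          {"role": "system", "content": prompt},
          {"role": "user", "content": case},
      ],
  )

  print(response.choices[0].message.content)

Compare what comes back with your own reviewers’ judgment; the disagreements are where oversight earns its keep.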

Would your AI tool flag that case?

Would your team agree with the diagnosis, or question whether it’s clinically supported?

Would this slide through coding and billing if no one challenged the narrative?

This is how we sharpen oversight: not by resisting automation, but by thinking around it.


Three steps to take right now:

  1. Audit the logic influencing your audit program.
    Even without full access, you should know what triggers are in play – and whether they still match payor expectations and clinical guidance.
  2. Don’t let automation shrink your scope.
    Intentionally rotate in non-flagged cases. That’s how you catch what the system overlooks – and what it overconfidently approves.
  3. Strengthen your information governance lens.
    Oversight of AI tools belongs not just to IT or vendor teams, but also to compliance, CDI, and clinical documentation leaders. Make sure your policies reflect that.

Because speed doesn’t guarantee accuracy.

And automation without validation is just a fancier version of guessing.


So, I’ll leave you with this:

If AI is now auditing your data, who’s auditing the AI?

And how confident are you in what it’s not showing you?

Stay sharp. Stay curious.

And let’s keep leading this next chapter, strategically and responsibly.


Sharon Easterling, MHA, RHIA, CCS, CDIP, CPHM

Sharon B. Easterling is the CEO of Recovery Analytics, LLC in Charlotte, NC. Her past job roles include corporate assistant vice president as well as senior director of ambulatory clinical documentation improvement. She is a national speaker and has been widely published. Easterling authored the Clinical Documentation Improvement Prep Guide and Exam Book and is a previous winner of the CSA Recognition for Advancing Coding Knowledge through Code Write. She currently sits on the executive board of NCHIMA as past president, is a member of the Coding Classification and Terminologies Practice Council, is a member of the Wolters Kluwer Advisory Board, and is the chair of the advisory board of the American College of Physician Advisors.
