Artificial intelligence (AI) is having a big moment in healthcare, but for those of us who have been in coding, clinical documentation integrity (CDI), and health information management (HIM) for decades, the core technology isn’t new. AI, particularly Natural Language Processing (NLP), has been supporting documentation workflows since the early 2000s.
What has changed is how fast and how far it’s advancing.
We’re now seeing AI tools with conversational interfaces (like Generative Pretrained Transformers, or GPTs), automated query suggestion engines, and real-time coding assistants – all built on the foundation laid by earlier NLP systems.
But here’s the challenge:
Just because AI tools are more accessible and powerful now doesn’t mean they are magic.
They won’t fix everything – especially the underlying people, policy, and process issues that existed long before ChatGPT hit the headlines.
So, let’s talk about where AI can truly enhance CDI and coding, and where it’s being miscast as a one-size-fits-all solution.
More Accurate Document Parsing with NLP
The NLP that powered early auto-coding systems has now matured into clinical concept extraction, helping flag missing or implied diagnoses with greater accuracy – but still needing oversight.
Faster Drafts of Clinical Queries and Clarifications
Newer AI models can suggest compliant, well-structured queries faster than ever. This supports productivity, but the decision to send, edit, or withhold a query still rests with the CDI professional.
Prompt-Based Learning for Coders & New CDI Staff
GPT-powered tools now allow home practice, simulations, and just-in-time learning, without touching personal health information (PHI). This is a breakthrough for training and upskilling.
Cross-Referencing Codes and Guidelines
Modern AI can pull in logic from multiple sources – ICD-10, CPT, HCC, Coding Clinics – and even explain coding pathways. But the coder’s logic and clinical understanding still matter most.
Speeding Up, Not Replacing, Clinical Review
AI can highlight common gaps (e.g., missing laterality, specificity, causality), but not always why those gaps matter in the clinical context. That’s where human expertise shines.
How Technology Will Shift the Role of the Coder and CDI Professional
As technology evolves, so will the role of those working in documentation integrity. Coders and CDI specialists will still be essential, but how they work and what they’re expected to contribute will shift.
More time will be spent on analysis, education, interpretation, and strategic decision-making, and less on basic lookups or repetition. Teams will need to become more comfortable with AI collaboration, prompt engineering, and critical evaluation of outputs.
This doesn’t mean becoming technical experts. It means developing adaptive thinking, clinical insight, and digital confidence – all of which are becoming core competencies for the next generation of documentation professionals. The needs and demands of these roles will keep changing, so keep sharpening your skills to stay ahead of them.
There are still some things AI can’t fix, even after two decades:
A Poor Documentation Culture
If clinicians aren’t trained to document clearly, or simply aren’t motivated to, AI can’t clean it up. We still need human-led education and communication.
Inconsistent or Non-Compliant Query Practice
AI might suggest a query, but if your team isn’t aligned on tone, format, or escalation processes, inconsistency will persist – and compliance risk may grow.
Foundational Coding Knowledge Gaps
AI won’t replace an understanding of anatomy, physiology, sequencing rules, and coding conventions. In fact, weak foundations make AI suggestions more dangerous.
Fear-Based Leadership
AI can’t fix teams that are afraid to speak up, ask questions, or explore new tools. Culture change is human work, not algorithmic.
Shortcuts Without Strategy
Without clear guidelines, teams may be tempted to overuse AI for efficiency, creating quality, compliance, and audit risks down the road.
From Then to Now: What Should Teams Do Differently in 2025?
- Acknowledge the history. AI in healthcare isn’t new; it’s evolving. Position new tools as a next step, not a revolution.
- Invest in AI literacy and prompt practice. Don’t wait for formal training; start internal experiments using non-PHI queries and documentation drills.
- Focus on integration, not substitution. AI should support clinical reasoning, not replace it.
- Update your query policies. If AI plays a role in drafting or suggesting queries, ensure that your policies reflect human oversight, accountability, and compliance checkpoints.
- Lead with calm, not fear. Help veteran coders and CDI pros see that their deep thinking and judgment are more valuable than ever, especially in an age of automation.
Final Word
AI in coding and CDI didn’t just show up; it has been walking beside us for 20-plus years.
What is different now is how visible and powerful these tools have become.
Let’s be careful not to throw away the human strengths that made this field resilient – clinical curiosity, ethical integrity, and documentation logic – in a rush to automate.
AI can be a strategic partner, but only if we guide it well.
Remember, stay curious…and impact change!
Practice at Home: A Safe Way to Explore AI Without PHI
You don’t need access to patient records or proprietary systems to build AI fluency. In fact, one of the safest and most effective ways to practice is to use made-up, clinically accurate scenarios in a general-purpose AI tool like ChatGPT.
Try using this scenario:
A 78-year-old female presents with shortness of breath, fever, and a productive cough. Chest X-ray shows right lower lobe pneumonia. She also has a history of chronic systolic heart failure (EF 35 percent) and stage 3 chronic kidney disease. The provider documents “pneumonia with CHF exacerbation” in the assessment, but provides no further clinical details about treatment or volume overload.
Prompt:
“Act as a clinical documentation improvement specialist. Based on the following documentation, identify any areas that might need a provider query. Explain your reasoning using CDI best practices and coding guidelines.”
Paste the scenario after the prompt and then review the AI’s response. Ask yourself:
- Does it catch the need for clarification of volume overload or treatment?
- Does it suggest appropriate query language?
- Is the AI confusing coexisting conditions with causality?
This simple, low-risk practice builds confidence in prompt structure, clinical interpretation, and compliant query development – without touching PHI.
You can even try it again with slight variations:
- Remove the EF;
- Add oxygen therapy; or
- Document sepsis or leukocytosis.
See how the AI response changes, and where your judgment might override it.
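For readers who like to tinker, the whole drill above – assemble the prompt, paste in the scenario, then self-review the AI’s answer – can be sketched in a few lines of Python. This is only an illustration of the practice loop, not part of any CDI product: the keyword checklist is an invented, simplified stand-in for the review questions, and you would paste a real model response into it yourself.

```python
# A sketch of the at-home drill: build the practice prompt, then self-grade
# an AI response against a simple keyword checklist. The checklist items and
# keywords are illustrative assumptions, not an official CDI rubric.

PROMPT = (
    "Act as a clinical documentation improvement specialist. "
    "Based on the following documentation, identify any areas that might "
    "need a provider query. Explain your reasoning using CDI best practices "
    "and coding guidelines."
)

SCENARIO = (
    "A 78-year-old female presents with shortness of breath, fever, and a "
    "productive cough. Chest X-ray shows right lower lobe pneumonia. She "
    "also has a history of chronic systolic heart failure (EF 35 percent) "
    "and stage 3 chronic kidney disease. The provider documents 'pneumonia "
    "with CHF exacerbation' in the assessment, but provides no further "
    "clinical details about treatment or volume overload."
)

def build_prompt(prompt: str, scenario: str) -> str:
    """Combine the role-setting prompt with the fictional scenario."""
    return f"{prompt}\n\n{scenario}"

# The three review questions from the drill, expressed as keyword checks.
CHECKLIST = {
    "flags volume overload or treatment gap": ["volume overload", "treatment"],
    "suggests query language": ["query"],
    "addresses causality vs. coexistence": ["causal", "coexist", "linked"],
}

def grade_response(ai_response: str) -> dict:
    """Return True/False per checklist item based on keyword presence."""
    text = ai_response.lower()
    return {
        item: any(kw in text for kw in keywords)
        for item, keywords in CHECKLIST.items()
    }
```

To run the variations above, edit `SCENARIO` (remove the EF, add oxygen therapy, or add sepsis), rebuild the prompt, and compare how each model response scores – noting where your own judgment would override a “passing” keyword match.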
EDITOR’S NOTE:
The opinions expressed in this article are solely those of the author and do not necessarily represent the views or opinions of MedLearn Media. We provide a platform for diverse perspectives, but the content and opinions expressed herein are the author’s own. MedLearn Media does not endorse or guarantee the accuracy of the information presented. Readers are encouraged to critically evaluate the content and conduct their own research. Any actions taken based on this article are at the reader’s own discretion.