Generative artificial intelligence (AI) is reshaping health information management (HIM) at a pace few anticipated.
Hospitals and health systems are integrating large language models (LLMs) and ambient intelligence into their documentation, coding, and clinical documentation integrity (CDI) workflows. Unlike traditional rule-based tools, these new systems can generate clinical text, draft provider notes, and propose codes in real time.
While these technologies offer transformative potential to improve documentation efficiency, they also introduce new challenges regarding accuracy, privacy, bias, governance, and regulatory compliance. HIM and CDI leaders are being called upon to set the standards, ensuring that these tools enhance, rather than disrupt, the integrity of documentation and coding.
For years, AI in HIM has focused on structured tasks like computer-assisted coding, automated charge capture, or predictive denial alerts. Generative AI represents a leap forward: it can read unstructured clinical notes, generate draft documentation from ambient audio, or analyze full patient records to suggest ICD-10-CM, CPT®, and DRG codes.
These capabilities promise measurable benefits. Early adopters have reported 20 to 40 percent reductions in provider documentation time, along with faster coding turnaround. CDI specialists are using these tools to flag documentation gaps in real time. But unlike deterministic algorithms, LLMs generate outputs based on probabilities, which means they can be convincingly wrong.
A single incorrect detail, such as an AI hallucinating a diagnosis, can cascade through coding, billing, and audit trails.
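Because a probabilistic model can assert an unsupported diagnosis with complete fluency, many organizations place a human-in-the-loop gate between AI suggestions and the billing pipeline. Below is a minimal sketch of that pattern in Python; the SuggestedCode structure, the triage_suggestions function, and the confidence threshold are illustrative assumptions, not any vendor's actual API.

```python
from dataclasses import dataclass

@dataclass
class SuggestedCode:
    code: str          # e.g., an ICD-10-CM code such as "E11.9"
    rationale: str     # the note text the model cites as evidence
    confidence: float  # model-reported probability, 0.0 to 1.0

def triage_suggestions(suggestions, note_text, threshold=0.90):
    """Route AI-suggested codes: only suggestions whose cited evidence
    actually appears in the note AND whose confidence clears the
    threshold move to the coding queue; everything else is held for
    human CDI review."""
    auto_queue, review_queue = [], []
    for s in suggestions:
        evidence_present = s.rationale.lower() in note_text.lower()
        if evidence_present and s.confidence >= threshold:
            auto_queue.append(s)
        else:
            review_queue.append(s)  # a credentialed coder makes the final call
    return auto_queue, review_queue
```

The design choice here is deliberate: an unsupported or low-confidence suggestion is never silently dropped; it is routed to a person, so the hallucinated diagnosis is caught before it reaches coding, billing, or the audit trail.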
Generative AI brings novel compliance and privacy responsibilities. Protected health information (PHI) must be processed in ways that comply with the Health Insurance Portability and Accountability Act (HIPAA) and emerging state privacy laws.
Many generative models run on cloud infrastructure, raising questions about data-sharing agreements, encryption, and model retraining. HIM leaders must ensure that vendors provide transparent documentation of data handling and audit logs for every AI-generated output.
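What "audit logs for every AI-generated output" can look like in practice is one structured record per generation, capturing the model version, a hash of the input rather than raw PHI, the output, and the reviewer's disposition. The following is a minimal sketch under assumed field names and a hypothetical JSONL log file, not a prescribed schema:

```python
import hashlib
import json
from datetime import datetime, timezone

def log_ai_output(model_id, model_version, prompt, output, reviewer=None):
    """Append one structured audit record per AI-generated output.
    The prompt is stored only as a SHA-256 hash so raw PHI never lands
    in the log, while the hash still lets auditors match records back
    to source documents retained in the EHR."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output": output,
        "reviewer": reviewer,  # filled in when a human signs off
    }
    with open("ai_audit_log.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

Recording the model version alongside each output matters because vendors retrain models over time; without it, an organization cannot reconstruct which model produced a disputed note or code.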
Bias is another critical issue. If AI models are trained on incomplete or skewed datasets, they may reflect and amplify existing documentation or coding disparities, affecting quality metrics, risk scores, and equity reporting. Organizations must implement processes to regularly audit AI outputs for accuracy, bias, and unintended consequences.
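A recurring bias audit can start simply: sample reviewed AI outputs, compare accuracy rates across patient subgroups, and flag large gaps for investigation. This sketch assumes a hypothetical dataset of human-reviewed outputs, each tagged with a subgroup label and a correctness judgment; the function name and five-point threshold are illustrative.

```python
from collections import defaultdict

def accuracy_by_group(reviewed_outputs):
    """Compute the share of AI suggestions judged correct within each
    patient subgroup; a wide gap between groups is a signal to examine
    the underlying training data or prompt design."""
    totals = defaultdict(int)
    correct = defaultdict(int)
    for row in reviewed_outputs:  # e.g., {"group": "A", "correct": True}
        totals[row["group"]] += 1
        correct[row["group"]] += row["correct"]
    return {g: correct[g] / totals[g] for g in totals}

# Example: flag any gap wider than 5 percentage points for review.
rates = accuracy_by_group([
    {"group": "A", "correct": True},
    {"group": "A", "correct": False},
    {"group": "B", "correct": True},
])
flagged = max(rates.values()) - min(rates.values()) > 0.05
```

The point is less the arithmetic than the cadence: running a check like this on a regular schedule turns "audit for bias" from an aspiration into an operational control.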
One of the most important messages to convey is that AI is not here to replace coding professionals, CDI specialists, or HIM staff. These tools should enhance human performance by automating repetitive work and surfacing critical data, but the final responsibility for documentation and coding decisions remains human.
Effective governance is the cornerstone of safe, compliant AI deployment. HIM leaders should advocate for cross-functional AI governance committees that include HIM and CDI leadership, compliance and legal teams, clinical champions, IT, and data privacy experts. These groups must define policies for model vetting, user training, and audit monitoring.
Generative AI has the potential to revolutionize clinical documentation and coding, but only if deployed with rigorous oversight, governance, and human leadership. These tools should amplify, not replace, human expertise. By establishing strong governance structures, maintaining compliance with privacy and regulatory frameworks, auditing for bias, and empowering coders and CDI specialists to oversee AI outputs, HIM leaders can leverage this technology to strengthen documentation integrity and operational efficiency.
The future of coding and CDI isn’t AI versus humans. It’s AI and humans, working together to create more accurate, efficient, and equitable health information ecosystems.
Programming note:
Listen live October 21 when Angela Comfort cohosts Talk Ten Tuesdays with Chuck Buck, 10 a.m. Eastern.