The boundaries of clinical judgment have always been clearly defined. Clinical judgment belongs to the provider; it is supported by documentation, clarified through compliant queries, and governed by regulatory standards that ensure the integrity of the medical record. What has remained constant is accountability.
That accountability is now being challenged.
On May 1, 2026, the Commonwealth of Pennsylvania State Board of Medicine filed a petition in the Commonwealth Court seeking to enjoin Character Technologies, Inc. – the company behind Character.AI – from engaging in the unauthorized practice of medicine.¹
This case, recently highlighted by Eric Fish, Partner at Hooper, Lundy & Bookman, brings early visibility to what may become a defining regulatory moment for artificial intelligence (AI) in healthcare.³
The Association of Clinical Documentation Integrity Specialists/American Health Information Management Association (ACDIS/AHIMA) draft Guidelines for Achieving a Compliant Query Practice 2026 Update reaffirm that clinical judgment belongs to the provider.⁴ This case raises the question of what happens when something else begins to act like the provider.
At first glance, this reads like a case about a chatbot, but it is not. This is a case about where the line is drawn between technology and clinical practice, and what happens when that line is crossed.
The platform at the center of this action, Character.AI, allows users to create and interact with AI-generated personas, including those designed to simulate healthcare professionals.¹ In the complaint, a state investigator engaged a chatbot presenting itself as a psychiatrist. During that interaction, the AI discussed symptoms consistent with depression, offered to perform an assessment, suggested that medication could be appropriate, and, critically, represented itself as a licensed physician, including in Pennsylvania, even providing a false license number.¹
That interaction is the foundation of this case, and it is enough.
The Pennsylvania Medical Practice Act does not require harm to occur for a violation to exist. It does not require a prescription to be written or a procedure to be performed. It requires only that an individual or entity practice or purport to practice medicine without a valid license.¹ That distinction is not subtle, but foundational. The act of representing clinical authority is, in itself, regulated.
This is where the connection to query compliance becomes unavoidable.
The chatbot did not simply provide general information. It interpreted symptoms, advanced a diagnostic direction, and suggested treatment. In effect, it moved beyond clarification and into clinical decision-making. It behaved as if it were the clinician.
That distinction matters. A compliant query presents clinically supported options and preserves independent provider judgment. What occurred in this case bypasses that structure entirely. It introduces clinical reasoning without defined authorship, validation, or accountability. This is not a deviation from compliant query practice, but a wholesale avoidance of it.
For healthcare organizations, this marks an inflection point. For years, AI has been framed as a tool: something that supports clinical workflows, enhances documentation, or improves efficiency. This case reframes that narrative. When an AI system begins to interpret clinical information, engage in decision-making dialogue, and communicate with the authority of a licensed provider, it is no longer functioning as a tool, but as a clinical actor. And clinical actors are subject to regulation.
It would be a mistake to view this as an isolated enforcement action against a consumer-facing platform. The implications extend directly into healthcare operations. We are already seeing widespread adoption of AI scribes that generate clinical documentation, decision-support tools that influence treatment pathways, and risk-adjustment models that shape how conditions are captured and represented. The question is no longer whether AI is present in the workflow, but whether the output of the AI functions as clinical judgment.
That distinction matters because the regulatory standard is shifting from intent to function. If the output behaves like clinical decision-making, it will be evaluated as clinical decision-making.
This is also where federal enforcement posture becomes relevant. The U.S. Department of Health and Human Services (HHS) Office of Inspector General (OIG) has made it clear through its recent compliance guidance and enforcement priorities that organizations are expected to maintain effective oversight of evolving technologies, particularly where those technologies influence billing, documentation, or clinical decision-making.² AI does not sit outside of existing compliance frameworks; it amplifies them. If AI-generated content contributes to a claim, supports a diagnosis, or influences medical necessity, it becomes subject to the same scrutiny as any other element of the record.
This introduces a new layer of risk into the medical record – one that centers on authorship and provenance. The medical record has always served as the legal record of care. It is the foundation for reimbursement, quality measurement, and regulatory review. Increasingly, it is also the data source for automated payer review, structured data extraction, and interoperability frameworks. When AI-generated content enters that record, the question becomes not just what was documented, but who or what generated it, and whether that content can withstand scrutiny.
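To make the authorship and provenance question concrete, here is a minimal illustrative sketch (not a reference to any specific EHR, vendor, or interoperability standard) of what it could look like to carry "who or what generated it" alongside a proposed record entry. The field names, labels such as "ai_scribe", and the attestation rule are assumptions introduced only for this example.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class DocumentationEntry:
    """A statement proposed for the medical record, carrying its provenance."""
    text: str                         # the clinical language itself
    generated_by: str                 # e.g., "provider" or "ai_scribe" (hypothetical labels)
    model_version: str | None = None  # which AI system produced it, if any
    attested_by: str | None = None    # licensed provider who reviewed and adopted it
    attested_at: datetime | None = None

    def is_attributable(self) -> bool:
        # AI-generated text counts as attributable only after provider attestation.
        return self.generated_by == "provider" or self.attested_by is not None

# An AI-scribe suggestion that has not yet been reviewed by a licensed provider.
entry = DocumentationEntry(
    text="Symptoms are consistent with major depressive disorder.",
    generated_by="ai_scribe",
    model_version="scribe-v2",  # hypothetical identifier
)
print(entry.is_attributable())  # False until a licensed provider attests
```

The point of the sketch is narrow: provenance travels with the text, so downstream review can distinguish provider judgment from machine-generated language.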
If a statement in the record reads as a clinical conclusion, it will be treated as one. If that conclusion cannot be attributed to a licensed provider exercising clinical judgment, the integrity of the record is compromised. From an enforcement perspective, that compromise creates exposure not only around documentation accuracy, but also around false claims risk if unsupported or non-validated AI-generated content contributes to reimbursement.
Another critical element of this case is where accountability lies. The action is not directed at an individual user who created the chatbot. It is directed at the platform itself. That signals a clear regulatory posture: organizations may be held responsible for how AI systems are deployed, how they behave, and what they are allowed to represent. AI outputs are not viewed as isolated artifacts, but as extensions of the entity that enables them.
For healthcare systems, this aligns with existing expectations regarding documentation integrity, coding accuracy, and medical necessity. The difference is that AI introduces scale and variability at a pace that traditional governance structures were not designed to manage. Without deliberate oversight, the risk is not incremental, but exponential.
This is where governance maturity becomes the differentiator. Organizations that approach AI as a technology implementation will remain reactive. Organizations that approach AI as a clinical and compliance risk domain will build the controls necessary to withstand scrutiny. That includes formal governance structures, defined accountability for AI-generated content, validation processes for clinical language, and clear policies prohibiting AI from representing itself as a licensed provider or an independent clinical authority.
Perhaps the most important takeaway from this case is how low the exposure threshold is. The trigger is not treatment, but perception. If an AI system presents itself with clinical authority, uses protected titles, or implies licensure, it may meet the definition of “holding out” as a provider. That standard has direct implications for any AI-enabled interface that interacts with patients, supports documentation, or generates clinical language.
Healthcare organizations must respond accordingly. This requires more than awareness; it also demands governance. There must be clear boundaries governing what AI can and cannot do within clinical workflows. Clinical statements must be attributable to licensed providers. Documentation generated or supported by AI must be validated, not assumed. And critically, clinical documentation integrity (CDI), compliance, and physician advisory leadership must have a defined role in evaluating and overseeing AI deployment. This is where CDI becomes central, not peripheral, in AI governance.
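As one purely illustrative sketch of such a boundary (the rule text, field names, and function names are invented for this example, not drawn from any actual policy or product), a governance gate might refuse to file AI-generated language that lacks provider attestation or that represents the system as a clinical authority:

```python
# Illustrative governance gate: the phrases and checks below are assumptions
# for this sketch, not an implementation of any particular organization's policy.
PROHIBITED_SELF_DESCRIPTIONS = (
    "as your physician",
    "as a licensed psychiatrist",
    "i am a licensed",
)

def ready_to_file(text: str, generated_by_ai: bool, attested_by_provider: bool) -> tuple[bool, str]:
    """Return (ok, reason) for whether proposed documentation may enter the record."""
    lowered = text.lower()
    if any(phrase in lowered for phrase in PROHIBITED_SELF_DESCRIPTIONS):
        return False, "content represents the AI as a clinical authority"
    if generated_by_ai and not attested_by_provider:
        return False, "AI-generated content lacks attestation by a licensed provider"
    return True, "attributable and within policy"

ok, reason = ready_to_file(
    text="Symptoms are consistent with major depressive disorder.",
    generated_by_ai=True,
    attested_by_provider=False,
)
print(ok, reason)  # False, AI-generated content lacks attestation by a licensed provider
```

The specific rules will differ by organization; what matters is that a defined gate exists before AI-generated language reaches the legal record.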
This is not simply a technological decision, but also a clinical, regulatory, and legal one.
The broader trajectory is clear. As regulators, payers, and enforcement bodies continue to evaluate the role of AI in healthcare, the focus will not be on the technology’s sophistication. It will be on the impact of its output. Does it influence care? Does it shape decision-making? Does it present itself as a clinical authority?
If the answer is yes, it will be regulated accordingly.
We are entering a phase where the distinction between technology and clinician is no longer defined by design, but by behavior. That shift places the medical record at the center of the conversation once again, not just as a reflection of care, but as evidence of how care decisions are made.
The expectation remains unchanged. The record must be accurate, attributable, and defensible. What has changed is the environment in which that record is created.
If it looks like clinical judgment, communicates like clinical judgment, and influences care like clinical judgment, it will be regulated as clinical judgment.
References
1. Commonwealth of Pennsylvania, Department of State, State Board of Medicine v. Character Technologies, Inc. Petition for Review in the Nature of a Complaint in Equity. Filed May 1, 2026.
2. U.S. Department of Health and Human Services, Office of Inspector General. General Compliance Program Guidance and industry segment guidance updates addressing technology, oversight, and program integrity expectations. https://oig.hhs.gov/compliance
3. Fish E. LinkedIn post regarding the Pennsylvania State Board of Medicine action against Character.AI. May 2026.
4. ACDIS, AHIMA. Guidelines for Achieving a Compliant Query Practice—2026 Update (Draft). 2026. Accessed May 2026. https://acdis.org/resources/acdisahima-guidelines-achieving-compliant-query-practice-2026-update