EDITOR’S NOTE: The author of this article used AI-assisted tools in its composition, but all content, analysis, and conclusions were based on the author’s professional judgment and expertise. The article was then edited by a human being.
Over the past year, California has continued to take a more direct approach than most states to regulating how AI is used in healthcare, particularly in areas that affect access to care and patient communication.
That approach became concrete with Senate Bill 1120, the Physicians Make Decisions Act. Signed in September 2024 and effective as of Jan. 1, 2025, the law leaves little room for interpretation: AI tools may support utilization workflows, but they cannot be the final decision-makers when care is denied, delayed, or modified. Medical necessity determinations must come from a licensed clinician. Although the statute is aimed at payers, providers have felt its impact throughout 2025. As utilization processes rely more heavily on automated intake and routing, documentation has become the primary means by which clinicians communicate the reasoning supporting those determinations. In practice, this means the record must do more work, often under tighter timelines.
Assembly Bill 3030 added another layer of responsibility. Also effective Jan. 1, 2025, it requires disclosure when generative AI is used in communications sent to patients. For many organizations, this prompted a closer look at how after-visit summaries, discharge instructions, and portal messages are generated. Over the course of the year, it became clear that misalignment between what is documented clinically and what is communicated to patients is no longer just a messaging issue. It carries compliance risk and, in some cases, patient safety implications, particularly when automated language smooths over clinically relevant nuance.
Looking ahead, Assembly Bill 489 extends this theme. Effective Jan. 1, 2026, it will prohibit AI systems from giving patients the impression that they are interacting with a licensed clinician when they are not. As chat-based tools and automated documentation interfaces continue to expand, organizations will need to be explicit about where automation stops and human judgment begins. From a documentation perspective, this places renewed emphasis on clarity around authorship and review: questions that were once implicit but are now increasingly visible.
Taken together, California’s recent legislation sends a consistent signal. AI can help move work along, but it cannot replace clinical judgment or blur who is responsible for decisions that affect patients.
Federal Rules Accelerate the Consequences of Documentation
Federal policy has reinforced similar expectations, though often through operational pressure rather than explicit AI language. The Interoperability and Prior Authorization Final Rule (CMS-0057-F) has reshaped how documentation flows between providers and payers serving Medicare Advantage (MA), Medicaid, Children’s Health Insurance Program (CHIP), and Marketplace populations. Over the past year, many organizations have seen how electronic prior authorization and shortened decision timelines change the stakes for documentation. Beginning in 2026, public reporting of prior authorization metrics will make those stakes even more visible.
For providers, the practical reality is straightforward. Documentation is reaching payers quickly, sometimes almost immediately. There is far less opportunity to clarify intent after the fact. What is written at the time of submission is what gets reviewed. As denial explanations become more specific and transparent, gaps that might once have led to generic denials are now cited directly.
Other federal actions reinforce this direction. MA utilization management rules emphasize individualized, clinically grounded determinations. Information blocking requirements have expanded access to the medical record for patients and other stakeholders. Health Insurance Portability and Accountability Act (HIPAA) enforcement has made clear that AI-generated documentation is not treated differently from clinician-authored content. At the same time, oversight by the Federal Trade Commission and continued enforcement under the False Claims Act underscore that automation does not dilute responsibility when documentation does not accurately reflect the care provided.
As healthcare enters 2026, with additional AI transparency requirements taking effect, the direction is clear. AI can support clinicians and reduce administrative burden, but it does not replace clinical judgment, clear communication, or well-supported documentation. In a more automated and interconnected system, documentation integrity is no longer a back-end function.
It is one of the structures on which quality, compliance, financial performance, and patient trust now rest.