As artificial intelligence (AI) becomes increasingly embedded in the U.S. healthcare system, the lack of comprehensive federal regulation has created a vacuum that states and industry leaders are now stepping in to fill.
And with growing concerns about patient safety, algorithmic bias, and ethical boundaries, states such as Illinois are establishing their own standards for responsible AI use.
But despite AI’s rapid expansion into clinical workflows, diagnostics, patient-facing tools, and back-office processes like billing and payment, there is no single or predominant federal law governing its use in healthcare.
Agencies like the Food and Drug Administration (FDA) have issued guidance on AI-based medical devices, and the Office of the National Coordinator for Health Information Technology (ONC) has promoted AI transparency, including proposed criteria requiring developers of federally certified health IT to give clinical users consistent information about the AI algorithms they use. But these efforts fall short of comprehensive governance.
The result is a fragmented oversight landscape in which AI systems are sometimes deployed with limited testing, transparency, or accountability, all under a presidential administration that continues to push for "progress without regulation."
Mental health applications in particular have raised alarms: several AI chatbots and "therapy assistants" have been shown to deliver inappropriate or misleading responses without adequate human oversight or ethical safeguards.
In response, Illinois passed the Wellness and Oversight for Psychological Resources (WOPR) Act, signed into law by Governor JB Pritzker last week. The legislation is the first in the country to explicitly ban AI systems from delivering mental health treatment and from making related clinical decisions. Under the new law:
- AI cannot perform therapy or interact with patients as a replacement for licensed professionals;
- AI tools may only be used in administrative or supplemental roles, such as scheduling, reminders, or drafting non-therapeutic communication; and
- Any AI use in mental health services must occur under the direct supervision of a licensed provider.
In the absence of federal rules, healthcare providers and tech companies are also increasingly taking matters into their own hands. Several leading health systems have established AI oversight committees to evaluate safety, equity, and efficacy before adopting new technologies.
Meanwhile, companies like Google and Microsoft have published AI ethics frameworks to guide product development, including commitments to transparency and “human-in-the-loop decision-making.”
Some healthcare organizations have even created internal review boards to assess algorithmic bias and clinical risk, though participation remains voluntary and standards vary widely across the industry.
Third-party entities have also stepped up: the Utilization Review Accreditation Commission (URAC) is developing accreditation and certification standards, and the Coalition for Health AI has published flexible guidelines and playbooks.
Some experts caution that without federal leadership, the nation could face a regulatory Wild West of sorts, where inconsistent oversight allows potentially harmful tools to reach patients in less regulated areas of the country.
On Capitol Hill, healthcare advocates have been urging Congress to act. Several federal proposals have been introduced, focusing on a variety of AI issues, from transparency to data privacy to the need for clinical validation. However, nothing has passed into law, leaving states like Illinois and the industry to lead in the meantime.
Illinois' WOPR Act demonstrates how state governments can act decisively to protect patients when federal action lags. And as AI continues to shape the future of healthcare, the responsibility for ensuring its appropriate use increasingly falls to those willing to step in, at least until national standards catch up.
EDITOR’S NOTE:
The opinions expressed in this article are solely those of the author and do not necessarily represent the views or opinions of MedLearn Media. We provide a platform for diverse perspectives, but the content and opinions expressed herein are the author’s own. MedLearn Media does not endorse or guarantee the accuracy of the information presented. Readers are encouraged to critically evaluate the content and conduct their own research. Any actions taken based on this article are at the reader’s own discretion.