Artificial intelligence (AI) has become a fixture in healthcare revenue cycle management (RCM). Finance leaders are desperate for ways to relieve understaffed departments struggling under unprecedented volumes of third-party audit demands and rising denial rates, all without sacrificing accuracy or precision.
At a time when RCM staffing shortages are acute, AI provides a critical productivity boost. By investing in data, AI, and technology platforms, compliance and revenue integrity departments have been able to reduce their team sizes by a third while performing 10 percent more audit activities than in 2022, according to the 2023 Benchmark Report.
This is where AI shines. Arguably its greatest asset is its ability to uncover outliers, the needles in the haystack, across millions of data points.
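To make that concrete, here is a minimal sketch of one common outlier technique, a modified z-score based on the median absolute deviation, applied to provider denial rates. The sample data, field names, and 3.5 cutoff are illustrative assumptions, not any vendor's actual method:

```python
# Hypothetical sketch: flag provider-level denial-rate outliers with a
# robust modified z-score (median absolute deviation). The claims data,
# field names, and 3.5 cutoff are illustrative assumptions only.
from statistics import median

claims = [
    {"provider": "A", "denied": 12, "billed": 400},
    {"provider": "B", "denied": 9,  "billed": 380},
    {"provider": "C", "denied": 88, "billed": 410},  # implausible denial volume
    {"provider": "D", "denied": 11, "billed": 395},
]

rates = {c["provider"]: c["denied"] / c["billed"] for c in claims}
med = median(rates.values())
mad = median(abs(r - med) for r in rates.values())

# 0.6745 rescales the MAD to match the standard deviation of a normal
# distribution; scores above 3.5 are a conventional outlier cutoff.
outliers = {
    p: round(0.6745 * (r - med) / mad, 1)
    for p, r in rates.items()
    if mad and abs(0.6745 * (r - med) / mad) > 3.5
}
print(outliers)  # flags provider C, queued for human review, not automatic action
```

Note the last comment: the output is a candidate list for a human to investigate, not a verdict.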
Unfulfilled Promises
While AI has enabled the automation of many RCM tasks, the promise of fully autonomous systems remains unfulfilled. This is partially due to software vendors' propensity to focus on technology without first taking the time to fully understand the targeted workflows and the human touchpoints within them, a practice that leads to ineffective AI integration and poor end-user adoption.
For AI to function appropriately in a complex RCM environment, humans must be in the loop. Human intervention helps overcome deficits in accuracy and precision – the toughest challenges with autonomous AI – and enhances outcomes, helping avoid the repercussions of poorly designed solutions.
Financial impacts are the most obvious repercussion for healthcare organizations. A poorly trained AI tool conducting prospective claim audits might miss instances of undercoding, which translates into missed revenue opportunities. For one MDaudit customer, an incorrect rule within their "autonomous" coding system was improperly coding drug units administered, resulting in $25 million in lost revenues. The error would never have been caught and corrected without a human in the loop uncovering the flaw.
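A simple guardrail of the kind a human reviewer might institutionalize after catching such an error could look like the sketch below. The HCPCS codes, unit ceilings, and claim structure are hypothetical, not drawn from the customer's actual system:

```python
# Hypothetical guardrail: route claims whose billed drug units exceed a
# plausible per-administration ceiling to a human work queue rather than
# submitting them automatically. Codes and limits are illustrative only.
MAX_UNITS = {"J9310": 8, "J1100": 40}  # HCPCS code -> assumed sane ceiling

def triage(claim: dict) -> str:
    """Return 'auto_submit' or 'human_review' for one drug line item."""
    ceiling = MAX_UNITS.get(claim["hcpcs"])
    if ceiling is None or claim["units"] > ceiling:
        return "human_review"  # unknown code or implausible unit count
    return "auto_submit"

print(triage({"hcpcs": "J9310", "units": 700}))  # human_review
print(triage({"hcpcs": "J1100", "units": 10}))   # auto_submit
```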
AI can also fall short in the other direction, overcoding claims and generating false positives, an area under particular scrutiny given the government's mission of fighting fraud, waste, and abuse in the healthcare system.
Even individual providers can be harmed by poorly designed AI, for example, if a tool has not been properly trained on the concept of "at-risk providers" in the revenue cycle sense. Physicians swept into searches for at-risk providers with high denial rates could be unfairly targeted for additional scrutiny and training, wasting time that should be spent seeing patients, slowing cash flow as their claims are held for prospective review, and potentially harming their reputations with a "problematic" label.
Retaining Humans in the Loop
Again, keeping humans in the loop is the best strategy for preventing these types of negative outcomes. In fact, there are three specific areas of AI that will always require human involvement to achieve optimal outcomes.
Building a Strong Data Foundation
A robust data foundation is crucial because the underlying data model, including proper metadata, data quality, and governance, is key to enabling AI to function at peak efficiency. This requires developers to get into the trenches with billing compliance, coding, and revenue cycle teams to fully understand their workflows and the data needed to perform their duties.
Effective anomaly detection requires billing, denial, and other claims data, as well as an understanding of the complex interplay among providers, coders, billers, payors, and other stakeholders. This ensures the technology can continuously assess risks in real time and deliver the information users need to focus their actions and activities in ways that drive measurable outcomes. If the data foundation is skipped in favor of accelerating deployment of AI models and other shiny tools, the result will be hallucinations and false positives that create noise and hinder adoption.
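As a rough illustration of what putting the data foundation first can mean in practice, the following sketch shows a basic data-quality gate that could run before claim records ever reach a model. All field names and rules are assumptions for the example:

```python
# Hypothetical data-quality gate: quarantine claim records that fail basic
# completeness and consistency checks before model training or scoring.
# All field names and rules are illustrative assumptions.
from datetime import date

REQUIRED = ("claim_id", "provider_npi", "payor", "billed_amount", "service_date")

def quality_issues(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record may proceed."""
    issues = [f"missing {f}" for f in REQUIRED if not record.get(f)]
    if record.get("billed_amount", 0) <= 0:
        issues.append("non-positive billed_amount")
    if record.get("service_date") and record["service_date"] > date.today():
        issues.append("service_date in the future")
    return issues

record = {"claim_id": "C-1", "provider_npi": "", "payor": "Acme",
          "billed_amount": -50, "service_date": date(2099, 1, 1)}
print(quality_issues(record))
# ['missing provider_npi', 'non-positive billed_amount', 'service_date in the future']
```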
Continuous Training
AI-enabled RCM tools require ongoing education in the same way professionals do to keep up with the latest regulations, trends, and priorities in an evolving healthcare RCM environment. Reinforcement learning allows AI to expand its knowledge base and increase its accuracy, and user input is critical to refinement and updates, ensuring AI tools meet current and future needs.
AI should be trainable in real time. End users should be able to support continuous learning by providing immediate input and feedback on the results of information searches and analyses. Users should also be able to mark data as unsafe, when warranted, to prevent its amplification at scale. Output that attributes financial loss or compliance risk to specific entities or individuals without properly explaining why, for example, would merit such a flag.
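One possible shape for such a feedback loop is sketched below, with all structures and labels assumed for illustration. Findings a user marks unsafe are held out of future training data entirely:

```python
# Hypothetical feedback loop: users label each AI finding, and anything
# marked unsafe is excluded from retraining so the error is not amplified
# at scale. Classes and label names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Finding:
    finding_id: str
    summary: str
    labels: list[str] = field(default_factory=list)

feedback_log: list[Finding] = []

def record_feedback(finding: Finding, label: str) -> None:
    """label: 'confirmed', 'false_positive', or 'unsafe'."""
    finding.labels.append(label)
    feedback_log.append(finding)

def training_candidates() -> list[Finding]:
    # Only human-confirmed findings flow back into retraining; anything
    # flagged unsafe is held out entirely.
    return [f for f in feedback_log
            if "confirmed" in f.labels and "unsafe" not in f.labels]

f1 = Finding("F-1", "High compliance risk attributed to provider X")
record_feedback(f1, "unsafe")  # attributes risk without explaining why
print(training_candidates())   # [] -> the flagged finding is held out
```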
Appropriate Governance
Human validation is required to ensure that AI's output is safe. For autonomous coding to work properly, for example, a coding professional must ensure the AI has properly "learned" how to apply updated code sets and handle new regulatory requirements. Excluding humans from the governance loop leaves healthcare organizations wide open to revenue leakage, negative audit outcomes, reputational loss, and much more.
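A governance gate of this kind might, as one hypothetical sketch, auto-apply a suggested code only when it exists in the current code set and the model's confidence clears a threshold, routing everything else to a coding professional. The code set, threshold, and fields here are assumptions:

```python
# Hypothetical governance gate for autonomous coding: a suggestion is
# auto-applied only if it exists in the current code set AND the model's
# confidence clears a floor; otherwise it routes to a coder for review.
CURRENT_CODE_SET = {"99213", "99214", "J9310"}  # assumed validated code set
CONFIDENCE_FLOOR = 0.95

def route_suggestion(code: str, confidence: float) -> str:
    if code not in CURRENT_CODE_SET:
        return "coder_review"  # possibly retired or not-yet-learned code
    if confidence < CONFIDENCE_FLOOR:
        return "coder_review"  # plausible code, but not confident enough
    return "auto_apply"

print(route_suggestion("99215", 0.99))  # coder_review (not in code set)
print(route_suggestion("99213", 0.97))  # auto_apply
```

The point is structural: a human remains the arbiter of anything the model is not demonstrably current and confident on.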
Without question, AI can transform healthcare RCM. But doing so requires that healthcare organizations augment their technology investments with human oversight and workforce training to optimize accuracy, productivity, and business value.