Contest indicates coding accuracy is below expectations.
Central Learning is a web-based coding assessment and education application. Since 2016, an annual national coding contest has been conducted through the platform to measure ICD-10 coding accuracy and production. The initial premise was to evaluate how coding accuracy and production under ICD-10 compared to ICD-9, for which the commonly cited industry accuracy benchmark was 95 percent. The findings of the 2018 contest indicate that the industry at large still lags far below past expectations. In addition, as production under ICD-10 increases, accuracy decreases.
The 2018 contest drew contestants from 47 states, who coded 4,471 real medical records. Sixty-one percent of the contestants held an American Health Information Management Association (AHIMA) coding certification, and 26 percent held an AAPC coding certification. Contestants self-designated their area of coding expertise. The overall coding accuracy for 2018 was 57.5 percent.
Not unexpectedly, experience is a key determinant of accurate coding. Coding accuracy was grouped by experience category: less than five years, 5-10 years, 10-20 years, 20-30 years, and more than 30 years. Among inpatient coders, those with more than 30 years of experience scored 77 percent, while those with less than five years scored 48.5 percent. Outpatient coders scored 10-15 percentage points lower in each category.
For inpatient records, the average primary diagnosis accuracy was 67.8 percent, secondary diagnosis accuracy was 38.8 percent, and CPT assignment accuracy was 35.9 percent. Overall DRG accuracy was 72 percent. Common errors included lack of specificity, failure to indicate laterality, acuity issues, and incorrect site designation.
Another surprising finding was that, despite annual fluctuations, no significant overall improvement in ICD-10 coding accuracy has occurred. Inpatient coding accuracy for 2016, 2017, and 2018 was 55, 61, and 57.5 percent, respectively. Outpatient ICD-10 accuracy for the same years was 38, 41, and 42.5 percent.
One possible explanation was provided by AHIMA. Unlike historical coding, in which coders used the code books to find codes and read the applicable rules, today’s automated coding assistance tools present codes or code choices based on standardized algorithms or word identification. Those designated codes are not necessarily correct; they are simply possible choices to consider. Likewise, electronic health record (EHR) coding is typically far less than optimal and often inaccurate. These tools do not replace coder knowledge, understanding of the applicable rules, or interpretation of the authoritative guidelines. If coders rely upon automated coding options or designations without critique and analysis, errors will be prevalent.
I believe these surprising findings raise concerns regarding compliance, many state and federal initiatives, and payment. For inpatient claims, incorrect primary and secondary diagnoses, incorrect acuity, failure to report complications, and inaccurate reporting of conditions present on admission all directly impact reimbursement. For outpatient claims, it is common for payors to issue denials for reasons including unspecified diagnoses, missing laterality, and failure to adhere to the authoritative coding guidelines. Emergency department coding scored particularly low on accuracy, which could well contribute to surprise medical bills. Another consideration is the full range of Centers for Medicare & Medicaid Services (CMS) quality payment initiatives, for which coding accuracy is critical both to avoiding penalties and to earning correct bonuses.
In addition, the increasing focus on social determinants of health depends on accurate reporting of the patient circumstances and risks that affect care and compliance. As both federal and commercial payment models move to new methodologies, diagnosis coding is becoming the driver of all reimbursement.
Immense amounts of time and dedicated work have been invested in clinical documentation improvement. That raises the question of why this investment has not produced the expected improvement in coding accuracy, as reflected in the 2018 coding contest.
I believe the coding accuracy findings offer a huge window of opportunity for the industry. A detailed analysis comparing the contest findings to your organization’s own metrics is recommended. New and less experienced coders need much more scrutiny than your veteran coders. Comparing final code selections against the choices offered by coding assistance tools is also strongly recommended, and auditing production benchmarks weighted against accuracy is prudent.
It may be easy to say that the results of this contest do not represent your business or enterprise. However, the scope, depth, and breadth of three years of contest findings suggest to me that they probably do.
Program Note:
Listen to Holly Louie report this story live today during Talk Ten Tuesdays, 10-10:30 a.m. ET.