The digitization of health care has transformed how data is collected, creating a deluge of information that spans EHRs, clinical documentation, and payer and provider statistics. Navigating these mountains of data was already a challenging endeavor for payers, and now the Office of the National Coordinator's (ONC) Cures Act Final Rule has raised the bar even higher. The Final Rule mandates that patients be able to access all of their electronic health information – both structured and unstructured – at no cost by October 2022. That means billions of unstructured documents, including clinician notes, dictated patient interactions, labs and more, must be accurately digitized and made accessible for the nearly 330 million people living in the United States. If that seems like it's not humanly possible, that's because it isn't – moving forward, payers will need to partner with technology.
In my last post, I shared why natural language processing, or NLP, is now an essential tool for navigating the complexities of the healthcare ecosystem, offering payers new opportunities to optimize decision-making while also keeping them compliant with the Final Rule. The technology achieves this by scanning a wide range of health care information, surfacing insights from unstructured text, and connecting the dots to paint a fuller picture than structured data alone can create. Today, I will walk you through the application areas where NLP can help payers most.
Risk stratification

NLP is a powerful tool to augment risk stratification. Take, for example, Medicare Advantage, which relies on accurate risk adjustment to predict the future health care expenditures of individuals based on diagnoses and demographics. The process is time-consuming and costs millions of dollars annually. At the heart of these risk adjustments are hierarchical condition categories, or HCC codes, which allow payers to distinguish between members with different comorbid conditions (for example, diabetes without complication is much less expensive to treat than diabetic kidney disease). There are about 80 of these categories, covering approximately 10,000 ICD-10 codes, and ensuring the most granular and specific codes have been captured makes up the majority of the risk adjustment effort at payer organizations.
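The rollup from diagnosis codes to HCC categories can be pictured as a simple lookup. The sketch below is purely illustrative – the three ICD-10-to-HCC pairs follow the CMS-HCC pattern but are examples, not the authoritative CMS mapping, and a real risk adjustment engine layers hierarchies and interaction terms on top:

```python
# Toy ICD-10-to-HCC rollup (illustrative entries only, not the CMS mapping).
ICD10_TO_HCC = {
    "E11.9":  ("HCC 19",  "Diabetes without Complication"),
    "E11.22": ("HCC 18",  "Diabetes with Chronic Complications"),
    "N18.4":  ("HCC 137", "Chronic Kidney Disease, Severe (Stage 4)"),
}

def hcc_categories(diagnosis_codes):
    """Roll a member's diagnosis codes up to the HCC categories they trigger."""
    return sorted({ICD10_TO_HCC[code][0] for code in diagnosis_codes
                   if code in ICD10_TO_HCC})

# A member coded only with E11.9 (diabetes without complication) risk-adjusts
# differently than one whose chart review surfaces E11.22 (diabetic CKD).
member_a = hcc_categories(["E11.9"])
member_b = hcc_categories(["E11.22", "N18.4"])
```

Chart review's job, in these terms, is making sure the most specific code a member's documentation supports actually reaches that lookup.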
Independence Blue Cross is a Linguamatics client that uses our NLP to speed up chart review and increase the capture of diagnoses that may be missed by manual chart review alone, ensuring each patient has the appropriate amount of risk-adjusted revenue assigned. Using our NLP system, Independence Blue Cross was able to identify features for HCC codes with over 90% accuracy, processing documents between 45 and 100 pages long at a rate of millions of documents per hour. Used this way, NLP not only speeds up the chart review process; because the software tirelessly reads every word of every document in context, it also uncovers more risk-adjustable diagnoses than reviewers find without it. This fundamental shift to using technology in the risk adjustment process has given nurse reviewers a tool they can no longer imagine working without.
Predictive modeling

According to a 2020 industry report from PwC, 74% of healthcare executives said their organizations would be investing in predictive modeling in 2021. It is no coincidence that this high interest aligns with the digital information deluge we spoke of earlier – industry leaders know that the ability to accurately predict outcomes is notably enhanced by access to a rich repository of information, which we now have with digital EHRs and other documents. For payers, the ability to accurately stratify and predict patient outcomes using unstructured information is an area for a real competitive edge, as the majority of predictive modeling still fails to include information from clinical notes. A recent paper from Kaiser Permanente highlights the increased value of unstructured data versus structured data in understanding the true disease burden of cardiovascular disease.
Many organizations are applying our NLP to better understand and predict disease progression. For example, one payer organization needed to enrich its insights to better predict diabetic foot amputations. Certain features of disease severity – provider comments on medications and documented foot disease, such as ulcers – were contained only in unstructured notes. Using NLP to extract these features, the organization identified 155 additional at-risk patients who could then be proactively managed, resulting in better care as well as cost savings of between $1.5 and $3.5 million annually.
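To make the idea of "extracting a feature from a note" concrete, here is a minimal sketch of flagging documented foot ulcers in free text with naive negation handling. The term list and negation rule are toy assumptions; production clinical NLP uses full negation and context detection (e.g. NegEx/ConText-style algorithms), richer vocabularies, and section awareness:

```python
import re

# Toy vocabulary and negation window -- illustrative assumptions, not a
# validated clinical rule set.
ULCER_TERMS = re.compile(r"\b(foot ulcer|plantar ulcer|ulceration)\b", re.I)
NEGATIONS = re.compile(r"\b(no|denies|without|negative for)\b[^.]{0,40}$", re.I)

def has_documented_foot_ulcer(note: str) -> bool:
    """True if the note mentions a foot ulcer that is not negated nearby."""
    for match in ULCER_TERMS.finditer(note):
        preceding = note[:match.start()]
        # Skip mentions preceded by a negation cue in the same sentence.
        if not NEGATIONS.search(preceding):
            return True
    return False

notes = [
    "Left plantar ulcer, 2 cm, present for 3 weeks.",
    "Skin intact, no foot ulcer observed on exam.",
]
flags = [has_documented_foot_ulcer(n) for n in notes]
```

Features like this one, extracted at scale, become additional inputs to a downstream risk model alongside structured claims data.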
Social determinants of health (SDoH)
Even if you did not previously see the value in granularity around SDoH, the COVID-19 pandemic has surely shifted your perspective. We all saw COVID-19 hit those from lower socioeconomic backgrounds much harder than those of higher socioeconomic status, underscoring the significance of these variables in supporting better care. While the pandemic made the value of SDoH more pronounced, social determinants can also predict the incidence of heart disease and many other conditions. We know this to be true, and yet, until recently, most SDoH were trapped in progress notes, discharge summaries and other documents, with no methods for extraction that weren't incredibly cumbersome. There is huge value in knowing whether a patient lives alone or with family, whether they drink alcohol, whether they are married or widowed, and more. If we want to paint a full picture of our patients, SDoH are vital.
NLP offers a way to extract this information and normalize it across domains to standards such as ICD-10 Z codes or SNOMED codes. For example, one academic medical center we worked with knew that patients with certain SDoH risk factors, including social isolation, were more likely to miss appointments. They wanted to proactively identify these patients for outreach to keep them on track with care. We worked with the center to deploy NLP against 150,000 documents, taking in appointment information as well as nursing notes to identify indicators for isolation. As a result, at-risk patients were rapidly identified, potentially improving quality of life through timely care and avoiding the acute intervention required when patients wait to seek care until their situation can no longer be managed at home.
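A stripped-down sketch of that normalization step: match SDoH phrases in note text and map them to ICD-10 Z codes. The phrase list is a toy vocabulary of my own choosing, and while the Z codes shown are real categories (e.g. Z60.2, "Problems related to living alone"), a production pipeline would normalize through full terminologies such as SNOMED CT rather than a hand-built dictionary:

```python
# Toy SDoH phrase-to-Z-code dictionary (illustrative, not exhaustive).
SDOH_PHRASES = {
    "lives alone": "Z60.2",       # problems related to living alone
    "homeless": "Z59.0",          # homelessness
    "recently widowed": "Z63.4",  # death of family member
}

def z_codes_for_note(note: str):
    """Return the sorted set of Z codes whose trigger phrases appear in a note."""
    text = note.lower()
    return sorted({code for phrase, code in SDOH_PHRASES.items()
                   if phrase in text})

note = "Pt is recently widowed and lives alone; daughter visits weekly."
codes = z_codes_for_note(note)
```

Once SDoH indicators sit in coded fields rather than free text, they can drive outreach lists and population-level queries like any other structured variable.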
Gaps in care
NLP can also be used in workflows to identify where potential diagnoses are being (or could be) missed – you can think of it as safety-netting through NLP. In one example, a customer deployed our NLP against imaging reports from a busy medical center's emergency department. The customer was concerned that providers might miss incidental findings, such as a pre-cancerous nodule on a chest x-ray, when the patient presents with a more emergent situation, like a collapsed lung from a car collision. When NLP was applied against 1,200 radiology reports, an additional 64 cases with incidental findings were identified. Of those, biopsy found lung cancer in 37. By identifying these malignancies at an early stage, NLP increased the patients' chances of long-term survival while also significantly reducing the cost of care.
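The safety-netting pattern reduces to a second, automated pass over report text: flag any report that mentions a finding of interest with no documented follow-up. The patterns below are illustrative assumptions, not a validated clinical rule set:

```python
import re

# Toy finding and follow-up cues -- illustrative only.
FINDING = re.compile(r"\b(nodule|mass|opacity)\b", re.I)
FOLLOWUP = re.compile(r"\b(follow[- ]?up|recommend|referred)\b", re.I)

def needs_review(report: str) -> bool:
    """Flag reports mentioning a finding with no documented follow-up plan."""
    return bool(FINDING.search(report)) and not FOLLOWUP.search(report)

reports = [
    "Impression: No acute process. 8 mm pulmonary nodule in right upper lobe.",
    "Impression: Pneumothorax resolved. Nodule noted; follow-up CT recommended.",
    "Impression: No acute cardiopulmonary abnormality.",
]
flagged = [needs_review(r) for r in reports]  # only the first report is flagged
```

Reports that trip the flag go back to a human reviewer – the point is to catch what a hurried emergency read might leave behind, not to replace the radiologist.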
The payer path forward
While NLP is rapidly shifting to a must-have for payers as a result of the Final Rule, there is also a massive upside to unlocking this previously untapped data. With an NLP system in place to help rapidly extract rich insights from unstructured documents, payers are now empowered to augment risk stratification, better predict disease progression, identify gaps in care, paint a fuller patient picture through SDoH, and much more.
As with most things in life, determining when and how to take the first step with NLP is often the hardest part. New users typically face two options: a vendor solution or a build-it-yourself solution. While build-it-yourself solutions offer complete flexibility, they come with no managed roadmap for support. Vendor solutions are robust, but they can't easily be refined to your needs, and there's often little "under the hood" transparency. Linguamatics offers something of a hybrid: a fully supported software solution with a managed roadmap, but on an open platform, so customers can build it in a way that is right for them.
Whatever type of solution you choose, there are a few priorities to keep in mind. First, ensure your solution has the capacity to scale. The data deluge is not stopping anytime soon, and in fact, it will only increase. Make sure your NLP can handle the millions of documents coming to payers in the years ahead. Second, payers should invest in a solution with specialized domain knowledge as opposed to generic NLP. Medical terminology is notoriously complex, and it takes a specialized embedded intelligence to deliver accurate outputs to payers. Finally, consider transparency – even if you don’t need a fully transparent solution, ensure that at a minimum you understand what is happening in the solution so you have an awareness of what your algorithms are doing and why your outputs are what they are.
The digital world is changing rapidly, but opportunities abound for payers and NLP. If you are a payer interested in learning more, feel free to watch our recent webinar on this topic or reach out for a demo.