Struggling with drug labels data? Why you should consider natural language processing

There are a multitude of pressures facing regulatory teams in 2022 and beyond. Externally, downward pricing pressure on products and evolving regulations around the globe mean teams often have to do more with less. Internally, teams can lack the resources and tools they need to be effective in such a dynamic environment.

When we talk to our customers, we hear time and time again that teams are seeking to reduce time spent on repetitive tasks and win back time to develop the right label content, taking into account competitive factors, market dynamics and effective approval strategies. Accurate regulatory intelligence is essential for better decision-making, and one of the ways we are helping is by using our Natural Language Processing (NLP) platform to speed up the identification of relevant, rich label information from across sources.

Using our Insights Hub platform for drug label exploration can help teams:  

  • Quickly find labels for a class or group of drugs within one source or across sources  

  • Find labels for products with similar attributes such as mechanisms of action or pharmacokinetics 

  • Extract adverse reactions and normalize them to improve comparative analysis  

  • Identify label content relating to concepts such as groupings of adverse reactions for different types of drugs to assist with authoring for improved regulatory acceptance  

  • Understand differences in labelling language and strategy for different jurisdictions  
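The normalization step mentioned above can be illustrated with a small sketch. This is not the Insights Hub implementation — it simply shows the idea of mapping raw adverse-reaction terms to preferred terms via a synonym dictionary; the dictionary entries here are invented for the example, whereas real platforms draw on large curated vocabularies.

```python
# Minimal sketch: normalize adverse-reaction terms to preferred terms
# using a tiny synonym "ontology". The entries below are invented;
# production systems use large curated vocabularies.

SYNONYMS = {
    "pyrexia": "fever",
    "high temperature": "fever",
    "fever": "fever",
    "cephalalgia": "headache",
    "head pain": "headache",
    "headache": "headache",
}

def normalize_reactions(terms):
    """Map raw label terms to preferred terms, dropping duplicates
    and terms not found in the vocabulary."""
    normalized = []
    for term in terms:
        preferred = SYNONYMS.get(term.strip().lower())
        if preferred and preferred not in normalized:
            normalized.append(preferred)
    return normalized

# "Pyrexia" and "FEVER" collapse to the same preferred term,
# so three raw mentions become two comparable reactions.
print(normalize_reactions(["Pyrexia", "head pain", "FEVER"]))
```

Once terms from different labels share the same preferred vocabulary, side-by-side comparison across products and sources becomes straightforward.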

I would encourage teams that use sources such as FDA and EMA drug labels, as well as non-English sources such as Spanish and French labels, to consider the benefits of repeatable search and extraction, where text mining can find topics and content using far more than keywords and filters. We use large, rich ontologies of terms and synonyms for drug names, diseases, symptoms and adverse events, which makes our results more precise and less noisy when looking for comparable label content and examples. Our linguistics can also surface terms that may be unexpected but appear in the context of topics of interest. For example, a group of unfamiliar symptoms may appear in a label associated with a particular patient group of interest, and understanding how those symptoms are described in an approved label may help with authoring similar warnings for a new drug. We use word proximity, negation and wildcards to achieve this, and by storing searches for the team to reuse, sophisticated, precise results can be regenerated quickly.
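The proximity and negation ideas above can be sketched in a few lines. This is a deliberately simplified, regex-based illustration under my own assumptions — a real NLP platform relies on linguistic parsing rather than word-window matching, and the sample text and cue list are invented.

```python
import re

# Minimal sketch of proximity search with negation handling over label
# text. Regex word-splitting stands in for real linguistic processing;
# the negation cues and sample sentence are invented for illustration.

NEGATION_CUES = {"no", "not", "without", "denies"}

def within_proximity(text, term_a, term_b, window=5):
    """True if term_a and term_b occur within `window` words of each
    other, ignoring mentions of term_a preceded by a negation cue."""
    words = re.findall(r"[a-z']+", text.lower())
    positions_a = [i for i, w in enumerate(words) if w == term_a]
    positions_b = [i for i, w in enumerate(words) if w == term_b]
    for i in positions_a:
        if i > 0 and words[i - 1] in NEGATION_CUES:
            continue  # skip negated mentions, e.g. "no fever"
        if any(abs(i - j) <= window for j in positions_b):
            return True
    return False

text = "Patients reported rash and severe itching; no fever was observed."
print(within_proximity(text, "rash", "itching"))   # True: 3 words apart
print(within_proximity(text, "fever", "itching"))  # False: "fever" is negated
```

Storing such a query (terms, window, negation rules) is what makes the search repeatable: the team can rerun it unchanged as new labels are added to a source.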

Compare this to manual search and feature extraction, where regulatory labelling teams can spend a significant amount of time hunting for the right labels and label-based information. Add switching between multiple sources into the mix, and searching for specific terms and concepts across them becomes complicated and inconsistent. Semi-automated approaches with NLP at their heart can empower experts to be more effective. We are really excited about our work in this area and are developing new features and adding new data to our platforms through this year and beyond.

With that in mind, we’d love to hear from teams facing challenges in getting the most from label data, and to share some of our examples and tools in more detail. Feel free to get in touch by email at NLP@iqvia.com, or visit the Insights Hub page.

Watch the webinar
