At the end of 2016, I attended the CBI 2nd Annual IDMP Update Forum in Philadelphia, a small but highly focused and effective conference with two days of meetings and discussions. There were presentations by industry leaders involved in understanding and addressing the challenges that IDMP compliance presents to the pharmaceutical industry, and also presentations by some of the vendors in this area.

The meeting kicked off with a keynote from John Kiser, Senior Director, Regulatory Policy and Intelligence, AbbVie. This raised some of the key challenges for IDMP compliance that recurred throughout the conference:

  • We need to think strategically, about master data management (MDM), not just about what is needed for IDMP compliance.
  • Even though the timelines are moving out, it’s really important not to take our eye off the ball. IDMP projects are being driven out of the EU, and the US has to get moving to keep up. Don’t wait: start planning and kicking off pilots and proofs-of-concept with vendors now.
  • IDMP compliance planning shouldn’t just involve regulatory affairs and supply chain departments, as IDMP will impact quality, clinical operations, pharmacovigilance and safety, production, IT and more.

How text analytics using I2E can help

One comment that interested me was that while manual curation may provide the data elements for the current understanding of Iteration 1, other strategies will be needed to deal with potential changes in the implementation guidance, and to accommodate the flexibility required for Iteration 2 and beyond.

“It is easy to lie with statistics. It is hard to tell the truth without it.” -Andrejs Dunkels

This is a quote I first heard long ago, but was recently re-introduced to by a beloved colleague of mine. Anyone with a background in research can attest to just how true it is. Without good statistical power, life-saving pharmaceuticals never make it to the market, and the ones that do, do so at a hefty cost. In 2012, an article reported that the average cost to develop a new pharmaceutical was $4 billion, and could reach upwards of $11 billion; staggering numbers, and that was four years ago[1]. Without any hesitation, I can confidently say, “those numbers aren’t going down.”

But WHY do pharmaceuticals cost so much?

There are genuine factors that contribute to these huge costs, and one of the most expensive phases of drug development is clinical trials. Those of us who have worked in research know that clinical trial recruitment takes an exorbitant amount of time and money. If you don’t recruit enough eligible people who then complete the study, the study won’t reach the all-powerful “n”: the number of participants statistically needed to prove that the study drug was safe and effective (or not).

How can Natural Language Processing (NLP) help in recruitment?
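To make the idea concrete, here is a minimal sketch of NLP-assisted pre-screening: free-text clinical notes are matched against inclusion and exclusion terms. The criteria and notes below are invented for illustration, and a production system such as I2E uses linguistic analysis and curated terminologies rather than bare keyword matching.

```python
# Hypothetical eligibility criteria for an illustrative trial.
INCLUSION = ["type 2 diabetes", "metformin"]
EXCLUSION = ["pregnan", "renal failure"]  # prefix catches pregnant/pregnancy

def screen(note: str) -> bool:
    """Return True if a free-text note matches every inclusion
    term and no exclusion term (case-insensitive)."""
    text = note.lower()
    if any(term in text for term in EXCLUSION):
        return False
    return all(term in text for term in INCLUSION)

notes = [
    "58yo male with type 2 diabetes, on metformin 500mg BID.",
    "Type 2 diabetes; metformin stopped due to renal failure.",
]
print([screen(n) for n in notes])  # [True, False]
```

Even this toy version shows the payoff: instead of a coordinator reading every chart, the system surfaces a shortlist of likely candidates for human review.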

I needed this yesterday. 

None of us appreciates it when our clinician’s head is buried in a computer, when what we really want is to be heard and taken care of. But when so much has to be done within a very short timeframe, what if a provider misses an important clinical clue? There has got to be a better way…

Rapid and efficient diagnoses are why tools such as the first automatic blood pressure monitor were invented. Of course, the days of Seymour B. London’s 1965 design - a prototype using a blood pressure cuff, a column of mercury, a microphone and a fish tank pump - are long gone. Now all vital signs can be checked within a few minutes, including blood pressure, electrical heart activity, blood oxygen, and temperature - far more quickly and accurately than a rushed human with an armful of heavy equipment in a noisy clinical setting.

At Linguamatics our goal is to provide healthcare professionals with software that helps them do their jobs better. Giving physicians time back to be more personally attentive during the patient visit is a high priority. Patients want to be heard. Just have a look at how many bad physician reviews follow the theme of a negative bedside manner - even if the physician achieves the right clinical outcome.

In the spirit of decreasing human error, increasing patient-physician face-time (and of course, the alternative use of a fish tank pump), we at Linguamatics are delighted to introduce our I2E Asynchronous Messaging Pipeline (aka AMP).

I2E Asynchronous Messaging Pipeline (AMP) Extract, Transform, Load (ETL) technology automates the NLP text mining of real-time documents at scale

Cambridge, UK & Boston, USA – December 6th, 2016 – Text analytics provider Linguamatics today introduced the I2E Asynchronous Messaging Pipeline (AMP) platform to help healthcare professionals find critical clinical insights faster using Natural Language Processing (NLP).

The addition of I2E AMP to Linguamatics’ award-winning NLP text mining solution, I2E, makes the management of background healthcare workflows more efficient, and provides scalability as NLP text mining requirements grow. By automating the text mining of real-time documents, I2E AMP can provide healthcare professionals with rapid insights and help them make timely – and potentially critical – clinical decisions.
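As an illustration of the general pattern (not the AMP API itself), the sketch below shows an asynchronous document pipeline in miniature: producers push incoming documents onto a queue, and background workers apply an extraction step as documents arrive, decoupling document intake from processing.

```python
import queue
import threading

# Minimal async pipeline sketch: a shared queue plus background workers.
docs = queue.Queue()
results = []
results_lock = threading.Lock()

def extract(doc: str) -> dict:
    # Stand-in for a real NLP step: count the tokens in each document.
    return {"doc": doc, "tokens": len(doc.split())}

def worker():
    while True:
        doc = docs.get()
        if doc is None:           # sentinel: shut this worker down
            docs.task_done()
            break
        with results_lock:
            results.append(extract(doc))
        docs.task_done()

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()

for doc in ["patient admitted with chest pain", "bp 140/90 recorded"]:
    docs.put(doc)                 # documents arrive asynchronously
for _ in threads:
    docs.put(None)                # one sentinel per worker
docs.join()                       # block until every item is processed
for t in threads:
    t.join()

print(len(results))  # 2
```

The key design point is that intake never waits on processing: more workers can be added as text mining requirements grow, which is the scalability property the announcement describes.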

Innovative ETL (Extract, Transform, Load) technology frees 80% of unstructured data trapped in Data Lakes, enabling high-value knowledge discovery and decision support

Cambridge, UK & Boston, USA – 30th November, 2016 – Text analytics provider Linguamatics today released the latest version of their award-winning natural language processing (NLP) text mining platform, I2E 5.0.

Game-changing capabilities in I2E 5.0 include normalization of concepts (e.g. dates, measurements, gene mutations) within unstructured text, advanced range search, and a new query language, EASL. These capabilities tackle the variety in big data, and accelerate insights from unstructured, semi-structured and structured data sources.

Normalization and range search help users find key information (e.g. a particular temperature or a range of temperatures) in unstructured text sources regardless of how the information is expressed, and boost ETL operations by identifying, extracting and standardizing data. Given that around 80-90% of big data is unstructured, these new text mining capabilities allow huge amounts of data that previously had to be read manually to be processed automatically.
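The idea behind normalization and range search can be sketched in a few lines: pull differently expressed temperature mentions out of free text, convert them to a canonical unit, and then filter documents by a numeric range. This is an illustrative toy, not I2E’s actual implementation, and the pattern below only covers a few simple phrasings.

```python
import re

# Match e.g. "101 F", "38.3 degrees Celsius" (illustrative pattern only).
PATTERN = re.compile(
    r"(\d+(?:\.\d+)?)\s*(?:°\s*)?(F|C|degrees Fahrenheit|degrees Celsius)",
    re.IGNORECASE,
)

def normalize_temps(text: str) -> list:
    """Extract temperature mentions from free text, converted to Celsius."""
    temps = []
    for value, unit in PATTERN.findall(text):
        v = float(value)
        if unit.lower() in ("f", "degrees fahrenheit"):
            v = (v - 32) * 5 / 9  # Fahrenheit -> Celsius
        temps.append(round(v, 1))
    return temps

docs = [
    "Fever of 101 F on admission.",
    "Temp 38.3 degrees Celsius overnight.",
    "Afebrile, 98.6 F at discharge.",
]
# Range search: which documents mention a temperature of 38-40 Celsius?
hits = [d for d in docs if any(38 <= t <= 40 for t in normalize_temps(d))]
print(len(hits))  # 2
```

Because “101 F” and “38.3 degrees Celsius” normalize to the same scale, a single range query finds both, regardless of how the information was originally expressed.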