For too long, operational information has existed chiefly as the preserve of discrete business functions: gathered in a particular way by designated people, geared to a specific purpose, and managed by a particular system. Each information management project may have delivered its intended results via its own bespoke or best-of-breed solution, but ultimately the organization will have spent more than it needed to, only to arrive at a knowledge bank with limited application.
On top of this, the different parts of the business are likely to have collated overlapping data, each in a slightly different way, making it all but impossible to consolidate information. This is as true of product and regulatory information in life sciences organizations as it is of other knowledge sources across numerous other markets.
While this case-by-case approach to information and content management may have sufficed until now, it poses a challenge to progress—especially when it comes to companies’ ambitions for innovation and process automation. In life sciences, as in most other industries, it is now a commonly stated strategic aim for organizations to become more ‘data-driven’: able to react at speed and to predict, plan, and pre-empt future scenarios using increasingly sophisticated intelligence gleaned from everyday data.
That could mean signals about potential issues with new products, alerts to emerging gaps in the market, or insight into what constitutes a successful regulatory submission, along with the ability to skip straight to a more robust initial application auto-filled with high-quality, pre-approved content.
The pursuit of this new, data-enabled efficiency is driving life sciences firms to rethink the way they organize and manage routine information, and to combine it with broader intelligence to create something much more useful and powerful than the sum of its parts. By thinking beyond the immediate application, and creating a clearer line of sight from one side of their operations to the other, companies have an unprecedented opportunity to accelerate and improve processes.
Log Data Once, and Do It Well
Getting to this new, more dynamic, data-driven state starts with new thinking about the way information is captured and stored, with data sharing as a central consideration. As long as it is locked inside static documents, or proprietary, single-use database entries specific to a particular function or application, information’s value will be limited.
Yet this is a common restriction. Re-using information in other parts of the organization may involve manual data re-entry into other systems, or complex and expensive systems integration. Unless data-sharing capabilities were envisioned from the outset, organizations risk added complexity, higher costs, and compromised data integrity as they try to fashion something empowering and inclusive from systems that, by and large, were designed to stand alone.
Attempts to achieve more holistic regulatory information management (RIM) have highlighted the constraints and challenges caused by the traditional piecemeal approach to managing data.
A ‘Big Data’ Approach to RIM
Historically, the different elements of product information and regulatory intelligence have existed in pockets across the business, making it very hard for responsible teams to get a clear and accurate view of the current, correct status of anything at any given time. This is in contrast to the big data analytics world, where information is combined to create meaningful insights at speed, however large and diverse the original sources.
It’s here that the key to more dynamic RIM lies: the ability to slice and dice contributing data and content sources quickly, easily, and reliably to arrive at something insightful, meaningful, and of new value. To achieve this, companies need to move away from traditional ways of collating information and building content from it (forms, reports, regulatory submissions, or labels), because those methods are too manual, repetitive, and risk-laden: the very opposite of what organizations want and need.
The relevance of ‘big data’ thinking comes from the concept of a ‘data lake,’ which promotes a definitive central store for all related data in all its forms, ranging from raw source data to information and content that has been collated and prepared for a range of different tasks, such as form submission, reporting, visualization, analysis (potentially using some form of artificial intelligence), and product labelling.
The higher ideal is a go-to place for vetted information and ready-to-use content fragments (groups of approved data assets/parts of documents/images such as photos or logos) that can be mixed and matched at speed, and with confidence, to meet each new need.
So, instead of having different document stores and databases in each business function or department, each of which must be updated individually, the starting point for everything is a single master resource from which everything else flows. Each onward manifestation of that information will be correct, because every document and every use case draws its content from the same correct original.
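As a concrete (and highly simplified) sketch of this single-master-resource idea, the snippet below models a central store of approved content fragments in Python. The `ContentFragment` and `MasterStore` names, and the sample fragment, are invented for illustration and are not taken from any particular RIM product.

```python
# Minimal sketch: a single master store of approved content fragments.
# All names and sample data here are hypothetical illustrations.
from dataclasses import dataclass

@dataclass(frozen=True)
class ContentFragment:
    """An approved, reusable unit of content (data, text, or an image reference)."""
    fragment_id: str
    body: str
    version: int = 1
    approved: bool = True

class MasterStore:
    """Single source of truth: every downstream document draws from these fragments."""
    def __init__(self) -> None:
        self._fragments: dict[str, ContentFragment] = {}

    def publish(self, fragment: ContentFragment) -> None:
        # Only vetted content may enter the master store.
        if not fragment.approved:
            raise ValueError("only approved content enters the master store")
        self._fragments[fragment.fragment_id] = fragment

    def get(self, fragment_id: str) -> ContentFragment:
        return self._fragments[fragment_id]

# Two downstream uses (a label and a submission form) reuse the same original.
store = MasterStore()
store.publish(ContentFragment("indication-en", "For treatment of condition X."))

label_text = store.get("indication-en").body
submission_text = store.get("indication-en").body
assert label_text == submission_text  # both flow from the same correct original
```

Because both outputs are derived from one approved fragment, a correction made once in the store propagates to every onward use, rather than having to be re-entered system by system.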
The Object of the Exercise: Seamless Coordination
In technical terms, this is about treating approved data and combinations of data as a series of ‘objects’ held in a graph database where they can be called up and brought into play as needed across the company to support each new requirement. It’s the approach taken by enterprise resource planning (ERP) systems, which provide an integrated and continuously updated view of core business processes using common repositories maintained by a database management system, facilitating information flow between everyone with a requirement, wherever they are.
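A toy illustration of the ‘objects in a graph’ idea follows, with a plain Python dictionary standing in for a real graph database. All node identifiers and relationship names are invented for the example.

```python
# Hedged sketch: approved data as 'objects' (nodes) joined by typed
# relationships (edges), as a graph database would hold them.
# A plain dict stands in for the database; all identifiers are illustrative.
from collections import defaultdict

nodes = {
    "product:P1":     {"type": "product"},
    "dossier:EU-001": {"type": "submission"},
    "label:P1-EN":    {"type": "label"},
}

edges = defaultdict(list)  # (source, relation) -> [targets]

def link(src: str, relation: str, dst: str) -> None:
    edges[(src, relation)].append(dst)

link("dossier:EU-001", "describes", "product:P1")
link("label:P1-EN", "describes", "product:P1")

def related_to(target: str, relation: str) -> list[str]:
    """Find every object linked to `target` by `relation` (a one-hop query)."""
    return [src for (src, rel), dsts in edges.items()
            if rel == relation and target in dsts]

# Which artefacts describe this product? Every team queries the same shared graph.
print(related_to("product:P1", "describes"))
# ['dossier:EU-001', 'label:P1-EN']
```

The point of the graph shape is that a new requirement (say, a new market’s submission) becomes one more node linked to existing approved objects, rather than a fresh copy of the underlying data.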
Organizations that get to this point are at less risk of using the wrong information, filing unsatisfactory submissions, and creating excess work and cost each time they access and do something with product or regulatory information. Even authorized translations of content can be stored in the central repository as ready-to-use assets, to support international submissions, labelling and other purposes related to the original document.
Turning definitive master data into reusable content building blocks opens the door to greater automation, such as structured authoring, in which at least some sections of standard regulatory documents are automatically populated (pre-filled) with approved content assets. This expedites submissions and increases the likelihood that they will be accepted, because the information they contain has already been verified.
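The structured-authoring step can be sketched in a few lines of Python. The section headings and fragment keys below are simplified illustrations, not an actual regulatory template.

```python
# Sketch of structured authoring: pre-filling a standard document template
# from a store of approved fragments. Headings and keys are hypothetical.
from string import Template

approved_fragments = {
    "product_name": "Product X 10 mg tablets",
    "indication": "For treatment of condition Y in adults.",
    "storage": "Store below 25 °C.",
}

section_template = Template(
    "1. NAME OF THE MEDICINAL PRODUCT\n$product_name\n\n"
    "2. THERAPEUTIC INDICATIONS\n$indication\n\n"
    "3. SPECIAL PRECAUTIONS FOR STORAGE\n$storage\n"
)

# Populate the standard sections from verified content only; a missing
# fragment raises KeyError rather than silently producing an empty section.
document = section_template.substitute(approved_fragments)
print(document)
```

Because every substituted value comes from the approved-fragment store, the authored sections inherit their verified status instead of being retyped and re-reviewed for each submission.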
The picture we are moving towards is one not of ‘integrated’ RIM, but of ‘integral RIM,’ where systems have been architected from the outset to support the confident re-use of master data for multiple different purposes. It can help to think of RIM as the ERP of life sciences R&D and regulatory operations with value not only in ensuring information compliance, but also as the basis for process automation and acceleration, reducing costs, and aiding speed to market.
Finally, the more different entities this approach can take in—documents, data, processes, organizations, sites, and so on—the greater the scope for value-added benefits (e.g. via automation of everything from content preparation to smart data analytics), as well as reduced complexity. The technology is there to make all of this possible today.
The remaining challenge—if there is one—is persuading people to embrace change and to abandon the inefficient and unsafe workarounds that have become entrenched over preceding decades of duplicated data entry, manual reporting, document creation, and form filling.
About the Author
Romuald Braun is VP of Strategy for Life Sciences at AMPLEXOR. He holds a Master’s degree in Drug Regulatory Affairs and an Engineer’s diploma in Data Technology, and has spent the last 26 years working in compliance, document management, and content management roles in the industry, in client-side as well as consulting and project management positions.