One of the greatest challenges across the biologics development lifecycle is the race to deliver novel, high-quality therapeutics to patients faster and more cost-effectively.
Balancing specificity, speed, and cost is tough, and that's before you consider a further factor: alongside the constant pressure to innovate, companies must also meet exacting standards of regulatory compliance.
Biopharma organizations are investing heavily in innovative technologies to support biologics development.
We are seeing a rapid expansion of high-throughput process development, process analytics, predictive scale-down modeling, and continuous processing, all aimed at achieving greater specificity and getting products to market faster, for less.
The Specificity Problem
The specificity problem is driven by the expansion of “personalized” treatments. We are not yet at the stage where individualized treatments are available at scale, but we are in the age of cohort treatments, in which patient groups that share genomic, proteomic, or disease characteristics can be treated more effectively with “specific treatments.”
This is exemplified in oncology, where specific treatments are used effectively rather than the scattergun approach of either giving patients everything or simply providing the treatments that worked for others.
Essentially, the biotech industry has realized that outcomes are far better when specificity is placed at the root of the drug development process. This is helped by companion diagnostics (CDx), which establish whether a patient will respond to a new treatment before it is given.
Specificity also makes treatment more cost-effective, as it is only given to those who will respond.
However, designing in specificity at the early stages is extremely tricky. It requires huge amounts of patient-based data and analysis to define the specificity of the target in a patient population and to develop the corresponding biomarker and CDx.
This is a big area of expansion for data mining and data analytics providers, who focus on aggregating data from many sources, normalizing it to a common semantic standard, restructuring it to allow data analytics, and using machine learning (ML) to zero in on anything of potential interest.
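As a minimal sketch of the aggregate-normalize-analyze pattern described above, the snippet below pools patient records from two hypothetical sources, maps their source-specific field names onto one common schema, and applies a toy analytics step. Every source name, field name, and reference range here is illustrative, not a real semantic standard.

```python
# Hypothetical mapping from each source's field names to a common schema.
FIELD_MAP = {
    "site_a": {"patient": "patient_id", "dx": "diagnosis", "hb": "hemoglobin_g_dl"},
    "site_b": {"id": "patient_id", "condition": "diagnosis", "hgb": "hemoglobin_g_dl"},
}

def normalize(record: dict, source: str) -> dict:
    """Rename one record's source-specific fields to the common schema."""
    mapping = FIELD_MAP[source]
    return {mapping[k]: v for k, v in record.items() if k in mapping}

def aggregate(batches: dict) -> list:
    """Pool records from every source into one normalized list."""
    pooled = []
    for source, records in batches.items():
        pooled.extend(normalize(r, source) for r in records)
    return pooled

def flag_of_interest(records, low=12.0, high=17.5):
    """Toy 'analytics': flag records whose value falls outside a reference range."""
    return [r for r in records if not (low <= r["hemoglobin_g_dl"] <= high)]

if __name__ == "__main__":
    batches = {
        "site_a": [{"patient": "P1", "dx": "CLL", "hb": 10.9}],
        "site_b": [{"id": "P2", "condition": "DLBCL", "hgb": 14.2}],
    }
    pooled = aggregate(batches)
    print(flag_of_interest(pooled))  # only P1 falls outside the range
```

In practice the "analytics" step would be an ML model rather than a threshold, but the prerequisite is the same: records from every source must first share one schema.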
Specificity is also involved in the design of biologics [large protein-based molecules such as monoclonal antibodies (mAbs), antibody fragments (Fabs), antibody-drug conjugates (ADCs), etc.]. These can be designed to be highly specific, but, to date, the tools used have been screening-based: large numbers of candidate mAbs are generated at random and tested for activity.
We are entering the age of “design over random,” in which the structure of mAbs and the way they bind to their targets are understood well enough to allow them to be designed de novo (from the ground up).
This jump from random to designed is only just emerging, but it has already been helped by the advent of high-performance computing in the cloud, which lets anyone leverage, on a pay-as-you-go commercial model, compute power previously available only to government agencies and large corporations that could afford to build it themselves.
Put simply, the specificity problem is tractable, and data is at the heart of solving it. Whether we are talking about patient stratification or molecular-level specificity of the treatment, both can be addressed by a data management strategy that thinks “really big.”
The Cost of the Problem
Although we don’t like to admit it, we frequently see the challenges created by managing highly complex R&D operations on an outdated foundation of paper, Excel, and manual processes.
Inefficient paper and Excel processes alone can cost upwards of 52 days per scientist per year. And, on average, 10 to 20 percent of development work is repeated because of data integrity and accessibility issues, adding time and materials costs.
Cost creep is everywhere, but it is important to understand the chain. Delays in research, such as experiments repeated because of lost data, cost money (around $5,000 per experiment) and push back the date a drug can go to market (estimated at $1 million per day for most blockbuster drugs).
Even smaller costs and time delays add up to significant impact: not just the incurred costs of researchers, labs, and reagents, but also opportunity costs.
Data Integrity and Quality From the Ground Up
Not only are these profound operational costs adding to the pressure to develop therapeutics faster and at lower cost; they are also leading to data integrity and quality problems that prevent the process insight needed to drive innovation.
The result is inefficient processes built on data management platforms that are not fit for purpose. These processes limit organizations’ ability to realize the benefits of innovations in continuous process tools and continuous manufacturing techniques, all of which are rapidly becoming common practice in biopharma.
Earlier we talked about the emerging “specificity” approach to R&D in biologics. It is critical to remember that the foundation of this new world is data: not just its quantity, but its quality too.
Many believe that the age of analytics, machine learning, and artificial intelligence (AI) will eliminate the need to be so fastidious about data quality. This is, however, a foolish assumption.
To be effective, analytics tools need large amounts of good-quality data. Without it, we end up with the classic “rubbish in, rubbish out” scenario.
A Better Way
To truly push the boundaries of innovation, biopharma organizations need to remove these bottlenecks and bring process and data management into the digital age. Imagine seamlessly executing workflows while having instant access to the data needed to better understand your processes and make informed decisions.
These savings, and the better decision making they enable, will free investment and resources to develop processes that support the delivery of novel therapeutics to patients faster and more cost-effectively.
Deploying an enterprise-ready platform, specifically designed for biologics data and workflow management, gives scientists the opportunity to assess and optimize their biologics and the processes used to manufacture them, leading to dramatic improvements in quality and productivity.
The right technology can also provide a solid foundation for organizations to get far more out of their data by enabling them to adopt cutting-edge technologies like ML and AI.
Good data can also facilitate collaboration within organizations and with their external partners, driving truly transformational change and helping organizations deliver novel, high-quality, specific therapeutics to patients faster and more cost-effectively.