After years of study, bispecific antibodies and antibody-drug conjugates (ADCs) are rapidly coming to the forefront of therapeutic research. The modular architecture of antibodies offers the potential to create highly specific therapeutics. More than 35 formats of bispecific antibodies and derived constructs are under development by biopharma companies, and several are approved for therapy. As this next wave of antibody-based therapies gains momentum, biochemists have begun to create even more innovative molecules with additional modes of action, including multispecific antibodies and other so-called Frankenmolecules.
However, informatics technologies have not kept pace with these scientific innovations, making it difficult for professionals in pharmaceutical companies to work with novel constructs in a unified, efficient way. These professionals face tough questions: How do we store information about new biological entities so that it is searchable and sharable? How do we register them, and what is the accepted standard for regulatory submission? How do we ensure we have complete information about these molecules so we can make educated decisions?
These challenges are intensified by industry trends of outsourcing and collaboration, which depend on the efficient exchange of information. Ineffective data exchange among collaborators and partners can have serious consequences including duplication of experiments, holes in IP protection, and countless inaccuracies that stem from poor data integrity.
Companies face additional hurdles in working with bispecific antibodies and ADCs, including manufacturing control issues, chemical and structural stability, production yield, homogeneity, and purity. There is still much that is not known about how some of these biologicals will behave as they move downstream into production and even less known about how they may behave in clinical settings. And while some organizations have demonstrated success with cost-effective manufacturing, others may find it difficult to keep budgets in line at large scale.
How can our industry address these challenges? It’s unlikely that any single life science company, technology vendor or academic group will be able to solve these issues unaided. But together we can establish systems and processes that make working with these new entities more efficient. As Yale professor H.E. Luccock said, “No one can whistle a symphony. It takes a whole orchestra to play it.”
The Need for a Unified Data Model
Information is handed off many times during the development of therapeutics. But if the systems that support these handoffs do not contain complete information, the process can resemble the children's game of telephone, in which the message degrades a little with each transfer. For pharmaceutical organizations, this can lead to costly mistakes.
A key to effectively defining pharmacological information lies in comprehensive entity registration. While small molecule registration is well established, bioregistration capabilities are only now coming of age. As organizations select biopharmaceuticals as drug candidates, it becomes more challenging to maintain data integrity without appropriate registration. In the recent past, some companies shoehorned very large and complex biologicals into existing small molecule registration systems simply to obtain an ID number for integration across information systems. Consequently, the rich biological genealogy data associated with proper biological registration was missing, and professionals played the frustrating game of telephone as incomplete information was handed off among collaborators and teams.
Without a standard way to define and archive novel biopharmaceutical entities, problems may arise that persist from research and discovery all the way through development and manufacturing. The cost of failure in late stages is much higher than in early stages. So how do we get things right from the beginning? What’s needed is a unified data model that can support today’s scientific innovations in biotherapeutics so information can move efficiently through systems to support intelligent decision-making.
Along with a unified data model, the industry should establish standard operating procedures that begin in discovery and are correlated with processes throughout the continuum including the quality tests that are done in development. Discovery researchers should perform the same types of tests and speak the same language that professionals in processing and manufacturing use to get these molecules to trial.
The submission and compliance processes for these biologics may be more complex than for small molecules because they are living systems. Regulatory bodies ask for more insight into what happened upstream and quality control is expected earlier in the process. We need traceability so we can see all the way back to the earliest stages of research.
Establishing New Language and Standards
It is essential to make innovation easier, supported by processes and systems that enforce accuracy and ensure data integrity. Pharmaceutical companies, technology vendors, and other groups have been coming together in a pre-competitive environment to develop solutions that can be shared across the industry. The objective has been to help everyone get better answers faster so these new molecular constructs can improve the quality of life for patients. That comes down to not only a registration system for biotherapeutics but also terminology and ontology that the industry agrees on.
One good example of that is the HELM project at the Pistoia Alliance, a global, nonprofit alliance of life science companies, vendors, publishers, and academic groups that work together to lower barriers to innovation in R&D. The Hierarchical Editing Language for Macromolecules (HELM) creates a standard for defining antibody drug conjugates and other types of novel entities.
HELM is emerging as a standard in the industry that will make it more straightforward to register, search, view, and share data for biomolecules. HELM reduces registration time and prevents proliferation of bad data. Large molecule structures can be captured in a compact notation, tracked, and accessed more easily. A registration system based on HELM can provide an unambiguous label for an antibody or ADC that can be used to track its progress through discovery and development. It promotes more effective collaboration by making it easier to associate data and build a complete picture of a therapeutic candidate.
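To make the idea of a compact, unambiguous notation concrete, the sketch below parses a simplified HELM-style string in Python. The example string and parser are illustrative only: real HELM strings can contain branch monomers, inline annotations, blob polymers, and a version suffix that this minimal sketch does not handle.

```python
def parse_helm(helm: str) -> dict:
    """Split a simplified HELM string into its polymers and monomer lists.

    A HELM string begins with a list of simple polymers such as
    PEPTIDE1{A.G.C} separated by '|', followed by '$'-delimited sections
    for connections, polymer groups, and annotations. This sketch reads
    only the first (polymer) section.
    """
    sections = helm.split("$")
    polymers = {}
    for simple_polymer in sections[0].split("|"):
        # e.g. "PEPTIDE1{A.G.C.K}" -> name "PEPTIDE1", monomers A, G, C, K
        name, _, rest = simple_polymer.partition("{")
        polymers[name] = rest.rstrip("}").split(".")
    return polymers

# A toy ADC-like construct: a short peptide conjugated to a chemical linker.
adc_like = "PEPTIDE1{A.G.C.K}|CHEM1{[PEG2]}$PEPTIDE1,CHEM1,4:R3-1:R1$$$"
parsed = parse_helm(adc_like)
```

Even this toy example shows why the notation helps registration: the entire construct, including the conjugation point encoded in the connection section, is captured in one searchable line of text rather than a drawing or an ad hoc description.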
Industry leaders are also working together with Allotrope Foundation to address data management challenges that arise from non-standardized file formats. Allotrope Foundation is creating an open, publicly available framework for the analytical laboratory. The framework will comprise software tools and libraries to utilize, implement, and integrate data standards into analytical laboratory workflows.
Communication among applications and systems is another opportunity for improvement in working with biomolecules. In other industries, software applications have application programming interfaces (APIs) built into them so partner apps can talk to each other in a way that is seamless to end users. Our industry lags behind on this. IT directors and company leaders are frustrated with applications that function well for their defined niche but do not integrate with the overall workflow, creating data silos and perpetuating incomplete information handoffs. There is a need for scientific vendor applications to include APIs out of the box so they can communicate with other applications and systems. This will improve the traceability that is so crucial for successful work with biopharmaceuticals.
Some life science organizations are moving to a platform approach to information technology to reduce overhead on common services. But there will always be a need to develop and deploy niche applications to fill gaps. Those applications should have APIs. An information platform should be open so it is friendly with third-party apps. Organizations that do not adopt a platform approach need a way to enforce data integrity across all processes. That is accomplished with APIs, unified data models, and standardized ontology.
A Rising Tide Lifts All Boats
Industry leaders have helped establish best practices by working together in pre-competitive environments to discuss development of standard processes and systems that enforce accuracy and improve data integrity. Recognized by the FDA as an acceptable submission format, HELM is nearing a critical mass of users as a standard notation to represent therapeutic large molecules in bioregistration systems. It is becoming more common for software developers to include APIs that make it easier to provide registration of chemicals and biologics from third-party applications such as electronic lab notebooks (ELNs).
Many questions about new antibody-based therapeutic molecules remain unanswered and manufacturing and production challenges persist. The industry still needs to address manufacturing control issues, chemical and structural stability, cost effectiveness at large manufacturing scale, as well as other concerns. Perhaps organizations can come together pre-competitively to help the industry advance on these issues as they did with definition standards and a registration system.
Data integrity is essential as life science companies endeavor to create highly specific therapeutics with greater potency. If organizations can implement a unified data model so information can move through multiple domains and systems from early discovery through production, everyone can make better decisions. When everyone has complete information there is less room for error, tighter control throughout the entire process, and faster delivery of innovative therapeutics to patients.
To learn more about Dassault Systèmes BIOVIA, click here.