Technology that enables an entire production process—from R&D to manufacturing—to be simulated digitally may be the future of digitalization. Process simulation, which allows process validation and testing to be performed in simulation mode before anything is physically built, has the potential to reduce risk and downtime, and to fill the communication gap in drug manufacturing left by traditional methods of recipe formulation transfer.
The development of truly global product specifications may also be an important strategy for digitalization moving forward. Doing so will allow a manufacturer to make informed decisions about where to produce a product by accounting for a variety of variables, from facility capacity to the economic or political climate of the site's location.
Siemens Life Science Director Todd Lybrook spoke about the company’s simulation software portfolio at the recent Adents Serialization Summit in Philadelphia.
Pharmaceutical Processing sat down with him afterward to further discuss the industry advantages of process simulation, as well as the exciting implications of taking a site configuration for a manufacturing facility and generating global product specifications to enable smart decision making when choosing where to produce a product.
Below are edited excerpts from our conversation following the summit.
Q: What does the process of developing a simulation for drug manufacturing look like? Who is involved?
Todd Lybrook: The Siemens digitalization approach is about connecting top-level IT systems down to the shop floor, across ISA-95 Levels 4, 3, 2, 1, and 0, with best-of-breed applications. The applications are configurable rather than customized; our systems are configurable off-the-shelf (COTS).
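The ISA-95 layering Lybrook refers to can be summarized as a rough lookup table. The level-to-system associations below follow common ISA-95 usage; the specific applications at each level vary by vendor and site.

```python
# The ISA-95 levels referenced above, with typical system types at each.
# Associations follow common ISA-95 usage; exact applications vary by site.
ISA95_LEVELS = {
    4: "Business planning and logistics (e.g. ERP)",
    3: "Manufacturing operations management (e.g. MES/eBR)",
    2: "Supervisory control (e.g. SCADA, HMI)",
    1: "Sensing and manipulation (e.g. PLCs, controllers)",
    0: "Physical production process (sensors, actuators)",
}

# Print the stack top-down, Level 4 to Level 0
for level in sorted(ISA95_LEVELS, reverse=True):
    print(f"Level {level}: {ISA95_LEVELS[level]}")
```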
Configurable applications raise the question of who is best suited to work on the teams that configure those solutions at the different layers. The best resources are typically business analysts who understand the process flow, not necessarily programmers, because there is no software build.
In terms of the product definition, the R&D team develops the formulations, drawing on diagnostics, research, or a current product they want to improve upon, so the source of the product definition is typically the R&D department.
Through the vehicle of simulation, you’re able to merge the product definition of the R&D team with the process definition of the business analysts because they’re all working collaboratively on the same simulation application. That is one significant way of pulling that wall down between R&D and manufacturing.
Q: What type of products can be tested using simulation platforms?
Lybrook: There’s no limit. Usually, R&D develops the product and then they throw it over the wall to manufacturing, and then manufacturing must figure out the best way to produce the product on their own, looking at automation first and so on. We don’t do that.
We help the groups work together by looking at the product specifications, analyzing those products, and breaking them into families, which are consistent sets of process steps. Then, we help to create a generic, class-based process flow that represents the family, and we build that process flow into our manufacturing execution system (MES) electronic batch record (eBR) tool.
That way, there's one generic workflow, or flow map, that can truly deliver 50 to 100 recipes as long as they follow the same process steps. We call this process normalization.
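The normalization idea can be sketched conceptually: one fixed sequence of steps serves an entire product family, and each recipe is just that flow instantiated with product-specific parameters. All names below are invented for the illustration, not Siemens APIs.

```python
from dataclasses import dataclass

# Hypothetical sketch of "process normalization": one generic, class-based
# process flow serves many recipes that share the same steps; only the
# parameter values differ per product.

@dataclass
class ProcessStep:
    name: str          # e.g. "blend", "granulate", "compress"
    parameters: dict   # filled in per product at instantiation time

# The generic flow for a product family (step names are illustrative)
FAMILY_FLOW = ["dispense", "blend", "granulate", "dry", "compress"]

def instantiate_recipe(product_params: dict) -> list:
    """Merge a product definition (parameters keyed by step) with the
    normalized process flow to produce one executable recipe instance."""
    return [ProcessStep(step, product_params.get(step, {}))
            for step in FAMILY_FLOW]

# Two different products, one validated process model:
recipe_a = instantiate_recipe({"blend": {"rpm": 120, "minutes": 10}})
recipe_b = instantiate_recipe({"blend": {"rpm": 90, "minutes": 15}})
```

In this sketch, validating the five-step flow once covers the whole family; each product contributes only its parameter set.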
Again, that bridges the gap between R&D and the people designing the process.
Next, execution of the product and process definition can be simulated by connecting the model to various simulators. Siemens product lifecycle management (PLM) tools allow for simulation of the product definition. The company also has an internal simulator inside of our MES eBR solution to test the process definition workflow execution. We can connect to an I/O simulator to test against various I/O conditions.
These tools speed up new product introductions and tech transfer from R&D to "scale-up" pilot manufacturing, and they speed up the transfer time to full-scale manufacturing.
Q: You say that these simulations can be pre-validated. What does pre-validation mean when something is virtual?
Lybrook: There is a concept of pre-validation within the COTS applications themselves, meaning since they’re configurable, the base functions are pre-validated from our development centers.
Another aspect of pre-validation comes when you connect these configurable applications together. You configure them, you connect them to the simulations, and then you pre-validate some of those base functions to the simulators, which then reduces your validation load when you get to the real world.
Q: So it's not about creating something that's foolproof? You're trying to reduce risk.
Lybrook: Yes, it’s about reducing risk, and reducing time to validate, which reduces cost.
Q: How long does the simulation process usually take?
Lybrook: The amount of time it takes to design and build a process simulation changes depending on the complexity of the manufacturing process and the product itself, but let’s say that you have 100 product recipes. The traditional approach would require you to validate all 100, one at a time, through all of the process steps.
Based on the Siemens approach, if the 100 recipes could be “normalized” into one generic, class-based process flow model, you could greatly reduce validation. When a specific product is scheduled, the product and process definitions are merged, creating an instance of the normalized process model.
Therefore, you need to validate the merge of the 100 recipes, but only validate one instance of the process model through all of the process steps. This approach can reduce validation and deployment time by at least 50 percent.
Based on the Siemens approach including simulation, you cut validation and deployment time down again by another 50 percent. Simulation inside the applications, simulations for I/O, and simulations to model product and process all can deliver reductions in validation and deployment time. You end up at somewhere around 25 percent of the original time compared to the old approach.
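The arithmetic behind those figures can be laid out in a few lines. The percentages come from the interview; the recipe count and the hours-per-recipe figure are invented for the example.

```python
# Back-of-the-envelope illustration of the validation-time savings
# described above. Percentages are from the interview; the hours figure
# is an assumed placeholder.

recipes = 100
hours_per_recipe = 40                     # assumed effort to validate one recipe
traditional = recipes * hours_per_recipe  # validate all 100, one at a time

normalized = traditional * 0.5            # ~50% saved by normalizing into one flow
with_simulation = normalized * 0.5        # another ~50% saved via simulation

print(with_simulation / traditional)      # 0.25, i.e. ~25% of the original time
```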
Q: You mentioned there are no limits to the technology, but what are some of the challenges associated with it?
Lybrook: There are no limits. The challenge is that you can build and simulate anything, but the question is: should you? It's a question of time and cost. You can overdo it and go too far, spending time and money until you've spent all of your development time in simulation mode and haven't gotten to the real world yet. And that line is different for every technology, process, method, company, and product specification.
Q: What tools do you have that are specific for pharma?
Lybrook: We have a tool specifically for pharma, built into our MES eBR platform, that allows us to design this normalized, class-based workflow and merge it with product formulas in simulation mode. An internal simulator built into the platform enables faster creation of those recipes and process workflows.
Q: How does the Siemens simulation technology integrate into a company’s existing software?
Lybrook: There are various simulation options—PLM simulation of the product itself, PLM simulation of the process, simulation inside the MES eBR solution, and simulation of the control layer to the applications above it through I/O simulators. Any of these tools can be connected to a company’s existing infrastructure.
Q: Where do you still see a gap in pharma manufacturing?
Lybrook: Nobody today is able to take a site configuration for a manufacturing facility and actually have a product specification that is global in order to make a decision regarding where they want to produce this product. If, for example, you have the choice of four different manufacturing facilities around the world, but you want to make a real-time decision about where to manufacture it, you’d need feedback data analytics from the Cloud or a global data warehouse feeding into your decision making.
To produce a product, you need a product specification and a process model. The product specification can be modeled at a global level, but the process model must include normalized global components and site-specific components. These components are modeled into a global general recipe (or g-recipe), and the sites need to be aligned to be able to accept this g-recipe. If the decision is to produce at site A, you must transform this recipe from a global g-recipe to a site recipe. The site differences will drive various recipe transformation rules.
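The g-recipe-to-site-recipe transformation can be sketched as a rules-driven mapping: the global recipe holds normalized steps, and per-site rules bind each generic step to that site's equipment. Every name here is hypothetical, chosen only to make the idea concrete.

```python
# Hypothetical sketch of a general-recipe-to-site-recipe transformation.
# The g-recipe holds normalized steps; per-site transformation rules map
# each generic step onto site-specific equipment. Names are illustrative.

G_RECIPE = [
    ("blend", {"minutes": 10}),
    ("fill", {"units": 5000}),
]

# Transformation rules: generic step -> site-specific equipment class
SITE_RULES = {
    "site_a": {"blend": "BlenderModelX", "fill": "FillLineA"},
    "site_b": {"blend": "BlenderModelY", "fill": "FillLineB"},
}

def to_site_recipe(site: str) -> list:
    """Apply one site's transformation rules to the global g-recipe."""
    rules = SITE_RULES[site]
    return [{"step": step, "equipment": rules[step], **params}
            for step, params in G_RECipe] if False else [
            {"step": step, "equipment": rules[step], **params}
            for step, params in G_RECIPE]

site_a_recipe = to_site_recipe("site_a")
site_b_recipe = to_site_recipe("site_b")
```

The same global definition yields a different executable recipe at each site, which is what lets the "where to produce" decision stay global while execution stays local.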
A general recipe to site recipe philosophy requires applications that support a global model. The Siemens model includes a global model of the manufacturing sites, class-based, generic process flows and objects, a normalized and reusable process object library, a normalized and reusable equipment library, and an externally “mappable” MBR module to accept master item definitions, formulas, process routings, equipment classes, CPPs/CQAs (critical process parameters/critical quality attributes), etc.
Q: When do you see this becoming available?
Lybrook: Siemens is poised and ready to do that today, and nobody else is. We are ready to deliver the philosophy of recipe transfer in the pharma process vertical, and we are already delivering it today in the medical device vertical.
Smart decision making at a global level to produce product in a local region faster, cheaper, and better based on capacity, economics, politics, and so on—that is the next big thing.
Q: What are the most exciting aspects of the ability to transfer recipes in this way?
Lybrook: The ability to produce products anywhere in the world based on real-time information. With smart decision making during recipe transfer, you can assess speed to manufacture, lowest cost to manufacture, and material and equipment availability. You can also expect lower IT application and support costs since applications are normalized across sites. This approach promotes faster new product introductions.
Q: A lot of speakers at the summit discussed how pharmaceutical companies take a long time to get on board with digitalization and new technology. Why do you think that is?
Lybrook: The pharma industry is conservative and risk-averse by nature, mainly because applications and solutions are locked down by validation, with oversight by the FDA. New digitalization philosophies and technologies are sometimes perceived as higher risk because they are new, or unproven.
What a number of people in the pharma industry don’t fully understand yet is that many digitalization philosophies and technologies actually can reduce risk. Education is a big part of this transformation.
Q: What should a drug manufacturer understand about digitizing its facilities and production?
Lybrook: First of all, manufacturers will digitize for specific reasons—to better analyze data, gain efficiency, produce more, reduce legacy IT systems, etc. Also, when digitizing, look holistically, not just at one or two applications or systems.
Consider mapping your applications and data flows to the ISA-S95 model (See Editor’s Note below). This will help you to organize your applications and provide a consistent approach to finding gaps and pain points. And let data drive your decisions. Focus on the CQAs to release your products.
(Editor's Note: Developed for global manufacturers, ISA-S95 is an international standard for developing automated interfaces between enterprise and control systems. The objectives of ISA-S95 are to provide consistent terminology for supplier and manufacturer communications, consistent information models, and consistent operations models to ensure clear application functionality and information use.)
This story can also be found in the September/October 2018 issue of Pharmaceutical Processing.