Andrew Anderson
Vice President, Innovation and Informatics Strategy, ACD/Labs

In the last few years, global regulatory authorities have raised expectations for manufacturers to incorporate quality-by-design (QbD) principles in support of risk management. While QbD affords many important long-term benefits for a firm's overall risk management, these expectations are having a dramatic impact on product development groups and their supporting corporate informatics infrastructure.

One of the major impacts of using QbD principles in process development is the requirement to establish an acceptable quality target product profile (QTPP). This is accomplished through:
•    Evaluation of input material quality attributes (MQA)
•    Evaluation of the quality impact of critical process parameters (CPP)
•    Consolidated evaluation of every MQA and CPP for all input materials and unit operations

MQA assessment requires careful consideration of input materials to ensure that their physical and (bio)chemical properties or characteristics are within appropriate limits, ranges, or distributions. For CPP assessment, unit operation process parameter ranges must be evaluated to determine the impact of parameter variability on product quality. The contribution of each unit operation in any pharmaceutical or biopharmaceutical manufacturing process must be assessed, whether it is a synthetic step in a chemical process (e.g., filtering, stirring, agitating, heating, or chilling) or part of product formulation.
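
As a simplified illustration of what a consolidated MQA/CPP evaluation might look like in practice, the short Python sketch below collects attributes and parameters for a single unit operation and reports whether each measured attribute falls within its acceptable range. The attribute names, limits, ranges, and risk ratings are hypothetical and do not represent any particular product or platform.

```python
# A minimal, illustrative sketch (not a vendor implementation) of consolidating
# MQA and CPP evaluations for one unit operation. All names, ranges, and risk
# labels below are hypothetical placeholder values.
from dataclasses import dataclass

@dataclass
class MQA:
    """An input material quality attribute with its acceptable range."""
    name: str
    measured: float
    low: float
    high: float

    def within_limits(self) -> bool:
        return self.low <= self.measured <= self.high

@dataclass
class CPP:
    """A critical process parameter with a qualitative quality-impact rating."""
    name: str
    set_point: float
    proven_range: tuple[float, float]
    quality_impact: str  # e.g. "low" / "medium" / "high"

def consolidated_report(unit_op: str, mqas: list[MQA], cpps: list[CPP]) -> None:
    """Print a simple consolidated view of all MQAs and CPPs for one unit operation."""
    print(f"Unit operation: {unit_op}")
    for m in mqas:
        status = "PASS" if m.within_limits() else "OUT OF RANGE"
        print(f"  MQA {m.name}: {m.measured} (limits {m.low}-{m.high}) -> {status}")
    for p in cpps:
        lo, hi = p.proven_range
        print(f"  CPP {p.name}: set point {p.set_point}, proven range {lo}-{hi}, "
              f"quality impact {p.quality_impact}")

# Hypothetical example data for a single crystallization step.
consolidated_report(
    "Crystallization",
    mqas=[MQA("water content (%)", 0.4, 0.0, 0.5)],
    cpps=[CPP("cooling rate (deg C/min)", 0.5, (0.2, 1.0), "high")],
)
```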

An impurity control informatics platform should give users the ability to manage disparate impurity knowledge in a single, searchable environment. Chemical structure and material identification, related analytical data, process conditions, unit operations, and batch/lot information should all be readily accessible to the contributing groups and departments that need them. "Process maps" that allow visual comparison of molecular composition across unit operations make it simple to review process changes and to identify where in a process impurity control measures must be put in place to assure effective and efficient process control (a simplified sketch of such a map follows the list below). Automatic update of relative material quantities from LC/MS data reduces error-prone manual transfer of data between systems, and keeping the underlying analytical data live lets users confirm the veracity of numerical or textual interpretations and processed results without opening separate applications. The platform should also store the context of each experiment, the expert interpretation, and the decisions that resulted from it. Dynamic visualization and chemically intelligent searching of this assembled, aggregated information preserve data integrity while supporting decision-making. Examples of decisions that can be made more efficiently include:
•    Risk assessment conclusions pertaining to impurity onset, fate, and purge
•    Comparative assessments of different purification methods
•    Comparative assessments of different control strategies
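
To make the "process map" idea more concrete, the following Python sketch converts LC/MS peak areas for each unit operation into area-percent values and prints the relative level of each component across the process. The component names, step names, and peak areas are fabricated purely for illustration; a real platform would draw these values directly from processed LC/MS data.

```python
# A simplified, hypothetical sketch of a "process map": relative component levels
# (area percent from LC/MS peak areas) tracked across unit operations so that
# impurity onset, fate, and purge can be seen at a glance. All data are invented.

# Peak areas per component, keyed by unit operation (hypothetical values).
peak_areas = {
    "Step 1 - Coupling":        {"API": 9_500, "Impurity A": 350, "Impurity B": 150},
    "Step 2 - Crystallization": {"API": 9_800, "Impurity A": 120, "Impurity B": 80},
    "Step 3 - Drying":          {"API": 9_850, "Impurity A": 110, "Impurity B": 40},
}

components = ["API", "Impurity A", "Impurity B"]

# Convert raw areas to area percent within each unit operation.
process_map = {}
for step, areas in peak_areas.items():
    total = sum(areas.values())
    process_map[step] = {c: 100.0 * areas.get(c, 0.0) / total for c in components}

# Print a simple table: rows are components, columns are unit operations.
header = "Component".ljust(12) + "".join(s.split(" - ")[0].ljust(10) for s in process_map)
print(header)
for c in components:
    print(c.ljust(12) + "".join(f"{process_map[s][c]:>7.2f}%  " for s in process_map))
```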

Furthermore, with a view to preserving the rich scientific information it contains, an ideal informatics knowledge management system will also limit the need for data abstraction. Data abstraction in analytical chemistry is the process whereby spectral and chromatographic data are reduced from interactive data to the images, text, and numbers that describe and summarize results. While data abstraction serves a purpose, reducing voluminous data to manageable pieces of knowledge, it also brings limitations, since important details, knowledge, and contextual information can be lost.
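
The following toy Python example shows the trade-off: a simulated spectrum of several thousand data points is abstracted to a handful of peak positions and intensities. The summary is compact and easy to search, but the baseline, peak shapes, and any minor features below the chosen threshold are lost. The spectrum and threshold are synthetic assumptions, not instrument output.

```python
# A toy illustration of data abstraction: a full (simulated) spectrum is reduced
# to a short list of peak positions and intensities. The compact summary is easy
# to store and search, but peak widths, baseline, and minor features are discarded.
import math

# Simulate a simple spectrum: three Gaussian peaks on a flat baseline.
def intensity(x: float) -> float:
    peaks = [(1700.0, 1.0, 8.0), (1250.0, 0.6, 6.0), (950.0, 0.3, 5.0)]  # center, height, width
    return sum(h * math.exp(-((x - c) / w) ** 2) for c, h, w in peaks)

xs = [600 + i * 0.5 for i in range(3200)]   # axis from 600 to ~2200 cm-1
ys = [intensity(x) for x in xs]             # ~3200 interactive data points

# Abstraction: keep only local maxima above a threshold as (position, intensity) pairs.
peak_list = [
    (round(x, 1), round(y, 3))
    for x, y, prev, nxt in zip(xs[1:-1], ys[1:-1], ys[:-2], ys[2:])
    if y > 0.1 and y > prev and y > nxt
]
print(f"{len(ys)} data points reduced to {len(peak_list)} peaks: {peak_list}")
```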

An example of data abstraction, and of its inherent risks, is identity specification and testing. One of the classic standards of identity is that a spectrum obtained for any batch of a substance matches that of a reference standard of the same substance. The specification can at times be limited to, for example, a set of diagnostic spectral features, which can then be abstracted to a discrete set of numerical values. Such abstraction, however, carries risk. If unanticipated spectral features that are not covered by the specification appear in the spectrum of an adulterated substance, that substance can still pass the identity test: the expected peaks are found and the material passes, while the unexpected peaks that signal a new impurity are never evaluated.
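
The deliberately naive Python sketch below makes this failure mode concrete. The expected peak positions, tolerance, and the extra peak in the "adulterated" batch are hypothetical; the point is only that a test checking for expected peaks alone will pass a spectrum containing an unexplained new peak, while a stricter test that also questions unexpected peaks will flag it.

```python
# A deliberately naive identity check, sketched to show the risk discussed above:
# the specification is abstracted to a short list of expected diagnostic peaks,
# and the naive test only confirms those peaks are present. Positions, tolerance,
# and example data are hypothetical and for illustration only.

EXPECTED_PEAKS = [1700.0, 1250.0, 950.0]   # abstracted identity specification (cm-1)
TOLERANCE = 2.0                            # allowed deviation in peak position

def naive_identity_test(observed_peaks: list[float]) -> bool:
    """Pass if every expected peak is found; unexpected peaks are never examined."""
    return all(
        any(abs(obs - exp) <= TOLERANCE for obs in observed_peaks)
        for exp in EXPECTED_PEAKS
    )

def stricter_identity_test(observed_peaks: list[float]) -> bool:
    """Also fail if any observed peak is not accounted for by the specification."""
    no_unexpected = all(
        any(abs(obs - exp) <= TOLERANCE for exp in EXPECTED_PEAKS)
        for obs in observed_peaks
    )
    return naive_identity_test(observed_peaks) and no_unexpected

# An adulterated batch: all expected peaks are present, plus an extra peak at ~1540
# that could indicate a new impurity.
adulterated = [1700.3, 1540.2, 1249.8, 950.1]
print(naive_identity_test(adulterated))     # True  -> passes despite the extra peak
print(stricter_identity_test(adulterated))  # False -> the unexplained peak is flagged
```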

To effectively leverage QbD and the body of internal knowledge for risk mitigation, firms should consider informatics platform innovation, particularly platforms that reduce data abstraction, manual data assembly, and human data preparation.
