
Analytical Method Validation: Expert Panel Discussion on ICH Q2 Revision 2 Practical Implementation

Jun 5, 2025 | Summary

This summary accompanies a recent expert forum discussion organised by BioQC, and we appreciate the panellists’ time and expertise.

This expert panel discussion addresses practical aspects of implementing ICH Q2 Revision 2.

Risk Assessment in Method Validation: Formal vs. Informal Approaches

The question of whether formal risk assessment is required before method validation generates considerable discussion among practitioners. Risk assessment represents sound scientific principle—thinking carefully about appropriateness before initiating any activity constitutes risk assessment at its core. Scientists should always perform some form of risk assessment before starting work, though the formality of this assessment remains flexible.

The distinction between formal and informal risk assessment proves important. ICH Q14 explicitly states that risk assessments can be either formal or informal, providing regulatory flexibility. Formal risk assessment, whilst not mandatory for submission, offers substantial benefits. Documenting thought processes helps focus thinking and facilitates validation protocol preparation. These documented assessments can remain internal rather than requiring submission to regulatory authorities, though maintaining them provides valuable support for validation decisions.

ICH Q14 enables risk assessment through various tools, with Method Operable Design Region (MODR) studies representing one important example. MODR studies essentially constitute risk assessments examining which method parameters might influence results. If substantial risks exist that method parameters significantly impact results, MODR studies become valuable. Conversely, when experience and expertise demonstrate minimal risk, extensive studies may be unnecessary, though documenting this reasoning from quality perspectives proves beneficial even if not mandatory.
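To illustrate the kind of parameter-influence question an MODR-style study asks, the sketch below estimates each method parameter's main effect on a response using a minimal two-level full-factorial screening design. The factor names, level codings, and response values are hypothetical assumptions for illustration only.

```python
# Minimal two-level screening sketch (a simple form of MODR-style risk
# assessment): estimate each parameter's main effect on a response.
# Factors, settings, and responses are illustrative assumptions.
from itertools import product

factors = ["pH", "temperature", "flow_rate"]
# Full 2^3 factorial: -1 = low setting, +1 = high setting
runs = list(product([-1, 1], repeat=3))
# Hypothetical measured responses (e.g., % recovery), one per run, in order
responses = [98.1, 98.4, 99.0, 99.2, 98.0, 98.3, 99.1, 99.3]

def main_effect(factor_index):
    """Average response at the high level minus average at the low level."""
    high = [r for run, r in zip(runs, responses) if run[factor_index] == 1]
    low = [r for run, r in zip(runs, responses) if run[factor_index] == -1]
    return sum(high) / len(high) - sum(low) / len(low)

for i, name in enumerate(factors):
    print(f"{name}: main effect = {main_effect(i):+.2f}")
```

In this invented data set, temperature shows the dominant effect, which would flag it for closer robustness study, whereas parameters with negligible effects could be documented as low risk.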

Formal risk assessment tools including Failure Mode and Effects Analysis (FMEA) and Ishikawa diagrams can support validation planning. These tools prove particularly valuable when considering the complete development-validation monitoring lifecycle, with risk assessment serving as a central basis for discussion and decision-making. USP Chapter 1220 discusses these approaches extensively, providing additional guidance beyond ICH documents.
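As a concrete illustration of how an FMEA-style tool can prioritise validation effort, the sketch below ranks hypothetical failure modes by Risk Priority Number (severity × occurrence × detectability). The failure modes and 1–10 scores are illustrative assumptions, not taken from the panel discussion or any guideline.

```python
# Minimal FMEA-style risk-ranking sketch for method validation planning.
# Failure modes and 1-10 scores are illustrative assumptions.

failure_modes = [
    # (description, severity, occurrence, detectability)
    ("Mobile-phase pH drift", 7, 4, 3),
    ("Column lot-to-lot variation", 5, 3, 5),
    ("Incomplete sample extraction", 8, 2, 6),
]

def rpn(severity, occurrence, detectability):
    """Risk Priority Number: higher means investigate first."""
    return severity * occurrence * detectability

# Rank failure modes so the highest-risk item heads the validation plan
ranked = sorted(failure_modes, key=lambda fm: rpn(*fm[1:]), reverse=True)
for desc, s, o, d in ranked:
    print(f"{desc}: RPN = {rpn(s, o, d)}")
```

The numerical output matters less than the documented reasoning behind each score, which is precisely the written record the panellists recommend preserving.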

The connection between risk assessment and robustness testing deserves emphasis. ICH Q2 Revision 2 and Q14 strengthen the principle that robustness testing forms part of method development, generating information essential for validation. Risk assessment helps identify which robustness studies are most critical, avoiding unnecessary work whilst ensuring critical parameters receive appropriate evaluation. Documenting risk assessment thinking proves valuable because, as noted in previous discussions, we cannot archive our brains—written records preserve reasoning for future reference and regulatory review.

Skipping Validation Parameters: When Is It Appropriate?

Questions frequently arise about whether validation parameters can be skipped when risks are assessed as low, particularly during early development phases when methods may be qualified or verified rather than fully validated. The distinction between validation, qualification, and verification in different development phases adds complexity to this question.

Complete elimination of validation parameters is not feasible. Every performance characteristic requires some form of evaluation, though the approach and timing may vary. However, skipping formal evaluation of specific parameters during validation because adequate development data already exist represents a different scenario. When robust development data demonstrate acceptable performance for particular characteristics, repeating identical studies in formal validation protocols may be unnecessary.

The challenge involves structuring data and reports appropriately when leveraging development data for validation purposes. Industry lacks extensive shared experience on best practices for presenting such justifications to regulatory authorities. Simply omitting parameters without explanation invites regulatory questions. If bias was demonstrated to be negligible during development but is not formally tested during validation, and the supporting development data are not provided, the first regulatory question will address bias. This creates rework rather than achieving efficiency.

The concept of staged validation offers a middle ground. Developing data progressively across development phases differs from conducting all studies within a single formal validation exercise. The challenge lies in reporting this staged approach effectively. Prequalification and verification activities generate valuable data, but integrating this information into validation packages whilst maintaining regulatory acceptability requires careful planning and documentation.

ICH Q2 Revision 2 and Q14 strongly encourage designing validation studies according to available information. When information exists from development or from platform methods, repetition becomes unnecessary. However, implementing this principle requires organisational courage. Companies must define prerequisites for data quality, establish appropriate GMP level requirements, and determine QA review standards for development data used in validation submissions. These decisions determine whether organisations can effectively leverage existing data or default to repeating studies for regulatory safety.

The guidelines clearly encourage avoiding redundancies by using available information appropriately. Success requires establishing clear criteria for what constitutes acceptable prior data, documenting why this data adequately demonstrates required performance, and presenting justifications that withstand regulatory scrutiny. The potential time savings must be weighed against the potential costs of increased quality oversight in development departments and potential regulatory risks if authorities do not accept the approach.

Linearity Testing: Instrument Response vs. Whole Procedure

Confusion persists regarding whether linearity testing addresses instrument response—a small component of overall method procedures—or entire analytical procedures, including all sample preparation and measurement steps. ICH Q2 Revision 2 is not entirely clear on this distinction, contributing to ongoing debate.

The guideline takes a two-track approach to linearity. For what are termed “linear methods” such as HPLC, the guideline maintains focus on instrument response linearity. This remains important: if calibration curves are expected to be straight lines, demonstrating linearity is appropriate. However, this evaluation applies to calibration standards rather than validation samples, focusing on the instrumental measurement component.
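A minimal sketch of what an instrument-response linearity check on calibration standards looks like in practice: an ordinary least-squares fit with a coefficient of determination and residuals to inspect for trends. The concentrations and peak areas below are invented for illustration.

```python
# Sketch: instrument-response linearity check on calibration standards.
# Concentrations and peak areas are illustrative assumptions.

conc = [0.5, 1.0, 2.0, 4.0, 8.0]            # e.g., mg/mL standards
areas = [10.2, 20.1, 40.3, 79.8, 160.4]     # detector response

n = len(conc)
mean_x = sum(conc) / n
mean_y = sum(areas) / n
sxx = sum((x - mean_x) ** 2 for x in conc)
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(conc, areas))
slope = sxy / sxx
intercept = mean_y - slope * mean_x

# R^2 as a first linearity indicator; residuals inspected for systematic trends
ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(conc, areas))
ss_tot = sum((y - mean_y) ** 2 for y in areas)
r_squared = 1 - ss_res / ss_tot
residuals = [y - (slope * x + intercept) for x, y in zip(conc, areas)]

print(f"slope={slope:.3f}, intercept={intercept:.3f}, R^2={r_squared:.5f}")
```

A high R² alone is not sufficient evidence of linearity; a residual plot free of curvature is the more informative check, which is why the residuals are retained above.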

For non-linear and multivariate approaches, the guideline discusses linearity of validation results rather than instrumental signals. No clear reason exists why similar approaches cannot apply to linear methods. The recommendation is that linearity evaluation should always address whole procedure results. Demonstrating linear instrument response may provide interesting supporting information, though it does not apply universally across all calibration models.

Revision 2 represents major improvement by largely resolving confusion between linearity of analyte concentration in samples (reportable results) and response functions describing instrument behaviour. This distinction is now much clearer, though some backsliding into previous thinking occurs in chapters discussing non-linear responses, where concepts again become somewhat mixed.

Avoiding the term “linearity” in discussions helps reduce confusion. Talking instead about response functions or calibration models provides clearer communication. Linearity as a characteristic of results is demonstrated through accuracy studies, which can be approached through linear regression, percent recovery calculations, or other mathematically equivalent transformations. Addressing this under accuracy headings rather than as separate linearity studies reduces confusion from older guideline approaches.
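As a sketch of treating linearity of results under the accuracy heading, the example below computes percent recovery across concentration levels. The nominal levels, measured results, and the 98–102% acceptance window are all illustrative assumptions, not values from any guideline.

```python
# Sketch: linearity of reportable results assessed under the accuracy heading
# via percent recovery. All values and limits are illustrative assumptions.

nominal = [50.0, 75.0, 100.0, 125.0, 150.0]    # % of target concentration
measured = [49.6, 74.8, 100.3, 124.5, 150.9]   # reportable results

recoveries = [100.0 * m / n for m, n in zip(measured, nominal)]
mean_recovery = sum(recoveries) / len(recoveries)

# Hypothetical acceptance window for illustration only
within_limits = all(98.0 <= r <= 102.0 for r in recoveries)

for n, r in zip(nominal, recoveries):
    print(f"{n:5.0f}% level: recovery = {r:.1f}%")
print(f"mean recovery = {mean_recovery:.2f}%, pass = {within_limits}")
```

Recovery that is acceptable and trend-free across levels demonstrates the same property that separate "linearity of results" studies target, without a duplicate study.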

This confusion may be less problematic in bioassay contexts, where questions about whether response functions must be linear never arose—non-linear responses are the norm. Traditional small molecule thinking created more confusion by conflating instrument linearity with method linearity. Maintaining strict terminology helps prevent this confusion, with response function specificity proving valuable for clear communication.

Regardless of calibration model type (linear, non-linear, or multivariate), demonstrating how well models fit data remains important. For four-parameter logistic or other complex models, showing residual plots, function plots, and goodness-of-fit statistics addresses the same underlying question: does systematic bias exist in the calibration model? Harmonising approaches across all calibration model types, requiring similar demonstrations of model adequacy regardless of mathematical form, provides consistency and clarity.
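A minimal sketch of such a model-adequacy check for a four-parameter logistic (4PL) calibration: given assumed fitted parameters, compute residuals and a coefficient of determination. All parameter values and assay signals below are hypothetical; in practice the parameters would come from a curve-fitting step.

```python
# Sketch: goodness-of-fit check for a four-parameter logistic (4PL)
# calibration model. Parameters and signals are illustrative assumptions;
# in practice the parameters come from a curve-fitting step.

def four_pl(x, a, b, c, d):
    """4PL: a = lower asymptote, b = slope, c = EC50, d = upper asymptote."""
    return d + (a - d) / (1.0 + (x / c) ** b)

params = (0.05, 1.2, 10.0, 2.0)           # assumed fitted parameters
doses = [1.0, 3.0, 10.0, 30.0, 100.0]
observed = [0.17, 0.42, 1.03, 1.59, 1.88]  # illustrative assay signals

fitted = [four_pl(x, *params) for x in doses]
residuals = [o - f for o, f in zip(observed, fitted)]  # inspect for trends

mean_obs = sum(observed) / len(observed)
ss_res = sum(r * r for r in residuals)
ss_tot = sum((o - mean_obs) ** 2 for o in observed)
r_squared = 1 - ss_res / ss_tot

print(f"R^2 = {r_squared:.5f}")
```

The check is structurally identical to the residual inspection used for straight-line calibrations, which is exactly the harmonisation across model types the panel describes.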

Additional complexity arises when response functions differ from calibration models. In bioassays, sigmoidal response functions may exist, but the calibration model involves comparison to reference standards rather than the response function itself. These distinctions require careful consideration, but the principle remains: discussing response functions and their fit prevents systematic bias in transforming signals to concentrations.

Continue Reading…

Register using the download form below to read the 10-page full summary (PDF) and meet the panel.


