
EU HTA 2025 readiness: Key methodological updates and practical tips on statistical guidelines

By Michael Hennig, PhD

EU HTA Statistical Guidelines
Regulation (EU) 2021/2282 on health technology assessment (HTAR) is transforming HTA in Europe: it entered into force in January 2022 and applies from January 2025. The Joint Clinical Assessment (JCA) is first mandatory for oncology drugs and advanced therapy medicinal products (ATMPs), expanding to include orphan drugs by January 2028 and all centrally authorized drugs by January 2030. 

As the deadline approaches for products seeking EMA approval post-January 12, 2025, several implementing acts and technical guidance documents have been issued to shape JCA methodology. To clarify the statistical guidelines further, we interviewed Michael Hennig, Senior Director and Expertise Line Head HTA Statistics from our EU HTA Center of Excellence.  

Keep reading or watch the recording of our webinar “Get set for 2025: Mastering the new EU HTA statistical guidelines” to gain deeper insights into these critical developments. 


Q: Can you tell us more about the recent developments, and which specific guidance documents are now available? 

The journey began several years ago with the EUnetHTA initiative1. Key deliverables from this initiative have laid the groundwork for our current methodological focus. Recently published guidance documents include: 

  • Methodological Guideline for Quantitative Evidence Synthesis: Direct and Indirect Comparisons (Adopted on March 8, 2024)2
  • Practical Guideline for Quantitative Evidence Synthesis: Direct and Indirect Comparisons (Adopted on March 8, 2024)3
  • Guidance on Outcomes for Joint Clinical Assessments (Adopted on June 10, 2024)4
  • Guidance on Reporting Requirements for Multiplicity Issues and Subgroup/Sensitivity/Post Hoc Analyses in JCAs (Adopted on June 10, 2024)5 

 

Q: What has been the evolution of these guidelines? 

These guidelines originated in consultations within EUnetHTA in early 2022 and drew on input from stakeholders, including patient organizations, pharmaceutical companies, and academic institutions, during month-long public consultations. While the final versions adopted by the EU largely mirror the original EUnetHTA versions, they contain some alterations that are not further explained.  


Q: What is the key content of the two guidelines on quantitative evidence synthesis? 

The topic of evidence synthesis forms the foundation of HTA analysis. The two guidelines on quantitative evidence synthesis are divided into those addressing direct comparisons and those addressing indirect comparisons. 

Direct comparisons occur when a study directly compares a drug with others of interest to HTA bodies. However, this ideal scenario is not always available. When direct comparison studies are lacking, indirect comparisons become necessary. These guidelines cover both methodological and practical approaches to synthesizing evidence from various sources. 

The core focus is on creating a network of evidence. Often, multiple trials and diverse pieces of evidence need to be synthesized into a cohesive network. This process involves integrating various studies to form a comprehensive analysis, as illustrated in the provided examples of potential evidence networks (Figure 1). 

 

Figure 1. Examples of potential evidence networks6 

The guidelines discuss two primary statistical approaches: frequentist and Bayesian. Bayesian methods are particularly useful in situations with sparse data, because prior distributions can incorporate information from existing data sources.  

No clear preference for either approach is stated; instead, the choice should be justified based on the specific scope and context of the analysis. 

Several methods for conducting indirect comparisons are detailed7: 

  • Bucher methodology: Adjusted indirect treatment comparison (ITC) for simple networks when direct evidence is lacking. 
  • Network meta-analysis: Compares three or more interventions using direct and indirect evidence. 
  • Simulated treatment comparisons (STC): Fits an outcome model on individual patient data from one trial to predict outcomes in the population of a trial for which only aggregate data (AgD) is available.
  • Matching adjusted indirect comparisons (MAIC): Re-weights individual patient data from one trial to match the baseline summary statistics of a trial for which only AgD is available. 
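The simplest of these, Bucher's adjusted ITC, combines two relative effects that share a common comparator. A minimal sketch with hypothetical log hazard ratios (all numbers invented for illustration):

```python
import math

# Hypothetical log hazard ratios from two trials sharing comparator B
d_AB, se_AB = -0.35, 0.12   # treatment A vs B
d_CB, se_CB = -0.10, 0.15   # treatment C vs B

# Bucher adjusted indirect comparison of A vs C via the common comparator B:
# the indirect effect is the difference, and the variances add
d_AC = d_AB - d_CB
se_AC = math.sqrt(se_AB**2 + se_CB**2)

ci_low = d_AC - 1.96 * se_AC
ci_high = d_AC + 1.96 * se_AC
print(f"log HR A vs C: {d_AC:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
print(f"HR A vs C: {math.exp(d_AC):.2f}")
```

Note how the standard error of the indirect estimate is larger than either direct one: indirect evidence always carries more uncertainty than a head-to-head trial of the same size.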

The guidelines also address scenarios where randomized studies are not feasible, particularly in rare diseases. They emphasize using individual patient data and stress the importance of quantifying uncertainty and assessing the robustness of findings through sensitivity analyses. 

 
Q: What methodological options exist when there is no direct study available comparing the intervention of interest to the comparator of interest? 

In many cases, there isn't a perfect study that directly compares the intervention of interest with the desired comparator. Therefore, ITC methods are essential. They can be categorized into two main scenarios: 

  • Aggregate data (AgD) methods: Network meta-analysis (NMA) and Bucher’s method utilize AgD from multiple studies for comparisons.
  • Individual patient data (IPD) methods: Methods like MAIC and STC require individual patient data from at least one study, enabling more precise analyses by adjusting for population differences. 

Population-adjusted ITC analyses rely on access to IPD and can be categorized as anchored (using randomized studies with a common control arm) or unanchored (often based on single-arm studies). These techniques involve advanced approaches such as multiple imputation, marginalization, and meta-regression. 

MAIC, a particularly popular ITC method: 

  • Combines IPD with AgD
  • Ensures comparability by re-weighting based on propensity scores 

Through MAIC, comparisons can be made even without direct studies, ensuring the comparability of patient populations for drawing conclusions on treatment efficacy. 
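A minimal sketch of the MAIC re-weighting step, using the method-of-moments formulation with a single covariate (all data hypothetical; real analyses match several effect modifiers jointly and then use the weights in a weighted outcome analysis):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical IPD: age (years) for 200 patients in the index trial
age_ipd = rng.normal(60.0, 8.0, size=200)

# Published mean age in the comparator trial (only aggregate data available)
age_agg_mean = 65.0

# MAIC method of moments: weights w_i = exp(a * z_i) with the covariate
# centered on the target mean; choose a so the weighted mean matches exactly.
z = age_ipd - age_agg_mean

a = 0.0
for _ in range(50):                      # Newton iterations on Q(a) = sum exp(a z)
    w = np.exp(a * z)
    grad = np.sum(w * z)                 # Q'(a); zero when weighted mean matches
    hess = np.sum(w * z**2)              # Q''(a) > 0, so Q is convex
    a -= grad / hess

w = np.exp(a * z)
ess = w.sum()**2 / np.sum(w**2)          # effective sample size after weighting

print(round(float(np.average(age_ipd, weights=w)), 2))
print(round(float(ess), 1))
```

The effective sample size is a key diagnostic: the further the populations are apart, the more the weights concentrate on a few patients and the smaller the effective sample size becomes, which is exactly the population-overlap concern raised below.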

Q: Will these methodologies be accepted? 

Acceptance hinges on meeting criteria such as: 

  • Sufficiency of overlap between patient populations in different studies: The closer the match between patient populations, the more reliable the comparison.  
  • Comprehensive knowledge and use of effect modifiers: Identify and account for all relevant baseline characteristics that could influence treatment effects. Use these characteristics in re-weighting to enhance the acceptance and validity of the indirect comparison.
  • Transparency via pre-specification: Clearly outline and pre-specify models and methods in advance. Avoid selective reporting or "cherry-picking" data, maintaining scientific integrity. 

In unanchored situations, the corresponding approaches rely on very strong assumptions. It is key to investigate and quantify any potential sources of bias introduced by these methods, and to assess the impact of this bias. 

It is also essential to follow the detailed guidelines when navigating complex unanchored scenarios, to recognize that not all methods will be universally accepted, to adhere rigorously to the established criteria, and to be fully transparent in describing how the methods were applied.  

 
Q: What are key takeaways from the guidance on reporting requirements for multiplicity issues and subgroup, sensitivity and post hoc analyses? 

Methodological flexibility 

The guidelines clearly state that they do not endorse a specific approach, emphasizing the need to tailor methods to each unique situation8. Careful consideration and justification of the chosen method are crucial and should be based on the specific evidence available. 

Importance of pre-specification 

Pre-specifying analyses is essential. Before conducting any analysis, it's important to determine and document which methods will be used9. This prevents selective reporting and ensures scientific rigor: 

 

  • Multiplicity: Numerous outcomes are investigated within the PICO framework (Population, Intervention, Comparator, Outcome). Testing multiple hypotheses increases the chance of statistically significant findings arising by chance alone; pre-specification helps mitigate this risk, and multiplicity should be taken into account when interpreting the results. 
  • Subgroup analysis: Unlike the German AMNOG system's extensive subgroup requirements, this guideline requires that subgroup analyses be meaningful, pre-specified, and supported by a clear rationale. 
  • Sensitivity analysis: Assess the robustness of analyses, for example by exploring the impact of missing data through appropriate sensitivity analyses.
  • Post hoc analysis: Unplanned analyses performed after results are known must be identified as such, because their scientific value differs from that of pre-specified analyses. 
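The guideline does not prescribe a particular multiplicity adjustment. As one standard illustration of taking multiplicity into account, a Holm step-down correction applied to hypothetical p-values for three pre-specified endpoints:

```python
# Hypothetical raw p-values for three pre-specified endpoints
p_raw = {"overall survival": 0.012, "PFS": 0.034, "HRQoL": 0.041}

# Holm step-down adjustment: sort p-values ascending, multiply the k-th
# smallest by (m - k), cap at 1, and enforce monotone non-decreasing values.
m = len(p_raw)
adjusted = {}
running_max = 0.0
for rank, (endpoint, p) in enumerate(sorted(p_raw.items(), key=lambda kv: kv[1])):
    adj = min(1.0, (m - rank) * p)
    running_max = max(running_max, adj)   # adjusted p-values must not decrease
    adjusted[endpoint] = running_max

for endpoint, p in adjusted.items():
    print(endpoint, round(p, 3))
```

With these numbers only overall survival stays below 0.05 after adjustment, showing how findings that look significant in isolation can lose significance once multiplicity is accounted for.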

 

Q: What are key takeaways from the guidance on outcomes? 

Emphasis is placed on clinical relevance and interpretability: 

  • Long-term or final outcomes like mortality are prioritized. 
  • Intermediate or surrogate outcomes may be acceptable but must meet certain thresholds. For example, surrogate outcomes must have a correlation above 0.85 with the outcome of interest.
  • Short-term outcomes, such as symptoms, Health-Related Quality of Life (HRQoL) and adverse events (AEs), can be relevant, depending on the research question.  
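As a simplified illustration of the surrogate threshold, a trial-level correlation check on hypothetical treatment-effect estimates (real surrogacy validation uses formal meta-analytic methods across trials, not a bare correlation):

```python
import numpy as np

# Hypothetical trial-level treatment effects from a surrogacy meta-analysis:
# effect on the surrogate (e.g. log HR for PFS) paired with the effect on
# the final outcome (e.g. log HR for OS) in each of six trials
surrogate_effects = np.array([-0.40, -0.25, -0.10, -0.55, -0.30, -0.15])
final_effects     = np.array([-0.35, -0.20, -0.12, -0.50, -0.28, -0.10])

# Pearson correlation between trial-level effects; the guidance cites a
# threshold of 0.85 for a surrogate to be considered acceptable
r = np.corrcoef(surrogate_effects, final_effects)[0, 1]
print(round(float(r), 3), bool(r > 0.85))
```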

Safety is paramount and must be reported comprehensively. All safety endpoints listed in the guideline have to be reported, regardless of whether the treating physician considers them related to the treatment. The following descriptive results must also be reported in the main text of the JCA for each treatment group: AEs in total, serious AEs, severe AEs graded according to pre-defined severity criteria, deaths related to AEs, treatment discontinuations due to AEs, and treatment interruptions due to AEs. To assess relative safety, these should be reported with point estimates, 95% confidence intervals, and nominal p-values.  

Furthermore, it is critical that the validity and reliability of newly introduced outcome measures are investigated independently, following the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN). 

While these guidelines contain no specific threshold comparable to the rule applied by Germany’s Institute for Quality and Efficiency in Health Care (Institut für Qualität und Wirtschaftlichkeit im Gesundheitswesen, IQWiG), established methods for assessing interpretability should be applied. The focus is on ensuring that effects are clinically relevant rather than merely statistically significant. 

These guidelines outline clear standards for analyzing various types of outcomes in JCAs while stressing transparency and scientific rigor throughout the process. 

 

Q: What are the implementation challenges of the guidance documents? 


The practical implementation of the current guidance documents presents several challenges: 

  • Uncertainty in practical application: 
    While the guidelines provide a framework, they lack strict requirements, making their practical application uncertain. The balance between strict requirements and flexibility remains to be seen. 
  • Pre-specification of statistical analyses: 
    Pre-specifying statistical analyses is crucial to avoid accusations of selective reporting. The more detailed the pre-specification, the better. 
  • Adapting to emerging trends: 
    How the guidelines will adapt to new methodologies and emerging trends is still unclear. It remains to be seen whether new methods can be readily applied or if updates to the guidance will be required. 
  • Collaborative learning: 
    A collaborative spirit between assessors and health technology developers (HTDs) is essential. Both parties must learn together to establish best practices for JCAs. 


While the guidance documents set a foundational framework, their effectiveness in practice will depend on striking a balance between clear requirements and necessary flexibility, pre-specifying analyses rigorously, adapting to new trends, and fostering collaboration among stakeholders. 

In conclusion, as we look forward, we expect further guidance documents to be released, including one focusing on the validity of clinical studies. Continuous collaboration between assessors and HTDs will be essential for defining and establishing best practices. 

 

Cencora encourages readers to review the references provided herein and all available information related to the topics mentioned herein and to rely on their own experience and expertise in making decisions related thereto as the article may contain marketing statements and does not constitute legal advice. 

 

To gain practical insights and expert guidance on implementing these new guidelines, watch the recording of our webinar “Get set for 2025: Mastering the new EU HTA statistical guidelines” and contact our EU HTA Center of Excellence to receive personalized support and answers to your questions. 

 

References

1 EUnetHTA. Joint HTA work. https://www.eunethta.eu/jointhtawork/  

2,6,7 Methodological Guideline for Quantitative Evidence Synthesis: Direct and Indirect Comparisons - European Commission  
https://health.ec.europa.eu/latest-updates/methodological-guideline-quantitative-evidence-synthesis-direct-and-indirect-comparisons-2024-03-25_en  

3 Practical Guideline for Quantitative Evidence Synthesis: Direct and Indirect Comparisons - European Commission  
https://health.ec.europa.eu/latest-updates/practical-guideline-quantitative-evidence-synthesis-direct-and-indirect-comparisons-2024-03-25_en 

4 Guidance on outcomes for joint clinical assessments - European Commission  
https://health.ec.europa.eu/publications/guidance-outcomes-joint-clinical-assessments_en  

5,8,9 Guidance on reporting requirements for multiplicity issues and subgroup, sensitivity and post hoc analyses in joint clinical assessments - European Commission  
https://health.ec.europa.eu/publications/guidance-reporting-requirements-multiplicity-issues-and-subgroup-sensitivity-and-post-hoc-analyses_en  

 

Topics:
Consulting

About The Author

Michael Hennig, PhD
Senior Director, Expertise Line Head HTA Statistics
PharmaLex, part of Cencora