Developing the methodology underpinning systematic reviews and meta-analyses, and applying it in high-quality reviews

Systematic review and meta-analysis are essential tools in rigorously evaluating the effects of therapies, integrating our trial results with those of other groups and informing the design of our trials. This programme develops the methodology underpinning systematic reviews and meta-analyses and applies it in high-quality reviews relevant to our trials. Interaction between systematic reviewers, methodologists and trialists is fundamental to these endeavours.

 

Beyond standard systematic reviews

Systematic reviews based on individual participant data (IPD) are regarded as the gold standard of systematic review, because they circumvent many of the biases associated with traditional aggregate data reviews and provide much more robust and detailed results about the effects of interventions. However, IPD reviews are time- and resource-intensive, so a key aim is to establish when they are most needed to answer clinical questions reliably. If standard systematic reviews of aggregate data are to provide a robust and speedier alternative to IPD, and indicate when IPD are most needed, we need to move beyond current methods, using a framework for adaptive meta-analysis (FAME) to limit the inherent biases. Our current STOPCAP M1 Programme is applying the principles of FAME. In addition, network meta-analysis (comparing many treatments) offers a possible advance on pairwise meta-analysis (comparing two treatments). We choose the methodology that is appropriate to the scientific or medical question.
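As a simple illustration of pairwise meta-analysis, the building block that both aggregate data reviews and network meta-analysis extend, the sketch below pools per-trial effect estimates by inverse-variance weighting. The function name and the trial data are hypothetical, chosen only to show the mechanics; this is not an implementation of FAME itself.

```python
import math

def fixed_effect_meta(estimates, variances):
    """Inverse-variance fixed-effect pooling of per-trial effect estimates
    (e.g. log hazard ratios). Returns the pooled estimate and its standard
    error. Illustrative sketch only; real reviews use validated software."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se

# Hypothetical log hazard ratios and their variances from three trials
log_hrs = [-0.22, -0.10, -0.30]
variances = [0.04, 0.02, 0.09]
pooled, se = fixed_effect_meta(log_hrs, variances)
print(f"pooled log HR = {pooled:.3f}, SE = {se:.3f}")
```

Trials with smaller variances (typically larger trials) receive proportionally more weight, which is why the pooled estimate sits closest to the most precise trial.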

Determining which participants benefit most from treatments

Determining which participants benefit most from treatments continues to be of major interest to patients, clinicians and policy makers. Therefore, there is a clear need to clarify theory and best practice around the estimation, presentation and interpretation of patient-specific benefit in meta-analysis and network meta-analysis. Our work on “deft” (within-trial interactions) methodology has demonstrated the core principles, highlighted the inferential consequences of alternative methods, and summarised current trends in usage. We aim to make “deft” methodology work in a wide range of pairwise and network meta-analysis contexts.
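A minimal sketch of the within-trial (“deft”) principle, under the assumption of two subgroups per trial with independent effect estimates: the treatment-by-subgroup interaction is estimated inside each trial first and only then pooled, so differences between trials cannot masquerade as subgroup effects. All data and names below are hypothetical.

```python
import math

def deft_interaction(subgroup_effects):
    """Pool within-trial interactions ('deft' approach, sketched).
    subgroup_effects: per trial, ((est_A, var_A), (est_B, var_B)) giving the
    treatment effect estimate and variance in two subgroups. The interaction
    is computed within each trial, then pooled by inverse-variance weighting."""
    interactions, variances = [], []
    for (est_a, var_a), (est_b, var_b) in subgroup_effects:
        interactions.append(est_b - est_a)   # within-trial subgroup difference
        variances.append(var_a + var_b)      # subgroup estimates are independent
    weights = [1.0 / v for v in variances]
    pooled = sum(w * d for w, d in zip(weights, interactions)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se

# Hypothetical per-trial subgroup estimates (e.g. log hazard ratios)
trials = [((-0.30, 0.05), (-0.10, 0.06)),
          ((-0.25, 0.03), (-0.20, 0.04))]
pooled, se = deft_interaction(trials)
print(f"pooled within-trial interaction = {pooled:.3f}, SE = {se:.3f}")
```

Because each interaction is formed within a trial, comparisons of subgroup effects across trials (which can be confounded by trial-level differences) never enter the estimate.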

Assessing and visualising inconsistency in network meta-analysis

Estimating reliable treatment comparisons in network meta-analysis relies on the consistency assumption: that direct evidence agrees with indirect evidence. Various methods exist to explore different aspects of inconsistency, but they have a number of disadvantages. We are developing new methods for assessing, testing for, and visualising inconsistency. Our methods aim to correctly handle multi-arm trials and hence resolve a long-standing problem in network meta-analysis: how to separate loop inconsistency from design inconsistency.
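To make the consistency assumption concrete, the sketch below implements the classic single-loop (Bucher-style) check: the direct A-versus-B estimate is compared with the indirect estimate formed from A-versus-C and B-versus-C. This simple version assumes independent two-arm trials, which is precisely the setting that breaks down for multi-arm trials; all data are hypothetical.

```python
import math

def bucher_inconsistency(direct_ab, direct_ac, direct_bc):
    """Single-loop consistency check (Bucher-style, sketched).
    Each argument is (estimate, variance) on a contrast scale such as the
    log odds ratio. Returns the inconsistency estimate (direct minus
    indirect A-vs-B) and its standard error, assuming the three sources
    of evidence come from independent two-arm trials."""
    est_ab, var_ab = direct_ab
    est_ac, var_ac = direct_ac
    est_bc, var_bc = direct_bc
    indirect_ab = est_ac - est_bc              # indirect A-vs-B via C
    diff = est_ab - indirect_ab                # loop inconsistency estimate
    se = math.sqrt(var_ab + var_ac + var_bc)   # variances add: independence
    return diff, se

# Hypothetical log odds ratios and variances for the three comparisons
diff, se = bucher_inconsistency((-0.40, 0.04), (-0.50, 0.05), (-0.05, 0.03))
print(f"inconsistency = {diff:.2f}, SE = {se:.3f}, z = {diff / se:.2f}")
```

A z-statistic far from zero would flag disagreement between direct and indirect evidence in this loop. When a multi-arm trial contributes to more than one edge of the loop, the independence assumption behind the added variances fails, which is why more careful methods are needed.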

Choosing and building appropriate models

Many models are available for pairwise and network meta-analysis, and for some clinical questions there is the potential to use more than one model, and/or more than one way to fit the selected model. Possible disagreement between models and methods can cause confusion among researchers, produce results that are wrong or overly complex, or lessen the credibility of results. Therefore, we are extending understanding of the appropriateness of different existing models and, for clinical questions where no entirely suitable model exists, developing new models while avoiding unnecessary complexity.
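As one illustration of why model choice matters, the sketch below implements the DerSimonian-Laird random-effects model, which estimates a between-trial variance (tau-squared) from Cochran's Q statistic and inflates each trial's variance accordingly. With heterogeneous data it gives a different pooled estimate and a wider standard error than a fixed-effect analysis of the same trials. The data are hypothetical and deliberately heterogeneous.

```python
import math

def dersimonian_laird(estimates, variances):
    """Random-effects pooling via the DerSimonian-Laird moment estimator:
    estimate the between-trial variance tau^2 from Cochran's Q statistic,
    then pool with weights 1 / (within-trial variance + tau^2).
    Returns (pooled estimate, standard error, tau^2). Sketch only."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, estimates)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, estimates))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(estimates) - 1)) / c)  # truncate at zero
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_re, estimates)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, se, tau2

# Hypothetical, deliberately heterogeneous trial results (e.g. log odds ratios)
pooled, se, tau2 = dersimonian_laird([-0.5, 0.1, -0.3], [0.02, 0.03, 0.05])
print(f"random-effects estimate = {pooled:.3f}, SE = {se:.3f}, tau^2 = {tau2:.3f}")
```

When tau-squared is estimated as zero, the random-effects weights reduce to the fixed-effect ones, so the two models agree; the larger tau-squared is, the more the random-effects model evens out the weights across trials.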

Guidance and tools to increase impact

To help other researchers improve their own methodological practice, we are developing training, practical guidance and software to complement our methodological developments. For example, we continue to develop Stata programmes for pairwise and network meta-analysis of aggregate data and IPD, we are co-leading a book on IPD meta-analysis, and we are developing a course on “Beyond standard systematic reviews”.