(B-204) Evaluation of methods for quantitative bias analysis on the impact of unmeasured confounding during planning of a single arm trial with an external comparator arm
Background: A limitation of the single arm trial with external comparator arm (ECA) design is unmeasured confounding, as patients may differ with respect to key factors that are not collected in real-world data (RWD). Quantitative bias analysis (QBA) comprises a set of methodologies for estimating the extent of bias needed to affect study results. While QBA is increasingly used during the analysis phase of an ECA study, its use during study planning is rare. Early QBA can add transparency around the limitations of an RWD source for identifying a suitable ECA and can potentially reveal whether an ECA design is unjustified. QBA at the design stage can also inform the robustness of QBA needed in the ECA analysis. Conducting QBA during study planning poses unique challenges, including data availability and time and resource constraints.
Objectives: To evaluate QBA methods for unmeasured confounding and to determine their suitability during the planning stage of an ECA study.
Methods: We evaluated several QBA methods to determine their strengths and limitations when assessing unmeasured confounding during ECA study planning.
Results: QBA methods either estimate a threshold at which unmeasured confounding would alter the study conclusion (e.g., the E-value and tipping point analysis) or estimate the direction and magnitude of bias from unmeasured confounding (e.g., deterministic and probabilistic methods that produce a bias-adjusted effect estimate). Before applying any of these methods, an effect estimate for the ECA comparison must be identified, or developed from assumptions, to serve as the QBA input. The E-value is the simplest QBA method to implement because it does not require specification of bias parameters, and it is most useful when the magnitude of association for observable covariates is well understood. However, the E-value is less robust than methods that estimate a bias-adjusted effect. Tipping point analysis can be more robust than the E-value when data exist to inform its underlying assumptions; at the planning stage, however, it requires creating a simulated dataset based on assumptions about the prevalence of the unmeasured confounder and its associations with treatment and outcome, which is cumbersome and can limit interpretation. Similarly, deterministic and probabilistic methods can be robust when information exists to inform their underlying assumptions, but the time and complexity of conducting these analyses may be difficult to justify.
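To illustrate the two classes of methods described above, the minimal sketch below computes the E-value for an assumed effect estimate (E = RR + sqrt(RR(RR − 1)), with RR ≥ 1) and a deterministic bias-adjusted risk ratio using the bounding-factor formula (bias factor = RR_UD × RR_EU / (RR_UD + RR_EU − 1)). All numeric inputs are hypothetical planning-stage assumptions, not values from any study, and the function names are illustrative.

```python
import math

def e_value(rr: float) -> float:
    """E-value for an observed risk ratio: the minimum strength of association
    (on the risk ratio scale) that an unmeasured confounder would need with both
    treatment and outcome to fully explain away the observed estimate."""
    rr = max(rr, 1.0 / rr)  # work on the RR >= 1 scale (invert protective estimates)
    return rr + math.sqrt(rr * (rr - 1.0))

def bias_adjusted_rr(rr_observed: float, rr_ud: float, rr_eu: float) -> float:
    """Deterministic (bounding-factor) adjustment for an apparently harmful (RR >= 1)
    estimate: the most the observed risk ratio could be attenuated by an unmeasured
    confounder with confounder-outcome association rr_ud and exposure-confounder
    imbalance rr_eu."""
    bias_factor = (rr_ud * rr_eu) / (rr_ud + rr_eu - 1.0)
    return rr_observed / bias_factor

# Hypothetical planning-stage inputs (assumptions, not study data)
rr_observed = 1.80  # assumed effect estimate for the ECA comparison
print(f"E-value: {e_value(rr_observed):.2f}")
print(f"Bias-adjusted RR: {bias_adjusted_rr(rr_observed, rr_ud=2.0, rr_eu=1.5):.2f}")
```

Under these hypothetical inputs, an unmeasured confounder would need risk ratio associations of at least 3.0 with both treatment and outcome to fully explain away the assumed estimate of 1.80, whereas a confounder with the assumed bias parameters (RR_UD = 2.0, RR_EU = 1.5) could attenuate it to no lower than 1.50.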
Conclusions: Selection of a QBA method for evaluation of unmeasured confounding during planning of an ECA should be guided by RWD availability, the complexity of implementation, and the level of scientific rigor needed to support study findings.