How can clinicians choose between conflicting and discordant systematic reviews? A replication study of the Jadad algorithm
Abstract
Introduction
The exponential growth of published systematic reviews (SRs) presents challenges for decision makers seeking to answer clinical, public health, or policy questions. In 1997, Jadad et al. created an algorithm to choose the best SR among multiple discordant reviews. Our study aims to replicate author assessments using the Jadad algorithm to determine: (i) whether we chose the same SR as the authors; and (ii) whether we reached the same results.
Methods
We searched MEDLINE, Epistemonikos, and the Cochrane Database of Systematic Reviews. We included any study using the Jadad algorithm. We used consensus-building strategies to operationalise the algorithm and to ensure a consistent approach to interpretation.
Results
We identified 21 studies that used the Jadad algorithm to choose one or more SRs. In 62% (13/21) of cases, we were unable to replicate the Jadad assessment and ultimately chose a different SR than the authors. Nevertheless, 18 of the 21 (86%) independent Jadad assessments agreed in the direction of the findings, even though a different SR was chosen in 13 cases.
Conclusions
Our results suggest that the Jadad algorithm is not reproducible between users, as there are no prescriptive instructions for how to operationalise it. In the absence of a validated algorithm, we recommend that healthcare providers, policy makers, patients, and researchers address conflicts between review findings by choosing the SR(s) with meta-analysis of RCTs that most closely resemble their clinical, public health, or policy question, are the most recent, are the most comprehensive (i.e. include the largest number of RCTs), and are at the lowest risk of bias.
Highlights: This is the first empirical study to replicate Jadad algorithm assessments to evaluate discordance across systematic reviews. In 62% (13/21) of cases, we were unable to replicate the Jadad algorithm assessment and ultimately chose a different systematic review than the authors. When assessing systematic reviews using the Jadad algorithm, some steps of the algorithm were vaguely described, making them difficult to operationalise, interpret, and use. The Jadad algorithm has several limitations, as it does not account for the date of the last literature search of the systematic review or the publication recency of included trials. To assess discordance in the absence of an algorithm, we recommend decision makers consider relevance (objectives that most closely resemble their clinical question), recency (dates of search), comprehensiveness (most trials), and risk of bias (lowest risk of bias SR) when choosing one systematic review among multiple.
Citation
BMC Medical Research Methodology. 2022 Oct 26;22(1):276
