Systematic reviews need to consider applicability to disadvantaged populations: inter-rater agreement for a health equity plausibility algorithm

dc.contributor.author: Welch, Vivian
dc.contributor.author: Brand, Kevin
dc.contributor.author: Kristjansson, Elizabeth
dc.contributor.author: Smylie, Janet
dc.contributor.author: Wells, George
dc.contributor.author: Tugwell, Peter
dc.date.accessioned: 2013-03-19T17:23:35Z
dc.date.available: 2013-03-19T17:23:35Z
dc.date.created: 2012
dc.date.issued: 2012
dc.identifier.uri: http://hdl.handle.net/10393/23972
dc.identifier.uri: http://www.biomedcentral.com/1471-2288/12/187
dc.description.abstract:

Background: Systematic reviews have been challenged to consider effects on disadvantaged groups. A priori specification of subgroup analyses is recommended to increase the credibility of these analyses. This study aimed to develop and assess inter-rater agreement for an algorithm that lets systematic review authors predict whether differences in effect measures are likely for disadvantaged populations relative to advantaged populations (only relative effect measures were addressed).

Methods: A health equity plausibility algorithm was developed using clinimetric methods, with three items based on a literature review, key informant interviews and methodology studies. The three items dealt with the plausibility of differences in relative effects across sex or socioeconomic status (SES) due to: 1) patient characteristics; 2) intervention delivery (i.e., implementation); and 3) comparators. Thirty-five respondents (clinicians, methodologists and research users) used these questions to assess the likelihood of differences across sex and SES for ten systematic reviews. Inter-rater reliability was assessed with the Fleiss multi-rater kappa.

Results: Proportion agreement was 66% for patient characteristics (95% confidence interval: 61% to 71%), 67% for intervention delivery (95% confidence interval: 62% to 72%) and 55% for the comparator (95% confidence interval: 50% to 60%). Fleiss kappa ranged from 0 to 0.199, representing very low agreement beyond chance.

Conclusions: Users of systematic reviews rated important differences in relative effects across sex and socioeconomic status as plausible for a range of individual- and population-level interventions. However, inter-rater agreement for these assessments was very low. There is an unmet need for discussion of the plausibility of differential effects in systematic reviews. Increased consideration of external validity and applicability to different populations and settings is warranted in systematic reviews to meet this need.
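The Methods report reliability with the Fleiss multi-rater kappa, which compares observed per-subject agreement against the agreement expected by chance from the marginal category frequencies. As a minimal sketch (not the authors' code; the function name and the plain list-of-lists input format are assumptions for illustration), the statistic can be computed like this:

```python
def fleiss_kappa(counts):
    """Fleiss' multi-rater kappa.

    counts: one row per subject; counts[i][j] is the number of raters
    who assigned subject i to category j. Every subject is assumed to
    be rated by the same number of raters.
    """
    N = len(counts)            # number of subjects
    n = sum(counts[0])         # raters per subject
    k = len(counts[0])         # number of categories

    # Observed agreement: mean over subjects of the pairwise agreement P_i
    P = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts]
    P_bar = sum(P) / N

    # Chance agreement: from the marginal proportion of each category
    p = [sum(row[j] for row in counts) / (N * n) for j in range(k)]
    P_e = sum(pj * pj for pj in p)

    return (P_bar - P_e) / (1 - P_e)


# Perfect agreement: both raters pick the same category for each subject
print(fleiss_kappa([[2, 0], [0, 2]]))  # 1.0
```

Values near 0, as reported in the Results (0 to 0.199), mean the raters agreed scarcely more often than the category frequencies alone would predict.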
dc.language.iso: en
dc.subject: Systematic reviews
dc.subject: Applicability
dc.subject: Health equity
dc.subject: Sex and gender
dc.subject: Socioeconomic status
dc.title: Systematic reviews need to consider applicability to disadvantaged populations: inter-rater agreement for a health equity plausibility algorithm
dc.type: Article
dc.identifier.doi: 10.1186/1471-2288-12-187
Collection: IRSP - Publications // IPH - Publications
uOttawa financed open access publications