Week 7 Flashcards
What is a systematic review (SR)?
A structured, transparent review that answers a focused research question by systematically searching, selecting, appraising, and synthesising all relevant primary studies.
Why are SRs near the top of the evidence pyramid?
They reduce bias via pre-defined protocols (e.g., PRISMA-P), combine multiple studies, and provide an objective summary for practice and policy.
List three key strengths of SRs.
(i) Reduced bias, (ii) increased statistical power when pooled, (iii) time-efficient synthesis for clinicians.
List three main weaknesses of SRs.
(i) Publication bias, (ii) heterogeneity between studies, (iii) “garbage in–garbage out” if included studies are poor.
Name two checklist items you look for when critically evaluating an SR.
Any two of: clear question & eligibility criteria, comprehensive search strategy, risk-of-bias assessment, transparent synthesis.
What distinguishes a meta-analysis (MA) from a systematic review?
MA is the statistical pooling of effect sizes from studies (usually within an SR) to generate a precise summary estimate.
Give two strengths of MAs.
(i) Greater statistical power & narrower CIs, (ii) resolves conflicting results by providing a weighted average.
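A worked illustration of that weighted average (a minimal sketch using the standard inverse-variance fixed-effect model; here \hat{\theta}_i are the individual study estimates and SE_i their standard errors):

\hat{\theta}_{\text{pooled}} = \frac{\sum_i w_i \hat{\theta}_i}{\sum_i w_i}, \qquad w_i = \frac{1}{SE_i^2}, \qquad SE(\hat{\theta}_{\text{pooled}}) = \frac{1}{\sqrt{\sum_i w_i}}

Larger, more precise studies carry more weight, and the pooled standard error shrinks as studies are added, which is where the narrower CIs come from.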
Give two weaknesses of MAs.
(i) Susceptible to publication bias, (ii) heterogeneity (“apples vs oranges”) can mislead if studies are too diverse.
What statistical tool quantifies heterogeneity in a MA?
The I² statistic (0–100%); higher values indicate greater between-study variability.
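For reference, the standard Higgins-Thompson definition in terms of Cochran's Q (k = number of studies; w_i and \hat{\theta}_i as in the pooling formula above):

Q = \sum_i w_i \left(\hat{\theta}_i - \hat{\theta}_{\text{pooled}}\right)^2, \qquad I^2 = \max\!\left(0,\ \frac{Q - (k - 1)}{Q}\right) \times 100\%

As a common rule of thumb, I² of roughly 25%, 50%, and 75% is read as low, moderate, and high heterogeneity respectively.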
What are observational studies?
Non-interventional designs that observe exposures/outcomes in real-world settings to explore associations.
Name the three main types of observational study.
Cohort, case-control, and cross-sectional.
Contrast prospective vs retrospective designs.
Prospective: follow subjects forward from exposure to outcome; Retrospective: look backward to past exposures after the outcome has occurred.
Two key strengths of observational studies.
Ethical when exposures cannot be deliberately assigned (e.g., harmful exposures); high external validity (real-world settings).
Two major weaknesses of observational studies.
Vulnerable to confounding & bias; unable to prove causation.
What protocol items minimise bias in observational studies?
Rigorous participant selection, clear exposure/outcome definitions, statistical adjustment for confounders.
Primary purpose of an RCT.
To test efficacy/safety of an intervention while minimising bias through randomisation and blinding.
Three blinding levels in RCTs.
Single-blind (participants), double-blind (participants + researchers), triple-blind (participants + researchers + analysts).
One strength and one weakness of RCTs.
Strength: strongest causal evidence; Weakness: costly and may lack real-world generalisability.
Strengths & weaknesses of a systematic review
Strengths: reduces bias, increases power, broad generalisability, clear clinical guidance.
Weaknesses: publication bias, heterogeneity, depends on primary study quality, time-consuming.
Critical appraisal of an SR – key considerations
* Focused question (PICO/PCC)
* Comprehensive multi-database search (+ grey literature)
* Explicit inclusion/exclusion criteria
* Risk-of-bias assessment (e.g., ROBIS)
* Transparent synthesis & limitations reporting
Strengths & weaknesses of a meta-analysis
Strengths: higher precision, detects small effects, resolves study disagreement, hypothesis generation.
Weaknesses: sensitive to publication bias; heterogeneity; depends on the quality of the pooled studies; requires advanced statistical methods.
Critical appraisal of a meta-analysis
Same items as SR plus the following (a worked sketch follows this card):
* Assessment of publication bias (funnel plot, Egger test)
* Exploration of heterogeneity (I², subgroup/meta-regression)
* Appropriate model choice (fixed vs random effects)
* Sensitivity analyses reported
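To make these appraisal items concrete, here is a minimal Python sketch (hypothetical effect sizes and standard errors, not real data; a sketch of the standard formulas rather than a validated analysis) computing a fixed-effect pooled estimate, Cochran's Q, I², a DerSimonian-Laird random-effects estimate, and the Egger regression intercept:

```python
import numpy as np

# Hypothetical per-study effect estimates (e.g., log odds ratios) and standard errors
theta = np.array([0.30, 0.10, 0.45, 0.25, 0.60])
se = np.array([0.12, 0.20, 0.15, 0.10, 0.25])
k = len(theta)

# Fixed-effect (inverse-variance) pooling: more precise studies get more weight
w = 1.0 / se**2
theta_fixed = np.sum(w * theta) / np.sum(w)
se_fixed = np.sqrt(1.0 / np.sum(w))

# Heterogeneity: Cochran's Q and the I² statistic
Q = np.sum(w * (theta - theta_fixed) ** 2)
I2 = max(0.0, (Q - (k - 1)) / Q) * 100.0

# DerSimonian-Laird between-study variance, then random-effects pooling
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
w_re = 1.0 / (se**2 + tau2)
theta_random = np.sum(w_re * theta) / np.sum(w_re)

# Egger's regression: standardised effect vs precision; an intercept far from 0
# suggests funnel-plot asymmetry (tested formally with a t-test on the intercept)
precision = 1.0 / se
slope, intercept = np.polyfit(precision, theta / se, 1)

print(f"Fixed-effect estimate  : {theta_fixed:.3f} (SE {se_fixed:.3f})")
print(f"Random-effects estimate: {theta_random:.3f} (tau^2 = {tau2:.3f})")
print(f"Q = {Q:.2f}, I^2 = {I2:.1f}%")
print(f"Egger intercept = {intercept:.3f}")
```

In practice a dedicated meta-analysis package would be used; the sketch only illustrates what the heterogeneity and asymmetry statistics are measuring.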
Differences between scoping and narrative reviews
Scoping review: maps the breadth of the literature and identifies gaps; uses a systematic search but usually no critical appraisal.
Narrative review: broad, author-driven overview; flexible search, may lack explicit methods.
Review components to check
* Review question clarity
* Sources searched (databases, grey literature)
* Selection criteria pre-specified & transparent
* Data evaluation tools (risk-of-bias, quality scores)
* Implications for practice explicitly stated