Week 6 - Critically Appraising Research Flashcards
(15 cards)
Why is critical appraisal important?
Critical appraisal of research is essential in various contexts:
For students, it helps develop skills in identifying strengths and weaknesses of published research.
For student researchers, it aids in deciding what literature to include in a review and how to compare their findings with existing knowledge.
For health professionals, it supports evidence-based decisions in patient care, considering both validity and clinical relevance.
What are the benefits of critical appraisal?
Critical appraisal helps health professionals make decisions based on high-quality, clinically relevant evidence.
The quality of research refers to how it was conducted, including whether results are accurate and unbiased.
What is the first key step in appraisal?
Checking whether the research is peer-reviewed:
Peer-reviewed articles are examined by experts before publication.
However, peer review alone does not guarantee quality, as levels of scrutiny vary between journals.
What are two important appraisal concepts?
Validity – applies to quantitative research (accuracy, bias control).
Rigour – applies to qualitative research (trustworthiness, depth).
What is the second key step in appraisal?
Consider the applicability (generalisability) of the findings, which is assessed through external validity.
What is validity?
Validity refers to how trustworthy and accurate the study’s findings are.
As a healthcare professional, it’s important to assess the type and strength of validity when reviewing research.
What is internal validity?
Internal validity ensures that observed effects in a study are genuinely due to the intervention being tested — not other external or unknown influences.
Internal validity deals with causality — whether a change in one variable (independent variable, x) truly causes a change in another (dependent variable, y).
What is external validity?
External validity refers to how well a study’s findings can be generalized to other people, settings, times, and measures.
It’s important because research should ideally inform real-world practice, not just the specific study setting.
A study with high internal validity (highly controlled) may lack external validity, meaning the findings might not apply in real-life situations.
What is bias in research validity?
Bias threatens both the internal and external validity of research and can distort its results.
What are the common types of bias?
Sample or Selection Bias
- Includes volunteer/referral bias and attention bias.
Measurement or Detection Bias
- Related to how outcomes are measured.
- Examples: number of outcome measures, lack of blinded evaluation, recall or memory bias.
Intervention or Performance Bias
- Related to how the treatment is administered.
- Examples: contamination, co-intervention, timing, site of intervention, different administrators.
What are the two types of studies relating to validity?
Efficacy studies: Focus on showing if a treatment works under ideal, controlled conditions (high internal validity).
Effectiveness studies: Conducted in real-life settings to see if a treatment still works (higher external validity).
How is critical appraisal done for a research article?
Read the article briefly for a general overview.
Then read it in detail multiple times while taking notes.
Finally, compare your understanding with the abstract at the end, not the beginning.
What are the types of trials used?
Non-inferiority trials: Show a new treatment is no worse than a standard one, useful when the new option has other advantages (e.g., cost, convenience).
N-of-1 trials: Single-case studies using randomised interventions, useful for testing treatments in individual patients, such as those with rare conditions.
What are some appraisal tool questions to ask?
What is the clinical question?
Was the study design appropriate?
How was the sample recruited and described?
What data was collected and how?
What was the independent variable (quantitative)?
What potential biases exist?
Other important study limitations to evaluate
Sample
- Was the number of participants sufficient to generalise results?
- Was the sample size justified, especially for efficacy studies?
Dropouts
- Were the number and reasons for dropouts reported?
- How did researchers handle missing data caused by dropouts in their analysis?
Measurement
- How often were outcomes measured?
- Did researchers report that the outcome measures used are reliable and valid?