Week 10 - Assessing Research Flashcards

(16 cards)

1
Q

Faulty conclusions in research result from:

A

Internal validity threats - whether the change in the DV is actually due to the IV
External validity threats - whether the findings are generalisable

2
Q

Vaccines and autism example

A

1998 paper (Wakefield et al., The Lancet) described the MMR vaccine as a ‘precipitating event’ for autism
Paper retracted in 2010 - biased sample, data manipulation, undisclosed COIs

3
Q

Burden of misconduct

A

Most retractions involve scientific fraud (fabrication, falsification, plagiarism) or other misconduct (fake peer review)

4
Q

Critical appraisal

A

Assessment of a study’s strengths against its limitations (both process and results)
Must consider all aspects - RQ clarity, design quality, statistical analyses, interpretation of results
Inherently retrospective (unless it is a registered report)
Very important - informs whether to ‘believe’ an effect, build future research on it, or endorse a particular therapy or intervention

5
Q

Formal assessment tools

A

Assessment of quality
- CASP (Critical Appraisal Skills Program) - for specific study types
- Cochrane’s Risk of Bias - for RCTs specifically
Article reporting guidelines
- CONSORT checklist (Consolidated Standards of Reporting Trials)
- APA JARS
Assessment of strength of evidence for specific interventions
- NHMRC evidence hierarchy

6
Q

NHMRC evidence hierarchy

A

The only thing ranked above RCTs or prospective cohort studies is a systematic review of those study types

7
Q

NHMRC body of evidence & grades of recommendation

A

Evidence base, consistency, clinical impact, generalisability, applicability
(all ranked from A to D)
Gives overall A to D grade

8
Q

CASP checklist (RCT version)

A

A. Are results valid?
- 1. clearly focused issue, 2. random assignment, 3. all participants accounted for at conclusion, 4. blind study, 5. similar groups at start, 6. groups treated equally
B. What are the results?
- 7. size of effect, 8. precision of effect
C. Will results help locally?
- 9. can results be applied to local population, 10. were all clinically important DVs considered, 11. are benefits worth harms/costs

9
Q

Appraisal guidelines

A

Geared toward intervention studies, looking for scientific evidence of potential effects
RCTs considered the gold standard (trumped only by evidence syntheses)
Consider why someone is reading the article - research direction, clinical practice, personal interest (this affects which appraisal criteria get the most weight)

10
Q

Gamification study example - A. are results valid?

A
  1. Clear focus
  2. Random allocation
  3. Details about attrition rates given
  4. Not fully blind (but tricky in psychology)
  5. Baseline groups similar on demographics but differed on key outcomes (floor/ceiling effects)
  6. Waitlist group treated differently (received fewer emails)
11
Q

How to deal with attrition?

A

Check for differential attrition (could lead to bias)
Simple imputation - last observation carried forward or mean imputation
Multiple imputation (best) - missing data imputed via a more sophisticated model

Intention-to-treat protocol is the ideal - all randomised participants are included in the analysis, regardless of whether they completed the trial
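The two simple imputation strategies above can be sketched with pandas (hypothetical participant data; multiple imputation would need a dedicated package and is not shown):

```python
import numpy as np
import pandas as pd

# Hypothetical outcome scores for 4 participants at 3 time points
# (NaN marks scores missing after dropout).
scores = pd.DataFrame(
    {"t1": [10.0, 12.0, 9.0, 11.0],
     "t2": [11.0, 13.0, np.nan, 12.0],
     "t3": [12.0, np.nan, np.nan, 13.0]}
)

# Last observation carried forward: fill each participant's missing
# time points with their last recorded score (fill across columns).
locf = scores.ffill(axis=1)

# Mean imputation: replace each missing value with that
# time point's mean across the observed participants.
mean_imputed = scores.fillna(scores.mean())
```

Either way the analysis can then include every randomised participant, consistent with an intention-to-treat protocol.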

12
Q

Gamification study example - B. what are the results?

A
  1. Active group differed from waitlist but not from control on most outcomes (not clearly reported); decent adherence effect (especially in the active group)
  2. No error bars or CIs included (poor precision reporting)
13
Q

Gamification study example - C. will the results help locally?

A
  1. Fairly narrow and homogenous sample
  2. Reliability of measures seems good; knowing whether all important outcomes were considered requires discipline knowledge
  3. Study is robust but has room for improvement; still, the benefits do not seem worth it, as the intervention is no better than existing treatment
14
Q

Quality assessment

A

Global assessment of paper quality (assign scores to appraisal criteria, then sum the scores)
- Pros - quantifies quality, enables comparisons regardless of journal IF
- Cons - assumes all items carry equal weight; a study could fail key areas yet pass overall
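The equal-weight con can be made concrete with a tiny hypothetical scoring sketch (criteria names, scores, and pass mark all invented for illustration):

```python
# Hypothetical appraisal: each criterion scored 0-2, then summed.
criteria = {"clear_question": 2, "randomisation": 0, "blinding": 2,
            "attrition_handled": 2, "appropriate_analysis": 2}

total = sum(criteria.values())   # 8 out of a possible 10
passed_overall = total >= 7      # hypothetical pass mark

# The study "passes" overall (8/10) despite scoring 0 on a key
# criterion (randomisation) - the weakness a summed score can hide.
```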

15
Q

Journal quality

A

Peer review should reduce bias (but standards of rigour vary, and predatory journals exist)
Journals have an impact factor (JIF) - citations in a year to items published in the previous 2 years, divided by the number of citable items published in that period
- Does this measure quality? - not necessarily (disciplines with few researchers, squeaky wheel)
- Different types of IF exist with different data weighting
- Can IF mislead? - some article types are more likely to be cited than others, and journals can act to increase their IF (excessive self-citation)
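The 2-year JIF arithmetic, shown with hypothetical counts:

```python
# 2-year impact factor for a journal in year Y (hypothetical numbers):
# citations received in Y to items published in Y-1 and Y-2,
# divided by the number of citable items published in Y-1 and Y-2.
citations_to_prev_2_years = 600
citable_items_prev_2_years = 150

jif = citations_to_prev_2_years / citable_items_prev_2_years  # 4.0
```

Note that the numerator counts citations to *all* content, while the denominator counts only "citable" items - one reason mixes of article types can inflate the figure.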

16
Q

How to evaluate and do good science

A

Use the assessment tools to better design, report and evaluate studies
Consider pre-registration or data sharing (e.g. via the Open Science Framework)