Critically Appraising Evidence-Intervention Study Flashcards

(107 cards)

1
Q

what elements do we need to consider when critically appraising evidence?

A

purpose

study design/methods

results

appraising clinical relevance

2
Q

what is included in the study design/methods?

A

prospective/retrospective

study population

application of intervention

outcome measures

bias

3
Q

what is included in appraising clinical relevance?

A

external validity

internal validity

applicability

4
Q

what is the definition of the purpose of a research article?

A

what the authors set out to achieve

5
Q

the purpose of an article is important for determining the ____ to your pt

A

applicability

6
Q

t/f: the purpose of the article may not actually be achieved

A

true

7
Q

what is the PICO question?

A

Population
Intervention
Comparison
Outcome

it outlines the parameters for the study or search

more specific is better

8
Q

what is attrition bias?

A

systematic difference bw study groups in the # of participants lost and the way they are lost from the study

9
Q

what is confounding bias?

A

distorted measure of association bw exposure and outcome

10
Q

are most research studies prospective or retrospective studies?

A

prospective

11
Q

what is a prospective study?

A

a study that is designed b4 pts receive treatment

“live” data collection

12
Q

what are the cons of prospective studies?

A

ppl may leave the study

not following protocol

money

time

13
Q

what is the advantage of prospective studies?

A

there is not as much bias

14
Q

what is a retrospective study?

A

a study that is designed after the pts receive rx

chart review

15
Q

what are the cons of retrospective studies?

A

there are no set parameters or quality control, and they are more inclined to have bias

16
Q

t/f: single vs multiple study sites is about how many places are conducting the study, NOT about how many places the participants come from

A

true

17
Q

t/f: more diversity in a study is generally better

A

true

18
Q

what is the advantage of multiple study sites?

A

there are dif lifestyles and populations

19
Q

what is the disadvantage of multiple study sites?

A

interrater reliability is inconsistent

20
Q

what is the difference bw a concurrent control trial and historical control?

A

in a concurrent control trial, an investigator assigns subjects to rx (control or treatment) based on enrollment criteria

a historical control uses prior data to serve as the control group

21
Q

what are the pros of using a historical control?

A

you cut the recruitment amount in 1/2

saves money

saves time

22
Q

what are the cons of a historical control?

A

the 2 different time points make the populations very different

23
Q

what is consecutive sampling?

A

researchers set an entry point and screen everyone who comes through the entry point

24
Q

what is selective sampling?

A

participants come in response to solicitation

25
which type of sampling may advertise, ask for a referral, or go to places in the community and invite ppl to participate?
selective sampling
26
which type of sampling is common and practical?
selective sampling
27
what does inclusion and exclusion criteria have to do with?
who is allowed in the study
28
what questions should be considered about inclusion/exclusion criteria?
do the criteria make clinical sense?
is a clinically relevant population being recruited?
is there bias in the population being recruited?
would your patient have qualified for the study?
if not, are the differences bw your pt and the criteria relevant to potential outcomes?
29
t/f: a study must have a baseline to go off of to see change effects
true
30
what are 3 important questions in the application of intervention?
1) was rx consistent (fidelity)?
2) was it realistic? can it be done realistically?
3) were groups treated equally except for the IV?
31
what are important questions to ask about outcome measures?
are they reliable? are they valid?
do they span the ICF?
do they measure something important?
do they measure something that will change w/rx?
32
what is a bias in research?
a tendency or preference toward a particular result that impairs objectivity
33
what are the selection biases?
referral, volunteer biases
34
t/f: referral bias is related to selective sampling
true
35
what is volunteer bias?
the difference bw individuals who volunteer vs those who do not leads to some people being under or not represented
36
what are the types of measurement bias?
instrument, expectation, and attention biases
37
what is instrument bias?
errors in the instrument used to collect data
38
what is expectation bias?
when no blinding occurs
39
what is attention bias?
when participants know their involvement, they are more likely to give a favorable response
40
what are the types of intervention bias?
proficiency, compliance (attrition) biases
41
what is proficiency bias?
bc of dif skills of PTs or dif sites, interventions are not applied equally
42
what is compliance bias?
losing people in a study
43
what is confirmation bias?
researchers may miss observing a certain phenomenon bc of a focus on the hypothesis testing
44
what are the types of biases?
selection bias
measurement bias
intervention bias
confirmation bias
confounding bias
45
t/f: missing data from attrition is unavoidable in clinical research w/follow-up visits
true
46
how does attrition introduce bias?
demographics of participants in the study change
ppl who leave are likely dif from those who stay, and only compliant pts are studied
creates missing data
47
what is intention-to-treat analysis?
analyzing data as though the participants remained in their assigned groups after leaving a study
one approach to make up for missing data created by attrition
48
what are the statistical approaches to intention-to-treat?
last observation carried forward
best and worst case approaches (both often used in combo)
regression models (esp multiple regression models)
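As a concrete illustration, the first of these approaches (last observation carried forward) can be sketched in a few lines of Python; the visit scores below are hypothetical:

```python
# Last observation carried forward (LOCF): each missing visit (None)
# is filled with the participant's most recent observed score, so a
# dropout stays in the analysis of their assigned group.
def locf(visit_scores):
    filled, last = [], None
    for score in visit_scores:
        if score is not None:
            last = score
        filled.append(last)
    return filled

# A participant who dropped out after the second of four visits:
print(locf([50, 55, None, None]))  # [50, 55, 55, 55]
```

LOCF is simple but conservative, which is why it is often paired with best/worst case analyses or regression models.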
49
what is confounding bias?
when a 3rd uncontrolled variable influences the DV and can falsely show an association
50
t/f: confounding bias strengthens internal validity
false, it hurts internal validity
51
t/f: confounding error makes it difficult to establish a clear cause and effect link bw IV and DV
true
52
how can we reduce confounding bias?
by setting very clear inclusion/exclusion criteria
53
what is involved in understanding the results of an intervention study?
statistics
identifying potential problems in inferential stats
summarizing the clinical bottom line
read the tables and figures
54
what are the 3 categories of statistics?
descriptive stats
inferential stats
clinically relevant stats
55
what statistics evaluates the importance of changes in outcomes for PT care?
clinically relevant statistics
56
what things do we need to know about interpreting results from descriptive statistics?
how to classify different types of data
which results are from descriptive stats
the difference bw normal and skewed distributions (and why it matters)
how to interpret reported means, medians, modes, SDs, proportions, and ranges
how different types of data are presented in descriptive statistics
57
why should we pay attention to descriptive stats?
bc it helps determine where a majority of data falls (demographics and outcomes)
bc it helps us understand info b4 and after intervention
58
what are the commonly reported stats for nominal data?
proportion
59
what are the commonly reported stats for ordinal data?
proportion, range
60
what are the commonly reported stats for continuous, normally distributed data?
mean, SD, range
61
what are the commonly reported stats for continuous, not normally distributed data?
median, IQR
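The pairing in the last two cards can be checked with Python's stdlib `statistics` module (the data sets are made up for illustration):

```python
import statistics as st

# Normally distributed data: mean and SD summarize it well.
normal_data = [4, 5, 5, 6, 6, 6, 7, 7, 8]
print(st.mean(normal_data), st.stdev(normal_data))

# Skewed data: one outlier drags the mean, so median and IQR are the
# better summary.
skewed_data = [1, 1, 2, 2, 3, 3, 4, 20]
print(st.mean(skewed_data))    # inflated by the outlier
print(st.median(skewed_data))  # resists the outlier
q1, _, q3 = st.quantiles(skewed_data, n=4)
print(q3 - q1)                 # IQR = Q3 - Q1
```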
62
what do we need to know to decide if groups are statistically significantly different?
p values
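The p-value decision rule behind this card can be sketched in a couple of lines (the p values are hypothetical):

```python
# A between-groups difference is called statistically significant
# when the reported p value falls below the chosen alpha
# (0.05 by convention).
def significant(p_value, alpha=0.05):
    return p_value < alpha

print(significant(0.03))  # True: reject the null hypothesis
print(significant(0.20))  # False: fail to reject the null
```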
63
t/f: descriptive stats are useful but insufficient to make conclusions about the differences bw groups
true
64
when interpreting and appraising results of inferential stats, what questions need to be asked?
what is being compared?
what type of data is being compared? (para/nonpara, categorical/continuous)
was the right stat test used?
65
what is the importance of randomization?
it ensures that groups are similar
66
group differences at baseline may be due to what?
potential error/bias
67
what things may lead to group differences at baseline?
unsuccessful randomization
poor inter/intra-rater reliability or test-retest reliability
poor reliability of instruments/tests
68
what happens if alpha is larger than 0.05 (standard)?
there is less probability of type 2 error
there is greater tolerance of type 1 error
it is easier to have FP
69
what happens if alpha is smaller than 0.05 (standard)?
there is a reduced chance of FP
it is harder to detect significance
it is less likely to incorrectly reject the null
70
when would the alpha be smaller?
with post hoc bonferroni corrections
71
what is the effect size?
an estimate of the magnitude of the dif bw groups (effect of the different interventions)
72
the effect size indicates the strength of the decision on what?
H0
73
the bigger the effect size, the ___ our decision on the H0.
stronger
74
t/f: the effect size depends on the test used
true
75
what is the value used to measure the effect size for t test?
cohen's d
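A sketch of the pooled-SD form of Cohen's d, one common way to compute it (the group scores are invented):

```python
import statistics as st

# Cohen's d for two independent groups: the difference in means
# divided by the pooled standard deviation.
def cohens_d(group_a, group_b):
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * st.variance(group_a) +
                  (nb - 1) * st.variance(group_b)) / (na + nb - 2)
    return (st.mean(group_a) - st.mean(group_b)) / pooled_var ** 0.5

treatment = [10, 12, 14, 16]
control = [8, 9, 11, 12]
print(round(cohens_d(treatment, control), 2))  # 1.34
```

By the usual small/medium/large conventions, a d above 0.8 counts as a large effect.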
76
what is the value used to measure the effect size for ANOVAs?
partial eta squared
77
what are different strengths of effects sizes?
small, medium, and large effect
78
how does variability affect effect size?
the greater the variability the smaller the effect size
79
if a curve is flatter, what does this mean about the variability? the effect size? the sample size?
the variability is greater
the effect size is smaller
the sample size is smaller
80
when the effect size is smaller, is it more difficult or easier to distinguish differences bw null and alternative?
more difficult
81
what is statistical power?
1 - beta
the probability of rejecting the null hypothesis when H0 is false (correctly rejecting a false H0)
82
when there is greater power is there lower type 1 or 2 error?
lower type 2 error
83
when beta increases, power ___, when beta decreases, power _____.
decreases, increases
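The relationship in the last few cards is definitional, and can be sketched directly (the beta values here are hypothetical):

```python
# Power = 1 - beta, so beta (the type 2 error rate) and power always
# move in opposite directions.
def statistical_power(beta):
    return 1 - beta

print(statistical_power(0.2))  # 0.8, the conventional minimum target
print(statistical_power(0.4))  # more beta -> less power
```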
84
t/f: greater statistical power=stronger conclusion
true
85
generally, studies should have power of greater than what?
0.8 (80% chance of detecting a real difference)
86
larger sample size=___ effect size=____ w/in group variability
larger, less
87
smaller sample size=___effect size=___w/in group variability
smaller, more
88
when should power analysis be done? why?
b4 the study in order to calculate how many samples you need
89
if there is insufficient power, there is a larger risk for what type of error?
type 2 errors
90
t/f: if there is insufficient power, the validity of findings can be questionable
true
91
why is a study with insufficient power (too small N) a problem?
bc the type 1 or 2 error will be too high
bc the study might find a difference bw groups when a difference doesn't really exist
bc the study might find no difference bw groups when a difference actually exists
92
what are the types of clinical meaningfulness?
minimal detectable change (MDC)
minimal clinically important difference (MCID or MID)
93
what question does the MDC and MCID answer?
are the results significant and meaningful?
94
what does the MDC indicate?
the amount of change required to exceed measurement variability
95
what does the MCID indicate?
the amount of change required to produce clinically meaningful change
96
is the MDC or MCID derived using a stable sample at 2 time points?
MDC
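One commonly used formula for the MDC at 95% confidence combines the SD of the stable sample with its test-retest reliability (ICC); the numbers below are hypothetical:

```python
import math

# MDC95 = 1.96 * SEM * sqrt(2), where the standard error of
# measurement SEM = SD * sqrt(1 - ICC) comes from a stable sample
# measured at two time points.
def mdc95(sd, icc):
    sem = sd * math.sqrt(1 - icc)
    return 1.96 * sem * math.sqrt(2)

# e.g. an outcome measure with SD = 10 points and ICC = 0.90:
print(round(mdc95(10, 0.90), 1))  # about 8.8 points of change needed
```

A pt's change score must exceed this value before it can be read as real change rather than measurement variability.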
97
is the MDC or MCID best estimated in a change sample over time?
MCID
98
t/f: statistical significance could be defined at any point greater than "no change" depending on the sample size and SD
true
99
what things do we need to consider when appraising clinical relevance?
external validity
internal validity
100
what is external validity?
the generalizability of a study to a pt in clinical practice
101
what are things we need to consider with external validity?
is the study population applicable to your client?
is the intervention applicable to your clinical setting?
are the outcome measures applicable to your clinical question?
can the results be applied to your client in your clinical setting?
102
what is internal validity?
being sure that the results of a study are due to the manipulations within the experiment
103
what things need to be considered about internal validity?
was the study designed and carried out w/sufficient QUALITY?
was the study conducted w/sufficient rigor that it can be used for clinical decision making?
does the way the participants were recruited avoid/minimize systematic bias?
does the study design avoid/minimize systematic bias?
does the application of the interventions (IV) avoid/minimize systematic bias?
do the outcome measures avoid/minimize systematic bias? do they have established validity and reliability?
104
what are the study design considerations?
study design (randomized control trial, case study, etc)
control vs comparison used
are the participants in each group similar at the start of the study?
is there blinding?
is the attrition <20%? (it should be)
are the reasons for dropouts explained?
are follow-up assessments conducted at sufficient intervals (3 or 6 months) post intervention for LT effects?
are the funding sources stated, and could they create bias?
105
t/f: sponsors for a study are a bad thing
false, they are not innately bad, but we need to make sure that we consider the possible effects of it
106
what are 5 things we need to look for when a study reports its stats?
1) are the statistical methods appropriate for the distribution of the data and the study design?
2) are the investigators controlling for confounding variables that could impact the outcome other than the intervention?
3) is the intent-to-treat analysis performed?
4) do the investigators address whether statistically significant results were clinically meaningful (ie MCID)?
5) are confidence intervals reported?
107
what questions are important in summarizing the clinical bottom line?
what were the characteristics and size of the study samples?
were the groups similar at baseline?
were outcome measures reliable and valid?
were appropriate descriptive and inferential stats analyses applied to the results?
was there a treatment effect? if so, was it clinically relevant?