Session 7 - Reliability Flashcards

1
Q

What is the most common mechanism used to establish reliability?

A

Test-Retest.

Types of reliability: test-retest, intratester, intertester.

2
Q

What measures of reliability are based upon parametric assumptions?

A

r, r-squared, and ICC (or R).

3
Q

Parametric Assumptions

A

Between Group Variance
Within Group Variance
Sample Size
Number of Groups

4
Q

what does r=1 mean?

A

a perfect (positive) correlation

5
Q

Pearson Product Moment Correlation Coefficient

A

Most common correlation coefficient.
Can be used for reliability (test-retest, inter-, and intratester).
Can be used to assess the strength of an association between 2 DIFFERENT variables with similar or different units of measure.

6
Q

Problems with Pearson’s (r)

A

Can only compare 2 groups.
Measure of ASSOCIATION, not CONCORDANCE: two sets of scores can vary directly (together) yet be consistently different.
Validity is a weakness.
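The association-vs-concordance problem can be seen numerically. A minimal sketch with hypothetical test-retest data, using a pure-Python implementation of the textbook formula:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: trial 2 reads consistently 5 units higher than trial 1.
trial1 = [10, 20, 30, 40]
trial2 = [15, 25, 35, 45]

print(pearson_r(trial1, trial2))  # 1.0 -- "perfect" r despite zero agreement
```

The two trials never agree on a single value, yet r = 1 because they vary together perfectly.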

7
Q

Coefficient of Determination (r-squared)

A

Percentage of variance accounted for when predicting one measurement from another.
Less useful than the SEM for judging the accuracy of a single value or set of values.

8
Q

Intraclass Correlation Coefficient (ICC or R)

A

Takes into consideration both association AND agreement.
Based upon a repeated-measures ANOVA.
Opposite of what we saw previously: BGV (between-group variance) is the error rather than WGV (within-group variance), b/c we want the groups to be the same (and overlap) rather than different.
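As a sketch of the ANOVA basis, here is ICC(3,1) — one common form, assumed since the card does not name a model — computed from two-way repeated-measures ANOVA mean squares on hypothetical subject-by-trial data:

```python
def icc_3_1(data):
    """ICC(3,1) from a two-way repeated-measures ANOVA.

    data[i][j] = score for subject i on trial/rater j.
    """
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(data[i][j] for i in range(n)) / n for j in range(k)]

    # Mean square for subjects (rows) and for residual error.
    ms_rows = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    ss_err = sum(
        (data[i][j] - row_means[i] - col_means[j] + grand) ** 2
        for i in range(n) for j in range(k)
    )
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Hypothetical goniometry readings: 4 subjects, each measured twice.
data = [[10, 11], [20, 19], [30, 32], [40, 39]]
print(round(icc_3_1(data), 3))  # high ICC: the two trials agree closely
```

Subject variance is large relative to the trial-to-trial error variance, so the ICC is close to 1.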

9
Q

What is the difference between two curves with ICC?

A

tester error.

we want the curves to be identical

10
Q

Like Pearson, ICC is a parametric statistic, so normality of data is assumed and the statistics are sensitive to–

A

Between Group Variance
Within Group Variance
Sample Size
Number of Groups

11
Q

In what situation will you get a lower ICC/reliability coefficient when the actual magnitude of error is the same?

A

when testing in a homogeneous population

12
Q

is statistical significance equal to clinical relevance?

A

NO!!

13
Q

how do you quantify magnitudes of error?

A

Coefficient of determination
Standard Error of Measurement
Minimal Detectable Change

14
Q

(SEM) Standard Error of Measurement

A

in the context of reliability, the SEM represents the variation in a measurement that is due to ERROR ALONE.
How much change in a measure is likely to be the result of random variability rather than “true change”
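A common computational form — assumed here, since the card gives no formula — is SEM = SD × sqrt(1 − reliability coefficient):

```python
from math import sqrt
from statistics import stdev

# Hypothetical baseline scores and a reliability (ICC) of 0.90.
scores = [45, 50, 55, 60, 65]
icc = 0.90

sem = stdev(scores) * sqrt(1 - icc)
print(round(sem, 2))  # 2.5 -- differences smaller than this are likely noise
```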

15
Q

(MDC) minimal detectable change or difference (MDD)

A

some identify the SEM as a measure of the error of any ONE measurement while the MDC is a quantification of the error introduced with repeated measurements.
the error of one measurement is compounded by having that much error again when the second measure is taken.
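A common formula (assumed, not stated on the card) compounds the SEM across two measurements at the 95% confidence level: MDC95 = 1.96 × sqrt(2) × SEM.

```python
from math import sqrt

sem = 2.5  # hypothetical SEM from a reliability study
mdc95 = 1.96 * sqrt(2) * sem
print(round(mdc95, 2))  # ~6.93 -- a change must exceed this to be "real"
```

The sqrt(2) is the compounding of error across the two measurements described above.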

16
Q

(RCI) Reliability Change Index

A

RCI = (current measurement value − previous measurement value) / SD of the difference
RCI gives the probability that the measured change in a patient would have happened if the patient had not truly changed & the difference is, in fact, measurement error.
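The formula above as a sketch with hypothetical numbers (one common convention takes the SD of the difference as SEM × sqrt(2)):

```python
def rci(current, previous, sd_diff):
    """Reliability Change Index: measured change in SD-of-difference units."""
    return (current - previous) / sd_diff

# Hypothetical: score improved from 50 to 58; SD of the difference is 3.5.
z = rci(current=58, previous=50, sd_diff=3.5)
print(round(z, 2))  # ~2.29 > 1.96, so the change is unlikely to be pure error
```

Because the RCI behaves like a z-score, values beyond ±1.96 have less than a 5% probability of occurring from measurement error alone.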

17
Q

what types of data do not meet parametric assumptions?

A

categorical, ordinal

18
Q

Cohen’s Kappa (K)

A

coefficient used for dichotomous or categorical data.
Agreement between judges adjusted for chance agreement.
Considered a true measure of concordance.

19
Q

how is a cohen’s kappa data table set up?

A

2x2 contingency table
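A sketch of unweighted Cohen's kappa on a hypothetical 2x2 contingency table (rows = rater A's categories, columns = rater B's):

```python
def cohens_kappa(table):
    """Kappa = (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = sum(sum(row) for row in table)
    p_observed = sum(table[i][i] for i in range(len(table))) / n
    p_chance = sum(
        (sum(table[i]) / n) * (sum(row[i] for row in table) / n)
        for i in range(len(table))
    )
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical: two raters classify 50 patients as positive/negative.
table = [[20, 5],   # rater A positive: B positive, B negative
         [10, 15]]  # rater A negative: B positive, B negative
print(round(cohens_kappa(table), 3))  # 0.4 -- agreement beyond chance
```

Raw agreement here is 0.70, but half of that is expected by chance alone, so kappa is only 0.4.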

20
Q

how is a weighted kappa set up? (Kw)

A

3x3 contingency table (or larger than 2x2).
When there are more options than raters simply agreeing or not, it is possible to weight the seriousness or magnitude of disagreement.