Reliability & Validity 1 Flashcards

1
Q

reliability

A

degree to which the same event produces the same result

-consistent, repeatable, dependable, reproducible

many reliability (agreement) estimates are based on measures of correlation (association), while also accounting for measurement error

2
Q

measurement error

A

“the noise”

increased measurement error = decreased reliability

decreased measurement error = increased reliability

3
Q

types of measurement error

A

systematic errors- predictable, consistent, and constant

random errors- unpredictable, inconsistent, variable

**great graph in notes on description
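
A minimal Python sketch (not from the notes) illustrating the distinction; the measurement values and error sizes are hypothetical.

```python
import numpy as np

# Hypothetical example: a true knee ROM of 120 degrees measured 10 times.
rng = np.random.default_rng(0)
true_value = 120.0

# Systematic error: predictable, consistent, constant
# (e.g., a goniometer that always reads 5 degrees high).
systematic = np.full(10, true_value + 5.0)

# Random error: unpredictable, inconsistent, variable noise around the true value.
random_error = true_value + rng.normal(0.0, 3.0, size=10)

print("systematic:", systematic.round(1))    # every trial off by the same amount
print("random:    ", random_error.round(1))  # trials scatter around the true value
```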

4
Q

measurement error sources

A

examiner, examined, examination

5
Q

examiner

A

systematic errors- consistent use of improper landmarks

random errors- fatigue, inattention

biological variation in the senses: clinicians rely on sight, hearing, and touch. Variation in acuity affects agreement (between clinicians, within the same clinician)

intratester, intertester, and test retest reliability

6
Q

examined

A

biological variation in the system

clinical attributes vary (HR, BP, pain intensity). These variations lead to inconsistencies.

7
Q

examination

A

disruptive environments affecting the senses (dim lighting, noisy environment). Privacy of setting.

Disruptive interactions.

Incorrect use of diagnostic tools.

8
Q

How to minimize measurement error

A

operational definition

proper training

inspection of equipment

independent interpretation of test results

“blinding” of examiner to diagnosis

separate observation from inference

9
Q

test retest reliability

A

measures the consistency of the testing instrument over time. The most basic way to assess reliability.

10
Q

internal consistency

A

measures the extent to which the items of an instrument measure aspects of the same characteristic (Cronbach’s alpha).
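
A minimal Python sketch of Cronbach's alpha, assuming a subjects-by-items score matrix; the questionnaire data below are hypothetical.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_subjects x n_items) score matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_variance = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical 5-subject, 4-item questionnaire
scores = np.array([[3, 4, 3, 4],
                   [2, 2, 3, 2],
                   [4, 5, 4, 5],
                   [1, 2, 1, 2],
                   [3, 3, 4, 3]])
print(round(cronbach_alpha(scores), 2))
```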

11
Q

intra-rater reliability

A

measures the stability of measures recorded by one individual across 2 or more trials

12
Q

inter-rater reliability

A

variation between 2 or more raters measuring the same group of subjects

13
Q

ICC

A

reflects both relationship and agreement

correlation alone cannot measure reliability

a ratio comparing between-individual variability with error variability

ratio approaching 0 indicates no agreement

ratio approaching 1.0 suggests perfect agreement

no universal standards for interpretation
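
A minimal Python sketch of one form of the ICC (the one-way model, ICC(1,1)); other two-way forms exist, and the ratings below are hypothetical.

```python
import numpy as np

def icc_one_way(ratings: np.ndarray) -> float:
    """ICC(1,1) from an (n_subjects x k_raters) matrix, via one-way ANOVA."""
    n, k = ratings.shape
    subject_means = ratings.mean(axis=1)
    grand_mean = ratings.mean()
    # Between-subjects and within-subjects (error) mean squares
    bms = k * ((subject_means - grand_mean) ** 2).sum() / (n - 1)
    wms = ((ratings - subject_means[:, None]) ** 2).sum() / (n * (k - 1))
    # Ratio approaches 1.0 with perfect agreement, 0 with no agreement
    return (bms - wms) / (bms + (k - 1) * wms)

# Hypothetical: 5 patients, each measured by 2 raters
ratings = np.array([[10.0, 11.0],
                    [20.0, 19.0],
                    [30.0, 32.0],
                    [40.0, 41.0],
                    [50.0, 48.0]])
print(round(icc_one_way(ratings), 3))   # close to 1.0 -> near-perfect agreement
```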

14
Q

ICC interpretation

A

> 0.90 = excellent reliability

0.75–0.90 = good reliability

< 0.75 = poor to moderate reliability
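
A small Python sketch applying the thresholds above; the cutoffs come from this card, not from a universal standard.

```python
def interpret_icc(icc: float) -> str:
    # Thresholds from the card above; interpretation standards vary.
    if icc > 0.90:
        return "excellent reliability"
    if icc > 0.75:
        return "good reliability"
    return "poor to moderate reliability"

print(interpret_icc(0.82))   # good reliability
```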

15
Q

Pearson product moment correlation

A

has major flaws when used independently as an indicator of reliability

why? It is a measure of association between ratings and not a true measure of agreement

Does not distinguish between perfect agreement and systematic bias
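
A minimal Python sketch (with hypothetical ratings) showing the flaw: a constant offset between raters yields a “perfect” Pearson r even though the raters never agree.

```python
import numpy as np

# Hypothetical: rater B scores every subject exactly 10 points higher than rater A.
rater_a = np.array([20.0, 30.0, 40.0, 50.0, 60.0])
rater_b = rater_a + 10.0

r = np.corrcoef(rater_a, rater_b)[0, 1]
print(r)                           # 1.0 -> perfect association
print((rater_b - rater_a).mean())  # 10.0 -> systematic bias, no agreement
```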

16
Q

quantifying reliability-continuous data

A

ICC appears to be the best reliability coefficient option when analyzing continuous data

17
Q

quantifying reliability-categorical variables

A

percent agreement

kappa coefficient (κ statistic): a chance-corrected proportion of agreement.

kappa appears to be the best reliability coefficient option when analyzing categorical data

18
Q

kappa values

A

kappa = (proportion of observed agreement - chance agreement) / (1 - chance agreement)

kappa = 1.0 (perfect agreement)

kappa = 0.0 (no better than chance agreement)

kappa < 0.0 (less than chance agreement)

**graph in notes
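
A minimal Python sketch of the kappa formula above, with hypothetical categorical ratings from two clinicians.

```python
import numpy as np

def cohens_kappa(rater1, rater2) -> float:
    """kappa = (observed agreement - chance agreement) / (1 - chance agreement)"""
    rater1, rater2 = np.asarray(rater1), np.asarray(rater2)
    categories = np.union1d(rater1, rater2)
    p_observed = np.mean(rater1 == rater2)
    # Chance agreement: product of the raters' marginal proportions, summed over categories
    p_chance = sum(np.mean(rater1 == c) * np.mean(rater2 == c) for c in categories)
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical: two clinicians classifying 10 patients as positive/negative
r1 = ["pos", "pos", "neg", "neg", "pos", "neg", "pos", "neg", "pos", "neg"]
r2 = ["pos", "pos", "neg", "pos", "pos", "neg", "pos", "neg", "neg", "neg"]
print(round(cohens_kappa(r1, r2), 2))   # 0.6 -> better than chance agreement
```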

19
Q

bland-altman plots

A

another approach to agreement

plots extensively used to evaluate the agreement between two different instruments or two measurement techniques
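
A minimal Python sketch of a Bland-Altman plot (difference vs. mean, with bias and 95% limits of agreement); the two “instruments” below are simulated, hypothetical data.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical: the same quantity measured by two instruments in 30 subjects
rng = np.random.default_rng(1)
method_a = rng.normal(100, 15, 30)
method_b = method_a + rng.normal(2, 5, 30)    # small bias plus random error

means = (method_a + method_b) / 2
diffs = method_a - method_b
bias = diffs.mean()
loa = 1.96 * diffs.std(ddof=1)                # 95% limits of agreement

plt.scatter(means, diffs)
plt.axhline(bias, color="k", label="mean difference (bias)")
plt.axhline(bias + loa, color="r", linestyle="--", label="limits of agreement")
plt.axhline(bias - loa, color="r", linestyle="--")
plt.xlabel("Mean of the two methods")
plt.ylabel("Difference between the two methods")
plt.legend()
plt.show()
```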