Chapter 9: Reliability Flashcards

1
Q

What are the basic concepts of reliability?

A

Measurement error
Observed score = true score ± error component

2
Q

What are types of measurement error?

A

Systematic
Random

3
Q

What are sources of measurement error?

A

The person taking the measurements
The measuring instrument
Variability in the characteristic being measured (e.g., blood pressure)

4
Q

What is generalizability theory?

A

Not all error is random
Some error components can be attributed to other sources, such as rater or test occasion.

5
Q

What is relative reliability?

A

Reflects true variance as a proportion of total variance in a set of scores
Measured as a unitless coefficient
Intraclass correlation coefficients (ICC) and kappa coefficients are commonly used

6
Q

What is absolute reliability?

A

Indicates how much of a measured value, expressed in the original units, is likely due to error
Standard error of the measurement (SEM) is commonly used

7
Q

What should we understand about reliability?

A

Reliability exists in a context
- Relevant to a tool’s application
Reliability is not all-or-none
- Exists to some extent in any instrument

8
Q

What are types of reliability?

A

Test-retest
Rater
Alternate forms
Internal consistency

9
Q

What are considerations for test-retest reliability?

A

Most meaningful for measures that do not rely on raters
The interval between tests
- Should be chosen to support stability of the measurement
Carryover effects
- From practice or learning
Testing effects
- The act of measurement itself changes the outcome

10
Q

What is intrarater reliability?

A

Consistency of measurements made by a single rater

11
Q

What is interrater reliability?

A

Consistency of measurements across two or more raters
Best assessed when all raters measure the same response

12
Q

What are change scores?

A

The difference between scores from one trial to the next; with large error variance, the true-score change from trial 1 to trial 2 may be canceled out, leaving a difference composed mostly of error

13
Q

What is regression toward the mean?

A

Tendency for extreme scores to fall closer to the mean upon retesting

14
Q

What is minimal detectable change?

A

Based on the standard error of the measurement (SEM)
Amount of change that goes beyond error
Also known as minimal detectable difference, smallest real difference, smallest detectable change, coefficient of repeatability, or the reliability change index.

15
Q

How can we maximize reliability?

A

Standardize measurement protocols
Train raters
Calibrate and improve the instrument
Take multiple measurements
Choose a sample with a range of scores
- Must have variance in scores to show reliability
Conduct pilot testing

16
Q

What is classical measurement theory?

A

Any observed score (Xo) consists of two components: a true score (Xt) that is a fixed value, and an unknown error component (E) that may be large or small, depending on the accuracy and precision of our measurement procedures.

17
Q

What is measurement error?

A

Any difference between the true value and the observed value

18
Q

What is systematic error?

A

Systematic errors are predictable errors of measurement. They occur in one direction, consistently overestimating or underestimating the true score.
Systematic errors are consistent. Consequently, systematic errors are not a threat to reliability. Instead, they only threaten the validity of a measure.

19
Q

What is random error?

A

Presuming that an instrument is well calibrated and there is no other source of systematic bias, measurement errors can be considered “random” since their effect on the observed score is not predictable.
These errors are a matter of chance, possibly arising from factors such as examiner or subject inattention, instrument imprecision, or unanticipated environmental fluctuation.

20
Q

The intraclass correlation coefficient is used with what type of data?

A

quantitative

21
Q

The kappa coefficient is used with what type of data?

A

categorical

22
Q

What is alternate forms reliability?

A

Alternate forms reliability, also called equivalent or parallel forms reliability, assesses the differences between scores on two different forms of the same test to determine whether they agree.

23
Q

What is internal consistency?

A

Internal consistency, or homogeneity, reflects the extent to which the items that comprise a multi-item test succeed in measuring the various aspects of the same characteristic and nothing else.