Week 7 - Measurement: Reliability and Validity Flashcards

(7 cards)

1
Q

What are some key reliability terms?

A

Systematic Error:
- Errors that are consistent, predictable, and usually due to a flaw in the measurement system

Random Error:
- Errors that are unpredictable and occur by chance due to variability in measurement conditions or human error (both error types are illustrated in the sketch after this list)

Intra-rater Reliability:
- The degree of consistency when the same person (rater) measures or assesses the same thing multiple times under similar conditions.

Inter-rater Reliability:
- The degree of agreement or consistency between different raters measuring or assessing the same thing.
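
To make the first two terms concrete, here is a minimal sketch in Python simulating a measurement with both error types. The true value, bias, and noise level are hypothetical numbers chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
true_value = 100.0  # hypothetical true quantity being measured

# Systematic error: a consistent, predictable offset
# (e.g., a miscalibrated instrument adds the same bias to every reading)
bias = 2.5

# Random error: unpredictable, chance variation around the true value
noise = rng.normal(loc=0.0, scale=1.0, size=10)

measurements = true_value + bias + noise

# The mean is shifted by the systematic error;
# the spread (standard deviation) reflects the random error
print(f"mean = {measurements.mean():.2f} (shifted from {true_value} by the bias)")
print(f"std  = {measurements.std(ddof=1):.2f} (random error)")
```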

2
Q

What is measurement validity?

A

Measurement validity is the extent to which a tool or method accurately measures what it is intended to measure.

3
Q

What are the types of measurement validity?

A

Face Validity:
Whether the measurement appears, at first glance, to measure the intended concept.

Content Validity:
The extent to which the measurement covers all aspects or dimensions of the concept.

Criterion Validity:
How well the measurement correlates with an established “gold standard” measure of the same concept, showing that both tools measure the same thing (see the sketch after this list).

Construct Validity:
How well the measurement reflects the theoretical construct it aims to assess. It evaluates whether the tool truly captures the full scope of an abstract concept, beyond just one dimension.
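
To make criterion validity concrete, here is a hedged sketch that correlates a hypothetical new tool against an established gold-standard measure using Pearson's r; all scores are invented for illustration.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical scores: a new screening tool vs. an established gold standard
new_tool      = np.array([12, 15, 18, 22, 25, 28, 31, 35])
gold_standard = np.array([14, 16, 17, 23, 24, 29, 33, 36])

r, p = pearsonr(new_tool, gold_standard)
print(f"criterion validity: r = {r:.2f}, p = {p:.3f}")
# A high correlation suggests the new tool tracks the gold standard closely
```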

4
Q

What is measurement reliability?

A

Reliability refers to how dependable, stable, and consistent a measurement tool or performance is when repeated under the same conditions.

5
Q

What are the two key concepts of reliability?

A

Consistency:
Measured by correlation coefficients (e.g., Pearson's r), which range from 0 (no correlation) to 1 (perfect correlation) and show how strongly two measurements relate.

Agreement:
Beyond correlation, agreement measures how much two measurements differ in actual units (e.g., cm, °C, minutes).
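
A minimal sketch of both concepts on hypothetical paired measurements. The agreement part uses Bland-Altman-style limits of agreement, a common way to express differences in actual units; the data and the choice of that technique are assumptions for illustration, not from the card itself.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical paired measurements (e.g., joint angle in degrees, two sessions)
session1 = np.array([30.0, 42.5, 55.0, 61.0, 48.5, 38.0])
session2 = np.array([32.0, 44.0, 57.5, 63.5, 50.0, 40.5])

# Consistency: how strongly the two sets of measurements relate
r, _ = pearsonr(session1, session2)
print(f"consistency (Pearson's r): {r:.2f}")

# Agreement: how much the measurements differ in actual units
diff = session2 - session1
lower = diff.mean() - 1.96 * diff.std(ddof=1)
upper = diff.mean() + 1.96 * diff.std(ddof=1)
print(f"mean difference: {diff.mean():.2f} units")
print(f"95% limits of agreement: {lower:.2f} to {upper:.2f} units")
```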

6
Q

What factors can affect reliability?

A

Test-retest reliability: consistency of the test when repeated over time

Intra-rater reliability: consistency when the same tester repeats the measurement

Inter-rater reliability: consistency with which different raters obtain the same measurement relative to each other (see the ICC sketch below)
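
Inter-rater reliability is often quantified with an intraclass correlation coefficient (ICC). The sketch below implements ICC(2,1) (two-way random effects, absolute agreement, single rater) from its standard ANOVA decomposition; the ratings matrix is hypothetical, and the choice of ICC form is an assumption for illustration.

```python
import numpy as np

def icc_2_1(scores: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    `scores` is an (n_subjects x n_raters) matrix of ratings.
    """
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)  # per-subject means
    col_means = scores.mean(axis=0)  # per-rater means

    # Mean squares from the two-way ANOVA decomposition
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)  # between subjects
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)  # between raters
    sse = (np.sum((scores - grand) ** 2)
           - (n - 1) * msr - (k - 1) * msc)
    mse = sse / ((n - 1) * (k - 1))                       # residual

    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical data: 5 subjects, each scored by 3 raters
ratings = np.array([[9, 2, 5],
                    [6, 1, 3],
                    [8, 4, 6],
                    [7, 1, 2],
                    [10, 5, 6]])
print(f"ICC(2,1) = {icc_2_1(ratings):.2f}")
```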

7
Q

Correlation Coefficients and Their Interpretation

A

Correlation coefficients such as Pearson's r range from −1 to +1 (negative values indicate an inverse relationship); reliability coefficients such as the ICC typically range from 0 to 1. Common benchmarks, applied to the magnitude of the coefficient:

0 to 0.25: Little or no reliability

0.25 to 0.50: Fair reliability

0.50 to 0.75: Moderate to good reliability

Above 0.75: Very good to excellent reliability
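
A small sketch turning the benchmarks above into a lookup function; the thresholds are exactly the ones on this card, applied to the magnitude of the coefficient.

```python
def interpret_reliability(r: float) -> str:
    """Map a correlation coefficient to the reliability bands on this card."""
    r = abs(r)  # the bands apply to the magnitude of the coefficient
    if r <= 0.25:
        return "little or no reliability"
    if r <= 0.50:
        return "fair reliability"
    if r <= 0.75:
        return "moderate to good reliability"
    return "very good to excellent reliability"

print(interpret_reliability(0.82))  # -> very good to excellent reliability
```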
