Week 7 - Measurement: Reliability and Validity Flashcards
(7 cards)
What are some reliability terms?
Systematic Error:
- Errors that are consistent, predictable, and usually due to a flaw in the measurement system
Random Error:
- Errors that are unpredictable and occur by chance due to variability in measurement conditions or human error (see the sketch after this card)
Intra-rater Reliability:
- The degree of consistency when the same person (rater) measures or assesses the same thing multiple times under similar conditions.
Inter-rater Reliability:
- The degree of agreement or consistency between different raters measuring or assessing the same thing.
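A minimal Python sketch of the error terms above, using made-up height readings: systematic error shifts every reading by a consistent offset, while random error scatters readings unpredictably around the true value.

```python
# Hypothetical illustration: simulated height readings (cm) with systematic vs. random error.
import numpy as np

rng = np.random.default_rng(0)
true_value = 180.0  # assumed true height in cm

# Systematic error: a consistent, predictable bias (e.g., a tape that always reads ~2 cm high)
systematic = true_value + 2.0 + rng.normal(0, 0.1, size=10)

# Random error: unpredictable scatter around the true value that tends to average out over repeats
random_only = true_value + rng.normal(0, 2.0, size=10)

print(f"Systematic error: mean = {systematic.mean():.1f} cm (consistent offset from {true_value})")
print(f"Random error:     mean = {random_only.mean():.1f} cm, SD = {random_only.std(ddof=1):.1f} cm")
```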
What is measurement validity?
Measurement validity ensures that a tool or method accurately measures what it is intended to measure.
What are the types of measurement validity?
Face Validity:
Whether the measurement appears, at first glance, to measure the intended concept.
Content Validity:
The extent to which the measurement covers all aspects or dimensions of the concept.
Criterion Validity:
How well the measurement correlates with an established “gold standard” measure of the same concept, showing that both tools measure the same thing (see the sketch after this list).
Construct Validity:
How well the measurement reflects the theoretical construct it aims to assess. It evaluates whether the tool truly captures the full scope of an abstract concept, beyond just one dimension.
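As a sketch of criterion validity, the snippet below correlates a new tool's readings against a gold-standard measure; all values are hypothetical, and Pearson's r is used here as the correlation statistic.

```python
# Hypothetical criterion validity check: new tool vs. established gold standard.
import numpy as np
from scipy.stats import pearsonr

gold_standard = np.array([12.0, 15.5, 18.2, 21.0, 24.7, 28.1])  # established measure
new_tool = np.array([12.4, 15.1, 18.9, 20.5, 25.0, 27.6])       # tool being validated

r, p = pearsonr(gold_standard, new_tool)
print(f"Criterion validity: Pearson's r = {r:.2f} (p = {p:.3f}) against the gold standard")
```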
What is measurement reliability?
Reliability refers to how dependable, stable, and consistent a measurement tool or performance is when repeated under the same conditions.
What are the two key concepts of reliability?
Consistency:
Measured by correlation coefficients (e.g., Pearson's r), ranging from 0 (no correlation) to 1 (perfect correlation), showing how strongly two measurements relate.
Agreement:
Beyond correlation, agreement measures how much two measurements differ in actual units (e.g., cm, °C, minutes).
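A minimal sketch (made-up knee-flexion angles) of why consistency and agreement differ: two raters can correlate perfectly yet still differ by a fixed amount in actual units.

```python
# Hypothetical example: perfect consistency (r = 1) but imperfect agreement (5-degree offset).
import numpy as np
from scipy.stats import pearsonr

rater_a = np.array([30.0, 32.5, 35.0, 37.5, 40.0])  # knee flexion, degrees
rater_b = rater_a + 5.0                              # same pattern, consistently 5 degrees higher

r, _ = pearsonr(rater_a, rater_b)
mean_diff = np.mean(rater_b - rater_a)

print(f"Consistency: Pearson's r = {r:.2f}")
print(f"Agreement:   mean difference = {mean_diff:.1f} degrees")
```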
What factors can affect reliability?
Test-retest reliability: consistency of the test over time
Intra-tester reliability: consistency when the same tester repeats the measurement
Inter-rater reliability: consistency when different raters take the same measurement
Correlation Coefficients and Their Interpretation
Correlation coefficients (such as Pearson’s r or the ICC) range from 0 to 1; Pearson’s r can also run from –1 to 0 for negative correlations.
0 to ±0.25: No or poor reliability
±0.25 to ±0.50: Fair reliability
±0.50 to ±0.75: Moderate to good reliability
Above ±0.75: Very good to excellent reliability
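A minimal helper (name and thresholds are illustrative, following the bands above) that classifies a coefficient by its absolute value:

```python
# Sketch: classify a correlation coefficient using the interpretation bands on this card.
def interpret_reliability(r: float) -> str:
    magnitude = abs(r)
    if magnitude <= 0.25:
        return "no or poor reliability"
    elif magnitude <= 0.50:
        return "fair reliability"
    elif magnitude <= 0.75:
        return "moderate to good reliability"
    return "very good to excellent reliability"

print(interpret_reliability(0.82))   # very good to excellent reliability
print(interpret_reliability(-0.40))  # fair reliability
```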