Module 6-Slides Flashcards

(41 cards)

1
Q

Measurement

A

Process of assigning NUMERALS to variables to represent QUANTITIES of characteristics according to certain rules

2
Q

Construct

A

An ABSTRACT variable that is not observable and is defined by the measurement used to assess it
Considered a latent trait because it reflects a property within a person and is not externally observable
~intelligence, health, pain, mobility, and depression

3
Q

Purpose of Measurement

A

A way for scientists and clinicians to understand, EVALUATE, and differentiate characteristics of people, objects, and systems
Allows them to communicate in OBJECTIVE TERMS, giving a common sense of “how much” or “how little” w/out ambiguous interpretation

4
Q

Levels of Measurement

A

Nominal
Ordinal
Interval
Ratio

5
Q

Ratio

A

Distance, age, time, decibels, weight / Numbers represent units with equal intervals, measured from true zero
*The highest level of measurement with an absolute zero point

6
Q

Interval

A

Calendar years, Celsius, Fahrenheit / Numbers have equal intervals, but no true zero
*Possesses rank order and known, equal intervals between consecutive values, but no true zero

7
Q

Ordinal

A

Manual muscle test, function, pain assessment scale / Numbers indicate rank order
*A rank-ordered measure where intervals between values are unknown and likely unequal

8
Q

Nominal

A

Gender, blood type, diagnosis, ethnicity / Numerals are category labels
*Classifies objects or people into categories with no quantitative order

9
Q

T/F Measurements cannot be taken at different LEVELS or rated using various SCALES.

A

False; Measurements CAN be taken at different LEVELS or rated using various SCALES

Example: pain measurement
yes or no: nominal scale
from 0-10: ordinal scale

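The pain example above can be sketched in code. A minimal illustration (hypothetical data) of why the level of measurement constrains which summary statistics are meaningful:

```python
from statistics import mode, median

# Hypothetical pain data for five patients, measured at two levels
pain_nominal = ["yes", "yes", "no", "yes", "no"]  # nominal: category labels only
pain_ordinal = [7, 5, 0, 8, 2]                    # ordinal: rank order, intervals unknown

# Nominal data supports only counting and the mode
print(mode(pain_nominal))    # "yes"
# Ordinal data adds rank order, so the median is defensible
print(median(pain_ordinal))  # 5
# A mean assumes equal intervals (interval/ratio data), so it is
# questionable for a 0-10 pain rating
```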
10
Q

Why is it important to accurately identify the “level” of measurement?

A

Because selection of statistical tests is based on certain assumptions about the data, including but not limited to the level of measurement

11
Q

Parametric tests

A

Arithmetic manipulations requiring Interval or Ratio level of data

12
Q

Nonparametric tests

A

Do not make the same assumptions; are used with Ordinal or Nominal data

13
Q

Reliability

A

The extent to which “a measured value can be obtained CONSISTENTLY during REPEATED assessment of unchanging behavior”

14
Q

What are the 2 basic types of measurement error?

A

Systematic error
Random error

15
Q

Systematic error

A

Predictable, occurring in a consistent overestimate or underestimate of a measure

16
Q

Random error

A

Has no systematic bias and can occur in any direction or amount

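The two error types can be contrasted in a seeded simulation (all numbers hypothetical): a constant bias stands for systematic error, zero-mean noise for random error.

```python
import random
from statistics import mean

random.seed(0)                 # reproducible illustration
TRUE_VALUE = 100.0             # hypothetical true score
N = 10_000                     # repeated measurements

# Systematic error: a consistent overestimate (constant +2 bias)
systematic = [TRUE_VALUE + 2.0 for _ in range(N)]
# Random error: no bias, varies in direction and amount
random_error = [TRUE_VALUE + random.gauss(0, 1) for _ in range(N)]

# Averaging does not remove systematic error...
print(round(mean(systematic) - TRUE_VALUE, 2))    # 2.0
# ...but random error averages out toward zero over many trials
print(round(mean(random_error) - TRUE_VALUE, 2))  # close to zero
```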
17
Q

Sources of measurement error

A
  1. Measuring INSTRUMENT itself: does not perform in the same way each time
  2. The person/individual taking the measurements (the rater): does not perform the test properly
  3. VARIABILITY of the characteristic being measured: the variable being measured is not consistent over time (ex: BP)
18
Q

Reliability coefficient

A

Provides values that help estimate the degree of reliability (ranging from 0.0 to 1.0)

19
Q

4 general approaches to reliability testing

A
  1. Test-retest reliability
  2. Rater reliability
  3. Alternate forms
  4. Internal consistency
20
Q

Test-Retest Reliability

A

An assessment of how well an instrument will perform from one trial to another assuming that no real change in performance has occurred

Coefficient:
-ICC (intraclass correlation coefficient) for quantitative data
-Kappa coefficient for categorical data

21
Q

Inter-Rater (two or more raters) Reliability

A

Concerns variation between two or more raters who are measuring the same property

Coefficient: ICC or Kappa
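For two raters assigning categorical labels, the Kappa coefficient mentioned above can be computed directly. A minimal sketch with hypothetical ratings (1 = impairment present, 0 = absent):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: inter-rater agreement for categorical ratings,
    corrected for the agreement expected by chance."""
    n = len(rater_a)
    # Observed proportion of agreement
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's category frequencies
    ca, cb = Counter(rater_a), Counter(rater_b)
    pe = sum((ca[c] / n) * (cb[c] / n) for c in set(ca) | set(cb))
    return (po - pe) / (1 - pe)

# Hypothetical ratings by two raters on 8 patients
a = [1, 1, 0, 1, 0, 1, 1, 0]
b = [1, 1, 0, 0, 0, 1, 1, 1]
print(round(cohens_kappa(a, b), 3))  # 0.467
```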

22
Q

Intra-Rater (one rater) Reliability

A

A measure of the stability of data recorded by one tester across two or more trials

Coefficient: ICC or Kappa

23
Q

Change Scores

A

Reflect the difference in performance from one session to another, often a PRETEST AND POSTTEST. If measures don’t have strong reliability, change scores may primarily be a reflection of error

24
Q

Reliability of measurement

A

A prerequisite for being able to interpret change scores

25
Q

Minimal detectable change (MDC)

A

Amount of change in a variable that must be achieved beyond the minimal error in a measurement; a threshold above which one can be confident that a change reflects true change and not just error
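A common formulation ties the MDC to test-retest reliability: SEM = SD × √(1 − ICC) and MDC95 = 1.96 × √2 × SEM. A sketch with hypothetical numbers:

```python
import math

def sem(sd: float, reliability: float) -> float:
    """Standard error of measurement from the sample SD and a
    reliability coefficient (e.g. a test-retest ICC)."""
    return sd * math.sqrt(1 - reliability)

def mdc95(sd: float, reliability: float) -> float:
    """Minimal detectable change at 95% confidence:
    MDC95 = 1.96 * sqrt(2) * SEM."""
    return 1.96 * math.sqrt(2) * sem(sd, reliability)

# Hypothetical outcome measure: SD = 5 points, ICC = 0.90
print(round(mdc95(5.0, 0.90), 2))  # 4.38
# A change smaller than ~4.4 points could be measurement error alone
```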
26
Q

Minimal detectable difference (MDD)

A

Amount of change that goes beyond error; also called the smallest real difference, smallest detectable change, or the reliability change index
27
Q

Measurement validity

A

Concerns the meaning or interpretation that we give to a measurement
Characterized as the extent to which a test measures what it is intended to measure
28
Q

Distinctions Between Reliability and Validity

A

Reliability relates to consistency of a measurement
Validity relates to alignment of the measurement with a targeted construct
Measuring validity is NOT as straightforward as reliability
29
Q

Similarities Between Reliability and Validity

A

Neither should be considered all-or-none (1 or 0)
30
Q

How can validity be fairly evaluated?

A

Only within the context of an instrument's intended use
31
Q

Reliability and Validity scores

A

A. Scores are reliable, not valid (missing the center)
B. Scores show random error, average validity (near the center)
C. Scores are not reliable, not valid (off the center)
D. Scores are both reliable and valid (center)
32
Q

T/F A reliable measure guarantees that the measure is valid

A

False; it does NOT guarantee it
33
Q

Types of Evidence for Validity

A

Depending on specific conditions, several types of evidence can be used to support a tool's use, often the 3 Cs
34
Q

The 3 Cs

A
  1. Content validity
  2. Criterion-related validity
  3. Construct validity
35
Q

Content validity

A

Establishes that the multiple items that make up a questionnaire, inventory, or scale adequately sample a wide domain (the UNIVERSE of content) that defines the variable or construct being measured
36
Q

Criterion-related validity

A

Establishes the correspondence between a Target test (to be validated) and a REFERENCE OR "GOLD" STANDARD (as the criterion) to determine that the Target test is measuring the variable of interest
37
Q

Construct validity

A

Establishes the ability of an instrument to measure the dimensions and theoretical foundation of an abstract construct
~Abstract constructs do not directly manifest as physical events; thus, inferences are made through observable behaviors, measurable performance, or patient self-report
38
Q

Minimal clinically important difference (MCID)

A

Smallest difference that signifies an important difference in a patient's condition
39
Q

Methodological Research

A

Involves the development and testing of both reliability and validity of measuring instruments to determine their application and interpretation in a variety of clinical situations
40
Q

Ways to maximize Reliability

A

Standardize measurement protocols
Train raters
Calibrate and improve the instrument
Take multiple measurements
Choose a sample with a range of scores
Pilot testing
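The "take multiple measurements" point can be illustrated with a seeded simulation (hypothetical numbers): averaging repeated measurements shrinks the spread due to random error.

```python
import random
from statistics import mean, stdev

random.seed(1)  # reproducible illustration

def measure(true_value=50.0, noise_sd=3.0):
    """One noisy measurement: the true value plus random error."""
    return true_value + random.gauss(0, noise_sd)

# Spread of single measurements vs. means of 4 repeated measurements
singles = [measure() for _ in range(2000)]
averaged = [mean(measure() for _ in range(4)) for _ in range(2000)]

print(round(stdev(singles), 1))   # near 3.0
print(round(stdev(averaged), 1))  # near 1.5, i.e. noise_sd / sqrt(4)
```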
41
Q

Ways to maximize Validity

A

Fully understand the construct
Consider the clinical context
Consider several approaches to validation
Consider validity issues if adapting existing tools
Cross-validate outcomes