Measurement/Validity Flashcards

1
Q

Is it possible for a measurement to be perfectly reliable but not valid?
Is it possible for a measurement to be perfectly valid but not reliable?

A
  • Yes, a measurement can be perfectly reliable but not valid: it can consistently measure the wrong thing.
  • No, a measurement cannot be valid without being reliable; reliability can only get close to perfect, because chance always contributes to an observed score.
  • The more valid a measure is, the more confident we can be in the conclusions drawn from it.
2
Q

When considering the degree to which a test (variable) measures the construct of interest, what do you need to consider?

A
  • The operational definition!
  • How you define the variables of interest affects the validity of the conclusions.
  • Also consider the type of data and whether it is sensitive enough to detect what you are looking for.
3
Q

Operational definitions get us closer to a ____

A

unit.

We can't test a construct directly, so we need to state specifically how we are measuring and analyzing the data.

4
Q

Examples of Operational Definitions

A
  • “Muscle Endurance”: time (s) holding a position vs gravity
  • “Standing balance”: perturbation force (N) required to engage a stepping strategy
5
Q

What is face validity?

A
  • Subjective judgement of the measurement; not formally tested (opinion)
  • Weakest form of validity
6
Q

What is Content Validity

A
  • Requires a description (checklist) of all the characteristics of a construct; the measurement must cover every item on the checklist

Example: units, measurements, amount of sensitivity to a population.

7
Q

What is Criterion-Related Validity?

A
  • How well the measurement stacks up against the gold standard
  • Ex: ground reaction force - force plate; VO2max - Bruce protocol; body composition - DXA
8
Q

Predictive validity vs concurrent validity

Criterion-related

A
  • Predictive validity: the gold standard is measured in the future; the test's ability to predict something it theoretically should be able to predict
  • Example: the Berg Balance Test after a stroke predicts length of stay, discharge destination, etc.
  • Concurrent validity: the gold standard is measured simultaneously; the measurement and the criterion measure are recorded at the same time
  • Useful if the test is considered more efficient or less costly than the gold standard
9
Q

Construct Validity

A
  • Hypothesis testing with groups that have known differences
  • Ex: people with advanced OA vs. people without advanced OA
  • The ability of a test to distinguish between groups with known differences
  • Example: males and females are known to have different 3D kinematics during weight-bearing activities.
  • A good operational definition is needed, or people may be misgrouped.
10
Q

Most tests performed in PT are ____

A

Nominal

Ex: Normal vs Abnormal

11
Q

Sensitivity

A
  • Proportion of individuals with a particular diagnosis who are correctly identified as positive by a test.
  • “True Positives”
12
Q

Specificity

A
  • Proportion of individuals without a particular diagnosis who are correctly identified as negative by the test
  • “True negatives”
13
Q

Mathematical Equations for Sensitivity and Specificity

A
  • Sensitivity = TP / (TP + FN)
  • Specificity = TN / (TN + FP)
  • TP = true positive, FP = false positive, TN = true negative, FN = false negative
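
As a minimal sketch (not part of the original deck), both formulas can be computed from the four cells of a 2x2 diagnostic table; the counts below are hypothetical:

```python
# Hypothetical 2x2 table: 100 people with the condition, 100 without.
tp, fn = 92, 8    # true positives, false negatives (among those WITH it)
tn, fp = 44, 56   # true negatives, false positives (among those WITHOUT it)

def sensitivity(tp: int, fn: int) -> float:
    """Proportion of people WITH the diagnosis who test positive."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Proportion of people WITHOUT the diagnosis who test negative."""
    return tn / (tn + fp)

print(f"Sensitivity = {sensitivity(tp, fn):.2f}")  # 0.92
print(f"Specificity = {specificity(tn, fp):.2f}")  # 0.44
```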
14
Q

High Specificity improves confidence in our ability to rule ____ that particular diagnosis

A
  • IN (SpPIN)
  • A positive test result for a test with high specificity rules IN that condition
  • You have that specific condition
15
Q

High SeNsitivity improves confidence in our ability to rule ____ the diagnosis/outcome

A
  • Out (SnOUT)
  • A negative test result for a test with high sensitivity rules OUT that condition
  • Ex: COVID tests; it is better to send someone who isn't sick into isolation than to miss a true case, so the error is leaned toward positives.
16
Q

Positive Predictive Value

A
  • Proportion of those identified by the test as positive who actually have the diagnosis.
  • Evaluated in a mixed sample of people who do and do not have the condition
17
Q

Negative Predictive Value

A
  • Percentage of those identified by the test as negative who actually do not have the diagnosis.
  • Evaluated in a mixed sample of people who do and do not have the condition
18
Q

Positive and Negative Predictive Value Equations

A
  • PPV = TP / (TP + FP)
  • NPV = TN / (TN + FN)
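
As a minimal sketch (not part of the original deck), using the same hypothetical 2x2 counts as the sensitivity/specificity example; note that both values are computed across the mixed sample, so they shift with prevalence:

```python
# Same hypothetical 2x2 table as before.
tp, fp = 92, 56   # everyone who tested positive
tn, fn = 44, 8    # everyone who tested negative

def ppv(tp: int, fp: int) -> float:
    """Proportion of positive tests that truly have the diagnosis."""
    return tp / (tp + fp)

def npv(tn: int, fn: int) -> float:
    """Proportion of negative tests that truly lack the diagnosis."""
    return tn / (tn + fn)

print(f"PPV = {ppv(tp, fp):.2f}")  # 0.62
print(f"NPV = {npv(tn, fn):.2f}")  # 0.85
```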
19
Q

Clinical Tests are typically good at either….

A

Ruling something in or out, not both.

Special tests need large populations and consensus across the board; that is why you try different special tests to see what works best for you.

20
Q

Sensitivity or Specificity?
* Sensitivity = 92% for bursitis and 88% for cuff abnormalities
* Got it right, positively identified, people that have it
* Specificity = 44% for bursitis and 43% for cuff abnormalities
* Got it right, negatively identified, people that don’t have it
* Positive predictive values = 39% for bursitis and 37% for cuff abnormalities
* Got it right, positively identified, mixed sample
* Negative predictive values = 93.1% for bursitis and 90% for cuff abnormalities
* Got it right, negatively identified, mixed sample

A
  • This test is good at identifying people who have the condition (high sensitivity, high NPV) but bad at identifying people who don't (low specificity, low PPV). A test is usually good at only one of the two.
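
A sketch of how the card's bursitis numbers hang together: predictive values follow from sensitivity, specificity, and prevalence. The prevalence of roughly 28% used below is an assumption chosen to reproduce the reported values; it is not stated on the card.

```python
sens, spec, prev = 0.92, 0.44, 0.28  # prevalence is assumed, not given

# Bayes-style calculation of predictive values from sens/spec/prevalence.
ppv = (sens * prev) / (sens * prev + (1 - spec) * (1 - prev))
npv = (spec * (1 - prev)) / (spec * (1 - prev) + (1 - sens) * prev)

print(f"PPV = {ppv:.2f}")  # ~0.39, matching the card
print(f"NPV = {npv:.2f}")  # ~0.93, matching the card
```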
21
Q

Clinicians often ____ tests to help them come to a decision by ruling things in or out

A

cluster (use multiple)

22
Q

When determining how to measure change you need to consider:

A
  • Level of measurement
  • Stability of measurement
  • Baseline scores/end effects
  • Reliability
23
Q

Level of Measurement

A
  • Nominal and ordinal scale data cannot be added/subtracted
  • Magnitude of the change score over time is hard to interpret because these scales do not possess the property of “distance” between intervals
24
Q

Stability of Measurements

A
  • Measures that naturally vary over time (unstable) are less capable of measuring change
  • “true score naturally varies”
  • Ex: blood pressure or body temperature vs. time of day
25
Q

Baseline scores/end effects

A
  • Ceiling effect: the measure hits its maximum, so changes that are still occurring can no longer be shown.
  • Floor effect: the task is too challenging or the scale doesn't go low enough. Ex: if you have no pain and start to feel better, you can't grade "better." Ex: "How many reps at 600 lbs can you get?" None; it's far too challenging.
  • Floor and ceiling effects affect the data at the ends of the scale. They are not seen often in ratio data.
26
Q

Reliability

A
  • Inter-rater and intra-rater reliability
  • 1 = perfectly reliable
  • 0 = not reliable at all
27
Q

The more reliable we are, the ____ the SD will be and therefore the ____ the error we will have.

A
  • smaller
  • smaller
28
Q

Why is minimum detectable change important?

A
  • Gives a threshold for a specific measure; tells you how much change you need to see before it counts as real change rather than measurement error.
  • The MDC is based on the standard error of measurement
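
One common formulation (assumed here; the deck itself does not give the equation) computes the standard error of measurement (SEM) from the standard deviation and a reliability coefficient such as the ICC, then scales it for a 95% confidence level across two measurements:

```python
import math

def mdc95(sd: float, icc: float) -> float:
    """Minimum detectable change at 95% confidence."""
    sem = sd * math.sqrt(1 - icc)      # standard error of measurement
    return 1.96 * sem * math.sqrt(2)   # sqrt(2): difference of two measurements

# Hypothetical values: SD = 5 points, ICC = 0.90.
print(f"MDC95 = {mdc95(5.0, 0.90):.2f} points")  # ~4.38
```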
29
Q

A ____ reliable instrument leads to ____ measurement error, which leads to a ____ MDC, which leads to ____ responsiveness, which leads to a ____ ability to detect change.

A
  • highly
  • low
  • small
  • high
  • high ability