Week 5 - Reliability and Validity Flashcards

1
Q

internal and external validity are considered in a what?

A

study

2
Q

reliability and validity are considered in a what?

A

measure

3
Q

when evaluating a ___, discuss its internal and external validity

A

study

4
Q

when evaluating a ___, discuss its reliability and validity.

A

measure

5
Q
  • process of assigning numerals to variables to represent quantities of characteristics according to certain rules.
  • approach to detecting and documenting relative conditions or events.
A

measurement

6
Q

____ decreases ambiguity and increases understanding via the expression of qualitative/quantitative info about a given variable.

A

measurement

7
Q

numbers represent units with equal intervals, measured from true zero.

A

ratio scale

8
Q

name 3 examples of ratio measurements.

A

distance, age, time

9
Q

numbers have equal intervals but no true zero.

A

interval scale

10
Q

name 2 examples of interval measurements.

A

calendar years, temperature (°C or °F)

11
Q

numbers indicate rank order

A

ordinal scale

12
Q

name 2 examples of ordinal measurements.

A

manual muscle testing (MMT) grades, pain scales

13
Q

numerals are category labels.

A

nominal scale

14
Q

name 2 examples of nominal measurements.

A

gender, blood type

15
Q

some level of inconsistency is inevitable

A

measurement error

16
Q

name 3 sources of inconsistency in measurements.

A
  • tester (rater)
  • instrument
  • subject or character itself
17
Q

describe the formula for observed score.

A

observed score (X) = true score (T) ± measurement error (E)
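As a rough illustration of X = T ± E, the sketch below (hypothetical numbers, assuming zero-mean Gaussian error) simulates repeated measurements of a single true score; averaging many trials largely cancels the random error, as the later cards note.

```python
import random

random.seed(0)  # make the simulation repeatable

def observed_score(true_score, error_sd=2.0):
    # X = T ± E: observed score is the true score plus zero-mean random error
    return true_score + random.gauss(0, error_sd)

# random errors tend to cancel over repeated measurements:
true_score = 50.0
trials = [observed_score(true_score) for _ in range(1000)]
mean_observed = sum(trials) / len(trials)  # close to the true score of 50
```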

18
Q
  • consistent, unidirectional, and predictable (if detected).
  • relatively easy to correct: recalibrate, or add or subtract the correction.
  • a concern of validity
A

systematic errors

19
Q

occur by chance and alter scores in unpredictable ways; chance fluctuations (tend to cancel out over repeated measurements)

A

random errors

20
Q

name 2 examples of systematic errors.

A

illiteracy, confusing terms

21
Q

name 3 examples of random errors.

A

mood, level of fatigue, motivation

22
Q

___ ___ are generally not influenced by magnitude of true score.

A

random errors

23
Q

the ____ the sample, the more the random errors are cancelled out.

A

larger

24
Q

name 4 common sources of error.

A
  • respondent
  • situational factors
  • measurer
  • instrument
25
Q
  • not all error is random
  • some error components can be attributed to other sources, such as rater or test occasion
A

generalizability theory

26
Q

the consistency of your measurement instrument

A

reliability

27
Q

the degree to which an instrument measures the same way each time it is used under the same conditions with the same subjects

A

reliability

28
Q

reflects how consistent and free from error a measurement is (ex: reproducible/dependable)

A

reliability

29
Q

reliability estimates are based upon score variance: the variability or distribution of scores

A

reliability coefficient

30
Q

how is reliability (the reliability coefficient) measured? (formula)

A

reliability coefficient = true variance / (true variance + error variance)

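The formula above can be checked with a quick worked example (the variance values are hypothetical):

```python
def reliability_coefficient(true_variance, error_variance):
    # reliability = true variance / (true variance + error variance)
    return true_variance / (true_variance + error_variance)

# hypothetical case: 9 units of true variance, 3 units of error variance
r = reliability_coefficient(9.0, 3.0)  # 9 / (9 + 3) = 0.75
```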
31
Q

describe the range of the reliability coefficient.

A

<0.50 = poor; 0.50-0.75 = moderate; >0.75 = good (the closer to 1, the better)

32
Q

reflects the degree of association or proportion between scores

A

correlation

33
Q

reflects the actual equality of scores

A

agreement

34
Q

do not affect the reliability coefficient since relative scores remain consistent (high correlation).

A

systematic errors

35
Q

name the 4 types of reliability.

A
  • test-retest reliability
  • rater reliability
  • alternate forms reliability
  • internal consistency
36
Q

indicates the stability (consistency) of an instrument through repeated trials.

A

test-retest reliability

37
Q

addresses the rater's influence on the accuracy of the measurement

A

intra-rater reliability

38
Q

addresses the variation between separate raters on the same group of participants.

A

inter-rater reliability

39
Q

how are test-retest reliability and rater reliability assessed?

A

intraclass correlation coefficient (ICC) or kappa

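Of the two statistics, kappa is the simpler to sketch. Below is Cohen's kappa for two raters, (po − pe)/(1 − pe), i.e. observed agreement corrected for chance agreement; the ratings are made-up, and this is not the ICC computation.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    # kappa = (po - pe) / (1 - pe): observed agreement corrected for chance
    n = len(rater_a)
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n  # observed agreement
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    pe = sum((counts_a[c] / n) * (counts_b[c] / n)          # chance agreement
             for c in set(rater_a) | set(rater_b))
    return (po - pe) / (1 - pe)

# two raters classifying 4 findings as positive/negative (hypothetical data):
# they agree on 3 of 4 (po = 0.75), chance agreement pe = 0.5, kappa = 0.5
kappa = cohens_kappa(["+", "+", "-", "-"], ["+", "-", "-", "-"])
```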
40
Q
  • equivalent or parallel forms reliability
  • eliminates memory of particular responses in traditional test-retest format.
A

alternate forms reliability

41
Q
  • homogeneity; the degree of relatedness of individual items measuring the same thing (factor/dimension)
  • how well items "hang together"
A

internal consistency

42
Q

how is alternate forms reliability assessed?

A

correlation coefficients

43
Q

how is internal consistency assessed?

A

cronbach's coefficient alpha

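A minimal sketch of Cronbach's coefficient alpha, alpha = k/(k−1) · (1 − Σ item variances / variance of total scores), using hypothetical item scores:

```python
def sample_variance(xs):
    # unbiased sample variance (divide by n - 1)
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # total score per respondent
    item_var_sum = sum(sample_variance(item) for item in items)
    return k / (k - 1) * (1 - item_var_sum / sample_variance(totals))

# 3 items answered by 4 respondents (hypothetical scores); items that rise and
# fall together ("hang together") yield an alpha near 1
alpha = cronbach_alpha([[1, 2, 3, 4], [2, 3, 4, 5], [1, 3, 3, 5]])
```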
44
Q

reliability exists in a ____.

A

context

45
Q

reliability is not ____. it exists to some extent in any instrument.

A

all-or-none

46
Q

name 6 ways to maximize reliability.

A
  • standardize measurement protocols
  • train raters
  • calibrate and improve the instrument
  • take multiple measurements
  • choose a sample with a range of scores
  • pilot testing
47
Q

how consistent it is given the same conditions

A

reliability

48
Q

if it measures what it is supposed to and how accurate it is

A

validity

49
Q

the degree to which an instrument actually measures what it is meant to measure

A

validity

50
Q

how is validity determined?

A

by the relationship between test results and certain behaviors, characteristics, or performances.

51
Q

____ is a prerequisite for ____, but not vice-versa.

A

reliability, validity

52
Q

name the 4 types of measurement validity.

A
  • face validity
  • content validity
  • criterion-related validity
  • construct validity

53
Q

instrument appears to test what it is supposed to and seems reasonable to implement; subjective process

A

face validity

54
Q

what is the weakest form of validity?

A

face validity

55
Q

instrument adequately addresses all aspects of a particular variable of interest and nothing else; subjective process by a "panel of experts" during test development; non-statistical procedure

A

content validity

56
Q

new instrument is compared to a "gold standard" measure; objective and practical test of validity

A

criterion-related validity

57
Q

target and criterion measures are taken at approximately the same time

A

concurrent validity

58
Q

target measure will be a suitable predictor of a future criterion score

A

predictive validity

59
Q

name an example of predictive validity.

A

the SAT

60
Q
  • instrument effectively measures a specific abstract idea (construct).
  • reliant upon content validity of the construct and underlying theoretical context
A

construct validity

61
Q

name 5 methods of construct validation.

A
  • known groups method
  • convergent and divergent validity
  • factor analysis
  • hypothesis testing
  • criterion validation

62
Q

two measures believed to reflect the same underlying phenomenon will yield similar results or will correlate highly.

A

convergent validity

63
Q

indicates that different results or low correlations are expected from measures that are believed to assess different characteristics.

A

divergent validity

64
Q

the ability of a test to discriminate between 2 or more groups.

A

discriminant validity

65
Q

name the 2 main types of construct validity.

A

convergent and divergent validity

66
Q

name the 2 main types of criterion-related validity.

A

concurrent and predictive validity

67
Q

the ability of an instrument to accurately detect change when it has occurred.

A

responsiveness to change

68
Q

smallest difference in a measured variable that subjects perceive as beneficial.

A

minimal clinically important difference (MCID)

69
Q

a standardized assessment designed to compare and rank individuals within a defined population.

A

norm-referenced test

70
Q

interpreted according to a fixed standard that represents an acceptable level of performance.

A

criterion-referenced test

71
Q

name 3 things that change scores are used to do.

A
  • demonstrate effectiveness of an intervention.
  • track the course of a disorder over time.
  • provide a context for clinical decision making

72
Q

the smallest difference that signifies an important difference in a patient's condition

A

minimal clinically important difference (MCID)

73
Q

more meaningful for the subjects and clinicians

A

clinically important data

74
Q

the methods and measures used for the study are sound and will produce valid results.

A

internal validity

75
Q

relates to how well we can generalize the findings of the study to the entire population we're interested in.

A

external validity

76
Q

must have ___ validity to also have ___ validity.

A

internal, external

77
Q

a way to conceptualize a variable to reduce ambiguity about it.

A

measurement

78
Q

___ errors are harder to correct.

A

random

79
Q

what is the first step in making a measure standardized?

A

reliability

80
Q

administer a test twice to assess agreement between the 2 administrations

A

test-retest reliability

81
Q

participants could get better the second time they take the test.

A

practice effect

82
Q

statistic that reflects both agreement and correlation

A

ICC (intraclass correlation coefficient)

83
Q

one rater; assess the same person twice to see whether your scoring has changed

A

intra-rater reliability

84
Q

considers the constructs rather than the consistency of the measurements.

A

factor analysis