Ch. 6 - Validity Flashcards

(43 cards)

1
Q

validity

A

judgment of how well a test measures what it purports to measure in a particular context; judgment based on evidence about the appropriateness of inferences drawn from test scores

2
Q

inference

A

logical result or deduction

3
Q

a valid test has been shown to be valid for

A

a particular use with a particular population of testtakers at a particular time

4
Q

no test is ____ valid

A

universally valid for all times, all uses, and with all populations

5
Q

a test is valid within ____

A

“reasonable boundaries” of a contemplated usage

6
Q

validation

A

the process of gathering and evaluating evidence about validity

7
Q

validation studies can be done with ____

A

a group of testtakers, to provide insights regarding a particular group of testtakers as compared to a norming sample (local validation)

8
Q

what are the three categories of validity?

A

content, criterion-related, and construct (construct is the umbrella under which the others fall)

9
Q

content validity

A

scrutinizing the test’s content

10
Q

criterion-related validity

A

relating scores obtained on the test to other test scores or other measures

11
Q

construct validity

A

umbrella validity; all other types fall under it. a comprehensive analysis of how test scores relate to other measures and how scores can be understood within some theoretical framework (e.g., a hypothesis about how high and low test scorers differ)

12
Q

face validity

A

not one of the three C’s

what a test appears to measure or how relevant the test items look to the testtaker

13
Q

why does face validity matter?

A

testtakers may not put forth good effort; parents may complain about their kids taking a non-face-valid test; lawsuits may be filed

14
Q

content validity

A

judgment of how adequately a test samples behavior representative of the whole universe of behavior that the test was designed to sample.

e.g., an assertiveness test assesses behavior on the job, in social situations, etc.
e.g., a test samples all chapters

15
Q

how can we judge content validity?

A

get a panel of judges or experts: if more than half indicate that an item is essential, that item has some content validity. the more judges who agree, the greater the content validity

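The majority rule in the card above is the idea behind Lawshe's content validity ratio (CVR). A minimal sketch (the function name is mine, not from the chapter):

```python
def content_validity_ratio(n_essential, n_judges):
    """Lawshe's CVR: +1 if every judge rates the item essential,
    0 if exactly half do, negative if fewer than half do."""
    return (n_essential - n_judges / 2) / (n_judges / 2)

print(content_validity_ratio(8, 10))   # 0.6: most judges agree, some content validity
print(content_validity_ratio(5, 10))   # 0.0: exactly half, borderline
```

An item with a CVR above zero has at least the "more than half" endorsement the card describes; averaging CVR across retained items gives one index for the whole test.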
16
Q

what’s a problem with establishing content validity?

A

we frequently don’t know all of the items in the theoretical domain of possible items

17
Q

criterion-related validity

A

a judgment of how adequately a test score can be used to infer an individual’s standing on a criterion being measured (3 types: concurrent, predictive, incremental)

18
Q

concurrent validity

A

a judgment of how adequately a test score can be used to infer an individual’s present standing on a criterion (ex: diagnosing someone from a test when you already know they have the condition, perhaps from a different, already validated test; the new test might be an easier way to reach the diagnosis)

19
Q

predictive validity

A

measures of the relationship between test scores and a criterion measure obtained at a future time (ex: using GRE scores to predict success in graduate courses)

20
Q

criterion

A

standard against which a test score is measured; can be almost anything (behavior, diagnosis)

21
Q

a good criterion is

A

relevant (pertinent to the matter at hand); valid (if test X is being used to predict criterion Y, then Y itself must be a valid measure); uncontaminated (not based on a predictor measure: if X is used to predict Y, and Y is in part based on X, then the criterion Y is contaminated)

22
Q

what are three types of criterion-related validity?

A

concurrent validity
predictive validity
incremental validity

23
Q

base rate

A

extent to which a particular trait, behavior, etc. exists in the population (a proportion)

24
Q

hit rate

A

proportion of people that a test accurately identifies as having a specific trait

25
Q

miss rate

A

proportion of people a test fails to identify as having a trait; an inaccurate prediction

26
Q

false positive

A

test identifies a testtaker as having the trait when they don't

27
Q

false negative

A

test does not identify a testtaker as having the trait when they do

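The base rate, hit, miss, false positive, and false negative terms from the last few cards fit together in one table of outcomes. A minimal sketch with hypothetical counts (all numbers below are made up; hit rate here means hits among people who truly have the trait):

```python
# Hypothetical screening-test outcomes checked against a known diagnosis.
true_pos = 40    # test says trait present, person has it (hit)
false_pos = 10   # test says present, person does not have it
true_neg = 140   # test says absent, person does not have it
false_neg = 10   # test says absent, person has it (miss)

total = true_pos + false_pos + true_neg + false_neg
base_rate = (true_pos + false_neg) / total    # proportion who truly have the trait
hit_rate = true_pos / (true_pos + false_neg)  # correctly identified among those with the trait
miss_rate = false_neg / (true_pos + false_neg)  # missed among those with the trait

print(base_rate, hit_rate, miss_rate)  # 0.25 0.8 0.2
```

Note that hit rate and miss rate sum to 1 for the group that actually has the trait; the false positives are errors made on the group that does not.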
28
Q

validity coefficient

A

a correlation coefficient that provides a measure of the relationship between test scores and scores on the criterion measure; there is no fixed rule for a minimum acceptable size; affected by restriction of range

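Since the validity coefficient is an ordinary Pearson correlation between test scores and criterion scores, it can be computed directly. A minimal sketch with made-up scores (the function name is mine):

```python
import statistics

def validity_coefficient(test_scores, criterion_scores):
    """Pearson correlation between test scores and a criterion measure."""
    n = len(test_scores)
    mx = statistics.mean(test_scores)
    my = statistics.mean(criterion_scores)
    # Sample covariance divided by the product of sample standard deviations.
    cov = sum((x - mx) * (y - my)
              for x, y in zip(test_scores, criterion_scores)) / (n - 1)
    return cov / (statistics.stdev(test_scores) * statistics.stdev(criterion_scores))

scores = [10, 12, 14, 16, 18]       # hypothetical test scores
criterion = [2.1, 2.4, 3.0, 3.2, 3.8]  # hypothetical criterion (e.g., GPA)
print(validity_coefficient(scores, criterion))
```

A coefficient near 1 here just reflects the invented data; real validity coefficients are judged in context, since no fixed minimum applies.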
29
Q

restriction of range

A

when predictor or criterion scores cover only a narrow portion of the possible range (e.g., through self-selection), which lowers the validity coefficient; e.g., testing firefighting skills only on firefighters rather than on the general population

30
Q

incremental validity

A

related to predictive validity; the degree to which an additional predictor variable explains something about the criterion measure that's not already explained by the predictors in use (e.g., hours of sleep, time in the library, and time spent studying should all help predict GPA; if library time overlaps with study time, it doesn't have great incremental validity)

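The GPA example above can be sketched as the gain in explained variance (R-squared) when a predictor is added. A simulation under assumed data (requires numpy; the variable names and effect sizes are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
study = rng.normal(size=n)                          # hours spent studying
library = study + rng.normal(scale=0.3, size=n)     # library time overlaps heavily with studying
sleep = rng.normal(size=n)                          # sleep is independent of studying
gpa = 0.6 * study + 0.3 * sleep + rng.normal(scale=0.5, size=n)

def r_squared(predictors, y):
    """Proportion of criterion variance explained by a least-squares fit."""
    X = np.column_stack([np.ones(len(y))] + predictors)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

base = r_squared([study], gpa)
inc_library = r_squared([study, library], gpa) - base  # small: redundant predictor
inc_sleep = r_squared([study, sleep], gpa) - base      # larger: new information
print(inc_library, inc_sleep)
```

Because library time is nearly a copy of study time, its increment is close to zero, while sleep, which carries information the first predictor lacks, raises R-squared noticeably.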
31
Q

one measure of a test's value is

A

the extent to which it improves on the hit rate for a trait that existed before the test was used

32
Q

construct validity is not mutually exclusive with ____

A

criterion-related validity

33
Q

construct validity is shown when...

A

(1) the test is homogeneous; (2) test scores change over time as predicted; (3) post-test scores vary as predicted from some intervention; (4) test scores from people of different groups vary as predicted (AKA method of contrasted groups); (5) test scores correlate with scores on other tests as predicted (e.g., BDI correlates with another depression index)

34
Q

example of using the method of contrasted groups

A

psych patients are more depressed than random Wal-Mart shoppers

35
Q

convergent evidence

A

scores on the new test correlate highly, in the predicted direction, with scores on an older, more established, already validated test that measures the same thing

36
Q

divergent evidence

A

scores on the new test show little correlation with scores on tests you theorized they would not correlate with

37
Q

a valid test can be used...

A

fairly or unfairly

38
Q

test bias

A

a factor inherent in a test that systematically prevents accurate, impartial measurement. systematic = not due to chance. bias can be identified and remedied; ex: a weighted coin toss

39
Q

what's a type of test bias?

A

rating error

40
Q

rating error

A

a judgment resulting from the intentional or unintentional misuse of a rating scale

41
Q

examples of rating error

A

leniency/generosity error: rater is too lenient (e.g., easy grading)
severity error: rater always rates harshly
central tendency error: all ratings cluster at the middle
halo effect: tendency of a rater to give a ratee a higher rating than deserved on everything (e.g., a Lady Gaga speech is never going to be rated badly, no matter the topic, if the rater is the president of her fan club)

42
Q

test fairness

A

the extent to which a test is used in an impartial, just, and equitable way; has to do with values and opposing points of view

43
Q

test ____ can be seen as a statistical problem, test ____ cannot

A

test bias can be seen as a statistical problem, test fairness cannot