Validity Flashcards

1
Q

A judgment or estimate of how well a test measures what it purports to measure in a particular context.

A. Validity
B. Inference
C. Validation
D. Local validation studies
E. Trinitarian view

A

A. Validity

2
Q

A judgment based on evidence about the appropriateness of inferences drawn from test scores

A. Validity
B. Inference
C. Validation
D. Local validation studies
E. Trinitarian view

A

A. Validity

3
Q

A term used in conjunction with the meaningfulness of a test score

A. Validity
B. Inference
C. Validation
D. Local validation studies
E. Trinitarian view

A

A. Validity

4
Q

A logical result or deduction

A. Validity
B. Inference
C. Validation
D. Local validation studies
E. Trinitarian view

A

B. Inference

5
Q

True or False: Characterizations of the validity of tests and test scores are frequently phrased in terms such as “acceptable” or “weak.”

A

True

6
Q

The process of gathering and evaluating evidence about validity

A. Validity
B. Inference
C. Validation
D. Local validation studies
E. Trinitarian view

A

C. Validation

7
Q

True or False: Both test developers and test users may play a role in the validation of a test

A

True

8
Q

True or False: It is the test taker’s responsibility to supply validity evidence in the test manual.

A

False; test developer’s

9
Q

True or False: It is not appropriate for test users to conduct their own validation studies with their own groups of test takers

A

False; It may sometimes be appropriate

10
Q

May yield insights regarding a particular population of test takers as compared to the norming sample described in a test manual

A. Validity
B. Inference
C. Validation
D. Local validation studies
E. Trinitarian view

A

D. Local validation studies

11
Q

Are absolutely necessary when the test user plans to alter in some way the format, instructions, language, or content of the test.

A. Validity
B. Inference
C. Validation
D. Local validation studies
E. Trinitarian view

A

D. Local validation studies

12
Q

Would also be necessary if a test user sought to use a test with a population of test takers that differed in some significant way from the population on which the test was standardized

A. Validity
B. Inference
C. Validation
D. Local validation studies
E. Trinitarian view

A

D. Local validation studies

13
Q

Classic conception of validity

A. Validity
B. Inference
C. Validation
D. Local validation studies
E. Trinitarian view

A

E. Trinitarian view

14
Q

Critics condemned this approach as fragmented and incomplete

A. Validity
B. Inference
C. Validation
D. Local validation studies
E. Trinitarian view

A

E. Trinitarian view

15
Q

It might be useful to visualize construct validity as being “umbrella validity” because every other variety of validity falls under it.

A. Validity
B. Inference
C. Validation
D. Local validation studies
E. Trinitarian view

A

E. Trinitarian view

16
Q

Stated another way, all three types of validity evidence contribute to a unified picture of a test’s validity.

A. Validity
B. Inference
C. Validation
D. Local validation studies
E. Trinitarian view

A

E. Trinitarian view

17
Q

A judgment concerning how relevant the test items appear to be

A. Face Validity
B. Content Validity
C. Test blueprint

A

A. Face Validity

18
Q

Relates more to what a test appears to measure to the person being tested than to what the test actually measures.

A. Face Validity
B. Content Validity
C. Test blueprint

A

A. Face Validity

19
Q

Frequently thought of from the perspective of the test taker, not the test user.

A. Face Validity
B. Content Validity
C. Test blueprint

A

A. Face Validity

20
Q

Lack of this could contribute to a lack of confidence in the perceived effectiveness of the test

A. Face Validity
B. Content Validity
C. Test blueprint

A

A. Face Validity

21
Q

Based on an evaluation of the subjects, topics, or content covered by the items in the test.

A. Face Validity
B. Content Validity
C. Test blueprint

A

B. Content Validity

22
Q

A judgment of how adequately a test samples behavior representative of the universe of behavior that the test was designed to sample

A. Face Validity
B. Content Validity
C. Test blueprint

A

B. Content Validity

23
Q

A plan regarding the types of information to be covered by the items

A. Face Validity
B. Content Validity
C. Test blueprint

A

C. Test blueprint

24
Q

The number of items tapping each area of coverage, the organization of the items in the test

A. Face Validity
B. Content Validity
C. Test blueprint

A

C. Test blueprint

25
Q

True or False: The content validity of a test varies across cultures and time

A

True

26
Q

One technique frequently used in blueprinting the content areas to be covered in certain types of employment tests

A. Personality tests
B. Behavioral observation
C. Content Validity Ratio

A

B. Behavioral observation

27
Q

Measures agreement among raters regarding how essential an individual test item is for inclusion in a test

A. Personality tests
B. Behavioral observation
C. Content Validity Ratio

A

C. Content Validity Ratio
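
As a quick illustration of the card above: Lawshe's content validity ratio for a single item is CVR = (n_e − N/2) / (N/2), where n_e is the number of panelists rating the item "essential" and N is the panel size. A minimal sketch (panel counts are hypothetical):

```python
# Lawshe's content validity ratio (CVR) for one test item.
def content_validity_ratio(n_essential, n_panelists):
    """CVR = (n_e - N/2) / (N/2); ranges from -1 to +1.

    Positive values mean more than half the panel rated the item essential.
    """
    half = n_panelists / 2
    return (n_essential - half) / half

# Hypothetical example: 8 of 10 expert panelists rate the item "essential".
print(content_validity_ratio(8, 10))  # -> 0.6
```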

28
Q

This measure of validity is obtained by evaluating the relationship of scores obtained on the test to scores on other tests or measures

A. Criterion Validity
B. Criterion
C. Criterion contamination
D. Criterion-related Validity

A

A. Criterion Validity

29
Q

A judgment of how adequately a test score can be used to infer an individual’s most probable standing on some measure of interest

A. Criterion Validity
B. Criterion
C. Criterion contamination
D. Criterion-related Validity

A

A. Criterion Validity

30
Q

The standard against which a test or a test score is evaluated

A. Criterion Validity
B. Criterion
C. Criterion contamination
D. Criterion-related Validity

A

B. Criterion

31
Q

The term applied to a criterion measure that has been based, at least in part, on predictor measures

A. Criterion Validity
B. Criterion
C. Criterion contamination
D. Criterion-related Validity

A

C. Criterion contamination

32
Q

A judgment of how adequately a test score can be used to infer an individual’s most probable standing on some measure of interest

A. Criterion Validity
B. Criterion
C. Criterion contamination
D. Criterion-related Validity

A

D. Criterion-related Validity

33
Q

True or False: An adequate criterion must be relevant to the matter at hand

A

True

34
Q

True or False: An adequate criterion should be valid for the purpose for which it is being measured

A

True

35
Q

An index of the degree to which a test score is related to some criterion measure obtained at the same time

A. Concurrent validity
B. Predictive validity
C. Expectancy data
D. Validity coefficient

A

A. Concurrent validity

36
Q

An index of the degree to which a test score predicts some criterion measure

A. Concurrent validity
B. Predictive validity
C. Expectancy data
D. Validity coefficient

A

B. Predictive validity

37
Q

Statistical evidence of concurrent or predictive validity, typically presented in a table or chart

A. Concurrent validity
B. Predictive validity
C. Expectancy data
D. Validity coefficient

A

C. Expectancy data

38
Q

A correlation coefficient that provides a measure of the relationship between test scores and scores on the criterion measure

A. Concurrent validity
B. Predictive validity
C. Expectancy data
D. Validity coefficient

A

D. Validity coefficient
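
Because a validity coefficient is simply a correlation between test scores and criterion scores, it can be computed as a Pearson r. A minimal sketch with made-up score pairs:

```python
# Pearson r between test scores and criterion scores (a validity coefficient).
def validity_coefficient(test_scores, criterion_scores):
    n = len(test_scores)
    mx = sum(test_scores) / n
    my = sum(criterion_scores) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(test_scores, criterion_scores))
    sxx = sum((x - mx) ** 2 for x in test_scores)
    syy = sum((y - my) ** 2 for y in criterion_scores)
    return sxy / (sxx * syy) ** 0.5

# Hypothetical data: five examinees' test scores and criterion scores.
print(validity_coefficient([1, 2, 3, 4, 5], [2, 1, 4, 3, 5]))  # -> 0.8
```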

39
Q

The degree to which an additional predictor explains something about the criterion measure that is not explained by predictors already in use.

A. Incremental Validity
B. Hit Rate
C. Miss Rate
D. Base Rate

A

A. Incremental Validity

40
Q

Proportion of people a test accurately identifies as possessing or exhibiting a particular trait, behavior, characteristic, or attribute

A. Incremental Validity
B. Hit Rate
C. Miss Rate
D. Base Rate

A

B. Hit Rate

41
Q

Proportion of people the test fails to accurately identify as having (or not having) a particular characteristic or attribute

A. Incremental Validity
B. Hit Rate
C. Miss Rate
D. Base Rate

A

C. Miss Rate

42
Q

The extent to which a particular trait, behavior, characteristic, or attribute exists in the population, expressed as a proportion; in employment settings, the percentage of people hired under the existing system for a particular position

A. Incremental Validity
B. Hit Rate
C. Miss Rate
D. Base Rate

A

D. Base Rate

43
Q

Numerical value that reflects the relationship between the number of people to be hired and the number of people available to be hired

A. Selection Ratio
B. False Positive
C. False Negative

A

A. Selection Ratio

44
Q

Type 1 Error

A. Selection Ratio
B. False Positive
C. False Negative

A

B. False Positive

45
Q

A miss wherein the test predicted that the examinee did possess the particular characteristic/attribute being measured when the examinee did not.

A. Selection Ratio
B. False Positive
C. False Negative

A

B. False Positive

46
Q

Type 2 Error

A. Selection Ratio
B. False Positive
C. False Negative

A

C. False Negative

47
Q

A miss wherein the test predicted that the examinee did not possess the particular characteristic/attribute being measured when the examinee did

A. Selection Ratio
B. False Positive
C. False Negative

A

C. False Negative
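
The decision-theory terms from the cards above (hits, misses, false positives, false negatives, base rate, selection ratio) can be illustrated with a hypothetical set of test decisions and actual outcomes; here a "hit" is counted as any correct classification:

```python
# Classify each (test decision, actual outcome) pair; the data are hypothetical.
# 1 = positive (e.g., test predicts success / person actually succeeds).
predicted = [1, 1, 1, 0, 0, 0, 1, 0, 0, 0]
actual    = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]

pairs = list(zip(predicted, actual))
tp = sum(p == 1 and a == 1 for p, a in pairs)  # true positives
tn = sum(p == 0 and a == 0 for p, a in pairs)  # true negatives
fp = sum(p == 1 and a == 0 for p, a in pairs)  # false positives (Type 1 error)
fn = sum(p == 0 and a == 1 for p, a in pairs)  # false negatives (Type 2 error)

n = len(pairs)
hit_rate = (tp + tn) / n              # proportion classified correctly
miss_rate = (fp + fn) / n             # proportion classified incorrectly
base_rate = sum(actual) / n           # actual prevalence of the attribute
selection_ratio = sum(predicted) / n  # proportion the test would select

print(hit_rate, miss_rate, base_rate, selection_ratio)  # -> 0.8 0.2 0.4 0.4
```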

48
Q

An index of the degree to which a test score is related to some criterion measure obtained at the same time.

A. Concurrent Validity
B. Predictive Validity
C. Construct Validity

A

A. Concurrent Validity

49
Q

An index of the degree to which a test score predicts some criterion measure

A. Concurrent Validity
B. Predictive Validity
C. Construct Validity

A

B. Predictive Validity

50
Q

This measure of validity is arrived at by executing a comprehensive analysis of how scores on the test relate to other test scores and measures and how scores on the test can be understood within some theoretical framework for understanding the construct that the test was designed to measure.

A. Concurrent Validity
B. Predictive Validity
C. Construct Validity

A

C. Construct Validity

51
Q

True or False: If a test is a valid measure of a construct, then high scorers and low scorers should behave as theorized.

A

True

52
Q

How uniform a test is in measuring a single concept

A. Evidence of homogeneity
B. Evidence of changes with age
C. Evidence of pretest–posttest changes
D. Evidence from distinct groups

A

A. Evidence of homogeneity

53
Q

Some constructs are expected to change over time (e.g., reading rate)

A. Evidence of homogeneity
B. Evidence of changes with age
C. Evidence of pretest–posttest changes
D. Evidence from distinct groups

A

B. Evidence of changes with age

54
Q

Test scores change as a result of some experience between a pretest and a posttest (e.g., therapy)

A. Evidence of homogeneity
B. Evidence of changes with age
C. Evidence of pretest–posttest changes
D. Evidence from distinct groups

A

C. Evidence of pretest–posttest changes

55
Q

Scores on a test vary in a predictable way as a function of membership in some group

A. Evidence of homogeneity
B. Evidence of changes with age
C. Evidence of pretest–posttest changes
D. Evidence from distinct groups

A

D. Evidence from distinct groups

56
Q

Scores on the test undergoing construct validation tend to correlate highly in the predicted direction with scores on older, more established tests designed to measure the same (or a similar) construct.

A. Convergent Evidence
B. Discriminant Evidence
C. Factor Analysis

A

A. Convergent Evidence

57
Q

Validity coefficient showing little relationship between test scores and/or other variables with which scores on the test should not theoretically be correlated.

A. Convergent Evidence
B. Discriminant Evidence
C. Factor Analysis

A

B. Discriminant Evidence

58
Q

Class of mathematical procedures designed to identify specific variables on which people may differ

A. Convergent Evidence
B. Discriminant Evidence
C. Factor Analysis

A

C. Factor Analysis

59
Q

A factor inherent in a test that systematically prevents accurate, impartial measurement.

A. Bias
B. Rating Error
C. Halo Effect
D. Fairness

A

A. Bias

60
Q

A judgment resulting from the intentional or unintentional misuse of a rating scale.

A. Bias
B. Rating Error
C. Halo Effect
D. Fairness

A

B. Rating Error

61
Q

Raters may be either too lenient, too severe, or reluctant to give ratings at the extremes (central tendency error).

A. Bias
B. Rating Error
C. Halo Effect
D. Fairness

A

B. Rating Error

62
Q

A tendency to give a particular person a higher rating than he or she objectively deserves because of a favorable overall impression

A. Bias
B. Rating Error
C. Halo Effect
D. Fairness

A

C. Halo Effect

63
Q

The extent to which a test is used in an impartial, just, and equitable way

A. Bias
B. Rating Error
C. Halo Effect
D. Fairness

A

D. Fairness