UNIT 6 Flashcards

1
Q

The extent to which a score from a selection measure is stable and free
from error

A. Validity
B. Consistency
C. Reliability

A

C. Reliability

2
Q

True or False: If a score from a measure is not stable or error-free, it is not useful.

A

True

3
Q

Each of several people takes the same test twice.
A. Test-retest reliability
B. Temporal stability
C. Alternate-Forms Reliability

A

A. Test-retest reliability

4
Q

The consistency of test scores across time.
A. Test-retest reliability
B. Temporal stability
C. Alternate-Forms Reliability

A

B. Temporal stability

5
Q

True or False: the time interval should be long enough so that the specific test answers have not been memorized, but short enough so that the person has not changed significantly.

A

True

6
Q

True or False: Typical time intervals between test administrations range from three weeks to three years

A

False; three days to three months

7
Q

The typical test-retest reliability coefficient for tests used by organizations is

A. 0.81
B. 0.92
C. 0.86
D. 0.79

A

C. 0.86

8
Q

The amount of anxiety that an individual normally has all the time.
A. Trait anxiety
B. State anxiety

A

A. Trait anxiety

9
Q

The amount of anxiety an individual has at any given moment.
A. Trait anxiety
B. State anxiety

A

B. State anxiety

10
Q

The extent to which two forms of the same test are similar.

A. Alternate-forms reliability
B. Counterbalancing
C. Form stability

A

A. Alternate-forms reliability

11
Q

A method of controlling for order effects by giving half of a sample Test A first, followed by Test B, and giving the other half of the sample Test B first, followed by Test A.

A. Alternate-forms reliability
B. Counterbalancing
C. Form stability

A

B. Counterbalancing

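The counterbalancing procedure described above can be sketched in a few lines: randomly split the sample, then give each half the two forms in opposite order. The participant names are hypothetical.

```python
import random

def counterbalance(sample):
    """Randomly split a sample in half; one half takes Test A then Test B,
    the other half takes Test B then Test A, controlling for order effects."""
    people = sample[:]          # copy so the caller's list is untouched
    random.shuffle(people)
    half = len(people) // 2
    return {"A then B": people[:half], "B then A": people[half:]}

groups = counterbalance(["P1", "P2", "P3", "P4", "P5", "P6"])
```

Because order is the only thing that differs between the halves, any systematic order effect averages out when the two groups are combined.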
12
Q

True or False: Applicants retaking the same cognitive ability test will increase their scores about twice as much as applicants taking an alternate form of the cognitive ability test

A

True

13
Q

The extent to which the scores on two forms of a test are similar.
A. Alternate-forms reliability
B. Counterbalancing
C. Form stability

A

C. Form stability

14
Q

True or False: In alternate-forms reliability, the time interval should be as long as possible.

A

False; the time interval should be as short as possible

15
Q

The extent to which responses to the same test items are consistent.

A. Item stability
B. Item homogeneity
C. Kuder-Richardson Formula 20

A

A. Item stability

16
Q

The extent to which test items measure the same construct.

A. Item stability
B. Item homogeneity
C. Kuder-Richardson Formula 20

A

B. Item homogeneity

17
Q

A statistic used to determine internal reliability of tests that use items with dichotomous answers

A. Item stability
B. Item homogeneity
C. Kuder-Richardson Formula 20

A

C. Kuder-Richardson Formula 20

18
Q

A form of internal reliability in which the consistency of item responses is determined by comparing scores on half of the items with scores on the other half of the items.

A. Split-half method
B. Spearman-Brown prophecy formula
C. Coefficient alpha

A

A. Split-half method

19
Q

Used to correct reliability coefficients resulting from the split-half method.

A. Split-half method
B. Spearman-Brown prophecy formula
C. Coefficient alpha

A

B. Spearman-Brown prophecy formula

20
Q

A statistic used to determine internal reliability of tests that use interval or ratio scales.

A. Split-half method
B. Spearman-Brown prophecy formula
C. Coefficient alpha

A

C. Coefficient alpha

21
Q

True or False: K-R 20 is used for dichotomous items, whereas the coefficient alpha can be used not only for dichotomous items but also for tests containing interval and ratio items

A

True

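The relationship in the card above can be seen directly in code: coefficient alpha is computed from item variances and the total-score variance, and when every item is scored 0/1 it reduces to K-R 20. The four-person, three-item data set is invented for illustration.

```python
def coefficient_alpha(item_scores):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance).
    item_scores[i][j] is person i's score on item j.
    For dichotomous (0/1) items this equals K-R 20."""
    k = len(item_scores[0])                      # number of items
    totals = [sum(person) for person in item_scores]

    def pvar(xs):                                # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_var_sum = sum(pvar([p[j] for p in item_scores]) for j in range(k))
    return k / (k - 1) * (1 - item_var_sum / pvar(totals))

# Hypothetical right/wrong (0/1) answers of four people to three items
data = [[1, 1, 1],
        [1, 1, 0],
        [1, 0, 0],
        [0, 0, 0]]
alpha = coefficient_alpha(data)
```

With interval or ratio items the same formula applies unchanged, which is why alpha is the more general statistic.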
22
Q

The extent to which two people scoring a test agree on the test score, or the extent to which a test is scored correctly.

A. Scorer reliability
B. Validity

A

A. Scorer reliability

23
Q

When deciding whether a test demonstrates sufficient reliability, two factors must be considered:

A. The contents of the test
B. The magnitude of the reliability coefficient
C. The people who will be taking the test.
D. The validity of the test

A

B&C

24
Q

The degree to which inferences from test scores are justified by the evidence.

A. Scorer reliability
B. Validity

A

B. Validity

25
Q

The potential validity of a test is limited by its

A. Consistency
B. Accuracy
C. Reliability
D. Utility

A

C. Reliability

26
Q

The extent to which tests or test items sample the content that they are supposed to measure.

A. Content Validity
B. Criterion Validity
C. Construct Validity
D. Face Validity

A

A. Content Validity

27
Q

The extent to which a test score is related to some measure of job performance

A. Content Validity
B. Criterion Validity
C. Construct Validity
D. Face Validity

A

B. Criterion Validity

28
Q

Most theoretical of all the validity types

A. Content Validity
B. Criterion Validity
C. Construct Validity
D. Face Validity

A

C. Construct Validity

29
Q

The extent to which a test actually measures the construct that it purports to measure.

A. Content Validity
B. Criterion Validity
C. Construct Validity
D. Face Validity

A

C. Construct Validity

30
Q

The extent to which a test appears to be valid

A. Content Validity
B. Criterion Validity
C. Construct Validity
D. Face Validity

A

D. Face Validity

31
Q

A measure of job performance, such as attendance, productivity, or a supervisor rating.

A. Criterion
B. Concurrent Validity
C. Predictive Validity

A

A. Criterion

32
Q

A form of criterion validity that correlates test scores with measures of job performance for employees currently working for an organization

A. Criterion
B. Concurrent Validity
C. Predictive Validity

A

B. Concurrent Validity

33
Q

Given to a group of employees who are already on the job

A. Criterion
B. Concurrent Validity
C. Predictive Validity

A

B. Concurrent Validity

34
Q

A form of criterion validity in which test scores of applicants are compared at a later date with a measure of job performance

A. Criterion
B. Concurrent Validity
C. Predictive Validity

A

C. Predictive Validity

35
Q

Given to job applicants who are going to be hired

A. Criterion
B. Concurrent Validity
C. Predictive Validity

A

C. Predictive Validity

36
Q

A narrow range of performance scores that makes it difficult to obtain a significant validity coefficient

A. Restricted Range
B. Validity Generalization (VG)
C. Synthetic Validity

A

A. Restricted Range

37
Q

The extent to which inferences from test scores from one organization can be applied to another organization

A. Restricted Range
B. Validity Generalization (VG)
C. Synthetic Validity

A

B. Validity Generalization (VG)

38
Q

A form of validity generalization in which validity is inferred on the basis of a match between job components and tests previously found valid for those job components

A. Restricted Range
B. Validity Generalization (VG)
C. Synthetic Validity

A

C. Synthetic Validity

39
Q

A form of validity in which test scores from two contrasting groups “known” to differ on a construct are compared.

A. Known-group Validity
B. Barnum Statements
C. Mental Measurements Yearbook (MMY)

A

A. Known-group Validity

40
Q

Statements, such as those used in astrological forecasts, that are so general that they can be true of almost anyone.

A. Known-group Validity
B. Barnum Statements
C. Mental Measurements Yearbook (MMY)

A

B. Barnum Statements

41
Q

A book containing information about the reliability and validity of various psychological tests.

A. Known-group Validity
B. Barnum Statements
C. Mental Measurements Yearbook (MMY)

A

C. Mental Measurements Yearbook (MMY)

42
Q

Most common source of test information

A. Known-group Validity
B. Barnum Statements
C. Mental Measurements Yearbook (MMY)

A

C. Mental Measurements Yearbook (MMY)

43
Q

Should be considered if two or more tests have similar validities

A. Cost
B. Group testing
C. Computer-adaptive testing (CAT)

A

A. Cost

44
Q

usually less expensive and more efficient although important information may be lost

A. Cost
B. Group testing
C. Computer-adaptive testing (CAT)

A

B. Group testing

45
Q

type of test taken on a computer in which the computer adapts the difficulty level of questions asked to the test taker’s success in answering previous questions

A. Cost
B. Group testing
C. Computer-adaptive testing (CAT)

A

C. Computer-adaptive testing (CAT)

46
Q

provide an estimate of the percentage of total new hires who will be successful employees if a test is adopted

A. Taylor-Russell tables
B. Expectancy charts and Lawshe tables
C. Utility formula

A

A. Taylor-Russell tables

47
Q

organizational success

A. Taylor-Russell tables
B. Expectancy charts and Lawshe tables
C. Utility formula

A

A. Taylor-Russell tables

48
Q

provide a probability of success for a particular applicant based on test scores

A. Taylor-Russell tables
B. Expectancy charts and Lawshe tables
C. Utility formula

A

B. Expectancy charts and Lawshe tables

49
Q

individual success

A. Taylor-Russell tables
B. Expectancy charts and Lawshe tables
C. Utility formula

A

B. Expectancy charts and Lawshe tables

50
Q

provides an estimate of the amount of money an organization will save if it adopts a new testing procedure

A. Taylor-Russell tables
B. Expectancy charts and Lawshe tables
C. Utility formula

A

C. Utility formula

51
Q

conduct a criterion validity study with test scores correlated with some measure of job performance

A. Criterion validity coefficient
B. Selection ratio
C. Base rate

A

A. Criterion validity coefficient

52
Q

Validity generalization

A. Criterion validity coefficient
B. Selection ratio
C. Base rate

A

A. Criterion validity coefficient

53
Q

the percentage of people an organization must hire

A. Criterion validity coefficient
B. Selection ratio
C. Base rate

A

B. Selection ratio

54
Q

the percentage of employees currently on the job who are considered successful

A. Criterion validity coefficient
B. Selection ratio
C. Base rate

A

C. Base rate
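The two ratios defined in the cards above are the inputs (together with the validity coefficient) that the Taylor-Russell tables are entered with. A trivial sketch with hypothetical counts:

```python
def selection_ratio(hired, applicants):
    """Proportion of applicants an organization must hire."""
    return hired / applicants

def base_rate(successful, current_employees):
    """Proportion of current employees considered successful."""
    return successful / current_employees

sr = selection_ratio(10, 100)  # hire 10 of 100 applicants -> 0.10
br = base_rate(60, 80)         # 60 of 80 employees successful -> 0.75
```

With a low selection ratio like .10, the organization can be choosy, which is exactly why lower ratios make a valid test more useful.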

55
Q

True or False: The higher the selection ratio, the greater the potential usefulness of the test

A

False; the lower the selection ratio

56
Q

simplest but least accurate
A. First method
B. Second method

A

A. First method

57
Q

more meaningful method
A. First method
B. Second method

A

B. Second method

58
Q

Employees are split into two equal groups based on their scores on some criterion such as tenure or performance

A. First method
B. Second method

A

A. First method

59
Q

Choose a criterion measure score above which all employees are considered successful

A. First method
B. Second method

A

B. Second method

60
Q

True or False: Determining the proportion of correct decisions is easier but less accurate than the Taylor-Russell tables

A

True

61
Q

A utility method that compares the percentage of times a selection decision was accurate with the percentage of successful employees

A. Proportion of Correct Decisions
B. Lawshe Tables
C. Expectancy Charts

A

A. Proportion of Correct Decisions

62
Q

The only information needed to determine the proportion of correct decisions is ____ (2 items)

A

employee test scores and the scores on the criterion
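Given just those two pieces of information plus a cutoff on each, the proportion of correct decisions counts the people for whom the hire/no-hire prediction matches the successful/unsuccessful outcome. A sketch with invented scores:

```python
def proportion_of_correct_decisions(test_scores, criterion_scores,
                                    test_cutoff, criterion_cutoff):
    """Fraction of people for whom the test decision matches the outcome:
    predicted successes who succeeded plus predicted failures who failed."""
    correct = 0
    for test, crit in zip(test_scores, criterion_scores):
        would_hire = test >= test_cutoff
        successful = crit >= criterion_cutoff
        if would_hire == successful:
            correct += 1
    return correct / len(test_scores)

# Hypothetical test scores and performance ratings for six employees
tests = [55, 70, 80, 40, 90, 60]
perf  = [2.0, 4.1, 4.5, 1.8, 4.8, 3.9]
pcd = proportion_of_correct_decisions(tests, perf,
                                      test_cutoff=65, criterion_cutoff=3.0)
```

Here five of the six decisions are correct (the person scoring 60 would have been rejected despite performing well), so the proportion is 5/6.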

63
Q

Created to know the probability that a particular applicant will be successful

A. Proportion of Correct Decisions
B. Lawshe Tables
C. Expectancy Charts

A

B. Lawshe Tables

64
Q

Three pieces of information needed to use the Lawshe Tables

A

Validity coefficient
Base rate
Applicant’s test score

65
Q

charts that indicate the chance of success for each test score range

A. Proportion of Correct Decisions
B. Lawshe Tables
C. Expectancy Charts

A

C. Expectancy Charts

66
Q

indicates the probability of being a successful employee given a particular test score

A. Proportion of Correct Decisions
B. Lawshe Tables
C. Expectancy Charts

A

C. Expectancy Chart

67
Q

Especially effective for a nontechnical audience but can be misleading if the sample size in the study is small

A. Proportion of Correct Decisions
B. Lawshe Tables
C. Expectancy Charts

A

C. Expectancy Charts

68
Q

True or False: Expectancy charts are based on raw data distributions rather than correlation coefficients

A

True

69
Q

Method of ascertaining the extent to which an organization will benefit from the use of a particular selection system

A. Utility Formula
B. Number of employees hired per year (n)
C. Average tenure (t)

A

A. Utility Formula

70
Q

Estimates the monetary savings to an organization

A. Utility Formula
B. Number of employees hired per year (n)
C. Average tenure (t)

A

A. Utility Formula

71
Q

the number of employees who are hired for a given position in a year

A. Utility Formula
B. Number of employees hired per year (n)
C. Average tenure (t)

A

B. Number of employees hired per year (n)

72
Q

the average amount of time that employees in the position tend to stay within the company

A. Utility Formula
B. Number of employees hired per year (n)
C. Average tenure (t)

A

C. Average tenure (t)

73
Q

Computed by using info from company records to identify the time that each employee in that position stayed with the company

A. Utility Formula
B. Number of employees hired per year (n)
C. Average tenure (t)

A

C. Average tenure (t)

74
Q

the length of time an employee has been with an organization

A. Tenure
B. Test validity (r)
C. Standard deviation of performance in dollars (SDy)
D. Mean standardized predictor score of selected applicants (m)

A

A. Tenure

75
Q

the criterion validity coefficient that was obtained through either a validity study or validity generalization

A. Tenure
B. Test validity (r)
C. Standard deviation of performance in dollars (SDy)
D. Mean standardized predictor score of selected applicants (m)

A

B. Test validity (r)

76
Q

The salaries of current employees in the position in question should be averaged.

A. Tenure
B. Test validity (r)
C. Standard deviation of performance in dollars (SDy)
D. Mean standardized predictor score of selected applicants (m)

A

C. Standard deviation of performance in dollars (SDy)

77
Q

For jobs in which the difference in performance between an average worker and a good worker is 40% of the employee’s annual salary

A. Tenure
B. Test validity (r)
C. Standard deviation of performance in dollars (SDy)
D. Mean standardized predictor score of selected applicants (m)

A

C. Standard deviation of performance in dollars (SDy)
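The utility components listed across the cards above (n, t, r, SDy, m) simply multiply together. A sketch with hypothetical values, including the optional subtraction of testing costs:

```python
def utility_savings(n, t, r, sdy, m, testing_cost=0.0):
    """Estimated monetary gain from adopting a new selection test:
    (hires per year) x (average tenure in years) x (test validity)
    x (SD of performance in dollars) x (mean standardized test score
    of those hired), minus the cost of testing."""
    return n * t * r * sdy * m - testing_cost

# Hypothetical: 10 hires/year, 2-year average tenure, r = .40,
# SDy = 40% of a $30,000 salary, hires average m = 1.0 SD on the test
savings = utility_savings(n=10, t=2, r=0.40, sdy=0.40 * 30_000, m=1.0)
```

With these numbers the estimated saving is 10 × 2 × .40 × $12,000 × 1.0 = $96,000 over the tenure of one year's hires.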

78
Q

obtain the average score on the selection test for both applicants who are hired and the applicants who are not hired

A. First method
B. Second method

A

A. First method

79
Q

compute the proportion of applicants who are hired then use a conversion table to convert the proportion into a standard score

A. First method
B. Second method

A

B. Second method

80
Q

Refers to technical aspects of a test

A. Determining the Fairness of the Test
B. Measurement Bias

A

B. Measurement Bias

81
Q

True or False: A test is considered to not have measurement bias if there are group differences in test scores that are unrelated to the construct being measured

A

False; it is considered to have measurement bias

82
Q

An employment practice that results in members of a protected class being negatively affected at a higher rate than members of the majority class

A. Adverse Impact
B. Predictive Bias
C. Single-Group Validity

A

A. Adverse Impact

83
Q

usually determined by the four-fifths rule

A. Adverse Impact
B. Predictive Bias
C. Single-Group Validity

A

A. Adverse Impact

84
Q

Occurs when the selection rate for one group is less than 80% of the rate for the highest scoring group

A. Adverse Impact
B. Predictive Bias
C. Single-Group Validity

A

A. Adverse Impact
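The four-fifths rule from the card above is a direct comparison of selection rates. A sketch with hypothetical applicant-flow numbers (group names are placeholders):

```python
def adverse_impact(hired_by_group, applicants_by_group):
    """Four-fifths rule: flag adverse impact when any group's selection
    rate falls below 80% of the highest group's selection rate."""
    rates = {g: hired_by_group[g] / applicants_by_group[g]
             for g in applicants_by_group}
    highest = max(rates.values())
    flagged = {g for g, rate in rates.items() if rate < 0.80 * highest}
    return rates, flagged

rates, flagged = adverse_impact(
    hired_by_group={"group A": 50, "group B": 20},
    applicants_by_group={"group A": 100, "group B": 60},
)
```

Here group A's rate is .50 and group B's is .33; since .33 is below four-fifths of .50 (.40), group B is flagged.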

85
Q

situations in which the predicted level of job success falsely favors one group over another

A. Adverse Impact
B. Predictive Bias
C. Single-Group Validity

A

B. Predictive Bias

86
Q

The test will significantly predict performance for one group and not others

A. Adverse Impact
B. Predictive Bias
C. Single-Group Validity

A

C. Single-Group Validity

87
Q

Very rare; usually the result of small sample sizes and other methodological problems

A. Adverse Impact
B. Predictive Bias
C. Single-Group Validity

A

C. Single-Group Validity

88
Q

The test does not exhibit single-group validity and passes this fairness hurdle

A. Both correlations are significant
B. Only one of the correlations is significant

A

A. Both correlations are significant

89
Q

The test is considered fair for only that one group.

A. Both correlations are significant
B. Only one of the correlations is significant

A

B. Only one of the correlations is significant

90
Q

A test is valid for two groups but more valid for one than for the other

A. Differential Validity
B. 1991 Civil Rights Act
C. Multiple Regression

A

A. Differential Validity

91
Q

usually in occupations dominated by a single sex, tests are most valid for the dominant sex, and the tests overpredict minority performance

A. Differential Validity
B. 1991 Civil Rights Act
C. Multiple Regression

A

A. Differential Validity

91
Q

prohibits score adjustments based on race or gender.

A. Differential Validity
B. 1991 Civil Rights Act
C. Multiple Regression

A

B. 1991 Civil Rights Act

91
Q

Statistical procedure that weights each test score according to how well it predicts the criterion

A. Differential Validity
B. 1991 Civil Rights Act
C. Multiple Regression

A

C. Multiple Regression

92
Q

Applicants are rank-ordered based on their test scores

A. Unadjusted Top-Down Selection
B. Compensatory Approach
C. Rule of Three (or Five)
D. Multiple regression

A

A. Unadjusted Top-Down Selection

92
Q

Combines test scores when more than one criterion-valid test is used

A. Differential Validity
B. 1991 Civil Rights Act
C. Multiple Regression

A

C. Multiple Regression

93
Q

Selection is made starting with the applicant with the highest score downwards until all openings have been filled

A. Unadjusted Top-Down Selection
B. Compensatory Approach
C. Rule of Three (or Five)
D. Multiple regression

A

A. Unadjusted Top-Down Selection

94
Q

Assumes that if multiple test scores are used, the relationship between a low score on one test can be compensated for by a high score on another

A. Unadjusted Top-Down Selection
B. Compensatory Approach
C. Rule of Three (or Five)
D. Multiple regression

A

B. Compensatory Approach

95
Q

Used to determine whether a score on one test can compensate for a score on another

A. Unadjusted Top-Down Selection
B. Compensatory Approach
C. Rule of Three (or Five)
D. Multiple regression

A

D. Multiple regression

96
Q

The names of the top three scorers are given to the person making the hiring decision so they can choose any of the three based on the immediate needs of the employer

A. Unadjusted Top-Down Selection
B. Compensatory Approach
C. Rule of Three (or Five)
D. Multiple regression

A

C. Rule of Three (or Five)

97
Q

A variation on top-down selection that provides more choices and ensures that the person hired will be well-qualified

A. Unadjusted Top-Down Selection
B. Compensatory Approach
C. Rule of Three (or Five)
D. Multiple regression

A

C. Rule of Three (or Five)

98
Q

Often used in the public sector

A. Unadjusted Top-Down Selection
B. Compensatory Approach
C. Rule of Three (or Five)
D. Multiple regression

A

C. Rule of Three (or Five)

99
Q

Its purpose is to reduce adverse impact and increase flexibility

A. Passing Scores
B. Low test scores
C. Top-down selection
D. Banding

A

A. Passing Scores

100
Q

associated with low performance on the job

A. Passing Scores
B. Low test scores
C. Top-down selection
D. Banding

A

B. Low test scores

101
Q

Would usually be determined by experts reading each item on a test and providing an estimate of the percentage of minimally qualified employees who could answer the item correctly

A. Passing Scores
B. Low test scores
C. Top-down selection
D. Banding

A

A. Passing Scores

102
Q

“Who will perform the best in the future?”

A. Passing Scores
B. Low test scores
C. Top-down selection
D. Banding

A

C. Top-down selection

103
Q

“Who will be able to perform at an acceptable level in the future?”

A. Passing Scores
B. Low test scores
C. Top-down selection
D. Banding

A

A. Passing Scores

104
Q

Applicants are administered all of the tests at one time; if they fail any of the tests, they are not considered further for employment.

A. Multiple-cutoff approach
B. Multiple-hurdle approach
C. Banding

A

A. Multiple-cutoff approach

105
Q

Costly; even if an applicant passes 3 out of 4 tests, he or she will not be hired

A. Multiple-cutoff approach
B. Multiple-hurdle approach
C. Banding

A

A. Multiple-cutoff approach

106
Q

Used to reduce cost
A. Multiple-cutoff approach
B. Multiple-hurdle approach
C. Banding

A

B. Multiple-hurdle approach

107
Q

Applicants are administered one test at a time; when an applicant fails a test, they are no longer eligible for further testing and consideration

A. Multiple-cutoff approach
B. Multiple-hurdle approach
C. Banding

A

B. Multiple-hurdle approach

108
Q

A compromise between top-down hiring and passing scores that still allows flexibility

A. Multiple-cutoff approach
B. Multiple-hurdle approach
C. Banding

A

C. Banding

109
Q

takes into consideration the degree of error associated with any test score

A. Multiple-cutoff approach
B. Multiple-hurdle approach
C. Banding

A

C. Banding