UNIT 6 Flashcards

1
Q

The extent to which a score from a selection measure is stable and free
from error

A. Validity
B. Consistency
C. Reliability

A

C. Reliability

2
Q

True or False: If a score from a measure is not stable or error-free, it is not useful.

A

True

3
Q

Each of several people takes the same test twice.
A. Test-retest reliability
B. Temporal stability
C. Alternate-Forms Reliability

A

A. Test-retest reliability

4
Q

The consistency of test scores across time.
A. Test-retest reliability
B. Temporal stability
C. Alternate-Forms Reliability

A

B. Temporal stability

5
Q

True or False: The time interval should be long enough so that the specific test answers have not been memorized, but short enough so that the person has not changed significantly.

A

True

6
Q

True or False: Typical time intervals between test administrations range from three weeks to three years

A

False; three days to three months

7
Q

The typical test-retest reliability coefficient for tests used by organizations is

A. 0.81
B. 0.92
C. 0.86
D. 0.79

A

C. 0.86

8
Q

The amount of anxiety that an individual normally has all the time.
A. Trait anxiety
B. State anxiety

A

A. Trait anxiety

9
Q

The amount of anxiety an individual has at any given moment.
A. Trait anxiety
B. State anxiety

A

B. State anxiety

10
Q

The extent to which two forms of the same test are similar.

A. Alternate-forms reliability
B. Counterbalancing
C. Form stability

A

A. Alternate-forms reliability

11
Q

A method of controlling for order effects by giving half of a sample Test A first, followed by Test B, and giving the other half of the sample Test B first, followed by Test A.

A. Alternate-forms reliability
B. Counterbalancing
C. Form stability

A

B. Counterbalancing

12
Q

True or False: Applicants retaking the same cognitive ability test will increase their scores about twice as much as applicants taking an alternate form of the cognitive ability test

A

True

13
Q

The extent to which the scores on two forms of a test are similar.
A. Alternate-forms reliability
B. Counterbalancing
C. Form stability

A

C. Form stability

14
Q

True or False: In alternate-forms reliability, the time interval should be as long as possible.

A

True

15
Q

The extent to which responses to the same test items are consistent.

A. Item stability
B. Item homogeneity
C. Kuder-Richardson Formula 20

A

A. Item stability

16
Q

The extent to which test items measure the same construct.

A. Item stability
B. Item homogeneity
C. Kuder-Richardson Formula 20

A

B. Item homogeneity

17
Q

A statistic used to determine internal reliability of tests that use items with dichotomous answers

A. Item stability
B. Item homogeneity
C. Kuder-Richardson Formula 20

A

C. Kuder-Richardson Formula 20
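
K-R 20 can be sketched in a few lines; the dichotomous item responses below are made up:

```python
# Illustrative sketch of K-R 20 for dichotomous (0/1) items; data are made up.
from statistics import pvariance

# rows = test takers, columns = items (1 = correct, 0 = incorrect)
answers = [
    [1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 1],
]
k = len(answers[0])                     # number of items
totals = [sum(row) for row in answers]  # each person's total score
p = [sum(col) / len(answers) for col in zip(*answers)]  # proportion correct per item
sum_pq = sum(pi * (1 - pi) for pi in p)
kr20 = (k / (k - 1)) * (1 - sum_pq / pvariance(totals))
print(round(kr20, 2))
```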

18
Q

A form of internal reliability in which the consistency of item responses is determined by comparing scores on half of the items with scores on the other half of the items.

A. Split-half method
B. Spearman-Brown prophecy formula
C. Coefficient alpha

A

A. Split-half method

19
Q

Used to correct reliability coefficients resulting from the split-half method.

A. Split-half method
B. Spearman-Brown prophecy formula
C. Coefficient alpha

A

B. Spearman-Brown prophecy formula

20
Q

A statistic used to determine internal reliability of tests that use interval or ratio scales.

A. Split-half method
B. Spearman-Brown prophecy formula
C. Coefficient alpha

A

C. Coefficient alpha
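
Coefficient alpha applies the same logic to interval- or ratio-scale items, using item variances instead of p × q; the ratings below are made up:

```python
# Illustrative sketch of coefficient alpha for interval-scale items
# (e.g., 1-5 ratings); the data are made up.
from statistics import pvariance

# rows = respondents, columns = items
ratings = [
    [4, 5, 4, 4],
    [2, 3, 2, 3],
    [5, 5, 4, 5],
    [3, 2, 3, 3],
    [4, 4, 5, 4],
]
k = len(ratings[0])
item_vars = [pvariance(col) for col in zip(*ratings)]
total_var = pvariance([sum(row) for row in ratings])
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(round(alpha, 2))
```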

21
Q

True or False: K-R 20 is used for dichotomous items, whereas the coefficient alpha can be used not only for dichotomous items but also for tests containing interval and ratio items

A

True

22
Q

The extent to which two people scoring a test agree on the test score, or the extent to which a test is scored correctly.

A. Scorer reliability
B. Validity

A

A. Scorer reliability

23
Q

When deciding whether a test demonstrates sufficient reliability, two factors must be considered:

A. The contents of the test
B. The magnitude of the reliability coefficient
C. The people who will be taking the test.
D. The validity of the test

A

B & C

24
Q

The degree to which inferences from test scores are justified by the evidence.

A. Scorer reliability
B. Validity

A

B. Validity

25
Q

The potential validity of a test is limited by its

A. Consistency
B. Accuracy
C. Reliability
D. Utility

A

C. Reliability

26
Q

The extent to which tests or test items sample the content that they are supposed to measure.

A. Content Validity
B. Criterion Validity
C. Construct Validity
D. Face Validity

A

A. Content Validity

27
Q

The extent to which a test score is related to some measure of job performance.

A. Content Validity
B. Criterion Validity
C. Construct Validity
D. Face Validity

A

B. Criterion Validity

28
Q

The most theoretical of all the validity types.

A. Content Validity
B. Criterion Validity
C. Construct Validity
D. Face Validity

A

C. Construct Validity

29
Q

The extent to which a test actually measures the construct that it purports to measure.

A. Content Validity
B. Criterion Validity
C. Construct Validity
D. Face Validity

A

C. Construct Validity

30
Q

The extent to which a test appears to be valid.

A. Content Validity
B. Criterion Validity
C. Construct Validity
D. Face Validity

A

D. Face Validity

31
Q

A measure of job performance, such as attendance, productivity, or a supervisor rating.

A. Criterion
B. Concurrent Validity
C. Predictive Validity

A

A. Criterion

32
Q

A form of criterion validity that correlates test scores with measures of job performance for employees currently working for an organization.

A. Criterion
B. Concurrent Validity
C. Predictive Validity

A

B. Concurrent Validity

33
Q

Given to a group of employees who are already on the job.

A. Criterion
B. Concurrent Validity
C. Predictive Validity

A

B. Concurrent Validity

34
Q

A form of criterion validity in which test scores of applicants are compared at a later date with a measure of job performance.

A. Criterion
B. Concurrent Validity
C. Predictive Validity

A

C. Predictive Validity

35
Q

Given to job applicants who are going to be hired.

A. Criterion
B. Concurrent Validity
C. Predictive Validity

A

C. Predictive Validity

36
Q

A narrow range of performance scores that makes it difficult to obtain a significant validity coefficient.

A. Restricted Range
B. Validity Generalization (VG)
C. Synthetic Validity

A

A. Restricted Range

37
Q

The extent to which inferences from test scores from one organization can be applied to another organization.

A. Restricted Range
B. Validity Generalization (VG)
C. Synthetic Validity

A

B. Validity Generalization (VG)

38
Q

A form of validity generalization in which validity is inferred on the basis of a match between job components and tests previously found valid for those job components.

A. Restricted Range
B. Validity Generalization (VG)
C. Synthetic Validity

A

C. Synthetic Validity

39
Q

A form of validity in which test scores from two contrasting groups "known" to differ on a construct are compared.

A. Known-group Validity
B. Barnum Statements
C. Mental Measurements Yearbook (MMY)

A

A. Known-group Validity

40
Q

Statements, such as those used in astrological forecasts, that are so general that they can be true of almost anyone.

A. Known-group Validity
B. Barnum Statements
C. Mental Measurements Yearbook (MMY)

A

B. Barnum Statements

41
Q

A book containing information about the reliability and validity of various psychological tests.

A. Known-group Validity
B. Barnum Statements
C. Mental Measurements Yearbook (MMY)

A

C. Mental Measurements Yearbook (MMY)

42
Q

The most common source of test information.

A. Known-group Validity
B. Barnum Statements
C. Mental Measurements Yearbook (MMY)

A

C. Mental Measurements Yearbook (MMY)
43
Q

Should be considered if two or more tests have similar validities.

A. Cost
B. Group testing
C. Computer-adaptive testing (CAT)

A

A. Cost

44
Q

Usually less expensive and more efficient, although important information may be lost.

A. Cost
B. Group testing
C. Computer-adaptive testing (CAT)

A

B. Group testing

45
Q

A type of test taken on a computer in which the computer adapts the difficulty level of questions asked to the test taker's success in answering previous questions.

A. Cost
B. Group testing
C. Computer-adaptive testing (CAT)

A

C. Computer-adaptive testing (CAT)

46
Q

Provide an estimate of the percentage of total new hires who will be successful employees if a test is adopted.

A. Taylor-Russell tables
B. Expectancy charts and Lawshe tables
C. Utility formula

A

A. Taylor-Russell tables

47
Q

Organizational success.

A. Taylor-Russell tables
B. Expectancy charts and Lawshe tables
C. Utility formula

A

A. Taylor-Russell tables

48
Q

Provide a probability of success for a particular applicant based on test scores.

A. Taylor-Russell tables
B. Expectancy charts and Lawshe tables
C. Utility formula

A

B. Expectancy charts and Lawshe tables

49
Q

Individual success.

A. Taylor-Russell tables
B. Expectancy charts and Lawshe tables
C. Utility formula

A

B. Expectancy charts and Lawshe tables

50
Q

Provides an estimate of the amount of money an organization will save if it adopts a new testing procedure.

A. Taylor-Russell tables
B. Expectancy charts and Lawshe tables
C. Utility formula

A

C. Utility formula

51
Q

Conduct a criterion validity study with test scores correlated with some measure of job performance.

A. Criterion validity coefficient
B. Selection ratio
C. Base rate

A

A. Criterion validity coefficient

52
Q

Validity generalization.

A. Criterion validity coefficient
B. Selection ratio
C. Base rate

A

A. Criterion validity coefficient

53
Q

The percentage of people an organization must hire.

A. Criterion validity coefficient
B. Selection ratio
C. Base rate

A

B. Selection ratio

54
Q

The percentage of employees currently on the job who are considered successful.

A. Criterion validity coefficient
B. Selection ratio
C. Base rate

A

C. Base rate
55
Q

True or False: The higher the selection ratio, the greater the potential usefulness of the test.

A

False; the lower the selection ratio, the greater the potential usefulness of the test.
56
Q

The simplest but least accurate method.

A. First method
B. Second method

A

A. First method

57
Q

The more meaningful method.

A. First method
B. Second method

A

B. Second method

58
Q

Employees are split into two equal groups based on their scores on some criterion, such as tenure or performance.

A. First method
B. Second method

A

A. First method

59
Q

Choose a criterion measure score above which all employees are considered successful.

A. First method
B. Second method

A

B. Second method

60
Q

True or False: Determining the proportion of correct decisions is easier but less accurate than the Taylor-Russell tables.

A

True

61
Q

A utility method that compares the percentage of times a selection decision was accurate with the percentage of successful employees.

A. Proportion of Correct Decisions
B. Lawshe Tables
C. Expectancy Charts

A

A. Proportion of Correct Decisions

62
Q

The only information needed to determine the proportion of correct decisions is ____ (2 items).

A

Employee test scores and scores on the criterion.
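
The proportion of correct decisions can be sketched directly from those two pieces of information; the scores and cutoffs below are hypothetical:

```python
# Illustrative sketch of the proportion of correct decisions. Only employee
# test scores and criterion scores are needed; all numbers are made up.
scores = [(85, 7), (62, 4), (78, 8), (55, 3), (90, 9), (70, 5), (66, 7), (81, 6)]
test_cutoff, criterion_cutoff = 70, 6  # hypothetical passing points

# A decision is "correct" when passing the test and succeeding on the job
# agree (both true, or both false).
correct = sum(
    1 for test, perf in scores
    if (test >= test_cutoff) == (perf >= criterion_cutoff)
)
print(correct / len(scores))
```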
63
Q

Created to determine the probability that a particular applicant will be successful.

A. Proportion of Correct Decisions
B. Lawshe Tables
C. Expectancy Charts

A

B. Lawshe Tables

64
Q

Three pieces of information are needed to use the Lawshe Tables:

A

The validity coefficient, the base rate, and the applicant's test score.

65
Q

Charts that indicate the chance of success for each test score range.

A. Proportion of Correct Decisions
B. Lawshe Tables
C. Expectancy Charts

A

C. Expectancy Charts

66
Q

Indicates the probability of being a successful employee given a particular test score.

A. Proportion of Correct Decisions
B. Lawshe Tables
C. Expectancy Charts

A

C. Expectancy Charts

67
Q

Especially effective for a nontechnical audience, but can be misleading if the sample size in the study is small.

A. Proportion of Correct Decisions
B. Lawshe Tables
C. Expectancy Charts

A

C. Expectancy Charts

68
Q

True or False: Expectancy charts are based on raw data distributions rather than correlation coefficients.

A

True
69
Q

A method of ascertaining the extent to which an organization will benefit from the use of a particular selection system.

A. Utility Formula
B. Number of employees hired per year (n)
C. Average tenure (t)

A

A. Utility Formula

70
Q

Estimates the monetary savings to an organization.

A. Utility Formula
B. Number of employees hired per year (n)
C. Average tenure (t)

A

A. Utility Formula

71
Q

The number of employees who are hired for a given position in a year.

A. Utility Formula
B. Number of employees hired per year (n)
C. Average tenure (t)

A

B. Number of employees hired per year (n)

72
Q

The average amount of time that employees in the position tend to stay with the company.

A. Utility Formula
B. Number of employees hired per year (n)
C. Average tenure (t)

A

C. Average tenure (t)

73
Q

Computed by using information from company records to identify how long each employee in that position stayed with the company.

A. Utility Formula
B. Number of employees hired per year (n)
C. Average tenure (t)

A

C. Average tenure (t)

74
Q

The length of time an employee has been with an organization.

A. Tenure
B. Test validity (r)
C. Standard deviation of performance in dollars (SDy)
D. Mean standardized predictor score of selected applicants (m)

A

A. Tenure

75
Q

The criterion validity coefficient that was obtained through either a validity study or validity generalization.

A. Tenure
B. Test validity (r)
C. Standard deviation of performance in dollars (SDy)
D. Mean standardized predictor score of selected applicants (m)

A

B. Test validity (r)

76
Q

To estimate it, the salaries of current employees in the position in question are averaged.

A. Tenure
B. Test validity (r)
C. Standard deviation of performance in dollars (SDy)
D. Mean standardized predictor score of selected applicants (m)

A

C. Standard deviation of performance in dollars (SDy)

77
Q

For jobs in which the difference in performance between an average and a good worker is 40% of the employee's annual salary.

A. Tenure
B. Test validity (r)
C. Standard deviation of performance in dollars (SDy)
D. Mean standardized predictor score of selected applicants (m)

A

C. Standard deviation of performance in dollars (SDy)

78
Q

Obtain the average score on the selection test for both the applicants who are hired and the applicants who are not hired.

A. First method
B. Second method

A

A. First method

79
Q

Compute the proportion of applicants who are hired, then use a conversion table to convert the proportion into a standard score.

A. First method
B. Second method

A

B. Second method
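
The utility formula (savings = n × t × r × SDy × m, minus the cost of testing) can be sketched directly; every number below is hypothetical:

```python
# Illustrative sketch of the utility formula:
#   savings = (n)(t)(r)(SDy)(m) - cost of testing
# All figures are hypothetical.
n = 10           # employees hired per year
t = 2.0          # average tenure in years
r = 0.40         # criterion validity coefficient of the test
sd_y = 16_000    # SDy, often estimated as 40% of mean annual salary
m = 0.80         # mean standardized test score of the applicants hired
cost = 25 * 200  # e.g., testing 200 applicants at $25 each

savings = n * t * r * sd_y * m - cost
print(savings)  # estimated dollars saved by adopting the test
```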
80
Q

Refers to the technical aspects of a test.

A. Determining the Fairness of the Test
B. Measurement Bias

A

B. Measurement Bias

81
Q

True or False: A test is considered to not have measurement bias if there are group differences in test scores that are unrelated to the construct being measured.

A

False; it is considered to have measurement bias.

82
Q

An employment practice that results in members of a protected class being negatively affected at a higher rate than members of the majority class.

A. Adverse Impact
B. Predictive Bias
C. Single-Group Validity

A

A. Adverse Impact

83
Q

Usually determined by the four-fifths rule.

A. Adverse Impact
B. Predictive Bias
C. Single-Group Validity

A

A. Adverse Impact

84
Q

Occurs when the selection rate for one group is less than 80% of the rate for the highest-scoring group.

A. Adverse Impact
B. Predictive Bias
C. Single-Group Validity

A

A. Adverse Impact
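
The four-fifths rule is easy to check in code; the group names and applicant counts below are hypothetical:

```python
# Illustrative sketch of the four-fifths (80%) rule for adverse impact.
# Group names and counts are hypothetical.
hired   = {"group_a": 40, "group_b": 12}
applied = {"group_a": 100, "group_b": 50}

rates = {g: hired[g] / applied[g] for g in hired}  # selection rate per group
highest = max(rates.values())
# Adverse impact is indicated if any group's rate falls below 80% of the
# highest group's rate.
adverse_impact = any(rate < 0.8 * highest for rate in rates.values())
print(rates, adverse_impact)
```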
85
Q

Situations in which the predicted level of job success falsely favors one group over another.

A. Adverse Impact
B. Predictive Bias
C. Single-Group Validity

A

B. Predictive Bias

86
Q

The test will significantly predict performance for one group and not others.

A. Adverse Impact
B. Predictive Bias
C. Single-Group Validity

A

C. Single-Group Validity

87
Q

Very rare, and usually the result of small sample sizes and other methodological problems.

A. Adverse Impact
B. Predictive Bias
C. Single-Group Validity

A

C. Single-Group Validity

88
Q

The test does not exhibit single-group validity and passes this fairness hurdle.

A. Both correlations are significant
B. Only one of the correlations is significant

A

A. Both correlations are significant

89
Q

The test is considered fair for only that one group.

A. Both correlations are significant
B. Only one of the correlations is significant

A

B. Only one of the correlations is significant
90
Q

A test is valid for two groups but more valid for one than for the other.

A. Differential Validity
B. 1991 Civil Rights Act
C. Multiple Regression

A

A. Differential Validity

91
Q

Usually found in occupations dominated by a single sex; tests are most valid for the dominant sex, and the tests overpredict minority performance.

A. Differential Validity
B. 1991 Civil Rights Act
C. Multiple Regression

A

A. Differential Validity

92
Q

Prohibits score adjustments based on race or gender.

A. Differential Validity
B. 1991 Civil Rights Act
C. Multiple Regression

A

B. 1991 Civil Rights Act

93
Q

A statistical procedure that weights each test score according to how well it predicts the criterion.

A. Differential Validity
B. 1991 Civil Rights Act
C. Multiple Regression

A

C. Multiple Regression

94
Q

Applicants are rank-ordered based on their test scores.

A. Unadjusted Top-Down Selection
B. Compensatory Approach
C. Rule of Three (or Five)
D. Multiple regression

A

A. Unadjusted Top-Down Selection

95
Q

Combines test scores when more than one criterion-valid test is used.

A. Differential Validity
B. 1991 Civil Rights Act
C. Multiple Regression

A

C. Multiple Regression

96
Q

Selection is made starting with the applicant with the highest score and moving downward until all openings have been filled.

A. Unadjusted Top-Down Selection
B. Compensatory Approach
C. Rule of Three (or Five)
D. Multiple regression

A

A. Unadjusted Top-Down Selection

97
Q

Assumes that when multiple test scores are used, a low score on one test can be compensated for by a high score on another.

A. Unadjusted Top-Down Selection
B. Compensatory Approach
C. Rule of Three (or Five)
D. Multiple regression

A

B. Compensatory Approach

98
Q

Used to determine whether a score on one test can compensate for a score on another.

A. Unadjusted Top-Down Selection
B. Compensatory Approach
C. Rule of Three (or Five)
D. Multiple regression

A

D. Multiple regression

99
Q

The names of the top three scorers are given to the person making the hiring decision, who can choose any of the three based on the immediate needs of the employer.

A. Unadjusted Top-Down Selection
B. Compensatory Approach
C. Rule of Three (or Five)
D. Multiple regression

A

C. Rule of Three (or Five)

100
Q

A variation on top-down selection that provides more choices and ensures that the person hired will be well qualified.

A. Unadjusted Top-Down Selection
B. Compensatory Approach
C. Rule of Three (or Five)
D. Multiple regression

A

C. Rule of Three (or Five)

101
Q

Often used in the public sector.

A. Unadjusted Top-Down Selection
B. Compensatory Approach
C. Rule of Three (or Five)
D. Multiple regression

A

C. Rule of Three (or Five)

102
Q

Its purpose is to reduce adverse impact and increase flexibility.

A. Passing Scores
B. Low test scores
C. Top-down selection
D. Banding

A

A. Passing Scores

103
Q

Associated with low performance on the job.

A. Passing Scores
B. Low test scores
C. Top-down selection
D. Banding

A

B. Low test scores

104
Q

Usually determined by experts reading each item on a test and providing an estimate of the percentage of minimally qualified employees who could answer the item correctly.

A. Passing Scores
B. Low test scores
C. Top-down selection
D. Banding

A

A. Passing Scores

105
Q

"Who will perform the best in the future?"

A. Passing Scores
B. Low test scores
C. Top-down selection
D. Banding

A

C. Top-down selection

106
Q

"Who will be able to perform at an acceptable level in the future?"

A. Passing Scores
B. Low test scores
C. Top-down selection
D. Banding

A

A. Passing Scores

107
Q

Applicants are administered all of the tests at one time; if they fail any of the tests, they are not considered further for employment.

A. Multiple-cutoff approach
B. Multiple-hurdle approach
C. Banding

A

A. Multiple-cutoff approach

108
Q

Costly; even if an applicant passes 3 out of 4 tests, he or she will not be hired.

A. Multiple-cutoff approach
B. Multiple-hurdle approach
C. Banding

A

A. Multiple-cutoff approach

109
Q

Used to reduce cost.

A. Multiple-cutoff approach
B. Multiple-hurdle approach
C. Banding

A

B. Multiple-hurdle approach

110
Q

Applicants are administered one test at a time; when an applicant fails a test, they are no longer eligible for further testing and consideration.

A. Multiple-cutoff approach
B. Multiple-hurdle approach
C. Banding

A

B. Multiple-hurdle approach

111
Q

A compromise between top-down hiring and passing scores that still allows flexibility.

A. Multiple-cutoff approach
B. Multiple-hurdle approach
C. Banding

A

C. Banding

112
Q

Takes into consideration the degree of error associated with any test score.

A. Multiple-cutoff approach
B. Multiple-hurdle approach
C. Banding

A

C. Banding
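
Banding uses the error in test scores via the standard error of measurement. One common variant, sketched here with hypothetical SD, reliability, and scores, treats scores within 1.96 × SEM × √2 of the top score as statistically equivalent:

```python
# Illustrative sketch of banding: scores within a band of the top score
# (based on the standard error of the difference) are treated as equivalent.
# The SD, reliability, and scores are all hypothetical.
import math

sd, reliability = 10.0, 0.90
sem = sd * math.sqrt(1 - reliability)  # standard error of measurement
band = 1.96 * sem * math.sqrt(2)       # 95% band on the difference of two scores

scores = [94, 91, 88, 83, 79]
top = max(scores)
in_band = [s for s in scores if s >= top - band]  # "equivalent" to the top scorer
print(round(band, 1), in_band)
```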