Utility Flashcards

1
Q

The practical value of testing to improve efficiency

A. Utility
B. Psychometric Soundness
C. Costs
D. Benefits

A

A. Utility

2
Q

The higher the criterion-related validity of test scores, the higher the utility of the test

A. Utility
B. Psychometric Soundness
C. Costs
D. Benefits

A

B. Psychometric Soundness

3
Q

One of the most basic elements of utility analysis

A. Utility
B. Psychometric Soundness
C. Costs
D. Benefits

A

C. Costs

4
Q

Weighed against the costs of administering, scoring, and interpreting the test

A. Utility
B. Psychometric Soundness
C. Costs
D. Benefits

A

D. Benefits

5
Q

An assumption is made that high scores on one attribute can “balance out” low scores on another attribute

A. Compensatory Model of Selection
B. Expectancy Data
C. Taylor-Russell Tables
D. Naylor-Shine Tables
E. Brogden-Cronbach-Gleser Formula

A

A. Compensatory Model of Selection

6
Q

The likelihood that a test taker will score within some interval of scores on a criterion measure

A. Compensatory Model of Selection
B. Expectancy Data
C. Taylor-Russell Tables
D. Naylor-Shine Tables
E. Brogden-Cronbach-Gleser Formula

A

B. Expectancy Data

7
Q

Requires the creation of a set of norms within which a test taker's score will fall

A. Compensatory Model of Selection
B. Expectancy Data
C. Taylor-Russell Tables
D. Naylor-Shine Tables
E. Brogden-Cronbach-Gleser Formula

A

B. Expectancy Data

8
Q

Provide an estimate of the percentage of employees hired by the use of a particular test who will be successful at their jobs

A. Compensatory Model of Selection
B. Expectancy Data
C. Taylor-Russell Tables
D. Naylor-Shine Tables
E. Brogden-Cronbach-Gleser Formula

A

C. Taylor-Russell Tables
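
Not part of the original card, but useful context under the standard definitions: entering a Taylor-Russell table requires three values, the test's validity coefficient $r_{xy}$, the base rate (the proportion of employees judged successful without use of the test), and the selection ratio,

$\text{selection ratio} = \frac{\text{number of applicants hired}}{\text{total number of applicants}}$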

9
Q

Help obtain the difference between the means of the selected and unselected groups to derive an index of what the test (or some other tool of assessment) is adding to already established procedures

A. Compensatory Model of Selection
B. Expectancy Data
C. Taylor-Russell Tables
D. Naylor-Shine Tables
E. Brogden-Cronbach-Gleser Formula

A

D. Naylor-Shine Tables

10
Q

The validity coefficient comes from concurrent validation procedures.

A. Compensatory Model of Selection
B. Expectancy Data
C. Taylor-Russell Tables
D. Naylor-Shine Tables
E. Brogden-Cronbach-Gleser Formula

A

D. Naylor-Shine Tables

11
Q

Many other variables may play a role in selection decisions, including applicants’ minority status, general physical or mental health, or drug use.

A. Compensatory Model of Selection
B. Expectancy Data
C. Taylor-Russell Tables
D. Naylor-Shine Tables
E. Brogden-Cronbach-Gleser Formula

A

D. Naylor-Shine Tables

12
Q

Used to calculate the dollar/peso amount of a utility gain resulting from the use of a particular selection instrument under specified conditions

A. Compensatory Model of Selection
B. Expectancy Data
C. Taylor-Russell Tables
D. Naylor-Shine Tables
E. Brogden-Cronbach-Gleser Formula

A

E. Brogden-Cronbach-Gleser Formula
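
For reference, a standard textbook statement of the formula (not part of the original card; notation follows common usage):

$\text{utility gain} = (N)(T)(r_{xy})(SD_y)(\bar{Z}_m) - (N)(C)$

where $N$ is the number of applicants selected, $T$ the average tenure in the position, $r_{xy}$ the validity coefficient, $SD_y$ the standard deviation of job performance in monetary units, $\bar{Z}_m$ the mean standardized test score of those selected, and $C$ the cost of testing one applicant.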

13
Q

Some utility models are based on the assumption that there will be a ready supply of viable applicants from which to choose and fill positions.

A. The Pool of Job Applicants
B. The Complexity of the Job
C. Cut-Off Score

A

A. The Pool of Job Applicants

14
Q

The same kinds of utility models are used for a variety of positions, yet the more complex the job, the more people differ in how well or how poorly they perform it

A. The Pool of Job Applicants
B. The Complexity of the Job
C. Cut-Off Score

A

B. The Complexity of the Job

15
Q

Reference point derived as a result of a judgment

A. The Pool of Job Applicants
B. The Complexity of the Job
C. Cut-Off Score

A

C. Cut-Off Score

16
Q

Used to divide a set of data into two or more classifications as basis for some actions to be taken or some inferences to be made

A. The Pool of Job Applicants
B. The Complexity of the Job
C. Cut-Off Score

A

C. Cut-Off Score

17
Q

Reference point that is set based on norm-related considerations rather than on the relationship of test scores to a criterion

A. Relative cut score
B. Fixed cut score
C. Multiple cut score
D. Multiple Hurdle

A

A. Relative cut score

18
Q

Minimum level of proficiency required to be included in a particular classification

A. Relative cut score
B. Fixed cut score
C. Multiple cut score
D. Multiple Hurdle

A

B. Fixed cut score

19
Q

Use of two or more cut scores with reference to one predictor for the purpose of categorizing test takers.

A. Relative cut score
B. Fixed cut score
C. Multiple cut score
D. Multiple Hurdle

A

C. Multiple cut score

20
Q

Multistage decision-making process wherein achievement of a particular cut score on one test is necessary in order to advance to the next stage of evaluation in a selection process

A. Relative cut score
B. Fixed cut score
C. Multiple cut score
D. Multiple Hurdle

A

D. Multiple Hurdle

21
Q

Classical Test Score Theory: The judgments of the experts are averaged to yield cut scores for the test.

A. Angoff Method
B. Known Groups Method
C. Item-Mapping Method
D. Bookmark Method
E. IRT-Based Method

A

A. Angoff Method
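
In symbols (a sketch, not part of the original card): if judge $j$ of $J$ judges estimates the probability $p_{ij}$ that a minimally competent test taker answers item $i$ of $I$ items correctly, the Angoff cut score is commonly taken as

$C = \frac{1}{J}\sum_{j=1}^{J}\sum_{i=1}^{I} p_{ij}$

that is, each judge's probability estimates are summed across items and the resulting sums are averaged across judges.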

22
Q

Classical Test Score Theory: Can be used for personnel selection based on traits, attributes, and abilities.

A. Angoff Method
B. Known Groups Method
C. Item-Mapping Method
D. Bookmark Method
E. IRT-Based Method

A

A. Angoff Method

23
Q

Classical Test Score Theory: Problems arise if there is disagreement between experts

A. Angoff Method
B. Known Groups Method
C. Item-Mapping Method
D. Bookmark Method
E. IRT-Based Method

A

A. Angoff Method

24
Q

Entails collection of data on the predictor of interest from groups known to possess, and not to possess, a trait, attribute, or ability of interest.

A. Angoff Method
B. Known Groups Method
C. Item-Mapping Method
D. Bookmark Method
E. IRT-Based Method

A

B. Known Groups Method

25
Q

Based on the analysis of data, a cut score is set on the test that best discriminates the groups’ test performance.

A. Angoff Method
B. Known Groups Method
C. Item-Mapping Method
D. Bookmark Method
E. IRT-Based Method

A

B. Known Groups Method

26
Q

There is no standard set of guidelines for choosing contrasting groups

A. Angoff Method
B. Known Groups Method
C. Item-Mapping Method
D. Bookmark Method
E. IRT-Based Method

A

B. Known Groups Method

27
Q

Each item is associated with a particular level of difficulty.

A. Angoff Method
B. Known Groups Method
C. Item-Mapping Method
D. Bookmark Method
E. IRT-Based Method

A

E. IRT-Based Method

28
Q

In order to “pass” the test, the test taker must answer items that are deemed to be above some minimum level of difficulty, which is determined by experts and serves as the cut score

A. Angoff Method
B. Known Groups Method
C. Item-Mapping Method
D. Bookmark Method
E. IRT-Based Method

A

E. IRT-Based Method

29
Q

Entails arrangement of items in a histogram with each column containing items deemed to be of equivalent value

A. Angoff Method
B. Known Groups Method
C. Item-Mapping Method
D. Bookmark Method
E. IRT-Based Method

A

C. Item-Mapping Method

30
Q

Trained judges are provided with sample items from each column and are asked whether or not a minimally competent individual would answer those items correctly

A. Angoff Method
B. Known Groups Method
C. Item-Mapping Method
D. Bookmark Method
E. IRT-Based Method

A

C. Item-Mapping Method

31
Q

Difficulty level is set as the cut score.

A. Angoff Method
B. Known Groups Method
C. Item-Mapping Method
D. Bookmark Method
E. IRT-Based Method

A

C. Item-Mapping Method

32
Q

Training of experts with regard to the minimal knowledge, skills, and/or abilities test takers should possess in order to pass

A. Angoff Method
B. Known Groups Method
C. Item-Mapping Method
D. Bookmark Method
E. IRT-Based Method

A

D. Bookmark Method

33
Q

Experts are given a book of items arranged in ascending order of difficulty and place a bookmark between the two items deemed to separate test takers who have, and have not, acquired the minimal knowledge, skills, and/or abilities required to pass

A. Angoff Method
B. Known Groups Method
C. Item-Mapping Method
D. Bookmark Method
E. IRT-Based Method

A

D. Bookmark Method

34
Q

R. L. Thorndike (1949) proposed a norm-referenced method called ______

A. Method of Predictive Yield
B. Discriminant Analysis

A

A. Method of Predictive Yield

35
Q

Took into account the number of positions to be filled, projections regarding the likelihood of offer acceptance, and the distribution of applicant scores

A. Method of Predictive Yield
B. Discriminant Analysis

A

A. Method of Predictive Yield

36
Q

A family of statistical techniques used to shed light on the relationship between identified variables (such as scores on a battery of tests) and two (or more) naturally occurring groups (such as persons judged to be successful at a job and persons judged unsuccessful at a job).

A. Method of Predictive Yield
B. Discriminant Analysis

A

B. Discriminant Analysis

37
Q

item-endorsement index

A. Index of Item Difficulty
B. Index of Item Discrimination
C. Index of Item Reliability
D. Index of Item Validity

A

A. Index of Item Difficulty

38
Q

in cognitive tests, a statistic indicating how many test takers responded correctly to an item

A. Index of Item Difficulty
B. Index of Item Discrimination
C. Index of Item Reliability
D. Index of Item Validity

A

A. Index of Item Difficulty
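
In symbols (not part of the original card): the item-difficulty index for item $i$ is

$p_i = \frac{\text{number of test takers answering item } i \text{ correctly}}{\text{total number of test takers}}$

so higher values of $p_i$ indicate easier items.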

39
Q

in personality tests, a statistic indicating how many test takers responded to an item in a particular direction.

A. Index of Item Difficulty
B. Index of Item Discrimination
C. Index of Item Reliability
D. Index of Item Validity

A

A. Index of Item Difficulty

40
Q

a statistic designed to indicate how adequately a test item discriminates between high and low scorers

A. Index of Item Difficulty
B. Index of Item Discrimination
C. Index of Item Reliability
D. Index of Item Validity

A

B. Index of Item Discrimination
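
One common form of this statistic (not part of the original card; assumes upper and lower scoring groups of equal size $n$, often the top and bottom 27%):

$d = \frac{U - L}{n}$

where $U$ and $L$ are the numbers of test takers in the upper and lower groups, respectively, who answered the item correctly.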

41
Q

provides an indication of the internal consistency of a test

A. Index of Item Difficulty
B. Index of Item Discrimination
C. Index of Item Reliability
D. Index of Item Validity

A

C. Index of Item Reliability

42
Q

is equal to the product of the item-score standard deviation (s) and the correlation (r) between the item score and the total test score

A. Index of Item Difficulty
B. Index of Item Discrimination
C. Index of Item Reliability
D. Index of Item Validity

A

C. Index of Item Reliability
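
In symbols, as the card states:

$\text{item-reliability index} = s_i \, r_{iT}$

where $s_i$ is the item-score standard deviation and $r_{iT}$ is the correlation between the item score and the total test score.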

43
Q

a statistic indicating the degree to which a test measures what it purports to measure

A. Index of Item Difficulty
B. Index of Item Discrimination
C. Index of Item Reliability
D. Index of Item Validity

A

D. Index of Item Validity
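
By analogy with the item-reliability index, the usual computation (an assumption based on standard treatments; the card itself gives only the definition) is

$\text{item-validity index} = s_i \, r_{iC}$

where $s_i$ is the item-score standard deviation and $r_{iC}$ is the correlation between the item score and the criterion score.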