Chapter 7: Utility Flashcards

1
Q

The usefulness or practical value of testing to improve efficiency

A

utility

2
Q

factors affecting utility

A

psychometric soundness: the higher the criterion-related validity of the test scores, the higher the utility of the test

3
Q

T or F: valid tests are not always useful tests

A

true

4
Q

2 factors affecting utility

A
  • cost
  • benefit
5
Q

One of the most basic elements of utility analysis is the financial cost associated with a test

A

cost

6
Q

The benefits of testing should be weighed against the costs of administering, scoring, and interpreting the test

A

benefits

7
Q

A family of techniques that entail a cost–benefit analysis designed to yield information relevant to a decision about the usefulness and/or practical value of a tool of assessment

A

utility analysis

8
Q

endpoint of a utility analysis

A

yields an educated decision as to which of several alternative courses of action is optimal (in terms of costs and benefits)

9
Q

An assumption is made that high scores on one attribute can “balance out” or compensate for low scores on another attribute

A

compensatory model of selection
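The idea behind a compensatory model can be sketched as a weighted composite score; the weights and the cutoff of 70 below are hypothetical, for illustration only:

```python
# Compensatory selection: a weighted composite score, so a high score on
# one predictor can "balance out" a low score on another.
# Weights and the cutoff of 70 are hypothetical, for illustration only.

def composite_score(scores, weights):
    """Weighted sum of predictor scores."""
    return sum(w * s for s, w in zip(scores, weights))

weights = [0.5, 0.3, 0.2]  # hypothetical importance weights

strong_first = composite_score([95, 50, 60], weights)  # weak 2nd predictor
balanced = composite_score([70, 70, 70], weights)

# Both applicants clear a hypothetical composite cutoff of 70, even though
# the first scored only 50 on the second predictor.
print(strong_first, balanced)
```

A high score of 95 on the first attribute compensates for the 50 on the second, which is exactly the trade-off a non-compensatory (multiple cut score) model would forbid.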

10
Q

The likelihood that a test taker will score within some interval of scores on a criterion measure

A

expectancy data

11
Q

Provide an estimate of the percentage of employees hired by the use of a particular test who will be successful at their jobs

A

Taylor-Russell tables

12
Q

different combinations of three variables in Taylor-Russell tables

A
  • test’s validity
  • selection ratio used
  • base rate
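The three table inputs can be made concrete with a small sketch; all of the counts below are hypothetical, for illustration only:

```python
# The three quantities used to enter a Taylor-Russell table.
# All counts are hypothetical, for illustration only.

validity = 0.40  # criterion-related validity of the test scores

applicants = 200
openings = 20
selection_ratio = openings / applicants  # proportion of applicants hired

successful_without_test = 60   # employees judged successful...
hired_without_test = 100       # ...among those selected without the test
base_rate = successful_without_test / hired_without_test

# With these three values, the table lookup yields the expected proportion
# of test-selected hires who will be judged successful on the job.
print(validity, selection_ratio, base_rate)
```

The table itself is a published lookup, not a formula: the row/column for a given validity, selection ratio, and base rate gives the expected success rate among those selected with the test.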
13
Q

help obtain the difference between the means of the selected and unselected groups to derive an index of what the test (or some other tool of assessment) is adding to already established procedures

A

Naylor-Shine tables

14
Q

T or F: For both Taylor-Russell and Naylor-Shine tables, the validity coefficient comes from concurrent validation procedures

A

true

15
Q

used to calculate the dollar/peso amount of a utility gain resulting from the use of a particular selection instrument under specified conditions

A

Brogden-Cronbach-Gleser formula
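The formula's general shape, in one commonly cited form, is utility gain = (N)(T)(r_xy)(SD_y)(Z_m) minus the cost of testing. All of the figures below are hypothetical, for illustration only:

```python
# Brogden-Cronbach-Gleser utility estimate, in one commonly cited form:
#   utility gain = (N)(T)(r_xy)(SD_y)(Z_m) - cost of testing
# Every number below is hypothetical, for illustration only.

n_hired = 10           # N: number of applicants selected
tenure_years = 2       # T: expected length of service
validity = 0.40        # r_xy: criterion-related validity of the test
sd_dollars = 10_000    # SD_y: std. dev. of job performance in dollar terms
mean_z_selected = 1.0  # Z_m: mean standardized test score of those hired

n_tested = 50          # everyone who took the test
cost_per_applicant = 100

utility_gain = (n_hired * tenure_years * validity * sd_dollars
                * mean_z_selected) - (n_tested * cost_per_applicant)
print(utility_gain)  # 75000.0
```

In practice the hardest input to pin down is SD_y, the dollar value of one standard deviation of job performance, which usually has to be estimated judgmentally.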

16
Q

practical considerations in utility analysis

A
  • the pool of job applicants
  • the complexity of the job
  • cut off score
17
Q

Some utility models are based on the assumption that there will be a ready supply of viable applicants from which to choose and fill positions

A

the pool of job applicants

18
Q

The same utility models are applied to a wide variety of positions, yet the more complex the job, the larger the performance difference between those who perform well and those who perform poorly

A

the complexity of the job

19
Q

reference point, usually numerical, derived as a result of a judgment and used to divide a set of data into two or more classifications

A

cut-off score

20
Q

types of cut-off score

A
  • relative cut score
  • fixed cut score
  • multiple cut score
  • multiple hurdle
21
Q

reference point that is set based on norm-related considerations rather than on the relationship of test scores to a criterion

a. relative cut score
b. fixed cut score
c. multiple cut score
d. multiple hurdle

A

a. relative cut score

22
Q

minimum level of proficiency required to be included in a particular classification

a. relative cut score
b. fixed cut score
c. multiple cut score
d. multiple hurdle

A

b. fixed cut score

23
Q

use of two or more cut scores with reference to one predictor for the purpose of categorizing test takers

a. relative cut score
b. fixed cut score
c. multiple cut score
d. multiple hurdle

A

c. multiple cut score

24
Q

multistage decision-making process wherein achieving a cut score on one test is necessary in order to advance to the next stage of evaluation in the selection process

a. relative cut score
b. fixed cut score
c. multiple cut score
d. multiple hurdle

A

d. multiple hurdle
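The multiple-hurdle process can be sketched as a sequence of per-stage cut scores; the stage names and cut scores below are hypothetical, for illustration only:

```python
# Multiple-hurdle selection: each stage has its own cut score, and a
# candidate must pass stage k to advance to stage k+1.
# Stage names and cut scores are hypothetical, for illustration only.

HURDLES = [
    ("screening test", 60),
    ("work sample", 70),
    ("panel interview", 75),
]

def first_failed_stage(stage_scores):
    """Return the name of the first failed stage, or None if all passed."""
    for (stage, cut), score in zip(HURDLES, stage_scores):
        if score < cut:
            return stage  # eliminated here; later stages are never reached
    return None

print(first_failed_stage([80, 65, 90]))  # eliminated at the work sample
print(first_failed_stage([80, 75, 90]))  # clears every hurdle
```

Unlike the compensatory model, a strong later score (the 90 on the interview) cannot rescue a candidate who falls below an earlier hurdle.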

25
Q

cut-score-setting methods under classical test score theory

  • Angoff method
  • known groups method
  • item mapping method
  • bookmark method
A
  • Angoff method
  • known groups method
26
Q

cut-score-setting methods under the IRT-based approach

  • Angoff method
  • known groups method
  • item mapping method
  • bookmark method
A
  • item mapping method
  • bookmark method
27
Q

methods for setting cut scores

A
  • classical test score theory
  • IRT-based method
  • method of predictive yield
  • discriminant analysis
28
Q

The judgments of the experts are averaged to yield cut scores for the test

A

Angoff method
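The averaging step can be shown in miniature: each expert estimates, per item, the probability that a minimally competent test taker answers correctly, and the experts' judged cut scores are averaged. All probability judgments below are hypothetical:

```python
# Angoff method in miniature: each expert estimates, per item, the
# probability that a minimally competent test taker answers correctly.
# An expert's sum across items is that expert's judged cut score; the
# test's cut score is the average across experts.
# All probability judgments are hypothetical, for illustration only.

expert_judgments = [
    [0.9, 0.7, 0.6, 0.8],  # expert 1, one probability per item
    [0.8, 0.6, 0.5, 0.9],  # expert 2
    [1.0, 0.8, 0.7, 0.7],  # expert 3
]

per_expert_cuts = [sum(probs) for probs in expert_judgments]
cut_score = sum(per_expert_cuts) / len(per_expert_cuts)
print(cut_score)  # expected raw score of a minimally competent test taker
```

The method's noted weakness shows up here directly: if the per-expert sums diverge widely, a simple average papers over real disagreement.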

29
Q

methods of setting cut scores: can be used for personnel selection based on traits, attributes, and abilities

A

Angoff method

30
Q

problem with angoff method

A

problems arise if there is disagreement between experts

31
Q

methods of setting cut scores: Entails collection of data on the predictor of interest from groups known to possess, and not to possess, a trait, attribute, or ability of interest

A

known groups method

32
Q

problem with known groups method

A

there is no standard set of guidelines for choosing contrasting groups

33
Q

In an IRT framework, each item is associated with a particular level of difficulty; in order to “pass” the test, the test taker must answer items that are deemed to be above some minimum level of difficulty, which is determined by experts and serves as the cut score

A

IRT-based methods

34
Q

entails arrangement of items in a histogram with each column containing items deemed to be of equivalent value

a. item-mapping method
b. bookmark method

A

a. item-mapping method

35
Q

trained judges are provided with sample items from each column and are asked whether or not a minimally competent individual would answer those items correctly

a. item-mapping method
b. bookmark method

A

a. item-mapping method

36
Q

difficulty level is set as the cut score

a. item-mapping method
b. bookmark method

A

a. item-mapping method

37
Q

training of experts with regard to the minimal knowledge, skills and/or abilities test takers should possess in order to pass

a. item-mapping method
b. bookmark method

A

b. bookmark method

38
Q

experts are given a book of items arranged in ascending order of difficulty

a. item-mapping method
b. bookmark method

A

b. bookmark method

39
Q

experts place a bookmark between 2 items deemed to separate test takers who have acquired minimal knowledge, etc.

a. item-mapping method
b. bookmark method

A

b. bookmark method

40
Q

methods of setting cut scores: takes into account the number of positions to be filled, projections regarding the likelihood of offer acceptance, and the distribution of applicant scores

A

method of predictive yield

41
Q

methods of setting cut scores: A family of statistical techniques used to shed light on the relationship between identified variables (such as scores on a battery of tests) and two (or more) naturally occurring groups (such as persons judged to be successful at a job and persons judged unsuccessful at a job)

A

discriminant analysis

42
Q

5 item-analysis concepts

A
  • index of item difficulty
  • index of item discrimination
  • index of item reliability
  • index of item validity
  • spiral omnibus format
43
Q

item-endorsement index; (cognitive tests) a statistic indicating how many test takers responded correctly to an item; (personality tests) a statistic indicating how many test takers responded to an item in a particular direction

a. index of item difficulty
b. index of item discrimination
c. index of item reliability
d. index of item validity
e. spiral omnibus format

A

a. index of item difficulty
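The index is just the proportion of test takers who got the item right (or endorsed the keyed direction); the responses below are hypothetical:

```python
# Item-difficulty index p: the proportion of test takers who answered the
# item correctly (for personality tests, who endorsed the keyed direction).
# Responses are hypothetical: 1 = correct, 0 = incorrect.

responses = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]

p = sum(responses) / len(responses)
print(p)  # 0.7 -> a fairly easy item
```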

44
Q

a statistic designed to indicate how adequately a test item discriminates between high and low scorers

a. index of item difficulty
b. index of item discrimination
c. index of item reliability
d. index of item validity
e. spiral omnibus format

A

b. index of item discrimination
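One common version of this statistic is d = (U − L) / n, comparing the item's pass rates in the highest- and lowest-scoring groups on the whole test; the counts below are hypothetical:

```python
# Item-discrimination index d = (U - L) / n, where U and L are the numbers
# of high and low total-scorers who answered this item correctly, and n is
# the size of each group. All counts are hypothetical, for illustration.

upper_correct = 24  # of the top-scoring group, how many passed the item
lower_correct = 9   # of the bottom-scoring group, how many passed
group_size = 27     # e.g. the upper and lower 27% of 100 test takers

d = (upper_correct - lower_correct) / group_size
print(round(d, 2))  # 0.56 -> the item separates high from low scorers well
```

A d near 0 means the item tells high and low scorers apart no better than chance, and a negative d flags an item that low scorers pass more often than high scorers.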

45
Q

provides an indication of the internal consistency of a test; is equal to the product of the item-score standard deviation (s) and the correlation (r) between the item score and the total test score

a. index of item difficulty
b. index of item discrimination
c. index of item reliability
d. index of item validity
e. spiral omnibus format

A

c. index of item reliability

46
Q

statistic indicating the degree to which a test measures what it purports to measure; the higher the item-validity index, the greater the test’s criterion-related validity

a. index of item difficulty
b. index of item discrimination
c. index of item reliability
d. index of item validity
e. spiral omnibus format

A

d. index of item validity