Flashcards in Test Construction Questions Deck (5)


1

##
Raising the cutoff score on a predictor test would have the effect of

A. increasing true positives

B. decreasing false positives

C. decreasing true negatives

D. decreasing false negatives

###
B. decreasing false positives

A simple way to answer this question is with reference to a chart such as the one displayed under the topic "Criterion-Related Validity" in the Test Construction section of your materials. If you look at this chart, you can see that increasing the predictor cutoff score (i.e., moving the vertical line to the right) decreases the number of false positives as well as true positives (you can also see that the number of both true and false negatives would be increased).

You can also think about this question more abstractly by coming up with an example. Imagine, for instance, that a general knowledge test is used as a predictor of job success. If the cutoff score on this test is raised, fewer people will score above this cutoff and, therefore, fewer people will be predicted to be successful. Another way of saying this is that fewer people will come up "positive" on this predictor. This applies to both true positives and false positives.
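The same point can be shown concretely. In the sketch below, the predictor and criterion scores and both cutoffs are invented for illustration; the tally shows that raising the predictor cutoff shrinks both true and false positives while inflating both kinds of negatives.

```python
# Hypothetical illustration: tally outcome categories at two predictor cutoffs.
# All scores and cutoff values below are invented for the example.

def classify(predictor, criterion, pred_cutoff, crit_cutoff):
    """Count true/false positives and negatives for a given predictor cutoff."""
    counts = {"TP": 0, "FP": 0, "TN": 0, "FN": 0}
    for p, c in zip(predictor, criterion):
        predicted_success = p >= pred_cutoff  # "positive" on the predictor
        actual_success = c >= crit_cutoff     # actually successful on the criterion
        if predicted_success and actual_success:
            counts["TP"] += 1
        elif predicted_success:
            counts["FP"] += 1
        elif actual_success:
            counts["FN"] += 1
        else:
            counts["TN"] += 1
    return counts

predictor = [55, 60, 62, 68, 70, 74, 80, 85, 90, 95]  # e.g., knowledge test
criterion = [40, 75, 50, 80, 45, 85, 90, 60, 88, 92]  # e.g., job performance

low = classify(predictor, criterion, pred_cutoff=60, crit_cutoff=70)
high = classify(predictor, criterion, pred_cutoff=80, crit_cutoff=70)
print(low)   # more positives of both kinds at the lower cutoff
print(high)  # fewer true AND false positives at the raised cutoff
```

Moving the cutoff up is equivalent to sliding the vertical line on the criterion-related validity chart to the right: everything to the left of the new line, correct or not, becomes a "negative."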

2

##
To determine two raters' level of agreement on a test, you would use:

A. Kappa coefficient

B. Discriminant validity

C. Percentage of agreement

D. Item response theory

###
A. Kappa coefficient

There are a number of ways to estimate interscorer reliability, but the most common involves calculating a correlation coefficient between the scores of two different raters. The Kappa coefficient is a measure of the agreement between two judges who each rate a set of objects using a nominal scale. Unlike simple percentage of agreement (response "C"), kappa corrects for the agreement that would be expected by chance alone.
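A short sketch of how Cohen's kappa is computed for two raters assigning nominal categories; the ratings below are invented for the example. Note that kappa comes out lower than the raw percentage of agreement because chance agreement is subtracted from both the numerator and denominator.

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa: chance-corrected agreement between two raters
    who each assign a nominal category to the same set of objects."""
    n = len(ratings_a)
    # Observed proportion of agreement.
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Expected chance agreement, from each rater's marginal proportions.
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in freq_a)
    return (observed - expected) / (1 - expected)

# Hypothetical ratings of 8 test protocols by two judges.
rater_1 = ["yes", "yes", "no", "no", "yes", "no", "yes", "no"]
rater_2 = ["yes", "no",  "no", "no", "yes", "no", "yes", "yes"]

print(cohens_kappa(rater_1, rater_2))  # raw agreement is 6/8 = .75; kappa is lower
```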

3

##
Form A is administered to a group of employees in Spring and then again in Fall. Using this method, what type of reliability is measured?

A. split-half

B. equivalence

C. stability

D. internal consistency

###
C. stability

Test-retest reliability, or the coefficient of stability, involves administering the same test to the same group on two occasions and then correlating the scores. Alternative forms reliability, or coefficient of equivalence (response “B”), consists of administering two alternate forms of a test to the same group and then correlating the scores. Internal consistency reliability (response “D”) utilizes a single test administration and involves obtaining correlations among individual test items. Split-half reliability (response “A”) is a method of determining internal consistency reliability.
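In practice, the coefficient of stability is simply the Pearson correlation between the two administrations. A minimal sketch, using invented Spring and Fall scores for six employees:

```python
import statistics

def pearson_r(x, y):
    """Pearson correlation between two lists of scores."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Hypothetical Form A scores for the same six employees on two occasions.
spring = [70, 75, 80, 85, 90, 95]
fall   = [72, 74, 82, 84, 91, 94]

print(pearson_r(spring, fall))  # high coefficient -> scores are stable over time
```

The same function would serve for alternate forms reliability; only what is correlated changes (Form A with Form B, rather than Form A with itself at a later time).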

4

##
Which statement is most true about validity?

A. Validity is never higher than the reliability coefficient.

B. Validity is never higher than the square of the reliability coefficient.

C. Validity is never higher than the square root of the reliability coefficient.

D. Validity is never higher than 1 minus the reliability coefficient.

###
C. Validity is never higher than the square root of the reliability coefficient.

A test's reliability sets an upper limit on its criterion-related validity. Specifically, a test's validity coefficient can never be higher than the square root of its reliability coefficient. In practice, a validity coefficient will never be that high, but, theoretically, that's the upper limit.
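A quick worked example of this ceiling (the reliability value is chosen for arithmetic convenience):

```python
def max_validity(reliability):
    """Theoretical upper bound on criterion-related validity,
    given the test's reliability coefficient."""
    return reliability ** 0.5

# A reliability of .81 caps the validity coefficient at .90.
print(max_validity(0.81))
```

Note the direction of the bound: because reliability coefficients fall between 0 and 1, the square root is *larger* than the coefficient itself, which is why response "B" (the square) would understate the limit.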

5