Chapter 6: Validity Flashcards

1
Q

Validity

A

Used in conjunction with the meaningfulness of a test score; what the test score truly means; a judgment or estimate of how well a test measures what it purports to measure in a particular context

2
Q

Inference

A

Logical result or deduction

3
Q

Valid Test

A

The test has been shown to be valid for a particular use with a particular population of testtakers at a particular time; validity holds within reasonable boundaries of a contemplated usage

4
Q

Validation

A

Process of gathering and evaluating evidence about validity; both the test developer and the test user may play a role in the validation of a test for a specific purpose

5
Q

Local Validation Studies

A

May yield insights regarding a particular population of testtakers as compared to the norming sample described in a test; necessary when the test user plans to alter in some way the format, instructions, language, or content of the test

6
Q

How Validity is Conceptualized

A

Content Validity
Criterion-Related Validity
Construct Validity

7
Q

Trinitarian View of Validity

A

Views validity as comprising three categories (content, criterion-related, and construct validity); construct validity is the umbrella validity under which the other varieties fall

8
Q

Approaches to assessing validity

A

Content Validity
Criterion-related Validity
Construct Validity

9
Q

Approaches to Assessing Validity

A

Scrutinize the test’s content
Relate scores obtained on the test to other test scores or other measures
Execute a comprehensive analysis of:
How the scores on the test relate to other test scores and measures
How scores on the test can be understood within some theoretical framework for understanding the construct that the test was designed to measure

10
Q

Face Validity

A

Relates more to what a test appears to measure to the person being tested than to what the test actually measures; face validity is a judgment concerning how relevant the test items appear to be

11
Q

High Face Validity

A

A test has high face validity if, on the face of it, it appears to measure what it purports to measure

12
Q

Lack of Face Validity

A

Contributes to a lack of confidence in the perceived effectiveness of the test - with a consequential decrease in the testtaker’s cooperation or motivation to do his or her best

13
Q

Content Validity

A

Describes a judgment of how adequately a test samples behavior representative of the universe of behavior that the test was designed to sample

14
Q

Test Blueprint

A

Emerges from the structure of the evaluation; a plan regarding the types of information to be covered by the items, the number of items tapping each area of coverage, the organization of the items in the test, and so forth; represents the culmination of efforts to adequately sample the universe of content areas that conceivably could be sampled in such a test

15
Q

Lawshe Test

A

A method for gauging agreement among raters or judges regarding how essential a particular item is

16
Q

C.H. Lawshe

A

Proposed that each rater respond to the following question for each item: Is the skill or knowledge measured by this item:
Essential
Useful but not essential
Not necessary

17
Q

Content Validity Ratio

A

Negative CVR - when fewer than half the panelists indicate essential, the CVR is negative
Zero CVR - when exactly half the panelists indicate essential, the CVR is zero
Positive CVR - when more than half but not all the panelists indicate essential, the CVR ranges between .00 and .99
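
The three cases above follow from Lawshe's formula, CVR = (ne − N/2) / (N/2), where ne is the number of panelists rating the item essential and N is the total number of panelists. A minimal sketch (the function name is my own):

```python
def content_validity_ratio(n_essential, n_panelists):
    """Lawshe's content validity ratio: (ne - N/2) / (N/2)."""
    half = n_panelists / 2
    return (n_essential - half) / half

# With 10 panelists: 3 "essential" ratings -> -0.4 (negative CVR),
# 5 -> 0.0 (zero CVR), 8 -> 0.6 (positive CVR), 10 -> 1.0
```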

18
Q

Criterion-Related Validity

A

Judgment of how adequately a test score can be used to infer an individual’s most probable standing on some measure of interest; the measure of interest being the criterion

19
Q

Types of Validity Evidence under Criterion-Related Validity

A

Concurrent Validity

Predictive Validity

20
Q

Concurrent Validity

A

An index of the degree to which a test score is related to some criterion measure obtained at the same time (concurrently)

21
Q

Predictive Validity

A

An index of the degree to which a test score predicts some criterion measure

22
Q

Characteristics of a Criterion

A

Relevant
Valid
Uncontaminated

23
Q

Criterion Contamination

A

Term applied to a criterion measure that has been based, at least in part, on predictor measures

24
Q

Concurrent Validity

A

When test scores are obtained at about the same time that the criterion measures are obtained, measures of the relationship between the test scores and the criterion provide evidence of concurrent validity

25
Q

Predictive Validity of a Test

A

Indicated by measures of the relationship between test scores and a criterion measure obtained at a future time; how accurately scores on the test predict some criterion measure

26
Q

Criterion-Related Validity Based on

A

Validity Coefficient

Expectancy Data

27
Q

Validity Coefficient

A

Correlation coefficient that provides a measure of the relationship between test scores and scores on the criterion measure; affected by restriction or inflation of range; should be high enough to result in the identification and differentiation of testtakers with respect to target attributes

28
Q

Pearson Correlation Coefficient

A

Used to determine the validity between two measures
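
As a sketch of how such a validity coefficient might be computed, assuming test scores and criterion scores are paired by testtaker (the data below are made up for illustration):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical test scores paired with criterion measures (e.g., GPA)
test_scores = [10, 12, 14, 16, 18]
criterion = [2.1, 2.4, 3.0, 3.1, 3.8]
validity_coefficient = pearson_r(test_scores, criterion)
```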

29
Q

Restriction

A

Whether the range of scores employed is appropriate to the objective of the correlational analysis
Attrition in the number of subjects may occur over the course of the study, and the validity coefficient may be adversely affected

30
Q

Incremental Validity

A

The degree to which an additional predictor explains something about the criterion measure that is not explained by predictors already in use
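
One common way to quantify incremental validity is the gain in explained criterion variance (R²) when the new predictor is added to a regression that already contains the existing predictors. A sketch using NumPy with simulated data (all names and values are illustrative):

```python
import numpy as np

def r_squared(X, y):
    """Criterion variance explained by an intercept-plus-predictors least-squares fit."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

rng = np.random.default_rng(0)
p1 = rng.normal(size=200)                             # predictor already in use
p2 = rng.normal(size=200)                             # candidate additional predictor
y = p1 + 0.5 * p2 + rng.normal(scale=0.5, size=200)   # simulated criterion

r2_old = r_squared(p1.reshape(-1, 1), y)
r2_new = r_squared(np.column_stack([p1, p2]), y)
incremental = r2_new - r2_old   # criterion variance explained only by the new predictor
```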

31
Q

Expectancy Data

A

Provide information that can be used in evaluating the criterion-related validity of a test

32
Q

Expectancy Table

A

Shows the percentage of people within specified test score intervals who subsequently were placed in various categories of the criterion; may be created from a scattergram according to the steps listed

33
Q

Taylor-Russell Tables

A

Provide an estimate of the extent to which inclusion of a particular test in the selection system will actually improve selection; determining the increase over current procedures

34
Q

Selection Ratio

A

Numerical value that reflects the relationship between the number of people to be hired and the number of people available to be hired

35
Q

Base Rate

A

Refers to the percentage of people hired under the existing system for a particular position

36
Q

Steps to Create an Expectancy Table

A

Draw a scatterplot such that each point in the plot represents a particular test score-criterion score combination; criterion on the Y-axis
Draw grid lines in such a way as to summarize the number of people who scored within a particular interval
Count the number of points in each cell (n)
Count the total number of points within each vertical interval; this number represents the number of people scoring within a particular test-score interval
Convert each cell frequency to a percentage; this represents the percentage of people obtaining a particular test score-criterion score combination; write percentages in the cells; enclose the percentages in parentheses to distinguish them from the frequencies
On a separate sheet, create table headings and subheadings and copy the percentages into the appropriate table cells
If desired, write the number and percentage of cases per test-score interval; if the number of cases in any one cell is small, it is more likely to fluctuate in subsequent charts; if cell sizes are small, the user could create fewer cells or accumulate data over several years
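
The counting-and-percentaging steps above can be sketched in code: the function below bins test scores into intervals and reports, for each interval, the percentage of people who fell into each criterion category (names and data are illustrative):

```python
from collections import Counter

def expectancy_table(scores, outcomes, bins):
    """For each half-open test-score interval [low, high), the percentage of
    people whose criterion outcome fell into each category."""
    table = {}
    for low, high in bins:
        rows = [o for s, o in zip(scores, outcomes) if low <= s < high]
        n = len(rows)
        table[(low, high)] = (
            {k: round(100.0 * v / n, 1) for k, v in Counter(rows).items()} if n else {}
        )
    return table

# Hypothetical test scores with a pass/fail criterion outcome per person
scores = [55, 62, 71, 78, 84, 90, 93]
outcomes = ["fail", "fail", "pass", "pass", "pass", "pass", "pass"]
table = expectancy_table(scores, outcomes, [(50, 70), (70, 90), (90, 101)])
```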

37
Q

Naylor-Shine Table

A

Entails obtaining the difference between the means of the selected and unselected groups to derive an index of what the test is adding to already established procedures; determines the increase in average score on some criterion measure

38
Q

Utility of Tests

A

Usefulness or practical value of tests

39
Q

Cronbach and Gleser

A

Developed the Decision Theory of Tests

40
Q

Decision Theory of Test

A

Classification of decision problems
Various selection strategies ranging from single-stage processes to sequential analyses
Quantitative analysis of the relationship between test utility, the selection ratio, the cost of the testing program, and expected value of the outcome
Recommendation that in some instances job requirements be tailored to the applicant’s ability instead of the other way around

41
Q

Adaptive treatment

A

Tailoring job requirements to the applicant’s ability instead of the other way around

42
Q

Base Rate

A

Extent to which a particular trait, behavior, characteristic, or attribute exists in the population (expressed as a proportion)

43
Q

Hit Rate

A

Defined as the proportion of people a test accurately identifies as possessing or exhibiting a particular trait, behavior, characteristic, or attribute

44
Q

Miss Rate

A

The proportion of people the test fails to identify as having, or not having a particular characteristic or attribute

45
Q

Miss

A

Amounts to an inaccurate Prediction

46
Q

Categories of Misses

A

False Positive

False Negative

47
Q

False Positive

A

Miss wherein the test predicted that the testtaker did possess the particular characteristic or attribute being measured when in fact the testtaker did not

48
Q

False Negative

A

Miss wherein the test predicted the testtaker did not possess the particular characteristic or attribute being measured when the testtaker actually did
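
Hit rate, miss rate, and the two categories of misses can be illustrated with a small tally over predicted-versus-actual classifications (a sketch; the function name is my own):

```python
def classification_rates(predicted, actual):
    """Tally hits and misses for a yes/no test decision.
    predicted/actual: booleans (does the person possess the attribute?)."""
    n = len(actual)
    hits = sum(p == a for p, a in zip(predicted, actual))
    false_pos = sum(p and not a for p, a in zip(predicted, actual))  # predicted yes, actually no
    false_neg = sum(a and not p for p, a in zip(predicted, actual))  # predicted no, actually yes
    return {
        "hit_rate": hits / n,
        "miss_rate": (false_pos + false_neg) / n,
        "false_positives": false_pos,
        "false_negatives": false_neg,
    }

rates = classification_rates(
    predicted=[True, True, False, False, True],
    actual=[True, False, False, True, True],
)
# 3 hits, 1 false positive, 1 false negative out of 5
```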

61
Q

Construct Validity

A

Judgment about the appropriateness of inferences drawn from test scores regarding individual standings on a variable called construct

62
Q

Construct

A

An informed, scientific idea developed or hypothesized to describe or explain behavior; unobservable, presupposed (underlying) traits that a test developer may invoke to describe test behavior or criterion performance

63
Q

Evidence of Construct Validity

A

Evidence of Homogeneity
Evidence of changes with age
Evidence of Pretest-Posttest Changes
Evidence of Distinct Groups
Convergent Evidence
Discriminant Evidence
Factor Analysis

64
Q

Homogeneity

A

Refers to how uniform a test is in measuring a single concept

65
Q

How Homogeneity Can be Increased

A

Use of Pearson r to correlate average subtest scores with an average total test score
Reconstruction or Elimination of subtests that in the test developer’s judgment do not correlate very well with the test as a whole
For dichotomously scored tests: eliminating items that do not show significant correlation coefficients with total test scores
For multipoint scaled tests: items that do not show significant Spearman rank-order correlation coefficients are eliminated
Coefficient Alpha: used in estimating homogeneity of a test composed of multiple choice items
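
The item-level screening described above can be sketched as an item-total correlation check for dichotomously scored items: each item's 0/1 responses are correlated with total test scores, and items with weak correlations become candidates for elimination (the data below are made up; note that each item also contributes to the total, which inflates these correlations somewhat):

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(var_x * var_y)

def item_total_correlations(responses):
    """responses: one row per testtaker, one 0/1 column per item."""
    totals = [sum(row) for row in responses]
    n_items = len(responses[0])
    return [
        pearson_r([row[i] for row in responses], totals) for i in range(n_items)
    ]

# Made-up responses from six testtakers on a four-item test
responses = [
    [1, 1, 1, 0],
    [1, 1, 1, 1],
    [1, 1, 0, 0],
    [0, 1, 0, 1],
    [0, 0, 0, 1],
    [0, 0, 0, 1],
]
corrs = item_total_correlations(responses)
```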

66
Q

Item Analysis Procedures

A

Employed in ensuring test homogeneity; one item analysis procedure focuses on the relationship between testtakers’ scores on individual items and their score on the entire test

67
Q

Evidence of Changes with Age

A

Tests should reflect progressive changes for constructs that could be expected to change over time

68
Q

Evidence of Pretest-Posttest Changes

A

Evidence that test scores change as a result of some experience between a pretest and posttest can be evidence of construct validity; any intervening life experience could be predicted to yield changes in score from pretest to posttest

69
Q

Method of Contrasted Groups

A

Demonstrating that scores on the test vary in a predictable way as a function of membership in some group; If a test is a valid measure of a particular construct, then test scores from groups of people who would be presumed to differ with respect to that construct should have correspondingly different test scores

70
Q

Convergent Evidence

A

Comes from correlations with tests purporting to measure an identical construct and from correlations with measures purporting to measure related constructs

71
Q

Discriminant Evidence

A

When a validity coefficient shows little relationship between test scores and/or other variables with which scores on the test being construct-validated should not theoretically be correlated

72
Q

Multitrait-Multimethod Matrix

A

Experimental technique that measures both convergent and discriminant validity evidence; matrix or table that results from correlating variables (traits) within and between methods; values for any number of traits as obtained by various methods are inserted into the table, and the resulting matrix of correlations provides insight with respect to both the convergent and the discriminant validity of the methods used

73
Q

Multitrait

A

Two or more traits

74
Q

Multimethod

A

Two or more methods

75
Q

Factor Analysis

A

Shorthand term for a class of mathematical procedures designed to identify factors or specific variables that are typically attributes, characteristics, or dimensions on which people may differ; employed as a data reduction method in which several sets of scores and the correlations between them are analyzed; identifies the factor or factors in common between test scores on subscales within a particular test or the factors in common between scores on a series of tests
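
A toy illustration of the data-reduction idea: eigendecomposition of a correlation matrix for four hypothetical subtests, retaining factors by the Kaiser criterion (eigenvalues greater than 1). This is a simplified principal-components-style sketch, not a full factor-analysis workflow:

```python
import numpy as np

# Hypothetical correlation matrix for four subtests: subtests 1-2 and 3-4
# correlate strongly within pairs, weakly across pairs -> two common factors
R = np.array([
    [1.0, 0.8, 0.1, 0.1],
    [0.8, 1.0, 0.1, 0.1],
    [0.1, 0.1, 1.0, 0.8],
    [0.1, 0.1, 0.8, 1.0],
])

eigenvalues = np.linalg.eigvalsh(R)[::-1]   # largest first
n_factors = int((eigenvalues > 1.0).sum())  # Kaiser criterion: retain eigenvalues > 1
```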

76
Q

Exploratory Factor Analysis

A

Entails estimating or extracting factors, deciding how many factors to retain, and rotating factors to an interpretable orientation

77
Q

Confirmatory Factor Analysis

A

A factor structure is explicitly hypothesized and is tested for its fit with the observed covariance structure of the measured variables

78
Q

Factor Loading

A

Each test is thought of as a vehicle carrying a certain amount of one or more abilities; conveys information about the extent to which the factor determines the test score or scores

79
Q

Bias

A

Factor inherent in a test that systematically prevents accurate, impartial measurement

80
Q

Intercept Bias

A

When a test systematically underpredicts or overpredicts the performance of members of a particular group with respect to a criterion; derived from the point where the regression line intersects the Y-axis

81
Q

Slope Bias

A

When a test systematically yields significantly different validity coefficients for members of different groups
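
Intercept and slope bias can be checked by fitting the criterion-on-test regression separately for each group and comparing the fitted lines. A sketch with noise-free, made-up data in which the groups share a slope but differ in intercept:

```python
import numpy as np

def fit_line(test_scores, criterion):
    """Least-squares line for criterion regressed on test score."""
    slope, intercept = np.polyfit(test_scores, criterion, 1)
    return slope, intercept

x = np.linspace(0.0, 10.0, 50)
criterion_a = 2.0 * x + 1.0   # group A criterion performance
criterion_b = 2.0 * x + 4.0   # group B: same slope, higher intercept

slope_a, intercept_a = fit_line(x, criterion_a)
slope_b, intercept_b = fit_line(x, criterion_b)

# Equal slopes with unequal intercepts: the test under- or overpredicts one
# group's criterion by a constant amount -> intercept bias, not slope bias
intercept_gap = intercept_b - intercept_a
```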

82
Q

Rating

A

Numerical or verbal judgment (or both) that places a person or an attribute along a continuum identified by a scale of numerical or word descriptors

83
Q

Rating Scale

A

Scale of numerical or word descriptors

84
Q

Rating Error

A

Judgment resulting from the intentional or unintentional misuse of a rating scale

85
Q

Leniency/Generosity Error

A

Error in rating that arises from the tendency on the part of the rater to be lenient in scoring, marking, and/or grading

86
Q

Severity Error

A

Opposite of Leniency/Generosity Error; when tests are scored very critically by the scorer

87
Q

Central Tendency Error

A

The rater exhibits a general and systematic reluctance to giving ratings at either the positive or negative extreme; all the rater’s ratings would tend to cluster in the middle of the rating continuum

88
Q

Restriction-of-Range Errors

A

(Central Tendency, Leniency, Severity Errors) overcome through the use of Rankings

89
Q

Rankings

A

A procedure that requires the rater to measure individuals against one another instead of against an absolute scale; By using rankings, the rater is forced to select first, second, third choices, etc.

90
Q

Halo Effect

A

Describes the fact that, for some raters, some ratees can do no wrong; a tendency to give a particular ratee a higher rating than he or she objectively deserves because of the rater’s failure to discriminate among conceptually distinct and potentially independent aspects of a ratee’s behavior

91
Q

Fairness

A

The extent to which a test is used in an impartial, just, and equitable way