Chapter 6: Validity Flashcards

1
Q

Validity

A

Term used in conjunction with the meaningfulness of a test score; what the test score truly means; a judgment or estimate of how well a test measures what it purports to measure in a particular context

2
Q

Inference

A

Logical result or deduction

3
Q

Valid Test

A

The test has been shown to be valid for a particular use with a particular population of testtakers at a particular time; validity holds within reasonable boundaries of a contemplated usage

4
Q

Validation

A

Process of gathering and evaluating evidence about validity; both the test developer and the test user may play a role in the validation of a test for a specific purpose

5
Q

Local Validation Studies

A

May yield insights regarding a particular population of testtakers as compared to the norming sample described in a test manual; necessary when the test user plans to alter in some way the format, instructions, language, or content of the test

6
Q

How Validity is Conceptualized

A

Content Validity
Criterion-Related Validity
Construct Validity

7
Q

Trinitarian View of Validity

A

Construct validity is the umbrella validity, under which content validity and criterion-related validity fall

8
Q

Approaches to assessing validity

A

Content Validity
Criterion-related Validity
Construct Validity

9
Q

Approaches to Assessing Validity

A

Scrutinize the test’s content
Relate scores obtained on the test to other test scores or other measures
Execute a comprehensive analysis of:
How the scores on the test relate to other test scores and measures
How scores on the test can be understood within some theoretical framework for understanding the construct that the test was designed to measure

10
Q

Face Validity

A

Relates more to what a test appears to measure to the person being tested than to what the test actually measures; face validity is a judgment concerning how relevant the test items appear to be

11
Q

High Face Validity

A

If, on the face of it, the test appears to measure what it purports to measure

12
Q

Lack of Face Validity

A

Contributes to a lack of confidence in the perceived effectiveness of the test - with a consequential decrease in the testtaker’s cooperation or motivation to do his or her best

13
Q

Content Validity

A

Describes a judgment of how adequately a test samples behavior representative of the universe of behavior that the test was designed to sample

14
Q

Test Blueprint

A

Emerges from the structure of the evaluation; a plan regarding the types of information to be covered by the items, the number of items tapping each area of coverage, the organization of the items in the test, and so forth; represents the culmination of efforts to adequately sample the universe of content areas that conceivably could be sampled in such a test

15
Q

Lawshe Test

A

A method for gauging agreement among raters or judges regarding how essential a particular item is

16
Q

C.H. Lawshe

A

Proposed that each rater respond to the following question for each item: Is the skill or knowledge measured by this item:
Essential
Useful but not essential
Not necessary

17
Q

Content Validity Ratio

A

Negative CVR - when fewer than half the panelists indicate essential, the CVR is negative
Zero CVR - when exactly half the panelists indicate essential, the CVR is zero
Positive CVR - when more than half but not all the panelists indicate essential, the CVR ranges between .00 and .99

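The three CVR cases above follow directly from Lawshe's formula, CVR = (n_e - N/2) / (N/2), where n_e is the number of panelists rating an item "essential" and N is the total number of panelists. A minimal sketch with hypothetical panel counts:

```python
# Lawshe's Content Validity Ratio: CVR = (n_e - N/2) / (N/2),
# where n_e = panelists rating the item "essential", N = total panelists.
def content_validity_ratio(n_essential: int, n_panelists: int) -> float:
    half = n_panelists / 2
    return (n_essential - half) / half

# Fewer than half say "essential" -> negative CVR
print(content_validity_ratio(3, 10))   # -0.4
# Exactly half -> zero CVR
print(content_validity_ratio(5, 10))   # 0.0
# More than half but not all -> CVR between .00 and .99
print(content_validity_ratio(8, 10))   # 0.6
```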
18
Q

Criterion-Related Validity

A

Judgment of how adequately a test score can be used to infer an individual’s most probable standing on some measure of interest, the measure of interest being the criterion

19
Q

Types of Validity Evidence under Criterion-Related Validity

A

Concurrent Validity

Predictive Validity

20
Q

Concurrent Validity

A

An index of the degree to which a test score is related to some criterion measure obtained at the same time (concurrently)

21
Q

Predictive Validity

A

An index of the degree to which a test score predicts some criterion measure

22
Q

Characteristics of a Criterion

A

Relevant
Valid
Uncontaminated

23
Q

Criterion Contamination

A

Term applied to a criterion measure that has been based, at least in part, on predictor measures

24
Q

Concurrent Validity

A

When test scores are obtained at about the same time that the criterion measures are obtained, measures of the relationship between the test scores and the criterion provide evidence of concurrent validity

25
Predictive Validity of a Test
Indicated by measures of the relationship between test scores and a criterion measure obtained at a future time; how accurately scores on the test predict some criterion measure
26
Criterion-Related Validity Based on
Validity Coefficient | Expectancy Data
27
Validity Coefficient
Correlation coefficient that provides a measure of the relationship between test scores and scores on the criterion measure; affected by restriction or inflation of range; should be high enough to result in the identification and differentiation of testtakers with respect to target attributes
28
Pearson Correlation Coefficient
Used to determine the validity between two measures
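A validity coefficient is simply the Pearson r computed between test scores and criterion scores. A self-contained sketch using made-up predictor and criterion data:

```python
import math

def pearson_r(x, y):
    # Pearson correlation: covariance divided by the product of
    # the (uncorrected) standard-deviation terms.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

test_scores = [10, 12, 14, 16, 18]   # hypothetical predictor scores
criterion   = [2, 3, 5, 6, 9]        # hypothetical criterion measures
print(round(pearson_r(test_scores, criterion), 3))  # 0.981
```

A high coefficient like this would support using the test to differentiate testtakers with respect to the criterion; restriction of range in either variable would shrink it.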
29
Restriction
Whether the range of scores employed is appropriate to the objective of the correlational analysis; attrition in the number of subjects may occur over the course of the study, and the validity coefficient may be adversely affected
30
Incremental Validity
The degree to which an additional predictor explains something about the criterion measure that is not explained by predictors already in use
31
Expectancy Data
Provide information that can be used in evaluating the criterion-related validity of a test
32
Expectancy Table
Shows the percentage of people within specified test score intervals who subsequently were placed in various categories of the criterion; may be created from a scattergram according to the steps listed
33
Taylor-Russell Tables
Provide an estimate of the extent to which inclusion of a particular test in the selection system will actually improve selection; determining the increase over current procedures
34
Selection Ratio
Numerical value that reflects the relationship between the number of people to be hired and the number of people available to be hired
35
Base Rate
Refers to the percentage of people hired under the existing system for a particular position
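Both Taylor-Russell inputs are simple proportions; a worked example with entirely hypothetical hiring figures:

```python
# Selection ratio: people to be hired relative to people available to be hired
to_hire = 5
applicants = 50
selection_ratio = to_hire / applicants
print(selection_ratio)  # 0.1

# Base rate under the existing selection system (hypothetical figures)
successful_hires = 12
total_hires = 20
base_rate = successful_hires / total_hires
print(base_rate)  # 0.6
```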
36
Steps to Create an Expectancy Table
1. Draw a scatterplot such that each point in the plot represents a particular test score-criterion score combination, with the criterion on the Y-axis
2. Draw grid lines in such a way as to summarize the number of people who scored within a particular interval
3. Count the number of points in each cell (n)
4. Count the total number of points within each vertical interval; this number represents the number of people scoring within a particular test-score interval
5. Convert each cell frequency to a percentage; this represents the percentage of people obtaining a particular test score-criterion score combination; write the percentages in the cells, enclosed in parentheses to distinguish them from the frequencies
6. On a separate sheet, create table headings and subheadings and copy the percentages into the appropriate table cells
7. If desired, write the number and percentage of cases per test-score interval; if the number of cases in any one cell is small, it is more likely to fluctuate in subsequent charts; if cell sizes are small, the user could create fewer cells or accumulate data over several years
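The tallying steps above can be sketched in a few lines, using made-up (test score, criterion category) pairs; cell frequencies are converted to within-interval percentages:

```python
from collections import defaultdict

# Hypothetical (test score, criterion category) pairs
pairs = [(52, "pass"), (55, "pass"), (58, "fail"), (61, "pass"),
         (64, "pass"), (67, "fail"), (71, "pass"), (74, "pass")]

def interval(score, width=10):
    # Label the test-score interval a score falls into, e.g. 52 -> "50-59"
    lo = (score // width) * width
    return f"{lo}-{lo + width - 1}"

# Count points per (test-score interval, criterion category) cell
cells = defaultdict(lambda: defaultdict(int))
for score, outcome in pairs:
    cells[interval(score)][outcome] += 1

# Convert each cell frequency to a percentage within its interval
for iv in sorted(cells):
    n = sum(cells[iv].values())
    percentages = {cat: round(100 * freq / n) for cat, freq in cells[iv].items()}
    print(iv, f"(n={n})", percentages)
```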
37
Naylor-Shine Table
Entails obtaining the difference between the means of the selected and unselected groups to derive an index of what the test is adding to already established procedures; determines the increase in average score on some criterion measure
38
Utility of Tests
Usefulness or practical value of tests
39
Cronbach and Gleser
Developed the Decision Theory of Tests
40
Decision Theory of Tests
Classification of decision problems
Various selection strategies ranging from single-stage processes to sequential analyses
Quantitative analysis of the relationship between test utility, the selection ratio, the cost of the testing program, and the expected value of the outcome
Recommendation that in some instances job requirements be tailored to the applicant's ability instead of the other way around
41
Adaptive treatment
Tailoring job requirements to the applicant's ability instead of the other way around
42
Base Rate
Extent to which a particular trait, behavior, characteristic, or attribute exists in the population (expressed as a proportion)
43
Hit Rate
Defined as the proportion of people a test accurately identifies as possessing or exhibiting a particular trait, behavior, characteristic, or attribute
44
Miss Rate
The proportion of people the test fails to identify as having, or not having a particular characteristic or attribute
45
Miss
Amounts to an inaccurate Prediction
46
Categories of Misses
False Positive | False Negative
47
False Positive
Miss wherein the test predicted that the testtaker did possess the particular characteristic or attribute being measured when in fact the testtaker did not
48
False Negative
Miss wherein the test predicted that the testtaker did not possess the particular characteristic or attribute being measured when the testtaker actually did
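Hit rate, miss rate, and base rate are all proportions over the four prediction outcomes. A worked example with hypothetical screening counts:

```python
# Confusion-matrix arithmetic for hit/miss rates (hypothetical screening data).
# "Positive" = the test predicts the person possesses the attribute.
true_positives  = 40   # test said yes; person actually has the attribute
false_positives = 10   # test said yes; person does not have it
true_negatives  = 35   # test said no; person does not have it
false_negatives = 15   # test said no; person actually has it

total = true_positives + false_positives + true_negatives + false_negatives
hit_rate  = (true_positives + true_negatives) / total    # accurate identifications
miss_rate = (false_positives + false_negatives) / total  # inaccurate predictions
base_rate = (true_positives + false_negatives) / total   # proportion with the attribute

print(hit_rate, miss_rate, base_rate)  # 0.75 0.25 0.55
```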
61
Construct Validity
Judgment about the appropriateness of inferences drawn from test scores regarding individual standings on a variable called a construct
62
Construct
An informed, scientific idea developed or hypothesized to describe or explain behavior; unobservable, presupposed (underlying) traits that a test developer may invoke to describe test behavior or criterion performance
63
Evidence of Construct Validity
Evidence of Homogeneity
Evidence of Changes with Age
Evidence of Pretest-Posttest Changes
Evidence of Distinct Groups
Convergent Evidence
Discriminant Evidence
Factor Analysis
64
Homogeneity
Refers to how uniform a test is in measuring a single concept
65
How Homogeneity Can be Increased
Use of Pearson r to correlate average subtest scores with the average total test score
Reconstruction or elimination of subtests that, in the test developer's judgment, do not correlate well with the test as a whole
For dichotomously scored tests: eliminating items that do not show significant correlation coefficients with total test scores
For multipoint scaled tests: eliminating items that do not show significant Spearman rank-order correlation coefficients
Coefficient alpha: used in estimating the homogeneity of a test composed of multiple-choice items
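The Pearson item-total approach described above can be sketched by correlating each item column with the total scores and flagging items that correlate poorly. This uses made-up dichotomous responses and an uncorrected item-total r (the total includes the item itself):

```python
import math

# Made-up dichotomous responses: rows = testtakers, columns = items (1 = correct)
responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
]

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

totals = [sum(row) for row in responses]
for i in range(len(responses[0])):
    item = [row[i] for row in responses]
    r = pearson_r(item, totals)
    flag = "  <- candidate for elimination" if r <= 0 else ""
    print(f"item {i}: r = {r:+.2f}{flag}")
```

Here item 2 correlates negatively with the total score, so it would be a candidate for reconstruction or elimination.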
66
Item Analysis Procedures
Employed in ensuring test homogeneity; one item analysis procedure focuses on the relationship between testtakers' scores on individual items and their score on the entire test
67
Evidence of Changes with Age
Tests should reflect progressive changes for constructs that could be expected to change over time
68
Evidence of Pretest-Posttest Changes
Evidence that test scores change as a result of some experience between a pretest and a posttest can be evidence of construct validity; some intervening experience could be predicted to yield changes in score from pretest to posttest
69
Method of Contrasted Groups
Demonstrating that scores on the test vary in a predictable way as a function of membership in some group; If a test is a valid measure of a particular construct, then test scores from groups of people who would be presumed to differ with respect to that construct should have correspondingly different test scores
70
Convergent Evidence
Comes from correlations with tests purporting to measure an identical construct and from correlations with measures purporting to measure related constructs
71
Discriminant Evidence
When a validity coefficient shows little relationship between test scores and other variables with which scores on the test being construct-validated should not theoretically be correlated
72
Multitrait-Multimethod Matrix
Experimental technique that measures both convergent and discriminant validity evidence; a matrix or table that results from correlating variables (traits) within and between methods; values for any number of traits as obtained by various methods are inserted into the table, and the resulting matrix of correlations provides insight with respect to both the convergent and the discriminant validity of the methods used
73
Multitrait
Two or more traits
74
Multimethod
Two or more methods
75
Factor Analysis
Shorthand term for a class of mathematical procedures designed to identify factors or specific variables that are typically attributes, characteristics, or dimensions on which people may differ; employed as a data reduction method in which several sets of scores and the correlations between them are analyzed; identifies the factor or factors in common between test scores on subscales within a particular test or the factors in common between scores on a series of tests
76
Exploratory Factor Analysis
Entails estimating or extracting factors, deciding how many factors to retain, and rotating factors to an interpretable orientation
77
Confirmatory Factor Analysis
A factor structure is explicitly hypothesized and is tested for its fit with the observed covariance structure of the measured variables
78
Factor Loading
Each test is thought of as a vehicle carrying a certain amount of one or more abilities; conveys information about the extent to which the factor determines the test scores or scores
79
Bias
Factor inherent in a test that systematically prevents accurate, impartial measurement
80
Intercept Bias
When a test systematically underpredicts or overpredicts the performance of members of a particular group with respect to a criterion; derived from the point where the regression line intersects the Y-axis
81
Slope Bias
When a test systematically yields significantly different validity coefficients for members of different groups
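Both kinds of bias can be seen by fitting a separate regression line (criterion on test score) for each group. A minimal least-squares sketch with fabricated data in which the groups share a slope but differ in intercept:

```python
def fit_line(x, y):
    # Ordinary least squares: returns (slope, intercept)
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

# Fabricated data: same slope, different intercepts -> intercept bias
group_a = ([1, 2, 3, 4], [3, 5, 7, 9])   # follows y = 2x + 1
group_b = ([1, 2, 3, 4], [1, 3, 5, 7])   # follows y = 2x - 1

slope_a, int_a = fit_line(*group_a)
slope_b, int_b = fit_line(*group_b)
print(slope_a, int_a)  # 2.0 1.0
print(slope_b, int_b)  # 2.0 -1.0
# Equal slopes but unequal intercepts: a single common regression line would
# systematically over- or underpredict one group's criterion performance.
```

If the fitted slopes differed between groups instead, that would illustrate slope bias.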
82
Rating
Numerical or verbal judgment (or both) that places a person or an attribute along a continuum identified by a scale of numerical or word descriptors
83
Rating Scale
Scale of numerical or word descriptors
84
Rating Error
Judgment resulting from the intentional or unintentional misuse of a rating scale
85
Leniency/Generosity Error
Error in rating that arises from the tendency on the part of the rater to be lenient in scoring, marking, and/or grading
86
Severity Error
Opposite of Leniency/Generosity Error; when tests are scored very critically by the scorer
87
Central Tendency Error
The rater exhibits a general and systematic reluctance to give ratings at either the positive or negative extreme; all the rater's ratings would tend to cluster in the middle of the rating continuum
88
Restriction-of-Range Errors
(Central Tendency, Leniency, Severity Errors) overcome through the use of Rankings
89
Rankings
A procedure that requires the rater to measure individuals against one another instead of against an absolute scale; By using rankings, the rater is forced to select first, second, third choices, etc.
90
Halo Effect
Describes the fact that, for some raters, some ratees can do no wrong; a tendency to give a particular ratee a higher rating than he or she objectively deserves because of the rater's failure to discriminate among conceptually distinct and potentially independent aspects of a ratee's behavior
91
Fairness
The extent to which a test is used in an impartial, just, and equitable way