h&m chap 5 Flashcards

(71 cards)

1
Q

Predictor

A

Any variable used to forecast a criterion

2
Q

Psychometric

A

The measurement (“metric”) of properties of the mind (from the Greek word “psyche”). The standards used to measure the quality of psychological assessments

3
Q

Reliability

A

A standard for evaluating tests that refers to the consistency, stability, or equivalence of test scores. Often contrasted with validity

4
Q

Test-Retest Reliability

A

A type of reliability that reveals the stability of test scores upon repeated applications of the test

5
Q

coefficient of stability

A

reflects the stability of the test over time
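A minimal Python sketch, with made-up scores, of how a coefficient of stability is obtained: the same test is given twice and the two sets of scores are correlated.

```python
import numpy as np

# Hypothetical scores for six people who took the same test twice.
time1 = np.array([82, 74, 91, 65, 88, 70])
time2 = np.array([80, 76, 93, 63, 85, 72])

# The coefficient of stability is the correlation between the two administrations.
stability = np.corrcoef(time1, time2)[0, 1]
print(f"coefficient of stability = {stability:.2f}")  # high, since scores barely moved
```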

6
Q

On a reliable test, do those with high and low scores stay consistent or bounce around between two trials? What happens on an unreliable test?

A

If the test is reliable, those who scored high the first time will also score high the second time, and those who scored low on the first will also score low on the second. If the test is unreliable, the scores will “bounce around” in such a way that there is no similarity in individuals’ scores between the two trials

7
Q

At what number is a reliability coefficient considered professionally acceptable?

A

0.70 (although 0.80 and above are better)

8
Q

Equivalent-form reliability

A

A type of reliability that reveals the equivalence of test scores between two versions or forms of the test

9
Q

coefficient of equivalence

A

reflects the extent to which the two forms are sufficiently comparable measures of the same concept

10
Q

What is the least popular major type of reliability and why?

A

Of the three major types of reliability, equivalent-form reliability is the least popular because it is usually challenging to come up with one good test, let alone two

11
Q

If the resulting coefficient of equivalence between two tests is high, what does that mean for their reliability?

A

If the resulting coefficient of equivalence is high between two tests, the tests are sufficiently comparable and are viewed as reliable measures of the same concept

12
Q

Internal Consistency Reliability

A

A type of reliability that reveals the homogeneity of the items comprising a test

13
Q

Split-half reliability

A

a test is given to a group of people, and when it is time to score the test, the researcher divides the items into two halves, scores each half separately, and correlates the scores from the two halves
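A minimal sketch, using made-up 0/1 item responses, of how a split-half coefficient can be computed; the Spearman-Brown correction (a standard adjustment not named on this card) estimates the reliability of the full-length test from the half-test correlation.

```python
import numpy as np

# Hypothetical item responses (rows = 5 test takers, columns = 8 items, scored 0/1).
items = np.array([
    [1, 1, 0, 1, 1, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 0, 1],
    [1, 1, 1, 1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0, 0, 1, 0],
    [1, 0, 1, 1, 1, 1, 0, 1],
])

# Split the items in half (odd- vs. even-numbered items) and total each half.
half_a = items[:, 0::2].sum(axis=1)
half_b = items[:, 1::2].sum(axis=1)

# Correlate the two half-test scores.
r_half = np.corrcoef(half_a, half_b)[0, 1]

# Spearman-Brown correction: estimated reliability of the full-length test.
r_full = 2 * r_half / (1 + r_half)
print(f"split-half r = {r_half:.2f}, corrected = {r_full:.2f}")
```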

14
Q

If a test has internal-consistency reliability, what will there be between responses to the two half-tests?

A

If the test has internal-consistency reliability, there will be a high degree of similarity between the responses to the items from the two halves

15
Q

Inter-rater reliability (aka inter-judge, inter-observer, or conspect reliability)

A

A type of reliability that reveals the degree of agreement among the assessments provided by two or more raters
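One common index of inter-rater agreement is Cohen's kappa (not named on this card); a minimal sketch with made-up ratings from two raters:

```python
import numpy as np

# Hypothetical ratings of 10 interview candidates by two raters
# on a 3-point scale (1 = reject, 2 = maybe, 3 = hire).
rater1 = np.array([3, 2, 3, 1, 2, 3, 1, 2, 2, 3])
rater2 = np.array([3, 2, 3, 1, 1, 3, 1, 2, 3, 3])

# Simple percent agreement.
p_observed = np.mean(rater1 == rater2)

# Chance agreement: the probability both raters pick the same category
# if each rated according to their own base rates alone.
categories = np.union1d(rater1, rater2)
p_chance = sum(np.mean(rater1 == c) * np.mean(rater2 == c) for c in categories)

# Cohen's kappa: agreement corrected for chance.
kappa = (p_observed - p_chance) / (1 - p_chance)
print(f"agreement = {p_observed:.2f}, kappa = {kappa:.2f}")
```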

16
Q

Validity

A

A standard for evaluating tests that refers to the accuracy or appropriateness of drawing inferences from test scores. Often contrasted with reliability

17
Q

Operationalization

A

The process of determining how a construct will be assessed

18
Q

Construct Validity

A

The degree to which a test is an accurate and faithful measure of the construct it purports to measure

19
Q

convergent validity coefficients

A

reflect the degree to which scores from different measures of the same construct converge (or come together) in assessing a common concept

20
Q

divergent validity coefficients

A

reflect the degree to which scores on measures of different constructs diverge (or are separate) from each other in assessing unrelated concepts

21
Q

Criterion-related validity

A

The degree to which a test forecasts or is statistically related to a criterion

22
Q

Concurrent validity

A

used to diagnose the existing status of some criterion, whereas predictive validity is used to forecast future status

23
Q

What kind of validity focuses on how well a predictor measures the criterion at the same point in time?

A

In measuring concurrent criterion-related validity, we are concerned with how well a predictor can predict a criterion at the same time, or concurrently.

24
Q

In what kind of validity do we collect predictor information to forecast future criterion performance?

A

In measuring predictive criterion-related validity, we collect predictor information and use it to forecast future criterion performance

25
Validity Coefficient
A statistical index (expressed as a correlation coefficient) that reveals the degree of association between two variables
26
What does a greater correlation between the predictor and the criterion tell us?
The greater the correlation, the more the predictor tells us about the criterion
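A rough illustration with made-up numbers: the validity coefficient is the predictor-criterion correlation, and its square is the proportion of criterion variance the predictor accounts for.

```python
import numpy as np

# Hypothetical selection-test scores (predictor) and later job-performance
# ratings (criterion) for 8 employees.
test_scores = np.array([55, 62, 70, 48, 80, 66, 74, 59])
performance = np.array([3.1, 3.4, 4.0, 2.8, 4.5, 3.6, 4.2, 3.0])

# The validity coefficient is the predictor-criterion correlation.
r = np.corrcoef(test_scores, performance)[0, 1]

# r squared: the share of criterion variance accounted for by the predictor.
# The larger r is, the more the predictor tells us about the criterion.
print(f"validity coefficient r = {r:.2f}, variance accounted for = {r**2:.2f}")
```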
27
Content Validity
The degree to which subject matter experts agree that the items in a test are a representative sample of the domain of knowledge the test purports to measure
28
Which form of validity was most relevant in achievement testing?
Content
29
Face Validity
The appearance that items in a test are appropriate for the intended use of the test by the individuals who take the test
30
Individuals are more likely to bring legal challenges against companies for using tests that fail to achieve what form of validity?
Face
31
Can a test be reliable but not valid? Can a test be valid without being reliable?
Yes, a test can be reliable but not valid. However, a test can never be valid without being reliable
32
Academic intelligence
represents what intelligence tests typically measure, such as fluency with words and numbers
33
Practical intelligence
needed to be competent in the everyday world and is not highly related to academic intelligence
34
Creative intelligence
pertains to the ability to produce work that is both novel (i.e., original or unexpected) and appropriate (i.e., useful)
35
Section 106 of the Civil Rights Act of 1991
states that it is unlawful for employers in connection with the selection or referral of “applicants or candidates for employment or promotion to adjust the scores of, use different cutoffs for, or otherwise alter the results of employment related tests on the basis of race, color, religion, sex, or national origin.”
36
Within-group norming
a practice in which individual scores are converted to standard scores or percentile scores within one’s group
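A minimal sketch, with made-up scores and group labels, of the within-group norming idea: each raw score becomes a percentile rank computed only against the scores in its own group (the practice Section 106 makes unlawful for the characteristics it lists).

```python
import numpy as np

# Hypothetical raw test scores, each tagged with the norm group it belongs to.
scores = np.array([72, 65, 80, 58, 75, 90, 61, 70])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

def within_group_percentiles(scores, groups):
    """Convert each raw score to a percentile rank within its own group."""
    pct = np.empty_like(scores, dtype=float)
    for g in np.unique(groups):
        mask = groups == g
        group_scores = scores[mask]
        # percentile rank = % of the group scoring at or below this score
        pct[mask] = [100.0 * np.mean(group_scores <= s) for s in group_scores]
    return pct

print(within_group_percentiles(scores, groups))
```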
37
4 critical physical abilities relevant to work performance
1) Static strength: the ability to use muscle force to lift, push, pull, or carry objects; 2) Explosive strength: the ability to use short bursts of muscle force to propel oneself or an object; 3) Gross body coordination: the ability to coordinate the movement of the arms, legs, and torso in activities where the whole body is in motion; 4) Stamina: the ability of the lungs and circulatory (blood) systems of the body to perform efficiently over time
38
Psychomotor ability
involves motor skills related to flexibility, balance, and coordination (fine motor skills)
39
Fine motor skills
those that involve smaller groups of muscles such as those found in fingers
40
Sensory/perceptual ability
the ability to detect and recognize stimuli within the environment
41
Personality
refers to the individual differences that people have that influence how they think, feel, and behave in the world
42
Big 5 Personality Theory
A theory that defines personality in terms of five major factors: openness to experience, conscientiousness, extraversion, agreeableness, and emotional stability.
43
Dark Triad
A cluster of three dysfunctional personality types associated with counterproductive work behavior: Machiavellianism, narcissism, and psychopathy
44
Machiavellian
a personality type guided by the belief that “the end justifies the means”
45
Narcissists
self-promoting and unaffected by criticism, they exude subtle arrogance. They tend to be dismissive of advice, primarily because they don’t view others as competent compared to themselves
46
Psychopaths
characterized by lacking any concern for others; not inherently violent
47
Faking
the behavior of job applicants who falsify or fake their responses to items on personality inventories to create a favorable impression
48
Integrity Test
A type of test that purports to assess a candidate’s honesty or character
49
Overt integrity test
the job applicant clearly understands that the intent of the test is to assess integrity
50
Personality-based integrity test
makes no reference to theft; these tests contain conventional personality assessment items that have been found to be predictive of theft
51
Situational judgment test
A type of test that describes a problem to the test taker and requires the test taker to rate various possible solutions in terms of their feasibility or applicability
52
Biodata inventory
A method of assessing individuals in which biographical information pertaining to past activities, interests, and behaviors in their lives is considered
53
Rational keying
giving more points to response options that most reflect the constructs they are intended to reflect, as determined by expert judgments
54
Empirical keying
giving more points to options that are most predictive of the criterion of interest, such as job performance or turnover
55
Hybrid Keying
a combination of rational and empirical keying such that points are given to options that are most predictive of the desirable criteria, but only if they also make conceptual sense
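A minimal sketch, with made-up data, contrasting the keying approaches above: here an empirical key is derived by weighting each response option by the mean criterion (performance) value of the applicants who chose it, which is one simple way to key empirically.

```python
import numpy as np

# Hypothetical biodata item: which response option (0-2) each of 10 applicants
# chose, plus a later criterion value (job-performance rating) for each.
choices = np.array([0, 1, 2, 2, 1, 0, 2, 1, 2, 0])
performance = np.array([2.5, 3.4, 4.1, 3.9, 3.2, 2.8, 4.3, 3.5, 4.0, 2.6])

# Empirical keying: weight each option by how well it predicts the criterion,
# here simply by the mean performance of the applicants who chose it.
empirical_key = {int(opt): round(float(performance[choices == opt].mean()), 2)
                 for opt in np.unique(choices)}

# Rational keying would instead assign these weights from expert judgments about
# which options best reflect the construct; hybrid keying keeps an empirical
# weight only when it also makes conceptual sense.
print(empirical_key)
```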
56
Computerized adaptive testing (CAT)
A form of assessment using a computer in which the questions have been pre-calibrated in terms of difficulty, and the examinee’s response (right or wrong) to one question determines the selection of the next question
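A toy sketch of the adaptive branching described on this card (real CAT systems use item response theory; this only shows how one response steers the choice of the next item):

```python
# Hypothetical item bank, pre-calibrated and sorted from easiest (index 0) to hardest.
item_bank = ["item_easy_1", "item_easy_2", "item_medium", "item_hard_1", "item_hard_2"]

def next_item_index(current_index, answered_correctly):
    """Move to a harder item after a correct answer, an easier one after a miss."""
    if answered_correctly:
        return min(current_index + 1, len(item_bank) - 1)
    return max(current_index - 1, 0)

# Example: start in the middle, answer correctly, then miss one.
idx = 2
idx = next_item_index(idx, answered_correctly=True)   # -> harder item
idx = next_item_index(idx, answered_correctly=False)  # -> back to an easier item
print(item_bank[idx])
```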
57
Unstructured interview
A format for the job interview in which the questions are different across all candidates. Often contrasted with the structured interview
58
Structured interview
A format for the job interview in which the questions are consistent across all candidates. Often contrasted with the unstructured interview
59
Which is superior: structured or unstructured interviews? Why?
Structured, due to their higher levels of inter-rater reliability and validity
60
What do highly structured interviews focus on?
constructs such as job knowledge, interpersonal and social skills, and problem solving
61
What do highly unstructured interviews focus on?
general intelligence, education, work experience, and interests
62
Behavior Description Interview
A type of job interview in which candidates are asked to provide specific examples from their past to illustrate attributes important for the position
63
Work Samples
A type of personnel selection test in which the candidate demonstrates proficiency on a task representative of the work performed in the job
64
Situational Exercise
A method of assessment in which examinees are presented with a problem and asked how they would respond to it
65
inbox assessment
a situational exercise in which the candidate works through the memos, messages, and requests that accumulate in a manager’s inbox; predictive of the job performance of managers and executives, a traditionally difficult group of employees to select. But a major problem with the test is that, like a work sample, it is an individual test
66
Leaderless group discussion (LGD)
A group of applicants (normally two to eight) engage in a job-related discussion in which no spokesperson or group leader has been named
67
Assessment Center
A technique for assessing job candidates using a series of structured, group-oriented exercises that are evaluated by raters
68
4 major standards that are useful in organizing all the information we have gathered about predictors
1) Validity 2) Fairness 3) Applicability 4) Cost
69
Validity
refers to the ability of the predictor to forecast criterion performance accurately
70
Fairness
refers to the ability of the predictor to render unbiased predictions of job success across applicants in various subgroups of gender, race, age, and so on
71
Applicability
refers to whether the selection method can be applied across the full range of jobs. Cost of implementing the method is the final (fourth) standard