Selection Flashcards

Flashcards in Selection Deck (95):
1

The process through which organizations make decisions about who will or will not be allowed to join the organization

Personnel selection

2

Selection begins with

The candidates identified through recruitment

3

Selection ends with

The selected individuals placed in jobs within the organization

4

Some feature of a person you are hiring

Predictor

5

Some organizationally relevant outcome

Criterion

6

Some feature you want to assess

Construct

7

The actual score you observe

Measure

8

These are the tools used to make predictions about job applicants

Selection methods or devices

9

The goal of selection is to

Legitimately discriminate among applicants

10

The goal of this is to legitimately discriminate among applicants

Selection

11

Different selection methods may be more/less appropriate to use depending on

The specific job for which you are selecting

12

5 criteria for evaluating any selection method

Reliability
Validity
Generalizability
Utility
Legality

13

The extent to which a measurement is free from random error

Reliability

14

The more random error associated with a measure,

The less reliable it will be

15

The less reliable a measure is,

The less precise we can be in interpreting the scores it provides

16

What measures reliability

A correlation coefficient (r, a standardized measure of association); the upper limit of the correlation between two measures is the product of the square roots of each measure's reliability
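As a quick illustration of that ceiling, here is a minimal sketch (with made-up reliability values) of how the reliabilities of a predictor and a criterion limit the correlation that can be observed between them.

```python
# Minimal sketch of the reliability ceiling from classical test theory.
# The reliability values below are invented, not real test data.

def max_observed_correlation(reliability_x: float, reliability_y: float) -> float:
    """Upper limit on the correlation two measures can show, given each measure's reliability."""
    return (reliability_x * reliability_y) ** 0.5

# Example: a predictor with reliability .80 and a criterion with reliability .60
print(f"Ceiling on the validity coefficient: {max_observed_correlation(0.80, 0.60):.2f}")  # ~0.69
```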

17

In the selection context, this refers to the extent to which performance on the selection device/test is associated with performance on the job

Validity

18

_____ is necessary for validity but not sufficient

Reliability

19

Selection device does not measure all important aspects

Deficient

20

Selection device measures some irrelevant aspects

Contaminated

21

Three ways of measuring validity (accepted by the government's Uniform Guidelines on Employee Selection Procedures)

Criterion-related
Content-related
Construct-related

22

This involves empirically assessing the relationship between scores on a selection device and scores on a "criterion"

Criterion-related validity

23

The correlation between the two sets of scores assessed in criterion-related validation is referred to as

A validity coefficient
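As a concrete illustration, here is a minimal sketch of computing a validity coefficient as a Pearson correlation; the test scores and performance ratings below are invented.

```python
# Hypothetical example: correlating selection-test scores (predictor measure)
# with job performance ratings (criterion measure). All numbers are made up.
from statistics import correlation  # Pearson's r; requires Python 3.10+

test_scores = [55, 62, 70, 48, 81, 66, 74, 59]
performance = [3.1, 3.4, 4.0, 2.8, 4.5, 3.3, 4.2, 3.0]

r = correlation(test_scores, performance)
print(f"Validity coefficient r = {r:.2f}")  # always falls between -1 and 1
```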

24

This was invented by Karl Pearson and Sir Francis Galton, who conducted research on genetics

Correlation (validity) coefficient

25

Correlation (validity) coefficient ranges from

-1 to 1

26

The strength of the relationship

Effect size (absolute value)

27

Strong correlation r

.5

28

Moderate correlation r

.3

29

Weak correlation r

.1

30

Whether the relationship is positive or negative

Direction

31

Three relationships with strong correlation (r=.5)

Intelligence-job performance relationship
Knowledge-job performance relationship
Structured interview score-job performance relationship

32

Three relationships with moderate correlation (r=.3)

Conscientiousness-job performance relationship
School grades-job performance relationship
Integrity tests-job performance relationship

33

This uses the test scores of all applicants and looks for a relationship between the scores and the future performance of the applicants who were hired

Predictive validation

34

Drawback of predictive validation

Takes a long time; the criterion (DV) cannot be collected for a while, and the organization may not want to wait to use a "great" test

35

This consists of administering a test to people who currently hold a job, and then comparing their scores to existing measures of job performance

Concurrent validation

36

Drawbacks of concurrent validation

Incumbents may not be representative of applicants (they may have learned things on the job and may be less motivated to perform well on the test)
Restricted range (the test may predict better with more variance)

37

This involves using expert opinions/judgements that the items, questions, or tasks used in a selection test are representative of the kinds of situations, problems, or tasks that occur on the job

Content-related validity

38

When developing content validity measures (4)

Look at the job analysis you have already done
Compare your current and proposed methods of assessment to the KSAO or job competency matrix
Try to develop new measures of assessment that are especially relevant for each of the job components
Reject all measures that are not demonstrably related to documented KSAOs or competencies

39

Consistency between a high score on a test and a high level of a construct (e.g. intelligence or leadership ability), as well as between mastery of this construct and successful performance of the job

Construct-related validity

40

Criterion-related validity connects what?

Measures
(Predictor measure and criterion measure)

41

Content-related validity connects what

Predictor measure and criterion construct

42

Construct-related validity connects what

Constructs and measures
(Predictor construct and predictor measure)
(Criterion construct and criterion measure)

43

These apply not only to the conditions in which the method was originally developed (a specific job, organization, industry, etc.) but also to other settings

Generalizable selection methods

44

These often measure stable traits (e.g. GMA and personality) or generic skill sets (e.g. interviews and situational judgement tests)

Generalizable selection methods

45

This takes all of the correlations found in studies of a particular relationship and calculates a weighted average (such that correlations from studies with large samples are weighted more)

Meta-analysis

46

This is a quantitative, rather than qualitative, review of studies

Meta-analysis
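The core calculation is just a sample-size-weighted mean; here is a minimal sketch using invented study correlations and sample sizes.

```python
# Hypothetical meta-analytic average: correlations from larger-sample studies
# receive more weight. The studies below are invented for illustration.
studies = [
    {"r": 0.45, "n": 300},
    {"r": 0.30, "n": 80},
    {"r": 0.55, "n": 1200},
]

weighted_r = sum(s["r"] * s["n"] for s in studies) / sum(s["n"] for s in studies)
print(f"Sample-size-weighted mean correlation: {weighted_r:.2f}")
```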

47

_____ of a selection device is the degree to which its use improves the quality of the individuals selected

Utility

48

Procedures for this offer organizational decision-makers useful information regarding the relative values of different selection tools

Utility analysis
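The cards do not give a formula, but one widely taught approach is the Brogden-Cronbach-Gleser model; the sketch below, with invented inputs, shows the kind of dollar-value estimate such an analysis produces.

```python
# Hedged sketch of a Brogden-Cronbach-Gleser style utility estimate.
# All inputs are invented; this illustrates the idea, not a required formula.

def utility_gain(n_hired, tenure_years, validity, sd_performance_dollars,
                 avg_selected_z, n_applicants, cost_per_applicant):
    """Estimated dollar gain from using a selection device instead of selecting at random."""
    benefit = n_hired * tenure_years * validity * sd_performance_dollars * avg_selected_z
    cost = n_applicants * cost_per_applicant
    return benefit - cost

print(f"${utility_gain(10, 2, 0.40, 15000, 1.0, 100, 50):,.0f}")  # $115,000 with these inputs
```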

49

What's the best way to show job-relatedness

Through criterion-related validation

50

This is when elements of the selection system look valid

Face validity

51

High ______ results in less negative reactions and increased motivation to perform on the test/exercise

Face validity

52

These are hands-on simulations of part or all of the job that must be performed by applicants

Work sample tests

53

Work samples consist of (3)

Actual physical mock-ups of job tasks
In-basket exercises for managerial tasks
Examples of similar work done for another organization

54

Work sample content validity and criterion-related validity?

Highest level of content validity possible
High criterion-related validity (.54)

55

2 negatives of a work sample

Work samples can only be used with applicants who already know the job or have been trained for the occupation or job
Work samples are costly to develop and run, with costs generally increasing as job-complexity increases

56

General information processing capacity that facilitates reasoning, problem solving, decision making, and other higher order thinking skills

General Mental Ability (g, IQ, Intelligence)

57

Not the amount of information people know, but rather their ability to recognize, acquire, organize, update, select, and apply it effectively

GMA, g, IQ, Intelligence

58

This is the key moderator of GMA's validity and the major distinction among jobs

Complexity

59

GMA and cognitive tests validity?

High-complexity jobs (.58)
Medium-complexity jobs (.53)
Low-complexity jobs (.23)
Counter-productive work behaviors (-.33)
GPA (.41)
Income (.2)

60

This posits that individuals, over the course of their labor market experiences, will sort themselves into jobs that are compatible with their interests, values, and abilities

The gravitational hypothesis

61

3 Negatives of using GMA in selection

Has been shown to have large group differences and lead to adverse impact
Managers may use a score banding method in the selection process
Standard Error of the Difference (SED) bands are created such that differences within the same band may be due to chance (logically flawed)
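To make the SED banding idea concrete, here is a minimal sketch assuming made-up test statistics; it follows the common approach of deriving the band width from the standard error of measurement.

```python
# Hedged sketch of standard-error-of-difference (SED) score banding.
# The test SD, reliability, and top score below are invented.
import math

sd_test = 10.0      # standard deviation of test scores
reliability = 0.90  # test reliability
top_score = 95.0    # highest observed score

sem = sd_test * math.sqrt(1 - reliability)  # standard error of measurement
sed = math.sqrt(2) * sem                    # standard error of the difference
band_floor = top_score - 1.96 * sed         # scores in [floor, top] treated as equivalent

print(f"Scores from {band_floor:.1f} to {top_score:.1f} fall within the same band")
```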

62

Solution for negatives of using GMA

Non-cognitive measures should also be used in the selection process

63

The five-factor model (FFM) of personality consists of

Conscientiousness
Emotional Stability
Extraversion
Agreeableness
Openness

64

Taken together, this provides a comprehensive yet parsimonious framework to examine the relationship between specific personality traits and job outcomes

The Five-Factor Model

65

Described as being dependable, careful, thorough, responsible, and organized
Achievement-oriented, hardworking, and persevering
Predicts job performance (.28) and leadership (.28)
Sample items include:
I am always prepared
I pay attention to details
I make plans and stick to them

Conscientiousness

66

Described as being relaxed, unenvious, tranquil, secure, and content
Often referred to as the opposite pole of neuroticism
Predicts job performance (.16) and job satisfaction (.29)
Sample items include:
I seldom feel blue
I feel comfortable with myself
I am not easily bothered by things

Emotional stability (neuroticism)

67

Described as being sociable, gregarious, assertive, active, and dominant
Predicts sales performance (.28) and leadership (.31)
Sample items include:
I feel comfortable around people
I make friends easily
I am the life of the party

Extraversion

68

Described as being curious, flexible, trusting, cooperative, and forgiving
These individuals prefer tasks calling for helping but dislike tasks calling for conflict (negotiation)
Predicts workplace deviance (-.44) and teamwork (.34)
Sample items include:
I make people feel at ease
I trust what people say
I treat all people equally

Agreeableness

69

Described as being imaginative, cultured, curious, original, and broad-minded
Prefer self-direction and flexibility of idea organization
Predicts leadership (.24) and workplace accidents (.5)
Sample items include:
I have a vivid imagination
I enjoy hearing new ideas
I enjoy thinking about things

Openness

70

Negative of a personality assessment

Personality is self-reported, opening the door to response distortion

71

Structured interviews consist of (4)

Evaluations standardization
Question consistency
Question sophistication
Rapport building

72

The use of a formal rating system applied to each candidate

Evaluations standardization

73

The consistent wording and ordering of questions asked by the interviewer

Question consistency

74

The types of questions (behavioral or situational) given

Question sophistication

75

The questions asked at the beginning of the structured interview to get to know each candidate

Rapport building

76

5 things interviews are measuring

Mental capability
Declarative job knowledge and skills
Personality traits (FFM)
Applied social skills
Fit with the values of the organization

77

4 negatives of interviews

Self-presentation tactics
Evidence of applicant misinformation and over-preparing
Interviewer overconfidence in their ability to identify the best candidates
Pre-interview impressions and confirmation bias

78

These are high-fidelity simulations where assessees (current or future employees) are rated on a number of job-based exercises with the intent of predicting actual behavior on the job

Assessment Centers (ACs)

79

In these, participants work through a series of behavioral exercises (e.g. Job simulations, in-baskets, and role plays)

Assessment centers

80

Predictive validity of assessment centers

Job performance (.36), managerial potential (.53), training (.35), career advancement (.36)

81

Negatives of assessment centers

Can only be used for certain jobs, typically managerial jobs
Consideration must be given to:
High cost
Time to create a proper AC
Time to conduct the assessment
Pre-selected individuals must 'go away' to participate

82

These are designed to directly assess attitudes regarding dishonest behaviors
Job performance (.14) CWB (.38)

Overt integrity tests (clear purpose tests)

83

These tests specifically ask about past illegal and dishonest activities

Overt integrity tests (clear purpose tests)

84

These use composite measures of personality dimensions, such as reliability, conscientiousness, and trustworthiness
Job performance (.18) CWB (.27)

Personality-based measures (disguised purpose tests)

85

These present applicants with a work-related situation and multiple possible responses to the situation
Applicants are then forced to evaluate and pick from the alternative courses of action

Situational Judgement Tests (SJT)

86

Items on SJTs with behavioral-tendency instructions (what would you do?) have higher correlations with

Personality constructs and are reflective of typical performance

87

Items with knowledge instructions (what should one do?) have higher correlations with

Cognitive ability and are reflective of maximal performance

88

Why are grades used in selection

They reflect intelligence, motivation, and other abilities applicable to the job

89

This contains questions about past life experiences

Biographical data measures (or Biodata)

90

Hire sequentially based on the first applicants who score above the cut score for the job

Minimum qualification

91

The minimum level of performance that is acceptable for an applicant to be considered minimally qualified

Cut scores

92

Give job offers starting from the most qualified and progressing to the lowest score

Top-down hiring

93

Arriving at a selection decision in which a very high score on one type of assessment can make up for a low score on another

Compensatory model

94

Process of arriving at a selection decision by eliminating some candidates at each stage of the selection process

Multiple-Hurdle System

95

Multiple Hurdle Systems allow organizations to balance the trade-off between

Cheap or generic tests missing important characteristics versus extensive tests/assessments being costly and time-consuming
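To tie the last few cards together, here is a minimal sketch contrasting the decision models above; the applicant scores, cut scores, and weights are all invented.

```python
# Hedged sketch of two decision models from the cards above: a compensatory
# composite versus a multiple-hurdle check against cut scores. Data are made up.
applicant = {"gma_test": 72, "interview": 85, "work_sample": 60}
cut_scores = {"gma_test": 65, "interview": 70, "work_sample": 65}
weights = {"gma_test": 0.5, "interview": 0.3, "work_sample": 0.2}

# Compensatory model: a high score on one assessment can offset a low score on another.
composite = sum(weights[k] * applicant[k] for k in applicant)

# Multiple-hurdle model: the applicant must clear the cut score at every stage.
passes_all_hurdles = all(applicant[k] >= cut_scores[k] for k in applicant)

print(f"Compensatory composite: {composite:.1f}")   # 73.5
print(f"Clears every hurdle: {passes_all_hurdles}")  # False: work sample below its cut
```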