Selection Flashcards

1
Q

Cog. ability def.

A

a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience (Gottfredson, 1997)

2
Q

Cog. ability description

A

General cognitive ability (McDaniel & Banks, 2011)
• Spearman (1904)
o General cognitive ability factor (g)
• Cattell’s crystallized and fluid intelligences
o Fluid intelligence is the ability to solve novel problems through reasoning
o Crystallized intelligence is the ability to rely on prior experience and knowledge to solve problems
• Carroll’s Three-Stratum Theory (1993)
o Stratum III: g
o Stratum II: fluid intelligence, crystallized intelligence, general memory and learning, broad visual perception, broad auditory perception, broad retrieval ability, broad cognitive speediness, and processing speed
o Stratum I: narrower, lower-level abilities underlying each of the above

3
Q

Cog. ability from staffing

A

Staffing Notes
• Measure of maximum performance.
• Generally administered under considerable time pressure.
• Very good validity across a range of jobs (validity increases with job complexity).
• Predicts job performance (.51; Schmidt & Hunter, 1998).
• Predicts training success.
• Considerable adverse impact against minorities and older employees (see the four-fifths rule sketch below).
• But highly valid within groups.
• Relatively inexpensive.
• Can generally be given in person or online.
• Easy to score, in person or online.
• Open questions concerning face validity.
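
To make “considerable adverse impact” concrete, the standard screen is the Uniform Guidelines’ four-fifths rule: adverse impact is flagged when a group’s selection ratio falls below 80% of the highest group’s. A minimal sketch in Python (the applicant and hire counts are hypothetical, for illustration only):

```python
# Four-fifths (80%) rule check for adverse impact (Uniform Guidelines).
# All counts below are hypothetical.
hired = {"majority": 60, "minority": 20}
applied = {"majority": 100, "minority": 50}

selection_ratios = {g: hired[g] / applied[g] for g in hired}
highest = max(selection_ratios.values())

for group, ratio in selection_ratios.items():
    impact_ratio = ratio / highest  # compare each group to the highest selection ratio
    flag = "adverse impact" if impact_ratio < 0.80 else "ok"
    print(f"{group}: selection ratio {ratio:.2f}, impact ratio {impact_ratio:.2f} -> {flag}")
```

Here the minority selection ratio (.40) is only two-thirds of the majority’s (.60), so the 80% threshold is violated.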

4
Q

Emotional intelligence

A

o Law et al. (2004) has a JAP article on content/construct validity issues
o Three dimensions: emotion perception, emotion understanding, and emotion regulation
o Joseph & Newman’s (2010) JAP meta-analysis found EI predicts performance above and beyond cognitive ability and personality
o Good analysis.
o The idea of being self-aware and aware of others’ emotions and correctly using this information (self-regulation) is likely to be important (e.g., for social workers)
o Issue: how to measure it, and at reasonable cost?
▪ e.g., show pictures/videos, ask significant others, etc.
▪ vs. paper-and-pencil self-report measures (Exhibit 9.6)
o Consulting firms’ claims have far exceeded the data.

5
Q

Personality def.

A

Relatively enduring patterns of thoughts, ideas, emotions, and behaviors that are consistent over situations and time and distinguish individuals from others (Barrick & Mount)

6
Q

Personality description

A

I. Model and structures of personality
a. Brief history and background
i. The meta-analysis by Barrick & Mount (1991) was a turning point in raising interest in personality
1. Conscientiousness was a valid predictor across most jobs
2. Extraversion was a predictor for interpersonal jobs
b. Big five can be clustered into “getting ahead” and “getting along”
c. Five factor model
d. HEXACO model
e. Using personality facets instead of factors
i. Someone might want one facet of conscientiousness but not another
1. e.g., high on achievement facets (achievement striving, self-efficacy, and self-discipline) but low on conformity facets (orderly, dutiful toward rules, cautious)
f. Nomological-web clustering approach
i. A general approach or philosophy in which personality variables or facets are grouped together (by factor analysis, expert sorting methods, etc.)
II. Criterion-related validity
a. Small to moderately related to leadership and career success, job performance, OCB, CWB, training, team processes, and job satisfaction
b. Some provide reasons why we shouldn’t be concerned about these relationships being small to moderate (e.g., even moderate values are important in practice: incremental validity, etc.)
III. Subgroup differences - fairly minimal
IV. Faking
V. Innovations (forced choice, CRT (James), “other” ratings)

7
Q

Biodata definition

A

“The core attribute of biodata items is that the items pertain to historical events that may have shaped the person’s behavior and identity” (Mael, 1991; Breaugh, 2009)

8
Q

Biodata description

A

• Important that items are discrete and verifiable
The use of biodata for employee selection: Past research and future directions (Breaugh, 2009)
• Definition: an applicant’s past behavior and experiences
• Reliability and validity
o Reliability - high, though it depends on the construct
o Validity - Schmidt & Hunter (1998): .35 with job performance
o Incremental validity - predicts beyond tenure, GMA, and the Big Five (Mount et al., 2000); see the ΔR² sketch below
• Modest adverse impact
• Negative applicant reactions
• Susceptible to faking (may be reduced through a warning or required elaboration)
• Issues in biodata research
o Heavy reliance in past studies on concurrent validation, which may overestimate validity
o Better to have a scale tailored to fit the unique aspects of the position than a generic scale
o Lack of information on biodata items and the constructs measured
• Future research: what is biodata, item focus, technology

Idea: applicant reactions may be more favorable when items tap constructs clearly relevant to the job rather than generic ones
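
Incremental validity claims like Mount et al.’s are typically tested with hierarchical regression: enter the baseline predictors first, then biodata, and examine the change in R². A minimal simulation sketch (numpy only; the variable names and data-generating coefficients are invented for illustration, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
gma = rng.normal(size=n)                      # simulated GMA scores
consc = rng.normal(size=n)                    # simulated conscientiousness
biodata = 0.4 * gma + rng.normal(size=n)      # biodata overlaps with GMA
perf = 0.5 * gma + 0.2 * consc + 0.15 * biodata + rng.normal(size=n)

def r_squared(X, y):
    # OLS with intercept; returns model R^2
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_base = r_squared(np.column_stack([gma, consc]), perf)           # Step 1
r2_full = r_squared(np.column_stack([gma, consc, biodata]), perf)  # Step 2
print(f"Delta R^2 for biodata: {r2_full - r2_base:.3f}")           # incremental validity
```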

9
Q

Interviews def.

A

a personally interactive process of one or more people asking questions orally to another person and evaluating the answers for the purpose of determining the qualifications of that person in order to make employment decisions

10
Q

Interview description

A

• Validity paradox – criterion-related validity is present, but construct validity remains unclear
• Modern structured interviewing
o Situational interview
o Behavior description interview
o Couple the interview with a formal rating system
• Reliability and validity
o Reasonable reliability, .75 under the right design conditions
o Greater validity for structured vs. unstructured interviews (.44–.62 for structured)
o When properly designed and under the right conditions, comparable to CA
o BUT the interview is a method, so any particular interview can range in validity
• Interview construct research – need to better understand what constructs are measured in interviews; limit the number of constructs (Dipboye, Macan, & Shahani, 2011)
• The role of structure
o Campion et al. (1997): 15 components of structure
o Structured interviews can be developed by creating questions directly from KSAOs or by using critical incidents (job analysis)
o Structure as a continuum!
• Looking at the interview from both the interviewer’s and the interviewee’s perspectives (Dipboye, Macan, & Shahani)
• Future research
o Interviews across different countries, intentional response distortion/faking, cognitive demands, technological advances

11
Q

Work sample def.

A

A test in which the applicant performs a selected set of actual tasks that are physically and/or psychologically similar to those performed on the job; standardized and scored with the aid of experts (Roth et al., 2005)

Work samples consist of tasks or work activities that mirror the tasks employees are required to perform on the job. Work sample tests can be designed to measure almost any job task but are typically designed to measure technically oriented tasks, such as operating equipment, repairing and troubleshooting equipment, organizing and planning work, and so forth. (Pulakos)

12
Q

Work sample other

A
• Validity: .33 with job performance
• Other: difficult to fake; positive applicant reactions; adverse impact depends on the type of work sample and the constructs it measures (Roth et al., 2008)

Work sample tests typically involve having job applicants perform the tasks of interest while their performance is observed and scored by trained evaluators. Like job knowledge tests, work sample tests should only be used in situations where candidates are expected to know how to perform the tested job tasks prior to job entry.

13
Q

AC intro and background

A

I. Arthur & Day (2011) summary
a. Definition: a comprehensive, standardized procedure that uses multiple techniques (exercises) and multiple assessors to assess multiple behavioral dimensions of interest
i. A method; thus, only as good as its design and administration
b. Typically used for selection, promotion, or development, with a move toward development

14
Q

AC design development scoring

A

Design steps: Job analysis → Determine major work behaviors → Identify KSAOs or constructs underlying major work behaviors → Identify behavioral dimensions related to the KSAOs → Select or develop exercises to measure the dimensions (usually where ACs fall short in construct validity) → Train assessors and administrators → Pilot test the assessment center → Refine as warranted → Implement the AC

15
Q

Methodological and design-related characteristics and features

A

a. Sound planning/job analysis; limit the number of dimensions; conceptual distinctiveness of dimensions; transparency of dimensions (yields more consistent behavior and better differential rating of dimensions); participant-to-assessor ratio (2:1 is less susceptible to bias and errors)

b. Scoring and rating approach (e.g., within- or across-exercise ratings, behavior checklists, overall assessment rating [OAR]?) - see the scoring sketch below
i. AEDR (across-exercise dimension ratings) yield dimension factors; WEDR (within-exercise dimension ratings) yield exercise factors

c. Type of assessor (I/O psychologists rate better than managers or supervisors; assessors should receive frame-of-reference [FOR] training)
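
A toy sketch of the scoring distinction above: AEDR average each dimension over exercises, WEDR keep the exercise as the unit, and the OAR collapses everything to one score. The exercises, dimensions, and ratings are hypothetical:

```python
# Hypothetical AC ratings: ratings[exercise][dimension], on a 1-5 scale.
ratings = {
    "in_basket": {"problem_solving": 4, "communication": 3},
    "role_play": {"problem_solving": 3, "communication": 5},
    "group_discussion": {"problem_solving": 2, "communication": 4},
}

dims = {d for ex in ratings.values() for d in ex}

# AEDR: average each dimension across exercises (dimension-level scores).
aedr = {d: sum(ex[d] for ex in ratings.values()) / len(ratings) for d in dims}

# WEDR: keep the exercise as the unit (exercise-level scores).
wedr = {name: sum(ex.values()) / len(ex) for name, ex in ratings.items()}

# OAR: collapse the dimension scores into one overall assessment rating.
oar = sum(aedr.values()) / len(aedr)

print("AEDR:", aedr)          # problem_solving 3.0, communication 4.0
print("WEDR:", wedr)          # in_basket 3.5, role_play 4.0, group_discussion 3.0
print("OAR:", round(oar, 2))  # 3.5
```

The construct-validity concern maps onto this distinction: factor analyses of AC ratings tend to recover exercise factors rather than the intended dimension factors.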

16
Q

Validity, reliability, faking, cost, subgroup diff.

A

a. Fairly reliable
b. Content-related: okay
c. Criterion-related: good; incremental validity over cognitive ability and personality
d. Construct-related: problematic (some possible issues:)
i. Methodological design factors (e.g., number of dimensions), use of espoused versus actual constructs, issues with analytic approaches (specifically post-exercise ratings), and differential activation of traits depending on the demands of a particular exercise
e. Response distortion: weak and nonsignificant relationships
f. Cost: very expensive, but okay ROI
g. Subgroup differences: greater than originally expected

17
Q

How are ACs diff. from work samples?

A

Work samples as stand-alone tests are designed to simulate actual job tasks, whereas AC exercises are designed to represent the general context surrounding the demands of the job

18
Q

Important articles for AC

A

Arthur et al. (2006) - dimensions; Meriac et al. (2008, 2014) - meta-analyses (incremental validity over cognitive ability and personality; factor structure); Arthur et al. (2008) - why ACs don’t work as they should; Dean et al. (2008) - subgroup differences

19
Q

Integrity tests author

A

Berry et al., 2007

20
Q

Integrity tests def.

A

Overt integrity tests - measures of theft attitudes
▪ Beliefs about the frequency and extent of theft, punitiveness toward theft, ruminations about theft, perceived ease of theft, endorsement of common rationalizations for theft, and assessments of one’s own honesty

Covert integrity tests - personality-oriented tests
▪ Include personality items dealing with dependability, conscientiousness, social conformity, thrill seeking, trouble with authority, and hostility

21
Q

Integrity tests descrip.

A

• Construct understanding
o Links to personality variables
o The overall score is nonsignificantly related to cognitive ability
o Links to situational variables

• Validity
o It is difficult to determine validity because CWBs are hard to measure
o The majority of findings support that integrity test scores predict CWBs
o Absenteeism (small relationship)

• Faking
o Huge effect sizes
o Applicants can fake; the question is, do they?
o Are certain questions or tests more fakable?

22
Q

SJTs def.

A

Measurement methods that present respondents with work-related situations and then ask them how they would or should handle the situation
o Considered to be multidimensional measurement methods
o By definition, they are context bound

23
Q

SJT development and scoring

A

o Situation generation / response-option generation
o SMEs identify effective and ineffective options
o Forced-choice scoring (pick the best; pick best and worst; rank order) - see the scoring sketch below
o Alternatively, rate the effectiveness of each option
o Stem complexity
o Fidelity - can be improved with a video simulation
o Response instructions
▪ “Would do” - measures of behavioral tendencies or typical performance
▪ “Should do” - measures of maximum performance
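
A minimal sketch of SME-keyed, forced-choice (“pick best and worst”) scoring; the item options and SME effectiveness values are hypothetical:

```python
# Hypothetical SJT item key: SME-rated effectiveness of each response option (1-7 scale).
sme_key = {"A": 6.2, "B": 2.1, "C": 4.8, "D": 1.5}

def score_best_worst(best_pick, worst_pick, key=sme_key):
    """+1 if the respondent's 'best' matches the SME-best option,
    +1 if their 'worst' matches the SME-worst option (0-2 per item)."""
    best_option = max(key, key=key.get)    # "A"
    worst_option = min(key, key=key.get)   # "D"
    return int(best_pick == best_option) + int(worst_pick == worst_option)

print(score_best_worst("A", "D"))  # 2: matched the key on both picks
print(score_best_worst("C", "B"))  # 0: matched on neither
```

Under effectiveness-rating instructions, the item score would instead reflect agreement between the respondent’s ratings and the SME means.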

24
Q

SJT validity, subgroup diff., cost, future

A

• Sources of SJT validity evidence
o Moderate criterion-related validity (.20 with job performance; McDaniel et al., 2001)
▪ Faking can reduce predictive validity
o Content validity - a key source of validity evidence for SJTs because they are typically built from the job
o Construct validity
▪ Not possible to establish the construct validity of a method per se
▪ SJT scores correlate with personality, cognitive ability, job knowledge, and job experience
• Subgroup differences - lower than for cognitive ability; depends on the constructs measured
• Fairly costly and time consuming to develop
• Reading requirements
• Future: long-term validity, implicit trait policies, and other directions

25
Q

CRT

A
• Conditional reasoning tests (CRTs) ask participants to solve what appear to be inductive reasoning problems.
• The reasoning problems are not designed to measure cognitive ability and are uncorrelated with cognitive ability.
• Designed to assess whether answers based on personality-driven implicit cognitive biases are logically appealing to respondents.
• The solutions are based on logically appealing responses that are conditional on the strength of the respondent’s latent motives (e.g., motive to aggress). In other words, a respondent high on aggression chooses an answer to a reasoning problem because it seems to be the logical response; the answer only seems logical because that is what someone high on aggression would select.
• LeBreton et al. (2007) showed that the purpose of CRTs should not be disclosed during administration: once the purpose is disclosed, participants can fake good/bad on CRTs.
• See Larry James’s CARMA presentation (2008) and Psychometrics notes
26
Q

Contract breach def.

A

a. Definitions
i. Breach - the extent to which a party is perceived to have fallen short in fulfilling its obligations
ii. Violation - the emotional response to what is perceived as a willful failure to honor one’s commitments
1. Context affects whether a breach is interpreted as a violation

27
Q

What is psych. contract breach predictive of?

A

Breach is strongly negatively related to trust, job satisfaction, and organizational commitment, and positively related to turnover intentions (Zhao et al., 2007)

28
Q

How to implement a selection system

A

• Stress communication
o Give people a voice
o Explain issues to them
o Both will help ensure the continued use of your tests after the consulting relationship ends

• Understand and try to handle emotions
o SME panels can be enhanced by including a monitor-evaluator (gauges ideas to keep the team on task) and a facilitator (supports team members in their strengths, helps compensate for weaknesses, and improves team communication)

• When deciding content domain, appeal to higher authority
o The domains should represent the body of knowledge any qualified electrical maintenance mechanic should know, irrespective of their specific plant position

• Present item writing as a difficult task so SMEs will be more accepting of revisions
• A local validation study can enhance organizational commitment to the test
• Important to value reliability and validity while also considering time and money

• Schein’s models of organizational change
o Purchase/expert model – premised on the organization correctly diagnosing the problem; the change agent is the source of the needed commodity
o Doctor-patient model – the psychologist is regarded as an authority who identifies the root causes of the organization’s problems and provides prescriptions for their resolution
▪ Individuals don’t feel listened to under this model
o Process consultation – concerned with client learning a process to solve its own problems in the future; stresses need to compromise on some things.

• Implementing a selection system in practice is a balance between scientific rigor and practicality. Need to figure out what components of implementation you will compromise on and where you will not.

29
Q

The Selection Model (Ployhart, 2011)

A

a. Identify critical tasks and selection KSAOs through a job analysis
i. Affected by idiosyncratic rater differences

b. Defining the latent performance domain and the latent performance predictor
i. Cannot be directly tested

c. Test the relationship between the predictor and criterion
Measurement artifacts
1. Measurement error
a. Lowers the reliability of a measure and thereby attenuates criterion-related validity
2. Range restriction
a. Occurs when the variance in the predictor, criterion, or both is reduced relative to what it would be in the larger population
3. Corrections exist for both artifacts (see the sketch below)
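
The two standard corrections are the attenuation formula, r_c = r_xy / sqrt(r_xx * r_yy), and Thorndike’s Case II correction for direct range restriction on the predictor. A sketch (the example values are arbitrary):

```python
import math

def correct_attenuation(r_xy, r_xx, r_yy):
    # Disattenuates an observed correlation for unreliability in x and y.
    return r_xy / math.sqrt(r_xx * r_yy)

def correct_range_restriction(r, u):
    # Thorndike Case II; u = SD(restricted) / SD(unrestricted) on the predictor.
    return (r / u) / math.sqrt(1 + r**2 * (1 / u**2 - 1))

r_obs = 0.25
print(round(correct_attenuation(r_obs, r_xx=0.80, r_yy=0.60), 2))  # 0.36
print(round(correct_range_restriction(r_obs, u=0.60), 2))          # 0.40
```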

Creating predictor composites

1. Multiple hurdles or top-down selection (see the sketch below)
a. The choice can affect subgroup differences and adverse impact
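
Mechanically, the two strategies differ as follows: top-down ranks all applicants on a weighted composite, while multiple hurdles first requires a minimum score on each predictor and only then ranks the survivors. A toy sketch (the applicant scores, weights, and cutoff are all hypothetical):

```python
# Hypothetical standardized scores on two predictors.
applicants = {
    "A": {"gma": 1.2, "interview": -0.5},
    "B": {"gma": 0.3, "interview": 0.9},
    "C": {"gma": -0.8, "interview": 1.5},
}
weights = {"gma": 0.6, "interview": 0.4}

def composite(scores):
    return sum(weights[p] * s for p, s in scores.items())

# Top-down: rank everyone on the weighted composite.
top_down = sorted(applicants, key=lambda a: composite(applicants[a]), reverse=True)

# Multiple hurdles: clear a cutoff on every predictor, then rank the survivors.
cutoff = -0.5
survivors = [a for a, s in applicants.items() if all(v >= cutoff for v in s.values())]
hurdles = sorted(survivors, key=lambda a: composite(applicants[a]), reverse=True)

print("Top-down order:", top_down)  # ['B', 'A', 'C']
print("Hurdle order:", hurdles)     # ['B', 'A'] - C is screened out on GMA
```

Because the hurdle screens on each predictor separately, moving the cutoff changes who survives and thus the demographic mix, which is why the choice matters for adverse impact.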

Validation methodological issues

  1. Predictive or concurrent designs
  2. Retesting of applicants
30
Q

Personnel Selection

A

Ployhart (2011) compares constructs (homogeneous KSAOs) and methods (which capture a collection of different constructs).

He also points out that we need a reality check: selection is the area where we see the most discrepancy between research and practice/managers’ beliefs. Are our selection tools being used as they should be?

• Read outline!
• Related reminder on Ployhart: in his CARMA talk he points out that we need to be asking the questions managers are asking. He advocates capitalizing on our longitudinal research to ask things like: how long do training effects last? Is there a point at which we need another intervention?
31
Q

König et al., 2010

A

I. Predictive validity was not the main determinant of whether a predictor was used in selection decisions

II. Strongest predictors of use

a. Diffusion (mimicking other organizations) is an important aspect
b. Perceived cost
c. Applicant reactions

Relatedly, Breaugh (2009), in an edited piece, discusses:
• Gerrymandering (improving the outcomes of some applicants at the expense of others; the central focus may be on things other than job relatedness)

32
Q

Predictor retest issues

A

(Schleicher et al., 2010)
• Importance of discussing retesting:
o Encouraged by legal and professional guidelines
o Heightens perceptions of procedural justice, company reputation, acceptance of an offer, and external referrals
o There ARE consistent score improvements, which can considerably impact who is hired
• Results: retesting effects are not homogeneous across types of assessment
o Whites showed greater improvement on written retests (e.g., job knowledge, biodata, and verbal ability)
o Blacks showed significantly more improvement in interviews than Whites
o Applicants under age 40 showed significantly larger score gains on all tests
o Women showed greater score gains on the performance measures
▪ Women may react more positively to and make better use of negative feedback
• Allowing retesting might be most important with novel tests
o Retest gains can also be accompanied by criterion-relevant changes

33
Q

App. reactions author

A

Truxillo & Bauer

34
Q

Why do we care about app. reactions?

A

Hypothesized to be related to legal challenges and perceptions of the organization.