Criterion Development Flashcards

1
Q

Austin & Villanova (1992): What is the criterion problem?

A

The difficulty of capturing the conceptual criterion with actual criterion measures. This is because the job analysis does not completely define the conceptual criterion, and the actual criterion measures are unreliable and contain measurement error.

Other problems with criteria: They are dynamic, multidimensional, situation-specific, and serve multiple functions, AND people sometimes do the same job in different ways!

2
Q

How can the criterion problem be minimized, according to Austin & Villanova (1992) and an extra source (______), who said:

"Conduct a thorough job analysis; the criterion problem is even worse if the job analysis was poor!"

A

Choose actual criterion measures that are reliable, valid, and contain the least error
Reduce criterion deficiency and contamination as much as possible.

Choose actual criteria that overlap as much as possible with the conceptual/ultimate criterion, but not with each other – avoid redundancy (each should add incremental validity). The goal is to explain as much variance as possible in the criterion without redundant criteria (see the sketch below).
Need criteria that can detect differences among employees (e.g., on a 1-to-5 scale, you can't have everyone receive a 5)

(Binning & Barrett, 1989)
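A minimal sketch of the redundancy point above, using simulated data (all variables, sample sizes, and effect sizes are hypothetical): a second criterion measure that mostly duplicates the first adds little explained variance in the conceptual criterion, while a distinct measure adds more.

```python
# Hypothetical illustration: the incremental validity of a second criterion
# measure depends on how much it overlaps with the first one.
import numpy as np

rng = np.random.default_rng(0)
n = 500
conceptual = rng.normal(size=n)  # stand-in for the unobservable ultimate criterion

c1 = conceptual + rng.normal(scale=0.8, size=n)           # actual criterion 1
c2_redundant = c1 + rng.normal(scale=0.3, size=n)         # mostly duplicates c1
c2_distinct = conceptual + rng.normal(scale=0.8, size=n)  # taps new variance

def r_squared(predictors, y):
    """R^2 from an OLS fit of y on the given predictors (with intercept)."""
    X = np.column_stack([np.ones(len(y))] + predictors)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ beta).var() / y.var()

print(f"c1 alone:          R^2 = {r_squared([c1], conceptual):.3f}")
print(f"c1 + redundant c2: R^2 = {r_squared([c1, c2_redundant], conceptual):.3f}")
print(f"c1 + distinct c2:  R^2 = {r_squared([c1, c2_distinct], conceptual):.3f}")
```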

3
Q

What makes a good criterion? (Ployhart et al., 2005)

A

Relevance
Discriminability (can you differentiate between people, i.e., effective vs. ineffective employees? – see the sketch below)
Reliability (between/within people, over time)
Practicality
Minimized contamination and deficiency
Best practice: the most common source of performance data is ratings, so they should be based on job analyses, and raters should be sufficiently familiar with the demands of the work and should be trained in how to observe and evaluate work and how to rate employees (SIOP Principles)
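A toy check of two of these qualities on made-up 1-to-5 ratings (all numbers hypothetical): mean pairwise correlation between raters as a rough reliability index, and the spread of per-employee means as a discriminability check.

```python
# Hypothetical 1-5 performance ratings: rows = 3 raters, cols = 8 employees.
import numpy as np

ratings = np.array([
    [5, 3, 4, 2, 5, 1, 3, 4],
    [4, 3, 5, 2, 4, 2, 3, 4],
    [5, 2, 4, 1, 5, 2, 4, 5],
])

# Reliability: average pairwise correlation between raters' rating profiles.
r = np.corrcoef(ratings)
print(f"mean inter-rater r: {r[np.triu_indices_from(r, k=1)].mean():.2f}")

# Discriminability: per-employee means should actually spread people out;
# an SD near 0 means everyone gets the same score (e.g., all 5s).
means = ratings.mean(axis=0)
print(f"SD of employee means: {means.std():.2f}")
```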

4
Q

Typical vs. maximal performance (Sackett et al., 1988)

A

The paper's Table 1 shows that speed and accuracy are not highly correlated: the typical performance measures of speed and accuracy correlated .17 and -.02 in the new-hire and current-employee samples, respectively (Sackett et al., 1988).

AND

Typical and maximal performance correlated only between .16 and .36.
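One common reading of these low correlations is the "can do vs. will do" distinction: maximal performance chiefly reflects ability, while typical performance also depends on day-to-day motivation. A simulation sketch of that account (all parameters hypothetical, not taken from the paper) lands in the same modest range:

```python
# Hypothetical model: maximal performance ~ ability; typical performance
# ~ ability + motivation. Independent motivation keeps the correlation low.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
ability = rng.normal(size=n)
motivation = rng.normal(size=n)  # independent of ability in this sketch

maximal = ability + rng.normal(scale=0.5, size=n)
typical = 0.5 * ability + motivation + rng.normal(scale=0.5, size=n)

print(f"r(typical, maximal) = {np.corrcoef(typical, maximal)[0, 1]:.2f}")
```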

5
Q

Mesmer-Magnus & Viswesvaran (2007), meta-analysis: "Inducing Maximal Versus Typical Learning Through the Provision of a Pretraining Goal Orientation"

A

-Do pretraining goals induce trainees to maximize learning efforts in training? Does the type of goal matter?

-Results suggest pretraining goals (regardless of type) yield higher performance on posttraining cognitive, skill, and affective learning assessments.

-BUT: performance-oriented goals facilitated better performance on measures of declarative knowledge, whereas mastery-oriented goals yielded greater learning at higher levels of cognitive learning and for all levels of skill-based learning.

-Further, mastery-oriented goals fostered greater posttraining self-efficacy, more positive attitudes toward training, and better intentions to transfer training material than performance-oriented goals and no-goal conditions.

-Trainers may use goals to inspire superior mastery/learning of training content (mastery/learning-oriented goals) and/or better performance on posttraining assessments of learning (performance-oriented goals).

6
Q

Sackett & Lievens (2008)

A

Studies have examined whether predictors of job performance differ across job stages. The transitional job stage, where there is a need to learn new things, is typically contrasted with the more routine maintenance job stage (Murphy, 1989).

7
Q

Stewart (1999)

A

Stewart (1999) showed that the dependability aspects of the Conscientiousness factor (e.g., self-discipline) were related to job performance at the transitional stage, whereas the volitional facets of Conscientiousness (e.g., achievement motivation) were linked to job performance at the maintenance stage

8
Q

Viswesvaran et al. (2005) – general factor underlying performance

A

After controlling for halo error and 3 other sources of measurement error, there remained a general factor in job performance ratings at the construct level, accounting for 60% of total variance.

Viswesvaran et al. (2005) provided useful insights into this issue by meta-analytically comparing correlations between performance dimensions made by differing raters with those made by the same rater. Although ratings by the same rater were higher (mean interdimension r = 0.72) than those from different raters (mean r = 0.54), a strong general factor was found in both. Thus, the finding of a strong general factor is not an artifact of rater-specific halo.
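A minimal sketch of what "a general factor accounting for ~60% of variance" means, using a hypothetical interdimension correlation matrix and the first principal component as a rough stand-in for the paper's meta-analytic factor analysis:

```python
# Hypothetical correlations among 4 performance dimensions (same rater).
import numpy as np

R = np.array([
    [1.00, 0.60, 0.55, 0.50],
    [0.60, 1.00, 0.58, 0.52],
    [0.55, 0.58, 1.00, 0.54],
    [0.50, 0.52, 0.54, 1.00],
])

# Share of total variance carried by the largest eigenvalue (general factor).
eigvals = np.linalg.eigvalsh(R)[::-1]  # descending order
print(f"general factor: {eigvals[0] / eigvals.sum():.0%} of total variance")
```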

9
Q

Thorndike (1920)

A

Coined the term halo error and defined it as an overall impression of the person as generally good or bad, which then influences the rater's ratings of nearly all the individual's attributes, even when the rater has enough information to make independent judgments on those attributes.
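A toy simulation of the mechanism Thorndike described (values hypothetical): when a rater's single overall impression leaks into every dimension rating, two genuinely independent attributes end up substantially correlated in the ratings.

```python
# Two independent true attributes plus a shared "overall impression" term.
import numpy as np

rng = np.random.default_rng(2)
n = 2000
true_a = rng.normal(size=n)      # e.g., true dependability
true_b = rng.normal(size=n)      # e.g., true communication skill
impression = rng.normal(size=n)  # rater's global good/bad impression

rated_a = true_a + impression    # halo leaks into both ratings
rated_b = true_b + impression

print(f"r(true attributes)  = {np.corrcoef(true_a, true_b)[0, 1]:.2f}")
print(f"r(rated attributes) = {np.corrcoef(rated_a, rated_b)[0, 1]:.2f}")
```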

10
Q

Austin & Villanova (1992)

A

Objective measures: sales figures, units produced, absenteeism, presenteeism, tardiness, turnover, customer complaints, theft

Subjective measures: based on judgments (e.g., supervisor ratings of performance). Potential problems: ratings could be biased, or influenced by wanting to be liked by employees, avoiding confrontation, not wanting to lose a good employee to a promotion, etc. Self-ratings can be inflated.

11
Q

Borman & Motowidlo (1993)

SEMINAL

A

What does it mean to expand the criterion domain?

-Including contextual performance

12
Q

Organ et al. (2011)

A

Argue that it would make sense to select job candidates who have a propensity to exhibit OCBs.

-Evidence that OCB info is used in performance ratings, that OCBs are related to job performance indicators, and that OCBs are related to unit/org performance indicators

-How to select for people who will be likely to exhibit OCBs? Personality (esp. conscientiousness); interviews (encouraging initial evidence, but more needed); SJT items may be fairly good predictors of OCB-like behavior

13
Q

Future directions

A
Adaptive Behavior (Pulakos et al., 2000)
Important because jobs can change quickly and because of the increased use of technology in jobs

Org-Level Criteria
Given need to show value of IO, it may be worthwhile to move beyond individual-level criteria to also consider whether a selection procedure or training program improves the overall productivity of work units and orgs as a whole
(Aguinis & Kraiger, 2009).

Identifying Performance Criteria in Jobs that are Constantly Evolving
Similar to the challenge facing job analysis as single, stable "jobs" decline
Work now requires a broader range of skills

14
Q

Thornton & Gibbons (2009)

A

ACs trace back to WWII, and then to the 1950s, when AT&T first used them.

The validity of ACs seems to be a large issue. Keep in mind, Thornton & Gibbons (2009) point out that when ACs are used to aid selection decisions, the inference of interest is that the overall assessment rating predicts future performance in some specified job or set of jobs.

Keep in mind that because an AC is a method, you can't just average criterion-related validity coefficients across studies and get an estimate of a 'true validity coefficient' – you are just looking at the average across ACs. They are likely measuring different constructs (though with some overlap), so it makes no sense to think about a 'true relationship'.
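For contrast, here is the mechanical average the passage warns about – a sample-size-weighted mean of validity coefficients in the bare-bones meta-analysis style (the coefficients and Ns below are hypothetical). For a method like the AC, this number only describes the average AC study, not a single 'true' construct-level relationship:

```python
# Sample-size-weighted mean validity across hypothetical AC studies.
import numpy as np

r_values = np.array([0.25, 0.40, 0.31, 0.18])  # hypothetical validity coefficients
n_values = np.array([120, 340, 95, 210])       # hypothetical study sample sizes

r_bar = (n_values * r_values).sum() / n_values.sum()
print(f"weighted mean r = {r_bar:.2f}")  # describes the average AC, nothing more
```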

Overview: ACs use multiple assessment techniques, standardized methods of making inferences from those techniques, and pooled judgments of multiple assessors in rating each candidate's behavior. Thornton & Gibbons (2009) further specify that an AC involves a 'unique combination of essential elements' codified in the Guidelines and Ethical Considerations for Assessment Center Operations, developed by the International Task Force (2008) (see the full description of the elements in the paper). The objective of an AC is to provide an overall evaluation of a candidate's ability to be successful in the future in a new assignment. Ratings are typically combined into a single overall assessment rating (OAR) for use in decision-making.

15
Q

Arthur et al. (2003)

A

Arthur et al. are really arguing against overall AC scores – saying this would be like giving an 'overall personality score' (nonsensical). They argue that we need to focus more on which construct we are measuring with each exercise, and combine scores across same-construct exercises rather than same-method exercises. In their meta-analysis, they are able to categorize AC ratings along 6 dimensions (problem solving, influencing others, organizing and planning, communication, drive, and consideration/awareness of others), with criterion-related validity reported for each dimension – see the sketch below.
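A sketch of the scoring contrast Arthur et al. draw, on hypothetical ratings: averaging within each construct across exercises keeps a dimension profile that a single overall assessment rating (OAR) collapses away.

```python
# Rows = AC exercises, cols = constructs rated in each exercise
# (exercise and construct labels are illustrative).
import numpy as np

ratings = np.array([
    [4.0, 3.0],  # in-basket:        problem solving, influencing others
    [3.5, 4.5],  # group discussion: problem solving, influencing others
    [4.5, 3.5],  # role-play:        problem solving, influencing others
])

dimension_scores = ratings.mean(axis=0)  # same-construct, cross-exercise scores
oar = ratings.mean()                     # single overall score loses the profile

print(f"dimension scores: {np.round(dimension_scores, 2)}")
print(f"OAR:              {oar:.2f}")
```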
