
1
Q

C vs. M Background

A

• Predictors can be represented by WHAT they measure and HOW they are measured
• Predictor construct
o Behavioral domain - e.g., psychological constructs, situational or job-content-based behaviors
• Predictor method
o Method of obtaining information about the behavioral domain of the predictor
o E.g., interviews, paper-and-pencil tests, simulation-based assessments

2
Q

Why do we care about the C vs. M distinction?

A

• Allows us to isolate variance due to predictor constructs from the variance due to predictor methods
o To compare predictor constructs, hold the predictor method constant.
o To compare predictor methods, hold the predictor construct constant.
• Isolation of variance permits meaningful, theoretically & conceptually interpretable comparisons.

• Hello again, Binning and Barrett (1989)
o Highlights the fact that the validation of specified predictors can't be separated from a discussion of what the predictors are designed to measure
o As Binning and Barrett say, we often are “comparing apples to sandwiches to sandwedges”
• This isn't how it is usually done, however! Predictor constructs are OFTEN compared to predictor methods, and it is unclear what such a comparison really represents.
o E.g., what constructs are you measuring in the interview? And how were GMA and conscientiousness measured?
o The value of validation studies will be much higher if info is provided about the constructs assessed by predictor batteries

3
Q

What does the C vs. M distinction impact?

A
  1. Criterion-related validity and incremental validity estimates?
    Predictors are often compared in terms of their relative criterion-related and incremental validity, but most of these studies are actually comparisons of constructs to methods
    • E.g., Criterion related validities of GMA and conscientiousness have been compared to those of interviews, assessment centers, among others.
  2. Techniques for Reducing Subgroup differences?
    Research on reducing subgroup differences suffers from the predictor construct-predictor method confound
    • GMA is often combined with other non-ability predictor constructs (e.g., personality, integrity) to reduce adverse impact (AI); in reality, GMA is often combined with other predictor METHODS
    o Such studies result in widely professed conclusions (e.g., "AI is less of a problem with assessment centers, work samples, interviews, etc. than with GMA")
    o Any method of assessment can display high or low levels of subgroup differences; it just depends on the construct(s) being measured
  3. Applicant Reactions?
    The confound is also a problem in studies of applicant reactions (i.e., are applicants reacting to the constructs or to the methods?). This is especially true of interviews.

4
Q

What can we conclude about C vs. M? Any issues?

A

How should we compare/evaluate predictors?
• Main takeaway: research should be conducted in a manner that recognizes constructs and methods as two distinct factors
• Try to conduct studies that either (1) don't confound the two or (2) provide info on what constructs are being measured by the methods being studied
• Making this distinction should help us understand WHY certain methods work rather than just saying that they work

Issues
• Ambiguity over whether a predictor is a construct or a method (e.g., SJTs)
• Different methods differ in the ease with which they can measure particular constructs
• Related topics to pull in: the AC dimensions vs. exercises debacle; interviews; biodata

5
Q

C vs. M names

A

Arthur & Villado, 2008

Binning & Barrett, 1989
