Exam 1 Review Flashcards

1
Q

Munsterberg, Taylor, and the Gilbreths were the first to focus on which aspect of I/O Psychology?

A

A focus on the “I” side; concerned with personnel issues.

2
Q

What is the significance of the Hawthorne Studies?

A

Shifted interest towards the “O” side and became concerned with the work environment.

3
Q

Describe the Hawthorne Studies.

A

Workers increased production regardless of illumination levels.

4
Q

What is the Hawthorne Effect?

A

Change in behavior due to novel treatment, attention, etc.

5
Q

True or False: In practice, “I” and “O” overlap considerably.

A

True

6
Q

Explain the components of Experimental Strategies.

A
  • True experiments have several defining characteristics:
    • Manipulation of an independent variable.
    • Control over confounding variables.
    • Random assignment to conditions.
  • Without these, our ability to determine causality is hindered.
    • We want to be able to say that changes in the IV (and nothing else) caused changes in the DV.
7
Q

Why is experimentation rare in I/O Psychology?

A
  • Difficult to randomly assign workers to conditions, e.g., to other jobs or other procedures
  • Employers fear loss of productivity or disruption to their operations
  • Can be costly, time-consuming, etc.
  • Quasi-experiments may be acceptable
    • Like a true experiment, but lacking a defining element
      • ex: maybe no random assignment, or maybe IV not under the experimenter’s control
  • Lab experiments may be acceptable
    • Issues of generalizability from the lab to the field become a consideration
8
Q

Explain the components of Correlational Strategies.

A

Examining relationships between variables.

  1. Correlational strategies include surveys, interviews, naturalistic observation, questionnaires.
9
Q

What is the Correlation Coefficient?

A

The statistic that describes the relationship between two variables.

  • Related = magnitude and direction
    1. Variables can be related to each other, strongly, weakly, or anywhere in between
    2. Variables can be positively related, negatively related, or unrelated to one another
  • Positive correlation: high scores on one variable are associated with high scores on the other
  • Negative correlation: high scores on one variable are associated with low scores on the other

Correlation coefficient ranges between -1.00 and +1.00

Correlation only indicates that two variables are associated in some way

Causality is unknown with correlations

The correlation coefficient is the basis for estimating both reliability and validity of measures
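To make the definition concrete, here is a minimal Python sketch of the Pearson correlation coefficient; the test-score and performance data are made up for the example:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: selection-test scores and later performance ratings
test = [50, 60, 70, 80, 90]
perf = [2.0, 3.0, 3.5, 4.5, 5.0]

r = pearson_r(test, perf)  # strongly positive, close to +1.00
```

The result always falls between -1.00 and +1.00; the sign gives the direction of the relationship and the absolute value gives its magnitude.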

10
Q

What are the three ways to explain the correlation between two variables?

A
  1. The first variable caused the second
  2. The second variable caused the first
    • the problem of directionality
  3. An unmeasured third variable is responsible for the relationship between the other two
    • the third-variable problem
11
Q

Define Reliability.

A

The consistency of a measure; how well scores on the same subject are replicable across repeated measurements of the same variable.

  • A person’s observed score equals the true score plus a component of error
    1. The reliability of a test is the ratio of true-score variance divided by observed-score variance
    2. Whatever portion is left over is due to error variance
  • The way to obtain that ratio of true variance to measured variance is by computing a correlation coefficient, here called a reliability coefficient
    • A reliability of .70 means that 30% of the observed-score variance is error variance
      • crummy items, misinterpretations, faking good by the respondents, etc.
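The true-score/error decomposition can be shown with a toy calculation (the variance values below are invented for illustration):

```python
# Classical test theory: observed score = true score + error,
# so (with independent errors) observed variance = true + error variance.
true_var = 70.0     # hypothetical true-score variance
error_var = 30.0    # hypothetical error variance
observed_var = true_var + error_var

reliability = true_var / observed_var   # 70 / 100 = .70
error_share = 1 - reliability           # .30 -> 30% of variance is error
```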
12
Q

What are the three types of Reliability?

A
  1. Test-Retest: same individuals, same test, two separate testing sessions
    • consistency of responses over time
  2. Internal Consistency: consistency of multiple items in a single measuring instrument
    • correlate responses to every item with responses to every other item
    • Coefficient Alpha is based on the average of those intercorrelations (the average inter-item correlation)
  3. Inter-Rater Reliability: consistency of ratings made by independent scorers

A test must be reliable before it can be valid
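For internal consistency, one common formulation (the standardized alpha) steps the average inter-item correlation up by the Spearman-Brown formula; the item count and average correlation below are hypothetical:

```python
def standardized_alpha(avg_r, k):
    """Standardized Cronbach's alpha for k items whose responses
    intercorrelate avg_r on average (Spearman-Brown prophecy form)."""
    return (k * avg_r) / (1 + (k - 1) * avg_r)

# Hypothetical: a 10-item scale with an average inter-item r of .30
alpha = standardized_alpha(0.30, 10)  # about .81
```

Adding more items of the same average quality raises alpha, which is one reason longer scales tend to be more reliable.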

13
Q

What is Face Validity?

A
  1. Do the items on this test or the procedures we use look like they measure the domain of interest?
    • “Public-relations” validity: test should look like it measures what it claims to measure
14
Q

What is Criterion Validity?

A

The correlation between the predictor (the test score) and the criterion (the performance, behavior, or other test score that you want to predict) is the basis of criterion validity

The square of any validity coefficient gives the percentage of variance in the criterion accounted for by the predictor (and vice versa)

r = .50 means that 25% of the variance in the criterion is accounted for by the predictor

25% of variability in the criterion can be predicted by knowing scores of the predictor variable

15
Q

Explain Multiple Regression.

A

Combining predictors to account for the maximum amount of variability in a criterion.

  • Determining the optimal weighted combination of predictors to account for the most variability in a criterion
    • a set of correlations between the predictors and the criterion
    • the unique contribution made by each predictor is taken into account, along with the correlations among the predictors themselves
      • each predictor should contribute independently of the others
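A minimal sketch of the idea with NumPy's least-squares solver; the predictor and criterion values are fabricated for the example:

```python
import numpy as np

# Hypothetical data for 6 employees: an intercept column, a cognitive-test
# score, and an interview rating (the two predictors).
X = np.array([
    [1, 50, 3.0],
    [1, 60, 2.5],
    [1, 55, 4.0],
    [1, 70, 3.5],
    [1, 65, 4.5],
    [1, 80, 4.0],
], dtype=float)
y = np.array([2.8, 2.9, 3.6, 3.8, 4.2, 4.6])  # criterion: performance

# Least squares finds the optimal weights; each predictor's weight
# reflects its unique contribution, adjusting for the correlations
# among the predictors themselves.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

pred = X @ beta
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)  # R-squared
```

R-squared here plays the same role as the squared validity coefficient: the proportion of criterion variance the weighted combination accounts for.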
16
Q

What is Job Analysis?

A
  • Comprehensive description of a job and the attributes needed to perform it.
    • Used for career development, selection and training, performance appraisal, and as a hedge against legal challenges
  • Information about jobs comes from many different sources
    • Job analysts, job incumbents, supervisors, and observers all contribute to defining aspects of a job
17
Q

What are the two approaches to job analysis?

A
  1. Job-oriented approach:
    • examining the nature of the tasks to be performed
  2. Person-oriented approach:
    • examining characteristics of a person necessary to perform the job
    • KSAOs:
      • Knowledge, skills, abilities, and other personal characteristics

O*NET shows the multifaceted nature of most jobs

18
Q

Define Performance Appraisal.

A

The process of providing feedback to employees regarding their performance

A proper job analysis identifies the necessary elements of a job; this aids the appraisal process

  • Performance appraisal should involve matching employee performance against standards or criteria
19
Q

Describe Theoretical and Actual Criterion.

A
  1. Theoretical: the concept of what acceptable performance should be
  2. Actual: the way in which the theoretical criterion is assessed
20
Q

Describe Criterion Relevance, Contamination, and Deficiency.

A
  1. Relevance: the extent to which the actual and theoretical criteria overlap
  2. Contamination: actual criterion measures something other than the theoretical criterion
  3. Deficiency: elements of the theoretical criterion not tapped by the actual criterion
21
Q

What are some examples of Objective Performance Indicators?

A

Production records, absenteeism, sales, turnover rate, number of accidents, etc.

22
Q

Who uses Subjective Performance Indicators and what are some examples?

A

Managers and supervisors making performance appraisals.

Graphic rating forms (GRF), mixed-standard scales (MSS), behaviorally anchored rating scales (BARS), and behavioral observation scales (BOS)

23
Q

What are some shortcomings when using Subjective Ratings?

A
  1. Negativity effect: negative information weighted more heavily than positive in evaluations
  2. Halo effect: strong performance in one area inflates ratings of other, weaker areas
  3. Mood effects: positive moods lead to favorable judgments; negative moods are the opposite
  4. Attractiveness: a physically attractive employee may be rated more favorably
24
Q

What are the two types of criterion validity?

A
  1. Predictive Validity: predict future performance
    • predictor is administered now; criterion is measured later
  2. Concurrent Validity: predict present performance
    • both predictor and criterion are given at about the same time
25
Q

What are the testing options for criterion validity?

A
  1. Group vs. individual administration
  2. Objective vs. open-ended format
  3. Speed vs. power test
  4. Written vs. performance test
26
Q

What are the common predictors and typical measures?

A
  1. Interest Inventories: self-directed search, Strong Vocational Interest Blank, Jackson Vocational Interest Survey
  2. Cognitive Abilities: Wechsler Adult Intelligence Scale, Wonderlic Personnel Test, tests of specific abilities
  3. Personality Tests: Self-reports (MBTI, CPI, MMPI) vs. projective techniques (TAT)
  4. Skill and Ability Tests: Psychomotor skills (Crawford Small Parts Dexterity Test), mechanical aptitude
  5. Integrity, Lie Detection, Drug Testing: controversial issues surround these methods, especially for selection
  6. Interviews and biographical information: structured vs. unstructured interviews / objective vs. subjective background info
  7. Work samples and assessment centers: sample of actual job to be performed / in-basket, leaderless group activities