Mid-Term Study Guide Flashcards

1
Q

Industrial-organizational (I-O) psychology

A

The application of psychological principles, theory, and research to the work setting.

2
Q

Society for Industrial and Organizational Psychology (SIOP)

A

An association to which many I-O psychologists, both practitioners and researchers, belong. Designated as Division 14 of the American Psychological Association (APA)

3
Q

Personnel psychology

A

Field of psychology that addresses issues such as recruitment, selection, training, performance appraisal, promotion, transfer, and termination

4
Q

Human Resources Management (HRM)

A

Practices such as recruitment, selection, retention, training, and development of people (human resources) in order to achieve individual and organizational goals.

5
Q

Organizational psychology

A

Field of psychology that combines research from social psychology and organizational behavior and addresses the emotional and motivational side of work.

6
Q

Human engineering or human factors psychology

A

the study of the capacities and limitations of humans with respect to a particular environment

7
Q

Scientist-practitioner model

A

A model that uses scientific tools and research in the practice of I-O psychology

8
Q

TIP (The Industrial-Organizational Psychologist)

A

Quarterly newsletter published by the Society for Industrial and Organizational Psychology; provides I-O psychologists and those interested in I-O psychology with the latest relevant information about the field

9
Q

Telecommuting

A

Accomplishing work tasks from a distant location using electronic communication media

10
Q

Virtual team

A

Team that has widely dispersed members working together towards a common goal and linked through computers and other technology

11
Q

Title VII of Civil Rights Act of 1964

A

Federal legislation that prohibits employment discrimination on the basis of race, color, religion, sex, or national origin, which defines what are known as protected groups. Prohibits not only intentional discrimination but also practices that have the unintentional effect of discriminating against individuals because of their race, color, national origin, religion, or sex

12
Q

American Psychological Association (APA)

A

The major professional organization for psychologists of all kinds in the United States.

13
Q

Experimental design

A

Participants are randomly assigned to different conditions

14
Q

Quasi-experimental design

A

Participants are assigned to different conditions, but random assignment to conditions is not possible

15
Q

Nonexperimental design

A

Does not include any “treatment” or assignment to different conditions.

16
Q

Observation design

A

The researcher observes employee behavior and systematically records what is observed

17
Q

Survey design

A

Research strategy in which participants are asked to complete a questionnaire or survey

18
Q

Quantitative methods

A

Rely on tests, rating scales, questionnaires, and physiological measures and yield numerical results

19
Q

Qualitative methods

A

Rely on observations, interviews, case studies, and analysis of diaries or written documents and produce flow diagrams and narrative descriptions of events or processes

20
Q

Triangulation

A

Approach in which researchers seek converging information from different sources

21
Q

Experimental control

A

Characteristic of research in which possible confounding influences that might make results less reliable or harder to interpret are eliminated; often easier to establish in laboratory studies than in field studies

22
Q

Statistical control

A

Using statistical techniques to control for the influence of certain variables. Such control allows researchers to concentrate exclusively on the primary relationships of interest.

23
Q

Descriptive statistics

A

Statistics that summarize, organize, and describe a sample of data

24
Q

Measure of Central Tendency

A

Statistic that indicates where the center of a distribution is located. Mean, median, and mode are measures of central tendency

25
Variability
The extent to which scores in a distribution vary
26
Skew
The extent to which scores in a distribution are lopsided or tend to fall on the left or right side of the distribution
27
mean
The arithmetic average of the scores in a distribution: obtained by summing all of the scores in a distribution and dividing by the sample size
28
Mode
The most common or frequently occurring score in a distribution
29
Median
The middle score in the distribution
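The three measures of central tendency above (mean, mode, and median) can be computed with Python's standard `statistics` module — a minimal sketch using made-up scores:

```python
import statistics

# Hypothetical distribution of scores (made-up data)
scores = [70, 75, 75, 80, 85, 90, 95]

mean = statistics.mean(scores)      # sum of all scores divided by the sample size
median = statistics.median(scores)  # the middle score of the sorted distribution
mode = statistics.mode(scores)      # the most frequently occurring score

print(mean, median, mode)
```

With these scores the median is 80 (the 4th of 7 sorted values) and the mode is 75 (it appears twice).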
30
Inferential statistics
Statistics used to aid the researcher in testing hypotheses and making inferences from sample data to a larger sample or population
31
Statistical Significance
Indicates that the probability of the observed statistic is less than the stated significance level adopted by the researcher (commonly p<.05). A statistically significant finding indicates that the results found are unlikely to have occurred by chance, and thus the null hypothesis (hypothesis of no effect) is rejected
32
Statistical power
the likelihood of finding a statistically significant difference when a true difference exists
33
Correlation coefficient
Statistic assessing the bivariate, linear association between two variables; provides information about both the magnitude (numerical value) and the direction (+ or -) of the relationship between the two variables
34
Regression line
Straight line that best “fits” the scatter plot and describes the relationship between the variables in the graph; can also be presented as an equation that specifies where the line intersects the vertical axis and what the angle or slope of the line is.
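Both the correlation coefficient and the regression line can be computed directly from their defining formulas — a minimal sketch using made-up predictor (test) and criterion (performance) scores:

```python
import math

# Made-up predictor (test) and criterion (performance) scores
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 2.9, 4.2, 4.8, 6.1]

n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n

# Pearson r: co-deviation of x and y divided by the product of their spreads
cov = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
var_x = sum((xi - mean_x) ** 2 for xi in x)
var_y = sum((yi - mean_y) ** 2 for yi in y)
r = cov / math.sqrt(var_x * var_y)

# Least-squares regression line: y = intercept + slope * x
slope = cov / var_x
intercept = mean_y - slope * mean_x

print(f"r = {r:.3f}, y = {intercept:.2f} + {slope:.2f}x")
```

The slope and intercept are the two quantities the card describes: where the line crosses the vertical axis and how steeply it rises. The sign of `slope` always matches the sign of `r`.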
35
Linear
Relationship between two variables that can be depicted by a straight line
36
Nonlinear
Relationship between two variables that cannot be depicted by a straight line; sometimes called “curvilinear” and most easily identified by examining a scatter plot
37
Meta-analysis
statistical method for combining and analyzing the results from many studies to draw a general conclusion about relationships among variables.
38
Reliability
consistency or stability of a measure
39
Validity
The accuracy of inferences made based on test or performance data; also addresses whether a measure accurately and completely represents what was intended to be measured.
40
Test-retest reliability
A type of reliability calculated by correlating measurements taken at time 1 with measurements taken at time 2
41
Equivalent forms reliability
a type of reliability calculated by correlating measurements from a sample of individuals who complete two different forms of the same test
42
Internal consistency
form of reliability that assesses how consistently the items of a test measure a single construct; affected by the number of items in the test and the correlations among the test items
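One widely used internal-consistency estimate (not named on the card) is Cronbach's alpha, which rises with the number of items and with the inter-item correlations, as the definition notes — a minimal sketch with made-up item responses:

```python
import statistics

# Made-up responses: rows are respondents, columns are the k test items
responses = [
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 3, 3, 2],
    [4, 4, 5, 4],
]

k = len(responses[0])
# Population variance of each item (column) and of the total score (row sums)
item_vars = [statistics.pvariance([row[i] for row in responses]) for i in range(k)]
total_var = statistics.pvariance([sum(row) for row in responses])

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total-score variance)
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(f"alpha = {alpha:.3f}")
```

When items covary strongly (as in this made-up data), the total-score variance is much larger than the sum of the item variances, so alpha approaches 1.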
43
Generalizability Theory
A sophisticated approach to the question of reliability that simultaneously considers all types of error in reliability estimates (e.g., test-retest, equivalent forms, and internal consistency).
44
Predictor
The test chosen to assess attributes identified as important for successful job performance
45
Criterion
An outcome variable that describes important aspects or demands of the job; the variable that we predict when evaluating the validity of a predictor
46
Criterion-related validity
Validity approach that is demonstrated by correlating a test score with a performance measure; improves researcher’s confidence in the inference that people with higher test scores have higher performance
47
Validity coefficient
correlation coefficient between a test score (predictor) and a performance measure (criterion)
48
Predictive validity design
Criterion-related validity design in which there is a time lag between collection of the test data and the criterion data
49
Concurrent validity design
Criterion-related validity design in which there is no time lag between collection of the test data and the criterion data; both are collected at approximately the same time
50
Content-related validation design
A design that demonstrates that the content of the selection procedure represents an adequate sample of important work behaviors and activities and/or worker KSAOs defined by the job analysis
51
Construct validity
Validity approach in which investigators gather evidence to support decisions or inferences about psychological constructs; often begins with investigators demonstrating that a test designed to measure a particular construct correlates with other tests in the predicted manner
52
Construct
psychological concept or characteristic that a predictor is intended to measure; examples are intelligence, personality, and leadership
53
Individual differences
Dissimilarities between or among two or more people
54
Psychometrics
Practice of measuring a characteristic such as mental ability, placing it on a scale or metric
55
Intelligence test
Instrument designed to measure the ability to reason, learn, and solve problems
56
Psychometrician
psychologist trained in measuring characteristics such as mental ability
57
Cognitive ability
Capacity to reason, plan, and solve problems; mental ability
58
“g”
Abbreviation for general mental ability
59
Personality
An individual’s behavioral and emotional characteristics, generally found to be stable over time and in a variety of circumstances; an individual’s habitual way of responding
60
Americans with Disabilities Act
Federal legislation enacted in 1990 requiring employers to give applicants and employees with disabilities the same consideration as other applicants and employees, and to make certain adaptations in the work environment to accommodate disabilities.
61
Big 5
A taxonomy of five personality factors; the Five-Factor Model (FFM)
62
Five-Factor Model (FFM)
A taxonomy of five personality factors, composed of conscientiousness, extraversion, agreeableness, emotional stability, and openness to experience
63
Integrity
Quality of being honest, reliable, and ethical
64
O*NET
Collection of electronic databases, based on well-developed taxonomies, that has updated and replaced the Dictionary of Occupational Titles (DOT)
65
Procedural knowledge
Familiarity with a procedure or process; knowing “how”
66
Declarative knowledge
Understanding what is required to perform a task; knowing information about a job or job task
67
Test battery
Collection of tests that usually assess a variety of different attributes
68
Bias
Technical and statistical term that deals exclusively with a situation where a given test results in errors of prediction for a subgroup
69
Fairness
Value judgment about actions or decisions based on test scores
70
Screen-out test
A test used to eliminate candidates who are clearly unsuitable for employment; tests of psychopathology are examples of screen-out tests in the employment setting
71
Screen-in test
A test used to add information about the positive attributes of a candidate that might predict outstanding performance; tests of normal personality are examples of screen-in tests in the employment setting
72
Self- presentation
A person’s public face or “game face”
73
Emotional intelligence (EI)
A proposed kind of intelligence focused on people’s awareness of their own and others’ emotions
74
Structured interview
Assessment procedure that consists of very specific questions asked of each candidate; includes tightly crafted scoring schemes with detailed outlines for the interviewer with respect to assigning ratings or scores based on interview performance
75
Situational interview
An assessment procedure in which the interviewee is asked to describe in specific and behavioral detail how he or she would respond to a hypothetical situation
76
Unstructured interview
An interview format that includes questions that may vary by candidate and that allows the candidate to answer in any form he or she prefers
77
Assessment center
Collection of procedures for evaluation that is administered to groups of individuals; assessments are typically performed by multiple assessors.
78
Work sample test
Assessment procedure that measures job skills by taking samples of behavior under realistic job-like conditions
79
Situational judgment test
Commonly a paper-and-pencil test that presents the candidate with a written scenario and asks the candidate to choose the best response from a series of alternatives
80
Incremental validity
The value in terms of increased validity of adding a particular predictor to an existing selection system
81
Biodata
information collected on an application blank or in a standardized test that includes questions about previous jobs, education, specialized training, and personal history; also known as biographical data
82
Computer Adaptive Testing (CAT)
A type of testing that presents a test taker with a few items that cover the range of difficulty of the test, identifies a test taker’s approximate level of ability, and then asks only questions to further refine the test taker’s position within that ability level
83
Objective performance measure
Usually a quantitative count of the results of work, such as sales volume, complaint letters, and output
84
Judgmental performance measure
Evaluation made of the effectiveness of an individual’s work behavior; judgment most often made by supervisors in the context of a performance evaluation
85
Performance management
System that emphasizes the link between individual behavior and organizational strategies and goals by defining performance in the context of those goals; jointly developed by managers and the people who report to them
86
Performance
Actions or behaviors relevant to the organization’s goals; measured in terms of each individual’s proficiency
87
Effectiveness
Evaluation of the results of performance; often controlled by factors beyond the actions of an individual
88
Productivity
ratio of effectiveness (output) to the cost of achieving that level of effectiveness (input)
89
Declarative knowledge (DK)
Understanding what is required to perform a task; knowing information about a job or job task
90
Procedural knowledge and skill (PKS)
knowing how to perform a job or task; often developed through practice and experience
91
Motivation (M)
Concerns the conditions responsible for variations in the intensity, persistence, quality, and direction of ongoing behavior
92
Determinants of performance
Basic building blocks or causes of performance, which are declarative knowledge, procedural knowledge, and motivation
93
Performance components
Components that may appear in different jobs and result from the determinants of performance; John Campbell and colleagues identified eight performance components, some or all of which can be found in every job
94
Criterion deficiency
A situation that occurs when an actual criterion is missing information that is part of the behavior one is trying to measure
95
Criterion contamination
a situation that occurs when an actual criterion includes information unrelated to the behavior one is trying to measure
96
Ultimate criterion
ideal measure of all the relevant aspects of job performance
97
Actual criterion
actual measure of job performance obtained
98
Adaptive performance
performance component that includes flexibility and the ability to adapt to changing circumstances
99
Expert performance
Performance exhibited by those who have been practicing for at least 10 years and have spent an average of four hours per day in deliberate practice
100
Personnel measure
Measure typically kept in a personnel file, including absences, accidents, tardiness, rate of advancement, disciplinary actions, and commendations of meritorious behavior
101
Task-oriented job analysis
Approach that begins with a statement of the actual tasks as well as what is accomplished by those tasks
102
Worker-oriented job analysis
Approach that focuses on the attributes of the worker necessary to accomplish the tasks
103
KSAOs
Individual attributes of knowledge, skills, abilities, and other characteristics that are required to successfully perform job tasks
104
Subject matter expert (SME)
Employee (incumbent) who provides information about a job in a job analysis interview or survey
105
Critical incident technique
Approach in which subject matter experts are asked to identify critical aspects of behavior or performance in a particular job that led to success or failure
106
Cognitive task analysis
A process that consists of methods for decomposing job and task performance into discrete, measurable units, with special emphasis on eliciting mental processes and knowledge content
107
Think-aloud protocol
Approach used by cognitive psychologists to investigate the thought processes of experts who achieve high levels of performance; an expert performer describes in words the thought process that he or she uses to accomplish a task
108
Job evaluation
method for making internal pay decisions by comparing job titles to one another and determining their relative merit by way of these comparisons
109
Compensable factors
Factors in a job evaluation system that are given points that are later linked to compensation for various jobs within the organization; factors usually include skills, responsibility, effort, and working conditions.
110
Comparable worth
notion that people who are performing jobs of comparable worth to the organization should receive comparable pay
111
Equal Pay Act of 1963
Federal legislation that prohibits discrimination on the basis of sex in the payment of wages or benefits, where men and women perform work of similar skill, effort, and responsibility for the same employer under similar working conditions
112
Critical incidents
Examples of behavior that appear “critical” in determining whether performance would be good, average, or poor in specific performance areas
113
Graphic rating scale
Graphic display of performance scores that runs from high on one end to low on the other end
114
Checklist
List of behaviors presented to a rater, who places a check next to each of the items that best (or least) describe the ratee
115
Weighted checklist
A checklist that includes items that have values or weights assigned to them that are derived from the expert judgments of incumbents and supervisors of the position in question
116
Behaviorally anchored rating scales (BARS)
Rating format that includes behavioral anchors describing what a worker has done, or might be expected to do, in a particular duty area
117
Behavioral observation scale (BOS)
Format that asks the rater to consider how frequently an employee has been seen to act in a particular way
118
Employee comparison methods
form of evaluation that involves the direct comparison of one person with another
119
360-degree feedback
Process of collecting and providing a manager or executive with feedback from many sources, including supervisors, peers, subordinates, customers, and suppliers.
120
Central tendency error
Error in which raters choose a middle point on the scale to describe performance, even though a more extreme point might better describe the employee
121
Leniency error
error that occurs with raters who are unusually easy in their rating
122
Severity error
Error that occurs with raters who are unusually harsh in their ratings
123
Halo error
Error that occurs when a rater assigns the same rating to an employee on a series of dimensions, creating a halo or aura that surrounds all of the ratings, causing them to be similar
124
Psychometric training
Training that makes raters aware of common rating errors (central tendency, leniency/severity, and halo) in the hope that this will reduce the likelihood of errors
125
Frame-of-reference (FOR) training
Training based on the assumption that a rater needs a context or “frame” for providing a rating; includes (1) providing information on the multidimensional nature of performance, (2) ensuring that raters understand the meaning of anchors on the scale, (3) engaging in practice rating exercises, and (4) providing feedback on practice exercises.