Exam 3 Flashcards

(97 cards)

1
Q

A measure of typical responding that focuses on observable behavior
Usually a typical-response test, but can also occur with a maximum-performance test

A

Behavioral assessments

2
Q

Requires schools to provide special education and related services to students with emotional disorders
Prompted schools to identify these students and expand assessment practices to evaluate personality, behavior, and related constructs

A

Public Law 94-142, the Individuals with Disabilities Education Act (IDEA)

3
Q

Practices and strategies used by professionals in a field that have been shown through research to work with a specific group of people

A

Evidence-based practice

4
Q

Starts with broad, general questions such as “Why are you here?” and “How can I help you?”

A

Behavioral interviewing

5
Q

Six steps of behavioral interviewing

A
Identify problem and define in behavioral terms
Identify and evaluate environment
Develop a plan to alter contingencies
Implement plan
Evaluate outcomes
Modify intervention plan, if necessary
6
Q

Inventory that asks a knowledgeable informant to rate an individual on a number of dimensions

A

Behavior rating scale

7
Q

Behavior rating scale that has two forms

A

Child Behavior Checklist (CBCL)

8
Q
Behavior rating scale that includes:
Teacher rating scale
Parent rating scale
Self-report scale
Classroom observation
Developmental history
A

Behavior Assessment System for Children (BASC)

9
Q

Designed to help identify autism in children over 2

A

Childhood Autism Rating Scale (CARS)

10
Q

Designed to assess primary symptoms of ADHD in 4-18 year olds

A

BASC Monitor for ADHD

11
Q

Designed to identify early onset bipolar disorder in people ages 3-18

A

Pediatric Behavior Rating Scale (PBRS)

12
Q

Three scales that assess adaptive behaviors

A

Adaptive Behavior Rating Scales

13
Q

Single domain: Pros

A

more thorough assessment

14
Q

Single Domain: Cons

A

limited to very specific domain

15
Q

Two 45-item scales (one for teachers, one for parents)
Designed for use with children and adolescents ages 4-18
Monitors symptoms and behaviors and also tracks the effectiveness of treatments

A

BASC Monitor for ADHD

16
Q

The four scales of the BASC Monitor for ADHD

A

Attention problems
Hyperactivity
Internalizing problems
Adaptive skills

17
Q

Two scales: one for teachers (95 items) and one for parents (102 items)
3-18 years old
Designed to identify early onset bipolar disorder and differentiate it from other disorders with similar symptoms

A

Pediatric Behavior Rating Scale (PBRS)

18
Q

Assesses adaptive behaviors: Conceptual skills (literacy, telling time, using money), practical skills (brushing teeth, getting dressed), social skills (following laws, social behavior)

A

Adaptive Behavior Rating Scales (ABRS)

19
Q

Oldest method of behavioral assessment

A

Direct observation

20
Q

The most widely used system of coding behavior

A

Student Observation System (SOS)

21
Q

Student Observation System (SOS) is a component of

A

BASC-2

22
Q

Measures vigilance, attention and executive control

A

Continuous Performance Tests (CPTs)

23
Q

Recording physical changes in the body during some specific event

A

Psychophysiological Assessments

24
Q

Applying psychological principles to work and organizational settings

A

Industrial-Organizational Psychology

25
The largest contributor to the development of I-O psychology
World War I (WWI)
26
Poor job performance, turnover, and counterproductive work behavior
Organizational costs
27
Primary goal is to save organizations money and effort by providing them with adequate resources to hire employees who will be able to perform the required job duties and be satisfied with the organizational culture
Personnel selection approaches
28
Personnel selection approaches include:
Cognitive ability
Interview
Personality testing
Integrity testing
Assessment centers
Work sample tests
Biodata
29
Measure verbal ability, math ability, perception, and problem solving
Often multiple choice or short answer
Can predict job performance
Can create organizational tension
Cognitive Ability
30
Interviewer generates questions that are relevant to the applicant or the content of the interview
Subjective
Cannot compare across applicants
Unstructured Interviews
31
Develop questions and a scoring key prior to the interviews
Can give each applicant a score
Structured Interviews
32
Characteristics that define an individual and are used by the individual when interacting with others
Most often assessed with self-report measures
Debate over the use and validity of personality tests in employment settings
Personality
33
Self-report tests that are designed to identify dishonest people.
Integrity Testing
34
Questions about general beliefs towards theft, admissions of previous wrongdoings
Overt integrity tests
35
Focus on personality traits associated with theft
Personality-oriented integrity tests
36
Collection of tasks or exercises that simulate a variety of situations one would experience in the work environment
Participants complete the tasks and are observed
Performance is rated across multiple dimensions
Assessment centers
37
Require the applicant to perform tasks related to the job that they applied for
Work sample tests
38
Limitations of work sample tests
Representativeness of the selected tasks is questionable
Assumes applicants already have the required knowledge and skills
Expensive
39
Applicant's personal experiences and background
Measures broader domains than personality tests
Can be collected through a variety of measures
Biographical data (biodata)
40
Best used for jobs that involve a lot of manual labor or activities that can be completed in a short amount of time; involves observing people who currently hold the job to see what is required
Direct observation
41
Employees who are considered experts at their jobs are placed into groups of 5-10 and perform a job analysis to determine what is needed to succeed at the job
Subject-matter expert panels
42
Typically measure a variety of tasks; people who hold the job are asked to rate the relevance of specific tasks to performing their job functions
Questionnaires
43
Different types of job analysis
Interviews
Direct observation
Subject-matter expert panels
Questionnaires
44
Types of job performance evaluation
Performance ratings
Relative rating methods
Absolute rating methods
45
Compare the performance of the ratee to other persons performing the same or similar jobs
Relative rating methods
46
Compare the ratee’s performance to a defined measure of performance
Absolute rating methods
47
Errors in evaluating job performance
Leniency
Severity
Central tendency
Halo
48
employee is rated more favorably than they should be
Leniency
49
employee is rated more negatively than they should be
Severity
50
everyone is rated about the same (average)
Central tendency
51
rater uses a global impression of an employee when rating, resulting in either overly positive or negative ratings
Halo
52
systematically overestimates or underestimates the value of the variable it is designed to measure
Biased assessment
53
The hypothesis that differences in mental test performance across gender, ethnic, racial, or other nominally determined groups are due to inherent, artifactual biases produced within the tests through flawed psychometric methodology
Cultural Test Bias Hypothesis
54
Explanations for difference in test scores across cultures
Differences primarily have a genetic basis
Differences have an environmental basis (SES, education, culture)
Differences are due to an interaction between genes and the environment
Tests are defective and systematically underestimate the knowledge and skills of minorities
55
Objections to Tests
Inappropriate content
Inappropriate standardization sample
Examiner and language bias
Inequitable social consequences
Measurement of different constructs
Differential predictive validity
Qualitatively distinct aptitude and personality
56
Minority children might not have been exposed to the material involved in the test
Inappropriate content
57
Ethnic minorities are underrepresented in standardization samples, so the normative reference group does not accurately reflect minority students
Inappropriate standardization sample
58
Because most psychologists are white and speak English, they may intimidate Black and other minority examinees and may be unable to communicate effectively, creating examiner and language bias. The field needs more people who come from the same backgrounds as examinees and are equipped with the skills and knowledge to best serve students. (We see this in many areas of society: it is important for children to have role models who look like them and come from the same backgrounds.)
Examiner and language bias
59
Minority group members who perform poorly on tests are placed on lower, dead-end educational tracks that are difficult to get off of throughout their education
Inequitable social consequences
60
tests measure different constructs for minority children than for white, middle class children
Measurement of different constructs
61
Test-based predictions might be accurate for one group but invalid for another
Differential predictive validity
62
tests measure “European-centered” cognitive style, but there are other cognitive styles that are not measured on tests
Qualitatively distinct aptitude and personality
63
Posits that when a negative stereotype about one's group becomes relevant, concern about confirming it can lead one to perform in ways that confirm the stereotype
Stereotype Threat
64
The degree of cultural specificity present in the test or individual items on the test
Cultural loading
65
What happens when cultural loading is high
the test has a greater chance of bias
66
The assumption that all human populations are identical on all mental traits or abilities
Egalitarian Fallacy
67
What are the three parameters for IRT
Difficulty, Discrimination, Guessing
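These three parameters correspond to b (difficulty), a (discrimination), and c (guessing) in the three-parameter logistic (3PL) model. A minimal sketch of the 3PL item response function; the parameter values in the example are hypothetical:

```python
import math

def p_correct(theta, a, b, c):
    """3PL item response function: probability that an examinee with
    ability theta answers the item correctly.
    a = discrimination, b = difficulty, c = guessing (lower asymptote)."""
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

# Hypothetical item: average difficulty, moderate discrimination, 20% guessing floor.
print(round(p_correct(theta=0.0, a=1.0, b=0.0, c=0.2), 2))  # 0.6
```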
68
A test is shown to measure different hypothetical traits for one group than another or to measure the same trait but with differing degrees of accuracy
Construct measurement test bias
69
The inference drawn from the test score is not made with the smallest feasible random error, or there is constant error in an inference or prediction as a function of membership in a particular group
Bias in prediction
70
What is used to detect test construct bias
factor analysis
71
Identifies clusters of test items that correlate highly with one another, and less so with other items
Factor analysis
72
How can you tell bias from factor analysis
If the clusters (factors) differ across groups, or if items cluster one way for one group but not for another, the test may be biased
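A minimal sketch of this kind of check, assuming each group's item responses are in a NumPy array; the data, group labels, and number of factors below are hypothetical:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

def loadings(item_scores, n_factors=2):
    """Fit a factor analysis and return the item loadings (factors x items)."""
    fa = FactorAnalysis(n_components=n_factors, random_state=0)
    fa.fit(item_scores)
    return fa.components_

# Hypothetical data: rows = examinees, columns = item scores.
rng = np.random.default_rng(0)
group_a = rng.normal(size=(200, 10))
group_b = rng.normal(size=(200, 10))

# If the loading patterns differ substantially between the groups,
# the test may show construct bias.
print(np.round(loadings(group_a), 2))
print(np.round(loadings(group_b), 2))
```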
73
What is used to detect test prediction bias
regression lines
74
How can you tell bias from regression lines
If slopes or intercepts (or both) are different for different groups, then there is bias.
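A minimal sketch of that comparison using ordinary least squares; the test scores and criterion values below are hypothetical:

```python
import numpy as np

def fit_line(test_scores, criterion):
    """Return (slope, intercept) of the line predicting the criterion from the test."""
    slope, intercept = np.polyfit(test_scores, criterion, deg=1)
    return slope, intercept

rng = np.random.default_rng(1)
x_a = rng.normal(100, 15, 300)
x_b = rng.normal(100, 15, 300)
y_a = 0.5 * x_a + rng.normal(0, 5, 300)       # group A
y_b = 0.5 * x_b + 10 + rng.normal(0, 5, 300)  # group B: intercept shifted by 10

# Same slope but different intercepts -> the test would show prediction bias.
print(fit_line(x_a, y_a))
print(fit_line(x_b, y_b))
```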
75
Changes in the standard assessment procedures that are implemented to minimize the impact of examinee characteristics that are irrelevant to the construct being measured and that would alter the obtained scores if the test were administered under standardized conditions
assessment accommodations
76
implies a potential change in the construct being measured
Modification
77
implies that the construct measured by the test is not altered
Accommodation
78
When Are Accommodations NOT Acceptable?
If the affected ability is directly relevant to the construct being measured
If the purpose of the test is to assess the presence and degree of the disability
Note: accommodations are not necessary for all examinees with disabilities
79
What is the goal of accommodations
to obtain more valid score interpretations.
80
Modifying or changing the way that the directions, items, or tasks are presented
Modifications of presentation format
81
Allow examinees to respond with their preferred method of communication (ex: verbally instead of written)
Modifications of response format
82
Extended time for examinees who may have reduced processing speed, reading speed, or writing speed
Modifications of timing
83
Allow examinees to be tested in a setting that will allow them to perform at their best
Modification of setting
84
Determining Accommodations to Provide
Modifications should be tailored to meet the specific needs of the examinee
Accommodations that students receive in regular classroom instruction should be appropriate for testing
Use accommodations that promote independent functioning
Follow the publisher's guidelines
Periodically reevaluate the needs of the examinee
85
Four phases of test development
Test conceptualization
Specification of test structure and format
Planning standardization and psychometric studies
Plan implementation
86
Conduct a review of the literature and develop a statement of need
Describe the proposed uses and interpretation of results from the test
Decide who will use the test and why
Develop conceptual and operational definitions
Determine whether measures of dissimulation are needed
Phase I: Test Conceptualization
87
Age range for the measure
Testing format and potential administrators
The structure of the test and how items/subscales are organized
Table of specifications
Item formats and a summary of instructions for administration and scoring
Written explanation of methods for item development, testing, and item selection
Phase II: Test Structure and Format
88
Describe the reference group/norm group
Describe the choice of scaling methods
Outline the reliability studies and their rationale
Outline the validity studies and their rationale
Include special studies that may be needed for development of this test or to support proposed interpretations of performance
List the components of the test
Phase III: Standardization and Psychometrics
89
Reevaluate the test content and structure
Prepare the test manual
Submit the test proposal
Phase IV: Plan Implementation
90
To measure convergent and discriminant validity, what do you use
Multitrait-Multimethod Matrix (MTMM)
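In an MTMM matrix, convergent validity shows up as high correlations between different methods measuring the same trait, while discriminant validity shows up as low correlations between measures of different traits.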
91
How do you get the coefficient of determination
square of the correlation coefficient
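For example, if the correlation between two measures is r = .60 (an illustrative value), the coefficient of determination is (.60)² = .36, meaning the two measures share 36% of their variance.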
92
What percentile is one standard deviation above the mean
84.1
93
What percentile is 2 standard deviations above the mean
97.7
94
What percentile is 3 standard deviations above the mean
99.9
95
What percentile is one standard deviation below the mean
15.9
96
What percentile is 2 standard deviations below the mean
2.3
97
What percentile is 3 standard deviations below the mean
0.1
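All of these percentile values come from the cumulative distribution function (CDF) of the standard normal curve; a minimal sketch, assuming SciPy is available:

```python
from scipy.stats import norm

# Percentile rank of a score that sits z standard deviations from the mean
# on a normal distribution is 100 * CDF(z).
for z in (1, 2, 3, -1, -2, -3):
    print(f"{z:+d} SD -> {100 * norm.cdf(z):.1f}th percentile")

# +1 SD -> 84.1, +2 SD -> 97.7, +3 SD -> 99.9 (more precisely 99.87)
# -1 SD -> 15.9, -2 SD -> 2.3,  -3 SD -> 0.1 (more precisely 0.13)
```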