Study Guide Exam 2 (Assessment and Diagnosis) Flashcards

(214 cards)

1
Q

Norm samples: what they need to be

A

Representative of the population taking the test
Consistent with that population
Current (must match current generation)
Large enough sample size

2
Q

Flynn effect

A

Intelligence increases over successive generations

In order to stay accurate, intelligence tests must be renormed every couple of years

3
Q

Types of norm samples

A

Nationally representative sample (reflects society as a whole)
Local sample
Clinical sample (compare to people with given diagnosis)
Criminal sample (utilizing criminals)
Employee sample (used in hiring decisions)

4
Q

Ungrouped frequency distributions

A

For each score/criterion, the number of people/items matching that criterion is listed

5
Q

Grouped frequency distributions

A

Scores are grouped (ex: 90-100) and number of people whose scores lie in that range are listed

6
Q

Frequency graphs

A

Histograms

7
Q

Mean

A

Arithmetic average

8
Q

Median

A

Point that divides distribution in half

9
Q

Mode

A

Most frequent score

10
Q

Which measure of central tendency to pick

A

Normal distribution: mean
Skewed distribution: median
Nominal data: mode

11
Q

Positions of mean and median in positively and negatively skewed distributions

A
Positively skewed (right skewed): mean is higher than median
Negatively skewed (left skewed): median is higher than mean
12
Q

Standard deviations

A

Average distance of scores from the mean; describes how much scores vary

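The central tendency and spread concepts above can be sketched with Python's standard library. This is a minimal illustration with hypothetical test scores; the single high score skews the distribution positively, so the mean ends up above the median:

```python
from statistics import mean, median, mode, pstdev

# Hypothetical test scores; the 160 skews the distribution positively
scores = [85, 90, 90, 95, 100, 100, 100, 110, 160]

m = mean(scores)     # arithmetic average: use for normal distributions
md = median(scores)  # middle score: use for skewed distributions
mo = mode(scores)    # most frequent score: use for nominal data
sd = pstdev(scores)  # standard deviation: average spread around the mean

print(m, md, mo, sd)
```

Note that `m > md` here, matching the card on positively (right) skewed distributions.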
13
Q

Raw scores

A

Number of questions answered correctly on a test

Only used to calculate other scores

14
Q

Percentile ranks

A

Percentage of people in the norm sample scoring below a given score

15
Q

z scores

A

M=0

SD=1

16
Q

t scores

A

M=50

SD=10

17
Q

IQ scores

A

M=100

SD=15

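The z, t, and IQ metrics above are all linear rescalings of the same standardized score. A minimal sketch (the raw score, mean, and SD below are hypothetical):

```python
def to_z(raw, sample_mean, sample_sd):
    """Convert a raw score to a z score (M=0, SD=1)."""
    return (raw - sample_mean) / sample_sd

def to_t(z):
    """Convert a z score to a t score (M=50, SD=10)."""
    return 50 + 10 * z

def to_iq(z):
    """Convert a z score to an IQ-style standard score (M=100, SD=15)."""
    return 100 + 15 * z

# Hypothetical example: raw score of 30 in a norm sample with M=25, SD=5
z = to_z(30, 25, 5)          # one SD above the mean
print(z, to_t(z), to_iq(z))  # 1.0 60.0 115.0
```

The same one-SD-above-the-mean performance reads as z = 1, t = 60, or IQ = 115, which is why raw scores are only used to calculate other scores.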
18
Q

Content sampling error

A

Difference between sample of items on test and total domain of items

19
Q

Time sampling error

A

Random fluctuations in performance over time

Can be due to examinee (fatigue, illness, anxiety, maturation) or due to environment (distractions, temperature)

20
Q

Interrater differences

A

When scoring is subjective, different scorers may score answers differently

21
Q

Test-retest reliability

A

Administer the same test on 2 occasions
Correlate the scores from both administrations
Sensitive to sampling error

22
Q

Things to consider surrounding test-retest reliability

A

Length of interval between testing
Activities during interval (distraction or not)
Carry-over effects from one test to next

23
Q

Alternate-form reliability

A

Develop two parallel forms of test
Administer both forms (simultaneously or delayed)
Correlate the scores of the different forms
Sensitive to content sampling error (simultaneous and delayed) and time sampling error (delayed only)

24
Q

Things to consider surrounding alternate-form reliability

A

Few tests have alternate forms

Reduction of carry-over effects

25
Split-half reliability
Administer the test
Divide it into 2 equivalent halves
Correlate the scores for the half tests
Sensitive to content sampling error
26
Things to consider surrounding split-half reliability
Only 1 administration (no time sampling error)
How to split test up
Short tests have worse reliability
27
Kuder-Richardson and coefficient (Cronbach's) alpha
Administer test
Compare each item to all other items
Use KR-20 for dichotomous answers and Cronbach's alpha for any type of variable
Sensitive to content sampling error and item heterogeneity
Measures internal consistency
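Cronbach's alpha can be computed directly from the item-by-item variances. A sketch with hypothetical responses (rows = examinees, columns = 4 Likert-type items):

```python
from statistics import pvariance

# Hypothetical responses (rows = examinees, columns = 4 items)
items = [
    [2, 3, 3, 2],
    [4, 4, 5, 4],
    [1, 2, 2, 1],
    [3, 3, 4, 3],
    [5, 4, 5, 5],
]

k = len(items[0])
item_vars = [pvariance([row[i] for row in items]) for i in range(k)]
total_var = pvariance([sum(row) for row in items])

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals)
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
```

These hypothetical items all rise and fall together across examinees, so alpha comes out high; heterogeneous items would drive it down.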
28
Inter-rater reliability
Administer test
2 individuals score test
Calculate agreement between scores
Sensitive to differences between raters
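The simplest agreement index for two raters is percent agreement, sketched below with hypothetical pass/fail scorings (Cohen's kappa, which corrects for chance agreement, is the usual refinement but is omitted here for brevity):

```python
# Hypothetical scores two raters assigned to the same 8 responses
rater_a = [1, 0, 1, 1, 0, 1, 0, 1]
rater_b = [1, 0, 1, 0, 0, 1, 1, 1]

# Proportion of responses the two raters scored identically
agreements = sum(a == b for a, b in zip(rater_a, rater_b))
percent_agreement = agreements / len(rater_a)
print(percent_agreement)  # 0.75
```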
29
High-stake decision tests: reliability coefficient used
Greater than 0.9 or 0.95
30
General clinical use: reliability coefficient used
Greater than 0.8
31
Class tests and screening tests: reliability coefficient used
Greater than 0.7
32
Content validity
Degree to which the items on the test are representative of the behavior the test was designed to sample
33
How content validity is determined
Expert judges systematically review the test content | Evaluate item relevance and content coverage
34
Criterion-related validity
Degree to which the test is effective in estimating performance on an outcome measure
35
Predictive validity
Form of criterion-related validity
Time interval between test and criterion
Example: ACT and college performance
36
Concurrent validity
Form of criterion-related validity
Test and criterion are measured at same time
Example: language test and GPA
37
Construct validity
Degree to which test measures what it is designed to measure
38
Convergent validity
Form of construct validity | Determined by correlating test scores with tests of the same or a similar construct
39
Discriminant validity
Form of construct validity | Determined by correlating test scores with tests of a dissimilar construct
40
Incremental validity
Determines if the test provides a gain over another test
41
Face validity
Determines if the test appears to measure what it is designed to measure
Not a true form of validity
Problem with tests high in face validity: they can be faked
42
Type of material that should be used on a matching test
Homogeneous material (all items should relate to a common theme)
43
Multiple choice tests: what kinds of stems should not be included?
Negatively-stated ones | Unclear ones
44
Multiple choice tests: how many alternatives should be given?
3-5
45
Multiple choice tests: what makes a bad alternative?
Long
Grammatically inconsistent with the stem
Implausible
46
Multiple choice tests: how should placement of correct answer be determined?
Random (otherwise, examinees can detect pattern)
47
Multiple choice tests, true/false tests, and typical response tests: what kind of wording should be avoided?
"Never" or "always" for all 3
"Usually" for true/false
"All of the above" or "none of the above" for multiple choice
48
True/false tests: how many ideas per item?
1
49
True/false tests: what should be the ratio of true to false answers?
1:1
50
Matching tests: ratio of responses to stems?
More responses than stems (otherwise, with equal counts, one error forces a second and the last match can be made by elimination)
51
Matching tests: how long should responses and lists be?
Brief
52
Essay tests and short answer tests: what needs to be created?
Scoring rubric
53
Essay tests: what kinds of material should be covered?
Objectives that can't be easily measured with selected-response items
54
Essay tests: how should grading be done?
Blindly
55
Short answer tests: how long should answers be?
Questions should be able to be answered in only a few words
56
Short answer tests: how many correct responses?
1
57
Short answer tests: for quantitative items, what should be specified?
Desired level of precision
58
Short answer tests: how many blanks should be included? How long should they be?
Only 1 blank included
Should be long enough to write out the answer
Otherwise, blank length becomes a dead giveaway
59
Short answer tests: where should blanks be included?
At the end of the sentence
60
Typical response tests: what should be covered?
Focus items on experiences (thoughts, feelings, behaviors) | Limit items to a single experience
61
Typical response tests: what kinds of questions should be avoided?
Items that will be answered universally the same | Leading questions
62
Typical response tests: how should response scales be constructed?
If a neutral option is desired, use an odd-numbered scale
High numbers shouldn't always represent the same thing
Label the options, as on a Likert-type scale (rating from 0-7, etc.)
63
Spearman
Identified a general intelligence factor "g" | Underlies all other cognitive abilities
64
Cattell-Horn-Carroll
Theory of 10 broad types of intelligence
65
3 abilities incorporated by most definitions of intelligence
Problem solving
Abstract reasoning
Ability to acquire knowledge
66
Original determination of IQ (used by Binet)
Mental age/chronological age * 100
67
How IQ is currently determined
Raw score compared to age/grade appropriate norm sample | M=100, SD=15
68
Why professionals have a love/hate relationship with intelligence tests
Good: reliable and valid (psychometrically sound, predict academic success, fairly stable over time)
Bad: limited (make complex construct into 1 number), misunderstood and overused
69
Group administered tests: who administers and who scores?
Standardized: anyone can administer (teachers, etc.), but professionals interpret
70
Group administered tests: content focuses on which skills most?
Verbal skills
71
Examples of group-administered aptitude tests
Otis-Lennon School Ability Test | American College Test (ACT)
72
Individually administered tests: how standardized?
Very standardized
No feedback given during testing regarding performance or test
Additional queries only when specified (can only say "Tell me more about that.")
Answers are recorded verbatim
73
Individually administered tests: starting point
Starting point determined by age/grade | Reversals sometimes needed (person gets 1st question wrong: must back down in level)
74
Individually administered tests: ending point
Testing ends when person answers 5 questions wrong in a row
75
Individually administered tests: skills tested
Verbal and performance
76
3 individually administered IQ tests for adults
Wechsler Adult Intelligence Scale (WAIS; most commonly used)
Stanford-Binet
Woodcock-Johnson Tests of Cognitive Abilities
77
Child version of Wechsler Adult Intelligence Scale
Wechsler Intelligence Scale for Children (WISC)
78
WAIS: subtests and index scores
15 subtests combine to make 4 index scores: Verbal Comprehension Index (VCI), Perceptual Reasoning Index (PRI), Working Memory Index (WMI), Processing Speed Index (PSI)
The 4 index scores combine to make the Full Scale IQ score
79
WAIS: norm set
Older teenagers to elderly
80
WISC: basics
2-3 hours to administer and score
Administered by professionals
Normed for children elementary-aged to older adolescence
81
Stanford-Binet: norm set
Young children to elderly
82
Stanford-Binet: IQ scores
3 composite IQ scores: verbal IQ, nonverbal IQ, full scale IQ
83
Score range difference between WAIS/WISC and Stanford-Binet
Stanford-Binet: possible to score higher than 160 (not possible for WAIS or WISC)
84
Woodcock-Johnson: norm set
Young children to elderly
85
What Woodcock-Johnson is based on
Cattell-Horn-Carroll theory of 10 types of intelligence
86
Woodcock-Johnson full scale IQ
Based on comprehensive assessment of Cattell-Horn-Carroll abilities
87
Full scale IQ
Overall composite IQ (the number that is reported)
88
What kind of a construct is IQ?
Unitary construct
89
2 disorders that include intelligence in the criteria
Intellectual disability (IQ less than 70; impairments across multiple domains: occupational, educational, social function, activities of daily living)
Learning disorders (discrepancy between intelligence and achievement; math, reading, written expression)
Neither is based on intelligence alone
90
Response to intervention
Method of preventing struggling students from being placed in special ed
Students are provided regular instruction; progress is monitored
If they don't progress, they get additional instruction; progress is monitored
Those who still don't respond receive special education or a special education evaluation
91
Achievement definition
Knowledge in a skill or content domain in which one has received instruction
92
Aptitude vs. achievement
Aptitude measures cognitive abilities/knowledge accumulated across life experience
Achievement measures learning due to instruction
93
Group administered achievement tests
Can be administered by anyone, but interpreted by professionals
Standardized
Items increase in difficulty as exam progresses
Time limits often included
Often focus on verbal skills
94
Examples of group administered achievement tests
Stanford Achievement Tests
Iowa Tests of Basic Skills (Iowa Basics)
California Achievement Tests
95
What individually administered achievement tests are used for
Used to determine presence of learning disorders
96
Standardization of individually administered achievement tests
No feedback given during testing regarding performance or test
Additional queries used only when specified
Answers are recorded verbatim
97
Examples of individually administered achievement tests
Wechsler Individual Achievement Test
Woodcock-Johnson Tests of Achievement
Wide Range Achievement Test
98
Wechsler Individual Achievement Test: norm set and areas tested
Normed for young children to elderly | Scores: reading, math, written language (handwriting), oral language
99
Woodcock-Johnson Tests of Achievement: norm set and areas tested
Normed for young children to elderly | Scores: reading, oral language, math, writing
100
Wide Range Achievement Test: norm set and areas tested
Normed for young children to elderly | Scores: word reading, reading comprehension, spelling, math
101
How Wide Range Achievement Test differs from other 2
WRAT is used as a screening test: it takes only 20-30 minutes to administer (others take 1.5-3 hours)
102
Other examples of achievement tests
School tests (teacher-constructed tests)
Psych GRE
MCAT
Licensing exams (EPPP; psychologists)
103
Personality
Characteristic way of behaving/thinking across situations
104
Uses for personality assessments
Diagnosis
Treatment planning
Self-understanding
Identifying children with emotional/behavioral problems
Hiring decisions
Legal questions
105
Woodworth
Developed first personality test (Personal Data Sheet)
106
Trait vs. state
Trait: stable internal characteristic, test-retest reliability can be greater than 0.8
State: transient, lower test-retest reliability
107
Response set
Unconscious responding in a negative or positive manner | Test taker bias that affects formal personality assessment
108
Dissimulation
Faking the test
Increases with face validity
Test taker bias that affects formal personality assessment
109
Validity scales
Used to detect individuals not responding in an accurate manner on personality assessments
110
Content rational approach
Similar to process of determining content validity: expert looks at test and decides if it represents what it should be testing
111
Empirical criterion keying
Large pool of items is administered to 2 groups: clinical group with specific diagnosis and control group
Items that discriminate between groups are retained (may or may not be directly associated with psychopathology; not necessarily face valid)
112
Minnesota Multiphasic Personality Inventory (MMPI)
Most used personality measure
Developed using empirical criterion keying
Contains validity scales (detect random responding, lying, etc.)
Adequate reliability
10 clinical scales
113
Hypochondriasis
Clinical scale on MMPI | Somatic complaints
114
Depression
Clinical scale on MMPI | Pessimism, hopelessness, discouragement
115
Hysteria
Clinical scale on MMPI | Development of physical symptoms in response to stress
116
Psychopathic deviate
Clinical scale on MMPI | Difficulty incorporating societal standards and values
117
Masculinity/femininity
Clinical scale on MMPI | Tendency to reject stereotypical gender norms
118
Paranoia
Clinical scale on MMPI | Paranoid delusions
119
Psychasthenia
Clinical scale on MMPI | Anxiety, agitation, discomfort
120
Schizophrenia
Clinical scale on MMPI | Psychotic symptoms, confusion, disorientation
121
Hypomania
Clinical scale on MMPI | High energy levels, narcissism, possibly mania
122
Social introversion
Clinical scale on MMPI | Prefers being alone to being with others
123
Factor analysis
Statistical approach to personality assessment development | Evaluates the presence/structure of latent constructs
124
NEO Personality Inventory
Developed using factor analysis
5-factor model (Neuroticism, Extraversion, Openness, Agreeableness, Conscientiousness)
Pretty good reliability and validity
125
Theoretical approach
Match test to theory
126
Myers-Briggs Type Indicator
Developed using theoretical approach
Based on Jung's theories
4 scales: introversion (I)/extraversion (E), sensing (S)/intuition (N), thinking (T)/feeling (F), judging (J)/perceiving (P)
Personality represented by one of 16 four-letter combinations
127
Millon Clinical Multiaxial Inventory (MCMI)
Developed using theoretical approach
Based on Millon's theories surrounding personality disorders
2 scales: clinical personality scales and clinical syndrome scales
Good reliability and validity, but high correlations between scales (problem)
128
Objective personality assessments given to children
Child Behavior Checklist
Barkley Scales (ADHD)
Each test has a version for the parent, a version for the child, and a version for the teacher to fill out
129
Broad-band vs. symptom measures
Broad-band: lots of info on a variety of topics, allow for a comprehensive view (example: MMPI)
Symptom measure: identify specific symptoms (example: Beck Depression Inventory)
130
Ink blot test
Examinee is presented with an ambiguous inkblot and asked to identify what they see
Limited validity
131
Rorschach ink blot test scoring/interpreting
Exner developed most comprehensive system for scoring (including norm set)
Limited validity, though
132
Apperception tests
Given an ambiguous picture, examinee must make up story
Themes presented in stories tell something about examinee
Have issues with validity
133
Projective drawings: advantage
Require little verbal ability; child friendly
134
House-tree-person test
House: home life and family relationships
Tree: deep feelings about self
Person: less deep view of self
135
Pros and cons of projective tests
Pros: popular in clinical settings, supply rich information (not a lot of face validity)
Cons: questionable psychometrics (poor reliability and validity), so should be used with caution
136
Anatomical dolls
Controversial assessment technique
Used to assess sexual assault in children (watch what child is paying attention to, how child plays with doll, etc.)
Lots of false positives
137
Hypnosis assisted assessment and drug assisted assessment
Controversial assessment techniques
Truth serum (sodium amytal): helps people relax and share difficult information
Hypnosis: helps people relax and remember things
People under hypnosis or sodium amytal are suggestible to forming false memories
138
Neuropsychology
Study of brain-behavior relationships
139
Neurology vs. neuropsychology
Neurologist focuses on anatomy and physiology of brain | Neuropsychologist focuses on functional product (behavior and cognition) of CNS dysfunction
140
Uses of neuropsychology
Identify damaged areas of brain
Identify impairments caused by damage
Assessing brain function
141
Common referral questions
Traumatic brain injury
Cerebrovascular accidents (example: stroke)
Tumors
Dementia and delirium
Neurological conditions
142
A thorough neuropsychological assessment includes...
Higher order information processing
Anterior and posterior cortical regions
Presence of specific deficits
Intact functional systems
Affect, personality, behavior
143
Fixed battery
Comprehensive, standard set of tests administered to everyone
Takes a long time to administer (about 10 hours)
144
Most commonly used fixed battery
Halstead-Reitan Neuropsychological Test Battery for Adults (HRNB)
145
Flexible battery
Flexible combination of tests to address specific referral question
146
Brief screeners
Quickly administered tests that provide general information on functioning
Used to determine whether more testing is needed
Example: Mini-Mental State Exam
147
Memory assessments
Memory is impaired in functional and organic disorders (forgetting recent events)
Can be used to discriminate between psychiatric disorders and brain injury (forgetting is common in brain injury but not in psychiatric disorders)
148
Most commonly used memory test
Wechsler Memory Scale
149
Continuous performance tests
Used to assess attention (ADHD diagnosis, etc.)
Boring tasks (press a key when an x shows up on the screen, etc.): measure how well person stays with them
150
Executive function tests
Stroop task: measure ability to ignore reading word (name color of ink only)
Wisconsin card sort: measure adaptability to new rules
Delay discounting: measure ability to delay gratification in order to gain a greater outcome later on
151
Motor function tests
Grip strength
Finger tapping test
Purdue pegboard (fine motor skills: put pegs on peg board, put washers on pegs)
152
Sensory functioning tests
Clock drawing test
Facial recognition test
Left-right orientation
Smell identification
Finger orientation
153
Language functioning tests
Measure ability to develop language skills and ability to use language
154
Example of language functioning test
Expressive Vocabulary Test | Boston Diagnostic Aphasia Examination
155
Normative approach to interpretation
Compare current performance against normative standard | Inferences made within context of premorbid ability
156
Ideographic approach to interpretation
Compare within the individual: compare current scores to previous scores or estimates of premorbid functioning
157
How to estimate premorbid functioning
Prior testing
Reviewing records
Clinical interview ("What were you like beforehand?")
Interviewing others
Demographic estimation (assuming that you were average)
Hold tests (tests that are resistant to brain damage, such as vocabulary; scores are used to estimate IQ)
158
Pattern analysis approach to interpretation
Patterns across tasks differentiate functional/dysfunctional systems
159
Pathognomonic signs
Signs that are highly indicative of dysfunction
160
ABCs of behavioral assessment
A: antecedent (what was happening before behavior took place)
B: behavior (what did the person do)
C: consequence (what happened after the behavior took place)
161
Direct observation
Method of behavioral assessment | Observe behavior in its context (real world)
162
Analogue assessment
Method of behavioral assessment | Simulate real world events in a therapy setting through role play
163
Indirect observation
Client monitors observations through self-monitoring (recording behavior) or self-report (remembering what happened after the fact)
164
Behavioral interview
Clinical interview focusing on ABCs | Relies on self-report
165
Sources of information for behavioral assessment
Client
Therapist
Parents
Teachers
Spouses
Friends
166
Pros and cons for behavioral assessment
Pros: direct information, contextual
Cons: labor intensive, reactivity, not everything is observable
167
Reactivity
Problem with direct observation: behavior changes when being observed
Decreases as observation time increases
168
Settings for behavioral assessment
School
Home
Therapy setting
Real world is preferable to therapy setting
169
Formal inventories
Used to enable comparison across people (standardization)
Informants rate behavior on a number of dimensions
Parents, teachers, spouse, child, etc.
170
Formal inventories: broad-based vs. single domain
Broad-based: cover a number of behaviors/disorders (example: Achenbach)
Single domain: assess behavior for 1 disorder (example: Childhood Autism Rating Scale, Barkley Scales for ADHD)
171
Psychophysiology
Used to record internal behavior/physiological responses
172
EEG
Used in psychophysiology | Measures brain waves by measuring electrical activity across scalp
173
GSR (Galvanic skin response)
Used in psychophysiology | Measures sweat
174
Settings for forensic psychology
Prison (most common)
Police departments
Law firms
Government agencies
Private practice (consultants)
175
Role of psychologists in court
Provide testimony as an expert witness
176
Expert witness
Person who possesses knowledge and expertise necessary to assist judge/jury
Objective presentation is goal
177
Differences between clinical and forensic assessment: purpose
Clinical: purpose is diagnosis and treatment
Forensic: purpose is gaining information for court
178
Differences between clinical and forensic assessment: participation
Clinical: participation is voluntary
Forensic: participation is involuntary
179
Differences between clinical and forensic assessment: confidentiality
Clinical: confidentiality
Forensic: no confidentiality
180
Differences between clinical and forensic assessment: setting
Clinical: office
Forensic: jail
181
Differences between clinical and forensic assessment: testing attitude
Clinical: positive, genuine
Forensic: hostile, coached (by lawyer; malingering is a big concern)
182
Not guilty by reason of insanity (NGRI)
At the time of the offense the defendant, by reason of mental illness or mental defect, did not know his/her conduct was wrong
Used in less than 1% of felony cases; successful in about 25%
Results in mandatory hospitalization (prison-based state hospital; stay until person is no longer a danger)
183
NGRI defense: what assessment involves
Review of case records
Review of mental health history
Clinical interview
Psychological testing
184
Competency to be sentenced
Criminal is required to understand reason for punishment
If they cannot understand the reason for punishment, they don't receive it
Rarely contested: most common cases of contesting are capital cases
185
Mitigation in sentencing
Determining whether circumstances exist that lessen moral culpability
Examples: crime of passion, brain injury causing impulsivity
Evaluate probability of future violence
186
Juvenile tried as adult
Determining whether to transfer juvenile to adult court | Decision is based on cognitive, emotional, and moral maturity
187
Capital sentencing and intellectual disability
Execution of people with intellectual disabilities is outlawed
Testing assesses cognitive capacity
188
Personal injury litigation
Attempt to seek recovery of actual damages (out-of-pocket costs) and/or punitive damages (grief/emotional distress)
Psychologist has to determine presence of CNS damage, assess emotional injury, quantify degree of injury, and verify the injury actually took place
189
Divorce and child custody
Must determine best interests of children | Assess parent factors and child factors
190
Civil competency
Determining whether person is able to manage his/her affairs, make medical decisions, and waive rights
Neuropsych testing used
191
Other civil matters relating to children
Child abuse/neglect investigations
Removing children from the home
Adoption considerations
192
Admissibility
Expert standing doesn't guarantee testimony will be accepted
193
Daubert standard
Expert's reasoning/methods must be reliable, logical, and scientific
Credible link between reasoning and conclusion
Credibility must be established as well
194
Third-party observers
Attorneys or other experts may ask to be present during assessment
Issues: standardization procedures, professional standards, test security
195
Demographic factors that serve as a potential basis for bias
Intelligence scores are often higher for Whites than for Blacks, Hispanics, or Native Americans
Intelligence scores are often higher for Asian Americans than for Whites
196
Explanations for differences in psychological assessments
Genetic factors
Environmental factors (SES, education, culture)
Gene-environment interaction
Test bias
197
Bias
Systematic influence that distorts measurement or prediction by test scores (systematic difference in test scores)
198
Fairness
Moral, philosophical, legal issue | Is it okay that differences across groups exist on assessments?
199
Offensiveness
Content that is viewed as offensive or demeaning
200
Inappropriate content
Source of potential bias | Minority children haven't been exposed to the content on the test or needed for test
201
Inappropriate standardization samples
Source of potential bias | Minorities are underrepresented in standardization samples
202
Examiner and language bias
Source of potential bias
Most psychologists are White and speak standard English
May intimidate ethnic minorities
Difficulties communicating accurately with minority children
203
Inequitable social consequences
Objection to testing
Consequences of testing results different for minorities
Perceived as unable to learn, assigned to dead-end jobs, previous discrimination, labeling effects
204
Measurement of different constructs
Source of potential bias | Tests measure different constructs when used with minorities
205
Differential predictive validity
Source of potential bias | Valid predictions apply for one group, but not for another
206
Qualitatively distinct aptitude and personality
Source of potential bias
Minority/majority groups possess qualitatively different aptitude and personality structure
Test development should begin with different definitions for different groups
207
Cultural loading
Degree of cultural specificity present in the test | Test can be culturally loaded without being culturally biased
208
Culture free tests
Several attempts have been made to create these, but ultimately have been unsuccessful
209
Ways to reduce bias on tests
Use minority review panels to look for cultural loading (problem: high disagreement)
Factor analysis: use statistics to determine if questions differ across groups
Assess across groups (does it work for everyone?)
210
Evidence for cultural bias
Little evidence exists (well-developed, standardized tests show little bias)
211
General ethics to consider
Stick to referral question
Match test to your purpose
Consider reliability and validity
Understand norm sample
212
Using testing in context
Use multiple measures to converge on a diagnosis | Attend to behavior observations
213
Client considerations
Informed consent
Involve client in decisions
Maintain confidentiality
Be sensitive in presenting results
214
Other considerations
Maintain test security
Don't practice outside of expertise
Cultural sensitivity