Lesson 1-3 Flashcards

(132 cards)

1
Q

The process of measuring Psychology-related variables by means of devices or procedures designed to obtain a sample of behavior.

A

Psychological Testing

2
Q

It is the gathering and integration of
Psychology-related data for the purpose of making a psychological evaluation that is accomplished through the use of tools such as tests, interviews, case studies, behavioral observation, and specifically designed apparatuses and measurement procedures.

A

Psychological Assessment

3
Q

To obtain some gauge, usually numerical in nature, with regard to an ability or attribute

A

Objective of Testing

4
Q

To answer a referral question, solve a
problem, or arrive at a decision through
the use of tools of evaluation

A

Objective of Assessment

5
Q

May be individual or group in nature

A

Process of Testing

6
Q

It is typically individualized

A

Process of Assessment

7
Q

Tester is not the key to the process

A

Role of Evaluator in Testing

8
Q

Assessor is the key to the process:
selecting tests/tools and drawing conclusions

A

Role of Evaluator in Assessment

9
Q

Requires technician-like skills: administering, scoring, and interpreting

A

Skill of Evaluator in Testing

10
Q

Requires an educated selection of tools of evaluation, skills in evaluation, and integration of data

A

Skill of Evaluator in Assessment

11
Q

Yields a test score or a series of test scores

A

Outcome of Testing

12
Q

Entails a logical problem-solving approach to shed light on a referral question

A

Outcome of Assessment

13
Q

Process of Assessment

A

Referral, Initial Meeting, Tool Selection, Formal Assessment, Report Writing, Feedback Sessions

14
Q

From: Teacher, Counselor, Health Provider, Employer, Individual

A

Referral

15
Q

Intake Interview (clarify reason for
referral)

A

Initial Meeting

16
Q

Preparation for assessment

A

Tool Selection

17
Q

Actual assessment begins

A

Formal Assessment

18
Q

Writes a report of the findings that is designed to answer the referral question

A

Report Writing

19
Q

Between client and assessor (third
parties may be scheduled)

A

Feedback Sessions

20
Q

8 Tools of Psychological Assessment

A

Test, Interview, Portfolio, Case History Data, Behavioral Observation, Role-Play Tests, Computers, Other tools

21
Q

A measuring device or procedure

A

Test

22
Q

Device or procedure designed to measure variables related to Psychology

A

Psychological Test

23
Q

Almost always involves analysis of a sample of behavior

A

Psychological Test

24
Q

Behavioral sample could range from responses to a pencil-and-paper questionnaire, to oral responses to questions related to the performance of some task.

A

Psychological Test

25
Method of gathering information through direct communication involving reciprocal exchange
INTERVIEW
26
Face-to-face: Verbal and non-verbal behavior
Face-to-face
27
Changes in voice pitch, long pauses, signs of emotions
Telephone
28
online interview, e-mail interview, text messaging
Electronic
29
Samples of one’s ability and accomplishment
Portfolio
30
Refers to records, transcripts, and other accounts in written, pictorial, or other form that preserve archival information, official and informal accounts, and other data and items relevant to an assessee
CASE HISTORY DATA
31
Monitoring the actions of others or oneself by visual or electronic means while recording quantitative and/or qualitative information regarding those actions
Behavioral Observation
32
Tool of assessment wherein assessees are directed to act as if they were in a particular situation
ROLE-PLAY TESTS
33
Can serve as test administrators and as highly efficient test scorers
Computer
34
Mere listing of scores
Simple scoring
35
statistical analyses
Extended scoring
36
Numerical or narrative statements
Interpretive
37
Written in language appropriate for communication between professionals, may provide expert opinion (analysis of data)
Consultative
38
Inclusion of data from sources other than the test
Integrative
39
Video, Thermometer, Sphygmomanometer
OTHER TOOLS
40
Create tests or other methods of assessment
Test Developer
41
Clinicians, counselors, school psychologists, human resources personnel, etc.
Test User
42
Anyone who is the subject of an assessment or an evaluation
Test-taker
43
Evolving society causes changes to psychological variables
Society at large
44
Tests or aids that can be adequately administered, scored, and interpreted with the aid of the manual and a general orientation
Level A
45
Achievement, Proficiency
Level A
46
Tests or aids that require some technical knowledge of test construction and use, and of supporting psychological and educational fields
Level B
47
Aptitude
Level B
48
Tests or aids that require substantial understanding of testing and supporting psychological fields together with supervised experience in the use of these devices
Level C
49
Projective tests, Individual Mental Tests
Level C
50
How a test is transformed into a form ready for administration to an individual with a disabling condition depends on the nature of the disability
Testing people with disabilities
51
Legal and Ethical Considerations: Rights of Testtakers
1. Right of informed consent 2. Right to be informed of test findings 3. Right to privacy and confidentiality 4. Right to the least stigmatizing label
52
1. Psychological Traits and States exist 2. Psychological Traits and States can be quantified and measured 3. Test-related behavior predicts non-test-related behavior 4. Tests and measurement techniques have strengths and weaknesses 5. Various sources of error are part of the assessment process 6. Testing and Assessment can be conducted in a fair and unbiased manner 7. Testing and Assessment benefit society
Some Assumptions about Psychological Testing and Assessment
53
Any distinguishable, relatively enduring way in which one individual varies from another
Psychological Traits and States exist: Trait
54
Also distinguishes one person from another but is relatively less enduring
Psychological Traits and States exist: State
55
Test developer provides test users with a clear operational definition of the construct under study/assessment.
Psychological Traits and States can be quantified and measured
56
Once having defined the trait, state or other construct to be measured, a test developer considers the types of item content that would provide insight to it.
Psychological Traits and States can be quantified and measured
57
Measuring traits and states by means of a test also entails appropriate ways to score the test and interpret the result.
Psychological Traits and States can be quantified and measured
58
The tasks in some tests mimic the actual behaviors that the test user is trying to understand.
Test-related behavior predicts non-test related behavior
59
The obtained sample of behavior is typically used to make predictions about future behavior.
Test-related behavior predicts non-test related behavior
60
In some forensic matters, psychological tests may be used not to predict behavior but to postdict it.
Test-related behavior predicts non-test related behavior
61
Understanding of behavior that has already taken place
Postdict
62
1. Complex nature of violence 2. Low base rate 3. False positives and false negatives 4. Dynamic nature of behavior 5. Ethical and legal concerns 6. Cultural and social bias 7. Inadequate data and research 8. Limited understanding of causality 9. Contextual factors
Why do you think it is difficult to predict violence by means of a test?
63
Competent test users understand and appreciate the limitations of the tests they use, as well as how those limitations might be compensated for by data from other sources. * Users understand: * How a test was developed * Circumstances under which it is appropriate * How it should be administered and to whom * How results should be interpreted
Tests and other measurement techniques have strengths and weaknesses
64
-How a test was developed -Circumstances under which it is appropriate -How it should be administered and to whom -How results should be interpreted
Users understand
65
Refers to factors other than what a test attempts to measure that influence performance on the test
Various sources of error are part of the assessment process: Error
66
Component of a test score attributable to sources other than the trait or ability measured
Error variance
67
Potential sources of error variance
1. Assessee 2. Assessor 3. Measuring instruments
68
* All major test publishers strive to develop instruments that are fair when used in strict accordance with guidelines in the test manual. * One source of fairness-related problems is the test user who attempts to use a particular test with people whose background and experience are different from the background and experience of people for whom the test was intended.
Testing and Assessment can be conducted in a fair and unbiased manner
69
In a world without tests or assessment procedures: 1. People could present themselves as professionals regardless of their background, ability, or professional credentials. 2. Personnel might be hired on the basis of nepotism rather than documented merit. 3. Teachers and school administrators could arbitrarily place children in different types of special classes simply because that is where the children belonged.
Testing and Assessment benefit society
70
What is a “good” test?
Criteria for a good test: * Clear instructions for administration, scoring, and interpretation * Offers economy in the time and money it takes to administer, score, and interpret * Measures what it purports to measure
71
Psychometric Soundness
Reliability, Validity
72
Involves the consistency of the tool
Reliability
73
Measures what it purports to measure
Validity
74
Refers to the consistency and stability of the results obtained from a particular assessment tool or measurement instrument. * High _________ is crucial in psychological testing because it indicates that the results are dependable and not subject to significant fluctuations or random errors.
Reliability
75
Reliability Estimates
1. Test-Retest 2. Parallel-Forms and Alternate Forms 3. Split-Half 4. Inter-Rater Reliability 5. Internal Consistency 6. Others
76
* Refers to the extent to which a test or assessment tool accurately and effectively measures the specific psychological construct it is intended to assess. * It is a critical concept because it ensures that the results obtained from a test are meaningful and relevant for the purpose for which the test was designed.
Validity
77
Types of Validity
1. Content Validity 2. Criterion-Related Validity 3. Construct Validity 4. Face Validity
78
Refers to the established standards or reference points that allow test scores to be interpreted in a meaningful way.
Norms
79
Also referred to as normative data
Norms
80
Provide context by comparing an individual's or group's test scores to those of a representative sample of people who have taken the same test under similar conditions.
Norms; Norm-referenced testing and assessment
81
Process of administering a test to a representative sample of testtakers under clearly specified conditions and the data are scored and interpreted for the purpose of establishing norms.
Standardization
82
A portion of the universe of people deemed to be representative of the whole population
Sample
83
Process of selecting the portion of the universe deemed to be representative of the whole population
Sampling
84
Population is divided into subgroups, called strata, based on certain characteristics or attributes that are of interest to the researcher
Stratified Sampling
85
Population is divided into subgroups, called strata, based on characteristics. Involves random selection of participants from each stratum
Stratified-random sampling
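As a study aid, stratified-random sampling can be sketched in a few lines of Python. This is a minimal illustration, not part of the lesson; the population, the `grade` attribute, and the per-stratum sample size are all hypothetical:

```python
import random

def stratified_random_sample(population, strata_key, per_stratum):
    """Divide the population into strata, then randomly sample from each stratum."""
    strata = {}
    for person in population:
        strata.setdefault(strata_key(person), []).append(person)
    sample = []
    for members in strata.values():
        sample.extend(random.sample(members, min(per_stratum, len(members))))
    return sample

# Hypothetical population of 100 testtakers split across two grade levels
population = [{"id": i, "grade": "7" if i < 50 else "8"} for i in range(100)]
sample = stratified_random_sample(population, lambda p: p["grade"], per_stratum=5)
# Each stratum contributes 5 testtakers, so the sample has 10 members
```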
86
Selecting individuals or groups from a population based on predetermined criteria and the researcher's judgment
Purposive Sampling
87
Occurs when data are gathered opportunistically as the opportunity arises, without the primary intention of conducting formal research
Incidental Sampling
88
Basic steps: 1. Define the test and its purpose. 2. Identify the target population. 3. Collect data from the target population. 4. Collect demographic information. 5. Score the test. 6. Analyze the data. 7. Create norm tables or charts. 8. Interpret the norms. 9. Publish the norms. 10. Regularly update norms. 11. Ensure that ethical guidelines are followed.
Developing Norms
89
Types of Norms
1. Percentile 2. Age Norms 3. Grade Norms 4. National Norms 5. National Anchor Norms 6. Subgroup Norms 7. Local Norms
90
Divides the distribution into 100 equal parts
Percentile
91
Used in the context of norms to indicate the relative standing or performance of an individual or a group within a larger population.
Percentile
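A percentile rank can be computed directly from a norm group's scores. A minimal Python sketch with made-up scores (the norm group below is invented for illustration):

```python
def percentile_rank(score, norm_scores):
    """Percentage of scores in the norm group that fall below the given score."""
    below = sum(1 for s in norm_scores if s < score)
    return 100.0 * below / len(norm_scores)

# Hypothetical norm group of 10 testtakers
norm_group = [55, 60, 62, 65, 70, 72, 75, 80, 85, 90]
percentile_rank(72, norm_group)  # 5 of 10 scores fall below 72 -> 50.0
```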
92
Based on the principle that individuals of different ages may have varying abilities, characteristics, and developmental stages.
Age Norms
93
Used to evaluate an individual’s performance, development, or behavior in relation to what is considered typical or expected for their age group.
Age Norms
94
Typically used in the context of standardized tests and assessments to evaluate how students in a particular grade are performing in relation to their peers of the same grade.
Grade Norms
95
Used to assess and compare the performance or characteristics of a specific group or population within a given country. Provide a benchmark for understanding how individuals or groups in the country compare to the larger national population in terms of various attributes
National Norms
96
Provide a benchmark for understanding how individuals or groups in the country compare to the larger national population in terms of various attributes
National Norms
97
Designed to serve as common benchmarks that guide the development of educational standards, curricula, and assessments, ensuring that students across different regions or school systems are held to the same standards
National Anchor Norms
98
Derived by examining the data from subgroups of subpopulations that share common characteristics, such as gender, age, ethnicity, socioeconomic status or other demographic factors
Subgroup Norms
99
Used to evaluate and compare the performance of students or educational institutions within a specific local or regional context
Local Norms
100
Typically derived from data collected from schools, districts, or educational institutions within a particular geographic area
Local Norms
101
Compare an individual's performance to that of a norming or reference group
Norm-Referenced
102
Aim to determine how a testtaker’s performance ranks relative to others
Norm-Referenced
103
Scores: percentiles or standard scores
Norm-Referenced
104
Determine whether a student has achieved specific learning objectives, skills, or standards
Criterion-Referenced
105
Focus on the mastery of content or skills
Criterion-Referenced
106
Scores: predefined criterion or standard
Criterion-Referenced
107
Refers to the consistency in measurement
Reliability
108
An index of reliability, a proportion that indicates the ratio between the true score variance on a test and the total variance
Reliability coefficient
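In classical test theory this ratio can be written out numerically; the variance figures below are invented for illustration:

```python
# Classical test theory: total (observed) variance = true variance + error variance
true_variance = 80.0   # hypothetical variance attributable to the trait itself
error_variance = 20.0  # hypothetical variance attributable to measurement error
total_variance = true_variance + error_variance

# Reliability coefficient: ratio of true score variance to total variance
reliability_coefficient = true_variance / total_variance  # 0.8
```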
109
A statistic useful in describing sources of test score variability
Variance
110
Refers to all the factors associated with the process of measuring some variable other than the variable being measured.
MEASUREMENT ERROR
111
caused by unpredictable fluctuations and inconsistencies of other variables in a measurement process
Random Error
112
Error that is typically constant or proportionate to what is presumed to be the true value of the variable being measured
Systematic Error
113
SOURCES OF VARIANCE
TEST CONSTRUCTION TEST ADMINISTRATION TEST SCORING & INTERPRETATION
114
Item sampling or content sampling
TEST CONSTRUCTION
115
Test environment, testtaker variables, examiner-related variables
TEST ADMINISTRATION
116
Scorers and scoring systems
TEST SCORING & INTERPRETATION
117
Obtained by correlating pairs of scores from the same people on two different administrations of the same test
TEST-RETEST RELIABILITY
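The correlation of the two administrations is an ordinary Pearson r. A sketch with fabricated scores for six testtakers:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores on two administrations of the same test
time1 = [10, 12, 9, 15, 11, 14]
time2 = [11, 13, 9, 14, 12, 15]
r_test_retest = pearson_r(time1, time2)  # roughly .93, suggesting stability
```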
118
Appropriate for estimating the reliability of a test that purports to measure something that is relatively stable over time
TEST-RETEST RELIABILITY: Appropriate
119
passage of time
TEST-RETEST RELIABILITY: Possible source of error variance
120
When the interval between testing is greater than 6 months
TEST-RETEST RELIABILITY: Coefficient of stability
121
Degree of relationship between various forms of a test that can be evaluated by means of an alternate-forms or parallel forms coefficient of reliability
PARALLEL-FORMS & ALTERNATE-FORMS COEFFICIENT OF EQUIVALENCE
122
Obtained by administering different versions of an assessment tool (both versions must contain items that probe the same construct) to the same group of individuals at the same time
PARALLEL-FORMS
123
Consistency of test results between two different but equivalent forms of a test. Used when it is necessary to have two forms of the same test (administered at different times)
ALTERNATE FORMS
124
Degree of correlation among all items; based on a single administration of a single form of a test; useful for gauging the homogeneity of the test
INTERNAL CONSISTENCY
125
Obtained by correlating two pairs of scores from equivalent halves of a single test administered once
SPLIT-HALF
126
1. Divide the test into equivalent halves. * Randomly assign items to one or the other half of the test * Odd-even reliability * Divide the test by content 2. Calculate a Pearson r between scores on the two halves of the test. 3. Adjust the half-test reliability using the Spearman-Brown formula. * The Spearman-Brown formula allows a test developer or user to estimate internal consistency from a correlation of two halves of a test. Interpretation: at least .70 or higher to establish reliability
COMPUTATION OF A COEFFICIENT OF SPLIT-HALF RELIABILITY
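The three steps can be sketched in Python using an odd-even split; the 0/1 (right/wrong) item matrix below is hypothetical:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (sqrt(sum((a - mx) ** 2 for a in x)) *
                  sqrt(sum((b - my) ** 2 for b in y)))

def split_half_reliability(item_scores):
    """Odd-even split, Pearson r of the halves, then Spearman-Brown adjustment."""
    odd = [sum(person[0::2]) for person in item_scores]   # items 1, 3, 5, ...
    even = [sum(person[1::2]) for person in item_scores]  # items 2, 4, 6, ...
    r_half = pearson_r(odd, even)
    return 2 * r_half / (1 + r_half)  # Spearman-Brown for the full-length test

# Hypothetical 0/1 scores of five testtakers on six items
scores = [
    [1, 1, 1, 1, 1, 1],
    [1, 1, 0, 1, 1, 0],
    [1, 0, 0, 1, 0, 0],
    [0, 0, 0, 1, 0, 0],
    [1, 1, 1, 1, 0, 1],
]
split_half_reliability(scores)  # about .93, above the .70 guideline
```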
127
A statistic of choice for determining the inter-item consistency of dichotomous items
Kuder-Richardson formula 20 or KR-20
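KR-20 can be computed as (k / (k - 1)) * (1 - sum of p*q over items / total-score variance). A sketch with an invented right/wrong response matrix:

```python
def kr20(item_scores):
    """Kuder-Richardson formula 20 for dichotomous (0/1) items."""
    n = len(item_scores)          # number of testtakers
    k = len(item_scores[0])       # number of items
    totals = [sum(person) for person in item_scores]
    mean = sum(totals) / n
    var_total = sum((t - mean) ** 2 for t in totals) / n  # population variance
    pq = 0.0
    for j in range(k):
        p = sum(person[j] for person in item_scores) / n  # proportion passing item j
        pq += p * (1 - p)
    return (k / (k - 1)) * (1 - pq / var_total)

# Hypothetical right/wrong responses of six testtakers to five items
scores = [
    [1, 1, 1, 1, 1],
    [1, 1, 1, 1, 0],
    [1, 1, 1, 0, 0],
    [1, 1, 0, 0, 0],
    [1, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
]
kr20(scores)  # 5/6, i.e. about .83
```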
128
Appropriate for use on tests containing non-dichotomous items. Calculated to help answer questions about how similar sets of data are.
COEFFICIENT ALPHA
129
Note: It is possible to conceive of data sets that would yield negative values of alpha. If this happens, the alpha coefficient should be reported as 0
COEFFICIENT ALPHA
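Coefficient alpha is (k / (k - 1)) * (1 - sum of item variances / total-score variance), with negative values reported as 0. A sketch with invented 1-5 ratings:

```python
def cronbach_alpha(item_scores):
    """Coefficient alpha for non-dichotomous items; negative values reported as 0."""
    n = len(item_scores)
    k = len(item_scores[0])

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(person) for person in item_scores]
    item_vars = sum(variance([person[j] for person in item_scores]) for j in range(k))
    alpha = (k / (k - 1)) * (1 - item_vars / variance(totals))
    return max(0.0, alpha)

# Hypothetical 1-5 ratings of five testtakers on three items
ratings = [[3, 4, 3], [4, 5, 4], [2, 2, 3], [5, 5, 5], [1, 2, 1]]
cronbach_alpha(ratings)  # about .97 -> highly consistent items
```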
130
Focuses on the degree of difference that exists between item scores
Average Proportional Distance (APD)
131
1. Calculate the absolute differences between scores for all the items. 2. Average the differences between scores. 3. Obtain the APD by dividing the average difference between scores by the number of response options on the test, minus one. * An obtained value of .2 or lower: excellent internal consistency * A value of .25 to .2: acceptable range
COMPUTATION OF AVERAGE PROPORTIONAL DISTANCE
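The three APD steps translate directly into code; the 5-option item scores below are invented for illustration:

```python
from itertools import combinations

def average_proportional_distance(item_scores, n_options):
    """APD: mean absolute difference between each pair of item scores,
    divided by (number of response options - 1). Lower = more consistent."""
    diffs = [abs(a - b)
             for person in item_scores
             for a, b in combinations(person, 2)]
    return (sum(diffs) / len(diffs)) / (n_options - 1)

# Hypothetical scores of four testtakers on three 5-point items
scores = [[4, 4, 5], [3, 3, 3], [5, 4, 4], [2, 2, 3]]
average_proportional_distance(scores, n_options=5)  # 0.125 -> excellent (.2 or lower)
```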
132
Also known as "scorer reliability," "judge reliability," "observer reliability," or "inter-rater reliability" * Degree of agreement or consistency between two or more scorers with regard to a particular measure * If consensus can be demonstrated in the ratings, researchers can be more confident regarding the accuracy of the ratings and their conformity with the established rating system. * Method: calculate a coefficient of correlation
INTER-SCORER RELIABILITY
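The coefficient of correlation between two scorers is again a Pearson r; the ratings below are fabricated:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (sqrt(sum((a - mx) ** 2 for a in x)) *
                  sqrt(sum((b - my) ** 2 for b in y)))

# Hypothetical 1-5 ratings assigned by two scorers to the same six assessees
rater_a = [4, 3, 5, 2, 4, 3]
rater_b = [4, 3, 4, 2, 5, 3]
pearson_r(rater_a, rater_b)  # about .82 -> substantial agreement between scorers
```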