MIDTERM Flashcards

(187 cards)

1
Q

What is Psychological Assessment and Testing all about?

A
  1. To measure behavior (overt and covert)
  2. To describe and predict behavior and personality (traits, states, personality types, attitudes, interests, values, etc.)
  3. To determine signs and symptoms of dysfunctionality (for case formulation, diagnosis, and basis for intervention/plan for action)
2
Q

Gathering and integration of psychology-related data for the purpose of making a psychological evaluation, accomplished through the use of tools (tests, interviews, case studies, behavioral observations) and specially designed measurement procedures.

A

Psychological Assessment

3
Q

Process of measuring psychology-related variables by means of devices or procedures designed to obtain a sample of behavior.

A

Psychological Testing

4
Q

A standardized measuring device or procedure used to describe the ability, knowledge, skills, or attitudes of an individual.

A

Psychological Test(s)

5
Q

The process of quantifying the amount or number of occurrences of an event, situation, phenomenon, object, or person.

A

Measurement

6
Q

The process of synthesizing the results of measurement with reference to some norms and standards.

A

Assessment

7
Q

Tools of Psychological Assessment

A

1) Psychological Tests
2) Interviews
3) Portfolio Assessment
4) Case-History Data
5) Behavioral Observation
6) Role Play Tests
7) Computers as Tools

8
Q

The process of judging the worth of any event, situation, phenomenon, object, or person, which concludes with a particular decision.

A

Evaluation

9
Q

A tool of assessment in which information is gathered through direct, reciprocal communication.

  • Ideally conducted face to face
  • Over the telephone, vocal cues such as pitch and pauses are signs of emotion
A

Interviews

10
Q

Three types of interviews

A

Structured
Semi-structured
Unstructured

11
Q

A method of gathering information through direct communication that involves:

A

1) Reciprocal exchange
2) Take note of verbal and non-verbal actions—facial expressions, eye contact and general reaction to the demand of the interview

12
Q

A type of work sample used as an assessment tool—a sample of one's ability and accomplishments.

  • In education, writing samples serve as tools for hiring instructors.
A

Portfolio Assessment

13
Q

Records, transcripts, and other accounts in written, pictorial, or other form that preserve archival information, official and informal accounts, and other data and items relevant to the assessee

  • Files/excerpts from files maintained at institutions and agencies
  • Letters, written correspondence, photos, family albums, newspaper and magazine clippings, home movies, and audio tapes
  • Shed light on an individual's past and current adjustment, as well as on the events and circumstances that may have contributed to any changes in adjustment
A

Case-History Data

14
Q

Monitoring the actions of others or oneself by visual or electronic means while recording quantitative and/or qualitative information regarding those actions—can be used as a diagnostic aid (inpatient facilities, behavioral research labs, classrooms)

A

Behavioral Observation

15
Q

Tool of assessment wherein assessees are directed to act as if they were in a particular situation—used when assessment in a real setting is impractical.

  • With substance abusers, this can be used as both a tool for assessment and a measure of outcome
A

Role Play Tests

16
Q
  • As test administrators, computers do much more than replace the “equipment” that was so widely used in the past (a number 2 pencil).
  • Computers can serve as test administrators (online or off) and as highly efficient test scorers. Within seconds they can derive not only test scores but patterns of test scores.
A

Computers as Tools

17
Q

Types of Tests Based on the Number of Examinees

A

1) Individual Test
2) Group Test

18
Q

The examiner/test administrator gives the test to only one person

A

Individual Test

19
Q

The examiner/test administrator gives the test to more than one person

A

Group Test

20
Q

Tests Based on the Type of Behavior They Measure

A

1) Ability Test
a) Achievement Test
b) Aptitude Test
c) Intelligence Test
2) Personality Test
3) Interest Test

21
Q
  • Cognitive, performance-based measures
  • Measure what people can do
  • Pertain to capacity or potential; items are scored according to speed, accuracy, or both
  • Variable measurement
  • Presence of right and wrong answers
  • Examples: IQ, aptitude, and achievement tests
A

Ability Test

22
Q

Measures previous learning

A

Achievement Test

23
Q

Measures potential for learning or acquiring a specific skill

A

Aptitude Test

24
Q

General potential to solve problems, adapt to changing circumstances, think abstractly, and profit from experience.

A

Intelligence Test

25
It has to do with an individual's covert and overt dispositions, such as a person's tendency to act in a certain way or respond in a certain way in a given situation.
Personality Test
26
Provides a self-report statement to which the person responds "True" or "False", "Yes" or "No".
Structured Personality Test
27
Provides an ambiguous test stimulus
Projective Personality Test
28
Originally developed for vocational guidance but later found its way to employee selection and career development
Interest Test
29
Three-Tier System of Psychological Tests
1) Level A
2) Level B
3) Level C
30
These tests can be administered, scored, and interpreted by responsible non-psychologists who have carefully read the manual and are familiar with the overall purpose of testing.
- Educational achievement tests fall into this category
- Examples: achievement tests and other specialized (skill-based) aptitude tests
Level A
31
These tests require technical knowledge of test construction and use, and appropriate advanced coursework in psychology and related courses.
- Examples: group intelligence tests and personality tests
Level B
32
These tests require an advanced degree in psychology or a license as a psychologist, plus advanced training/supervised experience with the particular test.
- Examples: projective tests, individual intelligence tests, diagnostic tests
Level C
33
Testing was instituted as a means of selecting which of the many applicants would obtain government jobs
Chinese Civilization
34
Tests were used to measure intelligence and physical skills
Greek Civilization
35
These universities relied on formal exams in conferring degrees and honors
European Universities
36
Believed that despite our similarities, no two humans are exactly alike. Some of these individual differences are more "adaptive" than others, and these differences lead to more complex, intelligent organisms over time.
Charles Darwin
37
He established the testing movement; introduced the anthropometric records of students; pioneered the application of the rating-scale and questionnaire methods and the free-association technique; and pioneered the use of statistical methods for the analysis of psychological tests.
- He also noted that persons with mental retardation tend to have a diminished ability to discriminate among heat, cold, and pain.
Francis Galton
38
Visual discrimination of length
Galton Bar
39
Determines the highest audible pitch
Galton whistle
40
Mathematical models of the mind; father of pedagogy as an academic discipline; went against Wundt
Johann Friedrich Herbart
41
Sensory thresholds; just noticeable differences (JND)
Ernst Heinrich Weber
42
Mathematics of sensory thresholds of experience; founder of psychophysics; considered one of the founders of experimental psychology
Gustav Theodor Fechner
43
First to relate sensation and stimulus
Weber-Fechner Law
44
Considered one of the founders of Psychology; first to set up a psychology laboratory
Wilhelm Wundt
45
Succeeded Wundt; brought Structuralism to America; his brain is still on display in the psychology department at Cornell
Edward Titchener
46
Pioneer of human ability testing; conducted seminars that changed the field of psychological testing
Guy Montrose Whipple
47
Large contributor of factor analysis; approach to measurement was termed as the law of comparative judgment
Louis Leon Thurstone
48
Provided the first accurate description of mental retardation as an entity separate from insanity
Jean Esquirol
49
Pioneered modern educational methods for teaching people who are mentally retarded/intellectually disabled
Edouard Seguin
50
An American psychologist who coined the term “mental test”
James McKeen Cattell
51
The father of IQ testing
Alfred Binet
52
Introduced the concept of IQ as determined by the mental age and chronological age
Lewis M. Terman
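As background, the ratio IQ referred to above is conventionally computed from mental age (MA) and chronological age (CA) as:

```latex
\mathrm{IQ} = \frac{\mathrm{MA}}{\mathrm{CA}} \times 100
```

For example, a 10-year-old who performs at the level of a typical 12-year-old has a ratio IQ of (12/10) × 100 = 120.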
53
Introduced the two-factor theory of intelligence
- General ability or "g": required for performance on mental tests of all kinds
- Special abilities or "s": required for performance on mental tests of only one kind
Charles Spearman
54
Primary Mental Abilities
Thurstone
55
Wechsler Intelligence Tests (WISC, WAIS)
Wechsler
56
Introduced the components of "g"
- Fluid "g": the ability to see relationships, as in analogies and letter and number series; also known as primary reasoning ability, which decreases with age
- Crystallized "g": acquired knowledge and skills, which increase with age
Raymond Cattell
57
Theorized the “many factor intelligence theory” (6 types of operations X 5 types of contents X 6 types of products = 180 elementary abilities)
Guilford
58
Introduced the 3 “g’s” - Academic g, Practical g, and Creative g
Sternberg
59
Conceptualized the multiple intelligences theory
Howard Gardner
60
Translated the Binet-Simon test into English
Henry Goddard
61
Pioneered the first group intelligence tests, known as the Army Alpha (for literate examinees) and Army Beta (for functionally illiterate examinees)
Robert Yerkes
62
Introduced multiple-choice and other "objective" item types
Arthur S. Otis
63
Devised the Personal Data Sheet (known as the first personality test), which aimed to identify soldiers at risk for shell shock
Robert S. Woodworth
64
Slow rise of projective testing - _________ Inkblot Test
Herman Rorschach
65
Thematic Apperception Test
Henry Murray & Christiana Morgan
66
Structured tests were being developed based on their better psychometric properties
Early 1940’s
67
16 Personality Factors
Raymond B. Cattell
68
Big 5 Personality Factors
McCrae & Costa
69
Panukat ng Ugali at Pagkatao or PUP
Virgilio Enriquez
70
Panukat ng Katalinuhang Pilipino or PKP
Aurora R. Palacio
71
Panukat ng Pagkataong Pilipino or PPP
Anadaisy Carlota
72
Masaklaw na Panukat ng Loob or Mapa ng Loob
Gregorio E.H. Del Pilar
73
Philippine Thematic Apperception Test (PTAT)
Alfredo Lagmay
74
Initial, taken-for-granted ideas or thoughts of psychologists ("no-brainers")
Some Assumptions about Psychological Testing and Assessment / Basic Assumptions
75
Some Assumptions about Psychological Testing and Assessment
Assumption 1: Psychological Traits and States Exist
Assumption 2: Psychological Traits and States Can Be Quantified and Measured
Assumption 3: Test-Related Behavior Predicts Non-Test-Related Behavior
Assumption 4: Tests and Other Measurement Techniques Have Strengths and Weaknesses
Assumption 5: Various Sources of Error Are Part of the Assessment Process
Assumption 6: Testing and Assessment Can Be Conducted in a Fair and Unbiased Manner
Assumption 7: Testing and Assessment Benefit Society
76
Defined as “any distinguishable, relatively enduring way in which one individual varies from another” - Specific and unique
Trait
77
Distinguish one person from another but are relatively less enduring - Arise depending on context - Part of the personality
States
78
An informed, scientific concept developed or constructed to describe or explain behavior.
Construct
79
Refers to an observable action or the product of an observable action, including test- or assessment-related responses.
Overt Behavior
80
Reminder that a trait is not expected to be manifested in behavior 100% of the time. - Thus, it is important to be aware of the context or situation in which a particular behavior is displayed.
Relatively enduring
81
The test score is presumed to represent the strength of the targeted ability or trait or state and is frequently based on _______________
Cumulative Scoring
82
May refer to either: 1) a sample of behaviors from all possible behaviors that could conceivably be indicative of a particular construct or 2) a sample of test items from all possible items that could conceivably be used to measure a particular construct.
Domain Sampling
83
Refers to a long-standing assumption that factors other than what a test attempts to measure will influence performance on the test. - Test scores are always subject to questions about the degree to which the measurement process includes _______.
Error
84
The component of a test score attributable to sources other than the trait or ability measured.
Error Variance
85
An assumption is made that each testtaker has a true score on a test that would be obtained but for the random action of measurement error.
Classical or True Score Theory
86
- Assesses what a person usually does - There are no right or wrong answers - Examples: values, personality, and interest tests
Test of Typical Performance
87
Specific Types of Psychological Tests
1) Intelligence Test
2) Aptitude Test
3) Achievement Test
4) Personality Test
5) Projective Test
6) Interest Test
7) Attitude Inventory
8) Values Inventory
9) Diagnostic Test (for remedial purposes)
10) Power Test — items ordered easy to difficult (measures ability)
11) Speed Test — items of uniform difficulty (measures speed)
12) Creativity Test
13) Neuropsychological Test
88
- Decision theory as applied to psychological testing and measurement - Making inferences and decisions
Base rate Hit rate Miss rate
89
Is the extent to which a particular trait, behavior, characteristic, or attribute exists in the population (expressed as a proportion). - Example: 10 out of 100 people have depression
Base rate
90
May be defined as the proportion of people a test accurately identifies as possessing or exhibiting a particular trait, behavior, characteristic, or attribute. - Could refer to the proportion of people accurately predicted to be able to perform work at the graduate school level or to the proportion of neurological patients accurately identified as having a brain tumor. - Who are the 10 out of 100?
Hit rate
91
May be defined as the proportion of people the test fails to identify as having, or not having, a particular characteristic or attribute. - Amounts to an inaccurate prediction.
Miss rate
92
The category of misses may be further subdivided:
1) False Positive 2) False Negative
93
- Is a miss wherein the test predicted that the testtaker did possess the particular characteristic or attribute being measured when in fact the testtaker did not. - Accepting what should not be accepted
Type 1 / False Positive
94
- Is a miss wherein the test predicted that the testtaker did not possess the particular characteristic or attribute being measured when the testtaker actually did. - Rejecting what should not be rejected
Type 2 / False Negative
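A minimal sketch of how these rates relate, using the deck's 10-out-of-100 depression example. The specific counts below (8 true positives, 4 false positives, 2 false negatives) are hypothetical, and "hit rate" is computed here under one common convention, overall correct classifications:

```python
def rates(true_pos: int, false_pos: int, false_neg: int, true_neg: int):
    """Base, hit, and miss rates from a 2x2 table of test decisions vs. actual status."""
    total = true_pos + false_pos + false_neg + true_neg
    base_rate = (true_pos + false_neg) / total   # proportion who truly have the attribute
    hit_rate = (true_pos + true_neg) / total     # proportion the test classifies correctly
    miss_rate = (false_pos + false_neg) / total  # false positives + false negatives
    return base_rate, hit_rate, miss_rate

# Hypothetical counts: 100 people, 10 truly depressed; the test flags 12,
# of whom 8 truly are (so 4 false positives), and misses 2 true cases.
base, hit, miss = rates(true_pos=8, false_pos=4, false_neg=2, true_neg=86)
```

Note that the miss rate is simply one minus the hit rate, and the misses split into the false positives and false negatives described on the cards above.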
95
Basic Principles in the Use of Psychological Test
1) Tests are samples of behavior
2) Tests do not reveal traits and capacities directly
3) Psychological maladjustment selectively and differentially affects scores
4) The psychometric and projective approaches are mutually complementary
96
Steps in Clinical Psychology Assessment
1) Deciding what is being assessed
2) Determining the goals of assessment
3) Selecting standards for making decisions
4) Collecting assessment data
5) Making assessment judgments
6) Communicating results
97
Approaches Used in Psychological Assessment and Testing
Nomothetic
Idiographic
98
- General / population focus
- Norms
- Attempts to generalize
- Objective
- Numerical data
Nomothetic
99
- Focus on one individual
- Subjective experiences
- Comparable only to itself
Idiographic
100
Cross Cultural Testing Parameters:
1) Language
2) Test Content
3) Education
4) Speed
101
A test or assessment process designed to minimize the influence of culture with regard to various aspects of the evaluation procedures, such as administration instructions, item content, responses required of testtakers, and interpretations made from the resulting data.
Culture-Fair Intelligence Test
102
May be defined as the extent to which a test incorporates the vocabulary, concepts, traditions, knowledge, and feelings associated with a particular culture.
Culture loading
103
- Aims to isolate nature
- Interactions between nature and nurture are not relative but cumulative
Culture-Free Test
104
The act of assigning numbers or symbols to characteristics of things (people, events, whatever) according to rules.
Measurement
105
Is a set of numbers (or other symbols) whose properties model empirical properties of the objects to which the numbers are assigned.
Scale
106
Primary Scales of Measurement
Nominal
Ordinal
Interval
Ratio
107
- The simplest and weakest form of measurement
- Involves classification or categorization based on one or more distinguishing characteristics; all things measured must be placed into mutually exclusive and exhaustive categories (for example, classifying people by gender in a study comparing the performance of men and women on some test)
- No magnitude, equal intervals, or absolute zero
- Nonparametric, but can be quantified
Nominal
108
- Permits classification
- Rank ordering on some characteristic is also permissible
- Has magnitude but no equal intervals or absolute zero
- Nonparametric
- Appropriate average: median
Ordinal
109
- Contains equal intervals between numbers
- Each unit on the scale is exactly equal to any other unit on the scale
- Contains no absolute zero point
- Parametric
Interval
110
- Has a true zero point; the strongest scale
- All mathematical operations can be meaningfully performed because there are equal intervals between the numbers on the scale as well as a true or absolute zero point
- Subsumes the properties of the nominal, ordinal, and interval scales
- Parametric
Ratio
111
- To describe the data - Merely describes the results
Descriptive Statistics
112
May be defined as a set of test scores arrayed for recording or study.
Distribution
113
Is a straightforward, unmodified accounting of performance that is usually numerical. - may reflect a simple tally, as in number of items responded to correctly on an achievement test.
Raw Score
114
- All scores are listed alongside the number of times each score occurred. - Distribution of raw scores
Frequency of Distributions / Frequency Distributions
115
- Is a statistic that indicates the average or midmost score between the extreme scores in a distribution. - Mean, median, and mode
Measures of Central Tendency
116
- Statistics that describe the amount of variation in a distribution
- Examples: range, interquartile and semi-interquartile ranges, standard deviation
Measures of Variability
117
Indication of how scores in a distribution are scattered or dispersed.
Variability
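As an illustration of the two card groups above, the usual measures of central tendency and variability can be computed with Python's standard library (the scores here are hypothetical):

```python
import statistics

scores = [85, 90, 90, 95, 100, 105, 110]  # hypothetical raw scores

mean = statistics.mean(scores)            # arithmetic average
median = statistics.median(scores)        # midmost score
mode = statistics.mode(scores)            # most frequent score
score_range = max(scores) - min(scores)   # simplest measure of variability
sd = statistics.pstdev(scores)            # population standard deviation
```

Each value summarizes the same distribution from a different angle: the first three locate its center, the last two describe its spread.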
118
- An indication of how the measurements in a distribution are distributed. - Distributions can be characterized by their _________, or the nature and extent to which symmetry is absent.
Skewness
119
When relatively few of the scores fall at the high end of the distribution. - Low scores predominate
Positive skew / Positively skewed
120
When relatively few of the scores fall at the low end of the distribution. - High scores predominate
Negative skew / Negatively skewed
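The two skew cards above can be checked numerically (illustrative numbers only): in a positively skewed distribution, the mean is pulled toward the long high tail and so exceeds the median:

```python
import statistics

# A long right tail (a few very high scores) drags the mean above the median.
positively_skewed = [1, 2, 2, 3, 3, 3, 20]

mean = statistics.mean(positively_skewed)
median = statistics.median(positively_skewed)
# The mean exceeds the median here, the signature of positive skew;
# in a negatively skewed distribution the relationship reverses.
```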
121
The term testing professionals use to refer to the steepness of a distribution in its center
Kurtosis
122
Relatively flat distribution
Platykurtic
123
Relatively peaked distribution
Leptokurtic
124
Somewhere in the middle / normally distributed
Mesokurtic
125
A bell-shaped, smooth, mathematically defined curve that is highest at its center.
Normal Curve
126
- Normal distribution
- Homogeneous variance
- Interval or ratio data
- Examples: Pearson's correlation, independent-measures t-test, one-way/independent-measures ANOVA, paired t-test, one-way/repeated-measures ANOVA
Parametric Test
127
- Normal distribution is not required
- Homogeneous variance is not required
- Nominal or ordinal data
- Examples: Spearman's correlation, Mann-Whitney U test, Kruskal-Wallis H test, Wilcoxon signed-rank test, Friedman's test
Non-Parametric Test
128
Measures of Correlation
1) Pearson's Product Moment Correlation
2) Spearman Rho's Correlation
3) Kendall's Coefficient of Concordance
4) Phi Coefficient
5) Lambda
129
Parametric test for interval data
Pearson's Product Moment Correlation
130
Non-parametric test for ordinal data
Spearman Rho's Correlation
131
Non-parametric test for ordinal data
Kendall's Coefficient of Concordance
132
Non-parametric test for dichotomous nominal data
Phi Coefficient
133
Non-parametric test for 2 groups (dependent and independent variable) of nominal data
Lambda
134
Measures of Prediction
1) Biserial Correlation
2) Point-Biserial Correlation
3) Tetrachoric Correlation
4) Simple Linear Regression
5) Multiple Linear Regression
6) Ordinal Regression
135
Predictive test for artificially dichotomized and categorical data as criterion with continuous data as predictors
Biserial Correlation
136
Predictive test for genuinely dichotomized and categorical data as criterion with continuous data as predictors
Point-Biserial Correlation
137
Predictive test for dichotomous data in which both the criterion and the predictor are categorical
Tetrachoric Correlation
138
A predictive test which involves one criterion that is continuous in nature with only one predictor that is continuous
Simple Linear Regression
139
A predictive test which involves one criterion that is continuous in nature with more than one continuous predictor
Multiple Linear Regression
140
A predictive test which involves a criterion that is ordinal in nature with more than one continuous predictor
Ordinal Regression
141
Chi-Square Test
1) Goodness of Fit
2) Test of Independence
142
Used to measure differences and involves nominal data and only one variable with 2 or more categories
Goodness of Fit
143
Used to measure correlation and involves nominal data and two variables with two or more categories
Test of Independence
144
Comparison of two groups
1) Paired T-Test
2) Unpaired T-Test
3) Wilcoxon Signed-Rank Test
4) Mann-Whitney U Test
145
A parametric test for paired groups with normal distribution
Paired T-Test
146
A parametric test for unpaired groups with normal distribution
Unpaired T-Test
147
A non-parametric test for paired groups with non-normal distribution
Wilcoxon Signed-Rank Test
148
A non-parametric test for unpaired groups with non-normal distribution
Mann-Whitney U Test
149
Comparison of three or more groups
1) Repeated Measures ANOVA
2) One-Way/Two-Way ANOVA
3) Friedman F Test
4) Kruskal-Wallis H Test
150
A parametric test for matched groups with normal distribution
Repeated Measures ANOVA
151
A parametric test for unmatched groups with normal distribution
One-way/Two-Way ANOVA
152
A non-parametric test for matched groups with non-normal distribution
Friedman F Test
153
A non parametric test for unmatched groups with non-normal distribution
Kruskal-Wallis H Test
154
- The stability or consistency of a measurement
- Goals:
  A) Estimate errors in psychological measurement
  B) Devise techniques to improve testing so that errors are reduced
Reliability
155
Types of Reliability
1) Test-Retest Reliability
2) Parallel-Forms / Alternate Forms Reliability
3) Split-Half Reliability
4) Inter-Rater / Inter-Observer Reliability
5) Standard Error of Measurement
156
- Compares the scores of individuals who have been measured twice with the same instrument
- Not applicable to tests involving reasoning and ingenuity
- A longer interval results in a lower correlation coefficient, while a shorter interval results in a higher correlation
- The ideal interval is 2-4 weeks
- Source of error variance: time sampling
- Utilizes Pearson's r or Spearman's rho
Test-Retest Reliability
157
- The same persons are tested with one form on the first occasion and with another, equivalent form on the second
- The administration of the second, equivalent form takes place either immediately or fairly soon afterward
- The two forms should be truly parallel: independently constructed tests designed to meet the same specifications, containing the same number of items, with items expressed in the same form, covering the same type of content, spanning the same range of difficulty, and having the same instructions, time limits, illustrative examples, format, and all other aspects of the test
- Has the most universal applicability
- For immediate administration, the source of error variance is content sampling
- For delayed administration, the sources of error variance are time sampling and content sampling
- Utilizes Pearson's r or Spearman's rho
Parallel-Forms / Alternate Forms Reliability
158
- Two scores are obtained for each person by dividing the test into equivalent halves (odd-even split or top-bottom split)
- The reliability of the test is directly related to the length of the test
- The source of error variance is content sampling
- Utilizes the Spearman-Brown formula
Split-Half Reliability
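The Spearman-Brown step-up used with split halves estimates full-length reliability as r_full = 2·r_half / (1 + r_half); a minimal sketch, where the half-test correlation of .70 is a hypothetical value:

```python
def spearman_brown(half_test_r: float) -> float:
    """Estimate full-length test reliability from the correlation between two halves."""
    return (2 * half_test_r) / (1 + half_test_r)

# A half-test correlation of .70 steps up to about .82 for the full-length test,
# reflecting the principle that a longer test is more reliable.
full_length = spearman_brown(0.70)
```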
159
- Degree of agreement between raters on a measure
- Source of error variance is inter-scorer differences
- Often utilizes Cohen's kappa statistic
Inter-Rater / Inter-Observer Reliability
160
- An index of the amount of inconsistency, or the amount of expected error, in an individual's score
- The higher the reliability, the lower the SEM
Standard Error of Measurement
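A common formula for the SEM is SD · sqrt(1 − reliability); a small sketch (the SD of 15 and the reliability values are hypothetical) showing the relationship stated on the card, that higher reliability means a lower SEM:

```python
import math

def sem(sd: float, reliability: float) -> float:
    """Standard error of measurement: SD * sqrt(1 - reliability coefficient)."""
    return sd * math.sqrt(1 - reliability)

low_rel_sem = sem(sd=15, reliability=0.60)   # less reliable test -> larger error band
high_rel_sem = sem(sd=15, reliability=0.96)  # 15 * sqrt(0.04) = 3.0
```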
161
Long standing assumption that factors other than what a test attempts to measure will influence performance on the test
Error
162
The component of test score attributable to sources other than the trait or ability being measured
Error Variance
163
Are those sources of error that reside within the individual taking the test (e.g., not studying enough, feeling bad over a missed blind date, forgetting to set the alarm—excuses)
Trait Error
164
Are those sources of error that reside in the testing situation, such as lousy test instructions, a room that is too warm, or missing pages
Method Error
165
A range or band of test scores that is likely to contain the true score
Confidence Interval
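One common way to build such a band (the observed score of 100 and SEM of 3 below are hypothetical) is observed ± z · SEM, with z = 1.96 for an approximately 95% interval:

```python
def confidence_interval(observed: float, sem: float, z: float = 1.96):
    """Band of scores likely to contain the true score (about 95% when z = 1.96)."""
    return observed - z * sem, observed + z * sem

# With an observed score of 100 and an SEM of 3, the band spans roughly 94 to 106.
low, high = confidence_interval(observed=100, sem=3.0)
```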
166
A statistical measure that can aid a test user in determining how large a difference should be before it is considered statistically significant
Standard Error of the Difference
167
A judgment or estimate of how well a test measures what it purports to measure in a particular context
Validity
168
Types of Validity
1) Face Validity
2) Content Validity
3) Criterion-Related Validity
4) Construct Validity
169
The least stringent type of validity: whether a test looks valid to test users, examiners and examinees
Face Validity
170
- Definitions and concepts
- Whether the test covers the behavior domain to be measured, which is built through the choice of appropriate content areas, questions, tasks, and items
Content Validity
171
Issues arising from lack of content validity:
1) Construct Underrepresentation
2) Construct-Irrelevant Variance
172
Failure to capture important components of a construct (e.g., an English test which contains only vocabulary items but no grammar items will have poor content validity.)
Construct Underrepresentation
173
Happens when scores are influenced by factors irrelevant to the construct (e.g., test anxiety, reading speed, reading comprehension, illness)
Construct-Irrelevant Variance
174
Types of Criterion-Related Validity
1) Concurrent Validity
2) Predictive Validity
175
Standard against which a test or a test score is evaluated
Criterion
176
The extent to which test scores may be used to estimate an individual's present standing on a criterion
Concurrent Validity
177
The scores on a test can predict future behavior or scores on another test taken in the future
Predictive Validity
178
- Assembling evidence about what a test means
- A series of statistical analyses showing that one variable is a distinct construct
- Like proving a theory through evidence and statistical analysis
Construct Validity
179
Discriminant Validation
1) Convergent Validity
2) Divergent Validity
180
A test correlates highly with other variables with which it should correlate (example: Extraversion, which is highly correlated with sociability)
Convergent Validity
181
A test does not correlate significantly with variables from which it should differ (example: Optimism which is negatively correlated with Pessimism)
Divergent Validity
182
A statistical technique for analyzing the interrelationships of behavior data
Factor Analysis
183
A method of data reduction
Principal Components Analysis
184
- Items do not make up a factor; rather, the factor should predict scores on the items
- Classified into two types: Exploratory Factor Analysis (for summarizing data) and Confirmatory Factor Analysis (for confirming the generalization of factors)
Common Factor Analysis
185
May be defined as a method of evaluation and a way of deriving meaning from test scores by evaluating an individual's score with reference to a set standard
- Example: to be eligible for a high-school diploma, students must demonstrate at least a sixth-grade reading level
- Has also been referred to as Domain- or Content-Referenced Testing
Criterion-Referenced Testing
186
One way to derive meaning from a test score is to evaluate the test score in relation to other scores on the same test
- Examples: percentile ranks, the NMAT
Norm-Referenced Testing
187
Is an expression of the degree and direction of correspondence between two things.
Correlation