Psychological Testing & Assessment; Assumptions & Norms Flashcards

(126 cards)

1
Q

The gathering and integration of psychology-related data for the purpose of making a psychological evaluation, accomplished through the use of tools.

A

Psychological Assessment

2
Q

The process of measuring psychology-related variables by means of devices or procedures designed to obtain a sample of behavior.

A

Psychological Testing

3
Q

Its objective is typically to answer a referral question, solve a problem or arrive at a decision through the use of tools of evaluation.

A

Psychological Assessment

4
Q

In psychological assessment, the _ is the key to the process of selecting tests and other tools of evaluation, as well as to drawing conclusions.

A

Assessor

5
Q

What is the typical outcome of Psychological testing?

A

Test scores

6
Q

What are the 2 different approaches to assessment?

A

Collaborative Psychological Assessment
Dynamic Assessment

7
Q

In this approach, the assessor and assessee may work as partners from the initial contact through final feedback.

A

Collaborative Psychological Assessment

8
Q

An interactive approach to psychological assessment that usually follows the model: evaluation → intervention of some sort → evaluation. This provides a means for evaluating how the assessee processes or benefits from some type of intervention.

A

Dynamic Assessment

9
Q

A measuring device or procedure.

A

Test

10
Q

Psychological tests almost always involve analysis of _.

A

Sample of behavior

11
Q

The subject matter of the test.

A

Content

12
Q

Form, plan, structure, arrangement and layout of test items as well as related considerations. It also refers to the form in which a test is administered.

A

Format

13
Q

Demonstration of various kinds of tasks demanded of the assessee, as well as trained observation of an assessee’s performance.

A

Administration procedures

14
Q

Tests designed for administration on a _ may require an active and knowledgeable test administrator.

A

One-to-one basis

15
Q

The process of assigning evaluative codes or statements to performance on tests, tasks, interviews, or other behavior samples.

A

Scoring

16
Q

Most tests of intelligence come with _, which are explicit about scoring criteria and the nature of interpretations.

A

Test manuals

17
Q

Refers to how consistently and how accurately a psychological test measures what it purports to measure, as well as the usefulness or practical value that a test or other tool of assessment has for a particular purpose.

A

Psychometric soundness

18
Q

The method of gathering information through direct communication involving reciprocal exchange.

A

Interview

19
Q

Samples of one’s ability and accomplishment.

A

Portfolio

20
Q

Refers to records, transcripts and other accounts in written, pictorial or other form that preserve archival information, official and informal accounts and other data and items relevant to an assessee.

A

Case History Data

21
Q

A report or illustrative account concerning a person or an event that was compiled on the basis of case history data.

A

Case study

22
Q

Monitoring the actions of others or oneself by visual or electronic means while recording quantitative and/or qualitative information regarding those actions.

A

Behavioral observation

23
Q

Observing the behavior of humans in natural settings in which the behavior would typically be expected to occur.

A

Naturalistic Observation

24
Q

A tool of assessment wherein assessees are directed to act as if they were in a particular situation. Assessees may then be evaluated with regard to their expressed thoughts, behaviors, abilities and other variables.

A

Role-Play Tests

25
These can serve as test administrators and as highly efficient test scorers.
Computers
26
What are the different types of scoring reports?
Simple scoring report
Extended scoring report
Interpretive report
Consultative report
Integrative report
27
A scoring report that includes statistical analysis of the testtaker's performance.
Extended scoring report
28
A scoring report that includes numerical or narrative interpretive statements. Some of these contain relatively little interpretation and simply call attention to certain high, low, or unusual scores that need to be focused on.
Interpretive report
29
A scoring report that is usually written in language appropriate for communication between assessment professionals and may provide expert opinion concerning analysis of the data.
Consultative report
30
A scoring report that integrates data from sources other than the test itself into the interpretive report.
Integrative report
31
Create tests or other methods of assessment.
Test developer
32
Psychological tests and assessment methodologies are used by a wide range of professionals. Such a professional is called a _.
Test user
33
Anyone who is the subject of an assessment or an evaluation.
Testtaker
34
Reconstruction of a deceased individual's psychological profile on the basis of archival records, artifacts and interviews previously conducted with the deceased or people who knew him or her.
Psychological autopsy
35
Test that evaluates accomplishment or the degree of learning that has taken place.
Achievement test
36
Tool of assessment used to help narrow down and identify areas of deficit to be targeted for intervention.
Diagnostic test
37
Nonsystematic assessment that leads to the formation of an opinion or attitude.
Informal evaluation
38
In this setting, tests are mandated early in school life to help identify children who may have special needs.
Educational settings
39
In this setting, tests and many other tools of assessment are used to help screen for or diagnose behavior problems.
Clinical settings
40
Group testing in clinical settings is primarily used for _: identifying those individuals who require further diagnostic evaluation.
Screening
41
In this setting, the ultimate objective of many such assessments is the improvement of the assessee in terms of adjustment, productivity or some related variables.
Counseling setting
42
In these settings, a wide range of achievement, aptitude, interest, motivational and other tests may be employed in the decision to hire as well as in related decisions regarding promotion, transfer, job satisfaction and eligibility for further training.
Business and Military Setting
43
What is the well-known application of measurement in governmental settings?
Governmental licensing and certification
44
An observable action or the product of an observable action including test-or assessment-related responses.
Overt Behavior
45
The more a testtaker responds in a particular direction, as keyed by the test manual as correct or consistent with a particular trait, the higher that testtaker is presumed to be on the targeted ability or trait.
Cumulative scoring
46
Understanding behavior that has already taken place; this is typically the goal when psychological tests are used in forensic matters.
Postdict
47
Refers to the fact that factors other than what a test attempts to measure will influence performance on the test.
Error
48
Component of a test score attributable to sources other than the trait or ability measured.
Error variance
49
What are the potential sources of error variance?
Assessee
Assessor
Measuring instruments
50
The test performance data of a particular group of testtakers that are designed for use as a reference when evaluating or interpreting individual test scores.
Norms
51
The group of people whose performance on a particular test is analyzed for reference in evaluating the performance of individual testtakers.
Normative sample
52
The process of deriving norms.
Norming
53
A method of evaluation and a way of deriving meaning from test scores by evaluating an individual testtaker's score and comparing it to the scores of a group of testtakers.
Norm-referenced testing and assessment
54
The process of administering a test to a representative sample of testtakers for the purpose of establishing norms.
Standardization
55
In the process of developing a test, a test developer has targeted some defined group as the population for which the test is designed.
Sampling
56
The complete universe or set of individuals with at least one common, observable characteristic.
Population
57
A portion of the universe of people deemed to be representative of the whole population.
Sample
58
The process of selecting the portion of that universe deemed to be representative of the whole population.
Sampling
59
A sampling method in which differences with respect to some characteristics of subgroups within a defined population are proportionately represented in the sample. It helps prevent sampling bias and ultimately aids in the interpretation of findings.
Stratified sampling
60
Arbitrarily selecting some people because they are believed to be representative of the population.
Purposive sampling
61
People who are most available to participate in the study.
Incidental sample or convenience sample
62
What are the 6 types of norms?
Percentile
Developmental Norms
National Norms
National Anchor Norms
Subgroup Norms
Local Norms
63
An expression of the percentage of people whose score on a test falls below a particular raw score.
Percentile
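The percentile definition on this card can be illustrated with a minimal Python sketch (the function name and scores are hypothetical; note that some conventions count scores at or below the raw score instead of strictly below):

```python
def percentile_rank(raw_score, reference_scores):
    """Percentage of scores in the normative group that fall below raw_score."""
    below = sum(1 for s in reference_scores if s < raw_score)
    return 100.0 * below / len(reference_scores)

norm_group = [10, 12, 15, 15, 18, 20, 22, 25, 28, 30]  # hypothetical raw scores
print(percentile_rank(22, norm_group))  # 60.0: six of the ten scores fall below 22
```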
64
Applied broadly to norms developed on the basis of any trait, ability, skill or other characteristic that is presumed to develop, deteriorate or otherwise be affected by chronological age, school grade or stage of life.
Developmental Norms
65
A developmental norm that indicates the average performance of different samples of testtakers who were at various ages at the time the test was administered.
Age norms
66
A developmental norm designed to indicate the average test performance of testtakers in a given school grade.
Grade norms
67
Derived from a normative sample that was nationally representative of the population at the time the norming study was conducted.
National Norms
68
A type of norms that provides some stability to test scores by anchoring them to other test scores.
National anchor norms
69
A normative sample that is segmented by any of the criteria initially used in selecting subjects for the sample.
Subgroup norms
70
A type of norm that provides normative information with respect to the local population's performance on some test.
Local Norms
71
The distribution of scores obtained on the test from one group of testtakers is used as the basis for the calculation of test scores for future administrations of the test.
Fixed-reference group scoring system
72
Evaluating the test score in relation to other scores on the same test. The usual area of focus is how an individual performed relative to other people who took the test.
Norm-referenced
73
A method of evaluation and a way of deriving meaning from test scores by evaluating an individual's score with reference to a set standard.
Criterion referenced testing and assessment
74
It is an index of reliability, a proportion that indicates the ratio between the true score variance on a test and the total variance.
Reliability coefficient
75
The degree of the relationship between various forms of a test that can be evaluated by means of an alternate-forms or parallel-forms coefficient of reliability.
Coefficient of Equivalence
76
The means and the variances of the observed test scores are equal.
Parallel forms
77
The estimate of the extent to which item sampling and other errors have affected test scores on versions of the same test when, for each form of the test, the means and variances of observed test scores are equal.
Parallel Forms reliability
78
Typically designed to be equivalent with respect to variables such as content and level of difficulty.
Alternate forms
79
Refers to an estimate of the extent to which these different forms of the same test have been affected by item sampling error or other error.
Alternate forms reliability
80
It refers to the degree of correlation among all the items on a scale. Calculated from a single administration of a single form of a test.
Internal consistency reliability
81
An index of _ is useful in assessing the homogeneity of the test.
Inter-item consistency
82
The statistic of choice for determining the inter-item consistency of dichotomous items, primarily those items that can be scored right or wrong.
Kuder-Richardson Formula 20 or KR-20
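The KR-20 formula, KR-20 = (k / (k − 1)) × (1 − Σpq / σ²), can be sketched as follows (the function name and 0/1 item data are hypothetical, and population variance of total scores is assumed):

```python
def kr20(item_matrix):
    """Kuder-Richardson Formula 20 for dichotomous (right/wrong, 0/1) items.

    item_matrix: one row per testtaker, one 0/1 entry per item.
    """
    k = len(item_matrix[0])                       # number of items
    n = len(item_matrix)                          # number of testtakers
    totals = [sum(row) for row in item_matrix]    # each testtaker's total score
    mean = sum(totals) / n
    var_total = sum((t - mean) ** 2 for t in totals) / n  # population variance assumed
    sum_pq = 0.0
    for i in range(k):
        p = sum(row[i] for row in item_matrix) / n  # proportion passing item i
        sum_pq += p * (1 - p)
    return (k / (k - 1)) * (1 - sum_pq / var_total)

# hypothetical data: 4 testtakers x 3 items
print(kr20([[1, 1, 1], [1, 1, 0], [1, 0, 0], [0, 0, 0]]))  # 0.75
```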
83
It is appropriate for use on tests containing nondichotomous items.
Coefficient alpha
84
Because negative values of alpha are theoretically impossible, it is recommended under such circumstances that the alpha coefficient be reported as _.
Zero
85
It is a measure used to evaluate the internal consistency of a test that focuses on the degree of difference that exists between item scores.
Average proportional distance
86
The higher the reliability of a test, the _ the SEM/SEE.
Lower
87
Yield insights regarding a particular population of test takers as compared to the norming sample described in a test manual.
Local validation studies
88
It describes a judgment of how adequately a test samples behavior representative of the universe of behavior that the test was designed to sample.
Content validity
89
He developed a formula termed content validity ratio.
C.H. Lawshe
90
A method for gauging agreement among raters or judges regarding how essential a particular item is.
Quantification of content validity
91
When fewer than half the panelists indicate "essential" in content validity ratio.
Negative CVR
92
When exactly half the panelists indicate “essential” in content validity ratio.
Zero CVR
93
When more than half the panelists indicate “essential” in content validity ratio.
Positive CVR
94
The content validity ratio ranges between _.
-1.00 and +1.00
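Lawshe's formula, CVR = (ne − N/2) / (N/2), can be sketched to show the negative, zero, and positive cases described in cards 91 to 93 (the function name and panel counts are hypothetical):

```python
def content_validity_ratio(n_essential, n_panelists):
    """Lawshe's CVR, where n_essential is the number of panelists rating
    an item "essential" and n_panelists is the total panel size."""
    half = n_panelists / 2
    return (n_essential - half) / half

print(content_validity_ratio(8, 10))  # 0.6: more than half say "essential" (positive CVR)
print(content_validity_ratio(5, 10))  # 0.0: exactly half (zero CVR)
print(content_validity_ratio(2, 10))  # -0.6: fewer than half (negative CVR)
```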
95
The standard against which a test or a test score is evaluated.
Criterion
96
A judgment of how adequately a test score can be used to infer an individual's most probable standing on some measure of interest.
Criterion- related validity
97
What are the 3 characteristics of a criterion?
Relevant
Valid
Uncontaminated
98
The term applied to a criterion measure that has been based, at least in part, on predictor measures.
Criterion contamination
99
It is a type of criterion-related validity that indicates the extent to which test scores may be used to estimate an individual's present standing on a criterion.
Concurrent validity
100
It is a type of criterion-related validity. It is the measure of the relationship between the test scores and a criterion measure obtained at a future time.
Predictive validity
101
Judgments of criterion-related validity are based on 2 types of statistical evidence:
Validity coefficient
Expectancy data
102
A correlation coefficient that provides a measure of the relationship between test scores and scores on the criterion measure.
Validity coefficient
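Since a validity coefficient is a correlation between test scores and criterion scores, it can be sketched as a Pearson r (the function name and the perfectly linear data are hypothetical):

```python
def pearson_r(test_scores, criterion_scores):
    """Pearson correlation between test scores and criterion-measure scores."""
    n = len(test_scores)
    mx = sum(test_scores) / n
    my = sum(criterion_scores) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(test_scores, criterion_scores))
    sx = sum((x - mx) ** 2 for x in test_scores) ** 0.5
    sy = sum((y - my) ** 2 for y in criterion_scores) ** 0.5
    return cov / (sx * sy)

# hypothetical, perfectly linear data: r is (up to rounding) 1.0
print(round(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]), 6))  # 1.0
```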
103
The degree to which an additional predictor explains something about the criterion measure that is not explained by predictors in use.
Incremental validity
104
Table that illustrates the likelihood that testtakers will score within some interval of scores on a criterion measure, an interval that may be seen as "passing" or "acceptable".
Expectancy table
105
A graphic representation of an expectancy table.
Expectancy chart
106
A judgment about the appropriateness of inferences drawn from test scores regarding individual standings on a variable called a construct.
Construct Validity
107
An informed, scientific idea developed or hypothesized to describe or explain behavior. Unobservable, presupposed traits that a test developer may invoke to describe test behavior or criterion performance.
Construct
108
Viewed as the unifying concept for all validity evidence.
Construct validity
109
It refers to how uniform a test is in measuring a single concept.
Homogeneity
110
It may be used in estimating the homogeneity of a test composed of multiple-choice items.
Coefficient alpha
111
What are the evidence of construct validity?
Homogeneity
Changes with age
Pretest-posttest changes
Distinct groups
Convergent evidence
Discriminant evidence
Factor Analysis
112
A validity coefficient showing little relationship between test scores and/or other variables with which scores on the test being construct validated should not theoretically be correlated. This provides what kind of evidence?
Discriminant evidence
113
An experimental technique useful for examining both convergent and discriminant validity. It is the matrix or table that results from correlating variables (traits) within and between methods.
Multitrait-multimethod matrix
114
It is designed to identify factors or specific variables that are typically attributes, characteristics or dimensions on which people may differ. It is employed as a data reduction method in which several sets of scores and the correlations between them are analyzed.
Factor analysis
115
Factor analysis is conducted on three bases:
Exploratory factor analysis
Confirmatory factor analysis
Factor loading
116
A factor analysis that typically entails estimating or extracting factors, deciding how many factors to retain, and rotating factors to an interpretable orientation.
Exploratory factor analysis
117
Factor analysis wherein the researchers test the degree to which a hypothetical model includes factors that fit the actual data.
Confirmatory factor analysis
118
It conveys information about the extent to which the factor determines the test scores.
Factor loading
119
High factor loadings would provide _ evidence of construct validity.
Convergent
120
Moderate to low factor loadings would provide _ evidence of construct validity.
Discriminant
121
A factor inherent in a test that systematically prevents accurate, impartial measurement.
Bias
122
A numerical or verbal judgment that places a person or an attribute along a continuum identified by a scale of numerical and/or word descriptors known as a rating scale.
Rating
123
A judgment resulting from the intentional or unintentional misuse of a rating scale.
Rating error
124
3 types of rating errors:
Leniency or Generosity error
Severity error
Central Tendency Error
125
One way to overcome restriction of range errors. It is a procedure that requires the rater to measure individuals against one another instead of against an absolute scale.
Ranking
126
The extent to which a test is used in an impartial, just and equitable way.
Fairness