Lesson 1-3 Flashcards

1
Q

The process of measuring Psychology-related variables by means of devices or procedures designed to obtain a sample of behavior.

A

Psychological Testing

2
Q

It is the gathering and integration of
Psychology-related data for the purpose of making a psychological evaluation that is accomplished through the use of tools such as tests, interviews, case studies, behavioral observation, and specifically designed apparatuses and measurement procedures.

A

Psychological Assessment

3
Q

To obtain some gauge, usually numerical in nature, with regard to an ability or attribute

A

Objective of Testing

4
Q

To answer a referral question, solve a
problem, or arrive at a decision through
the use of tools of evaluation

A

Objective of Assessment

5
Q

May be individual or group in nature

A

Process of Testing

6
Q

It is typically individualized

A

Process of Assessment

7
Q

Tester is not the key to the process

A

Role of Evaluator in Testing

8
Q

Assessor is the key to the process:
selecting tests/tools and drawing conclusions

A

Role of Evaluator in Assessment

9
Q

Requires technician-like skills: administering, scoring, and interpreting

A

Skill of Evaluator in Testing

10
Q

Requires an educated selection of tools of evaluation, skills in evaluation, and integration of data

A

Skill of Evaluator in Assessment

11
Q

Yields a test score or a series of test scores

A

Outcome of Testing

12
Q

Entails a logical problem-solving approach to shed light on a referral question

A

Outcome of Assessment

13
Q

Process of Assessment

A

Referral, Initial Meeting, Tool Selection, Formal Assessment, Report Writing, Feedback Sessions

14
Q

From: Teacher, Counselor, Health Provider, Employer, Individual

A

Referral

15
Q

Intake Interview (clarify reason for
referral)

A

Initial Meeting

16
Q

Preparation for assessment

A

Tool Selection

17
Q

Actual assessment begins

A

Formal Assessment

18
Q

Writes a report of the findings that is designed to answer the referral question

A

Report Writing

19
Q

Between client and assessor (third
parties may be scheduled)

A

Feedback Sessions

20
Q

8 Tools of Psychological Assessment

A

Test, Interview, Portfolio, Case History Data, Behavioral Observation, Role-Play Tests, Computers, Other tools

21
Q

A measuring device or procedure

A

Test

22
Q

Device or procedure designed to measure variables related to Psychology

A

Psychological Test

23
Q

Almost always involves analysis of a sample of behavior

A

Psychological Test

24
Q

Behavioral sample could range from responses to a pencil-and-paper questionnaire, to oral responses to questions related to the performance of some task.

A

Psychological Test

25
Q

Method of gathering information through direct communication involving reciprocal exchange

A

INTERVIEW

26
Q

Verbal and non-verbal behavior can be observed

A

Face-to-face

27
Q

Changes in voice pitch, long pauses, signs of emotions

A

Telephone

28
Q

online interview, e-mail interview, text messaging

A

Electronic

29
Q

Samples of one’s ability and
accomplishment

A

Portfolio

30
Q

Refers to records, transcripts, and other
accounts in written, pictorial, or other
form that preserve archival information,
official and informal accounts, and other
data and items relevant to an assessee

A

CASE HISTORY DATA

31
Q

Monitoring the actions of others or
oneself by visual or electronic means
while recording quantitative and/or
qualitative information
regarding those actions

A

Behavioral Observation

32
Q

Tool of assessment wherein assessees are
directed to act as if they were in a particular situation

A

ROLE-PLAY TESTS

33
Q

Can serve as test administrators and as highly efficient test scorers

A

Computer

34
Q

Mere listing of scores

A

Simple scoring

35
Q

statistical analyses

A

Extended scoring

36
Q

Numerical or narrative statements

A

Interpretive

37
Q

Written in language appropriate for communication between professionals, may provide expert opinion (analysis of data)

A

Consultative

38
Q

Inclusion of data from sources other than the test

A

Integrative

39
Q

Video, thermometer, sphygmomanometer

A

OTHER TOOLS

40
Q

Create tests or other methods of assessment

A

Test Developer

41
Q

Clinicians, counselors, school psychologists, human resources personnel, etc.

A

Test User

42
Q

Anyone who is the subject of an
assessment or an evaluation

A

Test-taker

43
Q

Evolving society causes changes to
psychological variables

A

Society at large

44
Q

Tests or aids that can adequately be administered, scored, and interpreted with the aid of the manual and a general orientation

A

Level A

45
Q

Achievement, Proficiency

A

Level A

46
Q

Tests or aids that require some technical knowledge of test construction and use, and of supporting psychological and educational fields

A

Level B

47
Q

Aptitude

A

Level B

48
Q

Tests or aids that require substantial understanding of testing and supporting psychological fields together with supervised experience in the use of these devices

A

Level C

49
Q

Projective tests, Individual Mental Tests

A

Level C

50
Q

The nature of the transformation of a test into a form ready for administration to an individual with a disabling condition will depend on the nature of the disability

A

Testing people with disabilities

51
Q

Legal and Ethical Considerations
*Rights of Testtakers

A
  1. Right of Informed Consent
  2. Right to be Informed of Test Findings
  3. Right to privacy and confidentiality
  4. Right to the Least Stigmatizing Label
52
Q
  1. Psychological Traits and States exist
  2. Psychological Traits and States can be quantified and measured
  3. Test-related behavior predicts non-test related behavior
  4. Tests and measurement techniques have strengths and weaknesses
  5. Various sources of error are part of the assessment process
  6. Testing and Assessment can be conducted in a fair and unbiased manner
  7. Testing and Assessment benefit society
A

Some Assumptions about
Psychological Testing and
Assessment

53
Q

Any distinguishable, relatively enduring way in which one individual varies
from another

A

Psychological Traits and States exist

Trait

54
Q

Also distinguishes one person from another but is relatively less enduring

A

Psychological Traits and States exist

State

55
Q

The test developer provides test users with a clear operational definition of the construct under study/assessment.

A

Psychological Traits and States can be quantified and measured

56
Q

Once having defined the trait, state or other construct to be measured, a test developer considers the types of item content that would provide insight to it.

A

Psychological Traits and States can be quantified and measured

57
Q

Measuring traits and states by means of a test also entails appropriate ways to score the test and interpret the result.

A

Psychological Traits and States can be quantified and measured

58
Q

The tasks in some tests mimic the actual behaviors that the test user is trying to understand.

A

Test-related behavior predicts non-test related behavior

59
Q

The obtained sample of behavior is typically used to make predictions about future behavior.

A

Test-related behavior predicts non-test related behavior

60
Q

In some forensic matters, psychological tests may be used not to predict
behavior but to postdict it.

A

Test-related behavior predicts non-test related behavior

61
Q

Understanding of behavior that has already taken place

A

Postdict

62
Q
  1. Complex nature of violence
  2. Low base rate
  3. False positives and false negatives
  4. Dynamic nature of behavior
  5. Ethical and legal concerns
  6. Cultural and social bias
  7. Inadequate data and research
  8. Limited understanding of causality
  9. Contextual factors
A

Why do you think it is difficult to predict violence by means of
test?

63
Q

Competent test users understand and appreciate the limitations of the tests they use as well as how those limitations might be compensated for by data from other sources.
* Users understand:
* How a test was developed
* Circumstances under which it is appropriate
* How it should be administered and to whom
* How results should be interpreted

A

Tests and other measurement techniques have strengths and weaknesses

64
Q

-How a test was developed
-Circumstances under which it is appropriate
-How it should be administered and to whom
-How results should be interpreted

A

Users understand

65
Q

Refers to factors other than what a test attempts to measure that influence performance on the test

A

Various sources of error are part of the assessment process

Error

66
Q

Component of a test score attributable to sources other than the trait or ability measured

A

Error variance
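
Not stated on the card itself, but the standard classical test theory framing behind "error variance": an observed score X is modeled as a true score T plus an error component E, so the variances add. A minimal sketch in that conventional notation (the symbols are not from this deck):

\[ X = T + E, \qquad \sigma^2_X = \sigma^2_{\text{true}} + \sigma^2_{\text{error}} \]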

67
Q

Potential sources of error variance

A
  1. Assessee
  2. Assessor
  3. Measuring instruments
68
Q
  • All major test publishers strive to develop instruments that are fair when used in strict accordance with guidelines in the test manual.
  • One source of fairness-related problems is the test user who attempts to use a particular test with people whose background and experience are different from the background and experience of people for whom the test was intended.
A

Testing and Assessment can be conducted in a fair and unbiased manner

69
Q

In a world without tests or assessment procedures:
1. People could present themselves as professionals regardless of their background, ability, or professional credentials.
2. Personnel might be hired on the basis of nepotism rather than documented merit.
3. Teachers and school administrators could arbitrarily place children in different types of special classes simply because that is where they believed the children belonged.

A

Testing and Assessment benefit the society

70
Q

What is a “good” test?

A

Criteria for a good test:
* Clear instructions for administration, scoring, and interpretation
* Offers economy in the time and money it takes to administer, score, and interpret it
* Measures what it is supposed to measure

71
Q

Psychometric Soundness

A

Reliability, Validity

72
Q

Involves the consistency of the tool

A

Reliability

73
Q

Measures what it purports to measure

A

Validity

74
Q

Refers to the consistency and stability of the results obtained from a particular assessment tool or measurement instrument.
* High _________ is crucial in psychological testing because it indicates that the results are dependable and not subject to significant fluctuations or random errors.

A

Reliability

75
Q

Reliability Estimates

A
  1. Test-Retest
  2. Parallel-Forms and Alternate Forms
  3. Split-Half
  4. Inter-Rater Reliability
  5. Internal Consistency
  6. Others
76
Q
  • Refers to the extent to which a test
    or assessment tool accurately and
    effectively measures the specific
    psychological construct it is
    intended to assess.
  • It is a critical concept because it
    ensures that the results obtained
    from a test are meaningful and
    relevant for the purpose for which
    the test was designed.
A

Validity

77
Q

Types of Validity

A
  1. Content Validity
  2. Criterion-Related Validity
  3. Construct Validity
  4. Face Validity
78
Q

Refers to the established standards or reference points that allow test scores to be interpreted in a meaningful way.

A

Norms

79
Q

Also referred to as normative data

A

Norms

80
Q

Provide context by comparing an individual's or a group's test scores to those of a representative sample of people who have taken the same test under similar conditions.

A

Norms

Norm-referenced testing and assessment

81
Q

Process of administering a test to a representative sample of testtakers under clearly specified conditions, with the data scored and interpreted for the purpose of establishing norms.

A

Standardization

82
Q

A portion of the universe of people deemed to be representative of the whole population

A

Sample

83
Q

Process of selecting the portion of the universe deemed to be representative of the whole population

A

Sampling

84
Q

Population is divided into subgroups, called strata, based on certain
characteristics or attributes that are of interest to the researcher

A

Stratified Sampling

85
Q

Population is divided into subgroups, called strata, based on characteristics. Involves random selection of
participants from each stratum

A

Stratified-random sampling
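
A hypothetical illustration of proportional stratified-random sampling (all numbers invented for the example): if a population is 60% female and 40% male, a stratified-random sample of 100 people would include 0.60 × 100 = 60 participants drawn at random from the female stratum and 0.40 × 100 = 40 drawn at random from the male stratum.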

86
Q

Selecting individuals or groups from a population based on predetermined criteria and the researcher's judgment

A

Purposive Sampling

87
Q

Occurs when data are gathered opportunistically, as the opportunity arises, without the primary intention of
conducting formal research

A

Incidental Sampling

88
Q

Basic steps:
1. Define the test and its purpose.
2. Identify the target population.
3. Collect data from the target population.
4. Collect demographic information.
5. Score the test.
6. Analyze the data.
7. Create norm tables or charts.
8. Interpret the norms.
9. Publish the norms.
10. Regularly update norms.
11. Ensure that ethical guidelines are followed.

A

Developing Norms

89
Q

Types of Norms

A
  1. Percentile
  2. Age Norms
  3. Grade Norms
  4. National Norms
  5. National Anchor Norms
  6. Subgroup Norms
  7. Local Norms
90
Q

Divides the distribution into 100 equal parts

A

Percentile

91
Q

Used in the context of norms to indicate the relative standing or performance of an individual or a group within a larger population.

A

Percentile
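
A brief illustration (hypothetical score): a raw score at the 84th percentile means that roughly 84% of the norm group scored at or below that score, leaving about 100 − 84 = 16% of the group above it.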

92
Q

Based on the principle that individuals of different ages may have varying abilities, characteristics, and developmental stages.

A

Age Norms

93
Q

Used to evaluate an individual’s performance, development, or behavior in relation to what is considered typical or expected for their age group.

A

Age Norms

94
Q

Typically used in the context of standardized tests and assessments to evaluate how students in a particular grade are performing in relation to their peers of the same grade.

A

Grade Norms

95
Q

Used to assess and compare the performance or characteristics of a specific group or population within a given country.
Provide a benchmark for understanding how individuals or groups in the country compare to the larger national population in terms of various attributes.

A

National Norms

96
Q

Provide a benchmark for understanding how individuals or groups in the country compare to the larger national population in terms of various attributes

A

National Norms

97
Q

Designed to serve as common benchmarks that guide the development of educational standards, curricula, and assessments, ensuring that students across different regions or school systems are held to the same standards

A

National Anchor Norms

98
Q

Derived by examining the data from subgroups of subpopulations that share common characteristics, such as gender, age, ethnicity, socioeconomic status or other demographic factors

A

Subgroup Norms

99
Q

Used to evaluate and compare the performance of students or educational institutions within a specific local or regional context

A

Local Norms

100
Q

Typically derived from data collected from schools, districts, or educational institutions within a particular geographic area

A

Local Norms

101
Q

Compare an individual's performance to that of a norming or reference group

A

Norm-Referenced

102
Q

Aim to determine how a testtaker’s performance ranks relative to others

A

Norm-Referenced

103
Q

Scores: percentiles or standard scores

A

Norm-Referenced

104
Q

Determine whether a student has achieved specific learning objectives, skills, or standards

A

Criterion-Referenced

105
Q

Focus on the mastery of content or skills

A

Criterion-Referenced

106
Q

Scores: predefined criterion or standard

A

Criterion-Referenced

107
Q

Refers to the consistency in measurement

A

Reliability

108
Q

An index of reliability, a proportion that indicates the ratio between the true score variance on a test and the total variance

A

Reliability coefficient
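
The ratio described on this card, written out in standard notation (the symbols are conventional, not from the deck):

\[ r_{xx} = \frac{\sigma^2_{\text{true}}}{\sigma^2_{\text{total}}} = \frac{\sigma^2_{\text{true}}}{\sigma^2_{\text{true}} + \sigma^2_{\text{error}}} \]

A coefficient of 1.00 would mean the scores contain no error variance; a value near 0 would mean the scores are mostly error.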

109
Q

A statistic useful in describing sources of test score variability

A

Variance

110
Q

Refers to all the factors associated with the process of measuring some variable other than the variable being measured.

A

MEASUREMENT ERROR

111
Q

caused by unpredictable fluctuations and inconsistencies of other variables in a measurement process

A

Random Error

112
Q

caused by factors that are typically constant or proportionate to what is presumed to be the true value of the variable being measured

A

Systematic Error

113
Q

SOURCES OF VARIANCE

A

TEST CONSTRUCTION
TEST ADMINISTRATION
TEST SCORING & INTERPRETATION

114
Q

Item sampling or content sampling

A

TEST CONSTRUCTION

115
Q

Test environment, testtaker variables, examiner-related variables

A

TEST ADMINISTRATION

116
Q

Scorers and scoring systems

A

TEST SCORING & INTERPRETATION

117
Q

Obtained by correlating pairs of scores from the same people on two different administrations of the same test

A

TEST-RETEST RELIABILITY
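
The correlation used here is ordinarily a Pearson r between the two administrations; written out in its standard form (not specific to this deck):

\[ r = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_i (x_i - \bar{x})^2 \, \sum_i (y_i - \bar{y})^2}} \]

where x_i and y_i are the same person's scores on the first and second administrations.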

118
Q

Appropriate: reliability of a test that purports to measure something that is relatively stable over time

A

TEST-RETEST RELIABILITY

Appropriate

119
Q

passage of time

A

TEST-RETEST RELIABILITY

Possible source of error
variance

120
Q

When the interval between testing is
greater than 6 months

A

TEST-RETEST RELIABILITY

Coefficient of stability

121
Q

Degree of relationship between various forms of a test that can be evaluated by means of an alternate-forms or parallel forms coefficient of reliability

A

PARALLEL-FORMS & ALTERNATE-FORMS

COEFFICIENT OF EQUIVALENCE

122
Q

Obtained by administering different versions of an assessment tool (both versions must contain items that probe the same construct) to the same group of individuals at the same time

A

PARALLEL-FORMS

123
Q

Consistency of test results between two different – but equivalent – forms of a test.
Used when it is necessary to have two forms of the same test (administered at different times)

A

ALTERNATE FORMS

124
Q

DEGREE OF CORRELATION AMONG ALL ITEMS

SINGLE ADMINISTRATION OF A SINGLE FORM OF A TEST

USEFUL: HOMOGENEITY OF THE TEST

A

INTERNAL
CONSISTENCY

125
Q

Obtained by correlating two pairs of scores from equivalent halves of a single test administered once

A

SPLIT-HALF

126
Q
  1. Divide the test into equivalent halves.
    * Randomly assign items to one or the other half of the test
    * Odd-even reliability
    * Divide the test by content
  2. Calculate a Pearson r between scores on the two halves of the test.
  3. Adjust the half-test reliability using the Spearman-Brown formula (written out after this card).
    * The Spearman-Brown formula allows a test developer or user to estimate internal consistency from a correlation of two halves of a test
    Interpretation: a coefficient of at least .70 is generally taken to indicate acceptable reliability
A

COMPUTATION OF A COEFFICIENT OF SPLIT-HALF RELIABILITY
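
The Spearman-Brown adjustment from step 3, in its split-half form (standard formula):

\[ r_{SB} = \frac{2\, r_{hh}}{1 + r_{hh}} \]

where r_hh is the Pearson r between the two half-tests. As a hypothetical example, a half-test correlation of .60 adjusts to 2(.60) / (1 + .60) = .75 for the full-length test.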

127
Q

The statistic of choice for determining the inter-item consistency of dichotomous items

A

Kuder-Richardson formula 20 or KR-20
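
The formula in its standard form:

\[ KR_{20} = \frac{k}{k - 1}\left(1 - \frac{\sum_j p_j q_j}{\sigma^2}\right) \]

where k is the number of items, p_j is the proportion of testtakers passing item j, q_j = 1 − p_j, and σ² is the variance of total test scores.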

128
Q

Appropriate for use on tests containing non-dichotomous items

Calculated to help answer questions about how similar sets of data are

A

COEFFICIENT ALPHA
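
Coefficient alpha (Cronbach's alpha) in its usual form:

\[ \alpha = \frac{k}{k - 1}\left(1 - \frac{\sum_j \sigma^2_j}{\sigma^2_{\text{total}}}\right) \]

where k is the number of items, σ²_j is the variance of item j, and σ²_total is the variance of total scores; the item variances take the place of the Σ p_j q_j term in KR-20, which is why alpha accommodates non-dichotomous items.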

129
Q

Note: It is possible to conceive of data sets that would yield negative values of alpha. If this happens, the alpha coefficient should be reported as 0.

A

COEFFICIENT ALPHA

130
Q

Focuses on the degree of difference that exists between item scores

A

APD
Average Proportional Distance

131
Q
  1. Calculate the absolute differences between scores for all the items.
  2. Average the difference between scores.
  3. Obtain the APD by dividing the average difference between scores by the number of response options on the test, minus one (a worked example follows this card).
    * An obtained value of .2 or lower: Excellent internal consistency
    * A value of .25 to .2: Acceptable range
A

COMPUTATION OF AVERAGE PROPORTIONAL DISTANCE
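
A worked example following the steps above (item scores are hypothetical): on a test using a 5-point response scale, three items are scored 4, 5, and 4. The pairwise absolute differences are |4 − 5| = 1, |4 − 4| = 0, and |5 − 4| = 1, which average to 2/3 ≈ .67. Dividing by the number of response options minus one gives .67 / (5 − 1) ≈ .17, an APD in the "excellent" range noted on the card above.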

132
Q

Also referred to as “scorer reliability”, “judge reliability”, “observer reliability”, and “inter-rater reliability”
* Degree of agreement or consistency between two or more scorers with regard to a particular measure
* If consensus can be demonstrated in the ratings, the researchers can be more
confident regarding the accuracy of the ratings and their conformity with the established rating system.
* Method: Calculate a coefficient of correlation

A

INTER-SCORER RELIABILITY