Vocab Flashcards

(59 cards)

1
Q

Reliability

A

Consistency of an assessment instrument’s data across repeated administrations

2
Q

Internal Consistency Reliability

A

Consistency of test items with one another in measuring the same quantity/construct

3
Q

Intra-rater Consistency

A

An individual rater's consistency in rating responses to various test items

4
Q

Validity

A

Whether a test measures what it claims/intends to measure

5
Q

Content Validity

A

A test that includes items representing the complete range of possible items

6
Q

Construct Validity

A

When a test's scores measure the construct they are meant to measure, such as intelligence

7
Q

Criterion Validity

A

Test’s scores effectively measure a construct according to established criteria

8
Q

Concurrent Validity

A

(type of criterion validity) a test measures the criterion and the construct at the same time

9
Q

Predictive Validity

A

(type of criterion validity) means test scores effectively predict future outcomes, as when aptitude tests predict future subject grades

10
Q

Generalizability

A

the consistency of test scores over repeated administrations

(The results of one test can be generalized to apply to other tests with similar formats, content, and operations.)

11
Q

Compensatory Grading

A

the practice of balancing out lower performance in one area or subject with higher performance in another

12
Q

Noncompensatory grading

A

does not permit balancing of lower performance in one subject with higher performance in another, but requires a similar standard of achievement in each area or subject

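A minimal sketch, not part of the original deck, contrasting the two grading approaches above with made-up subject scores and an assumed passing standard of 70:

```python
# Hypothetical subject scores for one student (assumed passing standard: 70).
scores = {"math": 62, "reading": 85, "science": 78}
passing = 70

# Compensatory grading: a strong subject can offset a weak one, so the average decides.
compensatory_pass = sum(scores.values()) / len(scores) >= passing   # 75.0 -> True

# Noncompensatory grading: every subject must meet the standard on its own.
noncompensatory_pass = all(s >= passing for s in scores.values())   # math = 62 -> False

print(compensatory_pass, noncompensatory_pass)  # True False
```
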
13
Q

Cut score

A

a predetermined number used to divide categories of data or results from a test instrument

14
Q

Standard Deviation

A

measures variability within a set of numbers
When interpreting assessment results, it measures how much scores among a group of test-takers vary around the mean/average

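A worked sketch, added for illustration with made-up scores, of how a standard deviation summarizes how much a group's scores vary around the mean:

```python
import statistics

scores = [70, 75, 80, 85, 90]      # hypothetical test scores for a group
mean = statistics.mean(scores)      # 80.0
sd = statistics.pstdev(scores)      # population standard deviation, about 7.07
print(mean, round(sd, 2))           # most scores fall within one SD (roughly 73-87) of the mean
```
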
15
Q

Standard Score (z score)

A

represents the amount by which an individual score deviates from the mean, measured in SDs

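A small illustrative calculation (hypothetical numbers, matching the standard deviation sketch two cards earlier): the z score is the distance of a raw score from the mean, expressed in standard deviations.

```python
# z = (raw score - mean) / standard deviation
raw, mean, sd = 88, 80, 7.07        # hypothetical values
z = (raw - mean) / sd
print(round(z, 2))                  # ~1.13: the score sits about 1.13 SDs above the mean
```
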
16
Q

Domain

A

the identified scope of expected learning to be assessed

17
Q

Item Response Theory

A

Performance on a test item is attributed to three influences:
The item itself
The test-taker
The interaction between the two

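The card above lists the three influences qualitatively. As an illustration only (the deck does not name a specific model), the one-parameter (Rasch) IRT model expresses the item-by-test-taker interaction as a probability that depends on test-taker ability and item difficulty:

```python
import math

def rasch_probability(ability, difficulty):
    """Probability of a correct response under the one-parameter (Rasch) IRT model."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# A more able test-taker facing an easier item has a higher chance of success.
print(round(rasch_probability(ability=1.0, difficulty=0.0), 2))   # ~0.73
print(round(rasch_probability(ability=0.0, difficulty=1.0), 2))   # ~0.27
```
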
18
Q

Mean

A

the average of a group of numbers

19
Q

Median

A

Center-most score in a group

20
Q

Mode

A

most frequent score in a set

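A quick illustration, with made-up scores, of the three measures of central tendency defined in the preceding cards:

```python
import statistics

scores = [60, 75, 80, 80, 95]             # hypothetical scores
print(statistics.mean(scores))             # 78.0  (average)
print(statistics.median(scores))           # 80    (center-most score)
print(statistics.mode(scores))             # 80    (most frequent score)
```
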
21
Q

Positive Skew

A

When the majority of a group of numbers, such as test scores, is concentrated toward the low end of the range/distribution, with the minority “tail” of scores stretching toward the high end

22
Q

Negative Skew

A

When the majority of scores is bunched near the high end of the distribution, with the minority “tail” stretching toward the low end

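A small numeric check with hypothetical scores: by the usual statistical convention the skew is named for the direction of the minority “tail,” and the mean is pulled toward that tail relative to the median.

```python
import statistics

positively_skewed = [55, 60, 62, 65, 98]   # most scores low, long tail toward the high end
negatively_skewed = [30, 88, 90, 92, 95]   # most scores high, long tail toward the low end

print(statistics.mean(positively_skewed), statistics.median(positively_skewed))   # 68.0 > 62
print(statistics.mean(negatively_skewed), statistics.median(negatively_skewed))   # 79.0 < 90
```
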
23
Q

Normal Curve/Bell Curve

A

resembles the shape of a bell because the largest number of scores collects around the central mean, with the number of scores descending as they move away from the mean

24
Q

Standard Error of Measurement

A

If an individual student took many tests that were similar in size or length, assessors can estimate how much that student's score would vary
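One common way to estimate it, shown here as an illustration with made-up numbers (the card itself gives no formula), uses the test's standard deviation and reliability coefficient:

```python
import math

# SEM = SD * sqrt(1 - reliability); hypothetical values below.
sd, reliability = 10.0, 0.91
sem = sd * math.sqrt(1 - reliability)
print(round(sem, 1))    # 3.0: a student's observed score would typically vary by about 3 points
```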

25
Standard Error of Mean
the estimate of variance around the mean of a group's test scores
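For contrast with the card above, a minimal sketch (hypothetical numbers) of the usual formula for the standard error of the mean, which shrinks as the group grows:

```python
import math

# SE of the mean = SD / sqrt(n); hypothetical group statistics.
sd, n = 12.0, 36
se_mean = sd / math.sqrt(n)
print(se_mean)    # 2.0: the group mean is a more stable estimate than any single score
```
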
26
Confidence Interval
used by statisticians to express the range wherein a true or real score is situated. Its purpose is to acknowledge and address the fact that the measurement of a student's performance contains "noise" (interfering/confounding variables). Giving a confidence interval shows the probability that a student's true score is within the range defined by the interval
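A sketch with illustrative numbers only of how a confidence interval is often built around an observed score using the standard error of measurement; 1.96 is the z value for a 95% interval.

```python
observed_score = 82
sem = 3.0                      # standard error of measurement from the earlier sketch
z_95 = 1.96                    # z value for a 95% confidence level

low, high = observed_score - z_95 * sem, observed_score + z_95 * sem
print(round(low, 1), round(high, 1))   # 76.1 to 87.9: the true score likely lies in this range
```
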
27
Cut score
Used by educators to separate passing from failing: scores above it pass and scores below it fail
28
Nedelsky Method
Used to establish a cut score: the assessor identifies a "borderline" group of students (those who do not always pass or fail but tend to score on the borderline of passing/failing), then estimates how many of these "borderline" students will probably answer a given test item correctly
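A minimal sketch of one common way such per-item estimates for borderline students are aggregated into a cut score (the deck describes the judgment step, not the arithmetic; the probabilities below are made up): the estimated chance of a correct answer on each item is summed across the test.

```python
# Hypothetical per-item probabilities that a borderline student answers correctly.
item_probabilities = [0.80, 0.65, 0.50, 0.90, 0.70, 0.45, 0.60, 0.75]

cut_score = sum(item_probabilities)     # expected raw score for a borderline student
print(round(cut_score, 2))              # 5.35 of 8 items; often rounded to set the passing mark
```
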
29
Angoff Method
used to establish a cut score: the assessor selects a group of "borderline" students and estimates which choice(s) in a multiple-choice test item these students could eliminate as incorrect answers, and what percentage of the remaining choices these students would guess as correct
30
Modified Angoff procedure
Assessors also estimate how many of the students would fail the test, and if they deem it necessary, they modify their estimations to produce a number of failures they find more reasonable
31
Ebel Method
used to determine an appropriate cut score for a particular instrument. Considers the importance and difficulty level of each test item in establishing the cut score for a test
32
Hofstee Method/Compromise Method
addresses the difference between norm-referenced and criterion-referenced tests: tests that compare individual student scores to the average scores of a normative sample of students found representative of the larger population, versus tests that compare student scores to a pre-established criterion of achievement. This method allows assessors to determine how many items students could miss and how many failed items would affect the number of students who could fail
33
Norm-referenced test
compares students' scores to those of a normative sample of students deemed representative of the general population. These tests seek to determine the highest or lowest achievement rather than the absolute score achieved
34
Criterion-referenced test
may or may not be standardized; compares student scores to a predetermined set of criteria for acceptable performance
35
Formative Assessments
given during a lesson, unit, course, or program to give teachers and students an idea of how well each student is learning what the teacher has planned and expected them to learn. Results are used to explain to each student his/her strengths and weaknesses and how he/she can build on strengths or improve weaknesses. Results are also used to report progress to parents, administrators, etc. Example: CFUs (checks for understanding)
36
Summative Assessments
Given after a lesson, unit, course, or program has been completed to determine whether the student has passed that segment of instruction. Helps teachers determine whether they need to repeat the instruction or can move on to successive segments. Summative assessments apply to lessons or units within a class, to courses in a subject, to promotion from one grade level to the next, and to graduation. Examples: exit tickets, quizzes, etc.
37
Item Analysis
Used to evaluate test items in multiple-choice formats to show the quality of the test item and of the test overall. Has an implicit orientation of being norm-referenced rather than criterion- or domain-referenced. Evaluates test items using performance within the group of test-takers rather than an external criterion for expected achievement
38
Discrimination Index
Measure of how well a specific test item can separate students who generally score high on the test from students who generally score low on it
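A small worked example (hypothetical counts) of one common way to compute a discrimination index: compare how often the highest-scoring and lowest-scoring groups answer the item correctly.

```python
# Hypothetical results for one item: 27 students in each of the upper and lower scoring groups.
upper_correct, lower_correct, group_size = 24, 10, 27

discrimination_index = (upper_correct - lower_correct) / group_size
print(round(discrimination_index, 2))   # 0.52: the item separates high and low scorers well
```
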
39
Difficulty Index
A simple measure of how difficult a test item is considered to be. Obtained by calculating the percentage of all students taking a test who answered a certain test item correctly
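A one-line illustration (made-up counts) of the difficulty index described above: the proportion of all test-takers who answered the item correctly.

```python
# Hypothetical counts for one item.
answered_correctly, total_test_takers = 45, 60

difficulty_index = answered_correctly / total_test_takers
print(difficulty_index)   # 0.75: higher values mean the item was easier for this group
```
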
40
Specificity
refers to how well a test identifies every member of a defined group. The more specific a test is, the more likely it is to omit some individuals who should be included in that group
41
ePortfolios
enabled by technology, ePortfolios can support students' internal motivation and autonomy by establishing online environments wherein students feel good about participating
42
Working Memory
the ability to retain current information temporarily well enough to manipulate it, such as combining additional parts to form a coherent whole, as in understanding words in a sentence or paragraph
43
Memory Span
measures the ability to recall information presented once, immediately and in correct sequence
44
Associative Memory
the ability to recall one item from a previously learned (unrelated) pair when presented with the other item
45
Ideational Fluency
the ability to generate many varied responses to one stimulus
46
Processing Speed
the ability to perform easy or familiar cognitive operations quickly and automatically, especially when they require focused attention and concentration
47
WJ Visual Matching
measures perceptual speed through finding, identifying, comparing, and contrasting visual elements; involves pattern recognition, scanning, perceptual memory, and complex processing abilities
48
Decision Speed
measures semantic processing, meaning the reaction time to a stimulus, requiring some encoding and mental manipulation
49
Rapid Picture Naming
measures naming facility, meaning the ability to rapidly name familiar presented things, with names retrieved from long-term memory
50
Pair Cancellation
measures the student's ability to attend to and concentrate on presented stimuli
51
Crystallized Intelligence WJ
the solidified knowledge that an individual has acquired from his or her culture through life experiences and formal/informal education. Measured on the WJ by its General Information and Verbal Comprehension subtests
52
Fluid Intelligence WJ
Fluid reasoning stands in contrast to crystallized intelligence or knowledge. It is the ability to solve novel problems by performing mental operations. Fluid reasoning is measured on the WJ by its Concept Formation subtest, which tests inductive reasoning: the ability to relate a specific problem to a generalized, underlying rule or concept
53
Spatial Relationships
the ability to perceive objects in space, their orientation, and visual patterns, and to maintain and manipulate these rapidly
54
Visualization
the ability to match objects in space, including mentally manipulating them three-dimensionally more than once, regardless of response speed
55
Spatial Scanning
involves quickly and accurately identifying paths through complex, large, visual, or spatial fields
56
Auditory Processing
the ability to interpret sound signals received through one's sense of hearing
57
Incomplete Words
measuring phonetic coding for analysis
58
Auditory Attention
measuring ideational fluency
59
Phonetic coding
synthesis involved with putting sounds together meaningfully as in words