VALIDITY AND RELIABILITY Flashcards
It refers to the decisions we make based on test results, and not to the test itself or to the measurement.
VALIDITY
Like ______, validity is not an all-or-nothing concept; it is never totally absent
or absolutely perfect.
RELIABILITY
____ can never be finally determined;
it is specific to each administration of the
test.
VALIDITY
It is established by examining the physical
appearance of the instrument to make sure it is
readable and understandable.
FACE VALIDITY
A type of validation that refers to the relationship between a test and instructional objectives; establishes content
so that the test measures what it intends to measure.
CONTENT VALIDITY
A type of validation that refers to the measure of how accurately a student's
current test score can be used to estimate a score on a criterion measure.
CRITERION-RELATED VALIDITY
Two purposes of criterion-related validity:
a. concurrent
b. predictive
It describes the present status of the individual by correlating the sets of scores
obtained from two measures given at a
close interval.
CONCURRENT VALIDITY
It describes the future performance of
the individual by correlating the sets of
scores obtained from two measures given at a longer time interval.
PREDICTIVE VALIDITY
It is a type of validation that refers to a measure of the extent to which a test measures a hypothetical and unobservable variable or quality such as intelligence, math achievement, performance anxiety, etc.
CONSTRUCT VALIDITY
FACTORS AFFECTING VALIDITY
1. The test itself
2. The administration and scoring of the test
3. Personal factors influencing how students respond to the test
4. Validity is always specific to a particular test
FACTORS THAT REDUCE VALIDITY
1.Poorly constructed test items
2.Unclear directions
3.Ambiguous items
4.Reading vocabulary too difficult
5.Inadequate time limit
6.Inappropriate level of difficulty
7.Unintended clues
8.Improper arrangement of items
TEST DESIGN TO IMPROVE VALIDITY
1.What is the purpose of the test?
2.How well do the instructional objectives
selected for the test represent the
instructional goals?
3. Which test item format will best measure the
achievement of each objective?
4. How many test items will be required to measure
the performance adequately on each objective?
5. When and how will the test be administered?
the consistency of measurement
(how consistent test scores or other
assessment results are from one measurement to another)
RELIABILITY
Types of Reliability Measures
1. TEST-RETEST METHOD
2. EQUIVALENT-FORM METHOD
3. SPLIT-HALF METHOD
4. KUDER-RICHARDSON FORMULA
It is determined by administering the
same test twice to the same group of students, with a time interval between administrations.
- Measure of stability
TEST-RETEST METHOD
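The test-retest coefficient is usually computed as a Pearson correlation between the two sets of scores. A minimal sketch in Python; the scores for the five students are hypothetical:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: the same test given twice to five students.
first = [78, 85, 62, 90, 70]
second = [80, 83, 65, 92, 68]
print(round(pearson_r(first, second), 3))  # high r = stable scores
```

A coefficient near 1.0 indicates stable scores across the two administrations; the longer the interval between tests, the lower the coefficient tends to be.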
It is determined by administering two different but equivalent forms of test to the same group of students with close intervals between tests.
- Measure of equivalence
EQUIVALENT-FORM METHOD
It is determined by administering the test once to the same group of students and scoring two equivalent halves
of the test separately. To split the test into halves that are
equivalent, the usual procedure is to score the even-numbered and odd-numbered items separately.
SPLIT-HALF METHOD
It indicates the degree
to which consistent results are obtained from the two halves of
the test.
Measure of internal consistency (SPLIT-HALF METHOD)
It is determined by administering the test
once, then using the proportion/percentage of
students passing and not passing each item. Measure of internal consistency
KUDER-RICHARDSON FORMULA
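The best-known version is KR-20, which combines each item's pass/fail proportions with the variance of total scores. A minimal sketch on a hypothetical 0/1 item matrix:

```python
def kr20(item_matrix):
    """Kuder-Richardson Formula 20 for dichotomous (0/1) items."""
    n = len(item_matrix)        # number of students
    k = len(item_matrix[0])     # number of items
    totals = [sum(row) for row in item_matrix]
    mean = sum(totals) / n
    var = sum((t - mean) ** 2 for t in totals) / n  # variance of total scores
    pq = 0.0
    for j in range(k):
        p = sum(row[j] for row in item_matrix) / n  # proportion passing item j
        pq += p * (1 - p)
    return (k / (k - 1)) * (1 - pq / var)

# Hypothetical matrix: six students, six dichotomously scored items.
scores = [
    [1, 1, 1, 1, 1, 1],
    [1, 1, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 0],
    [1, 1, 0, 0, 0, 0],
    [0, 1, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0],
]
print(round(kr20(scores), 3))
```

Unlike the split-half method, KR-20 does not depend on how the test is split; it is effectively the average of all possible split-half coefficients for dichotomous items.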
Factors affecting Reliability
1.Length of the test
2.Moderate item difficulty
3.Objective scoring
4.Heterogeneity of the student group
5.Limited time
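The first factor, length of the test, has a well-known quantitative form: the Spearman-Brown prophecy formula predicts the reliability of a test lengthened (or shortened) by a given factor, assuming the added items are comparable to the existing ones. A small sketch:

```python
def spearman_brown(r, length_factor):
    """Predicted reliability when a test is lengthened by `length_factor`."""
    return length_factor * r / (1 + (length_factor - 1) * r)

# Doubling a test whose reliability is 0.60:
print(round(spearman_brown(0.60, 2), 3))  # -> 0.75

# Halving it instead (length_factor = 0.5) lowers the coefficient:
print(round(spearman_brown(0.60, 0.5), 3))
```

This is why longer tests are generally more reliable: each added comparable item gives the total score more opportunity to average out chance errors.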