Assessment and Evaluation Flashcards
(17 cards)
_____ is defined as the process of appraising something or someone. It is done to identify an individual's level of performance.
Assessment
_____ is the making of judgments about the value or worth of objects or events, based on some measurement, to arrive at a decision.
Performed to determine the degree to which goals are attained.
Evaluation
_____ is a process of collecting, reviewing, and using data for the purpose of improving current performance.
_____ is described as the act of passing judgment on the basis of a set of standards.
Assessment
Evaluation
Steps in the evaluation process
- Specify the purpose.
- Determine what to test.
- Develop an examination blueprint, considering the objectives of the course.
- Determine what method of assessment or test instrument to use.
- Devise the measuring instrument.
- Collect data.
- Set standard of acceptable performance.
- Summarize and report results.
- Make a judgment.
- Make the appropriate decision based on the judgment.
_____ a chart that spells out the content and level of knowledge to be tested; it can be general or specific, based on the teacher's preference.
It contains the following:
• Objectives to be measured
• Taxonomy of learning
• Number of questions
• Relative weight to be given to each area
Examination blueprint
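The four blueprint elements above can be sketched as a small table of specifications. A minimal illustration in Python, with entirely hypothetical objectives, item counts, and weights:

```python
# Hypothetical blueprint (table of specifications) for a 20-item exam.
# Each row pairs an objective with its taxonomy level, number of
# questions, and relative weight -- the four elements a blueprint contains.
blueprint = [
    # (objective,                taxonomy level,  items, weight %)
    ("Define assessment terms",  "Knowledge",     6,     30),
    ("Interpret test scores",    "Comprehension", 5,     25),
    ("Apply evaluation steps",   "Application",   5,     25),
    ("Judge instrument quality", "Evaluation",    4,     20),
]

# A usable blueprint should account for every item and 100% of the weight.
total_items = sum(items for _, _, items, _ in blueprint)
total_weight = sum(weight for _, _, _, weight in blueprint)
assert total_items == 20 and total_weight == 100
```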
Factors that determine the number of items in the examination:
a. amount of material taught
b. type of test question used
c. amount of time available for testing
_____- learner is compared with a reference group of learners.
_____- learner is compared against well-defined criteria.
Relative terms (norm-referenced)
Absolute terms (criterion-referenced)
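The two reference frames can be shown with a short Python sketch: the same raw score gets a relative interpretation against a norm group and an absolute interpretation against a criterion. The scores and cutoff below are hypothetical:

```python
# Hypothetical reference-group scores and passing cutoff.
scores = [62, 70, 75, 80, 88, 91, 95]   # norm group
learner = 80
passing_cutoff = 75                      # well-defined criterion

# Norm-referenced: where does the learner stand relative to the group?
percentile = 100 * sum(s < learner for s in scores) / len(scores)

# Criterion-referenced: did the learner meet the absolute standard?
passed = learner >= passing_cutoff

print(f"Percentile rank: {percentile:.0f}")   # relative interpretation
print(f"Meets criterion: {passed}")           # absolute interpretation
```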
CRITERIA FOR SELECTION OF EVALUATIVE DEVICES/TOOLS
- Validity: the DEGREE OF ACCURACY with which a test measures what it intends to measure.
- Reliability: the CONSISTENCY with which the test measures what it intends to measure.
- Objectivity: the DEGREE OF AGREEMENT between the judgment made by independent and competent observers as to whether or not a learner’s test performance meets the criteria stated in a learning objective.
- Relevance: the degree to which the criteria established for the selection of questions conform with the aims of the measuring instrument.
- Practicality: refers to the development of an evaluative device capable of being administered and scored with reasonable ease, within the limits of time and of the resources imposed by circumstances.
a. Content validity- the adequacy with which the test items measure the content areas covered.
b. Predictive validity- the extent to which a relationship exists between the test scores and later success.
c. Concurrent validity- the extent to which a relationship exists between test scores and an accepted contemporary criterion of performance on the variable the test is supposed to assess.
Reliability of the test may be estimated by:
a. _____
• a test is administered twice; the correlation between the scores is an estimate of temporal reliability.
b. _____
• two alternate or equivalent forms of a test are constructed and administered at a 1-3 week interval.
c. _____
• odd numbered and even numbered items are scored as separate tests.
Test-retest method
Alternate or equivalent forms method
Split-half method
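Each of the three methods above comes down to correlating two sets of scores; the split-half estimate is then stepped up with the Spearman-Brown formula to estimate full-test reliability. A sketch with hypothetical scores and a plain Pearson correlation helper:

```python
from statistics import mean

def pearson(x, y):
    # Plain Pearson product-moment correlation of two score lists.
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# a. Test-retest: the same test administered twice to the same learners;
#    the correlation between administrations estimates temporal reliability.
first  = [70, 82, 65, 90, 75]
second = [72, 80, 68, 88, 77]
test_retest_r = pearson(first, second)

# b. Alternate forms: two equivalent forms given 1-3 weeks apart;
#    the correlation between form A and form B estimates reliability.
form_a = [70, 82, 65, 90, 75]
form_b = [68, 85, 63, 91, 74]
alternate_forms_r = pearson(form_a, form_b)

# c. Split-half: odd- and even-numbered items scored as separate half-tests;
#    the half-test correlation is stepped up with Spearman-Brown.
odd_half  = [35, 41, 33, 45, 38]
even_half = [35, 41, 32, 45, 37]
half_r = pearson(odd_half, even_half)
split_half_r = 2 * half_r / (1 + half_r)   # Spearman-Brown correction
```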
Requisites of successful test construction:
1. Thorough knowledge of subject matter.
2. Intimate understanding of specific teaching objectives.
3. Insight into the backgrounds, abilities and mental processes of the examinees.
4. Facility in clear and economical use of language
5. Willingness to devote the time and energy necessary to the task.
6. Acquaintance with the most useful forms of tests, their unique virtues and limitations.
Reminders during organization of a test:
1. Group items according to the objectives they are written for.
2. Group the items according to format.
3. In a test with several formats, place the format requiring the briefest responses at the beginning.
4. Place easier items at the beginning of each section.
5. Each item must be complete on one page.
6. Seek critical review of the examination items from at least two colleagues.
7. Assign score points to each item before the test.
8. Prepare a key before the test.
9. Determine the appropriate time limit for the test.
10. Review items for typing errors.
11. Provide detailed directions for the examinees and
examiner.
Methods of Evaluating a Health Education Plan
I. Formal Evaluation Instrument
- Objective Examination
A. Selection Type
• Multiple choice: contains the stem, the key, and the distractors.
• Matching type: contains the premises/hypotheses and the responses/alternatives, which contain the jokers.
- Jokers: responses which do not match any premise
• Alternate responses: there are only two possible answers, that is true or false, correct or incorrect, right or wrong.
II. Non-Formal Evaluation Instrument
- Practical Exam: the learners perform the skills to be evaluated; simulated or actual.
- Oral Exams
- Project assignment: assigning a learner a task or project to complete, then evaluating on the basis of the product of the completed performance.
B. ATTITUDES
I. DIRECT METHOD
- Questionnaire
- Semantic Differential: usually a six-step bipolar adjective scale indicating direction and intensity
- Attitude Scales
• Likert Scale
- Observational Rating Scale: observation of cognitive and affective behavior at work, using a scale of observable behaviors.
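As a small illustration of how a semantic differential rating encodes both direction and intensity, here is a Python sketch; the adjective pair, scale width, and rating are hypothetical:

```python
# Hypothetical semantic differential item: a six-step bipolar adjective
# scale ("harmful" 1 ... 6 "beneficial").
rating = 5                    # learner's mark on the 1-6 scale
midpoint = 3.5                # neutral point of a six-step scale

# Direction = which pole the rating leans toward;
# intensity = how far the rating sits from the midpoint.
direction = "positive" if rating > midpoint else "negative"
intensity = abs(rating - midpoint)
```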
II. INDIRECT METHOD
- Test of judgment
• Free-answer testing
- Test of memory and perception
• Based on the assumption that what is perceived and remembered is influenced by one's attitude.
- Information test
• Based on the assumption that in cases of uncertainty, people tend to guess in the direction of their attitude.
C. SKILLS
• It can be in the form of a performance checklist or rating scale.