EPHE 444 > Lecture 2 - Testing concepts (Validity and reliability)
1

validity

-ability of a test to measure accurately
-degree to which a test measures what it purports to measure
-dependent on reliability, relevance and appropriateness of scores

2

reliability

-consistency or repeatability of observation
-degree to which repeated measurements of a trait are reproducible under the same conditions

3

4 types of validity

construct, logical (face), criterion, convergent
--> all can be estimated either logically or statistically

4

construct validity

test effectively measures the desired construct (e.g. vertical jump for a volleyball player)

5

logical/ face validity

measure obviously involves the performance being measured; no statistical evidence required (mirrors work in the actual job)

6

criterion validity

AKA statistical/correlational validity
- degree to which scores on a test are related to a recognized standard or criterion
- obtained by determining the correlation/validity coefficient (r) between scores for the test and the criterion measure

7

criterion validity can be split into 2 types

concurrent & predictive

8

what is concurrent validity

criterion measured at approximately the same time as the alternate measure and the scores are compared (e.g. skinfolds and underwater weighing)

9

what is predictive validity

criterion measured in the future (weeks, months, years later)
- e.g. pre-selection test battery score and selection course success

10

convergent validity

2 or more measurements are conducted to collect data and establish that a test battery measures what it purports to measure (how representative it is of the tasks)

11

what do you use a test-retest for?

to calculate stability reliability coefficient (r> 0.9 indicates high reliability)

12

3 types of reliability

stability, internal-consistency, objectivity

13

stability reliability

-scores do not change across days (look at relationship between multiple trials across multiple days)

14

what are 3 factors that contribute to low stability

1. people tested may perform differently
2. measuring instrument may operate or be applied differently
3. person administering measurement may change

15

internal consistency reliability

-evaluator gives at least 2 trials of the test within a single day
-changes in score between trials indicate poor reliability
-benefit: all measurements are taken within the same day
(this coefficient is not comparable to the stability-reliability coefficient)

16

objectivity

rater/ judge reliability
--> inter-tester reliability

17

factors affecting objectivity

-clarity of scoring system
-degree to which judge can assign a score accurately

18

considerations for reducing measurement error

-valid and reliable tests
-instructions
-test complexity
-warmup and test trials
-equipment quality and prep (calibration)
- testing environment
- scoring accuracy
- experience and state of mind of person conducting test and person being tested

19

why do we want to calibrate our equipment

-important to confirm accuracy of what equipment is telling you

20

how do we calibrate equipment

-requires comparison between two measurements
-one of known magnitude and one of unknown magnitude that needs to be confirmed
- ensure equipment calibration schedule is up to date
- check equipment for safety and proper functioning
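The comparison step above can be sketched as a simple check of a reading against a known reference. Everything here is hypothetical: the function name and the 1% tolerance are assumptions for illustration, not from the lecture.

```python
# Hypothetical sketch: checking a scale's reading against a known
# reference mass. The 1% tolerance is an assumed value.
def within_calibration(known_value: float, measured_value: float,
                       tolerance_pct: float = 1.0) -> bool:
    """Return True if the reading is within tolerance of the known standard."""
    error_pct = abs(measured_value - known_value) / known_value * 100
    return error_pct <= tolerance_pct

# 20.0 kg reference mass vs. two readings:
print(within_calibration(20.0, 20.1))  # 0.5% error -> True
print(within_calibration(20.0, 20.6))  # 3.0% error -> False
```

If the reading falls outside tolerance, the equipment needs adjustment before testing.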

21

when should you calibrate the equipment?

at least every 6 months or according to the manufacturer's guidelines

22

when can you expect reliability

- testing environment is favorable for good performance
- people are motivated, informed, and ready to be tested
- administrator is trained and competent