assessment principles Flashcards (OCC1022)

Flashcards in assessment principles Deck (56):
1

define discriminative measurements

attempts to differentiate between two or more groups of people

2

define predictive measurements

attempts to classify people into a set of predefined measurement categories for the purpose of estimating an outcome

3

define evaluative measurement

pertains to measurement of change in an individual or group over time

4

define descriptive measurement

pertains to efforts to obtain a ‘clinical picture’ or baseline of a person’s skills

5

what are the 4 types of assessment

non standardised
standardised
criterion referenced
norm referenced

6

what does measurement enable therapists to do

- quantify attributes of individuals
- make comparisons
- document change in performance

7

define evaluation

The process of determining the worth of
something in relation to established benchmarks
using assessment information.

8

define re-evaluation

process of critical analysis of client response
to intervention

9

define screening

A quick review of the client’s situation to determine if an occupational therapy evaluation is warranted

10

define testing

a systematic procedure for observing a person’s behaviour & describing it with the aid of a numerical scale or a category-system

11

define evidence based practice

The integration of the best available research evidence, clinical expertise and patient values

12

define non standardised assessments

Do not follow a standard approach or protocol

May contain data collected from interviews, questionnaires and observation of performance

13

define standardised assessments

- Are developed using prescribed procedures

- Are administered and scored in a consistent manner under the same conditions and test directions

14

define descriptive assessments

to describe individuals within groups and to characterise differences

15

define evaluative assessments

use criteria or items to measure an individual’s trait over time

16

define predictive assessments

use criteria to classify individuals in order to predict an outcome or trait against those criteria

17

define criterion referenced assessment

client performance is assessed against a set of predetermined standards

18

define norm referenced assessment

client performance is assessed relative to the performance of others in a comparison (norm) group

19

pros of criterion referenced assessments

- sets minimum performance expectations
- demonstrates what clients can and cannot do

20

cons of criterion referenced assessments

- hard to know where to set boundary conditions
- lack of comparison data

21

define norm referenced assessments

Based upon the assumption that scores follow a normal (Gaussian) distribution, typically requiring a sample of n > 30.
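A hedged illustration (not part of the original card): if scores in the norm group are approximately normally distributed, a client’s raw score x can be located relative to the group’s mean and standard deviation as a standard (z) score:

z = \frac{x - \mu}{\sigma}

A z of 0 sits at the group mean, and roughly 95% of the norm group falls between z = -2 and z = +2.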

22

pros of norm referenced assessments

- ensures a spread
- shows client performance relative to the group

23

cons of norm referenced assessments

- in a strong group, some will inevitably receive a failing grade
- above-average performance is not necessarily good

24

define reliability

The reproducibility of test results on more than one occasion by the same researcher using a measure.

Reliability coefficients range from 0 to 1.

25

define random error

errors that cannot be predicted

26

define systematic error

errors that are predictable and consistent in direction (i.e., bias)

27

list the types of reliability


Intra-rater reliability
Inter-rater reliability
Test-retest reliability / temporal stability
Alternate form reliability
Split half reliability
Internal consistency

28

intra rater reliability

The stability of data collected by one rater on two or more occasions

29

inter rater reliability

Detecting variability between 2 or more raters who measure the same client

30

test retest reliability

The reliability/stability of measurements when given to the same people over time

31

alternate form reliability

the degree of correlation between two different but equivalent forms of the same test completed by the same group of people

32

split half reliability

the degree of correlation between one half the items of a test and the other half of the items of a test (e.g., odd numbered items correlated with the even numbered items)
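A hedged side note (not stated in the deck): the split-half correlation r_half is usually adjusted with the Spearman-Brown formula to estimate the reliability of the full-length test:

r_{\text{full}} = \frac{2\, r_{\text{half}}}{1 + r_{\text{half}}}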

33

internal consistency

the degree of agreement between the items in a test that measure a construct

34

cronbachs coefficient alpha

used to assess internal consistency; estimates the reliability of scales, or the commonality of one item in a test with the other items in the test; values typically range from 0.10 to 0.99
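A minimal sketch (not from the deck) of how Cronbach’s alpha can be computed from a respondents-by-items score matrix; the function name and example data are illustrative assumptions only:

import numpy as np

def cronbach_alpha(scores):
    """Internal consistency of a scale from an (n_respondents x n_items) matrix."""
    scores = np.asarray(scores, dtype=float)
    n_items = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return (n_items / (n_items - 1)) * (1 - item_vars.sum() / total_var)

# Example: 5 clients rating a 4-item scale
data = [[3, 4, 3, 4],
        [2, 2, 3, 2],
        [4, 5, 4, 5],
        [3, 3, 2, 3],
        [5, 4, 5, 4]]
print(round(cronbach_alpha(data), 2))

Higher values indicate that the items vary together, i.e. they appear to measure the same construct.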

35

kappa (k)

used in assessments yielding multiple nominal placements since it corrects for chance
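A hedged illustration (not in the original card): kappa expresses agreement corrected for chance, where p_o is the observed proportion of agreement between raters and p_e the proportion expected by chance:

\kappa = \frac{p_o - p_e}{1 - p_e}

A kappa of 1 indicates perfect agreement; 0 indicates agreement no better than chance.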

36

weighted k

used to determine the reliability of a test when rating on an ordinal scale

37

validity

the extent to which a test measures what it purports to measure

38

construct validity

Establishes whether assessment measures a construct and its theoretical components

39

what are the 3 parts of construct validity

1. describe the constructs that account for test performance
2. formulate hypotheses that explain the relationships
3. test the hypotheses

40

list the 4 subtypes of construct validity

- convergent
- divergent
- discriminant
- factor analysis

41

convergent validity

Level of agreement between 2 tests that are being used to measure the same construct

42

divergent validity

Distinguishing the construct from confounding factors

43

discriminant validity

The level of disagreement (low correlation) between two tests that measure different traits

44

factor analysis validity

statistical procedure used to determine whether test items group together to measure a discrete construct or variable

45

content validity


The extent to which a measurement reflects a specific domain

46

criterion validity

Implies the outcome can be used as a substitute for a ‘gold standard’ criterion test

47

what are the 2 subtypes of criterion validity

1. concurrent/congruent validity (degree to which results agree with others)

2. predictive validity (extent to which measure can forecast)

48

face validity

A test appears to measure what its author intended it to measure

49

ecological validity

The outcome of an assessment holds up under real-world circumstances

50

what are the 2 types of experimental validity

- internal
- external

51

sensitivity

Ability of a test to detect genuine changes in a client’s clinical condition or ability

52

specificity

A test’s ability to obtain a negative result when the condition is really absent (a true negative)
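A hedged summary (not from the deck) of the standard formulas when sensitivity and specificity refer to detecting whether a condition is present or absent, with TP, FN, TN, FP denoting true/false positives and negatives:

\text{sensitivity} = \frac{TP}{TP + FN} \qquad \text{specificity} = \frac{TN}{TN + FP}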

53

responsiveness

providing evidence of the ability of a measure to assess and quantify clinically important change

54

nominal measurement

data fall into named categories with no inherent order; items often have only two response options, for example male/female, yes/no, wet/dry, happy/sad

55

ordinal measurement

data has some order, with one score being better/worse than another.

56

interval scales

the differences between any two adjacent scores are identical (such as weight, temperature and distance); parametric statistics can be used correctly.