Week 3 Online Flashcards

1
Q

types of purposes of measurement

A

discriminative
predictive
evaluative
descriptive

2
Q

what's discriminative measurement

A

attempts to differentiate between two or more groups of people

3
Q

what's predictive measurement

A

attempts to classify people into a set of predefined measurement categories for the purpose of estimating prognosis: uses criteria to classify individuals in order to predict a certain trait against set criteria. For example, a predictive tool can measure the skills underlying driving performance to predict whether an older adult will be able to successfully return to driving.

4
Q

what's evaluative measurement

A

pertains to the measurement of change in an individual or group over time

5
Q

what's descriptive measurement

A

pertains to efforts to obtain a clinical picture or baseline of a person's skills

6
Q

what do measurements enable therapists to do

A

Enables therapists to:

  1. Quantify attributes of individuals in a standardised way.
  2. Make comparisons about performance or capacity across individuals or groups of individuals.
  3. Document how an individual's performance has changed over time and across performance contexts.
7
Q

what's an assessment

A

the process of determining the meaning of a measurement

8
Q

what are the four types of assessment

A
  1. Non-standardised
  2. Standardised
  3. Criterion-referenced, where the client is graded in terms of some behavioural standard
  4. Norm-referenced, where the client is compared to a group of other people who have taken the same measure
9
Q

what's evaluation

A

the process of determining the worth of something in relation to established benchmarks using assessment information. The process of obtaining and interpreting information needed for intervention planning and effectiveness review.

10
Q

what's re-evaluation

A

the process of critically analysing a client's response to intervention. Enables the therapist to assess the client's response to intervention and to collaborate with the client to determine changes to the intervention plan.

11
Q

what's screening

A

a quick review of the client's situation to determine if an OT evaluation is warranted. Typically a hands-off process.

12
Q

what's testing

A

a systematic procedure for observing a person's behaviour and describing it with the aid of a numerical scale or category system.

13
Q

what are the types of testing

A
  • Observation
  • Interview/history
  • Review of records/survey
  • Paper-and-pencil tests
  • Oral tests
  • Apparatus tests requiring equipment
  • Performance tests requiring the participant to perform a nonverbal task
  • Online tests where individuals answer questions
14
Q

why do we need to assess

A

For practice to be effective, one must demonstrate and document its effectiveness. To demonstrate and document effectiveness, one must measure.

15
Q

what's evidence-based practice

A

“The conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients”. The integration of the best available research evidence, clinical experience and patient values.

16
Q

what's non-standardised assessment

A

does not follow a standard approach or protocol; may contain data collected from interviews, questionnaires and observation of performance

17
Q

what's standardised assessment

A

developed using prescribed procedures, and administered and scored in a consistent manner under the same conditions and test directions.

18
Q

what's criterion-referenced assessment

A

client performance is assessed against a set of predetermined standards. Based on a predetermined set of criteria, e.g. HD = 90% and up, D = 70-80%, etc.

19
Q

what are the pros and cons of criterion-referenced assessment

A

Pros:
1. Sets minimum performance expectations.
2. Demonstrates what clients can and cannot do in relation to important content-area standards.
Cons:
1. Sometimes it's hard to know just where to set boundary conditions.
2. Lack of comparison data with other clients and/or agencies.

20
Q

what's norm-referenced assessment

A

client performance is assessed relative to the others in the group. Based upon the assumption of a standard normal distribution, e.g. the top 10% of people get an A and the next 20% get a B.
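
To see the contrast concretely, here is a minimal Python sketch with hypothetical scores; the HD/D labels follow the criterion-referenced card above and the 10%/20% proportions follow this card.

```python
# Hypothetical sketch: the same scores graded two ways.

def criterion_grade(score):
    """Criterion-referenced: fixed cut-offs, e.g. HD = 90% and up, D = 70%+."""
    if score >= 90:
        return "HD"
    if score >= 70:
        return "D"
    return "below D"

def norm_grades(scores):
    """Norm-referenced: top 10% get HD, the next 20% get D."""
    ranked = sorted(scores, reverse=True)
    n = len(scores)
    hd_cut = ranked[max(int(n * 0.10) - 1, 0)]  # lowest score still in top 10%
    d_cut = ranked[max(int(n * 0.30) - 1, 0)]   # lowest score still in top 30%
    return {s: "HD" if s >= hd_cut else ("D" if s >= d_cut else "below D")
            for s in scores}

scores = [95, 88, 82, 75, 74, 71, 65, 60, 55, 40]
print([criterion_grade(s) for s in scores])  # cut-offs never move
print(norm_grades(scores))                   # cut-offs move with the group
```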

21
Q

what are the pros and cons of norm-referenced assessment

A

Pros:
1. Ensures a “spread” between the top and bottom of the group for clear grade setting.
2. Shows client performance relative to group.
Cons:
1. In a group with great performance, some will still receive an “F” or failing result.
2. Top and bottom performances can sometimes be very close.
3. Dispenses with absolute criteria for performance.
4. Being above average does not necessarily imply “A” performance.

22
Q

what are operational definitions

A

instructions given to the tester about exactly what to observe and the precise procedure to follow; crucial for reliability, accuracy and consistency

23
Q

what are the types of scoring

A
  • Checklist
  • Rating scale
  • Graph
  • Counting performance of a specific task
  • Developmental milestones
  • Time taken to complete a specific task
24
Q

what are the different administration formats

A
  • Task performance or observation
  • Self-report
  • Proxy or caregiver report
  • Mode of report: face-to-face interview, telephone or mailed questionnaire
  • Online completion of items, tasks or questions
25
Q

define reliability

A

consistency & repeatability of the results obtained when a scale is administered on more than one occasion by the same researcher using the same measure.

26
Q

what are random error and systematic error

A

Random error: represented by inconsistencies that cannot be predicted, e.g. fatigue.
Systematic error: predictable fluctuations occurring during measurement.

27
Q

what's intra-rater reliability

A

the degree to which scores on a measure obtained by one trained observer agree with the scores obtained on two or more trials.

28
Q

what's inter-rater reliability

A

the degree to which the scores on a measure obtained by one trained assessor agree with the scores obtained by another trained assessor.

29
Q

what's alternate-form reliability

A

the degree of correlation between two different but equivalent forms of the same test, completed by the same group of people.

30
Q

what's split-half reliability

A

the degree of correlation between one half of the items of a test and the other half, e.g. odd-numbered items correlated with even-numbered items.
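
As an illustration, here is a minimal Python sketch of split-half reliability with made-up item scores; the Spearman-Brown correction at the end (a standard companion step, not mentioned on this card) adjusts the half-test correlation up to the full test length.

```python
# Minimal split-half reliability sketch with hypothetical data.
from scipy.stats import pearsonr

# rows = respondents, columns = 8 test items (made-up scores)
items = [
    [4, 3, 4, 4, 5, 4, 3, 4],
    [2, 2, 3, 2, 2, 3, 2, 2],
    [5, 4, 5, 5, 4, 5, 5, 4],
    [3, 3, 2, 3, 3, 2, 3, 3],
    [1, 2, 1, 1, 2, 1, 1, 2],
]

odd_totals = [sum(row[0::2]) for row in items]   # items 1, 3, 5, 7
even_totals = [sum(row[1::2]) for row in items]  # items 2, 4, 6, 8

r_half, _ = pearsonr(odd_totals, even_totals)    # correlate the two halves
r_full = (2 * r_half) / (1 + r_half)             # Spearman-Brown correction
print(f"half-test r = {r_half:.2f}, full-test reliability = {r_full:.2f}")
```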

31
Q

what's test-retest reliability

A

stability over time, or the degree to which the scores change on repeated administration.

32
Q

internal consistency

A

the way individual items of the instrument group together to form a unit. Reflects the homogeneity among the items of a test; the degree of agreement between the items of a test measuring an underlying trait.

33
Q

what's Cronbach's coefficient alpha

A

used to assess internal consistency; estimates the reliability of scales, or the commonality of one item in a test with the other items in the test; ranges from 0.10 to 0.99
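
A minimal worked sketch with hypothetical item scores; the formula used is the standard one, alpha = k/(k-1) × (1 − sum of item variances / variance of total scores).

```python
# Cronbach's alpha from an item-score matrix (made-up data).
import numpy as np

# rows = respondents, columns = 4 test items
scores = np.array([
    [4, 3, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [1, 2, 1, 1],
])

k = scores.shape[1]
item_vars = scores.var(axis=0, ddof=1)      # variance of each item
total_var = scores.sum(axis=1).var(ddof=1)  # variance of respondents' totals
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")    # closer to 1 = more homogeneous
```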

34
Q

what's crude agreement

A

used with nominal or ordinal categories; simply described in terms of percentage agreement

35
Q

what's kappa (κ)

A

used in assessments yielding multiple nominal placements, since it corrects for chance agreement
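
A minimal Python sketch with two raters' hypothetical nominal placements; it also prints the crude (observed) agreement from the previous card, which kappa then corrects for chance: kappa = (p_o − p_e) / (1 − p_e).

```python
# Cohen's kappa for two raters' nominal placements (made-up ratings).
from collections import Counter

rater_a = ["fall risk", "safe", "safe", "fall risk", "safe", "safe"]
rater_b = ["fall risk", "safe", "fall risk", "fall risk", "safe", "safe"]
n = len(rater_a)

p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n       # crude agreement

count_a, count_b = Counter(rater_a), Counter(rater_b)
p_e = sum(count_a[c] * count_b[c] for c in count_a) / n ** 2  # chance agreement

kappa = (p_o - p_e) / (1 - p_e)
print(f"crude agreement = {p_o:.2f}, kappa = {kappa:.2f}")
```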

36
Q

what's correlation

A

used with test-retest, intra-rater and inter-rater reliability; usually the Pearson r or Spearman rho correlation is used; ranges from −1.0 to +1.0
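
For example, a test-retest correlation on made-up scores can be computed with scipy; Pearson's r suits interval/ratio data, while Spearman's rho works on ranks (ordinal data).

```python
# Test-retest correlation with hypothetical scores.
from scipy.stats import pearsonr, spearmanr

test = [12, 18, 25, 9, 30, 22]     # first administration
retest = [14, 17, 24, 11, 29, 20]  # second administration

r, _ = pearsonr(test, retest)
rho, _ = spearmanr(test, retest)
print(f"Pearson r = {r:.2f}, Spearman rho = {rho:.2f}")  # range -1.0 to +1.0
```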

37
Q

what's weighted kappa or the intra-class correlation coefficient

A

used to determine the reliability of a test when ratings are made on an ordinal scale
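
A minimal sketch of weighted kappa on a hypothetical 1-5 ordinal scale, using scikit-learn's cohen_kappa_score; with linear weights, disagreements are penalised in proportion to their distance on the scale.

```python
# Weighted kappa for two raters on an ordinal 1-5 scale (made-up ratings).
from sklearn.metrics import cohen_kappa_score

rater_a = [1, 2, 2, 3, 4, 5, 3, 2]
rater_b = [1, 2, 3, 3, 4, 4, 3, 1]

# a 1-step disagreement (e.g. 4 vs 5) costs less than a 3-step one (2 vs 5)
kw = cohen_kappa_score(rater_a, rater_b, weights="linear")
print(f"weighted kappa = {kw:.2f}")
```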

38
Q

define validity

A

the extent to which a test measures what it purports to measure.

39
Q

what's construct validity

A

the degree to which the scores obtained concur with the underlying theories related to the content. Constructs are concepts with multiple attributes and are embedded in theory.
Three parts of construct validity:
1. Describe the concepts/constructs that account for test performance
2. Compose hypotheses that explain the relationships of the concepts
3. Test the hypotheses.

40
Q

what are the subtypes of construct validity

A

convergent validity
divergent validity
discriminant validity
factor analysis/ factorial validity

41
Q

convergent validity

A

seeing whether a measure displays the pattern of converging or predictive relationships it should.

42
Q

divergent validity

A

distinguishing the construct from confounding factors

43
Q

discriminant validity

A

ability of a test to differentiate between two groups of people with a known difference

44
Q

factor analysis/factorial validity

A

a statistical procedure used to determine whether test items group together to measure a discrete construct or variable

45
Q

content validity

A

refers to the items selected for inclusion in the assessment, which should be representative of the construct being measured. Items can be selected by experts in the field, through a comprehensive review of the relevant literature, or from practical experience.

46
Q

criterion validity

A

a recognised gold standard against which new tests are compared. Results from a new test should correlate with those of the recognised standard.

47
Q

two subtypes of criterion validity

A
  1. Concurrent/congruent validity: the degree to which test results agree with other measures of the same or similar traits and behaviours.
  2. Predictive validity: the extent to which a measure is able to forecast or predict an important future event or criterion; the extent to which test results agree with a future outcome or criterion.
48
Q

face validity

A

the test appears to measure what its author intended it to measure. Test items appear to be related to the variable being tested and relevant to the stated purpose. There is no statistical test for face validity; it relies on logic and subjective judgment.

49
Q

ecological validity

A

the outcome of an assessment holds up in real-world circumstances.

50
Q

types of experimental validity

A

Internal validity: is the experimenter measuring the effect of the independent variable on the dependent variable?
External validity: can the results be generalised to the wider population?

51
Q

what's sensitivity

A

the ability of a test to detect genuine changes in a client's clinical condition or ability; the ability to obtain a positive test when the condition really exists. Depends on the scale used, e.g. a 7-point scale is more sensitive than a 2-point scale.

52
Q

what's specificity

A

a test's ability to obtain a negative result when the condition is absent.
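
Together with sensitivity from the previous card, both values fall out of a 2×2 table of test results against the true condition; here is a minimal sketch with hypothetical screening counts.

```python
# Sensitivity and specificity from a 2x2 table (made-up counts).
tp, fn = 40, 10  # condition present: test positive / test negative
tn, fp = 85, 15  # condition absent:  test negative / test positive

sensitivity = tp / (tp + fn)  # positive test when the condition really exists
specificity = tn / (tn + fp)  # negative test when the condition is absent
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```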

53
Q

what's responsiveness

A

deals with the notion of providing evidence of the ability of a measure to assess and quantify clinically important change. Individual test items should be responsive to clinically important change, the scale should measure longitudinal change, and variation between replicate assessments should be small.

54
Q

what are ceiling effects

A

when the task is too easy and all patients perform at or near the perfect score.

55
Q

what's a floor effect

A

when the task is too hard and everyone performs at the worst possible level.
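
A quick, hypothetical way to screen for ceiling and floor effects is to check what share of clients score at the scale's extremes (made-up 0-10 scores below).

```python
# Share of clients at the scale's maximum (ceiling) or minimum (floor).
scores = [10, 10, 9, 10, 10, 8, 10, 10, 7, 10]  # hypothetical 0-10 scores
max_score, min_score = 10, 0

ceiling = sum(s == max_score for s in scores) / len(scores)
floor = sum(s == min_score for s in scores) / len(scores)
print(f"at ceiling: {ceiling:.0%}, at floor: {floor:.0%}")
# a large share at either extreme suggests the task is too easy or too hard
```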

56
Q

what's simplicity

A

improves patient and user compliance and increases reliability. Simple measures are usually quick and therefore more readily used.

57
Q

what's clinical utility

A

the more user-friendly an instrument is, and the more relevant it is to the clinical diagnostic group it is meant for, the greater its utility.

58
Q

what's communicability

A

measures should give results which are easily understood by others. Results presented numerically, with zero representing the worst level and 10 or 100 representing the best level, are more easily understood.

59
Q

discriminability

A

is performance on individual test items positively correlated with overall client performance?

60
Q

why we use standardised assessments

A
  • Problem identification / Basis for intervention
  • Provide the basis for goal-setting with clients & families
  • Outcome measurement
  • Prediction or prognosis
  • Research
  • Communication & reporting
  • Accountability & quality assurance
  • Funding & reimbursement
  • Comparison & tracking
61
Q

nominal

A

items have discrete, unordered response options, e.g. female/male. Numbers assigned to the categories act only as labels; they do not represent an ordering or the absolute value of each category.

62
Q

ordinal

A

data has some order, with one score being better/worse than another. Numbers can be assigned to the categories and a total score can be obtained. Parametric statistical procedures should not be used.

63
Q

ratio scales

A

have a true zero point. Nominal data is not summed, whereas ordinal, interval and ratio data are.

64
Q

interval scales

A

the differences between any two adjacent scale points are identical

65
Q

hierarchical scales

A

these are usually ordinal scales with the items arranged into a hierarchy. Patients passing items at one level can be assumed to pass all easier items.

66
Q

direct observation

A
  • Highly reliable and valid, but costly, as it requires the time and judgment of skilled and trained personnel.
  • No direct observation of performance in natural environments such as home or school.
  • Prediction after discharge based on hospital performance may not be accurate to the real-life context.
67
Q

self report

A
  • More subjective, less reliable, and less costly to administer
  • More accurate with higher-functioning patients
  • Lack of detailed operational definitions in questionnaires
68
Q

problems with modification of measures

A
  • Resist the temptation to change, modify or adapt assessment tools.
  • Any adaptation renders the standardisation of an assessment null and void.
  • Modifying a test impacts its validity and reliability, and therefore the results can be questionable.