Unit 3: Psychometrics and Measurement Principles Flashcards

1
Q

Biases

A
  • Evaluator Biases
  • Hawthorne effect
  • Observer expectation
  • Test Biases
  • Scoring Errors
2
Q

Evaluator Biases (Biases)

A
  • Background
  • Severity or leniency: Rates at the extremes of the scale (gives a 1 or a 5)
  • Central tendency: Can’t really make a decision, so gives the middle rating (a 3 on a 0-5 scale)
  • Halo effect: Prior experiences impact evaluations
3
Q

Hawthorne Effect (Biases)

A
  • Happens to the individual the test is being given to.

- The person changes their performance because they are being watched (can be positive or negative)

4
Q

Observer Expectation (Biases)

A

- Our interest impacts their effort.

We want them to do better, so we give them a few tries to show we want them to progress.

5
Q

Test Biases (Biases)

A
  • Gender
  • Education Level
  • SES (socioeconomic status)
  • Ethnicity/Culture
  • Geographic
  • Medical Status
6
Q

Scoring Errors (Biases)

A
  • Generosity: Giving more credit than the performance deserves
  • Ambiguity: Interpretation error (not sure what the result means about the client)
  • Halo: Scoring differently based on previous experiences
  • Central Tendency
  • Leniency/Severity
  • Proximity: How preceding events affect scoring
  • Logical Error: Insufficient info to decide on an answer
  • Contrast Error: Too much divergence; scores vary widely across the variety of tests
7
Q

Practice Q: An error of the halo effect means what?

A

Your past experiences color your current opinion.

8
Q

Practice Q: Which assessment bias is occurring in the following scenario?
A child is taking a handwriting assessment and doing very well. They are taking their time, sitting appropriately at the table, and holding the pencil correctly. However, the samples of handwriting provided by the teacher demonstrate significantly illegible handwriting, and there are reports that the child rushes through work or refuses to participate in classroom writing activities; has a difficult time sitting at their seat; and uses an immature pencil grasp.
-What might be causing the child’s performance to be so different on the assessment than in the classroom?

A

Hawthorne effect

9
Q

Errors of Measurement

A
  • Item Bias: Some items may be harder/easier, better/worse
  • Rater Error
  • Individual Error: Inability to perform or understand the task
  • Standard Error of Measurement
10
Q

Standard Error of Measurement (SEM)

A

Best prediction of how much error still exists in a score.

- No matter how closely you follow the test directions, there is still error.
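
The card doesn’t give a formula, but a commonly used classical formula is SEM = SD × √(1 − reliability coefficient). Below is a minimal Python sketch of that relationship; the function name and all numbers are made up for illustration.

```python
import math

def standard_error_of_measurement(sd: float, reliability: float) -> float:
    """Classical SEM formula: SEM = SD * sqrt(1 - reliability coefficient)."""
    return sd * math.sqrt(1 - reliability)

# Hypothetical test: normative SD of 15, reliability coefficient of .91
sem = standard_error_of_measurement(sd=15, reliability=0.91)
print(round(sem, 1))  # 4.5 -> observed scores likely fall within about +/- 4.5 points of the "true" score
```

Note how a higher reliability coefficient shrinks the SEM, which is why even a carefully standardized test still reports some error.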

11
Q

Reliability

A

Accuracy and stability or consistency of a measure

Types

  • Intrarater: Same therapist gets the same measurement for the same client
  • Interrater: 2 therapists can get the same measurement
  • Test-retest: Get the same results on the same test given twice

Reliability coefficient: .80 or higher = good

Internal consistency or homogeneity: Items within the same test can be pulled apart and still be reliable when compared to each other (a small split-half sketch follows this card)

  • Split half
  • Covariance
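
As a rough illustration of the split-half approach named above, this sketch correlates odd-item totals with even-item totals and applies the Spearman-Brown correction, one common way split-half reliability is computed. Function names and scores are hypothetical.

```python
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation between two lists of scores."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

def split_half_reliability(odd_totals, even_totals):
    """Correlate the two halves, then apply the Spearman-Brown correction for full test length."""
    r_half = pearson_r(odd_totals, even_totals)
    return (2 * r_half) / (1 + r_half)

# Hypothetical item totals for 5 clients, test split into odd vs. even items
odd = [10, 14, 9, 16, 12]
even = [11, 13, 10, 15, 13]
print(round(split_half_reliability(odd, even), 2))  # a value of .80 or higher would be read as good
```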
12
Q

Validity

A

Does a test measure what it says it does?
- Face: Weakest; appears to measure what it says it does
- Content: Enough items to sufficiently represent the construct
- Criterion-related: (Most objective) Concurrent (performance on one assessment as it relates to another), Predictive (performance on one assessment predicts performance on another), Sensitivity (chance of getting a true positive), and Specificity (chance of getting a true negative); a sensitivity/specificity sketch follows this card
- Construct: The test measures a theoretical concept that is a true representation of what you are trying to assess
(To be valid, a test must also be reliable)
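
A minimal sketch of sensitivity and specificity as defined above, using hypothetical screening counts; the function names are illustrative only.

```python
def sensitivity(true_pos: int, false_neg: int) -> float:
    """Chance of a true positive: of those who truly have the condition, the share the test flags."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Chance of a true negative: of those who truly do not have the condition, the share the test clears."""
    return true_neg / (true_neg + false_pos)

# Hypothetical screening results: 40 clients with the condition (36 flagged), 60 without (54 cleared)
print(sensitivity(true_pos=36, false_neg=4))   # 0.9
print(specificity(true_neg=54, false_pos=6))   # 0.9
```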

13
Q

Practice Q: What is the standard error of measurement?

A

A prediction of how much error may still exist despite the standardization of the test.

14
Q

Practice Q: Which of the following types of reliability refers to the ability of an assessment to consistently measure the construct when a person takes the assessment twice with two different therapists who get the same results?

A

Inter-rater reliability

15
Q

PRACTICE Q: Specificity means…

A

The chance of finding a true negative

16
Q

Raw Scores initially obtained (Scoring and Interpretation)

A

Raw scores are initially obtained by calculating the points earned based on the scoring methods outlined by the assessment (e.g., counting up the number of “yes” responses or totaling the number of points scored in a particular section)
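
As a trivial illustration of that idea, a raw score is just the total of the item points; the responses below are made up.

```python
# Hypothetical item responses: "yes" = 1, "no" = 0, or points per item
item_points = [1, 0, 1, 1, 2, 0, 1]
raw_score = sum(item_points)
print(raw_score)  # 6
```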

17
Q

Standard Scores vs. Scaled Scores (Scoring and Interpretation)

A
  • Raw scores can then be converted into a standard score.
  • Standard scores allow for the comparison of results between assessments and over time. They compare the performance of the client to a larger sample with similar characteristics (e.g., comparing the performance of a 3-year-old to other 3-year-olds)
  • Raw scores can also be converted into scaled scores
  • Scaled scores: Allow for comparison of items across tests or subtests. Calculated to account for differences in difficulty across testing items (a conversion sketch follows this card)
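
A minimal sketch of the raw-to-standard-score conversion described above, assuming the usual z-score approach and a reporting scale with a mean of 100 and SD of 15. The actual mean, SD, and conversion tables come from each test’s manual; all numbers below are made up.

```python
def standard_score(raw: float, norm_mean: float, norm_sd: float,
                   scale_mean: float = 100, scale_sd: float = 15) -> float:
    """Place a raw score on the norm group's distribution, then rescale it.

    z = (raw - norm_mean) / norm_sd, rescaled to the score scale the test reports
    (many tests use mean 100, SD 15, but the manual defines this for each test).
    """
    z = (raw - norm_mean) / norm_sd
    return scale_mean + z * scale_sd

# Hypothetical: a 3-year-old's raw score of 42, where same-age peers average 50 with SD 10
print(standard_score(raw=42, norm_mean=50, norm_sd=10))  # 88.0 -> below the scale mean of 100
```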
18
Q

Standard Error

A

The amount of potential error that is reported for each score and should be taken into account when interpreting the results of an assessment
- Ex. A student may have a standard score of 79 with a standard error of 4. This means you could say the student’s “true” score would be anywhere from 75 to 83 (79 - 4 and 79 + 4)
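
The 79 ± 4 example above, as a tiny sketch (hypothetical function name):

```python
def true_score_band(observed: float, standard_error: float) -> tuple[float, float]:
    """Range the 'true' score is expected to fall in, given the reported standard error."""
    return observed - standard_error, observed + standard_error

print(true_score_band(79, 4))  # (75, 83), matching the example above
```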

19
Q

Standard Deviation (SD)

A

Based on a normal distribution of scores; indicates the degree of variance from the mean (average score). When a client falls within 1 SD of the mean (+/-), we typically say they are average (approximately 68% of people would perform about the same as them on the assessment)

  • Falling 1-2 SD (+/-) or more from the mean indicates a definite difference in performance and would generally be a “red flag” to an assessor (only about 5% of the population scores more than 2 SD from the mean; see the sketch after this card)
  • For each test, there is a different mean and standard deviation. (The manual will indicate statistics for you)
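
A small sketch of how far a client’s score falls from the mean in SD units (a z-score), using a made-up norm sample; in practice the test manual supplies the mean and SD.

```python
from statistics import mean, stdev

def sds_from_mean(score: float, scores: list[float]) -> float:
    """How many standard deviations a client's score falls from the sample mean."""
    return (score - mean(scores)) / stdev(scores)

# Hypothetical norm sample and one client's score
sample = [48, 52, 50, 55, 45, 53, 47, 50]
z = sds_from_mean(38, sample)
print(round(z, 1))  # about -3.6: well beyond 2 SD below the mean, a likely "red flag"
```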
20
Q

Standard Deviation Example

A