Week 6: Reliability and Validity Flashcards

1
Q

what is measurement error?

A

obtained score = true score ± error

2
Q

what are 3 common factors for measurement error?

A

1) situational contaminants
2) response set biases
3) Transitory personal factors

3
Q

what are situational contaminants?

A

data are affected by the conditions under which they are collected, such as
- friendliness of researchers
- location of data gathering
- environmental factors: light, temperature

4
Q

Explain the 3 response set biases

A

1- extreme responses: consistently choosing the extreme ends of the scale
2- acquiescence response: agreeing with everything
3- social desirability response: answering in the way that appears most socially acceptable

5
Q

explain transitory personal factors

A

data are affected by temporary personal states such as hunger or mood; these factors can distort measurement, e.g. anxiety can increase heart rate.

6
Q

what are the 3 strategies to reduce error?

A

1) standardisation
2) anonymity
3) train interviewers

7
Q

explain the 3 strategies to reduce error

A

standardisation:
collect data at the same place and time
assure respondents that they will be given ample time

anonymity:
respondents can answer without fear of judgement

train interviewers:
interviewers behave in the same way
assess participants' readiness

8
Q

what is reliability?

A

The consistency with which an instrument
measures the target attribute.

9
Q

what is validity?

A

The degree to which an instrument measures what it is
supposed to be measuring, e.g. a thermometer should measure
body temperature, not blood pressure, and a pain scale should
measure pain level, not anxiety level (Polit & Beck, 2014).

10
Q

what are the 3 aspects of reliability?

A

Stability
Internal Consistency
Equivalence

11
Q

what is stability - reliability test?

A

Stability
* The extent to which scores are similar on two
separate administrations of an instrument

12
Q

how do you assess the stability of an instrument?

A

Assessed through the test-retest reliability procedure:
* The same instrument is given twice to the same group of people.
* The reliability is the correlation between the scores on the two tests.
* The scores on the two tests are not identical, but most differences are small.
* A reliability coefficient (r), a numeric index that quantifies an instrument's
reliability, can be computed; 0.7 to 0.8 is good (see the sketch below).
* More appropriate for fairly enduring characteristics (e.g. self-esteem,
personality, IQ tests).
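
A minimal sketch, in Python, of how the test-retest reliability coefficient could be computed, assuming the two administrations yield paired scores for the same respondents (the scores and variable names below are invented for illustration):

import numpy as np

# Hypothetical scores of the same 6 respondents on two administrations of one scale.
test1 = np.array([32, 28, 41, 35, 30, 38])
test2 = np.array([30, 29, 40, 36, 28, 39])

# Test-retest reliability is the Pearson correlation between the two administrations.
r = np.corrcoef(test1, test2)[0, 1]
print(f"test-retest reliability r = {r:.2f}")  # roughly 0.7-0.8 or higher is considered good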

13
Q

what is internal consistency reliability test?

A

Internal consistency (homogeneity)
* The extent to which all the subparts of the
instrument measure the same trait.
* Appropriate for most multi-item instruments.
* Evaluated by administering the instrument on one
occasion.
* The most widely used reliability approach.

14
Q

how do you evaluate internal consistency?

A

Internal consistency
* Evaluated by Cronbach's alpha (coefficient alpha)
* Acceptable level = 0.7 – 0.9 (see the sketch below)
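
A minimal sketch of Cronbach's alpha computed from its standard formula, assuming responses are stored as a respondents-by-items matrix (the data and names below are invented for illustration):

import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a matrix of shape (n_respondents, n_items)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses of 5 people to a 4-item scale.
responses = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [3, 4, 4, 3],
    [5, 5, 4, 5],
    [1, 2, 1, 2],
])
print(f"alpha = {cronbach_alpha(responses):.2f}")  # acceptable range is roughly 0.7-0.9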

15
Q

what does it mean if:

* Cronbach's alpha is too low?
* Cronbach's alpha is too high?

A

too low: items are measuring different traits
too high: redundancy of items

16
Q

what is equivalence reliability test?

A

Equivalence
* Concerns the degree to which two or
more independent observers or
coders agree about the scoring on an
instrument.
* Assessed by comparing observations
or ratings of two or more observers.
* A high level of agreement between
the raters indicates a good
equivalence of the instrument.

17
Q

how do you assess the equivalence of an instrument?

A

Assessed through the inter-rater (interobserver) reliability
procedure: two or more trained observers/coders watch an event
simultaneously and independently record data according to the
instrument's instructions.
* An index of agreement is calculated.
* Cohen's Kappa (κ) is used to measure inter-rater reliability for
categorical outcomes (acceptable ≥ 0.6); see the sketch below.
* Intraclass Correlation Coefficient (ICC) is used to measure inter-rater
reliability for continuous measures (acceptable ≥ 0.7).
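
A minimal sketch of inter-rater agreement for a categorical outcome, using scikit-learn's cohen_kappa_score; the observers, categories, and ratings below are invented for illustration (ICC for continuous ratings would need a separate computation, e.g. via a dedicated statistics package):

from sklearn.metrics import cohen_kappa_score

# Hypothetical categorical ratings by two trained observers of the same 10 events.
rater_a = ["calm", "agitated", "calm", "calm", "agitated", "calm", "agitated", "calm", "calm", "agitated"]
rater_b = ["calm", "agitated", "calm", "agitated", "agitated", "calm", "agitated", "calm", "calm", "calm"]

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa = {kappa:.2f}")  # 0.6 or higher is commonly taken as acceptable agreement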

18
Q

what are the 2 major aspects of validity?

A

Major aspects of validity
* Content Validity
* Criterion-Related Validity

19
Q

what is content validity?

A

Content validity
* Concerns the degree to which an instrument
has an appropriate sample of items for the
construct being measured.
* Adequacy of content of the instrument in
providing full coverage of the concepts of
interest.

20
Q

how do you assess content validity?

A

An instrument's content validity is necessarily based on
judgment by an expert panel.
* Have experts rate items on a four-point scale:
1 = not relevant, 2 = somewhat relevant,
3 = relevant, 4 = very relevant
* A formal content validity index (CVI) is computed across the
experts' ratings of each item's relevance.
* The CVI for the instrument is the proportion of items
rated as either 3 or 4. A CVI score of 0.90 or better
indicates good content validity (see the sketch below).
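
A minimal sketch of the CVI computation described above, assuming every item has been rated 1-4 by each expert (the ratings below are invented for illustration):

# Hypothetical relevance ratings (1-4) from 5 experts for 6 items.
ratings = [
    [4, 3, 4, 4, 3],   # item 1
    [3, 4, 4, 3, 4],   # item 2
    [4, 4, 3, 4, 4],   # item 3
    [2, 3, 4, 3, 2],   # item 4
    [4, 4, 4, 3, 4],   # item 5
    [3, 3, 4, 4, 3],   # item 6
]

# Item-level CVI: proportion of experts rating the item 3 or 4.
item_cvi = [sum(r >= 3 for r in item) / len(item) for item in ratings]

# Instrument-level CVI: averaged across items; 0.90 or better suggests good content validity.
scale_cvi = sum(item_cvi) / len(item_cvi)
print([round(v, 2) for v in item_cvi], round(scale_cvi, 2))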

21
Q

what is criterion-related validity?

A

Criterion-related validity
* The extent to which an instrument corresponds to some
external criterion of the variable of interest.
* External criterion:
a gold standard or well-established valid measure of the variable
of interest.

22
Q

what is the difference between concurrent validity and predictive validity?

A

Concurrent validity
* Reflects the same incident of behaviour as a criterion
measure at the same time.
* The tested instrument is administered together
with the criterion measure.

Predictive validity
* Predicts subjects' responses in the future.
* The criterion measure is used to assess
subjects' responses at a future time.