Week 6: Reliability Flashcards

1
Q

validity: design vs measurement

A

design validity (statistical conclusion validity, internal validity, external validity) vs. measurement validity (construct validity, measurement reliability)

2
Q

6 indicators of construct validity

A
  • face validity
  • content validity
  • predictive validity
  • concurrent validity
  • convergent validity
  • discriminative validity
3
Q

face validity

A

content of measure appears to reflect the construct being measured (e.g. a rating scale)
- based on a judgement of the instrument

4
Q

content validity

A
  • comparing content of measurement w content that defines construct
  • questionnaire w good content validity should include questions on all components
5
Q

predictive validity

A
  • measurement to predict future behaviour/outcome
6
Q

concurrent validity

A
  • relationship between measurement and criteria at the same time (concurrently)
  • important when trying to validate new instrument
  • demonstrated when test correlates well w measure previously validated
7
Q

convergent validity

A

measuring 2 theoretical constructs that are related in theory -> measurements should also be related
- extent to which scores are related to other measures

8
Q

discriminative validity

A
  • degree to which measurement is NOT related to other (not theoretically related) measurements
9
Q

threats to construct validity

A
  • inadequate description of construct
  • inadequate measurement
  • inadequate attention to IV levels
10
Q

inadequate description of construct

A
  • focus on limited number of components -> concluding that entire construct is evaluated
11
Q

inadequate measurement

A

only one measurement/method to gather data

12
Q

inadequate attention to IV levels

A

only one/two levels of multilevel variable are used

13
Q

reliability

A
  • replicability/consistency
14
Q

internal reliability

A
  • reliability of instrument chosen to evaluate DV in study
15
Q

reliability coefficient

A
a number expressing the relationship between multiple test administrations / multiple items
- 0–1 scale (0.8 or higher is good)
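In practice a reliability coefficient is usually computed as a correlation. A minimal Python sketch with hypothetical scores (the data and the two-administration setup are illustrative assumptions, not from the course):

```python
import numpy as np

# Hypothetical scores for 6 people on two administrations of the same test
time1 = np.array([12, 15, 11, 18, 14, 16])
time2 = np.array([13, 14, 12, 17, 15, 16])

# Pearson correlation as the reliability coefficient (0-1 scale; >= 0.8 is good)
r = np.corrcoef(time1, time2)[0, 1]
print(round(r, 2))
```

Here the two administrations agree closely, so r lands well above the 0.8 benchmark on the card.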
16
Q

reliability of instrument (internal reliability)

A

evaluated by:

  • parallel forms
  • test-retest
  • split-half
  • Cronbach’s alpha
17
Q

parallel forms reliability

A
  • similarity between different sets of questions targeting same construct
  • different but equivalent versions of the test are administered and results compared
18
Q

test-retest reliability

A
  • over time
  • testing same people on same test at different times
  • expected to yield similar results
19
Q

split-half reliability

A
  • test items split into two halves within the same test administration
  • respondents expected to score similarly on both halves

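The split-half idea can be sketched in a few lines of Python. The item scores are hypothetical, and the Spearman-Brown correction (a standard companion to split-half that adjusts for the halved test length) is not mentioned on the card:

```python
import numpy as np

# Hypothetical scores: 5 respondents x 6 items, all from a single administration
scores = np.array([
    [4, 5, 4, 5, 4, 5],
    [2, 1, 2, 2, 1, 2],
    [3, 3, 4, 3, 3, 3],
    [5, 4, 5, 5, 5, 4],
    [1, 2, 1, 1, 2, 1],
])

# Split items into two halves (odd/even items) and sum each half per respondent
half1 = scores[:, 0::2].sum(axis=1)
half2 = scores[:, 1::2].sum(axis=1)

# Correlate the half scores, then apply the Spearman-Brown correction
r_half = np.corrcoef(half1, half2)[0, 1]
r_full = 2 * r_half / (1 + r_half)
print(round(r_full, 2))
```

An odd/even split is used rather than first-half/second-half so that item difficulty and fatigue effects spread evenly over both halves.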
20
Q

Cronbach’s alpha

A
  • estimate of the degree to which different items on same scale represent same content
  • based on correlations calculated between each pair of questions on test
21
Q

reliability of observations (2)

A
  • inter-rater reliability
  • intra-rater reliability

22
Q

inter-rater reliability

A
  • degree to which 2 independent raters/observers record/code same situation similarly
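Inter-rater agreement is often summarized with Cohen's kappa, which discounts agreement expected by chance; the kappa index and the coded data below are illustrative additions, not from the card:

```python
from collections import Counter

# Hypothetical codes assigned by two independent raters to the same 10 situations
rater1 = ["A", "A", "B", "B", "A", "B", "A", "A", "B", "A"]
rater2 = ["A", "A", "B", "A", "A", "B", "A", "B", "B", "A"]

n = len(rater1)
# Observed agreement: proportion of situations coded identically
p_obs = sum(a == b for a, b in zip(rater1, rater2)) / n

# Chance agreement from each rater's marginal code frequencies
c1, c2 = Counter(rater1), Counter(rater2)
p_chance = sum(c1[code] * c2[code] for code in c1) / n**2

# Cohen's kappa: agreement beyond what chance alone would produce
kappa = (p_obs - p_chance) / (1 - p_chance)
print(round(p_obs, 2), round(kappa, 2))
```

Raw percent agreement overstates reliability when a few codes dominate, which is why kappa comes out noticeably lower than p_obs here.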
23
Q

intra-rater reliability

A
  • degree to which same rater/observer records similar data about same observation on 2 different occasions