W9 - Reliability & Validity Flashcards

1
Q

Define reliability

A

The consistency of measurements, of an individual’s performance on a test, or the absence of measurement error.

2
Q

Classical Test Theory

A

Spearman 1904

O = T + e

O = Observed score 
T = True score 
e = Error
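
A minimal sketch of this model in Python (all values illustrative, not from the lecture): observed scores are simulated as true scores plus random error, and reliability can then be read off as the proportion of observed-score variance that is true-score variance.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000                                      # number of test-takers (illustrative)

true = rng.normal(loc=50, scale=10, size=n)   # T: true scores
error = rng.normal(loc=0, scale=5, size=n)    # e: random error, mean zero
observed = true + error                       # O = T + e

# Under CTT, reliability = var(T) / var(O)
print(f"reliability ~ {true.var() / observed.var():.2f}")   # ~ 100/125 = 0.80
```
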
3
Q

What are the 2 types of measurement error?

A

Systematic error

Random error

4
Q

Define systematic error

A

Consistent error which biases the observed score in one direction + doesn’t affect reliability

5
Q

Define random error

A

Unpredictable error which scatters the observed score around the true score + does affect reliability

6
Q

Ways to minimise error

A

Train researchers to ensure proficient use of the instrument

Take repeat measurements

Compare data from 2+ researchers

Careful design of the study protocol

Consider the choice of instrument

Calibrate the instrument

7
Q

Common technique used to assess relative reliability across time/researchers/raters…

A

Pearson’s correlation coefficient

Higher correlation = ⬆️ reliability
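
As a hedged sketch of how this is computed in practice (the data below are made up for illustration), Pearson’s r between two sets of scores via NumPy:

```python
import numpy as np

# Same participants measured on two occasions (illustrative data)
trial1 = np.array([12.1, 14.3, 9.8, 16.0, 11.5, 13.2])
trial2 = np.array([12.4, 14.0, 10.1, 15.6, 11.9, 13.5])

# np.corrcoef returns the 2x2 correlation matrix; r is an off-diagonal entry
r = np.corrcoef(trial1, trial2)[0, 1]
print(f"Pearson's r = {r:.3f}")   # closer to 1 = higher relative reliability
```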

8
Q

How can relative reliability be assessed?

A

Through test-retest reliability

= Assesses the stability of measurements taken on different occasions.

9
Q

What is used when assessing test-retest reliability?

A

2 tests: Pearson’s correlation coefficient

2 or more tests: Intraclass correlation coefficient (ICC)
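
The deck doesn’t say which ICC form is intended; as one common choice, here is a minimal sketch of ICC(2,1) (two-way random effects, absolute agreement, Shrout & Fleiss) computed from ANOVA mean squares, with made-up data:

```python
import numpy as np

def icc_2_1(data: np.ndarray) -> float:
    """ICC(2,1): rows = participants, columns = trials (or raters)."""
    n, k = data.shape
    grand = data.mean()
    ms_rows = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # between subjects
    ms_cols = n * ((data.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # between trials
    ss_err = ((data - grand) ** 2).sum() - ms_rows * (n - 1) - ms_cols * (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                                 + k * (ms_cols - ms_err) / n)

# 5 participants measured on 3 occasions (illustrative data)
scores = np.array([[10.2, 10.5, 10.1],
                   [12.8, 13.0, 12.6],
                   [ 9.1,  9.4,  9.0],
                   [14.5, 14.2, 14.8],
                   [11.0, 11.3, 10.9]])
print(f"ICC(2,1) = {icc_2_1(scores):.3f}")
```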

10
Q

Inter-rater reliability

Reliability / consistency across raters

A

Correlating the scores obtained from a group of participants by 2 or more researchers

11
Q

What does internal consistency refer to?

A

Reliability across different parts of a measurement instrument

e.g. items within a sub-scale on a questionnaire

12
Q

How is internal consistency assessed?

A

Using Cronbach’s alpha reliability coefficient

Values range from 0 to 1

Closer to 1 = higher reliability
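
A minimal sketch of the standard formula, alpha = k/(k−1) × (1 − sum of item variances / variance of total scores), in Python with made-up Likert responses:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: rows = respondents, columns = items in the sub-scale."""
    k = items.shape[1]
    sum_item_vars = items.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of total scores
    return (k / (k - 1)) * (1 - sum_item_vars / total_var)

# 6 respondents answering a 4-item sub-scale (illustrative data)
responses = np.array([[4, 5, 4, 4],
                      [2, 3, 2, 3],
                      [5, 5, 4, 5],
                      [3, 3, 3, 2],
                      [4, 4, 5, 4],
                      [1, 2, 2, 1]])
print(f"alpha = {cronbach_alpha(responses):.3f}")    # closer to 1 = higher reliability
```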

13
Q

List some terms used for absolute reliability

Also known as measures of absolute reliability

A

Technical error of measurement

Standard error (SE) of measurement

Coefficient of variation

Limits of agreement
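
As an illustrative sketch (made-up data, and exact methods vary across textbooks): Bland–Altman 95% limits of agreement and a simple coefficient of variation computed from test-retest data; the SE of measurement is often taken as SD × √(1 − ICC).

```python
import numpy as np

# Test-retest data for 6 participants (illustrative)
trial1 = np.array([12.1, 14.3, 9.8, 16.0, 11.5, 13.2])
trial2 = np.array([12.4, 14.0, 10.1, 15.6, 11.9, 13.5])
diff = trial2 - trial1

# Bland-Altman 95% limits of agreement: mean difference +/- 1.96 SD of differences
low = diff.mean() - 1.96 * diff.std(ddof=1)
high = diff.mean() + 1.96 * diff.std(ddof=1)
print(f"limits of agreement: {low:.2f} to {high:.2f}")

# A simple coefficient of variation (%) across all measurements
scores = np.concatenate([trial1, trial2])
print(f"CV = {100 * scores.std(ddof=1) / scores.mean():.1f}%")
```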

14
Q

Define validity

A

Extent to which a test/instrument measures what it’s supposed to measure.

15
Q

What are the types of validity?

A

Validity of measurement

Validity of a study

16
Q

What comes under validity of measurement?

A

Face validity

Content validity

Construct validity

Criterion validity

17
Q

Validity of measurement

What comes under criterion validity?

A

Concurrent

Predictive

18
Q

What comes under validity of a study?

A

Internal

External

19
Q

Define face validity

A

Whether the method of data collection obviously involves the factor being measured.

20
Q

Define content validity

A

Whether the instrument adequately covers the domain of interest.

21
Q

Define construct validity

A

Assesses the extent to which an instrument accurately measures a hypothetical construct

22
Q

What are the ways of assessing construct validity?

A

Convergent validity

Discriminant validity

23
Q

ASSESSING CONSTRUCT VALIDITY

Convergent validity

A

Scores on an instrument are related to those on a similar measure of the same construct.

24
Q

ASSESSING CONSTRUCT VALIDITY

Discriminant validity

A

Scores on an instrument are NOT related to those from an instrument which assesses a different construct.

25

Q

Criterion-related validity

A

Looks at whether the scores on an instrument are related to scores on a previously validated measure.

26

Q

What are the ways of assessing criterion-related validity?

A

Concurrent validity

Predictive validity

27

Q

CRITERION-RELATED VALIDITY

Concurrent validity

A

Scores on the instrument and the criterion measure are collected at roughly the same time.

28

Q

CRITERION-RELATED VALIDITY

Predictive validity

A

The criterion instrument is completed at a later date.

29

Q

Commonly used technique to assess criterion-related + construct validity

A

Pearson’s correlation coefficient

30

Q

Can an instrument be reliable but not valid?

A

Yes, as it could be consistently measuring the wrong thing.

31

Q

Can an instrument be valid but not reliable?

A

No, as valid measurement requires consistency.

32

Q

Internal validity

A

Refers to the ability to attribute changes in the dependent variable to the manipulation of the independent variable.

33

Q

External validity

A

Refers to the ability to generalise the results of a study to other settings + other individuals.

34

Q

Threats to internal validity

A

Maturation (age/growth)

Selection bias

Expecting certain results

Measurement + equipment (can be overcome by frequent calibration)

Mortality (withdrawal/drop-out)

35

Q

THREATS TO INTERNAL VALIDITY

How can expecting certain results be avoided?

A

Blinding / double-blind study

36

Q

Threats to external validity

A

Reactive or interactive effects of testing

Interaction of selection bias + treatment

Reactive effects of experimental arrangements

Multiple-treatment interference

37

Q

THREATS TO EXTERNAL VALIDITY

How do reactive or interactive effects of testing have an influence?

A

A pre-test makes a participant more aware of or sensitive to the treatment.

38

Q

THREATS TO EXTERNAL VALIDITY

How does interaction of selection bias + treatment have an influence?

A

The treatment is only effective in the group selected.

39

Q

THREATS TO EXTERNAL VALIDITY

How do reactive effects of experimental arrangements have an influence?

A

Treatments effective in the lab may not transfer to the real world.

40

Q

THREATS TO EXTERNAL VALIDITY

How does multiple-treatment interference have an influence?

A

The effects of a previous treatment may influence subsequent ones.

41

Q

What is the definition of relative reliability?

A

The degree to which data maintain their position in a sample with repeated measurements.

42

Q

What is the definition of absolute reliability?

A

The degree to which repeated measurements vary for individuals.

43

Q

Which of the following describes test-retest reliability?

a. Consistency across items
b. Consistency across raters
c. Consistency across time points
d. None of the above

A

c. Consistency across time points

44

Q

When scores on an instrument are related to scores on a previously validated measure, which type of validity has been established?

A

Criterion-related validity