# W9 - Reliability & Validity Flashcards

1
Q

Define reliability

A

The consistency of measurements (e.g. of an individual's performance on a test), or the absence of measurement error.

2
Q

Classical Test Theory

A

Spearman (1904):

O = T + e

O = Observed score
T = True score
e = Error

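The O = T + e decomposition can be illustrated with a short simulation (a sketch only; the true score, bias, and noise values below are arbitrary illustrative choices). It shows why repeats help with random error but not systematic error:

```python
import random

random.seed(42)

TRUE_SCORE = 50.0   # T: hypothetical true score (arbitrary illustrative value)
BIAS = 2.0          # systematic error: shifts every observation by the same amount
NOISE_SD = 3.0      # random error: varies unpredictably between observations

# O = T + e, where e has a systematic part (BIAS) and a random part (Gaussian noise)
observed = [TRUE_SCORE + BIAS + random.gauss(0, NOISE_SD) for _ in range(10_000)]

mean_obs = sum(observed) / len(observed)
# The mean of many repeats converges to T + BIAS, not to T: repeated
# measurements average out random error but cannot remove systematic error.
```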
3
Q

What are the 2 types of measurement error?

A

Systematic error

Random error

4
Q

Define systematic error

A

Consistent error that biases measurements away from the true score; it does not affect reliability

5
Q

Define random error

A

Unpredictable error that scatters measurements around the true score; it does affect reliability

6
Q

Ways to minimise error

A

Train researchers to ensure proficient use of the instrument

Repeats

Compare data from 2+ researchers

Careful design of study protocol

Consider choice of instrument

Calibrate instrument

7
Q

Common technique used to assess relative reliability across time/researchers/raters…

A

Pearson's correlation coefficient

Higher correlation = ⬆️ reliability
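As a sketch, Pearson's r between two sets of scores (the test and retest data below are made up for illustration) can be computed directly from its definition:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson's correlation coefficient between two equal-length score lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Hypothetical test and retest scores for six participants
test = [12, 15, 11, 18, 14, 16]
retest = [13, 14, 11, 17, 15, 16]
r = pearson_r(test, retest)  # close to 1 -> high relative reliability
```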

8
Q

How can relative reliability be assessed?

A

Through test-retest reliability

= Assesses the stability of measurements taken on different occasions.

9
Q

What is used when doing the test-retest reliability?

A

2 tests: Pearson's correlation coefficient

More than 2 tests: Intraclass correlation coefficient (ICC)
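A minimal sketch of one ICC form, the one-way random-effects ICC(1,1) (the appropriate ICC form depends on the study design; this is just one common variant, shown on made-up ratings):

```python
def icc_oneway(ratings):
    """One-way random-effects ICC(1,1): (MSB - MSW) / (MSB + (k-1)*MSW).
    ratings: one row per participant, one column per trial/rater."""
    n = len(ratings)          # number of participants
    k = len(ratings[0])       # number of trials/raters
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    # Between-subjects and within-subjects sums of squares
    ssb = k * sum((m - grand) ** 2 for m in row_means)
    ssw = sum((x - m) ** 2 for row, m in zip(ratings, row_means) for x in row)
    msb = ssb / (n - 1)
    msw = ssw / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical scores for five participants measured on two occasions
ratings = [[9, 10], [6, 7], [8, 8], [4, 5], [7, 6]]
icc = icc_oneway(ratings)  # close to 1 -> high reliability
```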

10
Q

Inter-rater reliability

Reliability / consistency across raters

A

Correlating the scores obtained from a group of participants by 2 or more researchers

11
Q

What does internal consistency refer to?

A

Reliability across different parts of a measurement instrument

e.g. items within a sub-scale on a questionnaire

12
Q

How is internal consistency assessed?

A

Using Cronbach's alpha reliability coefficient

Values range from 0-1

Closer to 1 = higher reliability
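Cronbach's alpha can be computed from the item variances and the variance of participants' total scores; a sketch (the item data below are made up for illustration):

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """item_scores: one inner list per item, each holding every participant's
    score on that item (all lists the same length)."""
    k = len(item_scores)
    sum_item_var = sum(pvariance(item) for item in item_scores)
    # Total score per participant across all items
    totals = [sum(scores) for scores in zip(*item_scores)]
    return k / (k - 1) * (1 - sum_item_var / pvariance(totals))

# Hypothetical 3-item sub-scale answered by five participants
items = [
    [4, 3, 5, 2, 4],
    [4, 2, 5, 3, 4],
    [5, 3, 4, 2, 5],
]
alpha = cronbach_alpha(items)  # closer to 1 = higher internal consistency
```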

13
Q

List some terms used for absolute reliability

Also known as measures of absolute reliability

A

Technical error of measurement

Standard error of measurement (SEM)

Coefficient of variation

Limits of agreement
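As a sketch, two of these absolute-reliability measures (the coefficient of variation, and Bland-Altman 95% limits of agreement) computed on made-up test-retest data:

```python
from statistics import mean, stdev

# Hypothetical test-retest measurements for six participants (arbitrary units)
trial1 = [10.2, 11.5, 9.8, 12.1, 10.9, 11.0]
trial2 = [10.5, 11.2, 10.1, 12.4, 10.6, 11.3]
diffs = [b - a for a, b in zip(trial1, trial2)]

# Coefficient of variation: spread expressed as a percentage of the mean
cv_percent = stdev(trial1) / mean(trial1) * 100

# 95% limits of agreement: mean difference +/- 1.96 * SD of the differences
bias = mean(diffs)
loa = (bias - 1.96 * stdev(diffs), bias + 1.96 * stdev(diffs))
```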

14
Q

Define validity

A

Extent to which a test/instrument measures what it is supposed to measure.

15
Q

What are the types of validity?

A

Validity of measurement

Validity of a study

16
Q

What comes under validity of measurement?

A

Face validity

Content validity

Construct validity

Criterion validity

17
Q

Validity of measurement

What comes under criterion validity?

A

Concurrent

Predictive

18
Q

What comes under validity of a study?

A

Internal

External

19
Q

Define face validity

A

Whether the method of data collection obviously involves the factor being measured.

20
Q

Define content validity

A

If the instrument adequately covers the domain of interest.

21
Q

Define construct validity

A

The extent to which an instrument accurately measures hypothetical constructs

22
Q

What are the ways of assessing construct validity?

A

Convergent validity

Discriminant validity

23
Q

ASSESSING CONSTRUCT VALIDITY

Convergent validity

A

Scores on an instrument are related to those on a similar measure of the same construct

24
Q

ASSESSING CONSTRUCT VALIDITY

Discriminant validity

A

Scores on an instrument are NOT related to those from an instrument which assesses a different construct.

25
Q

Criterion-related validity

A

Looks at whether the scores on an instrument are related to scores on a previously validated measure.

26
Q

What are the ways of assessing criterion-related validity?

A

Concurrent validity

Predictive validity

27
Q

CRITERION-RELATED VALIDITY

Concurrent validity

A

Scores on both instruments are collected at roughly the same time

28
Q

CRITERION-RELATED VALIDITY

Predictive validity

A

The criterion instrument is completed at a later date

29
Q

Commonly used technique to assess criterion-related + construct validity

A

Pearson's correlation coefficient

30
Q

Can an instrument be reliable but not valid?

A

Yes

As it could be consistently measuring the wrong thing

31
Q

Can an instrument be valid but not reliable?

A

No

An instrument cannot measure what it is supposed to measure if its measurements are inconsistent

32
Q

Internal validity

A

Refers to the ability to attribute changes in the dependent variable to the manipulation of the independent variable

33
Q

External validity

A

Refers to the ability to generalise the results of a study to other settings + other individuals

34
Q

Threats to internal validity

A

Maturation (age/growth)

Selection bias

Expecting certain results

Measurement + equipment - can be overcome by frequent calibration

Mortality (withdrawal / drop-out)

35
Q

Threats to internal validity

How can expecting certain results be avoided?

A

Blinding / double blind study

36
Q

Threats to external validity

A

Reactive or interactive effects of testing

Interaction of selection bias + treatment

Reactive effects of experimental arrangements

Multiple-treatment interference

37
Q

THREATS TO EXTERNAL VALIDITY

How do reactive or interactive effects of testing have an influence?

A

Pre-test makes a participant more aware or sensitive to the treatment

38
Q

THREATS TO EXTERNAL VALIDITY

How does interaction of selection bias + treatment have an influence?

A

Treatment is only effective in the group selected

39
Q

THREATS TO EXTERNAL VALIDITY

How do reactive effects of experimental arrangements have an influence?

A

Treatments effective in the lab may not transfer to the real world

40
Q

THREATS TO EXTERNAL VALIDITY

How does multiple-treatment interference have an influence?

A

Effects of a previous treatment may influence subsequent ones

41
Q

What is the definition of relative reliability?

A

The degree to which data maintain their position in a sample with repeated measurements

42
Q

What is the definition of absolute reliability?

A

The degree to which repeated measurements vary for individuals

43
Q
Which of the following describes test-retest reliability?

a. Consistency across items
b. Consistency across raters
c. Consistency across time points
d. None of the above

A

Consistency across time points

44
Q

When scores on an instrument are related to scores on a previously validated measure, which type of validity has been established?

A

Criterion-related validity