Lecture 16 - Questionnaire Design Flashcards

Flashcards in Lecture 16 - Questionnaire Design Deck (19)

1
Q

What is one advantage of open questions?

A

Richer, qualitative data capturing people's experiences

2
Q

What is one disadvantage of open questions?

A

The responses are time-consuming and harder to analyse

3
Q

What is a dichotomous scale?

A

A question with only two possible answers, usually ‘yes’ and ‘no’ or ‘true’ and ‘false’

4
Q

What is the classical theory of error?

A

by P. Kline

Any observed score comprises the TRUE score plus error (observed score = true score + error)

5
Q

What is a split-half reliability test?

A

Items are split into two halves, usually randomly or arbitrarily

Then the scores of each half are correlated

A correlation of 0.8 or higher indicates adequate reliability
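
As a rough illustration (not from the lecture), a minimal Python sketch of the procedure using made-up Likert responses and SciPy's pearsonr, with the 0.8 cut-off mentioned above:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
scores = rng.integers(1, 6, size=(30, 10))   # made-up data: 30 respondents, 10 items

items = rng.permutation(scores.shape[1])     # split the items at random
half_a = scores[:, items[:5]].sum(axis=1)    # total score on one half
half_b = scores[:, items[5:]].sum(axis=1)    # total score on the other half

r, _ = pearsonr(half_a, half_b)              # 0.8 or higher taken as adequate reliability
print(round(r, 2))
```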

6
Q

What is a parallel forms reliability test?

A

A large pool of items is randomly divided into two tests, which are then given to the same participants

The correlation between the two forms is then calculated

Difficult, as a large number of items needs to be generated

7
Q

What is Cronbach’s Alpha?

A

It is a value that is mathematically equivalent to the average of all possible split-half estimates

It goes up to +1; values of +0.7 or above indicate acceptable internal reliability
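
As an illustration only, a minimal sketch of computing alpha from a (respondents x items) score matrix, assuming the standard formula alpha = k/(k-1) x (1 - sum of item variances / variance of total scores):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (respondents x items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)
```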

8
Q

What is the Kuder-Richardson Formula (KR-20)?

A

It measures internal reliability for measures with dichotomous choices

Goes up to +1; scores of +0.7 or greater indicate acceptable internal reliability
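
Along the same lines, a hedged sketch of KR-20 for a matrix of dichotomous (0/1) answers, assuming the usual formula KR-20 = k/(k-1) x (1 - sum of p*q / variance of total scores):

```python
import numpy as np

def kr20(answers):
    """KR-20 for a (respondents x items) matrix of dichotomous (0/1) answers."""
    answers = np.asarray(answers, dtype=float)
    k = answers.shape[1]                          # number of items
    p = answers.mean(axis=0)                      # proportion answering 1 on each item
    q = 1 - p                                     # proportion answering 0
    total_var = answers.sum(axis=1).var(ddof=1)   # variance of the total scores
    return (k / (k - 1)) * (1 - (p * q).sum() / total_var)
```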

9
Q

How can we use test-retest to assess reliability?

A

Administer the test twice with an interval of time in between

Correlate the scores from the two administrations; 0.7 or above means we can assume test-retest reliability

Can be influenced by practice effects, boredom effects, etc.
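
For illustration only, correlating the two administrations is a one-line call to SciPy's pearsonr; the scores below are invented:

```python
from scipy.stats import pearsonr

time_1 = [12, 15, 9, 20, 14, 18, 11, 16]    # made-up scores at the first administration
time_2 = [13, 14, 10, 19, 15, 17, 12, 15]   # the same participants after an interval

r, _ = pearsonr(time_1, time_2)             # 0.7 or above taken as test-retest reliability
print(round(r, 2))
```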

10
Q

How can we test for inter-rater reliability?

A

Cohen’s Kappa - values up to +1. Used when there are two raters

Fleiss’ Kappa - an adaptation of the above for when there are more than 2 raters

They both measure agreement between the raters, not accuracy!!
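
For two raters, scikit-learn's cohen_kappa_score returns Cohen's Kappa directly (the ratings below are made up); for more than two raters, statsmodels offers a fleiss_kappa function, not shown here:

```python
from sklearn.metrics import cohen_kappa_score

# Made-up example: two raters categorising the same eight responses
rater_1 = ["yes", "no", "yes", "yes", "no", "yes", "no", "no"]
rater_2 = ["yes", "no", "yes", "no", "no", "yes", "no", "yes"]

kappa = cohen_kappa_score(rater_1, rater_2)   # agreement between raters, corrected for chance
print(round(kappa, 2))
```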

11
Q

What is intra-rater reliability?

A

When the SAME rater does the SAME assessment on two or more occasions

The assessments are then correlated

Not ideal, as the rater is aware of their previous assessments and may be influenced by them

12
Q

What are some things which can cause a lack of reliability during self-report?

A

SDB (social desirability bias)
guessing
ambiguous or leading questions
poor instructions
low response rate

13
Q

What is faith validity?

A

Just a belief in the validity of something without any objective data!

14
Q

What is face validity?

A

Whether a test looks like it measures the concept it intends to - usually experts will look at it and say whether or not they think it will measure X accurately

15
Q

What is content validity?

A

The extent to which a measure represents ALL facets of the phenomenon being measured: e.g. jealous attitudes, jealous feelings, jealous behaviours

16
Q

What is construct validity?

A

it establishes a clear relationship between the theoretical idea and the measure that has been created

Two types:
convergent
discriminant

17
Q

What is convergent validity?

A

A type of construct validity

The measure shows associations with measures you would expect it to - e.g. jealousy towards a spouse has been tested and is related to jealousy towards friends and colleagues.

18
Q

What is discriminant validity?

A

The opposite of convergent - the measure is not related to things it shouldn't be - e.g. jealousy is not strongly correlated with introversion, as they should be distinct measures

19
Q

What is predictive validity?

A

Can a measure accurately predict someone's future behaviour?