Week 3 (Advanced Issues in Survey Research Methods) Flashcards

(38 cards)

1
Q

Types of Survey Design

A

-Questionnaire
-Observations
-Interviews

2
Q

What does a survey measure/look at?

A

-Associations/correlations
-But NOT causal relationships, as surveys don't involve manipulation
-Aims to describe individuals and offer predictions about them based on socio-demographic information

3
Q

Stages of survey design

A

-Theory
-Hypothesis
-Operationalisation of concepts
-Survey studies/ Experimental design
-Selection of participants
-Data collection
-Data analysis
-Findings

4
Q

Open Questions (Survey)

A

-Participants are free to answer how they want
-Potentially more representative of a person's true opinions
-More true to real life (higher ecological validity)
-Often highlights aspects of research topic the researcher may not have thought of
-Difficult to analyse

5
Q

Closed Questions (Survey)

A

-Participants have to select from pre-specified responses (e.g. a 1-7 rating)
-Useful for statistical analysis/ diagrams
-Much easier to score and analyse
-More susceptible to designer bias

6
Q

Checklist for standardised measures

A

-Item generation: How were the items generated? Who was consulted?
-What evidence is there that the measure is reliable?
-Is the measure valid?

7
Q

Types of reliability

A

-Test-retest reliability
-Inter-rater reliability
-Inter-method reliability
-Internal consistency

8
Q

Types of validity

A

-Predictive validity
-Concurrent validity
-Convergent validity
-Discriminant validity

9
Q

Questions to ask when planning a scale (Kyriazos & Stalikas, 2018)

A

-How many items are needed?
-What response scale is most appropriate?
-How will the scale be scored?
-Which psychometric model is most appropriate?
-What item evaluation process is suitable?
-How to administer the test?

10
Q

Likert scale

A

-Used to express extent of agreement, frequency, etc.
-Offers different strengths of opinion
-Often used in standardised measures to produce scores (see the scoring sketch below)
-Direct form of questioning
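
To show how Likert items are typically turned into a total score on standardised measures, here is a minimal Python sketch; the item names, the 1-5 response scale, and the reverse-scored item are hypothetical examples, not part of the course material.

```python
# Minimal sketch of scoring a Likert-type scale (hypothetical 1-5 items).
# Reverse-scored (negatively worded) items are flipped before summing so that
# higher totals consistently indicate stronger endorsement of the construct.

def score_likert(responses, reverse_items=(), scale_min=1, scale_max=5):
    """Return the total score from a dict of item name -> rating."""
    total = 0
    for item, rating in responses.items():
        if item in reverse_items:
            rating = scale_min + scale_max - rating  # flip, e.g. 1 <-> 5, 2 <-> 4
        total += rating
    return total

# Example with made-up items; "q3" is negatively worded, so it is reversed.
answers = {"q1": 4, "q2": 5, "q3": 2, "q4": 3}
print(score_likert(answers, reverse_items={"q3"}))  # 4 + 5 + 4 + 3 = 16
```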

11
Q

Semantic Differential

A

-First developed by Osgood in 1952
-More indirect
-Takes advantage of people's ability to think metaphorically
-Asks people to rate something on a number of bipolar dimensions (pairs of opposite adjectives)

12
Q

Categorical Answers

A

-Short questions offer a range of options or categories
-Often used for demographic information
-Not typically used on standardised measures

13
Q

Common errors when writing questions

A

-Ambiguous questions
-Technical terminology
-Leading questions
-Hypothetical questions
-Patronising tone
-Value judgements
-Context effects
-Multiple content/ double-barrelled questions
-Hidden assumptions

14
Q

Ambiguous questions

A

The format or focus of the required answer is unclear

15
Q

Technical terminology

A

Words may be unfamiliar

16
Q

Leading questions

A

Leads participants to a particular response

17
Q

Hypothetical questions

A

Response dependent on hypothetical condition

18
Q

Value judgements

A

Personal bias affecting wording

19
Q

Context effects

A

Issues relating to other questions on the form

20
Q

Multiple content/ double-barrelled

A

Asks about more than one thing

21
Q

Item Response Theory

A

-Provides an estimate of the discrimination parameter: how well an item functions as a measure of a latent construct (similar to a factor loading)
-Allows assessment of difficulty/severity thresholds along the latent construct continuum (see the sketch below)
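
A minimal Python sketch of the two-parameter logistic (2PL) item response function, just to make the discrimination (a) and difficulty/severity (b) parameters concrete; the parameter and trait values are invented for illustration.

```python
import math

# Two-parameter logistic (2PL) IRT model: probability of endorsing an item
# given the person's latent trait level (theta), the item's discrimination (a)
# and its difficulty/severity threshold (b).

def p_endorse(theta, a, b):
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# A highly discriminating item (a=2.0) vs. a weakly discriminating one (a=0.5),
# both with the same difficulty threshold (b=0.5).
for theta in (-2.0, 0.0, 2.0):
    print(theta,
          round(p_endorse(theta, a=2.0, b=0.5), 2),
          round(p_endorse(theta, a=0.5, b=0.5), 2))
```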

22
Q

Measurement invariance

A

-Captures the degree to which your measure is testing the same thing across conditions
-Increasingly popular
-Useful for testing differences across time or across conditions

23
Q

Test-retest reliability

A

If I measure your height today, will it be the same as it was yesterday?

24
Q

Inter-rater reliability

A

If I measure your height, will it be the same as when someone else measures it?

25
Q

Inter-method (or parallel forms) reliability

A

Will your height be the same if I measure it with a tape measure as when I measure it with a ruler?

26
Q

Internal consistency

A

Did you respond to similar questions in a similar way?

27
Q

Split-half technique (Internal consistency)

A

-Tests consistency across two halves of the scale
-E.g. if a questionnaire has 20 items, you test whether responses to the first 10 items correlate with responses to the last 10 (see the sketch below)
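
As a concrete illustration of the split-half procedure, here is a minimal Python sketch with invented response data; the six-item questionnaire, the halves, and the scores are all hypothetical. The Spearman-Brown step corrects the half-test correlation up to the length of the full test.

```python
# Split-half internal consistency sketch (invented data, 6-item questionnaire).
# Each row is one participant's item responses; halves = first 3 vs. last 3 items.
from statistics import correlation  # Pearson r (Python 3.10+)

responses = [
    [4, 5, 4, 5, 4, 5],
    [2, 1, 2, 2, 1, 2],
    [3, 3, 4, 3, 4, 3],
    [5, 4, 5, 5, 5, 4],
    [1, 2, 1, 2, 2, 1],
]

half1 = [sum(row[:3]) for row in responses]  # totals on the first half
half2 = [sum(row[3:]) for row in responses]  # totals on the second half

r = correlation(half1, half2)
split_half = 2 * r / (1 + r)  # Spearman-Brown correction to full-test length
print(round(r, 2), round(split_half, 2))
```
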
28
Q

Construct validity

A

Does the measure tap into the construct you are attempting to measure?

29
Q

Content validity

A

The extent to which the domain of interest is adequately represented by the scale items

30
Q

Predictive validity

A

Does the score predict future behaviour?

31
Q

Concurrent validity

A

Is the score related to another criterion of the same construct tested at the same time?

32
Q

Convergent validity

A

Is the score related to other measures of the same construct?
33
Q

Discriminant validity

A

Is the score different from measures that theoretically assess something else?
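
To make the convergent/discriminant distinction concrete, here is a small Python sketch (not from the course material): correlate a new scale with an established measure of the same construct (expect a relatively high correlation) and with a measure of an unrelated construct (expect a low one). The measure names and all scores are invented.

```python
# Sketch: convergent vs. discriminant validity via correlations (invented data).
from statistics import correlation  # Pearson r (Python 3.10+)

new_anxiety_scale = [10, 22, 15, 30, 18, 25]
established_anxiety = [12, 20, 16, 28, 17, 26]  # same construct -> expect high r
shoe_size = [38, 42, 40, 39, 43, 41]            # unrelated construct -> expect low r

print("convergent r:", round(correlation(new_anxiety_scale, established_anxiety), 2))
print("discriminant r:", round(correlation(new_anxiety_scale, shoe_size), 2))
```
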
34
Q

Floor-ceiling effect

A

The extent to which scores cluster near the low (floor) or high (ceiling) extreme of the scale
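
A small sketch of how one might screen for floor/ceiling effects: compute the proportion of respondents at the scale minimum and maximum. The scores, the 0-10 scale, and the 15% flag (a commonly cited rule of thumb) are illustrative assumptions, not part of the course material.

```python
# Sketch: screening for floor/ceiling effects (invented scores, 0-10 scale).
scores = [0, 0, 1, 0, 3, 5, 0, 2, 0, 4, 0, 1]
scale_min, scale_max = 0, 10

floor_pct = sum(s == scale_min for s in scores) / len(scores) * 100
ceiling_pct = sum(s == scale_max for s in scores) / len(scores) * 100

# A commonly cited rule of thumb flags a floor/ceiling effect when more than
# about 15% of respondents obtain the lowest or highest possible score.
print(f"floor: {floor_pct:.0f}%  ceiling: {ceiling_pct:.0f}%")
```
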
35
Q

Mail Survey Pros and Cons

A

Pros:
-Low costs
-No interviewer bias
-Suitable for sensitive topics
Cons:
-Low completion rates
-Prone to non-response bias
-Errors arising from participants misunderstanding questions

36
Q

Personal interviews Pros and Cons

A

Pros:
-More control and flexibility over how the survey is administered
-Can make use of computer technology
Cons:
-Interviewer bias (the interviewer may influence responses or record them incorrectly)
-High costs (time/financial)

37
Q

Telephone surveys Pros and Cons

A

Pros:
-Cost-effective and time-effective
-Can make use of computer technology
Cons:
-Sampling bias
-Interviewer bias
-Low response rate

38
Q

Internet survey Pros and Cons

A

Pros:
-Low costs
-Access to large, diverse, geographically remote, or underrepresented populations
-Quick data collection
-Can be used with specialist software to automate recruitment, data entry, and incipient data analysis
Cons:
-Sampling bias
-Non-response bias
-Lower response rates compared to mail, personal, and telephone surveys
-Lack of control over the research environment