Research Methods⚗️ Flashcards

(28 cards)

1
Q

Goals of questionnaire design

A

Obtain facts about a person
Obtain information about their attitudes and beliefs
Find out about their behaviour

2
Q

Guidelines for constructing questionnaire items

A

Items should be exact and simple, avoid biased and emotive words, and be short
Respondents should be able to read items quickly, understand their intent, and give an answer easily

3
Q

Rules of thumb for constructing questionnaires

A

Clarity- unambiguous
No unwarranted assumptions- only follow up with those who answered yes to something
Use simple language
Avoid double-barrelled items- respondents could agree with one part but not the other
Respondents must be competent to answer- avoid expert language
Avoid using ‘not’- may be misread or overlooked
Avoid double negatives
Avoid biased language- social desirability bias
Avoid very mild or extreme statements everyone will agree or disagree with- reduces variance

4
Q

Response scales

A

Aim for variability in responses so a measure can covary with other measures
Can be achieved by having many items or many response options

5
Q

Likert scales

A

5-7 response points, with the labelled extreme ends called Likert response anchors
Numbers are associated with responses so statistical analysis can be applied to the data
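A minimal sketch of that coding step in Python. The label-to-number mapping and the example responses are illustrative assumptions, not from the cards; the reverse-coding rule (new = max + min − old) is the standard one for negatively worded items.

```python
# Hypothetical 5-point mapping from response anchors to numbers
anchors = {
    "Strongly disagree": 1,
    "Disagree": 2,
    "Neutral": 3,
    "Agree": 4,
    "Strongly agree": 5,
}

responses = ["Agree", "Strongly agree", "Neutral"]   # made-up answers
scores = [anchors[r] for r in responses]             # [4, 5, 3]

# Negatively worded items are reverse-coded: new = (max + min) - old = 6 - old
reversed_scores = [6 - s for s in scores]            # [2, 1, 3]
```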

6
Q

Evaluate Likert scale

A

Universal method, easily understood and quantified for analysis
Doesn’t force respondents to take a stance on a topic
Allows degrees of agreement and a neutral viewpoint

Limited choice of response options
Intervals between choices are not equidistant
Not an objective measure (depends on how it is interpreted)
Answers may be influenced by previous items, and may concentrate on one side of the scale
Respondents tend to avoid extreme options on the scale

7
Q

0-100 scale and its negatives

A

On a scale of 0-100 how confident are you?

Respondents may answer in multiples of 5 or 10, reducing the effective response options
False precision: how meaningful is the difference between 34 and 36?

8
Q

Visual analogue scale

Evaluate

A

Mark level of agreement at any point on a line between two end points
Boring ————— Interesting

More continuous and sensitive to changes in the variable; respondents cannot remember their previous response
False precision makes it problematic to interpret

9
Q

Semantic differential scale

A

Select point between adjectives
Harmful -2 -1 0 1 2 Beneficial

Easy to understand, reliable, versatile and accurate
Position response bias; a neutral (0) response is difficult to interpret

10
Q

Thurstone-type scale

A

A panel of expert judges rates each item on a number scale with extremes at each end (1-11); the mean rating for each item becomes that item’s scale value

If a respondent agrees with an item, they are given a score equal to the strength of that item
Scale values in brackets (strength) are applied if the respondent agrees
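The Thurstone scoring rule above can be sketched in a few lines. The item names and scale values are made-up examples; the respondent's score is taken as the mean scale value of the items they endorse, which is the usual convention.

```python
# Hypothetical judge-derived scale values (1-11) for three attitude items
scale_values = {
    "item_a": 2.4,   # mild statement
    "item_b": 6.1,
    "item_c": 9.8,   # extreme statement
}

agreed = ["item_a", "item_c"]   # items this respondent agreed with

# Score = mean scale value of endorsed items: (2.4 + 9.8) / 2 = 6.1
score = sum(scale_values[i] for i in agreed) / len(agreed)
```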

11
Q

Thurstone-type scale

Evaluate

A

Easy for respondents to complete: they only indicate agreement/disagreement rather than strength of agreement
Easy to develop alternate forms

Judges cannot be completely neutral
Difficult to choose most discriminating items

12
Q

General questionnaire format (spaces)

A

Maximise white space; don’t squeeze questions onto the page

Balance this against having too many pages, though

13
Q

General instructions on info sheet

A
Researcher’s name and contact details
Purpose of the questionnaire
Why the respondent was selected
How long it takes
How to rate the items
How to return it to the researcher
14
Q

Question order

A

Items may be randomised, though this can be confusing
Place the most important questions at the start
Duller, less important questions go at the end

15
Q

Evaluate online survey tools

A

Collect responses automatically, efficient, good for large samples, and data can be exported in SPSS format

Need email/contact details; low response and completion rates give a reduced, biased sample, which limits generalisability

16
Q

Latent variable

A

Psychologists have to study what is not observable

Latent variable is concealed and not explicitly presented
Inferred from other variables that are observed (directly measured)

17
Q

Self report measures and capturing the construct

A

Self-report measures are fallible and imperfect attempts to capture the construct

18
Q

Two types of measurement error-random error

A

Random error- caused by factors that randomly affect measurement, e.g. low mood
No consistent effect on the sample: there are as many negative as positive errors, so they sum to 0
Does not affect group performance (known as ‘noise’): it doesn’t affect the average, only the variability around the average

19
Q

Two types of measurement error- systematic error

A

Any factor that systematically affects measurement of a variable across the sample, e.g. noise disruption
Consistently positive or negative, so considered biased
Does affect the average (bias)
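The contrast between the two error types can be shown with a small simulation. The true score of 50, the noise SD of 5, and the +3 bias are made-up illustration values: random noise leaves the group average roughly unchanged, while a systematic error shifts it.

```python
import random

random.seed(0)
true_score = 50.0
n = 10_000

# Random error: zero-mean noise added to each measurement
random_err = [true_score + random.gauss(0, 5) for _ in range(n)]

# Systematic error: a constant +3 bias on every measurement
systematic = [true_score + 3 for _ in range(n)]

mean_random = sum(random_err) / n   # close to 50: noise averages out
mean_system = sum(systematic) / n   # exactly 53: the average is biased
```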

20
Q

How to reduce measurement error

A

Pilot test on the target population
Get feedback from respondents: how easy or hard the measure was and how testing affected performance
Train data collectors not to introduce bias
Make the measurement tool as accurate as possible

21
Q

Reliability

A

Consistency of test scores: how free the test is from random error
The ability to produce repeatable and consistent results across time, situations and researchers
Reliability is constrained slightly by the psychological variable itself: some variables remain stable over time (personality) while others change rapidly (mood)

22
Q

Reliability types

A

Test-retest- the same person is tested on two different occasions and the correlation between the scores is calculated

Split-half reliability- scores on the first half of the scale are correlated with scores on the second half, or odd items are correlated with even items

Internal consistency- the degree to which items on a scale measure the same underlying attribute. Respondents should answer similarly across questions; if not, the scale may be poorly worded
= CRONBACH’S ALPHA

23
Q

Cronbach’s alpha

A

Indicates the average correlation among scale items
A better indicator than test-retest and split-half

Ranges from 0-1
0.70+ is desired
A negative alpha signals a problem: some items may need reverse coding
It is affected by the number of scale items, however
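A sketch of the coefficient using the usual formula alpha = k/(k-1) × (1 − Σ item variances / total variance), on a made-up respondent-by-item matrix:

```python
import statistics

# Rows = respondents, columns = four items (illustrative scores)
data = [
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [1, 2, 1, 2],
]
k = len(data[0])  # number of items

# Population variance of each item (column) and of the total score
item_vars = [statistics.pvariance(col) for col in zip(*data)]
total_var = statistics.pvariance([sum(row) for row in data])

alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
```

For this consistent toy data, alpha comes out well above the 0.70 threshold the card mentions.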

24
Q

Validity

A

Accuracy: the extent to which a measure assesses what it claims to

Psychological measures rely on theory (not constructed randomly)

25
Q

Validity types

A

Face/content validity- the simplest form: do items look like they measure what they intend to? Expert panels may assess this
Construct validity- test scores against theoretical hypotheses concerning the underlying construct
Criterion validity- the relationship between your scale scores and another measurable criterion
Different types: concurrent, convergent, discriminant, predictive

26
Q

Types of criterion validity

A

Concurrent- scores should correlate with valid measures of the same or a highly related construct, e.g. Beck’s Depression Inventory
Convergent- compare to an observation of the construct, e.g. a clinical psychologist’s assessment
Discriminant- two measures not supposed to be related are unrelated
Predictive validity- how well the measure can predict a future outcome

27
Q

Guidelines for scale development

A

Define the latent variable of interest
Generate an item pool (based on theory)
Review items for content (and send to experts)
Administer items to a pilot sample
Evaluate items (reverse code negatively worded items)
Compute coefficient alpha
Validate the scale

28
Q

Skewness and Kurtosis

A

Skewness z = skewness statistic / standard error (skewness)
Kurtosis z = kurtosis statistic / standard error (kurtosis)
If either z is greater than 3.08, it is significant, so reject normality
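The z-score rule can be sketched directly. The statistic and standard-error values below are made-up stand-ins for what SPSS-style descriptive output would report; the 3.08 cut-off is the one stated on the card.

```python
# Hypothetical skewness/kurtosis statistics and their standard errors
skew_stat, skew_se = 0.85, 0.25
kurt_stat, kurt_se = -1.10, 0.49

z_skew = skew_stat / skew_se   # 0.85 / 0.25 = 3.4
z_kurt = kurt_stat / kurt_se   # about -2.24

# Flag a significant departure from normality when |z| exceeds 3.08
significant_skew = abs(z_skew) > 3.08   # True
significant_kurt = abs(z_kurt) > 3.08   # False
```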