Research Methods⚗️ Flashcards
(28 cards)
Goals of questionnaire design
Obtain facts about a person
Obtain info about their attitudes and beliefs
Find out about behaviour
Guidelines for constructing questionnaire items
Items should be exact and simple; avoid biased and emotive words; keep items short
Respondents should be able to read items quickly, understand their intent and give an answer easily
Rules of thumb for constructing questionnaires
Clarity- unambiguous
No unwarranted assumptions: only follow up with respondents who answered yes to a filter question
Use simple language
Avoid double-barrelled items: respondents could agree with one part but not the other
Respondents must be competent to answer- avoid expert language
Avoid using ‘not’: it may be misread or overlooked
Avoid double negatives
Avoid biased language- social desirability bias
Avoid very mild or extreme statements that everyone will agree or disagree with: this reduces variance
Response scales
Obtain variability in responses so a measure can covary with other measures
Can be achieved by having many items or many response options
Likert scales
5-7 response options; the labelled extreme ends are called response anchors
Associate numbers with responses so statistical analysis can be applied to the data
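Numeric coding of Likert responses can be sketched as below (the anchor labels and data are hypothetical, just to illustrate the mapping):

```python
# Hypothetical mapping of 5-point Likert anchors to numbers for analysis
coding = {
    "Strongly disagree": 1,
    "Disagree": 2,
    "Neither agree nor disagree": 3,
    "Agree": 4,
    "Strongly agree": 5,
}

# Illustrative responses from one item across four respondents
responses = ["Agree", "Strongly agree", "Disagree", "Agree"]
scores = [coding[r] for r in responses]
mean_score = sum(scores) / len(scores)
print(scores, mean_score)  # numeric scores and their mean
```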
Evaluate likert scale
Universal method, easily understood and quantified for analysis
Doesn’t force respondents to take a stance on a topic
Allows degree of agreement and neutral viewpoint
Limited number of response options
Spacing between choices is not necessarily equidistant (ordinal, not interval)
Not objective measure (depends how interpreted)
Answers may be influenced by previous items; respondents may concentrate on one side of the scale
Tend to avoid extreme options on scale
0-100 scale (and its drawbacks)
On a scale of 0-100 how confident are you?
May respond in multiples of 5 or 10, reducing the effective number of response options
False precision, how meaningful is the difference between 34 and 36?
Visual analogue scale
Evaluate
Level of agreement between two end points
Boring———-interesting
More continuous and sensitive to changes in the variable; respondents cannot remember their previous response
False precision makes it problematic to interpret
Semantic differential scale
Select point between adjectives
Harmful -2 -1 0 1 2 Beneficial
Easy to understand, reliable, versatile and accurate
Position response bias; a neutral (midpoint) response is difficult to interpret
Thurstone-type scale
A panel of expert judges rates each attitude item on a number scale with extremes at each end (1-11); the mean rating for each item becomes that item’s scale value
For each item a respondent agrees with, they receive a score equal to that item’s scale value (the strength shown in brackets)
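The Thurstone scoring rule above can be sketched as follows; the item names and scale values are hypothetical, standing in for a panel’s mean ratings:

```python
# Hypothetical Thurstone-type items: each scale value is the mean
# rating the judging panel assigned on the 1-11 scale
scale_values = {
    "item_a": 2.4,
    "item_b": 5.1,
    "item_c": 8.7,
    "item_d": 10.2,
}

# The respondent only ticks agree/disagree; their attitude score is
# the mean scale value of the items they agreed with
agreed = ["item_b", "item_c"]
score = sum(scale_values[i] for i in agreed) / len(agreed)
print(f"attitude score = {score:.1f}")  # → 6.9
```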
Thurston type scale
Evaluate
Easy for respondents to complete: they only indicate agreement/disagreement rather than strength of agreement
Easy to develop alternate forms
Judges cannot be completely neutral
Difficult to choose most discriminating items
General questionnaire format (spaces)
Maximise white space, don’t squeeze questions to the page
Balance this against having too many pages
General instructions on info sheet
Researcher’s name and contact details
Purpose of questionnaire
Why the respondent was selected
How long it takes
How to rate items
How to return it to the researcher
Question order
Randomise item order? May be confusing
May place most important questions at the start
Duller less important questions at the end
Online survey tools evaluate
Collect responses automatically, efficient, good for large samples and can export data in SPSS format
Need email details; low response or completion rates give a reduced, biased sample that limits generalisability
Latent variable
Psychologists have to study what is not observable
A latent variable is concealed and not directly observable
Inferred from other variables that are observed (directly measured)
Self report measures and capturing the construct
Measures are fallible and imperfect attempts to capture the construct
Two types of measurement error: random error
Random error: caused by factors that randomly affect measurement, e.g. low mood. No consistent effect on the sample: there are as many negative as positive errors, so they sum to zero. Known as ‘noise’; it does not affect the group average, only the variability around the average
Two types of measurement error: systematic error
Any factor that systematically affects measurement of a variable across the sample, e.g. noise disruption
Consistently positive or negative, so the measure is considered biased
Does affect the average (bias)
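The contrast between the two error types can be simulated; this is a minimal sketch with invented true scores and error sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
true_scores = np.full(1000, 50.0)  # everyone's true score is 50

# Random error: mean-zero noise. The average barely moves,
# but variability around the average grows.
random_err = true_scores + rng.normal(0, 5, size=1000)

# Systematic error: a constant bias (e.g. +5 from a disruption).
# The average itself shifts.
systematic_err = true_scores + 5

print(random_err.mean(), random_err.std())   # mean ≈ 50, std ≈ 5
print(systematic_err.mean())                  # mean = 55
```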
How to reduce measurement error
Pilot test the measure on a sample of the target population
Feedback from respondents: how easy or hard the measure was and how testing affected performance
Train data collectors to not introduce bias
Make measurement tool as accurate as possible
Reliability
Consistency of test scores: how free it is from random error
Ability to produce repeatable and consistent results across time, situation, researchers
It is constrained slightly by the psychological variable being measured
Some variables remain stable over time (personality) while others change rapidly (mood)
Reliability types
Test-retest: the same person is tested on two different occasions; calculate the correlation between the scores
Split-half reliability: scores on the first half of the scale are correlated with scores on the second half, or odd items are correlated with even items
Internal consistency: the degree to which items on a scale measure the same underlying attribute. Respondents should answer consistently across items; if not, the scale may be poorly worded
= CRONBACH’S ALPHA
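Test-retest reliability is just the Pearson correlation between the two testing occasions; a minimal sketch with hypothetical scores for eight respondents:

```python
import numpy as np

# Hypothetical scores for 8 respondents tested on two occasions
time1 = np.array([12, 18, 15, 22, 9, 17, 14, 20], dtype=float)
time2 = np.array([13, 17, 16, 21, 10, 18, 13, 19], dtype=float)

# Test-retest reliability: Pearson correlation between occasions
r = np.corrcoef(time1, time2)[0, 1]
print(f"test-retest r = {r:.2f}")  # close to 1 = highly repeatable
```

The same correlation approach applies to split-half reliability, with the two halves of one scale standing in for the two occasions.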
Cronbach’s alpha
Indicates average correlation among scale items
Better indicator than test-retest and split-half
Range from 0-1
0.70 + is desired
A negative alpha indicates a problem: some items may need reverse-scoring
However, alpha is affected by the number of scale items
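Cronbach’s alpha can be computed directly from an item-score matrix using its standard formula (k/(k-1))·(1 − Σ item variances / total variance); the response data below are hypothetical:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents x items score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses: 6 respondents x 4 items
scores = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [1, 2, 1, 2],
    [4, 4, 4, 5],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")  # above the 0.70 threshold
```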
Validity
Accuracy, extent measure assesses what it claims to
Psychological measures rely on theory (not constructed randomly)