04 Survey research 1/2 Flashcards
(44 cards)
What is a survey?
Survey: a cross-sectional design in which data are collected, predominantly by questionnaire or structured interview, on more than one case and at a single point in time, in order to collect a body of quantitative data
What are the two forms of information in a survey?
Forms of information:
- Self-reports ("respondent") –> individuals answer for themselves
- Key informant reports ("informant") –> one or a few individuals answer for a larger population
What is the typical survey process?
1. Selection of research variables
2. Selection of survey method
3. Questionnaire design
4. Data collection
5. Measurement evaluation
6. Data analysis
What is the psychology of survey response?
- Respondents often develop their attitudes only while answering a survey
- Survey questions trigger a cognitive process of response generation:
- Comprehension (attend to the question, deduce its intended content)
- Retrieval (memory search; information from long-term memory enters short-term memory)
- Judgment (combine the retrieved information and form a judgment)
- Response selection (map the judgment onto the offered response categories)
- Response reporting (provide the answer)
What are the two types of measurement error in a survey?
random error:
- Expected value equals 0
- Has no correlation with the systematic error or the true value
- Threat to measurement reliability
–> (hits the target area, but the points are spread across the whole area)
systematic error:
- Expected value does not equal 0
- Often assumed to have no correlation with the true value
- Threat to measurement validity
(the points do not land on the target area at all)
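A compact sketch in standard classical test theory notation (the symbols X, T, e_s, e_r are my own labels, not from the card):

```latex
% Observed score = true score + systematic error + random error
X = T + e_s + e_r
% Random error: expected value 0, uncorrelated with the true score and the systematic error
\mathbb{E}[e_r] = 0, \qquad \operatorname{Cov}(e_r, T) = \operatorname{Cov}(e_r, e_s) = 0
% Systematic error: expected value not 0 (a bias)
\mathbb{E}[e_s] \neq 0
```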
When do systematic errors generally happen?
Systematic errors happen when additional factors systematically influence the measurement results,
–> e.g. social desirability, which can lead to inflated answers
What is the impact of random measurement error on observed correlations between variables (correlation coefficient)?
Random error:
- Low reliability leads to a high random-error variance
–> it only enters the denominator and therefore decreases the correlation coefficient
- Leads to underestimation (attenuation) of the correlation
- Has no impact on the sign of the correlation
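As a hedged sketch, the standard attenuation formula (not quoted on the card, but consistent with it) makes both points visible: the observed correlation is the true correlation shrunk by the reliabilities of the two measures, so its magnitude drops while its sign is preserved:

```latex
% Rel_X, Rel_Y in [0, 1]: reliabilities of the two measures
r_{X_{\mathrm{obs}},\,Y_{\mathrm{obs}}} = r_{T_X,\,T_Y} \cdot \sqrt{\mathrm{Rel}_X \cdot \mathrm{Rel}_Y}
```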
What is the impact of systematic measurement error on observed correlations between variables (correlation coefficient)?
Systematic error:
- Low validity leads to a high covariance between the systematic error components and a high systematic-error variance
–> it enters both the numerator and the denominator
The impact of systematic error is not predictable:
- Correlations between systematic error components can distort the strength as well as the sign of the true correlation
–> high covariance: inflates (increases) the correlation
–> low covariance: lowers the correlation and can eventually flip the sign of the correlation coefficient
- In the absence of such correlations, systematic measurement error has the same consequences as random error
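A minimal simulation sketch (variable names, sample size, and effect sizes are assumptions for illustration) of the "high covariance" case: a systematic component shared by both measures, e.g. social desirability, inflates the observed correlation:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

true_x = rng.normal(size=n)
true_y = 0.3 * true_x + rng.normal(size=n)             # modest true relationship (r ~ 0.29)

bias = rng.normal(size=n)                              # systematic error shared by both measures
obs_x = true_x + bias + rng.normal(scale=0.5, size=n)  # observed = true + systematic + random
obs_y = true_y + bias + rng.normal(scale=0.5, size=n)

print("true r:    ", np.corrcoef(true_x, true_y)[0, 1])
print("observed r:", np.corrcoef(obs_x, obs_y)[0, 1])   # clearly larger than the true r
```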
What are the sources of systematic errors in survey research?
(Common method bias, key informant bias, non-response bias)
Common method bias: distortion of the sample's covariance structure that arises because the same data source was used for measuring both the IV and the DV within a certain dependence structure
Key informant bias: distortion of the sample's covariance structure that arises because data collection took place through key informants (who provide information about a larger social unit)
Non-response bias: distortion of the sample's covariance structure that arises because the structure of the effective sample does not coincide with the structure of the original sample
What are the sources of systematic errors in survey research?
(Social desirability, response patterns)
Social desirability: distortion of the sample's covariance structure that arises from the tendency of respondents to reply in a manner that will be viewed favorably by others –> i.e. over-reporting good behavior, under-reporting bad behavior
Response patterns: distortion of the sample's covariance structure that arises because respondents, regardless of the question content, favor certain response categories:
a) Tendency to agree
b) Tendency to cross middle points
c) Tendency to give extreme responses
What are the four relevant forms of construct validity?
- Predictive validity: demonstrate that the question/scale behaves in relation to other variables as expected
- Content validity/face validity: check whether the items used actually capture the right concept
- Convergent validity: items that capture the same construct have strong relationships
- Discriminant validity: items that are supposed to measure different constructs have weak relationships
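A rough sketch (hypothetical data layout and function name) of how convergent and discriminant validity can be eyeballed from item correlations: items assigned to the same construct should correlate strongly, items of different constructs weakly:

```python
import numpy as np

def within_between_correlations(items, construct_labels):
    """items: (n_respondents, k_items); construct_labels: one construct name per item column."""
    r = np.corrcoef(np.asarray(items, dtype=float), rowvar=False)
    labels = np.asarray(construct_labels)
    same = labels[:, None] == labels[None, :]
    off_diag = ~np.eye(len(labels), dtype=bool)
    within = r[same & off_diag].mean()   # convergent validity: expect this to be high
    between = r[~same].mean()            # discriminant validity: expect this to be low
    return within, between
```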
What are the two reliability checks?
- reliability of the complete scale (construct reliability)
- reliability of single items (indicator reliability)
What are the first-generation methods?
- Cronbach's alpha and explained variance in exploratory factor analysis at the construct level
- Item-to-total correlation and factor loading in exploratory factor analysis at the item level
What are the second-generation methods?
- Factor reliability and average variance explained at the construct level
- Indicator reliability at the indicator level (not relevant)
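As a hedged sketch, the usual formulas behind the two construct-level criteria (Fornell/Larcker style; lambda_i are standardized factor loadings and theta_i the indicator error variances; this notation is assumed, not from the card):

```latex
% Factor (composite) reliability of a construct with indicators i = 1..k
\mathrm{FR} = \frac{\left(\sum_i \lambda_i\right)^2}{\left(\sum_i \lambda_i\right)^2 + \sum_i \theta_i}
% Average variance explained (extracted)
\mathrm{AVE} = \frac{\sum_i \lambda_i^2}{\sum_i \lambda_i^2 + \sum_i \theta_i}
% Commonly used thresholds: FR > 0.6, AVE > 0.5
```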
What is Cronbach's alpha?
Issue: it is not possible to measure the error variance directly –> Cronbach's alpha
Idea: the variance shared by the items reflects the variance of the phenomenon
Cronbach's alpha: how well survey items hang together and consistently measure a common underlying construct
–> threshold: > 0.7 –> items are considered good
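A minimal sketch of computing Cronbach's alpha from raw responses (the function name and the n_respondents x k_items layout are assumptions):

```python
import numpy as np

def cronbach_alpha(items):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the sum score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each single item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)
```

A result above the 0.7 threshold from the card would then indicate acceptable internal consistency.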
What is the interpretation of Cronbach's alpha?
- Alpha corresponds to the correlation of the scale with an alternative scale with the same number of items
- Alpha corresponds to the correlation of the scale with the true score
What is reliability?
Reliability: refers to the degree to which the observed variable measures the true value and is error-free
–>Basic idea: the larger the variance of the true variable compared to the overall variance, the higher the reliability
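Written out (standard classical test theory notation, assumed rather than quoted from the card):

```latex
% Reliability = share of true-score variance in the observed variance
\mathrm{Rel}(X) = \frac{\operatorname{Var}(T)}{\operatorname{Var}(X)} = \frac{\operatorname{Var}(T)}{\operatorname{Var}(T) + \operatorname{Var}(E)}
```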
What is the issue with Cronbach's alpha?
Issue: Cronbach's alpha does not provide any diagnostic information at the item level
–> Solution: item-to-total correlation:
- Assesses the relationship between individual items and the total score of the scale
- Provides insight into how well each item correlates with the overall score of the scale
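A small sketch (function name and data layout assumed) of the corrected item-to-total correlation, i.e. each item correlated with the sum of the remaining items:

```python
import numpy as np

def corrected_item_total(items):
    """Return one correlation per item: item j vs. the total score of all other items."""
    items = np.asarray(items, dtype=float)
    out = []
    for j in range(items.shape[1]):
        rest_total = np.delete(items, j, axis=1).sum(axis=1)   # scale score without item j
        out.append(np.corrcoef(items[:, j], rest_total)[0, 1])
    return np.array(out)
```

Items with conspicuously low values (like "item 5" on the next card) are candidates for elimination.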
Briefly explain two options to increase measurement reliability in this situation (Cronbach's alpha, item-to-total correlation).
- Add further items to the scale (can potentially increase reliability, depending on the items added)
- Eliminate item 5: the correlations between item 5 and the other items are dangerously low
Explain how a situation can arise in which a high alpha is observed even though the items do not validly measure the underlying construct.
When the scale consistently measures something other than the intended construct: reliability does not automatically lead to validity; it is a necessary but not sufficient condition
What is the process of questionnaire design?
- Decision about the survey content
- Decision about the question content
- Decision about the question format
- Decision about the question wording
- Decision about the question sequence
- Decision about the survey layout
- Pretest of the questionnaire
Question content: what are guidelines for scale development?
Guidelines for scale development:
– Ideally: four to six items
– Use the same expressions for the same issues
– Point out the necessity of repeating similar questions
– The entire width of the phenomenon should be covered by the items (try to achieve representativeness)
What are the two formats possible in the decision about the question format?
open-ended vs. closed questions
Open-ended: respondents can respond in any way they want
Closed questions: respondents choose an answer from a given list
What are the advantages of open-ended questions?
- participants can answer in their own way
- allows for unusual answers
- little prior knowledge on the researcher's part is necessary