Class 8: Research and methods Flashcards

(52 cards)

1
Q

Stantcheva: What are the benefits of using survey data?

A

Surveys allow researchers to directly elicit otherwise invisible factors like beliefs, attitudes, and perceptions, creating controlled variation instead of relying on observational data on behavior

2
Q

Stantcheva: What is coverage error?

A

The difference between the potential pool of respondents and the target population -> for example, with an online survey you cannot reach people who are not online, even though they may be in your target population

3
Q

Stantcheva: What is sampling error?

A

The difference between the potential pool of respondents and your planned sample (the fact you are only drawing a sample)

4
Q

Stantcheva: What is non-response error?

A

The difference between the target sample and actual sample, due to respondents ignoring the survey invitation or refusing to participate

5
Q

Stantcheva: What is attrition?

A

Respondents dropping out of the study before completing it -> if not random (for example connected to specific respondent characteristics), introduces bias

6
Q

Stantcheva: What is selection bias?

A

The difference between those who start the survey and those in the target population

7
Q

Stantcheva: What are 4 benefits of online surveys in terms of selection?

A
  1. Flexibility for respondents
  2. Convenience of technology for filling it out
  3. Reaching hard-to-reach populations such as young people, people in remote areas, etc.
  4. Variety of potential rewards for taking the survey
8
Q

Stantcheva: What are 3 good practices before respondents start the survey?

A
  1. Provide information on how data is stored
  2. Assure anonymity and confidentiality
  3. Provide limited information about the purpose to not bias respondents
9
Q

Stantcheva: 3 ways to prevent/minimize attrition

A
  1. Smooth respondent experience
  2. Shorter survey
  3. Incentives
10
Q

Stantcheva: What are 3 response biases?

A
  1. Moderacy response bias: choosing a middle category for every question
  2. Extreme response bias: choosing extreme values for every question
  3. Response order bias: the order of the response options affects responses (e.g., systematically choosing the first option)
11
Q

Stantcheva: What are cognitive-based and normative-based order effects?

A

Cognitive: priming (content becomes salient in later questions), carryover (answering questions similarly)

Normative: wanting to appear fair, consistent, and moderate

12
Q

Stantcheva: What is social desirability bias?

A

The desire of respondents to avoid embarrassment and project a favorable image to others, resulting in not revealing their actual attitudes

13
Q

Stantcheva: 3 ways to reduce social desirability bias

A
  1. Online surveys with no surveyor
  2. Assurance of anonymity and confidentiality
  3. Reminding people of anonymity before sensitive questions
14
Q

Stantcheva: What is acquiescence bias?

A

The tendency to answer questions in a positive way, such as systematically selecting agree, true, or yes

15
Q

Stantcheva: 2 ways to minimize acquiescence bias

A
  1. Asking clear, unambiguous questions
  2. Avoid questions that only have options agree/disagree, true/false, yes/no
16
Q

Stantcheva: What are 3 challenges of survey experiments?

A
  1. Risk of confounding and pre-treatment contamination
  2. Different respondents interpreting treatment in different ways
  3. How well treatment mimics real-world treatment (external validity)
17
Q

Stantcheva: 3 types of survey experiments

A
  1. Information treatments
  2. Priming treatments
  3. Factorial experiments (vignette and conjoint)
18
Q

Stantcheva: Between-subject vs. within-subject designs

A

Between-subject: each respondent only receives one treatment

Within-subject: each respondent is subject to multiple treatments, in different orders

19
Q

Stantcheva: What is a conjoint experiment?

A

Descriptions of people or situations with systematically varied attributes, which respondents evaluate or choose between

20
Q

Stantcheva: 2 benefits and 1 disadvantage of factorial designs

A

They present realistic, hypothetical scenarios

They limit social desirability bias

Limited external validity, as people might not make similar choices in real life

21
Q

Stantcheva: How to measure persistence of treatment?

A

Follow-up surveys

22
Q

Wood et al.: How are left behind communities often defined, and how do Wood et al. define them?

A

Traditionally: Working class, socially excluded, lacking in education, and ethnically white

Wood et al.: Residents of places that are stigmatized by elite actors as harboring support for populist views due to those places having multiple forms of deprivation and a lack of education

23
Q

Wood et al.: What is the RQ?

A

What are the preferences of left behind communities for policy change processes related to populist politics?

24
Q

Wood et al.: What are the two methodological challenges of assessing policy preferences of left behind communities, and what does it lead researchers to do?

A
  1. Preferences are difficult to assess because of emotional attachment and affective polarization
  2. Respondents are socially stigmatized as poor and lacking education, which creates social desirability bias, distrust of researchers, and hesitancy to participate

This leads researchers to oversimplify left-behind views, often conflating them with populist narratives and ignoring the complexities of their views

25
Q

Wood et al.: What is stigmatization?

A

The process through which a particular group or individual develops stigma as they are subjected to shame, scorn, ridicule, or discrimination in their interactions with others

26
Q

Wood et al.: How do they overcome the methodological challenges?

A

By using photo elicitation, which allows participants to process the emotions they have related to a subject and avoids the social pressures associated with stigmatization

27
Q

Wood et al.: What is photo elicitation?

A

Participants consider a photograph of a contentious issue and clarify their preferences in dialogue with researchers following reflection, allowing them to speak freely about contentious issues

28
Q

Wood et al.: Research methods

A

418 interviews in 5 left-behind communities in the UK, recruited through a street-intercept method

Focus on the claim that people voted for Brexit to improve NHS investment, by showing a picture of the campaign bus ("We send the EU 350 million pounds a week. Let's fund our NHS instead"), asking what comes to mind, and nudging respondents towards legitimacy and accountability

Photo elicitation and dialogue with the researchers, who engaged positively and noted down the responses afterwards

29
Q

Wood et al.: What is the traditional understanding of why left-behind communities voted Leave?

A

They want to reduce immigration through Brexit to increase access to public goods such as the NHS

30
Q

Wood et al.: What 6 themes do they identify?

A
  1. "Bullshit" and skepticism about the bus statement, contrary to expectations of support
  2. Distrust in politicians and distancing from politicians and politics
  3. Need for NHS investments
  4. Lack of clarity and conflicting views regarding responsibility for improving the NHS
  5. Lack of link between the NHS and Brexit -> respondents either failed to recognize or outright rejected the link
  6. Anti-immigration sentiment among a minority of hard Leavers

31
Q

Wood et al.: What is positive marginalization, and how does it work in this case?

A

A cognitive process through which stigmatized groups can reinterpret stigma positively by rejecting the logic of elite positions and then articulating distinct policy preferences through their own personal experiences

In the "bullshit" and distrust-of-politicians themes, left-behind communities reject elite Leave positions and framings, and then articulate their own policy preference for NHS investment

32
Q

Wood et al.: What do their results say about the general narrative of why left-behind communities voted Leave?

A

There is no link identified between Brexit and wanting NHS investment; there is dissonance between support for investment and distrust that it will happen as a result of Brexit

33
Q

Elkjær & Wlezien: What is the traditional wisdom regarding a don't know (DK) option in surveys on preferences and opinions?

A

Offering a DK response option encourages satisficing = respondents use the option even when they actually have a preference

34
Q

Elkjær & Wlezien: What do they add to the conventional wisdom on DK options?

A

That while it is correct, it downplays the possibility that omitting the option encourages respondents without preferences to give one anyway (providing random responses that bias the results)

35
Q

Elkjær & Wlezien: Research methods

A

4,800 US respondents

IV: having the don't know option or not

DV: preferences on 8 policies

After each question, respondents who expressed a preference were asked how sure they felt about their answer

Afterwards, respondents were asked to revisit 3 of their responses, with the control group now receiving the DK treatment while the treatment group did not

36
Q

Elkjær & Wlezien: What are their 5 main findings?

A
  1. Respondents are more likely to provide non-responses when offered the DK option -> allowing them to skip a question is not effective
  2. The more information needed to answer the question, the more the DK option is used
  3. Respondents with lower levels of political knowledge are more likely to choose DK, whereas high-information respondents are virtually unaffected by the inclusion of a DK option
  4. Respondents offered the DK option are more confident in their responses
  5. Omitting the DK option can bias aggregate opinion toward 50-50 splits

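The 50-50 finding can be illustrated with a back-of-the-envelope sketch. This is a hypothetical illustration, not the paper's model or data: if the DK option is omitted and respondents without an opinion answer at random, their coin-flip responses dilute aggregate opinion toward an even split.

```python
# Hypothetical illustration (not from Elkjær & Wlezien): when the DK
# option is omitted, respondents without an opinion are assumed to
# answer at random, pulling aggregate opinion toward 50-50.

def observed_support(p_opinionated, support_among_opinionated):
    """Share answering 'support' when everyone is forced to answer."""
    random_half = (1 - p_opinionated) * 0.5  # no-opinion respondents split 50/50
    return p_opinionated * support_among_opinionated + random_half

# With a DK option, only opinion-holders answer: measured support = 0.80.
# Without it, a 30% no-opinion share pulls the estimate toward 0.50:
print(round(observed_support(0.7, 0.8), 2))  # 0.71
```

The larger the no-opinion share, the stronger the pull toward 0.50, which is why omitting DK matters most for low-salience issues.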
37
Q

Elkjær & Wlezien: When should you include/exclude a DK option?

A

For high-salience issues where most respondents already have an opinion, DKs mostly reflect satisficing -> omit

For low-salience issues, many respondents will not have an opinion or will be unsure -> include

38
Q

Egami & Hartman: What is external validity?

A

Whether a study's results can be applied to other people, situations, treatments, or outcomes

39
Q

Egami & Hartman: What are the 4 types of external validity?

A

X-, T-, Y-, and C-validity

40
Q

Egami & Hartman: What is X-validity?

A

Problem: Can the results be generalized to a different group of people/population? Is the sample representative of the target population?

Solution: Adjust for differences between the study sample (for example, students) and the target population (for example, the overall population) using covariates

41
Q

Egami & Hartman: What is T-validity?

A

Problem: Would the same effect occur if the treatment were implemented differently? Does it reflect how it works in the real world? Does it seem realistic?

Solution: Design the experiment so the treatment matches the real-world intervention as closely as possible

42
Q

Egami & Hartman: What is Y-validity?

A

Problem: Would the same effect be found if a different or more realistic outcome were measured? Do we measure the right thing?

Solution: Use outcomes that are as close as possible to the real-world behavior or policy goal

43
Q

Egami & Hartman: What is C-validity?

A

Problem: Would the effect be the same in a different setting, time, or institutional environment (where/when)? Is it contingent on a specific setting?

Solution: Adjust for or match on key "context moderators" that explain why results might differ across time, geography, or institutions

44
Q

Egami & Hartman: What is effect-generalization and what is it useful for?

A

Estimating how big the treatment effect would be in a new setting

Useful for X- and C-validity, because you can statistically adjust for differences in who is studied and where the study takes place

45
Q

Egami & Hartman: What is sign-generalization and what is it useful for?

A

Testing whether the direction of the effect (positive/negative) stays the same in other settings

Useful for T- and Y-validity, as it requires fewer assumptions. Also helpful for X- and C-validity when effect-generalization is too hard

46
Q

Egami & Hartman: What are the 3 types of effect-generalization estimators?

A
  1. Weighting-based: adjusts the sample to resemble the target population by applying weights
  2. Outcome-based: models the outcome based on characteristics and applies the model to the target group
  3. Doubly-robust: combines both weighting and outcome modeling

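A minimal sketch of the weighting-based idea, in its simplest post-stratification form (all numbers and strata are hypothetical, not from Egami & Hartman): estimate the effect within strata, then reweight from the sample's composition to the target population's composition.

```python
# Hypothetical post-stratification sketch of a weighting-based
# estimator: reweight stratum-level treatment effects from the study
# sample's composition to the target population's composition.

def generalized_effect(stratum_effects, population_shares):
    """Population-weighted average of stratum-level effects."""
    assert abs(sum(population_shares) - 1.0) < 1e-9  # shares must sum to 1
    return sum(e * p for e, p in zip(stratum_effects, population_shares))

# Suppose students (effect 4.0) make up 80% of the sample but only 20%
# of the population; non-students (effect 1.0) are the rest. The raw
# sample mean (0.8 * 4.0 + 0.2 * 1.0 = 3.4) overstates the population
# effect; reweighting to population shares gives:
print(round(generalized_effect([4.0, 1.0], [0.2, 0.8]), 2))  # 1.6
```

Real weighting estimators use estimated inclusion probabilities over many covariates rather than two hand-picked strata, but the reweighting logic is the same.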
47
Q

Cavaillé et al.: What is preference intensity?

A

How strongly someone prefers one policy option over another, predicting whether they will act in opinion-congruent ways

48
Q

Cavaillé et al.: Why are usual measures of opinion insufficient?

A

They capture what people think, but not how much they care. They also allow cheap talk and partisan signaling, leading to bunching at the extremes without differentiating true commitment

49
Q

Cavaillé et al.: What are the advantages and disadvantages of Likert and Likert+?

A

Likert rates support/opposition on a simple scale; Likert+ adds a follow-up question on how important the issue is to the respondent. Advantages: simplicity and lower drop-out rates

But they put respondents in a world where talk is cheap and there are no consequences for misrepresenting your opinion or making one up, for example to stay within party lines

50
Q

Cavaillé et al.: What are the advantages and disadvantages of Quadratic Voting for Survey Research (QVSR)?

A

QVSR gives respondents a fixed budget to buy favor/oppose votes, with prices increasing quadratically

This better approximates real-world opportunity costs and induces people to realize what they care about most, regardless of partisan views

But it is difficult and costly to scale up to larger surveys, requires higher cognitive engagement, and needs an appropriately sized fixed budget

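The quadratic pricing rule can be sketched in a few lines. The 100-credit budget matches the study design; the issue names and vote counts here are hypothetical: casting v votes on an issue costs v² credits, so piling votes onto one issue is increasingly expensive and respondents must reveal trade-offs.

```python
# Sketch of QVSR's quadratic cost rule. The 100-credit budget is from
# the study design; issue names and vote counts are hypothetical.

def qvsr_cost(votes):
    """Total credits spent: casting v votes on an issue costs v**2."""
    return sum(v ** 2 for v in votes.values())

budget = 100
votes = {"issue_a": 6, "issue_b": -5, "issue_c": 3}  # sign = favor/oppose
cost = qvsr_cost(votes)  # 36 + 25 + 9 = 70
assert cost <= budget    # a valid allocation must stay within budget
print(cost)  # 70
```

Because the 7th vote on one issue costs 13 more credits (49 - 36) while a 1st vote elsewhere costs only 1, spreading votes is cheap and concentrating them is costly, which is what separates intense preferences from cheap talk.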
51
Q

Cavaillé et al.: Research methods, incl. IV and DV

A

US citizens asked their opinion on 10 policy issues, with the method used to measure opinion randomly varied

IV: Likert (1-3), Likert+, or QVSR (100 credits)

DV: opinions on 10 policy choices + whether they would donate money to an advocacy group or write to their Senator

52
Q

Cavaillé et al.: What are their main findings?

A

All 3 methods predict opinion-congruent behavior, but QVSR works best

Individuals who choose end-of-scale response categories in Likert(+) end up de-bunching under QVSR in ways that align with the donation/writing task

QVSR helps distinguish between respondents who directly benefit from a policy and those who don't: on the gender equality question, Likert shows little variation by gender, but QVSR shows that women are more likely than men to strongly support gender equality policies