The Replication Crisis Flashcards

1
Q

What was Bem’s “feeling the future” study?

A
  • 9 experiments, 1000+ participants
  • Set-up: participants see two curtains on a screen and select left or right; only after they choose does a computer randomly place an erotic image behind one of the curtains.
  • Result: participants chose the curtain hiding the erotic image 53% of the time (chance is 50%).
  • Many later studies failed to replicate the effect, but Bem ran it again and replicated it himself.
  • Interpretation: can currently acceptable methods produce evidence for false conclusions?
2
Q

What was the Open Science Collaboration?

A

○ Attempted to replicate 100 experiments published in 2008 in the top 3 psychology journals, using the original designs and more participants

○ 39% replicated

3
Q

What were the “Hot Potato” studies?

A
  • Independent variable: what song participants were played (“Hot Potato” or a control song)
  • Dependent variable: “How old do you feel?”
  • Result: People felt older after listening to Hot Potato (p < 0.05)
  • Discussion: People contrast their age to the children’s song, so they feel older
  • Study 2: participants were played “When I’m 64” and asked how old they ARE. Those who heard “When I’m 64” were 1.5 years younger than the control group.
  • Conclusion: relatively common, but unreported, scientific practices can produce evidence for effects we know to be wrong (see the sketch below).
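A minimal sketch of one such practice, under assumed conditions (two groups, two uncorrelated dependent variables, no true effect anywhere): measuring two DVs and reporting whichever t-test comes out significant roughly doubles the false-positive rate.

```python
# Sketch: "flexible" analysis with two dependent variables, no real effect.
# All numbers (group size, number of DVs) are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_per_group, n_sims, alpha = 20, 10_000, 0.05

false_positives = 0
for _ in range(n_sims):
    # Two groups ("song" vs. control), two uncorrelated DVs, true effect = 0.
    control = rng.normal(size=(n_per_group, 2))
    treatment = rng.normal(size=(n_per_group, 2))
    # Run a t-test on each DV and keep only the best (smallest) p value.
    p_best = min(stats.ttest_ind(treatment[:, j], control[:, j]).pvalue
                 for j in range(2))
    if p_best < alpha:
        false_positives += 1

# Expected: about 1 - 0.95**2 ≈ 9.75%, roughly double the nominal 5%.
print(f"false-positive rate: {false_positives / n_sims:.3f}")
```

With k independent DVs the rate is 1 − 0.95^k, so every extra unreported measure quietly buys more “significance”.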
4
Q

How are studies evaluated for publication?

A

○ Importance
○ Internal validity
○ Novelty
○ Statistical significance (p < 0.05)

5
Q

How does the pressure for publication create incentives to use unscientific practices?

A

○ You have to publish to get a PhD, a post-doc position, a professorship, grant funding, tenure…
○ “The more any quantitative social indicator is used for decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.” – Donald Campbell
○ Evidence: the distribution of reported p values in published papers shows a sharp drop-off just above p = 0.05.

6
Q

In what ways do professors report fiddling with their data / using shortcuts? (7 ways)

A

○ 66.5% of research professors self-reported that, at least once, they have not reported all dependent variables.
○ 58% have collected more data when the current result wasn’t significant (as Bem did: he collected results from 50 participants at a time and kept going until p < 0.05; see the sketch after this list)
○ 50% have reported only the studies that ‘worked’, i.e. that supported the hypothesis
○ 27.4% said they haven’t always reported all conditions
○ 22.5% have stopped data collection early because the p value was already under 0.05 and they worried that, if they kept collecting data, it might rise above 0.05 and they might not be able to publish
○ 23.3% have rounded p values down (e.g. reporting p = 0.054 as p < 0.05)
○ 1.7% have falsified data
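A minimal sketch of the optional-stopping item above, with assumed parameters (batches of 50, up to 10 batches, a one-sample t-test against zero): even when the true effect is exactly zero, re-testing after every batch and stopping at the first p < 0.05 pushes the false-positive rate well above the nominal 5%.

```python
# Sketch: Bem-style optional stopping under the null (true mean = 0).
# Batch size, batch cap, and test choice are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, batch_size, max_batches, alpha = 5_000, 50, 10, 0.05

false_positives = 0
for _ in range(n_sims):
    data = np.empty(0)
    for _ in range(max_batches):
        data = np.concatenate([data, rng.normal(size=batch_size)])
        # Peek: re-test after every batch, stop as soon as p < 0.05.
        if stats.ttest_1samp(data, 0).pvalue < alpha:
            false_positives += 1
            break

print(f"false-positive rate with peeking: {false_positives / n_sims:.3f}")
```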

7
Q

Is there a penalty for using shortcuts?

A

Not really: papers that are less likely to be true (ones that failed to replicate in the big replication project Andrew was involved in) are just as likely to be cited as papers that were more likely to be true.

8
Q

How does the pharmaceutical industry fuel bad science?

A

“Those who have the gold make the evidence”

Example of SSRIs:
○ RCTs cost a LOT to run, so it’s really only manufacturers that get to do them
○ A meta-analysis of all published studies on SSRIs found that 94% of RCTs reported effectiveness at p < 0.05.
○ In the US, all RCTs have to be registered with the FDA in advance: manufacturers must state the dose and the dependent variables up front
○ FDA audits of those registered trials: only 51% of RCTs found effectiveness at p < 0.05. ONLY THE SUCCESSFUL TRIALS ARE PUBLISHED (see the sketch after this list).
○ Wyeth (now owned by Pfizer) called negative results “failed studies” (not successful studies that show the drug doesn’t work)
○ Biases in research design: high doses are given in effectiveness studies and low doses are given in side-effect studies
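A minimal sketch of the selective-publication mechanism, using invented numbers rather than the real SSRI data: if only the trials that reach p < 0.05 get published, the published literature is 100% “successful” and reports an inflated effect size, even though the regulator, who sees every registered trial, gets a much weaker picture.

```python
# Sketch: publication bias in RCTs. The true effect, sample size, and
# trial count are invented for illustration -- not the real SSRI numbers.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect = 0.25      # small true drug effect, in standard-deviation units
n_per_arm, n_trials = 60, 2_000

published = []          # observed effects of "successful" trials only
n_significant = 0
for _ in range(n_trials):
    placebo = rng.normal(0.0, 1.0, n_per_arm)
    drug = rng.normal(true_effect, 1.0, n_per_arm)
    if stats.ttest_ind(drug, placebo).pvalue < 0.05:
        n_significant += 1
        published.append(drug.mean() - placebo.mean())

print(f"trials significant (regulator's view): {n_significant / n_trials:.0%}")
print(f"true effect size:                      {true_effect:.2f}")
print(f"mean effect in the PUBLISHED trials:   {np.mean(published):.2f}")
```

Because a small-sample trial only reaches significance when sampling noise happens to exaggerate the effect, the published-only average lands well above the true effect.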

9
Q

What practices are (slowly) leading to a brighter, more replicable, future? (6)

A

○ Pre-registering hypotheses, data-collection plans, and data-analysis plans: this stops HARKing (hypothesising after the results are known); a paper this year showed 97% of preregistered studies were replicable
○ Publishing null results and replications
○ Increased sample sizes: these make the shortcut of collecting a few participants at a time less effective
○ Publishing raw data, methods, and analysis scripts
○ Embracing messy data: if a study measures two dependent variables and one shows an effect while the other doesn’t, that tells us there’s something more going on that we need to understand
○ Professors critiquing their own work: some professors are coming forward and saying “this idea might be good, but please don’t draw conclusions from these parts of it”
