PSY2001 W9 Critical Perspective Flashcards
(43 cards)
What are the Transparency and Openness Promotion (TOP) guidelines?
TOP guidelines (Nosek et al., 2015):
- Citation standards
- Data transparency
- Analytic methods (code) transparency
- Research materials transparency
- Design and analysis transparency
- Preregistration of studies
- Preregistration of analysis plans
What is replication?
Research team 1 conducts a study; research team 2 then repeats the study in a different setting (e.g., with different participants) and either replicates research team 1's findings or does not.
Why are replications useful?
Repeatedly finding the same results:
- Protects against false positives (e.g., sampling error)
- Controls for artifacts
- Addresses researcher fraud
- Tests whether findings generalise to different populations
- Tests the same hypothesis using a different procedure
What is direct replication?
A scientific attempt to recreate the critical elements (e.g., samples, procedures, and measures) of an original study.
The same—or similar—results are an indication that the findings are accurate and reproducible.
What is conceptual replication?
To test the same hypothesis using a different procedure
The same—or similar—results are an indication that the findings are robust to alternative research designs, operational definitions, and samples
What proportion of studies replicate overall?
36%
What proportion of findings in the Journal of Personality and Social Psychology replicated?
23%
What proportion of findings in the Journal of Experimental Psychology: Learning, Memory, and Cognition replicated?
48%
What proportion of findings in Psychological Science (social articles) replicated?
29%
What proportion of findings in Psychological Science (cognitive articles) replicated?
53%
What is a historical example of faking?
Diederik Stapel: an influential social psychologist who admitted to faking data.
Published papers are expected to be neat, nice, and coherent; when his results were not coherent or in line with the hypothesis, he felt he had to either dump the study or fake the data.
A third of his papers were retracted.
What are reasons for non-replication?
Faking, sloppy science, outcome switching, and small samples / lack of statistical power.
What are the nine circles of scientific hell?
Neuroskeptic (2012)
Limbo
Overselling
Post-Hoc Storytelling
P-value fishing
Creative Outliers
Plagiarism
Non-publication
Partial Publication
Inventing Data
What is Limbo?
Seeing the dubious things done by peers (and saying nothing about it).
What is post-hoc storytelling?
Writing the story after the study is done and making it look pretty and coherent, as if the results had been predicted all along.
What is non-publication?
It sits this far down because it refers to intentional non-publication: deciding whether or not to publish depending on the results, i.e., planning to publish a study and then not doing so.
What is partial publication?
Publishing only what "worked" and leaving out what did not.
What is creative outliers?
Running the analysis with and without the outliers and picking whichever version provides the best results (see the sketch below).
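A minimal sketch of why this inflates false positives (hypothetical numbers, not from the lecture): with no true effect at all, trying the test both with and without an arbitrary outlier cut-off and keeping the smaller p-value pushes the false-positive rate above the nominal 5%.

```python
# Hypothetical sketch of "creative outliers": with NO true effect,
# run the test with and without a 2-SD outlier cut-off and keep
# whichever p-value is smaller.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_sims, n = 5000, 30
false_positives = 0

for _ in range(n_sims):
    a, b = rng.normal(size=n), rng.normal(size=n)
    p_all = stats.ttest_ind(a, b).pvalue                 # all data
    p_trim = stats.ttest_ind(a[np.abs(a) < 2],           # "outliers"
                             b[np.abs(b) < 2]).pvalue    # removed
    if min(p_all, p_trim) < 0.05:   # report whichever looks best
        false_positives += 1

print(f"False-positive rate: {false_positives / n_sims:.2f}")  # above 0.05
```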
What is outcome switching?
Pertains to p-value fishing.
Changing the outcomes of interest in the study depending on the observed results.
E.g., "p-hacking": making decisions so as to maximise the likelihood of a statistically significant effect, rather than on objective or scientific grounds.
We are looking for that significant p-value, and there are things you can do to make one more likely: for instance, if you run multiple variables and notice that only one has a significant p-value, you focus on and report only that one (see the sketch below).
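A minimal sketch of this form of p-hacking (hypothetical parameters, not from the lecture): with five outcome variables and no true effect on any of them, reporting only the "best" outcome yields a false positive roughly 23% of the time instead of the nominal 5%.

```python
# Hypothetical sketch of outcome switching: measure several outcome
# variables with NO true effect anywhere, then report only the one
# with the smallest p-value.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n, n_outcomes = 5000, 30, 5
false_positives = 0

for _ in range(n_sims):
    a = rng.normal(size=(n, n_outcomes))   # control group
    b = rng.normal(size=(n, n_outcomes))   # "treatment" group, same distribution
    p_values = stats.ttest_ind(a, b, axis=0).pvalue
    if p_values.min() < 0.05:              # focus on the one that "worked"
        false_positives += 1

print(f"False-positive rate: {false_positives / n_sims:.2f}")  # ~0.23, not 0.05
```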
Why do small samples and a lack of statistical power explain non-replications?
The smaller the sample, the lower the statistical power and the less robust the estimated effect; see the power sketch after the Klingberg example below.
Seminal Training Studies: Klingberg et al.’s (2002) training study
First evidence for training and transfer effects, but with very small group sizes (n = 7). Are the effects replicable?
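A rough power sketch for this concern; the effect size (d = 0.5) is an assumption for illustration, not a value from Klingberg et al.:

```python
# Hypothetical power check: d = 0.5 is an assumed medium effect size.
# With n = 7 per group, even a real effect is detected only rarely,
# so "failed" replications are the expected outcome.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sims, n_per_group, true_effect = 5000, 7, 0.5

significant = 0
for _ in range(n_sims):
    control = rng.normal(0.0, 1.0, n_per_group)
    trained = rng.normal(true_effect, 1.0, n_per_group)   # real effect exists
    if stats.ttest_ind(trained, control).pvalue < 0.05:
        significant += 1

print(f"Power at n = 7: {significant / n_sims:.2f}")  # roughly 0.10-0.15
```

Under these assumptions, the study detects a genuine medium-sized effect only around one time in eight, so a non-replication tells us little either way.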
How common is sloppy science?
John et al. (2012)
Survey about involvement in questionable research practices
Failing to report all the measures or conditions.
Deciding whether to collect more data after looking to see whether the results were significant. [Early on, the p-value wanders around a lot. The problem with peeking at the data is that you may run the final analysis at a moment when the p-value happens to look promising, even though it keeps wandering.]
Selectively reporting studies that “worked”.
Results: a lot of people admitted to doing this; these practices are still quite common.
They concluded that the percentage of respondents who had engaged in questionable practices was surprisingly high.
What did Simmons et al. find about the frequency of sloppy science?
Simmons et al. (2011)
Flexibility in data collection, analysis, and reporting dramatically increases actual false-positive rates.
Exploiting this flexibility, they showed support for a conclusion that is completely impossible: they did not report everything, reported only the covariate that yielded a significant effect, and analysed the data after every 10 participants, stopping once they got the result they were looking for (see the sketch below).
Journals do not want to publish messy, long, difficult reports, which steers scientists towards writing neat, short ones.
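A minimal sketch of the optional-stopping strategy described above (hypothetical parameters): peeking after every 10 participants and stopping at the first p < .05 inflates the false-positive rate well beyond the nominal 5%, which is the kind of inflation Simmons et al. demonstrate.

```python
# Hypothetical sketch of optional stopping: analyse after every 10
# participants per group and stop as soon as p < .05. There is NO
# true effect, so a single honest test would be a false positive
# only 5% of the time.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_sims, batch, max_n = 5000, 10, 100
false_positives = 0

for _ in range(n_sims):
    a, b = [], []
    for _ in range(max_n // batch):
        a.extend(rng.normal(size=batch))
        b.extend(rng.normal(size=batch))
        if stats.ttest_ind(a, b).pvalue < 0.05:   # peek at the data
            false_positives += 1
            break

print(f"False-positive rate: {false_positives / n_sims:.2f}")  # well above 0.05
```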
What are moderators?
Variables that influence the nature (e.g., direction and / or size) of an effect.
E.g., country or culture: "reverse" ego-depletion (Savani & Job, 2017)
Identifying moderators is good because it improves our understanding; cf. "second generation research" (Zanna & Fazio, 1982)
Could scientist error or a poor replication explain non-replications?
John Bargh claimed that Doyen and her colleagues were "incompetent and ill informed" and had made "gross" methodological changes.
He was critical of the study that attempted to replicate his own work ("Priming effects replicate just fine, thanks"), saying that the original methodology was not replicated properly.