PSY2001 W9 Critical Perspective Flashcards

(43 cards)

1
Q

What are the Transparency and Openness Promotion (TOP) guidelines?

TOP guidelines, Nosek et al. 2015

A
  1. Citation standards
  2. Data transparency
  3. Analytic methods (code) transparency
  4. Research materials transparency
  5. Design and analysis transparency
  6. Preregistration of studies
  7. Preregistration of analysis plans
2
Q

What is replication?

A

Research team 1 conducts a study; research team 2 then repeats the study in a different setting (e.g. with different participants) and either replicates the findings of research team 1 or fails to do so.

3
Q

Why are replications useful?

A

Repeatedly finding the same results:
Protects against false positives (e.g. sampling error)
Controls for artifacts
Addresses researcher fraud
Tests whether findings generalise to different populations
Tests the same hypothesis using a different procedure

4
Q

What is direct replication?

A

A scientific attempt to recreate the critical elements (e.g., samples, procedures, and measures) of an original study.
The same—or similar—results are an indication that the findings are accurate and reproducible.

5
Q

What is conceptual replication?

A

To test the same hypothesis using a different procedure
The same—or similar—results are an indication that the findings are robust to alternative research designs, operational definitions, and samples

6
Q

What proportion of studies replicate overall?

A

36% (Open Science Collaboration, 2015)

7
Q

What proportion of findings in the Journal of Personality and Social Psychology replicate?

A

23%

8
Q

What proportion of findings in the Journal of Experimental Psychology: Learning, Memory, and Cognition replicate?

A

48%

9
Q

What proportion of social articles in Psychological Science replicate?

A

29%

10
Q

What proportion of cognitive articles in Psychological Science replicate?

A

53%

11
Q

What is a historical example of faking?

A

Diederik Stapel: an influential social psychologist who admitted to faking data.
Published papers are expected to be neat, nice, and coherent. When his results were not coherent or in line with the hypothesis, he either dumped the study or faked the data.
About a third of his papers were retracted.

12
Q

What are reasons for non-replication?

A

Faking, sloppy science, outcome switching, and small samples / lack of statistical power.

13
Q

What are the nine circles of scientific hell?

Neuroskeptic 2012

A

Limbo
Overselling
Post-Hoc Storytelling
P-value fishing
Creative Outliers
Plagiarism
Non-publication
Partial Publication
Inventing Data

14
Q

What is Limbo?

A

Seeing the dubious things done by peers (and saying nothing about them).

15
Q

What is post-hoc storytelling?

A

Writing the story after the study is done so that the paper looks pretty and coherent, as if the results had been predicted all along.

16
Q

What is non-publication?

A

The reason it sits this far down is that it refers to intentional non-publication: deciding whether or not to publish depending on the results, i.e. planning to publish a study and then not doing so.

17
Q

What is partial publication?

A

Publishing only the parts of a study that "worked" and omitting those that did not.

18
Q

What is creative outliers?

A

Running the analysis with and without the outliers and picking whichever version provides the best results.

19
Q

What is outcome switching?

A

Pertains to p-value fishing: changing the outcomes of interest in a study depending on the observed results.
E.g. "p-hacking": making decisions to maximise the likelihood of a statistically significant effect, rather than on objective or scientific grounds.

We are looking for that significant p-value, and there are things you can do to make one more likely. If you measure multiple variables and notice that only one has a significant p-value, you focus on and report only that one.
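The inflation from measuring many outcomes and reporting only the significant one can be quantified. A minimal Monte Carlo sketch (my illustration, not from the lecture): under the null every p-value is uniform, so each extra outcome is another 5% chance at a false positive.

```python
import random

random.seed(0)

TRIALS = 100_000
K = 10  # number of outcome variables examined per "study"

# Under the null, each test is "significant" (p < .05) with probability .05;
# a study "finds an effect" if ANY of its K outcomes comes up significant.
hits = sum(
    any(random.random() < 0.05 for _ in range(K))
    for _ in range(TRIALS)
)
rate = hits / TRIALS
print(round(rate, 2))  # ≈ 1 - 0.95**10 ≈ 0.40, far above the nominal .05
```

Reporting only the one significant outcome makes the study look like it passed a single α = .05 test, when the effective false-positive rate is closer to 40%.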

20
Q

Why does small samples and lack of statistical power explain non-replications?

A

The smaller the sample, the less robust any observed effect is likely to be.
Seminal training studies: Klingberg et al.'s (2002) training study provided the first evidence for training and transfer effects, but used very small group sizes (n = 7). Are the effects replicable?
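How little power n = 7 per group buys can be sketched with a normal-approximation power calculation (a rough sketch: the exact t-test power differs slightly, and the effect size d = 0.5 is an assumed "medium" effect, not a figure from Klingberg et al.):

```python
from statistics import NormalDist

nd = NormalDist()

def approx_power(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sample, two-sided test for a
    standardised effect size d, using the normal approximation
    (the negligible second tail is ignored)."""
    z_crit = nd.inv_cdf(1 - alpha / 2)            # ≈ 1.96
    noncentrality = d * (n_per_group / 2) ** 0.5  # expected z under H1
    return 1 - nd.cdf(z_crit - noncentrality)

print(round(approx_power(0.5, 7), 2))   # ≈ 0.15: n = 7 rarely detects d = 0.5
print(round(approx_power(0.5, 64), 2))  # ≈ 0.81: conventional 80% power
```

With seven per group, a genuine medium-sized effect would reach significance only about one time in seven, so both "findings" and non-replications are largely noise.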

21
Q

How common is sloppy science?

John et al. 2012

A

Survey about involvement in questionable research practices:
Failing to report all the measures or conditions.
Deciding whether to collect more data after looking to see whether the results were significant. [Early on, the p-value wanders around a lot. The problem with peeking at the data is that you might stop and analyse at a point where the p-value happens to look promising.]
Selectively reporting studies that "worked".
Results: a lot of people reported doing these things; they are still quite common.
Concluded that the percentage of respondents who had engaged in questionable practices was surprisingly high.

22
Q

What did Simmons et al. find about the frequency of sloppy science?

2011

A

Flexibility in data collection, analysis, and reporting dramatically increases actual false-positive rates.

They showed "support" for something that is completely impossible (that listening to a song changed participants' chronological age). They did not report everything: they reported only the covariate that produced a significant effect, analysed the data after every 10 participants, and stopped when they got the result they were looking for.

Journals don't want to publish messy, long, difficult reports, which steers scientists towards neat, short ones.
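The "analyse every 10 participants and stop at significance" strategy can be simulated directly. The sketch below (my illustration, assuming a z-test with known SD = 1) shows how optional stopping inflates the false-positive rate even when there is no true effect at all:

```python
import random
from statistics import mean

random.seed(42)

def peeking_false_positive_rate(n_sims=4000, step=10, n_max=100):
    """Simulate studies with NO true effect, running a z-test
    (known sd = 1) after every `step` participants and stopping
    data collection at the first nominally significant result."""
    false_positives = 0
    for _ in range(n_sims):
        data = []
        for _ in range(n_max // step):
            data.extend(random.gauss(0, 1) for _ in range(step))
            z = mean(data) * len(data) ** 0.5
            if abs(z) > 1.96:  # nominal alpha = .05
                false_positives += 1
                break
    return false_positives / n_sims

rate = peeking_false_positive_rate()
print(rate)  # well above the nominal .05 (typically around .15-.20)
```

Each peek is another chance for the wandering p-value to dip below .05, so ten looks at the data multiply the nominal 5% error rate several times over.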

23
Q

What are moderators?

A

Variables that influence the nature (e.g., direction and / or size) of an effect.
E.g. country or culture: "reverse" ego-depletion (Savani & Job, 2017).

Identifying moderators is good because it improves our understanding. Cf. "second generation research" (Zanna & Fazio, 1982).

24
Q

Could scientist error or poor replication explain non-replications?

A

John Bargh ("Priming effects replicate just fine, thanks") was critical of a study that attempted to replicate his own work.

He claimed the methodology was not replicated properly, calling Doyen and her colleagues "incompetent and ill informed" and their methodological changes "gross".

25
Q

What is publication bias?

A

Cf. "non-publication" and "partial publication": findings that are statistically significant are more likely to be published than those that are not.
In general there are good reasons for this (cf. ambiguity over the reasons for null findings), BUT could published studies represent the 5% of findings that occur by chance alone?
Known as "the file drawer problem" (Rosenthal, 1979).
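The file drawer problem can be illustrated with a small simulation (my sketch, with an assumed small true effect of d = 0.2): if only significant results reach print, the published literature systematically overestimates the true effect.

```python
import random
from statistics import mean

random.seed(7)

TRUE_D, N, STUDIES = 0.2, 30, 4000  # small true effect, modest samples

all_estimates, published = [], []
for _ in range(STUDIES):
    sample = [random.gauss(TRUE_D, 1) for _ in range(N)]
    d_hat = mean(sample)              # observed effect estimate
    all_estimates.append(d_hat)
    z = d_hat * N ** 0.5              # z-test with known sd = 1
    if abs(z) > 1.96:                 # "significant" -> gets published
        published.append(d_hat)

print(round(mean(all_estimates), 2))  # close to the true 0.2
print(round(mean(published), 2))      # inflated, roughly double the truth
```

Only studies whose sampling error happened to push the estimate upward cross the significance threshold, so the filed-away non-significant studies are exactly the ones that would have corrected the picture.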
26
Q

What is a solution to the problem of sloppy science?

A

Open science.
27
Q

What is open science?

A

"The process of making the content and process of producing evidence and claims transparent and accessible to others."
"Without transparency, claims only achieve credibility based on trust in the confidence or authority of the originator. Transparency is superior to trust."
28
Q

What is open methodology?

A

Documenting the methods and the process by which those methods were developed / decided upon.
29
Q

What is pre-registration?

A

Defining the research questions, methods, and approach to analysis before observing the research outcomes.
30
Q

What does pre-registration prevent?

A

HARKing: Hypothesising After the Results are Known.
Hindsight bias.
31
Q

What was psychology's "registration revolution"?

A

"Moves to uphold transparency are not only making psychology more scientific; they are harnessing our knowledge of the mind to strengthen science." (Chris Chambers, 2014, The Guardian)
32
Q

What is the debate about the value of pre-registration?

A

Is it too constraining? What happened to letting the data speak, and to chance discoveries?
The answer: pre-registration simply makes it clear what is exploratory and what is not.
33
Q

Does pre-registration improve replicability?

Protzko et al. 2023

A

Four labs attempted to replicate 16 novel experimental findings using rigour-enhancing practices: confirmatory tests, large sample sizes, preregistration, and methodological transparency. They replicated the expected effects in 86% of attempts.
Contrast with the Open Science Collaboration (2015), who found that only 23% of findings in social psychology could be replicated...
BUT replicability was not the original outcome of interest in the project, and the practices associated with replicability were not pre-registered as claimed. Instead, the originally planned study set out to examine whether the mere act of scientifically investigating a phenomenon could cause its effect size to decline on subsequent investigation.
34
Q

What are registered reports?

A

Split the peer-review process into two stages:
1. Reviewers and editors assess a detailed protocol: the study rationale, procedure, and a detailed analysis plan.
2. Following favourable reviews, the journal offers in-principle acceptance: publication of the findings is guaranteed provided that the authors adhere to the approved protocol, the study meets pre-specified quality checks, and the conclusions are appropriately evidence-bound.
35
Q

What is open data?

A

Making the dataset freely available:
Allows other scientists to verify the (original) analyses
Facilitates research beyond the scope of the original research
Avoids duplication of data collection
The data need to be FAIR (Wilkinson et al., 2016): Findable, Accessible, Interoperable, Reusable.
36
Q

What is the traditional model of publication?

A

Researchers submit a paper to a scientific journal, which decides whether or not to publish. The researchers then sign copyright over to the journal, which charges universities / libraries / individuals for access.
37
Q

What are the limits of the traditional model of publication?

A

It limits access to those who have the funds to pay for articles / subscriptions.
38
Q

What are the types of open access publishing?

A

Gold open access and green open access.
39
Q

What is gold open access?

A

The researchers (or, more likely, the funders or host institution) pay the journal to publish the article.
The final (formatted) version is freely and permanently accessible to everyone.
40
Q

What is green open access?

A

Also referred to as self-archiving: putting an (unformatted) version of the manuscript into a repository.
41
Q

What are the effects of open access publication?

Tennant et al. 2016

A

Open access works are used more:
Within academia, open access works are cited between 36% and 600% more than works that are not open access.
Outside of academia, open access works receive more coverage from journalists and are discussed more in non-scientific settings.
Open access works facilitate meta-analysis: they enable the use of automated text- and data-mining tools.
42
Q

What is used to acknowledge open-science practices?

A

Badges promote open science: if researchers can earn digital badges for them, they are more likely to adopt open-science practices.
43
Q

Do badges increase people's trust?

A

Students and teachers show increased trust in badged articles, and social scientists were also more trusting, but the public did not care.