Chapter 14 Flashcards
3 key components of open science / best practices
transparency
reproducibility
replicability
reproducibility
reproducing identical results from the same data
replicability
reproducing the results of an earlier study by collecting new data through similar procedures
what does replication give to a study
credibility
3 types of replication
direct replication
conceptual replication
replication-plus-extension
direct replication
the original study is repeated as similarly as possible to determine whether the original effect is found in the new data
conceptual replication
the same research question and same conceptual variables but different operationalizations
in replication-plus-extension, in what 2 ways can you extend the original study
- add another level to an existing IV
- add another variable (makes it a factorial design)
what does a meta-analysis yield?
a quantitative summary of a scientific literature: an average of the effects from all studies (published and unpublished) on the same variables
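Below is a minimal Python sketch of the arithmetic behind a fixed-effect meta-analysis: each study's effect size is weighted by the inverse of its variance, so more precise studies count more toward the average. The study values are hypothetical illustrations, not real data.

```python
# Hypothetical (effect size d, variance of d) pairs for five studies
studies = [(0.40, 0.04), (0.25, 0.09), (0.55, 0.02), (0.10, 0.12), (0.35, 0.05)]

# Fixed-effect meta-analysis: weight each study's effect by 1/variance
weights = [1 / v for _, v in studies]
summary = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
print(f"Weighted mean effect size: {summary:.3f}")
```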
scientific literature
series of related studies conducted by different researchers who have tested similar variables
limitations to meta-analyses
null and opposite effects are rarely published, so a meta-analysis might overestimate the true effect size (the file drawer problem)
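A hedged simulation of the file drawer problem (Python, with made-up numbers): if only statistically significant positive results get published, the average published effect overshoots the true effect.

```python
import random

random.seed(1)
TRUE_EFFECT, N = 0.2, 25            # true effect; hypothetical sample size
SE = 1 / N ** 0.5                   # standard error of each study's estimate
CUTOFF = 1.96 * SE                  # approx. two-sided p < .05 threshold

all_effects, published = [], []
for _ in range(5000):
    observed = TRUE_EFFECT + random.gauss(0, SE)  # true effect + sampling noise
    all_effects.append(observed)
    if observed > CUTOFF:           # null/opposite effects stay unpublished
        published.append(observed)

print(f"Mean of all studies:       {sum(all_effects) / len(all_effects):.2f}")
print(f"Mean of published studies: {sum(published) / len(published):.2f}")
```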
solution to the file drawer problem of meta-analyses
actively seek out unpublished data, e.g., by asking on social media and researcher forums
origin of the replication crisis
in a random sample of 100 studies published in journals, only 39% were successfully replicated
recommended responses to the replication crisis
- ask why replication studies might fail
- ask what the best practices are to improve reproducibility
why might a study fail to replicate?
-if direct replication was used when it doesn’t make sense to use it
-if the researchers relied on only 1 replication study
-questionable research practices
best known QRPs
underreporting null effects
p-hacking
HARKing
using small samples
how does underreporting null effects influence readers?
makes people think that the effects are stronger than they actually are
p-hacking
when researchers try running different statistical analyses or computing their data differently than they originally intended, in hopes of obtaining a significant p value (it is often not done intentionally; researchers can become biased without being aware of it)
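A small Python simulation of why p-hacking inflates false positives: with no true effect at all, running k different analyses and reporting any "hit" yields far more than the nominal 5% error rate. The z-test and sample sizes below are illustrative assumptions.

```python
import random

random.seed(2)

def significant_under_null(n=30):
    """One simulated null study: z-test on the mean of n standard-normal scores."""
    m = sum(random.gauss(0, 1) for _ in range(n)) / n
    return abs(m) / (1 / n ** 0.5) > 1.96      # p < .05 despite no true effect

trials = 2000
for k in (1, 5, 10):                           # number of analyses tried
    hits = sum(any(significant_under_null() for _ in range(k))
               for _ in range(trials))
    print(f"{k:2d} analyses tried -> false-positive rate ~ {hits / trials:.2f}")
```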
HARKing
hypothesizing after the results are known; misleads readers about the strength of the evidence
why can a small sample size be problematic
the study’s estimate is usually imprecise and not replicable, because it doesn’t take many extreme scores to greatly influence a small data set
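To see the imprecision concretely, a quick Python sketch (the true mean and sample sizes are arbitrary assumptions): estimates from small samples swing far more widely across repeated studies than estimates from large ones, which is why small-n results often fail to replicate.

```python
import random

random.seed(3)

def sample_mean(n, true_mean=0.3):
    """Mean of one simulated sample of size n drawn around the true mean."""
    return sum(random.gauss(true_mean, 1) for _ in range(n)) / n

for n in (10, 50, 500):
    estimates = [sample_mean(n) for _ in range(1000)]
    print(f"n={n:3d}: estimates range from {min(estimates):+.2f} "
          f"to {max(estimates):+.2f}")
```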
best practices for scientific studies
pre-registration
power analysis
report all analyses
report all variables measured
report all conditions
pre-registration
preregister the study’s methods, hypotheses and statistical analyses online BEFORE DATA COLLECTION
power analysis
determines an adequate sample size for the study’s design; done before submission to the ethics committee
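A sketch of an a-priori power analysis using the statsmodels package (the expected effect size d = 0.5, alpha = .05, and 80% power are hypothetical inputs, not fixed rules):

```python
from statsmodels.stats.power import TTestIndPower

# Solve for the per-group n of a two-group t-test design
n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,         # expected Cohen's d (assumed medium effect)
    alpha=0.05,              # significance level
    power=0.8,               # desired probability of detecting the effect
    alternative='two-sided',
)
print(f"Required sample size per group: {n_per_group:.0f}")  # about 64
```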
what 2 factors are considered in external validity
-how well the results can generalize to a population of interest
-how the sample was selected (random?)