week 10 - open research in computational modelling Flashcards
(14 cards)
What is the replication crisis?
What came of this crisis?
-psychological research suffered from academic misconduct and questionable practices -> many published results could not be replicated
-the open research (open science) movement
Who was Diederik Stapel?
-a social psychologist who published 58 fraudulent articles based on fabricated data, which produced false positives; the articles were later retracted
What are things which cause issues of credibility in psychological research?
-a large number of EVs (explanatory variables) relative to a small sample size N -> false positives
-lack of reproducibility
-lack of replicability
-not being aware of the negative impact of researcher degrees of freedom (DOFs)
-publication bias
-questionable research practices (QRPs)
What are DOFs?
What are the different DOFs which negatively impact credibility of experiment?
=the different ways in which researchers are flexible in designing, running and analysing their experiments
-deciding to stop collecting data only once the desired result appears (optional stopping)
-excluding/including participants post hoc (after the experiment has finished)
-trying different statistical models/covariates until results are significant (p-hacking) - a simulation sketch follows this list
-changing the hypothesis after data analysis
-only reporting significant results
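As an illustration (not from the lecture), a small simulation shows how these degrees of freedom inflate false positives: the simulated data contain no true effect, yet repeatedly re-testing after arbitrary post-hoc exclusions and keeping the best p-value pushes the false-positive rate above the nominal 5%. All names and numbers below are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def smallest_p_after_hacking(n_per_group=20, n_post_hoc_tries=5):
    # One experiment with NO true effect between the two groups.
    a = rng.normal(size=n_per_group)
    b = rng.normal(size=n_per_group)
    p_values = [stats.ttest_ind(a, b).pvalue]
    # Researcher degrees of freedom: re-test after arbitrary post-hoc
    # participant exclusions and report only the best-looking result.
    for _ in range(n_post_hoc_tries):
        keep_a = rng.random(n_per_group) > 0.25
        keep_b = rng.random(n_per_group) > 0.25
        p_values.append(stats.ttest_ind(a[keep_a], b[keep_b]).pvalue)
    return min(p_values)

# Cherry-picking the best of several analyses raises the realised
# false-positive rate above the nominal alpha of 0.05.
fp_rate = np.mean([smallest_p_after_hacking() < 0.05 for _ in range(2000)])
print(f"false-positive rate under p-hacking: {fp_rate:.2f}")
```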
How does Bayesian Truth Serum work?
How is it carried out?
Incentivizing truthfulness: respondents are scored both on their own answers and on how accurately they predict the distribution of everyone else's answers, so honest answers score best in expectation -> respondents become more likely to admit that they did something wrong themselves (when asked self-admission Qs)
It is carried out by asking each respondent direct self-admission questions plus indirect questions about their peers, e.g. estimating what proportion of colleagues have engaged in a given practice (a scoring-rule sketch follows this card)
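For concreteness, here is a minimal sketch of the BTS scoring rule as described by Prelec (2004), which the incentive in John et al. (2012) builds on; the function and variable names are my own, and real applications smooth the frequencies to avoid zeros.

```python
import numpy as np

def bts_scores(answers, predictions, alpha=1.0):
    """Sketch of the Bayesian Truth Serum scoring rule (Prelec, 2004).

    answers:     length-n array of chosen answer indices (0..k-1)
    predictions: (n, k) array; each row is a respondent's predicted
                 distribution of answers across the whole sample
    Assumes every option is chosen at least once and all predictions
    are strictly positive.
    """
    n, k = predictions.shape
    x_bar = np.bincount(answers, minlength=k) / n        # actual answer frequencies
    y_bar = np.exp(np.log(predictions).mean(axis=0))     # geometric mean of predictions
    # Information score: rewards answers that are more common than
    # the group collectively predicted ("surprisingly popular").
    information = np.log(x_bar[answers] / y_bar[answers])
    # Prediction score: rewards accurate predictions of others' answers.
    prediction = alpha * (x_bar * np.log(predictions / x_bar)).sum(axis=1)
    return information + prediction
```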
Why did John et al. 2012 use BTS?
encourages honest answers in environments where self-reporting could be biased by social desirability or personal interests
What did John et al. 2012 discover about falsifying data in psychological research?
-the impact of truth-telling incentives (BTS) on self-admission of questionable research practices was positive, and this impact was greater for practices that respondents judged to be less defensible
-combining three different estimation methods, they found that the percentage of respondents who had engaged in questionable practices was surprisingly high
What did The Turing Way Community claim were ‘good’ research practice requirements?
research needs to be:
Reproducible: same analysis of same data should give same answers
Replicable: same analysis, different data -> same answers
Robust: different analysis, same data -> same answers
Generalisable: different analysis, different data -> same answers
What are the principles of open research?
-preregistration of hypotheses before starting the experiment
-open materials and methods
-open research data
-open-source software
-open-source code
-open access publications
Why make your work reproducible? (personal reasons)
-aids writing papers
-helps reviewers see it from your perspective
-allows continuity of your work
-helps build your reputation as a researcher
What are some good practices when researching with cognitive modelling?
Why is each of them useful?
-keep a model logbook -> avoids forgetting what you did earlier and keeps track of changes
-do parameter recovery studies -> checks that known parameters can be recovered from simulated data, i.e. that parameter estimates are meaningful (see the sketch after this list)
-do robustness and generalisation (sensitivity) studies -> checks that conclusions do not depend on arbitrary modelling choices and that the model is not overfit
-quantify uncertainty in parameter estimates -> improves model transparency
-share model code, data, and simulation scripts -> improves reproducibility
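Below is a minimal sketch of a parameter recovery study, using a deliberately toy "model" (log-normal response times driven by a single drift-like parameter) as a hypothetical stand-in for a real cognitive model: simulate data from known parameters, fit the model to those data, and check that the fitted values track the true ones.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(1)

def simulate_rt(drift, n_trials=200):
    # Toy model: log-normal response times whose location depends on drift.
    return rng.lognormal(mean=-drift, sigma=0.5, size=n_trials)

def fit_drift(data):
    # Recover the parameter by maximum likelihood over a bounded range.
    nll = lambda d: -stats.lognorm.logpdf(data, s=0.5, scale=np.exp(-d)).sum()
    return optimize.minimize_scalar(nll, bounds=(0.0, 3.0), method="bounded").x

# Simulate from known parameters, refit, and compare recovered to true values.
true_params = rng.uniform(0.2, 2.0, size=50)
recovered = [fit_drift(simulate_rt(d)) for d in true_params]
print("true-vs-recovered correlation:", np.corrcoef(true_params, recovered)[0, 1])
```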
What is the aim of open research?
that research is robust, reproducible, replicable and generalisable
What are some examples where open research practices are not always ideal for cognitive modelling experiments?
-if you have an exploratory approach to modelling -> you can state that the hypothesis is exploratory and may change
-preregistration is not a substitute for good judgement -> deviate where needed but log deviations transparently
-modelling is an iterative process and requires exploration
What are some good practices when sharing your code?
-follow coding conventions
-give variables and functions sensible names
-write concise and useful comments
-track simulation outputs with notebooks (Jupyter)
-add a README file explaining how to run the code (a minimal example follows below)
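A hypothetical example of what shared, readable analysis code might look like (sensible names, concise comments, a seeded random generator, and a clear entry point that a README could reference); the model and file name are invented for illustration.

```python
# simulate_model.py -- illustrative example of code written to be shared.

import numpy as np

def simulate_learning_curve(learning_rate: float, n_trials: int, seed: int = 0) -> np.ndarray:
    """Simulate trial-by-trial accuracy for a simple delta-rule learner."""
    rng = np.random.default_rng(seed)            # seeded for reproducibility
    accuracy = np.empty(n_trials)
    p_correct = 0.5                              # start at chance
    for trial in range(n_trials):
        accuracy[trial] = rng.random() < p_correct
        p_correct += learning_rate * (1.0 - p_correct)   # delta-rule update
    return accuracy

if __name__ == "__main__":
    # Minimal usage example a README could point to:  python simulate_model.py
    print(simulate_learning_curve(learning_rate=0.05, n_trials=10))
```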