Week 6 - Intro to cognitive modelling Flashcards
(27 cards)
Does computational neuroscience (CN) have a top-down or bottom-up approach to computational modelling? Why?
bottom-up approach, as CN looks at finer details first, such as neuronal activity patterns, to create biologically plausible representations (the models)
How does the approach to computational modelling differ between CN and cognitive science?
CN is bottom-up whereas cognitive science is top-down
cognitive science looks at behavioural patterns to build models, whereas CN looks at activity on a neuronal level to form models
What is the issue with deriving a computational model from data (data models)?
data models have no intrinsic psychological content (no explanation for the patterns in the data -> so you can’t really build a model/theory upon the data alone)
Give an example of a cognitive model
What did this model propose about how our brains work?
Baddeley and Hitch's working memory model, with the visuospatial sketchpad and the phonological loop
-working memory isn’t just a single short-term storage space, but a system with multiple components
Give an example of a data model
What is the issue with this model?
-Study by Heathcote investigating data patterns of the ‘practice effect’: is the learning rate better described by a power function or an exponential function?
-It just describes patterns without explaining why psychologically, and there is no biological representation in the brain
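For concreteness, a minimal sketch of the two candidate functional forms in Python; the parameter names a and b and the trial values are illustrative, not taken from Heathcote's paper:

```python
import numpy as np

def power_law(n, a, b):
    """Power function: performance improves as a power of practice trials n."""
    return a * n ** (-b)

def exponential_law(n, a, b):
    """Exponential function: performance improves exponentially with practice."""
    return a * np.exp(-b * n)

trials = np.arange(1.0, 11.0)  # illustrative practice trials 1..10
print(power_law(trials, a=2.0, b=0.5))
print(exponential_law(trials, a=2.0, b=0.5))
```

Both curves decrease with practice; the data-model question is purely which curve describes the observed pattern better, not why.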
What is the problem with the cognitive science approach to computational modelling?
the top-down approach means the models have no biological representation (they are not grounded in activity at the neuronal level)
What type of model is the Spreading-Activation Model by Collins and Loftus?
How does it work?
-verbal model
-semantic memory model: when one word is activated, other words associated with it in meaning are activated too. Different people have different associations.
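A minimal sketch of the spreading-activation idea, assuming a hypothetical association graph; the words, decay factor, and threshold below are invented for illustration:

```python
# Hypothetical association graph: each word points to its semantic associates.
network = {
    "red": ["fire", "rose", "apple"],
    "fire": ["heat", "engine"],
    "rose": ["flower"],
}

def spread(word, activation=1.0, decay=0.5, threshold=0.1, seen=None):
    """Activate a word, then pass a decaying share of activation to its associates."""
    seen = {} if seen is None else seen
    if activation < threshold or seen.get(word, 0.0) >= activation:
        return seen  # too weak to spread further, or already more active
    seen[word] = activation
    for neighbour in network.get(word, []):
        spread(neighbour, activation * decay, decay, threshold, seen)
    return seen

print(spread("red"))  # e.g. {'red': 1.0, 'fire': 0.5, 'heat': 0.25, ...}
```

Different people having different associations corresponds to each person having their own `network` graph.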
What are the benefits of MODELLING cognition (in general)?
-can compare different plausible models systematically
-make implicit assumptions (inferred) -> explicit (apparent)
-communicate theoretical ideas (box and arrow)
-test THEORETICAL hypotheses and predictions
Guest & Martin (2021)
What is the benefit of using computational modeling to describe cognitive theories (in cognitive science)?
-removes ambiguity from the verbal descriptions used in cognitive theories
-constrains the dimensions the cognitive theory can span
Guest & Martin (2021)
What is the pipeline in their diagram?
Which parts implement cognitive modelling?
framework, theory, specification, implementation, hypothesis, data
theory, specification and implementation
Guest & Martin (2021)
Where is cognitive modelling implemented in the pipeline of a cognitive science experiment (from theory to data)?
What do the other three parts do?
Theory: defines the relationships in a model, based on science
Specification: formalizes the model mathematically
Implementation: builds the code for the model
Framework: conceptual system providing context
Hypothesis: a testable statement
Data: empirical data/observations from real-world experiments or model simulations
Guest & Martin (2021)
What are the benefits of adding computational modelling to building cognitive theories?
-computational modelling helps clarify the theory,
-makes the theory more explicit
-makes research more repeatable for other researchers
-helps with replicability crisis
-constraining our inference process through modeling enables us to build explanatory and predictive theories.
Why must your model be precise but also falsifiable?
How is this represented graphically by Farrell and Lewandowsky in their paper?
precise: the theory’s hypothesis must be concise and have precise selection criteria, otherwise any observation would be consistent with the hypothesis
falsifiable: the hypothesis must include criteria by which data observed from the experiment can reject it
-precision = length of the cross arms
-falsifiability = thickness of the dotted line
Farrell & Lewandowsky
Falsifiability = ?
the thickness of the dotted line
What does the length of the cross arms represent in the Farrell & Lewandowsky paper about precision and falsifiability?
error bars: the longer the cross arms, the less falsifiable the hypothesis is (more room for error)
What do Farrell and Lewandowsky theorise about data and predictions in cognitive modelling?
cognitive modelling brings data and predictions together
What is the difference between free and fixed parameters?
free parameters are flexibly adjusted during model fitting, whereas fixed parameters are set in advance
What is the benefit of using free parameters?
you can adjust the parameters when fitting the model until the difference between the predicted model values and real data is minimised
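A minimal sketch of this fitting process, assuming a power-law model of the practice effect and made-up observed data; scipy.optimize.minimize adjusts the free parameters until the discrepancy is minimised:

```python
import numpy as np
from scipy.optimize import minimize

# Made-up observed response times over eight practice trials.
trials = np.arange(1.0, 9.0)
observed = np.array([2.1, 1.5, 1.3, 1.1, 1.05, 0.98, 0.95, 0.9])

def predict(params, n):
    a, b = params          # free parameters: adjusted during fitting
    return a * n ** (-b)   # illustrative power-law model

def discrepancy(params):
    # Sum of squared deviations between predictions and observations.
    return np.sum((predict(params, trials) - observed) ** 2)

fit = minimize(discrepancy, x0=[2.0, 0.5])  # x0: starting guesses for a, b
print(fit.x)  # best-fitting values of the free parameters
```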
How are computational models (created from theory) connected to experiments (also created from theory)?
model makes predictions which can be compared and contrasted to the data produced from the experiments
What is model identifiability?
the extent to which you can uniquely determine each parameter value in a model from a data set
Are non-identifiable models informative?
yes, they can be, provided the model is also falsifiable and additional constraints are placed on it
What does it mean when a model is non-identifiable?
you cannot determine its parameters uniquely, meaning different combinations of parameter values could lead to the same predictions or outcomes.
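A toy illustration of non-identifiability: in this made-up model only the product a*b affects the predictions, so different (a, b) pairs yield exactly the same outcomes and cannot be told apart from data:

```python
import numpy as np

def predict(a, b, x):
    # Only the product a*b matters here, so a and b are not
    # individually identifiable from the model's predictions.
    return a * b * x

x = np.linspace(0.0, 1.0, 5)
print(np.allclose(predict(2.0, 3.0, x), predict(1.0, 6.0, x)))  # True
```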
What is the goal when fitting a model?
to minimise the discrepancy between predicted and observed data
What does the discrepancy function describe when fitting a model?
expresses the deviation between predictions and observations in a single value
(distance between dots (real data) and the curve of best fit (predicted data))
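One common choice of discrepancy function is the root-mean-square deviation; a minimal sketch (the predicted and observed values are placeholders):

```python
import numpy as np

def rmsd(predicted, observed):
    # Root-mean-square deviation: a single scalar summarising how far
    # the model's curve lies from the observed data points.
    diff = np.asarray(predicted) - np.asarray(observed)
    return np.sqrt(np.mean(diff ** 2))

print(rmsd([1.0, 0.8, 0.6], [1.1, 0.7, 0.65]))
```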