Week 7, model landscapes Flashcards

(8 cards)

1
Q

Computational neuroscience research goals:

A

You want to identify the phenomena to be studied and the intended goals

You want to link the relevant real components to analogous modelled counterparts

You then stress-test and fine-tune the models

You can generate model outputs and, from them, articulate what has been learnt in the process

2
Q

What are some dimensions of models?

A
  • From data representations to first-principles theory
  • From structural, biophysical realism to functional phenomenology
  • From elementary descriptions to coarse-grained approximations

All models involve abstraction/simplification and corresponding assumptions

3
Q

Describe Vasa et al’s case study on lesions and metastability

A
  • Took a structural connectome and combined it with an oscillator model to simulate a functional connectivity matrix and the corresponding intrinsic connectivity networks
  • They also introduced simulated lesions onto one of the oscillators
  • They quantified synchrony and metastability (see the sketch below)

HYPOTHESIS: Node importance predicts dynamical change following a lesion

  1. They tuned the model to its working point (the point at which the model best matches empirical data)

RESULTS:

  • Negative relationship between the change in synchrony of the network and the eigenvector centrality
  • Positive relationship between metastability and eigenvector centrality, and metastability and participation coefficient
  • These results held both globally and in the neighbourhood of each node
  • So ‘lesioning’ was associated with lower eigenvector centrality and lower correlation coefficient

RELATING TO LITERATURE:

  • Studies in stroke patients found that damage to connector nodes led to a decrease in modularity
  • Focal injury to connector hubs produces more damage than injury to high-degree hubs

HYPOTHESIS:

  • Modularity and dynamical variability may be markers of cognitive impairment?
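
Synchrony and metastability are typically computed from the Kuramoto order parameter R(t) of the simulated phases: global synchrony is the time average of R(t), and metastability is its standard deviation over time. The following is a minimal sketch, assuming a Kuramoto-style phase model; the toy connectome, parameter values, and variable names are illustrative and not Vasa et al.'s actual setup.

```python
import numpy as np

def order_parameter(theta):
    """Kuramoto order parameter R(t) for a phase array theta (time x nodes)."""
    return np.abs(np.mean(np.exp(1j * theta), axis=1))

def synchrony_and_metastability(theta):
    """Global synchrony = mean of R(t); metastability = std of R(t) over time."""
    R = order_parameter(theta)
    return R.mean(), R.std()

# Toy example: coupled phase oscillators on a stand-in structural connectome C
rng = np.random.default_rng(0)
n, steps, dt, k = 8, 4000, 0.0005, 2.0          # nodes, time steps, step size (s), global coupling
C = rng.random((n, n)); np.fill_diagonal(C, 0)  # illustrative connectivity (not real data)
omega = 2 * np.pi * rng.normal(60, 1, n)        # intrinsic frequencies (rad/s)
theta = np.zeros((steps, n))
theta[0] = rng.uniform(0, 2 * np.pi, n)
for t in range(1, steps):
    coupling = (C * np.sin(theta[t - 1][None, :] - theta[t - 1][:, None])).sum(axis=1)
    theta[t] = theta[t - 1] + dt * (omega + k * coupling)

sync, meta = synchrony_and_metastability(theta)
print(f"synchrony={sync:.3f}, metastability={meta:.3f}")
```

A ‘lesion’ in this setting can be simulated by zeroing the rows and columns of C for the lesioned node, re-running the simulation, and comparing synchrony and metastability before and after.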
4
Q

What is evolutionary optimisation and how is it used in computational neuroscience?

A

Evolutionary optimisation is a computational method inspired by natural selection, used to find the best parameters for complex models.

It involves:

Generating a population of candidate solutions (e.g. sets of model parameters).

Evaluating their fitness (how well they match empirical data).

Applying mutation, crossover, and selection to evolve better solutions over generations (see the sketch below).

Commonly used to:

Fit large-scale brain models (e.g. neural mass or spiking models) to real data (like fMRI, EEG, or FC matrices).

Tune global parameters in whole-brain simulations (e.g. neurolib, The Virtual Brain).

Optimise networks for tasks or performance criteria.
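
A minimal sketch of the loop described above, assuming a two-parameter model, a hypothetical `simulate_fc(params)` function that returns a simulated FC matrix, and fitness defined as the correlation with an empirical FC matrix. This only illustrates the generate-evaluate-select cycle and is not the interface of neurolib or The Virtual Brain.

```python
import numpy as np

rng = np.random.default_rng(42)

def fitness(params, empirical_fc, simulate_fc):
    """Correlation between simulated and empirical FC (upper triangles only)."""
    sim_fc = simulate_fc(params)
    iu = np.triu_indices_from(empirical_fc, k=1)
    return np.corrcoef(sim_fc[iu], empirical_fc[iu])[0, 1]

def evolve(empirical_fc, simulate_fc, bounds, pop_size=20, generations=50, mut_sd=0.05):
    """Toy evolutionary search over model parameters within the given bounds."""
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(bounds)))               # initial population
    for _ in range(generations):
        scores = np.array([fitness(p, empirical_fc, simulate_fc) for p in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]                # selection: keep the best half
        idx = rng.integers(0, len(parents), size=(pop_size, 2))
        children = parents[idx].mean(axis=1)                              # crossover: average two parents
        children += rng.normal(0.0, mut_sd * (hi - lo), children.shape)   # mutation: Gaussian noise
        pop = np.clip(children, lo, hi)
    scores = np.array([fitness(p, empirical_fc, simulate_fc) for p in pop])
    return pop[scores.argmax()], scores.max()
```

In practice, toolboxes such as neurolib ship their own evolutionary optimisation utilities; the point here is only the population/fitness/selection structure.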

5
Q

What are the three types of model validity according to Bassett et al.?

A

Descriptive Validity

Asks: Does the model resemble the real system structurally?

Example: Does the model’s network topology resemble the brain’s actual anatomical connectivity?

Explanatory Validity

Asks: Can the model be used to explain or test hypotheses about real-world dynamics?

A theoretical construct for deriving testable predictions and causal mechanisms from simulated data.

Example: Using a model to understand how network dynamics generate behaviour in C. elegans.

Predictive Validity

Asks: Does the model accurately predict the system’s response to perturbations?

Example: When the model’s response to stimulation, drug treatment, or training matches the actual organism’s response.

6
Q

What are the key components involved in group vs individual parameter tuning in generative and dynamical models?

A

Given input data:
  • Generative modelling: (sparse) connectivity C_seed, distance D
  • Dynamical models: connectivity C, distance D

Parameters:
  • Generative modelling: η, γ
  • Dynamical models: k, τ

Target data:
  • Generative modelling: denser connectivity C (with extra edges)
  • Dynamical models: functional connectivity

Fit function:
  • Generative modelling: Kolmogorov-Smirnov statistic (of graph theory)
  • Dynamical models: correlation (of FC)

Outcome:
  • Generative modelling: other graph theory metrics (not used to fit model)
  • Dynamical models: synchrony, metastability
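
A minimal sketch of the two fit functions in the table, assuming the generative comparison is made on degree distributions (published generative models typically combine KS statistics over several graph metrics) and that FC matrices are compared via the Pearson correlation of their upper triangles; the function names are illustrative.

```python
import numpy as np
from scipy.stats import ks_2samp

def generative_fit(sim_adj, target_adj):
    """KS statistic between the degree distributions of the simulated and
    target networks (smaller = better fit)."""
    sim_deg = (sim_adj > 0).sum(axis=0)
    target_deg = (target_adj > 0).sum(axis=0)
    return ks_2samp(sim_deg, target_deg).statistic

def dynamical_fit(sim_fc, empirical_fc):
    """Pearson correlation between the upper triangles of the simulated and
    empirical FC matrices (larger = better fit)."""
    iu = np.triu_indices_from(empirical_fc, k=1)
    return np.corrcoef(sim_fc[iu], empirical_fc[iu])[0, 1]
```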

7
Q

What are three approaches to parameter tuning in mechanistic models, and what do they allow us to study?

A

Tune on group average input/target →
Study non-individual properties of the connectome or model.

Tune on group average input/target, apply to individual input →
Study differences in fit to individual targets and/or differences in individual outcome measures (see the sketch below).

Tune optimal parameters on individual input/target →
Study differences in fit to individual targets and/or individual differences in outcome measures.

Note:

Generating group-average structural connectomes isn’t trivial if sparse (see Betzel et al., Net. Neurosci. 2019).
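
A minimal sketch of the second approach (parameters tuned on the group average, then applied to individual inputs), assuming a hypothetical `simulate_fc(C, k)` dynamical model with a single coupling parameter k tuned by grid search; the helper names and the choice of fit function are illustrative.

```python
import numpy as np

def fc_fit(sim_fc, emp_fc):
    """Correlation of FC upper triangles (the FC fit function from the previous card)."""
    iu = np.triu_indices_from(emp_fc, k=1)
    return np.corrcoef(sim_fc[iu], emp_fc[iu])[0, 1]

def tune_on_group(group_C, group_fc, simulate_fc, k_grid):
    """Step 1: pick the coupling k that best fits the group-average target."""
    fits = [fc_fit(simulate_fc(group_C, k), group_fc) for k in k_grid]
    return k_grid[int(np.argmax(fits))]

def apply_to_individuals(best_k, subject_Cs, subject_fcs, simulate_fc):
    """Step 2: run the group-tuned model on each individual connectome and
    record individual fits (which can then be related to outcome measures)."""
    return [fc_fit(simulate_fc(C, best_k), fc) for C, fc in zip(subject_Cs, subject_fcs)]
```

The third approach would instead run the tuning step separately on each subject's own input and target, giving per-subject optimal parameters.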

8
Q

What are key guidelines for applying (network) models in research?

A

Define aim(s) and hypothesis(es) based on the literature.

Select, adapt, or build a suitable model

Guided by time-frame and literature

For mechanistic models, constrain parameter search if appropriate

Consider definitions, assumptions, and limitations

Go beyond just “Is there a difference in graph theory between X and Y?”

Interpret results relative to assumptions, limitations, and literature.

Generate novel hypotheses based on findings.
