Neurocognitive Modelling Flashcards
(196 cards)
What are neural models?
- Consider neural properties such as receptive fields and tuning curves
- Work for relatively small networks and simple tasks
What are cognitive models?
- Consider latent parameters that allow us to make inferences about cognitive processes
- Very general, but need some constraints on structure
What are normative models?
- Derive optimal solution for a task within constraints
- Independent of actual behaviour, but can be compared
- Theory driven
What are behavioural models?
- Fit directly to actual behavioural data (process models)
- Very data-intensive
- Need both data & theory
What is the general equation for a cell? (e.g., a simple cell in the primary visual cortex)
response = function (stimulus)
r = f(s)
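As one concrete, purely illustrative choice of f, here is a Gaussian orientation tuning curve in Python; the preferred orientation, tuning width, and peak rate are made-up parameters, not values from the cards:
```python
import math

def response(stimulus, preferred=90.0, width=20.0, peak_rate=50.0):
    """Toy r = f(s): a Gaussian tuning curve over stimulus orientation (degrees)."""
    return peak_rate * math.exp(-((stimulus - preferred) ** 2) / (2 * width ** 2))

print(response(90.0))  # maximal response at the preferred orientation: 50.0
print(response(45.0))  # much weaker response far from it: ~4.0
```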
What questions can we ask about the neuron's response function?
- What is f? (descriptive approach)
- How does f arise? (development / learning)
- What should f be? (normative approach - efficient coding)
What is the job of a neuron?
To transmit information
Who is the father of Information theory?
Claude Shannon, for his 1948 paper “A Mathematical Theory of Communication”
What is Information theory?
A field that studies how to measure, quantify and transmit information
What is the key principle in Information theory?
The less we can predict something, the more information it gives us
Information as surprise
How does surprise link to information?
Low surprise, low information
High surprise, high information
No surprise, no information
How do we calculate information / surprise?
The negative logarithm of the probability of the event: I(x) = -log(p(x))
What log base do we normally use for calculating information / surprise?
Log base 2 - this means it is in bits
If P(heads) is the probability of heads, how do we write the information / surprise of heads?
I(heads) = -log₂(P(heads))
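A minimal Python sketch of the surprise calculation (the function name surprise is just illustrative):
```python
import math

def surprise(p):
    """Information (surprise) of an outcome with probability p, in bits."""
    return math.log2(1 / p)  # equivalent to -log2(p)

print(surprise(0.5))   # fair coin flip: 1.0 bit
print(surprise(0.25))  # a 1-in-4 outcome: 2.0 bits
print(surprise(1.0))   # a certain outcome: 0.0 bits (no surprise)
```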
What is surprise?
The measure of information for a specific outcome
What is entropy?
How much information any one outcome gives us, on average
A weighted average of the information of each possible outcome
How do we calculate the entropy of a system?
Entropy = p(x1)I(x1) + p(x2)I(x2) + …
What is the general equation for entropy?
Entropy = -Σ p(xi) log₂(p(xi)), with the sum running from i = 1 to n over all possible outcomes
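The same sum written as a small Python function (a sketch, not a fixed API):
```python
import math

def entropy(probs):
    """Shannon entropy in bits: -sum of p * log2(p) over all outcomes."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))   # fair coin: 1.0 bit
print(entropy([0.9, 0.1]))   # biased coin: ~0.47 bits
print(entropy([0.25] * 4))   # four equally likely outcomes: 2.0 bits
```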
What does more randomness lead to?
More surprise and therefore more information
How can we represent maximal randomness as a probability distribution?
A perfectly uniform distribution (every outcome equally likely)
What does the equation for entropy simplify to for a uniform distribution?
log₂(n), where n is the number of possible outcomes: H = -Σ (1/n) log₂(1/n) = log₂(n)
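A quick numerical check that the uniform case matches log₂(n), reusing the entropy sketch from above (redefined here so the snippet runs on its own):
```python
import math

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

n = 8
print(entropy([1 / n] * n))  # 3.0 bits
print(math.log2(n))          # 3.0 bits: uniform entropy equals log2(n)
```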
Where has information theory been used outside of neuroscience?
- Communications (its original application)
- Language
How was information theory used in language?
The English alphabet has 26 letters, so log₂(26) ≈ 4.7 bits per letter if all letters were equally likely and independent; in reality each letter transmits only ~2.3 bits, because letters differ in frequency and are predictable from context
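A sketch of the upper-bound calculation, with a toy skewed "alphabet" (the three probabilities are invented for illustration, not real letter frequencies) showing why unequal frequencies push entropy below the log₂(n) bound:
```python
import math

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(math.log2(26))             # ~4.70 bits: the equal-frequency upper bound
print(math.log2(3))              # ~1.58 bits: bound for a 3-symbol alphabet
print(entropy([0.7, 0.2, 0.1]))  # ~1.16 bits: skewed frequencies carry less
```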
What is Zipf’s law?
- The frequency of a word is inversely proportional to its rank in the frequency table
- The same rank-frequency pattern appears in all natural languages
- Implies natural languages are not maximally efficient codes
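A minimal sketch of the Zipfian rank-frequency pattern (frequency proportional to 1/rank), with made-up ranks:
```python
ranks = range(1, 6)
norm = sum(1 / r for r in ranks)
for r in ranks:
    # Zipf's law: the word at rank r occurs with frequency proportional to 1/r
    print(f"rank {r}: relative frequency {(1 / r) / norm:.3f}")
```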