week 4 - Efficient Coding II Flashcards

(29 cards)

1
Q

For EC:
How does decorrelating neurons/pixels reduce redundancy between neurons/pixels?
How does this contribute to the efficient coding hypothesis?

A

-decorrelation removes the redundant information shared between the two neurons/pixels
-redundant info is what one neuron already knows about the other neuron (overlapping, and therefore predictable, information). removing redundancy = efficient coding

2
Q

Why is it not a good EC hypothesis if neighbouring pixels have similar brightness/ are correlated?

A

we can predict what neuron 1 is going to do from neuron 2 = lots of redundant information

3
Q

What are the two steps of whitening? What does each step look like on a linear Gaussian model graph?

A
  1. decorrelate the data (rotating the diagonal, elongated data cloud so it is axis-aligned on the graph)
  2. scale the axes to equalize the range (making the data points one big circle)
4
Q

How does whitening comply with the ECH?

A

reduces redundancy by decorrelating the data
reducing redundancy = good under the ECH

5
Q

What are the three MATHEMATICAL steps in the process of whitening?

A
  1. Eigen-decomposition of the covariance matrix (PCA!!)
  2. Rotate data
  3. Scale data
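
A minimal NumPy sketch of these three steps (illustrative only - the variable names and the toy data are made up, not from the lecture):

import numpy as np

def whiten(X, eps=1e-8):
    """Whiten data X (n_samples x n_dims) via eigen-decomposition of its covariance."""
    Xc = X - X.mean(axis=0)                    # centre the data
    cov = np.cov(Xc, rowvar=False)             # covariance matrix of the pixels
    eigvals, eigvecs = np.linalg.eigh(cov)     # step 1: eigen-decomposition (PCA)
    X_rot = Xc @ eigvecs                       # step 2: rotate onto the eigenvector axes (decorrelate)
    return X_rot / np.sqrt(eigvals + eps)      # step 3: scale each axis to unit variance

# after whitening, the covariance is (approximately) the identity matrix:
X = np.random.multivariate_normal([0, 0], [[2.0, 1.5], [1.5, 2.0]], size=5000)
print(np.cov(whiten(X), rowvar=False).round(2))
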
6
Q

What is PCA?
Where is it included in the process of natural image generation?

A

principal component analysis
-step 1 of whitening: PCA = the eigen-decomposition of the covariance matrix

7
Q

What is the main goal of PCA and whitening?

A

remove redundancy by decorrelating pixels

8
Q

Does a Fourier DECOMPOSITION comply with the ECH? Why?

A

no, because there is a negative CORRELATION between power (y-axis) and frequency of change of the pixels (x-axis) - the power spectrum of natural images is not flat
therefore there is still redundant information between pixels = not good under the ECH
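
A small sketch of how you could check this yourself (assumes img is a 2-D greyscale NumPy array holding a natural image; none of this is lecture code):

import numpy as np

def radial_power_spectrum(img):
    """Average the 2-D Fourier power spectrum over rings of equal spatial frequency."""
    F = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    power = np.abs(F) ** 2
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2).astype(int)   # distance from the zero-frequency centre
    counts = np.bincount(r.ravel())
    sums = np.bincount(r.ravel(), weights=power.ravel())
    return (sums / np.maximum(counts, 1))[1:]          # drop the DC component

# for natural images the curve falls off roughly as 1/f^2 (power concentrated at low
# frequencies), which is the same pixel-to-pixel redundancy described above
img = np.random.randn(128, 128)    # stand-in only; white noise gives a flat spectrum
print(radial_power_spectrum(img)[:5])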

9
Q

What does the linear Gaussian Image Model capture in natural images (without implementing the ECH)?

A

captures pair-wise pixel correlations
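
As a toy illustration (made-up code, not from the lecture): a 1-D "image" sampled from a Gaussian whose covariance decays with pixel distance reproduces exactly these pair-wise correlations, and nothing more:

import numpy as np

n = 64
idx = np.arange(n)
Sigma = np.exp(-np.abs(idx[:, None] - idx[None, :]) / 5.0)   # correlation decays with pixel distance
samples = np.random.default_rng(0).multivariate_normal(np.zeros(n), Sigma, size=2000)

print(np.corrcoef(samples[:, 10], samples[:, 11])[0, 1])   # neighbouring pixels: strongly correlated (~0.8)
print(np.corrcoef(samples[:, 10], samples[:, 40])[0, 1])   # distant pixels: ~0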

10
Q

When generating a natural image, what does this image look like from applying basic whitening functions? Does it look natural/look like anything occurring in nature?

A

checkerboard receptive fields, which don’t look like natural receptive fields

11
Q

What is the issue with just adding whitening to the linear Gaussian Image Model?
What can you do to fix this?

A

-whitening = large circle of data on the graph
-issue = the solution is not unique: you can still make changes to the model, e.g. rotate the circle, and it remains whitened
-fix = add another constraint that localises the basis functions in space = localised whitening

12
Q

What does the image look like when you add LOCALIZED whitening basis functions to the model?

A

they look like the receptive fields of neurons lower down in the neural circuit - sensory neurons such as retinal ganglion cells (centre-surround: a circle within a bigger circle)

13
Q

Additionally, what can you do to improve the model once you have added localised whitening basis functions?

A

-filter out noise and also make the code energy efficient

14
Q

What does a NATURAL image look like when the second-order redundancy is removed? (has been whitened)
What is second-order redundancy?
What has happened to the correlation between neighbouring pixels in this image?

A

-you can still see the structure, but the image looks washed out because edges and contrast are missing
-correlation between neighbouring pixels
-correlation between neighbouring pixels = 0 (correlation/redundant info has been removed)

15
Q

What does a perfectly decorrelated image look like?

A

like swirls in a tree stump

16
Q

What does applying whitening to a Gaussian Image Model look like on a graph?
After whitening, how do you find the FINAL rotation?

A

-like an X made of data points inside a big circle
-find the directions in the data that are the least Gaussian (a skinny, pointy-peaked curve = non-Gaussian)

17
Q

Why, after localised whitening, do we find directions in the data that are the ‘least’ Gaussian?

A

so we can recover our independent components -> do ICA

18
Q

***For ICA:
Why can’t independent components be Gaussian?

A

majority of independent components are non-Gaussian -> if they were all Gaussian, we wouldn’t be able to separate them properly because a Gaussian distribution is symmetric and lacks unique structure
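
A quick numerical illustration of that symmetry problem (toy code, not from the lecture): rotating (mixing) two independent Gaussian sources gives data that is statistically indistinguishable from the originals, so no particular de-mixing direction can be singled out.

import numpy as np

g = np.random.default_rng(0).standard_normal((2, 100_000))   # two independent Gaussian sources
theta = 0.7                                                   # an arbitrary mixing rotation
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
mixed = R @ g

# still uncorrelated, unit-variance, Gaussian - any rotation would look the same,
# so the original source directions cannot be recovered
print(np.cov(mixed).round(2))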

19
Q

ICA:
Are the independent components non-Gaussian or Gaussian?
What is the central limit theorem (CLT) in terms of Gaussianity?
What does ICA do to recover the independent components?

A

-independent components are non-Gaussian
-when we mix multiple non-Gaussian signals (independent components), their combination becomes more Gaussian
-ICA finds directions in the data which are the least Gaussian
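
A minimal sketch of the CLT point (illustrative code only): mixing two sparse, high-kurtosis sources produces a signal whose kurtosis is noticeably closer to a Gaussian's.

import numpy as np

def excess_kurtosis(x):
    """Excess kurtosis: 0 for a Gaussian, > 0 for sparse / heavy-tailed signals."""
    z = (x - x.mean()) / x.std()
    return np.mean(z ** 4) - 3

rng = np.random.default_rng(0)
s1 = rng.laplace(size=100_000)     # non-Gaussian (sparse) independent component
s2 = rng.laplace(size=100_000)     # another independent component
mix = 0.6 * s1 + 0.8 * s2          # a linear mixture of the two

print(excess_kurtosis(s1))    # ~3: clearly non-Gaussian
print(excess_kurtosis(mix))   # noticeably lower: the mixture is more Gaussian (CLT)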

20
Q

ICA:
Why do we want to recover our independent components?
How can we achieve this?

A

-the independent components are non-Gaussian; however, due to the CLT, when mixed they have become more Gaussian, so we want to recover the original independent components

-find directions in the data where the output is least Gaussian

21
Q

ICA:
What are the three ways we can recover the independent components?

A
  1. Maximize a measure of non-Gaussianity (kurtosis)
  2. Pick non-Gaussian distribution as prior over inputs
  3. Minimize mutual information between outputs
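
A toy sketch of option 1 (kurtosis maximization), using made-up data and a brute-force angle search rather than a real ICA algorithm: whiten the mixtures, then pick the projection direction whose output is least Gaussian.

import numpy as np

rng = np.random.default_rng(1)
S = rng.laplace(size=(2, 50_000))            # two independent non-Gaussian sources
A = np.array([[1.0, 0.5], [0.3, 1.0]])       # "unknown" mixing matrix
X = A @ S                                    # observed mixtures

# whiten the mixtures first (eigen-decomposition, rotate, scale)
vals, vecs = np.linalg.eigh(np.cov(X))
Z = np.diag(1 / np.sqrt(vals)) @ vecs.T @ (X - X.mean(axis=1, keepdims=True))

def kurt(x):
    z = (x - x.mean()) / x.std()
    return np.mean(z ** 4) - 3

# search rotations of the whitened data for the least-Gaussian direction
angles = np.linspace(0, np.pi, 180)
best = max(angles, key=lambda a: abs(kurt(np.cos(a) * Z[0] + np.sin(a) * Z[1])))
recovered = np.cos(best) * Z[0] + np.sin(best) * Z[1]   # one component, up to sign/scale

print(abs(np.corrcoef(recovered, S[0])[0, 1]),
      abs(np.corrcoef(recovered, S[1])[0, 1]))   # one of these is ~1, the other ~0
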
22
Q

How is independent component analysis (ICA) explained with the example of the eyes?

A

the eyes detect signals which get mixed in the brain; the brain then de-mixes them to recover the original sources

23
Q

What does the image look like when you add ICA filters?

A

like receptive fields in primary visual cortex V1 which are localised and orientation specific
they are Gabor-like
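
For reference, "Gabor-like" means an oriented sinusoid under a Gaussian window; a small sketch of generating one such patch (parameter values are arbitrary, chosen only for illustration):

import numpy as np

def gabor(size=32, wavelength=8.0, theta=0.0, sigma=5.0, phase=0.0):
    """A Gabor patch: an oriented sinusoidal grating inside a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half]
    xr = x * np.cos(theta) + y * np.sin(theta)                 # rotate coordinates by orientation theta
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))   # localised
    carrier = np.cos(2 * np.pi * xr / wavelength + phase)      # orientation-specific
    return envelope * carrier

patch = gabor(theta=np.pi / 4)
print(patch.shape)   # (32, 32)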

24
Q

The response properties of retinal ganglion neurons (sensory neurons) can be (mostly) explained by ________ ?

A

decorrelation

25
Q

Are redundant information and mutual information the same thing? Why?

A

-no: mutual info is how well the system can predict the input from the output, whereas redundant info is the information in the input/output that is already predictable
-they are not the same because we want to maximise mutual info and minimise redundant info

26
Q

Which image model does ICA build on?

A

localised whitening

27
Q

What are higher order models?

A

models that capture structure beyond pair-wise correlations, e.g. by using ICA

28
Q

What is ICA? What does the generated natural image look like from this?

A

-de-mixing the data: finding the directions in the data that are least Gaussian, in order to recover the non-Gaussian independent components
-the resulting basis functions look like primary visual cortex (V1) receptive fields (Gabor-like)

29
Q

What is the equation for the Gaussian Image Model? What is the equation after localised whitening? What does each variable mean? What is the equation after ICA? What does each variable mean?

A

p(s) = N(s | μ, Σ), where s is the pixel (image) vector, μ is the mean and Σ is the covariance matrix
(the localised-whitening and ICA equations are not filled in on this card - see the hedged sketch below)
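
A hedged sketch of the standard textbook forms these models usually take - the lecture's own notation may differ, so treat this as an assumption to check against the notes:

  Gaussian Image Model:   p(s) = N(s | μ, Σ)
      s = pixel/image vector, μ = mean, Σ = pixel covariance matrix

  Generative view:        s = A z,  with p(z) = N(z | 0, I)
      A = matrix of basis functions (filters), z = latent coefficients;
      whitening / localised whitening constrains A so the coefficients come out decorrelated with equal variance

  After ICA:              s = A z,  with p(z) = Π_i p(z_i), each p(z_i) non-Gaussian (e.g. sparse)
      i.e. the Gaussian prior on the coefficients is replaced by independent non-Gaussian priors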