Week 4 - Efficient Coding II Flashcards
(29 cards)
For EC:
How does decorrelating neurons/pixels reduce redundancy between neurons/pixels?
How does this contribute to the efficient coding hypothesis?
-decorrelation removes the redundant information between the two neurons/pixels
-redundant info is what one neuron already knows about the other neuron, i.e. overlapping information. This is predictable info, so removing the redundancy = efficient coding
Why is it not a good EC hypothesis if neighbouring pixels have similar brightness/ are correlated?
we can predict what pixel/neuron 2 is going to do from pixel/neuron 1 = lots of redundant information
What are the two steps of whitening? what does this look like on a Gaussian linear model graph for each step?
- decorrelate the data (rotate the diagonal, elongated cloud so it lies along an axis of the graph)
- scale the axes to equalize the variance/range (the data points become one big circle)
How does whitening comply with the ECH?
it reduces redundancy by decorrelating the data
reduced redundancy = good ECH
What are the three MATHEMATICAL steps in the process of whitening?
- Eigen-decomposition of the covariance matrix (PCA!!)
- Rotate data
- Scale data
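A minimal NumPy sketch of these three steps on a toy 2-pixel dataset (the data and variable names are illustrative, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "data": two correlated dimensions standing in for neighbouring pixels
X = rng.normal(size=(1000, 2)) @ np.array([[2.0, 1.2], [0.0, 0.5]])
X = X - X.mean(axis=0)                       # centre the data

# 1. Eigen-decomposition of the covariance matrix (this is PCA)
C = np.cov(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(C)

# 2. Rotate the data onto the principal axes (decorrelation)
X_rot = X @ eigvecs

# 3. Scale each axis by 1/sqrt(eigenvalue) to equalize the variance
X_white = X_rot / np.sqrt(eigvals)

print(np.cov(X_white, rowvar=False))         # ~ identity matrix after whitening
```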
What is PCA?
Where is it included in the process of natural image generation?
principal component analysis
-step 1 of whitening: PCA = the eigen-decomposition of the covariance matrix
What is the main goal of PCA and whitening?
remove redundancy by decorrelating pixels
Does a Fourier DECOMPOSITION comply to ECH? why?
no, because power is negatively CORRELATED with spatial frequency (y = power falls as x = frequency of change of the pixels rises)
therefore the coefficients are still predictable from their frequency = redundant information remains = not good ECH
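A hedged sketch of the measurement behind this card: it builds a synthetic image with a roughly 1/f amplitude spectrum as a self-contained stand-in for a natural image (an assumption, purely so the code runs without data), then shows the radially averaged power dropping with spatial frequency:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 128

# Stand-in for a natural image: white noise filtered to have a ~1/f amplitude
# spectrum (real natural images show a similar fall-off).
fx = np.fft.fftfreq(N)[:, None]
fy = np.fft.fftfreq(N)[None, :]
f = np.sqrt(fx**2 + fy**2)
f[0, 0] = 1.0                                   # avoid division by zero at DC
img = np.fft.ifft2(np.fft.fft2(rng.normal(size=(N, N))) / f).real

# Measure the power spectrum and average it over rings of equal frequency
power = np.abs(np.fft.fft2(img))**2
bins = np.linspace(0, 0.5, 20)
which = np.digitize(f.ravel(), bins)
radial_power = [power.ravel()[which == i].mean() for i in range(1, len(bins))]

# Power drops as spatial frequency increases: the coefficients are predictable
# from their frequency, i.e. the Fourier decomposition alone leaves redundancy.
print(np.round(np.log10(radial_power), 2))
```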
What does the linear Gaussian Image Model capture in natural images (without implementing the ECH)?
captures pair-wise pixel correlations
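A small illustration of such a model: a 1-D "image" whose pixels are drawn from a multivariate Gaussian, with a distance-dependent covariance chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# A linear Gaussian image model only specifies pair-wise (second-order)
# pixel statistics: a mean and a covariance matrix.  Assume (illustratively)
# that covariance between two pixels decays with their distance.
n_pix = 64
idx = np.arange(n_pix)
cov = np.exp(-np.abs(idx[:, None] - idx[None, :]) / 8.0)

# Generating an image under the model = drawing from a multivariate Gaussian
samples = rng.multivariate_normal(np.zeros(n_pix), cov, size=5)

# Neighbouring pixels in each sample are strongly correlated (redundant)
print(np.corrcoef(samples[:, :-1].ravel(), samples[:, 1:].ravel())[0, 1])
```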
When generating a natural image, what does this image look like from applying basic whitening functions? Does it look natural/look like anything occurring in nature?
checkerboard receptive fields, which don’t look like natural receptive fields
What is the issue with just adding whitening to the linear Gaussian Image Model?
What can you do to fix this?
-whitening leaves the data as one large circle on the graph
-issue = the solution is not unique: you can still make changes to the model, e.g. rotate the circle, and the data stays whitened
-fix: add another constraint that localises the basis functions in space = localised whitening
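A quick numerical check of the rotation ambiguity described above (toy whitened data; any whitened dataset would behave the same way):

```python
import numpy as np

rng = np.random.default_rng(0)

# Whitened data: identity covariance
Z = rng.normal(size=(10_000, 2))

# Any rotation of whitened data is still whitened -> whitening alone does not
# pin down a unique basis; an extra constraint (e.g. locality) is needed.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(np.round(np.cov(Z @ R.T, rowvar=False), 2))   # still ~ identity
```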
What does the image look like when you add LOCALIZED whitening basis functions to the model?
they look like the receptive fields of neurons lower down in the neural circuit - sensory neurons (centre-surround: a circle inside a bigger circle)
Additionally, what can you do to improve the model once you have added localised whitening basis functions?
-filter out noise and also make the code energy-efficient
What does a NATURAL image look like when the second-order redundancy is removed? (has been whitened)
What is second-order redundancy?
What has happened to the correlation between neighbouring pixels in this image?
-you can still see the structure, however the image looks washed out because the smooth, large-scale contrast is missing
-second-order redundancy = the correlation between neighbouring pixels
-the correlation between neighbouring pixels is now ≈ 0 (the correlated/redundant info has been removed)
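A sketch of this check on toy correlated "patches" (the covariance used to generate them is an illustrative assumption): after ZCA-style whitening the neighbour-pixel correlation drops to roughly zero:

```python
import numpy as np

rng = np.random.default_rng(0)

# Many small 1-D "image patches" whose neighbouring pixels are correlated
n_pix, n_patches = 16, 5000
idx = np.arange(n_pix)
cov = np.exp(-np.abs(idx[:, None] - idx[None, :]) / 4.0)
X = rng.multivariate_normal(np.zeros(n_pix), cov, size=n_patches)

def neighbour_corr(Z):
    """Correlation between each pixel and its right-hand neighbour."""
    return np.corrcoef(Z[:, :-1].ravel(), Z[:, 1:].ravel())[0, 1]

# Whiten: eigen-decompose the covariance, rotate, scale, rotate back (ZCA)
eigvals, eigvecs = np.linalg.eigh(np.cov(X, rowvar=False))
X_white = (X @ eigvecs / np.sqrt(eigvals)) @ eigvecs.T

print(neighbour_corr(X))        # clearly positive: neighbours are redundant
print(neighbour_corr(X_white))  # ~0: second-order redundancy removed
```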
What does a perfectly decorrelated image look like?
like swirls in a tree stump
What does applying whitening to a Gaussian Image Model look like on a graph?
After whitening, how do you find the FINAL rotation?
-like an ‘X’ made of data points inside a big circle
-find the directions in the data that are the least Gaussian (a skinny, pointy-peaked distribution = non-Gaussian)
Why, after localised whitening do we find directions in the data that are the ‘least’ Gaussian?
so we can recover our independent components -> do ICA
***For ICA:
Why can’t independent components be Gaussian?
the independent components must be non-Gaussian (at most one may be Gaussian) -> if they were all Gaussian, we wouldn’t be able to separate them, because a whitened Gaussian distribution is rotationally symmetric and has no unique directions to pick out
ICA:
Are the independent components non-Gaussian or Gaussian?
What is the central limit theorem (CLT) in terms of Gaussianity?
What does ICA do to recover the independent components?
-independent components are non-Gaussian
-when we mix multiple non-Gaussian signals (independent components), their combination becomes more Gaussian
-ICA finds the directions in the data which are the least Gaussian
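A small demonstration of the CLT point, using Laplacian sources as the non-Gaussian components (the choice of sources is just for illustration; a Gaussian has excess kurtosis 0):

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(0)

# Two non-Gaussian independent components (heavy-tailed Laplacian sources,
# so each has large positive excess kurtosis)
s1 = rng.laplace(size=100_000)
s2 = rng.laplace(size=100_000)

# Mixing them (as the CLT suggests) produces something closer to Gaussian
mix = 0.6 * s1 + 0.8 * s2

print(kurtosis(s1), kurtosis(s2))   # ~3 each: far from Gaussian
print(kurtosis(mix))                # closer to 0: more Gaussian
```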
ICA:
Why do we want to recover our independent components?
How can we achieve this?
-the independent components are non-Gaussian, however, due to the CLT, their mixtures have become more Gaussian, so we want to undo the mixing and recover the independent components
-find directions in the data along which the output is least Gaussian
ICA:
What are the three ways we can recover the independent components?
- Maximize a measure of non-Gaussianity (kurtosis)
- Pick a non-Gaussian distribution as the prior over the inputs
- Minimize mutual information between outputs
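As a sketch of the first route in practice, scikit-learn's FastICA (assumed here as available tooling with a recent scikit-learn, not part of the lecture) maximizes non-Gaussianity of the outputs to un-mix two toy non-Gaussian sources:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n = 20_000

# Two non-Gaussian sources and an unknown mixing matrix
S = np.column_stack([rng.laplace(size=n), rng.uniform(-1, 1, size=n)])
A = np.array([[1.0, 0.5], [0.4, 1.0]])
X = S @ A.T                       # observed mixtures (more Gaussian than S)

# FastICA recovers the sources by maximizing non-Gaussianity of the outputs
ica = FastICA(n_components=2, whiten="unit-variance", random_state=0)
S_hat = ica.fit_transform(X)

# Each recovered component should correlate strongly with one true source
# (up to permutation, sign and scale)
print(np.round(np.corrcoef(S.T, S_hat.T)[:2, 2:], 2))
```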
How is independent component analysis ICA explained with an example of the eyes?
the eyes detect signals which arrive at the brain mixed together; the brain then has to de-mix them to find the original sources
What does the image look like when you add ICA filters?
like receptive fields in primary visual cortex V1 which are localised and orientation specific
they are Gabor-like
The response properties of retinal ganglion neurons (sensory neurons) can be (mostly) explained by ________ ?
decorrelation