Week 4 - Efficient Coding II Flashcards

1
Q

How does decorrelating neurons/pixels reduce redundancy between neurons/pixels?
How does this contribute to the efficient coding hypothesis?

A

-decorrelation removes the redundant information shared between the two neurons/pixels
-redundant info is what one neuron already knows about the other neuron, i.e. overlapping info; this is therefore predictable info. Removing redundancy = efficient coding

2
Q

Why is it not good, under the efficient coding hypothesis, if neighbouring pixels have similar brightness / are correlated?

A

we can predict what neuron 2 is going to do from neuron 1 = lots of redundant information

3
Q

What are the two steps of whitening? What does each step look like on a linear Gaussian model graph?

A
  1. decorrelate the data (the diagonally elongated cloud becomes aligned with the vertical axis instead of the diagonal)
  2. scale the axes to equalise the range (the data points become one big circle)
4
Q

How does whitening comply with the efficient coding hypothesis (ECH)?

A

it reduces redundancy by decorrelating the data
reducing redundancy = efficient coding (good ECH)

5
Q

What are the three MATHEMATICAL steps in the process of whitening?

A
  1. Eigen-decomposition of the covariance matrix
  2. Rotate data
  3. Scale data (see the NumPy sketch below)
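
A minimal NumPy sketch of these three steps (illustrative only; the variable names and the toy data are not from the lecture):

```python
import numpy as np

def whiten(X, eps=1e-8):
    """Whiten data X (n_samples x n_features): decorrelate, then equalise variance."""
    X = X - X.mean(axis=0)                    # centre the data first
    C = np.cov(X, rowvar=False)               # 1. eigen-decomposition of the covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)
    X_rot = X @ eigvecs                       # 2. rotate the data onto the eigenvectors (decorrelate)
    X_white = X_rot / np.sqrt(eigvals + eps)  # 3. scale each axis to unit variance
    return X_white

# correlated 2-D Gaussian data -> after whitening, the covariance is ~identity
X = np.random.multivariate_normal([0, 0], [[2.0, 1.5], [1.5, 2.0]], size=5000)
print(np.round(np.cov(whiten(X), rowvar=False), 2))   # ~[[1, 0], [0, 1]]
```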
6
Q

Does a Fourier DECOMPOSITION comply with the ECH? Why?

A

no, because there is a negative CORRELATION between power (y-axis) and frequency of change of the pixels (x-axis)
power is therefore predictable from frequency, so there is still redundant information = not good ECH
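
A rough sketch of the statistic behind this card: for natural images, Fourier power falls off as spatial frequency increases. This assumes `img` is any greyscale natural image as a 2-D NumPy array (not provided here):

```python
import numpy as np

def radial_power_spectrum(img):
    """Mean Fourier power at each spatial frequency (distance from the DC component)."""
    F = np.fft.fftshift(np.fft.fft2(img))               # 2-D Fourier decomposition
    power = np.abs(F) ** 2
    h, w = img.shape
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h // 2, xx - w // 2).astype(int)  # radial frequency bin per coefficient
    counts = np.bincount(r.ravel())
    sums = np.bincount(r.ravel(), weights=power.ravel())
    return sums / np.maximum(counts, 1)

# For a natural image the returned curve decreases roughly as 1/f^2:
# power is predictable from frequency, i.e. the Fourier coefficients still
# carry redundant (predictable) information.
```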

7
Q

What does the linear Gaussian model capture in natural images?

A

captures pair-wise pixel correlations
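
As a sketch of what "pairwise correlations" means in practice (assuming `patches` is an array of flattened natural-image patches; the name is illustrative):

```python
import numpy as np

def pixel_covariance(patches):
    """Pairwise pixel covariance of image patches (n_patches x n_pixels) -
    the only image statistic a linear Gaussian model captures."""
    patches = patches - patches.mean(axis=0)
    return patches.T @ patches / (len(patches) - 1)

# for natural images, entries for nearby pixel pairs are large and positive
# and fall off with distance - exactly the second-order structure the model describes
```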

8
Q

What image is produced by the basic whitening basis functions? Does it look natural / like anything occurring in nature?

A

checkerboard receptive fields, which don't look like natural receptive fields

9
Q

What is the issue with just adding whitening to the linear Gaussian model?
What can you do to fix this?

A

-whitening = a large circle of data on the graph
-issue = the solution is not unique: you can still make changes to the model, like rotating the circle, and the data stay whitened
-fix = add another constraint and localise the basis functions in space = localised whitening

10
Q

What does the image look like when you add LOCALIZED whitening basis functions to the model?

A

they look like the centre-surround receptive fields of neurons lower down in the visual circuit (a circle inside a bigger circle)

11
Q

Additionally, what can you do to improve the model once you have added localised whitening basis functions?

A

-filter out noise and also make the code more energy efficient

12
Q

What does the image look like when natural images have second-order redundancy removed?
What has happened to the correlation between neighbouring pixels in this image?

A
  • you can still see the structure, but the image looks washed out because edges and contrast are missing
  • the correlation between neighbouring pixels = 0 (the correlated/redundant information has been removed)
13
Q

What does a perfectly decorrelated image look like?

A

like swirls in a tree stump

14
Q

What does applying second-order whitening to a Gaussian model look like on a graph?
For second-order whitening, how do you find the FINAL rotation?

A

-like an X inside a big circle of data points
-find the directions in the data that are the least Gaussian (a skinny, pointy-peaked distribution = non-Gaussian)

15
Q

Why, after localised basis whitening, do we find directions in the data that are the 'least' Gaussian? (second-order model)

A

so we can recover our independent components

16
Q

Why can’t independent components be Gaussian?

A

the majority of independent components are non-Gaussian -> if they were all Gaussian, we wouldn't be able to separate them properly, because a Gaussian distribution is symmetric (rotation-invariant) and lacks unique structure

17
Q

Are the independent components non-Gaussian or Gaussian?
What is the central limit theorem (CLT) in terms of Gaussianity?

A

-independent components are non-Gaussian
-when we mix multiple non-Gaussian signals (independent components), their combination becomes more Gaussian
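
A tiny numerical sketch of this point, using excess kurtosis as a stand-in measure of (non-)Gaussianity (all names and the choice of sources are illustrative):

```python
import numpy as np

def excess_kurtosis(x):
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2) ** 2 - 3   # 0 for a Gaussian

rng = np.random.default_rng(0)
s1 = rng.laplace(size=100_000)          # non-Gaussian source (heavy-tailed, kurtosis ~ +3)
s2 = rng.uniform(-1, 1, size=100_000)   # non-Gaussian source (flat, kurtosis ~ -1.2)
mix = s1 / s1.std() + s2 / s2.std()     # equal-variance mixture of the two sources

print(excess_kurtosis(s1), excess_kurtosis(s2), excess_kurtosis(mix))
# the mixture's kurtosis (~ +0.45) is much closer to 0 (the Gaussian value) than
# either source's: mixing pushes signals towards Gaussianity, as the CLT predicts
```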

18
Q

Why do we want to recover our independent components?
How can we achieve this?

A

-the independent components are non-Gaussian; however, due to the CLT, once mixed they have become (more) Gaussian, so we want to recover the original independent components

-find the directions in the data where the output is least Gaussian

19
Q

What are the three ways we can recover the independent components?

A
  1. Maximize a measure of non-Gaussianity, e.g. kurtosis (sketched below)
  2. Pick non-Gaussian distribution as prior over inputs
  3. Minimize mutual information between outputs
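
A crude sketch of option 1 on toy 2-D data: whiten, then brute-force search for the projection with the most extreme kurtosis (a library routine such as scikit-learn's FastICA does this properly; everything below is illustrative):

```python
import numpy as np

def excess_kurtosis(x):
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2) ** 2 - 3   # 0 for a Gaussian

def least_gaussian_direction(Z, n_angles=360):
    """Brute-force projection pursuit on whitened 2-D data Z: return the unit
    direction whose projection has the most extreme (least Gaussian) kurtosis."""
    best_w, best_k = None, -np.inf
    for theta in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        w = np.array([np.cos(theta), np.sin(theta)])
        k = abs(excess_kurtosis(Z @ w))
        if k > best_k:
            best_w, best_k = w, k
    return best_w

# two independent non-Gaussian sources, linearly mixed
rng = np.random.default_rng(1)
S = np.column_stack([rng.laplace(size=20_000), rng.uniform(-1, 1, size=20_000)])
X = S @ np.array([[1.0, 0.4], [0.3, 1.0]])

# whiten (decorrelate + rescale), then find the least Gaussian direction;
# the projection Z @ w recovers (a scaled copy of) one independent component
Xc = X - X.mean(axis=0)
vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
Z = Xc @ vecs / np.sqrt(vals)
w = least_gaussian_direction(Z)
```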
20
Q

How is independent component analysis (ICA) explained with the example of the eyes?

A

the eyes detect signals, which get mixed in the brain; the brain then de-mixes them to find the original sources

21
Q

What does the image look like when you add ICA filters?

A

like the receptive fields in primary visual cortex (V1), which are localised and orientation-specific
they are Gabor-like

22
Q

The response properties of retinal ganglion / thalamic visual neurons can be (mostly) explained by ________ ?

A

decorrelation

23
Q

Are redundant information and mutual information the same thing?
Why?

A

-mutual info is how well the system can predict the input from the output
-redundant info is the information in the input/output that is already predictable
-no, they are not the same: we want to maximise mutual info and minimise redundant info