Neurocognition Flashcards

1
Q

Which types of networks can you distinguish? Give a brief description of each.

A

Feedforward networks and fully recurrent networks.

In a feedforward network (FFN), information moves in one direction: from the input layer through the hidden layers to the output layer. It is typically used for classification.

In a fully recurrent network (FRN), information moves in both directions and every neuron is connected to every other neuron. It is known for its ability to memorise.
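
A minimal sketch of a feedforward pass in Python (the layer sizes, random weights, and tanh squashing function are arbitrary illustration choices):

```python
import numpy as np

# Minimal feedforward pass: information flows input -> hidden -> output only.
rng = np.random.default_rng(0)
W_hidden = rng.normal(size=(3, 2))   # 2 input units -> 3 hidden units
W_output = rng.normal(size=(1, 3))   # 3 hidden units -> 1 output unit

def forward(x):
    h = np.tanh(W_hidden @ x)        # hidden activations (squashing function)
    return W_output @ h              # output activation; no feedback connections

print(forward(np.array([1.0, 0.0])))
```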

2
Q

What are the arguments (observations) in favor of the suggestion that object recognition and
classification occur in the brain by means of feedforward networks?

A

Object recognition in the ventral stream happens very fast, too fast for feedback processing to contribute.

This was shown in studies with primates.

3
Q

What is a receptive field of a neuron?

A

The specific region of the visual field that a neuron responds to; the neuron only activates for stimuli within that region.

4
Q

Which variations of receptive fields do you see in feedforward networks in the brain that process
visual information for object recognition and classification?

A

As information moves from V1 to IT, the receptive fields become larger and the neurons become less specific, so they are activated by more objects and views. This allows recognition despite changes in size, location, and angle.

5
Q

What is a topographic or retinotopic representation?

A

The spatial order of the image on the retina is preserved in its projection to V1.

E.g. what is visually next to each other is also next to each other in the V1 representation.

7
Q

What does size/location invariant object recognition mean?

A

The size and location of an object in the visual field do not interfere with its recognition.

8
Q

Which observations in the Quian Quiroga article support the notion of invariant object representation
in the brain?

A

Cells in the medial temporal lobe were activated when participants were shown the same person against different backgrounds.

  • these cells respond to the concept behind the percept rather than to the details falling on the retina (a conceptual representation)
9
Q

What is the relation between information and uncertainty? Give an example.

A

Information is the reduction of uncertainty; without a reduction in uncertainty, no information is conveyed.

E.g. if only one candidate runs for president and he wins, his victory carries no information (= 0 bits), because the outcome was already certain.
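
This can be made concrete with Shannon's measure: an outcome with probability p carries log2(1/p) bits. A minimal sketch:

```python
import math

# Information (in bits) carried by an outcome with probability p.
def information_bits(p):
    return math.log2(1 / p)

print(information_bits(1.0))   # certain outcome (single candidate): 0.0 bits
print(information_bits(0.5))   # fair coin flip: 1.0 bit
print(information_bits(0.25))  # one of four equally likely outcomes: 2.0 bits
```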

10
Q

What is the amount of information? How can it be expressed?

A

The amount of information is the minimal number of signals needed for communication.

- it is expressed in 'bits'

11
Q

What is the relation between information and classification?

A

Classification reduces information.

- e.g. in the AND problem, the input consists of 2 bits (x, y), but the output is only 1 bit (0 or 1)
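
A worked check of this reduction, assuming all four input patterns are equally likely: the input carries 2 bits, while the AND output carries less than 1 bit.

```python
import math
from collections import Counter

# Entropy (average information, in bits) of a probability distribution.
def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]          # 4 equally likely patterns
print(entropy([0.25] * 4))                          # 2.0 bits of input information

outputs = Counter(x & y for x, y in inputs)         # AND maps them onto {0, 1}
print(entropy([c / 4 for c in outputs.values()]))   # ~0.81 bits after classification
```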

12
Q

What is a perspective or Frame Of Reference (FOR)? Give an example.

A
  • the information that a neuron (or layer) has access to
  • different for each neuron
  • e.g. the FOR of the output layer is the layer directly below it
13
Q

Why is a perspective or Frame Of Reference (FOR) important for understanding how a network
operates?

A
  • because each node in each layer has a different FOR

- we have to understand what information a node has access to before we can understand how it learns

14
Q

What are the similarities and (typical) differences between real neurons and artificial neurons?

A
  • both can be activated/inhibited depending on whether a threshold is reached
  • real neurons: activation/inhibition depends on action potentials transmitted across synapses
  • artificial neurons: activation/inhibition depends on numerical activity values and connection weights
15
Q

What does it mean that a classification problem is linearly separable? Give an illustration (example).

A

A classification problem is linearly separable when, in the input space, a line can separate the points that should produce activation from the points that should not.

16
Q

What is the input space of a network such as a perceptron?

A

Input space: the spatial representation of all possible inputs to the network, with one dimension per input unit.

17
Q

Describe in global terms a learning procedure for a perceptron. Explain what the error is in the
learning rule. What is the role of the error?

A

Learning occurs through shifting the connection weights
- a.k.a. supervised learning

  • the weights start at random values; according to the learning rule, each new weight equals the old weight plus an update proportional to the error
  • error: the difference between the desired output and the actual output
  • the error measures how wrong the network's output is and determines the direction and size of the weight change
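
A minimal sketch of this procedure, training a perceptron on the (linearly separable) AND problem; the learning rate and random initialisation are arbitrary illustration choices:

```python
import numpy as np

# Perceptron learning on AND; a constant bias input is appended to each pattern.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
targets = np.array([0, 0, 0, 1])                 # desired outputs for AND
w = np.random.default_rng(0).normal(size=3)      # 2 weights + bias, random start
lr = 0.1                                         # learning rate

for epoch in range(20):
    for x, t in zip(X, targets):
        x_b = np.append(x, 1.0)                  # input plus bias term
        y = 1 if w @ x_b > 0 else 0              # threshold activation
        error = t - y                            # desired minus actual output
        w += lr * error * x_b                    # weight change driven by the error

print([1 if w @ np.append(x, 1.0) > 0 else 0 for x in X])  # -> [0, 0, 0, 1]
```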
18
Q

Does the learning procedure for a perceptron stop changing the weights of the perceptron when the
perceptron achieves classification? If so, why?

A

- yes

  • the weights stop shifting when the desired output is achieved
  • the weight update is proportional to the error (desired minus actual output), so it vanishes as soon as classification is correct
19
Q

What is supervised learning? Give an example.

A
  • learning procedures that use a measure of error, i.e. the desired output is known
  • connection weights are shifted to reduce that error
  • e.g. learning in a perceptron:
    begin with random weights (Wi) and update them with the learning rule

The update depends on the actual output and the desired output:
- e.g. if the desired output is 1 but the current output is 0, the weights are increased so that the activation surpasses the threshold

20
Q

Can all classification problems be learned by a perceptron? If not, why not?

A
  • no

- some problems, e.g. EXOR, cannot be solved because they are not linearly separable: no single line in the input space separates the two output classes
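
A small sketch that makes this concrete (the parameter grid is an arbitrary illustration choice): a brute-force search over candidate lines finds a separator for AND but none for EXOR.

```python
import itertools
import numpy as np

X = [(0, 0), (0, 1), (1, 0), (1, 1)]
AND = [0, 0, 0, 1]
EXOR = [0, 1, 1, 0]

def separable(targets):
    # Try every line w1*x + w2*y > b over a coarse grid of parameters.
    grid = np.linspace(-2, 2, 41)
    for w1, w2, b in itertools.product(grid, repeat=3):
        if [1 if w1 * x + w2 * y > b else 0 for x, y in X] == targets:
            return True
    return False

print(separable(AND))   # True: a separating line exists
print(separable(EXOR))  # False: no line separates the EXOR classes
```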

21
Q

Give an example of a classification problem that cannot be solved by a perceptron. Explain (and
illustrate) why not

A
  • the EXOR (exclusive or) problem
  • it is not linearly separable: the inputs that should output 1, (0,1) and (1,0), lie diagonally opposite each other, so no single line can separate them from (0,0) and (1,1)
  • L5 p5
22
Q

What is a squashing function? Give examples.

A
  • an activation function that compresses a potentially large input into a small output range (e.g. 0 to 1, or -1 to 1)
  • e.g. the logistic function, the hyperbolic tangent
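
Both examples in one short sketch; note that the logistic function squashes into (0, 1) while tanh squashes into (-1, 1):

```python
import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))   # squashes any input into (0, 1)

x = np.array([-10.0, -1.0, 0.0, 1.0, 10.0])
print(logistic(x))   # [~0.0  0.269  0.5  0.731  ~1.0]
print(np.tanh(x))    # [~-1.0  -0.762  0.0  0.762  ~1.0]
```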
23
Q

Why do you need squashing functions for a feedforward network with hidden layers (so a network
with more than two layers), if the network has to achieve more than a perceptron?

A
  • squashing functions in the hidden layers are needed when the classification problem is not linearly separable
  • without squashing functions the multi-layer network collapses into a two-layer network (see the next card), so such problems cannot be solved
24
Q

Show why a multi-layer feedforward network without squashing functions in the hidden layers is
similar to a two-layer feedforward network.

A

If the multi-layer network uses linear functions (instead of squashing functions) in its hidden layers, it still cannot solve the EXOR problem,

  • because a linear hidden layer does not change the kind of input the output layer receives: a composition of linear transformations is itself a single linear transformation, so the network is equivalent to a two-layer network
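
A quick numerical check of this collapse, using arbitrary random weights: two stacked linear layers give exactly the same outputs as one combined weight matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(size=(4, 2))   # linear "hidden" layer: 2 inputs -> 4 units
W2 = rng.normal(size=(1, 4))   # linear output layer: 4 units -> 1 output
x = rng.normal(size=2)

two_layer = W2 @ (W1 @ x)      # multi-layer network with linear hidden units
collapsed = (W2 @ W1) @ x      # single equivalent weight matrix
print(np.allclose(two_layer, collapsed))  # True: no extra expressive power
```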
25
Q

Why is object invariance a problem for feedforward networks?

A
  • the same object can produce very different images on the retina (angle, lighting, etc.)
  • in feedforward networks, the output layer is connected only to the last hidden layer
  • to classify an object correctly in all its possible configurations, every configuration must deliver the same activation pattern to the output layer
    = very hard to achieve computationally
26
Q

What could object invariance entail for the way in which object recognition and classification is learned by a feedforward network?

A
  • a feedforward network is supposed to be able to achieve object invariance
  • to do so, the network has to view the same object again and again, under different conditions, to derive a unique activation pattern for the object
27
Q

Are there experimental observations that suggest that networks in the cortex have difficulties with
learning to classify or recognize objects in an invariant way?

A
  • yes: Cox et al. created an unnatural visual world
  • after brief exposure to these unnatural images, participants became confused in object recognition
  • this shows that if the spatiotemporal characteristics of the world are not well learned, the network has a hard time achieving object invariance
28
Q

Describe the notion of a state space of a layer in a feedforward network. What is a vector in this state
space?

A
  • state space: an N-dimensional spatial representation of all possible activation patterns in a layer (or network)
  • vector: a point in the state space, written as (x1, x2, x3, ..., xN),
    where N is the number of dimensions,
    in this case the number of neurons
29
Q

Explain how object invariance can be described in terms of the state space of a layer in a feedforward
network.

A
  • each image of an object produces a point in the layer's state space; activation varies per image, as in the visual cortex
  • a single object thus corresponds not to one point but to a set of identity points (its manifold) covering all its appearances
  • if a classification line can separate the objects' manifolds, object invariance is reached
30
Q

What are the characteristics of the representation of two invariant objects in the state space of a layer
in a feedforward network when the objects are always distinguished by the network?

A
  • for object invariance, the objects' respective manifolds must be separated by a classification line (hyperplane)
31
Q

What is the Frame Of Reference for the highest layer (the classification layer) in a feedforward
network?

A
  • the (hidden) layer just below the highest layer

- it is the only layer the highest layer receives input from

32
Q

What is the Frame Of Reference for layer i in a multi-layer feedforward network?

A

FOR of layer i = layer i-1

  • everything below layer i-1 is unreachable by layer i
33
Q

Describe in global terms the changes in the representation of two objects that are needed in object
classification in the visual cortex, from V1 to the identification layer (IT). In what important way does
this representation change?

A
  • in the V1 representation, object manifolds are tangled
    = distinction impossible

- in the IT representation, object manifolds are untangled
= distinction possible

  • in the ventral stream, object recognition thus occurs through gradually untangling the object manifolds
34
Q

When is the representation of two objects in object classification in the identification layer (IT) good?
When is it bad?

A
  • good: the object manifolds can be separated by a hyperplane (a linear decision function)
  • bad: the object manifolds are tangled
    = they cannot be separated by a hyperplane
35
Q

What are center-surround receptive fields?

A

Receptive fields that activate when there is contrast between the center and its surround.

- e.g. black on white / white on black

36
Q

What is the difference between off-on and on-off center-surround receptive fields?

A

Off-on center-surround:

  • no light in the center, light in the surround
  • e.g. a black dot on white paper

On-off center-surround:

  • light in the center but no light in the surround
  • e.g. a white spot on a black background
37
Q

What is a significant difference between the receptive fields of the retinal ganglion cells and neurons
in the primary visual cortex (V1)?

A

Receptive fields of retinal ganglion cells:

  • circular
  • respond to contrasts

Neurons in V1:

  • elliptically shaped
  • respond to contrast and orientation

L5, p10

38
Q

Gabor filter formula

Name the functions of:

  • γ (gamma)
  • σ (sigma)
  • λ (lambda)
  • ψ (psi)
  • θ (theta)
A
  • γ: aspect ratio
    (orientation selectivity)

- σ: effective width (spatial selectivity of the RF)
= if σ is large: big RF, filter less selective
= if σ is small: small RF, filter more selective

  • λ: wavelength of the cosine
    (on-off selectivity)
  • ψ: responsible for the phase of the filter;
    can switch the filter from on-off to off-on
  • θ: orientation of the center-surround field
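
The formula itself is not on the card; a standard form of the 2-D Gabor function with these parameters (as used in models like Serre et al.) is:

```latex
g(x, y) = \exp\!\left(-\frac{x'^2 + \gamma^2 y'^2}{2\sigma^2}\right)
          \cos\!\left(\frac{2\pi x'}{\lambda} + \psi\right),
\qquad x' = x\cos\theta + y\sin\theta, \quad y' = -x\sin\theta + y\cos\theta
```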
39
Q

What kind of receptive fields does the Gabor filter formula describe?

A
  • the RFs of simple cells (S1) in V1, which respond to the contrast and orientation of lines and edges
  • and relay this information to the higher layers
40
Q

What happens if λ is larger/smaller than 1 (in the Gabor filter equation)?

A
  • λ: wavelength of the cosine
    (on-off selectivity)

if λ is large:

  • long wavelength (wide wave)
  • no center-surround structure

if λ is small:

  • short wavelength (narrow wave)
  • many center-surround regions
41
Q

What happens if σ is larger/smaller than 1 (in the Gabor filter equation)?

A

- σ: effective width (spatial selectivity of the RF)

if σ is large:

  • big RF
  • filter less selective

if σ is small:

  • small RF
  • filter more selective
42
Q

What happens if ψ is 0 or π (in the Gabor filter)?

A

if ψ = 0
= on-off

if ψ = π
= off-on

43
Q

The Serre et al. model for object recognition / classification has two kinds of units: S units and C
units. Describe in global terms the different roles these units have in the model.

A

(S comes before C in each stage.)

S units are modelled after simple cells
- S1 units respond to oriented bars and edges

C units are modelled after complex cells

  • each C unit receives the output of a group of S units with the same preferred orientation but at slightly different positions and scales
  • this pooling over position and size is what allows object invariance
44
Q

Describe how the Serre et al. model for object recognition / classification gradually achieves object
invariance.

A
  • the model achieves a trade-off between selectivity and invariance by alternating S and C layers
  • at each C stage, complex units become increasingly invariant to scale and position by combining S units with the same selectivity but slightly different positions and scales
45
Q

What is the role of Gabor filters in the Serre et al. model?

A

The simple S1 units have the shape of Gabor filters.

  • they respond to different orientations (4), scales (17), and phases (2)
46
Q

What do the C units represent in the Serre et al. model?

A
  • they represent complex cells
  • each C unit pools the output of S units in the same stage that are tuned to the same preferred orientation but slightly different scales and positions
  • e.g. C1 pools S1 output
47
Q

How are the C units activated?

A

By pooling the output of S units in the same stage:

  • each C unit receives output from S units with the same preferred orientation but slightly different scales and positions
48
Q

What is the max (or hmax) operation? Where does it play a role in the Serre et al. model?

A

hmax:
- it is used in going from simple (S) cells to complex (C) cells within the same stage
- a C unit takes the highest activation among the S units it pools (within the same stage)
- it does not add up the S activations, because summing would lose selectivity
- the max operation is what activates the C units
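
A toy sketch of the idea (the 1-D layout and window size of three are arbitrary illustration choices):

```python
import numpy as np

# Toy S-unit activations for one preferred orientation at nearby positions.
s_activations = np.array([0.1, 0.9, 0.2, 0.4, 0.3, 0.8])

# Each C unit pools a window of S units and keeps only the maximum: the
# response "an edge of this orientation is present" survives, while its
# exact position is discarded (invariance without summing away selectivity).
window = 3
c_activations = np.array([s_activations[i:i + window].max()
                          for i in range(0, len(s_activations), window)])
print(c_activations)  # [0.9 0.8]
```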

49
Q

What do the S units represent in the Serre et al. model?

A

They are modelled after simple cells, which respond to oriented bars and edges.

50
Q

How are S units activated?

A

They respond to oriented bars and edges in their receptive field.

51
Q

Why is the activation of S units maximal when the input (x) is equal to the weight (w) of the
connection between the C nodes of the previous layer and the S node? How can this be achieved?

A
  • due to the formula for the activity of S units: the (squared) distance in the exponent becomes 0 when w = x, which gives the highest possible value of the function
  • so activation is maximal when the input equals the weights
  • this is achieved by imprinting during unsupervised learning: the weights are set equal to the current input (w = x)
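
The card refers to this formula without stating it; in Serre et al. the S-unit response has a Gaussian (radial-basis) tuning form, roughly:

```latex
y = \exp\!\left(-\frac{\lVert \mathbf{x} - \mathbf{w} \rVert^{2}}{2\sigma^{2}}\right)
```

which is maximal (y = 1) exactly when x = w, since the squared distance in the exponent is then zero.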
52
Q

What is the relation between perceptrons and the Serre et al. model?

A

Every interaction between two consecutive layers (of S and C units) works like a perceptron;

  • the model thus consists of several stacked perceptrons
53
Q

What is unsupervised learning?

A

The network is presented with information that is not classified, and it acts on that information without guidance or prior training.

  • it groups the unclassified information according to patterns, similarities, and differences
54
Q

The Serre et al. model for object recognition / classification determines the weights for the S units
from S2 to (and including) S4 by unsupervised learning. Describe in global terms how this learning
process operates.

A
  • unsupervised learning occurs by exposing the model to the same object under different conditions, with increasingly large RFs at higher layers
  • this allows the model to learn the statistics of the environment in which the object occurs
  • the model thereby becomes increasingly able to extract the object from its environment
55
Q

What is the stability-plasticity dilemma and how is it solved in the Serre et al. model?

A

Stability-plasticity dilemma:
- a network has to stay plastic enough to learn new patterns, yet stable enough that new learning does not overwrite what was already learned

  • solution in the model:
    connection weights are plastic during the unsupervised learning stage and become fixed after training
  • units learn by imprinting: a unit's connection weights are set equal to the current activity pattern of the previous layer (w = x), after which they are frozen
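
A toy sketch combining imprinting with the Gaussian tuning function shown a few cards earlier (the activity values and sigma = 1 are arbitrary illustration choices):

```python
import numpy as np

def s_unit_response(x, w, sigma=1.0):
    # Gaussian tuning: response is maximal (1.0) when the input matches the weights.
    return np.exp(-np.sum((x - w) ** 2) / (2 * sigma ** 2))

x_seen = np.array([0.2, 0.7, 0.1])    # current activity pattern of previous layer
w = x_seen.copy()                     # imprinting: w is set equal to x, then frozen

print(s_unit_response(x_seen, w))                     # 1.0: tuned to this pattern
print(s_unit_response(np.array([0.9, 0.1, 0.5]), w))  # lower for other patterns
```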
56
Q

Why are pictures of different scenes used in training the Serre et al. model?

A
  • they are used in the unsupervised learning stage
  • so that the model adapts to the statistics of the natural environment: units become tuned to common image features that occur with high probability in natural images
57
Q

How does the classification layer in the Serre et al. model work? How does it learn?

A

The classification layer works like a perceptron and learns through supervised learning.

58
Q

Describe why scrambled images could be a problem for feedforward networks for object recognition /
classification. Give an illustration. How can this problem be limited?

A
  • a group of S units responds to a certain input pattern but does not represent the spatial relations between the parts of an image
  • the separate parts of a scrambled image can therefore activate the same S4 units that respond to seeing the whole picture
  • C4 units may then misinterpret this activation pattern as seeing the whole picture, although the parts are scrambled
  • the problem can be limited by having additional S4 units with overlapping RFs (redundancy is important), so that the start and end points of each figure part can be inferred
  • C4 units are then less likely to misinterpret the scrambled image
59
Q

What is the role of redundant representation in the Serre et al. model? Give an illustration.

A

Redundancy limits the problem of misinterpreting a scrambled image as the whole image:

  • S units with overlapping RFs give information about the start and end points of the image parts
  • so C4 units are less likely to misinterpret the scrambled image
60
Q

What is a perceptron?

A
  • an algorithm (a two-layer network) that solves linearly separable classification problems
  • it consists of input neurons (x, y) and an output neuron (U)
  • connection weights determine the strength of the connections between them
  • it uses an activation (threshold) function to transform the input into the output
61
Q

What is a (multi-layer) feedforward network? What is it typically used for?

A
  • Feedforward network: a deep learning network where information moves through layers of units in one direction, from input layer (through hidden layers) to output layer
  • Typically used e.g. for classification processes in the brain’s visual system