Bashivan Flashcards

1
Q

What does the classical neuroscience approach allow us to do?

A

It allows us to CLASSIFY neurons according to:
1. Morphology → what do the dendritic trees look like?
2. Function → functions become more diverse deeper in the visual pathway
3. Firing pattern → bursts, frequency
Etc.

*Every knob is a feature (many, many knobs)

2
Q

What is one of the important uses of functional classification of neurons?
(Classic approach)

A

Entorhinal cortex → Grid cells vs Border cells

Hippocampus:
- Place cells (respond to being in a single spot in the room)
- Object-vector cells (respond to a single object in that room)
- Splitter cells (respond to being in a specific location, but only in a specific context, e.g., when about to turn left)

3
Q

What are advantages and disadvantages of the classic approach?

A

Advantages:
- Describes what individual neurons contribute to the computation

Disadvantages:
- Typically function is considered in a specific setting → would have to consider ALL possible settings
- Small populations of neurons are considered → limited number of cells you can record from
- Circuits and mechanisms have to be deduced by intuition from a very limited amount of cell information

4
Q

What is the difference between deep learning and machine learning?

A

Machine learning:
Input → Feature extraction by a domain expert → Classification → Output
*Domain expert defines key features on top of which classification fits

Deep learning: Input → Feature extraction + Classification → Output
*The network itself establishes the features and classifications directly from the data (in the learning process)

5
Q

What are the “4 knobs” of deep learning?
(4 components on which the design is focused)

A
  1. Architecture
  2. Learning objective (cost function)
  3. Learning rule
  4. Dataset
6
Q

What are the different types of architecture a neural network can take?

A

1-2 are best for image inputs
1. Multilayer Perceptrons:
Every circle/neuron of 1 layer is connected to every neuron of both adjacent layers (not connected to others in the same layer)
- min 3 layers: Input → Hidden → Output
- Large number of weight parameters need to be trained
2. Convolutional Neural Network
1 nose detector goes through the whole picture (convolution), no need for different nose detectors for different areas (check for specific patterns over all the image → extract features)

3. Recurrent Neural Network
- Accepts input from outside + generates its own input
- Sequential output feeds back into the network
4. Transformers
- Used for ChatGPT, etc.
*Parameters = connections between neurons
7
Q

What are the different cost functions of neural networks in deep learning?

A

*These are ways to learn/change parameters to improve the output
1. Unsupervised objective functions
- NO teacher
- For cross-modal consistency (read → write, hear → talk) ~ generative consistency
- For future predictions (transformers are trained to do this) → predict an image of a car moving in 2 secs, predict the next word in a sentence
- May fail to discover properties of the world that are statistically weak but important for survival (need supervised learning for that)

2. Supervised objective functions
- Teacher that tells whether the output is right or wrong
→ Object recognition, object detection, source localization (of sound)
- The network changes its parameters based on the teacher's feedback to maximize the odds of outputting the right answer next time

3. Reward-based objective functions
- Agent → action → Environment → reward/state → Agent …
- 2 interactions (with the environment and with the reward)
- Agent tries to maximize the reward

*What cost functions does brain optimize?
*What do cost functions look like in the brain?

8
Q

What are 3 ways of representing costs of neural networks in the brain?

A
1. Genes → every neuron carries in its genes an encoding of what it needs to do
2. Cost-encoding neural net (smaller networks):
   - Output layer explicitly computes the error (tries to satisfy this)
3. Task-performing neural net (larger networks):
   - Implicit encoding of cost
   - Cost embedded in task performance (ex: vision, decision-making)
9
Q

What do we know about learning rules in the brain neural network?

A

*Observed in synapses that are potentiated:
1. Changes in the size of synaptic connections
2. Perforations caused by LTP
3. Multiple spine boutons (1 presynaptic terminal for multiple postsynaptic spines)

To relate to neural networks:
Before training → all nodes have the same weight (all connected equally; these are the parameters)
Training modulates the weights of the different parameters/connections
W_trained = W_initial + ΔW
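The weight-update rule on this card can be sketched in NumPy (a minimal illustration; the gradient-descent step, learning rate, and all numbers are assumptions for demonstration, not from the lecture):

```python
import numpy as np

# Sketch of training as a weight update: W_trained = W_initial + dW.
# The gradient and learning rate below are made-up illustrative values.
rng = np.random.default_rng(0)
w_initial = rng.normal(size=3)         # connections start with some initial weights
gradient = np.array([0.5, -0.2, 0.1])  # hypothetical dCost/dW for each connection
learning_rate = 0.1
dW = -learning_rate * gradient         # training modulates each connection's weight
w_trained = w_initial + dW
```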

10
Q

What are types of datasets?

A
1. Images → ImageNet (millions of images)
2. Video games (tens of games)
3. Text → hundreds of billions of pages → Common Crawl (to train language models)
11
Q

What are the main takeaways of the DL approach that differ from the classical approach?

A
  1. No unit typing → units have ubiquitous functionality
  2. Emergent properties → unit’s functional diversity emerges through learning (no need to specify every circuit by hand)
  3. Distributed processing → groups of units are orchestrated to facilitate internalized or externally-imposed objectives
    - Behaviour is not focused on single unit, focused on global network function and performance
12
Q

Which future questions could the DL framework allow us to resolve?

A
1. Investigate relations between neurons across regions
   - How representations come to be, how neural networks give rise to them
   - How do we recognize faces? (can't test experimentally)
2. Teleological explanations for the existence of representations
   - Explain why things exist in the brain → behaviours, anatomical features, evolutionary pressures
13
Q

Why is object recognition such a challenge for the brain?

A

*Difficult computational problem
1. Have to consider the infinite number of ways an object can be shown to us and be able to distinguish it every time → different sizes, angles, colors, backgrounds, etc.

2. Extrapolate that problem from 1 object → to the thousands of objects we can identify

*Ventral visual pathway solves this problem

14
Q

How long does it take for the brain to discriminate different objects?

A

Information gets to IT cortex in ~100 ms → this area discriminates different objects

~40 ms → LGN
~50 ms → V1
+~10 ms for every area higher in the hierarchy:
V1 → V2 → V4 → PIT → CIT → AIT

15
Q

What are the first cells in the visual system to communicate via Action Potentials?

A

Retinal ganglion cells
~1.5 million in monkeys

Photoreceptors and Bipolar cells do not send APs

16
Q

What are pinwheels in V1?

A

They are points in the orientation map around which, if you turn closely around them, you will encounter selectivity for all orientations

17
Q

What different regions are found in IT?

A
1. Big region selective for faces
2. Other region selective for body parts (EBA = extrastriate body area)
3. Region selective for scenes (external or internal landmarks) → responses are anti-correlated with the responses of face-selective regions
18
Q

What is population coding?

A

It is a way of looking at groups of neurons and their activity instead of single neurons

1. Imagine an N-dimensional space → recording from N neurons simultaneously
2. Each point/vector = 1 stimulus, placed according to how much response it induces in each of the N neurons
3. This N-dimensional space contains 1 line for each object, with the infinite points on this line corresponding to all possible presentations of this object

The ventral stream transforms these lines from very curved → more linear as you go higher up in the cortical areas
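The geometric picture above can be sketched with a toy example (all firing rates and stimulus names are invented; N = 3 neurons for illustration):

```python
import numpy as np

# Population coding: each stimulus is one point in an N-dimensional space,
# one axis per recorded neuron (here N = 3, with made-up firing rates).
responses = {
    "face_frontal": np.array([12.0, 3.0, 8.0]),  # spikes/s from neurons 1-3
    "face_profile": np.array([10.0, 4.0, 7.0]),
    "car":          np.array([2.0, 11.0, 1.0]),
}

# Nearby points = similar population responses: the two views of the face
# lie closer together than either view does to the car.
d_faces = np.linalg.norm(responses["face_frontal"] - responses["face_profile"])
d_face_car = np.linalg.norm(responses["face_frontal"] - responses["car"])
```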

19
Q

For what types of information do you have to look at groups of neurons? (information not found in a single neuron's activity)

A

3D scale, Z-axis rotation, height, width, perimeter

*Responses to these kinds of information increase as we go up the ventral stream

→ They are category-orthogonal object properties (don’t fit in neuron categories)

20
Q

What is memorability correlated with?

A

Memorability is dictated by the MAGNITUDE (spikes/sec) of the response of the specific set of neurons that always responds to the specific stimulus
→ not by a change in which neurons fire or in the weight of each neuron in the global response

21
Q

What was the name of the 1st neural net developed?
By who was it?

A

Multi-layer perceptron
Rosenblatt - 1958

*Uses the artificial neuron model

22
Q

Who created the 1st artificial neuron model?
What was its principle?

A

McCulloch and Pitts (1943)
Dendrites → Dendritic tree connecting to soma/Synapse → Axon → Downstream synaptic terminal

Input → weighted parameters → threshold function → Output

The input to the threshold function is the sum over n of (weight of input n) × (input given to dendrite n) → if it is over the threshold, then the neuron fires an AP
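The weighted-sum-plus-threshold rule can be written out directly (a minimal sketch; the AND-gate weights and threshold are illustrative choices, not from the original paper):

```python
# McCulloch-Pitts-style threshold neuron: fire iff the weighted sum of
# inputs reaches the threshold.
def threshold_neuron(inputs, weights, threshold):
    weighted_sum = sum(w * x for w, x in zip(weights, inputs))
    return 1 if weighted_sum >= threshold else 0

# With weights [1, 1] and threshold 2 the neuron behaves like an AND gate.
assert threshold_neuron([1, 1], [1, 1], 2) == 1  # both inputs active → fires
assert threshold_neuron([1, 0], [1, 1], 2) == 0  # one input active → silent
```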

23
Q

What is the structure of the multi-layer perceptron of Rosenblatt (1958)?
*First neural network

A

Retina → {Localized connections} → Projection area (A1) → Association area (A2) → Multiple unit responses

*All other connections are random
- Each unit of each layer is connected to all the units of the previous/upstream layer
- Connections have different weights

If the input is 1 image of 10 pixels → each unit of A1 has 10 parameters → each unit of A2 has 100 parameters (×10)

24
Q

What is the neocognitron?

A

It is a hierarchical, multilayered artificial neural network proposed by Kunihiko Fukushima in 1979
- Considers the fact that neurons of V1 are not connected to ALL neurons of LGN, same for V2, and V4…
- Considers that different units connect to different sets of units in the previous layer (ex: T-detectors)
- V1 neurons have smaller RFs than V2 neurons, which have smaller RFs than V4 neurons (and more peripheral neurons have bigger RFs than foveal ones)
- Layers alternate between S-cells (feature extraction) and C-cells (feature pooling for invariance)

25
Q

What are the important features of the convolutional neural network?

A

Yann LeCun

1. Filter with a weight for each pixel (×0 or ×1)
2. This same filter multiplies each area of the image → convolved feature (matrix of scores for how closely each area matches the filter)
   - An X-detector will have the ×1 weights forming an X-shape, a T-detector has them forming a T-shape, etc.
   - 1 filter for the whole image, no need for different filters for different areas (compared to the neocognitron)
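The sliding-filter idea can be sketched with a naive 2-D convolution (the X-shaped filter and the image are illustrative; real CNNs learn their filter weights):

```python
import numpy as np

# Slide one small filter over the whole image; each output score says how
# strongly that area matches the filter (the "convolved feature").
def convolve2d(image, kernel):
    kh, kw = kernel.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A 3x3 X-detector: x1 weights forming an X-shape, x0 elsewhere.
x_filter = np.array([[1., 0., 1.],
                     [0., 1., 0.],
                     [1., 0., 1.]])
image = np.zeros((5, 5))
image[1:4, 1:4] = x_filter            # put an X in the centre of the image
scores = convolve2d(image, x_filter)  # the one filter scans every position
```

The score map peaks exactly where the X sits, with no need for a separate X-detector at every location.
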
26
Q

What is Alexnet?
What are its features?

A

It is a CNN (convolutional neural network) for classifying objects

Architecture → 9 layers of convolution, pooling, nonlinearity, and normalization (60 million parameters)

Cost function → supervised training

Learning rule → backpropagation → takes derivatives to figure out what effect each parameter has on the output

Dataset → Imagenet (1.3 million images)

27
Q

What was special about Alexnet’s 1st layer?

A

The weights (filters) of the first layer closely resembled V1 cortex outputs

28
Q

What is special about Alexnet’s recognition behaviours?

A

They are very similar to humans → similar response patterns and error patterns (they make mistakes similar to the ones humans make)

29
Q

What are the 4 types of model-brain comparisons?

A
  1. Behavioural agreement (output level)
  2. Neural agreement
  3. In-silico electrophysiology
  4. Development agreement
30
Q

What is behavioural agreement?

A

Model-brain comparison based on comparing the model's outputs with humans' responses:

  1. Compare accuracy of response
  2. Compare pattern of errors
  3. Compare how accuracy is affected by distortions in humans vs model
31
Q

What is neural agreement?

A

A model-brain comparison method based on representational similarities:
1. Representational similarity analysis → compare the matrix of IT responses to every stimulus with the matrix of responses of the model layer (the "IT" layer)
2. Encoding models → how much the neuronal response changes with exposure to similar stimuli (compare) → fit a linear regression
- Predicts the response of each measurement channel (e.g. a neuron or fMRI voxel) on the basis of properties of the experimental condition (e.g. sensory stimulus)
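A minimal sketch of such an encoding model (all shapes and data are synthetic; a real analysis would fit on some stimuli and test prediction accuracy on held-out ones):

```python
import numpy as np

# Encoding model: predict one measurement channel (a neuron or fMRI voxel)
# from stimulus features via linear regression. Data here are synthetic.
rng = np.random.default_rng(0)
features = rng.normal(size=(100, 8))   # 100 stimuli x 8 model features
true_w = rng.normal(size=8)
response = features @ true_w + 0.01 * rng.normal(size=100)  # channel response

# Least-squares fit of the channel's response from the features.
w_fit, *_ = np.linalg.lstsq(features, response, rcond=None)
predicted = features @ w_fit
r = np.corrcoef(predicted, response)[0, 1]  # prediction accuracy
```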

32
Q

What are the types of in-silico electrophysiology?

A

Model-brain comparison method:
1. Lesion studies → lesions in the brain // cutting out single units in the model
2. Decoding → test for the presence of information in specific regions of the brain / regions of the model
3. Selectivity profiles → find patterns of selectivity in different areas or units
33
Q

What is development agreement as a model-brain comparison tool?

A

Compare development of brain and model over time → do they follow similar patterns of development?

  • Performing previous analyses at different stages of learning
34
Q

What is a Representation Dissimilarity Matrix (RDM)?

A

It is a matrix which shows how similar the activations of IT (or of a CNN model) are when showing 2 different objects
Blue → very similar responses to both
Red → very different responses to both
*8 pictures of objects were shown, fitting into each of the 8 different categories

Conclusions:
- IT has a very similar matrix to CNN models
- V4 doesn't
- IT and CNN respond similarly to 2 different pictures of boats, for example (same category)
- IT and CNN respond similarly to pictures of fruits and faces
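Computing an RDM reduces to correlation distances between response patterns (a sketch with random activations; a real analysis would use measured IT or model-layer responses):

```python
import numpy as np

# RDM: one row/column per stimulus; each entry is 1 - correlation between
# the two population response patterns. Activations here are random.
rng = np.random.default_rng(0)
activations = rng.normal(size=(8, 50))  # 8 stimuli x 50 units/neurons
rdm = 1.0 - np.corrcoef(activations)    # pairwise response dissimilarity
# Low entries (blue) = similar responses; high entries (red) = different.
```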

35
Q

What layers of CNN are better at predicting V4 activity vs IT?

A

V4 activity → Layer 3
IT activity is better predicted by Top layer (layer 4)

36
Q

What is InceptionV1?
What are 2 ways to represent it?

A

Convolutional model for object recognition
- Has 20 layers

1. Unit presentation → each unit is represented by its most stimulating visual pattern
2. Circuit presentation → units in 2 layers and their connections
   - 3x3 matrix showing in which part of the matrix the pattern has to appear to excite/inhibit the downstream unit (spatial selectivity)
37
Q

Which filters are seen in Layer 0 of InceptionV1?

A

*3x3 convolution
1. Gabor Filters (44%) → orientation selective, similar to V1 simple cells, a bit invariant to position within the RF
*Simple edge detectors come in pairs of negative reciprocals
2. Color Contrast Filters (42%) → detectors for color on one side of the RF; also come in pairs of negative reciprocals
3. Other Units (14%) → difficult to name categories

38
Q

Which detectors are found in layer 1 of InceptionV1?

A

*1x1 convolution
1. Low Frequency (27%)
2. Color Contrast (16%)
3. Complex Gabor (14%)
4. Multicolor (14%)
5. Gabor Like/Complex Gabor (17%)
6. Color (6%)
7. Other Units (5%)
8. Hatch (2%)

39
Q

What are complex Gabor detectors?

A

Complex Gabor → edge detectors that are invariant to the exact position of the dark/light parts and of the edge
Dark-Light-Dark = Light-Dark-Light (same response to both polarities)

*Equivalent to complex orientation selective V1 cells

40
Q

How do complex Gabor units emerge?

A

Responses of multiple Gabor filters with similar orientations are combined → UNION OVER CASES

• Upstream units with different contrast selectivities but similar orientations excite it together
• Opposite-orientation Gabor units inhibit the downstream complex Gabor
• Similar but not quite exact orientation → small excitation
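The pooling step can be sketched as a max over polarity-specific units (the response values are invented for illustration):

```python
# "Union over cases": a complex-Gabor-like unit pools over simple units that
# share an orientation but have opposite contrast polarity.
def complex_gabor(dark_light, light_dark):
    return max(dark_light, light_dark)

# Whichever polarity the edge has, the complex unit still fires strongly.
assert complex_gabor(0.9, 0.1) == 0.9
assert complex_gabor(0.1, 0.9) == 0.9  # invariant to contrast polarity
```
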
41
Q

What are some important selectivities of layer 2 of InceptionV1?

A

*3x3 convolution
Shape predecessors !!
1. Color Contrast/Multicolor/Color Center-Surround
2. TEXTURES (hatch, Gabor, texture contrasts)
3. Early brightness gradients → important for object boundaries
4. A bit of lines, curves, corners

42
Q

Why does selectivity for textures appear so early in InceptionV1?

A

Because it is trained with the ImageNet dataset, which has pictures representing natural stimuli for humans and animals, and a big section of this dataset is composed of pictures of dogs and other animals

43
Q

How is shape detection achieved by the InceptionV1 filters?

A

Shape detection is done through spatial combinations, with different amounts of excitation from different orientations in the different pixels of the 3x3 matrix

44
Q

What selectivities start emerging in layer 3a of InceptionV1?

A

CURVE DETECTORS
Too many to show all of them
- Eye/small-circle detectors (2%)
- Fur precursors (3%) → specific types of texture
- High-Low Frequency (6%) → high frequency on one side of the RF and low on the other side

45
Q

How can circuit presentation define circle detectors?
In which layer of InceptionV1 are they?

A

*In layer 3a

The filter is excited by a different curvature direction at each corner and by a full circle in the middle

*Also triangle detectors

46
Q

Selectivity-dependent inhibition

A

47
Q

What selectivities are found in layer 3b of the InceptionV1 CNN?

A

Proto-head filters

48
Q

How are Shape and curve detectors generally formed?

A

Formed through spatial filter combination from upstream units

49
Q

How are invariant object detectors constructed? (general definition)

A

Invariant object detectors are similarly constructed by pooling across orientation-specific object-part detectors

1. Build the full object by pooling parts with the same orientation
2. Pool all the full orientation-selective objects so that the unit is excited by the full object in all orientations