Deep Learning - Dr Bashivan Flashcards

1
Q

why should you not include every detail in a neural model?

A
  1. more difficult to interpret
  2. lower feasibility
  3. more difficult optimization
2
Q

what is the current practical sweet spot for amount of detail integration?

A

deep neural nets

3
Q

what are the upside and downside of verbal explanations?

A
  • easy to communicate!
  • has a narrow bandwidth :/
4
Q

what are the upside and downside of quantitative explanations (code)?

A
  • easily transferable, easy communication, can answer questions without costly experiments
  • requires coding literacy
5
Q

what is the classic approach for studying neuroscience?

A

identify and characterize individual elements in the brain (bottom-up approach)

6
Q

what is the difference between machine learning and deep learning?

A

machine learning: figure out a template/feature of what you are looking for and then classify
deep learning: feature extraction + classification happen at the same time

7
Q

why is the classic approach for studying the brain not so efficient?

A

only considers one or a few tasks at a time, and only a few neurons

8
Q

give an example of the classic approach

A

surround modulation and two-interval discrimination

9
Q

what components is the deep learning framework based on?

A
  • architecture
  • learning objective (cost functions)
  • learning rule
  • dataset (secondary axis)
10
Q

what are 3 principles of holistic deep learning approach?

A
  • units have ubiquitous functionality
  • units’ function diversity comes from autonomous learning
  • groups of units are orchestrated to facilitate internalized or external objectives
11
Q

name 2 static architecture models

A
  • multilayer perceptrons
  • convolutional neural network
12
Q

what is a multilayer perceptron?

A

each unit in a layer is connected to all the units in the previous and following layer
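
a minimal sketch in pure Python (made-up weights and sizes): every output unit takes a weighted sum over ALL units of the previous layer

```python
# Fully connected (dense) layer, as in a multilayer perceptron:
# each output unit sums over every input unit.
def dense(x, W, b):
    # W[i][j] = weight from input unit j to output unit i
    return [sum(w * xj for w, xj in zip(row, x)) + bi
            for row, bi in zip(W, b)]

x = [1.0, 2.0]                  # previous layer (2 units)
W = [[0.5, -1.0], [1.0, 1.0]]   # 2 output units, each fully connected
b = [0.0, 0.5]
print(dense(x, W, b))           # [-1.5, 3.5]
```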

13
Q

what is a convolutional neural network?

A

units are locally connected to subgroups of units

14
Q

name the 2 dynamic architecture models

A
  • recurrent neural network
  • transformers
15
Q

what is a recurrent neural network?

A

internal memory gets updated based on observations
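
a minimal sketch (scalar vanilla RNN, made-up weights): the hidden state is the internal memory, updated from its previous value plus the current observation

```python
import math

# One step of a vanilla RNN: the new hidden state ("memory") mixes
# the previous state with the current observation.
def rnn_step(h, x, w_h, w_x, b):
    return math.tanh(w_h * h + w_x * x + b)

h = 0.0                          # empty memory
for x in [1.0, -1.0, 1.0]:       # a toy observation sequence
    h = rnn_step(h, x, w_h=0.5, w_x=1.0, b=0.0)
print(h)                         # final memory after 3 observations
```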

16
Q

what are the 3 types of cost-functions strategies?

A

unsupervised, supervised, reward-based

17
Q

what is unsupervised objective (cost) functions?

A
  • learn from observations, model reproduces what it sees: predicting errors, continuity, sparsity
  • has generative consistency: wake-sleep algorithm, generative neural networks
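
a minimal sketch of a prediction-error style unsupervised cost (hypothetical numbers): the cost is how badly the model reproduces what it sees

```python
# Unsupervised objective sketch: reconstruction / prediction error,
# the mean squared difference between input and reproduction.
def reconstruction_error(x, x_hat):
    return sum((a - b) ** 2 for a, b in zip(x, x_hat)) / len(x)

x     = [1.0, 0.0, 2.0]   # what the model observes
x_hat = [0.8, 0.1, 1.9]   # what the model reproduces
print(reconstruction_error(x, x_hat))  # mean squared error, ~0.02
```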
18
Q

give an example (allegory?) of unsupervised objective functions

A

finishing someone’s sentence

19
Q

what is a downside of unsupervised learning algorithms?

A

it may fail to discover properties of the world that are statistically weak but important for survival

20
Q

how can we solve the problem of unsupervised objective functions not discovering essential properties?

A

supervised objective functions

21
Q

give examples of supervised objective functions

A

object recognition
object detection
source localization

22
Q

what are reward-based cost functions?

A

agents try to maximize reward
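
a minimal sketch (two-armed bandit, made-up rewards): an epsilon-greedy agent keeps a running reward estimate per action and mostly picks the action it currently values most

```python
import random

# Reward-based cost sketch: the agent tries to maximize reward by
# exploiting its best estimate, with occasional exploration.
random.seed(0)
true_reward = [0.2, 1.0]     # hypothetical reward of each action
estimates = [0.0, 0.0]       # the agent's running estimates

for _ in range(200):
    if random.random() < 0.2:                           # explore
        a = random.randrange(2)
    else:                                               # exploit
        a = max(range(2), key=lambda i: estimates[i])
    estimates[a] += 0.1 * (true_reward[a] - estimates[a])

print(estimates)   # the estimate for action 1 approaches its reward
```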

23
Q

how are costs encoded in the brain vs in neural net?

A

brain : genes
neural net:
- cost-encoding neural net (small)
- task-performing neural net (large)

24
Q

what are the 3 learning rules you can use?

A

following a gradient, not following a gradient, partially following a gradient
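
"following a gradient" sketched on a toy quadratic cost, cost(w) = (w - 3)**2 (made-up numbers): step in the direction opposite the derivative

```python
# Gradient descent: repeatedly step against the gradient of the cost.
def grad(w):
    return 2 * (w - 3)       # derivative of cost(w) = (w - 3)**2

w, lr = 0.0, 0.1
for _ in range(100):
    w -= lr * grad(w)        # step against the gradient
print(round(w, 3))           # 3.0, the minimum of the cost
```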

25
Q

why do we think prefrontal cortex neurons continue to fire during the delay period despite no stimulus?

A

to keep the novel information in mind

26
Q

what did they find after making monkeys perform an oculomotor delayed response task?

A

during the delay period, neurons fire or are inhibited selectively to the cue location

27
Q

what fraction of PFC neurons shows excitation or inhibition during the delay?

A

1/3

28
Q

name the 4 toolkit for model testing

A
  1. behavioural agreement
  2. agreement with neural data
  3. in silico electrophysiology
  4. developmental agreement
29
Q

what is behavioral agreement toolkit?

A

quantifying the behavioral similarity of our network vs the animal doing the same task

30
Q

what is agreement with neural data toolkit?

A

comparing how the model vs animal solves the task

31
Q

what are 2 different ways of testing agreement with neural data?

A
  1. representational similarity analysis: comparing patterns of responses using matrix
  2. encoding model: compare neuron with a unit
32
Q

what are the 3 types of in silico electrophysiology

A
  • lesion studies
  • decoding (find how a characteristic of the task is encoded)
  • selectivity profile
33
Q

what is developmental agreement toolkit?

A

performing previous analyses at different stages of learning

34
Q

explain the artificial neuron model

A

each dendrite works as an input channel
-> weights scale each input, assigning a value to each -> if the weighted sum reaches the threshold, the neuron starts spiking
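
a minimal sketch of that artificial neuron (made-up weights and threshold): weighted sum of the inputs, spike (1) if the sum reaches the threshold

```python
# Artificial neuron: weighted sum of inputs, spike if it reaches
# the threshold, otherwise stay silent.
def neuron(inputs, weights, threshold):
    weighted_sum = sum(w * x for w, x in zip(weights, inputs))
    return 1 if weighted_sum >= threshold else 0

print(neuron([1, 0, 1], [0.6, 0.9, 0.5], threshold=1.0))  # 1 (sum 1.1)
print(neuron([0, 1, 0], [0.6, 0.9, 0.5], threshold=1.0))  # 0 (sum 0.9)
```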

35
Q

what was the first neural net?

A

multi-layer perceptron: extended artificial neuron model into interconnected layers of neurons

36
Q

what was the limitation of the multi-layer perceptron?

A

it only considers a limited part of the visual field (units are only connected to the units around the center of the previous layer)

37
Q

convolutional neural networks allow…

A

Patterns to be distinguished regardless of their spatial location

38
Q

what are convolutional neural networks?

A

network learns and applies a kernel / convolution tuned to a specific feature and recognizes it in the environment
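
a minimal sketch (1D convolution, made-up kernel): the same kernel slides across the whole input, so the feature is detected wherever it appears

```python
# Convolution: slide one kernel across the input; the same feature
# is detected regardless of its location.
def convolve1d(signal, kernel):
    k = len(kernel)
    return [sum(kernel[j] * signal[i + j] for j in range(k))
            for i in range(len(signal) - k + 1)]

edge_kernel = [-1, 1]                   # responds to upward steps
signal = [0, 0, 5, 5, 0]
print(convolve1d(signal, edge_kernel))  # [0, 5, 0, -5]
```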

39
Q

what is alexnet?

A

convolutional neural network with 9 layers of convolution, pooling, nonlinearity, normalization

40
Q

how was alexnet trained?

A

supervised training with imagenet dataset

41
Q

what did they find in the first layer of alexnet?

A

patterns useful for image recognition, similar to those found in V1

42
Q

what did they use the representation dissimilarity matrix for?

A

to compare the response of neurons of V4, IT, and CNN last layer to 8 different categories of objects
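
a minimal sketch of a representational dissimilarity matrix (hypothetical response patterns): each entry is 1 minus the correlation between the response patterns to two conditions

```python
# RDM sketch: dissimilarity = 1 - Pearson correlation between the
# response patterns (one value per unit/neuron) for two conditions.
def corr(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

# hypothetical responses (3 units) to 3 object categories
responses = [[1.0, 2.0, 3.0], [1.0, 2.2, 2.8], [3.0, 1.0, 2.0]]
rdm = [[1 - corr(a, b) for b in responses] for a in responses]
print([round(v, 3) for v in rdm[0]])  # diagonal ~0; odd category most dissimilar
```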

43
Q

the 3rd and 4th layers of alexnet CNN corresponded to what macaque brain areas?

A

3rd layer = V4
4th layer = IT

44
Q

name 3 ways alexnet was doing unsupervised learning

A
  1. deep cluster: groups its inputs in clusters
  2. instance discrimination: discriminate between pairs of observations from memory
  3. contrastive learning: learns to respond similarly to different variations of the same observation
45
Q

do neural networks have a spatial map?

A

no, selectivity of the units is completely random. no topography

46
Q

the weight between 2 units is scaled by what?

A

the physical distance between those 2 units

47
Q

as you go higher in the visual pathway hierarchy, what happens to topography?

A

increased topography organized by categories (more organization)

48
Q

what are the 2 proposed topographic models (learning objectives)?

A
  1. wiring cost minimization (distance between neurons)
  2. spatial cost function (spreading of neurons)
49
Q

what does the correlation between response similarity and cortical distance show?

A

spatial loss hypothesis encourages local correlations