Neural network models Flashcards

1
Q

What is a feedforward neural network?

A

data flows in one direction from input to output through hidden layers composed of units/neurons, each connected to every unit in the previous layer via weighted connections

2
Q

What is a perceptron?

A

single-layer feedforward neural network: takes multiple inputs, computes a weighted sum (multiplies each input by its weight), produces a single output
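The weighted-sum idea can be sketched in a few lines of Python (an illustrative toy, not from the card; the threshold-at-zero step activation is an assumption):

```python
# Toy perceptron sketch: multiply each input by its weight, sum,
# then apply a step activation to produce a single output.
def perceptron(inputs, weights, bias=0.0):
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if weighted_sum > 0 else 0

# weighted sum = 1*0.5 + 0*(-0.2) + 1*0.3 = 0.8 > 0, so it fires
print(perceptron([1, 0, 1], [0.5, -0.2, 0.3]))  # -> 1
```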

3
Q

What is gradient descent?

A

optimisation algorithm to minimise error/loss: adjusts weights and other parameters in the direction that most decreases the error, like descending a hill by taking small steps in the steepest downhill direction
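The hill-descent analogy can be sketched on a one-parameter toy loss (the loss function and learning rate are illustrative assumptions):

```python
# Minimise loss(w) = (w - 3)**2 by repeatedly stepping against the gradient.
def gradient(w):
    return 2 * (w - 3)  # derivative of (w - 3)**2

w = 0.0                 # starting guess
learning_rate = 0.1     # size of each downhill step
for _ in range(100):
    w -= learning_rate * gradient(w)

print(round(w, 3))  # -> 3.0, the minimum of the loss
```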

4
Q

Explain parametric versus non-parametric models

A

parametric models are fixed in size, with a fixed number of inputs and outputs, and learn a set of parameters to map inputs onto outputs. non-parametric models grow as the data grows, encoding values like a table or big list

5
Q

Explain linear versus non-linear problems/networks

A

linear problems can be solved with a straight line, non-linear problems need a curve. a linear activation function's output is directly proportional to its input, but a non-linear one's doesn't have to be. E.g. a step function is like a light switch, but a sigmoid is like a volume knob
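The light-switch vs volume-knob contrast can be sketched directly (a toy comparison; the threshold value is an assumption):

```python
import math

# Step activation ("light switch"): output jumps from 0 to 1 at the threshold.
def step(x, threshold=0.0):
    return 1 if x > threshold else 0

# Sigmoid activation ("volume knob"): output varies smoothly between 0 and 1.
def sigmoid(x):
    return 1 / (1 + math.exp(-x))

print(step(-0.1), step(0.1))   # -> 0 1
print(round(sigmoid(0.0), 2))  # -> 0.5
```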

6
Q

What is translation invariance?

A

ability to recognise objects regardless of spatial location, scale or orientation; achieved in the visual system and CNNs by hierarchical processing, spatial pooling and feature detection

7
Q

Why do regular neural networks lack translation invariance? Why are CNNs better?

A

FNNs are fully connected and lack weight sharing, so they only learn patterns independently at each spatial location. CNNs have local connectivity and weight sharing: convolutional layers where each neuron is connected to a local region of the input and the same filters are applied across different locations

8
Q

What is a CNN?

A

type of FNN specialised for processing visual data/images, distinguished by the addition of convolutional hidden layers: local filters convolve across the image matrix to produce a feature map, and each layer builds upon the features of the last to extract more complex features in a hierarchical structure

9
Q

Explain dimensionality

A

number of features in the input data, or number of neurons in each layer. higher dimensionality = more complex representations but also more computational cost. activation functions can increase it by introducing non-linearity; pooling reduces it

10
Q

Explain adversarial networks

A

networks trained to find images that CNNs classify incorrectly: an image is incrementally adjusted until it maximally resembles an image of a different class, but without losing its original class label

11
Q

Why are CNNs a good model of the ventral but not the dorsal stream?

A

can classify objects like the “what” pathway, but are usually for static images and can’t do motion like the “where” pathway, and don’t understand how to act on objects like the “how” pathway.

12
Q

Explain spatial and temporal hierarchies

A

how processing is organised in the brain: spatial hierarchies go from simple features like lines up to complex objects; temporal hierarchies go from shorter to longer temporal windows. CNNs have spatial but not temporal hierarchies

13
Q

Why are CNNs a good model for the visual system?

A

image processing, hierarchical feature extraction, local filters as an analogue for retinal receptive fields, translation invariance, multiclass classification from a probability distribution over classes, pooling to reduce dimensionality

14
Q

What are the limitations of CNNs as a model for the visual system?

A

CNNs are feedforward only but the visual system has feedback loops from higher cognitive areas; they still struggle with generalisation as they are limited to their training data; vulnerable to adversarial methods; only spatial, not temporal, processing; model only the ventral, not the dorsal, stream

15
Q

Explain delay period activity

A

activity in the area representing what you are trying to remember, while you are trying to remember it: cells fire persistently in a spatially selective fashion, thought to be a substrate for short-term integration/working memory. seen in the dorsolateral prefrontal cortex of the macaque

16
Q

Explain temporal integration in the visual system

A

normative models using log likelihood suggest the accuracy of a decision grows with the number of samples, as noise averages out to zero over time. also descriptive: humans and monkeys perform better at tasks like judging the direction of motion of a dot cloud under longer durations, and lateral parietal cortex activity reflects the adding up of information/evidence for a particular response
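The noise-averaging claim can be sketched with a toy accumulator (signal strength and noise level are illustrative assumptions, not from the card):

```python
import random

# Each time step delivers a noisy sample of a weak signal; averaging over
# more samples cancels the zero-mean noise, so the estimate improves.
random.seed(0)
SIGNAL = 0.2  # hypothetical true evidence per sample

def mean_evidence(n_samples):
    total = sum(SIGNAL + random.gauss(0, 1) for _ in range(n_samples))
    return total / n_samples

print(f"10 samples: {mean_evidence(10):.2f}")
print(f"10000 samples: {mean_evidence(10000):.2f}")  # typically much closer to 0.2
```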

17
Q

Contrast recurrent models and RNNs

A

recurrent models are models where information loops back on itself, e.g. drift diffusion (linear) or Wang's model (non-linear, where one of the responses wins a race to a decision threshold). RNNs are neural networks built on the same recurrent principle but with freely trainable parameters

18
Q

What is an RNN?

A

processes sequential data by maintaining an internal memory: interconnected layers of neurons, but each neuron is also connected to itself. the feedback loop allows information to persist from one time step to the next
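A minimal single-unit sketch (weights are illustrative assumptions) shows the self-connection carrying information forward:

```python
import math

# One recurrent unit: hidden state h feeds back into itself each step.
def rnn_step(x, h, w_in=0.5, w_rec=0.9, b=0.0):
    return math.tanh(w_in * x + w_rec * h + b)

h = 0.0
for x in [1.0, 0.0, 0.0]:  # input arrives only at the first time step...
    h = rnn_step(x, h)
print(round(h, 3))         # ...yet h remains nonzero: the memory persists
```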

19
Q

Why are RNNs a good model of memory/visual system?

A

they process data over time, so they can do action selection based on past and present data, not just the present. the activity of hidden units in an RNN resembles the mixed selectivity of neural data for a dot-motion stimulus

20
Q

What are the limitations of RNNs as a model of memory/visual system?

A

computationally costly, and backpropagation through time is biologically implausible

21
Q

What is the issue with temporal credit assignment?

A

the difficulty of assigning credit for outcomes that may occur several steps after the input that provoked them, e.g. working out who infected you when you fall ill with a virus that has a long incubation period

22
Q

What is backpropagation?

A

method of updating weights in a neural network: the error (difference between output value and target value) is passed backwards through the network, and an optimisation algorithm such as gradient descent is used to adjust the weights to minimise the error
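For a single linear neuron the whole loop fits in a few lines (numbers are illustrative, not from the card):

```python
# One neuron, squared-error loss: pass the error backwards via the chain
# rule, then use gradient descent to adjust the weight.
x, target = 2.0, 10.0
w, lr = 1.0, 0.1
for _ in range(50):
    output = w * x            # forward pass
    error = output - target   # dLoss/dOutput for loss = 0.5 * error**2
    grad_w = error * x        # chain rule: dLoss/dw
    w -= lr * grad_w          # gradient descent update
print(round(w, 3))  # -> 5.0, since 5.0 * 2.0 hits the target
```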

23
Q

What is backpropagation through time?

A

you have to propagate back through time steps as well as layers, adjusting the weights at each time step based on how they contributed to the error at the end, by unfolding the network. but you can't unfold all of it, so there has to be a cutoff/truncation window

24
Q

Explain these parts of a neural network:
weights
activation function
bias term

A

weights = random numbers adjusted by the learning rule to reduce error; they roughly represent the relative importance/contribution of each input to the output. activation function = converts the weighted sum into an output, determining whether the neuron should be activated or not based on a threshold. bias term = controls the position of the decision threshold for the activation function; can be set to a chosen value if you know what you need/are testing, or initialised and updated by the network similarly to the weights

25
Q

What is a convolution?

A

mathematical operation combining two functions to produce a third function that represents how one function modifies the other. In a CNN, a convolution is the sum of products of a filter's weight values and the corresponding input values within the filter's receptive field
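The filter-times-receptive-field operation can be sketched for one output position (image and kernel values are made up for illustration):

```python
# One feature-map value: element-wise products of the filter weights and
# the input patch under its receptive field, summed together.
def conv_at(image, kernel, row, col):
    k = len(kernel)
    return sum(image[row + i][col + j] * kernel[i][j]
               for i in range(k) for j in range(k))

image = [[1, 2, 0],
         [0, 1, 3],
         [4, 1, 0]]
kernel = [[1, 0],
          [0, 1]]  # illustrative 2x2 filter
print(conv_at(image, kernel, 0, 0))  # 1*1 + 2*0 + 0*0 + 1*1 -> 2
```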

26
Q

What is Balint’s syndrome and what does it show?

A

caused by dorsal stream lesions; patients exhibit difficulty with novel stimuli and counting, which are characteristic deficits of CNNs, suggesting CNNs model the ventral but not the dorsal stream

27
Q

Explain evidence for CNNs as model of visual system

A

Early CNN filters have representational properties like early visual areas, e.g. V1's orientation selectivity. CNN responses to objects map onto the selectivity of inferior temporal (IT) cortex neurons. CNNs optimised for better performance and CNNs optimised to resemble IT cortex data both resemble IT data