Week 4: recurrent visual processing Flashcards

1
Q

What are lateral connections?

A

Inputs to a unit coming from other units in the same layer

2
Q

For what kind of networks are temporal inputs important?

A

Linguistic processing

3
Q

What is a recurrent convolutional artificial neural network?

A

A network in which the convolutional filter is applied not only to a spatially limited group of units in the previous layer but also to surrounding units in the same layer, and possibly to units in other layers
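As a rough illustration (not from the lectures), one time step of a recurrent convolutional layer can be sketched in NumPy. The kernels `w_b` (bottom-up) and `w_l` (lateral) and all values are made up:

```python
import numpy as np

def conv1d(x, w):
    """'Same'-padded 1-D convolution of signal x with kernel w."""
    pad = len(w) // 2
    xp = np.pad(x, pad)
    return np.array([np.dot(xp[i:i + len(w)], w) for i in range(len(x))])

# Illustrative kernels: w_b filters the previous layer (bottom-up),
# w_l filters surrounding units in the SAME layer (lateral).
w_b = np.array([0.2, 0.6, 0.2])
w_l = np.array([0.1, 0.0, 0.1])   # zero at the centre: no self-connection

def recurrent_conv_step(prev_layer, same_layer):
    """One time step: bottom-up drive plus lateral drive, then ReLU."""
    drive = conv1d(prev_layer, w_b) + conv1d(same_layer, w_l)
    return np.maximum(drive, 0.0)

x = np.array([0.0, 1.0, 0.0, 0.0, 1.0])   # toy input layer
h = np.zeros_like(x)                       # recurrent layer starts silent
for _ in range(5):                         # activity evolves over time steps
    h = recurrent_conv_step(x, h)
```

Because the lateral filter reads the same layer's previous activity, the layer's response unfolds over time steps rather than in a single pass.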

4
Q

Why are biological and recurrent convolutional networks time-dependent?

A

The activity in a layer changes depending on surrounding activity within the layer and on activity in the next layer. There is a dynamic interaction between layers as they settle towards an equilibrium, and this settling process is time-dependent

5
Q

How is recurrent activity implemented?

A

Through lateral and feedback connections

6
Q

What are dynamic neural oscillations?

A

The dynamic interactions between excitatory and inhibitory neural populations produce oscillations in population activity. At oscillation peaks, excitatory population activity is highest
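A minimal sketch of how an excitatory-inhibitory loop produces oscillations: E drives I, I suppresses E. All constants below are illustrative, not measured biological values:

```python
import numpy as np

# Linear rate model of coupled excitatory (E) and inhibitory (I) populations.
# Excitation drives inhibition, inhibition suppresses excitation; this
# feedback loop makes the activity ring (damped oscillation) rather than
# rise smoothly to its steady state.
dt, steps = 0.01, 3000
E = np.zeros(steps)
I = np.zeros(steps)
for t in range(steps - 1):
    dE = -0.1 * E[t] - 1.0 * I[t] + 1.0   # leak, inhibition, constant drive
    dI = -0.1 * I[t] + 1.0 * E[t]         # leak, excitatory drive
    E[t + 1] = E[t] + dt * dE
    I[t + 1] = I[t] + dt * dI
```

The excitatory trace overshoots and oscillates around its equilibrium before settling, which is the population-level rhythm the card describes.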

7
Q

How is recurrent activity implemented in deep networks and how is this different from biological networks?

A

In a layer with recurrent activity, the activity of each unit at one time point feeds into the same unit at the next time point. In biological networks, neurons never synapse directly with themselves
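A sketch of this difference (all weights and sizes are made up): in a standard recurrent layer, the diagonal of the recurrent weight matrix `U` is each unit's self-connection; a more "biological" variant simply zeroes that diagonal:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.5, size=(4, 3))   # input weights
U = rng.normal(scale=0.3, size=(4, 4))   # recurrent weights; the diagonal
                                          # is each unit's self-connection

x = np.array([1.0, 0.0, -1.0])

# Standard recurrent layer: a unit's activity feeds back into itself.
h = np.zeros(4)
for _ in range(3):
    h = np.tanh(W @ x + U @ h)

# "Biological" variant: remove self-connections (zero the diagonal),
# since real neurons do not synapse directly with themselves.
U_bio = U - np.diag(np.diag(U))
h_bio = np.zeros(4)
for _ in range(3):
    h_bio = np.tanh(W @ x + U_bio @ h_bio)
```

The two variants settle to different activity patterns, showing that the self-connection is a real modelling choice, not a cosmetic detail.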

8
Q

What is a BLT artificial network?

A

Bottom-up, lateral and top-down artificial network - type of recurrent network

9
Q

What happens to the network’s activity in a BLT network?

A

The network’s activity reverberates back and forth: the result of each interaction affects the activity of the interacting elements, so there is no fixed state of activity

10
Q

How do BLT networks change the computation time, and why?

A

Much more computationally intensive:

  • more extensive filters are needed
  • multiple time steps must be modelled
11
Q

What developments have there been in recent convolutional networks?

A

Recently developed recurrent convolutional networks incorporate lateral and top-down connections, and researchers are investigating their effects

12
Q

What happens to image classification when using BLT networks?

A

Image classification performance in difficult tasks improves considerably

13
Q

Why are BLT networks time dependent?

A

The interactions between different layers make the network time-dependent

14
Q

What is the difference between a deep feedforward network and a shallow recurrent network?

A

Recurrent connections effectively make networks deeper

  • the same transformation is repeated over recurrent cycles
  • each layer performs multiple layer operations, each with the same set of weights, so there are fewer weights to learn
  • recurrent networks match neural architecture and activity more closely than deeper feedforward networks
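The weight-sharing point can be shown directly (sizes and weights below are illustrative): three recurrent cycles with one weight matrix compute exactly the same thing as a three-layer feedforward stack whose layers all share that matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(scale=0.4, size=(5, 5))   # ONE shared weight matrix

def recurrent_forward(x, cycles):
    """Repeat the same transformation: one set of weights,
    but effectively `cycles` layers of processing."""
    h = x
    for _ in range(cycles):
        h = np.maximum(W @ h, 0.0)       # same weights every cycle
    return h

def feedforward_equivalent(x):
    """The same computation unrolled as an explicit 3-layer
    feedforward pass with tied weights."""
    h1 = np.maximum(W @ x, 0.0)
    h2 = np.maximum(W @ h1, 0.0)
    h3 = np.maximum(W @ h2, 0.0)
    return h3

x = rng.normal(size=5)
```

A genuinely deep feedforward network would need three independent weight matrices for the same depth; the recurrent version learns one.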
15
Q

What happens in attractor/hopfield networks?

A

Frequently seen patterns of activity produce strong connections between the co-activated group of neurons through Hebbian learning, so the pattern becomes built into the connection weights. A new incoming pattern triggers a sequence of recurrent activity that activates units even when the input does not activate them directly: neurons usually activated together become active, so an incomplete incoming pattern is completed based on experience
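A tiny worked Hopfield example (the pattern and sizes are made up): the Hebbian outer-product rule stores a pattern in the weights, and recurrent update cycles then complete a corrupted version of it:

```python
import numpy as np

# Pattern to store, as +1/-1 unit states.
pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])

# Hebbian outer-product rule builds the pattern into the weights;
# zero the diagonal (no self-connections).
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)

# Present an incomplete/corrupted version: flip two units.
probe = pattern.copy()
probe[0] = -probe[0]
probe[3] = -probe[3]

# Multiple recurrent cycles pull the state back to the stored attractor.
state = probe.astype(float)
for _ in range(5):
    state = np.sign(W @ state)
```

After the recurrent cycles, `state` matches the stored pattern again: the incomplete input has been completed from experience.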

16
Q

What does the attractor/hopfield network require?

A

Multiple recurrent cycles

17
Q

How do different networks perform with recognising incomplete or noisy patterns?

A

Feedforward networks perform poorly, and their performance increases only linearly as more of the object is shown. A recurrent Hopfield attractor network approaches human performance

18
Q

How does feedback work in predictive encoding?

A
  • the initial forward sweep extracts image features
  • a prediction is generated based on previously seen patterns
  • the completed prediction is fed back to the lower layer, where it inhibits activity that fits the prediction
  • the feedback connection therefore carries the prediction
  • the feedforward signal becomes only the parts that do not fit the prediction; this remaining activity is the prediction error
  • Hebbian learning follows this error signal, similar to back-propagation
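The steps above can be sketched as one forward-prediction-error exchange between two layers. All names, weights, and the learning rate are illustrative, not from a specific model:

```python
import numpy as np

rng = np.random.default_rng(2)
W_fwd = rng.normal(scale=0.3, size=(4, 6))   # feedforward weights
W_back = W_fwd.T                              # feedback carries the prediction

x = rng.normal(size=6)                        # lower-layer input activity

# 1. Initial forward sweep extracts features.
features = np.maximum(W_fwd @ x, 0.0)

# 2. Feedback sends a prediction of the input back down.
prediction = W_back @ features

# 3. The prediction inhibits matching activity; what remains is the
#    prediction error, which is all that is fed forward again.
error = x - prediction

# 4. A Hebbian-style weight update follows the error signal.
W_fwd += 0.01 * np.outer(features, error)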
19
Q

What are PredNets?

A

A recent type of recurrent network used to implement predictive coding. They predict future activity based on current activity and previous experience, without requiring labels

20
Q

How can predictive coding happen in networks?

A

It can emerge spontaneously in a network where each layer has a full set of within-layer connections and feedforward and feedback convolution, without blocks explicitly designed to generate predictions

21
Q

Why do researchers try to train a network to minimise pre-activation in a predictive network?

A

When a network is asked to minimise the summed activation of its input layer, it cannot simply set everything to zero, because the input image itself produces a lot of activation. Instead, it must actively inhibit (predict away) the input. This is effective both for reducing energy consumption and for predicting the next input

22
Q

Why might the brain use predictive coding?

A

To reduce energy demands

23
Q

What is the Blue Brain Project?

A

A project that has recently finished mapping the full neural circuitry of mice

24
Q

Why were current processes such as convolution and rectification created?

A

They were chosen for computational efficiency rather than to accurately simulate the underlying biological process

25
Q

What has neuroscience revealed as the basic principles of neural responses and their interactions?

A
  • recurrent interactions
  • attractor networks
  • predictive coding
  • Hebbian learning from prediction errors
26
Q

What are current neural networks good for?

A

Making predictions about external events; they capture some characteristics of biological understanding and intelligence

27
Q

What is the software toolbox Neuron used for?

A

Biophysical simulation of interacting neurons as mapped from mouse brains; it uses the neural parameters and connection patterns actually measured in the mouse brain

28
Q

What are the two problems with making a similar map of the human brain?

A
  1. ethically we can’t make genetically engineered human clones to study
  2. the computing power needed to run the simulation of even the mouse brain is not available
29
Q

What is the problem with simulating neural circuitry on a computer?

A

Most computer processors are designed for linear mathematics. They can perform any operation accurately, but they are slower and more energy-hungry than the brain, so simulations are currently restricted to simpler neural models

30
Q

How is the computing power problem being overcome?

A

Running biophysical simulations in real time requires neuromorphic processors, which are designed specifically to run neural simulations. They are developing quickly and may be ready within 10 years