Biological Plausibility Flashcards

1
Q

What are artificial neural networks?

A

The main tools used in machine learning. As the “neural” part of their name suggests, they are brain-inspired systems intended to replicate the way that humans learn.

2
Q

What do neurons allow?

A

The biophysical properties of neurons, in particular alterations in their membrane potential, allow them to propagate action potentials through synapses to connected neurons in a network. The signal has spatial and temporal aspects as well as a magnitude, which determines whether synapses, and therefore networks, are strengthened or weakened. These functions are carried out by complex sets of ion channels, receptors and transporters acting through vast numbers of signalling cascades.

3
Q

How do artificial neurons differ from neurons?

A

Artificial neurons abstract and simplify the biological neuron down to inputs and an output.

4
Q

How are artificial neurons similar to neurons?

A

Like biological neurons, artificial neurons are connected and arranged in layers to form large networks.

5
Q

What do ANNs consist of?

A

input and output layers, as well as (in most cases) a hidden layer consisting of units that transform the input into something that the output layer can use
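A minimal sketch in plain Python (the weights and layer sizes are hypothetical) of how a hidden layer transforms the input into something the output layer can use:

```python
import math

def sigmoid(x):
    # Squashing activation applied by each unit
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # Each unit: weighted sum of its inputs plus a bias, passed through the sigmoid
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Hypothetical network: 2 inputs -> 3 hidden units -> 1 output unit
inputs = [0.5, -1.0]
hidden = layer(inputs, [[0.1, 0.4], [-0.3, 0.2], [0.7, -0.6]], [0.0, 0.1, -0.1])
output = layer(hidden, [[0.5, -0.2, 0.3]], [0.0])
print(output)  # a single value in (0, 1)
```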

6
Q

How are connections formed in ANNs?

A

Connections can be formed through learning and do not need to be ‘programmed’

7
Q

How do neural architectures relate to major approaches for neural computing?

A
  1. Hierarchical information flow from one layer to the next is copied by feedforward neural networks.
    - these architectures are capable of complex pattern identification
    - feedforward networks often learn from feedback about the error of their performance; the nervous system also uses feedback to learn
  2. Looping between groups or levels of neurons, in tasks such as learning handwriting or language recognition, has inspired the development of recurrent neural network approaches in which data can flow in multiple directions.
  3. The use of inhibition as well as excitation has guided the development of the cooperative/competitive actions found in autoassociative neural network simulations.
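A minimal sketch of point 2 (hypothetical weights): a recurrent unit feeds its own previous state back in as an input, so information loops across time steps instead of flowing only one way:

```python
import math

def rnn_step(x, h, w_in=0.8, w_rec=0.5, b=0.0):
    # The new hidden state depends on the current input AND the previous state,
    # so the unit's output loops back on itself between time steps.
    return math.tanh(w_in * x + w_rec * h + b)

h = 0.0
for x in [1.0, 0.0, -1.0]:  # a short input sequence
    h = rnn_step(x, h)
print(h)  # state after processing the whole sequence
```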
8
Q

Limitations of ANN

A
  1. Implementing biological characteristics in ANN models sometimes requires assumptions, simplifications and constraints to ensure good computational performance and problem resolution. As a result, several ANN implementations disregard aspects of natural neural networks and instead meet specifically computational criteria, which reduces the biological inspiration of the artificial models.
  2. Furthermore, analyses of bio-inspired systems are often applied to datasets that represent simulated rather than real human decision-making tasks. In other words, these are synthetic, constructed datasets produced in controlled situations, and they do not represent real situations of human intervention.
9
Q

What was the first model of an artificial neuron and who made it?

A

The McCulloch-Pitts neuron (1943), developed by Warren McCulloch and Walter Pitts.

10
Q

What features did McCulloch-Pitts neurons have?

A
  1. Neuron activity is an “all-or-none” process.
  2. A certain fixed number of synapses must be excited within a latent addition period to excite a neuron; this number is independent of previous activity and of the neuron's position.
  3. The only significant delay in the nervous system is synaptic delay.
  4. Activity of any inhibitory synapse prevents the neuron from firing.
  5. The network structure does not change over time.
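These rules can be sketched as a tiny function (binary inputs, absolute inhibition; the threshold value here is illustrative):

```python
def mcculloch_pitts(excitatory, inhibitory, threshold=2):
    # Rule 4: any active inhibitory synapse prevents the neuron from firing
    if any(inhibitory):
        return 0
    # Rules 1-2: all-or-none output; fire iff enough excitatory synapses
    # are active within the latent addition period
    return 1 if sum(excitatory) >= threshold else 0

print(mcculloch_pitts([1, 1, 0], []))   # 1: two active excitatory inputs reach the threshold
print(mcculloch_pitts([1, 1, 1], [1]))  # 0: inhibition vetoes firing
```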
11
Q

What are the features of an artificial neuron?

A

• xi : inputs (binary);
• wi : synaptic weights (real-valued, because synapses can inhibit (-) or excite (+) and have different intensities);
• computation occurs in the soma:
x0 = 1 and w0 = β = -θ, where β = bias and θ = activation threshold.

• The activation function can be:
– a hard limiter,
– threshold logic,
– a sigmoid.

• The biologically more plausible choice is the sigmoid function.
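The three activation functions, and the weighted sum they act on, can be sketched as follows (the weights and inputs are hypothetical):

```python
import math

def hard_limiter(v, theta=0.0):
    # All-or-none step at the activation threshold theta
    return 1 if v >= theta else 0

def threshold_logic(v, theta=0.0):
    # Piecewise-linear ramp, clipped between 0 and 1 above theta
    return min(1.0, max(0.0, v - theta))

def sigmoid(v, theta=0.0):
    # Smooth, graded response that saturates at 0 and 1
    return 1.0 / (1.0 + math.exp(-(v - theta)))

# Weighted sum with the bias folded in as x0 = 1, w0 = -theta
x = [1, 0, 1]          # binary inputs x_i (x[0] is the constant 1)
w = [-0.5, 0.3, 0.9]   # real-valued synaptic weights w_i (w[0] is the bias)
v = sum(wi * xi for wi, xi in zip(w, x))
print(hard_limiter(v), sigmoid(v))
```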

12
Q

What computational properties could be gained by using more complex models of neurons?

A

Adaptability – signalling via different neurotransmitters (e.g. 5-HT has a very different effect from GABA)

Coincidence detection – temporal and spatial summation

Temporal aspects – differences in axon diameter give neurons different conduction speeds

Differential responses – the response depends on where the neuronal signal terminates

13
Q

Discuss how the biophysical properties of a single neuron can affect what type of function it performs

A

Neurotransmitters – e.g. a GABAergic neuron would be inhibitory, whereas one that releases glutamate would be excitatory

Length – a very long neuron can sum signals from many neurons, whereas a very short one just transmits a signal between a few neurons

Termination – a neuron that terminates on muscle functions to produce muscle contraction, whereas a neuron that terminates on another neuron functions to signal

Diameter – determines speed: a larger diameter conducts faster because less signal is lost along the way

14
Q

Describe an example of a model whose function depends on its neurons having more detailed biophysical properties

A

In the adaptive coding model, the central idea is that neurons throughout large regions of prefrontal cortex have the capacity to code many different types of information. In any given task context, neurons adapt to preserve only information of relevance to current behaviour. At the same time, they support the representation of related information elsewhere in the brain, including coding of relevant stimuli, responses, representations in semantic memory and reward states. Any given cell has the potential to be driven by many different kinds of input — perhaps through the dense interconnections that exist within the prefrontal cortex. In a particular task context, many cells become tuned to code information that is specifically relevant to this task. In this sense, the prefrontal cortex acts as a global workspace or working memory onto which can be written those facts that are needed in a current mental program.

15
Q

How can neurons add?

A

Neurons can simply add their voltages together: the voltage is the linear combination of the incoming signals.

The voltage generated is proportional to the driving force x the ratio of conductances.

V = (G1 + G2)V1 / G0
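A numerical sketch of this linear summation, assuming two small synaptic conductances G1 and G2 sharing a common driving force V1, and a resting (leak) conductance G0 (all values hypothetical):

```python
# Linear summation: the voltage from two synapses is (approximately) the sum
# of the voltages each would generate alone, V = (G1 + G2) * V1 / G0
G0 = 10.0            # leak conductance (arbitrary units)
V1 = 60.0            # driving force (mV)
G1, G2 = 1.0, 2.0    # two small synaptic conductances

v_alone_1 = G1 * V1 / G0         # voltage from synapse 1 alone: 6 mV
v_alone_2 = G2 * V1 / G0         # voltage from synapse 2 alone: 12 mV
v_both = (G1 + G2) * V1 / G0     # both together: 18 mV
print(v_both == v_alone_1 + v_alone_2)  # True: a linear combination
```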

16
Q

How can neurons divide?

A

You’re dividing the amount of signal coming in by another signal that says, for example, “we want to scale down our hunger”. In principle, if shunting inhibition is very dominant in the cell, the neuron can generate an output proportional to that ratio.

V = G1V1 / G2
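A numerical sketch, assuming a dominant shunting conductance G2 (all values hypothetical), showing how stronger inhibition divides the output down:

```python
# Shunting (divisive) inhibition: with a dominant shunting conductance G2,
# the voltage is the excitatory drive divided by the inhibitory conductance.
G1 = 2.0    # excitatory synaptic conductance (arbitrary units)
V1 = 60.0   # excitatory driving force (mV)

for G2 in [4.0, 8.0, 16.0]:   # progressively stronger shunting inhibition...
    print(G2, G1 * V1 / G2)   # ...divides the output down: 30, 15, 7.5 mV
```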