Biological Plausibility Flashcards
What are artificial neural networks?
Among the main tools used in machine learning. As the “neural” part of their name suggests, they are brain-inspired systems intended to replicate aspects of the way that humans learn
What do neurons allow?
The biophysical properties of neurons, in particular alterations in their membrane potential, allow them to propagate action potentials through synapses to connected neurons in a network. These signals carry spatial and temporal information as well as a magnitude, which determines whether synapses, and therefore networks, are strengthened or weakened. These functions are implemented by complex sets of ion channels, receptors and transporters acting through vast numbers of signalling cascades
How do artificial neurons differ from neurons?
Artificial neurons abstract and simplify the biological neuron to a simple input-output function
How are artificial neurons similar to neurons?
artificial neurons are connected and arranged in layers to form large networks
What do ANN’s consist of?
input and output layers, as well as (in most cases) a hidden layer consisting of units that transform the input into something that the output layer can use
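As a minimal sketch of this layered structure (all sizes, weights and biases below are illustrative, not from the source), an ANN with an input layer, one hidden layer and an output layer can be written as:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # Each unit computes a weighted sum of its inputs plus a bias,
    # then passes the result through the sigmoid activation.
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Illustrative 2-input -> 2-hidden-unit -> 1-output network
hidden = layer([1.0, 0.0],
               weights=[[0.5, -0.4], [0.3, 0.8]],
               biases=[0.1, -0.2])
output = layer(hidden, weights=[[1.2, -0.7]], biases=[0.05])
print(output)
```

The hidden layer transforms the raw input into a representation the output layer can use, exactly as the card describes.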
How are connections formed in ANN’s?
Connections can be formed through learning and do not need to be ‘programmed’
How do neural architectures relate to major approaches for neural computing?
- the hierarchical flow of information from one layer to the next is copied by feedforward neural networks
- these architectures are capable of complex pattern identification
- feedforward networks often learn from feedback about the error in their performance, just as the nervous system uses feedback to learn
- looping between groups or levels of neurons, as in tasks such as learning handwriting or language recognition, has inspired the development of recurrent neural network approaches, in which data can flow in multiple directions
- the use of inhibition as well as excitation has guided the development of the cooperative/competitive interactions found in autoassociative neural network simulations
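The recurrent idea above can be sketched as a single update rule in which the hidden state feeds back into itself, so information can loop through time (all weights here are illustrative):

```python
import math

def rnn_step(x, h, w_in, w_rec, b):
    # The new hidden state depends on the current input AND the
    # previous hidden state, giving the network a form of memory.
    return math.tanh(w_in * x + w_rec * h + b)

h = 0.0
for x in [1.0, 0.0, 1.0]:  # a short input sequence
    h = rnn_step(x, h, w_in=0.8, w_rec=0.5, b=0.0)
print(h)
```

Because the previous state re-enters the computation, data flows in more than one direction, unlike a purely feedforward pass.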
Limitations of ANN
- Implementing biological characteristics in ANN models often requires assumptions, simplifications and constraints to ensure good computational performance and problem resolution. As a result, many ANN implementations ignore aspects of natural neural networks and instead meet specifically computational criteria, which reduces the biological inspiration of the artificial models.
- Furthermore, bio-inspired analyses are often applied to datasets that represent simulated rather than real human decision-making tasks. In other words, they are synthetic, built datasets from controlled situations that do not represent real situations involving human intervention.
What was the first model of an artificial neuron and who made it?
McCulloch-Pitts neuron (1943)
What features did McCulloch-Pitts neuron’s have?
- neuron activity “all-or-none” process;
- a certain fixed number of synapses
- excited within a latent addition period
- excitation of a neuron is independent of previous activity and of neuron position;
- the only significant delay in the nervous system is synaptic delay;
- activity of any inhibitory synapse prevents neuron from firing;
- network structure does not change along time.
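The features listed above can be sketched directly in code: an all-or-none output, a fixed threshold over excitatory synapses, and absolute inhibition where any active inhibitory synapse vetoes firing (the gate example below is illustrative):

```python
def mcp_neuron(excitatory, inhibitory, threshold):
    # Absolute inhibition: any active inhibitory synapse
    # prevents the neuron from firing.
    if any(inhibitory):
        return 0
    # All-or-none: fire iff the number of active excitatory
    # inputs reaches the fixed threshold.
    return 1 if sum(excitatory) >= threshold else 0

# AND gate: threshold 2 over two excitatory inputs
print(mcp_neuron([1, 1], [], threshold=2))   # fires -> 1
print(mcp_neuron([1, 0], [], threshold=2))   # below threshold -> 0
print(mcp_neuron([1, 1], [1], threshold=2))  # vetoed by inhibition -> 0
```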
What are the features of an artificial neuron?
• xi : inputs (binary);
• wi : synaptic weights (real-valued, because synapses can inhibit (-) or excite (+) with different intensities);
• computation occurs in the soma:
x0 = 1 and w0 = β = -θ, where β is the bias and θ is the activation threshold.
• The activation function can be:
– hard limiter,
– threshold logic,
– sigmoid.
• the sigmoid function is the most biologically plausible.
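The three activation functions can be compared on the same weighted sum u = Σ wi·xi + w0 (inputs and weights below are illustrative; the bias plays the role of w0 = -θ):

```python
import math

def hard_limiter(u):
    # Binary step: all-or-none output.
    return 1 if u >= 0 else 0

def threshold_logic(u):
    # Piecewise-linear ramp, clipped to [0, 1].
    return min(max(u, 0.0), 1.0)

def sigmoid(u):
    # Smooth and saturating; the most biologically plausible choice.
    return 1.0 / (1.0 + math.exp(-u))

def weighted_sum(x, w, bias):
    # u = sum_i w_i * x_i + bias   (bias = w0 = -theta)
    return sum(wi * xi for wi, xi in zip(w, x)) + bias

u = weighted_sum([1, 0, 1], [0.4, 0.9, 0.3], bias=-0.5)
print(hard_limiter(u), threshold_logic(u), sigmoid(u))
```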
What computational properties could be gained by using more complex models of neurons?
Adaptability – signalling via different neurotransmitters, e.g. 5-HT has a very different effect from GABA
Coincidence detection – temporal/spatial summation
Temporal aspects – conduction speed varies with axon diameter
Differential responses – the response depends on where the neuronal signalling terminates
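Coincidence detection through temporal summation can be sketched with a discrete-time leaky integrator (all constants below are illustrative): two spikes arriving close together summate past threshold, while the same spikes spread apart decay away first.

```python
def fires(spike_times, leak=0.5, epsp=1.0, threshold=1.5, t_max=10):
    # Each time step the membrane potential decays by `leak`;
    # each arriving spike adds one EPSP. The neuron fires only
    # if the summed potential reaches threshold.
    v = 0.0
    for t in range(t_max):
        v *= leak
        if t in spike_times:
            v += epsp
        if v >= threshold:
            return True
    return False

print(fires({2, 3}))  # coincident spikes summate -> True
print(fires({2, 8}))  # spread-out spikes decay away -> False
```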
Discuss how the biophysical properties of a single neuron can affect what type of function it performs
Neurotransmitters – I.e. GABAergic neuron would be inhibitory whereas one that releases glutamate would be excitatory
Length – if very long then summation of signals of many neurons vs. very short then just transmitting signal between a few neurons
Termination – a neuron that terminates in muscle functions to produce muscle contraction vs. neuron that terminates on another neuron to signal
Diameter – speed: a larger diameter conducts faster because internal (axial) resistance is lower, so less signal is lost along the way
Describe an example of a model whose function depends on its neurons having more detailed biophysical properties
In the adaptive coding model, the central idea is that neurons throughout large regions of prefrontal cortex have the capacity to code many different types of information. In any given task context, neurons adapt to preserve only information of relevance to current behaviour. At the same time, they support the representation of related information elsewhere in the brain, including coding of relevant stimuli, responses, representations in semantic memory and reward states. Any given cell has the potential to be driven by many different kinds of input — perhaps through the dense interconnections that exist within the prefrontal cortex. In a particular task context, many cells become tuned to code information that is specifically relevant to this task. In this sense, the prefrontal cortex acts as a global workspace or working memory onto which can be written those facts that are needed in a current mental program.
How can neurons add?
Neurons can simply add their voltages together: the resulting voltage is a linear combination of the two signals.
The voltage generated is proportional to each driving force multiplied by the ratio of conductances:
V = (G1·V1 + G2·V2) / G0
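A minimal sketch of this summation, assuming G0 denotes the total conductance (an assumption about the notation; all values below are illustrative):

```python
def summed_voltage(g1, v1, g2, v2, g0=None):
    # Each input voltage contributes in proportion to the ratio of
    # its conductance to the total conductance G0.
    if g0 is None:
        g0 = g1 + g2  # assume G0 is the total conductance
    return (g1 * v1 + g2 * v2) / g0

# Equal conductances -> the output is the simple average of the signals
print(summed_voltage(1.0, 10.0, 1.0, 20.0))  # 15.0
```

With unequal conductances the stronger synapse dominates the combination, which is how the ratio of conductances weights each driving force.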