Exam Flashcards

1
Q
  • What is the purpose of the brain according to Carl Sagan and Daniel Wolpert? (5)
A
  • Daniel Wolpert argued that the sole purpose of the brain is to plan, organise and execute movements.
  • He based this argument on the observation that plants do not have brains and cannot move.
  • Wolpert illustrates the claim that the purpose of the brain is to organise, plan and execute movements with the sea squirt, an aquatic animal that swims through the ocean (and has a brain) until it finds a rock it can attach itself to.
  • Once that happens, the animal digests its own brain. Wolpert therefore concluded that a brain is not necessary if you do not need to move.
  • Wolpert’s argument about the purpose of the brain is similar to Carl Sagan’s, which holds that the brain’s purpose is to store information to produce adaptive and complex behaviour beyond what can be encoded in genes.
2
Q
  • What is dualism, and to whom is the view attributed? (2)
A

Dualism is the idea that both the mind and body are separate, and that the mental is distinct from the physical (i.e., the mind is not the brain).
The view of dualism is mainly attributed to the philosopher Descartes.

3
Q

Where was substance dualism proposed, and
what is substance dualism? (2)

A

In his Meditations on First Philosophy, Descartes proposes a version of dualism called substance dualism, also known as interactionism.

Substance dualism holds that the mind and body are essentially dissimilar from each other and are made up of two different substances.

4
Q

In substance dualism, what are the two substances? (2)

A

the body is made up of res extensa (corporeal [i.e., physical substance])

and the mind is made up of res cogitans (thinking [i.e., non-physical substance]).

5
Q

In substance dualism, where are the mind and body located, and how are they experienced in ordinary perception? (2)

A

The mind cannot be located in space and cannot be experienced through ordinary perception,

whereas the body is located in space and can be experienced in ordinary perception.

6
Q

In substance dualism, although mind and body separate and made up of 2 substances, they

A

influence each other causally (interacting).

7
Q

Example of substance dualism: quickly swimming back to the boat on seeing a shark (6)

A

1) light waves from the shark hit your retina,
2) your brain extracts sensory information from the activation pattern of your retina,
3) this information is passed to your non-physical mind,
4) the mind interprets the sensory information it has received from the brain and recognises it is a shark,
5) it decides that the best thing to do is swim back to the boat and get on it, and
6) the brain sends signals to your muscles and you swim back to the boat.

8
Q

  • What are the strong objections to dualism? = First objection: lacks causality and goes against conservation of energy (3)

A

Dualism has been shown to lack causality.

Descartes considered the mind and body to be different substances – one physical and one mental – that affect each other causally (interact with each other) via the pineal gland.

Dualism lacks causality because how the pineal gland transmits information to the mind and back has never been explained scientifically; no one has been able to propose a theory for this in roughly 400 years.

9
Q

What are the strong objections to dualism? = First objection: lacks causality and goes against conservation of energy (8)

A

Dualism has also been shown to go against the conservation of energy principle.

Informing the brain involves applying force to ions (i.e., electrically charged particles which make neurons fire action potentials).

Where the energy comes from to make those charged particles (i.e., ions) move must be accounted for under the conservation of energy principle.

Nature and the universe generally follow the law of conservation of energy, which states that our universe is a closed system, and that in closed systems energy is neither created nor destroyed but can change its form from one kind of energy to another (e.g., kinetic energy to heat).

If substance dualism were true, it would mean that energy is constantly added to the closed system of our universe every time the mental (res cogitans) interacts with the physical (res extensa).

Thus the law of conservation of energy would have to be false, and substance dualism contradicts fundamental laws of physics.

However, there is a lot of scientific evidence to suggest that conservation of energy is true!

Since substance dualism contradicts basic fundamental physics, according to Daniel Dennett this is a fatal flaw in dualism that is inescapable and unavoidable.

10
Q

What are the strong objections to dualism? = second objection

A

Dualism has a problem with evolution: very few researchers would attribute mind stuff to a single cell, and the question of where and when mind stuff (i.e., res cogitans) appears in the chain of evolution has not been answered.

11
Q

What are the strong objections to dualism? = third objection of dualism (6)

A

The third objection to dualism comes from research on physical brain damage and psycho-active substances.

Mental states have been shown to be affected by physical substances.

Furthermore, psycho-active substances have been shown to change a person’s mental state.

A person’s mental state is also affected by physical damage to their brain.

For example, a person can experience amnesia caused by damage to the hippocampus in their brain.

How psycho-active substances affect res cogitans, how a person’s mental state is affected by physical damage to the brain, and how damage to the brain prevents the realisation of mental states in res cogitans have not been answered satisfactorily.

12
Q

What is materialism?

A

Materialism is the view that the mind is a physical object and that mental states are derived from physical states.

13
Q

How does materialism explain that you quickly swim to the boat once you see a shark in the water? (3)

A

1) light waves from the shark hit your retina,
2) your brain extracts sensory information from the activation patterns of the retina and processes it, and
3) the brain sends signals to the muscles and you swim quickly back to the boat.

14
Q

What is identity theory? (2)

A

According to the identity theory (proposed by materialists), the mind is the brain, and mental states such as beliefs, desires, emotions (Etc…) really are physical states of the brain.

For each mental state, according to this theory, there is a unique physical configuration of the brain (i.e., distribution of activity in brain cells) such that a life form can be in that mental state only if it is in that brain state.

15
Q

Describe the large-scale anatomy of the brain and its functions: beginning part

A

On a large scale, the major subdivisions of the brain are the telencephalon, diencephalon, mesencephalon and rhombencephalon.

16
Q

Describe the large-scale anatomy of the brain and its functions = telencephalon (3)

A

The telencephalon consists of the cerebrum, the olfactory bulb and subcortical structures (e.g., the basal ganglia).

The function of this division of the brain: the cerebrum is responsible for higher (cortical) function.

The basal ganglia are important for a wide range of functions such as action selection, attention, procedural learning, habit learning, conditional learning and eye movements.

17
Q

Describe the large-scale anatomy of the brain and its functions = diencephalon (5)

A

The diencephalon consists of the thalamus, hypothalamus, epithalamus and subthalamus.

The thalamus is the main relay station for the brain between the telencephalon (cerebral cortex) and the brain stem/spinal cord for sensory information.

The epithalamus helps to regulate circadian rhythms.

The subthalamus helps to regulate and coordinate motor function.

The hypothalamus’s main function is to maintain the body’s internal balance (e.g., regulating blood pressure, body temperature), which is known as homeostasis.

18
Q

Describe the large-scale anatomy of the brain and its functions = mesencephalon (2)

A

The mesencephalon is the front portion of the brain stem and contains the tectum and tegmentum.

The mesencephalon is responsible for: 1) auditory processing (hearing), 2) pupil dilation, 3) eye movement and 4) regulating muscle movement.

19
Q

Describe the large-scale anatomy of the brain and its functions = rhombencephalon (2)

A

The rhombencephalon is the lower part of the brain stem (i.e., hindbrain) and contains the medulla oblongata, pons and cerebellum.

This part mostly deals with autonomic functions such as breathing, alertness, digestion, sweating, heart rate, attention and many more.

20
Q

How can finer subdivisions of the brain be mapped out? (6)

A

In the human brain, finer subdivisions can be mapped out using Brodmann areas (Brodmann, 1909).

The Brodmann areas map out smaller areas of the brain based on 3 elements: 1) connectivity (intrinsic, afferent, efferent), 2) cell types (based on cytoarchitecture) and 3) structure (e.g., how are the neurons grouped together?).

Afferent neurons are nerve cells that carry impulses towards the central nervous system (CNS).

Efferent neurons are nerve cells that conduct impulses away from the CNS.

Intrinsic neurons are cells whose axons and dendrites are all confined within a given structure.

Compared to Brodmann areas, there are more modern methods for finding finer subdivisions of the brain, such as gene expression.

21
Q

Describe the different circuit motifs/artificial neural networks: beginning part

A

There are different types of circuit motifs utilised in computational neuroscience models, such as: 1) feed-forward neural networks, 2) feedback inhibition neural networks, 3) recurrent neural networks and 4) lateral inhibition neural networks.

22
Q

Describe the different circuit motifs/artificial neural networks
feed-forward network (2)

A

A feed-forward neural network is one in which a group of neurons projects directly (via excitatory connections) to another group of neurons.

The feed-forward neural network is the simplest artificial neural network that has been devised.
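As a minimal sketch (the weights and rates below are made up for illustration, not from the lecture), a feed-forward projection is just a weight matrix applied to the input group’s firing rates:

```python
import numpy as np

# Illustrative sketch of a feed-forward motif: one group of neurons
# projects directly to another through (excitatory) synaptic weights.
x = np.array([1.0, 0.5, 0.0])       # firing rates of the input group
W = np.array([[0.2, 0.8, 0.1],      # W[i, j]: weight from input neuron j
              [0.5, 0.1, 0.4]])     # to output neuron i

y = W @ x                           # each output neuron sums its weighted inputs
print(y)                            # close to [0.6, 0.55]
```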

23
Q

Describe the different circuit motifs/artificial neural networks
feedback inhibition

A

In a feedback inhibition neural network, excitatory principal neurons synapse onto inhibitory interneurons, which then inhibit those principal neurons by feeding back to them (a negative feedback loop; Carl & Jong, 2017).

24
Q

Describe the different circuit motifs/artificial neural networks
recurrent neural network

A

In recurrent neural networks, neurons are part of an interconnected circuit that sends feedback signals from one neuron to another.

25
Q

Describe the different circuit motifs/artificial neural networks
lateral inhibition neural network

A

In a lateral inhibition neural network, active neurons suppress neighbouring neurons’ activity through inhibitory synaptic connections (Cao et al., 2018).
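A minimal sketch of this motif (all values assumed for illustration): each neuron’s output is its input drive minus a fraction of its immediate neighbours’ activity, clipped at zero since rates cannot be negative:

```python
import numpy as np

# Illustrative lateral-inhibition sketch: a strongly driven neuron
# suppresses its neighbours through inhibitory connections.
x = np.array([0.2, 1.0, 0.3, 0.1])   # input drive to four neurons
k = 0.5                              # assumed inhibition strength

out = np.zeros_like(x)
for i in range(len(x)):
    # sum the drive of the immediate left and right neighbours (if any)
    neighbours = x[max(i - 1, 0):i].sum() + x[i + 1:i + 2].sum()
    out[i] = max(x[i] - k * neighbours, 0.0)

print(out)  # only the most active neuron survives the suppression
```

This sharpening of contrast (the strong neuron silencing weaker neighbours) is the classic effect of the motif.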

26
Q

Describe the principal physiological determinants of neural patterns of activity. That is, which factors determine the time and order of action potentials in a network of neurons? Ignore any variability in sensory signals due to outside (of the brain) factors for this question.

Beginning part

A

The factors that determine the time and order of action potentials in a network of neurons are its ion channels and its synaptic inputs.

27
Q

Describe the principal physiological determinants of neural patterns of activity. That is, which factors determine the time and order of action potentials in a network of neurons? Ignore any variability in sensory signals due to outside (of the brain) factors for this question.

Ion channels part (5)

A

Neurons have many ion-conducting channels that are embedded into their cell membrane (Dayan et al., 2001).

These ion channels are highly selective and only let one type of ion pass through them (Dayan et al., 2001) in response to changes in a neuron’s membrane potential.

Neurons across the brain differ in their composition of ion channels, and the types of ion channel a neuron has (e.g., sodium, potassium) depend on the neuron’s gene expression.

The opening and closing of different ion channels generates changes in a neuron’s membrane potential (i.e., produces an action potential).

The composition of a neuron’s ion channels will determine its behaviour (i.e., the time and order of its action potentials), since different ion channels vary in properties such as the time course of their opening and closing.

28
Q

Describe the principal physiological determinants of neural patterns of activity. That is, which factors determine the time and order of action potentials in a network of neurons? Ignore any variability in sensory signals due to outside (of the brain) factors for this question.

synaptic inputs part (5)

A

Synaptic inputs also influence the time and order of action potentials. Once an action potential has been produced at a particular part of the neuron’s cell membrane, it is propagated along the neuron’s axon: every part of the cell membrane becomes sequentially depolarised, initiating synaptic transmission to communicate with other neurons.

In synaptic transmission, the action potential travels all the way down the axon to the neuron’s pre-synaptic terminal.

This causes vesicles to fuse with the membrane and release a neurotransmitter, which diffuses across the synaptic cleft and binds to receptor molecules on the post-synaptic terminal of the receiving neuron.

If the neurotransmitter is excitatory (e.g., noradrenaline) then the post-synaptic neuron is more likely to fire an action potential.

If the neurotransmitter is inhibitory, then the post synaptic neuron is less likely to fire an action potential.

29
Q

What are neurons and describe the anatomy of neurons? (2)

A

Neurons are cells that maintain a difference in electrical potential between their inside and outside.

Neurons come in different shapes and varieties, but the most common is the cortical pyramidal neuron.

30
Q

What are neurons and describe the anatomy of neurons? = axon

A

Neurons (cortical pyramidal neurons) have a long axon along which electrical impulses travel away from the neuron to be received by other neurons.

31
Q

What are neurons and describe the anatomy of neurons? = dendrites

A

The dendrites are where the neuron receives incoming electrical impulses from other neurons via synaptic connections.

32
Q

What are neurons and describe the anatomy of neurons? = cell body

A

The cell body (also called the soma) is the part of the neuron that holds the nucleus as well as other organelles.

33
Q

What are neurons and describe the anatomy of neurons? = nucleus

A

The nucleus contains the genetic material of the cell.

34
Q

What are neurons and describe the anatomy of neurons? = myelin sheath

A

The myelin sheath is a lipid layer around the axon; electrical impulses jump from one gap between the lipid segments to the next, making the transport of an electrical impulse more efficient.

35
Q

What are neurons and describe the anatomy of neurons? = neuron has cell membrane where (2)

A

ion pumps exchange electrically charged atoms (ions) with the extra-cellular medium, with some ions pumped in and some pumped out

    • this charge distribution produces the resting membrane potential, which is usually around -70 to -80 millivolts (mV).
36
Q

What is the process of action potential in neurons? (8)

A

A neuron at rest will typically have a membrane potential of around -70 millivolts.

An action potential is produced at a particular part of the neuron’s cell membrane when an external stimulus of sufficient strength raises the resting membrane potential to the neuron’s action potential threshold (also known as the threshold potential).

The threshold potential of a neuron is usually -55 mV.

Reaching the threshold potential causes the voltage-gated Na+ ion channels to open, allowing a rapid influx of Na+ ions into the neuron and increasing the membrane potential towards +40 mV.

This causes the depolarisation of a small region of the cell membrane.

The voltage-gated Na+ ion channels then begin to close and the influx of Na+ into the neuron stops.

The voltage-gated K+ ion channels open with a slight delay, causing an efflux of K+ out of the cell, which decreases the membrane potential to around -90 mV and hyperpolarises the neuron.

Eventually the voltage-gated K+ ion channels close and the voltage returns to the resting membrane potential. This is called the refractory period.

37
Q

What is the process of synaptic transmission in neurons? (how does the AP propagate to other neurons) - (5)

A

Once an action potential has been produced at a particular part of the neuron’s cell membrane, it is propagated along the neuron’s axon: every part of the cell membrane becomes sequentially depolarised, initiating synaptic transmission to communicate with other neurons.

In synaptic transmission, the action potential travels all the way down the axon to the neuron’s pre-synaptic terminal.

This causes vesicles to fuse with the membrane and release a neurotransmitter, which diffuses across the synaptic cleft and binds to receptor molecules on the post-synaptic terminal of the receiving neuron.

If the neurotransmitter is excitatory (e.g., noradrenaline) then the post-synaptic neuron is more likely to fire an action potential.

If the neurotransmitter is inhibitory, then the post synaptic neuron is less likely to fire an action potential.

38
Q

What are the different ways in which models map summed inputs to firing rate? = beginning part (3)

A

The McCulloch-Pitts neuron model, the linear neuron model and the sigmoid neuron model all share the same equation (e.g., w1X1 + w2X2 + w3X3, which can be written as a sigma sum over an arbitrary number N of input neurons): they sum the inputs of the input neurons X, multiplied by their synaptic weights, to produce the activation Y of the receiver neuron.

However, these models map summed inputs to firing rate differently, because they use different transfer functions (G) and consequently produce different final outputs.

A transfer function introduces one more step between the activation Y and the final output of the neuron.
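The shared weighted-sum step can be sketched as follows (the weights and inputs are made up for illustration):

```python
# Shared core of the three models: activation Y is the weighted sum
# of the inputs, Y = w1*x1 + w2*x2 + ... + wN*xN.
def activation(inputs, weights):
    return sum(w * x for w, x in zip(weights, inputs))

Y = activation(inputs=[1.0, 0.0, 1.0], weights=[0.5, 0.9, 0.25])
print(Y)   # -> 0.75
```

The models then differ only in the transfer function G applied to Y.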

39
Q

What are the different ways in which models map summed inputs to firing rate? = McCulloch-Pitts model (5)

A

The McCulloch-Pitts model’s transfer function defines a threshold value θ (theta): if Y is greater than or equal to the threshold value, then the output is 1 (i.e., the neuron is active).

On the other hand, if Y is less than theta, then the output is 0 (the neuron is silent). The transfer function used is thus a step function.

The final output of the McCulloch-Pitts model is G(Y) = r = 1 or 0, where Y is interpreted as the activation of the neuron and r is some measure of the neuron’s output given its activation.

The ‘activation of the neuron’ is a fairly abstract notion for Y; it can be thought of as the internal state of the neuron that leads to an action potential or not.

r can be tentatively identified with the firing rate (the number of action potentials fired per second).
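The step transfer function can be sketched as follows (the threshold value is chosen arbitrarily for illustration):

```python
# McCulloch-Pitts step transfer function: the output is binary.
def step_transfer(Y, theta):
    return 1 if Y >= theta else 0   # 1 = active, 0 = silent

print(step_transfer(0.75, theta=0.5))   # -> 1 (activation reaches threshold)
print(step_transfer(0.25, theta=0.5))   # -> 0 (activation below threshold)
```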

40
Q

What are the different ways in which models map summed inputs to firing rate? = linear neuron model transfer function (4)

A

The linear neuron model does not use a step function as its transfer function, since real neurons have a lot of variability in their firing and do not fire only at 0 or 1.

The linear neuron model’s transfer function is piece-wise linear: if Y is greater than or equal to 0, then r = Y and the neuron is active, but if Y is less than 0, then r = 0 and the neuron is silent (which makes sense, since a firing rate cannot be negative).

The final output of the linear neuron model is G(Y) = r = Y, where r can take values between 0 and infinity.

In the linear neuron model it seems unreasonable to let the firing rate grow without bound as the input increases, as neurons cannot fire a million spikes per second.
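A sketch of the piece-wise linear transfer function:

```python
# Linear neuron model's transfer function: r = Y for Y >= 0, else 0
# (a firing rate cannot be negative).
def linear_transfer(Y):
    return max(Y, 0.0)

print(linear_transfer(0.75))    # -> 0.75 (output grows linearly with input)
print(linear_transfer(-2.0))    # -> 0.0 (negative activation -> silent)
```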

41
Q

What are the different ways in which models map summed inputs to firing rate? = sigmoid neuron model transfer function (2)

A

Therefore, the sigmoid neuron model introduces a saturating transfer function, so that the firing rate cannot exceed a given maximum frequency. The final output is G(Y) = r, where r grows as Y grows until it reaches the saturation level; a threshold is fitted at which the output of Y saturates.

This transfer function ensures our output does not grow to infinity with infinite inputs.
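A sketch of such a saturating transfer function (the maximum rate r_max and the exact sigmoid form are assumed for illustration):

```python
import math

# Sigmoid (saturating) transfer function: output stays between 0 and r_max.
def sigmoid_transfer(Y, r_max=100.0, theta=0.0):
    return r_max / (1.0 + math.exp(-(Y - theta)))

print(sigmoid_transfer(-10.0))  # near 0: strongly negative activation -> silent
print(sigmoid_transfer(10.0))   # near r_max: output saturates, never exceeds it
```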

42
Q

Disadvantages of the McCulloch-Pitts, linear and sigmoid neuron models (5)

A

These models are connectionist networks, meaning networks produced with neuron models that have no dynamics.

Having no dynamics means that if Y meets the threshold value (in some cases Y = 1), then the neuron stays constantly in a firing state, and there is no internal mechanism in these models that changes Y to 0 or another value.

This cannot be how real neurons work, since it costs a lot of metabolic energy to fire action potentials, so it is not possible for neurons to fire action potentials constantly.

Since there are no dynamics in these models, the equation summing the inputs of the X input neurons multiplied by their synaptic weights has to be computed again to obtain a different value for the receiver neuron Y.

Thus, these models are radically different from how real neurons behave.

43
Q

What is integrate and fire model? (14)

A

The integrate and fire model aims to capture the dynamic changes that occur in the membrane potential: in a real neuron, if we raise the membrane potential to just below its firing threshold, it will decay back to -70 millivolts (mV) over time.

This dynamic change in membrane potential is not captured by connectionist networks, which are networks produced with neuron models that have no dynamics.

The integrate and fire model’s equation focuses on how the membrane potential evolves with time, given some synaptic inputs and any externally injected currents.

The equation for the change in membrane potential over time adds up the factors that increase or decrease the variable u (the membrane potential).

At rest in the integrate and fire model, u is at -70 mV.

Factors that increase u are excitatory synaptic inputs and injected currents; these are positive terms in the equation for the change of membrane potential over time.

Factors that decrease u: at high u, ion channels open that bring u back down; these are negative terms in the equation.

The model assumes this effect is proportional to u (i.e., the further we are from the resting membrane potential, the more strongly we are pushed back down).

The model adds other variables, such as the time constant τ and urest (the resting membrane potential), to make the units of the equation work out.

The change in membrane potential over time is 0 when u (the current membrane potential) minus urest (the resting membrane potential) is 0.

We can also discretise the derivative of the membrane potential over time to calculate u2 (the membrane potential at time t2) from u1 (the membrane potential at time t1) and all the other inputs; this is repeated for every neuron in the network, given certain connectivity patterns (i.e., specified by the inputs).

The integrate and fire model has dynamics: if the membrane potential reaches a threshold value for the action potential (e.g., -40 mV), we say that a spike has been fired and the membrane potential resets to -70 mV.

The (leaky) integrate and fire model is also called the ‘formal’ spiking neuron and it gives us spike times but the spike wave-forms are not calculated in this model.

Thus, the model is a spiking model with intrinsic dynamics.
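The model’s behaviour can be sketched with simple Euler integration (all parameter values below are assumed for illustration):

```python
# Leaky integrate-and-fire sketch with Euler integration.
# tau, threshold, reset and the input drive I are assumed values.
tau = 10.0          # membrane time constant (ms)
u_rest = -70.0      # resting membrane potential (mV)
threshold = -40.0   # spike threshold (mV)
dt, T = 0.1, 100.0  # time step and total simulated time (ms)
I = 35.0            # constant input drive (expressed in mV for simplicity)

u, spike_times = u_rest, []
for step in range(int(T / dt)):
    # du/dt = (u_rest - u + I) / tau : the leak pulls u back toward u_rest
    u += dt * (u_rest - u + I) / tau
    if u >= threshold:              # threshold crossed: record spike, reset
        spike_times.append(step * dt)
        u = u_rest                  # post-spike reset to rest

print(len(spike_times))             # the model yields spike times, not wave-forms
```

With a constant drive like this the neuron fires at a regular interval; note that only the spike times come out of the model, the spike wave-form itself is never computed.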

44
Q

What is firing rate model (9)

A

The firing rate model is a non-spiking relative of the integrate and fire model.

The firing rate model changes the integrate and fire model by removing the spiking threshold, the post-spike reset and urest. It also re-interprets the variable u from the integrate and fire model (where it is the membrane potential) and renames it a, which stands for the activation of the neuron.

It also substitutes the synaptic action (as an effect on membrane potential) with the familiar summation of incoming inputs used in connectionist models.

It also adds a transfer function from the connectionist models, for instance a sigmoid, so that negative values of a get mapped to 0 and positive values saturate.

The transfer function turns the variable a into a firing rate. The activation a decays over time just like the membrane potential in the integrate and fire model, but it is thought of as driving a firing rate rather than being a membrane potential.

The firing rate model assumes that the average rate at which a neuron fires action potentials (in response to inputs) adequately captures the fundamental properties of a neural network.

The firing rate model is non-spiking, meaning it does not model spikes, and any phenomenon that depends on accurate spike times cannot be modelled with it.

Although the firing rate model is non-spiking, it captures dynamic changes in activity (i.e., the average rate of spikes over time), and many neural phenomena can be modelled with rates alone.

Overall the firing rate model is a non-spiking model with intrinsic dynamics.
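A sketch of the firing rate model under these assumptions (all numerical values are illustrative):

```python
import math

# Firing-rate model sketch: the activation a relaxes toward its summed
# input (no threshold, no post-spike reset), and a sigmoid transfer
# function turns a into a bounded, non-negative rate.
def transfer(a, r_max=100.0):
    return r_max / (1.0 + math.exp(-a))   # negative a -> ~0, large a -> r_max

tau, dt = 10.0, 0.1                        # time constant and step (ms)
a = 0.0
summed_input = 2.0                         # weighted sum of inputs, connectionist style
for _ in range(1000):                      # 100 ms of simulated time
    a += dt * (-a + summed_input) / tau    # activation decays toward its input

rate = transfer(a)
print(rate)                                # settles near transfer(summed_input)
```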

45
Q

What is the Hodgkin-Huxley model? (3)

A

The Hodgkin-Huxley model models ion channels and outlines the mechanisms that underlie the initiation and propagation of action potentials, based on Hodgkin and Huxley’s work on the squid giant axon.

It has an equation for how each ion channel changes which is then plugged into an equation of rate of change of membrane potential over time.

Overall, it is a spiking model with intrinsic dynamics.
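For reference, the model’s membrane equation in its standard form (not spelled out in the card) sums the sodium, potassium and leak currents:

```latex
C_m \frac{dV}{dt} = -\bar{g}_{\mathrm{Na}}\, m^{3} h \,(V - E_{\mathrm{Na}})
                    - \bar{g}_{\mathrm{K}}\, n^{4} \,(V - E_{\mathrm{K}})
                    - \bar{g}_{\mathrm{L}} \,(V - E_{\mathrm{L}}) + I_{\mathrm{ext}}
```

Each gating variable x ∈ {m, h, n} obeys its own first-order equation, dx/dt = α_x(V)(1 − x) − β_x(V)x; these are the per-channel equations that get plugged into the equation for the rate of change of membrane potential.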

46
Q

What research has shown that the spinal cord is not a mere relay station? (6)

A

The spinal cord is not a mere relay station between the brain and muscles.

This was shown by the studies of Cabelguen et al. (2003) and Delvolvé et al. (1997).

In their studies, they performed a decerebrate preparation on a salamander, so that only its brain stem and spinal cord were left.

They fixed the body of the salamander in a viable solution (which keeps its tissues in a viable state) and inserted two electrodes into the MLR (mesencephalic locomotor region), delivering a constant current.

The researchers found that at low MLR stimulation the salamander’s body performs a walking gait, while at high MLR stimulation this turns into a swimming gait.

In conclusion, they found that higher brain areas are not necessary to produce locomotion modes and that the spinal cord is not a mere relay station.

47
Q

What are the different ways to record locomotor modes in the salamander, and what has research shown? = beginning part

A

The two different ways to record locomotor modes in the salamander are fictive locomotion and in vivo (i.e., in the body) EMG (electromyography).

48
Q

What are the different ways to record locomotor modes in the salamander, and what has research shown? = EMG part (4)

A

The first technique is through in vivo EMG and EMG records the electrical activity in muscles via electrodes.

The EMG data from the salamander show that muscle activation differs between the salamander’s locomotor modes.

From the EMG data, the swimming gait of the salamander shows a wave of muscle activity that travels down the body (travelling wave), with alternating muscle contractions on either side of the body and a constant lag between one muscle and the next.

From the EMG data, the walking gait of the salamander shows that all the muscles on one side of the trunk first become active in unison with two legs; in the next cycle these muscles are silent and the other side of the trunk becomes active (standing wave).

49
Q

What are the different ways to record locomotor modes in the salamander, and what has research shown? = fictive locomotion (8)

A

The second way to measure locomotor modes in salamander is through fictive locomotion.

The methodology of fictive locomotion involves extracting the whole spinal cord and putting it into a solution (containing N-methyl-D-aspartate [NMDA]) that helps to keep the tissue in a viable state.

The electrodes are then placed directly on the ganglia.

More specifically, fictive locomotion places the electrodes to measure ventral root recordings (VRs); the ventral roots are the nerve endings that go to the muscles.

Ganglia are nerves that come out of the spinal cord, and NMDA is an excitatory agonist that makes neurons fire.

Fictive locomotion then measures the electrical activation of the spinal cord nerves, summed as the collective output of many spinal cord neurons sending action potentials along the nerves.

Using the fictive locomotion method, researchers found that neighbouring spinal cord segments peak at slightly offset times (i.e., there is a phase lag between the spinal cord segments).

This reflects the properties seen in the salamander’s muscle activation: a wave of muscle activity travels down the body (travelling wave), with a constant lag between one muscle and the next.

50
Q

What does this allow us to infer about spinal cord networks? (Fictive Locomotion Research).

A

This pattern of collective output during fictive locomotion suggests that the spinal segments (as neural networks) must be coupled to each other so they can influence each other locally (e.g., one side of a segment is active while the other side is silenced).

51
Q

Why do we use lampreys instead of salamanders to understand muscle activation in swimming? (2)

A

To understand how neurons generate the muscle activation for swimming,

we use lampreys, since there is much more single-neuron data on lampreys and the lamprey swims just like a salamander (i.e., muscle contractions in alternation, left-right, left-right, slightly delayed along the body).

52
Q

What is the actual swimming behaviour of lampreys? (4)

A

EMG data show that the lamprey swims by producing alternating activation of motor neurons on the left and right of each segment.

Its roughly 100 spinal cord segments are activated successively with a phase delay.

This allows the animal to push itself through the water.

The higher the frequency of alternation, the faster the lamprey swims.
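This motor pattern can be sketched numerically. The sketch below is a toy illustration, not a biophysical model; the segment count, cycle frequency and per-segment phase lag are assumed, illustrative values:

```python
import numpy as np

def segment_activity(t, n_seg=100, freq=2.0, lag=0.01):
    """Burst envelopes of the lamprey motor pattern at time t (seconds):
    every segment oscillates at the same frequency, the left and right
    halves of each segment are in antiphase, and there is a constant
    phase lag from one segment to the next (a head-to-tail wave)."""
    seg = np.arange(n_seg)
    phase = 2 * np.pi * (freq * t - lag * seg)  # lag grows along the body
    left = np.maximum(np.sin(phase), 0.0)       # left-side burst envelope
    right = np.maximum(-np.sin(phase), 0.0)     # right side in antiphase
    return left, right
```

At any instant a segment's left and right halves are never active together, and neighbouring segments peak at slightly offset phases; raising `freq` compresses the cycle, corresponding to faster swimming.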

53
Q

How was the spinal cord network of lampreys produced?

A

Various studies recorded individual neurons of lampreys, measured their ion channels and measured their synaptic connectivity to produce the spinal cord network of lampreys.

54
Q

How does the spinal cord network of lampreys work to produce locomotion? (8)

A

The spinal cord locomotor network of lampreys contains cross-inhibitory interneurons (CCINs), excitatory interneurons (EINs) and motor neurons (MNs).

The network represents one segment of the spinal cord.

The lamprey's locomotion (i.e., alternating rhythmic activity) is initiated when the interneurons and motor neurons (MNs) receive constant tonic input (i.e., a constant flow of action potentials impinging on spinal cord neurons) from the brainstem.

More specifically, the interneurons and MNs receive a descending excitatory drive from reticulospinal (RS) neurons in the brainstem (McClellan & Grillner, 1984).

There are recurrent connections between the EINs within each half-segment of the spinal cord.

These EINs have excitatory connections to the MNs, which make the muscle contract.

At the same time, the EINs excite the CCINs, which have inhibitory connections to all the neurons on the other side of the spinal cord (the contralateral half-segment).

This inhibition of the contralateral half-segment means that one side of the spinal cord is active while the other is silenced (i.e., prevented from firing action potentials), so both sides of the segment are not active simultaneously.
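This reciprocal-inhibition arrangement is a half-centre oscillator. A minimal rate-model sketch (all parameters are assumed illustrative values, not fitted to lamprey data): two units with mutual inhibition, slow adaptation and constant tonic drive alternate their activity like the two half-segments.

```python
import numpy as np

def half_center(steps=3000, dt=1.0, drive=1.0, w_inh=2.0,
                tau_r=20.0, tau_a=200.0, g_a=1.5):
    """Two firing-rate units with mutual inhibition and slow adaptation,
    both receiving the same constant (tonic) drive.  The active side
    suppresses the other until its own adaptation builds up, the
    inhibition weakens, and the silent side escapes: sustained
    alternation without any patterned input."""
    r = np.array([0.6, 0.4])   # firing rates of the two half-segments
    a = np.zeros(2)            # slow adaptation variables
    trace = np.zeros((steps, 2))
    for t in range(steps):
        inp = drive - w_inh * r[::-1] - g_a * a  # drive - inhibition - adaptation
        r += dt / tau_r * (-r + np.maximum(inp, 0.0))
        a += dt / tau_a * (-a + r)               # adaptation tracks own rate
        trace[t] = r
    return trace
```

Tracking which unit has the larger rate over time shows repeated switches of dominance, the rate-level analogue of left-right burst alternation.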

55
Q

What are the mechanisms that make one half of a spinal cord segment stop firing APs when there is tonic input from the brainstem? beginning part (2)

A

1) spike-frequency adaptation, and

2) lateral interneurons (LINs) becoming active mid-cycle and inhibiting the CCINs.

56
Q

What are the mechanisms that make one half of a spinal cord segment stop firing APs when there is tonic input from the brainstem? spike-frequency adaptation (5)

A

Spike-frequency adaptation means the reduction of a neuron's firing rate in response to a stimulus of constant intensity.

Spike-frequency adaptation helps to terminate ongoing activity: first, one side of the spinal cord segment becomes active, and its excitatory interneurons (EINs) fire lots of action potentials, which inhibit the other side of the spinal cord.

After a while, spike-frequency adaptation takes place, so the firing rate of the EINs falls.

The intervals between EIN spikes become larger with time, which makes the other side of the spinal cord not as strongly inhibited (i.e., fewer inhibitory action potentials arrive at the contralateral side of the spinal cord segment).

This means the other side of the spinal cord segment has time to become active, and it starts to fire multiple action potentials quickly and in succession, which inhibits the previously active side (this is called escape from inhibition).

57
Q

What are the mechanisms that make one half of a spinal cord segment stop firing APs when there is tonic input from the brainstem? lateral interneurons (4)

A

LINs also help to terminate ongoing activity, so that one side of the spinal cord segment is active while the other one is not.

LINs are featured in the spinal cord locomotor network of lampreys.

LINs terminate ongoing activity so that rhythmic alternating activity can occur in the lamprey's locomotion:

late during the ipsilateral bursting activity of the EINs and motor neurons (MNs) in the network, the LINs become active and inhibit the CCINs, which disinhibits the network neurons on the contralateral side and allows them to become active (Wallen et al., 1992).

58
Q

What are the neural mechanisms for the spinal cord lamprey network? (spike-frequency adaptation) - (5)

A

Spike-frequency adaptation is due to a phenomenon called spike afterhyperpolarisation (sAHP).

Hyperpolarisation is when the membrane potential becomes more negative than the resting membrane potential, which makes it more difficult for the neuron to emit its next spikes.

The sAHP is due to calcium ions flowing into the cell (through Ca2+ channels that open with each action potential, alongside Na+ ions) and slowly accumulating in the neuron.

Ca2+ drives a hyperpolarising current through a different ion channel, the calcium-dependent potassium channel, which brings the membrane potential down.

The accumulation of Ca2+ is sensed by this calcium-dependent potassium channel. Ca2+ accumulates slowly until it reaches a steady state, where the amount of Ca2+ transported away (the decay of the Ca2+ concentration) equals the amount of Ca2+ that flows in.
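A minimal sketch of this calcium mechanism (an illustrative leaky integrate-and-fire neuron with an abstract calcium pool; all parameter values are assumed, not measured lamprey values): each spike adds calcium, calcium decays slowly, and the calcium-dependent potassium current hyperpolarises the cell, so interspike intervals lengthen.

```python
import numpy as np

def adapting_lif(T=500.0, dt=0.1, I=2.0, tau_m=10.0, v_th=1.0,
                 tau_ca=100.0, ca_in=0.1, g_kca=1.0):
    """Leaky integrate-and-fire neuron with an abstract calcium pool:
    each spike lets Ca in, Ca decays slowly, and a calcium-dependent
    potassium current (strength g_kca * ca) opposes the drive,
    so interspike intervals grow (spike-frequency adaptation)."""
    v, ca = 0.0, 0.0
    spike_times = []
    for i in range(int(T / dt)):
        # membrane: leak + constant drive - calcium-dependent K current
        v += dt / tau_m * (-v + I - g_kca * ca)
        ca += dt / tau_ca * (-ca)      # Ca is pumped out between spikes
        if v >= v_th:
            spike_times.append(i * dt)
            v = 0.0                    # reset after the spike
            ca += ca_in                # Ca influx with each action potential
    return np.array(spike_times)
```

The interspike intervals grow from the first spike onwards until intake and decay of calcium balance, which is the steady state described above.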

59
Q

Describe the single cell model of the lamprey spinal cord network (6)

A

How the neural properties of the lamprey's spinal cord network determine the function of locomotion can be studied with a single-cell, multi-compartment Hodgkin-Huxley model of the lamprey spinal cord neuron.

In the normal Hodgkin-Huxley model we have an equation for each of the individual ion channels, which is then plugged into an equation for the rate of change of membrane potential over time.

However, in the multi-compartment Hodgkin-Huxley model of the lamprey spinal cord that Ekeberg et al. (1991) created, there are multiple compartments that constitute different parts of the neuron.

More specifically, there is a soma compartment and three other compartments for the dendrites of the neuron.

Each compartment is composed of different ion channels: 1) sodium (Na), 2) potassium (K), 3) calcium (Ca) and 4) the calcium-dependent potassium channel (KCa).

In the multi-compartment model, there is an equation for the rate of change of membrane potential over time, which will be 0 (i.e., the neuron will be at rest) when Eleak (the resting membrane potential) is equal to E (the current membrane potential).
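A hedged reconstruction of the general form of such a compartment equation (the exact formulation in Ekeberg et al. (1991) may differ; symbols assumed here: $E$ the compartment's membrane potential, $E_{\text{leak}}$ the resting potential, $G_m$ the leak conductance, $C_m$ the membrane capacitance, $I_i$ the ionic currents, and $G_{\text{core}}$ the coupling conductance to neighbouring compartments with potentials $E_j$):

```latex
\frac{dE}{dt} \;=\; \frac{(E_{\text{leak}} - E)\,G_m \;+\; \sum_i I_i \;+\; \sum_j (E_j - E)\,G_{\text{core}}}{C_m}
```

With all ionic and coupling currents at zero, $dE/dt = 0$ exactly when $E = E_{\text{leak}}$, matching the resting condition stated above.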

60
Q

Pros and cons of the multi-compartment single cell model of the lamprey network (4)

A

The pros of the multi-compartment model are that it is more realistic and closer to the biology, and that it allows one to simulate the effects of ion channels.

The cons of the multi-compartment model are: 1) more data are needed to fix the composition of ion channels, since the elements of the equations must be measured in real neurons for the model to map onto the biology, which is a very labour-intensive task,

2) it is very expensive computationally to simulate, as the equation for the rate of change of membrane potential over time must be computed for every compartment and every ion channel, and

3) it is hard to tune the parameters of the equations, as not all parameters have been measured, so researchers need to make a tough decision about which plausible values to use from the whole range of values available in the literature.

61
Q

Describe the single-cell model of the lamprey spinal cord showing evidence of spike afterhyperpolarisation: (4)

A

There is evidence of the spike afterhyperpolarisation that causes spike-frequency adaptation, which helps alternating rhythmic activity to occur in lampreys by inhibiting one side of the spinal cord segment while the other is disinhibited.

This is because they plotted the rate of change of membrane potential over time together with the calcium concentration.

They found that the membrane potential goes down once calcium ions flow in and accumulate in the neuron, which activates the hyperpolarising current.

As the calcium concentration decays, however, the membrane potential grows back up. This is what increases the distance between spikes (aka spike-frequency adaptation).

62
Q

Explain the Ca2+ dynamics in the lamprey spinal cord (5)

A

From experiments, they found that there are two types of calcium pools: 1) one where calcium ions flow in through Ca2+ channels with each action potential in the soma, and 2) one where calcium ions flow in at the NMDA synapse (when NMDA receptors are activated).

The strength of Ekeberg's calcium-dependent potassium current is driven by these two calcium pools.

The two Ca2+ pools have fast and slow Ca2+ dynamics: the intake and decay of Ca2+ ions happen at different timescales for the two pools.

They are fast for the membrane Ca2+ pool and slow for the NMDA-synapse Ca2+ pool.

For the NMDA-synapse pool, Ca2+ enters through receptor-gated channels at the NMDA synapse. After enough Ca2+ accumulates (the accumulation being sensed by the calcium-dependent potassium channel), it triggers a hyperpolarising current that brings the membrane potential down.

63
Q

**What are plateau potentials?** How are plateau potentials related to fictive locomotion / how is fictive locomotion slower than in vivo locomotion? (4)

A

Plateau potentials are sustained depolarisations that remain when action potentials are blocked with tetrodotoxin.

NMDA plateau potentials are produced when the generation of action potentials is blocked with tetrodotoxin (TTX), which suppresses Na+ channels from opening and closing; spikes then no longer affect the membrane potential, and no spike-triggered Ca2+ flows into the soma.

However, the membrane potential can still be affected through other ion channels.

With a strong and constant NMDA input (Ca2+ flowing in at the NMDA synapse), the membrane potential is brought up; it then plateaus, and finally decays when enough Ca2+ has accumulated for the calcium-dependent potassium channel to bring the membrane potential down.

64
Q

What are plateau potentials? How are plateau potentials related to fictive locomotion / how is fictive locomotion slower than in vivo locomotion? (5)

A

Fictive locomotion is slower than in vivo locomotion.

Plateau potentials are related to fictive locomotion.

This is because in fictive locomotion the isolated spinal cord is placed into a solution that contains a large NMDA concentration, which docks onto all the NMDA receptors of the spinal cord neurons.

Thus, it is hypothesised that the slow NMDA Ca2+ dynamics produce the fictive locomotion signals that are slower than in vivo locomotion.

In other words, fictive locomotion is slower than actual in vivo locomotion as a product of the unnaturally large NMDA concentration.

65
Q

Results of computer simulation of spinal cord network of Wallen = connected multi-compartment model made by Ekeberg et al. (1991) that

A

looks similar to lamprey’s spinal cord network.

66
Q

Results of computer simulation of spinal cord network of Wallen = Kainate simulation (2)

A

They found that stimulating the network of neurons with kainate, which activates non-NMDA receptors, gives left-right alternation of action potential firing (e.g., the left side fires while the right side is inhibited until the left side fatigues; then the other side becomes active and fires action potentials, and vice versa).

They found that increased kainate stimulation increased spikes per second and gave faster oscillations (which translates to faster swimming for the lamprey).

67
Q

Results of computer simulation of spinal cord network of Wallen = serotonin (2)

A

They also found that if you add serotonin (5-HT) to the model, it reduces the influence of the calcium-dependent potassium channel in such a way that the amplitude of the afterhyperpolarisation (AHP) is reduced: the AHP is weaker, and spike-frequency adaptation takes longer to occur.

This results in longer oscillations of action potential firing and a slower swimming frequency.

68
Q

Results of computer simulation of spinal cord network of Wallen = Higher amounts of NMDA added to model (2)

A

They also found that the higher the NMDA concentration added to the model, the slower and longer the model's NMDA oscillations.

This is because more Ca2+ flows in, and once an oscillation terminates it takes a long time for the Ca2+ to decay before the neuron becomes active again.

69
Q

Results of computer simulation of spinal cord network of Wallen = They also found evidence that lateral interneurons in the model network terminate ongoing activity so that rhythmic alternating locomotion happens for lampreys. This is because (2)

A

they compared the bursting activity of the network with the LINs connected to and disconnected from the network model.

They found that the rhythm becomes slower and more irregular when the LINs are disconnected. Thus, the inhibitory synaptic connections from the LINs onto the CCINs contribute to terminating ongoing activity.

70
Q

What is a central pattern generator and examples (2)

A

A central pattern generator is a network that takes simple inputs (e.g., tonic [i.e., constant] signal from brain stem) and produces a more complex pattern of neural activity (e.g., oscillations from rhythmic muscle activation).

Examples of CPG include the lamprey locomotor network, heartbeat and digestion.

71
Q

What is a central pattern generator and give an example of it? = first paragraph (9)

A

The heartbeat control system of the medicinal leech has been studied for three decades as a CPG.

The leech has two tubular hearts that run along the length of the body and move blood through a closed circulatory system.

The beating pattern of leeches (beat period 4 s to 10 s) is asymmetric, with one heart generating high systolic pressure through a front-directed peristaltic wave along its length (peristaltic coordination mode) and the other heart generating low systolic pressure through near-synchronous constriction along its length (synchronous coordination mode).

The peristaltic heart moves blood forward.

Compared to the peristaltic heart, the synchronous heart has been hypothesised to push blood into the peripheral circulation and to support rearward blood flow.

After about 20 to 40 heartbeats (switch period ~100-400 s) the hearts switch roles.

Each of the leech's two heart tubes receives excitatory input from the ipsilateral member of a pair of segmental heart motor neurons (HE cells), one pair located in each midbody segmental ganglion.

The firing pattern of the HE neurons (i.e., the fictive motor pattern) is bilaterally asymmetric, with motor neurons on one side firing in a rear-to-front progression while those on the other side fire nearly synchronously, with appropriate side-to-side coordination of these two firing patterns (which also switch).

The HE neurons are controlled and coordinated by the heartbeat CPG through rhythmic inhibitory drive.

72
Q

What is a central pattern generator and give an example of it? = second paragraph (7)

A

There are nine pairs of identified segmental heart interneurons (HN) (plus one identified pair) that compose of CPG.

The core CPG consists of 7 pairs of interneurons located in first seven midbody ganglia of nerve cord and indexed by ganglion number and body side (HN(L,1) – HN(R,7)).

The rhythmic activity in CPG network is paced by highly interconnected timing consisting of coordination (HN(1) and HN(2) interneurons) and osciliatory interneurons (HN(3) and HN(4) interneurons).

The firing pattern of interneurons of core CPG is also bilaterally asymmetric like HE neurons with appropriate side to side coordination.

The asymmetry of firing pattern is not permanent as there are regular side to side switches in CPG network as peristaltic and synchronous pattern in HN underlie changes in both motor pattern and rhythmic constriction pattern in heart tubes.

The switches in coordination is mediated by HN(5) switch interneuron which link the timing of network to middle premotor neurons by bilateral inhibitory connections; only one of the pair of interneurons rhythmically active at a time and other is silent.

The premotor interneurons and motor neurons on one side of the active switch interneurons are coordinated synchronously while those on other side of silent switch interneurons are coordinated peristaltically.

73
Q

What are STM and LTM? What are their methods of decay, their capacity and their mechanisms of loss? (6)

A

Short-term memory (STM) is keeping a small amount of information in your mind and making it accessible for a short time.

An example of STM is a new phone number being kept in mind until it is dialled and then immediately forgotten.

The capacity of STM is 7±2 items.

STM is also known as working memory (WM); STM and WM are often used interchangeably but are defined differently.

We are consciously aware of STM, meaning we can cognitively manipulate the contents of STM in our head and actively rehearse them.

The mechanism of loss in STM is decay, where information is immediately forgotten when it is no longer needed or relevant in that moment.

74
Q

What are STM and LTM? What are their methods of decay, their capacity and their mechanisms of loss? (4)

A

Long-term memory (LTM) refers to information from STM being transferred to long-term storage to produce enduring memories. We are consciously aware of the LTM contents.

The capacity of LTM is high.

The mechanism of loss of LTM memories is interference. In interference theory, the reason people forget is not that the memories are lost from storage but that other information gets in the way of what you want to remember.

This is due to the structure of the brain having overlapping representations of memories.

75
Q

Describe the working memory model (5)

A

The working memory model was made by Baddeley and Hitch (1974). The model consists of a central executive, which manipulates and maintains the contents of short-term memory (STM).

More specifically, the central executive drives the whole WM system.

It directs attention and processing to the different subsystems it is connected to: the visuospatial sketchpad and the phonological loop.

The phonological loop is where information is acoustically coded. It processes verbal and auditory information.

The visuospatial sketchpad is where information is stored and processed visually or spatially in the WM model.

76
Q

What research implies that the WM system is not unitary but has modality-specific components? (5)

A

Research has shown that in a task where letters are presented visually, participants make errors indicating that the information is acoustically coded.

For example, participants replace G with T (which sounds similar) rather than with Q (which looks similar).

Similarly, researchers found that recalling a word list is more difficult for similar-sounding words but not for semantically related words: participants might recall 'rice' instead of 'ice' (phonologically similar), but are not confused by 'frost' (semantically related).

Further research has also shown that repeating nonsense syllables disrupts phonological memory.

All the research discussed above indicates that the working memory (WM) system is not unitary but a multi-component system with modality-specific components, each of which can be damaged separately.

77
Q

What research has shown that WM components can be damaged separately? (3)

A

Research has shown that WM components can be damaged separately.

Research has shown that with damage to Brodmann areas 44 and 40, individuals cannot hold strings of words in their memory or mind and have a deficit in the rehearsal process of the phonological loop.

Research has also shown visuospatial sketchpad WM deficits: damage/lesions to the parieto-occipital region cause deficits in visuospatial WM. For instance, patients with such damage have difficulty memorising and repeating a sequence of blocks the experimenter has touched.

78
Q

What research has shown support for the dissociation of the visuospatial sketchpad and the phonological loop? (3)

A

There are changes in local cerebral blood flow (PET) in different areas of the brain when healthy participants perform verbal versus spatial WM tasks.

For auditory WM tasks: activity in infero-lateral areas.

For spatial WM tasks: activity in occipital, parietal and inferior frontal areas (mostly on the RIGHT of the brain).

79
Q

Explain how a neural property (ADP) interacts with a network property to generate the function of WM maintenance in the Lisman-Idiart model

First paragraph = explaining ADP (3)

A

The Lisman-Idiart model proposes that a neural mechanism called the ADP helps out in working memory (WM) maintenance.

ADP stands for afterdepolarisation. Depolarisation occurs when the membrane potential increases, making the neuron more likely to emit a new spike.

The ADP is a positive 'hump' in membrane potential that is produced after a spike is emitted in the model.

80
Q

Explain how a neural property (ADP) interacts with a network property to generate the function of WM maintenance in the Lisman-Idiart model

second paragraph = Lisman-Idiart network (5)

A

In the Lisman-Idiart model there is a network in which the activity of prefrontal neurons is calculated by an equation for the rate of change of membrane potential over time.

In this equation there are terms such as: 1) VOSC, a sine function that causes a background fluctuation in membrane potential, and 2) Vinh, a term added every time an action potential is fired by a presynaptic neuron.

VOSC provides excitatory oscillatory input to the neurons. Vinh implements a feedback inhibition circuit.

When a network neuron fires a spike, the signal is eventually transmitted to the Vinh neuron, which inhibits all the neurons, including itself.

The model assumes that the firing of each neuron in the network represents one item in WM.

81
Q

Explain how a neural property (ADP) interacts with a network property to generate the function of WM maintenance in the Lisman-Idiart model

third paragraph = Lisman-Idiart network + ADP (7)

A

How the ADP and the neuronal network of the Lisman and Idiart model work together to implement active rehearsal of content in WM is explained below.

There are background oscillations in the Lisman and Idiart model's network.

If we present a letter G for someone to remember, then the neurons in the network can quickly create synaptic connections with the neuron in the phonological loop which encodes and represents the letter 'G'.

The part of the phonological loop that represents 'G' will make a particular neuron in the network fire an action potential.

This neuron will then inhibit itself and all the other neurons in the network via the feedback inhibition circuit.

Then, once the next peak of the oscillation comes around, the ADP raises the membrane potential high enough for the neuron that represents the letter 'G' to fire again.

The ADP and the oscillatory input maintain the spiking of that neuron.
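A toy sketch of this maintenance loop (illustrative only: it abstracts each oscillation cycle to a single membrane-potential sample at the cycle's peak, and all values — peak height, ADP size, threshold, per-cycle ADP decay — are assumed, not the paper's parameters):

```python
def lisman_idiart_demo(cycles=10, osc_peak=0.5, adp_amp=0.6,
                       adp_decay=0.7, v_th=1.0, cued=True):
    """At each oscillation peak the membrane potential is the peak of the
    background oscillation plus any residual ADP from the last spike.
    A neuron cued once (one input spike) re-fires every cycle, because
    each spike regenerates the ADP; an uncued neuron stays subthreshold."""
    adp = adp_amp if cued else 0.0   # cue = one initial spike left an ADP
    fired = []
    for _ in range(cycles):
        v = osc_peak + adp           # membrane potential at this cycle's peak
        if v >= v_th:
            fired.append(True)
            adp = adp_amp            # the new spike regenerates the ADP
        else:
            fired.append(False)
            adp *= adp_decay         # an unrefreshed ADP decays away
    return fired
```

With these values a cued neuron fires on every cycle and an uncued neuron never reaches threshold, which is exactly the maintenance property described above.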

82
Q

Why is the HH model not used in the Lisman-Idiart model? (5)

A

The Lisman-Idiart model does not use a Hodgkin-Huxley model, since the authors want to see how the neural mechanism of the ADP (afterdepolarisation) helps to carry out working memory (WM) maintenance.

They do not need to think about which ion channel is in charge of the ADP, as they only want to model the ADP's effect on membrane potential over time.

The model does not give an explanation of the ADP; it uses the ADP to explain a higher-level phenomenon.

The aim of Lisman and Idiart's model is to demonstrate what the ADP can be used for and its effect on membrane potential.

Therefore, a Hodgkin-Huxley model is not needed, as it would not fulfil Lisman and Idiart's aim of showing how the ADP functions in terms of a network of neurons.

83
Q

Advantages and disadvantages of the Lisman-Idiart model of WM (2)

A

The model demonstrates how neuronal properties (the ADP) and network structure (feedback inhibition and oscillatory input) work together to implement a function.

A criticism is that the authors chose the parameters (e.g., the oscillation frequency) so as to make the capacity come out at the number 8.

84
Q

What are the two functions of the hippocampus and what research supports those functions? = beginning

A

The hippocampus and its nearby areas are shown to be important for memory and spatial cognition/orientation.

85
Q

What are the two functions of the hippocampus and what research supports those functions? = memory (3)

A

The first hint in research that the hippocampus and nearby areas are important for memory formation came from the famous study of HM, an individual who had severe epilepsy that was drug resistant.

As a last resort, the doctors cut out both hippocampi (since the hippocampus is typically the source of epileptic seizures) to stop HM's epileptic fits.

They found that HM could not form any new memories.

86
Q

What are the two functions of the hippocampus and what research supports those functions? = spatial cognition (6)

A

A parallel stream of animal research using the Morris water maze revealed that the hippocampus is fundamental for spatial navigation/cognition.

Spatial cognition means the knowledge and processes used to represent and navigate in and through space.

The Morris water maze involves placing rodents in a pool of opaque water that contains a hidden escape platform.

The hidden escape platform is just below the surface of the water, at a fixed location in the maze.

In the maze, rodents must search to locate the hidden platform.

Morris et al. (1982) found that rats with no lesions to the hippocampus (control rats) took less time to swim to the platform, no matter where in the maze they were released, compared with rats that had bilateral lesions to the hippocampus.

87
Q

What does the single-cell recording methodology involve? (4)

A

In single-cell recordings, microdrives with electrodes are chronically implanted in a rodent's brain.

Once the animal has recovered from the surgery, the rodent is allowed to roam freely in a box with a visual cue (e.g., a white cue card) on the wall, which helps the animal orient itself, or to perform simple tasks around the box.

The electrodes in the animal are advanced slowly each day by the experimenter until they record spikes (i.e., action potentials).

The single-cell recording technique allows us to know what single neurons are doing in a behaving animal.

88
Q

Where are head direction cells found in the brain? (2)

A

Head direction cells are predominantly found in a large network of brain areas within the Papez circuit (Taube, 2007), such as: 1) the entorhinal cortex and 2) the anterior dorsal nucleus of the thalamus.

Head direction cells are also found in brain areas outside the Papez circuit, such as: 1) the lateral dorsal thalamus, 2) the dorsal striatum and 3) the medial precentral cortex.

89
Q

Where are place cells found in the brain?

A

Place cells are found in the subiculum and entorhinal cortex in the brain (Taube, 2007).

90
Q

Evidence of head direction cells and place cells from single-cell recordings (4)

A

Taube et al. (1990) produced a graph from single-cell recordings that is integrated over time.

The animal runs around the box for 10-20 minutes while the experimenter tracks which way the animal's head is pointing and records the firing rates of head-direction (HD) neurons.

In the graph, a specific neuron emits few spikes at 90 degrees, but every time the animal's head points to 200 degrees during those 20 minutes, that specific HD neuron vigorously emits many more spikes (a preferred firing direction of 200 degrees).

For a place-cell graph, you let the animal run around the box, and every time a specific place cell fires at a specific location you plot a red dot. You accumulate this data over 20 minutes of the rodent running around the box.

91
Q

What are the two types of cells that are important for spatial cognition, and what are their receptive fields? (5)

A

The two types of cells that are important for spatial cognition are: 1) head direction (HD) cells and 2) place cells.

Receptive fields are areas in which stimulation leads to a response of a specific sensory neuron.

Different place cells and HD cells are distinguished by their different receptive fields.

Place cells have a receptive field for spatial location, which means a particular place neuron will fire most vigorously at a particular location in the environment.

HD cells have a receptive field for head orientation, which means there is a specific head orientation at which a specific HD cell fires maximally.

92
Q

What are the three uses of head-direction cells? (3)

A

The first use of head-direction cells is orientation, which is very important for navigation.

They are also used for grasping and pointing: reorienting yourself to perform an action such as pointing somewhere in a specific direction.

Finally, head-direction cells are used to define a point of view (human spatial cognition).

93
Q

What are the defining properties of head-direction cells and hypotheses from it? (7)

A

From manipulations of single-cell recordings, experiments found three defining properties of head-direction (HD) cells which is: 1) HD cells depend on vestibular input, 2) cue cards control angular turning and, 3) HD drift in darkness meaning that without any visual input the animal loses its sense of orientation.

Stackman and Taube found that HD cells depend on vestibular input (i.e., changes in direction, movement and position of the head), as neurotoxic lesions of the vestibular labyrinth abolished the HD cell signal for up to three months post-lesion.

Taube also demonstrated cue cards control angular turning by recording HD cells in a cylinder which contains a prominent visual cue (e.g., white cue card) attached to the box.

They rotated this important visual landmark which leads to a corresponding shift in preferred firing direction of HD cells.

Thus, HD cells are controlled by landmarks.

Mizumori and Williams found that HD cells drift in darkness: when rats are either blindfolded or placed in complete darkness, the preferred direction of HD cells becomes less stable (disrupted) and begins to drift.

The hypothesis from these three defining properties is that HD cells are used for navigation and that, when the animal has lost its way, its HD cells have lost their stable directional tuning and begun to drift.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
94
Q

How can we correct for drift in head-direction cells from visual cells in visual cortex? (3)

A

We can correct the drift we see in head-direction (HD) cells when the animal has lost its way in the dark by receiving feedback from visual cues.

Visual cells somewhere in the visual cortex provide this feedback (i.e., synaptic inputs at particular orientations to specific HD cells).

When the animal sees a cue card ahead in the box, a specific visual cell becomes active and gives strong synaptic input to the appropriate, correct HD cells, which allows the animal to orient itself.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
95
Q

How can you get a tuning curve of a single head-direction (HD) cell? (3)

A

As the animal moves around in a box, our chosen neuron fires at varying rates depending on the heading.

We sum the activity of the neuron at each heading and divide by the time the animal spent at that heading to get the tuning curve.

The tuning curve plots the firing rate of a single HD neuron as a function of heading, with data accumulated over time.
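
As a rough illustration, here is how such a tuning curve could be computed from synthetic data (the cell's preferred direction, peak rate, recording length, and bin width are all made-up parameters, not from the cards):

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.1                                   # 100 ms time bins, ~20 min of data
heading = rng.uniform(0, 360, size=12000)  # heading sample at each time bin

# Hypothetical HD cell: von Mises tuning with preferred direction 200 degrees.
pref = 200.0
rate = 1.0 + 29.0 * np.exp(8.0 * (np.cos(np.deg2rad(heading - pref)) - 1.0))
spikes = rng.poisson(rate * dt)            # spike counts per time bin

# Tuning curve: total spikes per heading bin / total time spent in that bin.
edges = np.arange(0, 370, 10)
spike_sum, _ = np.histogram(heading, bins=edges, weights=spikes)
time_sum, _ = np.histogram(heading, bins=edges)
tuning = spike_sum / (time_sum * dt)       # firing rate (Hz) per 10-degree bin

peak_deg = edges[np.argmax(tuning)] + 5    # bin centre of the estimated peak
```

The estimated peak lands near the cell's true preferred direction once enough time has been accumulated in every heading bin.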

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
96
Q

What happens to tuning curve of single HD neuron when animal is turning their head to another direction? (3)

A

There is a single tuning curve for a single HD neuron that fires maximally when the animal's heading is at 200 degrees.

However, if the animal's heading changes, say to 90 degrees, the cell tuned to 200 degrees becomes less active and contributes less, while the HD cell that fires maximally at 90 degrees takes over.

Thus, as an animal turns, the pattern of activity across HD cells shifts with heading, so different HD cells make smaller or larger contributions.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
97
Q

How can the HD ring CAN (continuous attractor network) sustain activity when the head is still or even in darkness? (4)

A

Head-direction (HD) cells still fire maximally at their preferred firing direction when the head is held still at that orientation, and their firing is maintained briefly in darkness, when the individual receives no sensory information.

This is done via the short-range excitatory synaptic connections and long-range inhibitory connections of the ring of HD cells.

The HD cell that is most active at a certain direction sustains its activity, even in darkness, by exciting itself and the neighbouring HD cells via short-range excitatory (recurrent) synaptic connections, while long-range inhibitory synaptic connections to distant HD cells suppress their activity.

There is close-range excitation and long-range inhibition for each HD neuron in the ring. Thus, symmetric short-range excitation and long-range inhibition give sustained activity of the HD cells.
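
A minimal sketch of this idea with rate-coded neurons (all parameters here are hypothetical, chosen only so that a bump of activity cued at 200 degrees persists with no further input):

```python
import numpy as np

N = 100                                    # rate-coded HD neurons around the ring
theta = np.linspace(0.0, 2 * np.pi, N, endpoint=False)

# Symmetric short-range excitation: Gaussian in wrap-around (ring) distance.
d = np.abs(theta[:, None] - theta[None, :])
d = np.minimum(d, 2 * np.pi - d)
sigma = 0.3
W = np.exp(-d ** 2 / (2 * sigma ** 2))

k_inh = 0.5                                # long-range (global) inhibition strength

# Briefly cue the network at 200 degrees, then remove all input ("darkness").
d0 = np.abs(theta - np.deg2rad(200.0))
d0 = np.minimum(d0, 2 * np.pi - d0)
r = np.exp(-d0 ** 2 / (2 * sigma ** 2))
r /= r.sum()

for _ in range(100):                       # run with no external input at all
    drive = W @ r - k_inh * r.sum()        # local excitation minus global inhibition
    r = np.maximum(drive, 0.0)             # firing rates cannot be negative
    r /= r.sum()                           # total activity held fixed

peak_deg = np.rad2deg(theta[np.argmax(r)]) # bump stays near the cued direction
```

Because the connections are symmetric, the bump neither drifts nor dies: it settles into a stable localized packet at the cued orientation.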

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
98
Q

How can you turn your head in the HD ring CAN (continuous attractor network)? (7)

A

To turn your head to another direction, the activity pattern of the ring of HD cells needs to be shifted along the line of neurons.

The active HD neurons have offset inhibition in the direction opposite to a turn and offset excitation in the direction of the turn.

These connections will be active only when the head is turning (dependent on velocity).

These connections are doubled: one set for clockwise and one for counter-clockwise turns.

To turn clockwise, nearby HD cells to the right will be excited along the line of neurons.

To turn anti-clockwise, nearby HD cells to the left will be excited in line of neurons in HD ring.

Thus, velocity-dependent asymmetric excitation and inhibition give the capability to turn the head and shift the pattern of activity across the ring of HD neurons.
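
A sketch of the shift mechanism under hypothetical parameters: blending the symmetric weights with a copy offset by one neuron, gated by a made-up velocity signal v, drags the activity bump around the ring:

```python
import numpy as np

N = 100                                    # rate-coded HD neurons around the ring
theta = np.linspace(0.0, 2 * np.pi, N, endpoint=False)

# Symmetric short-range excitation (Gaussian in ring distance), as in a
# standard ring attractor; parameters are hypothetical.
d = np.abs(theta[:, None] - theta[None, :])
d = np.minimum(d, 2 * np.pi - d)
sigma = 0.3
W = np.exp(-d ** 2 / (2 * sigma ** 2))
W_turn = np.roll(W, 1, axis=1)             # same weights offset by one neuron
k_inh = 0.5                                # global inhibition strength

# Start with a bump of activity at 200 degrees.
d0 = np.abs(theta - np.deg2rad(200.0))
d0 = np.minimum(d0, 2 * np.pi - d0)
r = np.exp(-d0 ** 2 / (2 * sigma ** 2))
r /= r.sum()

v = 0.5                                    # hypothetical turn (velocity) signal
for _ in range(30):
    # While "turning", part of the drive flows through the offset connections,
    # which pulls the bump steadily towards lower angles.
    drive = (1 - v) * (W @ r) + v * (W_turn @ r) - k_inh * r.sum()
    r = np.maximum(drive, 0.0)
    r /= r.sum()

peak_deg = np.rad2deg(theta[np.argmax(r)])
```

With v = 0 the bump stays put; with the gating signal on, the asymmetric component moves the packet around the ring while it remains localized.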

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
99
Q

Two possible behaviours when giving external stimulus to HD ring network? (4)

A

The two possible behaviours when giving an external stimulus to the HD ring in Zhang's (1996) HD ring continuous attractor network (CAN) are: 1) shift and 2) reset.

Zhang found that the internal direction maintained by the HD cell network is calibrated by external inputs from a local-view detector.

If the activity of the HD cell network is maintained at 180 degrees but the heading is actually 200 degrees, then external input from the local-view detector induces a shift in activity towards 200 degrees.

Reset occurs when the activity of the HD cells in the network is too far from the actual heading; the external input from the local-view detector then produces a new estimate, which resets the heading.

100
Q

How is HD ring network an example of CAN? (3)

A

The HD ring network is an example of a continuous attractor network because you can place the activity anywhere you want along the line of HD neurons.

The activity can be shifted and come to rest at a new position.

The HD ring network can sustain its activity because the connectivity pattern is the same for each neuron, so the activity is maintained when the heading is held at a certain orientation.

101
Q

What happens if precise connections are perturbed in HD ring network? (3)

A

The activity of the HD cells converges to particular locations and, in the end, represents only a subset of all possible orientations.

The continuous attractor becomes a discrete attractor.

Certain HD cells attract the activity bump, forming discrete basins of attraction (around the circle).

102
Q

What is continuous attractor and discrete attractor? (3)

A
  • Continuous attractor: symmetric connections maintain the activity packet in place; we can place the ball anywhere.
  • Discrete attractor: given a bit of time, the ball settles in one of several valleys (basins).
  • Locations between valleys are unstable. Is this what happens to HD cells during ageing due to neuron loss?
103
Q

The advantages of the HD ring CAN (Continuous attractor network) - (4)

A

The first advantage of the HD ring CAN is that it is the best model of HD cells we have.

It allows us to explain how internal sense of direction is coded and maintained.

It does not make use of spikes, let alone ion channels (assuming all the information about head direction is encoded in firing rate of the network).

It is also a good case study of rate-coded neurons.

104
Q

The disadvantages of the HD ring CAN (Continuous attractor network)

A

The disadvantage of the HD ring CAN is that it does not answer how the brain learns and maintains such precise connections while neurons die off and are affected by biological noise (e.g., temperature).

105
Q

Why associative memory models? (5)

A
  1. Introduce learning
  2. Introduce fundamental ideas about associating patterns of neural activity
  3. Associating patterns or sequences of patterns is needed for episodic memory
  4. The hippocampal anatomy maps very well onto these ideas.
106
Q

Describe the Hopfield (1982) Associative Memory Network: properties and assumptions (6)

A

The Hopfield (1982) associative memory network uses standard artificial neurons with no dynamics.

The representation of the network is shown below, which demonstrates that all neurons are connected with each other and that neuron Si is connected to Sj with weight wij.

The assumption of the Hopfield associative memory network is that it is a fully connected network with symmetric connections (wij = wji).

Symmetric connections means that weight going in one direction is the same as weight going in another direction.

The properties of the Hopfield associative memory network are: simple connectionist neurons, no dynamics, an update schedule that we impose, a sign function as the transfer function, and units that can be active (Si = 1) or inactive (Si = -1).

The sign function used as the transfer function means that if a value is below 0 it is set to -1, and if it is above 0 it is set to 1.

107
Q

What does it mean when Hebbian learning says that: “neurons that fire together wire together” (2)

A

This means that if the sender and receiver neuron are both active then the sender likely contributed to making that receiver neuron fire!

Thus, it strengthens the connections between the sender and receiver neuron; that is their weight increases.

108
Q

How do we change the weights of synaptic connections mathematically in Hopfield Associative Memory Network? (3)

A

In Hebbian learning, we change the weights of the synaptic connections in the Hopfield associative network mathematically by taking a weight (wij) and adding to it the product of the activities of the pre- and post-synaptic neurons, multiplied by a very tiny number, epsilon: wij ← wij + ε·Si·Sj.

Epsilon is a very tiny number because we do not want to change the weights in the network too quickly.

Additionally, in most cases you will want to learn something new incrementally (i.e., have multiple presentations of two stimuli together to associate them).
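
As a minimal sketch, the update rule in code (the pattern and the epsilon value are made up):

```python
import numpy as np

eps = 0.01                       # small learning rate: change weights slowly
s = np.array([1, -1, 1, 1])      # hypothetical pattern of +/-1 activations
W = np.zeros((4, 4))

# Hebbian update: w_ij <- w_ij + eps * s_i * s_j for every pair of neurons.
W += eps * np.outer(s, s)
np.fill_diagonal(W, 0.0)         # no self-connections
```

Units with the same sign (both +1 or both -1) get a small positive increment, while a mixed pair gets a small negative one.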

109
Q

How does the Hopfield Network Learn? (Imposing Pattern and Learning Rule) - (10)

A

The Hopfield Network learns by imposing a pattern we want to learn then letting the learning rule act.

Imposing a pattern means that we clamp the activity of a subset of neurons for one pattern and let the Hebbian learning rule act to change the synaptic weights in the network, by taking the weight between two neurons and adding to it the product of the activities of the pre- and post-synaptic neurons, multiplied by a tiny number (epsilon).

When imposing a pattern, if both neurons are inactive (i.e., both -1), the weight between them increases and their connection is strengthened. If both neurons are active (i.e., both 1), the weight between them is likewise increased and their connection is strengthened.

However, if the activities of the two neurons are mixed (i.e., one is inactive [-1] and one is active [1]), the weight goes down, which may lead to pruning of the synaptic connection between them.

These patterns of activation are learned as stable states under the rules for updating activations.

Stable states mean the update rule produces no more changes in the active neurons of the network.

Thus, when a pattern of activation does not change anymore, we say a stable state has been reached.

The update rule can be asynchronous, where one unit of the network is updated at a time (in random or pre-defined order), or synchronous, where all units are updated at the same time.

Many patterns can be learned in the same network, but memory capacity is limited to ~0.14N (where N is the number of neurons in the Hopfield network).

The memory learned in a Hopfield network is content addressable: the network can perform pattern completion from a partial cue.
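
A minimal sketch of learning and recall in such a network (pattern count, network size, and the number of corrupted bits are arbitrary choices, kept well below the ~0.14N capacity):

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 100, 3                           # 3 patterns: well below ~0.14N capacity
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian learning: sum of outer products, no self-connections.
W = sum(np.outer(p, p) for p in patterns) / N
np.fill_diagonal(W, 0.0)

# Recall: start from a corrupted cue and update units asynchronously
# with the sign transfer function until a stable state is reached.
s = patterns[0].copy()
flip = rng.choice(N, size=5, replace=False)
s[flip] *= -1                           # partial / corrupted cue

changed = True
while changed:                          # stable state = no unit changes
    changed = False
    for i in range(N):
        new = 1 if W[i] @ s >= 0 else -1
        if new != s[i]:
            s[i] = new
            changed = True
```

Starting close to a stored pattern, the updates fall into that pattern's basin of attraction and recover it exactly.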

110
Q

What does it mean that memory of Hopfield network is content addressable, performing a pattern completion of a partial cue? (2)

A

Content addressable simply means that part of the content of the memory is sufficient to find (address) the complete memory.

This means the network can complete a learned pattern of activation from partial input.

111
Q

Give an example that the Hopfield Network does not work in isolation (4)

A

Say we have a memory of "I saw a magenta turtle that was squeaking".

This would trigger the activity of the neurons in the visual cortex that represent a magenta turtle as well as neurons in the auditory cortex that represent the squeak sound.

There will be direct connections of neurons in Hopfield associative network to other neurons.

Pattern completion will continue in the Hopfield associative memory store and also extend to reactivate the neurons in the sensory cortices that were active when you first memorised the thing (e.g., "the magenta turtle that was squeaking").

112
Q

Why is Hopfield Network not working in isolation a toy model? (3)

A

This is because the hippocampus has extensive connections to virtually all association areas (polymodal) in the neocortex.

But there are not necessarily direct connections to early (unimodal) sensory cortices.

So the sketch is a severe simplification.

113
Q

Why are Hopfield network memories discrete attractors? (2)

A

They are discrete attractors because with a continuous attractor there would easily be interference between the different patterns of activation in the network.

Thus, Hopfield network memories are discrete attractors as we want to separate our memories.

114
Q

How does the Hopfield Network recall a memory?

A

Once learning is done (see the paragraph on learning), the network can perform recall: start from a pattern similar to a memorised pattern of activation and change activations according to the sign of each unit's input (updating until no changes occur) to recover the original pattern.

115
Q

Explain how Hopfield associative network learns and recalls , performs pattern completion, with an example

Suppose we want a memorise a cow mooing = first paragraph (7)

A

The Hopfield network learns by imposing the pattern of activation we want to learn (i.e., the purple cow mooing) and then letting the learning rule act.

Imposing a pattern means we clamp the activity of a subset of neurons for the pattern "the purple cow was mooing" and let the Hebbian learning rule act to change the synaptic weights in the network.

Hebbian learning does this by taking the weight between two neurons and adding to it the product of the activities of the pre- and post-synaptic neurons, multiplied by a tiny number (epsilon).

When imposing the pattern, if both neurons are not active (i.e., both -1) then the weight between these neurons increases and their connection is strengthened.

If both neurons are active (i.e., both 1), the weight between them is increased and the connection between them is strengthened.

However, if the activities of the two neurons are mixed (i.e., one is inactive [-1] and one is active [1]), the weight goes down, which may lead to pruning of the synaptic connection between them.

The learning rule will act until there are no more changes in the set of active neurons.

116
Q

Explain how Hopfield associative network learns and recalls , performs pattern completion, with an example

Suppose we want a memorise a cow mooing = second paragraph (2)

A

The Hopfield network does not work in isolation, as there are direct connections between neurons in the Hopfield associative network and other neurons.

For example, memorising seeing a purple cow that was mooing in the Hopfield network also activates neurons in the visual cortex that represent a purple cow as well as neurons in the auditory cortex that represent the moo sound.

117
Q

Explain how Hopfield associative network learns and recalls , performs pattern completion, with an example

Suppose we want a memorise a cow mooing = third paragraph (6)

A

The memory learned in a Hopfield network is content addressable and can perform pattern completion from a partial cue, which recalls the complete memory.

Content addressable simply means that part of the content of the memory is sufficient to find (address) the complete memory.

Recall involves starting with a pattern similar to the memorised pattern of activation and changing activations according to the sign of the input (updating until no changes occur) to recover the original pattern.

To recall the purple cow mooing, we give a partial cue of a cow mooing, which activates two (auditory) neurons; through pattern completion these two active neurons drive another neuron to become active, representing the visual appearance of the cow, making the associative memory complete.

Thus, we can recall a nearby (i.e., similar) pattern that “I saw a purple cow that was mooing”.

Pattern completion will continue in the Hopfield associative memory store and also extend to reactivate the neurons in the sensory cortices that were active when you first memorised the thing (e.g., "the purple cow that was mooing").

118
Q

There are similarities between mice, monkeys and humans in the hippocampus, as

A

The DG projects massively to CA3 (not vice versa) and CA3 projects massively to CA1 (not vice versa).

119
Q

What is the primary information processing pathway in hippocampus?

A

It runs from the cortex into the DG, then the DG sends to CA3, CA3 sends to CA1, and CA1 sends to the subiculum.

120
Q

What is the physiology of hippocampus in terms of theta oscillations and ripples? (2)

A

If we stick an electrode in the hippocampus during rest and sleep, we see ripples (high-frequency bouts of activity) in the membrane potential, which could help us consolidate and replay memories.

If we stick an electrode in the hippocampus while we are focused, encoding and retrieving memories, we see slow theta oscillations in the membrane potential.

121
Q

What is the physiology of the hippocampus in terms of cells? (6)

A

If we look at individual cells in hippocampus proper we find cells such as place cells that fire maximally in response to specific locations in environments.

If we look at individual cells in the entorhinal cortex, we see grid cells.

Grid cells are neurons that respond when an animal is at specific locations in an environment such that the responsive locations form a grid-like pattern; these cells are important for navigation and play an important role in memory.

If we look at individual cells in hippocampus we find head-direction cells.

If we look at individual cells in hippocampus we find boundary cells which respond to a presence of an environmental boundary at a specific distance.

They appear to be related to HD cells. There are specific boundary cells that fire for boundaries to the east, west, south, etc.

122
Q

What is the behaviour of the hippocampus?

A

The hippocampus is important for spatial memory (Morris water maze; taxi drivers have an enlarged hippocampus from memorising all the streets).

123
Q

What research has shown that damage to grid cells underlies deficits in navigation? (3)

A

In animal models of Alzheimer's (a disease that can affect the hippocampus), we can introduce damage similar to what is seen in humans with Alzheimer's and observe the activity of grid cells.

This showed that the grid cells' expression of spatial variables (where the animal is) is affected in animal models of Alzheimer's disease.

This suggests that damage to these cells underlies deficits in navigation.

124
Q

Case of HM had deficits in spatial memory since…(5):

A

The case of HM had surgery to remove his hippocampus to treat his severe epilepsy.

Subsequently, he could not form new memories.

According to Corkin (2013), HM had deficits in spatial memory as he forgot locations of items and could not find his way home.

He also could not associate the what, where and when, consistent with damage to associative memory stores (of the kind modelled by Hopfield networks and memory matrices).

Thus, he could not link elements together of what, where and when and subsequently could not recall them.

125
Q

What does the hippocampal circuit consist of and what does each part contain? (4)

A

The hippocampal circuit consists of: 1) mossy fibres, 2) Schaffer and recurrent collaterals, 3) the perforant pathway and, 4) the direct CA1 pathway.

The mossy fibres have connections from DG to CA3.

The Schaffer and recurrent collaterals connect CA3 cells among each other and project to CA1, which contains place cells.

The perforant pathway connects to cells in the DG and also makes direct connections to pyramidal neurons in CA3 in the hippocampus.

126
Q

What is the tri-synaptic loop in the hippocampus?

A

It is connections from EC to DG to CA3 to CA1.

127
Q

What function does DG perform in hippocampus (5)

A

In addition to being able to impose activity on CA3 and override its current connections, DG can also perform a function called pattern separation.

Pattern separation is the opposite of pattern completion.

The DG performs pattern separation: if two incoming neural patterns are fairly similar (i.e., a large proportion of the same neurons are active at the same time), the DG makes these patterns dissimilar.

However, the overall function of DG is poorly understood since it does not fire often.

The DG is also the location of adult neurogenesis, where new neurons are born.

128
Q

What multiple roles do the CA3-CA1 connections play? = first role (2)

A

The first role is the association of information-rich (EC to CA1) and information-poor (compressed, CA3 to CA1) streams.

Therefore, at the time of encoding, the memory in CA3 is compressed, yet you can still retrieve a lot of detail: the direct connection from EC to CA1 supports a hetero-association that produces a more information-rich representation, which is then associated with CA3.

129
Q

What multiple roles do the CA3-CA1 connections play? = second role and example (3)

A

Different information enters the hippocampal circuit at different dorso-ventral levels, but CA3 extends across the entire longitudinal axis of the hippocampus and can associate the information from the different levels.

For example, at one dorso-ventral level information about affective state may enter (e.g., fear).

At another dorso-ventral level, spatial information may enter (e.g.. where was I in the environment) and CA3 synapses can associate the affective and spatial information together.

130
Q

Auto association in ___ and hetero association in ___ and ___ (2)

A

CA3
CA3 and CA1

131
Q

How can the hippocampal circuit act as an associative memory network? (9)

A

1) Perforant path synapses in the DG form new representations of input from the EC

2) Mossy fibres from DG to CA3 induce a sparse pattern of activity for auto-associative storage

3) Excitatory recurrent connections in CA3 mediate auto-associative storage and recall of these patterns

4) Schaffer collaterals from region CA3 to CA1 mediate hetero-associative storage and recall of associations between activity patterns in CA3 and activity induced in CA1 by entorhinal input

5, 6) Perforant path inputs to region CA1 form new representations of entorhinal cortex input for comparison/association with recall from CA3

7,8) The comparison of recall activity in region CA3 with direct input to region CA1 regulates cholinergic modulation. Mismatch between recall and input increases ACh, match decreases ACh

9) The theta rhythm may time encoding vs retrieval modes

132
Q

Memories do not depend on the hippocampus forever (5)

A

HM had intact old childhood memories, but memories from 5-10 years before the lesion were lost, and he forgot the death of his favourite uncle in 1950.

HM had retrograde amnesia.

Retrograde amnesia is loss of memory access to events that occurred or information learned before an injury or the onset of disease.

The temporal gradient of retrograde amnesia says that recall for events immediately leading up to onset is poor, but earlier memories remain intact.

The temporal gradient of retrograde amnesia shows that memories depend on the hippocampus only temporarily, and suggests that hippocampal memories are consolidated in the neocortex over time, therefore becoming hippocampus-independent.

133
Q

How do we do fast and slow learning in hippocampus (3)

A

There is fast and slow learning in memory consolidation in hippocampus.

If we experience an event and memorise it, the hippocampal recurrent collaterals support rapid (one-shot) associative learning.

The hippocampus then trains slow learning in the neocortex, which gradually updates its weights.

134
Q

How do we do fast and slow learning in hippocampus = model (2)

A

Form strong connections in CA3 to be able to pattern complete to cortex

Over time, the connections in the hippocampus will decay away once strong connections have been established in the cortex.

135
Q

Issues of fast and slow learning memory consolidation in hippocampus (4)

A

The issues of fast and slow learning in memory consolidation are:

How exactly is this rehearsal happening?

And how fast is this transfer (one night or 20 years)?

The model of memory consolidation and how it happens is a hot topic and an ongoing debate.

136
Q

How are place cells akin to the cells in Hopfield network? (5)

A
  1. At a given location, a small number of them is active.
  2. Place cells in CA3 and other cells outside the hippocampus (e.g., coding for affective state/intention) associate together (i.e., bind) the inputs related to an event that are available from a given location.
  3. They provide a medium for pattern completion in CA3.
  4. At a given location, by forming these connections, an attractor is formed across CA3, consisting of spatial and non-spatial information.

  5. When position changes, a new set of place cells is active, and new associations are retrieved.

137
Q

Memory matrices are a different but similar version of the

A

Hopfield network

138
Q

Memory matrices are another toy model that helps us

A

think about memory

139
Q

Memory matrices as compared to Hopfield network: (2)

A

brings us closer to mapping associative memory function onto the hippocampus

A more realistic model compared to Hopfield

140
Q

A feed-forward single-layer neural network can be drawn as in hetero-association

A
141
Q

Wiring diagram of memory matrices has two sets of

A

neurons: circles (input neurons) and triangles (output neurons)

142
Q

Wiring diagram of memory matrices, red arrow

A

The red arrows are axons,
so the output of the blue (input) neurons travels along the axon

143
Q

Wiring diagram of memory matrices: black line

A

The black lines are the dendrites of the triangular neurons, which make contact with the axons of the circular input neurons

144
Q

Wiring diagram of memory matrices: black box

A

The black boxes are the synaptic connections that the axons make onto the dendrites of the receiver (output) neurons

145
Q

In memory matrices the activation and weights are either

A

0 or 1

146
Q

In wiring diagram of memory matrices, Y is and X is (2)

A

y is the receiver (output) neuron
x is the input neuron

147
Q

In memory matrices, we let the neurons learn by

A

changing each weight to the maximum of the current weight and the product of input and output activations: wij ← max(wij, xj·yi)

148
Q

We impose a pattern on x and also impose a pattern on y, so in the diagram:

A

So x1 is active,
x2 is not,
x3 is active,
x4 and x5 are not;

and y1, y3 and y4 are active.

149
Q

What happens when imposing patterns on x and y neurons in memory matrices according to our learning rule? - (3) ‘hetero-association’ example

A

x1 and y1 are both 1, and if the weight was 0 before, then 1 × 1 = 1 and 1 is the maximum value, so the weight is set to 1: the synapse has been learned.

If I then give an input at x1, it contributes to the activation of y1, y3 and y4.

The same holds for x3; the other synapses stay at 0.

150
Q

Heteroassociation - (3)

A

imposing patterns on x and y, the network learns input-output associations

one pattern (x) can generate another (y)

the input can generate the previous output based on having associated them together before with synaptic connections

151
Q

Memory matrices auto-association is where

A

we can also associate the pattern with itself using recurrent connections

152
Q

In memory matrices, a recurrent feedback neural network can be drawn as: (auto-association)

A
153
Q

In memory matrices auto-association diagram

A

each neuron has an axon that connects to the dendrites of itself as well as of neighbouring neurons that represent the input

154
Q

In memory matrices auto-association activation of x and weights:

A

xi = 0 or 1
wij = 0 or 1

155
Q

In memory matrices, we impose a pattern on the triangular neurons

what happens if you impose a different pattern? (2)

A

in which they will learn the synapses and maintain the input (i.e., the state)

If we impose a different pattern then they will learn a different set of connections

156
Q

Learning rule generally (also for auto-association/hetero-association etc.)

A
157
Q

Output of y in memory matrices (hetero-association/auto-association) is

A

Threshold, or divide by the number of active inputs, so that yi is 0 or 1

158
Q

Auto-association is when the network

A

learns to associate a pattern of activity with itself

159
Q

Detonator synapses in auto-association function (2)

A

These synapses are labelled detonator synapses

They are needed to impose a new pattern of activity to be learned while ignoring the feedback from the current pattern

160
Q

Memory matrices is similar to Hopfield auto-associative network but (3)

A

connection weights are 0/1 and don’t need to be symmetric

Connection weights only increase (with pre and post synaptic activity)

Neuron activation values are 0/1 (not -1/1)

161
Q

Memory matrices perform

A

pattern completion and error correction like the Hopfield network

162
Q

Memory matrices prone to

A

interference

163
Q

Diagram of memory matrices worked example labelled (3) hetero-association

A

Black squares mean the synapses are 1 and empty squares mean the synapses are 0

input neuron x and output neuron y

If I impose a pattern x1 and y1, the network learns the given connections; then I impose a pattern x2 and y2 and it learns other connections, etc.

The more patterns I present, the more synapses are turned on.

164
Q

Diagram of memory matrices worked example for pattern 1 in red

A
165
Q

Diagram of memory matrices worked example for pattern 2 in purple hetero-association

A
166
Q

We can take the diagram and treat it as a matrix of connection weights:

A
167
Q

We get correct recall, so we take x3/pattern 3 in hetero-association - (2)

A

we multiply it by the connection matrix and divide by 3 (the number of active cells)

Then we get the pattern 100110 in y3 neuron

168
Q

Pattern completion: we give x3 but mistakenly turn one bit off,

so x3 is 001001 instead of 001011 - (2)

hetero-association

A

if we multiply this by the connection matrix and divide by the number of active inputs (2), we still get the correct output:

100110 = y3
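
The worked example in code: x3 and y3 are the patterns from the cards above, while x2/y2 is a hypothetical second stored pair, added only to show that recall still works with more than one association stored:

```python
import numpy as np

# Patterns from the worked example (x3 -> y3); x2/y2 is a hypothetical
# second stored pair, not from the cards.
x3 = np.array([0, 0, 1, 0, 1, 1])
y3 = np.array([1, 0, 0, 1, 1, 0])
x2 = np.array([1, 1, 0, 0, 0, 1])
y2 = np.array([0, 1, 1, 0, 0, 1])

# Learning rule: w_ij = max(w_ij, x_j * y_i) -- weights only ever increase.
W = np.zeros((6, 6), dtype=int)
for x, y in [(x2, y2), (x3, y3)]:
    W = np.maximum(W, np.outer(y, x))

def recall(x):
    # Sum the weighted inputs, divide by the number of active inputs,
    # then threshold at 1 so each output unit is 0 or 1.
    return ((W @ x) / x.sum() >= 1).astype(int)

recall(x3)                            # full cue recovers y3 = 100110
cue = np.array([0, 0, 1, 0, 0, 1])    # partial cue: one active bit dropped
recall(cue)                           # pattern completion still gives y3
```

Dividing by the number of active inputs is what makes the partial cue work: every output unit of y3 still reaches the threshold of 1 even though one input bit is missing.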

169
Q

We also get saturation: adding another pattern, such as x4 (011100) giving y4, leads to interference

hetero-association

A

because we have filled the memory matrix with too many active synapses, so pattern 3 can no longer be recalled correctly (the recalled y3 is not the same as the actual y3)

170
Q

Pattern completion in an auto-associative network:

A
171
Q

What are the three main properties of place cells? = first property (2)

A

The first property of place cells is that the location of their place fields cannot be predicted from anatomical vicinity.

This means that anatomically close place cells can have place fields far from each other (rather than close to each other), or one place cell can be silent while the other is active in different environments.

172
Q

What are the three main properties of place cells? = second property (5)

A

The second place cell property is that: 1) firing is independent of rat’s orientation in open field, 2) firing is directional in narrow-armed mazes and, 3) firing is robust to the removal of sensory cues.

Firing is independent of the rat's orientation in the open field, since a place cell is active at the same location whether the animal is travelling west to east or east to west.

Firing of place cells is directional in a narrow-armed maze (a linear track where the animal runs back and forth), meaning the place cell fires only when the animal runs in one direction.

Research from Nakazawa showed that firing of place cells is robust to removal of sensory cues: if you have 4 sensory cues in the environment and remove 3 of the 4, place cells still fire at specific locations of the environment.

However, if you block NMDA receptors this is not the case, which means NMDA-receptor-dependent plasticity is important for the formation of place fields.

173
Q

What are the three main properties of place cells? = third property (4)

A

The third property of place cells is global remapping.

In a given environment, individual place cells fire at different locations or are silent, and together all active place cells are likely to cover the entire environment with their place fields.

Some place cells are silent in one environment but active in another.

If a place cell is active in both environments then it is so at different locations.

Global remapping is the process of changing firing locations or turning on/off between environments

174
Q

Why do we need to be careful when thinking of a set of active place cells akin to an extended attractor network? (3)

A

We can think of the set of active place cells in an environment as akin to an extended attractor network (the current context recruits a given set).

We need to be careful: if the set of active place cells is bound together by CA3 recurrence, it cannot be via CA3 place cells with small receptive fields.

It must be via CA3 cells that are active throughout the environment (i.e., not just CA3 place cells)

175
Q

What research showed that we need synaptic plasticity to have stable place fields? (2)

A

Kentros et al. (1998) showed that if you inject saline, the place field is stable.

Place field stability falls when NMDA receptors are blocked (i.e., an NMDAR antagonist is injected into the rat)

176
Q

What is the learning rule of Sharp’s model of place cells? = first paragraph (describing competitive learn rule model) - (5)

A

Sharp’s model of place cells uses a learning rule called the competitive learning rule.

There is a diagram of the competitive learning rule below which shows the following: 1) some input patterns, 2) some neurons will be active or not, 3) output neurons (i.e., receiver neurons), 4) initial random weights between each input and output neuron and, 5) strong inhibition between the output neurons (via lateral inhibition, though this is not modelled explicitly).

177
Q

What is the learning rule of Sharp’s model of place cells? = second paragraph (describing competitive learn rule) - (7)

A

For input pattern x, there will be initial random connections, and a particular output neuron Oi will happen to be the most active; we set it to 1 (maximum activity) and the other Ok neurons to 0.

After this, Hebbian learning is performed, where wij → wij + ε·Oi·xj, for all connections where the output neuron is active, updating the weights between neurons.

We do not do this where the activity of the output neuron is 0, since that would not change the weight.

Epsilon is a very small number, as we want small incremental changes to the weights in Hebbian learning.

The Hebbian learning equation means the activities of a given output and input neuron are multiplied by each other, scaled down by epsilon, and added to the pre-existing weight (wij).

The process of Hebbian learning is repeated with presentations of each input pattern.

The result of the competitive learning rule is that the output whose incoming weights are most similar to pattern xn wins, and its weights become still more similar as learning proceeds.
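A minimal sketch of this winner-take-all rule, with the weight normalisation described in the following cards folded in (the pattern values, learning rate and epoch count are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def competitive_learning(patterns, n_out, epsilon=0.1, epochs=100):
    """Winner-take-all competitive learning with weight normalisation."""
    n_in = patterns.shape[1]
    W = rng.random((n_out, n_in))          # initial random weights
    W /= W.sum(axis=1, keepdims=True)      # normalise: sum(w_i) = 1 per output
    for _ in range(epochs):
        for x in patterns:
            i = int(np.argmax(W @ x))      # most active output wins (O_i = 1)
            W[i] += epsilon * x            # Hebbian update for the winner only
            W[i] /= W[i].sum()             # renormalise so the sum stays 1
    return W
```

After training, each pattern recruits a different output neuron whose incoming weights have become most similar to that pattern.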

178
Q

What do we do before performing the learning rule in Sharp’s model of place cells and explain it (add to explanation of competitive learning rule) - (6)

A

Before updating the weights with the Hebbian learning equation in the competitive learning rule, we need to normalise the synaptic connections first.

Normalisation means that, for a given output neuron, the total sum of incoming connection (i.e., input) weights stays constant (e.g., sum[wi] = 1).

The constant in sum(wi) = … can be any fixed number.

We keep track of normalisation by dividing the weights after each learning step by the sum of total weights (sum[wi]), so the sum always stays 1.

For example, in this diagram the sum of all input weights onto output neuron O2 is always 1:

Normalisation process repeats with each presentation of input pattern.

179
Q

What happens if we did not have normalisation? (competitive learning rule) - (3)

A

If we did not have normalisation and continued learning for a long time, the weights would become very strong under Hebbian learning.

There must be a physiological limit in the brain on how strong a synaptic connection can be.

The output neuron can only fire at a given maximum rate.

180
Q

Example of keeping track of normalisation (add to explanation of competitive learning rule) - (6)

A

We keep track of normalisation by dividing the weights after each learning step by the sum of total weights (sum[wi]), so the sum always stays 1.

For instance, in the diagram, the weights after the first learning step are 2, 1, 8, 3 and 2.

Sum(wi) = 2 + 1 + 8 + 3 + 2 = 16 => update the weights to be 2/16, 1/16, 8/16, 3/16, 2/16, so the sum of all weights is still 1.

The strongest weights become stronger under wij → wij + ε·oi·xj.

For example, wij = 8/16 grows stronger faster at the expense of wij = 1/16.

The weakest weights get weaker.

181
Q

What does Sharp’s model of PC firing proposes? - (2)

A

The model proposes that at a given location, certain sensory inputs are very strong, and place cells are selected and mapped onto these.

At a different location, different sensory inputs are active and are mapped onto other place cells. In this way, we get location-specific input.

182
Q

Explain Sharp’s actual model of place cells set up and its competitive learning rule - (7)

A

Sharp’s model of place cells was developed in 1991 and uses standard artificial neurons with no dynamics.

There is a diagram of Sharp’s model of place cell firing simulation shown below:

There is a simulated rat (blue triangle) that moves around the circular box, with cues around the box (A, B, C etc…).

At each location, the simulated rat observes the distance to the visual cues and “observes” the direction of the cues relative to itself.

Then neural activity is propagated and learning updates are performed.

Then the simulated rat moves on, exploring the box and rotating a small amount.

As the simulated rat moves around, observing the distance and direction to the cues, there are two stages of competitive learning: 1) conjunctions of distance and direction to landmarks are learned (i.e., the input pattern becomes a representation of distance and direction to all cues) and, 2) conjunctions of the first-stage outputs yield place cells.

183
Q

Findings of Sharp model of place cells firing (4)

A

Simulated place cell firing is resistant to cue removal in the model: at each location, the activity of the PC output neurons is calculated, giving place fields at specific locations for cells 3, 9 and 13.

If you remove some cues, the place field is still there (exactly as seen experimentally), as a subset of the remaining cues is sufficient, once the correct connections are learned, to reactivate that cell.

Simulated place fields are omni-directional only after random exploration, and not following directed exploration.

This is shown as we get directionality in linear tracks. The model recorded the location and direction of the “simulated rat”.

184
Q

What research showed experimentally what was the input to place cells? (7)

A

O’Keefe and Burgess (1996) investigated what gave input to place cells. In their method, the rat foraged in a square box whose size was varied across dimensions, and they performed single-cell recordings of place cells.

They obtained single-cell recording data on place cells where they extended the box across one dimension, then another dimension, and then extended the box in both dimensions.

They found that some place cells they recorded exhibited ‘simple’ place field firing which was consistent across box sizes.

However, some place cells appeared to change their firing when the geometry of the box changed.

For example, the firing field of one place cell was close to the east wall.

When the box was extended in that dimension (towards the east), the firing field of that place cell changed and extended, so there were place cell spikes all along the east wall of the box.

Thus, this showed that geometry of space is an important factor (at least for a subset of place cells).

185
Q

What research showed hypothesised input to place cells? (10)

A

After O’Keefe and Burgess showed that the geometry of space is certainly an important factor (at least for a subset of place cells),

Hartley coined the term boundary vector cells (BVCs) to describe cells that respond to the presence of a wall.

BVCs had not been found experimentally at the time of Hartley et al.’s research, so they made theoretical predictions.

Hartley et al. hypothesised that each BVC is tuned to respond when a barrier lies at a specific distance from the rat in a particular allocentric direction.

That is, a BVC has a receptive field at a given distance and orientation relative to the environment.

As the animal approaches a specific barrier/boundary of the box, the wall covers more of the receptive field and the firing rate of that particular BVC goes up until it is maximal.

These BVCs are allocentric (world-centred), meaning the direction (of the barrier) is defined relative to some element of the environment (like an external cue) and does not depend on the animal’s orientation.

Hartley et al. (2000) proposed that BVC firing is determined by the product of two Gaussian functions defining distance and angle tuning to a boundary.

Shorter preferred distances have sharper tuning, reflecting less uncertainty in the inputs to BVCs.

They said that if you take a set of active BVCs with receptive fields in different directions, together they signal the distances to a set of boundaries
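The product-of-Gaussians tuning can be sketched as below; the width parameters (sigma0, beta, sigma_ang) are illustrative assumptions, not the published values:

```python
import numpy as np

def bvc_rate(d, theta, d_pref, phi_pref, sigma_ang=0.2, sigma0=0.08, beta=12.0):
    # Product of two Gaussians: one for distance tuning, one for
    # allocentric direction tuning. The radial width grows with the
    # preferred distance, so near-tuned BVCs are sharper (less
    # uncertainty in their inputs). Parameter values are made up.
    sigma_rad = sigma0 * (d_pref / beta + 1.0)
    radial = np.exp(-(d - d_pref) ** 2 / (2 * sigma_rad ** 2))
    angular = np.exp(-(theta - phi_pref) ** 2 / (2 * sigma_ang ** 2))
    return radial * angular
```

Firing is maximal when a boundary sits exactly at the preferred distance and allocentric direction, and falls off as either deviates.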

186
Q

Describe BVC model of place cell firing (6)

A

Hartley et al. (2000) theoretically predicted/hypothesised that boundary vector cells (BVC) may give (geometric) input to place cells (PC) and influence their activity.

In other words, Hartley hypothesised that we know where we are (PC) because we know how far we are from the walls (BVCs).

They created a BVC model of place cell firing which involves 3-4 BVCs giving weighted input to a PC; you sum BVC input activity * input weight and pass it through a transfer function to produce a place field.

When 3 BVCs are active, these different BVCs will be maximally active in response to barriers at specific directions and distances.

These three active BVCs give input to the PC, which is maximally active at this location.

We can then subtract the background activity and get place fields at particular locations.
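The summation-and-threshold step can be sketched as below; the example rates, equal weights and threshold value are made-up numbers:

```python
import numpy as np

def place_cell_rate(bvc_rates, weights, threshold):
    # sum BVC input activity * input weight, then a threshold-linear
    # transfer function subtracts the background and rectifies at zero
    net = float(np.dot(bvc_rates, weights))
    return max(net - threshold, 0.0)
```

The place cell fires only where all its BVC inputs are simultaneously strong; elsewhere the summed input stays below threshold and is cut to zero.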

187
Q

Describe how the BVC model of place cell firing modelled O’Keefe and Burgess’ data (4)

A

The BVC model of place cell firing modelled experimental data from O’Keefe and Burgess: they calculated the activity of place cells at all possible positions/locations (x, y) simultaneously, since one location does not influence the other.

We could do this location by location, but nothing stops us doing it at all locations at once.

Hartley modelled O’Keefe and Burgess (1996) using 2-4 BVCs oriented at right angles to one another; these plus thresholds are sufficient to fit (i.e., give the best match to) most of the PC fields in the data of O’Keefe and Burgess (1996).

They showed there was a close match between the place fields in the O’Keefe and Burgess data and Hartley’s model results.

188
Q

Can BVCs predict boundary-referenced firing of PCs across environments? (methods) - (4)

A
  1. Record some place cells in different shapes of environment (e.g., square, circle)
  2. Find the BVCs that best fit those place cells (e.g., in the square/circle environment)
  3. Use the model BVCs to predict firing in other environments that have different shapes (e.g., right-angled triangle)
  4. Experimentally record the same place cells in the predicted environments (e.g., right-angled triangle) and compare with the predictions from step 3
189
Q

How is BVC found and where are they discovered in the brain? - (3)

A

BVCs were discovered in the rodent subiculum (an area close to the hippocampus).

A BVC may have a receptive field for west: every time the rodent is near the west wall of a box, the cell fires.

It does not matter if there is a curtain around the box, the curtain is removed, or the shape of the environment is changed; the specific BVC will always fire at the west wall

192
Q

The disadvantages of the BVC and Sharp models of place cell firing - (6)

A

These models cannot account for the long-term dynamics of place fields in similar environments, nor the importance of synaptic plasticity in place field stability.

Additionally, they cannot explain global remapping of place cells.

Global remapping is the process of changing firing locations or turning on/off between environments.

Global remapping is akin to a change between attractors, but there are also slow experience-dependent changes to place fields.

The attractor is slowly deformed over time (i.e., the active representation of an environment changes gradually, not just by moving to a new environment).

This likely reflects synaptic plasticity and cannot be explained by a ‘fixed’ model (i.e., the BVC and Sharp models), leading to the development of the extended BVC model.

193
Q

Explain the BCM rule that is used in extended BVC model (7)

A

The extended BVC model uses a learning rule called the Bienenstock-Cooper-Munro (BCM) rule, which is a modification of Hebbian plasticity.

In the extended BVC model we update weights using the BCM rule, which requires the pre- and post-synaptic activity (xi and yi).

The weight decreases if post-synaptic activity is low and increases if it is high.

In the extended BVC model using the BCM rule, we change the weights and update the activity of yi (place cells) by summing the BVC input activity * input weights and passing it through a transfer function.

There is a sliding threshold in the extended BVC model using the BCM rule.

The threshold on yi output firing is proportional to the square of the recent activity of yi (place cells).

The BCM rule says that if a pattern of activity x drives neuron y, then if y is less than the threshold the weight decreases (unlearning weak inputs), and if y is more than the threshold the weight increases (learning).

We update the threshold based on the average activity of y.
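A minimal sketch of the BCM update and the sliding threshold (the learning rate and the averaging constant are illustrative assumptions):

```python
def bcm_step(w, x, y, theta, lr=0.01):
    # BCM weight change: dw = lr * x * y * (y - theta).
    # Post-synaptic activity above the sliding threshold strengthens
    # the synapse; activity below it weakens the synapse (unlearning).
    return w + lr * x * y * (y - theta)

def slide_threshold(theta, y, tau=0.1):
    # the sliding threshold tracks the square of recent
    # post-synaptic activity
    return theta + tau * (y ** 2 - theta)
```

A weak secondary peak drives the place cell only weakly (y below threshold), so its input weights shrink over time, exactly the pruning described in the next card.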

194
Q

Applying BCM rule to extended BVC model (4)

A
  • From BVCs, get a PC field representation
  • Repeat to get a PC field representation for all PCs
  • Update the weights between BVCs and PCs using the BCM rule
  • If a PC has a weak secondary peak at a specific location in the environment, the learning rule will weaken the weights of the neuron giving this weak secondary peak over time
195
Q

Unsupervised learning

A

The network learns useful weights solely based on the statistics of the input patterns

196
Q

Example of unsupervised learning (3)

A

Classifying input patterns via competitive learning (winner takes all)
BCM rule (unlearning below threshold)
Basic Hebbian learning rule

197
Q

Supervised learning (2)

A

The network is given a desired output to compare its own output to

The difference instructs the change in synaptic weights

198
Q

Examples of supervised learning

A

Perceptron (i.e., delta rule) , deep learning models,

199
Q

Reinforcement learning

A

The network is given intermittent feedback in form of punishment and rewards and uses this as a guide to change weights.

200
Q

Supervised learning rule and reinforcement learning rule can be

A

combined

201
Q

For our brain to produce ‘adaptive’ behaviour it must be capable

A

of learning!

202
Q

What are the three learning rules? (3)

A
  1. Unsupervised learning
  2. Supervised learning
  3. Reinforcement learning
203
Q

Example of supervised learning rule

A

Delta rule

204
Q

There is no one correct learning rule to understand the brain as

A

it depends on the brain area, the type of information that is learned, the stage of development of the organism, etc.

205
Q

What is the difference between reinforcement learning and supervised learning?

A

In reinforcement learning, there is no teaching signal (i.e., telling the network what the correct output should be) for every input-output combination, unlike in supervised learning; the network only gets occasional reward and punishment.

206
Q

What are the characteristics of Rosenblatt’s perceptron model? ( 3)

A
    • Describes how a set of examples of stimuli and correct responses can be used to train an artificial neural network to respond correctly via changes in synaptic weight
  • Learning governed by the firing rates of the pre- and post-synaptic neurons and the correct post-synaptic firing rate (i.e., a teaching signal) => “supervised learning”
  • Uses standard artificial neurons with no dynamics
207
Q

Why is Rosenblatt’s perceptron model an important historical example?

A

important historical example: instructive in understanding the aim of using neural networks for pattern classification.

208
Q

Describe simple perceptron model (4)

A
  • One output neuron (O1) and two input neurons (x1 and x2)
  • x1 and x2 each have an input weight: w1 = 1 and w2 = 1
  • To get the output (activity of O1), you sum the input activity * input weight and pass it through a transfer function (a step function)
  • The output neuron is active only if both input neurons are active (so that the net input reaches the threshold of 1.5) = the model performs the logical ‘AND’ function:
    it is only when the two input neurons are active that the output neuron is active
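The AND unit described above can be written directly:

```python
def perceptron_and(x1, x2, w1=1.0, w2=1.0, threshold=1.5):
    # step transfer function applied to the weighted sum of the inputs:
    # net input exceeds 1.5 only when both inputs are 1
    net = w1 * x1 + w2 * x2
    return 1 if net > threshold else 0
```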
209
Q

Describe from simple perceptron model how they got to producing equation of line and mechanism of learning in perceptron model - (7)

A

You can produce a graph of all the possible x1 and x2 combinations from the simple perceptron model, where the dashed line is the decision boundary: left of the line O1 = 0 and right of the line O1 = 1.

We can write the equation of the line: x1 + x2 = 1.5, i.e., x2 = -x1 + 1.5.

The line separating O1 = 1 from O1 = 0 is where the net input equals the threshold, which is w1x1 + w2x2 = T.

We can rearrange the threshold equation into the equation of a line: x2 = -(w1/w2)x1 + T/w2 (the format of y = mx + c; above we implicitly assumed w1 = w2 = 1).

Changing the weights will change the equation of the line.

Learning is changing the weights: to classify two input patterns means finding weights so that the line separates the group with O1 = 0 from the group with O1 = 1.

An input pattern is simply a set of activity values across the input neurons.

210
Q

Give example of pattern classification on Rosenblatt’s Perceptron on classifying adults and children based on height and weight - (6)

A
  • x1 signals weight
  • x2 signals height
  • Each individual (child/adult) we test will be one combination of the two input neurons (weight/height) = a pattern
  • We train the network on many example patterns of individuals; each individual changes the input weights, placing a decision boundary that separates the two groups (children and adults)
  • The decision boundary can be fuzzy, since there may be kids who are tall or heavy and adults who are small and light, but overall the classifier separates children and adults based on height and weight
  • After training this network on 100 individuals (50 adults/50 children), if we present a 101st person (who has not been used to train the network), we measure performance by how well it generalises (i.e., how well it classifies the 101st individual correctly):
211
Q

What is learning/training, training set, generalisation and performance in perceptron model mean? (definition) - (4)

A

Learning or training in the perceptron model means we present example patterns and each example pattern changes the input weights.

The training set is the set of patterns across input neurons (e.g., x1 and x2 representing weight and height, the features with which we want to classify adults and children).

Performance of pattern classification in the perceptron depends on generalisation, which is giving a new, never-before-seen datapoint (x1, x2) and observing the output (the result of classification).

Performance of the perceptron is how well the network classifies when presented with new datapoints.

212
Q

Describe how the perceptron network is trained or learned so that when you present an example pattern it changes weights in model = describe model below - (3)

A

This is done through the delta rule, which is an example of a supervised learning rule.

We will use the supervised learning rule on a more complex perceptron which has: 1) 5 input neurons x, 2) 3 output neurons (i.e., 3 possible results of pattern classification), 3) initial random weights, 4) outputs o1k, o2k, o3k (n = 3) and, 5) inputs x1k, x2k, x3k, x4k, x5k (n = 5).

The superscript k denotes pattern k and its output, which are elements of the training set (i.e., the set of patterns across neurons).

213
Q

Describe how the perceptron network is trained or learned so that when you present an example pattern it changes weights in model = delta ruel - (8)

A

With 3 output and 5 input neurons, we have a set of patterns across neurons (i.e., our training set).

Present a pattern k and find the corresponding output values (o1k, o2k, …).

So: present pattern k, and each output neuron performs its computation (sum input activity * weight and pass through the transfer function) to see if the output is on or off given the initial random weights.

We know whether the network has classified pattern k correctly by using the delta rule, which has a teaching signal for the neural network (i.e., access to the correct target output for pattern k).

In the delta rule, the change in connection weights is performed according to the difference between the target and the output we get for input pattern k.

Then present the next input pattern, k → k+1, and update the weights again.

The delta learning rule in the perceptron model changes the input weights to reduce the error.

The change in connection weight is big if the discrepancy between the target and output activity is big.

214
Q

Example of using delta rule numerically in perceptron - (5)

A

Specifically, consider the connection weight from x1 to output neuron O2.

The output activity of O2, after computation, is o2k = 0.2.

The input activity x1k was 0.7.

We also have the target t2k = 0.3, so we want the output activity o2 to be 0.3 instead of 0.2.

The delta learning rule takes this into account by changing the connection weight between x1 and O2 by calculating:
w12 (connection weight between x1 and O2) + epsilon * 0.7 * (0.3 - 0.2), so the error is reduced!
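The update can be sketched as a single matrix operation; the zero surrounding weights and the choice of epsilon are just for the worked check (x1 = 0.7, o2 = 0.2, t2 = 0.3, as above):

```python
import numpy as np

def delta_rule_update(W, x, o, t, epsilon=0.1):
    # w_ij <- w_ij + epsilon * x_j * (t_i - o_i): the bigger the
    # discrepancy between target and output, the bigger the change
    return W + epsilon * np.outer(t - o, x)
```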

215
Q

Rosenblatt’s actual perceptron model - (11)

A
  • An artificial retina of inputs x
  • A single output neuron that turns on or off depending on whether the pattern on the artificial retina was the correct one
  • The single output neuron is on if the pattern is detected (using the transfer function)
  • Input patterns xk are presented
  • Some patterns are to be detected, with target tk = 1, and some are not (they are foils)
  • For each pattern, apply the delta learning rule
  • After many presentations of the whole training set (in random order) it will find the best linear discriminator of targets from foils
  • Note the connection weights only change if there is an error and if the input neuron is active
  • If the target is bigger than the output then the weight increases
  • If the target is smaller than the output then the weight decreases
  • The delta rule changes the weights to reduce the error
216
Q

How do we find threshold T in delta rule in perceptron?

A

Just use T = 0 and add another input x0 = -1; then the weight from it, w0, can serve the same purpose
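A sketch of a full perceptron trained with the delta rule, using exactly this bias trick (T = 0, extra input x0 = -1 whose weight plays the role of the threshold); the AND task is reused as a toy training set:

```python
import numpy as np

def train_perceptron(X, targets, epsilon=0.1, epochs=100):
    # prepend the constant input x0 = -1 to every pattern
    Xb = np.hstack([-np.ones((len(X), 1)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for x, t in zip(Xb, targets):
            o = 1 if w @ x > 0 else 0          # step function, T = 0
            w += epsilon * (t - o) * x         # delta rule: change only on error
    return w

def classify(w, x):
    return 1 if w @ np.concatenate(([-1.0], x)) > 0 else 0
```

Because AND is linearly separable, the weights converge to a line separating the target from the foils, as the perceptron convergence theorem guarantees.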

217
Q

The disadvantage of perceptron model (9)

A

Minsky and Papert (1969) showed that the perceptron can perform linear discrimination but cannot solve ‘non-linearly separable problems’.

A non-linearly separable problem can, for example, be shown in a graph of the possible combinations of inputs x1 and x2 below.

Say you want to learn x1 XOR x2.

The output neuron would be ON if x2 = 1 and x1 = 0 OR x2 = 0 and x1 = 1.

Can we place a line that divides the space such that the output is 1 on one side and 0 on the other?

We cannot do this separation linearly.

Rosenblatt’s perceptron performs linear discrimination: the decision boundary over the combinations of input neurons is always a line, with the output neuron active on one side and not on the other (i.e., 1 or 0).

Therefore, the perceptron cannot solve non-linearly separable problems, as it always uses a linear decision boundary.

However, if we include hidden layers in the perceptron, which gives it more power, it is able to solve these ‘non-linearly separable problems’.

218
Q

What does reinforcement learning deal with and why is it difficult? (2)

A

Reinforcement learning deals with the question of how to make decisions in a sequential problem like chess or solving a maze.

This is difficult because of the temporal credit assignment problem: you don’t know which of your past actions was pivotal for a good outcome (e.g., winning a game of chess).

219
Q

What is the general setup of model-free reinforcement learning? (7)

A

There is an agent in the environment that receives reward intermittently.

The rewards are positive or negative but not otherwise specific, as they are in supervised learning.

But the agent does not know the rules of the environment, meaning it does not have a model of its environment.

The artificial agent has a policy, a state and a reward.

The agent has a policy, which is a set of rules that determine which actions to take in different states of the environment such that the agent (hopefully) maximises its future reward.

Reward R is provided by the environment and depends on the state of the agent.

For a given policy, a value V can be associated with each possible state, which is the sum of expected future rewards from that state.

At any moment the agent is in a given state. V is calculated by a formula with a discount factor, which means the further in the future you expect a reward, the less you value it.

220
Q

What is the goal of reinforcement learning and why is it a problem? (2)

A

The goal of reinforcement learning (especially model-free reinforcement learning) is to optimise the policy to maximise future reward.

This is a hard problem: in a simple game of tic-tac-toe we can write down all the states, but for chess there is an astronomical number of possible states and we can’t write them all out.

221
Q

How can a robot learn to kick a ball through Q-learning (model-free reinforcement learning)? = first paragraph and second

A

At any given moment the robot is in a given state (i.e., distance and direction to the ball, which characterises its behavioural repertoire) and can take a set of possible actions (move up/move down/move left/move right/kick ball).

When the robot kicks the ball it gets a reward (e.g., a number in memory it wants to maximise); up until then it gets nothing.

222
Q

How can a robot learn to kick a ball through Q-learning (model-free reinforcement learning)? = third paragraph = learning algorithm (4)

A

The learning algorithm guides the robot’s learning: it wants to maximise reward.

According to the learning algorithm, the robot has learning episodes where it evaluates its own state, takes an action and observes the reward (the robot may get a reward or not); these episodes last until the ball is kicked and the reward is received.

If the robot receives a reward, something about the environment has been learned.

If it has not, the robot takes another action.

223
Q

How can a robot learn to kick a ball through Q-learning (model-free reinforcement learning)? = fourth and fifth paragraph = robot and first learning (4)

A

The football-kicking robot does not know which actions lead to reward; it performs actions initially and only learns at the end of learning episodes.

In its first learning episodes, it performs random actions and kicks when there is no ball in its field.

But at some random point in the future the robot manages to kick the ball and receives the reward. It has now learned that when it is zero steps from the ball, it kicks it.

However, it has not learned to kick when it is one step away from the ball; it only learns that in the next learning episode. It has only learned the last state-action pair.

224
Q

How can a robot learn to kick a ball through Q-learning (model-free reinforcement learning)? =sixth and seventh paragraph = Q learning table and second learning episode (4)

A

In Q-learning, we can keep track of the progress of the robot learning to kick the ball in a Q-table, which mirrors the layout of the environment and assigns a value to each action.

For example, here is the KICK table for the first learning episode. A value is for performing action A given state S, and can be thought of as the current prediction of how much reward the robot will eventually obtain if, in state S, it performs action A and subsequent high-value actions.

In the second learning episode, the robot reaches the ball square, finds itself in the field with the ball, and knows how to kick because of the Q-table for KICK.

We can then assign a different value to the state-action pair that led us to the field with the ball: a value of 8 (smaller than the 10 in the KICK table, as it is one step away from the reward) in cell B7 of the MOVE-DOWN Q-table, which was zero prior to learning.

225
Q

How can a robot learn to kick a ball through Q-learning (model-free reinforcement learning)? =last paragraph (4)

A

In each learning episode, partial values are then assigned to the immediately preceding state-action pair whenever the robot steps into a field with a previously learned value.

The trial-and-error part reduces across Q-learning episodes as the robot builds more Q-tables, which assign values to all of its states for the different actions.

Thus, this leads to a route to the ball to kick it.

After the robot has learned the route to the ball, it can look up the actions in the Q-tables and perform the learned action sequence, as it does not need to rely on trial and error anymore.
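The robot's trial-and-error learning can be sketched in a 1-D toy version (a corridor with the ball at one end; the states, actions, rewards and parameters are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

def q_learn_corridor(n_states=6, episodes=300, alpha=0.5, gamma=0.9):
    # 1-D corridor: the ball sits in the last state; action 0 steps
    # left, action 1 steps right; reward 1 only on reaching the ball.
    # The behaviour here is purely random (off-policy trial and error);
    # Q still converges to values for the greedy route.
    Q = np.zeros((n_states, 2))
    goal = n_states - 1
    for _ in range(episodes):
        s = 0
        while s != goal:                           # one learning episode
            a = int(rng.integers(2))               # random exploration
            s2 = max(s - 1, 0) if a == 0 else min(s + 1, goal)
            r = 1.0 if s2 == goal else 0.0
            # update toward reward plus the best value of the next state
            Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
            s = s2
    return Q
```

After enough episodes, the greedy action in every state is "move right", and values fall off with distance from the ball by the discount factor, analogous to the 10-then-8 values in the cards.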

226
Q

What is Q? (4)

A

Q tells us the ‘joint quality’ of taking an action A, given a state S.

The formula for Q is the probability of entering state S’ (the next state, given S and A) times the total reward.

So: the probability of the state transition times the reward.

The formula for Q reflects that transitioning from the current state to the next state is not completely deterministic

227
Q

We won’t assign rewards to a bunch of random states the robot passes through, like kicking where there is no ball - (4)

A

We could not have assigned reward to that bunch of earlier states, because of superstition.

The robot may have been doing some useless motions, like going back and forth 10 times.

If we assigned values to those actions, they would be repeated in the future even though they are useless and have no causal impact on the outcome.

This is called superstition.

228
Q

What is Q-Learning?

A

It is a model-free, off-policy reinforcement learning algorithm that finds the best course of action, given the current state of the agent.

229
Q

What is the benefits of Q learning? - (2)

A

Can learn complex behaviours

No need for an explicit teaching signal as in supervised learning; only intermittent reward

230
Q

What is the disadvantages of Q learning? - (5)

A

Takes time and lots of trial and error, especially if the state space (number of possible state-action pairs) is large (e.g., as in chess)

But we can approximate the Q-table with deep learning

Deep Q-learning: e.g., DeepMind’s Atari game-playing AI uses a DQN (Deep Q-Network)

Cannot be applied in all types of situations: an agent can’t fall randomly off a million cliffs to learn optimal behaviour. Sounds funny until you consider self-driving cars!

Training in the real world (e.g., a robot) would take a long time

231
Q
  • What is temporal difference learning? (2)
A

Temporal difference learning is a model-free reinforcement learning (RL) algorithm, slightly different from Q-learning.

It links classical conditioning and reinforcement learning together in order to overcome the temporal credit assignment problem that occurs in the Rescorla-Wagner model.

233
Q
  • How does temporal difference learning work? - (4)
A

In the temporal difference learning model, we need the value function V at time t (Vt) to predict the sum of future rewards, not just the immediate reward, so that we can learn that S1 predicts S2, which predicts R (reinforcement = a general term for food reward/punishment).

In other words, we want the sum of all rewards at times tau greater than now (the present time t).

We want to decompose V(t) into the current reward and an estimate of subsequent reward.

To do this, we need a delta rule to ensure this happens.
Delta at time t is the current reward plus the estimated future reward, minus the expected reinforcement (what I expected): delta becomes the difference between the reward received now plus the estimate of all future reward, and the current expectation.
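The delta rule above can be sketched as TD(0) on a tiny chain where S1 predicts S2 and S2 predicts the reinforcement. The chain, learning rate, and reward value are illustrative assumptions, not from the flashcards.

```python
# Minimal TD(0) sketch: delta_t = r_t + V(t+1) - V(t).
# The 3-state chain S1 -> S2 -> end and the parameters are illustrative.
ALPHA = 0.1
rewards = {"S1": 0.0, "S2": 0.0, "end": 1.0}   # reinforcement only at the end
V = {"S1": 0.0, "S2": 0.0, "end": 0.0}

for _ in range(200):                    # repeated learning episodes
    chain = ["S1", "S2", "end"]
    for t in range(len(chain) - 1):
        s, s_next = chain[t], chain[t + 1]
        # current reward + estimate of future reward - what was expected
        delta = rewards[s_next] + V[s_next] - V[s]
        V[s] += ALPHA * delta

print(V)   # V(S2) learns first, then the prediction propagates back to S1
```

Because the error is passed backwards through the chain, the state that directly precedes the reward acquires value first, which is exactly how TD learning solves the temporal credit assignment problem.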

234
Q
  • Temporal difference learning (TDL) looks similar to (2)
A

Q-learning update formula

and maps onto it well:
TD(0), the zeroth-order version, is the closest to Q-learning.

235
Q

Third role of CA1-CA3 -(2)

A

During recall: If the EC->CA1 connection generates one pattern in CA1, but the CA3->CA1 connection generates a slightly different pattern from memory (via pattern completion in CA3),

then the mismatch can be an indicator of changes in the environment.

236
Q
  • In T-D learning more generally we don’t need to be
    restricted to 1 time step forward - (4)
A

We can introduce further time steps into the future and discount them using a discount factor, as uncertainty increases the further we look into the future.

We can extend TD learning to include more states in the future, e.g., 2 time steps ahead = similar to Q-learning

We can estimate two steps into the future, and if reward is to be expected we update our V (the current estimated value of expected reinforcement).

This is the same form of update as we had in Q-learning, except it does not update state-action pairs.
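The multi-step target with discounting can be written as a short helper. The discount factor and the example rewards are illustrative assumptions.

```python
# Sketch of a discounted n-step TD target: rewards further in the future
# count less, reflecting their greater uncertainty. Numbers are illustrative.
GAMMA = 0.9

def n_step_target(rewards, v_final):
    """r1 + g*r2 + ... + g^(n-1)*rn + g^n * V(final state)."""
    target = sum((GAMMA ** t) * r for t, r in enumerate(rewards))
    return target + (GAMMA ** len(rewards)) * v_final

# two time steps into the future, then bootstrap from the value estimate
print(n_step_target([0.0, 1.0], v_final=5.0))   # 0 + 0.9*1 + 0.81*5 = 4.95
```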

237
Q
  • Comparing the delta rule in temporal difference learning to R-W rule:
A

In the R-W rule, delta is based on the current reward only: delta is the difference between the current reward (reinforcement) and the expected reinforcement given all stimuli present.

238
Q
  • What is acquisition?
A

Acquisition is where a stimulus S (CS) is paired with reward in Phase 1, nothing happens in Phase 2, and we get a response to S.

239
Q
  • What is extinction?
A

Extinction is where S (CS) is paired with R in Phase 1; we then present S (CS) on its own in Phase 2 and see no response to S.

240
Q
  • What is partial reinforcement?
A

Where we only occasionally present S with R, leading to a weak response to S

241
Q
  • What is Rescorla-Wagner rule? (2)
A

The Rescorla-Wagner rule attempts to describe the change in associative strength (V) between a signal (conditioned stimulus, CS) and a subsequent stimulus (unconditioned stimulus, UCS) as a result of a conditioning trial.

Classical conditioning can be modelled with this rule.
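The rule can be sketched for a single CS, reproducing the acquisition and extinction cards above. The learning rate and asymptote lambda are illustrative values.

```python
# Minimal Rescorla-Wagner sketch: dV = alpha*beta * (lambda - V).
# The combined learning rate and lambda below are illustrative assumptions.
ALPHA_BETA = 0.2   # combined salience / learning-rate term
LAMBDA = 1.0       # maximum associative strength the US supports

V = 0.0
# Acquisition: CS paired with the US on every trial
for _ in range(30):
    V += ALPHA_BETA * (LAMBDA - V)   # delta rule: actual minus expected US
V_acquired = V
print(round(V_acquired, 3))          # approaches 1.0

# Extinction: CS presented alone (lambda = 0), so V decays back toward 0
for _ in range(30):
    V += ALPHA_BETA * (0.0 - V)
print(round(V, 3))                   # approaches 0.0
```

Note the negatively accelerated curve: each trial closes a fixed fraction of the remaining gap to lambda, which matches the typical shape of conditioning curves.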

242
Q

Third role of CA1-CA3 -(2)

A

During recall: If the EC->CA1 connection generates one pattern in CA1, but the CA3->CA1 connection generates a slightly different pattern from memory (via pattern completion in CA3),

then the mismatch can be an indicator of changes in the environment.

243
Q
  • How can 2 stimuli that predict reinforcement be modelled? = first part discussing blocking and overshadowing (7)
A

The two conditioned stimuli (S) that can predict reinforcement can be seen in the figure below, in which, if the reinforcement neuron is active, we predict reinforcement (which could be reward or punishment).

The way in which two stimuli that predict reinforcement can be modelled is implied by the blocking and overshadowing experiments conducted by Kamin in 1969, demonstrating that not all stimuli present during learning subsequently control behaviour.

In a blocking experiment, a dog is repeatedly exposed to a tone (first conditioned stimulus, CS1) together with food (unconditioned stimulus, US).

The dog salivates when the tone is presented (conditioned response, CR).

After several consecutive conditioning trials, the tone (CS1) and a light (CS2) are presented together with the US; the dog does not salivate / has only a weak response to the light (CS2) when it is tested separately later.

Stimulus control by CS2 has been blocked by the earlier pairing of CS1 with the US.

In an overshadowing experiment, a light (CS1) is more salient than a tone (CS2). The effect of pairing a UCS with the light + tone compound is that the light becomes strongly associated with the UCS (food), with little associative strength developing between the tone and the UCS (the light overshadows the tone); this causes no response (i.e., no salivation) to CS2.

244
Q
  • How can 2 stimuli that predict reinforcement be modelled? = second part: what blocking and overshadowing show (3)
A

These blocking and overshadowing experiments suggest that the way in which two CSs predicting R can be modelled is via the Rescorla-Wagner rule.

The Rescorla-Wagner rule attempts to describe the change in associative strength between a conditioned stimulus and an unconditioned stimulus as a result of a conditioning trial.

This rule uses a delta rule: it takes the difference between the reinforcement received (a general term covering reward or punishment) and the reinforcement expected, producing a single error term shared by all stimuli present.
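Because the error term is shared by all stimuli present on a trial, the rule reproduces blocking, which can be sketched as follows (parameter values are illustrative assumptions):

```python
# Sketch: the Rescorla-Wagner rule reproduces Kamin's blocking because all
# stimuli present on a trial share one error term. Parameters illustrative.
ALPHA_BETA, LAMBDA = 0.2, 1.0
V = {"CS1": 0.0, "CS2": 0.0}

# Phase 1: CS1 (tone) alone is paired with the US
for _ in range(50):
    error = LAMBDA - V["CS1"]                # US minus total prediction
    V["CS1"] += ALPHA_BETA * error

# Phase 2: CS1 + CS2 compound is paired with the same US
for _ in range(50):
    error = LAMBDA - (V["CS1"] + V["CS2"])   # single shared error term
    V["CS1"] += ALPHA_BETA * error
    V["CS2"] += ALPHA_BETA * error

print(V)   # CS1 near 1.0; CS2 stays near 0 -- learning to CS2 is "blocked"
```

By the start of phase 2, CS1 already predicts the US fully, so the shared error is near zero and CS2 has nothing left to learn.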

245
Q

Diagram of R-W rule

A
246
Q
  • Why can’t we use the Rescorla-Wagner rule for temporal sequencing of stimuli / 2nd-order conditioning? - (6)
A

The Rescorla-Wagner rule cannot be applied when there is a temporal sequence of stimuli, as occurs in second-order conditioning.

For instance, second-order conditioning can be demonstrated using the following procedure: a CS1 (e.g., a light) is paired with a UCS (e.g., food) in phase 1; then a CS2 (e.g., a tone) is paired with CS1 (the light) in phase 2; the response to CS2 is then tested.

This results in a CR relevant to the original UCS (food) being evoked by CS2, even though CS2 has never been directly paired with food (e.g., Rescorla, 1980; Rizley & Rescorla, 1972).

The Rescorla-Wagner rule cannot explain why there is a response to CS2, since it only works for direct associations of a CS with reinforcement (R; food reward/punishment).

More specifically, the rule does not work because there is no R in phase 2 of second-order conditioning, and the simple delta rule it uses depends on R!

This is because the simple delta rule is calculated by taking the difference between received and expected reward.
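The failure can be demonstrated numerically: with no US in phase 2, the shared error term is negative, so the rule drives the compound's prediction toward zero instead of giving CS2 a positive value. Parameters are illustrative assumptions.

```python
# Sketch: why Rescorla-Wagner fails on second-order conditioning.
# Phase 2 has no US (lambda = 0), so the delta rule can only push the
# compound's total prediction down. Parameters are illustrative.
ALPHA_BETA, LAMBDA = 0.2, 1.0
V = {"CS1": 0.0, "CS2": 0.0}

# Phase 1: CS1 (light) paired with food
for _ in range(50):
    V["CS1"] += ALPHA_BETA * (LAMBDA - V["CS1"])

# Phase 2: CS2 (tone) + CS1 compound, but no reinforcement
for _ in range(50):
    error = 0.0 - (V["CS1"] + V["CS2"])      # no R received in phase 2
    V["CS1"] += ALPHA_BETA * error
    V["CS2"] += ALPHA_BETA * error

# The compound's total prediction extinguishes and CS2 ends up negative
# (inhibitory) -- the opposite of the positive second-order CR found
# experimentally, so the rule cannot account for the effect.
print(V)
```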

247
Q

We can think for V for value iteration as

A

one large table with entries for all possible states (initialised e.g. with 0)
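The table idea can be sketched with a tiny value-iteration loop. The 4-state chain, its transitions, and the goal reward are invented for illustration.

```python
# Minimal value-iteration sketch: one table V with an entry per state,
# initialised to 0 and swept until convergence. The chain A -> B -> C ->
# goal and its reward are illustrative assumptions.
GAMMA = 0.9
STATES = ["A", "B", "C", "goal"]
actions = {                       # deterministic transitions per action
    "A": {"right": "B"},
    "B": {"right": "C", "left": "A"},
    "C": {"right": "goal", "left": "B"},
    "goal": {},                   # terminal: no actions
}
reward = {"goal": 10.0}           # reward only for entering the goal

V = {s: 0.0 for s in STATES}      # the large table, initialised with 0
for _ in range(100):
    for s in STATES:
        if actions[s]:
            # Bellman update: best action's reward plus discounted value
            V[s] = max(reward.get(s2, 0.0) + GAMMA * V[s2]
                       for s2 in actions[s].values())

print(V)   # values decay with distance from the goal: C > B > A
```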