Neuroscience exam practice Qs Flashcards
- Design what you would consider to be an ideal drug for treating procrastination. Justify your choices related to speed of onset and route of administration. Consider who should take this drug (everyone? Only those who procrastinate a lot?) and what factors affect this decision beyond wanting to procrastinate less.
Procrastination derives from an inability to forgo instant gratification in order to reach a long-term goal; in other words, a failure to inhibit an immediately rewarding behaviour in order to engage in a behaviour that brings a later, and presumably larger, reward. The ability to inhibit behaviour is associated with activity in the prefrontal cortex, so a drug designed to decrease procrastination should enhance prefrontal activity. Progressive ratio experiments have demonstrated that dopamine agonists increase motivation in rats and humans: under a DA agonist, subjects will do more work for the same reward. An ideal drug to decrease procrastination would thus increase both dopaminergic and prefrontal cortex activity.
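As an illustration of the progressive-ratio logic, the sketch below models a breakpoint (the largest ratio a subject will complete) under the toy assumption that a drug simply raises the maximum effort the subject will tolerate for one reward; the numbers, the linear ratio step and the function name are invented for the example and are not an actual experimental protocol.

```python
# Illustrative sketch only: a toy model of a progressive-ratio schedule.
# The "motivation" parameter and the linear ratio step are assumptions.

def breakpoint(motivation, step=5, max_trials=100):
    """Return the last ratio completed before the subject 'gives up'.

    motivation: maximum number of lever presses the subject will
                emit for one reward (a stand-in for DA tone).
    step:       how much the required ratio grows after each reward.
    """
    required = step
    last_completed = 0
    for _ in range(max_trials):
        if required > motivation:        # effort now exceeds what the reward is worth
            break
        last_completed = required        # subject completes this ratio
        required += step                 # schedule demands more work next time
    return last_completed

baseline = breakpoint(motivation=40)          # hypothetical untreated subject
with_da_agonist = breakpoint(motivation=80)   # assume the drug doubles tolerated effort
print(baseline, with_da_agonist)              # e.g. 40 vs 80: higher breakpoint = more motivation
```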
There are currently several drugs that do this and that have been shown to reduce procrastination – stimulant ADHD medications such as Ritalin and Adderall, which act as DA and norepinephrine agonists (Adderall is an amphetamine; Ritalin, or methylphenidate, blocks DA and NE reuptake). What makes these stimulant ADHD drugs less than ideal is their side effects, analogous to those of street amphetamines – insomnia, cardiac arrhythmia, jitteriness and withdrawal. An ideal drug would, essentially, reproduce the dopaminergic, cognitive-enhancing, motivating effects of ADHD drugs with fewer side effects – a higher therapeutic index. Cocaine is a potent indirect DA agonist (a reuptake blocker) that has been shown to increase motivation, and could in principle be explored as a drug to reduce procrastination. However, both ADHD drugs and cocaine are associated with dependence, and it would be unwise to prescribe such a drug to anyone whose procrastination was not at a clinical level – i.e. causing impairment and distress.
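For reference, the therapeutic index mentioned above is commonly expressed as the ratio of the dose producing toxicity in half of subjects to the dose that is effective in half of subjects; the dose numbers below are made up purely to show the arithmetic.

```python
# Therapeutic index = dose causing toxicity in 50% of subjects (TD50)
# divided by dose effective in 50% of subjects (ED50).
# The dose values below are invented purely to illustrate the ratio.
td50 = 200.0   # mg, hypothetical toxic dose
ed50 = 10.0    # mg, hypothetical effective dose
therapeutic_index = td50 / ed50
print(therapeutic_index)   # 20.0: the larger this number, the wider the safety margin
```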
Given that procrastination is generally not a life-or-death situation, a medium-speed route of administration, such as an oral tablet, would appear optimal for a procrastination drug. It would be interesting to test the efficacy of formulations with varying half-lives. Would an anti-procrastination drug that lasts for 30 minutes, just enough time to get someone on task, be effective? Or would longer-lasting effects be needed to keep the subject focussed on the task? For those whose procrastination was chronic, a depot injection could be considered.
And while it is a tantalising idea to create a drug that can increase motivation for immediately ungratifying but important tasks, a behavioural approach – involving increasing self-efficacy and the salience of the final reward – is probably a safer long-term option.
- Why is a “just say no” strategy likely to fail in treating addicted individuals? Describe 3 differences in the brains of people with a history of drug abuse compared to age-matched controls.
A “just say no” strategy, while it may be effective for those at risk of becoming addicted, is unlikely to work for addicted individuals. Drug addicts are usually dependent on a drug, meaning that not taking it results in withdrawal symptoms, setting up a negative reinforcement loop whereby the drug is needed to remove the aversive withdrawal state. Moreover, the brains of drug addicts show long-lasting differences from those of age-matched controls which make it difficult for them to inhibit drug-taking behaviour.
Firstly, drug addicts have been shown to exhibit hypofrontality – reduced activity in the medial prefrontal cortex – an area of the brain associated with inhibiting behaviours. The corollary of this is that drug addicts find it harder to defer the instant gratification of a drug in favour of a less tangible long-term reward, such as being drug-free.
Secondly, the motivation to give up drugs may be reduced by diminished dopaminergic activity: alcoholics, for instance, have fewer and less sensitive D2 receptors in the striatum.
Thirdly, cocaine addicts have been shown to be less sensitive to natural rewards: rewards that are not drug-related are devalued, which makes it difficult to stop drug use via differential reinforcement of other behaviours.
- Describe three neurotransmitter systems that are/could be targeted with pharmacotherapies to reduce anxiety symptoms. Which do you think is best and why?
Three neurotransmitter systems that can be targeted to reduce anxiety are the GABA, serotonin and adrenaline systems.
Perhaps the most obvious NT system that could be targeted is the GABA system – which is stimulated by alcohol (perhaps society’s oldest anxiolytic) as well as by drugs of the barbiturate and benzodiazepine classes. GABA interneurons have an inhibitory effect on neural activity throughout the brain, and GABA agonists such as benzodiazepines can effectively reduce both the cognitive and the autonomic symptoms of anxiety. As a result of GABA’s broad activity in the brain, benzodiazepines also have a wide side-effect profile: impaired motor skills, learning and coordination, drowsiness and lethargy. More significant is the dependence associated with long-term benzodiazepine use, with withdrawal symptoms that include rebound anxiety – the very symptom the drugs are often prescribed to treat. This gives them considerable addiction potential.
The adrenaline system is responsible for triggering the sympathetic nervous system “fight-or-flight” response associated with acute forms of anxiety, and can be targeted with beta blockers, which work by stopping the neurotransmitters norepinephrine and epinephrine from binding to beta receptors on target organs such as the heart. While targeting the adrenaline system can reduce physiological anxiety symptoms such as increased heart rate and shaking, beta blockers have little direct effect on the cognitive/affective aspects of anxiety. The adrenaline system plays its largest role in acute, high-arousal situations (such as panic, PTSD flashbacks or stage fright) and is less involved in more pervasive forms of anxiety such as phobias and generalised anxiety. Targeting the adrenaline system to reduce anxiety is thus effective only in a limited context.
The most effective way of decreasing anxiety pharmacologically – and the first-line chemical treatment today – appears to be via the serotonin system. Selective serotonin reuptake inhibitors work by blocking the reuptake of serotonin, increasing its concentration in the synapse and producing an anxiolytic, anti-depressant effect. The most common side effects of SSRIs include reduced sexual sensation and flattened affect. A disadvantage is that they take 4 to 6 weeks to reach maximum efficacy, and certain SSRIs can cause uncomfortable – and unpredictable – withdrawal symptoms if they are stopped abruptly, including suicidal and, rarely, homicidal ideation, the former being a particular concern among younger patients. And while their side effects are fewer than those of benzodiazepines, SSRIs are effective in only around 65% of patients, vs 40% for placebo – suggesting that only about one in four patients benefits beyond the placebo response. That being said, their relatively low dependence risk compared to GABA agonists, and their efficacy across a wider range of forms of anxiety than drugs targeting the adrenaline system, make SSRIs the preferred anxiolytic medication.
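As a rough check on what those response rates imply, the standard number-needed-to-treat calculation can be applied; the 65%/40% figures are the ones quoted above, and the calculation itself is generic rather than tied to any particular trial.

```python
# Number needed to treat (NNT) from the response rates quoted above.
drug_response = 0.65
placebo_response = 0.40

absolute_benefit = drug_response - placebo_response   # 0.25: drug-attributable responders
nnt = 1 / absolute_benefit                            # patients treated per extra responder

print(f"Absolute benefit: {absolute_benefit:.0%}")    # 25%
print(f"NNT: {nnt:.0f}")                              # ~4 patients per additional responder
```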
A caveat to all this is that the best treatment for anxiety appears to be CBT, which has been shown to be at least as effective as SSRIs, carries no pharmacological side effects and has a lower risk of relapse.
- How are glutamate receptors involved in LTP?
Long-term potentiation is a long-lasting enhancement in signal transmission between two neurons that results from stimulating them synchronously and repetitively. In layman’s terms, neurons that fire together, wire together. This increase in synaptic strength is caused by an increased number of glutamate receptors at the post-synaptic neuron.
When a synapse is subjected to a tetanus (repeated high-frequency stimulation), more AMPA glutamate receptors are inserted into the post-synaptic neuron and NMDA receptors are activated. The mechanism whereby this happens is as follows: when AMPA receptors are activated, they cause a slight depolarisation of the post-synaptic cell by letting Na+ into the neuron. This slight depolarisation is detected by the NMDA receptor, which releases the magnesium ion blocking its channel, allowing calcium ions to flow into the cell. The calcium ions activate the protein kinase CaMKII, which increases the number and sensitivity of AMPA receptors. In fact, LTP can be measured by the ratio of AMPA to NMDA receptor currents – the higher this ratio, the greater the evidence that LTP has occurred.
In short, as a neural pathway is stimulated repeatedly, the glutamate receptors on the pathway become more numerous and more sensitive, meaning signal strength is increased along that pathway. This phenomenon is LTP.
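As an aside, the AMPA-insertion logic above can be caricatured in a few lines of code. This is a deliberately oversimplified, purely illustrative sketch: the threshold, step sizes and function name are invented for the example, and it is not a biophysical model.

```python
# Toy illustration of the LTP mechanism described above: coincident
# activity (the NMDA "coincidence detector") drives insertion of extra
# AMPA receptors, strengthening the synapse. All numbers are arbitrary.

def tetanus(ampa_receptors, pulses, depolarisation_per_pulse=0.2,
            nmda_threshold=1.0, ampa_added_per_unblock=2):
    depolarisation = 0.0
    for _ in range(pulses):
        # AMPA activation depolarises the post-synaptic cell a little
        depolarisation += depolarisation_per_pulse * ampa_receptors
        if depolarisation >= nmda_threshold:
            # Mg2+ block relieved: Ca2+ enters, CaMKII drives AMPA insertion
            ampa_receptors += ampa_added_per_unblock
            depolarisation = 0.0          # reset for the next volley
    return ampa_receptors

weak = tetanus(ampa_receptors=1, pulses=2)     # brief input: little or no change
strong = tetanus(ampa_receptors=1, pulses=50)  # repeated stimulation: more AMPA receptors
print(weak, strong)   # the strengthened synapse now responds more to the same input
```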
- Describe how Darwin’s theory of evolution by natural selection explains how the expression of traits in a population changes over generations. Choose one psychological trait, and describe how Darwin’s theory might be applied to explain how this trait evolved to its current form in humans.
According to Darwin’s theory of natural selection, the fittest individuals of a species are more likely to pass on their genetic material. This is because, given limited resources, those who have traits allowing them to compete successfully for those resources will survive to reproduce, passing on their genes. As such, genes responsible for traits that confer a reproductive advantage will be represented in future generations more than those that do not.
The psychological trait of jealousy in pair bonds, for instance, can be explained through Darwinian theory. Jealous individuals are motivated to keep their partner close by, which, in the case of men, helps ensure that female partners reproduce only with them. Conversely, jealousy in women helps ensure that the male partner invests in the relationship and is present to protect offspring from threats, ensuring the female’s genetic material is passed on.
- Variation. Organisms (within populations) exhibit individual variation in appearance and behavior. These variations may involve body size, hair color, facial markings, voice properties, or number of offspring. On the other hand, some traits show little to no variation among individuals—for example, number of eyes in vertebrates.
- Inheritance. Some traits are consistently passed on from parent to offspring. Such traits are heritable, whereas other traits are strongly influenced by environmental conditions and show weak heritability.
- High rate of population growth. Most populations have more offspring each year than local resources can support, leading to a struggle for resources. Each generation experiences substantial mortality.
- Differential survival and reproduction. Individuals possessing traits well suited for the struggle for local resources will contribute more offspring to the next generation.
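To see how differential survival and reproduction shifts the frequency of a heritable trait across generations, here is a minimal, purely illustrative simulation; the relative fitness values, population size and trait names are arbitrary assumptions, not data.

```python
# Illustrative only: frequency of an advantageous heritable trait over generations.
import random

def next_generation(pop, fitness, size):
    # Individuals reproduce in proportion to their fitness (differential reproduction);
    # offspring inherit the parent's trait (inheritance, ignoring mutation).
    weights = [fitness[trait] for trait in pop]
    return random.choices(pop, weights=weights, k=size)

fitness = {"trait_A": 1.1, "no_trait": 1.0}   # trait_A gives a small reproductive edge
pop = ["trait_A"] * 10 + ["no_trait"] * 90    # variation: trait starts rare (10%)

for generation in range(100):
    pop = next_generation(pop, fitness, size=100)

print(pop.count("trait_A") / len(pop))  # frequency tends to rise well above 0.10
```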
- Explain how maternal rearing styles and changes to the stress axis can become inherited from generation to generation in female rats via epigenetic processes.
Female rats raised by high-maternal-care mothers are less cortisol-reactive – that is, less susceptible to stress – than rats raised by low-maternal-care mothers. The more stressed a female rat is, the less likely it is to engage in maternal care behaviours such as “arched back nursing” (a posture that allows rat pups to find the mother’s nipple) and licking and grooming its offspring. Thus maternal rearing style can be passed on from generation to generation in female rats.
The mechanism whereby this occurs is that maternal care changes the pattern of gene expression (mRNA) in the stress hormone system in adulthood. Rats that have received high maternal care have a greater number of glucocorticoid (cortisol) receptors, which regulate the stress response, in the hippocampus. The cause lies in the level of DNA methylation of the glucocorticoid receptor gene, which is higher among low-maternal-care offspring, making the gene harder to read and transcribe. Thus, maternal care can modulate the stress axis across generations via the epigenetic process of DNA methylation.
- Describe what internal cues for meal initiation and cessation are. Obese people are said to be more sensitive to external cues. How do external cues influence eating?
Internal cues for meal initiation and cessation are those generated within the body. The meal cessation (satiety) cues include:
Stomach stretch – a satiety signal that occurs when the stomach is full; it is carried by the vagus nerve to the nucleus of the solitary tract (NTS) in the medulla.
Blood sugar – picked up by sugar-sensitive cells in the NTS.
Leptin – released by fat (adipose) tissue in proportion to body fat and picked up by the arcuate nucleus (ARC) of the hypothalamus.
Insulin – picked up by the ARC and indicates there is enough blood sugar.
The principal meal initiation cue is provided by ghrelin, which is produced when the stomach is empty and is picked up by the ARC. A spike in ghrelin precedes eating.
External cues were divided by Herman and Polivy (2008) into normative cues – such as how much food one is served and when others stop eating – and sensory cues – the smell, taste and look of food. Obese people obey normative cues (when in company), but are more sensitive to the sensory aspects of food. External cues such as the smell of cookies have been shown to increase consumption of the sensed item in both obese and normal-weight individuals. The mechanism whereby this occurs is that exposure to a cue triggers the mesolimbocortical reward pathway – we crave the food – and this enhances the hedonic experience of the food.
The sensory cue of the palatability of the food itself has also been demonstrated by Yeomans et al. (2001) to reduce the effects of satiety on consumption – if food tastes good, we are more willing to ignore satiety signals. The orbitofrontal cortex appears to be involved in the reduction of the pleasurability of a food upon satiety. This OFC satiety switch appears to be dysfunctional in the obese, who continue to eat – and feel pleasure – even when sated.
- Describe 5 ways that glia regulate neural activity. For each, describe how a dysfunction in this process might contribute to a neuropathology.
Glial cells are crucial not only for providing structural and trophic support for neurons but are also involved in neurotransmission. However, dysfunctional activity of glial cells has been implicated in a variety of neuropathologies. Five ways in which glia regulate neural activity, and how each can go wrong, are:
- Microglial cells act like immune cells, looking for signs of inflammation and removing waste products by phagocytosis, ensuring neurotransmission is not disrupted by extraneous substances. Microglia can, however, respond so aggressively that they damage brain cells. In stroke, for instance, especially ischaemic stroke, or in head injury, the microglial response can cause more damage than the original insult, as the cells detect multiple abnormalities and can destroy neural tissue through phagocytosis or by triggering apoptosis.
- Oligodendrocytes regulate neural conduction by myelinating axons. In multiple sclerosis, this myelination function is disturbed, slowing neurotransmission and leading to a range of debilitating motor and cognitive symptoms.
- Microglial cells release pro-inflammatory cytokines to trigger an immune response. This process, however, has been shown to be involved in neurodegenerative diseases such as Parkinson’s and Alzheimer’s.
- Glial cells are involved in regulating activity at glutamate synapses, in part by clearing glutamate from the synapse. In chronic pain, however, this regulation fails: more glutamate remains in the synapse, leading to the spread of activation between pain neurons.
- Glia can transport nutrients to neurons. However, in the neurodegenerative disease amyotrophic lateral sclerosis, astrocytes secrete a toxic factor that kills motor neurons – those involved in muscle function.
- Epilepsy: hyperactivity of astrocytes → neural hyperexcitability, which can induce seizures.
- Schizophrenia: fewer oligodendrocytes, less myelin and disturbances in white matter. Positive symptoms have been linked to an increased number of astrocytes (decreased glutamate in synapses, hyperexcitation); anti-psychotics (glial inhibitors) reduce the positive symptoms of schizophrenia.
- What is the spino-thalamic tract? Describe the neuroplastic mechanisms that occur in the spinal cord in chronic pain.
The spino-thalamic tract is a sensory pathway originating in the spinal cord which transmits information to the thalamus about pain (as well as temperature, itch and crude touch). The types of sensory information transmitted via the spinothalamic tract are described as “affective sensations”, meaning that the sensation is accompanied by a compulsion to act: an itch is accompanied by a need to scratch, and a painful stimulus makes us want to withdraw from the source of the pain. Three fibre types are responsible for the sensation of touch/pain – Aβ, Aδ and C fibres – and these all enter the spinal cord at the dorsal horn before sending signals to the brain, mostly via the thalamus.
Chronic pain sufferers show several changes in the spinothalamic tract that might account for the symptoms of the disorder:
Firstly, when pain neurons are repeatedly stimulated, the connections between them are strengthened through NMDA-mediated signal enhancement, or long-term potentiation, contributing to secondary hyperalgesia (heightened pain sensitivity around the original injury site). Aβ fibres (touch neurons) also start releasing neurotransmitters in the spinal cord that are normally only released by C fibres (pain neurons).
Silent neurons – which are usually dormant – start making new pain receptors and responding to pain.
Touch fibres also start wiring themselves to pain fibres and thus touch becomes associated with pain. This last point explains the phenomenon of allodynia, whereby previously innocuous touch stimuli (such as the feeling of a shirt upon one’s back) can become subjectively painful.
Finally, pain signals spread to a greater number of ascending pain-transmission neurons: glial cells cease to regulate glutamate at the synapse, allowing it to spill over to neighbouring neurons and resulting in a wider painful area.