Task 7 Flashcards
The amygdala's role in decision making
The amygdala's role in decision making is that it guides responses based on emotional information from the environment. It communicates with IT/PRh, which guides responses based on visual information, and both structures project to the OFC, which holds the representation of expected outcome values (based on amygdala and IT/PRh input) and helps us make a decision.
The amygdala can influence IT to enhance processing of biologically relevant stimuli – the IT/PRh interaction with frontal cortex is necessary to implement visually guided rules
amygdala role
Anticipatory autonomic and neuroendocrine response
⇒ Amygdala mediates not only unconscious biases and preferences but also similar feelings about abstractions (ideas, concepts, etc.)
⇒ An important role in registering changes in the status quo
⇒ Ensures that affective signals enter the decision-making process
de Martino
Two response options: ‘gamble’ (a fixed probability of losing or gaining a certain amount of money) or ‘safe’ (a certain amount of money is either kept or taken away for sure)
· Two different frames: ‘Gain frame’ (gain money) or ‘Loss frame’ (lose money) – see the sketch below for how the two frames can describe the same payoffs
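A quick way to see why the two frames are comparable: the sure option and the gamble are matched so that only the wording differs. The minimal sketch below uses made-up amounts (the actual stakes in the study varied from trial to trial) just to show that a ‘keep’ phrasing and a ‘lose’ phrasing can describe the same final payoff.

```python
# Minimal sketch of how a gain frame and a loss frame can describe the same
# choice. All amounts and probabilities below are illustrative, not taken from
# the de Martino paper.
initial_amount = 50.0                       # money handed out at trial onset
sure_gain_frame = 20.0                      # gain frame: "keep £20"
sure_loss_frame = initial_amount - 30.0     # loss frame: "lose £30" -> also £20
p_keep_everything = 0.4                     # gamble: keep all with p, else lose all
gamble_expected_value = p_keep_everything * initial_amount   # = £20

# All three numbers are identical, so a shift toward gambling in the loss frame
# reflects the framing itself, not a difference in expected payoff.
print(sure_gain_frame, sure_loss_frame, gamble_expected_value)
```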
results
Framing significantly changed participants' decisions (to gamble or play safe): risk-averse (‘safe’) in the gain frame and risk-seeking (‘gamble’) in the loss frame
amygdala: mediates the framing effect (active when participants choose to gamble in loss frame and safe option in gain frame)
o value-related prediction and learning; simple instrumental decision-making; detection of emotionally relevant info (present in contextual and social-emotional cues)
vmPFC/OMPFC: involved in rational performance and noticing one's own emotional biases
o individuals who are less susceptible to the framing effect and act more rationally show higher vmPFC activity (the article uses the more general term OMPFC)
ACC (role of MCC/preSMA/OMPFC): active when participants decide against a general behavioral tendency
o detects conflicts between analytic response tendencies and emotional amygdala-based tendencies
vmPFC and amygdala are highly interconnected → the two systems are not completely separated
Murray opinion
de Martino opinion
Hampton opinion
Hampton: Because activity modulations are evident in normal control participants but not in the two patients with bilateral amygdala damage, the amygdala must be causally involved in this modulation. This conclusion seems sound, but remember from the Murray paper that the amygdala signal is not a necessary updating signal for reversal learning, although the Hampton study shows that it does contribute to the updating signal in normal subjects.
Hampton et al. report decreased reversal learning in the patients without an amygdala, whereas Murray says that the amygdala isn't essential for reversal learning.
Hampton
- Participants were scanned with fMRI while they participated in a task designed to probe reward-related learning and behavioral decision making – monetary probabilistic reversal learning
- Subjects were shown two stimuli and chose one
- Afterward, monetary reward (gain) or punishment (loss) appears
- Deterministic version: correct choice is always rewarded, incorrect choice is always punished
- Probabilistic version: the correct stimulus results in rewards 70% of the time and losses 30%; the incorrect stimulus results in rewards 40% of the time and losses 60%
- Once the correct stimulus has been chosen four times, the contingencies are reversed
- Subjects have to infer that a reversal took place and switch their choice; the process is then repeated (a simulation sketch follows below)
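Below is a minimal Python sketch of the probabilistic version of the task as described in these notes (70%/30% for the correct stimulus, 40%/60% for the incorrect one, reversal after four choices of the correct stimulus). The outcome coding (+1 reward, -1 loss) and the exact reversal criterion are my own reading of the notes, not necessarily the paper's exact procedure.

```python
import random

# Minimal simulation of the probabilistic reversal-learning task described above.
REWARD_PROB = {"correct": 0.70, "incorrect": 0.40}

def run_trials(choices):
    """Return the outcome (+1/-1) for each choice in a sequence of 'A'/'B' picks."""
    correct = "A"              # which stimulus is currently the 'correct' one
    correct_count = 0          # choices of the correct stimulus since the last reversal
    outcomes = []
    for choice in choices:
        label = "correct" if choice == correct else "incorrect"
        outcomes.append(1 if random.random() < REWARD_PROB[label] else -1)
        if label == "correct":
            correct_count += 1
        if correct_count == 4:                      # reversal point reached
            correct = "B" if correct == "A" else "A"
            correct_count = 0
    return outcomes

print(run_trials(["A"] * 12))  # rewards dominate early, then reverse mid-sequence
```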
Hampton results
Results from previous studies using the same task and healthy subjects:
- Activation changes in amygdala and vmPFC related to processing rewarding and punishing outcomes and subsequent behavioral decisions
→ Receipt of reward associated with activity in mPFC + vmPFC
→ Receipt of punishment associated with activity in MCC, vlPFC and lOFC
- Activity in both regions tracks expected reward value during task performance; expectation signals are updated flexibly following changes in reinforcement contingencies
Expected reward signals:
- Signal correlated with activity in OFC and mPFC, time-locked to time of choice
- OFC/mPFC activity increased linearly with increasing expected reward value → these areas encode the expected reward of the currently chosen stimulus (a toy updating sketch follows below)
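One simple way to picture an expectation signal that increases with expected reward value and is updated flexibly after a contingency change is a delta-rule update, sketched below. This is only an illustration of the updating idea; the learning rate and outcome coding are assumptions, and it is not the specific model fit in the Hampton paper.

```python
# Toy delta-rule update of an expected-reward value (illustrative parameters).
def update_expected_value(v, outcome, alpha=0.3):
    """Nudge the expectation v toward the obtained outcome (+1 reward, -1 loss)."""
    return v + alpha * (outcome - v)

v = 0.0
for outcome in [1, 1, 1, 1, -1, -1, -1]:      # rewards, then a hidden reversal
    v = update_expected_value(v, outcome)
    print(round(v, 2))  # rises toward +1 with rewards, falls after the reversal
```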
In this Hampton et al. paper, the authors look at a later phase in the trial, namely the outcome or feedback phase. As indicated above, this is the phase where all areas involved in storing and updating outcome-related information should be active – and they are: lateral OFC updates stimulus-outcome associations and anterior MCC updates action-outcome associations. And vmPFC? Its response to feedback only reflects a sort of appreciation of the feedback: the voxels are less deactivated during feedback preceding a stay trial than during feedback preceding a strategy switch. And we know from the task on the cortical contribution to object selection that vmPFC is less deactivated when processing more highly valued information (e.g., positive adjectives applied to the self, choices between objects with a large value difference for the self …)
Amygdala-damaged patients
- Switch trials
Lesion patients show LESS activity in areas where control participants showed increased activity during switch trials: anterior insula, posterior lateral OFC
SM and AP both more likely to switch after reward compared to controls (same in deterministic task)
- Smaller anterior insula / posterior lateral OFC activation → negative feedback following a previously rewarded stimulus contributes much less to updating the stimulus-outcome association, as no amygdala signal arrives in the OFC.
- Stay trials (choose the same stimulus again): no difference
- Expected reward signals
Lesion patients had different activity in the mPFC
· Figure D shows that the ‘normal’ expected-reward signal in mPFC is linear; this linearity is not present in lesion patients
· Showing that, without the amygdala, the expected reward value of each choice is processed abnormally!
- Response to rewarding/punishing outcomes
§ No significant difference between amygdala-lesion patients and controls!
· Reward > mPFC, mOFC
· Punishment > anterior ventrolateral PFC and lOFC
§ Showing that processing of reward/punishment in lOFC and mPFC is still intact!
· Amygdala lesions only impair generation of expected reward signals
No differences → suggests that processing of rewards and punishments stays intact after amygdala damage.
- The outcome of the choice (reward or punishment) is also processed by the visual-parietal-prefrontal system (LeDoux's high route) → as a consequence, participants rationally know the effect of their choices, but receive no affective response (low route) from the amygdala telling them how they feel about this outcome.
- Amygdala lesions selectively impair generation of expected reward signals and behavioural choice signals based on expected reward, but leave generation of reward outcome signals intact.
Difference between Murray and Hampton
Murray provides convincing evidence (e.g., Figure 2) that the amygdala is not a necessary structure for learning stimulus-outcome associations: monkeys with highly selective bilateral amygdala lesions perform as well on a reversal learning task as intact monkeys do.
But it doesn't follow from this that the amygdala does not influence the learning and decision processes involved in such tasks: it is not needed for reversal learning, but it does help. The general hypothesis is that both affective value and reward value information enter the lOFC/vmPFC/aMCC/(pre)SMA goal-directed decision-making process. The reward value signal comes from the midbrain dopamine neurons and the affective value signal comes from the amygdala. Rewards, from primary food to complex social and abstract monetary gains, clearly have a reward value associated with them, but they also have an affective value: finding an unexpected reward elicits positive emotions. This affective ‘tag’ of the outcome is sent around the brain to the areas that need to update information.
o Reward value signal > comes from midbrain dopamine neurons
Money has obvious ‘reward’ value > getting money is a reward
o Affective value signal > comes from amygdala!
§ However, getting money also elicits positive emotions!
· So without the amygdala, there is one less source of valence information > the reward value system is sufficient for learning stimulus-reward associations
Murray model
Amygdala - OFC → affective information
o first, the amygdala updates the values of expected outcomes, and then the OFC stores them for response selection
o This connection enables the animal to choose among multiple competing cues based on outcome expectancies
IT/PRh – OFC → visual information
o Necessary for implementation of visually guided rules
o E.g. object-reversal learning depends on these structures, not on the amygdala → it was still possible in patients with amygdala lesions
- Amygdala – IT/PRh
+ Amygdala can modulate activity in IT/PRh to enhance sensory processing of significant stimuli and events
+ Attention modulation
→ There is an amygdala-independent route for visually guided rules and another amygdala-dependent route for affective information
Murray and Rushworth
The amygdala's role in decision making is that it guides responses based on emotional information from the environment. It communicates with IT/PRh, which guides responses based on visual information, and both structures project to the OFC, which holds the representation of expected outcome values (based on amygdala and IT/PRh input) and helps us make a decision.
Murray's model ends where Rushworth's model begins (→ OFC)
· Object-reward associations that have been formed in the amygdala and stored in OFC are now compared and used to guide incentivized behavior in mOFC/vmPFC and MCC
- lOFC receives the visual input and the emotional input from IT/PRh and the amygdala, which together create the lOFC output
lOFC: scanning environment and associating stimuli with rewards
mOFC/vmPFC: value comparison based on lOFC information & decision on action goal
MCC: action-value comparison / action-reward association → decides on the action to take to achieve the goal
The OFC communicates with the mOFC/vmPFC to derive value expectations and compare them, and then finally with the ACC to compare action values and come to a decision
Explain the results of the Hampton et al. (2007) experiment for the healthy control participants.
They found signals in the
1. Posterolateral OFC
2. Anterior insula
3. ACC
That were related to behavioral choice – thus deciding whether to maintain the choice or to switch to another.
BUT: reduced signals here for amygdala-lesioned patients
Explanation: the signals produced in the above-mentioned brain regions rely directly on the input provided by the amygdala; thus the connection between the two regions (amygdala + vmPFC) is primarily responsible for the computation of reward expectancy.
In particular, explain how these results on reversal learning in normal participants fit in with what was discussed in the two previous tasks (task 5: cortex and object selection; task 6: error monitoring).
Reversal learning: Requires a subject to flexibly adjust their behavior when the reward-related contingencies that they have previously learned are reversed
Neurological basis:
1. Amygdala sends assigned value signals to vmPFC
2. vmPFC computes expected reward based on this input
3. These signals are used to determine whether to
a. maintain choice
b. switch