Neurobiology of decision making Flashcards

1
Q

decisions in everyday life

A
  • Important part of life
  • Deciding between options
  • Diff choices of same thing (e.g. choosing between brands)
  • Some bigger - change course of life
2
Q

mem –> decisions –> future actions

fellows (2018)

A
  • Decisions based on experiences from mem - base on what you have experienced in the past, what is easier
  • Make predictions about what experiences are going to be - predict consequences of decisions (whether it will be very similar or very diff)
3
Q

prediction-choice-outcome loop

fellows (2018)

A
  1. goal
  2. prediction of outcome
  3. decision & appropriate actions
  4. observe action outcome
  5. outcome subjected to internal monitoring processes
  6. prediction error used to update mem
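The loop above can be sketched as a simple update rule. This is a minimal illustration, not Fellows' (2018) model: the value table and learning rate `alpha` are illustrative assumptions.

```python
# Minimal sketch of the prediction-choice-outcome loop.
# The value table and learning rate are illustrative assumptions.
values = {"option_a": 0.5, "option_b": 0.5}  # predicted outcomes from mem
alpha = 0.1  # learning rate for mem updates

def decide_and_learn(observed):
    # steps 1-3: predict outcomes, choose the option with highest prediction
    choice = max(values, key=values.get)
    # steps 4-5: observe the outcome and compare it with the prediction
    prediction_error = observed - values[choice]
    # step 6: prediction error used to update mem
    values[choice] += alpha * prediction_error
    return choice, prediction_error

choice, pe = decide_and_learn(observed=1.0)
```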
4
Q

general features of decision making

A
  • avoid harm
  • minimise: costs of time, effort, missed opportunities
  • maximise reward
5
Q

factors to consider before making a decision

A
  • Difficulty of action
  • Probability of success and failure
  • How valuable is the possible reward at this moment (context)
  • Missed opportunities
6
Q

biases in DM

A
  • Stick with default - choosing what you know
  • Choosing certain gains over gambles
  • Choosing gambles over certain losses
  • Temporal discounting: choosing immediate rewards over future rewards unless benefits are made explicit
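Temporal discounting is often modelled with a hyperbolic discount function; the discount rate `k` below is an illustrative assumption, not a value from the cards.

```python
def discounted_value(reward, delay, k=0.1):
    # Hyperbolic discounting: subjective value falls as delay grows.
    # k (discount rate) is an illustrative assumption.
    return reward / (1 + k * delay)

# An immediate small reward can outweigh a delayed larger one:
now = discounted_value(10, delay=0)     # 10.0
later = discounted_value(15, delay=10)  # 7.5
```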
7
Q

different levels of DM

A
  1. simple perceptual decisions
  2. more complex decision
8
Q

random-dot motion task

perceptual decision task, hanks & summerfield (2017)

A
  • Monkey maintains fixation
  • Random dots presented
  • Certain % of dots moves coherently, other dots move randomly
  • More randomly moving = harder
  • When monkey detects main motion direction, move eye to main direction of moving dots
  • Noisy sensory signal converted into discrete motor act
9
Q

accumulating ev in perceptual decisions

A
  • Some neurons tuned to encode movements to the right, others encode movements to the left
  • Motion detectors fire whenever detect movement in certain direction
  • Accumulated until one of these reaches decision threshold
10
Q

3 stages of perceptual DM

A
  1. Detection of sensory evidence: what are the alternatives that can be detected
  2. Integration of ev over time –> because evidence is noisy
  3. Checking if threshold has been reached
    * –> if so, elicit appropriate action
    * –> if not, accumulate more ev
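The three stages above can be sketched as a noisy accumulator that integrates evidence until a bound is crossed (a drift-diffusion-style sketch; the drift, noise, and threshold values are illustrative assumptions).

```python
import random

def accumulate_to_threshold(drift=0.1, noise=1.0, threshold=10.0, seed=0):
    # Stage 1: each time step delivers a noisy sensory sample of evidence.
    # Stage 2: samples are integrated over time because single samples are noisy.
    # Stage 3: the loop checks whether a decision threshold has been reached;
    #          if not, more evidence is accumulated.
    rng = random.Random(seed)
    evidence, t = 0.0, 0
    while abs(evidence) < threshold:
        evidence += drift + rng.gauss(0, noise)
        t += 1
    # positive bound -> one action (e.g. "right"), negative -> the other
    return ("right" if evidence > 0 else "left"), t

action, steps = accumulate_to_threshold()
```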
11
Q

where does ev accumulation take place

simple perceptual decisions

A
  • Brain areas responsible for encoding the relevant feature e.g. area MT/V5 with motion in DM
  • Parietal & dorsal prefrontal cortex
  • Recent ev: sensorimotor areas representing possible actions, accumulate ev as well
12
Q

homogeneous model of ev accumulation

A

all relevant neurons active at same time

13
Q

models of ev accumulation

A
  • homogeneous
  • heterogeneous
14
Q

heterogeneous model of ev accumulation

A
  • Collectively encoded
  • Early responding neurons active quickly & pass on activity to other neurons
  • wave of activity in the network
  • Accumulated ev in network reflects when detection threshold will be reached
  • mem for accumulated ev can both be flexible & durable
15
Q

mental maps in DM

A
  • Decision making processes rely on internal models of the current task
  • Experiences need to be organised in internal models or mental maps
  • Internal model helps us to predict the diff outcomes of the available options based on our experiences
16
Q

mental maps in DM historically

Tolman’s rat spatial maze

A
  • 2 points (A & B) spatially close but barrier between
  • never experienced going straight from A –> B
  • build mental map
  • barrier removed: quicker route, rats quickly adapt
17
Q

principle being transferred to non-spatial tasks

A
  • Experience gets embedded in a cog map of how things work & relate to each other
  • Will guide later decisions

e.g. psychopy coding might be complicated solution at first but then find a shortcut with more experience using the app

18
Q

problems being described as a series of decisions

A
  • Sequence of steps could be represented as a mental map
  • Some sequences will be more useful than others
  • Some steps can be exchanged without affecting the overall result
  • Assumed that your mental map of the overall task will guide you through this series of decisions
  • Need to activate your memory content to make these decisions
19
Q

mental maps in DM: hippocampus

A
  • retrieval of LTM content
  • New experiences require neuronal activity in hippocampus to be stored in LTM
  • When need to make decisions that are related to these experiences you will re-activate these mems
  • DM can bias what is being stored in mem
20
Q

strategy switch in DM

schuck et al. (2015)

A
  • ppts presented with red/green patterns of squares inside frame
  • task: identify arrangement of pattern within frame
  • instructed S-R mapping: location arrangement responses (task)
  • learned S-R mapping: red always associated with 1 response, green other response. ppts learn this & form a shortcut in cog map based on learned association
  • activity in MFC precedes the strategy shift to the learned S-R mapping

fMRI measuring MFC

21
Q

representation of hidden states

kaplan et al. (2017)

A
  • more abstract cog map representing the decision steps of the original problem
  • diff cog maps for diff tasks
  • state space representation: state space, hidden space, mental exploration
  • requires interactions between brain areas involved in DM & LTM

medial/orbito-frontal cortex & hippocampus seem to be involved

22
Q

state space

state space representation

A
  • cog map for a given task
  • grid-like representation
  • help structure new experience
  • represented in orbito- & medial frontal cortex
23
Q

hidden space

state space representation

A
  • our position within the current task
  • the point that reflects which decision we are currently facing
24
Q

mental exploration

state space representation

A
  • eval of potential outcomes for diff choices
  • before we make a decision
25
Q

areas associated with subjective value of decision options

fellows (2018)

A
  • subcortical areas (thalamus & striatum) - subjective valuation
  • OFC - goals
  • vmPFC - track expected value with current goals
26
Q

frontal lobe lesions & DM

kalat (2015)

A
  • patient studies: lesions disrupt value-based DM e.g. inconsistent prefs
  • trust game
27
Q

trust game

kalat (2015)

A
  • A decides how much money to give B
  • amount given x3
  • B then decides how much to give back to A
  • lesions in VMPFC when A: give less to B
  • lesions in VMPFC when B: keep nearly all money instead of returning
28
Q

lateral prefrontal cortex & DM

A
  • seems not involved in value-based choices
  • but active in many decision paradigms
29
Q

frontal pole & DM

most anterior area of PFC

A

exploratory beh

30
Q

carland et al. (2019)

A

reward rate maximisation & urgency signal

31
Q

aim of inds

reward rate maximisation

A

maximise subjective reward = reward rate maximisation

32
Q

factors influencing reward rate maximisation

A
  • minimise: costs of time, effort
  • maximise reward
33
Q

choosing an activity

reward rate maximisation

A
  • When engaging in 1 activity, can't take up alternative activities
  • Choosing 1 option might often imply rejecting several others
  • Time spent to receive reward influences subjective value
  • Most adaptive beh: maximise overall reward rate
  • If only 1 option, decisions are easy
  • More options make it more difficult to decide: longer deliberation time
34
Q

factors in reward rate formula

A
  • utility
  • success probability
  • cost
  • deliberation time
  • handling time
  • ITI
35
Q

utility

reward rate formula

A
  • reward sensitivity/value
  • payoff of an outcome
36
Q

success probability

A
  • risk sensitivity
  • how likely is it that i will actually get the intended outcome?
37
Q

cost

reward rate formula

A
  • subjective effort
  • physical & cog
38
Q

temporal discounting factor

reward rate formula

A
  • time to be invested
  • deliberation time = how long to make decision
  • handling time = how long the action takes before the reward
  • ITI = how long before having another go
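Combining the factors from the cards above, reward rate is commonly written as expected payoff minus cost, divided by the total time one decision cycle occupies. The exact form below is a standard sketch, not a formula quoted from the lecture.

```python
def reward_rate(utility, p_success, cost, t_deliberation, t_handling, iti):
    # Expected gain per unit time: probability-weighted payoff minus cost,
    # divided by deliberation time + handling time + inter-trial interval.
    return (p_success * utility - cost) / (t_deliberation + t_handling + iti)

# Shorter deliberation raises the rate for the same expected payoff:
slow = reward_rate(10, 0.8, 1, t_deliberation=4, t_handling=3, iti=3)  # 0.7
fast = reward_rate(10, 0.8, 1, t_deliberation=2, t_handling=3, iti=3)  # 0.875
```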
39
Q

urgency signal

A
  • mechanism that can push the ev accumulation over threshold to take action
  • reduces deliberation time - action can be started earlier
  • even if info isn’t 100% clear yet
  • BUT can lead to some incorrect decisions
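One common way to sketch the urgency signal is as a gain that grows with deliberation time and scales the accumulated evidence, so the threshold is crossed earlier; the linear gain used here is an illustrative assumption.

```python
def time_to_threshold(evidence_per_step, threshold, urgency_slope=0.0):
    # Urgency grows over deliberation time and multiplies accumulated
    # evidence, pushing it over the decision threshold sooner.
    total, t = 0.0, 0
    while True:
        t += 1
        total += evidence_per_step
        urgency = 1.0 + urgency_slope * t
        if total * urgency >= threshold:
            return t

no_urgency = time_to_threshold(1.0, threshold=10)                       # 10 steps
with_urgency = time_to_threshold(1.0, threshold=10, urgency_slope=0.2)  # 5 steps
```

The trade-off the card describes falls out directly: a steeper urgency slope shortens deliberation time, but the decision is committed on less accumulated evidence.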
40
Q

assumptions about urgency signal

A
  • mechanism that serves to maximise reward rate
  • controlled by projections from basal ganglia –> cog & sensorimotor areas
  • grows during deliberation time & helps to optimise it
  • modulated by task context (simple decisions with minor consequences differ from more important ones)
  • differs between inds