L10 - Variations in processing the US
What is the equation for the RWM?
ΔV = αβ(λ - ΣV)
ΔV
Amount of learning, change in the CS-US association
λ (lambda)
Experience of the US presentation
How much learning can be supported by the US
V
How much a subject knows about the relationship between the CS and the US (the associative strength of that CS)
ΣV (sum of V)
Expectation of the US, based on the total associative strength of all CSs present
How much can be predicted from what you know about all of the CSs present
α (alpha)
Salience of the CS
β (beta)
Salience of the US
α and β are
Parameters that regulate the rate of conditioning (otherwise all conditioning would be done in one trial)
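A minimal Python sketch of this update rule (the function name rw_update and the example parameter values below are mine, not from the lecture):

```python
def rw_update(alpha, beta, lam, sum_V):
    """One Rescorla-Wagner trial: delta V = alpha * beta * (lambda - sum V)."""
    return alpha * beta * (lam - sum_V)

# Illustrative call with hypothetical saliences alpha = 0.3 and beta = 0.5,
# partway through training (sum_V = 0.4 has already been learned).
print(rw_update(alpha=0.3, beta=0.5, lam=1.0, sum_V=0.4))  # about 0.09
```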
On the first trial of learning, what values do ΣV and λ always take?
ΣV = 0 and λ = 1
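As a worked substitution with hypothetical saliences α = 0.3 and β = 0.5: on the first trial ΔV = 0.3 × 0.5 × (1 - 0) = 0.15, the largest increment of the whole acquisition curve, because the error term (λ - ΣV) is at its maximum.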
Why bother with a numerical formula?
- If we have a very clearly laid-out model, it allows us to make PREDICTIONS
- Can design experiments and see if empirical findings match predictions of the model
- It is testable
What does "reaches an asymptote" mean?
V gets ever closer to a maximum value (here λ = 1), with each further increment becoming smaller
Across trials, what happens to associative strength and to the increments in V?
Associative strength (V) increases, but the increments in V get ever smaller
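A short Python sketch of acquisition across trials, again with purely illustrative parameter values, showing V climbing toward the asymptote while ΔV shrinks:

```python
alpha, beta, lam = 0.3, 0.5, 1.0        # hypothetical saliences; lambda = 1 (US present)
V = 0.0
for trial in range(1, 11):
    delta_V = alpha * beta * (lam - V)  # error term shrinks as V approaches lambda
    V += delta_V
    print(f"trial {trial:2d}: delta V = {delta_V:.3f}, V = {V:.3f}")
# V approaches 1 (the asymptote) while each successive increment gets smaller.
```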
Overshadowing with RWM
Only a limited amount of learning is possible, and the two CSs share V
The CS with the larger α takes a larger proportion of λ per trial, leaving less for the other CS
If one is more salient than the other, it will take more of “the pie”
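A sketch of overshadowing in the same style: two CSs, A and B, are always reinforced in compound, and A is given a larger (hypothetical) α than B:

```python
alpha_A, alpha_B, beta, lam = 0.5, 0.2, 0.5, 1.0   # A more salient than B (made-up values)
V_A = V_B = 0.0
for trial in range(30):
    error = lam - (V_A + V_B)        # one shared "pie" of learning to divide
    V_A += alpha_A * beta * error    # the larger alpha takes the bigger slice each trial
    V_B += alpha_B * beta * error
print(f"V_A = {V_A:.2f}, V_B = {V_B:.2f}")  # about 0.71 vs 0.29: A overshadows B
```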
Blocking with RWM
Pre-trained CS starts with high V
ΣV is already large before CS2 is added
The change in V is small on each trial with the added CS
The pre-trained CS has already taken a large amount of the pie, so less available to be shared between it and the new CS
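A sketch of blocking under the same hypothetical parameters: CS1 is pre-trained alone in phase 1, then CS1 and CS2 are reinforced together in phase 2:

```python
alpha1 = alpha2 = 0.3                  # equal, hypothetical saliences
beta, lam = 0.5, 1.0
V1, V2 = 0.0, 0.0

for trial in range(20):                # Phase 1: CS1 alone, V1 climbs toward lambda
    V1 += alpha1 * beta * (lam - V1)

for trial in range(20):                # Phase 2: CS1 + CS2 in compound
    error = lam - (V1 + V2)            # little error is left after pre-training
    V1 += alpha1 * beta * error
    V2 += alpha2 * beta * error
print(f"V1 = {V1:.2f}, V2 = {V2:.2f}") # V2 stays near 0: conditioning to CS2 is blocked
```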
Relative signal validity with RWM
Extra conditioning trials with L alone mean that the V of L increases faster and blocks conditioning to T
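A sketch of this design under the same assumptions: reinforced trials with L alone are interleaved with reinforced L+T compound trials (parameter values are hypothetical):

```python
alpha_L = alpha_T = 0.3                    # equal, hypothetical saliences
beta, lam = 0.5, 1.0
V_L, V_T = 0.0, 0.0
for trial in range(40):
    if trial % 2 == 0:                     # extra reinforced trial with L alone
        V_L += alpha_L * beta * (lam - V_L)
    else:                                  # reinforced L + T compound trial
        error = lam - (V_L + V_T)
        V_L += alpha_L * beta * error
        V_T += alpha_T * beta * error
print(f"V_L = {V_L:.2f}, V_T = {V_T:.2f}") # V_L ends high while V_T stays low
```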