Rescorla & Wagner’s Model Flashcards
what did simulations of the Rescorla-Wagner Model (RWM) show?
- the RWM could acquire (λ = 1) and lose (λ = 0) associative strength.
- In each case learning realistically levelled off at λ.
- The computations captured realistic learning curves for both acquisition and extinction of associative learning.
what is Rescorla and Wagner's model?
The Rescorla-Wagner equation. It specifies that the amount of learning (the change ∆ in the predictive value V of a stimulus) depends on the amount of surprise (the difference between what actually happens, λ, and what you expect, ΣV).
Rescorla and Wagner equation
∆V = αβ(λ − ΣV)
what is ∆V?
the change in associative strength on the current trial
what is α?
a learning-rate parameter, ‘alpha’, reflecting the salience of the CS
what is β?
a learning-rate parameter, ‘beta’, reflecting the intensity of the UCS
what is λ ?
the asymptote, ‘lambda’: the maximum associative strength the UCS can support
what is ΣV?
the total associative strength, ‘sigma V’, summed over all stimuli present on the trial. It is a running total that grows during conditioning until it matches λ.
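The trial-by-trial update can be sketched in a few lines of code. This is a minimal illustration, not from the source; the parameter values (α = 0.3, β = 1.0) and trial counts are arbitrary choices for the demo.

```python
def rescorla_wagner(n_trials, lam, alpha=0.3, beta=1.0, v0=0.0):
    """Apply ∆V = αβ(λ − ΣV) on each trial; return V after every trial.

    For a single CS, ΣV is just that stimulus's own strength V.
    """
    v = v0
    history = []
    for _ in range(n_trials):
        delta_v = alpha * beta * (lam - v)  # surprise-driven learning
        v += delta_v
        history.append(v)
    return history

# Acquisition: CS paired with UCS (λ = 1), V climbs toward the asymptote
acquisition = rescorla_wagner(20, lam=1.0)

# Extinction: CS presented alone (λ = 0), V decays back toward zero
extinction = rescorla_wagner(20, lam=0.0, v0=acquisition[-1])
```

Because each update is a fixed fraction of the remaining surprise, both curves are negatively accelerated: learning is fast early on and levels off as ΣV approaches λ.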
what can RWM explain?
- blocking
- overshadowing
- CS-UCS contingency effects
what does RWM fail to explain?
- downshift unblocking
- one-trial overshadowing
- latent inhibition