Classic Theories - Week 5 Flashcards
Contiguity and Contingency
- Contiguity: stimuli occurring close together in time become associated (Pavlovian)
- Contingency: the outcome depends on the response; basis of reinforcement learning (Skinner)
Thorndike
- Cat puzzle box studies (trial and error)
- Animals that try multiple responses are more successful
- Learning to respond when it pays off and to withhold responses when it is pointless (Law of Effect)
Guthrie
- Reinforcement teaches the animal to reproduce the behaviour
- Threshold: gradually exposing the subject to the feared stimulus at low intensity
- Stop action: learning occurs in a single trial; generally criticised as too simple
Tolman
- Animals have motivational states
- Animals learn spatial representations (cognitive maps) of the environment
- Learning can occur without reinforcement
Latent Learning
- Learning that occurs without reward and only shows in performance once reward is introduced
- Performance improves faster when a reward is present
Learning vs Performance
- Learning is the formation of an S-R connection
- Performance is the activation of that response by reward
Locus of Control
- Expecting a reward if one behaves in a certain way
- Internal locus: believing one is in control of the reward
Atkinson’s Expectancy-Value Theory
- We do things when we expect to succeed and value the outcome
- e.g. we don’t apply for jobs we don’t believe we will get
Hull’s Theory
- Learning occurs when animals associate S and R (contiguity)
- Responses are performed only to reduce drives (e.g. hunger)
Habit Family Hierarchy
- Convergence: trial and error to find the best route
- Divergence: if a response is unsuccessful, other responses in the hierarchy are tried
Gradient of Reinforcement
Responses closer in time to the reinforcer are strengthened more; in a chain, each behaviour cues the next to be reinforced
Fractional Anticipatory Goal Responses
An internal self-cueing process
Operant Conditioning
Reinforcement and Punishment
- Contingency is fundamental; contiguity alone isn’t enough
- P(O|R) = probability of the outcome given a response
- Learning requires P(O|R) > P(O|no response)
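The contingency inequality can be checked with a minimal sketch; the trial tallies below (80/100 and 10/100) are hypothetical numbers invented for illustration:

```python
# Contingency check: an outcome is contingent on a response when it is
# more likely after a response than without one.

def conditional_prob(outcome_count, total):
    """Estimate a conditional probability from trial counts."""
    return outcome_count / total

# Hypothetical tallies: food followed 80 of 100 responses,
# but also appeared in 10 of 100 no-response intervals.
p_o_given_r = conditional_prob(80, 100)     # P(O|R) = 0.8
p_o_given_no_r = conditional_prob(10, 100)  # P(O|no R) = 0.1

contingent = p_o_given_r > p_o_given_no_r
print(contingent)  # True: positive contingency, so learning is expected
```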
Discrimination
- Learning when to respond
- Responding to some stimuli and not to others
Generalisation
Responding to stimuli that are similar to the trained stimulus
Extinction
When a behaviour is no longer reinforced (or punished), the association weakens
Spontaneous Recovery
Extinguished CRs can return after a rest period (e.g. urges to smoke or gamble)
Response Chaining
Simple behaviours are linked together to form a complex behaviour
Fading Techniques
Gradually removing prompts or cues while the behaviour is maintained
Fixed vs Variable
Fixed: Set schedule
Variable: Random schedule
Ratio vs Interval
Ratio: Number of responses
Interval: amount of time
Schedules
FR: specific number of responses
VR: unpredictable number of responses
FI: specific amount of time
VI: unpredictable amount of time
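The four schedule rules can be sketched as simple checks; the thresholds used here (5 responses, 30 seconds) are arbitrary illustration values, not values from the notes:

```python
import random

def fixed_ratio(responses, n=5):
    """FR: reinforce after every n-th response."""
    return responses > 0 and responses % n == 0

def variable_ratio(rng, mean_n=5):
    """VR: each response is reinforced with probability 1/mean_n."""
    return rng.random() < 1 / mean_n

def fixed_interval(seconds_since_last, interval=30):
    """FI: the first response after the interval elapses is reinforced."""
    return seconds_since_last >= interval

def variable_interval(seconds_since_last, rng, mean_interval=30):
    """VI: the required interval is redrawn around the mean each time."""
    return seconds_since_last >= rng.uniform(0, 2 * mean_interval)

rng = random.Random(0)
print(fixed_ratio(10))       # True: 10 is a multiple of 5
print(fixed_interval(12))    # False: 30 s have not yet passed
print(variable_ratio(rng))   # stochastic: True with probability 1/5
```

The variable schedules are the unpredictable ones, which is why they tend to produce the most persistent responding.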
Differential reinforcement of low rates (DRL)
- Rewards delaying or spacing out responses
- Marshmallow Test
Differential reinforcement of high rates (DRH)
- Rewards a high rate of responding
- Kicking goals, lifting heavier weights
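The two differential schedules above can be sketched as opposite decision rules; the 10-second gap and 5-response threshold are made-up illustration values:

```python
def drl_reinforce(seconds_since_last_response, min_gap=10):
    """DRL: reinforce only if the subject waited long enough between responses."""
    return seconds_since_last_response >= min_gap

def drh_reinforce(responses_in_window, min_rate=5):
    """DRH: reinforce only a sufficiently high burst of responding."""
    return responses_in_window >= min_rate

print(drl_reinforce(12))  # True: waited 12 s, meets the 10 s minimum gap
print(drh_reinforce(3))   # False: only 3 responses in the window
```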