topic 4 Flashcards

1
Q

Thorndike’s law of effect

A

if a response, in the presence of a stimulus, is followed by a satisfying state of affairs, the association between stimulus and response is strengthened; if it is followed by an unsatisfying state of affairs, the association is weakened

2
Q

operant learning

A

a change in behavior as a function of the consequence that follows it.

3
Q

escape behavior

A

when operant behavior increases because it removes or terminates an ongoing event or stimulus. ex. pressing a lever to stop an ongoing electric shock

4
Q

avoidance behavior

A

when operant behavior increases because it prevents the onset of an event or stimulus. ex. pressing a lever to prevent an electric shock

5
Q

discrete trial procedures

A

the instrumental response is produced once per trial, and each trial ends with removal of the animal from the apparatus. ex. mice in a maze (how long it takes them to reach the end = 1 trial), then there is a “reset”

6
Q

free operant procedures

A

animal remains in apparatus and can make many responses. ex. operant boxes

7
Q

cumulative record

A

a graph of total responses accumulated over time; the slope of the record shows the rate of responding. based on the old cumulative recorder device.
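The record itself is just a running count of responses plotted against time: steep segments mean fast responding, flat segments mean pauses. A minimal Python sketch (the timestamps below are hypothetical, for illustration only):

```python
# Build a cumulative record from response timestamps: at each sampled
# moment, count how many responses have occurred so far.
# A steep segment = fast responding; a flat segment = a pause.

def cumulative_record(response_times, sample_times):
    """Cumulative number of responses at each sampled time."""
    return [sum(1 for t in response_times if t <= s) for s in sample_times]

responses = [1.0, 1.5, 2.0, 6.0]   # hypothetical times (s) of lever presses
samples = [0, 1, 2, 3, 4, 5, 6]    # times at which we read the record
print(cumulative_record(responses, samples))
# -> [0, 1, 3, 3, 3, 3, 4]: a burst of responding, a pause, one more press
```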

8
Q

unconditional (primary) reinforcer

A

a reinforcer that acquired its properties as a function of the species’ evolutionary history. ex. food, water, sex, sleep

9
Q

conditional (secondary) reinforcer

A

otherwise neutral stimuli or events that have the ability to reinforce due to a contingent relationship with other, typically unconditional, reinforcers.

10
Q

variables affecting reinforcement

A

immediacy, specific reinforcer used, task characteristics, contingency, contiguity

11
Q

immediacy

A

a stimulus is more effective as a reinforcer when it is delivered immediately after the behavior

12
Q

contingency

A

a stimulus is more effective as a reinforcer when it is delivered contingent on the behavior.

13
Q

contiguity

A

nearness of events in time (temporal contiguity) or space (spatial contiguity). high contiguity is often referred to as pairing; less contiguity between the operant response and the reinforcer diminishes the effectiveness of the reinforcer.

14
Q

motivating operations

A

umbrella term covering establishing and abolishing operations. establishing operations make a stimulus more effective as a reinforcer at a particular time; abolishing operations make it less effective.

15
Q

abolishing operations

A

make a stimulus less potent as a reinforcer at a particular time

16
Q

reinforcer magnitude

A

generally, a more intense stimulus is a more effective reinforcer. the relation between reinforcer size and effectiveness is not linear.

17
Q

schedule of reinforcement

A

a rule describing the delivery of reinforcement. different schedules produce unique schedule effects (a particular pattern and rate of behavior over time). over the long term, effects are very predictable. occurs in numerous species

18
Q

continuous reinforcement schedule (CRF)

A

behavior is reinforced each time it occurs. rate of behavior increases rapidly. useful when shaping a new behavior. rare in the natural world

19
Q

4 types of intermittent reinforcement schedules

A

Fixed ratio schedule
variable ratio schedule
fixed interval schedule
variable interval schedule

20
Q

Fixed ratio schedule (FR)

A

behavior is reinforced after a fixed number of responses. generates a post-reinforcement pause

21
Q

post reinforcement pause (PRP)

A

a period of inactivity or no responding that occurs immediately after a reinforcer is delivered. pausing typically increases with ratio size and reinforcer magnitude.

22
Q

variable ratio schedule (VR)

A

the number of responses needed varies each time; the ratio requirement varies around an average. PRPs are rare and very short. produces higher rates than a comparable fixed ratio schedule

23
Q

random ratio

A

the ratio requirement is controlled by a random number generator. produces similarly high rates of responding. the type of ratio schedule used in casino games and video games

24
Q

progressive ratio

A

ratio requirements move from small to large. PRPs increase with ratio size.

25
Q

fixed interval schedule (FI)

A

behavior is reinforced the first time it occurs after a given period of time has elapsed. produces PRPs. responding increases gradually as the interval elapses. uncommon in the natural environment

26
Q

variable interval schedule (VI)

A

the time that must elapse before a response can be reinforced varies each time; the interval varies around an average. PRPs are rare and short. produces steady rates of responding, though not as high as a variable ratio schedule. common in natural environments.
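The four intermittent schedules above are, in effect, rules for deciding whether a given response earns a reinforcer. A minimal Python sketch, assuming the caller tracks responses and time since the last reinforcer (parameter values are illustrative, not from the cards):

```python
# Each function answers one question: is THIS response reinforced?
# The caller tracks state (responses/seconds since the last reinforcer)
# and, for the variable schedules, draws a new requirement after each
# reinforcer (e.g. via random.randint or random.uniform around the average).

def fixed_ratio(responses_since_last, n=10):
    # FR n: every n-th response is reinforced
    return responses_since_last >= n

def variable_ratio(responses_since_last, required):
    # VR: response requirement varies around an average
    return responses_since_last >= required

def fixed_interval(seconds_since_last, interval=30.0):
    # FI: the first response after the interval elapses is reinforced
    return seconds_since_last >= interval

def variable_interval(seconds_since_last, required):
    # VI: interval requirement varies around an average
    return seconds_since_last >= required
```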

27
Q

competing contingencies

A

the individual has multiple options that may conflict or compete for their attention or actions. ex. watch YouTube or study

28
Q

premack principle

A

in nature, different behaviours have different probabilities of occurring. e.g. eating is a high-probability behaviour and lever pressing is a low-probability behaviour. access to the high-probability behaviour, made contingent on the low-probability one, reinforces the low-probability behaviour:
low prob -> high prob = reinforces low prob
high prob -> low prob = does not reinforce high prob

29
Q

testing Premack principle

A
  1. establish baseline responding for different behaviours
  2. instrumental conditioning procedure w/ both orders:
    L -> H (low-probability behaviour earns access to the high-probability behaviour)
    H -> L (high-probability behaviour earns access to the low-probability behaviour)
30
Q

controlling stimulus (S)

A

an antecedent. a stimulus that changes the probability of an operant behaviour

31
Q

discriminative stimulus (S^D)

A

aka occasion setter. a type of antecedent. a stimulus or event that precedes an operant and sets the occasion for its reinforcement.

32
Q

extinction stimulus (S^∆)

A

a type of antecedent. a stimulus or event that precedes an operant and sets the occasion for non-reinforcement

33
Q

establishing operation

A

makes a stimulus more effective as a reinforcer at a particular time. e.g. deprivation (food is a more effective reinforcer if the individual is deprived of food)

34
Q

discriminative stimulus

A

a cue or signal that indicates that a specific behavior will be reinforced

35
Q

abolishing operation

A

makes a stimulus less potent as a reinforcer at a particular time. e.g. satiation (food is a less effective reinforcer if the individual is full)

36
Q

discrimination

A

occurs when the presence (or absence) of a stimulus is the occasion on which a response will be followed by reinforcement. e.g., key pecking is only reinforced when the green light is on -> the green light IS the occasion on which pecking is reinforced. eventually the bird will only peck when the green light is on.

37
Q

stimulus control

A

a change in operant behavior that occurs when either an S^D or an S^∆ is presented.
ex. “the light color is functioning as a discriminative stimulus” or “the light has acquired stimulus control over the pigeon’s behaviour”

38
Q

discriminative Index

A

a measure of the degree of stimulus control exerted by an S^D relative to an S^∆.
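The cards don’t give a formula; one common textbook formulation (an assumption here, not from the deck) is responses during S^D divided by total responses across S^D and S^∆, so 1.0 means perfect discrimination and 0.5 means none. A quick sketch:

```python
# Discrimination index: proportion of all responses emitted during S-D.
# 1.0 = responding only during S-D (strong stimulus control); 0.5 = none.
# This particular formula is a common convention, not taken from the cards.

def discrimination_index(sd_responses, sdelta_responses):
    total = sd_responses + sdelta_responses
    return sd_responses / total if total else 0.5

print(discrimination_index(90, 10))   # -> 0.9 (strong stimulus control)
print(discrimination_index(50, 50))   # -> 0.5 (no stimulus control)
```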

39
Q

generalization

A

less precise stimulus control compared to discrimination. it is obtained by training across a wide array of settings/stimuli

40
Q

stimulus generalization

A
  1. process where, once a CS has been established (it produces a CR reliably), similar stimuli may also produce a CR
    or
  2. process where, once an operant response occurs to one discriminative stimulus, it also occurs to other similar stimuli
41
Q

stimulus discrimination

A
  1. process where we exhibit a less pronounced CR to CSs that differ from the original CS -> CR occurs to the original CS but not to another stimulus
    or
  2. process where less responding occurs to stimuli that are different from the original trained stimulus -> operant response to the trained stimulus but not others
42
Q

discrimination training

A

S^D -> B -> C
ex. large dog -> child says dog -> praise
pony -> child says dog -> no praise
outcome= child says dog when they see a large dog but not a pony
~discrimination between categories

43
Q

generalization training

A

S^D -> B -> C
ex. any sized dog -> child says dog -> praise
outcome= child will say dog when they see any dog
~ generalize within category