Test 2 Flashcards

1
Q

The Law of Effect

A
  • Created by E.L. Thorndike
  • In any given situation, the probability of a behavior occurring is a function of the consequences it has had in that situation in the past
  • Behaviors will be repeated if they lead to satisfying consequences
2
Q

Operant Learning

A

Behavior can be said to operate on the environment

3
Q

Instrumental Learning

A

Behavior is instrumental in producing consequences

4
Q

BF Skinner identified 4 operant procedures:

A
  • Positive Reinforcement
  • Negative Reinforcement
  • Positive Punishment
  • Negative Punishment
5
Q

Reinforcement

A

The procedure of providing consequences for a behavior that increases or maintains the rate of a behavior
- This is ALWAYS true

6
Q

Punishment

A

The procedure of providing consequences for a behavior that reduces the rate of the behavior.
- This is ALWAYS true

7
Q

Positive Reinforcement

A

Behavior -> Presented w/ good stimulus -> Frequency of behavior increases

8
Q

Negative Reinforcement (escape/avoidance)

A

Behavior -> Removal of bad stimulus -> Frequency of behavior increases

9
Q

Positive Punishment

A

Presentation of an aversive stimulus should discourage/reduce the behavior
- Scolding/verbal reprimand
- Driving course for reckless driving

10
Q

Negative Punishment

A

Removal of a desirable stimulus should discourage/reduce the behavior
- Timeout
- Ticket for speeding
- Car getting towed for parking w/out a permit
- Denial of privileges

11
Q

The three-term contingency (S-R-S)

A

Description of an operant procedure that identifies 3 elements: The situation (S) in which the behavior occurs; the particular response (R) or behavior that occurs; and the consequence (S) the behavior has
- Sometimes referred to with the letters A-B-C: Antecedent - Behavior - Consequence
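For readers who think in code, here is a minimal sketch of one operant episode recorded as an S-R-S / A-B-C triple (the episode itself is a made-up example, not from the deck):

```python
# Illustrative sketch only: one operant episode as an antecedent-behavior-consequence triple.
from typing import NamedTuple

class Contingency(NamedTuple):
    antecedent: str   # S / A: the situation in which the behavior occurs
    behavior: str     # R / B: the particular response that occurs
    consequence: str  # S / C: the consequence the behavior has

episode = Contingency(
    antecedent="phone rings",
    behavior="pick up the phone",
    consequence="hear a friend's voice",
)
print(episode)
```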

12
Q

The 2 Stages of Operant Learning

A
  • 1) Acquisition
  • 2) Extinction
13
Q

Acquisition

A

The stage in which a response is acquired or initially learned (response becomes stronger)

14
Q

Extinction

A

(After acquisition has occurred) The stage in which a response declines because the reinforcing stimulus is withheld, even when the response is still performed

15
Q

Spontaneous Recovery

A

The sudden reappearance of a learned response following extinction

16
Q

2 Variables Affecting Reinforcement

A
  • 1) R-S Contingency
  • 2) R-S Contiguity
17
Q

R-S Contingency

A

Where operant learning is concerned, the word contingency means that a particular consequence depends on the performance of a particular behavior
- The greater the degree of contingency between a behavior and a reinforcer, the faster the behavior changes

18
Q

R-S Contiguity

A

The gap between a response and its reinforcing consequences. In general, the shorter this interval is, the faster learning occurs

19
Q

4 Schedules of Reinforcement

A
  • Fixed Interval
  • Variable Interval
  • Fixed Ratio
  • Variable Ratio
20
Q

Fixed Interval

A

An individual is rewarded for responding after a fixed period of time
- Ex) A rat receives food for the first lever press made after 10 seconds have elapsed
- Getting paid (bi-weekly)
- Good grades
- Rent due on (x)
- Christmas bonus every year
- Microwave

21
Q

Variable Interval

A

An individual is rewarded for responding after an unspecified period of time
- Ex) A rat receives food by pressing a lever after 7 seconds, then 12 seconds, then 5. . .
- Waiting for food @ a restaurant
- Speeding tickets
- Waiting for the elevator
- Infant feeding schedule
- Random bonus @ work
- Pop quiz

22
Q

Fixed Ratio

A

A reward is delivered following a specified number of responses
- Continuous reinforcement -> being rewarded for each behavioral response is an FR 1 schedule
- Ex) A rat receives food after it presses a lever 10 times
- Ex) You get a soda each time you put money in a vending machine
- Vending machine
- Training your dog (every x-th time he sits, he gets a treat)
- Doing your chores (every 10)
- Advertising promotion (1,000th customer)
- Punch card at a coffee shop (free drink after 10 punches)
- Apple Watch 10,000 steps
- Free flight after 10,000 miles
- 10 reps at the gym, then a break
- Being paid on commission

23
Q

Variable Ratio

A

A reward is delivered following an unspecified number of responses
- Ex) A rat receives food after it presses a lever 12 times, then 7 times, then 9 . . .
- Gambling
- Claw/slot machine
- Sports betting
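Because each of the four schedules is just a rule for when a response earns a reward, a short Python sketch can make the differences concrete. This is an illustration only; the ratio and interval values are arbitrary, not from the deck.

```python
import random

def fixed_ratio_rewarded(presses, ratio=10):
    """FR: every `ratio`-th response is rewarded (FR 1 = continuous reinforcement)."""
    return presses > 0 and presses % ratio == 0

def variable_ratio_rewarded(mean_ratio=10):
    """VR: each response has a 1-in-`mean_ratio` chance of reward, so the number
    of responses needed varies unpredictably around the mean."""
    return random.random() < 1.0 / mean_ratio

def fixed_interval_rewarded(seconds_since_last_reward, interval=10.0):
    """FI: the first response made after `interval` seconds is rewarded."""
    return seconds_since_last_reward >= interval

def variable_interval_rewarded(seconds_since_last_reward, drawn_interval):
    """VI: the first response made after an unpredictable delay is rewarded;
    `drawn_interval` is re-drawn (e.g. random.uniform(5, 15)) after each reward."""
    return seconds_since_last_reward >= drawn_interval

# Example: a rat on an FR 10 schedule earns food on its 10th, 20th, ... lever press
for press in range(1, 26):
    if fixed_ratio_rewarded(press, ratio=10):
        print(f"Reward after press {press}")
```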

24
Q

Learning

A

An inferred change in the organism’s mental state which results from experience and which influences in a relatively permanent fashion the organism’s potential for subsequent adaptive behavior
- Tarpy and Mayer, 1978

25
Q

Partial Reinforcement Effect (PRE)

A

A behavior maintained on an intermittent schedule is more resistant to extinction than behavior maintained on continuous (1 for 1) reinforcement
- The ‘thinner’ the reinforcement schedule before extinction, the greater the # of responses during extinction
- Additionally, if the partial reinforcement pattern is variable ( VI and VR schedules), then there will be a greater number of responses during extinction than if the pattern is predictable/fixed

26
Q

3 Hypotheses for Explaining PRE

A
  • Discrimination Hypothesis
  • Frustration Hypothesis
  • Response Unit Hypothesis
27
Q

Discrimination Hypothesis

A

It is harder to discriminate between extinction and an intermittent schedule than between extinction and continuous reinforcement

28
Q

Frustration Hypothesis

A

Nonreinforcement of a previously reinforced behavior is frustrating. Frustration is an aversive emotional state, so anything that reduces frustration will be reinforcing
- 2 competing responses during extinction: lever-pressing response (previously reinforced by food) and the lever-avoidance response (Currently reinforced by the reduction of frustration)

29
Q

Response Unit Hypothesis

A

To understand the PRE we must think differently about the response on intermittent reinforcement
- If an animal receives food only after pressing the lever twice, we should not think of this as press-failure, press-reward, but rather as press-press-reward. One response consists of 2 lever presses

30
Q

Primary Reinforcers

A

Naturally/innately reinforcing
- Food, water, sexual stimulation, and relief from heat/cold

31
Q

Secondary Reinforcers (Conditioned Reinforcers)

A

Dependent upon their association with other reinforcers
- Money, praise
- Generally weaker than primary reinforcers
- Often easier to reinforce a behavior immediately with secondary
- Often less disruptive than primary

32
Q

Generalized Reinforcers

A

Secondary reinforcers that have been paired with many different kinds of reinforcers and can be used in a wide variety of situations

33
Q

Premack Principle

A

Different activities have different values for us, and the values can be inferred by observing how often/long we engage in these activities. Reinforcers are considered “high probability” activities
- A higher-probability activity can reinforce a lower-probability one
- Ex) Eating ice cream can reinforce eating potatoes, which can in turn reinforce eating spinach
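A minimal sketch of the Premack logic (the activity names and minutes are made up for illustration): rank activities by how long someone freely engages in them, then use a higher-probability activity to reinforce a lower-probability one.

```python
# Hypothetical free-choice observations: minutes spent on each activity per day
observed_minutes = {"eat ice cream": 30, "eat potatoes": 10, "eat spinach": 2}

# Rank from most- to least-preferred (highest- to lowest-probability activity)
ranked = sorted(observed_minutes, key=observed_minutes.get, reverse=True)

def can_reinforce(reinforcer, target):
    """Premack: True if `reinforcer` is a higher-probability activity than `target`."""
    return observed_minutes[reinforcer] > observed_minutes[target]

print(ranked)                                         # ['eat ice cream', 'eat potatoes', 'eat spinach']
print(can_reinforce("eat ice cream", "eat spinach"))  # True
print(can_reinforce("eat spinach", "eat potatoes"))   # False
```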

34
Q

Response Deprivation Hypothesis

A

If we are deprived of doing something we like, and access to that activity is controlled/restricted, then the activity can be used as a reinforcer

35
Q

Generalization

A

Same response to 2 stimuli based on their similarities

36
Q

Discrimination

A

Different responses to 2 stimuli based on their differences

37
Q

Stimulus Control

A

Presence of a discriminative stimulus reliably affects whether the behavior will occur
- After an individual has been trained to respond (or not) to a specific discriminative stimulus, they may later respond to just that exact stimulus; or they may respond to others that are similar

38
Q

Generalization Gradient

A

Refers to the relative strength of a response to a stimulus based on how similar it is to the original one used during training

39
Q

Excitatory Generalization

A
  • Guttman and Kalish (1956) trained pigeons to peck at a key with varying colors of light behind it. One group was originally trained with a light of wavelength = 600 nanometers. Their responses to a variety of wavelengths were then compared
  • Variability in the wavelength caused dramatic drops in response, meaning the pigeons could discriminate very well
  • Steep gradient = little generalization = more specific
  • Flatter gradient = more generalization = less specific = more stimuli will generate a response
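A toy calculation of a generalization gradient (not Guttman and Kalish's actual data; the Gaussian shape and the 15 nm width are assumptions for illustration) showing how response strength falls off with distance from the training stimulus:

```python
import math

def response_strength(test_nm, trained_nm=600.0, width_nm=15.0):
    """Illustrative gradient: responding falls off with distance from the trained
    wavelength. Smaller width = steeper gradient = less generalization;
    larger width = flatter gradient = more generalization."""
    distance = test_nm - trained_nm
    return math.exp(-(distance ** 2) / (2 * width_nm ** 2))

for wavelength in (560, 580, 600, 620, 640):
    print(wavelength, round(response_strength(wavelength), 2))
```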
40
Q

Inhibitory Generalization

A
  • Weisman and Palmer (1969) trained pigeons to peck at a green key to earn a reward. Pecking at a green key with a VERTICAL white line on it was not rewarded
  • The more horizontal the line, the more responses obtained
41
Q

Factors Affecting Generalization Gradients

A
  • Degree of Training
  • Training - Test Interval
  • Prior Discrimination Training
42
Q

Degree of Training

A

More training with the original stimulus leads to less generalization (the subject becomes increasingly familiar with the specific stimulus)

43
Q

Training - Test Interval

A

A longer time period between training and testing leads to more generalization (perhaps specific details of the original stimulus are forgotten)

44
Q

Prior Discrimination Training

A

Having discrimination training before the generalization test leads to less generalization

45
Q

Response Chain

A

Connected sequences of behavioral responses (typically, only the last response leads to a primary reinforcer)
- A stimulus (S1) leads to a response (R1) and if it is appropriate, (S2) is presented, leading to (R2) . . . and so on
- Training can begin with the first ‘link’ (forward) or last link (backward)

46
Q

Example of Response Chaining (Frisbee Dogs)

A
  • Dog is let off of its leash (S1)
  • Dog runs to marked area on field (R1)
  • Owner raises hand (S2)
  • Dog sits (R2)
  • Owner throws frisbee (S3)
  • Dog catches/retrieves frisbee (R3)
  • . . . and earns a treat (S4, the primary reinforcer)
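The same frisbee-dog chain can be written down as an ordered list of stimulus-response links. A minimal sketch (illustrative only) showing that only the final link ends in the primary reinforcer:

```python
# The frisbee-dog chain as ordered (stimulus, response) links; only the final
# response is followed by the primary reinforcer.
chain = [
    ("S1: leash removed",        "R1: run to the marked area"),
    ("S2: owner raises hand",    "R2: sit"),
    ("S3: owner throws frisbee", "R3: catch and retrieve the frisbee"),
]
primary_reinforcer = "S4: treat"

for stimulus, response in chain:
    print(f"{stimulus} -> {response}")
print(f"-> {primary_reinforcer} (delivered only after the final response)")
```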
47
Q

Example of Something That Affects Our Appetite/Eating

A
  • Choosing a ‘fat’ table (booth and dim light) vs. a skinny table (open, well lit)
  • When you eat in private or at home, you will probably eat more than when you eat in public. When you sit in a dark booth, you may generalize the situation to eating in private or at home b/c the booth feels like your couch.
  • B/c of these generalizations, you'll eat as if you were in private
48
Q

Example of Something That Affects Our Appetite/Eating

A

Having gum in your mouth while grocery shopping curbs cravings for salty or other junk foods b/c of the taste of mint in your mouth. You can't imagine the mint going well with the salt, so the imagined mix of flavors becomes an unconscious taste aversion

49
Q

Example of Something That Affects Our Appetite/Eating

A
  • When you eat at your desk, you eat more and don’t enjoy the food.
  • While you are at your desk, you are positively reinforcing working by rewarding yourself with food regardless of what the food is, which reinforces the habit of eating at your desk
50
Q

Example of Something That Affects Our Appetite/Eating

A

Cereal on the kitchen counter leads to gaining weight. Cereal in plain sight on the counter has high contiguity with feeling hungry; since we instinctively eat when we are hungry, removing the cereal from the counter removes the cue to eat and we eat less

51
Q

How do Environmental Variables Affect Our Eating Behavior?

A
  • Color and size of the plate:
  • Contrasting color can lead to less eating b/c you discriminate between the food and the plate; you can better tell how much food you’re eating. Smaller plates can also discourage overeating; contingency is relevant as eating a lot partly depends on having a large quantity of food right in front of you
52
Q

Disadvantages of Punishment

A
  • Physical punishment is cruel -> potential for trauma
  • The individual getting punished could lash out
  • Rebellious attitudes
  • Punishments can model aggression
  • Intensifies emotion
  • Can affect the relationship (ex. my parents are unfair/don’t like me)
  • Learned fear of / avoidance of the punisher
53
Q

How is Punishment Done Correctly?

A
  • Mutual Respect
  • Strict but not degrading
  • Must occur right away
  • Punishments are consistent
  • Don’t assume/discriminate
  • The intensity of the punishment should match the prevalence of the behavior
  • Communicate the reason why the punishment is being given
54
Q

How Can Extinction be a Form of Punishment?

A

It can be considered a form of punishment b/c an individual can get frustrated by unfulfilled expectations. The only ways to reduce the unwanted emotion are:
- A: Have someone respond to the behavior
- B: Decrease the behavior

55
Q

Learned Helplessness

A
  • Seligman and Maier (1967)
  • Phase 1: (Group 1) = The dogs are restrained, receive shocks, can turn the shocks off. (Group 2) = The dogs are restrained, receive shocks on the same schedule as group 1 but cannot turn off the shocks
  • Phase 2: Both groups tested in a shuttle box, where they had the opportunity to avoid/escape a 60-sec shock. Group 2 had great difficulty learning these associations
  • Inescapable shock trained the dogs to do nothing, to be helpless
  • It was not prior exposure to shock, per se, that produced the helplessness, but the inescapability of the shock
56
Q

Immunization

A
  • Uncontrollable reinforcers do not hinder later learning IF subjects are presented with the controllable rewards before the learned helplessness phase
  • They will persist in later circumstances b/c they have previously learned they have control over their surroundings (‘learned mastery’)
  • Kid knows if he gets his shots, he will get a reward
57
Q

Reversibility

A
  • The condition caused by learned helplessness can be changed
  • Seligman found that if he repeatedly dragged a helpless dog to the shock-free area of the shuttle box, it would eventually escape the shock on its own.
  • This reversibility work suggests that depressed people might be helped if they are made to engage in reinforcing activities
58
Q

Self Control

A

Training oneself to alter one's own behavior (rather than relying on reinforcement from external sources); often involves emphasizing larger, later rewards over smaller, sooner ones

59
Q

Mischel’s Marshmallow Test

A
  • 4-5 year olds are given a marshmallow and asked to wait alone in a room. If they can avoid eating it, they get 2
  • About 33% were able to wait
  • They used distraction tactics
    – Counting or singing
    – Thinking about the more abstract properties of the food
  • In a follow-up, with the children now at 17, those who had done well earlier were now academically successful, and able to cope with stress and setbacks
60
Q

Ways to Exert Self Control

A
  • Physical Restraint
  • Deprivation or Satiation
  • Doing something else
  • Self reinforcement and self punishment
61
Q

Physical Restraint

A

Changing the environment so you are unable to engage in a particular behavior
- Ex) Leslie gave her Facebook password to a friend until she was finished with her comprehensive exams

62
Q

Depriving and Satiating

A

One can alter how reinforcing a particular stimulus is
- Ex) I like to go to Kroger on a full stomach so I don't buy too much bad food

63
Q

Doing Something Else

A

Replacing the bad behavior with an alternative

64
Q
A