Unit Three Flashcards

1
Q

Shaping definition

A

A method for generating new behavior in which responses that are increasingly like the goal behavior are successively reinforced

2
Q

Thorndike studied animal learning as a way of measuring what

A

Animal intelligence

3
Q

To reinforce a behavior is to provide what for the behavior to increase its what

A

Consequences, strength

4
Q

Positive and negative reinforcement have what in common

A

Both strengthen behavior

5
Q

In the discrete trial procedure what ends the trial

A

The behavior

6
Q

How did some operant conditioning occur with Albert and the white rat

A

Albert reached for the rat just before the loud noise occurred, so the reaching behavior was followed by a consequence; this means some operant conditioning occurred.

7
Q

Weil wanted to separate the effects of what of reinforcement and what of reinforcers

A

Delay and number

8
Q

In general, the more you increase the amount of a reinforcer, the ____ benefit you get from the increase

A

Less

9
Q

According to the Premack principle, ____ behavior reinforces ____ behavior

A

High-probability (likely) behavior reinforces low-probability (unlikely) behavior

10
Q

According to the response deprivation theory school children are eager to move at recess because they have been deprived of the opportunity to

A

Move about / exercise

11
Q

The two processes in two-process theory are ____ and ____

A

Pavlovian conditioning and operant learning

12
Q

Thorndike's chick in a maze

A

Thorndike put a chick in a maze; if it followed the correct route it would find food and other chicks. With succeeding trials the chick became more efficient and eventually took the appropriate route right away.

13
Q

Thorndike's hungry cat in a puzzle box

A

Thorndike put a hungry cat in a box with food out of reach; the cat needed to pull a wire loop to get the food. Eventually it would accidentally pull the loop. After that, the ineffective movements decreased dramatically.

14
Q

A steep learning curve shows what

A

Rapid learning and an easy task

15
Q

Law of effect definition

A

Behavior is a function of its consequences, as defined by Thorndike

16
Q

Connectionism

A

Thorndike speculated that reinforcement strengthened bonds or connections between neurons, a view that became known as connectionism

17
Q

Operant learning definition

A

Behavior is strengthened by its consequences

18
Q

Instrumental learning

A

Another term for operant learning, used because the behavior is typically instrumental in producing these consequences

19
Q

Two differences between operant and Pavlovian

A

Operant behavior is not reflexive like Pavlovian behavior and is often complex. In operant learning the organism acts on the environment and changes it, and the change then strengthens or weakens the behavior.
Pavlovian conditioning is more passive, whereas operant learning is active.

20
Q

Contingency squares: how many are there

A

Four: positive and negative reinforcement, and positive and negative punishment.
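One common way to lay out the four squares:
Stimulus added, behavior increases = positive reinforcement
Stimulus removed, behavior increases = negative reinforcement
Stimulus added, behavior decreases = positive punishment
Stimulus removed, behavior decreases = negative punishment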

21
Q

Reinforcement definition

A

The procedure of providing consequences for a behavior that increase or maintain the strength of the behavior

22
Q

Three characteristics of reinforcement

A

The behavior must have a consequence
The behavior must INCREASE in strength
The increase in strength must be due to the consequence

23
Q

Positive reinforcement

Positive reinforcer

A

A behavior is followed by the appearance of, or an increase in the intensity of, a stimulus. This stimulus, called a positive reinforcer, is ordinarily something the organism seeks out.
The consequence then strengthens the behavior.

24
Q

Reward training

A

Sometimes used to refer to positive reinforcement, because rewards are often used as positive reinforcers. However, aversive stimuli can sometimes serve as positive reinforcers (e.g., electric shock).

25
Q

Negative reinforcement and negative reinforcer

A

A behavior is strengthened by the removal of, or a decrease in the intensity of, a stimulus. That stimulus is the negative reinforcer, something the organism usually tries to escape or avoid.

26
Q

Escape training

A

Another name for negative reinforcement, because the organism is escaping an aversive stimulus

27
Q

Discrete trial

Measures

A

What Thorndike used. The behavior of the participant ends the trial, and the participant is then returned to the starting point.
Measures: often the time to complete the task or the number of errors

28
Q

Skinner box

A

Skinner created a box with a lever that rats had to press to get food

29
Q

Free operant procedure

Dependent variables with it

A

In the free operant procedure the behavior may be repeated any number of times (e.g., in Skinner's box the lever could be pressed any number of times).
Dependent variable: usually the number of times a particular behavior, such as pecking, occurs per minute

30
Q

Advantage of free operant procedure

A

More natural and less intrusive

31
Q

Most important difference between Pavlovian and operant

A

In Pavlovian conditioning the US is contingent on another stimulus (the CS), whereas in operant learning a stimulus is contingent on behavior

32
Q

Reflexive behavior usually uses what system, whereas voluntary behavior usually involves what system

A

Reflexive behavior is usually associated with Pavlovian conditioning, the autonomic nervous system, and smooth muscles and glands. Voluntary behavior is usually associated with operant learning, the voluntary nervous system, and skeletal muscles.

33
Q

Is it hard or easy to distinguish between operant and Pavlovian conditioning

A

It is often hard; the two are not always easy to distinguish

34
Q

Primary reinforcers

A

Naturally or innately reinforcing (usually); they are not dependent on their association with other reinforcers.
Very powerful and limited in number.
Examples: food, water, sex, electrical stimulation of the brain, relief from heat and cold, and certain drugs

35
Q

Secondary reinforcers

A

Dependent on their association with other reinforcers (e.g., praise, recognition, smiles, and positive feedback). They are secondary to other reinforcers, ultimately depending on primary reinforcers.
Also known as conditioned reinforcers

36
Q

Four advantages of secondary reinforcers

A

They become less effective with repeated use more slowly than primary reinforcers do.
They are often much easier to use to reinforce behavior than other reinforcers.
They are less disruptive than primary reinforcers and take less time.
They can be used in many different situations.

37
Q

General reinforcer definition

A

Reinforcers that have been paired with many different kinds of reinforcers in a variety of situations (e.g., money)

38
Q

Main disadvantage of secondary reinforcers

A

Their effectiveness depends on their association with a primary reinforcer. A secondary reinforcer may lose its effectiveness if that association breaks down (e.g., money becomes worthless).

39
Q

Why is shaping sometimes related to tantrums

A

The parent inadvertently shapes tantrums by requiring more and more outrageous behavior from the child before giving the child what it wants

40
Q

5 tips for shapers

A

1) reinforce small steps
2) immediate reinforcement
3) small reinforcers
4) reinforce the best approximation available
5) back up when necessary

41
Q

Behavior chain: what is it, and what is it called when you train it

A

A connected sequence of behaviors (e.g., a gymnastics routine).

Training a behavior chain is called chaining.

42
Q

First step of chaining

A

Break the task into its component parts; this is called a task analysis

43
Q

Two ways to chain after task analysis

A

1) Forward chaining: the first link in the chain is reinforced until it is performed without hesitation, then the remaining links are added and reinforced sequentially.
2) Backward chaining: begin with the last link in the chain and work backwards. Note that the chain is never performed backwards; training simply starts one step further toward the beginning each time.

44
Q

The last step in the chain is usually

A

The last step usually produces a reinforcer, which is often a primary reinforcer

45
Q

Contingency for operant conditioning

A

The degree of correlation between the behavior and its consequence.
The rate of learning depends on the contingency; ideally the consequence follows the behavior 100% of the time.

46
Q

Contiguity with operant conditioning

A

The gap between a behavior and its reinforcing consequence. Generally a shorter gap means faster learning, because the learner is less likely to be confused about which action is being reinforced.
Weil showed that the gap itself mattered by holding the rate of reinforcement constant.

47
Q

Three characteristics of reinforcers

A

1) Small reinforcers given frequently result in faster learning than large reinforcers given infrequently. However, if all else is equal, a larger reinforcer results in faster learning (e.g., $100 vs. $5).
2) The effect of reinforcer size/magnitude is not linear: the more you increase it, the less additional benefit you get from the increase.
3) Qualitative differences in reinforcers: identifying preferred reinforcers can make a difference.

48
Q

How do task characteristics affect operant learning

A

Easier tasks are learned faster. Behavior involving smooth muscles is harder to modify than behavior involving skeletal muscles, but it has been done.

49
Q

How does deprivation level affect operant learning

A

The greater the level of deprivation, the more effective the reinforcer. This is mainly important when the reinforcer alters a physical condition.

50
Q

Two other variables that affect operant learning

A

Learning histories

Competing contingencies - if the behavior also produces punishing consequences

51
Q

Extinction with operant learning

A

Withholding the consequences that reinforce a behavior

52
Q

Extinction burst with operant conditioning

A

An abrupt increase in the behavior when extinction begins, generally followed by a decline in the behavior

53
Q

Variability of behavior with operant conditioning

A

An increase in the variability of behavior, as the organism tries to produce the previously reinforced outcome

54
Q

Emotional behavior with extinction

A

Often an increase in aggression

55
Q

Resurgence

A

The reappearance of previously reinforced behavior during extinction.
Not the same as spontaneous recovery, which occurs with extinction of Pavlovian conditioning.

56
Q

Resurgence can be used to understand what

A

Regression: the tendency to return to primitive, infantile modes of behavior (e.g., a grown man has a tantrum)

57
Q

Factors that affect the rate of operant extinction

A

Number of times the behavior was reinforced before it was extinguished
The effort the behavior requires
The size of reinforcers used during training

58
Q

Behavior is learned how quickly and extinguished how quickly

A

Behavior is usually acquired quickly and extinguished slowly

59
Q

Can a reinforced behavior ever be completely extinguished

A

Not really, it will likely occur at a rate above baseline

60
Q

What did Thorndike conclude about practice

A

That practice is important only if the behavior is reinforced

61
Q

Hull’s drive reduction theory

A

Motivational states are called drives. A reinforcer is a stimulus that reduces one or more drives. This theory works well for primary reinforcers; however, secondary reinforcers do not necessarily satisfy physiological needs, and some reinforcers cannot be classified as either primary or secondary (e.g., male rats will take a mating opportunity even if they cannot ejaculate). These are major criticisms and weaknesses of the theory.

62
Q

Relative value theory

A

Proposed by Premack.
It says that in any given situation some kinds of behavior have a greater likelihood of occurrence than others. Thus different behaviors have different relative values, and these values determine the reinforcing properties of a behavior.
To apply the theory you need to know the relative values of the activities.

63
Q

Premack principle

A

High-probability behavior reinforces low-probability behavior. You can make the high-probability behavior contingent on the low-probability behavior and thereby increase the likelihood of the lower-probability behavior (for example, making play contingent on finishing homework).

64
Q

Criticism for relative value theory

A

It does not explain why the word "yes" is reinforcing.
Also, low-probability behavior will reinforce high-probability behavior, but only if the low-probability behavior has been prevented from being performed for some time.

65
Q

Response deprivation theory

A

A behavior becomes reinforcing when the organism is prevented from engaging in it at its normal frequency (i.e., it falls below its baseline level).

66
Q

Fault with response deprivation theory

A

It doesn't explain why words like "yes" can be reinforcing. It also doesn't explain why people performed better in Thorndike's experiment when blindfolded.

67
Q

The example where dogs learned to jump when the light went out before the shock came is an example of what

A

Escape-avoidance learning. The dogs first learn to escape and then eventually to avoid the aversive stimulus.

68
Q

Two process theory

A

Says that both Pavlovian conditioning and operant learning are involved. Escaping the shock is negatively reinforcing, but eventually the extinguished light (the darkened chamber) becomes a CS for fear (the US for the fear being the shock).

Therefore, under this theory there is really only escape: the animal escapes the shock and then escapes the dark chamber.

69
Q

Three issues with two process theory

A

1) Fear of the CS decreases as the animal learns to avoid the shock. This means escaping the CS should become less reinforcing and the avoidance behavior should extinguish.
2) However, the avoidance behavior does not extinguish.
3) A study by Sidman showed that rats pressed a lever to decrease the rate of shock; this is a problem because there is no stimulus to escape.

70
Q

Sidman avoidance procedure

A

There is no preceding stimulus. Shocks occur at regular intervals, and the rat can delay the next shock by 15 seconds by pressing a lever.
Sidman found that the rats pressed the lever to delay the shocks.

It has been argued that the passage of time served as the CS, but this is debated.

71
Q

One process theory

A

Proposes that avoidance involves only one process: operant learning. Both escape and avoidance behaviors are reinforced by the reduction in aversive stimulation.
The reduction in exposure to shock is itself reinforcing.
The animal learns to jump to avoid the shock.

72
Q

Allen Neuringer demonstrated that, with reinforcement, ____ could learn to behave randomly

A

Pigeons

73
Q

In general the greater the number of reinforcements before extinction the

A

Greater the number of responses during extinction

74
Q

Chaining is useful for wildlife?

A

True

75
Q

Have efforts to reinforce contraction of individual muscle fibers failed?

A

No