Final Exam Second Half Flashcards

1
Q

What is Learning?

A
  • the acquisition, from experience, of new knowledge, skills, or responses that results in a relatively permanent change in the state of the learner.
  • This definition emphasizes these key ideas:
    • Learning is based on experience.
    • Learning produces changes in the organism.
    • These changes are relatively permanent.
  • Eg. think about Jennifer’s time in Iraq and you’ll see all of these elements: Experiences that led Jennifer to associate the sound of an approaching helicopter with the arrival of wounded soldiers changed the way she responded to certain situations in a way that lasted for years.
  • Learning can also occur in much simpler, nonassociative forms.
2
Q

What is Habituation?

A
  • a general process in which repeated or prolonged exposure to a stimulus results in a gradual reduction in responding.
    • eg. if you lived near a busy highway you probably noticed the sound of traffic when you first moved in.
    • After a while, the roar wasn’t quite so deafening anymore and eventually you were able to ignore the sounds of the automobiles in your vicinity.
    • This welcome reduction in responding reflects the operation of habituation.
    • occurs even in the simplest of organisms.
    • eg. the sea slug Aplysia exhibits habituation: when lightly touched, it initially withdraws its gill, but the response gradually weakens after repeated light touches.
3
Q

What is Sensitization?

A
  • presentation of a stimulus leads to an increased response to a later stimulus.
    • eg. Kandel found that after receiving a strong shock, Aplysia showed an increased gill-withdrawal response to a light touch.
    • In a similar manner, people whose houses have been broken into may later become hypersensitive to late-night sounds that wouldn’t have bothered them before.
4
Q

What is Classical Conditioning?

A
  • Ivan Pavlov.
  • a type of learning that occurs when a neutral stimulus produces a response after being paired with a stimulus that naturally produces a response.
  • In his classic experiments, Pavlov showed that dogs learned to salivate to neutral stimuli such as a buzzer or a metronome after the dogs had associated that stimulus with another stimulus that naturally evokes salivation, such as food.
  • When Pavlov’s findings first appeared in the scientific and popular literature, they produced a flurry of excitement bc psychologists now had demonstrable evidence of how conditioning produced learned behaviours.
  • This was the kind of behaviourist psychology John B. Watson was proposing: An organism experiences events or stimuli that are observable and measurable, and scientists can directly observe and measure changes in that organism.
5
Q

What are the Basic Principles of Classical Conditioning?

A
  • When the dogs were initially presented with a plate of food, they began to salivate. Pavlov called the presentation of food an unconditioned stimulus (US) - something that reliably produces a naturally occurring reaction in an organism.
    • he called the dogs’ salivation an unconditioned response (UR) - a reflexive reaction that is reliably produced by an unconditioned stimulus.
  • Then Pavlov paired the presentation of food with the sound of the ticking of a metronome, a buzzer, the humming of a tuning fork, or the flash of a light.
    • This period is called acquisition - the phase of classical conditioning when the CS and the US are presented together.
  • Nothing in nature would make a dog salivate to the sound of a metronome or a buzzer.
  • However, when the CS (the sound of the metronome) is paired over time with the US (the food), the animal will learn to associate food with the sound, and eventually the CS is sufficient to produce a response, or salivation.
  • Sure enough, Pavlov found that the dogs ultimately salivated to these sounds and flashes, each of which had become a conditioned stimulus (CS) - a previously neutral stimulus that produces a reliable response in an organism after being paired with a US.
  • This response resembles the UR, but Pavlov called it the conditioned response (CR) - a reaction that resembles an unconditioned response but is produced by a conditioned stimulus.
  • In this example, the dogs’ salivation (CR) was eventually prompted by the sound of the metronome (CS) alone because the sound of the metronome and the food (US) had been associated so often in the past.
  • The CR reflects learning, whereas the UR does not.
6
Q

What is Second-Order Conditioning?

A
  • After conditioning has been established, a phenomenon called second-order conditioning - a type of learning in which a CS is paired with a stimulus that became associated with the US in an earlier procedure - can be demonstrated.
    • Eg. in an early study, Pavlov repeatedly paired a new CS, a black square, with the now reliable tone. After a number of training trials, his dogs produced a salivary response to the black square, even though the square itself had never been directly associated with the food.
7
Q

What is Extinction?

A
  • extinction - the gradual elimination of a learned response that occurs when the CS is repeatedly presented without the US.
    • This term was introduced bc the conditioned response is “extinguished” and no longer observed.
    • Extinction is what happened when researchers continued to present the CS (metronome ticking) but stopped presenting the US (food).
    • Having observed that he could produce learning through conditioning and then extinguish it, Pavlov wondered whether this elimination of conditioned behaviour was permanent.
8
Q

What is Spontaneous Recovery?

A

spontaneous recovery - the tendency of a learned behaviour to recover from extinction after a rest period.
- To explore this question, Pavlov extinguished the classical conditioned salivation response and then allowed the dogs to have a short rest period.
- When they were brought back to the lab and presented with the CS again, they displayed spontaneous recovery.
- Notice that this recovery takes place even though there have not been any additional associations between the CS and US.
- Some spontaneous recovery of the conditioned response even takes place in what is essentially a second extinction session after another period of rest.
- Clearly, extinction had not completely erased the learning that had been acquired.
- The ability of the CS to elicit the CR was weakened, but it was not eliminated.

9
Q

What is Generalization?

A
  • It wouldn’t be very adaptive for an organism if each little change in the CS-US pairing required an extensive regimen of new learning.
  • Rather, the phenomenon of generalization tends to take place: The CR is observed even though the CS is slightly different from the CS used during acquisition.
    • In other words, the conditioning generalizes to stimuli that are similar to the CS used during the original training.
    • The more the new stimulus changes, the less conditioned responding is observed — which means that if you replaced a manual can opener with an electric can opener, your dog would probably show a much weaker conditioned response.
  • When an organism generalizes to a new stimulus, two things are happening:
    • 1st → by responding to the new stimulus used during generalization testing, the organism demonstrates that it recognizes the similarity between the original CS and the new stimulus.
    • 2nd → by displaying a diminished response to that new stimulus, it also tells us that it notices a difference between the two stimuli.
    • In the 2nd case, the organism shows discrimination, the capacity to distinguish between similar but distinct stimuli.
    • → Generalization and discrimination are two sides of the same coin.
      • The more organisms show one, the less they show the other, and training can modify the balance between the two.
10
Q

What did the case of Little Albert show?

A

Classical conditioning.
- Watson wanted to see if such a child could be classically conditioned to experience a strong emotional reaction — namely, fear.
- Watson presented Little Albert with a variety of stimuli: a white rat, a dog, a rabbit, various masks, and a burning newspaper. Albert reacted in most cases with curiosity or indifference, and he showed no fear of any of the items.
- Watson also established that an unconditioned stimulus could make Albert afraid. While Albert was watching Rayner, Watson unexpectedly struck a large steel bar with a hammer, producing a loud noise. Predictably, this caused Albert to cry, tremble, and be generally displeased.
- Watson and Rayner then led Little Albert through the acquisition phase of classical conditioning.
- Albert was presented with a white rat. As soon as he reached out to touch it, the steel bar was struck. This pairing occurred again and again over several trials. Eventually, the sight of the rat alone caused Albert to recoil in terror, crying and clamoring to get away from it.
- In this situation, a US (the loud sound) was paired with a CS (the presence of the rat) such that the CS all by itself was sufficient to produce the CR (a fearful reaction)
- Little Albert also showed stimulus generalization. The sight of a white rabbit, a seal-fur coat, and a Santa Claus mask produced the same kinds of fear reactions in the infant.

11
Q

What was Watson’s goal in his Classical Conditioning Experiment?

A
  • 1st → wanted to show that a relatively complex reaction could be conditioned using Pavlovian techniques.
  • 2nd → wanted to show that emotional responses such as fear and anxiety could be produced by classical conditioning and therefore need not be the product of deeper unconscious processes or early life experiences as Freud and his followers had argued.
  • Instead, Watson proposed that fears could be learned, just like any other behaviour.
  • 3rd → Watson wanted to confirm that conditioning could be applied to humans as well as to other animals.
  • This study was controversial in its cavalier treatment of a young child, especially given that Watson and Rayner did not follow up with Albert and his mother during the ensuing years.
  • A therapy that has proven effective in dealing with such trauma-induced fears is based directly on principles of classical conditioning: Individuals are repeatedly exposed to conditioned stimuli associated with their trauma in a safe setting, in an attempt to extinguish the conditioned fear response.
    • However, conditioned emotional responses include much more than just fear and anxiety responses.
    • eg. when advertisers use attractive women in ads for products geared towards young males, such as beer and sports cars.
12
Q

What was the Rescorla Wagner Model on Classical Conditioning?

A
    • Robert Rescorla and Allan Wagner (1972) were the first to theorize that classical conditioning occurs when an animal has learned to set up an expectation.
      • The sound of a metronome, because of its systematic pairing with food, set up this cognitive state for the lab dogs; Pavlov himself, bc of the lack of any reliable link with food, did not.
      • In fact, in situations like this, many responses are actually being conditioned.
      • When the metronome ticks, the dogs also wag their tails, make begging sounds, and look towards the food source.
  • The Rescorla-Wagner model introduced a cognitive component that accounted for a variety of classical conditioning phenomena that were difficult to understand from a simple behaviourist point of view.
    • eg. the model predicted that conditioning would be easier when the CS was an unfamiliar event than when it was familiar.
    • The reason is that familiar events, being familiar, already have expectations associated with them, making new conditioning difficult.
    • In short, classical conditioning might appear to be a primitive process, but it is actually quite sophisticated and incorporates a significant cognitive element.
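  • Not in the card above, but for reference: in its simplest single-CS form, the Rescorla-Wagner update rule is ΔV = αβ(λ − V), where V is the current associative strength of the CS, λ is the maximum strength the US supports, and αβ is a learning rate. Below is a minimal illustrative sketch of that rule; the function name and parameter values are invented for illustration, not taken from the course material.

```python
# Minimal sketch of the Rescorla-Wagner rule for a single CS.
# All parameter values below are arbitrary, for illustration only.

def rescorla_wagner(trials, alpha_beta=0.3):
    """Return associative strength V after each trial.

    trials: sequence of lambda values, e.g. 1.0 on trials where the
            US follows the CS (acquisition) and 0.0 where the US is
            omitted (extinction).
    """
    v = 0.0
    history = []
    for lam in trials:
        v += alpha_beta * (lam - v)  # learning is driven by the "surprise" (lambda - V)
        history.append(round(v, 3))
    return history

# 10 CS-US pairings, then 10 CS-alone trials:
# V climbs toward 1.0 during acquisition and decays toward 0.0 in extinction.
print(rescorla_wagner([1.0] * 10 + [0.0] * 10))
```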
13
Q

What parts of the brain are important for classical conditioning?

A
  • More recent neuroimaging findings in healthy young adults show activation in the cerebellum during eyeblink conditioning.
    • Cerebellum is part of the hindbrain and plays an important role in motor skills and learning.
  • Fear conditioning has also been studied.
    • Amygdala plays an important role in the experience of emotion, including fear and anxiety.
    • The amygdala, particularly an area known as the central nucleus, is also critical for emotional conditioning.
    • eg. a rat that is conditioned to a series of CS-US pairings in which the CS is a tone and the US is a mild electric shock.
    • When rats experience sudden painful stimuli, they show a defensive reaction, known as freezing, in which they crouch down and sit motionless.
    • In addition, their autonomic nervous systems go to work: Heart rate and blood pressure increase, and various hormones associated with stress are released.
    • When fear conditioning takes place, these two components — one behavioural and one physiological — occur, except now they are elicited by the CS.
14
Q

What does Fear Conditioning tell us about the Brain?

A
  • The central nucleus of the amygdala plays a role in producing both of these outcomes through two distinct connections with other parts of the brain.
    • If connections linking the amygdala to the midbrain are disrupted, the rat does not exhibit the behavioural freezing response.
    • If the connections between the amygdala and the hypothalamus are severed, the autonomic responses associated with fear cease.
    • Hence, the action of the amygdala is an essential element in fear conditioning, and its links with other areas of the brain are responsible for producing specific features of conditioning.
    • The amygdala is involved in fear conditioning in people as well as in rats and other animals
15
Q

How has Classical Conditioning been linked to Evolutionary Elements?

A
  • Much research exploring this adaptiveness has focused on conditioned food aversions, primarily taste aversions.
    • Eg. a psychology prof was once on a job interview in Southern California, and his host took him to lunch at a Middle Eastern restaurant. Suffering from a case of bad hummus, he was up all night long and developed a lifelong aversion to hummus.
    • The hummus was the CS, a bacterium or some other source of toxicity was the US, and the resulting nausea was the UR. The UR (the nausea) became linked to the once-neutral CS (the hummus) and became a CR (an aversion to hummus).
    • However, all of the psychologist’s hosts also ate the hummus, yet none of them reported feeling ill. It’s not clear, then, what the US was; it couldn’t have been anything that was actually in the food.
      • What’s more, the time between the hummus and the distress was several hours; usually, a response follows a stimulus fairly quickly.
      • Most baffling, this aversion was cemented with only a single acquisition trial.
  • These peculiarities are not so peculiar from an evolutionary perspective.
  • To have adaptive value, this mechanism (of learning to avoid food that once made us ill) should have several properties:
    • Rapid learning should occur in perhaps one or two trials. If learning takes more trials than this, the animal could die from eating a toxic substance.
    • Conditioning should be able to take place over very long intervals, perhaps up to several hours.
      • Toxic substances often don’t cause illness immediately, so the organism would need to form an association between the food and the illness over a longer term.
    • The organism should develop the aversion to the smell or taste of the food rather than its ingestion. It’s more adaptive to reject a potentially toxic substance based on smell alone than it is to ingest it.
    • Learned aversions should occur more often with novel foods than with familiar ones. It is not adaptive for an animal to develop an aversion to everything it has eaten on the particular day it got sick.
16
Q

What have studies on taste aversion suggested?

A
  • Studies such as these suggest that evolution has provided each species with a kind of biological preparedness - a propensity for learning particular kinds of associations over other kinds, such that some behaviours are relatively easy to condition in some species but not others.
    • eg. the taste and smell stimuli that produce food aversions in rats do not work with most species of fish.
    • Birds depend primarily on visual cues for finding food and are relatively insensitive to taste and smell.
      • It is relatively easy to produce a food aversion in birds using an unfamiliar visual stimulus as the CS, such as brightly coloured food.
17
Q

The study of classical conditioning is the study of behaviours that are _______?

A
  • REACTIVE
  • Most animals don’t voluntarily salivate or feel spasms of anxiety; rather, they exhibit these responses involuntarily during the conditioning process.
  • Involuntary behaviours make up only a small portion of our behavioural repertoires; the remainder are behaviours that we voluntarily perform.
18
Q

What is Operant Conditioning?

A
  • a type of learning in which the consequences of an organism’s behaviour determine whether it will repeat that behaviour in the future.
    • The study of operant conditioning is the exploration of behaviours that are active.
  • Edward L. Thorndike first examined active behaviours back in the 1890s, before Pavlov published his findings.
19
Q

What did Edward L. Thorndike’s Experiments show?

A

Operant conditioning.
- Thorndike’s research focused on instrumental behaviours, that is, behaviour that required an organism to do something, such as solve a problem or otherwise manipulate elements of its environment.
- eg. Thorndike completed experiments using a puzzle box, which was a wooden crate with a door that would open when a concealed lever was moved in the right way.
- A hungry cat placed in a puzzle box would try various behaviours to get out — scratching at the door, meowing loudly, sniffing the inside of the box, putting its paw through the openings — but only one behaviour opened the door and led to food: tripping the lever in just the right way.
- After the cat earned its reward, Thorndike placed it back in the puzzle box for another round.
- Eventually, the cats became quite skilled at triggering the lever for their release.
- What’s going on:
- 1st → the cat enacts any number of likely (yet ultimately ineffective) behaviours, but only one behaviour leads to freedom and food.
- Over time, the ineffective behaviours occur less and less frequently, and the one instrumental behaviour (going right for the latch) becomes more frequent.
- From these observations, Thorndike developed the law of effect - the principle that behaviours that are followed by a “satisfying state of affairs” tend to be repeated, and those that produce an “unpleasant state of affairs” are less likely to be repeated.

20
Q

What is the Law of Effect?

A

Thorndike developed the law of effect - the principle that behaviours that are followed by a “satisfying state of affairs” tend to be repeated, and those that produce an “unpleasant state of affairs” are less likely to be repeated.

21
Q

What is the difference between Pavlov’s and Thorndike’s Work?

A
  • In Pavlov’s work:
    • the US occurred on every training trial, no matter what the animal did.
    • Pavlov delivered food to the dog whether it salivated or not.
  • In Thorndike’s work:
    • the behaviour of the animal determined what happened next.
    • If the behaviour was “correct” (ie animal triggered the latch), the animal was rewarded with food.
    • Incorrect behaviours produced no results, and the animal was stuck in the box until it performed the correct behaviour.
22
Q

What is Operant Behaviour?

A
  • coined by B. F. Skinner
  • behaviour that an organism performs that has some impact on the environment.
  • In Skinner’s system, all of these emitted behaviours “operated” on the environment in some manner, and the environment responded by providing events that either strengthened those behaviours (ie. they reinforced them) or made them less likely to occur (ie. they punished them).
  • Skinner’s elegantly simple observation was that most organisms do not behave like a dog in a harness, passively waiting to receive food no matter what the circumstance.
    • Rather, most organisms are like cats in a box, actively engaging the environment in which they find themselves to reap rewards
  • Skinner’s approach to the study of learning focused on reinforcement and punishment.
23
Q

According to Skinner, what is a reinforcer?

A

any stimulus or event that increases the likelihood of the behaviour that led to it

24
Q

According to Skinner, what is a punisher?

A

any stimulus or event that decreases the likelihood of the behaviour that led to it.

25
Whether a particular stimulus acts as a reinforcer or a punisher depends in part on whether it ___________________________________________.
INCREASES OR DECREASES THE LIKELIHOOD OF A BEHAVIOUR.
- Presenting food is usually reinforcing and it produces an increase in the behaviour that led to it; removing food is often punishing and leads to a decrease in behaviour.
- Turning on an electric shock is typically punishing (and decreases the behaviour that led to it); turning it off is rewarding (and increases the behaviour that led to it).
- To keep these possibilities distinct, Skinner used the term *positive* for situations in which a stimulus was presented and *negative* for situations in which it was removed.
- *Positive reinforcement* (a stimulus is presented, and its presentation increases the likelihood of a behaviour)
- *Negative reinforcement* (a stimulus is removed, and its removal increases the likelihood of a behaviour)
- *Positive punishment* (a stimulus is administered, and its administration reduces the likelihood of a behaviour)
- *Negative punishment* (a stimulus is removed, and its removal decreases the likelihood of a behaviour)
26
What does Positive Reinforcement mean?
Positive reinforcement (a stimulus is presented, and its presentation increases the likelihood of a behaviour)
27
What does Negative Reinforcement mean?
negative reinforcement (a stimulus is removed, and its removal increases the likelihood of a behaviour)
- Negative reinforcement is the *removal* of something, such as a shock, that increases the likelihood of a behaviour.
- Eg. new parents are highly trainable by their babies, and quickly learn to repeat behaviours (such as bouncing a certain way, swinging the baby in their car seat at a certain rate, or pulling goofy faces) that the baby either positively reinforces (by smiling or cooing) or negatively reinforces (by stopping crying).
28
What does Positive Punishment mean?
a stimulus is administered, and its administration reduces the likelihood of a behaviour
29
What does Negative Punishment mean?
a stimulus is removed, and its removal decreases the likelihood of a behaviour
30
What does positive mean?
something that is added - positive for situations in which a stimulus was presented
31
What does negative mean?
- something that is taken away - negative for situations in which it was removed.
32
Is Reinforcement or Punishment more effective in promoting learning?
- Reinforcement is generally more effective than punishment in promoting learning, for many reasons.
- 1 → punishment signals that unacceptable behaviour has occurred, but it doesn’t specify what should be done instead.
- Scolding a young child for starting to run into a busy street certainly stops the behaviour, but it doesn’t promote any kind of learning about the *desired* behaviour.
33
What are Primary Reinforcers?
Food, comfort, shelter, and warmth are examples of primary reinforcers because they help satisfy biological needs or desires.
34
What are Secondary Reinforcers?
- The vast majority of reinforcers or punishers in our daily lives have little to do with biology: Verbal approval, a bronze trophy, or money all serve powerful reinforcing functions, yet none of them taste very good or help keep you warm at night.
- These *secondary reinforcers* derive their effectiveness from their associations with primary reinforcers through classical conditioning.
- Eg. money starts out as a neutral CS that, through its association with primary USs such as acquiring food or shelter, takes on a conditioned emotional element.
- Flashing lights, originally a neutral CS, acquire powerful negative elements through association with a speeding ticket and a fine, or as an indicator of a dangerous situation.
35
A key determinant of the effectiveness of a reinforcer is the amount of time between the occurrence of a behaviour and the reinforcer: what timing is less effective?
- The more time elapses, the less effective the reinforcer.
- Eg. hungry rats in which food reinforcers were given at varying times after a rat pressed a lever.
- Delaying reinforcement by even a few seconds led to a reduction in the number of times the rat subsequently pressed the lever, and extending the delay to a minute rendered the food reinforcer completely ineffective.
- The most likely explanation for this effect is that delaying the reinforcer made it difficult for the rats to figure out exactly what behaviour they needed to perform in order to obtain it.
- In the same way, parents who wish to use a piece of candy to reinforce their children for playing quietly should provide the candy while the child is still playing quietly; waiting until later, when the child may be engaging in other behaviours — perhaps making a racket with pots and pans — will make it more difficult for the child to link the reinforcer with the behaviour of playing quietly.
- The greater potency of immediate versus delayed reinforcers may help us appreciate why it can be difficult to engage in behaviours that have long-term benefits.
- Eg. the smoker who desperately wants to quit smoking will be reinforced immediately by the feeling of relaxation that results from lighting up, but they may have to wait years to be reinforced with the better health that results from quitting.
36
As a general rule for punishment, what does timing have to do with the effectiveness of punishment?
- As a general rule, the longer the delay between a behaviour and the administration of punishment, the less effective the punishment will be in suppressing the targeted behaviour.
- The reduced effectiveness of delayed punishment can be a serious problem in nonlab settings bc in everyday life, it is often difficult to administer punishment immediately or even soon after a problem behaviour has occurred.
- Eg. a parent whose child misbehaves at a shopping mall may be unable to punish the child immediately with a time-out bc it is impractical in the mall setting.
37
What can increase the effectiveness of delayed punishment?
- Strategies for increasing the effectiveness of delayed punishment: increasing the severity of the punishment or attempting to bridge the gap between the behaviour and the punishment with verbal instructions.
- eg. the parent in the shopping mall might tell the misbehaving child exactly when and where a later time-out will occur.
38
What are the Basic Principles of Operant Conditioning?
- Thorndike recognized that learning takes place in *contexts*, not in the free range of any plausible situation.
- As Skinner rephrased it later, most behaviour is under *stimulus control*, which develops when a particular response occurs only when an appropriate *discriminative stimulus* (one that indicates that a response will be reinforced) is present.
- Skinner discussed this process in terms of a “three-term contingency”: In the presence of a *discriminative stimulus* (classmates drinking coffee together in Starbucks), a *response* (joking comments about a psychology professor’s increasing waistline and receding hairline) produces a *reinforcer* (laughter among classmates).
- The same response in a different context — eg. the professor’s office — would most likely produce a very different outcome.
- Stimulus control shows both discrimination and generalization effects similar to those we saw with classical conditioning.
- Researchers used either a painting by the French impressionist Claude Monet or one of Pablo Picasso’s paintings from his cubist period as the discriminative stimulus.
- Participants were reinforced only if they responded when the appropriate painting was presented.
- After training, the participants discriminated appropriately: Those trained with the Monet painting responded when other paintings by Monet were presented, and those trained with a Picasso painting reacted when other cubist paintings by Picasso were shown.
- Monet-trained participants did not react to Picassos, and the Picasso-trained participants did not respond to Monets.
- The research participants showed that they could generalize *across* painters as long as they were from the same artistic tradition.
- Those trained with Monet responded appropriately when shown paintings by Auguste Renoir (another French impressionist), and the Picasso-trained participants responded to artwork by the cubist painter Henri Matisse, despite never having seen those paintings before.
- The research participants were actually pigeons that were trained to key-peck to these various works of art.
- Stimulus control, and its ability to foster stimulus discrimination and stimulus generalization, is effective even if the stimulus has no meaning to the respondent.
39
How is Extinction Related to Operant Behaviour?
- Operant behaviour undergoes extinction when the reinforcements stop.
- Pigeons cease pecking a key if food is no longer presented following the behaviour.
- On the surface, extinction of operant behaviour looks like that of classical conditioning: The response rate drops fairly rapidly and, if a rest period is provided, spontaneous recovery is typically seen.
- HOWEVER, there is an important difference.
- In classical conditioning → the US occurs on *every* trial, no matter what the organism does.
- In operant conditioning, the reinforcements occur *only* when the proper response has been made, and they don’t always occur even then.
- Not every trip into the forest produces nuts for a squirrel.
- Yet these behaviours don’t weaken and gradually extinguish.
- In fact, they typically become stronger and more resilient.
- Extinction is a bit more complicated in operant conditioning than in classical conditioning bc it depends, in part, on how often reinforcement is received.
40
What is Important about reinforcements for operant conditioning?
- Unlike in classical conditioning, where the sheer *number* of learning trials was important, the *pattern* with which reinforcements appeared was crucial in operant conditioning.
- Skinner explored dozens of what became known as *schedules of reinforcement*.
- The two most important are *interval schedules*, based on the time intervals between reinforcements, and *ratio schedules*, based on the ratio of responses to reinforcements.
41
What are the two types of Schedules of Reinforcement?
The two most important are interval schedules, based on the time intervals between reinforcements, and ratio schedules, based on the ratio of responses to reinforcement.
42
What are the two types of interval schedules for reinforcement?
- Under a **fixed-interval (FI) schedule**, *reinforcers are presented at fixed time periods, provided that the appropriate response is made.*
- eg. on a 2-min fixed-interval schedule, a response will be reinforced, but only after 2 minutes have expired since the last reinforcement.
- Rats and pigeons in Skinner boxes produce predictable patterns of behaviour under these schedules.
- They show little responding right after the presentation of the reinforcement, but as the next time interval draws to a close, they show a burst of responding.
- Under a **variable-interval (VI) schedule**, *a behaviour is reinforced on the basis of an average time that has expired since the last reinforcement.*
- Eg. on a 2-min variable-interval schedule, responses will be reinforced every 2 mins, *on average*, but not after each 2-minute period.
- Variable-interval schedules typically produce steady, consistent responding bc the time until the next reinforcement is less predictable.
- Not a common schedule irl, but one example might be radio promotional giveaways, such as tickets to rock concerts.
- The reinforcement — getting the tickets — might average out to once an hour across the span of the broadcasting day, but the presentation of the reinforcement is variable: It might come early in the 10 o’clock hour, later in the 11 o’clock hour, immediately into the 12 o’clock hour, and so on.
43
What are the Two Types of Ratio Schedules of Reinforcement?
- Under a **fixed-ratio (FR) schedule**, *reinforcement is delivered after a specific number of responses have been made.*
- One schedule might present reinforcement after every fourth response, and a different schedule might present reinforcement after every 20 responses; the special case of presenting reinforcement after each response is called *continuous reinforcement*, and it’s what drove Skinner to investigate these schedules in the first place.
- In each example, the ratio of reinforcements to responses, once set, remains fixed.
- Eg. pieceworkers get paid after making a fixed number of products, and some credit card companies return to their customers a percentage of the amount charged.
- When a fixed-ratio schedule is operating, it is possible, in principle, to know exactly when the next reinforcer is due.
- Under a **variable-ratio (VR) schedule**, *the delivery of reinforcement is based on a particular average number of responses, although the ratio of responses to reinforcements is variable.*
- Slot machines in a modern casino pay off on variable-ratio schedules that are determined by the random number generator controlling the play of the machines.
- Variable-ratio schedules produce slightly higher rates of responding than fixed-ratio schedules, primarily because the organism never knows when the next reinforcement is going to appear.
- And the higher the ratio, the higher the response rate tends to be: A 20-response variable-ratio schedule will produce considerably more responding than a 2-response variable-ratio schedule will.
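- Not from the textbook, but as a rough way to see how the four schedules in the two cards above differ, here is a hedged sketch in Python; the function names and parameter values are invented for illustration, and real schedules are defined over an organism's ongoing behaviour rather than code.

```python
import random

# Rough sketch of the four basic schedules as "reinforce this response?" rules.
# Function names and parameter values are illustrative only.

def fixed_ratio(responses_since_last_reinforcer, ratio=20):
    # FR: reinforce after a fixed number of responses (FR-1 = continuous reinforcement).
    return responses_since_last_reinforcer >= ratio

def variable_ratio(avg_ratio=20):
    # VR: each response has a 1-in-avg_ratio chance of paying off, so the number of
    # responses per reinforcer varies around avg_ratio (slot-machine logic).
    return random.random() < 1.0 / avg_ratio

def fixed_interval(seconds_since_last_reinforcer, interval=120):
    # FI: the first response made after `interval` seconds is reinforced.
    return seconds_since_last_reinforcer >= interval

def variable_interval(seconds_since_last_reinforcer, required_interval):
    # VI: like FI, but `required_interval` is redrawn around an average
    # (e.g. random.expovariate(1 / 120)) after each reinforcer.
    return seconds_since_last_reinforcer >= required_interval
```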
44
When do schedules of reinforcement produce behaviour that is most resistant to extinction?
- When schedules of reinforcement provide **intermittent reinforcement** - *whereby only some of the responses made are followed by reinforcement* - they produce behaviour that is much more resistant to extinction than does a continuous reinforcement schedule.
- One way to think about this effect is to recognize that the more irregular and intermittent a schedule is, the more difficult it becomes for an organism to detect when it has actually been placed on extinction.
- eg. if you’ve put a toonie into a vending machine that, unknown to you, is broken, no snacks come out.
- Bc you’re used to getting your potato chips on a continuous reinforcement schedule — one toonie produces one bag of chips — this abrupt change in the environment is easy to notice, and you are unlikely to put additional money into the machine: You’d quickly show extinction.
- However, if you put your toonie into a slot machine that, unknown to you, is broken, do you stop after one or two plays? Almost certainly not.
- Under conditions of intermittent reinforcement, all organisms will show considerable resistance to extinction and continue for many trials before they stop responding.
45
What is the Intermittent Reinforcement Effect?
The relationship between intermittent reinforcement schedules and the robustness of the behaviour they produce is called the intermittent reinforcement effect - the fact that operant behaviours that are maintained under intermittent reinforcement schedules resist extinction better than those maintained under continuous reinforcement.
46
What is shaping?
learning that results from the reinforcement of successive steps to a final desired behaviour.
- Most of our behaviours, then, are the result of shaping: The outcomes of one set of behaviours shape the next set of behaviours, whose outcomes shape the next set of behaviours, and so on.
- eg. parents use shaping to teach young children skills such as riding a bike by reinforcing the successive behaviours that are needed to attain the complex goal behaviour.
- It is easy to shape bar pressing in a rat:
- Watch the rat closely: if it turns in the direction of the bar, deliver a food reward.
- This will reinforce turning towards the bar, making it more likely.
- Wait for the rat to take a step towards the bar before delivering food; this will reinforce moving towards the bar.
- After the rat walks closer to the bar, wait until it touches the bar before presenting food.
- → None of these behaviours is the final desired behaviour (reliably pressing the bar). Rather, each behaviour is a successive approximation to the final product, or a behaviour that gets incrementally closer to the overall desired behaviour.
47
What are Superstitious Behaviours?
- Skinner:
- Put several pigeons in Skinner boxes, set the food dispenser to deliver food every 15 seconds, and left the birds to their own devices.
- Later he returned and found the birds engaging in odd, idiosyncratic behaviours, such as pecking aimlessly in a corner and turning in circles.
- He referred to these behaviours as “superstitious” and offered a behaviourist analysis of their occurrence.
- The pigeons, he argued, were simply repeating behaviours that had been accidentally reinforced.
- That is, a pigeon that just happened to have pecked randomly in the corner when the food showed up had connected the delivery of food to that behaviour.
- Bc this pecking behaviour was reinforced by the delivery of food, the pigeon was likely to repeat it.
- Now pecking in the corner was more likely to occur, and it was more likely to be reinforced 15 seconds later when the food appeared again.
- For each pigeon, the behaviour that was reinforced would most likely be whatever the pigeon happened to be doing when the food was first delivered.
- Skinner’s pigeons acted as though there was a causal relationship between their behaviours and the appearance of food, when it was merely an accidental correlation.
48
What are some proposed Cognitive Elements of Operant Conditioning?
- Edward Chace Tolman was one of the first researchers to question Skinner’s strictly behaviourist interpretation of learning and was the strongest early advocate of a cognitive approach to operant learning.
- Tolman argued that there was more to learning than just knowing the circumstances of the environment (the properties of the stimulus) and being able to observe a particular outcome (the reinforced response).
- He proposed that an animal established a means-end relationship.
- That is, the conditioning experience produced knowledge or a belief that in this particular situation, a specific reward (the end state) will appear if a specific response (the means to that end) is made.
- In both Tolman’s means-end relationship and the Rescorla-Wagner model of classical conditioning, the stimulus does not directly evoke a response; it establishes an internal cognitive state that then produces the behaviour.
- During the 1930s and 1940s, Tolman and his students conducted studies that focused on *latent learning* and *cognitive maps*, two phenomena that strongly suggest that simple S-R interpretations of operant learning behaviour are inadequate.
49
What is Latent Learning?
- **Latent learning** - *something is learned, but it is not manifested as a behavioural change until sometime in the future.*
- Can easily be established in rats and it occurs without any obvious reinforcement.
- Eg. Tolman gave three groups of rats access to a complex maze every day over a span of 17 days.
- The control group never received any reinforcement for navigating the maze. They were simply allowed to run around until they reached the goal box at the end of the maze.
- Over the 17 days, the control group (green) got a little better at finding their way through the maze, but not by much.
- A second group of rats received regular reinforcements; when they reached the goal box, they found a small food reward there.
- These rats showed clear learning (blue).
- A third group was treated exactly like the control group for the first 10 days and then rewarded for the last 7 days.
- This group’s behaviour (orange) was quite striking.
- For the first 10 days, they behaved like the rats in the control group.
- However, during the final 7 days, their learning curve shifted abruptly; now they showed substantial learning, behaving a lot like the rats in the second group, which had been reinforced every day.
- Clearly, the rats in this 3rd group had learned a lot about the maze and the location of the goal box during those first 10 days even though they had not received any reinforcements for their behaviour.
- They showed evidence of latent learning.
- These results suggested to Tolman that beyond simply learning “start here, and end here,” his rats developed a sophisticated mental picture of the maze.
- Tolman called this a **cognitive map** - *a mental representation of the physical features of the environment.*
- Tolman thought that the rats had developed a mental picture of the maze that allowed them to navigate it, and he devised several experiments to test this idea.
- Latent learning and cognitive maps suggest that operant conditioning involves an animal’s doing much more than responding to a stimulus.
- Tolman’s experiments strongly suggest that there is a cognitive component, even in rats, to operant learning.
50
Where did the first hint of how specific brain structures might contribute to the process of reinforcement come from?
The discovery of what came to be called pleasure centres.
- McGill researchers James Olds and Peter Milner inserted tiny electrodes into different parts of a rat’s brain and allowed the animal to control electric stimulation of its own brain by pressing a bar.
- They discovered that some brain areas, particularly those in the limbic system, produced what appeared to be intensely positive experiences: The rats would press the bar repeatedly to stimulate these structures.
- The researchers observed that these rats would ignore food, water, and other life-sustaining necessities for hours on end simply to receive stimulation directly in the brain.
- They called these parts of the brain *pleasure centres*.
51
What are the structures and pathways in the brain that deliver rewards through stimulation?
- The neurons in the *medial forebrain bundle*, a pathway that meanders its way from the midbrain through the *hypothalamus* into the *nucleus accumbens*, are the most susceptible to stimulation that produces pleasure.
- Not surprising bc psychologists have identified this bundle of cells as crucial to behaviours that clearly involve pleasure, such as eating, drinking, and engaging in sexual activity.
- 2nd → the neurons all along this pathway and especially those in the nucleus accumbens itself are all *dopaminergic* (ie. they secrete the neurotransmitter *dopamine*).
- High levels of dopamine in the brain are usually associated with positive emotions.
- There are some competing hypotheses about the role of dopamine: that dopamine is more closely linked with the expectation of reward than with the reward itself, or that dopamine is more closely associated with wanting or even *craving* something rather than simply liking it.
- Researchers have found good support for a reward centre in which dopamine plays a role:
- Rats will work to stimulate this pathway at the expense of other basic needs.
- If drugs that block the action of dopamine are administered to the rats, they cease stimulating the pleasure centres.
- 2nd → drugs such as cocaine, amphetamine, and opiates activate these pathways and centres, but dopamine-blocking drugs dramatically diminish their reinforcing effects.
- 3rd → fMRI studies show increased activity in the nucleus accumbens in heterosexual men looking at pictures of attractive women and in individuals who believe they are about to receive money.
- Rats that are given primary reinforcers (eg. food or water) or that are allowed to engage in sexual activity show increased dopamine secretion in the nucleus accumbens — but only if they are hungry, thirsty, or sexually aroused.
52
What are the evolutionary elements of Operant Conditioning?
- Several behaviourists who were using simple T mazes to study learning in rats discovered that if a rat found food in one arm of the maze on the first trial of the day, it typically ran down the *other* arm on the very next trial.
- A staunch behaviourist wouldn’t expect rats to behave this way → the rats were hungry and they had just been reinforced for turning in a particular direction.
- According to operant conditioning, this should *increase* the likelihood of their turning in that same direction, not reduce it.
- Makes sense from an evolutionary pov:
- Rats are foragers, and like all foraging species, they have evolved a highly adaptive strategy for survival.
- They move around in their environment, looking for food. If they find it somewhere, they eat it (or store it) and then go look somewhere else for more.
- If they do not find food, they forage in another part of the environment.
- So if the rat just found food in the *right* arm of a T maze, the obvious place to look next time is the *left* arm.
- The rat knows there isn’t any more food in the right arm bc it just ate the food it found there! Indeed, foraging animals such as rats have well-developed spatial representations that allow them to search their environment efficiently.
- If given the opportunity to explore a complex environment like a multiarm maze, rats will systematically go from arm to arm collecting food, rarely returning to an arm they have previously visited.
53
What is Observational Learning?
- **Observational learning** - *a process in which an organism learns by watching the actions of others.*
- In all societies, appropriate social behaviour is passed on from generation to generation largely through observation.
- eg. tasks such as using chopsticks or operating a TV’s remote control are more easily acquired if we watch these activities being carried out before we try them ourselves.
- Eg. even complex motor tasks, such as performing surgery, are learned in part through extensive observation and imitation of models.
- Eg. Albert Bandura’s Bobo doll experiments:
- When children who observed the aggressive actions of the adult models were later allowed to play with a variety of toys, including a child-size Bobo doll, they were more than twice as likely to interact with it in an aggressive manner as a group of children who hadn’t observed the aggressive model.
- Children in these studies showed they were sensitive to the consequences of the actions they observed.
- When they saw the adult models being punished for behaving aggressively, the children showed considerably less aggression.
- When the children observed a model being rewarded and praised for aggressive behaviour, they displayed an increase in aggressive behaviour.
54
What are the implications for observational learning in Bandura's Studies?
- The observational learning seen in Bandura’s studies has implications for social learning and cultural transmission of behaviours, norms, and values.
- eg. research conducted across several decades has revealed that exposure to media violence is linked with an increased likelihood of aggressive or hostile thoughts and behaviour among youth, and observational learning has been implicated as one of the mechanisms responsible for this association.
- A school-based intervention produced less use of violent media (compared with a no-intervention control group) that persisted for 2 years after the intervention concluded, and this effect was associated with a reduction in self-reported aggressive behaviour.
55
What is a Diffusion Chain?
- Research with children has also shown that observational learning is well suited to spreading behaviours widely across a culture through a process called a **diffusion chain** - *a process in which individuals initially learn a behaviour by observing another individual perform that behaviour, and then become models from which other individuals learn that behaviour.*
- The evidence to date indicates that children can learn how to use a novel tool by observing an adult model use that tool; more importantly, they can then become effective models for other children to learn how to use that tool.
- These findings of transmission across multiple “cultural generations” underscore that observational learning is well suited for transmission through a diffusion chain and thus is a potentially powerful means of influencing a culture.
- Observational learning is especially effective when people can observe both experts and novices perform a task, because they can learn to avoid making the same errors that novices made.
56
Can monkeys learn through observational learning?
- YES! But...
- Some chimps saw an experimenter using a rake in its normal position to drag a food reward into reach; this method was rather inefficient bc the teeth were widely spaced and the food sometimes slipped between them.
- Other chimps saw the experimenter use it more efficiently, with the teeth pointing up and the flat edge of the rake touching the ground.
- → Both groups of chimps later used the rake when trying to obtain the food themselves, indicating observational learning.
- But those who observed the more efficient “teeth up” procedure did not use it any more than did those who observed the less efficient “teeth down” procedure.
- Children, in contrast to the chimps, used the rake the exact same way they saw the experimenter use it.
- The chimps seemed only to be learning that the tool could be used to obtain food, whereas the children learned something specific about how to use the tool.
57
What is the enculturation hypothesis?
- Tomasello
- Enculturation hypothesis: being raised in a human culture has a profound effect on the cognitive abilities of chimps, especially their ability to understand the intentions of others when performing tasks such as using tools, which in turn increases their observational learning capacities.
58
What are the Neural Elements of Observational Learning?
- *Mirror neurons* are a type of cell found in the brains of primates (including humans).
- They fire when an animal performs an action, as when a monkey reaches for a food item.
- They also fire when an animal watches someone *else* perform the same specific task.
- Although this “someone else” is usually a fellow member of the same species, some research suggests that mirror neurons in monkeys also fire when they observe humans performing an action.
- eg. monkeys’ mirror neurons fired when they observed humans grasping a piece of food, either to eat it or to place it in a container.
- Mirror neurons may play a critical role in the imitation of behaviour as well as the prediction of future behaviour.
- They are thought to be located in specific subregions of the frontal and parietal lobes, and there is evidence that individual subregions respond most strongly to observing certain kinds of actions.
- If appropriate neurons fire when an animal sees another animal performing an action, it could indicate an awareness of intentionality, or that the animal is anticipating a likely course of future actions.
- Both of these elements — imitation of well-understood behaviours and an awareness of how behaviour is likely to unfold — contribute to observational learning.
- Eg. a study in which participants danced certain sequences, only watched others, and left still others untrained.
- Analysis of the fMRI data revealed that in comparison with the untrained sequences, viewing the previously danced or watched sequences recruited largely similar brain networks, including regions considered part of the mirror neuron system, as well as a couple of brain regions that showed more activity for previously danced than watched videos.
- Results of a surprise dancing test given to participants after the conclusion of scanning showed that performance was better on sequences previously watched than on the untrained sequences, demonstrating significant observational learning; but performance was best of all on the previously danced sequences.
- Related evidence indicates that observational learning of some motor skills relies on the motor cortex, which is known to be critical for motor learning.
- Applying TMS to the motor cortex greatly reduced the amount of observational learning, whereas applying TMS to a control region outside the motor cortex had no effect on observational learning.
- These findings indicate that some kinds of observational learning are grounded in brain regions that are essential for action.
59
What is Implicit Learning?
- **Implicit learning** - *learning that takes place largely independent of awareness of both the process and the products of information acquisition.*
- eg. habituation is a simple kind of implicit learning in which repeated exposure to a stimulus results in a reduced response.
- Some forms of learning start out explicitly but, as one continues to learn, become more implicit over time.
- Eg. driving a car.
- Implicit and explicit learning typically refer to gradually acquiring knowledge and skills over time.
60
Who was B. F. Skinner and what was his contribution to learning?
- Leading proponent of behaviourism.
- Coined the term “operant conditioning,” meaning that a human or non-human animal operates on its environment in some way. It emits responses that produce certain consequences → reinforcement.
- **Operant (behaviour):** A *class of behaviour* that operates on the environment to produce a common environmental consequence.
- Controlled by, or occurs again in the future as a result of, an immediate consequence: punishment or reinforcement.
- eg. pressing a lever in a puzzle box → Thorndike’s puzzle box.
- The behaviour or response is pulling the lever in order to be released and get access to the reinforcer, which is food → a positive consequence.
- This means whatever animal is in the box is going to pull that lever again.
- **Learning:** A change in behaviour due to experience.
- eg. the lever → the lever probably didn’t mean anything to the cat until it was paired with the ability to escape from the puzzle box → this way it gets access to that food.
- So the behaviour of pushing/pulling the lever changes over time through learning → in that the cat continues to press the lever quicker and quicker as the trials go on.
- **Operant learning:** A change in a class of behaviour as a function of the consequences that followed it.
- Can be synonymous with operant conditioning.
- There is a behaviour → the behaviour leads to a consequence → the consequence does something to that behaviour (depending on whether there is reinforcement or punishment).
61
What does Reinforcement always mean?
- Reinforcement always means that we're increasing behaviour
- Reinforcement is:
    1. The occurrence of a particular behaviour
        - eg. a dog crying at night after it's been put to bed
        - eg. a dog begging at the dinner table
    2. Is followed by an immediate consequence
        - the dog's owners come in to comfort it
        - the dog gets fed a scrap of food
    3. That results in the strengthening of the behaviour (ie. the behaviour is more likely to occur again in the future).
        - in this case, the dog will continue to cry in her crib bc her crying continues to result in the attention she gets from her loving owners.
        - the immediate consequence of the dog's begging results in the strengthening of that behaviour → an increase in that behaviour
- Reinforcement = strengthened → always an increase
62
What is Positive Reinforcement?
- *Adding* something *appetitive (good stimulus)* to *increase* behaviour 1. The occurrence of a particular behaviour - do some behaviour - eg. child who successfully completes their homework - eg student who answers a question correctly in class - child helps clean up their toys 2. Is followed by the *addition* *of an appetitive stimulus* (or an *increase in the intensity* *of an appetitive stimulus*) - will increase the response rate/behaviour - get something good/you like for doing that behaviour - parents praise the child and give them extra play time - teacher gives student a sticker or stamp on their hand - parent allows the child to choose the next story to read at bedtime 3. That results in the strengthening of the behaviour - help the chances that you will do that behaviour again. - child will do their homework more because they get extra play time.
63
What does Negative Reinforcement mean?
- Just means that we’re taking away sth bad to increase behaviour - eg. getting up early so you don’t hit rush-hour traffic → avoiding traffic is the removal of the aversive stimulus and you’re taking that away by getting up early. → so you’re more likely to leave early in the future. - *Taking away* something *aversive* to *increase* behaviour. - eg. using sunglasses or an umbrella → sunglasses work bc they remove the sun from getting in your eyes and decrease squinting and eye strain - umbrellas → can be used for light or rain → don’t want to get rained on or wet on the way to class so you can use an umbrella → negatively reinforces you bc you’re taking away the situation of getting soaked before class so you’ll continue to use an umbrella and carry one when it’s going to rain. 1. The occurrence of a particular behaviour. - eg cow herding → cows sometimes herded by riding alongside with cattle prods until they’re moving in the correct direction and eventually learn to do this without the prod - speeding - eg. student who studies really hard in order to avoid failing a test - eg. person taking pain meds to alleviate a headache - child who cleans their room - person wears sunscreen to avoid getting sunburnt 2. Is followed by the *removal of an aversive stimulus* (or a *decrease in the intensity of an aversive stimulus*) - taking away something aversive or reducing it - removing the bad stimulus and by removing the bad stimulus you’re increasing this response - cattle herding with prod → so the response, going in the correct direction, increases in order to remove the aversive stimulus. - eg. you always drive the speed limit in order to avoid getting a ticket - student puts in extra effort to prepare for exams to avoid the negative outcome of failing. - by taking meds, the headache, aversive stimulus, is removed. increasing the likelihood of taking medication in the future. - child who cleans their room stops their parent from nagging → cleans up to avoid the aversive stimulus of parental nagging. - by applying sunscreen, they prevent the discomfort and pain of sunburn → increases the likelihood of applying sunscreen regularly. 3. That results in the strengthening of the behaviour.
64
What are the 2 ways of reinforcing?
- Adding a stimulus = Positive Reinforcement
- Removing a stimulus = Negative Reinforcement
- → reinforcement means that you are increasing a behaviour
- eg. a child who is having a tantrum demands Airheads → eventually the caretaker gives in and buys the Airheads → half the ppl in the store boo, half the ppl clap.
    - What two behaviours have now increased? What has been reinforced?
        - the caretaker is more likely to give him candy when he demands it and has a tantrum
        - the child's tantrum has been reinforced by his caretaker's actions → he is more likely to have a tantrum in the store bc it results in receiving candy from the caregiver.
    - the candy is the positive reinforcer bc receiving the candy is going to increase the tantrum behaviour
    - the negative reinforcement is on the caretaker's side: by giving the candy, the caretaker gets *rid* of the aversive stimulus of their child crying and screaming in public → so they're going to continue giving candy in this kind of situation.
65
What is Aversive Stimulus?
- An event or stimulus that an organism escapes or avoids. - sth that we don’t want - therefore removing an aversive stimulus has the ability to increase or reinforce your behaviour.
66
What is an Escape Behaviour?
- When operant behaviour increases by removing an ongoing event or stimulus
    - in this case the stimulus is aversive
    - eg. the operant behaviour of pressing a lever to stop an electric shock
    - the important component of escape is that you're already in a bad situation with an aversive stimulus and you want to stop it.
    - eg. you're in a box, you're getting shocked, and if you press the lever you'll stop the electric shock.
    - eg. you're in an apparatus where one side is electrified and the other is not → you're on the electrified side and the shock occurs → you can stop it by putting yourself on the other side of the apparatus
    - eg. if you're on the noisy floor of the library and you want a nice quiet space to do some work → you can escape the noisy area by going to the silent floor
67
What is Avoidance Behaviour?
- **Avoidance behaviour:** When operant behaviour increases by preventing the onset of the event or stimulus.
    - pressing a lever to prevent an electric shock altogether
    - escape = stopping sth that has already occurred → you have been exposed to the aversive stimulus and now you're stopping it.
        - eg. being kidnapped → escaping is stopping the torture you might be going through
    - avoidance = preventing sth from ever occurring.
        - eg. kidnapping → avoidance is preventing being kidnapped by always being on your guard: jumping into your car quickly in case there is sb underneath it going to slash your ankles, making sure you check your backseat, making sure there's no one hiding in there.
    - Is it avoidance or escape? You're walking around on a hot day, it's 100 degrees Fahrenheit, you step on the hot asphalt, it burns, so you step onto the grass.
        - Escape → bc you still had to go through burning the bottoms of your feet to get away from it
        - Things like putting on your shoes before you go outside would be avoidance.
            - avoidance behaviour → knowing it's going to be hot, so you put on your shoes beforehand
            - or walking on the grass to begin with
68
What are the 2 types of Reinforcers?
- Unconditional (Primary) Reinforcer: - Conditional (Secondary) Reinforcer:
69
What is an Unconditional (Primary) Reinforcer?
- A reinforcer that acquired its properties as a function of species evolutionary history.
    - meaning that it is very adaptive for us to find this particular reinforcer reinforcing
- Our unlearned reinforcers
- Reinforcing the first time they are presented to an organism.
- Often things that have to do with survival or biological importance → unlearned → we don't have to learn that these are reinforcing.
    - eg. eating - we can't survive unless we eat
    - eg. sex - we can't pass on our genes without it
    - water - we have to consume a specific amount of water
    - social interaction → we're a social species, we tend to need social interaction
    - tend to be appetitive (positive/good) stimuli
- Also applies to aversive stimuli → escaping from sth could be a primary reinforcer bc it's helped us to survive over time.
    - we want to get away from things that are harming us, like extreme heat on asphalt
- Also depends on some amount of deprivation
    - if you haven't had access to any of your primary reinforcers, they're going to be more reinforcing.
    - the more deprived you are of food, the more food is a reinforcer for you.
    - but if you're really full, food might not be as good a reinforcer → instead, social interaction may be more reinforcing for you at that point.
- Often species specific
    - one of my reinforcers is blueberries → but a lion wouldn't find that reinforcing
70
What is a Conditional (Secondary) Reinforcer?
1. **Conditional (Secondary) Reinforcer:** - Otherwise neutral stimuli or events that have acquired the ability to reinforce due to a *contingent relationship* with other, typically unconditional reinforcers. - Learned reinforcer - Eg. money - bc money is paired with the unconditional reinforcer → with money you can buy food, drinks, anything that you might find reinforcing but still the money itself has the ability to reinforce your behaviour. - Imagine there is an apocalypse → person who once enjoyed money and found money to be a conditional reinforcer can no longer use money to buy the things they want and everyone is now using buttons and seashells - In this case, money would no longer be a conditional reinforcer bc it would no longer be conditioned to reinforce their behaviour and that contingent relationship no longer exists. - But typically, we work in some way to receive money. - Money is a conditional reinforcer that will reinforce our behaviour of still going to work to receive that money and this way we have access to the things we need. - eg verbal praise - associated with feelings of approval and accomplishment which are inherently rewarding. - eg. applause - associated with social approval and recognition - eg. a medal - symbolize success and is associated with prestige and recognition as well - eg. getting a grade of A - associated with positive outcomes like academic success and parental approval
71
What are the Variables that effect Reinforcement?
- **Immediacy (Contiguity)**
    - A stimulus is more effective as a reinforcer when it is delivered *immediately* after the behaviour
    - eg. teaching a dog to roll over → if they did the trick you would want to reinforce that behaviour immediately.
- **Contingency**
    - A stimulus is more effective as a reinforcer when it is delivered *contingent* on the behaviour.
    - so your reinforcer is contingent on that behaviour
    - your reinforcer is not coming randomly → eg. you're not getting fed chocolates just anytime you're doing anything; you're getting chocolate for performing the target behaviour.
- **Motivating Operations**
    - Establishing operations make a stimulus more effective as a reinforcer at a particular time. Abolishing operations make a stimulus less potent as a reinforcer at a particular time.
- **Individual Differences**
    - Reinforcers vary from person to person.
    - eg. some ppl are reinforced more by praise; for others it's sth else
- **Magnitude**
    - Generally, a more intense stimulus is a more effective reinforcer.
- ***Task Characteristics***
    - eg. reinforcing a pigeon pecking for food vs. a hawk pecking for food (difficult)
    - pigeons naturally peck → easier to reinforce or modify their behaviour that way
- ***Motivating Operations***
    - ***Establishing Operations (EO)***
        - An operation that *increases* the effectiveness of a reinforcer.
        - eg. Deprivation
            - if you are v hungry, food is going to be very reinforcing → making deprivation an EO
            - the EO of being hungry sets the stage for the reinforcement
        - eg. salty popcorn at the bar → the bar wants you to pay for and buy more drinks, so they put out salty food; it is an establishing operation in that it makes you very dehydrated and wanting a beverage → so buying that beverage is more reinforcing when you're under that particular establishing operation.
    - ***Abolishing Operation (AO)***
        - An operation that *decreases* the effectiveness of a reinforcer.
        - eg. Satiation
            - if you're full, then you're not going to be very motivated by food → you can't consume more food if you don't want more food
            - food as a reinforcer is being decreased in effectiveness through an abolishing operation.
        - eg. your favourite music → once you've listened to it for like 5 hours in a day, 10 more minutes of listening to your favourite music isn't as reinforcing → you've had enough exposure to that particular reinforcer, so the reinforcer itself is not increasing your behaviour as much.
72
What is the Premack Principle?
- In nature, different behaviours have different probabilities of occurring.
    - eg. eating = high-probability behaviour for a rat; lever pressing = low-probability behaviour for a rat
- Suggests that access to high-probability responses reinforces low-probability responses
    - the high-probability response is made contingent on the low-probability response, and thus you see an increase in the low-probability response.
    - H = high probability
    - L = low probability
- If going from L response → H, this reinforces doing the L response/behaviour
    - so in this case, the low-probability behaviour of pressing the lever can be trained in a rat bc the rat will engage in that low-probability behaviour bc it leads to the high-probability behaviour of eating.
    - lever pressing leads to food → reinforces lever pressing bc you'll do the lever pressing to get the food
- H → L ***does not*** reinforce H.
    - getting access to low-probability responses does not reinforce a high-probability response, so this relationship is very directional
    - low has to lead to high
    - lever pressing HAS TO lead to feeding
    - you are not going to eat food to get access to pressing the lever
- eg. if a child prefers playing pinball to eating candy → you can reinforce eating candy by letting them play pinball each time they eat candy.
    - so if a child prefers playing pinball (the high-probability response) → say you're an entrepreneurial dentist → you can get the kid to eat more candy by giving them access to playing pinball
    - this reinforces the low-probability behaviour of eating candy (L)
73
What is a Cumulative Record in Operant Conditioning?
A plot of cumulative responses (y-axis) over time (x-axis)
- a flat line means no responding
- the first step up is labeled Response - the pigeon's first response
- the dip down in the trace is just the pen resetting bc it's too high and has run out of room; it then starts over again, still indicating responding
- so whenever you see a step up, that's a response.
- notice that some of the steps seem wider than others → those responses are occurring less quickly than with the narrower steps → there is more time between responses when we see a wider step, bc the flat line means no responding.
- a cumulative record is just a visual representation of how frequently a participant or subject is responding during a session.
    - can be used to measure objective, overt behaviour and the rate of responding.
- the slope is how we can look at the rate of responding.
    - wide/long steps indicate a low rate of responding
    - if the steps are very short (narrow) → indicates a high rate of responding.
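To make the stepping idea concrete, here is a minimal Python sketch of my own (not from the lecture) that turns a list of response timestamps into cumulative counts; the function name `cumulative_record` and the response times are hypothetical. Steep stretches of the output correspond to fast responding, flat stretches to pauses.

```python
# Minimal sketch (hypothetical data): build a cumulative record from response times.
# Each response adds one "step"; long gaps between timestamps show up as flat stretches.

def cumulative_record(response_times, session_length, bin_size=1.0):
    """Return (time_bins, cumulative_counts): time on the x-axis, cumulative responses on the y-axis."""
    bins, counts = [], []
    total = 0
    t = 0.0
    i = 0
    times = sorted(response_times)
    while t <= session_length:
        while i < len(times) and times[i] <= t:
            total += 1          # a response occurred -> the record steps up by one
            i += 1
        bins.append(t)
        counts.append(total)    # stays flat while no responses occur
        t += bin_size
    return bins, counts

# Made-up session: fast responding early (steep), a pause (flat), then slower responding.
responses = [1, 2, 3, 4, 5, 15, 18, 21, 24]
x, y = cumulative_record(responses, session_length=30)
for t, c in zip(x, y):
    print(f"t={t:4.1f}s  cumulative responses={c}")
```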
74
What is a Continuous Reinforcement Schedule (CRF)?
- **Continuous Reinforcement (CRF) Schedule** - Behaviour is reinforced each time it occurs - Rate of behaviour increases rapidly - Useful when shaping a new behaviour - eg when you’re teaching a dog a new behaviour it is a lot better when you can provide continuous reinforcement at first - difficult to do, not natural - technically a type of fixed ratio, in that it is a FR of 1.
75
How many types of Intermittent Reinforcement Schedules are there? And what are they?
- Four (4) main types:
    - **Fixed-ratio (FR)**
    - **Variable-ratio (VR)**
    - **Fixed-interval (FI)**
    - **Variable-interval (VI)**
- What these schedules do is determine the rate and pattern of the reinforced behaviour that we're looking at.
76
What is a Fixed-Ratio Schedule?
- Behaviour is reinforced after a fixed number of responses.
    - eg. *FR-120* - you would have to peck 120 times before you get the food
    - means you'd have to do a specific number of said behaviour before you are reinforced.
    - eg. midterms and receiving grades are FR-1 → every time you take a midterm you get a grade
    - continuous reinforcement technically falls under FR, but in a sense it is different bc it's continuous; here we're looking at more intermittent schedules. It is similar in that every response gets reinforced, which would be an FR-1.
    - on a cumulative record, the tick mark at a step shows that reinforcement happened.
- Generates a **Post-Reinforcement Pause (PRP)**
    - pausing typically increases with ratio size and reinforcer magnitude.
    - appears as a flat line
    - usually bc after you're reinforced there's a bit of a pause in behaviour → you're not performing the behaviour bc you just received the reinforcer
- Generates steady run rates following the PRP.
- Fixed-ratio schedules work well in factories
    - sometimes in a factory you'll get paid per piece → eg. 5 dollars for 12 shirts that you hem → usually a post-reinforcement pause → take a quick break → then get back to hemming the next 12 shirts.
    - delivering newspapers → paid after a set number of papers, so they don't get reinforced until they deliver, say, the 30th paper
    - eg. classroom token system → for every 3 assignments they complete, they get a token
    - eg. a loyalty program → free coffee after 10 purchases → reinforcement is given after every 10 purchases
77
What is a Variable-Ratio Schedule (VR)?
- The number of responses needed varies each **time**.
    - eg. sometimes it's going to be 3 responses before you get rewarded and sometimes it's going to be 7, and that averages out to 5, so it would be a VR-5.
    - but bc it's an average, sometimes you're rewarded after 5 responses, sometimes after 7, sometimes after 4 or 6.
- Ratio requirement varies around an average.
    - eg. VR-360
    - **Ratios:**
        - 1, 10, 20, 30, 60, 100, 180, 240, 300, 360, 420, 480, 540, 600, 660, 690, 690, 720 and 739 responses
        - Mean = 360 (average ratio)
    - **Shuffled Ordering:**
        - 20, 240, 720, 420, 480, 60, 10, 690, 30, 739, 360, 690, 300, 1, 660, 600, 540, 100, 180
        - goes up and down
- Post-Reinforcement Pauses (PRPs) are rare and very short
    - influenced by the *lowest* ratio and/or the *average* ratio.
    - that is what influences how often we're responding.
    - the reason you don't pause is that you're waiting to see if you get reinforced for that one more peck, bc you don't know when, after how many responses, you're going to get reinforced again.
- Produces higher rates of responding than a comparable Fixed-Ratio schedule.
    - bc of the idea that you don't know when you're getting reinforced.
    - eg. pressing on a slot machine → when you're gambling, you don't know when you're going to hit the jackpot → it's on a variable-ratio schedule, so you'd have to make a certain number of responses, and that number is going to vary based on what has been programmed as the variable ratio in the machine.
- Common in natural environments.
78
What are the two common Variable-Ratio Schedules (VR)?
- **Random-Ratio:**
    - Schedule is controlled by a random number generator.
    - Produces similarly high rates of responding.
    - Type of ratio used in *casino games* and *video games*!
- **Progressive-Ratio:**
    - Ratio requirements move from small to large
        - and increase at an even rate
        - eg. 1, 2, 3, 4, 5, 6, 7, 8...
        - eg. 2, 4, 6, 8, 10...
    - PRPs increase with ratio size
    - Creates a "break-point" measure of how hard an organism will work.
        - at what point is a human or animal unwilling to keep working to receive reinforcement?
        - will they peck 2x, 6x until they get reinforcement? → keep raising the requirement until they stop
- eg. fishing → casting a line several times → the reinforcement of catching a fish happens after a variable number of casts.
- eg. a telemarketer → the reinforcement (a successful sale) happens after a variable number of calls.
79
What are Fixed-Interval Schedules (FI)?
- Behaviour is reinforced when it occurs *after* a given period of time.
    - eg. FI-4 min.
        - means that after 4 min, if you make the response again, you are going to be reinforced
        - you wouldn't get reinforced after 1, 2, or 3 minutes, but if you perform the behaviour at 4 mins, you get reinforced.
        - if you responded at 5 mins you'd still get reinforced bc it's reinforcement *after* that period of time.
    - interval means the time that must pass before a response will be reinforced
- Produces PRPs (post-reinforcement pauses)
- Responding increases gradually, producing a "scallop" shape.
    - bc as the time for the availability of the next reinforcer draws closer, the response rate is going to increase.
    - as we get closer to those 4 mins passing, you're going to respond more bc you know the reinforcement is coming.
- Uncommon in the natural environment.
- Instead of a number of responses being reinforced, it's about making a response after a specific, fixed amount of time.
    - eg. studying → after an exam, you wouldn't study for a while, but as it gets closer to the next exam you'd see an increase in that studying behaviour → you're increasing your responding closer to the next time you have an exam (scallop-shaped)
    - eg. your laundry → no matter how many times you open it before it's done, it's not going to be done. You can open it any time after it's done, though. So the FI would be how long it takes for your laundry to finish → it's about making your response after the fixed interval has passed, and you won't get reinforced until after the time has passed.
    - eg. monthly subscription box → is delivered after a fixed interval of time, like a month.
    - eg. weekly allowance → children receive their allowance every Saturday; the reinforcement (allowance) is given at a fixed interval of time (once a week)
80
What is a Variable Interval Schedule (VI)?
- The timing of the response needed varies each time.
    - when you're supposed to respond in order to get reinforced is going to vary each time.
- Interval varies around an average.
    - eg. VI-3 min
    - **Intervals (in seconds):**
        - someone would be reinforced after 300, 30, 280, 120, 360, 300, 0, 240, 220, 180, 10, 280, 100, 60
- PRPs are rare and short
    - because you don't know when, time-wise, the next reinforcement is available
- Steady rates of responding
    - not as high as VR
- eg. not knowing how long it will rain or how long until it stops raining.
- eg. not knowing when your bus is going to arrive
- eg. pop quiz → given at unpredictable times throughout the semester → reinforcement (quiz) is given after a variable amount of time.
- eg. random drug testing → the reinforcement (drug test) happens after a variable amount of time.
- eg. checking your email → reinforcement (email) will arrive after a variable amount of time.
- Common in natural environments
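Tying the four intermittent schedules together, here is a minimal Python sketch of my own showing the rule each schedule uses to decide whether a given response earns the reinforcer. The specific parameter values (FR-10, VR-5, FI-240 s, VI-180 s) are illustrative assumptions, not values from the lecture.

```python
# Minimal sketch (illustrative parameters): when a response occurs,
# each schedule applies a different rule to decide whether it is reinforced.
import random

def fixed_ratio(responses_since_reinforcer, ratio=10):
    # FR: reinforce every `ratio`-th response (FR-10 here).
    return responses_since_reinforcer >= ratio

def variable_ratio(responses_since_reinforcer, current_requirement):
    # VR: reinforce when the current randomly drawn requirement is met;
    # requirements vary around an average, eg. random.randint(1, 9) averages ~5 (VR-5).
    return responses_since_reinforcer >= current_requirement

def fixed_interval(seconds_since_reinforcer, interval=240):
    # FI: the first response AFTER the interval has elapsed is reinforced (FI-4 min here).
    return seconds_since_reinforcer >= interval

def variable_interval(seconds_since_reinforcer, current_interval):
    # VI: same rule as FI, but the required interval is re-drawn around an average
    # after each reinforcer, eg. random.uniform(0, 360) averages ~180 s (VI-3 min).
    return seconds_since_reinforcer >= current_interval

# Example: a pigeon on VR-5 pecks; check whether this particular peck pays off.
requirement = random.randint(1, 9)      # drawn once per reinforcer earned
pecks_since_food = 4
print(variable_ratio(pecks_since_food, requirement))
```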
81
What is Extinction?
- A procedure that withholds the reinforcers that maintain a behaviour
- Used a lot when we have behaviours that we don't want to occur
- Is not about ignoring what is happening but about withholding the reinforcer.
- During the extinction phase you will see low responding bc the behaviour that was previously reinforced is no longer reinforced, and thus the behaviour stops occurring in the future.
- Extinction works bc we're withholding the reinforcer.

**Extinction:**
1. A behaviour that has been *previously reinforced*
2. *No longer results in the reinforcing consequences*
3. And therefore, the behaviour *stops* occurring in the future.
- If a behaviour is reinforced, even intermittently, it will continue to occur.
    - if there is any level of reinforcement, then the behaviour is going to continue.
    - so we want to fully withhold the reinforcer.
- If a behaviour is no longer followed by reinforcing consequences, then we would say that the behaviour has undergone extinction, or that the behaviour has been extinguished.
82
What actually happens first with Extinction?
**Extinction Burst**
- Increase in frequency, duration, and/or intensity of the unreinforced behaviour during the extinction process.
- so you've taken away whatever the reinforcement is, but first this extinction burst is going to occur.
- eg. you are having car trouble → it's winter, it's 5 am, you're trying to head to the gym → do you typically immediately give up and go back to bed? Do you skip the gym?
    - typically what you see first is that extinction burst → you try a couple of things: put the keys back in, try the car again, hit the steering wheel, open up the hood, look at it, close the hood and realize you don't know what you're doing.
    - all of these represent the extinction burst bc you're still trying to get that reinforcement.
- eg. malfunctioning vending machine → put in your money, pick your favourite after-class snack, and nothing happens.
    - usually you will hit the machine a few times, push the change button, make another selection, maybe put in more money
    - this is the extinction burst → the increased intensity of behaviour that is not going to result in the reinforcer.
- eg. elevators - if it's taking a long time to come down, sometimes you just click the button again.
- eg. Snookie
    - it seems, after some observation, that the screaming/crying is directly related to attention → when she cries she receives attention from her cast mates → this is reinforcing to Snookie, so she ends up crying every time
    - what we want to do is withhold that reinforcement → an extinction procedure
        - we try to get the roommates to not respond and to withhold that reinforcement of attention.
    - what happens first is the extinction burst → a rise in the length and intensity of the screaming behaviour before it starts to extinguish
        - goes up in the first few days
        - then it starts to go down → less and less until we see no screaming
83
What is Spontaneous Recovery?
- **Spontaneous Recovery**
    - The tendency for extinguished behaviour to occur again in situations similar to those in which it had previously been reinforced.
    - eg. Snookie's crying:
        - on day one we see a lot of responding and then a decrease in behaviour and extinction
        - on later days we see a few spots that show signs of spontaneous recovery → we see an increase in this behaviour and then it goes back down.
        - → think of it as trying the behaviour again to see if maybe now it works
            - eg. on day 3 Snookie decides she hasn't got the reinforcement she wants, so maybe it was sth about those earlier days, and if she does it now, it will work.
            - or she tries it at a new club bc she thinks it's a new environment and it will work, so she tries the behaviour again
    - we can observe that repeated sessions are necessary for extinction → you want to make sure you are doing it across multiple different days and even in multiple settings.
        - eg. if we only receive extinction therapy in a clinical setting and only once, it is not going to be very effective
        - but if it's more than once and in multiple different settings, then the therapy is more likely to be effective.
84
What is Punishment?
- Punishment → decreasing a behaviour - General understanding is that the effect of the consequence, if it’s increasing your behaviour, then its reinforced, if its decreasing your behaviour then its punishment through a punisher. - can positively and negatively punish sb BUT both of these will result in a DECREASE in behaviour because punishment decreases behaviour. - Positive Punishment - we’re adding sth aversive to decrease behaviour - if its positive punishment, we want to decrease someone’s behaviour - ADDING something bad → eg. giving them a speeding ticket when they are speeding and ideally this decreases their responding in speeding behaviour - Negative Punishment - We’re TAKING sth AWAY - in this case, we are taking away something GOOD - we are taking away the appetitive stimulus bc we are decreasing the response rate. - eg. if you were a child and you did sth bad → have to go to your room → taking away sth good like time with your family, friends, watching TV to DECREASE whatever inappropriate behaviour was occurring before that
85
How can we Define Punishment?
1. The occurrence of a particular behaviour
2. Is followed by an immediate consequence
3. That results in the *weakening* of the behaviour (ie. the person is *less* likely to engage in the behaviour again in the future).

**Two Ways of Punishing**
- Add a Stimulus = Positive Punishment
    - adding an aversive stimulus in order to decrease responding
    - eg. yelling, spanking, getting a failing grade for answering questions incorrectly, chores for not coming home before curfew.
- Remove a Stimulus = Negative Punishment
    - taking away something good (appetitive)
    - eg. a suspended driving licence → the withdrawal of the pleasure and privilege of driving → negative punishment for drunk driving.
    - eg. taking away toys for hitting a sibling → the removal of the toy aims to decrease the hitting behaviour.
    - eg. loss of screen time
    - eg. a student having their phone taken away during class for texting
    - eg. a reduction in allowance for not completing chores.
86
What is the Premack Principle for Punishment and Reinforcement?
- For Reinforcement:
    - High-probability behaviour reinforces low-probability behaviour.
    - Piano (low prob.) → Coffee (high prob.) = Increase piano
        - eg. if sb wants to increase their piano-playing behaviour and really likes coffee, then what you can do is make this person play the piano to get some coffee (will increase their piano playing)
        - if the coffee is a reinforcer or a high-probability behaviour for them, then this high-probability behaviour can reinforce the low-probability piano-playing behaviour.
- For Punishment:
    - Low-probability behaviour punishes high-probability behaviour.
    - Coffee (high prob.) → Piano (low prob.) = Decrease coffee
        - if you still like coffee and still hate piano, then you're giving them access to coffee first, and it then leads to playing the piano.
        - so every time you choose to have coffee, you have to play piano (punishment), and this results in decreased coffee drinking (a decrease in the high-probability behaviour)
        - so the low-probability behaviour of piano playing actually punishes access to our high-probability behaviour of drinking coffee.
    - This is one way punishment can work if you're forcing yourself to do sth that you don't want to do.
        - every time you do something that you're trying to punish (drinking coffee), you have to do the low-probability behaviour → pairing these two things in that direction is an effective way to do it.
        - every time you drink coffee, you have to play the piano → because you don't like playing the piano, you're going to drink less coffee.
87
What is the difference between Extinction and Negative Punishment?
- **Extinction:** **withholding** the reinforcer that was maintaining the behaviour
    - withholding whatever was reinforcing the behaviour: eg. not being given attention when crying at the club after asking for it
    - is about how the behaviour was being maintained, what the reinforcer was, and making sure you're withholding the correct reinforcer to extinguish the behaviour
- **Negative punishment:** **removing** or **withdrawing** a positive reinforcer after the behaviour.
    - not necessarily the same reinforcer that was maintaining the behaviour to begin with.
    - is about taking anything away that is reinforcing so that the individual will decrease their responding.
88
What are the Variables Affecting Punishment?
- **Contingency:**
    - The degree of correlation between a behaviour and its consequence.
    - they have to be contingent and occur consistently together, in that every time you do that behaviour it's followed by that punishing consequence.
    - the more contingent, the more effective the punishment.
- **Contiguity:**
    - Nearness of events in time (temporal contiguity) or space (spatial contiguity).
    - The longer the delay (less contiguity), the slower the learning.
    - essentially timing
- **Intensity:**
    - The more intense the punisher is in terms of magnitude, the more effective it typically is.
    - eg. 25 volts isn't very different from not getting a shock, but if you're getting a 220-volt shock each time you press the lever, that behaviour is going to decrease quite rapidly and consistently across sessions.
    - Introductory intensity of punishment:
        - Using an effective level of punishment from the beginning is very important!
        - when you're deciding on your punishment and what could be an effective punisher, working up gradually to a strong enough intensity doesn't work as well.
- **Ethical Considerations:**
    - If punishment is to be used, it must be intense enough to suppress the behaviour dramatically
    - this is where ethical considerations come into play: say you were thinking about hitting a child (eg. spanking), which is not supported by research → for this corporal punishment, this aversive stimulus, you would want it to be effective enough → you don't need to beat the child → but it is still not supported by research
    - on the other hand, there are risks to not using an intense enough punisher: the behaviour won't get suppressed, and instead you might help build up a tolerance.
    - if you'd suppressed the behaviour right away with the right intensity, then you wouldn't need as many instances of the punishment, instead of doing repeated instances or needing to increase the intensity as you go.
    - that would result in exposing ppl to more punishment than is necessary, which would be unethical
89
What are the Problems with Punishment?
- **Escape** and **Avoidance**:
    - Punishment can induce **escape** and **avoidance** behaviours.
    - basically, when punishment occurs and you can predict when punishers are going to occur, typically you will see these avoidance behaviours.
    - Examples:
        - Hiding
        - Cheating
        - Lying
            - about doing the behaviour to begin with, so you can avoid the consequence.
    - We don't want to use punishment unless reinforcement isn't working
    - Even if we use punishment, we want to try to use it in conjunction with reinforcement.
    - People can start to avoid the punisher → the person that's administering the punishment, like a caregiver.
        - If this were the case and we're using punishment, this would make it really hard to punish someone's behaviour effectively.

**The Problems with Punishment**
- **Aggression**
    - considered to be a form of escape → an emotional side effect that comes with punishment
    - eg. in situations of shocking a monkey, there have been reported instances of that particular monkey attacking other monkeys within its group → the targets can be things that have no connection to the punishment.
    - is negatively reinforced bc you're escaping it → reinforcing in that way of taking away the aversive stimulus of punishment.
- **Apathy**
    - it is important to use reinforcement as well if you have to use punishment
    - if the behaviours being reinforced are punished and no other options for reinforcement exist, then punishment is attacking the organism's only source of reinforcement, and the person/animal may just do nothing at all and exhibit apathy.
- **Doesn't teach acceptable behaviours!**
    - it only decreases the behaviours that we don't want.
    - we still need to look at the behaviours that we do want and make sure that those particular behaviours are still reinforced.
    - reinforcement should be used before punishment is considered
    - if punishment is necessary, it should be used in conjunction with reinforcement for alternative behaviours
- **Abuse**
    - punishment can often get out of hand and turn into abuse
    - in general, negative punishment is preferred to positive punishment, and we see this consistently in research
    - if there is a way to negatively punish someone instead of positively punishing them, then we tend to go with negative punishment.
    - even more so, reinforcement should be used before punishment.
- **Imitation of the Punisher**
    - punishment can lead to imitation of the punisher
90
What is an Unconditioned Stimulus?
- Unconditioned/Unconditional stimulus → here is a donut - unlearned stimulus - food in general is going to provide a natural, unconditioned stimulus → we actually salivate to break down the food in our mouths and start the digestive process. → is a very natural response that is being elicited. - eg. salivating to food, eyes watering to onions, a feather causing sneezing - eg. the donut
91
What is an Unconditioned Response/Unlearned Response
- you don’t have to learn to respond to this - eg. salivation - eg. gagging, coughing, suckling behaviour from infants, blinking in result to dirt in the air, pupil constriction when exposed to suddenly bright situations, withdrawal from a heat source or a painful stimulus → all natural responses to unconditioned stimuli - eg. we don’t need to learn about food to respond to it → we jsut salivate - unconditional responses are connected to survival value → you want to make these responses and they’re appropriate and adaptive.
92
What is a Conditioned Stimulus?
- Conditional Stimulus (CS)
    - a previously neutral stimulus; bc the tone of the bell on its own doesn't do anything, the neutral stimulus becomes a conditioned stimulus, a *learned* stimulus, when it is paired with the food to elicit salivation
    - eventually the tone of the bell on its own will produce salivation without the food being present.
    - if you repeatedly paired the bell with donuts, then Homer would come to salivate every time he hears the bell.
93
What is a Conditional Response (CR)?
- since you got that pattern of bell → food, you’ve learned to salivate to the bell alone bc you associate it with the food. - as we have the pairings of bell → food, bell → food, bell → food, we now have this relationship of conditioned stimulus to conditioned response in that the tone, bell, is going to produce salivating → it’s going to elicit the salivation
94
How to generate a conditional reflex?
- **Step 1:**
    - Make administration of the US **contingent** on the presentation of the novel/neutral stimulus.
    - so basically every time this donut (our US) shows up, it should also be accompanied by the neutral stimulus → if we want to condition this neutral stimulus
    - bc before all the presentations of tone-donut-tone-donut, the tone that becomes our CS is just a neutral stimulus.
    - so what we're going to do is repeatedly pair them: bell → donut → bell → donut, to build the association that way
    - Pavlovian conditioning occurs when the previously neutral stimulus is paired with the unconditioned stimulus; as a result of this pairing, the neutral stimulus becomes the CS and comes to elicit a CR that is very similar to our UR.
95
How to generate a conditional reflex part 2?
- **Step 2:**
    - Present the CS (formerly the novel stimulus) on its own.
    - if you repeatedly pair bell with donut, bell with donut, bell with donut, and that salivation continues to naturally happen, then what follows is step 2.
    - now if we present just the bell, it's the conditioned stimulus (CS) instead of the neutral stimulus, bc we've learned something.
    - we've been conditioned to this particular bell; so we present the CS (formerly the novel or neutral stimulus) on its own and see if it produces a response, in this case salivation → if it does, that response is now the conditioned response (CR).
    - the difference between the UR and the CR is what the organism is salivating to, and the question is whether the neutral stimulus has BECOME the conditioned stimulus.
    - → it has become the conditioned stimulus if the person is salivating to just that stimulus without the presentation of the donut.
96
What is an example of Classical Conditioning?
- US: getting shot with the airsoft gun /hitting roommate with nerf projectile - UR: flinching, showing discomfort - CS: “that was easy” sound effect - CR: to flinch whenever he heard “that was easy”
97
What is Acquisition?
- **Acquisition** - Formation of an association between a CS (eg. metronome) and a US (eg. food) - you acquire a behaviour. Once acquired, how long does a behaviour persist? For a while as long as there is that association.
98
What is the Rescorla-Wagner Model in Classical Conditioning?
- Animals learn that some CSs are better predictors of the US
- Proposes that animals learn to expect that some predictors (potential CSs) are better indicators of an unconditioned stimulus (US) than others
- The strength of the CS-US association depends on the extent to which the CS predicts the US
    - if the CS reliably precedes the US, it sets up the expectation for the US, strengthening the learned association.
- A key aspect of this model is prediction error
    - explains the strength of association based on prediction errors, where positive prediction errors strengthen associations and negative prediction errors weaken them.
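The Rescorla-Wagner model is usually written as the update rule ΔV = αβ(λ - V), where V is the current associative strength of the CS, λ is the maximum strength the US can support, and (λ - V) is the prediction error. Here is a minimal Python sketch of that rule; the learning rate of 0.3 and the trial sequence (ten bell-food pairings followed by ten bell-alone trials) are my own illustrative choices, not values from the lecture.

```python
# Minimal sketch of the standard Rescorla-Wagner update (illustrative numbers):
# dV = alpha_beta * (lam - V), where (lam - V) is the prediction error.

def rescorla_wagner(trials, alpha_beta=0.3, lam_present=1.0, lam_absent=0.0):
    """trials: list of booleans, True when the US (food) follows the CS (bell) on that trial."""
    V = 0.0                      # associative strength of the CS, starts at zero
    history = []
    for us_present in trials:
        lam = lam_present if us_present else lam_absent
        error = lam - V          # positive error strengthens, negative error weakens
        V += alpha_beta * error
        history.append(round(V, 3))
    return history

# Acquisition (bell -> food on every trial), then extinction (bell alone):
history = rescorla_wagner([True] * 10 + [False] * 10)
print(history)
```

During the pairings V climbs toward λ as the positive prediction error shrinks each trial; during the bell-alone trials V decays back toward zero via negative prediction errors, mirroring the acquisition and extinction cards above.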
99
What is a Prediction Error and what are the Types of Prediction Errors?
- **Prediction Error**
    - Difference between expected and actual outcomes
    - when the actual outcome differs from what is expected, learning occurs to adjust for this discrepancy.
- **Positive Prediction Error:** Occurs when the actual outcome is better than expected, strengthening the CS-US association
    - bc the CS predicts a stronger, more rewarding US than expected
    - eg. a dog being trained to associate a bell (CS) with food (US).
        - initially, every time the bell rings the dog receives food, and it starts salivating at the sound of the bell.
        - if the bell rings and the dog receives an extra tasty treat instead of the usual food (a better outcome than expected), this results in a positive prediction error → the dog learns to expect a more rewarding experience from the bell, strengthening the bell-food association.
- **Negative Prediction Error:** Occurs when the actual outcome is worse than expected, weakening the CS-US association.
    - leads to a decrease in the strength of the CS-US association
    - bc the CS predicts a less rewarding US than expected
    - if the bell rings and no food is presented (a worse outcome than expected), this results in a negative prediction error → the dog learns that the bell is not a reliable predictor of food, which weakens the bell-food association.
- eg. if the CS predicts the US accurately, the prediction error is low and there is little change in the association
    - however, if the CS does not predict the US well, the prediction error is high, leading to a larger adjustment of the CS-US association.
100
What produces a Prediction Error?
- A difference between the expected outcome and the actual outcome produces a prediction error.
- The prediction error, in turn, drives the strengthening (or weakening) of the CS-US association.
101
What is Second Order Conditioning?
- Where a conditioned stimulus (CS) that has already been associated with an unconditioned stimulus (US) is used to condition a new CS.
    - essentially involves pairing a new stimulus with a previously conditioned stimulus rather than with the original unconditioned stimulus.
- **First-Order Conditioning:**
    - CS1 (Bell) + US (Food) → UR (Salivation)
    - CS1 (Bell) → CR (Salivation)
- **Second-Order Conditioning (a form of higher-order conditioning):**
    - CS2 (Light) + CS1 (Bell) → CR (Salivation)
    - after several pairings, the CS2 alone can elicit a conditioned response even though it was never directly paired with the US (unconditioned stimulus)
    - CS2 (Light) → CR (Salivation)
    - eg. after the dog has learned to associate the bell (CS1) with food (US), we introduce a light.
        - a light (CS2) is paired with the bell (CS1)
        - eventually the light (CS2) alone will make the dog salivate (CR)
        - this happens even though the light was never paired with the food directly; it was only paired with the bell
    - a kind of indirect association → the new CS becomes associated with the CR through its association with the first CS, not directly with the US.
- Shows that complex learning can occur in layers, where a stimulus can gain meaning and elicit responses through its connection with another conditioned stimulus.
    - eg. advertising:
        - a celebrity (CS1) is associated with positive feelings (US). A product (CS2) is then shown with the celebrity (CS1).
        - eventually, the product (CS2) alone will elicit those positive feelings.
        - is why you see a lot of celebrities used in advertising, bc you like them.
102
What are the Variables that Affect Respondent Conditioning?
- **Intertrial Interval**
    - Interval between one CS-US exposure (a trial) and another CS-US exposure (a different trial)
    - so the next time you get that particular pairing, how long has it been since you've had the bell paired with the donut?
    - it can vary a lot, but in general longer intervals are better than shorter intervals.
    - these two things are paired in a long-term contingency
- **Age**
    - Degenerative/health effects of aging
        - aging can lead to cognitive decline affecting memory, attention, and processing speed, and this decline can make it harder to form new associations or recall previously learned information.
    - Learning history
        - older individuals typically have a more extensive learning history, which can influence how they interpret and respond to new stimuli.
        - so previous experiences and learned associations might affect how new information is integrated
        - with age, individuals may have encountered a wide range of stimuli and experiences, and this extensive background can either facilitate learning through richer context or complicate learning if new stimuli are less distinguishable from past experiences.
103
What is Equipotentiality?
- **Equipotentiality**
    - Any object or phenomenon can become a conditioned stimulus if associated with an unconditioned stimulus.
    - or any behaviour can be learned as long as it is reinforced, but through research we know that some conditioned stimuli are more likely than others to produce learning
    - some behaviours are learned more easily than others: biology and evolution
104
What is Significant about Taste Aversion and Classical Conditioning?
- **Taste Aversion**
    - Eating a food and becoming ill (regardless of the actual cause) leads to a conditioned taste aversion
    - Aversive learning can occur even if the illness follows the food consumption by several hours
        - and even if it's clear your sickness was caused by something else
    - Taste aversion contradicts Pavlov, as it can form with a delay between taste and sickness
        - the association forms even though the food and sickness are not experienced together in time; the conditioned association between the novel taste and getting sick, even when the illness occurs hours after eating, is strong enough that a taste aversion can be formed in one trial → some ppl can't even stand the smell of the food they associate with the illness.
    - eg. if animals eat a food containing sth that makes them ill, they quickly learn to avoid the taste and smell associated with that food.
    - easy to create taste aversions but difficult to create aversions to other stimuli like sights or sounds.
        - makes sense bc taste and smell are cues for eating behaviours
        - animals that quickly associate a certain taste with illness and avoid that taste are going to be more successful → live and pass on their genes
105
What is Species-Specific Conditioning?
- **Species-Specific Conditioning** - Easier to condition taste aversion into humans and rats who rely more on taste and smell for food - but for birds, its easier to use a visual cue since they rely on vision more than they do on taste.
106
Can any behaviour be shaped through reinforcement?
- Skinner believed that any behaviour could be shaped through reinforcement
    - but now we know that animals have a hard time learning behaviours that run counter to their evolutionary adaptations.
- **Marian and Keller Breland:**
    - **Raccoons** learned to place coins in a piggy bank but then reverted to rubbing the coins, a natural behaviour linked to food rewards
        - the rubbing behaviour was not reinforced → the raccoon associated the coin with the food reward and treated it the same way.
        - rubbing food between its paws is a natural, hardwired raccoon behaviour
        - so the natural behaviour was incompatible with the trained task.
    - **Pigeons** can be easily trained to peck at keys for rewards but struggle to learn to peck to avoid shocks
        - instead, they can learn this by flapping their wings, bc wing flapping is a natural behaviour related to escape → so overall, the associations between the response and the reinforcement need to fit the animal's natural abilities.
107
What is Biological Preparedness?
- Organisms are predisposed to learn certain associations more easily due to evolutionary factors
    - Monkeys are more easily conditioned to fear snakes than flowers
        - dangerous objects versus objects posing little threat
    - Humans may have evolved to favour in-group members and be wary of out-group members, an adaptive trait for early survival.
        - or forming tight-knit groups and being cautious of outsiders had survival advantages
    - Prejudice is influenced by social and environmental factors, such as cultural norms and personal experiences
        - while biological predispositions might play a role, prejudice is more directly shaped by societal influences and individual experiences.
        - can be learned and reinforced through exposure to stereotypes, biased information, and social interactions.
        - unlike biological preparedness, which involves innate tendencies, prejudice is often a result of social learning processes, but we can maybe still see this connection.
108
How does Dopamine relate to Reward?
- Dopamine is related to reward. Dopamine release sets the value of a rewarding unconditioned stimulus in classical conditioning or a rewarding reinforcer in operant conditioning.
- We also know that drugs that block its effects disrupt conditioning.
- **Dopamine Release**
    - Dopamine release occurs in the nucleus accumbens
        - eg. when a hungry rat is given food, there is a dopamine release in the nucleus accumbens in the basal ganglia.
    - More dopamine is released when deprived.
        - think about how food and water are more rewarding when you're hungry or thirsty.
109
What was Schultz's Study on Dopamine?
- Investigated dopamine neuron responses to rewards and predictive cues.
- Rhesus monkeys were trained to perform tasks for rewards (eg. juice), and dopamine activity was measured.
- Dopamine neurons were activated both when rewards were received and when cues predicted rewards.
    - suggests that dopamine is involved in signalling both the anticipation and the receipt of reward.
- Dopamine response increased with expectation and with predictive cues of rewards
    - the study showed that the dopamine response was greater when rewards were more expected or when the cue was more predictive of the reward.
    - this helps in understanding how the brain encodes reward-related info and how learning about reward occurs
- Conclusion: dopamine plays a crucial role in reward processing by signalling the expectation and delivery of rewards, contributing to learning and motivation.
110
What is a Conditioned Emotional Response?
- A response to a stimulus that is acquired through classical conditioning.
- Little Albert
    - a 1-yr-old boy who grew up in a hospital setting
    - Watson and Rayner → emotional conditioning
    - before conditioning, he was shown a white rat and did not show any fear (neutral stimulus)
    - they had an unconditioned stimulus (US) of a loud, startling noise
        - the noise naturally caused a startle response in Albert
    - the white rat was the CS that was paired with the loud noise → each time he played with the rat, the researchers made a loud noise above his head → this pairing of the rat with a startling noise created a conditioned emotional response, in that, after several pairings, Little Albert began to exhibit fear whenever he saw the rat, even without the loud noise.
    - this fear became a conditioned emotional response (CER)
    - this response did not stay limited to the white rat → he also showed fear towards other similar stimuli: a white rabbit, dogs, even ppl wearing furry masks
        - demonstrates how a CER can generalize to similar objects.
- CERs can be positive too!
    - eg. in her case, her cat wears a bell on her collar → the sound of the bell, which is associated with the cat's presence, has become a positive conditioned emotional response for her.
    - whenever she hears that chime it triggers pleasant feelings bc it reminds her of her cat.
111
Your friend was frightened when she was on a tall bridge and found an unusual flower. She is now afraid of heights and tall bridges, but not unusual flowers. Why?
We are biologically prepared to learn to fear some stimuli, such as heights (a higher threat), more than others such as flowers (a lower threat).
112
How does Operant Conditioning and Classical Conditioning effect drug use?
- **Operant Conditioning**
    - Taking drugs is reinforced by the positive effects of the drug, leading to increased and repeated use.
- **Classical Conditioning**
    - Cues present during drug use (eg. the smell, the sight of a needle) become associated with the drug's effects.
    - eg. the smell of coffee can become a CS; the smell alone can lead coffee drinkers to feel energized as if they had drunk the coffee
- **Environmental Cues and Cravings**
    - Exposure to cues associated with drug use (eg. people, places) can trigger cravings
        - the environmental cues previously signalled the ingestion of the drug, so someone can experience cravings if they encounter these cues; if the cravings are not satisfied, there can be withdrawal, the unpleasant physiological and psychological state of tension and anxiety that occurs when stopping drug use.
    - Cravings can lead to withdrawal if not satisfied, resulting in tension and anxiety
- **Tolerance**
    - the need for increasing drug doses to achieve the same effect.
    - tolerance is higher when a drug is taken in the same location where it was previously taken
        - the idea is that the body has learned to expect the drug where the person is, and it compensates for the drug by preparing to metabolize it prior to the actual drug delivery; the effect can be so strong that someone with a drug addiction can take doses that would be fatal for someone who doesn't use the drug frequently.
    - that's also why, if you take a larger dose in an unfamiliar setting, you are more likely to OD → the body doesn't have those cues, so it does not compensate for the drug.
113
Why might it be easier for someone with an addiction to abstain in a clinic rather than their own neighbourhood?
The clinic will most likely present the person with fewer learned drug cues that could potentially induce cravings.
114
What is Imitation?
- We model behaviours of those we find attractive, high-status, or similar to ourselves, often implicitly.
    - sometimes it is implicit → we don't always know or admit that we change our way of speaking or dressing to mirror those we admire and like.
- We imitate what we see in others
    - eg. one study showed that teens whose favourite actor smoked were more likely to smoke, and teens who saw lots of movies in which characters smoked were more likely to smoke → one reason why we don't see smoking in movies much anymore.
115
What is Vicarious Learning?
- **Vicarious Learning**
    - Learning by observing others being rewarded or punished.
    - Bobo doll again
        - children watched an adult model engaging in specific behaviours and either being rewarded or punished for those behaviours
        - key findings: children who saw the model being reinforced were more likely to imitate the behaviour.
            - → they learned that engaging in the behaviour could lead to positive outcomes
        - children who saw the model being punished were less likely to imitate the behaviour → they learned that the behaviour could lead to negative outcomes
    - also supports Bandura's social learning theory, which emphasizes learning through observation and the impact of reinforcement and punishment observed in others
116
What is Instructed Learning?
- **Instructed Learning** - Learning through verbal instruction and demonstration. - Teaching proper hand-washing technique - Here the objective is to teach individuals the hand-washing technique to prevent the spread of germs and illness. - would include a demonstration → a health instructor or video demonstrates the proper hand-washing technique, emphasizing each step, showing it, instructor would explain the importance of each step and germs.
117
You see another student suspended for cheating and now you are less likely to cheat. Why?
Social learning → in this case your behaviour has been influenced by vicarious learning.
118
What is Motivation?
- **Motivation**: Process that influences **goal-directed behaviour.** - Direction - direction is a specific goal that an individual wants to achieve - Persistence - if an individual is continuously trying to achieve their goal - Vigour - how hard an individual is trying to achieve their goal
119
What is Instinct Theory?
- Based on Darwin’s **theory of evolution**
- **Instinct:** an inherited predisposition to behave a certain way in response to certain stimuli.
- The idea behind instinct theory is that instincts motivate a large portion of our behaviour.
- It was popular because instincts are common among members of a species and have a genetic basis rather than relying on learning; by the 1920s researchers had identified thousands of proposed instincts.
- It has since faded because there was a lack of evidence and because it rests on circular reasoning.
    - eg. Why are ppl greedy? → greed is an instinct → how do we know greed is an instinct? → ppl are greedy → and back around.
- Instinct Theory 2.0 - Modern View:
    - Motives are based on evolution.
    - Genes that increase the chances of survival and reproduction are passed on to the next generation.
    - The adaptive significance of behaviour → behaviour has evolved for specific reasons.
    - eg. one reason humans are social creatures is that ancestors who shared resources and worked together increased their chances of survival and were more likely to pass their genes on to the next generation.
120
What is Homeostasis?
- **Homeostasis:**
    - Internal physiological equilibrium.
- **Sensors**
    - Several internal mechanisms are required to maintain homeostasis; first, a sensor mechanism is needed to detect changes in the internal environment.
- **Response System**
    - Second, a response system that can restore equilibrium.
- **Control Centre**
    - Lastly, a control centre that receives information from the sensors and activates the response system.
    - eg. when you’re hot, your body tries to cool you down by sweating.
- Eg. the control centre functions like the thermostat in a furnace or an air conditioning unit.
    - Once a thermostat is set to a fixed temperature, or set point, sensors detect temperature changes in either direction.
    - The control unit responds by turning on the furnace or air conditioner until the sensor indicates that the set point has been restored, and then turns it back off (see the control-loop sketch below).
- Homeostatic regulation can also involve learned behaviours → when we’re hot, we not only sweat, we may also seek a cool drink or a shady place to sit.
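A minimal sketch of the sensor → control centre → response system loop described above, using the thermostat analogy; the function names, set point, and tolerance values are hypothetical and chosen only for illustration.

```python
# Hypothetical thermostat-style homeostatic loop:
# a sensor reading is compared to a set point by the control centre,
# which activates a response system only when the reading drifts too far.

SET_POINT = 21.0   # target temperature (the "set point")
TOLERANCE = 0.5    # how far the reading may drift before a response is triggered

def control_centre(sensor_reading: float) -> str:
    """Compare the sensor reading to the set point and choose a response."""
    if sensor_reading > SET_POINT + TOLERANCE:
        return "cool"   # analogous to sweating / the air conditioner turning on
    if sensor_reading < SET_POINT - TOLERANCE:
        return "heat"   # analogous to shivering / the furnace turning on
    return "off"        # within tolerance: no corrective response needed

# Example: readings drifting above and below the set point
for reading in [20.0, 20.8, 21.4, 22.1, 21.0]:
    print(reading, "->", control_centre(reading))
```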
121
What is Drive Theory?
- Clark Hull
- **Drive Theory:** Physiological disruptions to homeostasis produce drives.
- **Drives:** States of internal tension that motivate organisms to behave in ways that reduce the tension.
    - Drives like hunger and thirst arise from tissue deficits, such as a lack of food or water, and provide a source of energy that pushes an organism into action.
- Hull proposed that reducing drives is the ultimate goal of motivated behaviour.
- Homeostatic models are currently applied to many aspects of motivation, such as the regulation of hunger, sleep, or body temperature.
- Drive theory is less influential than in the past → people often behave in ways that seem to increase rather than reduce their state of tension or arousal.
    - eg. skipping meals to diet
122
What is Incentive Theory?
Incentive Theory: Our behaviour is motivated by the desire for external stimuli.
- **Incentive**: A thing that motivates or encourages an organism to do something.
    - Environmental stimuli that pull an organism toward a goal.
- Incentive theories focus on external stimuli that motivate behaviour; historically, incentives and drives were often linked.
    - Hull argued that all reinforcement involves some kind of biological drive reduction.
    - eg. food is an incentive because it eliminates the drive of hunger, but this view is no longer held.
- Modern incentive theory emphasizes the pull of external stimuli and how stimuli with high incentive value can motivate behaviour even in the absence of biological need.
    - eg. finishing a meal and having no remaining biological need for food, but happily eating dessert when somebody places our favourite cake or pie on the table → here behaviour is motivated not by biological need but by the incentive value and pull of the external stimulus, the dessert.
    - Incentive theories of motivation have often been applied in a similar way to explain drug addiction.
- Drive theory = “push”; incentives = “pull”.
    - Historically, Hull linked both “push” and “pull” to **biological drive reduction**.
123
What is Expectancy Theory?
- Expectancy x value theory - Goal-directed behaviours driven by: 1. Strength of expectation - that particular behaviours will lead to a goal 2. Value of goal - aka incentive value. - motivation = expectancy x incentive value.
124
How can we apply Expectancy Theory?
- Eg. There are 3 students, James, Lenora, and Harrison, who are all in a calculus class and have similar math abilities.
    - James studies hard in the hopes of getting an A.
    - Lenora and Harrison put in just enough effort to pass with a C.
    - Why is James the only one trying to get an A in the class when Lenora and Harrison have similar abilities in math?
    - James works hard because he believes that the more he studies, the greater the probability of getting an A, and he values an A very highly.
    - Lenora also believes that studying hard will lead to an A, but getting an A holds a low value for her in this course.
    - In contrast, Harrison values an A but believes that, because the tests are tricky, studying hard is unlikely to produce a high grade.
    - motivation = expectancy x incentive value (see the worked numbers below)
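A hedged worked example of the expectancy x value formula for the three students; the 0-1 numbers below are invented for illustration only and are not part of the card.

```latex
% Illustrative only: the expectancy and value numbers are hypothetical.
\[
\text{motivation} = \text{expectancy} \times \text{incentive value}
\]
\[
\begin{aligned}
\text{James:}    &\quad 0.9 \times 0.9 = 0.81 &&\text{(high expectancy, high value)} \\
\text{Lenora:}   &\quad 0.9 \times 0.2 = 0.18 &&\text{(high expectancy, low value)} \\
\text{Harrison:} &\quad 0.2 \times 0.9 = 0.18 &&\text{(low expectancy, high value)}
\end{aligned}
\]
```

Because the two terms multiply, a low value on either one drags motivation down, which is why only James works hard.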
125
What are the Types of Motivation?
- **Intrinsic Motivation:**
    - Performing an activity for its own sake, because you find it enjoyable or stimulating.
    - eg. students who read textbooks because they find them interesting or want to learn more.
- **Extrinsic Motivation:**
    - Performing an activity to obtain a reward or avoid punishment.
    - eg. students who read their textbooks only because they want to get good grades.
- External rewards can decrease intrinsic motivation (see the over-justification hypothesis).
126
What is Over-Justification Hypothesis?
- Giving ppl extrinsic rewards to perform activities that they intrinsically enjoy may “over-justify” that behaviour and reduce intrinsic motivation.
- If we begin to perceive that we are performing for the external rewards rather than for enjoyment, the rewards turn play into work, and it might be difficult to return to play if those rewards are no longer available.
    - eg. it is common for ppl to report that an activity is no longer enjoyable once they are paid for it.
    - eg. a student who makes jewelry as a hobby because they enjoy it and then begins to sell it may report a marked decrease in their enjoyment of the activity.
127
What is Psychodynamic Theory? (1923)
- Sigmund Freud - one of the most prominent psychodynamic theorists.
- Most behaviour stems from **unconscious impulses**.
    - Freud said our behaviour arises from a never-ending battle between unconscious impulses struggling for release and the **psychodynamic defenses** we use to keep them under control.
    - These unconscious motives, especially instinctive sexual and aggressive drives, are often disguised and expressed through socially acceptable behaviours.
    - Thus hidden aggressive impulses may fuel one’s career as a lawyer, athlete, or business person.
- Strongest drives = sex and aggression.
- Little research supports this theory.
    - But his work did lead to other psychodynamic theories that highlighted different needs, such as needs for self-esteem and relatedness to other people.
- Today’s diverse psychodynamic theories continue to emphasize that, along with conscious mental processes, unconscious motives and tensions guide how we act and feel.
128
What is Humanistic Theory and who is it attributed to?
- Abraham Maslow
- Believed that psychology’s other perspectives ignored a key motive: our striving for personal growth.
- Distinguished between deficiency needs, which concern physical and social survival, and growth needs, which are uniquely human and motivate us to develop our potential.
- Proposed the concept of a need hierarchy, with deficiency needs at the bottom and growth needs at the top.
    - Once our basic physiological needs are satisfied → we focus on our needs for safety and security → belongingness and love needs → esteem needs → cognitive needs → aesthetic needs → self-actualization.
- Self-actualization is the need to fulfill our potential and is the ultimate human motive: striving to be all that you can be.
- Critics question the validity of Maslow’s need hierarchy and believe the concept of self-actualization is vague.
- Still, the model draws valuable attention to the human desire for growth, incorporates a wide range of psychological and biological motives, and has influenced thinking in such diverse fields as philosophy, education, and business.
129
What is Metabolism?
The rate at which the body uses energy (calories).
130
What is Basal Metabolism?
- “Resting” metabolism = 2/3 of normal energy use. - the resting, continuous metabolic work of body cells - several mechanisms attempt to keep the body in energy homeostasis by regulating food intake. - there are short-term signals that start meals by producing hunger and stop food intake by producing satiety. - Your body also monitors long-term signals based on how much body fat you have. These signals adjust appetite and metabolism to compensate for times when you overeat or eat too little in the short term.
131
Is Eating Linked to immediate Needs?
No. - Hunger is not necessarily linked to immediate energy needs - Homeostasis: prevent “running low” on energy - designed to prevent you from running low on energy in the first place - in evolutionary terms, an organism that does not eat until its energy supply is low in any absolute sense would be at a serious survival disadvantage
132
What is a Set Point?
- **Set Point:**
    - A biologically determined physiological standard.
    - Researchers believe there is a set point → an internal physiological standard around which our body weight, or more accurately our fat mass, is regulated.
    - This view holds that if we overeat or eat too little, homeostatic mechanisms will return us close to our original weight, which is our set point.
    - Some researchers propose that as we gain or lose weight, homeostatic mechanisms kick in and make it harder to keep gaining or losing weight, but do not necessarily return us to our original weight.
    - Over time we may settle at this new weight.
133
What is the significance of hunger pangs in starting a meal?
- Muscular contractions **correlate with** feelings of hunger.
- In an early experiment, A. L. Washburn showcased a unique scientific talent: he swallowed a balloon.
    - Once it reached his stomach, the balloon was inflated and hooked up to an amplifying device to record his stomach contractions. Washburn then pressed a key every time he felt hungry.
    - The findings revealed that Washburn’s stomach contractions did indeed correspond to subjective feelings of hunger.
- Other research indicates that hunger pangs do not depend on an empty stomach, or on a stomach at all → animals display hunger and satiety even if all nerves from their stomach to their brain are cut, and ppl who have had their stomach surgically removed for medical reasons continue to feel hungry and full.
- Thus other signals must help to trigger hunger.
134
What is the role of Glucose in Signalling the start of a meal?
- When you eat, digestive enzymes break food down into various nutrients → one key nutrient is glucose, a simple sugar that is the body’s, and especially the brain’s, major source of immediately usable fuel.
- After a meal, some glucose is transported into cells to provide energy, but a large portion is transferred to your liver and fat cells, where it is converted into other nutrients and stored for later use.
- Sensors in the hypothalamus and the liver monitor blood glucose concentration.
    - When blood glucose levels decrease, the liver responds by converting stored nutrients back into glucose. This action produces a drop-rise glucose pattern.
    - The meaning of this drop-rise pattern is not certain, but it may contain information that helps the brain to regulate hunger.
- As we eat, several bodily signals combine and ultimately cause us to end our meal.
- Stomach and intestinal distension (bloating) are satiety signals.
    - The walls of these organs stretch as food fills them, sending nerve signals to the brain.
    - This does not mean that the stomach literally has to be full for us to feel satisfied: nutritionally rich food seems to produce satiety more quickly than an equal volume of less nutritious food, suggesting that some satiety signals respond to food content.
    - Patients who have had their stomachs removed continue to experience satiety, not only because of intestinal distension but also because of chemical signals.
- The intestines respond to food by releasing several hormones called peptides that help digest your meal, eg. CCK.
- Pattern of increase and decrease in **blood glucose levels**.
- **Cholecystokinin (CCK)**: a peptide released into your bloodstream by the small intestine as food arrives from the stomach.
    - It travels to the brain and stimulates receptors in several regions that decrease eating.
    - Hungry animals injected with CCK will stop feeding or reduce the size of their meals.
    - Humans who receive small doses of CCK report feeling full after eating less food.
    - CCK decreases feelings of hunger.
135
What is Ghrelin's role as a Signal that starts a meal?
- **Ghrelin:**
    - Another peptide!
    - Levels are highest just before a meal.
    - Increases feelings of hunger and eating.
    - Released into the bloodstream by the stomach and the small intestine.
    - Thought to be one of the most important hunger signals in humans.
    - Humans given an injection of ghrelin report feeling hungry and, given the opportunity to eat, consume more food than participants given injections of saline.
    - Ghrelin has also been reported to increase thoughts about food and mental images of food, especially the mental image of a favourite meal.
    - Levels decline rapidly after eating and begin to rise again as the next meal approaches.
    - Ghrelin release can also be triggered by food-related cues, such as pictures of food.
136
What is the role of Leptin?
- **Leptin:**
    - A hormone secreted by fat cells.
    - Fat cells are not passive storage sites for fat; rather, they actively regulate food intake and weight by secreting leptin.
    - Leptin is a hormone that decreases appetite.
    - As we gain fat, more leptin is secreted into the blood and reaches the brain, where receptor sites on certain neurons detect it.
    - These leptin signals influence neural pathways to decrease appetite and increase energy expenditure.
    - Leptin is a background signal; it does not make us feel full like CCK and other satiety signals that respond directly to food intake during a meal.
    - Instead, leptin may regulate appetite by increasing the potency of these other signals.
    - Thus, as we gain fat and secrete more leptin, we tend to eat less because these mealtime satiety factors make us feel full sooner.
    - As we lose fat and secrete less leptin, it takes more food and a greater accumulation of satiety signals to make us feel full.
    - High leptin levels may tell the brain there’s plenty of fat tissue, so it’s time to eat less.
137
What is the OB gene and what is its role?
- Evidence for leptin’s important role comes from research with genetically obese mice. A gene called the ob gene (ob for obesity) normally directs fat cells to produce leptin.
    - Mice with an ob gene mutation lack leptin; as a result, their brains do not receive this “curb your appetite” signal, and the mice overeat and become obese.
    - Daily leptin injections reduce their appetite, increase their energy expenditure, and make them thinner.
- Another strain of obese mice produces ample leptin, but because of a mutation in a different gene, called the db gene, their brain receptors are insensitive to leptin.
    - The “curb your appetite” signal is there, but they can’t detect it, and they become obese.
    - Injecting these mice with leptin does not reduce their food intake or weight.
- Are these specific ob and db gene mutations a major source of human obesity?
    - Probably not, for both genetic conditions seem to be rare in humans; however, when they do occur, these conditions are associated with extreme obesity, suggesting the importance of normal leptin functioning in human weight regulation.
- Unfortunately, there is reason to doubt that leptin injections could be a magical cure to help ppl lose weight.
    - Obese ppl already have ample leptin in their blood because of their fat mass, but for currently unknown reasons their brains appear to be insensitive to the leptin signal.
138
What are the 4 main signals in hunger and what is their effect?
1. GLUCOSE → a drop in blood glucose signals hunger
2. CCK → decreases hunger
3. LEPTIN → decreases hunger (a long-term signal)
4. GHRELIN → increases hunger
139
What Brain Mechanisms regulate hunger?
- Many parts of the brain, ranging from the primitive brain stem to the lofty cerebral cortex, play a role in regulating hunger and eating.
- **Hypothalamus**
    - Involved in hunger, thirst, sexual arousal, and body temperature.
    - Areas near the side, called the lateral hypothalamus (LH), seem to be a hunger “on” centre.
    - In contrast, structures in the lower middle area, called the ventromedial hypothalamus (VMH), seem to be a hunger “off” centre.
    - Scientists later learned that although the LH and VMH play a role in hunger regulation, they are not really hunger on and off centres.
- Many pathways involve the paraventricular nucleus (PVN), a cluster of neurons packed with receptor sites for various neurotransmitters that stimulate or reduce appetite.
    - The PVN appears to integrate several different short-term and long-term signals that influence metabolic and digestive processes.
140
What is the part of your brain that tells you whether you're hungry or not?
The hypothalamus!
- It helps determine whether your body is rich in energy or poor in energy → full or hungry?
- Full: the concentration of glucose in our blood is high → we release the hormone insulin.
    - Insulin is released to store the glucose in your blood from whatever meal you just ate; it also binds with receptors in the hypothalamus, inhibiting hunger signals and indicating to the brain that you’re full and not hungry.
- Hungry: low blood concentration of glucose → you’re not releasing insulin, so you’re not inhibiting the hypothalamus.
- High lipids: if we ate a really fatty meal and have high levels of lipids (fat) in our bloodstream → this triggers the release of the hormone leptin, which is similar to insulin in that it signals the presence of energy-rich nutrients in the bloodstream and binds with receptors in the hypothalamus to inhibit the feeling of hunger. When lipid concentration is low, you’re not releasing leptin and you’re not inhibiting the hypothalamus.
- Leptin levels rarely change, because they are based more on the amount of adipose tissue in your body than on the amount of lipid in your blood.
    - So even though a fatty, greasy meal releases lipids into our bloodstream, the fat tissue already in our body is far greater, so leptin levels change little.
141
What does our stomach do in regulating hunger?
- Stomachs
    - After we’ve eaten a lot, our stomach will be pretty full with food; however, if we haven’t eaten in a while, our stomach can be pretty empty.
    - Full stomach: little or no ghrelin is released, so the hypothalamus does not receive a hunger signal.
    - Empty stomach: the stomach growls and releases ghrelin → the hormone that is released into the bloodstream to tell the hypothalamus that we are hungry → the presence of ghrelin tells the hypothalamus that we’re hungry and motivates us to find some food.
142
What 3 hormones are the main players that determine whether we're hungry?
These 3 hormones: insulin, leptin and ghrelin are the main players that determine whether we’re hungry.
143
What are some Psychological Effects associated with eating and not eating?
- Studies of beauty pageant contestants, magazines, and fashion models indicate a clear trend between the 1950s and the 1990s toward a thinner, leaner, and increasingly unrealistic ideal female body shape.
- The culturally defined “ideal female body” has been changing again in recent years: an ultra-fit physique has been added to extreme thinness.
- Relative to men, over the past 50 years, women have become increasingly dissatisfied with their body image.
- Psychological Aspects
    - A study by Fallon and Rozin (1985) found that university women overestimated how thin they needed to be to meet men’s preferences, whereas men overestimated how bulky they needed to be to match women’s preferences.
    - Women also perceive their body shape to be heavier than the ideal, whereas men perceive their body shape to be closer to the ideal.
- According to Fredrickson and Roberts’s objectification theory, Western culture teaches women to view their bodies as objects, as external observers would.
    - This perspective increases body shame and anxiety, which in turn leads to eating restrictions and even eating disorders.
- The norms that thin equals attractive and that you can never be too thin are strongly ingrained by adolescence.
    - Even early in adolescence, such social pressures and beliefs can lead to a high level of dissatisfaction with one’s own body.
144
What are the Environmental and Cultural Factors that Influence How Much We Eat?
- **Food availability**
    - Food scarcity limits consumption.
    - Abundant low-cost food, including high-fat foods, in many countries contributes to a high rate of obesity among children and adults.
- **Food taste, variety, and serving size**
    - These powerfully regulate eating.
    - Good-tasting food positively reinforces eating and increases food consumption, but during a meal and from meal to meal we can become tired of eating the same thing and stop the meal more quickly.
    - Food variety increases consumption (eg. a buffet).
    - The amount of food served also influences how much we eat.
    - Through classical conditioning we learn to associate the smell and sight of food with its taste, and these food cues can trigger hunger.
    - Eating may be the last thing on your mind until your nose detects the aroma from a bakery, a pizzeria, or a popcorn machine.
    - Many environmental stimuli affect food intake: you typically eat more when dining with other people rather than eating alone, in part because the meals take longer.
- **Cultural influences**
    - Cultural norms influence when, how, and what we eat. eg. in countries such as Spain and Greece, people often eat dinner in the late evening, sometimes around 9 pm, whereas most North Americans have long since finished their supper.
- Obesity
    - More than half of adult Canadians are either overweight or obese.
    - In children, almost 20% are overweight and 8% are obese.
    - Stats Canada reported a 500% increase in childhood obesity between 1980 and 2004.
    - Obesity is often blamed on a lack of willpower, a lack of character, or emotional disturbances, but research does not consistently find psychological differences between obese and non-obese people.
- **Genetic influences:**
    - Metabolism: basal metabolic rate and the tendency to store energy as either fat or lean tissue.
    - Overall, genetic factors appear to account for about 40-70% of the variation in body mass among women and men.
    - More than 200 genes have been identified as possible contributors to human obesity, and in most cases it is the combined effect of a subset of genes, rather than single gene variations, that produces an increased risk.
    - However, although heredity affects our susceptibility to obesity, so does the environment: genes have not changed much in recent decades, but obesity rates in Canada and the US have increased significantly.
- **Environmental influences:**
    - According to James Hill and John Peters, the culprits are an abundance of inexpensive, tasty, high-fat foods available almost everywhere, a cultural emphasis on getting the best value that contributes to the “supersizing” of items, and technological advances that decrease the need for daily physical activity.
    - In short: inexpensive poor-quality food, “supersizing,” and decreased activity.
145
What is Bateman's Principle?
- Bateman proposed a theory that is controversial in the modern day: that females and males supply their offspring with different levels of parental involvement.
- There is more variability in the reproductive success of males than of females.
- Based on experiments with fruit flies.
    - Males are capable of producing many offspring in a short period of time.
    - Females produce only a limited number of eggs, so they invest their energy in their small number of offspring → as a result, females tend to be choosy when picking their partners.
    - Males therefore have to compete with one another to find a partner.
146
What are Emotions?
- Mental states or feelings associated with our evaluation of our experiences.
- Positive or negative affective states consisting of a pattern of cognitive, physiological, and behavioural reactions to events that have relevance to important goals or motives.
- These events can be internal, such as your memories, or external, like laughing with your friends.
147
What is the link between emotion and motivation?
- Motivation and emotion are closely linked.
- Gratified, threatened, or frustrated.
- Motivation and emotion both involve states of arousal, and both can trigger patterns of action.
    - eg. flight in the case of fear, and attack in the case of anger.
- The terms “motivation” and “emotion” are both derived from the Latin word movere → to move.
- Emotion theorist Richard Lazarus believed there is always a link between motives and emotions because we react emotionally only when our motives and goals are gratified, threatened, or frustrated.
- Emotional reactions are especially strong when an experience is pertinent to goals that are very important to us.
- To differentiate between the two, we can place them in a stimulus-response framework:
    - Some theorists suggest that motives operate as internal stimuli that energize and direct behaviour toward some goal or incentive,
    - whereas emotions are basically reactions or responses to events that relate to important goals.
148
What is the Adaptive Function of Emotion?
- Emotions signal that something important is happening.
- **Emotions direct attention**
    - Negative emotions such as fear or anger are part of an emergency arousal system that increases the chances of survival by energizing, directing, and sustaining fighting or fleeing when we are confronted by threat or danger.
    - Positive emotions help us form lasting social relationships and work to broaden our thinking and behaviour, so that we explore, consider new ideas, and try out new ways to achieve goals.
- **Social communication**
    - Emotions provide observable information about our internal states and intentions → through this they influence how other people behave toward us.
    - They influence others’ behaviour.
    - eg. the effects of a baby’s crying or smiling on adults → parents and other adults report feeling irritated or unhappy when babies cry.
    - Parents and adults generally respond to crying infants with caretaking responses that have obvious survival value for the infant.
    - Positive emotions also pay off for babies: a smiling infant is likely to increase a parent’s feelings of love and caring, thereby increasing the likelihood that the child’s biological and emotional needs will be satisfied.
    - Within 1-3 days after birth, human infants respond to another infant’s crying with crying of their own.
    - Children who are less than 1 year old respond with negative affect to vocal expressions of fear by their mother; by 2 years of age they react to their mother’s real or simulated signs of distress with efforts to help or comfort her.
    - Adults’ expressions of sadness and distress also evoke concern, empathy, and helping behaviour from others.
149
What are the 4 Features of Emotion?
1. Response to eliciting stimuli - external or internal.
2. Result of **cognitive appraisal** - emotional responses result from our interpretation, or cognitive appraisal, of these stimuli, which gives a situation its perceived meaning and significance.
3. Physiological responses - our bodies respond physiologically to our appraisals; we may become physically stirred up, as in joy, fear, or anger, or we may experience decreased arousal, as in contentment or depression.
4. Include **behavioural tendencies**
    - Some are expressive behaviours, such as crying or smiling with joy.
    - Others are instrumental behaviours → ways of doing something about the stimulus that aroused the emotion, such as studying for an anxiety-arousing test or running away.
- Eg. an insulting remark from another person (eliciting stimulus) may evoke a cognitive appraisal that one has been unfairly demeaned, an increase in physiological arousal, a clenching of jaws and fists (expressive behaviour), and a verbal attack on the other person (instrumental behaviour).
- The emotional components can influence one another in both directions.
    - Cognition can trigger physiological changes and expressive behaviour, which in turn can affect what we think about the situation and about ourselves.
- Emotion is a dynamic, ongoing process; thus any of its four elements can change rapidly as a situation and our responses to it influence one another.
    - eg. if anger begins to escalate during a disagreement → you might choose to respond in a way aimed at making the other person less upset, and this evokes a positive reaction or apology from the other person → which helps to defuse the situation and reduces your negative appraisal of the other person and your level of emotional arousal.
150
Are Eliciting Stimuli internal or external?
- Can be internal or external.
    - eg. being angry at or proud of someone or something.
    - The stimuli that trigger cognitive appraisals and emotional responses are not always external; they can be internal, such as mental images and memories.
    - Most of us can work up a state of anger simply by recalling or imagining a painful injustice or insult from the past, or evoke warm feelings by recalling significant positive experiences.
- Influence of **innate** factors
    - Innate biological factors help to determine which stimuli have the greatest potential to arouse emotions.
    - Infants come equipped with the capacity to respond emotionally, with either interest or distress, to events in their environment.
    - Adults may be biologically primed to experience emotions in response to certain stimuli that have evolutionary significance.
- Learning?
    - Learning influences the ability of particular objects or people to arouse emotions, similar to what we discussed in chapter 7: previous experiences can make certain people or situations eliciting stimuli for certain emotions.
    - eg. Little Albert, who learned to fear white rats and other furry objects.
- On the broadest level, cultures have different standards for defining the good, the bad, and the ugly, and these standards affect how eliciting stimuli will be appraised and responded to emotionally.
    - Physical features that provoke feelings of infatuation in one culture, such as ornamental facial scars, may elicit feelings of disgust or other emotions in another culture.
    - In Western societies, recent increases in the popularity and acceptability of body piercings and tattoos illustrate how quickly cultural standards can change.
151
Is there an effect of culture on Cognitive Appraisal?
- Effect of culture?
- Similarities in appraisals for basic emotions.
    - In one study conducted in 27 countries, researchers found strong cross-cultural similarities in the types of appraisals that evoked joy, anger, fear, sadness, disgust, shame, and guilt.
- Differences in appraisals of other emotions.
    - In another cross-cultural study comparing American and Asian people in Japan and Hong Kong, Robert Marrow and his colleagues found that Americans reported feeling happiness, pride, and hope more frequently than the Japanese did.
    - The Japanese, in turn, reported more frequent feelings of shame and regret than did people from Hong Kong.
- The same situation can also elicit different appraisals, and resulting emotional reactions, depending on one’s culture.
    - Eg. the circumstance of being alone.
    - For Tahitians, being alone is appraised as an opportunity for bad spirits to bother a person, and fear is the most common emotional response.
    - In Western cultures, being alone may at times represent a welcome respite from the frantic pace of daily life, evoking contentment and happiness.
- Thus, where appraisals are concerned, there seem to be certain universals but also some degree of cultural diversity in the more subtle aspects of interpreting situations.
152
What brain regions are involved in emotion?
- It is clear, though, that emotions involve important interactions between cortical and subcortical areas.
- Subcortical areas:
    - The hypothalamus, amygdala, and hippocampus play major roles in emotion.
- Cerebral cortex (prefrontal cortex)
    - Has many connections with the hypothalamus and limbic system, allowing constant communication between cortical and subcortical regions.
    - Cognitive appraisals surely involve activity in the cortex, where the mechanisms for language and complex thought reside.
    - The ability to regulate emotion depends heavily on the executive functions of the prefrontal cortex, which lies immediately behind the forehead.
- Thalamus
    - Relays sensory input to various parts of the brain.
    - It sends messages along two independent neural pathways: one travelling to the cortex and the other directly to the amygdala.
    - This means that the amygdala can receive direct input from the senses and can generate emotional reactions before the cerebral cortex has had time to fully interpret what is causing the reaction.
    - Shortly afterward, the cerebral cortex responds to the more carefully processed cognitive interpretation of the situation.
- Amygdala
    - Helps to coordinate and trigger physiological and behavioural responses to emotionally arousing situations.
    - Researchers suggest that this primitive mechanism, which is the only emotional mechanism in species such as birds and reptiles, has survival value because it enables the organism to react with great speed.
- Cortex
    - Where sensory input is organized into perceptions and evaluated by the thinking, linguistic part of the brain.
153
The existence of a dual system for emotional processing helps explain what?
- The existence of a dual system for emotional processing may help to explain some puzzling aspects of our emotional lives.
    - eg. most of us have had the experience of suddenly feeling emotional without fully understanding why.
    - It also suggests that people are capable of having two simultaneous emotional reactions to the same event: a conscious one, occurring as a result of cortical activity, and an unconscious one, triggered by the amygdala.
    - This may explain instances where ppl are puzzled by behavioural reactions that seem to be the opposite of the emotions they are consciously experiencing.
154
How is Brain Activity involved in the regulation of emotional behaviour?
- Of particular interest is the prefrontal cortex:
    - The seat of executive functioning, involving reasoning, planning, decision making, and the control of impulsivity.
    - Deficits in prefrontal functioning allow emotions to be expressed in speech and actions in an unregulated manner that can have negative consequences.
155
What is the idea behind Polygraphs?
- **Autonomic responses** - the idea behind polygraphs.
- Autonomic = automatic.
- The fight-or-flight response is produced by the sympathetic branch of the autonomic nervous system and by hormones from the endocrine system.
    - The sympathetic nervous system produces arousal within a few seconds by directly stimulating the organs and muscles of the body, while the endocrine system releases epinephrine, cortisol, and other stress hormones into the bloodstream.
    - These hormones produce physiological effects like those triggered by the sympathetic nervous system, but their effects are longer lasting and can keep the body aroused for a considerable length of time.
- Different emotions produce different patterns of arousal.
    - On the one hand, many investigators conclude that complex and subtle emotions, such as jealousy and tenderness, do not involve distinct patterns of arousal.
    - On the other hand, autonomic patterns do show subtle differences for certain basic emotions such as anger and fear.
    - eg. heart rate speeds up in both fear and anger, but there are differences in where the blood gets pumped: anger causes more blood to flow to the hands and feet, whereas fear reduces blood flow to the hands and feet, providing a scientific basis for the expression “cold feet.”
    - Whether ppl can detect such physiological differences in a manner that would allow them to identify and label their emotions is an unanswered question.
- We cannot easily control autonomic nervous system activation in response to emotion-evoking stimuli.
    - This simple observation led to the idea that changes in physiological arousal might tell us whether someone is lying or telling the truth.
    - The rationale is that when people lie they become anxious, and increases in anxiety are reflected in physiological responses such as increases in heart rate, sweating, and skin conductance (which increases because of sweat gland activity).
    - In contrast, if they are answering honestly, there should be no change in physiological arousal.
    - A polygraph is used to measure such changes.
    - Controversial → research has found an especially high rate of false positives (identifying an innocent person as guilty) with polygraph tests.
156
What are Expressive Behaviours for?
- eg. when exposed to slides showing angry or happy faces, university students responded with subtle facial muscle responses denoting displeasure or pleasure within a third of a second.
- **Observable** displays of emotion.
    - Used to infer the emotions of others.
    - Sometimes others’ emotional displays can evoke a similar emotional response in us: empathy.
    - eg. feeling the same emotion as a character when watching a movie.
- **Fundamental emotional patterns**
    - Emotions are expressed similarly across cultures, and individuals who are blind from birth express emotions in the same ways.
    - Charles Darwin argued that emotional displays are products of evolution that developed because they contributed to species survival.
    - Darwin emphasized the basic similarity of emotional expression in animals and humans.
    - eg. both wolves and humans bare their teeth when angry.
    - Darwin did not maintain that all forms of emotional expression are innate, but he believed that many of them are.
    - Two key findings from other researchers suggest that humans have innate, or fundamental, emotional patterns:
    - 1st → the expression of certain emotions, such as rage and terror, is similar across a variety of cultures, suggesting that certain expressive behaviour patterns are wired into the nervous system.
    - 2nd → children who are blind from birth seem to express these basic emotions in the same ways sighted children do, ruling out the possibility that they are learned only through observation.
    - Others argue that additional expressive behaviours result from some combination of these innate patterns.
    - The evolutionary view does not assume that all emotional expressions are innate, nor does it deny that innate emotional expressions can be modified or inhibited as a result of social learning.
157
There is General Agreement across cultures for what?
- **Facial expressions**
    - There is general agreement across cultures.
    - Although facial expressions can be good cues for judging emotions, even ppl within the same culture learn to express the same emotions differently. Thus, some ppl have learned to appear very calm when they are angry.
    - Fortunately, we usually know something about the situation to which the person is reacting, and this is often an important basis for judging emotions.
    - eg. if a woman is crying, is it out of sadness or happiness?
    - Many experiments show that ppl’s accuracy and agreement in labelling emotions from pictures is considerably higher when the pictures show a background situation.
- Women are generally more accurate judges of emotional expressions than men.
    - Perhaps the ability to accurately read emotions has greater adaptive significance for women, whose traditional role within many cultures has been to care for others and attend to their needs.
    - This may also come from cultural encouragement for women to be sensitive to others’ emotions and to express their feelings openly.
    - However, it is important to note that men who work in professions that emphasize these skills, such as psychotherapy, drama, and art, are as accurate as women in judging emotions, suggesting that these skills can be learned.
158
What is the effect of culture on Expressive Behaviours?
- **Cultural display rules**
    - Norms for emotional expression within a given culture are called display rules.
    - Certain gestures, body postures, and physical movements can convey vastly different meanings in different cultures.
    - eg. using the upright thumb gesture while hitchhiking in certain regions of Greece can result in negative consequences → tire tracks on one’s body.
    - In those regions an upright thumb is the equivalent of a raised middle finger in North America.
    - In some ways emotional expressions differ across cultures just as gestures do, since the display rules of a particular culture dictate when and how specific emotions are to be expressed.
    - In the culture of India, sticking out one’s tongue is a display rule for expressing feelings of shame.
    - Some Asian cultures, such as the Japanese, are more subdued in their display of emotion in public settings than Europeans and Americans.
- Biological factors and culture shape expression.
    - A number of emotion theorists conclude that innate biological factors and cultural display rules combine to shape emotional expression.
159
What are Instrumental Behaviours?
- Behaviours directed at achieving a goal.
- Emotions function as “calls to action,” requiring some sort of response to the situation that arouses the emotion.
    - eg. a highly anxious student must find a way to cope with their impending test → this is an instrumental behaviour directed at achieving some goal.
- Arousal can enhance performance on simple tasks.
- Researchers who analyzed cross-cultural studies concluded that instrumental actions fall into 5 broad categories:
    1. Moving toward others (as in love)
    2. Moving away from others (as in fear)
    3. Moving against others (as in anger)
    4. Helplessness
    5. Submission
- Within each of these 5 broad categories, many different goal-directed behaviours can occur.
- Whether an instrumental behaviour will be successful depends on the appropriateness of the response to the situation, the skill with which it is carried out, and the level of emotional arousal that accompanies the behaviour.
- In many situations, the relation between emotional arousal and performance takes the shape of an upside-down or inverted U → as physiological arousal increases up to some optimal level, performance improves; beyond that optimal level, further increases in arousal actually impair performance (see the sketch below).
- The relation between arousal and performance depends not only on arousal level but also on task complexity.
- Task complexity
    - How complicated the task is, how much precision it requires, and how well it has been learned.
    - Generally speaking, as task complexity increases, the optimal level of arousal for maximum performance decreases; thus even a moderate level of arousal can disrupt performance on a highly complex task.
    - Performance drops off less at high levels of arousal for simple tasks than for complex ones.
    - Even the highest levels of arousal can enhance performance on very simple tasks such as running or lifting something.
    - This fact may account for the seemingly superhuman feats we hear about occasionally, eg. a small mother lifting the front end of a truck to free her child trapped under the wheels.
- For complex tasks, the relation between arousal and performance is different.
    - High emotionality can interfere with the ability to attend to and process information effectively.
    - Thus ppl may underachieve on intelligence test items that require complicated mental processing if they are too anxious.
    - On physical tasks, muscle tension can interfere with the skillful execution of complex movements.
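A hedged sketch of the inverted-U relation described above; the quadratic functional form and symbols below are illustrative assumptions, not a formula from the source.

```latex
% Illustrative only: performance P rises with arousal a up to an optimal
% level a*, then falls; a* is assumed to be lower for more complex tasks.
\[
P(a) \;=\; P_{\max} \;-\; k\,(a - a^{*})^{2},
\qquad \text{where } a^{*}\ \text{decreases as task complexity increases.}
\]
```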
160
What is Facial Feedback Hypothesis?
- **Facial Feedback Hypothesis** - we are more likely to feel emotions that correspond to our facial expressions.
    - Facial muscles send feedback signals to the brain, and these muscles are active even in patients with spinal injuries who receive no sensory input from below the neck.
    - According to the facial feedback hypothesis, this feedback to the brain may play a key role in determining the nature and intensity of the emotion that we experience.
    - Research shows that positive or negative responses can be triggered by the contraction of specific facial muscles.
    - Especially noteworthy are studies in which participants do not know they are activating muscles used in a specific emotional expression.
    - eg. having participants hold a pencil in their teeth, which activates the muscles used in smiling → they rated themselves as feeling more pleasant than when they held the pencil with their lips, which activates muscles involved in frowning.
    - Participants also rated cartoons as funnier when holding the pencil in their teeth (activating the “happy” muscles) than while holding it with their lips.
161
What did Darwin propose in the Descent of Man?
- Descent of Man - written by Charles Darwin - proposed that humans and African apes share common ancestors
162
What did Darwin propose in "The Expression of the Emotions in Man and Animals"?
- Written by Charles Darwin.
- Developed ideas separate from those in The Descent of Man by proposing that humans and animals share several similarities in terms of our brains, and specifically in terms of our emotions.
- Darwin noted that the facial expressions involved in the human expression of emotion were also present in the expression of emotion in other animals.
- Also discussed the idea that emotions are adaptive and motivational.
    - Emotional expressions evolved in both humans and animals because they protected our ancestors from harm and also helped them to communicate with others.
- Similarities between humans and animals
    - Many modern-day theorists do believe that humans and animals share similar emotions; however, it seems that higher-order animals tend to have more emotions than lower-order animals such as insects.
    - And there are many emotions that are suspected to occur only in humans.