Unit 4 Flashcards

1
Q

Behavioral Perspective

A

focuses on observable behaviors and how they are learned through interactions with the environment, emphasizing the role of reinforcement and punishment

2
Q

Classical conditioning

A

a type of learning where an organism learns to associate a neutral stimulus with a naturally occurring stimulus (unconditioned stimulus), leading to a learned response (conditioned response) to the previously neutral stimulus.

3
Q

Acquisition

A

the initial stage of learning or conditioning, where a response is first established and gradually strengthened through repeated pairings of a neutral stimulus and an unconditioned stimulus.

4
Q

Unconditioned stimulus

A

a stimulus that automatically and naturally triggers a specific response (the unconditioned response or UR) without any prior learning or conditioning.

5
Q

Unconditioned response

A

an automatic, reflexive, and naturally occurring behavior that is elicited by a specific stimulus without prior learning or conditioning.

6
Q

Conditioned stimulus

A

a previously neutral stimulus that, after repeated pairing with an unconditioned stimulus (UCS), elicits a conditioned response (CR).

7
Q

Conditioned response

A

a learned reaction to a previously neutral stimulus that, through repeated pairings with an unconditioned stimulus, now elicits that response.

8
Q

Extinction

A

the gradual weakening and eventual disappearance of a learned behavior (or conditioned response) when it is no longer reinforced or paired with the original stimulus.

9
Q

Spontaneous recovery

A

the unexpected reappearance of a previously extinguished conditioned response after a rest period, even without further pairings of the conditioned stimulus (CS) and unconditioned stimulus (US).

10
Q

Stimulus discrimination

A

the ability of an organism to distinguish between a specific stimulus and similar stimuli, and respond only to the specific stimulus that has been paired with a particular outcome (like reinforcement or punishment).

11
Q

Generalization

A

the tendency to respond to stimuli that are similar to the original conditioned stimulus, even if they are not identical. This means applying a learned behavior or response to new situations or stimuli that resemble the original learned situation.

12
Q

Higher-order Conditioning

A

occurs when a previously neutral stimulus becomes associated with a conditioned stimulus, leading to a conditioned response, even without the original unconditioned stimulus being present.

13
Q

Counterconditioning

A

a behavior therapy technique that uses classical conditioning to replace an unwanted response to a stimulus with a desired or more adaptive response.

14
Q

Taste Aversion

A

a learned avoidance of a specific food or taste after experiencing a negative reaction, like illness, following its consumption, even if the food was not the actual cause of the illness.

15
Q

Biological preparedness

A

the innate or natural predisposition of organisms to learn certain associations and responses, especially those that have been adaptive for survival, more easily than others.

16
Q

One-trial learning

A

the acquisition of a behavior or association after a single exposure to a stimulus or experience, rather than through repeated trials.
One-trial learning, also known as “single-trial learning,” shows that some associations, such as taste aversions, can form immediately after a single exposure to a situation or stimulus.

17
Q

Habituation

A

a diminished or disappearing response to a stimulus after repeated or prolonged exposure to it. It’s a form of non-associative learning where an organism becomes accustomed to a stimulus, and the initial response to it weakens or ceases.

18
Q

Operant Conditioning

A

a learning process where voluntary behaviors are modified by associating them with consequences, such as rewards or punishments, to increase or decrease their likelihood of occurrence.

19
Q

Law of Effect

A

proposed by Edward Thorndike, states that behaviors followed by positive consequences (reinforcement) are more likely to be repeated, while those followed by negative consequences (punishment) are less likely to be repeated.
Core Principle:
The Law of Effect is a fundamental principle in learning theory, suggesting that behaviors are shaped by their consequences.
Thorndike’s Contribution:
Edward Thorndike, a prominent psychologist, first proposed this principle in his work on learning in animals.
Reinforcement and Punishment:
Reinforcement: Behaviors that lead to satisfying or positive outcomes are more likely to be repeated.
Punishment: Behaviors that lead to unsatisfying or negative outcomes are less likely to be repeated.
Examples:
Positive Reinforcement: A child who is praised for good grades is more likely to study hard in the future.
Negative Reinforcement: A person who uses an umbrella to stay dry is more likely to use an umbrella in the rain in the future.
Punishment: A student whose studying is followed by a failing grade may become less likely to study for the next test.
Connection to Operant Conditioning:
The Law of Effect laid the foundation for the principles of operant conditioning, which focuses on how behaviors are learned through consequences.
B.F. Skinner’s Work:
B.F. Skinner, a key figure in operant conditioning, built upon Thorndike’s ideas, using the “Skinner box” to study how animals learn through reinforcement and punishment.

20
Q

Positive Reinforcement

A

adding a desirable stimulus after a behavior to increase the likelihood of that behavior occurring again.

21
Q

Negative Reinforcement

A

removing an unpleasant stimulus to increase the likelihood of a desired behavior occurring in the future.
What it is:
Negative reinforcement is a type of operant conditioning where a behavior is strengthened by the removal of something undesirable.
How it works:
By removing an aversive (unpleasant) stimulus, the behavior that led to the removal is more likely to be repeated.
Examples:
Taking away chores: A child learns that good behavior leads to not having to do chores, reinforcing the good behavior.
Lessening leash tension: A dog learns that walking politely leads to the removal of tension on the leash, reinforcing the polite walking.
Turning off an alarm: A person learns that buckling their seatbelt stops the annoying beeping sound, reinforcing seatbelt use.
Key difference from punishment:
While negative punishment also involves removing something, punishment aims to decrease a behavior, whereas negative reinforcement aims to increase a behavior.
Not about “good” or “bad”:
The terms “positive” and “negative” in reinforcement don’t refer to the quality of the stimulus, but rather whether something is added or removed.

22
Q

Positive Punishment

A

adding an unpleasant or aversive stimulus to decrease the likelihood of a behavior occurring again.
Operant Conditioning:
Positive punishment is a type of operant conditioning, where behaviors are learned through their consequences.
Adding an Aversive Stimulus:
The key characteristic of positive punishment is that an unpleasant stimulus is added after the unwanted behavior.
Decreasing Behavior:
The goal of positive punishment is to reduce the frequency or likelihood of the undesirable behavior occurring in the future.
Examples:
Examples of positive punishment include:
Giving a child extra chores for misbehaving.
Receiving a speeding ticket for driving over the speed limit.
Getting scolded for talking back.
Contrast with Negative Punishment:
It’s important to distinguish positive punishment from negative punishment, which involves removing a desirable stimulus to decrease behavior.

23
Q

Negative Punishment

A

removing a desirable stimulus to decrease the likelihood of a particular behavior occurring again.
Definition:
Negative punishment, also known as punishment by removal, is a type of operant conditioning where a desirable or pleasant stimulus is taken away after a behavior occurs, leading to a decrease in the frequency of that behavior.
Key Concept:
The core idea is that removing something good or enjoyable makes an individual less likely to engage in the behavior that led to the removal.
Examples:
Taking away a child’s favorite toy after they misbehave.
Losing screen time privileges for not completing homework.
Being grounded for breaking a rule.
Losing reward tokens for not completing a task.
Contrast with Negative Reinforcement:
It’s important to distinguish negative punishment from negative reinforcement. While both involve removing something, negative reinforcement strengthens a behavior by removing an unpleasant stimulus, whereas negative punishment weakens a behavior by removing a desirable stimulus.
Example of Negative Reinforcement:
Removing chores for the weekend when a child keeps their room clean all week.

24
Q

Primary Reinforcer

A

a stimulus that naturally satisfies a biological need, like food or water, and does not require learning to be reinforcing.
Definition:
Primary reinforcers are stimuli that are inherently rewarding and satisfying, meaning they don’t need to be learned or associated with other stimuli to be effective.
Examples:
Common examples include food, water, warmth, and relief from pain or discomfort.
Contrast with Secondary Reinforcers:
Secondary reinforcers, on the other hand, are stimuli that gain their reinforcing power through association with primary reinforcers (e.g., money, which can be used to buy food).
Operant Conditioning:
Primary reinforcers are a key concept in operant conditioning, which involves learning through the consequences of behavior.
Biological Need:
Primary reinforcers are effective because they satisfy basic biological needs, such as hunger, thirst, or the need for rest.
Example:
In a Skinner box experiment, a rat might be given a food pellet as a primary reinforcer for pressing a lever.

25
Q

Secondary Reinforcer

A

a stimulus that gains its reinforcing power through association with a primary reinforcer, meaning it’s learned to be reinforcing, rather than having inherent value.
Definition:
A secondary reinforcer is a stimulus that, through repeated pairing with a primary reinforcer (something that naturally satisfies a biological need), becomes associated with that primary reinforcer and gains reinforcing power.
Example:
A dog might not initially find the sound of a clicker rewarding, but if the clicker is consistently paired with a food treat (a primary reinforcer), the dog will eventually learn to associate the clicker sound with the treat and respond to the clicker as a reward.
Learned Association:
The key is that secondary reinforcers are learned associations, not innate responses.
Examples of Secondary Reinforcers:
Money: Money itself doesn’t satisfy a biological need, but it can be used to obtain primary reinforcers like food or shelter.
Grades: Grades are not inherently rewarding, but they can be associated with praise or other rewards, making them a secondary reinforcer.
Praise: Verbal praise, like “good job,” is not inherently rewarding, but it can be associated with primary reinforcers like affection or attention, making it a secondary reinforcer.
Tokens: Tokens earned for good behavior can be exchanged for other rewards, making them a secondary reinforcer.
Relationship to Operant Conditioning:
Secondary reinforcers play a crucial role in operant conditioning, where behaviors are learned through consequences.

26
Q

Shaping

A

a method of operant conditioning where a complex target behavior is taught by reinforcing successive approximations of that behavior, moving incrementally towards the desired outcome.

27
Q

Instinctive Drift

A

the tendency for an animal to revert to its natural, instinctual behaviors, even when those behaviors interfere with a previously learned behavior through operant conditioning.
Example:
The Brelands famously demonstrated this with raccoons, who, despite being trained to put coins in a box, would revert to their natural food-washing behavior (rubbing coins together) when given multiple coins.

28
Q

Superstitious behavior

A

actions that are repeated, not because they cause a desired outcome, but because they are coincidentally followed by a positive or negative event, leading to a false belief in control.
Accidental Reinforcement/Punishment:
Superstitious behaviors arise when a behavior is followed by a reinforcer (something that increases the likelihood of a behavior) or punisher (something that decreases the likelihood of a behavior) even though the behavior itself doesn’t cause the reinforcer or punisher.
False Belief in Control:
The individual mistakenly believes that performing the superstitious behavior is what caused the positive or negative outcome, leading to the repetition of the behavior.
Example:
A baseball player might start tapping the plate with his bat every time he comes up to bat because he got a hit after doing so once, even though the tapping had nothing to do with the hit.
Operant Conditioning:
Superstitious behavior is a classic example of operant conditioning, where a behavior is strengthened or weakened based on its consequences.
B.F. Skinner’s Experiment:
B.F. Skinner conducted an experiment where he randomly delivered food to pigeons, and the pigeons began to develop peculiar behaviors that they seemed to associate with getting the food, such as pecking at a certain spot or turning in a circle.

29
Q

Learned helplessness

A

a state where an individual, after repeatedly experiencing unavoidable aversive stimuli, becomes passive and accepts the situation, even when opportunities for escape or change are available.
Definition:
Learned helplessness is a mental state that develops when someone experiences repeated negative events or situations that they feel are beyond their control.
Mechanism:
The individual learns that their actions or efforts do not lead to a better outcome, leading them to give up trying to change the situation, even when they could.
Examples:
A student who consistently fails at a subject may develop learned helplessness and stop studying, even when they have the ability to improve.
Someone who experiences repeated failures in a relationship might become convinced that they are unable to find a good partner, even if they are in a new relationship.
Connection to Depression:
Learned helplessness is considered a factor in the development of depression and other mental health issues.
Attributions:
Learned helplessness can be linked to internal attributions (blaming oneself) or external attributions (blaming external factors).
Neuroscience:
Neuroscience research suggests that the brain’s default assumption is that control is absent; a sense of control must be learned, and that learning can be undone by prolonged aversive stimulation.
Self-Efficacy:
Learned helplessness is related to the concept of self-efficacy, which is an individual’s belief in their ability to achieve goals.

30
Q

Continuous vs. Partial Reinforcement

A

continuous reinforcement means rewarding a desired behavior every time it occurs, while partial reinforcement (also known as intermittent reinforcement) means rewarding the behavior only sometimes.
Continuous Reinforcement:
Involves reinforcing a behavior every time it’s displayed.
Leads to rapid learning, but also rapid extinction of the behavior if reinforcement stops.
Example: Giving a child a sticker every time they raise their hand in class.
Partial Reinforcement:
Reinforces a behavior only some of the time.
Leads to slower initial learning, but the learned behavior is more resistant to extinction.
Example: A child who raises their hand doesn’t always get a sticker, but gets one on an intermittent basis.

31
Q

Fixed Ratio Schedule

A

a type of reinforcement schedule where a response is reinforced after a fixed, predetermined number of responses, regardless of the time elapsed.
Operant Conditioning:
Fixed-ratio schedules are a type of reinforcement schedule used in operant conditioning, a learning process where behavior is modified through consequences.
Reinforcement:
Reinforcement is any stimulus that increases the likelihood of a behavior being repeated.
Fixed Ratio:
In a fixed-ratio schedule, reinforcement is delivered after a specific number of responses are made.
Example: If a rat is reinforced (e.g., with food) every time it presses a lever five times, it’s on an FR-5 schedule.
High Response Rate:
Fixed-ratio schedules tend to produce a high, steady rate of responding, as individuals learn that their effort directly leads to rewards.
Post-Reinforcement Pause:
There can be a brief pause in responding immediately after reinforcement, as the individual waits for the next opportunity to receive reinforcement.
Examples:
A factory worker getting paid for every 10 units they produce.
A student getting a sticker for every 5 math problems completed.

32
Q

Variable Ratio Schedule

A

a type of partial reinforcement schedule where a response is reinforced after an unpredictable number of responses, leading to a high, steady rate of responding.
Partial Reinforcement:
Unlike continuous reinforcement (where every response is reinforced), partial reinforcement only reinforces some responses.
Variable Ratio:
In a variable-ratio schedule, the number of responses required for reinforcement varies randomly.
High, Steady Response Rate:
This schedule tends to produce a high and consistent rate of responding because individuals don’t know when the next reinforcement will occur, leading to persistent attempts.
Examples:
Gambling, slot machines, and fishing are common examples of behaviors reinforced under a variable-ratio schedule, as the reward (winning) is unpredictable.
Resistant to Extinction:
Variable-ratio schedules are more resistant to extinction than fixed-ratio schedules because the unpredictable nature of reinforcement keeps the behavior going.

33
Q

Fixed Interval Schedule

A

a reward is delivered after a specific, predetermined amount of time has passed, regardless of how many responses occur, leading to a pattern of slow responding after reinforcement and a burst of responding as the time for reinforcement approaches.
Definition:
A fixed-interval schedule is a type of reinforcement schedule where a reward or reinforcement is delivered after a specific, fixed amount of time has elapsed.
Example:
A common example is getting a weekly paycheck, where the reward (paycheck) is delivered every week, regardless of how much work is done during the week.
Behavioral Pattern:
This schedule tends to produce a pattern of slow responding immediately after reinforcement, followed by an increase in responding as the time for the next reinforcement approaches.
Why it’s called “fixed”:
The key characteristic is that the time interval between reinforcements is consistent and predictable.
Other examples:
An annual performance review
Giving a quiz every Friday morning
A monthly meeting that happens every first Thursday
A worker getting paid every two weeks

34
Q

Variable Interval Schedule

A

rewards a behavior after an unpredictable amount of time has passed, leading to a relatively steady, moderate rate of responding.
Definition:
A variable-interval schedule reinforces a behavior after a varying amount of time has elapsed, rather than a fixed time period.
Example:
Imagine a fishing trip where you don’t know when the fish will bite, but you keep fishing because you might get a reward (a fish) at any moment.
Behavioral Impact:
Because the reinforcement is unpredictable, individuals tend to respond at a consistent rate, as they don’t know when the next reinforcement might occur.
Resistant to Extinction:
Variable-interval schedules are more resistant to extinction than fixed-interval schedules because the uncertainty of reinforcement keeps the behavior going.
Contrast with Fixed-Interval:
In contrast to a fixed-interval schedule, where reinforcement occurs after a set amount of time, variable-interval schedules introduce an element of unpredictability, making them more effective in maintaining behavior.

35
Q

Pattern graphing schedules (FI scalloped graph)

A

understanding how reinforcement schedules produce characteristic response patterns that can be visualized by graphing responses over time. These patterns, like the scalloped response rate of a fixed-interval schedule, help predict behavior based on reinforcement timing and frequency.
1. Fixed Ratio (FR):
Definition: Reinforcement occurs after a fixed number of responses.
Example: A worker gets paid for every 10 items produced.
Graphing Pattern: A straight line with consistent marks indicating reinforcement points.
Response Rate: High and steady.
2. Variable Ratio (VR):
Definition: Reinforcement occurs after a variable number of responses.
Example: A slot machine pays out randomly after a variable number of pulls.
Graphing Pattern: A straight line with random points where reinforcement occurs.
Response Rate: High and steady.
3. Fixed Interval (FI):
Definition: Reinforcement occurs after a fixed amount of time, regardless of the number of responses.
Example: A worker gets paid every Friday, regardless of how much they work.
Graphing Pattern: A scalloped shape, with low response rates initially, increasing as the reinforcement time approaches.
Response Rate: Low initially, increasing as the reinforcement time nears.
4. Variable Interval (VI):
Definition: Reinforcement occurs after a variable amount of time, regardless of the number of responses.
Example: Checking your email frequently, as you don’t know when an important message will arrive.
Graphing Pattern: A relatively straight line with random points where reinforcement occurs.
Response Rate: Steady, but lower than variable ratio.
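
A minimal sketch (not part of the original flashcards) of how the four partial-reinforcement schedules could be simulated, assuming a learner who makes exactly one response per time step; the simulate helper, the ratio of 5, and the interval of 10 are arbitrary choices for illustration. Printing the reinforcement times shows the regular spacing of FR/FI versus the unpredictable spacing of VR/VI that underlies the graph patterns listed above.

```python
# Illustrative (hypothetical) simulation of reinforcement schedules.
# Assumes one response per time step; parameters are arbitrary.
import random

def simulate(schedule, steps=60, ratio=5, interval=10):
    """Return the time steps at which reinforcement is delivered."""
    reward_times = []
    responses = 0            # responses since the last reinforcement
    threshold = ratio        # responses required (ratio schedules)
    next_time = interval     # next payoff time (interval schedules)
    for t in range(1, steps + 1):
        responses += 1       # the learner responds once per step
        if schedule in ("FR", "VR") and responses >= threshold:
            reward_times.append(t)
            responses = 0
            if schedule == "VR":   # variable ratio: unpredictable count
                threshold = random.randint(1, 2 * ratio)
        elif schedule in ("FI", "VI") and t >= next_time:
            reward_times.append(t)
            if schedule == "FI":   # fixed interval: constant wait
                next_time = t + interval
            else:                  # variable interval: unpredictable wait
                next_time = t + random.randint(1, 2 * interval)
    return reward_times

for s in ("FR", "VR", "FI", "VI"):
    print(s, simulate(s))
```

Note that this sketch holds the response rate constant, so it shows only when reinforcement arrives; the changing response rates (post-reinforcement pauses, FI scallops) described above are what a real cumulative-response graph would add on top of this spacing.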

36
Q

Social learning

A

individuals learn behaviors, attitudes, and emotions by observing, imitating, and modeling others, particularly those they perceive as role models, and through experiencing vicarious reinforcement or punishment.
Observational Learning:
Social learning theory emphasizes that learning occurs through observing the behaviors and outcomes of others, rather than solely through direct reinforcement or punishment.
Modeling:
Individuals are more likely to imitate behaviors that are perceived as successful or rewarding, especially if the model is someone they admire or identify with.
Vicarious Reinforcement/Punishment:
Learning can also occur by observing the consequences of others’ actions. If a person observes someone being rewarded for a certain behavior, they are more likely to engage in that behavior, and vice versa.
Albert Bandura:
A prominent figure in the study of social learning theory.
Examples:
A child learning to tie their shoes by watching a parent.
A teenager adopting a certain style of dress or music taste based on their peers.
Learning about social norms by observing interactions in a classroom or workplace.

37
Q

Vicarious conditioning

A

learning by observing the consequences of others’ actions, rather than through direct experience.
Observational Learning:
Vicarious conditioning is a type of observational learning, where individuals learn by watching others and observing the outcomes of their behaviors.
Learning Through Observation:
Instead of experiencing something firsthand, people learn by seeing others react to a stimulus or situation.
Example:
If a child sees a peer get praised for answering a question in class, they might be more likely to participate more actively in the future, even without personally experiencing that praise.
Contrast with Direct Experience:
Vicarious conditioning differs from direct conditioning (like classical or operant conditioning) where an individual learns through their own experiences.
Consequences as a Learning Tool:
The key aspect of vicarious conditioning is that the learner observes the consequences (positive or negative) of another person’s actions and adjusts their own behavior accordingly.
Example of vicarious classical conditioning:
A child sees their peer react fearfully to a barking dog and subsequently develops a fear of dogs, even if they have never directly encountered a barking dog before.

38
Q

Insight learning

A

a sudden, often novel realization of a problem’s solution, rather than a gradual process of trial and error. It involves a cognitive restructuring or mental rearrangement to achieve understanding.
Definition:
Insight learning is a form of cognitive learning where individuals or animals solve a problem using a sudden understanding or realization, rather than through trial and error.
Key Characteristics:
Sudden Realization: The solution appears “out of the blue” after a period of attempting to solve the problem.
Cognitive Process: It involves internal mental processes, such as visualizing the problem and the solution, rather than simply responding to environmental stimuli.
Long-Lasting Change: Once insight occurs, the understanding of how to solve the problem can be applied to similar situations in the future.
Not Trial and Error: Insight learning is distinguished from trial-and-error learning, where solutions are reached through repeated attempts and feedback.
Example:
Wolfgang Köhler’s experiments with chimpanzees demonstrated insight learning, where the chimps would suddenly realize how to stack boxes to reach food, rather than trying different combinations repeatedly.
Connection to other learning concepts:
Insight learning is a type of cognitive learning, contrasting with behaviorist approaches like classical and operant conditioning, which focus on stimulus-response associations.

39
Q

Latent learning

A

learning that occurs but is not apparent until there’s an incentive to demonstrate it, meaning knowledge is acquired without immediate reinforcement or motivation, but is later used when needed.
Definition:
Latent learning is a type of learning where an individual or animal learns something but doesn’t show that learning until there’s a reason to do so, or until a specific incentive or motivation is introduced.
Example:
Imagine a child learning the layout of a new park by wandering around with their parents, but only showing their knowledge of the park’s layout when they are tasked with finding a specific location, like the playground.
Key Concept:
Latent learning challenges the idea that learning only happens with immediate rewards or reinforcement, suggesting that learning can occur passively and be stored for later use.
Edward Tolman’s Research:
The concept of latent learning was extensively studied by psychologist Edward Tolman, who conducted experiments with rats in mazes to demonstrate this type of learning.
Tolman’s Maze Experiment:
Tolman’s experiments showed that rats who explored a maze without any reward still learned the layout of the maze, and when a reward (food) was later placed at the end, the rats quickly found the shortest path, demonstrating that they had learned the maze layout even without immediate reinforcement.
Cognitive Maps:
Tolman’s work also introduced the concept of cognitive maps, which are mental representations of spatial environments that help individuals navigate and find their way.
Latent learning vs. observational learning:
While observational learning involves learning by watching others, latent learning is about learning that is not immediately demonstrated, even if the individual has been exposed to the information.

40
Q

Cognitive map

A

a mental representation of a spatial environment, helping individuals navigate, recall information, and understand their surroundings.
Definition:
A cognitive map is an internal model or “mental picture” of a physical space, including the layout, landmarks, and relationships between different locations.
Origin:
The concept of cognitive maps was introduced by psychologist Edward Tolman in the 1940s, who used it to explain how rats appeared to learn the spatial layout of a maze.
Purpose:
Cognitive maps are used for:
Navigation: Helping individuals find their way through unfamiliar areas.
Memory: Storing and recalling information about spatial locations.
Learning: Understanding the relationships between different places and objects.
Decision-Making: Assessing routes and planning actions based on spatial knowledge.
Examples:
Knowing how to get to your favorite coffee shop without using GPS.
Being able to give someone directions to your house.
Understanding the layout of your school or workplace.
Variations:
Cognitive maps can vary depending on individual experience, cultural background, and the specific environment being mapped.
Tolman’s experiments:
Tolman’s experiments with rats demonstrated that they could learn the spatial layout of a maze, even when the goal was changed, suggesting they had developed a cognitive map of the maze.