Learning Flashcards

1
Q

Def : Learning (2)

A
  1. Learning is a lasting change in behavior due to experience.
  2. This change can be from a single event or repeated exposure.
2
Q

Def : Unconditioned Stimulus (2)

A
  1. A stimulus that automatically triggers a response
  2. without any prior learning.
3
Q

Def : Unconditioned Response

A
  1. A natural, automatic reaction to an unconditioned stimulus.
  2. Not learned
  • UCS = Food
  • UCR = Salivation (happens automatically when food is present)


4
Q

Def : Conditioned Stimulus (CS)

A
  1. A previously neutral stimulus
  2. that, after being paired with the UCS, triggers a response.
5
Q

Def : Conditioned Response (CR)

A

A learned response to the conditioned stimulus.

6
Q

What are the two types of learning?

A

1. **Non-Associative Learning**

2. **Associative Learning**

7
Q

Def : Non-Associative Learning

A
  • This type of learning does not involve linking two events.
  • It occurs naturally in response to repeated stimuli.
8
Q

Def : Associative Learning

A

This type of learning involves linking two events/stimuli to produce a response.

9
Q

What are the different non-associative learning types? (2)

A
  1. Habituation 🛑 → You get used to a stimulus and stop reacting to it.
  • Example: If you live near a train station, you stop noticing the noise over time.
  2. Sensitization 🚀 → You react more strongly to a repeated stimulus.
  • Example: If someone keeps tapping your shoulder, you get more annoyed each time.
10
Q

Def : Pseudoconditioning (Cross-Sensitization) (2)

A

A response to a neutral stimulus happens because of exposure to a strong stimulus.
* Example: A soldier might react to fireworks as if they were gunfire.

11
Q

What are the types of associative learning?

(3)

A
  1. Classical conditioning
  2. Operant conditioning
  3. Social learning theory
12
Q

Def : Classical conditioning

A

A form of learning where an organism associates two stimuli to produce a response.

  1. Learning happens when a neutral stimulus (NS) (e.g., a bell) is paired with an unconditioned stimulus (UCS) (e.g., food), which produces an unconditioned response (UCR) (e.g., salivation).
  2. Over time, the neutral stimulus becomes a conditioned stimulus (CS) that triggers a conditioned response (CR) (e.g., salivation to the bell).

🔄 Summary
* UCS → UCR (no learning needed)
* CS + UCS → UCR (during conditioning)
* CS alone → CR (after conditioning)

13
Q

What are the different types of Classical conditioning?

A
  1. Delayed/Forward conditioning
  2. Trace conditioning
  3. Simultaneous conditioning
  4. Backward conditioning
  5. Latent inhibition
14
Q

Def : Delayed/Forward conditioning

(Classical conditioning)

A
  1. Conditioned stimulus (CS) appears before the unconditioned stimulus (UCS)
  2. The CS continues until the UCS is presented
  3. The UCS in turn triggers the unconditioned response
  4. Thus: the conditioned stimulus signals that the unconditioned stimulus is coming, and therefore elicits the unconditioned response

📌 Example:
Bell rings (CS) → Food is given (UCS) → Dog salivates (UCR/CR)
*The bell keeps ringing until the food arrives*

15
Q

Def : Trace conditioning (3)

A
  1. The conditioned stimulus appears and disappears before the UCS is presented
  2. Only works if the learner pays attention to and remembers the conditioned stimulus occurring prior
  3. Thus, it works better when the delay between CS and UCS is short (<0.5 seconds).
16
Q

Def : Simultaneous Conditioning

A

The conditioned stimulus and the unconditioned stimulus are presented at the same time

📌 Example:
* Bell rings (CS) + Food appears (UCS) at the same time → Dog salivates (CR)

17
Q

Def : Backward Conditioning

A

The unconditioned stimulus is presented before the conditioned stimulus.

📌 Example:
* Food appears (UCS) → Bell rings (CS) → No strong response

18
Q

Def : Latent Inhibition

A
  • A cognitive process describing how it is harder to associate new meaning with a stimulus that the learner has previously encountered passively, without any associations/consequences.

*In comparison, it is easier to link:*

    • New stimuli with no previous exposure
    • Old stimuli that have been reinforced in the past
19
Q

Which type of classical conditioning is best for learning?

A
  1. Delayed conditioning - CS occurs before UCS and overlaps
20
Q

Rank the types of classical conditioning in order of effectiveness.

A
  1. Delayed Conditioning ⭐⭐⭐⭐⭐
    - CS before UCS and overlaps
  2. Trace Conditioning ⭐⭐⭐⭐ (requires memory)
    - CS before UCS but stops
  3. Simultaneous Conditioning ⭐⭐⭐ (weaker association)
    - CS and UCS occur together
  4. Backward Conditioning ⭐ (least effective)
    - UCS before CS
21
Q

Identify and define the factors that determine the effectiveness of learning in classical conditioning. (4)

Cite the names of the theorists? (2)

Which factor matters more and why? (4)

A
  1. Temporal contiguity (Pavlov) - The closer in time the conditioned stimulus and the unconditioned stimulus occur, the stronger the learning.
  2. Predictability (Robert Rescorla) - Refers to the consistency of the relationship between the conditioned stimulus (CS) and the unconditioned stimulus (UCS).

Predictability matters more than timing in classical conditioning.
Consistent CS-UCS pairings create stronger associations.
Animals and humans learn by detecting patterns, not just by exposure.
Unpredictable stimuli lead to weaker learning and more stress.

22
Q

Describe the Rescorla-Wagner model.

(Classical conditioning)

A

Rescorla and Wagner developed a model showing that learning depends on how surprising or expected an event is. The brain uses prediction errors to update learning:

1️⃣ If the UCS is surprising, learning occurs quickly.


2️⃣ If the UCS is expected, little or no new learning happens.


3️⃣ If the CS is unreliable, conditioning is weak.

🔹 Example: If a dog hears a bell sometimes before food and sometimes without food, it will learn that the bell is not a strong predictor of food, and conditioning will be weak.

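The prediction-error idea above has a standard quantitative form, ΔV = αβ(λ − V): the change in associative strength is the prediction error scaled by learning rates. Below is a minimal Python sketch of that update rule; the function name and the parameter values (alpha=0.3, beta=1.0) are illustrative assumptions, not from the card itself.

```python
# Minimal sketch of the Rescorla-Wagner update rule.
# V is the associative strength of the CS; each CS-UCS pairing updates V
# by the prediction error (lam - V), scaled by learning rates alpha * beta.
def rescorla_wagner(trials, alpha=0.3, beta=1.0, lam=1.0):
    """Return associative strength V after each of `trials` CS-UCS pairings.

    alpha: salience of the CS; beta: strength of the UCS (illustrative values).
    lam:   maximum conditioning the UCS supports.
    """
    v = 0.0
    history = []
    for _ in range(trials):
        v += alpha * beta * (lam - v)  # surprising UCS -> large update
        history.append(round(v, 3))
    return history

# Early trials (UCS surprising) produce big jumps in V;
# later trials (UCS expected) produce ever-smaller updates.
print(rescorla_wagner(5))  # [0.3, 0.51, 0.657, 0.76, 0.832]
```

Because the error term shrinks as V approaches λ, learning is fast at first and levels off, matching the "surprising vs. expected" points above.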

23
Q

Def : Higher order conditioning

A
  1. Training a new conditioned stimulus (e.g. a light) by using the association of an already established conditioned stimulus (e.g. a bell)
  2. The existing conditioned stimulus (bell) is used as a UCS

e.g. A dog learns to associate a light with a bell, then salivates when seeing the light.

    • A CS1 (bell) is used as a UCS to condition a new CS2 (light).
  1. Light (CS2) → Bell (CS1) → Food (UCS)
  2. The dog starts salivating to the light alone.
24
Q

Def : Social learning theory

A
  1. Social Learning Theory (SLT), proposed by Albert Bandura, explains that people learn by observing others and imitating behaviors.
  2. Social learning theory combines **Operant conditioning** (learning through consequences) and **Classical conditioning** (learning through association) by adding cognition and a social element (people don't just learn from direct experiences, but also from watching others).
25
How does the Social learning theory use : 1. Classical conditioning 2. Operant conditioning
**Classical conditioning in SLT:**
* Behaviors are learned through **associations between stimuli**.
* In SLT, people **observe** others making associations between stimuli, which influences their own learning.
* 📌 *Example: A child sees their sibling (CS) get scared when hearing thunder (UCS), leading to fear (CR). Even if the child has never been scared of thunder before, they might learn the association just by watching.*

**Operant conditioning in SLT:**
* **Operant conditioning** teaches behavior through **rewards and punishments**.
* In SLT, people learn **vicariously**, by **watching others get rewarded or punished**.
* 📌 *Example: A child sees their friend get praised (positive reinforcement) for doing homework, so they do their homework too. A teenager sees another student get detention (punishment) for talking in class, so they avoid talking in class.*
26
Social learning theory - 5 Key stages of observational learning
  1. **Attention** 🧐 - Observing behaviors (classical association). The person must **pay attention** to the behavior being modeled.
  2. **Encoding** - The observed behavior is stored in **memory** as a visual or verbal representation.
  • Example: The child **remembers** the teacher's method.
  3. **Rehearsal** - Repeating or mentally reviewing the behavior strengthens memory.
  4. **Reproduction** 🎭 - Imitating the learned behavior.
  5. **Motivation** 🎯 - There must be a **reason** to perform the learned behavior; influenced by rewards or punishments (operant conditioning).
27
The Bobo Doll Experiment
* Bandura's **Bobo Doll Experiment** (1961) demonstrated SLT in action.
* Children watched adults **either hitting or being kind** to a Bobo doll.
* The children **copied** the adults' behaviors, even without direct reinforcement!
28
The Bobo Doll Experiment - Key findings
  1. **Operant conditioning** teaches consequences (we learn from others' rewards & punishments).
  • If the adult was rewarded, children copied the behavior more (supports operant conditioning).
  2. **Classical conditioning** teaches associations (we learn from others' responses).
  • **Even without rewards, children imitated behavior** (supports classical conditioning).
  3. **SLT combines both** and **adds cognition**: we think before imitating behavior.
  4. **Observational learning is powerful**, shaping behavior without direct experience.
  • There is a difference between learning (gaining knowledge) and performance (acting on that knowledge).
  • Learning is influenced by cognition, meaning that expectations and awareness of consequences affect behavior.
  • Learning can happen without rewards or punishments.
  • Observational learning is crucial in child development, social behavior, and even media influence.
29
Reciprocal Causation (Bandura)
**What it means:**
* Behavior, the **individual**, and the **environment** all influence each other.
* This goes against the idea that people are just **passively shaped by their surroundings**.

**Example:**
* A shy person (individual) avoids social situations (behavior), making them even **less socially skilled** (environmental consequence).
* If they **force themselves to interact**, they gain confidence, **changing their behavior** over time.
30
Cognitive learning theory - Tolman
**What it means:**
* Learning can happen **without immediate reinforcement**; the knowledge is stored but may not be demonstrated until there's a reason.
* **Cognitive maps** – Mental representations of environments and experiences.

**Example:**
* **Tolman's Rat Experiment** – Rats **explored a maze** without any reward. Later, when food was placed at the exit, they suddenly **found the quickest route** without trial and error.
* This showed that the rats had **already learned the maze** (latent learning), even though they had no motivation before.

**Why it matters:**
* People **learn even when they don't show it** immediately.
* Example: A student might listen in class but **not answer questions**. Later, when needed, they **use the knowledge**, showing they had learned it all along.
31
Insight learning - Kohler
**What it means:**
* Learning happens through **sudden realization**, not trial-and-error.
* **Opposite of associative learning**, where behavior is learned through repetition.

**Example:**
* **Kohler's Experiment with Chimps**
* A chimp was given **two sticks that couldn't reach a banana**.
* After thinking, the chimp **joined the sticks together** to make them long enough.
* This showed that the chimp had a **sudden "aha" moment** rather than just random attempts.

**Why it matters:**
* Insight learning is used in **problem-solving, creativity, and innovation**.
32
Gagné’s Hierarchy of Learning - 7 stages
1️⃣ **Signal Learning** – Basic reflexes (e.g., learning to stop at a red light).
2️⃣ **Stimulus-Response Learning** – Associating actions with consequences (e.g., dog sits when told).
3️⃣ **Chaining** – Learning sequences (e.g., tying shoelaces step by step).
4️⃣ **Verbal Association** – Remembering words and phrases (e.g., learning new vocabulary).
5️⃣ **Discrimination Learning** – Distinguishing between similar things (e.g., different musical notes).
6️⃣ **Concept Learning** – Understanding abstract ideas (e.g., math principles).
7️⃣ **Problem-Solving** – Applying knowledge in new ways (e.g., designing a project).
33
Gagné’s Hierarchy of Learning - Key takeaway
**What it means:**
* Learning happens in **stages**, starting with **basic skills** and progressing to **complex ones**.
* **Higher-level learning depends on mastering simpler steps first**.

**Why it's important:**
* Learning follows a **structured path**, meaning **complex skills can't be mastered before basic ones**.
* Example: A child must **learn letters before they can read words**.
34
Def: **Extinction** | Classical conditioning
**Extinction** → If a CS (bell) is **not** followed by UCS (food) for a long time, the CR (salivation) disappears.
35
Def : **Spontaneous recovery**
Even after extinction, the CR (salivation) might randomly return.
37
Def : Operant conditioning
(Trial-and-Error Learning) 🎮
* Learning happens when **a behavior** is followed by a **reward or punishment**.
38
Thorndike's "Law of Effect" | Operant conditioning
* Behaviors followed by **good** consequences **increase**.
* Behaviors followed by **bad** consequences **decrease**.
39
Define the types of consequences in Operant conditioning | (4)
Reinforcement **increases** behavior; punishment **decreases** behavior. **Positive** means something is given; **negative** means something is taken away.
  1. **Positive Reinforcement** ✅ - **Adds** something good to **increase** desired behavior. e.g. *Giving a treat for good behavior*
  2. **Negative Reinforcement** 🚫✅ - **Removes** something bad to **increase** behavior. e.g. *Taking away chores if a child finishes homework*
  3. **Positive Punishment** ❌ - **Adds** something bad to **decrease** behavior. e.g. *Getting a ticket for speeding*
  4. **Negative Punishment** 🚫❌ - **Removes** something good to **decrease** behavior. e.g. *Taking away video games for bad behavior*

Related:
  5. Primary reinforcer - A stimulus affecting biological needs (such as food)
  6. Secondary reinforcer - A stimulus reinforcing behaviour associated with primary reinforcers (money, praise)
40
Def : Reinforcement schedule
Reinforcement schedules are rules that determine **when and how often** a behavior is **rewarded**. These schedules play a **big** role in **learning and behavior**, affecting how **quickly** someone (or an animal) learns and how **long they keep performing the behavior.**
41
Def : Continuous Reinforcement
* **What it means:** Every time a behavior happens, it gets rewarded.
* **Example:** A rat presses a lever, and it gets food **every single time**.
* **Real-life example:** A vending machine – you put in money, and you get a snack every time.
* **Effect:** **Fast learning**, but if the **reward stops, the behavior also stops quickly (extinction).**
42
Def : Partial Reinforcement
* **What it means:** Behavior is rewarded **only sometimes**.
* **Why it matters:** Makes **behaviors harder to extinguish compared to continuous reinforcement.**
* **Two types:** **Ratio (number of responses) & Interval (time-based)**
43
Def : Fixed Interval (FI)
* **What it means:** The reward (time-based) comes after a set amount of time, no matter how many times the behavior happens.
* **Example:** *A worker gets paid once a month, regardless of their performance.*
* **Effect:** **Responses increase just before the reward but drop immediately after.**
44
Def : Variable Interval
* **What it means:** **Reward happens after random amounts of time.**
* **Example:** *Fishing – You might catch a fish in 10 minutes, then 45 minutes, then 5 minutes.*
* **Effect:** **Steady, consistent responses**, since the person never knows when the next reward is coming.
45
Fixed Ratio (FR)
* **What it means:** You must do the behavior a specific number of times to get a reward.
* **Example:** After answering **20 quiz questions**, you take a coffee break.
* **Effect:** Fast response rate, but there is often a break right after getting the reward.
46
Variable Ratio (VR)
Reward after a random number of responses.
* **What it means:** The reward comes **after an unpredictable number of behaviors**.
* **Example:** Slot machines – You might win after **3 spins, then after 30, then after 5**.
* **Effect:** The strongest and hardest-to-break behavior pattern – **most resistant to extinction**.
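The four schedules above can be caricatured as simple reward-decision rules. This is a rough sketch, not a standard API; the helper names and the parameter values (ratio=5, mean_ratio=5, interval=10.0) are made up for illustration.

```python
import random

def fixed_ratio(responses, ratio=5):
    # FR: reward on every `ratio`-th response (e.g. every 5th lever press).
    return responses % ratio == 0

def variable_ratio(rng, mean_ratio=5):
    # VR: reward after an unpredictable number of responses, averaging one
    # reward per `mean_ratio` responses -- the slot-machine schedule.
    return rng.random() < 1 / mean_ratio

def fixed_interval(elapsed, last_reward, interval=10.0):
    # FI: a response at least `interval` seconds after the last reward
    # pays off, no matter how many responses happened in between.
    return elapsed - last_reward >= interval

def variable_interval(rng, elapsed, last_reward, mean_interval=10.0):
    # VI: reward becomes available after a random delay; drawing the delay
    # fresh on each check only approximates a true VI schedule.
    return elapsed - last_reward >= rng.uniform(0, 2 * mean_interval)

# Under FR-5, presses 5 and 10 are rewarded; press 7 is not.
print(fixed_ratio(5), fixed_ratio(7))  # True False
```

The ratio rules depend only on response counts, while the interval rules depend only on elapsed time, which is why FI produces pauses after each reward and VR keeps responding steady.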
47
Why do pauses occur in reinforcement schedules?
In **fixed reinforcement schedules**, a behavior is rewarded **after a set number of responses (fixed ratio) or after a specific time interval (fixed interval)**.
  1. The subject **knows exactly when the next reward will come**.
  2. After receiving a reward, the subject realizes that additional effort **immediately after the reward** will not lead to another reward right away.
  3. **So they take a short break before resuming the behavior.**
48
What are the differences between pauses in different reinforcement schedules?
**1. Fixed Interval (FI) – Longer Pause**
* Since rewards come at set time intervals **no matter what**, there is little motivation to act early after a reward.
* The response rate increases **just before the next reward is expected**.

**2. Fixed Ratio (FR) – Shorter Pause**
* The subject knows that a **specific number of responses** leads to a reward, so they tend to **resume the behavior sooner** than in FI schedules.
49
Which reinforcement schedules lead to the fastest learning and why?
Fixed schedules: **Fixed interval** and **Fixed ratio**
  1. Because they create **predictable patterns**, subjects learn quickly **what they need to do to get rewarded**.
  2. This makes **behavior easier to control and reinforce.**
50
Which reinforcement schedules lead to a constant response rate and why?
**Variable reinforcement schedules** provide rewards **at unpredictable times or after an unpredictable number of responses**.
* The reward is not predictable, meaning that the **next reward could come at any time**.
* This creates **continuous motivation** to keep performing the behavior, because the subject doesn't want to miss a potential reward.
* Since the subject **never knows when the next reinforcement is coming**, they keep responding at a steady rate without long pauses.
51
Which reinforcement schedule is most prone to extinction?
**Continuous reinforcement** (reward every time) makes learning fast, **but the behavior stops quickly** when rewards disappear.
52
Which reinforcement schedules are most resistant to extinction?
  1. **Partial reinforcement** (reward only sometimes) takes longer to learn but is **harder to extinguish** because the subject **expects that persistence will eventually be rewarded**.
  2. **Variable ratio (VR) schedules** are the most resistant to extinction.
  • Since the reward comes after **an unpredictable number of responses**, the subject **keeps going indefinitely** because they think the next action **could be the winning one**.
  • This explains **why gambling is so addictive** – every unsuccessful attempt **still reinforces the belief that a big win is just around the corner**.
53
Role of contingency in operant conditioning
Contingency in Operant Conditioning – Learning the Probability of Rewards
**Contingency** refers to how well a subject can predict the outcome of an action based on past experiences.
* If a behavior **frequently leads to a reward**, the subject learns that it is **highly contingent** on the response.
* If a behavior **only sometimes leads to a reward**, the subject learns that the probability of reinforcement is **low but still possible**, leading to **persistence**.
54
🏆 Key Takeaways
✅ **Variable schedules keep behavior going continuously** because the next reward **could happen at any moment**.
✅ **Partial reinforcement schedules are harder to extinguish** than continuous reinforcement schedules.
✅ **Variable ratio schedules (like gambling) are the hardest to break** because the subject keeps thinking, *"Maybe next time!"*
✅ **Contingency teaches subjects to estimate the likelihood of a reward** based on past experiences.
55
Key Concepts in Operant Conditioning : Premack’s Principle (Grandma’s Rule)
**What it means:**
* A **high-frequency behavior** (something you like doing) can be used as a reward to encourage a **low-frequency behavior** (something you don't like doing).
* This makes **less enjoyable tasks more likely to be completed** because they lead to something desirable.

**Why it works:**
* High-frequency behaviors act as **motivators** for behaviors that people would normally avoid.
* This principle is widely used in **parenting, education, and self-discipline strategies**.

**Example:**
* *"Eat your vegetables, and you can have dessert."*
* **Eating dessert (high-frequency)** rewards **eating vegetables (low-frequency)**.
* A student might tell themselves: *"If I finish my homework, I can play video games."*
56
Key Concepts in Operant Conditioning : Avoidance learning
**What it means:**
* A person or animal learns to avoid certain situations or behaviors because they associate them with negative outcomes.
* Avoidance becomes a **powerful reinforcer** and is very difficult to unlearn.

**Why it's hard to break:**
* **Avoidance reinforces itself** – the more a person avoids something, the stronger the avoidance becomes.
* It prevents the person from **learning that the feared situation may not be as bad as they think**.

**Example:**
* A student who **gets anxious before public speaking** may start **avoiding presentations**. Over time, this avoidance **strengthens** because it reduces anxiety in the short term.
* In **agoraphobia**, a person who experiences panic attacks in public places may start **avoiding going outside**, eventually leading to a **housebound state**.
57
Key Concepts in Operant Conditioning : Aversive conditioning
**What it means:**
* Uses **punishment** or an **unpleasant stimulus** to reduce unwanted behavior.
* It is **less effective than reinforcement** but can be used in extreme cases.

**Why it works (but has limits):**
* It discourages the behavior, **but people may stop following the punishment rules** (e.g., an alcoholic might stop taking Disulfiram so they can drink again).
* Works best when combined with **positive reinforcement of alternative behaviors**.

**Example:**
* **Disulfiram (Antabuse) for alcoholism** – This drug causes **severe nausea** when alcohol is consumed, making drinking **unpleasant** and discouraging the behavior.
* **Biting nails?** Some people use **bitter-tasting nail polish** to make the habit unpleasant.
58
Key Concepts in Operant Conditioning : Covert Reinforcement
**What it means:**
* **Imagining** a pleasant event as a reward for completing a behavior.
* No **physical reward**, but **mental motivation** is enough to drive action.

**Why it works:**
* **Visualization techniques** are used in **sports psychology, self-improvement, and professional success strategies**.
* Helps people stay motivated **without needing an external reward**.

**Example:**
* A student preparing for **MRCPsych exams** imagines themselves **graduating** and wearing their cap and gown.
* An athlete **visualizes winning a race** to stay motivated during training.
59
Key Concepts in Operant Conditioning : Flooding and Implosion
**What it means:**
* **Flooding** involves prolonged **exposure** to a feared stimulus until anxiety naturally fades.
* **Implosion** is a similar technique, but the exposure happens **in imagination** instead of reality.

**Why it works:**
* Prevents **avoidance behavior**, forcing the subject to confront their fear.
* Anxiety decreases naturally over time (**habituation**).

**Example:**
* A man with a **fear of heights** is made to stand on top of the **Burj Khalifa** for an extended time. Over time, his fear **extinguishes** because he realizes nothing bad happens.
* In **implosion therapy**, a person with a fear of drowning may be asked to **imagine** themselves stuck in deep water until their fear response diminishes.
60
Key Concepts in Operant Conditioning : Chaining
**What it means:**
* Teaches a sequence of behaviors where **each step acts as a cue for the next**.
* Used when the **target behavior already exists in parts** but needs to be connected into a **complete routine**.

**Why it works:**
* Helps **structure complex behaviors** into manageable steps.
61
Key Concepts in Operant Conditioning :Shaping (Successive Approximation)
**What it means:**
* Used when a **completely new behavior** needs to be learned.
* Reinforces **small steps** toward the desired behavior.

**Why it works:**
* Helps teach **complex behaviors step by step**.
* Used in **animal training, therapy, and skill-building**.

**Example:** Training a dog to jump through a flaming hoop:
1️⃣ **Reward for running near the hoop** 🦴
2️⃣ **Reward for jumping near the hoop** 🦴
3️⃣ **Reward for jumping through the hoop** 🦴
4️⃣ **Reward for jumping through a flaming hoop** 🔥🦴
62
Key Concepts in Operant Conditioning : Incubation
**What it means:**
* **Repeated, brief exposure** to an anxiety-inducing situation **increases emotional response** rather than reducing it.

**Why it's dangerous:**
* It reinforces **anxiety instead of reducing it**.
* Avoidance becomes stronger over time.

**Example:**
* **PTSD & phobias** – A person who keeps **ruminating** (obsessively thinking) about a traumatic event **increases** their fear rather than overcoming it.
63
Key Concepts in Operant Conditioning : Stimulus Preparedness (Seligman)
**What it means:**
* Humans are **biologically wired** to fear some things **more than others** because of **evolution**.
* We develop **phobias of things that were dangerous to our ancestors**.

**Why it happens:**
* Our brains **quickly learn fears that helped survival** in the past.

**Example:**
✔ **Fear of snakes or spiders** – Common because these were **real threats** in human evolution.
❌ **Fear of shoes or watches** – Rare because these were never threats.
64
Key Concepts in Operant Conditioning :Learned Helplessness (Seligman)
**What it means:**
* When someone **repeatedly experiences failure or inescapable suffering**, they **stop trying to change their situation**.

**Why it's dangerous:**
* Leads to **depression and passivity**.

**Example:**
* In an experiment, a dog was placed on an **electrified floor** with no escape. Over time, even when an escape route was provided, the dog **didn't try** because it had learned that **escape was impossible**.
* **Real-life:** Victims of **domestic abuse** may stop trying to leave their abuser because they believe they **cannot escape**.
65
Key Concepts in Operant Conditioning :Reciprocal Inhibition (Wolpe)
**What it means:**
* Presenting an **undesirable behavior** alongside a **desired behavior** to reduce the unwanted response.

**Example:**
* Your **dog barks at a friend** → You **hug the friend** every time the dog barks → Over time, the dog **associates the friend with safety** and stops barking.
* Used in **systematic desensitization** to reduce anxiety.

**Why it works:**
* The brain **cannot hold two opposite emotions at the same time**, so the negative one fades.
66
Key Concepts in Operant Conditioning : Cueing (Prompting) & Fading
**Cues (prompts)** are used to encourage specific behaviors.
* **Fading** means gradually reducing the prompt so the behavior continues on its own.

**Why it works:**
* Helps establish **automatic behaviors**.

**Example:**
* A teacher **puts a finger on her lips** to signal students to be quiet (**cueing**).
* Over time, students learn to **stay quiet without the cue** (**fading**).
67
Key take-away Concepts in Operant Conditioning
✅ **Flooding** – Full exposure to fear until anxiety fades (e.g., standing on a skyscraper for fear of heights).
✅ **Shaping** – Reinforcing small steps toward a new behavior (e.g., dog learning to jump through a hoop).
✅ **Chaining** – Teaching a sequence of behaviors (e.g., writing a name or baking cupcakes step by step).
✅ **Incubation** – Repeated brief exposure **increases** emotional response (e.g., PTSD & phobias).
✅ **Stimulus Preparedness** – Evolution makes some fears **more common** (e.g., snakes vs. shoes).
✅ **Learned Helplessness** – Repeated failure leads to **giving up**, even when escape is possible.
✅ **Reciprocal Inhibition** – Pairing **opposite emotions** to reduce a negative response (e.g., reducing barking with affection).
✅ **Cueing & Fading** – Using signals to shape behavior, then removing them gradually.
✅ **Premack's Principle** – Use a fun activity to motivate a less enjoyable one (Grandma's Rule).
✅ **Avoidance Learning** – Avoiding negative situations becomes a powerful (and hard-to-break) habit.
✅ **Aversive Conditioning** – Uses **punishment** to stop unwanted behaviors (e.g., Antabuse for alcoholism).
✅ **Covert Reinforcement** – **Imagining rewards** motivates behavior without physical incentives.
✅ **Covert Sensitization** – **Imagining negative consequences** discourages bad habits.