Learning Part 5 Flashcards

(63 cards)

1
Q

The Role of Context in Learning

A

Instrumental responses = actions we take that are shaped by consequences (like rewards or punishments).
→ Analogy: Like pushing a vending machine button to get a snack. You learn to push the button because it gives you something you want.

Reinforcers = things that increase the likelihood of a behavior happening again (like food, praise, money).
→ Analogy: Like a dog getting a treat for sitting.

Contextual stimuli = the environment or situation where the learning happens.
→ Analogy: Think of how different you feel at a party vs. a job interview. Same you, different surroundings = different behavior.

S-O associations = Stimulus-Outcome associations: When you associate a stimulus (S) with a certain outcome (O).
→ Example: Hearing a bell (S) and expecting food (O), like in Pavlov’s dogs.

S-R associations = Stimulus-Response associations: When a stimulus directly leads to a behavior.
→ Example: See red light (S) → stop walking (R).

✅ Key idea: The context is not just background noise. It controls how stimuli work. It influences which responses and rewards will happen.

2
Q

Still the Same Person in Every Context – So What Changes?

A

You’re the same person, but your behavior changes depending on the context.

With parents → Polite, maybe reserved.

With partner → Intimate, affectionate.

With friend → Playful, relaxed.

With professor → Formal, respectful.

✅ Key idea: Context modulates behavior. You carry around multiple “behavioral programs” that get activated depending on where you are and who you’re with.

3
Q

Everyday social context example where behaviour and context don’t go hand in hand

A

A student plans to study during the holidays, but doesn’t follow through.

Why?

Because holiday context (Christmas tree, family, food, cozy vibes) doesn’t contain the same stimuli as the classroom (desk, clock, peers, chalkboard).

✅ Key idea: The holiday stimuli do not produce effective studying behavior.

Analogy: It’s like trying to sleep on a noisy airplane when your body is used to falling asleep in a dark, quiet bedroom. The behaviors are context-dependent.

4
Q

Reynolds (1961) – Pigeons and Visual Stimuli

A

Pigeons trained using instrumental conditioning (they peck a key → get food).

VI schedule = Variable Interval: Reinforcement (food) becomes available after unpredictable amounts of time.

The visual cue: a white triangle on a red background.

Question: Were they pecking because of the triangle or the background?

This sets up a test of stimulus control.

Results – Differential Responding
They test the pigeons with:

Only the red background

Only the white triangle

One pigeon (#107) pecked more for the red background.
The other (#105) pecked more for the white triangle.

✅ This shows differential responding and stimulus discrimination.

5
Q

Definition of Differential responding and Stimulus discrimination

A

Differential responding: Behavior changes based on the specific stimulus present.

When a subject responds one way in the presence of one stimulus and in a different way in the presence of another stimulus, its behaviour is under the control of those stimuli.

Example: Responding with laughter to a joke but staying serious when hearing a sad story.
–>
Behavior changes with different cues (Laugh with friends, formal with boss)
_______________________________________

Stimulus discrimination: The ability to tell stimuli apart and respond differently to each.

An organism responds differently to two or more stimuli.

Analogy: Like knowing the difference between a fire alarm and a school bell—and reacting differently to each.
–>
Telling cues apart and reacting accordingly (Dog knows “sit” ≠ “stay”)

6
Q

What does it mean for an organism to “respond differently to two or more stimuli”?

(Stimulus discrimination)

A

✅ It means: The organism has learned to distinguish between different environmental signals (stimuli), and it changes its behavior depending on which one is present.

🧠 Think of a Real-Life Analogy:
Imagine you’re in a classroom, and you hear two different sounds:

Sound 1: The school bell rings → You pack your things and leave.

Sound 2: The fire alarm goes off → You run outside quickly.

You heard two different stimuli, and you responded in two different ways.

Even though both are sounds, you’ve discriminated between them and assigned different meanings and behaviors to each.

🐦 In Pigeon Terms (From the Slide Example):
Pigeon sees a red background → Pecks more (because red is associated with food).

Pigeon sees a white triangle → Pecks less (or the opposite pigeon might do this).

This means the pigeon discriminated between the two visual cues, and responded differently.

7
Q

🔁 Stimulus Generalization

A

🧠 Core Definition:

Stimulus generalization is the degree to which an organism responds the same way to two or more stimuli.

🟣 “Stimulus generalization is the opposite of differential responding and stimulus discrimination.”

📦 Analogy:
Imagine you trained your dog to sit when you say “Sit!” in a high-pitched voice. Later, you say “Sit!” in a deeper voice, and the dog still sits.

Same behavior, different stimulus (pitch of your voice) → ✅ Generalization.

In contrast:

If the dog sits only to the high pitch but not the deep pitch → ✅ Discrimination.

8
Q

How can 🔁 Stimulus Generalization be difficult?

A

“Identifying and differentiating several stimuli is not always so simple.”

Artists may notice tiny differences between lavender pink and carnation pink, while the rest of us just say: “It’s pink.”

If you treat both pinks the same → You’re generalizing.

If you respond differently to each → You’re discriminating.

9
Q

🐶 Pavlov and Watson: Classical Conditioning Examples

A

Examples (Slides 10–11):

Pavlov: Dogs trained to salivate to a bell would also salivate to similar tones.

Watson: “Little Albert” was conditioned to fear a rat, and he generalized that fear to rabbits and fur coats.

🟣 They responded to different but similar stimuli in the same way → Stimulus generalization.

10
Q

🐦 Pigeon Color Gradient Study (Slides 12–13 – Guttman & Kalish, 1956)

A

Pigeons were trained to peck at a yellow-orange light (580 nm).

During testing, other wavelengths (colors) were presented with no food.

Pigeons still pecked more at similar colors (570 nm and 590 nm), and less as the colors became more different.

📈 This creates a stimulus generalization gradient:
The more similar the new stimulus is to the original, the stronger the response.
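The gradient can be pictured with a toy model. This is a hedged sketch, not Guttman & Kalish’s actual data: the peak rate and the `WIDTH_NM` spread are invented numbers, and the bell-curve shape is only an idealization of “response falls off with dissimilarity.”

```python
# Toy generalization gradient: response strength as a bell curve centred
# on the trained 580 nm stimulus. PEAK_RATE and WIDTH_NM are assumptions.
import math

TRAINED_NM = 580          # wavelength reinforced during training
PEAK_RATE = 100.0         # hypothetical peak responses per minute
WIDTH_NM = 15.0           # assumed spread of generalization

def peck_rate(test_nm: float) -> float:
    """Predicted response rate for a test wavelength."""
    return PEAK_RATE * math.exp(-((test_nm - TRAINED_NM) ** 2) / (2 * WIDTH_NM ** 2))

for nm in (540, 570, 580, 590, 620):
    print(f"{nm} nm -> {peck_rate(nm):.1f} pecks/min")
```

Responding is highest at 580 nm, equal at the symmetric 570/590 nm test colors, and drops toward zero as the wavelength moves further away, which is exactly the gradient shape the card describes.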

11
Q

Behavioral generalization gradient

A

Think of a Spotify playlist.

If you like one song (580 nm), Spotify plays similar songs (570 and 590 nm), and you probably like them too.

As it recommends less similar songs, your interest (pecking rate) drops.

That’s a behavioral generalization gradient.

12
Q

Even if pigeons respond to a red circle, what specific property is controlling behavior?

A

Reynolds (1961) Revisited:

Is it:

Redness (hue)?

Roundness (shape)?

Brightness (intensity)?

🧪 This is the problem of compound stimuli:
Sometimes multiple features are present. You have to isolate which one actually drives the behavior.

13
Q

👁️ Sensory Capacity and Orientation

A

For a stimulus to control behavior, the subject must be able to sense and perceive it.

You can’t expect a blind person to respond to a visual cue.

A stimulus behind someone’s back can’t control their behavior unless it reaches their senses.

🐴 Horse Example:
Horses could distinguish yellow, green, blue from grey.

But not red → shows limits in their sensory capacity.

🐕 Dog Example:
Dogs hear ultrasonic sounds that humans can’t.

They are also more sensitive to smell.

✅ Different species live in different sensory worlds, so stimuli don’t affect all organisms the same way.

14
Q

Relative Ease of Conditioning and Overshadowing

A

Stimulus control isn’t just about perception. It’s also about competition between stimuli.

✨ Overshadowing:
When two stimuli are presented together during learning, the more salient (noticeable) one takes control.

📚 Analogy (Slide 17):
A child learning to read with pictures in a book:

They remember the pictures, not the words.

The images overshadow the text → Less learning about the less obvious stimulus.

15
Q

🧪 Pavlov’s Overshadowing Study

A

Stimulus A: Low intensity (e.g., quiet sound or faint light)

Stimulus B: High intensity (e.g., loud sound or bright light)

Both shown together during training.

Result: Learning was stronger for B, weaker for A.

So when tested with only A, animals responded less → learning about A was overshadowed by B.

16
Q

TYPE OF REINFORCEMENT: appetitive (reward) vs. aversive (punishment)

A

✅ Key Concept:
Stimulus control depends on the type of reinforcement used: appetitive (reward) vs. aversive (punishment).

🧪 Foree & LoLordo (1973) Experiment:
Two groups of pigeons trained with a compound stimulus: light + tone

🐦 Group 1 (Food group):
Pressed a pedal when light+tone was presented to get food.

Later, when light and tone were tested separately, light controlled behavior more.

🐦 Group 2 (Shock avoidance group):
Pressed a pedal when light+tone was presented to avoid shock.

When tested separately, tone controlled behavior more.

🔍 Why?
🟡 Visual cues are better at signaling positive outcomes (e.g., food).
🔵 Auditory cues are more associated with threats (e.g., danger).

📦 Analogy:
Think about survival in nature:

You see fruit 🍎 to find food → vision = appetitive learning.

You hear a snake hiss 🐍 to avoid danger → sound = aversive learning.

This experiment proves not all stimuli are equal in all learning contexts.

17
Q

STIMULUS ELEMENTS vs CONFIGURAL CUES

A

When animals (including humans) experience a compound stimulus—like a light and sound together, or a face with voice and expression—there are two ways their brains might handle it:

🧠 Two different ways of processing:
1. Stimulus-Element Approach
The brain breaks the compound into separate parts, and each part influences behavior on its own.

You respond to the light separately from the sound.

Think of it like judging a meal by each ingredient: “The chicken was great, but the sauce was too salty.”

🧪 Analogy: A personality test with individual traits — you’re not just one thing, but a mix of extraversion, openness, etc.

2. Configural-Cue Approach
The brain treats the whole stimulus as one unified thing.

You don’t react to light or sound alone, but to the combination as a unique experience.

Like tasting a full recipe where you don’t notice each ingredient — just the overall flavor.

🎵 Analogy: You hear the symphony, not each violin or flute — the beauty comes from the combined pattern.

⚖️ Why does this matter?
Whether behavior is shaped by individual parts or the whole setup depends on how the organism experiences and processes the situation.
This affects what they learn, how they respond, and how flexible their behavior becomes.

18
Q

LEARNING FACTORS IN STIMULUS CONTROL

A

👁️ Seeing a stimulus doesn’t mean it will guide behavior
Just noticing something doesn’t mean your behavior will respond to it.

🔍 Example: You can easily see both a Peugeot and a Toyota. But unless you’ve learned the difference, they’re just “cars” — your actions won’t change based on which one you see.

⚖️ Two competing ideas on generalization:
🧪 Pavlov’s View:
He believed that generalization happens automatically — if two things look alike, learning just spreads from one to the other.

You’re afraid of one dog → You become afraid of all similar dogs.

🧠 Lashley & Wade (1946):
They argued the opposite: generalization happens when the subject hasn’t learned to tell the difference.

Once you’ve learned to distinguish specific dogs, you’ll only fear the one that matters.

✅ The slides say: Lashley & Wade got it right.

Generalization is not automatic — it’s what happens when you haven’t learned to discriminate yet.

19
Q

STIMULUS DISCRIMINATION TRAINING

A

Stimulus discrimination training is how we learn that some stimuli matter and others don’t.

🧪 Campolattaro et al. (2008): How animals learn to tell stimuli apart
Researchers trained animals using two tones:

A+ = a low-pitched tone always followed by a shock to the eyelid → the animal learns to blink (a conditioned response).

B– = a high-pitched tone that wasn’t followed by anything → no learning, no blinking.

📈 What happened during training?
In the beginning, the animal blinked to both tones — it hadn’t figured out which one mattered.
This is stimulus generalization: responding similarly to both signals.

But as training continued, the animal learned to tell them apart.
Eventually, it blinked only to A+, because it had learned A+ = shock, but B– = safe.

✅ This proves that with experience, animals can shift from generalization to discrimination — and behavior becomes controlled by the specific stimulus that predicts something important.
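The generalize-then-discriminate pattern above can be sketched with a toy elemental model. This is an illustrative assumption, not Campolattaro et al.’s model: each tone gets a unique element plus a shared “tone” element, and `ALPHA` is an invented learning rate. Early on, the shared element makes B evoke responding too (generalization); with training it loses strength, so only A+ controls the response (discrimination).

```python
# Toy elemental model of A+/B- discrimination training (all numbers assumed).
ALPHA = 0.2                              # assumed learning rate

v = {"A": 0.0, "B": 0.0, "tone": 0.0}    # associative strengths

def trial(cue: str, us: bool) -> None:
    """Delta-rule update for one conditioning trial."""
    total = v[cue] + v["tone"]           # response strength = unique + shared
    error = (1.0 if us else 0.0) - total
    v[cue] += ALPHA * error
    v["tone"] += ALPHA * error

early_b = 0.0
for i in range(40):
    trial("A", True)                     # A+ : always followed by the US
    trial("B", False)                    # B- : never followed by the US
    if i == 1:
        early_b = v["B"] + v["tone"]     # responding to B early in training

late_b = v["B"] + v["tone"]
print(f"response to B early: {early_b:.2f}, late: {late_b:.2f}")
```

The shared element is what carries generalization: responding to B starts above zero and is driven to zero as training singles out A+ as the predictive cue.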

20
Q

🧪 Jenkins & Harrison (1960, 1962): Tones and Generalization Gradients

A

They wanted to know: How tightly does a specific tone (S⁺) control behavior?
To test this, they trained pigeons to peck when they heard a tone and measured how they reacted to other similar tones afterward.

🐦 Experimental Groups:
Group 1: Heard a 1000 cps tone (S⁺) that meant food, and silence (no tone = S⁻) meant no food.

Group 2: Heard a 1000 cps tone (S⁺) for food, and a very similar 950 cps tone (S⁻) for no food.

Control Group: Heard the 1000 cps tone and always got food — there was no S⁻ at all.

📊 Results:

Group: Control

Generalization Pattern: Flat – same response to all tones

What It Means: Didn’t learn to discriminate – tone had no control
_______________________________________

Group: Group 1

Generalization Pattern: Moderate peak around 1000 cps

What It Means: Some discrimination – tone had some control
_______________________________________

Group: Group 2

Generalization Pattern: Sharp peak at exactly 1000 cps

What It Means: Strong discrimination – tone had tight control
_______________________________________

The more the animal discriminates, the steeper the generalization gradient — meaning their behavior is more precisely tuned to the stimulus that actually matters.
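The three gradient shapes can be sketched as bell curves of different widths. The widths here are invented to mimic the flat/moderate/sharp pattern in the results table; they are not Jenkins & Harrison’s numbers.

```python
# Sketch of flat vs. moderate vs. sharp generalization gradients around
# the 1000 cps S+ (widths are assumptions, not the study's data).
import math

def gradient(test_cps: float, width: float) -> float:
    """Relative response strength around the 1000 cps S+."""
    if math.isinf(width):
        return 1.0                       # flat gradient: tone has no control
    return math.exp(-((test_cps - 1000.0) ** 2) / (2 * width ** 2))

groups = {
    "Control (no S-)": math.inf,         # flat
    "Group 1 (S- = silence)": 300.0,     # moderate peak (assumed width)
    "Group 2 (S- = 950 cps)": 60.0,      # sharp peak (assumed width)
}
for name, width in groups.items():
    shape = [round(gradient(t, width), 2) for t in (700, 900, 1000, 1100)]
    print(name, shape)
```

The narrower the curve, the more tightly behavior is tuned to the trained tone, which is the “steeper gradient = stronger stimulus control” point of this card.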

21
Q

What Is Extinction in Psychology?

A

Extinction is the process by which a learned behavior (a conditioned response) decreases or disappears when it is no longer reinforced or followed by the expected outcome (like a reward or punishment).

Fill in (resolved on a later card): Extinction is an active/passive process.

The loss of conditioned behaviour that occurs in extinction is/is not the same as the loss of responding that occurs because of forgetting.

22
Q

Extinction in Classical vs. Instrumental Conditioning

A

🔹 Classical Conditioning
Acquisition: The Unconditioned Stimulus (US) follows the Conditioned Stimulus (CS).

E.g., Bell (CS) → Food (US) → Salivation.

Extinction: The US no longer follows the CS.

Bell rings, but no food comes. Over time, the dog stops salivating to the bell.

🔹 Instrumental (Operant) Conditioning
Acquisition: A reinforcer follows a behavior.

E.g., A rat presses a lever → gets food → presses more.

Extinction: The reinforcer no longer follows the behavior.

The rat presses the lever → no food → stops pressing over time.

Analogy: You stop telling jokes if people stop laughing.

🔵 Key point: “Losing behaviours is also adaptive.”
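The acquisition-then-extinction pattern on this card can be sketched with a simple error-correction learning rule (a Rescorla–Wagner-style update; the cards don’t name this model, and `ALPHA` is an assumed learning rate).

```python
# Toy simulation: associative strength rises while the US follows the CS
# (acquisition) and falls once the US is omitted (extinction).
ALPHA = 0.3   # assumed learning rate

def update(v: float, us_present: bool) -> float:
    """One trial of error-correction learning."""
    asymptote = 1.0 if us_present else 0.0
    return v + ALPHA * (asymptote - v)

v = 0.0
for _ in range(10):          # acquisition: bell -> food
    v = update(v, True)
after_acquisition = v

for _ in range(10):          # extinction: bell -> nothing
    v = update(v, False)
after_extinction = v

print(f"after acquisition: {after_acquisition:.2f}, "
      f"after extinction: {after_extinction:.2f}")
```

Note that the model only drives responding back toward zero; it never stores a record that “nothing happened,” which is one reason the later cards treat extinction as new inhibitory learning rather than erasure.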

23
Q

Extinction Is NOT Forgetting

A

🟨 “Extinction is an active/passive process.”
Correct: Extinction is an active process.

Why? Because the subject learns that the old rule (e.g., bell → food) no longer works.

Analogy: It’s like realizing that pressing the elevator button twice doesn’t make it come faster. You stop because you learned the rule changed, not because you forgot elevators exist.

🔲 “Extinction is/is not the same as forgetting.”
Correct: Extinction is not the same as forgetting.

Forgetting is a passive fade of memory over time.

Extinction is an active update: the brain learns that the rule or pattern changed.

24
Q

Therapeutic Application — Anxiety, Exposure & Extinction

A

🧠 “Extinction forms the basis of many behavioural treatments for anxiety and mood disorders.”
Exposure therapy relies on extinction. You expose someone to what they fear, without the bad thing (US) happening, so they learn it’s safe.

25
Example – Fear of Flying (Anna’s Case)
Step-by-step explanation:

Anna fears planes because she associates them (CS) with danger or panic (US).

The therapist uses imagined exposure (e.g., thinking about booking a flight).

Anna learns mindfulness to manage her response (this prevents avoidance).

She does real-life exposure, like visiting an airport or sitting on a plane that doesn’t take off.

Over time, without the panic (US), the fear (conditioned response) extinguishes.

Analogy: It’s like learning that the barking dog behind the fence never actually bites. The more you walk by, the less scared you feel.
26
🔷 What happens during extinction?
Extinction produces 2 main behavioral effects:

Reduced responding (behavior gradually decreases over time)

Increased response variability (behavior becomes more unpredictable at first)

🧠 Analogy: The Key and the Broken Lock
Imagine coming home, inserting your usual key, but the door won’t open. You try: turning it harder, jiggling it up and down, using your elbow, kicking the door...

These weird actions are increased response variability — you’re trying out lots of different responses because the usual one doesn’t work. This is common at the start of extinction.

But after many failed tries... you give up. That’s reduced responding — the learned behavior fades when the outcome is no longer happening.

🟡 Highlighted key concept: “Pick response” = at first, your behavior becomes erratic, and you “try out” different responses. Only later do you reduce responding when nothing works.
27
🧪 Neuringer et al. (2001) Experiment
📍 Setup: Rats in a chamber with two levers and one response key.

🧪 Training Phase:

Group 1 (Varied Group): Got rewards only if they did new response combinations.

Group 2 (Yoked Group): Got rewards for any combination (no need to vary).

🧪 Test Phase (Extinction): No reinforcement is given anymore = extinction begins.

📊 Results: Both groups show...

✅ Increased Response Variability: They try different combinations, pressing the levers in weird new orders.

✅ Decreased Response Rate: Over time, they try less and less.

This shows that increased behavioral variability is a natural effect of extinction, even if the animal wasn’t originally trained to vary.

😡 Emotion: Frustration
When extinction begins, the most frequent emotion is frustration: you expected something good to happen, it doesn’t, and you get annoyed.
28
🐦 Azrin et al. (1966): Aggression and Extinction
Two pigeons in a Skinner box. One is trained to peck a key for food.

During extinction (no food after pecking), that pigeon turns around and attacks the other pigeon.

This shows extinction-induced aggression, also called frustration-aggression.

📌 Key quote from the slide: “The key-pecking bird ignores the other one as long as pecking is reinforced, but when extinction is introduced, the pigeon is likely to attack its innocent partner.”
29
🌀 Are the Effects of Extinction Permanent?
❌ No! Extinction does not erase the original learning.

Key phrase: “Extinction does not cause a permanent loss of a conditioned response.”

Instead, extinction creates a new learning: “This CS no longer predicts that US.”

But the original memory is still there and can return later through 3 types of recovery from extinction: Spontaneous Recovery, Reinstatement, and Renewal.
30
What are the 🔁 3 Types of Recovery from Extinction?
Spontaneous Recovery: The CR returns after a delay — even though nothing new has happened.
E.g., A dog starts salivating to a bell again the next day, even after extinction yesterday.

Reinstatement: The US is presented alone, and the CR returns.
E.g., If you suddenly get shocked again, you may re-fear the tone that had been extinguished.

Renewal: The CR returns in a different context/environment.
E.g., Fear extinguished in a therapist’s office returns at home or in school.
31
🧠 So what does extinction really do?
It overwrites the old learning with context-specific, flexible new learning: “In this situation, this cue no longer leads to this outcome.” But it doesn’t delete the old memory. This is called inhibitory learning.
32
SPONTANEOUS RECOVERY
🧠 KEY CONCEPT: Extinction dissipates with time. Even if you don’t do anything during a break, the extinguished behavior can spontaneously recover.

📖 Technical Explanation:

In classical conditioning, you pair a CS (like a light or tone) with a US (like food). Eventually, the CS causes a CR (e.g., a rat sticking its head in the dish).

Then, you extinguish the CR by presenting the CS without the US.

Later, after a break, you present the CS again, and... 💡 Boom! The CR reappears — even though no new learning occurred. This is spontaneous recovery.
33
🧪 Rescorla’s Study (1997, 2004) on SPONTANEOUS RECOVERY
The rats were trained with two signals:

S1 = Light
S2 = Tone

Both of these signals meant “food is coming” (US), so the rats learned to stick their heads in the food bowl (the conditioned response, or CR).

Then, extinction began: the light and tone were shown, but no food followed. Eventually, the rats stopped responding to both signals.

Later, during a test:

For S1 (light), they waited 8 days before showing it again.
For S2 (tone), they showed it right away after extinction.

What happened?

S1 (light) triggered the rat to respond again — the CR came back.
S2 (tone) didn’t trigger any response — the CR stayed gone.

🔍 Why did the CR come back for S1? Because the rat had a break with no new learning, the brain “reset” a little. The memory of food coming after the light came back on its own — that’s spontaneous recovery.

📈 Graph Insight: The extinction rate was the same for both S1 and S2. Only S1, which had time to rest, showed spontaneous recovery. It wasn’t full recovery — the CR wasn’t as strong as before — but it definitely returned.
34
RENEWAL
🔁 KEY CONCEPT: Extinction is context-dependent. Change the context, and the behavior (CR) can return.

📖 Technical Explanation: In renewal, we’re saying: “I only feel safe here — but the fear comes back there.”
35
📊 Bouton & King (1983) Study on RENEWAL
The rats were trained in Room A: they heard a tone (CS) followed by a shock (US). As a result, they learned to fear the tone and stopped pressing a lever (the fear response, or CR).

Then, the rats were split into 3 groups:

Group A: Extinction happened in Room A (same place as learning).
Group B: Extinction happened in Room B (a different room).
Group NE: Didn’t get extinction at all.

🧪 During the test (all rats back in Room A):

Group NE: Still scared — no surprise, they were never taught the tone was safe.
Group A: Not scared — because they learned in that exact room that the tone was no longer dangerous.
Group B: Fear came back — even though they learned the tone was safe, that learning happened somewhere else.

🧠 Why? Because extinction learning is tied to the room it happened in. The brain says, “It’s only safe in that place.” When the context changes, the brain stops trusting the extinction and doesn’t apply the extinction rule.
36
REINSTATEMENT
✅ Key Concept: Reinstatement is when a previously extinguished conditioned response (CR) returns after the unconditioned stimulus (US) is presented again on its own — even without the conditioned stimulus (CS).

The Salmon Aversion Example:

🔹 Original Conditioning: You eat salmon (CS) → you get sick (US) → you develop a taste aversion (CR) to salmon.

🔹 Extinction: You eat salmon several times and don’t get sick → your aversion fades away.

🔹 Reinstatement: One day, you get sick again, but not from salmon. Suddenly, your old aversion to salmon returns, even though the fish wasn’t the cause.

Why? ➡️ Because your brain reconnects the experience of being sick with its original cause: salmon. The US (sickness) reawakens the old CR (aversion), even though the CS (salmon) wasn’t the reason this time.
37
Rats and Cocaine (Kelamangalath et al., 2009) study on REINSTATEMENT
🔹 Training Phase: Rat presses a lever → gets cocaine → learns to press the lever a lot.

🔹 Extinction Phase: Rat presses the lever → no cocaine → stops pressing (CR is extinguished).

🔹 Reinstatement: Give the rat a free cocaine injection (US) → it starts pressing the lever again, even though pressing it gives nothing.

🧠 Why? The rat’s brain reactivates the old learning: cocaine = press lever. The free dose of cocaine brings the old behavior back to life.
38
Implications for Therapy (Fear of Public Spaces)
🔹 Example: A person has panic attacks (US) in crowds (CS) → develops a fear of public spaces (CR).

Therapy helps them stay calm in crowds (extinction). But if they have another panic attack, even in a different setting, their fear of public spaces might return.

🧠 This shows how extinction doesn’t erase the learning, it just suppresses it. Reinstatement can reactivate it.
39
RETENTION OF KNOWLEDGE OF REINFORCER
✅ Key Concept: Even after extinction, the organism still “remembers” which action led to which reward. The extinction didn’t erase the association — just stopped the behavior temporarily.
40
🐀 Experiment: Rats with Two Levers (Retention of Knowledge of the Reinforcer)
Setup:

Lever A → Food pellet
Lever B → Sugar water

Extinction: Both levers stop producing anything → the rat stops pressing.

Reinstatement Test: If you randomly give a food pellet, the rat will start pressing Lever A again (not B). Same if you give sugar water → it presses Lever B again.

🔍 Interpretation: This shows that extinction does NOT erase the S-O (stimulus-outcome) memory. The memory of what each lever used to produce is still intact.

📊 Chart Insight: “Same” reinstatement (e.g., a food pellet after Lever A) causes high responding. “Different” reinstatement (e.g., sugar after the food lever) causes much lower responding.
41
ENHANCING EXTINCTION — How to Make Extinction More Durable
✅ Key Points:

More extinction trials = stronger extinction. Like practicing a new habit repeatedly.

Shorter gaps between extinction trials = better effects. Practice frequently, don’t wait too long.

Repeating extinction training prevents spontaneous recovery. Helps “overwrite” the old behavior.

Doing extinction in multiple contexts = more generalization. The learned safety or non-response applies in different situations, not just one.

Example: A Bad Memory from a Park

Imagine a child is afraid of parks after falling off a swing.

Extinction: They visit the park several times without falling → the fear fades.

But if they go to a different park, the fear might come back (renewal). To prevent this, you expose them to many different parks. Now they generalize that parks = safe.
42
What is the difference between Spontaneous Recovery, Renewal, and Reinstatement?
🔁 Spontaneous Recovery
What: Return of a conditioned response after time passes (no new learning or stimuli).
Key Factor: Time
Example: Fear comes back after a break, even without retriggering it.

🌍 Renewal
What: Conditioned response returns when the context changes (e.g., a new location).
Key Factor: Context
Example: Fear extinguished in the therapy room returns at home.

💥 Reinstatement
What: Conditioned response returns after re-exposure to the unconditioned stimulus (US).
Key Factor: US returns
Example: Fear of crowds comes back after a random panic attack.
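As a quick self-quiz, the three phenomena reduce to a lookup from their key factor. This is a study-aid sketch, not part of the original cards:

```python
# Map each key factor to the recovery phenomenon it produces.
RECOVERY = {
    "time passes": "spontaneous recovery",
    "context changes": "renewal",
    "US returns alone": "reinstatement",
}

def recovery_type(key_factor: str) -> str:
    """Name the way an extinguished CR can come back, given the trigger."""
    return RECOVERY[key_factor]

print(recovery_type("context changes"))
```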
43
AVOIDANCE PROCEDURES (Think: "I take action to prevent bad things")
✅ Key Concept: “The response prevents an aversive event from occurring: negative contingency between instrumental response and aversive stimulus (if the response occurs, the aversive stimulus is omitted).”

Negative contingency means there’s an inverse relationship: if you do something (a response), the bad thing (aversive stimulus) does not happen.
44
🧠 Everyday Analogy of avoidance: Carrying an Umbrella ☔
Imagine it’s cloudy (warning cue). You bring an umbrella (instrumental response). It rains (aversive stimulus), but because you brought the umbrella, you don’t get wet.

➡️ That’s avoidance: your behavior (carrying the umbrella) prevents the bad outcome.

“Increased probability of response occurring in the future (i.e. it’s reinforcing)”: because not getting wet is pleasant, you’re more likely to carry the umbrella again. Avoidance is reinforcing — it increases the behavior that helped you dodge something bad.
45
🧠 Everyday Analogy of punishment: Touching a Hot Stove 🔥
You touch a hot stove (response) → you get burned (aversive stimulus). There’s a direct link between action and pain: touch = burn.

➡️ That’s punishment: your behavior causes the bad event.

“Reduced probability of response occurring (i.e. it’s not reinforcing)”: because getting burned hurts, you’re less likely to touch the stove again. Punishment decreases future behavior.

“Avoidance is active… punishment is passive”:
Avoidance: You do something to stop the bad thing.
Punishment: You do something wrong and get punished.
46
⚡ HISTORICAL STUDY: Bechterev (1913)
Students put fingers on a metal plate. A warning stimulus (CS+) came before a shock (US+). They learned to lift their fingers (instrumental response) when they heard the CS to avoid the shock.
47
What is 🔁 DISCRIMINATED AVOIDANCE / SIGNALLED AVOIDANCE?
This type of learning mixes classical conditioning (learning through signals) with operant conditioning (learning through consequences):

CS (Conditioned Stimulus): A warning cue — like a sound or light — that signals something bad is coming.
US (Unconditioned Stimulus): The unpleasant event — like a shock.
Response: The action the person or animal takes to avoid getting hurt.

If the animal responds during the warning cue, the sound stops and the shock doesn’t happen. That’s avoidance — they learn to act in time to prevent the bad thing altogether.

If they don’t respond during the warning, the shock starts — and then they must act to stop it. That’s escape — they didn’t prevent the pain, but they still learn to stop it once it starts.

🟦 AVOIDANCE BEHAVIOUR
“Response is performed during the CS-US interval and turns off CS and US.”
You act before the shock → you avoid it entirely.
🧠 Analogy: You get up when your alarm rings so you’re not late.

🟨 ESCAPE BEHAVIOUR
“Failure to respond during CS-US interval causes the presence of US until response occurs.”
You act after the shock has started → you escape it.
🧠 Analogy: You don’t wake up when your alarm rings, but then run when your boss calls you, angry.
48
📈 EARLY VS. LATER TRAINING
"Early in training, most trials produce escape behaviour, but with practice, avoidance behaviour increases."

At first, animals wait until they feel the shock and then respond to escape it.
Later, they learn to respond to the warning cue (CS) before the shock even comes — avoiding it.

📊 Diagrams Recap:
Avoidance: CS comes on → you act during the CS → the US never happens → the response is reinforced.
Escape: CS comes on → you do nothing → the US starts → you then respond to stop it.
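The escape-to-avoidance shift can be mimicked with a minimal simulation (all names and parameter values here are my own illustration, not from the lecture): on each trial the animal responds during the CS with some probability, and every reinforced trial nudges that probability up, so escape dominates early and avoidance late.

```python
import random

def train(n_trials: int = 200, p_respond: float = 0.05,
          lr: float = 0.05, seed: int = 1) -> list:
    """Toy model of signalled-avoidance training.

    p_respond is the chance of responding during the CS-US interval
    (yielding avoidance); otherwise the shock arrives and the trial
    ends in escape. Each trial's reinforced response strengthens
    timely responding a little (learning rate lr).
    """
    random.seed(seed)
    outcomes = []
    for _ in range(n_trials):
        outcomes.append("avoidance" if random.random() < p_respond
                        else "escape")
        p_respond = min(1.0, p_respond + lr * (1.0 - p_respond))
    return outcomes

outcomes = train()
print("first 20 trials:", outcomes[:20].count("avoidance"), "avoidances")
print("last 20 trials: ", outcomes[-20:].count("avoidance"), "avoidances")
```

Early trials are mostly escape; late trials are almost all avoidance — the qualitative pattern described above.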
49
What are the main types of experiments (among many others) used in the EXPERIMENTAL ANALYSIS OF AVOIDANCE BEHAVIOUR?
Escape-from-fear experiments
Independent measurement of fear during the acquisition of avoidance behaviour
Extinction of avoidance behaviour through response blocking and CS-alone exposure
50
What is Escape from Fear (EFF) Procedure?
Think of this as a lab version of learning to run away from something scary before it actually hurts you.

1. Classical Conditioning Phase
Rats are placed on one side of a shuttle box. They hear a CS (Conditioned Stimulus), like a tone, and shortly after receive a US (Unconditioned Stimulus) — a shock.
Result: the CS comes to elicit fear. The sound becomes scary on its own.

2. Test Phase
The rat can now move to the other side of the box. If it moves when the CS starts, the CS stops and the shock is prevented.
➡️ Moving is the response, and it lets the rat escape from fear — not just from pain, but from the fear-triggering signal itself.

3. Results
Rats quickly learn that moving gets rid of the scary signal, so they start moving as soon as they hear it — even before any shock happens.
51
What is Independent Measure of Fear?
Here, we explore whether fear itself is the reason animals keep avoiding.

✅ Key Idea: If avoidance behavior is driven by fear, then more avoidance should go with more fear… but the opposite happens: with more avoidance, measured fear drops — yet the avoidance behavior still continues.

🧠 Everyday Example: Imagine you're scared of talking to people. You avoid social events, and your fear stays strong. If you start going to events and nothing bad happens, the fear drops. Yet you might still keep those old habits of "preparing your escape plan," even when you're no longer afraid.

"The more often you hang out with your friends, the less social anxiety you'll have."
52
🔁 Cycle of Fear and Avoidance in Independent Measure of Fear?
Avoidance creates a self-reinforcing loop.

🔄 The Vicious Cycle: You avoid something scary → you feel better for now. But this tells your brain: "That thing really was dangerous! I did the right thing by avoiding it." So next time, your fear is just as strong — maybe even stronger.

❌ The Problem with Avoidance: "It provides temporary relief… but keeps the fear alive." You never get the chance to prove to your brain: "Hey, this actually isn't that bad."

✅ Exposure = Fear Reduction: When you face what you're afraid of (even in small steps), your brain updates its belief: "I didn't die. That wasn't so bad." Over time, your fear gets weaker.

Example: Start by speaking in front of 2 people, then 5, then 10 — the more you do it, the less scary it feels.
53
🧪 Lovibond et al. (2008) Study
This study explored whether people keep avoiding something even after they stop feeling afraid.

🧬 Setup: Participants saw 3 different colored blocks on a screen:
A+: shock could be avoided by pressing a button → instrumental learning
B+: shock happened no matter what → classical learning
C-: no shock at all → control

🔍 Results: Participants learned to press the button during A+, and their fear went down (measured via skin conductance — less sweating). But even after their fear disappeared, they kept pressing the button.

This means avoidance behavior can continue even when fear is no longer present.
54
Extinction of Avoidance Behaviour (🔓 Blocking the Response)
Show the CS (tone), but don't let the animal avoid — for example, put a barrier in the shuttle box so the rat can't switch sides.

So now: they hear the tone, they can't run, and they don't get shocked.

Over time, the brain learns: "Oh… I guess this tone doesn't mean pain anymore."
55
Extinction of Avoidance Behaviour (🌊 Flooding)
Keep them exposed to the CS (tone or spider or crowd) without letting them escape. It’s intense, but: They’re fully exposed to the fear trigger, And they don’t escape or get hurt, So the fear fades faster. This is used in therapy for anxiety and phobias.
56
👥 Group Exercise Example
Participants are brought to a room that feels mildly threatening (like a tight space for someone with claustrophobia). They are kept there, gradually increasing time. ➡️ They can’t escape, but nothing bad happens, so their brain relearns that this is safe. This is how exposure therapy works — facing the thing, staying there, and proving to your brain it’s not dangerous.
57
🔹 What is Punishment?
At its core, punishment is a consequence applied after a behavior that reduces the likelihood of that behavior occurring again. 🧠 Think of it like this: Imagine you touch a hot stove (behavior) and get burned (punishment). The pain teaches you not to touch it again.
58
❗ Thorndike (1932) and Skinner (1953) on punishment
These behaviorists originally believed punishment was not effective, producing only temporary effects — people (or animals) might stop a behavior briefly, but the behavior would come back later.

💡 Analogy: Like swatting a fly — it flies away temporarily but soon returns.

✅ Later Research: More recent studies disagree — with the right kind of punishment, a behavior can stop quickly and effectively, even after one or two experiences.

💡 Analogy: If a child touches a live electric socket and gets a painful shock, they never try it again. That's highly effective learning.
59
⚠️ Inappropriate Punishment (intensity matters)
Punishment only works if it's applied correctly. Badly chosen punishments can let the behavior come back (called "recovery").

Example 1: Electric shock. Child sticks a fork in an outlet → gets a shock. Result: permanent suppression of the behavior.

Example 2: Speeding ticket. You speed → get a ticket. But you may well speed again. Why? The punishment is neither dramatic nor immediate enough.

💡 Analogy: It's like using a spray bottle to stop a dog from barking. The barking might stop temporarily, but it will likely return.
60
Experimental Analysis of Punishment
🔁 Punishment procedures have 2 phases:

Phase 1: Establishment of the Instrumental Response
The subject learns a behavior that produces something good (a reinforcer).
Example: a rat presses a lever → gets food → keeps pressing the lever.
💡 Analogy: You learn that putting money in a vending machine gives you chocolate, so you keep doing it.

Phase 2: Punishment of Some Responses
Now some of those same rewarded behaviors are punished.
Some responses → still get the reward (e.g. food)
Some responses → now get a punishment (e.g. electric shock)

⚔️ This creates conflict: should the rat press the lever (to get food) or avoid pressing it (to avoid a shock)?
💡 Analogy: Imagine you reach for a cookie and sometimes it tastes great, other times it shocks you. You're now conflicted — reward vs. pain.

🐦 Example with a pigeon: the pigeon pecks a key → gets food. Then pecking also sometimes leads to a shock. This conflict lowers pecking overall.

📊 The resulting level of behavior depends on:
The strength of the punishment (how bad the shock is).
The value of the reward (how hungry the pigeon is, or how tasty the food is).
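The reward-vs-punishment conflict in Phase 2 can be captured in a toy model. The function and its parameters are hypothetical illustrations of the trade-off, not the experimenters' actual model:

```python
def lever_press_rate(reward_value: float, shock_intensity: float,
                     baseline: float = 60.0) -> float:
    """Toy conflict model: responses per session scale with the
    reinforcer's value and are divided down by the punisher's
    intensity. All numbers are illustrative."""
    return baseline * reward_value / (reward_value + shock_intensity)

# A hungry animal (high reward value) keeps responding despite the shock:
print(lever_press_rate(reward_value=8, shock_intensity=2))  # 48.0
# A strong shock paired with a modest reward suppresses responding:
print(lever_press_rate(reward_value=2, shock_intensity=8))  # 12.0
```

The point of the sketch: neither factor alone decides the outcome — it's their ratio, matching the slide's claim that behavior depends on both punishment strength and reward value.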
61
Characteristics of the Aversive Stimulus (Punishment) 🔹 How strong is the punishment?
Low-intensity punishment: causes only moderate suppression. Over time, the subject habituates (gets used to it) and the behavior returns.
💡 Analogy: Like a parent softly scolding a child — the child learns to ignore it.

High-intensity punishment: causes complete suppression of the behavior, lasting a long time.
💡 Analogy: Burning your hand on a stove makes you never touch it again.
62
🔹 How is the punishment introduced?
1. If the first punishment is severe: you get high suppression immediately.

2. If the punishment starts mild and increases: you develop resistance. Even when the punishment becomes severe later, it's less effective.

💡 Analogy: Think of slowly increasing the volume of a siren — at first it's annoying, but you get used to it, even as it becomes loud.

🔹 Real-life application: A long prison sentence is less effective if the person has already experienced shorter sentences — they've built up tolerance.
63
Schedules of Punishment 🔬 Study: Azrin et al. (1963)
Researchers trained pigeons to peck a key for food, with rewards arriving at unpredictable times — a Variable Interval (VI) schedule (like checking your phone for a text: sometimes there's a message, sometimes not).

Once the pigeons had learned to peck, the researchers added electric shocks as punishment — but not for every peck. Shocks were delivered on Fixed Ratio (FR) schedules: the bird was punished after a set number of pecks.

💡 Main Findings:
FR-1 = continuous punishment: the pigeon got a shock every single time it pecked. Result: it completely stopped pecking — the punishment worked 100%.
High FR (e.g., FR-1000): the pigeon was shocked only once every 1000 pecks. Result: it kept pecking a lot, because punishment was rare and easy to ignore.

💡 Analogy: Imagine you grab a cookie and get a shock. If you're shocked every time (FR-1), you'll stop grabbing cookies real fast. If you're shocked only once in a thousand grabs (FR-1000), you'll keep going — the chance of punishment is too low to care about.

📈 Graph Meaning:
A flat line = almost no pecking (high suppression).
A steep line = lots of pecking (low suppression).
So the FR-1 line is flat → the bird gave up; the FR-1000 line rises sharply → the bird kept pecking.
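The FR findings can be sketched with a toy suppression curve. This is purely illustrative — the functional form and the `sensitivity` constant are my assumptions, not from Azrin et al. The one grounded ingredient: on an FR-n punishment schedule, each response has on average a 1-in-n chance of being the punished one.

```python
def relative_response_rate(fr: int, sensitivity: float = 5.0) -> float:
    """Toy model: response rate relative to the unpunished baseline.

    Suppression grows with the per-response shock probability (1/FR);
    `sensitivity` is an arbitrary illustrative constant.
    """
    p_shock = 1.0 / fr
    odds = p_shock / (1.0 - p_shock + 1e-9)  # avoid division by zero at FR-1
    return 1.0 / (1.0 + sensitivity * odds)

for fr in (1, 10, 100, 1000):
    print(f"FR-{fr}: relative response rate = {relative_response_rate(fr):.3f}")
```

With these numbers, FR-1 drives responding to essentially zero while FR-1000 barely dents it — the qualitative pattern the study reports, not its actual data.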