Rescorla & Wagner model Flashcards

1
Q

What is the relationship between the surprisingness of the US and learning?

A

A US can be surprisingly big = excitatory effect > increase in associative strength

A US can be surprisingly small =
inhibitory effect > decrease in associative strength

2
Q

What is the effect of US expectation according to R-W?

A

US expectation is based on the already acquired associative strength of all the stimuli that are present on a certain trial

Strong CR = index of a strong expectation of the US

Weak CR = indication of a low expectation of the US

3
Q

What is λ in the R-W model?

A

λ = asymptote of learning possible with the US
(will be 1 when the US is present on a trial,
will be 0 when the US is not present on a trial)

4
Q

What is ΔV in the R-W model?

A

ΔV = change in associative strength of a stimulus on that trial

5
Q

What is V in the R-W model?

A

Sum of the already acquired associative strength of all the CSs present on a trial

6
Q

What is (λ – V) in the R-W model?

A

How surprising the US is = the discrepancy between expectation and the actual US

= the amount of learning on a trial

= large at the start of learning, because the associative value V is close to zero:

λ – V = 1 – 0 = 1

7
Q

What is the formula for 1 CS?

A

ΔVA = k(λ – VA)

k = salience of the CS (a constant)
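A minimal Python sketch of this single-trial update (the function name rw_update and the values k = 0.3, λ = 1 are illustrative assumptions, not part of the card):

```python
def rw_update(v, lam=1.0, k=0.3):
    """One Rescorla-Wagner step: ΔV = k(λ - V)."""
    return k * (lam - v)

# First reinforced trial, starting from V = 0:
delta = rw_update(v=0.0)
print(delta)  # 0.3 -> a large change, because the US is maximally surprising
```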

8
Q

How does the acquisition of associative strength change over trials with 1 CS?

A

Beginning: VA = 0, so (λ – VA) = (1 – 0) = 1 = big (strong underprediction because of low expectation), so ΔVA = big increase

After a few trials: VA = big, so (λ – VA) = small (small underprediction, high expectation), so ΔVA = only a small increase

Typical acquisition curve = negatively accelerated growth curve > a steep increase at first, after which the growth levels off
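A small Python sketch of this acquisition curve, assuming k = 0.3, λ = 1 and ten reinforced trials (all values are illustrative):

```python
k, lam = 0.3, 1.0
v = 0.0
for trial in range(1, 11):
    delta = k * (lam - v)   # the prediction error shrinks as V grows
    v += delta
    print(f"trial {trial:2d}: dV = {delta:.3f}, V = {v:.3f}")

# ΔV starts at 0.300 and gets smaller every trial,
# giving the negatively accelerated curve described above.
```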

9
Q

What is the formula for 2 CSs in associative learning?

A

ΔVA = k(λ – VAB)

ΔVB = k(λ – VAB)

Where > VAB = VA + VB

According to Rescorla-Wagner this implies:
competition between CS A and CS B for associative strength with the US, because
the sum VA + VB can total at most λ (and not more)

See overshadowing!
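A minimal Python sketch of AB+ compound training under these equations, assuming equal salience k = 0.3 for both CSs, λ = 1 and 20 trials (values are illustrative); compare it with the single-CS sketch above to see the competition behind overshadowing:

```python
k, lam = 0.3, 1.0
v_a = v_b = 0.0
for _ in range(20):               # 20 AB+ trials
    error = lam - (v_a + v_b)     # shared error based on the compound prediction VAB
    v_a += k * error              # ΔVA = k(λ - VAB)
    v_b += k * error              # ΔVB = k(λ - VAB)
print(round(v_a, 3), round(v_b, 3))  # each ≈ 0.5: together they sum to λ, not more
```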

10
Q

What is overshadowing according to R-W?

A

After A+ training the CR to A will be bigger than after AB+ training

B “overshadows” acquisition of A
B also gains associative strength during AB+ training

overshadowing:
during simultaneous training of CS A and CS B > VA and VB will be smaller than if they were trained independently!

competition

11
Q

What is the blocking mechanism according to R-W?

A

Phase 1: CS A is trained until a perfect expectation of the US is acquired. At the end of phase 1: VA = λ = 1

Phase 2: Stimulus B is presented together with A and followed by US.

No conditioning will occur for B because the US is already perfectly predicted by A:
(λ – VAB) = 0 in phase 2
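A small Python sketch of this two-phase design, assuming k = 0.3, λ = 1, 50 A+ trials in phase 1 and 20 AB+ trials in phase 2 (all values are illustrative):

```python
k, lam = 0.3, 1.0
v_a = v_b = 0.0

# Phase 1: A+ trials until A predicts the US (almost) perfectly.
for _ in range(50):
    v_a += k * (lam - v_a)

# Phase 2: AB+ trials. The shared error λ - (VA + VB) is already ≈ 0.
for _ in range(20):
    error = lam - (v_a + v_b)
    v_a += k * error
    v_b += k * error

print(round(v_a, 3), round(v_b, 3))  # ≈ 1.0 and ≈ 0.0: B stays blocked
```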

12
Q

How does unblocking work according to R-W?

A

Unblocking > through a stronger US on the trial

Phase 2: the US is doubled in this phase, which means the learning asymptote is bigger:
2λ in this case. Adding CS B will have this effect:
A has an association equal to λ = 1
B has an association equal to 0 (because no association has been built yet)

Because the λ in the formula is doubled, there is an underprediction in the second phase, so there is still room for CS B to learn.

New formula = (2λ – VAB)

No blocking for B > it becomes excitatory!
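A Python sketch of unblocking by a doubled US, reusing the blocking setup above but with asymptote 2λ in phase 2 (k = 0.3, λ = 1 and the trial counts are illustrative):

```python
k, lam = 0.3, 1.0
v_a = v_b = 0.0

# Phase 1: A+ trials until VA ≈ λ.
for _ in range(50):
    v_a += k * (lam - v_a)

# Phase 2: AB+ trials with a US twice as strong, i.e. asymptote 2λ.
for _ in range(50):
    error = 2 * lam - (v_a + v_b)  # underprediction: there is room left to learn
    v_a += k * error
    v_b += k * error

print(round(v_a, 2), round(v_b, 2))  # ≈ 1.5 and ≈ 0.5: B becomes excitatory instead of blocked
```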

13
Q

How can loss of associative value ↓ happen, despite pairing with the US?

A

Answer: over-expectation

Phase 1 = A and B are associated with the US > but on separate trials!
They both gain associative value > until they each predict the US perfectly = λ

Phase 2 = A and B are presented simultaneously on the same trial with the same US as in phase 1.

The subjects expect a US as strong as A's and B's predictions combined > but this is not the case.

RESULT: They will have to lower their expectation to meet the real condition > loss of associative value
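A Python sketch of this over-expectation design, assuming k = 0.3, λ = 1 and 50 trials per phase (values are illustrative):

```python
k, lam = 0.3, 1.0
v_a = v_b = 0.0

# Phase 1: A+ and B+ on separate trials, each reaching ≈ λ.
for _ in range(50):
    v_a += k * (lam - v_a)
    v_b += k * (lam - v_b)

# Phase 2: AB+ trials. The compound predicts ≈ 2λ but only λ is delivered,
# so the shared error is negative and both stimuli lose strength.
for _ in range(50):
    error = lam - (v_a + v_b)
    v_a += k * error
    v_b += k * error

print(round(v_a, 2), round(v_b, 2))  # each drops to ≈ 0.5 despite still being paired with the US
```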

14
Q

How does conditioned inhibition work?

A

Once CS-A has acquired excitatory value and is predicting the US well, it is paired with CS-B on trials without the US.

This creates an overexpectation on these trials, driven by CS-A.

To predict the absence of the US on non-reinforced trials -> the associative values of CS-A and CS-B have to sum to zero. Because CS-A has acquired positive value, CS-B then has to gain negative value for the sum to be 0.

ASSUMPTION = the CS– (the inhibitor B) acquires negative associative value
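A Python sketch of this A+/AB– design, assuming k = 0.3, an asymptote of 1 on reinforced trials and 0 on non-reinforced trials, with alternating trial types (all values are illustrative):

```python
k = 0.3
v_a = v_b = 0.0
for _ in range(200):
    # A+ trial: A alone, US present (asymptote 1).
    v_a += k * (1.0 - v_a)
    # AB- trial: compound, US absent (asymptote 0).
    error = 0.0 - (v_a + v_b)
    v_a += k * error
    v_b += k * error

print(round(v_a, 2), round(v_b, 2))  # ≈ 1.0 and ≈ -1.0: B becomes a conditioned inhibitor
```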

15
Q

What are the extra facts about conditioned inhibition?

A
  • During AB– (no US) trials > A will lose some of its excitatory value
  • Acquiring the positive association to CS A goes faster than acquiring the negative association to CS B
  • The underprediction (1 – VA) on A+ trials is bigger than the overprediction (0 – VAB) on AB– trials
    Will stabilize when VA = λ and VB = –λ
16
Q

How does extinction (of excitation) happen?

A

Phase 1:
First A+ training until VA = λ = 1

Phase 2: repeatedly presenting A– trials > no US

Overexpectation will occur:
λ = 0

CS A will slowly lose its excitatory value until it drops to 0.

PROBLEM:
This part of the theory is faulty:
extinction is not the opposite of acquisition!
You do not unlearn the relationship between a CS and a US
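A short Python sketch of acquisition followed by extinction as R-W describes it, assuming k = 0.3 and 50 trials per phase (values are illustrative; note the card's caveat that real extinction does not work like this):

```python
k = 0.3
v_a = 0.0
for _ in range(50):              # Phase 1: A+ trials, asymptote λ = 1
    v_a += k * (1.0 - v_a)
for _ in range(50):              # Phase 2: A- trials, asymptote λ = 0
    v_a += k * (0.0 - v_a)
print(round(v_a, 3))             # ≈ 0: the model treats extinction as simple unlearning
```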

17
Q

When is there protection from extinction?

A

Despite not presenting the CS with the US >
CS– > an extinction procedure….

The CS will still keep its excitatory value!

Procedure: when CS A is presented together with the inhibitor B > and no US = AB–
CS A will remain excitatory.

(VA + VB) = (1 + (-1)) = 0

ΔV = k(λ – VAB) = k(0 – 0) = 0

There is no overprediction > so CS A keeps its excitatory value

18
Q

How does extinction of conditioned inhibition work?

A

B has acquired inhibitory value > it is negative > so on B– trials the US is underpredicted.

Phase 2: many B– trials alone > B, no US
At the start VB = –λ

The underprediction will now be adjusted.

CS B will slowly lose its inhibitory value until it reaches 0.

PROBLEM:
It does not actually work this way!
B– (no US) trials after A+/AB– training do not extinguish the inhibitory value of B. They can even make it more inhibitory.

What does work: extinction of the excitatory value of CS A!

19
Q

Can a CS have both inhibitory and excitatory properties?

A

Not according to Rescorla and Wagner.

In reality: yes.
The same CS can have both inhibitory and excitatory properties.

According to Rescorla and Wagner a CS can have only one of the two: it is either positive or negative, but not both.

PROOF:
Tait and Saladin
by backward and forward conditioning

20
Q

Which phenomena cannot be explained by R-W?

A

Latent inhibition

+

Unblocking through a weaker US > is not predicted by R-W but is possible

21
Q

Where does R-W place the emphasis in learning?

A

Rescorla and Wagner focused on the effectiveness of the US.

The more has already been learned > the better the US is predicted > the less is learned on the trial. No learning occurs when the US is perfectly predicted.