Week 11 — The Information Processing Approach to Memory (flashcard deck, 30 cards):
What did Sternberg (1966) show in his 1st and 2nd experiment of high speed scanning in human memory?
Sternberg conducted a recognition task (is the probe in the memory set or not?) followed by a recall task. There was no difference between positive trials (probe on the list) and negative trials (probe not on the list), no interaction, and error rates were low for both recognition and recall. RTs increased linearly with set size in both fixed and varied memory-set conditions.
Sternberg's results can be directly interpreted by what?
The Sternberg paradigm’s design maps directly onto a two-way ANOVA, with set size and probe presence as factors; the key questions are whether each factor has a main effect and whether the two factors interact.
What is Sternberg's conception of the additive factors method?
Sternberg’s conception of the additive factors method is that a series of processing stages is presumed to occur between when the stimulus is presented (e.g. when you see the probe) and when you make a response.
The sequence of processes are:
1. Stimulus encoding
2. Comparison (to the items you’re holding in memory)
3. Binary decision (about whether it was on the list)
4. Translation and response organisation (execute decision)
How can you show whether the sequence of stages in processing are independent?
The key part is that each one of these steps can be selectively influenced by some experimental manipulation.
- Making the letter/number harder to read slows stimulus encoding (increases the time it takes to encode the stimulus)
- Making the set size bigger slows the comparison process
- Changing the response type affects the binary decision
- Making the response more difficult to execute, or changing the frequency of the response, slows response execution
If these stages are independent and arranged sequentially, then manipulating set size (to slow the comparison stage) and probe presence (to influence the binary decision) should produce effects that do not interact; additivity of the two effects is an indication of a sequential, independent series of stages.
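The additive-factors logic above can be sketched as a toy calculation. All the stage durations here are illustrative, made-up parameters (not Sternberg's fitted values): the point is only that when total RT is a sum of independent stages, a manipulation of one stage shifts RT by the same amount at every level of another factor.

```python
# Minimal sketch of additive factors: total RT = sum of independent stage durations.
# All durations in ms are hypothetical, chosen only for illustration.

def predicted_rt(set_size, degraded=False, probe_present=True):
    encoding = 120 + (60 if degraded else 0)   # stimulus quality affects encoding only
    comparison = 38 * set_size                 # set size affects comparison only
    decision = 50 if probe_present else 70     # probe presence affects the binary decision
    response = 200                             # response organisation / execution
    return encoding + comparison + decision + response

# Additivity: the degradation cost is identical at every set size,
# i.e. stimulus quality and set size do not interact.
cost_small = predicted_rt(2, degraded=True) - predicted_rt(2, degraded=False)
cost_large = predicted_rt(6, degraded=True) - predicted_rt(6, degraded=False)
assert cost_small == cost_large == 60
```

If the stages were not independent (e.g. degraded stimuli also slowed each comparison), the degradation cost would grow with set size and the factors would interact.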
How is the additive factors method reflected in clinical populations?
Clinical populations scan at different rates (memory task).
Some populations have different slopes, this reflects problems with memory scanning.
Other populations have the same slope but different intercepts, reflecting problems with other processes e.g. response inhibition/encoding that is unrelated to memory scanning.
What are the stopping rules?
1. Self-terminating: search stops as soon as the probe is located in the memory set OR at the end of the list when there are no more items to search
2. Exhaustive: search must continue until all items have been scanned, even if the target has already been found in the memory set
In the Sternberg paradigm, what pattern of results is predicted by the serial self-terminating model?
The serial self-terminating model predicts main effects of set size and possibly probe, and an interaction. The interaction arises because scanning stops sooner when the probe is present (on average after half the items) than when it is absent (after all items), so the positive slope is half the negative slope.
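The predicted interaction can be made concrete with a toy calculation (illustrative scan rate and intercept, not fitted values): positive trials scan (s + 1)/2 items on average, negative trials scan all s items.

```python
# Sketch of serial self-terminating predictions; parameters are hypothetical.

SCAN = 38.0   # illustrative ms per comparison
BASE = 400.0  # illustrative encoding + response time

def mean_rt_self_terminating(set_size, probe_present):
    # Positive trials stop at the target: (s + 1) / 2 comparisons on average.
    # Negative trials must scan all s items before responding NO.
    items_scanned = (set_size + 1) / 2 if probe_present else set_size
    return BASE + SCAN * items_scanned

slope_pos = mean_rt_self_terminating(6, True) - mean_rt_self_terminating(5, True)
slope_neg = mean_rt_self_terminating(6, False) - mean_rt_self_terminating(5, False)
assert slope_neg == 2 * slope_pos  # the predicted set size x probe interaction
```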
In the Sternberg paradigm, what pattern of results is predicted by the parallel self-terminating model and exhaustive model?
They predict a main effect of probe (different intercepts) but no main effect of set size (flat slope): in these unlimited-capacity parallel models, all items are compared simultaneously regardless of the stopping rule.
In the Sternberg paradigm, what pattern of results is predicted by the serial exhaustive model?
There are main effects of both probe and set size but no interaction. In a serial exhaustive model the slope reflects the rate of scanning, and because scanning is exhaustive on both positive and negative trials the two functions share the same slope; changes in intercept reflect other processes.
It predicts no interaction because scanning continues regardless of the presence or absence of the probe.
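A toy version of the serial exhaustive prediction (illustrative parameters): both trial types scan all s items, so the two RT functions are parallel lines.

```python
# Sketch of serial exhaustive predictions; parameters are hypothetical.

SCAN = 38.0  # illustrative ms per comparison

def mean_rt_exhaustive(set_size, probe_present):
    intercept = 400.0 if probe_present else 430.0  # only the decision stage differs
    return intercept + SCAN * set_size             # same slope for both trial types

# The probe effect is constant at every set size: no interaction.
diffs = {mean_rt_exhaustive(s, False) - mean_rt_exhaustive(s, True)
         for s in range(1, 7)}
assert diffs == {30.0}
```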
The standard Sternberg paradigm result is that mean RT increases with increasing set size and that there is no interaction between set size and probe. How can that result be explained?
The standard Sternberg result can be predicted by three models: 1) a serial exhaustive model, which assumes that items are processed sequentially and search cannot stop until the end of the list; 2) a limited-capacity parallel model, in which all items are processed in parallel and each additional item slows the overall processing rate; and 3) a global familiarity model, which assumes that the probe is compared to all of the items on the list and, if the global match exceeds a threshold, you say OLD. Donkin & Nosofsky showed that all three models are viable using Sternberg’s slow presentation time.
What is represented by a lag function in the Sternberg task?
Lag refers to how far back in time an item was presented. The lag function plots the response time for each item as a function of its lag.
What are the challenges to the serial exhaustive model?
1. Serial position effects
2. Redundant target effects
3. Item probability
What are serial position effects and how do they challenge the serial exhaustive model? (Corballis, 1967)
Serial position effects are changes in response time as a function of where in the list the item fell (primacy and recency effects). The serial exhaustive model predicts that there should be no recency or primacy effects. When Corballis replicated Sternberg’s experiment with faster presentation times, the lag function was not flat: strong recency and primacy effects were observed, because there was not enough time for rehearsal of the set items.
What does a serial exhaustive model predict the lag function will look like and why?
The serial exhaustive model predicts that the lag function will be flat because exhaustive scanning implies that the response time for a list will take the same amount of time regardless of where the target appears in that list.
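This flat-lag prediction can be sketched directly (illustrative parameters): under exhaustive scanning, predicted RT depends on set size but deliberately ignores the target's serial position.

```python
# Sketch: exhaustive scanning ignores where in the list the target sits.
# Parameters are hypothetical.

SCAN = 38.0
BASE = 400.0

def rt_for_target_position(set_size, position):
    # `position` is deliberately unused: every item is scanned regardless,
    # so the predicted lag function is flat.
    return BASE + SCAN * set_size

rts = [rt_for_target_position(4, p) for p in range(1, 5)]
assert len(set(rts)) == 1  # same predicted RT at every serial position
```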
What is a recency effect?
The last item in the list is processed/remembered faster than the other items.
What is a primacy effect?
An RT benefit for being the first item in the list.
What are redundant target effects and how do they challenge the serial exhaustive model? (Baddeley & Ecob, 1973)
Redundant target effects are changes in response time as a function of target repetitions. The serial exhaustive model predicts no effect of a redundant (repeated) study item, because all of the items on the list are scanned anyway. Baddeley & Ecob (1973) showed that RTs are faster when the test item is one that was repeated in the study list.
What is item probability and how does it challenge the serial exhaustive model? (Darley, Klatzky & Atkinson, 1972)
The effect of response time changing as a function of prior exposure to probe item. The serial exhaustive model predicts that it shouldn’t matter if the identity of the probe item is known (before the probe is presented); because all of the items are scanned anyways. Darley, Klatzky & Atkinson (1972) used an auditory or a visual cue to indicate which of the study items would be the probe. They showed that RTs are much faster when the probe is cued beforehand and that slope was close to 0.
If items are more probable at test, RTs are faster and there is no effect of the number of items held in memory. Presumably this is because you are only holding the one cued item in memory.
What is the parallel, limited capacity model?
Assume that as the set size increases, the rate of simultaneous comparison decreases: processing resources are flexible but shared and limited. If the rate of processing decreases with set size, then RT will increase with set size. The parallel model can also predict the lag functions. As a function of set size, serial models and limited-capacity parallel models make identical predictions.
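The capacity-sharing idea can be sketched as a toy calculation (illustrative rate, not a fitted value): a fixed total rate is divided among the s items, so each comparison slows down and the common finishing time grows linearly with s, mimicking a serial scan.

```python
# Sketch of a limited-capacity parallel model; the capacity value is hypothetical.
from fractions import Fraction  # exact arithmetic for the illustration

CAPACITY = Fraction(1, 38)  # illustrative total processing rate (items per ms)

def finishing_time(set_size):
    rate_per_item = CAPACITY / set_size  # capacity shared equally among items
    return 1 / rate_per_item             # all comparisons finish together

# Each added item costs 38 ms overall, the same slope a 38 ms/item serial
# scanner would produce: the two model classes mimic one another here.
assert finishing_time(2) - finishing_time(1) == 38
```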
What is a familiarity, global memory strength model?
Instead of scanning each item individually, familiarity-based models assume that you form an overall, global memory trace.
Yes/No judgements are based on whether the probe seems familiar or strong enough when compared to the global memory signal (cf. SDT). Familiarity-based models assume a threshold for OLD and NEW responses: if the familiarity signal (computed by comparing the probe to the combined study-item signal) exceeds the threshold, an OLD response is emitted. The familiarity-based model predicts the set size function and also predicts the correct serial position effects in the lag functions (Nosofsky 2011 fits to Monsell 1978). The model predicts recency effects but does not strongly predict primacy effects.
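The decision rule above can be sketched as a summed-similarity computation in which trace strength decays with lag; all parameters (decay rate, match/mismatch similarities, the example letters) are illustrative, not fitted values.

```python
# Sketch of a global-familiarity signal with lag-based decay (hypothetical values).

def familiarity(probe, study_list, decay=0.7, match=1.0, mismatch=0.1):
    total = 0.0
    for lag, item in enumerate(reversed(study_list), start=1):
        strength = decay ** lag  # older items contribute weaker traces
        total += strength * (match if item == probe else mismatch)
    return total

study = ["K", "T", "R", "B"]
# Recency falls out naturally: probing the most recent item ("B") yields a
# stronger familiarity signal than probing the oldest item ("K").
assert familiarity("B", study) > familiarity("K", study)
```

An OLD response would be emitted whenever `familiarity(probe, study)` exceeds a criterion, exactly as in an SDT-style threshold decision.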
What does the serial model predict and not predict?
The serial model mis-predicts the lag function even though it gets the set size Mean RT function correct.
Describe the experiment of the Extralist Feature Effect (Mewhort & Johns, 2000).
They performed a recognition task in which each list was drawn from a set of coloured shapes. The probes had different combinations of old and new features, and some recombined old features from different study items. In another experiment they manipulated the number of repeated features (matches).
What are the challenges to familiarity-based models?
The extralist feature effect. Results from this experiment show that RT increases as a function of the similarity between the probe and the list: the more matching features, the slower the RT, but there was no difference between a feature presented once or twice. What mattered was whether the probe contained a new feature. Rather than computing overall similarity, people appear to look for what is novel: instead of forming a global familiarity representation, they attend to individual features of the items on the study list. This result is taken to imply that people do not rely on a global, summed similarity signal but on individual feature information.
What does the familiarity model assume for the extralist feature experiment of Mewhort & Johns (2000)?
The familiarity model assumes that the familiarity signal should be equal (assuming each match and each new feature gets you “one point”). But what actually matters is the novel feature, not the number of matches/features in the probe.
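The “one point per feature” intuition can be sketched with a toy feature counter (the feature names here are hypothetical examples, not Mewhort & Johns's stimuli): simple match counting ranks probes only by how many old features they carry, and has no way to express the qualitative pop-out of a single novel feature found in the data.

```python
# Sketch of naive point-counting familiarity over feature sets (illustrative).

def point_count_familiarity(probe_features, list_features):
    # One "point" per probe feature that also appeared on the study list.
    return sum(1 for f in probe_features if f in list_features)

study = {"red", "square", "blue", "circle"}
probe_a = {"red", "circle"}  # two old features
probe_b = {"red", "green"}   # one old feature plus one extralist (novel) feature

# Point counting only registers the number of matches; it cannot capture the
# finding that the novel feature, not the match count, drives rejection speed.
assert point_count_familiarity(probe_a, study) == 2
assert point_count_familiarity(probe_b, study) == 1
```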
What was Nosofsky et al.'s (2011) explanation for the Extralist Feature Effect?
Nosofsky et al. proposed that similarity is weighted differently for features that appeared on the list and features that did not (context-specific similarity). The idea is that each match is not worth “one point”: mismatches on extralist features make the overall item less similar, because those features were not on the study list (and so decrease correct-rejection RT). This is an extended exemplar-based random walk (EBRW) model.
This explanation does not fully contradict the familiarity-based model.
Which models will work for fast presentation times?
Only the familiarity model and the parallel model make the correct lag predictions. The serial exhaustive model doesn’t work.
In a Sternberg task, one explanation is that instead of scanning each item individually, you form an overall, global memory trace. In these familiarity-based models, OLD/NEW judgments are based on whether the probe seems familiar enough or strong enough when compared to the global memory signal. If processing is based on this global memory strength, how can we interpret the costs associated with adding additional items to the memory set?
For a strength-based familiarity model, the slope should be interpreted in terms of the lag functions. If you examine the lags, the set size effect can be interpreted solely in terms of lag: items one position back from the probe are recognised best, and performance then worsens with lag. The set size effect occurs because you average over lags, and larger sets sometimes place the probe at the longest lags. The decline with lag can in turn be interpreted as memory strength that decays with lag, so the slope of the set size function reflects the cost of including probes with low memory strength due to higher lag: things further back in time are harder to remember. In an applied setting, this means that deficits producing increased slopes reflect memory strengths that decay faster than normal.
The lag functions for the serial and parallel models are a little harder to interpret in such a straightforward manner (even though those models predict lag functions not dissimilar from the familiarity-predicted lags).
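The averaging-over-lags argument can be made concrete with a toy calculation (illustrative base time and lag cost): if RT grows with lag and the probe is equally likely at each study position, larger sets automatically include longer lags, producing a set-size slope without any scanning at all.

```python
# Sketch: a set-size effect emerging purely from lag-dependent memory strength.
# Parameters are hypothetical.

def rt_at_lag(lag, base=400.0, cost=25.0):
    return base + cost * lag  # weaker (older) trace -> slower response

def mean_positive_rt(set_size):
    lags = range(1, set_size + 1)  # probe equally often at each serial position
    return sum(rt_at_lag(lag) for lag in lags) / set_size

# Mean RT rises with set size even though no item-by-item scan occurs.
assert mean_positive_rt(4) > mean_positive_rt(2)
```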
How can we interpret the costs of serial models associated with adding additional items to the memory set?
The increasing slope represents the time associated with scanning a single item: if the slope is 38 ms/item, then each additional item must “cost” an additional 38 ms. In an applied/clinical setting, this means that deficits that result in increased slopes mean that people are slower at scanning each item.
How can we interpret costs of limited capacity parallel models associated with adding additional items to the memory set?
The increasing slope represents a capacity cost associated with increasing the number of items scanned at the same time. Overall scanning time is slowed by adding more items. In an applied setting, this means that deficits that result in increased slopes mean that people have a capacity limitation on the number of items that can be processed simultaneously.