Flashcards in Shallow processing Deck (29)
What is shallow processing?
People don't fully process the meaning of every word in a sentence. For example, Sanford and Sturt (2002) asked participants "Can a man marry his widow's sister?" and found that only 30% of people noticed that the man must be dead and is therefore in no position to remarry.
Why is shallow processing sometimes useful?
Because in real life speech is often ambiguous and contradictory, the comprehension system needs to be tolerant of imprecision in order to recover what people mean.
What assumptions were made by traditional models?
Full lexical retrieval and integration into a fully specified syntactic structure, e.g. Just and Carpenter (1980) on incremental interpretation: "Readers interpret a word while they are fixating it, and they continue to fixate it until they have processed it as far as they can", which implies an accurate and detailed understanding.
What did MacDonald, Pearlmutter and Seidenberg (1994) state?
They acknowledged that "the communicative goal of the listener can be achieved only with partial analysis of the sentence", but viewed these as 'degenerate cases' rather than the norm.
What evidence is there for shallow processing?
1. Incomplete semantic commitment
2. Garden path sentences
- Lingering incorrect interpretations
3. Pragmatic normalisation
- Misinterpretation of passive sentences
4. Failure to detect semantic anomalies
5. Failure to notice text changes
What did Sanford and Sturt (2002) do?
Demonstrated incomplete semantic commitment using sentences about buying a radio in which the word 'it' could refer to one of two things. The fact that such sentences are common and that people are unconcerned by the lack of specificity shows that shallow processing does occur.
What did Christianson, Hollingworth, Halliwell and Ferreira (2001) do?
Used garden path sentences as evidence for shallow processing, examining people's understanding of the sentence "While Anna dressed the baby played in the crib" and then asking participants:
- Did the baby play in the crib?
- Did Anna dress the baby?
What did Christianson, Hollingworth, Halliwell and Ferreira (2001) find, and what does this suggest?
Some people still thought that Anna dressed the baby, showing that they had not discarded their initial (incorrect) interpretation of the sentence. This suggests that once an interpretation is 'good enough', people don't bother clearing up the details - shallow processing.
What did Ferreira (2003) do?
Investigated pragmatic normalisation, whereby people misinterpret passive sentences (a breakdown of local semantic interpretation) because of pragmatic override. Asked participants 'Who is the do-er?' for active and passive sentences in which one interpretation was more intuitive (the dog biting the man, rather than the other way around).
What did Ferreira (2003) find?
Accuracy was significantly lower for passive sentences, especially for the counter-intuitive ones, because passive sentences are syntactically more difficult. Rather than fully analysing complex sentences, we rely on pragmatic knowledge - shallow processing.
What did Barton and Sanford (1993) do?
Demonstrated people's failure to detect semantic anomalies using the survivors problem. Asked participants where the "survivors" (/injured/wounded/maimed) should be buried.
What did Barton and Sanford (1993) find?
Detection rates tracked the core meaning of the critical word - the more strongly the word implies being alive, the higher the detection rate - suggesting that core meaning aids detection.
- To survive (i.e. be ALIVE) (70% detection)
- To be injured (~5% detection)
- To be wounded (~25% detection)
- To be maimed (~25% detection)
Words that fit the context may be processed less deeply, as shown by the influence of the scenario:
- Air crash (33% detection rate)
- Bicycle crash (80% detection rate)
Give an example of an easy-to-detect semantic anomaly.
He spread the warm bread with socks (Kutas & Hillyard, 1980).
Give an example of a hard-to-detect semantic anomaly.
How many animals of each kind did Moses take onto the Ark? (Erickson & Mattson, 1981)
What theories are there regarding why people miss hard-to-detect semantic anomalies?
1. Shallow processing hypothesis
- The full meanings of the anomalous words aren’t retrieved
- And/or integrated with the representation of the discourse
2. Reduced awareness hypothesis
- The comprehension system retrieves the meaning of the anomalous words and attempts to integrate the semantics of the word in question with the rest of the text
- However, for some reason, the fact of the anomaly may not reach conscious awareness
What did Bohan and Sanford (2008) do?
Investigated anomaly detection using eye tracking. Monitored people's eye movements as they read sentences containing hard-to-detect anomalies, to see whether the system registers the anomaly without conscious detection. Used the term 'negotiated' in reference to hostages.
What did Bohan and Sanford (2008) find?
No effects in first-pass reading times; total reading times on 'hostages' were longer when the anomaly was detected, but not when it went undetected.
What did Bohan and Sanford (2008) conclude?
Detection isn't immediate but slightly delayed (hence no effect on first-pass reading times). Detection results in severe disruption, which is observed only when anomalies are consciously detected. There was no evidence for unconscious detection, which supports the shallow processing hypothesis over the reduced awareness hypothesis.
What is a problem with Bohan and Sanford (2008)'s findings?
Perhaps eye tracking just isn't a sensitive enough measure.
What did Sanford, Leuthold, Bohan & Sanford (2011) do?
Investigated anomaly detection using ERPs. Compared processing of hard- and easy-to-detect anomalies and whether the processing of missed anomalies is more consistent with the shallow processing hypothesis or the reduced awareness hypothesis.
What did Sanford, Leuthold, Bohan & Sanford (2011) find?
- An N400 for easy-to-detect anomalies
- No N400 for hard-to-detect anomalies (this suggests that the words aren't semantically difficult to integrate)
- Late positive potential (LPP) for hard-to-detect anomalies but ONLY when actually detected. This supports the shallow processing hypothesis.
How does focus influence depth of processing?
1. Logical subordination
2. Linguistic focus (e.g. clefting)
- It was John who was late for the party
3. Discourse focus (e.g. question set up by the text)
- Everyone was wondering who had arrived late
4. Attention grabbing devices (e.g. bold, italics)
What did Baker and Wagner (1987) do?
Investigated logical subordination (subordinate clauses) as a method of focus, since subordinate clauses clearly distinguish focal information from 'extra' information. Found that people are more likely to rate a sentence as true when the incorrect fact is in the subordinate clause than when it is in the main clause. This shows that information can be 'hidden' in a subordinate clause.
What did Bredart and Modolo (1988) do?
Studied the effect of linguistic focus on focalisation, using the Moses illusion with cleft constructions ("it was") emphasising either Moses or the two animals in a sentence verification task. Found that misdirecting focus resulted in fewer people identifying the statement as false.
What did Sturt, Sanford, Stewart & Dawydiak (2004) do and find in their Experiment 1?
Had two conditions - focused and unfocused (on the fact that it was the cider which Jamie liked) - then changed 'cider' to either 'beer' (related) or 'music' (unrelated) and asked participants whether the sentence had changed. Found that participants were less likely to detect the change in the related-meaning condition, especially when unfocused. This shows the importance of linguistic focus for depth of processing.
What did Sturt et al. (2004) do and find in their Experiment 2?
Studied discourse focus (either focused on 'hat' or unfocused) in a similar design to Experiment 1, then changed 'hat' to either 'cap' (related) or 'dog' (unrelated). Found greatly reduced detection of changes in the related-meaning condition, especially when unfocused. This pattern, very similar to their first experiment, shows that discourse focus influences depth of processing.
How can text-change detection provide evidence for shallow processing?
It aims to discover when distinctions are NOT being made at some level of semantics, based on the 'granularity hypothesis', which concerns the fineness of detail in a representation. Focus increases the probability of detecting a change to a related word, which suggests that information in focus is represented at a finer level of detail. More generally, the fact that text changes are often missed supports the idea of shallow processing.
What evidence is there that depth of processing is affected by attention-grabbing devices?
Bredart and Docquier (1989) used anomaly detection in the Moses illusion, and found that when 'Moses' was capitalised and underlined, detection was 86.5%, compared to 68.3% when the word 'two' was focused on.
Also Sanford, Sanford, Molle & Emmott (2006) showed that text-change detection and auditory-change detection are both better when the critical word is either presented in italics or stressed vocally, respectively.