lecture 9 - pragmatics Flashcards

1
Q

Language
Meaning influenced by:

A
  • Semantics – meanings of words
  • Syntax – themes (who, what, to whom)
  • Pragmatics – influence of context
2
Q

semantics

A

words map onto meanings/concepts

3
Q

syntax

A

Word order maps onto themes/roles
“John fed the dog” ≠ “The dog fed John”

4
Q

pragmatics

A

“He ate the whole dog!”

  • How context affects meaning
    • Figurative (nonliteral) speech
      ○ Love is a journey.
    • Inferences
      ○ A: Have you met Helen’s boyfriend?
      ○ B: Yeah, he’s got a nice personality.
    • Anaphora
      ○ I hit it with that thing.
      ○ Anaphora = when one expression refers back to another
5
Q

Figurative meaning

A
  • Metaphor
    • Love is a journey
  • Sarcasm
    • Don’t you just love it when you have ten essays to do in one day?
  • Indirect speech
    • Can you tell me the time?
6
Q

what’s the processing problem?

A

Words mean things.

“My job is a jail!”

  • Words map onto concepts
  • Figurative meanings don’t use those concepts!
    How do we infer the correct meaning?
7
Q

Maybe it’s not a problem

A

Maybe we hardly ever use figurative language.

8
Q

Wrong.

A
  • We use figurative language all the time.
    • 1 unique metaphor for every 25 words in political speech (Graesser, Long & Mio, 1989)
    • 1.5 novel and 3.4 clichéd figures of speech per 100 words spoken (Pollio et al., 1977)
    • 15 million in the course of a lifetime

Maybe it’s not a problem
  • Maybe we hardly ever use figurative language.
  • Perhaps we just remember every metaphor as a single example.

9
Q

Love is a journey

A

The lovers are traveling on a journey together, with their common life goals seen as destinations to be reached. The relationship is their vehicle, and it allows them to pursue common goals together. The relationship is seen as fulfilling its purpose as long as it allows them to make progress toward their common goals. The journey isn’t easy. There are impediments, and there are places (crossroads) where a decision has to be made about which direction to go in and whether to keep traveling together.

10
Q

Theories of figurative language processing

A
  • Three-stage (standard) view (Clark & Lucy, 1975; Searle, 1979; Grice, 1975)
    • Pass through the literal meaning
  • One-stage view (Gibbs, 1981; Glucksberg & Keysar, 1990)
    • No different to other types of language
11
Q

Three-stage model

A
  • Find the literal meaning
  • Is it sensible in the context?
  • If not, infer a figurative meaning

Flow: compute literal meaning → is the meaning contextually appropriate? → if yes, integrate it with the contextual representation; if no, compute the figurative meaning (and then integrate it)

(Grice, 1975; Searle, 1979)
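A minimal Python sketch (my own illustration, not part of the lecture) of this serial decision flow; all function names and the toy meaning lookups are assumptions for exposition:

```python
# Sketch of the three-stage (standard) view of figurative comprehension
# (Grice, 1975; Searle, 1979). Toy lookups stand in for real interpretation.

def literal_meaning(utterance):
    # Stage 1: compute the literal meaning (placeholder lookup).
    return {"lawyers are sharks": "lawyers are large predatory fish"}.get(utterance, utterance)

def fits_context(meaning, context):
    # Stage 2: is the literal meaning plausible in this context?
    return meaning in context["plausible_meanings"]

def figurative_meaning(utterance):
    # Stage 3: only reached if the literal reading fails.
    return {"lawyers are sharks": "lawyers are aggressive and ruthless"}.get(utterance, utterance)

def interpret(utterance, context):
    literal = literal_meaning(utterance)
    if fits_context(literal, context):
        return literal                    # integrate the literal meaning
    return figurative_meaning(utterance)  # extra stage -> slower comprehension predicted

context = {"plausible_meanings": {"lawyers are aggressive and ruthless"}}
print(interpret("lawyers are sharks", context))
```

The key prediction of the serial model is visible in the code: figurative utterances always pass through the literal-meaning stage first, so they should take longer to understand.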

12
Q

(Grice, 1975; Searle, 1979).

A

Early pragmatic studies postulated an initial literal interpretation; only in the event of interpretation failure would this trigger a subsequent search for a figurative interpretation (Grice, 1975; Searle, 1979).
The standard model contends that a figurative interpretation is signaled by the failure to construct a plausible literal interpretation. According to this serial approach to figurative comprehension, listeners/readers first attempt to construct a literal interpretation for a figurative string, seeking a figurative interpretation only after the literal reading is found to be implausible.

(1): My lawyer charges for every phone call he makes.
(2): Lawyers are sharks.

diagram in notes

Three-stage model
  • Find the literal meaning
  • Is it sensible in the context?
  • If not, infer a figurative meaning
=> Always compute the literal meaning before the figurative meaning

How do we do this?

13
Q

Grice’s cooperative principle

A
  • People are able to communicate because they agree to cooperate
    • Follow the same set of rules in conversation
  • All speakers agree to cooperate
    • “Make your conversational contribution such as is required, at the stage at which it occurs, by the accepted purpose or direction of the talk exchange in which you are engaged.” (Grice, 1975, p. 45)
  • How do we begin to make sense of figurative language?

Grice’s Maxims for Cooperative Speakers
(1) Quantity:
Make your contribution as informative as is required
Do not make your contribution more informative than is required
(2) Quality:
Do not say that which you believe to be false
Do not say that for which you lack evidence
(3) Manner
Don’t be obscure
Don’t be ambiguous
Be brief
Be orderly
(4) Relevance
Be relevant
If people were required always to say only those things for which they had evidence, to avoid verbosity and obscurity, to stick to the topic, and to expound in an orderly way, silence would befall classrooms and locker rooms, but…

Where can I find a good list of family films?
(1) Quantity:
Go to a website where there is a good list of family films.
First learn how to type. Now buy a computer. Sit in front of the PC …
(2) Quality:
Go to www.naughtynurses.com
(3) Manner
There are many ways in which the internet is changing our lives…
(4) Relevance
Cooking cabbage is actually difficult.

14
Q

How did the maxims work?

A

“Lawyers are sharks”
  • The maxims appear to be broken (Quality and Relevance are violated)
  • But the speaker is being cooperative
  • So they must mean something else
When someone appears to violate the maxims, the receiver of the message assumes there must be a reason and tries to infer a nonliteral interpretation. So, when “lawyers are sharks” is uttered, the speaker breaks the Quality maxim – they know it is false, so why did they say it? They also break the Relevance maxim – why is it relevant to say that a lawyer is a shark? They must mean something metaphorical.
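As a toy illustration (my own sketch, not from the lecture), this maxim-violation inference can be caricatured in Python; the maxim checks are crude placeholders and every name below is an assumption:

```python
# Toy sketch of Gricean inference over an apparently maxim-violating utterance.

def violates_quality(utterance, speaker_beliefs):
    # Quality: the speaker says something they believe to be false.
    return speaker_beliefs.get(utterance) is False

def violates_relevance(utterance, topic):
    # Relevance: the literal claim has nothing to do with the topic at hand
    # (a deliberately crude string check, just for illustration).
    return topic not in utterance

def interpret(utterance, speaker_beliefs, topic, cooperative=True):
    broken = violates_quality(utterance, speaker_beliefs) or violates_relevance(utterance, topic)
    if broken and cooperative:
        # The speaker is assumed cooperative, so the apparent violation
        # signals a nonliteral (e.g., metaphorical) intended meaning.
        return "figurative reading, e.g. lawyers are aggressive and predatory"
    return "literal reading"

beliefs = {"lawyers are sharks": False}  # the speaker knows lawyers are not literally fish
print(interpret("lawyers are sharks", beliefs, topic="lawyers"))
```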

15
Q

Grice’s cooperative principle

A

diagram in notes

16
Q

Why should anyone violate the maxims?

A
  • Why not speak literally all the time?
    • Speed?
    • Politeness?
    • More powerful?
      ○ Speech is a slow channel for transmitting information: it is quicker for the listener to make an inference than for the speaker to spell the same thing out in speech.
      ○ It can be more powerful to speak in metaphors than to speak literally.
17
Q

Testing the 3 stage view

A

Are people faster to understand literal meaning than figurative meaning?

Computing the figurative meaning is an extra stage, so figurative meaning should take longer to comprehend.

(Grice, 1975; Searle, 1979)

18
Q

Gibbs (1979)

A
  • Reading time study
  • Indirect requests
    • “Must you open the window?”
  • Direct requests
    • “Please leave the window closed.”
  • Sentences presented either in isolation or at the end of a paragraph (in context)
19
Q

Results

A
  • Isolated
    • Longer reading times for indirect requests
  • Context
    • No difference between direct and indirect
      Evidence against 3-stage model
20
Q

Blasko and Connine (1993)

A
  • Cross-modal priming study
  • Heard sentences
    • “Indecision is a whirlpool”
  • Lexical decision on
    • Literal meaning: “water”
    • Metaphorical meaning: “confusion”
21
Q

Hypothesis

A
  • If the literal meaning is accessed first, there should be faster responses to the literal target

Results
  • Equally fast on literal and figurative responses
  • No evidence that figurative meanings were accessed more slowly than literal meanings
    No support for 3-stage model
22
Q

One-stage view

A
  • Lots of different one-stage views
    • Literal meaning is not computed before figurative meaning
    • Same processes involved in literal and figurative meaning
  • Alternative: one-stage view
    • Class-inclusion model (Glucksberg & Keysar, 1991)
23
Q

Atypical development

A
  • Children with autism have social interaction and communication difficulties
    • Including difficulties with figurative language
      ○ When someone says “Love is a journey”, why would they have said that?
24
Q

Two sorts of figurative language

A
  • Some figurative language involves understanding other perspectives
    • What did the other person mean when they said X?
    • Why would they have said that?
    • Metaphor
  • Other figurative language does not
    • Automatic, low-level language processing
    • Metonymy
25
Metonymy
* Use of the name of one thing to refer to another thing with which it is associated
  * Shakespeare is on the top shelf
  * Putin invaded Ukraine
  * The planes are on strike
  * A lot of Americans protested during Vietnam
26
Rundblad & Annaz (2010)
* Metaphor vs metonymy
* Typically developing children
* Autistic children
* Is the deficit in figurative language due to understanding other perspectives? Or a general problem with nonliteral meaning?
* (Flood and Robbie Williams examples in notes, with results)
27
Conclusion
* ASD group
  * General impairment on metaphor
  * Developmental delay on metonymy
* General deficit in figurative language
28
summary
* Figurative language
* Three-stage model
* Grice’s maxims
* One-stage model
* Psycholinguistic evidence
29
language production
* The investigation of production is perceived to be more difficult than the investigation of comprehension, primarily because it is difficult to control the input in experiments on production. It is relatively easy to control the frequency, imageability, and visual appearance (or any other aspect that is considered important) of the materials of word recognition experiments, but our thoughts are much harder to control experimentally.
* The processes of speech production fall into three broad areas called conceptualization, formulation, and execution (Levelt, 1989). At the highest level, the processes of conceptualization involve determining what to say. These are sometimes also called message-level processes. The processes of formulation involve translating this conceptual representation into a linguistic form. Finally, the processes of execution involve detailed phonetic and articulatory planning.
* During conceptualization, speakers conceive an intention and select relevant information from memory or the environment in preparation for the construction of the intended utterance. The product of conceptualization is a preverbal message. This is called the message level of representation.
* To some extent, the message level is the forgotten level of speech production. A problem with talking about intention and meaning, as Wittgenstein (1958) observed, is that they induce “a mental cramp.” Very little is known about the processes of conceptualization and the format of the message level. Obviously the message level involves interfacing with the world (particularly with other speakers), and with semantic memory.
* The start of the production process must have a great deal in common with the end point of the comprehension process. When we talk, we have an intention to achieve something with our language.
* How do we decide on the illocutionary force of what we want to say? Levelt (1989) distinguished between macroplanning and microplanning conceptualization processes.
  * Macroplanning involves the elaboration of a communicative goal into a series of subgoals and the retrieval of appropriate information.
  * Microplanning involves assigning the right propositional shape to these chunks of information, and deciding on matters such as what the topic or focus of the utterance will be.
* There are two major components of formulation: we have to select the individual words that we want to say (lexicalization), and we have to put them together to form a sentence (syntactic planning). It might not always be necessary to construct a syntactic representation of a sentence in order to derive its meaning, but clearly this is not an option when speaking. Given this, it is perhaps surprising that more attention has not been paid to syntactic encoding in production, but the difficulties of controlling the input are substantial.
* Finally, the processes of phonological encoding involve turning words into sounds in the right order, spoken at the correct speed, with the appropriate prosody (intonation, pitch, loudness, and rhythm). The sounds must be produced in the correct sequence and specify how the muscles of the articulatory system should be moved.
* What types of evidence have been used to study production? First, researchers have analyzed transcripts of how speakers choose what to say and how to say it (Beattie, 1983). For example, Brennan and Clark (1996) found that speakers cooperate in conversation so that they come to agree on the same names for objects.
* Computer simulations and connectionist modeling, as in other areas of psycholinguistics, have become very influential. Much has been learned by the analysis of the distribution of hesitations or pauses in speech. Until fairly recently the most influential data were spontaneously occurring speech errors, or slips of the tongue, but in recent years experimental studies, often based on picture naming, have become important.
30
Slips of the tongue
Historical Background:
* Early models of speech production were based on naturally occurring errors. Speech errors remain a core topic in psycholinguistics.
Everyday Examples:
* Spoonerisms (named after Dr. Spooner): swap the initial sounds of words (e.g., “You have hissed all my mystery lectures” instead of “missed all my history lectures”).
* Freudian slips: Freud suggested speech errors may reveal repressed thoughts (e.g., replacing “experiments” with “temptations”).
Freud & Alternatives:
* Freud wasn’t first; Meringer & Mayer (1895) provided a traditional, structural analysis.
* Ellis (1980) reinterpreted Freud’s examples in line with modern speech production models.
Research Methods:
* Naturalistic approach: collect large corpora, interrupt speakers to understand errors. Despite possible observer bias, the data are consistent with recorded conversations.
* Induced errors (e.g., fast reading tasks) mirror natural slips (Baars et al., 1975).
Types of Speech Errors:
* Errors can involve different linguistic levels: phoneme, syllable, morpheme, word, phrase, sentence.
* Error types include blends, substitutions, deletions, additions.
Psychological Reality:
* Fromkin (1971/1973): consistent error patterns suggest linguistic units are psychologically real.
* Each error reflects a target utterance and the actual spoken error (often analyzed in italics).
31
What can speech errors tell us?
* Let us now analyze a speech error in more detail to see what can be learned from them. Consider the famous example of (4) from Fromkin (1971/1973):
  (4) a weekend for MANIACS — a maniac for WEEKENDS
* The capital letters indicate the primary stress and the italics secondary stress.
* The first thing to notice is that the sentence stress was left unchanged by the error, suggesting that stress is generated independently of the particular words involved.
* Even more strikingly, the plural morpheme “-s” was left at the end of the second word where it was originally intended to be in the first place: it did not move with “maniac.” We say it was stranded.
* Furthermore, this plural morpheme was realized in sound as /z/, not as /s/. That is, the plural ending sounds consistent with the word that actually came before it, not with the word that was originally intended to come before it. (Plural endings are voiced “/z/” if the final consonant of the word to which they are attached is voiced, as in “weekend,” but are unvoiced “/s/” if the final consonant is unvoiced, as in “maniac.”)
* This is an example of accommodation to the phonological environment. Such examples tell us a great deal about speech production.
* Garrett’s model, described next, is based on a detailed analysis of such examples.
* On the other hand, Levelt et al. (1991a) argued that too much emphasis has been placed on errors, and that error analysis needs to be supported by experimental data. If these two approaches give conflicting results, we should place more emphasis on the experimental data, as the error data are only telling us about aberrant processing.
* There are three points that can be made in response to this. First, a complete model should be able to account for both experimental and speech error data. Second, the lines of evidence converge rather than giving conflicting results (Harley, 1993a). Third, it is possible to simulate spontaneously occurring speech errors experimentally, and these experimental simulations lead to the same conclusion as the natural errors.
* Using a technique they called SLIP, Baars et al. (1975) required participants to rapidly read pairs of words such as “big dog,” “blocked drain,” and then “dart board.” If participants have to read these pairs from right to left, the priming effect of the preceding pairs leads them to make many spoonerisms on “dart board.”
* Furthermore, participants are more likely to produce “barn door” (two real words) than the corresponding “bart doard” – an instance of the bias towards lexical outcomes also displayed in the naturalistic data.
* On the other hand, using the same technique, speakers are less likely to make exchanges that result in taboo words (e.g., from “hit shed”; work it out) than ones that do not. Furthermore, galvanic skin responses were elevated on these taboo trials, suggesting that speakers generated the spoonerism internally, but are in some way monitoring their output (Motley, Camden, & Baars, 1982).
* We should note that we sometimes correct our speech errors, which shows that we are monitoring our speech. Sometimes we notice the error before we speak it and can prevent it from being made; sometimes we notice the error as we are speaking and can correct, or repair, it; sometimes we notice it only after we have finished speaking. Often we never notice we have made an error. The idea of a monitor plays an important role in the WEAVER++ model of speech production, discussed below.
* Naming errors probably do not arise from people rushing their preparation or from a failure to check names against objects. Griffin (2004) examined people’s eye movements while they described a visual scene. People tend to gaze at objects while they are preparing their names. If errors arise from rushed preparation, they should spend less time looking at an object just before naming it incorrectly (e.g., saying “hammer” when looking at an axe); however, they do not. Instead they spend just as long gazing at a referent before uttering errors as they do before uttering correct names. Indeed, if they corrected their utterance (“ham – axe”), they spent longer looking at the object after making their error, presumably because they were preparing their repair.
32
Garrett's model of speech production
Overview:
* Garrett (1975–1992) proposed a serial, multi-level model of speech production based on speech error analysis.
* Processing occurs in discrete stages – only one operation occurs at each level, but levels operate in parallel (e.g., speaking while planning ahead). The levels don’t interact directly.
Stages in Garrett’s Model:
* Message Level (A): the initial intention and overall message plan. Little research on this stage; often depicted as a “thought bubble.”
* Functional Level (B): content words (nouns, verbs, adjectives, adverbs) are selected and assigned semantic roles (e.g., subject, object). Word exchanges occur here and are constrained by syntactic category, not position.
* Positional Level (C, D, E):
  * C: a syntactic frame is built; function words (e.g., the, and, is) are inherent in the frame.
  * D: phonological forms of content words are retrieved from the lexicon.
  * E: words are inserted into syntactic positions; absolute word order is specified.
* Sound-Level Representation (F): function words are now phonologically specified. Sound exchanges (e.g., phoneme slips) occur here; these are position-dependent.
* Articulatory Level (G): the final phonological plan is converted into motor instructions for speech articulation.
Key Distinctions:
* Content vs function words:
  * Content words: semantic focus, selected earlier (functional level).
  * Function words: grammatical role, selected later (positional level).
* Error types:
  * Word exchanges → functional level.
  * Sound/phoneme exchanges → positional/sound level.
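Purely to illustrate the serial, non-interacting ordering of these levels, here is a toy Python sketch; the data structures, function names, and the example sentence are illustrative assumptions rather than Garrett’s own notation:

```python
# Toy pipeline mimicking the ordering of Garrett's levels
# (message -> functional -> positional -> sound).

def message_level(intention):
    # Message level: a prelinguistic representation of what to say.
    return {"event": "feed", "agent": "John", "patient": "dog"}

def functional_level(message):
    # Functional level: content words are selected and assigned roles.
    # Word exchanges arise here and respect syntactic category.
    return [(message["agent"], "agent"), (message["event"], "verb"), (message["patient"], "patient")]

def positional_level(content_words):
    # Positional level: a syntactic frame with function words built in;
    # content words are slotted into positions, fixing absolute word order.
    agent = next(w for w, r in content_words if r == "agent")
    verb = next(w for w, r in content_words if r == "verb")
    patient = next(w for w, r in content_words if r == "patient")
    return [agent, verb + "+PAST", "the", patient]

def sound_level(slots):
    # Sound level: phonological forms are specified; sound exchanges arise here
    # and are constrained by distance rather than syntactic class.
    forms = {"John": "dʒɒn", "feed+PAST": "fɛd", "the": "ðə", "dog": "dɒɡ"}
    return [forms.get(s, s) for s in slots]

utterance = sound_level(positional_level(functional_level(message_level("FEED(JOHN, DOG)"))))
print(" ".join(utterance))  # -> dʒɒn fɛd ðə dɒɡ
```

Each function only consumes the output of the level above it, which is the sense in which the levels are discrete and non-interacting in Garrett’s account.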
33
Evidence for Garrett's model of speech production
* In morpheme exchanges such as (4), it is clear that the root or stem morpheme (“maniac”) has been accessed independently of its plural affix – in this case the plural ending “-s.” (In English, affixes are either prefixes, which come before a word, or suffixes, which come after, and are always bound morphemes, in that they cannot occur without a stem; morphemes that can be found as words by themselves are called free morphemes. Bound morphemes can be either derivational or inflectional – see Chapter 1.)
* Because the bound morpheme has been left in its original place while the free morpheme has moved, this type of exchange is called morpheme stranding. Content words behave differently from the grammatical elements, which include inflectional bound morphemes and function words. This suggests that they are involved in different processing stages.
* In (4) the plural suffix was produced correctly for the sentence as it was actually uttered, not as it was planned. This accommodation to the phonological environment suggests that the phonological specification of grammatical elements occurs rather late in speech production, at least after the phonological forms of content words have been retrieved.
* This dissociation between specifying the sounds of content words and specifying the grammatical elements is of fundamental importance in the theory of speech production, and is an issue that will recur in our discussions of its pathology.
* Furthermore, in word exchange errors, the sentence stress is left unchanged, suggesting that this is specified independently of the content words. Error analysis suggests that when we speak we specify a syntactic plan or frame for a sentence that consists of a series of slots into which content words are inserted. Word exchanges occur when content words are put into the wrong slot.
* Grammatical elements are part of the syntactic frame, but their detailed phonological forms must be specified late. This model predicts that when parts of a sentence interact to produce a speech error, they must be elements of the same processing vocabulary. That is, things only exchange if they are involved in the same processing level. Therefore certain types of error should never be found.
* Garrett observed that content words almost always exchange only with other content words, and that function words exchange with other function words. This is an extraordinarily robust finding: in Harley’s corpus of several thousand speech errors, there is not a single instance of a content word exchanging with a function word. This supports the idea that content and function words are from computationally distinct vocabularies that are processed at different levels.
* There are also different constraints on word and sound exchange errors. Sounds only exchange across small distances, whereas words can exchange across phrases; words that exchange tend to come from the same syntactic class, whereas this is not a consideration in sound errors, which occur regardless of the words’ syntactic class. In summary, word exchange errors involve content words and are constrained by syntactic factors; sound errors are constrained by distance.
34
Evaluation of Garrett's model
Morpheme Exchanges & Stranding:
* In errors like “a weekend for maniacs” → “a maniac for weekends,” the stem morpheme (free) and the plural suffix “-s” (bound) are processed separately. The bound morpheme stays in place while the free morpheme moves → morpheme stranding.
* Suggests content words and grammatical elements (inflectional morphemes, function words) are processed at different stages.
Timing of Phonological Specification:
* In exchanges, suffixes adapt to the spoken form, not the planned one → phonological forms of grammatical elements are specified late, after retrieving content word forms.
Syntactic Frames and Word Insertion:
* Sentence planning involves building a syntactic frame with slots → content words are inserted into these.
* Word exchange errors occur when content words are inserted into incorrect slots.
* Stress remains intact during word swaps → stress is planned independently of specific content words.
* Function words and grammatical elements are part of the frame, and their phonological form is added later.
Constraints on Errors:
* Exchange errors happen within the same processing category: content words ↔ content words; function words ↔ function words. No cross-type exchanges (a highly robust finding).
* Sound errors are constrained by distance; word errors by syntactic class.
Evaluation and Challenges to Garrett’s Model – Evidence Against Strict Serial Processing:
* Word blends (e.g., “valify” = validate + verify): indicate simultaneous retrieval of multiple words → supports parallel processing.
* Phrase/sentence blends (e.g., “I’m making the kettle on”): blending occurs where phrases sound alike → supports phonological-level crossover.
* Cognitive intrusions:
  * Nonplan-internal errors: message-level concepts intrude (e.g., “I’ve eaten all my library books”). Often phonologically facilitated (sound similarity helps the error).
  * Environmental contamination (e.g., saying “clark” while seeing the word “Clark’s”).
* Conclusion: intrusions suggest message-level content influences lower levels; speech production involves interactivity, not just discrete serial stages.
Interactive Processing Evidence:
* Word availability affects syntactic structure (Bock, 1982) → further support for interaction between levels.
* Word substitution errors and phonological similarity constraints also support interactive lexical access.
* The content vs function word distinction may reflect word frequency, not separate systems (Stemberger, 1985).
Support for Garrett Despite Criticism:
* Bound morphemes behave like function words → supports Garrett’s two-stage syntactic planning.
* Some neuropsychological evidence aligns with separate processing stages.
35
syntactic planning
* Garrett’s model tells us a great deal about the relative stages of syntactic planning, but says little about the syntactic processes themselves.
* Bock and her colleagues examined these in an elegant series of experiments based on a technique of seeing whether participants can be biased to produce particular constructions. An important finding is that word order in speech is determined by a number of factors that interact (Bock, 1982).
* For example, animate nouns tend to be the subjects of transitive sentences (McDonald, Bock, & Kelly, 1993), and conceptually more accessible items (e.g., as measured by concreteness) tend to be placed early in sentences (Bock, 1987; Bock & Warren, 1985; Kelly, Bock, & Keil, 1986).
* In general, these experiments show that the grammatical role assignment component of syntactic planning is controlled by semantic-conceptual factors rather than by properties of words such as word length.
* Speakers also construct sentences so that they provide “given” before “new” information (Bock & Irwin, 1980).
* Generally, ease of lexical access can affect syntactic planning.
* Studies of eye movements in the visual world paradigm (see also Chapters 10 and 14) tell us something about how people formulate descriptions of visual scenes. Speakers gaze at referents in the visual scene as they prepare words to refer to them (Griffin, 2001; Meyer, Sleiderink, & Levelt, 1998).
* They also gaze at the referents of direct-object nouns while producing the subject; if they are uncertain which argument to produce immediately after the verb, their gaze moves between the alternative referents (Griffin & Bock, 2000). Gaze is a reliable indicator of what and when people are thinking and planning. Indeed, as is often said, the eyes can give us away; speakers will look at the intended referent of an object even if they are preparing to “lie” by giving an intentionally inaccurate label for it (Griffin & Oppenheimer, 2006).
36
language production and dialogue
Language production as a self-contained process
* For any competent speaker, language production seems a straightforward process. For instance, when holding a conversation you are rarely aware of encountering any difficulty in formulating your utterances.
* However, the apparent ease of language production in informal settings, such as during a conversation, disguises the fact that it is a complex multi-stage process. The complexity is more apparent when writing.
* For instance, imagine that you have to write a report on this chapter of the book. Suddenly language production becomes difficult. You may have trouble finding the right words to express yourself, or have problems organizing the report into a readily understandable document; you might even have trouble producing strictly grammatical sentences.
* Here we introduce the language production process in both these situations. First, we consider production as a self-contained process, as when you have to produce something like that report. Then, we consider why it seems to become more straightforward in the informal setting of a dialogue.
37
Overall architecture of the language production system
* Much of what is known about language production has come from the study of speech errors. So first we consider what speech errors can tell us about the overall organization of the language production process, and then we consider in more detail recent work on two particular topics – how speakers design their utterances for particular listeners and how speakers monitor their spoken output.
38
speech errors
Speech Errors: Frequency and Insights
* Speech errors are rare (≈1 in 2,000 utterances), but highly informative.
* Exchange errors reveal strong category constraints: nouns ↔ nouns, verbs ↔ verbs; consonants ↔ consonants, vowels ↔ vowels.
* Suggests distinct stages: choosing a word/phoneme is separate from deciding its position in the sentence.
Morpheme Stranding: Evidence for Multiple Stages
* Example: intended “The dome doesn’t have any windows” → output “The window doesn’t have any domes.” Plural markers (number features) are assigned after the word is positioned.
* Another example: “If that was done to me” → “If I was done to that” (not “me was done”). Grammatical case (I vs me) is assigned after syntactic placement.
Model of Speech Production (Bock, 1996; Levelt, 1989) – Three Core Stages:
* Message formulation (pre-linguistic)
* Grammatical encoding (lexical + syntactic structure)
* Phonological encoding (sound sequence generation)
Grammatical vs Phonological Encoding: Key Differences
* Grammatical encoding: lexical selection + grammatical role assignment; concerned with words and syntactic roles; exchanges span phrases; affected by word category.
* Phonological encoding: converting structure to sound; concerned with phonemes, syllables, stress; exchanges span adjacent words; not constrained by grammatical class.
Error Evidence Supporting the Split
* Types of errors (anticipation, preservation, exchange) occur at the lexical or phoneme level – not at the morphemic or phonetic feature level.
* Lexical exchanges are syntactically constrained; phoneme exchanges are not.
* Grammatical processing has global scope; phonological processing is local.
Subcomponents Within the Two Main Systems
* Grammatical encoding includes:
  * Lexical selection: choosing abstract word representations
  * Semantic role assignment: e.g., agent/patient roles
  * Word form retrieval: getting specific word forms (e.g., windows)
  * Constituent assembly: building grammatical sentence structure
  * Evidence: in “The window has several domes,” the abstract word window is selected, but the plural form windows is retrieved later – leading to errors if mismatched.
* Phonological encoding includes:
  * Phoneme sequencing
  * Syllable formation
  * Stress assignment
Summary of Key Points
* Speech errors expose underlying stages of speech production.
* There is a clear division between grammatical and phonological encoding.
* These systems operate independently, over different linguistic units and domains.
* Grammatical encoding deals with structure and meaning; phonological encoding handles sound realization.
39
message selection and audience design
Audience Design & Common Ground in Language Production
1. Audience Design: Going Beyond Translation
* Language production isn’t just translating ideas into speech – it involves tailoring messages for a specific listener.
* Audience design: the speaker adapts language based on assumptions about what the listener knows.
2. Common Ground: Shared Knowledge
* Common ground = shared knowledge that both parties know they share (Clark, 1996). It determines how utterances are interpreted.
* Successful communication relies on properly establishing common ground.
* Example (art gallery): saying “It’s great, isn’t it?” works only if both people know they’re looking at the same painting and know the other knows that. Barriers to shared attention (e.g., an obscured view) → the statement becomes ambiguous or meaningless.
3. Empirical Support: Isaacs & Clark (1987)
* Referential communication task with images of NYC buildings.
* Participants were either both New Yorkers, one New Yorker, or neither New Yorkers.
* Findings:
  * 85% figured out the common ground category after just 2 descriptions.
  * New Yorkers to New Yorkers: used brief names (“Chrysler building”).
  * New Yorkers to non-New Yorkers: gave descriptive features of the image.
  * Speakers adapted utterances based on assumed shared knowledge.
4. Challenges & Limits of Audience Design
* Controversy: do speakers always use common ground?
* Horton & Keysar (1996): under time pressure, speakers relied more on their own view than on common ground.
* Keysar et al. (1998): listeners, too, initially attend to objects outside common ground.
5. Flexibility in Use of Common Ground
* Without time pressure: speakers can adjust their speech based on common ground; listeners can integrate common ground during later stages of comprehension.
* With time pressure: audience design deteriorates; the first process to suffer is the assessment of what the listener knows.
Summary of Key Points
* Audience design: speakers adapt speech to the listener’s knowledge.
* Common ground: shared knowledge both parties recognize as shared.
* Real-world application: communication depends on perspective-taking and mutual knowledge.
* Experimental insight: speakers adjust detail based on shared context (e.g., the New Yorkers study).
* Limitations: under pressure, speakers/listeners don’t always use common ground.