Lecture 10 Bilingualism and Gestures Flashcards
(22 cards)
Bilingualism
■ Early Bilinguals: someone exposed to both languages as a child.
❑ Simultaneous bilinguals (e.g., parents speak different languages)
❑ Early sequential bilinguals (e.g., moved to a different place at around age 5)
■ Late learner: someone who learns a second language later in life, after having mastered their mother tongue. (different set of challenges)
■ Balanced bilinguals: someone who feels equally competent in both of their languages. (complete balance is rare)
■ Multilingual: Someone who speaks more than 2 languages.
■ L1: The first language; your mother tongue. Often but not necessarily your dominant language.
❑ Dominance: the language most used during a certain period of your life becomes the more automatic and preferred language for that period. Dominance is transient.
■ L2: The second language
■ People who speak more than one language have the special added difficulty of keeping their two systems of knowledge separate so that they don’t…
❑ Create ungrammatical sentences using the syntax of one language with the words of the other
❑ Mix words from the two languages in one sentence unintentionally.
■ Important questions regarding bilingualism are:
❑ How are the two systems related in the mind?
❑ How are the two systems kept separate?
! For speaking
! For listening / reading
❑ How do dialects compare to languages?
Translating between languages (late learners)
-the second language is learned or built late, after the L1
■ How do words of a second language gain access to meaning?
■ Do words of the second language have direct access to this conceptual knowledge?
■ Or do the words of the second language only have mediated access via lexical associations with the L1? (i.e., do you translate via the L1?)
■ It is generally assumed that there is only one conceptual store where meaning is represented.
■ If we need access to a word's meaning in order to translate it, then that is called Concept Mediation (the same kind of relationship between lemmas and concepts as in your first language; meaning comes directly from the conceptual store)
■ If we do not need a word’s meaning to translate it into another language, then that is translation by Word Association (build on the back of the first language)
How can we test this?
-Methods and logic
■ How can we tell if L2 speakers have direct conceptual access?
-tested using translation task
■ Studies compare translation tasks from L1 to L2 to picture naming in L2.
❑ In L1, word reading is ~250 ms faster than picture naming. This extra time is attributed to concept access.
■ According to the word association model, translation should be faster than picture naming because it does not require conceptual access.
■ According to the concept mediation model, both tasks should take the same time because they both require access to concepts.
■ Potter et al (1984) tested highly fluent Chinese-English bilinguals and found no
difference in the two tasks.
➯ Supports Concept mediation.
-a translation task on one hand and a picture naming task on the other
-compare naming times and draw inferences (reading words is a faster process than naming pictures)
Concept mediation
- Picture naming, 3 steps: recognise the picture, access the meaning, access the word
- Translation, 3 steps: read the word in language 1, access its meaning (e.g., the concept CAT), get the word in language 2
Word association model
- Picture naming, 4 steps: recognise the picture, access the meaning, find the word in language 1, then follow the association to the word in language 2
- Translation, 2 steps: recognise the word in language 1 and get its associate in language 2
- Not supported by Potter et al.: they found no difference, which supports the concept mediation view
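To make the two models' predictions concrete, here is a minimal sketch in Python. The step breakdown follows the cards above; all millisecond values are hypothetical illustrations, not data from Potter et al. (1984).

```python
# Toy latency model for L2 picture naming vs. L1 -> L2 translation.
# Following the slides' simplification: recognising a picture and reading a
# word take about the same time, and the ~250 ms cost reflects concept access.
RECOGNISE_INPUT = 150    # recognise the picture or read the L1 word (hypothetical)
ACCESS_CONCEPT = 250     # conceptual access (the extra cost in picture naming)
RETRIEVE_WORD = 200      # retrieve a word form (hypothetical)
L1_L2_ASSOCIATION = 150  # direct word-to-word association lookup (hypothetical)

def concept_mediation():
    # Both tasks go through the concept, so they should take about the same time.
    picture_naming = RECOGNISE_INPUT + ACCESS_CONCEPT + RETRIEVE_WORD  # picture -> concept -> L2 word
    translation = RECOGNISE_INPUT + ACCESS_CONCEPT + RETRIEVE_WORD     # L1 word -> concept -> L2 word
    return picture_naming, translation

def word_association():
    # Translation bypasses the concept, so it should be faster than picture naming.
    picture_naming = RECOGNISE_INPUT + ACCESS_CONCEPT + RETRIEVE_WORD + L1_L2_ASSOCIATION
    translation = RECOGNISE_INPUT + L1_L2_ASSOCIATION
    return picture_naming, translation

for model in (concept_mediation, word_association):
    naming_ms, translation_ms = model()
    print(f"{model.__name__}: picture naming {naming_ms} ms, translation {translation_ms} ms")
```

With these made-up numbers, concept mediation predicts equal times (600 ms vs 600 ms), while word association predicts much faster translation than picture naming (300 ms vs 750 ms); that contrast is what the studies test.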
Relation to fluency?
-Potter et al. only tested people who were highly fluent; does the pattern depend on fluency?
■ Kroll & Curley (1988) tested late learners with different amounts of experience.
■ Late learners who had been studying the second language for less than 2 years were faster for translation than picture naming. (consistent with word association model)
■ Late learners who had been studying the second language for more than 2 years showed no difference between translation and picture naming.
➯ Supports a developmental hypothesis which says that early on during acquisition you build associations between words of the two languages, but as you build competence you establish direct connections between words of the L2 and your conceptual system
-how you access the meaning of second-language words depends on your level of proficiency
Developmental Hypothesis
Kroll and Curley
■ Low proficient speakers mediate via L1
■ As they grow in proficiency, they build direct links to Concepts and don’t rely on L1 as much.
-starts out by scaffolding the L2 on the L1; then, as you grow proficient, you build direct links to concepts and don't rely on the L1 as much
-learning a language after the critical period is a very different process: it depends in the first instance on the L1, and only over time does the L2 build its own status and independence, with its own direct links to concepts
How are the two systems related? (early bilinguals)
■ How do they avoid massive interference between languages?
■ How do bilinguals prevent massive interference from the language not in use?
■ It is assumed that the semantic system is shared by the two languages of a bilingual. The meanings of ‘Cat’ and ‘Gatto’ are the same.
- you want to say 'cat', but both languages have a word for cat, so how do you resolve the situation?
- catastrophic interference: so much interference that the conversation breaks down
- one way to find out how we avoid catastrophic interference is to ask whether activation is language-selective or non-selective
-> one easy way to think about it: just shut the door on the language not in use, so activation only spreads to the selected language; if we could do this, the problem would be solved (selective activation)
-> the other alternative is that activation spreads to both languages; then we have to deal with potential interference from two words that are equally good at expressing what we want to say, and that is what has to be overcome (non-selective activation)
-the evidence does not support selective activation
Evidence for non-selective activation
Is non-target language active when speaking?
■ Costa et al, 2000 found evidence that both linguistic systems are automatically active during normal language production.
■ They compared picture naming times for Cognates vs Non-Cognates by bilinguals and monolinguals.
❑ Cognates are words that are similar in meaning and sound in two languages.
! Cat = Katze, Bread = Brot, Hair = Haar
- monolinguals and bilinguals named pictures in just one language, but some of the pictures to be named had cognate names in the other language
- they reasoned that if activation is non-selective and spreads to both languages, and lemmas pass activation down to the phonological level, then for cognates both lemmas will activate the same sounds (see the toy sketch after this card)
- so bilinguals should be faster to name pictures with cognate names (because the same sounds are being activated twice)
■ The shared sounds will be easier to select than when they aren’t receiving activation from two systems (as in the monolingual case).
-for monolinguals naming pictures, there is no cognate/noncognate difference
■ They found that bilinguals, but not monolinguals, named pictures whose names are similar in both languages FASTER than noncognates.
➱ Suggests that both linguistic systems are active and pass their activation on to the phonological level, speeding activation.
- because two sources of activation are arriving, one from each language
- if we could shut the door on one language, monolinguals and bilinguals would look the same; they don't, so the other language can't be turned off
-> the cognate facilitation effect
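A minimal sketch of this reasoning as a toy spreading-activation model in Python; the activation values and the latency rule are illustrative assumptions, not the actual model from Costa et al. (2000). The point is only that a cognate's phonemes receive activation from two lemmas in bilinguals, so they are easier to select.

```python
# Toy spreading-activation sketch of the cognate facilitation logic.
LEMMA_ACTIVATION = 1.0       # activation sent by the target-language lemma (made up)
NON_TARGET_ACTIVATION = 0.6  # activation leaking from the other language's lemma (made up)

def phoneme_activation(is_bilingual: bool, is_cognate: bool) -> float:
    """Total activation reaching the picture name's phonemes."""
    activation = LEMMA_ACTIVATION
    # Only bilinguals have a second, non-target lemma, and only cognates
    # share their phonemes with it.
    if is_bilingual and is_cognate:
        activation += NON_TARGET_ACTIVATION
    return activation

def naming_latency(activation: float) -> float:
    """Assume naming latency is inversely related to phoneme activation (arbitrary scale)."""
    return 1000 / activation

for speaker in ("monolingual", "bilingual"):
    for word_type in ("cognate", "noncognate"):
        act = phoneme_activation(speaker == "bilingual", word_type == "cognate")
        print(f"{speaker:12s} {word_type:11s} -> {naming_latency(act):.0f} (toy latency units)")
```

The predicted pattern matches the finding: no cognate effect for monolinguals, faster naming of cognates for bilinguals.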
Is non-target language active when reading?
Another study
■ Dijkstra et al. (1998) conducted a visual lexical decision study using interlingual homographs (words that are written the same but mean different things in two languages) -> these arise by coincidence
❑ “coin” = corner in French
❑ “room” = cream in Dutch
❑ "brand" = fire in Dutch, etc.
■ Homographs were responded to more SLOWLY than nonhomographs, suggesting that both meanings (for both languages) were active.
❑ Is ‘Coin’ a word of French? The fact that it is also a word in English makes you unsure. (marks self doubt)
■ But if both languages are relevant to the response, homographs are responded to FASTER
-both meanings/languages are active
Is non-target language active when reading? Another study (study 3)
-visual-world paradigm
■ Marian & Spivey (2003) conducted an eyetracking study to test for within- and between-language competition effects.
■ "Click on the plug" (the Russian word for dress, 'plat'ye', starts with similar sounds to 'plug')
-when you hear the first sounds, you direct attention to pictures whose names start with those sounds
- English monolinguals hearing 'pl...' look at the plug and the plum and ignore the dress, but Russian-English bilinguals, even though everything is in English, look at the dress as well as the plug
- they are still activating the Russian lexicon
- a third example showing that we can't shut the door on the non-target language
- so we have to find a way to deal with the interference
■ When hearing their L2 (English) both within- and between language competition was observed, showing that their L1 could not be deactivated and was affecting L2 processing.
■ But, when hearing their L1 (Russian), only within-language competition was observed.
❑ Suggests that L2 did not affect L1 processing.
❑ Suggests language dominance affects direction and extent of potential influences of non-target language in listening.
Separation
If both lexica are active, how do bilinguals prevent massive interference from the language not in use?
Hypothesis 1:
■ Language non-specific: Both systems are active and we then inhibit the language we don’t want to use.
❑ How successful we are at inhibiting a system depends on how dominant that language is. L1 harder to suppress than L2 (unless you haven’t used L1 in ages, in which case L2 hard to suppress).
❑ Competition effects still expected…to the extent that the non-target language was not successfully inhibited.
Hypothesis 2:
■ Language-specific: Both systems are active, but we ignore the language we don't want. No competition between languages. The selection mechanism knows which language you are intending to speak and only considers competitors from that system.
Study looking at this
Is non-target language competing?
Hermans et al., 1998
Mountain (Berg in Dutch)
■ Picture word interference study
❑ Naming in L2
❑ Distractors presented auditorily in L2, related to the picture name in four conditions:
■ Phonological: Mouth
■ Semantic: Valley
■ Phonological L1: Bench (related to berg)
■ Unrelated: Table
■ "Bench" should only have an effect on naming "mountain" if the non-target language is active and competing
■ Hermans et al. find an interference effect from “bench”. Naming “mountain” was slower when speakers heard “bench” compared to an unrelated word.
■ Supports activation of non-target linguistic system.
➱ The distractor “bench” made the alternative word “Berg” more active and so it was a stronger competitor, which slowed naming.
(can’t ignore all of activation)
-'bench' primes the Dutch word 'berg', which competes with 'mountain'
-evidence for direct competition between the languages
-so the non-target language must have to be inhibited: some kind of language-level inhibition process
What about dialects?
-not a full-blown language
■ "A regional or social variety of a language distinguished by pronunciation, grammar, or vocabulary, especially a variety of speech differing from the standard literary language or speech pattern of the culture in which it exists" (dictionary.com)
-pronunciation, grammar, vocab and syntax different
■ Scots speak a distinct dialect of English
❑ Multiple Scottish dialects
■ Americans speak a distinct dialect of English
■ Are dialects processed like languages?
-where do different dialects fit? do they involve the same processes as bilingualism?
-e.g., people can switch between dialects depending on who they are speaking to
Potential Parallels between bilinguals and dialects
■ Bilinguals have two linguistic systems that they must control and coordinate.
❑ Have distinct linguistic systems
❑ Consequences of error are steep.
❑ Switch between languages facilely
❑ Make few between language errors
❑ Associated cognitive advantages and disadvantages
■ Bidialectal speakers also must control dialect selection
? Have distinct linguistic systems
? Consequences of error are arguably less steep
? can switch facilely between them
? Between-dialect errors
? Advantages or disadvantages
X-language facilitation
-experiment
Costa et al., 1999
-mesa, table
- the distractor names the picture, but in the language not being spoken
- this should be the worst case of interference, but it actually produces facilitation
-> gave evidence for ignoring the non-target language (language-specific selection)
-the lecturer ran the same design with dialects
■ Pictures presented in 4 distractor conditions
❑ English Identity: Trousers
❑ Scottish Identity: Breeks
❑ English Unrelated: Chimney
❑ Scottish Unrelated: Lum
Scottish translation slows picture naming in English
➱ Not what is seen in bilinguals (where the translation distractor produces facilitation)
-looks like the dialect is treated as a subsection of English, not as a separate language
■ Bilinguals can switch between their respective languages.
❑ Can bidialectal speakers switch between dialects?
■ Evidence suggests different control mechanisms used by early and late bilinguals.
❑ How do bidialectals control language selection?
■ Evidence for an EF (executive function) benefit for bilinguals.
❑ Is there a comparable EF benefit for bidialectals?
■ As listeners we adapt to characteristics of the speaker, such as foreign accents.
❑ Are we sensitive to dialectal characteristics?
■ By answering these (and other) questions about dialectal language use, I hope to:
❑ Clarify the performance traits of bidialectals
❑ Address theoretical questions about lexical organization and access in bilinguals and monolinguals
❑ Speak to the distinction between languages and dialects
❑ Identify cognitive benefits of bidialectalism
❑ Feed into the debate surrounding exemplar-based models of language representations
Gestures are everywhere
■ When you speak, you gesture.
❑ Cross-cultural phenomenon (all cultures gesture, just with different gestures)
■ Temporally and semantically coordinated with speech (there is both a semantic and a temporal relation between what you say and your gestures)
❑ Two aspects of a single process
■ We don’t always gesture and some people gesture more than others
-children gesture before they speak
■ Questions
❑ What is the purpose of gestures?
❑ Why do we produce gestures? And when?
❑ What is the nature of the representation that creates gestures?
❑ What can we learn from gestures?
❑ Why do some people use gestures more than others?
Types of gestures
■ Gestures can be classified in various ways, depending on their function and characteristics.
■ Beat gestures: rhythmic movements of the arm, finger, or head. Often linked to emphasis or the rhythm of speech. (not linked to what is said but to how it is said)
■ Emblems: conventionalized gestures with a recognized meaning and a fixed form (not iconic, differ across cultures, the most word-like gestures)
■ Representational gestures: some visuo-spatial (iconic) relationship to what is being said, but not conventional (open to interpretation, not word-like, no agreed way to perform them, semantically related to what is said, can be abstract, often linked not to the surface meaning but to the underlying meaning)
❑ Deictic gestures: pointing gestures, to present or abstract referents
❑ Iconic gestures: various other sorts of semantically related gesture. Can be abstract or more ‘transparent’
Why we produce representational gestures
two different views of function
■ Gesture is a medium for communication
❑ Gestures are produced intentionally by speakers as part of the “message”. (adds value)
❑ Gestures convey information which is then considered common ground between speakers. (shared understanding)
❑ This shared information influences subsequent verbal content.
❑ Listeners pay attention to gestures and take up info from them.
-maybe gestures are a communicative medium, but that may not be the primary process; something else may be going on
■ Gesture helps the speaker formulate speech. (cognitive processing )
❑ Gesture helps map information from imagistic representations to linguistic representations.
❑ They help the speaker think through how to say what they want to say.
❑ They reflect alternative ways of thinking about something.
❑ Gestures help keep visual imagistic information active in memory.
Representation Gestures
■ These are the ones we’ll focus on.
■ They hold a special relationship to the accompanying speech.
■ Representational gestures can expand on the content of speech
❑ Add colour, flavour, emotion, or supplementary info not expressed in speech
■ They can be redundant with speech.
■ Lots of info in gestures is not in speech. How do you know which bits of info were intended for the listener?
-things that have physical substance can be gestured about more easily than things that don't
Communicative Function Evidence?
■ We gesture more in face-to-face exchanges than when we can’t see our interlocutors (Cohen, 1977; Cohen & Harrison, 1972)
❑ Suggests communicative role. But, we do still gesture in these situations. (do still gesture on phone)
Why would we do that if they were purely communicative?
■ Comprehension is sometimes improved when speakers gesture (Graham & Argyle, 1975)
❑ Speakers described abstract objects to listeners. On some trials, gestures were allowed; on others, gestures were prohibited. Listeners produced more accurate drawings of objects when gestures were allowed.
-speaker also said more when not allowed to gesture
■ Speakers adjust their gesture rate and form to the situation
❑ Strategic gesture use
■ We gesture less on the phone than when face-to-face.
■ Even if no listener is present, we strategically use gesture
❑ When being video recorded, if the speaker thinks someone will see the recording, more gestures are used; if they think no one will see it, fewer gestures.
■ Audience design: Speakers orient their gestures to their audience (Özyürek, 2002)
❑ If the listener is in a different location, the form and location of the gesture may change (either in front or beside; internally nothing changes, but the form changes to orient to the audience; a strategic design factor)
■ Speakers compensate for the absence of gesture. (say more)
■ If they can’t gesture (gesture prohibition), they use more words to describe spatial content.
❑ But, gesture prohibition has all sorts of issues that might invalidate any interpretation of this finding (Graham & Haywood, 1975).
■ Speakers leave out critical (task-relevant) information when they gesture (Melinger & Levelt, 2004).
❑ Certain types of information that are never omitted in the absence of a gesture are omitted when a co-expressive gesture is present.
-this is why we think there is a communicative component
A Communicative function BUT…
■ Gestures aren’t “needed” to interpret speech
❑ We can understand people on the phone just as well as face-to-face
■ Not always clear that gestures add any information.
❑ Early studies suggested that gestured information didn’t get incorporated into listeners’ understanding
❑ Gestures produced by university lecturers don’t enhance students’ performance (Kelly & Goldsmith, 2004)
■ When presented in isolation (without accompanying speech), people are bad at identifying the meaning of a gesture. (Feyereisen et al, 1988)
Four things gestures might do for the speaker
■ Gestures sometimes are fully redundant with speech, so unlikely they are intended for listener. Maybe gestures also function to:
■ Aid verbal formulation
■ Structure thought
■ Keep conceptual information, i.e., visual-spatial information, active during formulation
■ Help in lexical retrieval, as a type of cross-modal prime.
Evidence for a speaker-directed function
■ When the interlocutor is not present (e.g., on the phone), interactive gestures (like a wave goodbye) are absent, but 'topic' gestures stay steady.
■ When gesture is prohibited, speech not as smooth
❑ More disfluent, more pauses
■ More gestures produced during spontaneous speech than during scripted speech.
❑ More gestures from video retelling than written retelling.
❑ More gestures when describing something from memory than when describing something that you can see.
■ Gesturing improves performance on memory tasks (Goldin-Meadow et al., 2001) (dual-task paradigm)
❑ Participants solve an equation for themselves (4 + X = 2 + 6)
❑ Then they are presented with list of words to memorize
❑ Then they have to give their solution to equation and explain how they solved it
❑ Then they are tested on the memory test
➱ People who gestured while explaining their solution performed better on the memory task than people who didn’t gesture.
-gesturing externalizes things that would otherwise have to be held internally, leaving more room internally
■ When processing is harder, more gestures (Melinger & Kita, 2007).
❑ When describing paths with branching routes, more gestures than for paths without branching routes.
■ When conceptualization processes are harder, gestures increase (Hostetter et al., 2007)
❑ Fewer gestures for patterns that are already segmented (easier to conceptualize)
-on the one hand a communicative function, but gestures are also used to help the speaker think