Speech Perception Lecture Flashcards


Flashcards in Speech Perception Lecture Deck (45)
1
Q

Name and describe the two main aphasias

A

Broca’s aphasia-
problems producing language:
-low on grammar and function words
-but still meaningful

Language comprehension is largely spared.

Wernicke’s aphasia-

  • Language production is fluent, often copious, but often lacks meaning
  • Language comprehension is also affected
2
Q

What are considered the neural correlates of these aphasias (based on lesions)?

A

Broca’s aphasia: Inferior frontal gyrus (IFG) (Broca’s area)

Wernicke’s aphasia: posterior superior temporal gyrus (pSTG) (Wernicke’s area)

3
Q

Name and describe an early language model

A

Wernicke-Lichtheim-Geschwind model

Auditory input

↓

Wernicke’s area (phonological lexicon with the sounds of words/ speech perception)

↓ (or directly to Broca’s area)

Conceptual representation

↓

Broca’s area (speech planning/ production)

↓

Speech output

Look at copy

4
Q

Name a more complex model which explains speech comprehension

A

Hickok and Poeppel’s dual stream model

Look at copy

5
Q

What is meant by the term propositions?

A

Meaningful grammatical units within a sentence

e.g. “once you have examined the city”

6
Q

Name and describe two units smaller than propositions which are still meaningful

A

Words - e.g. “controllable”

Morphemes - the smallest meaningful units of language that cannot be further divided or analyzed:

(control)(lable): 2 meanings, 2 morphemes
(un)(control)(able)(ly): 4 morphemes (including grammatical ones)

7
Q

How can words be broken down further?

A

Phonemes are the linguistic units in speech that change the meaning of an utterance, though most don’t carry meaning themselves (some, like -s and -y, do)

8
Q

Describe the relationship between these parts (what the system has to do)

A

Usually a morpheme is made up of multiple phonemes. The system has to extract the nature of each phoneme before it can extract the morphemes; it then combines morphemes into words, retrieves their meanings, detects where words end, builds sentences, and computes their meaning. This has to be done for both speech production and perception.

9
Q

What are meant by minimal pairs?

A

Minimal pairs are pairs of words that differ in a single phoneme; they reveal which phoneme contrasts matter in a given language.

“Rot” vs “lot” matters in English.
“Rhot” vs “rot” sounds odd but doesn’t matter in English; it would in Dutch.
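
The minimal-pair test can be sketched in code. This is a toy illustration, using letters as a crude stand-in for phonemes (real minimal pairs are defined over phonemic transcriptions, which is why “rhot” vs “rot” would need transcribing first):

```python
def is_minimal_pair(a, b):
    """Two equal-length transcriptions form a minimal pair if they
    differ in exactly one segment. Characters stand in for phonemes
    here; a real analysis would use phonemic transcriptions."""
    return len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 1

print(is_minimal_pair("rot", "lot"))  # differ only in the first segment -> True
print(is_minimal_pair("rot", "rot"))  # identical, no contrast -> False
```

If the two forms are distinct words of the language, the differing segments are separate phonemes in that language.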

10
Q

Different languages have different phonemes (e.g. clicks). How are consonants described/ distinguished?

A

How they are produced by our anatomy (not by the sound)

  • Place of articulation (where it is produced in the mouth)
  • Manner of articulation (how it is produced in the mouth)
  • Voicing (whether the vocal cords are used or not)
11
Q

Name and describe two manners of articulation

A

Plosives- short bursts of sound (stops) (b, g, k)

Fricatives- continuous sounds made by restricting the airflow, e.g. with the teeth, or teeth and lip (sh, f, s, h)

12
Q

How can plosives differ? give three examples

A
  • Bilabial: using our two lips (b(uh), p(uh))
  • Velar: produced at the back of the vocal tract (k(uh), g(uh))
  • Alveolar: using the tongue against the alveolar ridge (t(uh))
13
Q

How can you tell in a simple way whether a sound is voiced or unvoiced?

A

Make a sound and feel your throat to see if it vibrates or not: usage of vocal cords

14
Q

Why is voicing important in distinguishing phonemes?

A

Phonemes can be identical in both place and manner of articulation but differ in whether they are voiced or not
t- unvoiced
d- voiced

15
Q

How are vowels distinguished?

A

Position of the tongue (high-low, back-front)

16
Q

What can cause variation in speech sounds

A
There are many factors affecting the precise acoustic realization of a phoneme:
  • Age
  • Sex
  • Dialect
  • Speaking rate
  • Speech context (coarticulation)
17
Q

What is meant by coarticulation?

A

The acoustic realization of phonemes is influenced by the surrounding phonemes,
e.g. “streep” vs “stroop”

18
Q

Give a reason for coarticulation

A

The different movements required for each phoneme are executed very quickly; we don’t return to a neutral starting position for each phoneme. Instead, the mouth already prepares for the following phonemes and is affected by the position of the previous ones; otherwise our speaking rate would drop.

19
Q

What problem does this pose for the perceiver?

A

If phonemes vary considerably between speakers and depend on the surrounding sentence, how does the perceiver nevertheless arrive at a constant percept?

20
Q

What explains this problem of variance in phonemes and what exercises demonstrate it? (2)

A

Categorical perception: although there is a continuum from “boop” to “beep”, you hear it as a sudden transition from oo to ee

21
Q

What is meant by voice onset time (VOT)?

A

Voice onset time (VOT) is the time between the release of a plosive and the onset of vocal cord vibration. It determines whether a phoneme is perceived as voiced or not: voiced phonemes have a short VOT, while unvoiced phonemes have a long VOT.

22
Q

How can VOT be used to measure categorical perception?

A

The voice onset time differs for da (voiced) and ta (unvoiced). The latency of the vocal cord vibration can be manipulated (by injecting silence) to turn the da gradually into a ta, in steps from 0 to 80 ms:
VOT = 0 ms → /d/
VOT = 80 ms → /t/
Subjects hear each sound and decide whether they heard da or ta.

23
Q

What were the results of the study regarding VOT and categorical perception? What did this show?

A

Participants were ~100% in agreement that the sound was “da” until around 40 ms; then there was a sharp transition, and from 50-80 ms they were ~100% sure it was “ta”.

This displays a phonetic boundary (unlike colours, where a continuum is perceived)
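
The sharp boundary can be illustrated with a toy psychometric function. This is only an illustrative sketch, not the study’s data: the logistic shape, the 40 ms boundary, and the slope value are all assumptions chosen to mimic categorical perception:

```python
import math

def p_ta(vot_ms, boundary=40.0, slope=0.5):
    """Probability of reporting 'ta' (unvoiced) as a logistic function
    of voice onset time. Boundary and slope are illustrative values."""
    return 1.0 / (1.0 + math.exp(-slope * (vot_ms - boundary)))

# Sample the 0-80 ms continuum: responses hug 0 or 1 except near 40 ms,
# i.e. a gradual acoustic change yields an abrupt perceptual switch.
for vot in range(0, 81, 10):
    label = "ta" if p_ta(vot) > 0.5 else "da"
    print(f"VOT {vot:2d} ms: p(ta) = {p_ta(vot):.2f} -> '{label}'")
```

A continuous (non-categorical) percept would instead correspond to a shallow, near-linear function over the same range.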

24
Q

Where is processing at the phoneme level carried out according to the dual stream model? (2)

A

Spectro-temporal analysis- dorsal superior temporal gyrus

Phonological network- mid-post superior temporal sulcus

25
Q

Describe a study which specifically investigated the role of the STS and STG in phoneme perception

A

They used intracranial recordings in patients who were already being operated on for another reason while conscious. Activity was recorded from the posterior (dorsal) superior temporal gyrus while they listened to phonemes varying gradually from ba to da to ga.

26
Q

What were the results of the study investigating the role of the STS and STG in phoneme perception?

A

Perception was similar to what was found previously. They carried out unsupervised multidimensional scaling (a multivariate pattern analysis, comparable to MVPA) on the brain’s activation patterns. At around 110-150 ms (but not at 0-40 ms or 180-220 ms) the algorithm was very good at distinguishing the brain pattern that accompanies the perception of each phoneme. The auditory input changes gradually, but the brain patterns shift rather abruptly, mirroring the categorical perception.

27
Q

Describe a classic study and its results which investigated the activation of the brain in response to phoneme/ sound perception

A

Used words (sleep), non-words (kleep), reversed words, or tones (nothing speech-like about them). Participants listened while brain activity was measured with fMRI. All three speech-like stimulus types activated the lateral STG and the middle STS, but the tones did not.

28
Q

Describe a study which investigated the perception of more phonologically complex stimuli

A

Used phonologically complex vs simple stimuli, defined by neighbourhood density: the number of words that can be derived from a word by changing one phoneme (slip/slap). People are faster to identify words with a lower neighbourhood density.

Words with a higher neighbourhood density were found to activate the STS more.

29
Q

How else have classification methods been used to study speech sound perception?

A

A classification algorithm can successfully categorise vowels based on brain activation patterns in fMRI, even when the algorithm is trained on the brain activation evoked by the same vowels spoken by a different speaker!

Voxels that contribute most to successful categorisation are again in the STG and STS

30
Q

Summarise these results about the role of STG and STS in perceiving phonemes

A

Categorical perception: dorsal STG
Phonological complexity: STS
Speech sounds: STG and STS
Phoneme perception: STG and STS

31
Q

Why does the dual stream model have that name?

A

It refers to a dorsal and a ventral stream

32
Q

Describe the ventral stream of the dual stream model of speech

A

The ‘what’ stream

Lexical access/ interface (access the meaning of a word): pMTG, posterior Inferior Temporal Sulcus (pITS)
(weak left-hemisphere bias)

Combinatorial network (combine words to form a sentence/ message): aMTG, aITS
(left dominant?)
33
Q

Describe a study which found evidence for this ventral stream of speech

A

Used 2 types of stimuli:
Sounds- words, pseudowords and rotated (reversed) speech
Pictures- 1 back task

Two conditions:
> Attend to sound, ignore visual stimuli = Lexical processing for words (but not non words, altered speech)
> Do 1-back task, ignore auditory stimuli= no lexical processing at all

fMRI measured brain activity

34
Q

Describe the results of the study regarding words and the 1 back task

A

Activation for words greater than for non-words in the middle temporal gyrus (MTG), present only in the attended condition. Therefore this brain activation is assumed to be related to lexical processing.

35
Q

Describe the dorsal stream of the dual stream model of language

A

“how” stream: How I would produce what I am currently hearing (simulation)

Articulatory network: post Inferior Frontal Gyrus (pIFG), PM, anterior insula (left dominant)

Sensorimotor interface: parietal-temporal Spt (left dominant)
(gets input from other sensory modalities)

36
Q

Describe a study which attempted to provide evidence for these what vs how streams

A

Used repetitive TMS (rTMS) stimulation so that the underlying cortex is deactivated for ~30 seconds.

They deactivated either the middle temporal gyrus (MTG) or the superior temporal gyrus (STG), or applied sham stimulation.

Subjects did one of two tasks: a lexical decision task (is this a word or not?; “what”) or an auditory repetition task (repeat what you hear; “how”).

37
Q

Describe the results of the TMS study on the dual streams

A

For the lexical decision task, performance was slower after rTMS on MTG, but not STG.

For the auditory repetition task there was no influence of rTMS on MTG or STG; if anything, people were better.

Provides evidence for the ventral pathway, not the dorsal pathway

38
Q

What region is assumed to play an important part in the “how” stream?

A

Sylvian Parietal-temporal area (Spt), a region in the planum temporale

39
Q

What evidence is there for the involvement of the Spt?

A

It is involved in the sensorimotor regulation of the vocal tract and speech perception

40
Q

Describe a study highlighting the importance of Spt for speech perception

A

Conduction aphasia specifically causes problems repeating what you hear, while comprehension and production are spared. When lesions from a large number of patients with this disorder were compared, almost all patients had lesions around this area.

In healthy participants doing a phonological working memory task (repeat and remember), fMRI shows a wide range of activation. However, when you overlap this activation with the conduction aphasia lesions, the place of maximal overlap is the Spt area.

41
Q

How was TMS used to demonstrate the role of another brain area in the “how” stream?

A

TMS was applied over the motor area of the tongue or the lips while participants distinguished between 4 speech sounds:
2 labial: ba and pa
2 dental (tongue): da and ta
They responded with a button press, so no production was involved. Yet if the tongue area is TMS’d, you are faster to perceive labial sounds than dental ones, and if you TMS the lip area the effect is the other way around.

This suggests that if you incapacitate the area involved in producing a phoneme, you become worse at identifying and perceiving that phoneme.

42
Q

What else is observed that is in line with this research?

A

Broca’s area (inferior frontal gyrus) is activated both during the production and during the perception of speech. (Mirror neurons could also be involved?)

43
Q

What theory of language perception leans heavily on this research on the motor areas and Broca’s area activation?

A

That language comprehension is just a covert form of production: you understand speech by simulating how you would say it

44
Q

How are chinchillas related to this line of study?

A

Chinchillas do not have a brain area for phonemes like humans do, as they do not use human language.
Avoidance conditioning, however, can occur when a /t/ sound is paired with a shock.
Positive reinforcement can also occur when a /d/ sound is paired with a reward (water): if a /d/ sound is played, chinchillas learn to stay where they are.
This indicates that chinchillas can distinguish between these sounds, which means this behaviour can be used to locate a chinchilla’s phonetic boundary along a VOT continuum.

45
Q

What results are found with these studies of chinchillas and phonetic boundaries?

A

The boundaries are almost exactly the same. Chinchillas never have to use their lil chinchilla tongues to talk to their chinchilla homies about chinchilla things using these phonemes, so this poses a problem for the speech-production-simulation theory of speech perception, or at least for categorical perception and its earlier stages.