Cours 10 - Chapitre 14 et 15 Flashcards

(262 cards)

1
Q

What causes sound in the air?

A

Sound comes from pressure fluctuations in the air caused by vibrating objects.

2
Q

What do vibrating objects produce in the air?

A

They produce cycles of air compression and rarefaction.

3
Q

What are compression and rarefaction in sound waves?

A

Compression increases air molecule density; rarefaction decreases it.

4
Q

What wave shape characterizes sound pressure fluctuations?

A

A sinusoidal wave.

5
Q

What determines the loudness of a sound wave?

A

The amplitude of the sinusoidal wave.

6
Q

Why is audition useful from a survival perspective?

A

It helps detect moving objects, like footsteps or cracking branches.

7
Q

What special function do humans and songbirds use sound for?

A

Vocalizations for communication.

8
Q

What does the chapter examine about sound?

A

How complex patterns of air waves are transduced by the ear and transmitted to the central nervous system.

9
Q

What is loudness in terms of sound perception?

A

Loudness is the psychological aspect of sound related to its perceived intensity.

10
Q

In what unit is loudness typically measured?

A

Decibels (dB).

11
Q

What do decibels reflect more: actual sound pressure or subjective perception?

A

Subjective perception.

12
Q

What is the reference point for decibel measurements?

A

The smallest pressure perceivable by most people.

13
Q

Does 0 dB mean that there is no sound?

A

No, it means the sound is at the threshold of hearing for most people.

14
Q

Is it possible to hear sounds below 0 dB?

A

Yes, for people with exceptionally good hearing.

15
Q

What unit is used to measure sound pressure level?

A

Pascals.

16
Q

What does the measurement of Pascals represent in terms of sound?

A

The force exerted by air molecules.

17
Q

What is the value of the minimal perceivable sound pressure in Pascals?

A

2 × 10⁻⁵ Pascals.

18
Q

How do you calculate the sound pressure level ratio?

A

Divide the pressure of the sound by the minimal perceivable sound pressure.

19
Q

Why must we square the pressure level to get sound intensity?

A

Because sound waves propagate in 3D and exert force on a 2D tympanic membrane, making intensity proportional to the square of pressure.

20
Q

How do sound waves propagate in space?

A

In a spherical fashion, like the expansion of an inflating balloon.

21
Q

What is the formula for sound intensity ratio?

A

Sound intensity ratio = (sound pressure level ratio)²

22
Q

Why is the tympanic membrane important in hearing?

A

It is the 2D structure that the sound wave must exert force on to be perceived.

23
Q

What is the sound intensity ratio of the minimal audible sound?

A

1 (because it is used as the reference point).

24
Q

What is the sound intensity ratio of a helicopter?

A

10¹⁰ or 10,000,000,000.

25
What is the purpose of the Bel scale?
To work with more manageable numbers when measuring sound intensity.
26
How many decibels are in one Bel?
10 decibels.
27
What happens to sound intensity each time you add 10 dB?
It multiplies by 10.
28
What type of relationship exists between decibels and sound intensity?
A logarithmic relationship.
29
How much more intense is a 100 dB sound compared to a 90 dB sound?
Ten times more intense: the intensity ratio rises from 10⁹ to 10¹⁰, an increase of 9,000,000,000.
30
How much more intense is a 20 dB sound compared to a 10 dB sound?
Ten times more intense: the intensity ratio rises from 10 to 100, an increase of 90.
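The decibel arithmetic from the cards above can be sketched in a few lines of Python. This is a minimal illustration: the reference pressure 2 × 10⁻⁵ Pa comes from card 17, but the function names are mine.

```python
import math

REF_PRESSURE_PA = 2e-5  # minimal perceivable sound pressure (card 17)

def pressure_to_db(pressure_pa):
    """Sound level in dB: intensity is proportional to pressure squared,
    so dB = 10 * log10((p / p0)**2) = 20 * log10(p / p0)."""
    return 20 * math.log10(pressure_pa / REF_PRESSURE_PA)

def intensity_ratio(db):
    """Sound intensity ratio relative to the 0 dB reference."""
    return 10 ** (db / 10)

print(pressure_to_db(2e-5))  # 0.0 - the threshold of hearing, not silence
print(intensity_ratio(100) / intensity_ratio(90))  # ~10: each +10 dB multiplies intensity by 10
```

Note the logarithmic scale at work: `intensity_ratio(100)` is 10¹⁰ while `intensity_ratio(90)` is 10⁹, so one extra 10 dB step adds 9 × 10⁹ to the intensity ratio while multiplying it by only 10.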
31
What does the frequency of a sound wave represent?
The number of cycles of the sound wave that repeats in one second.
32
What is the unit used to measure frequency?
Hertz (Hz).
33
What determines the pitch of a sound?
The frequency of the sound wave.
34
Which type of frequency is associated with a low pitch?
Low frequencies.
35
Which type of frequency is associated with a high pitch?
High frequencies.
36
What influences the perceived loudness of a sound?
The combination of sound frequency and amplitude.
37
Why are very high and very low frequencies harder to hear?
They require higher intensity to be perceived.
38
Which frequencies are easiest for humans to process?
Frequencies in the middle range, such as those in speech and music.
39
What do equal-loudness curves represent?
Combinations of intensity and frequency that are perceived as equally loud.
40
What unit is used to measure subjectively perceived loudness?
Phons.
41
At what frequency is 40 dB equal to 40 phons?
At 1000 Hz.
42
At what frequency is 40 dB equal to 10 phons?
At 100 Hz.
43
Why can we perceive sounds below 0 dB between 2000 and 5000 Hz?
Because 1000 Hz was chosen as the dB reference, but it is not the most audible frequency.
44
What are sounds with only one frequency called?
Pure tones
45
Do pure tones exist in nature?
No, pure tones don't exist in nature; all natural sounds are complex.
46
What term describes sounds with more than one frequency?
Complex sounds
47
What is a power spectrum?
It refers to the energy or power associated with the different frequencies composing a sound.
48
What do the lengths of the bars in a power spectrum represent?
They represent how much a certain frequency is present in the sound.
49
In the example provided, what are the frequencies present in the third waveform?
100 Hz, 300 Hz, and 500 Hz
50
In the example provided, what are the frequencies present in the fourth waveform?
300 Hz and 400 Hz
51
What is the difference between the first and second pure tones in the example?
The first has an intensity of 50 dB, and the second has an intensity of 40 dB.
52
What is a harmonic spectrum?
A spectrum with energy at frequencies that are integer multiples of the fundamental frequency.
53
What are harmonics?
Frequencies that are integer multiples of the fundamental frequency in a harmonic spectrum.
54
If a sound has a fundamental frequency of 100 Hz, name three harmonics.
200 Hz (2nd harmonic), 300 Hz (3rd harmonic), 400 Hz (4th harmonic).
55
What does the fundamental frequency represent in a harmonic spectrum?
It is the lowest frequency of the sound, and all other harmonics are multiples of it.
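The harmonic-spectrum idea in cards 52-55 can be illustrated with a tiny sketch (the helper name is illustrative, not from the course):

```python
def harmonics(fundamental_hz, n=5):
    """First n components of a harmonic spectrum: integer multiples of the
    fundamental frequency (cards 52-55)."""
    return [fundamental_hz * k for k in range(1, n + 1)]

print(harmonics(100))  # [100, 200, 300, 400, 500]
```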
56
Why do many natural sounds have a harmonic spectrum?
Because they are typically caused by a single vibrating source, such as a guitar string or a saxophone reed.
57
What determines the modes of vibration for a vibrating object?
The shape of the vibrating object, which only allows certain stable modes of vibration.
58
What analogy is used to explain harmonic vibrations?
A jumping rope, which can move along its whole length or at integer fractions of its length.
59
Why are waves of unequal sizes physically impossible in a vibrating rope?
Because they are unstable and always tend to converge towards waves of equal sizes.
60
What frequencies can a rope oscillate at?
At a fundamental frequency corresponding to its total length or at multiple integers of that frequency.
61
What do the different frequencies observed in a jumping rope correspond to in sound production?
They correspond to the harmonic frequencies of the vibrating object producing the sound.
62
Why do vocal folds also produce harmonic spectra?
Because they are subject to the same physical constraints as other vibrating objects like guitar strings.
63
What does the fundamental frequency determine in a sound?
The fundamental frequency determines the pitch that we hear.
64
What determines the timbre of a sound?
The profile of harmonics determines the timbre of the sound.
65
What is timbre?
Timbre refers to the qualitative aspect of the sound that characterizes, for instance, the sound produced by different instruments.
66
Why does the same note sound different when played by a flute and a violin?
Because the timbre is different; the fundamental frequency is the same, but the profile of the harmonics is different.
67
What is the 'missing fundamental' phenomenon?
It is the perception of the fundamental frequency even when it is physically absent from a sound's spectrum.
68
How does the auditory system estimate the missing fundamental?
By identifying the greatest common factor among the present frequencies.
69
What is the greatest common factor of 200 Hz, 300 Hz, and 400 Hz?
100 Hz
70
Does the perceived pitch change if the fundamental frequency is removed from a harmonic sound?
No, the perceived pitch remains the same, but the timbre is affected.
71
Why doesn't the brain assume a lower fundamental (e.g., 50 Hz) instead of 100 Hz?
Because multiplying 50 Hz by integers yields frequencies not present in the sound, such as 150 Hz and 250 Hz.
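The greatest-common-factor estimate described in cards 68-69 can be sketched directly with Python's standard library (the function name is mine):

```python
from functools import reduce
from math import gcd

def estimated_fundamental(freqs_hz):
    """Greatest common factor of the frequencies present - the auditory
    system's estimate of the (possibly missing) fundamental (cards 68-69)."""
    return reduce(gcd, freqs_hz)

print(estimated_fundamental([200, 300, 400]))   # 100 - heard even when 100 Hz is absent
print(estimated_fundamental([500, 750, 1000]))  # 250 - the same sound as card 110
```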
72
What aspect of the sound is altered when the fundamental frequency is removed?
The timbre of the sound.
73
What are the three parts of the ear?
The outer ear, the middle ear, and the inner ear.
74
What structures make up the outer ear?
The pinna (auricle), the auditory canal, and the tympanic membrane.
75
What is the function of the pinna?
The pinna helps estimate the elevation of a sound source (whether it is coming from above or below).
76
What does the auditory canal do?
It conveys sound waves to the tympanic membrane.
77
What is the role of the tympanic membrane?
It vibrates in synchrony with the sound source.
78
What are the three small bones in the middle ear called?
Ossicles
79
What is the role of the ossicles?
To conduct sound to the cochlea and amplify the vibrations.
80
Which bone is directly connected to the tympanic membrane?
The malleus
81
How are the ossicles arranged?
In a cog-and-wheel manner to amplify movement.
82
What is the name of the second ossicle?
Incus
83
What is the name of the third ossicle?
Stapes
84
To what structure is the stapes attached?
The oval window of the cochlea
85
What happens when sound waves reach the tympanic membrane?
They cause it to vibrate, which moves the ossicles.
86
What is the function of the oval window?
To transfer the movement of the ossicles to the cochlea.
87
What are the two main structures of the inner ear?
The cochlea and the vestibular organs.
88
What is the function of the vestibular organ?
It is responsible for our sense of equilibrium.
89
What is the function of the cochlea?
It is the structure in which sound waves are transduced.
90
What does the cochlea resemble?
A snail shell.
91
What is the cochlea filled with?
Fluid distributed across three main canals.
92
Name the three main canals of the cochlea.
Vestibular canal (scala vestibuli), middle canal (cochlear duct), and tympanic canal (scala tympani).
93
What produces movement in the ossicles and oval window?
Sound reaching the tympanic membrane.
94
What happens when the oval window moves?
It produces waves in the lymphatic fluid inside the cochlea.
96
What is one of the main processes the auditory system uses to identify the frequency of a sound?
The cochlear place code.
97
How does the structure of the basilar membrane change from the base to the tip of the cochlea?
It becomes wider and thinner.
98
Which part of the basilar membrane is more sensitive to low frequencies?
The tip of the basilar membrane.
99
Which part of the basilar membrane is more sensitive to high frequencies?
The base of the basilar membrane.
100
Why do different parts of the basilar membrane respond to different frequencies?
Because of variations in its width and thickness from base to tip.
101
After hair cells transduce sound into neural activity, where do they transfer this information?
To cochlear (or auditory) nerve fibers.
102
What do auditory fibers have that allows them to differentiate different frequencies in a sound?
Characteristic frequencies (CF).
103
How are auditory fibers arranged on the basilar membrane in terms of frequency sensitivity?
Cells more responsive to high frequencies are located closer to the base of the cochlea, and cells responsive to low frequencies are located closer to the tip of the cochlea.
104
What is phase locking in relation to cochlear encoding of sound frequency?
It refers to the firing of hair cells in synchrony with the phase of the sinusoidal wave in the cochlea.
105
How do hair cells in the cochlea encode the temporal characteristics of sound?
Hair cells fire an action potential each time the tectorial membrane pushes their hairs in the direction of their tallest hair, with their firing synchronized to the phase of the sinusoidal wave.
106
Why can’t individual neurons encode frequencies higher than 4–5 kHz?
Because it is physiologically impossible for individual neurons to fire at such high rates.
107
How are high frequencies encoded in the auditory system if individual neurons cannot follow every cycle?
Through the combined firing of the whole population of neurons, each firing on a subset of cycles in a phase-locked manner.
108
What does it mean when neuron firing is phase-locked?
It means neurons fire in synchrony with the phase of the sound wave cycles they respond to.
109
How does phase-locking help explain the missing fundamental effect?
Neurons can phase-lock their firing to peaks that occur at regular intervals (e.g., every 4 ms), tricking the brain into perceiving a fundamental frequency (e.g., 250 Hz) that isn’t physically present.
110
In the case of a sound with components at 500, 750, and 1000 Hz, what is the perceived fundamental frequency due to phase-locking?
250 Hz, because the combined waveform peaks every 4 ms, corresponding to that frequency.
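The 4 ms claim can be checked numerically: summing sinusoids at 500, 750, and 1000 Hz gives a waveform that repeats exactly every 4 ms, since the components complete 2, 3, and 4 whole cycles in that span. A minimal sketch, assuming equal unit amplitudes:

```python
import math

def combined_wave(t, freqs=(500, 750, 1000)):
    """Sum of unit-amplitude sinusoids at the component frequencies (card 110)."""
    return sum(math.sin(2 * math.pi * f * t) for f in freqs)

period = 0.004  # 4 ms = 1 / 250 Hz, the common period of all three components
max_dev = max(abs(combined_wave(t) - combined_wave(t + period))
              for t in (i * 1e-4 for i in range(40)))
print(max_dev)  # numerically zero: the waveform's peaks recur every 4 ms
```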
111
What is the first brain structure to receive auditory information?
The ipsilateral cochlear nucleus.
112
After the cochlear nucleus, where is auditory information sent?
To the contralateral superior olive.
113
What are the next brain structures auditory information travels through after the superior olive?
The inferior colliculus, the medial geniculate nucleus of the thalamus, and then the auditory cortex.
114
Is there only a contralateral auditory pathway?
No, there is also a secondary pathway on the ipsilateral side that connects the same structures.
115
Where is the auditory cortex located?
Inside the Sylvian (lateral) fissure.
116
What is the function of the primary auditory cortex?
It is the first cortical region to receive auditory information.
117
What kind of organization does the primary auditory cortex have?
Tonotopic organization, with adjacent frequencies processed in adjacent areas.
118
How is the tonotopic organization in the auditory cortex related to the cochlea?
It follows the same low-to-high frequency tuning gradient as in the cochlea.
119
After processing in the primary auditory cortex, where does sound information go?
To the belt and parabelt regions for processing more complex features.
120
What do the ventral and dorsal pathways process in the auditory system?
The ventral pathway processes sound identity ('what'), and the dorsal pathway processes sound location ('where').
121
What are conductive hearing impairments?
Hearing impairments caused by a loss of sound conduction to the cochlea, such as from earwax buildup, tympanic membrane tearing, or pus in the middle ear.
122
What are sensorineural hearing impairments?
Hearing impairments caused by damage to the cochlea or auditory nerve, such as to the basilar membrane or hair cells.
123
What tool is typically used to assess hearing loss?
Audiograms.
124
What does an audiogram measure?
Perceptual thresholds (smallest intensity of sound perceivable) across different frequencies.
125
What threshold generally represents hearing loss on an audiogram?
Perceptual thresholds above 20 dB.
126
What does severe hearing loss between 4000-8000 Hz look like on an audiogram?
It shows perceptual thresholds significantly above 20 dB, representing severe hearing loss in that range.
127
What is tinnitus?
A condition where a sound is perceived in the absence of sound waves, often associated with hearing loss.
128
What is a leading hypothesis for tinnitus?
The brainstem increases auditory pathway gain to compensate for hearing loss, amplifying spontaneous neural firing perceived as sound.
129
What other condition is tinnitus compared to?
Phantom limb pain, where sensations are perceived in an absent body part.
130
Why is sound localization important in dangerous situations, like being chased by a bear?
It helps us determine the direction the sound is coming from, which is essential for reacting appropriately.
131
What is the Interaural Time Difference (ITD)?
It's the difference in time it takes for a sound to reach each ear, depending on the location of the sound source.
132
How does the ITD help us locate sound sources?
By detecting which ear the sound reaches first, our brain can infer the direction the sound came from.
133
What happens when a sound comes from the left side of the head?
It reaches the left ear before the right ear.
134
What is the term for the angles of sound sources relative to the head?
Azimuths.
135
Which azimuths produce the largest Interaural Time Differences (ITDs)?
Azimuths of 90 degrees to the left or right.
136
What ITD values are associated with azimuths of 0 or 180 degrees?
ITDs of 0.
137
How does sound localization rely on interaural time difference?
It uses the tiny time differences between when sound waves reach each ear to estimate the location of the sound source.
138
What is the smallest interaural time difference (ITD) that the human auditory system can detect?
As small as 10 microseconds (0.01 milliseconds).
139
How much longer was the time difference between Usain Bolt and Justin Gatlin compared to the smallest ITD humans can detect?
8000 times longer (80 milliseconds vs 0.01 milliseconds).
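A simplified geometric model illustrates why ITDs peak at 90° azimuth and vanish at 0° and 180° (cards 134-136). The head width and the far-field sine approximation are my assumptions, not course values:

```python
import math

HEAD_WIDTH_M = 0.18     # assumed ear-to-ear distance (illustrative value)
SPEED_OF_SOUND = 343.0  # m/s in air at room temperature

def itd_seconds(azimuth_deg):
    """Simplified ITD: the far ear receives the sound later by roughly
    head_width * sin(azimuth) / speed_of_sound."""
    return (HEAD_WIDTH_M / SPEED_OF_SOUND) * math.sin(math.radians(azimuth_deg))

print(round(itd_seconds(90) * 1e6, 1))  # largest ITD, a few hundred microseconds
print(round(abs(itd_seconds(180)), 9))  # 0.0 - directly behind, no time difference
```

Even the maximum ITD here is well under a millisecond, which is why the auditory system's sensitivity down to about 10 microseconds (card 138) is so remarkable.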
140
Which structure receives auditory input from both the left and right cochlear nuclei?
The superior olive.
141
What allows the brain to compare the timing of sounds from both ears?
The projection of cochlear nuclei to both ipsilateral and contralateral superior olives.
142
What are coincidence detector neurons?
Neurons that are pre-tuned to respond to specific interaural time differences (ITDs).
143
In the example provided, where do the action potentials from the left and right cochlear nuclei meet?
In the left superior olive.
144
Why do the action potentials meet in the left superior olive in the example?
Because although the right signal starts earlier, it travels a longer path, allowing both signals to coincide in the left superior olive.
145
What is the function of coincidence detector neurons in sound localization?
They detect specific timing differences between inputs from both ears to infer the direction of the sound source.
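The delay-line idea behind coincidence detectors (cards 142-145) can be sketched as follows. The detector delay values are hypothetical, chosen only for illustration: each detector's internal axonal delay compensates a particular ITD, so the detector whose delay best cancels the ITD sees the two inputs coincide.

```python
def best_matching_detector(itd_us, detector_delays_us):
    """Return the internal delay (in microseconds) that best compensates an
    interaural time difference, i.e., minimizes |ITD - delay|."""
    return min(detector_delays_us, key=lambda d: abs(itd_us - d))

detectors = [-400, -200, 0, 200, 400]  # hypothetical tuning values, microseconds
print(best_matching_detector(190, detectors))  # 200 - sound off to one side
print(best_matching_detector(0, detectors))    # 0 - sound straight ahead
```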
146
Besides ITDs, what is another cue the brain uses to localize sound?
Interaural Level Differences (ILDs).
147
What does ILD stand for?
Interaural Level Difference.
148
What causes ILDs?
The head blocking or attenuating sound as it travels through space.
149
How does the sound intensity differ between ears if the sound comes from the left?
The sound is more intense in the left ear than in the right ear.
150
What role does the head play in creating ILDs?
It acts as a barrier that blocks some of the sound energy, creating a level difference between the two ears.
151
Why are ILDs considered a redundant mechanism in auditory localization?
Because they complement ITDs and help fine-tune the spatial location of sound sources.
152
At which azimuth angles are ILDs the largest?
At 90 degrees (to the left or right of the head).
153
At which azimuth angles do ILDs not exist?
At 0 and 180 degrees (directly in front or behind the head).
154
For which frequencies are ILDs more effective?
Higher frequencies.
155
Why are ILDs not as informative for low frequencies?
Because long wavelengths of low-frequency sounds are less affected by obstacles like the head.
156
How does the superior olive process ILDs?
It receives excitatory input from the ipsilateral ear and inhibitory input from the contralateral ear, then subtracts the decibels.
157
How is ILD circuitry different from ITD circuitry?
ILD circuitry is simpler, relying on excitatory/inhibitory inputs and subtraction of sound intensity.
158
What is a key limitation of ITDs and ILDs in localizing sound?
They can determine direction (azimuth) but not distance of the sound source.
159
Why can’t ITDs tell us how far away a sound source is on a given azimuth?
Because ITD remains the same for a given angle, regardless of the absolute distance from the source.
160
What cue can the auditory system use to infer the distance of a sound?
The absolute amplitude (intensity) of the sound.
161
Why is using amplitude alone not a perfect solution for determining distance?
Because a loud, far sound and a quiet, close sound could have the same amplitude.
162
What spectral cue helps estimate sound distance?
The spectral composition of the sound, specifically the ratio of low to high frequencies.
163
Why do distant sounds tend to have proportionally more low frequencies?
Because long wavelengths (low frequencies) are more resistant to obstacles and less absorbed by air.
164
What happens to high frequencies as sound travels through air?
They are more easily absorbed, decreasing in intensity with distance.
165
What is a “cone of confusion”?
A 3D region where all points produce the same ITDs, making it hard to localize the exact source of sound.
166
Why do cones of confusion occur?
Because multiple locations in space can create identical ITDs due to being equidistant from the two ears.
167
How do azimuths of 60° and 120° illustrate the problem of ITD ambiguity?
They produce the same ITD, making it hard to distinguish the true direction.
168
How can head movements help resolve cones of confusion?
By altering the ITDs associated with each possible sound source location, allowing the auditory system to identify the one location that remains constant across head positions.
169
In the cone of confusion example, which frog position represents the true sound location?
The blue frog, because it's the only location compatible with both head positions.
170
What auditory cue helps resolve elevation in sound localization?
Sound reflection and distortion by the pinna, which varies with elevation.
171
What is the Directional Transfer Function (DTF)?
A function that describes how the pinna modifies the intensity of different sound frequencies depending on the elevation of the sound source.
172
How does the DTF help identify sound elevation?
By analyzing the specific pattern of frequency distortions caused by the pinna, such as reduced intensity between 8 and 10 kHz for sounds coming from 40° above.
173
What physical features contribute to elevation detection in audition besides the pinna?
The ear canal, head, and torso.
174
Why does using elevation cues help with cones of confusion?
Because elevation cues reduce the 3D ambiguity of cones of confusion to a 2D localization problem.
175
What is auditory stream segregation?
The process by which the auditory system assigns different streams of sounds to different sound objects.
176
Which visual concept does auditory grouping resemble?
Gestalt grouping principles from visual perception.
177
How does sound location influence auditory grouping?
Sounds coming from the same location are grouped together as coming from the same source.
178
How does frequency affect auditory grouping?
Tones with similar frequencies are grouped together as part of the same sound stream.
179
What happens when tones are far apart in frequency?
They are perceived as two distinct auditory streams.
180
How does timing affect auditory grouping?
Tones close together in time are grouped together; tones separated by longer delays may be perceived as separate.
181
How does timbre influence stream segregation?
Tones with the same timbre are grouped together; tones with different timbres are perceived as coming from different sources.
182
What is the effect of onset timing on grouping?
Sounds that begin at different times are perceived as coming from different sources.
183
What makes it easier to distinguish tones in a cluster: gradual or abrupt rise time?
Abrupt rise time.
184
What is the continuity effect in auditory scene analysis?
When a sound is interrupted by noise, the brain can still perceive it as continuous if the gap is filled with noise.
185
What happens if the gap in a continuous sound is not filled with noise?
The sound is perceived as broken into separate chunks.
186
What are restoration effects in auditory perception?
They are higher-order effects where the brain fills in missing auditory information using semantic or syntactic knowledge, especially when gaps are filled with noise.
187
What condition is necessary for restoration effects to occur?
The gaps in the sound must be filled with noise.
188
What happens to restoration effects if the gaps are not filled with noise?
The effect disappears; the brain does not fill in the missing sound.
189
What type of knowledge supports restoration effects?
Higher-order semantic and syntactic knowledge.
190
What are phonemes?
Phonemes are the building blocks of speech—units of sound that distinguish one word from another in a specific language.
191
Give an example of two words distinguished by a single phoneme.
"Kill" and "Kiss."
192
Can the same phoneme be spelled differently?
Yes, the same phoneme can have different spellings across words.
193
What tool helps us represent phonemes more consistently across languages?
The International Phonetic Alphabet (IPA).
194
Approximately how many languages are spoken around the world today?
About 5000 languages.
195
How many different speech sounds are used across world languages?
Over 850 different speech sounds.
196
What is the first step in speech production?
Respiration — the diaphragm pushes air out of the lungs, through the trachea, and up to the larynx.
197
What happens during the phonation step of speech production?
Vocal folds vibrate as air passes through the larynx, producing sound.
198
What causes a higher-pitched voice during phonation?
More tension in the vocal folds.
199
Why do children have higher-pitched voices than adults?
Because smaller vocal folds create more tension, resulting in higher pitch (vocal-fold size: children < women < men, so pitch: children > women > men).
200
What type of spectrum is associated with sounds that pass through the vocal folds?
A harmonic spectrum.
201
What determines the pitch of a voice?
The fundamental frequency.
202
What determines the unique characteristics of a person’s voice?
The profile of harmonics in the harmonic spectrum.
203
What does phonation provide in speech production?
Phonation gives us the pitch of the sound.
204
What must be added to phonation in order to produce phonemes?
Articulation using the vocal tract.
205
What structures are included in the vocal tract?
The oral and nasal tracts, including the jaws, lips, tongue body, tongue tip, and velum (soft palate).
206
How does changing the shape of the vocal tract affect speech?
It alters the resonance characteristics, affecting the harmonic spectrum and producing different phonemes.
207
In many languages, what distinguishes different phonemes?
Their timbre, or profile of harmonics.
208
What are formants?
Peaks in the speech spectrum corresponding to harmonics with the highest intensities.
209
How are formants labeled?
By number from lowest to highest: F1, F2, F3.
210
How many formants are typically enough to identify a phoneme?
The first three (F1, F2, F3).
211
What role does articulation play in the sound produced by phonation?
It filters the frequencies, amplifying some and reducing others, to create distinct phonemes.
212
What does a spectrogram show?
The sound amplitude of different frequencies over time, with warmer colors indicating higher amplitude.
213
In the spectrogram of “We were away a year ago,” what do the red zones for the “e” in “We” represent?
The formants of the phoneme “e” at low, medium, and medium-high frequencies.
214
Why can’t we always use formants directly to identify phonemes?
Because speech is produced very quickly and formants can vary due to coarticulation.
215
What is coarticulation?
The overlap in articulatory or speech patterns caused by anticipating the next consonant or vowel during speech.
216
How fast do humans typically produce phonemes in speech?
About 10–15 consonants and vowels per second, which can double with fast speech.
217
Give an example of coarticulation using the phoneme "d".
The "d" has a higher second formant when followed by "i" than when followed by "u".
218
What does coarticulation imply about using formants for phoneme identification?
That formants vary depending on surrounding phonemes, making identification more complex.
219
What is categorical perception?
A phenomenon where stimuli are perceived in discrete categories rather than as gradual changes, despite continuous variation in input.
221
What visual example is used to illustrate categorical perception?
A set of images transitioning from a monkey to a cow, where people categorize them sharply despite gradual changes.
222
How does categorical perception help with speech perception?
It allows us to perceive consistent phoneme categories despite variability in acoustic features due to coarticulation.
223
How do people perform when distinguishing between pictures within a category vs. across categories?
They’re better at detecting differences when the pictures cross a categorical boundary.
224
What kind of perception does the brain favor, according to categorical perception?
All-or-none perception based on pre-existing categories.
225
What is the motor theory of speech perception?
It proposes that the motor processes used to produce speech can be used in reverse to understand speech sounds.
226
What phenomenon supports the motor theory of speech perception?
The McGurk Effect.
227
What is the McGurk Effect?
A perceptual phenomenon where visual input (e.g., seeing lips move) influences what we hear.
228
Are phoneme distinctions universal across languages?
No, phoneme distinctions are language-specific (e.g., Japanese does not distinguish between 'r' and 'l').
229
What happens to infants' ability to distinguish phonemes as they age?
They lose the ability to perceive phonemes that are not used in their native language, usually by 10 months.
230
How many different phonemes are estimated to exist across the world’s languages?
More than 850.
231
How many phonemes does the English language use?
44 phonemes.
232
Where does phonetic discrimination occur in the brain?
In the belt region surrounding the primary auditory cortex.
233
Where are phonemes assembled into words and meaning extracted?
In Wernicke’s area.
234
Where is Wernicke’s area located?
In the posterior part of the superior temporal gyrus of the left temporal lobe.
235
What is Wernicke’s aphasia?
A type of fluent aphasia where speech is grammatically correct but lacks meaning, and comprehension is impaired.
236
Do patients with Wernicke’s aphasia know they are not making sense?
No, they are often unaware of their deficits.
237
Where is Broca’s area located?
In the left frontal operculum.
238
What is Broca’s aphasia?
A nonfluent aphasia where patients understand language but struggle to produce speech.
239
Are people with Broca’s aphasia aware of their condition?
Yes, they are typically very conscious of their language production difficulties.
240
What is tone chroma?
A sound quality shared by tones that have an interval of one octave; notes with the same chroma share the same pitch class across octaves.
241
What is tone height?
The perceived pitch of a sound based on its frequency; higher frequencies correspond to higher tone heights.
242
What is the frequency ratio of an octave?
2:1 (the higher note has twice the frequency of the lower note).
243
What is the fundamental frequency of middle C (C4)?
261.6 Hz
244
How many notes are in an octave in Western music?
13 notes separated by 12 semi-tone intervals.
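The octave and middle-C values above can be tied together with the standard equal-temperament formula, in which the 12 semitone steps divide the 2:1 octave into equal frequency ratios (the function name is mine):

```python
C4 = 261.6  # Hz, fundamental frequency of middle C (card 243)

def semitone_above_c4(n):
    """Frequency n semitones above middle C in equal temperament:
    each of the 12 steps multiplies the frequency by 2**(1/12)."""
    return C4 * 2 ** (n / 12)

octave_notes = [round(semitone_above_c4(n), 1) for n in range(13)]  # 13 notes, C4..C5
print(octave_notes[0], octave_notes[-1])  # 261.6 523.2 - exactly one octave apart
```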
245
What is consonance in music?
A combination of notes that sound pleasant due to simple frequency ratios, like 3:2 or 4:3.
246
What is dissonance in music?
A combination of notes that sound unpleasant due to complex frequency ratios, like 42:33.
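The ratio examples from the two cards above can be reduced to lowest terms with Python's fractions module, a minimal sketch of why some intervals count as simple and others as complex:

```python
from fractions import Fraction

def interval_ratio(f_high, f_low):
    """Frequency ratio of two notes reduced to lowest terms; simple ratios
    (3:2, 4:3) sound consonant, complex ones dissonant (cards 245-246)."""
    return Fraction(f_high, f_low)

print(interval_ratio(300, 200))  # 3/2 - a perfect fifth, consonant
print(interval_ratio(42, 33))    # 14/11 - the complex, dissonant ratio of card 246
```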
247
When does the auditory system perceive notes as coming from the same source?
When many harmonics coincide, as in consonance.
248
What evidence suggests that preference for consonance may be innate?
Infants as young as two months prefer consonant chords and intervals.
249
What is a musical scale?
A subset of notes within an octave that sound well together.
250
What is the interval pattern for a major scale?
2–2–1–2–2–2–1
251
What is the interval pattern for a minor scale?
2–1–2–2–1–2–2
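The two interval patterns above can be turned into semitone positions with a short sketch (the function name is mine). Both patterns sum to 12, so each scale ends exactly one octave above its starting note:

```python
MAJOR = [2, 2, 1, 2, 2, 2, 1]  # semitone steps of a major scale (card 250)
MINOR = [2, 1, 2, 2, 1, 2, 2]  # semitone steps of a minor scale (card 251)

def scale_degrees(pattern, start=0):
    """Cumulative semitone positions of the notes produced by an interval pattern."""
    degrees = [start]
    for step in pattern:
        degrees.append(degrees[-1] + step)
    return degrees

print(scale_degrees(MAJOR))  # [0, 2, 4, 5, 7, 9, 11, 12] - ends one octave up
print(scale_degrees(MINOR))  # [0, 2, 3, 5, 7, 8, 10, 12]
```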
252
How do major and minor scales differ in emotional quality?
Major scales sound 'happy', minor scales sound 'sad'.
253
What is a musical key?
The scale that serves as the basis for a musical composition.
254
What is the tonic in music?
The root note of a key; it acts as a gravitational center for a musical piece.
255
Why do notes in-key sound more pleasant?
They match the notes present in the original chord or scale, fulfilling musical expectations.
256
What defines a melody?
A melody is defined as a sequence of notes or chords perceived as a single coherent structure.
257
What aspect of a melody allows it to be recognized even when played in different octaves or keys?
The contour, or the pattern of rises and declines in pitch.
258
Where are musical contours primarily processed in the brain?
In the right auditory cortex, specifically in the belt and parabelt regions.
259
What is congenital amusia?
A lifelong musical disability characterized by difficulty detecting pitch deviations and recognizing out-of-key notes, not due to intellectual disability or brain damage.
260
What is the ERAN response and what does it indicate?
ERAN (early right anterior negativity) is a negative ERP occurring 200 ms after detecting a melodic tonal violation. It indicates basic perceptual detection of tonal anomalies.
261
What is the P600 response and how does it relate to congenital amusia?
P600 is a positive ERP that occurs 600 ms after a tonal violation and reflects conscious awareness. Congenital amusics typically lack this response despite detecting the violation.
262
What does the lack of a P600 response in congenital amusics suggest?
It suggests that their brains detect tonal violations, but they are not consciously aware of them—'in-tune' but 'unaware'.