lecture 6- hearing and language Flashcards
(16 cards)
What is sound
1) It is a vibration and the sound source emits pressure waves in the air
2) It travels in waves
Our ears pick up the waves and turn them into electrical signals, which the brain interprets
Waves in music
Keys on a piano are arranged in rising frequency of the musical tone generated.
Harmonic intervals are determined by characteristic frequency ratios.
The frequency distribution is not linear; each note's frequency is a fixed ratio of the previous one's.
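The geometric spacing can be sketched in code, assuming twelve-tone equal temperament with A4 = 440 Hz (standard concert pitch); each semitone step multiplies the frequency by 2^(1/12):

```python
# Sketch of equal-tempered note frequencies (assumption: A4 = 440 Hz).
# Each semitone multiplies the frequency by 2**(1/12), so notes form
# a geometric sequence of ratios, not a linear scale.
def note_freq(semitones_from_a4: int, a4: float = 440.0) -> float:
    return a4 * 2 ** (semitones_from_a4 / 12)

print(note_freq(12))   # one octave up: 880.0 Hz (ratio 2:1)
print(note_freq(-12))  # one octave down: 220.0 Hz
```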
Complex tones are the combination of several pure tones
- Harmonic frequencies: determine the timbre
- Fundamental frequency: determines pitch
The frequency spectrum consists of the number of harmonics, the frequency ratios of the harmonics, and their relative amplitudes; we perceive this as timbre
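As a minimal sketch (the amplitude values are illustrative, not from the lecture), a complex tone can be built by summing harmonics of a fundamental:

```python
import numpy as np

# Illustrative sketch: a complex tone as a sum of harmonics.
# The fundamental sets the pitch; the relative harmonic amplitudes
# (the spectrum) shape the timbre. All numbers are assumptions.
sr = 8000                      # sample rate in Hz
t = np.arange(sr) / sr         # one second of samples
f0 = 220.0                     # fundamental frequency (pitch)
amps = [1.0, 0.5, 0.25]        # amplitudes of harmonics 1..3 (timbre)

tone = sum(a * np.sin(2 * np.pi * (k + 1) * f0 * t)
           for k, a in enumerate(amps))
```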
3 types of noise
1) White noise- contains all frequencies across the spectrum at equal power
2) Pink noise- power decreases with each higher octave, giving a deeper sound
3) Brown noise- deep-pitched; power decreases with frequency twice as fast as pink noise
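A rough numerical sketch of the three noise colours (the spectral-shaping approach here is a standard approximation, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2 ** 14
white = rng.standard_normal(n)        # flat power across all frequencies

# Pink noise: scale the spectrum so power falls 3 dB per octave,
# i.e. amplitude proportional to 1/sqrt(f).
spectrum = np.fft.rfft(white)
freqs = np.fft.rfftfreq(n)
scale = np.ones_like(freqs)
scale[1:] = 1.0 / np.sqrt(freqs[1:])
pink = np.fft.irfft(spectrum * scale, n)

# Brown noise: integrate white noise; power falls 6 dB per octave,
# twice the roll-off of pink noise, hence the deeper sound.
brown = np.cumsum(white)
```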
Explain how active noise cancellation works
Headphones have a built-in microphone which captures external noise. The headphones then generate an anti-noise signal, the exact inverse of the incoming sound wave, and the two cancel each other out.
- Works best for low-frequency sounds (traffic noise, humming)
- Less effective for sudden, unpredictable sounds
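The cancellation principle can be sketched with an idealised phase inversion; a real system must estimate the anti-noise in real time, which is why predictable low-frequency sounds work best:

```python
import numpy as np

# Toy model: anti-noise is the exact phase-inverted copy of the
# measured noise, so the two waves sum to zero at the ear.
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
noise = np.sin(2 * np.pi * 100.0 * t)   # 100 Hz hum (assumed)
anti_noise = -noise                     # idealised inversion
residual = noise + anti_noise           # silence in this toy model
```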
How do we hear
Outer ear- the microphone
Middle ear- amplifies sound and provides overload protection
Inner ear- frequency analysis; sends signals to the brain
What is frequency masking
When a louder sound at a certain frequency makes it hard to hear a quieter sound at a nearby frequency as they are occupying the same frequency range. For example, you can only hear the piccolo if the bassoon is played very softly.
Explain the psychophysical masking experiment
Play a target tone in the presence of a masking tone at another frequency. The intensity of the masking tone is increased and decreased, and the participant is asked whether they can hear the target tone. Systematically varying the masker frequency determines a tuning curve.
- How loud does a tone need to be to mask another
- If this is done successfully, a threshold can be identified.
This creates a tuning curve: the minimum amplitude needed to detect a sound at different frequencies
Bandwidth is the range of frequencies over which a masker makes it harder to hear other sounds; maskers close in frequency have a stronger effect
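The threshold-finding step can be sketched as a simple up-down staircase; the idealised "listener" and all level values here are assumptions, not the lecture's actual protocol:

```python
# Hypothetical staircase: lower the target level after each "heard"
# response, raise it after each "not heard", until the level
# oscillates around the masked threshold.
def staircase(true_threshold_db: float, start_db: float = 80.0,
              step_db: float = 2.0, trials: int = 200) -> float:
    level = start_db
    for _ in range(trials):
        heard = level > true_threshold_db   # idealised listener model
        level += -step_db if heard else step_db
    return level

estimate = staircase(true_threshold_db=45.0)  # converges near 45 dB
```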
How to measure perceived loudness
Physical sound intensity is measured in decibels dB and perceived loudness depends on how our ears and brain interpret the sound.
Ears do not hear all frequencies equally.
- We are more sensitive to mid-range frequencies, where speech occurs
To measure perceived loudness, compare successively presented tones of different frequencies and decide which are louder.
The physical intensity (SPL, sound pressure level) needed for equal perceived loudness is recorded
Through this, we obtain equal-loudness contours, which show how loud different frequencies must be to be perceived as equally loud
The lowest line is the threshold contour, the quietest sounds we can hear and the basis for the audibility function
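The decibel scale behind these contours is logarithmic; a minimal sketch of the SPL definition, using the standard reference pressure of 20 µPa (roughly the threshold of hearing at 1 kHz):

```python
import math

P0 = 20e-6  # reference pressure in pascals (threshold of hearing)

def spl_db(pressure_pa: float) -> float:
    # Sound pressure level: 20 * log10(p / p0)
    return 20.0 * math.log10(pressure_pa / P0)

# A tenfold increase in pressure adds 20 dB:
# spl_db(20e-6) -> 0 dB, spl_db(200e-6) -> 20 dB
```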
What is a clinical audiogram
Measures how well we hear. Tones are played at different frequencies and decibel levels to find the quietest level the participant can still detect at each frequency.
The spectrum of human speech
Speech sounds cover a wide range of the audible spectrum.
- Vowel sounds are mainly in the lower frequency region
- Consonants cover almost the entire range
- Telephone systems used to cut off the upper part of the frequency spectrum with minimal effect on speech recognition
- Forms of hearing impairment:
○ Presbycusis: selective high-frequency hearing loss with age, typically progressive
○ Noise exposure can lead to temporary threshold shifts (auditory fatigue) or to permanent, partial deafness
○ Tinnitus: continuous humming or ringing
○ Ludwig van Beethoven was totally deaf at 45 and used touch and the feeling of vibrations to compose music
Sonograms and spectrograms
Auditory events presented as patterns:
- Time on the x-axis
- Frequency on the y-axis
- Intensity represented by colour
A chord sequence appears in a schematic spectrogram as a sequence of different fundamental and harmonic frequencies
Spoken word is recorded as a spectrogram and is more complex.
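A minimal numpy-only sketch of how such a spectrogram is computed (window and hop sizes are assumptions): slice the signal into overlapping windows and take the power spectrum of each.

```python
import numpy as np

def spectrogram(sig: np.ndarray, nperseg: int = 256, hop: int = 128):
    """Rows are frequency bins, columns are time frames (power)."""
    window = np.hanning(nperseg)
    frames = [sig[i:i + nperseg] * window
              for i in range(0, len(sig) - nperseg + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1)).T ** 2

# Test signal with a rising pitch, so energy climbs the frequency
# axis over time when the result is plotted as colour.
fs = 8000
t = np.arange(fs) / fs
sxx = spectrogram(np.sin(2 * np.pi * (200 + 300 * t) * t))
```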
building blocks of language
To extract meaning from sound.
The aim is to segment a complex task into processing steps which we can understand and explain as cognitive mechanisms
Broca’s area is used in speech production and Wernicke’s area in speech comprehension
Sound localisation
There is no direct representation of auditory space; location must be calculated from a number of cues
1) Pinnae: crucial for sensation of space, locating elevation
2) Inter-aural processing to find azimuth (left vs right)
3) Intensity differences
4) Temporal or phase differences
5) Auditory stereo, analogous to stereo vision
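The inter-aural temporal-difference cue from the list above can be sketched with a simple straight-path approximation; the ear separation and the neglect of head diffraction are assumptions of this toy model:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C
EAR_DISTANCE = 0.18     # metres between the ears (assumed)

def itd_seconds(azimuth_deg: float) -> float:
    # Path-length difference d*sin(azimuth) divided by the speed of
    # sound; a real head adds diffraction, so this underestimates
    # slightly at large angles.
    return EAR_DISTANCE * math.sin(math.radians(azimuth_deg)) / SPEED_OF_SOUND

# Straight ahead (0 deg) the difference is zero; a source directly to
# one side (90 deg) arrives about half a millisecond earlier at the
# nearer ear.
```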
The cocktail party effect
It is easy to single out a particular voice from the background of a noisy pub or a single instrument from a large orchestra
- This is remarkable because the mixture of wavefronts hitting the ear has an overwhelming complexity
- The detection of a tone is impaired if being masked by another tone, depending on proximity in space and similarity in frequency composition.
- Binaural unmasking: spatial distance and difference in frequency support separation
- High-level effects such as attention, language and familiarity of voice also help
early steps of language and speech
- Evolution of language in animals: many have advanced communication systems
- Learning to speak, read and write
- Bilingual language separation
- Human evolution of formal languages and communication tools