hearing 2 Flashcards

(51 cards)

1
Q

Sensorineural hearing loss and causes

A

The most common and most serious form of hearing loss
* Due to damage to the hair cells or the auditory nerve
* Mammalian hair cells don't regenerate → damage is permanent
Causes:
* Ingestion of ototoxic drugs, e.g., antibiotics such as neomycin (destroys stereocilia!), some cancer drugs
* Traumatic injury, e.g., fracture of the temporal bone
* Tumors, especially of the cochlear nerve (e.g., acoustic neuromas) - inflammation prevents transmission of action potentials
* Diseases, e.g., rubella
* Most common! Exposure to excessive environmental noise (noise-induced hearing loss, NIHL)
  - Injures hair cells and damages the transduction mechanisms
  - Degree depends on the intensity and duration of exposure
  - Sounds > 80 dB are potentially dangerous; those over 120 dB are considered painful (sports games, concerts, earbuds for many hours)
Ex: people who lived in a small, quiet town and moved to a big city; after 5 years their hearing was much worse than that of those who stayed.

2
Q

Hereditary Hearing loss Factors, and Aging

A

Mutations in >150 different genes cause various types of hereditary hearing loss
Examples: Usher syndrome (hearing loss with progressive vision loss) and Waardenburg syndrome (hearing loss that can occur with differently colored eyes)

3
Q

Presbycusis and causes

A

Hearing loss that occurs gradually due to the effects of aging
* >2,000 auditory nerve cells die each decade! (starting from ~35,000)
* Loss initially affects high-frequency sounds and progresses to the point where ordinary conversation becomes difficult to hear
* Typically worse for men

Causes:
* Sensorineural hearing loss due to loss of inner hair cells (IHCs)
* Conductive loss due to abnormalities of ossicular function (calcification of the ossicles)

4
Q

diagnosis of hearing condition

A

Otolaryngologists (ear-nose-throat doctors, ENTs): clinical diagnosis of auditory disorders

Audiologists: evaluate hearing function
* Can use a bone conduction test (Rinne test): apply a vibrating tuning fork to the skull behind the ear and compare with the sound heard through the air

Normal: sound is louder in air than on bone (the outer and middle ear amplify airborne sound)

Conductive hearing loss: sound is louder when the fork touches bone (why? - the vibrations bypass the canal and ossicles and travel directly through bone to the cochlea)

Sensorineural hearing loss: sound isn't heard at all (if the hair cells/cochlear nerve don't work, the person won't hear the sound regardless of how it is delivered)

5
Q

Hearing aids and their components and problem

A

Battery-operated miniature amplifiers that can fit into the auditory canal or can be placed behind the ear

Components:
Small microphone to collect/detect the sound

An electronic amplifier (louder)
Small speaker to deliver/project the sound to the ear
Most effective if cochlear function is not impaired

Problem: we don't want to amplify ALL sounds - people with hearing loss can often still hear low-frequency sounds, and we don't want painfully loud output (e.g., amplified background noise)
* When all sounds are amplified, the differences between sounds become hard to detect

6
Q

Psychoacoustics

A

The branch of psychophysics that studies the psychological correlates of the physical dimensions of acoustics in order to understand how the auditory system operates.

  • The perceptual experiences of sound intensity and frequency are referred to as loudness and pitch, respectively
7
Q

Special features of hearing:

A
  • Can locate sounds in the environment
  • Can identify sounds among noise
8
Q

Absolute sensitivity

A

Knowing this helps us understand how sound interacts with auditory structures, and how sounds of different frequencies are perceived

  • Useful for creating hearing standards for clinical comparison
9
Q

Audibility threshold and 2 perceptual tests for it

A

The lowest sound pressure level that can reliably be detected at a given frequency

Perceptual tests:
* Minimum audible field (MAF) threshold: the person sits in an enclosed room with a speaker in front of them; start at low intensity and slowly increase it
  - Advantages: naturalistic, like everyday listening
  - Disadvantages: moving the head can affect the threshold; sound can be reflected/absorbed by the room and walls
* Minimum audible pressure (MAP) threshold: sounds presented through headphones
  - Advantages: precise control of the sound pressure that reaches the ear
  - Disadvantages: less naturalistic
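The "start low and slowly increase intensity" procedure above is essentially an ascending method of limits. A toy simulation (the listener, threshold, and step values here are made up for illustration, not the actual clinical protocol):

```python
# Minimal sketch of an ascending threshold search: raise the level in fixed
# steps until a simulated listener (with an assumed true threshold) detects
# the tone. The returned value is the first level at or above threshold.
def find_threshold(true_threshold_db, start_db=-10, step_db=2):
    level = start_db
    while level < true_threshold_db:  # simulated listener detects at/above threshold
        level += step_db
    return level

print(find_threshold(true_threshold_db=7))  # 8: first tested level >= 7 dB
```

Real procedures interleave ascending and descending runs and average the reversal points to reduce bias, but the core idea is this loop.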

10
Q

Absolute threshold depends strongly on

A

Frequency: the plot shows different psychometric functions for sounds of four different frequencies. Typically measured with pure tones.

The ear is most sensitive to the 1500 Hz tone in the plot (we can detect it at the lowest sound intensity)
Audibility threshold for the 100 Hz tone: ~30 dB SPL

11
Q

The audibility curve (minimal audibility curve, MAC): and terminal threshold

A

If we plot the four different sound levels on a graph of sound intensity vs. frequency:
* Lowest detection thresholds are between 2000 and 6000 Hz
* These frequencies are enhanced by the physical (resonance) properties of the ear canal
* Thresholds increase for frequencies above and below this middle range

Terminal threshold: the upper limit of auditory function

  • The area between the MAC and the terminal threshold is the dynamic range of human hearing
12
Q

What are ultrasonic and infrasonic sounds?

A

Infrasonic: below the audible frequency range (lower than ~20 Hz), i.e., low-frequency sound. (Ex: some animals can detect thunderstorms/earthquakes)

Ultrasonic: above the audible frequency range (higher than 20,000 Hz)

13
Q

Why do we “feel” low frequency sounds (e.g., bass)?

A

Low-frequency sound waves can be very long (metres long)

The body is physically within one long sound wave (a period of compression), so the pressure changes are felt as well as heard

14
Q

Equal loudness curves are obtained by:

A

Asking listeners to equate the loudness of sounds of different frequencies (the reference is always a 1000 Hz tone)

How loud does a 200 Hz tone need to be to sound as loud as a 1 kHz, 40 dB tone? Roughly 55 dB SPL

What do the two orange tick marks mean? Those sounds are equally loud (900 Hz at 60 dB sounds the same as 200 Hz at 40 dB)

What do the purple tick marks mean? On different curves: same sound intensity, but perceived at different loudness

The curves demonstrate that sound pressure level ≠ loudness
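The dB SPL values used throughout these cards follow the standard definition, 20·log10(p/p0) with reference pressure p0 = 20 μPa (roughly the threshold of hearing at 1 kHz). A quick arithmetic sketch:

```python
import math

# Sound pressure level in dB SPL relative to the standard 20 micropascal
# reference. Every 10x increase in pressure adds 20 dB.
P_REF = 20e-6  # Pa

def db_spl(pressure_pa):
    return 20 * math.log10(pressure_pa / P_REF)

print(round(db_spl(20e-6)))  # 0: the reference pressure itself
print(round(db_spl(2e-3)))   # 40: a 100x pressure increase = +40 dB
print(round(db_spl(2e-1)))   # 80: each further 100x adds another 40 dB
```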

15
Q

Experimental factors that affect absolute sensitivity

A
  1. Method of data collection
    MAF thresholds < MAP thresholds
    Accounted for by the resonance properties of the outer & middle ear
  2. Monaural or binaural sound
    Monaural stimulation → thresholds are higher by ~6 dB (binaural summation lowers the binaural threshold)
  3. Masking
    The presence of white noise or a small band of noise (the masker) around the frequency of interest increases the threshold
16
Q

Harmonic sounds

A

are the most common in our environment

17
Q

Harmonic spectrum

A

The spectrum of a complex sound in which energy occurs at integer multiples of the fundamental frequency
* The lowest frequency of a harmonic spectrum is the fundamental frequency
* For harmonic complexes, the perceived pitch is determined by the fundamental frequency
* The harmonics (overtones) add to the perceived richness of the sound
Ex: a speaker produces a vowel sound with a fundamental frequency of 250 Hz. Her vocal cords produce the greatest energy at 250 Hz, less energy at 500 Hz (2nd harmonic), even less at 750 Hz (3rd harmonic), and so on

18
Q

The Missing Fundamental Effect

A
  • Play an entire harmonic series → subjects hear a single pitch corresponding to the fundamental frequency
  • When the fundamental frequency is removed, the pitch listeners hear still corresponds to the fundamental frequency!

How can we hear a sound when there is no stimulation at the corresponding region of the basilar membrane?

Frequency theory:
* All harmonics of a fundamental share fluctuations in sound pressure at regular intervals corresponding to the fundamental frequency
* Even when the fundamental is missing, the combined waveform still fluctuates at the fundamental's rate. Example: if harmonics at 200 Hz, 300 Hz, 400 Hz, etc. are played, the combined waveform still repeats at 100 Hz intervals

Phase-locking & neural firing:
* Neurons in the cochlear nerve and cochlear nucleus fire action potentials in sync with these fluctuations - once per cycle of the missing fundamental
* Even though the basilar membrane is not directly stimulated at 100 Hz, the timing of the neural signals encodes the missing fundamental

Higher brain processing: the auditory cortex interprets this firing pattern and reconstructs the missing pitch. This is why listeners still hear the fundamental frequency even though it is not physically present
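The "common periodicity" claim above can be checked numerically: sum equal-amplitude harmonics of an absent 100 Hz fundamental and verify that the combined waveform repeats every 1/100 s. (A waveform-algebra sketch only - the real mechanism is neural phase-locking.)

```python
import math

def complex_tone(t, harmonics=(200, 300, 400, 500)):
    """Sum of equal-amplitude sine harmonics; the 100 Hz fundamental is absent."""
    return sum(math.sin(2 * math.pi * f * t) for f in harmonics)

period = 1 / 100  # period of the missing 100 Hz fundamental
samples = [i / 44100 for i in range(441)]  # one fundamental period at 44.1 kHz
shifted_ok = all(abs(complex_tone(t) - complex_tone(t + period)) < 1e-6
                 for t in samples)
print(shifted_ok)  # True: the waveform fluctuates at the missing fundamental's rate
```

Any set of integer multiples of 100 Hz passes this check, because each component completes a whole number of cycles in 0.01 s.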

19
Q

Timbre

A

The perceptual quality that allows us to distinguish two musical instruments, even when they're playing the same note at the same intensity
* Everything other than loudness and pitch
* Differences in timbre are accounted for by differences in frequency spectra, i.e., the relative strength of the harmonics in the spectrum
(different musical instruments can play the same note - same fundamental frequency, possibly the same intensity - yet we can still tell them apart)

20
Q

Attack and decay:

A

An important quality of a complex sound is the way it begins and ends - how it changes over time.

  • Attack: the way a sound begins (onset). E.g., how quickly does it reach maximum intensity?
  • Decay: the way a sound ends (offset). Depends on how long it takes the vibrating object creating the sound to dissipate its energy and stop moving.

Ex: violin = slower attack and slower decay
(On such plots, x-axis: time; y-axis: amplitude)

21
Q

Frequency spectra

A

accurately represent tone/sounds that don’t vary over time.

22
Q

Spectrogram

A

The visual representation of a sound as it varies with time. Spectrograms are used extensively in music, linguistics, and speech processing

  • For sounds that change over time, we plot frequency on the y-axis, time on the x-axis, and amplitude as colour on the graph

Colour = intensity (red is more intense, blue is less)
Bands of acoustic energy on the spectrogram are called formants

**It's not just the timbre (energy at each harmonic) that allows us to differentiate between musical instruments - it's also the attack and decay

23
Q

Auditory Localization/Interaural time difference typical result:

A

The person sitting in the middle is best at locating sounds at +90 or -90 degrees (directly to the left or right); sometimes when someone claps directly in front of them, they point to the person behind them (a front-back confusion).

24
Q

When sound arrives at your ears at different times it allows

A

you to locate the sound in space

25
Acoustic cues for localization:
The acoustic signal itself contains no spatial info; spatial info comes only from having 2 ears at different locations.
* Solution: compare the intensity and timing of the incoming sound at the 2 laterally displaced ears
26
How do we describe the physical location of a sound?
* Azimuth: horizontal coordinate; directly in front of you is 0°, directly behind you is 180°
* Elevation: vertical coordinate; directly in front of you is 0°, directly above you is 90°
27
Diotic stimulation vs Dichotic stimulation:
Diotic stimulation: no difference in the sound reaching the 2 ears - it arrives at the same time and with the same intensity because the source is directly on the midsagittal plane. The physical properties of the sound are identical at both ears.
Dichotic stimulation: the sound source is off the midsagittal plane. The sound differs at the 2 ears; e.g., intensity is greater at the right ear and the sound reaches the left ear later.
28
How do you discriminate whether a sound is directly in front of or directly behind you?
You turn your head. We use binaural cues to localize sound sources, and we can only use these cues when the stimulation is dichotic.
29
Interaural Time difference, ITD:
The difference in the time it takes a sound to reach one ear compared to the other
* 0° (directly in front): zero time difference
* Time differences grow as the source moves to the side and are maximal at +90°/-90° (roughly 600 μs for a source directly to one side)
* Relevant for dichotic sounds only
* Less dependent on frequency than the interaural level difference
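One standard approximation for how the ITD grows with azimuth is Woodworth's formula, ITD = (r/c)(sin θ + θ). The head radius and speed of sound below are assumed typical values, not figures from the slides:

```python
import math

HEAD_RADIUS_M = 0.0875   # assumed average adult head radius
SPEED_OF_SOUND = 343.0   # m/s in air at ~20 °C

def itd_microseconds(azimuth_deg):
    """Woodworth's approximation of the interaural time difference."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (math.sin(theta) + theta) * 1e6

print(round(itd_microseconds(0)))   # 0: no delay for a source straight ahead
print(round(itd_microseconds(90)))  # 656: near-maximal delay for a source at the side
```

The ~650 μs maximum is why ITDs on the order of hundreds of microseconds (like the -480 μs example in the cone-of-confusion card) are physically plausible.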
30
Auditory Space Map:
The auditory nervous system uses ILDs and ITDs to create an internal representation (a map) of where sounds are located in space
31
Cones of confusion:
Points in space where we can't tell whether a sound is in front of or behind us (e.g., where the interaural time differences are the same)
* Example: an ITD of -480 μs can arise from a sound at -60° (300°) or at -120° (240°)
* If we also consider elevation, there are even more such points - they fall on a cone of confusion (infinitely many of them)
* Solution? Move your head! Only one spatial location is consistent with the ITDs and ILDs perceived both before and after the head movement
32
Where does binaural integration happen?
Above the cochlear nucleus (which is monaural, so it cannot compare the sounds arriving at the 2 ears)
Superior olive (SO): the primary site where binaural differences are coded in a systematic way, in the lower brainstem (the pons)
* First place in the auditory system where info from the 2 ears converges
* Contains medial (MSO) and lateral (LSO) divisions
33
How does the MSO compute sound location using ITDs?
Jeffress (1948) cross-correlation model:
* There is a slight time difference between the ears in terms of when a travelling wave on the basilar membrane reaches a specific frequency
* MSO neurons have bipolar dendrites that receive inputs medially (from one ear) and laterally (from the other)
* The axons arriving from each ear vary in length, creating "delay lines"
* The MSO neurons act as coincidence detectors: each is maximally excited when the inputs from the two ears arrive at the same time
* Ex: neurons A, B, C, D, etc. are bipolar neurons in the pons, each with a lateral projection from one ear and a medial projection from the other. If a sound is presented at 250° azimuth (toward the left ear), the signal arrives at neuron A quickly, before the right-ear signal has even arrived, so neuron A receives no coincident input. Because of the differences in axon length (delay lines), some other neuron in the array does receive the two inputs coincidently - and which neuron fires signals the sound's location
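The delay-line idea can be reduced to a toy model. This is an illustrative sketch only (the neuron count and delay values are made up; real MSO circuitry is far more complex): each detector pairs a left-ear axonal delay with a complementary right-ear delay, and the detector whose delay difference cancels the ITD receives both inputs simultaneously.

```python
def best_neuron(itd_us, n_neurons=9, step_us=160.0):
    """Index of the coincidence detector best matched to a given ITD.

    Detector i delays the left-ear input by i*step_us and the right-ear
    input by (n_neurons-1-i)*step_us; it fires maximally when that delay
    difference equals the interaural arrival lag (itd_us > 0 means the
    sound reached the left ear first).
    """
    def mismatch(i):
        left_delay = i * step_us
        right_delay = (n_neurons - 1 - i) * step_us
        # Coincidence: left arrival + left_delay == right arrival + right_delay
        return abs((left_delay - right_delay) - itd_us)
    return min(range(n_neurons), key=mismatch)

print(best_neuron(0))    # 4: the middle detector, for a sound straight ahead
print(best_neuron(640))  # 6: a detector toward one end, for a lateral sound
```

The mapping from "which detector fires" to azimuth is exactly the place-code idea of the Jeffress model.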
34
ILD arises for 2 reasons
Sounds are more intense at the ear closer to the sound source, for 2 reasons:
1. Sound pressure decreases with distance from the source
2. Head shadow effect: the head reflects incoming sounds above ~1000 Hz, so they don't pass around to the far ear. Sounds below ~1000 Hz have wavelengths long enough to wrap around the head, so they are not reflected
This is why ILDs are large for high frequencies and nearly absent for low frequencies (no sound shadow), as seen in the figures.
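The ~1000 Hz cutoff follows from wavelength arithmetic (λ = c/f): the head only shadows sounds whose wavelength is comparable to or smaller than the head itself. The head diameter below is an assumed typical value:

```python
SPEED_OF_SOUND = 343.0  # m/s in air
HEAD_DIAMETER = 0.175   # m, assumed average adult head width

def wavelength_m(freq_hz):
    """Wavelength of a sound in air."""
    return SPEED_OF_SOUND / freq_hz

print(round(wavelength_m(100), 2))   # 3.43 m - wraps around the head easily
print(round(wavelength_m(1000), 2))  # 0.34 m - comparable to head size
print(round(wavelength_m(6000), 3))  # 0.057 m - much smaller, strongly shadowed
```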
35
Auditory distance perception:
How listeners know how far away a sound is
36
1. Relative intensity
* Closer sounds are louder (given identical sound sources)
* Problem: sound sources differ - e.g., a quiet, near tone and a loud, distant tone might sound the same
* The effectiveness of relative intensity decreases with distance, following the inverse square law: the difference in intensity between two positions is bigger for closer sounds
* Intensity cues work best when the listener or the sound source is moving: the intensity of nearby sources changes a lot with movement, while distant sources change little
* We are not good at judging large distances - we tend to underestimate them, hearing the source as closer than it is
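The inverse square law behind this cue is easy to check with a little arithmetic: intensity falls as 1/distance², i.e., about -6 dB per doubling of distance, so the same one-metre step produces a big change near the source and almost none far away.

```python
import math

def level_change_db(d1, d2):
    """dB change in intensity when moving from distance d1 to d2 (same source)."""
    return 10 * math.log10((d1 / d2) ** 2)

print(round(level_change_db(1, 2), 1))      # -6.0 dB: 1 m -> 2 m, easy to notice
print(round(level_change_db(100, 101), 2))  # -0.09 dB: the same 1 m step, far away
```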
37
2. Spectral composition
The sound-absorbing qualities of air dampen high frequencies more than low frequencies, so for faraway sounds the high frequencies lose more energy than the low frequencies
* Noticeable only over large distances (i.e., >1000 m)
Ex: thunder - up close we hear both the high- and low-frequency components (a crackle); from far away we hear only a low boom (the air has absorbed the high frequencies)
38
3.Relative amounts of direct vs. reverberant energy
When a sound source is near, most of the energy reaching the ear comes directly from the source. Sound waves also reflect off walls (reverberant energy), which takes time to dissipate; the farther the source, the larger the proportion of reverberant relative to direct energy.
39
Spatial hearing when blind
Severe loss of vision can result in improved localization of sounds in space
* Evidence suggests that regions of occipital (visual) cortex are recruited to process auditory input when visual inputs are no longer available (see fMRI images)
* Some blind individuals can learn to echolocate: they make clicks with their mouths and use the returning echoes to sense obstacles and objects in their environment (they can even tell the size/shape of objects)
* Experiment: people were presented with 2 sounds - one containing both direct energy and echoes, and one with just the sound and no echoes. Results showed neuroplasticity: increased activity in visual rather than auditory brain areas
40
Auditory scene analysis (source segregation)/ cocktail party effect:
The distinction of auditory events or objects within the broader auditory environment
* Sounds perceived as coming from the same source are part of the same "auditory stream"
* Often referred to as the cocktail party effect: our ability to follow one conversation despite the surrounding noise
41
How does the auditory system perform Auditory scene analysis?
Spatial segregation: sounds that emanate from the same location in space typically come from the same source
* Sounds that move can be easily separated from stationary sounds (ex: the prof talking over the humming of the AC)
Spectral segregation: sounds with similar pitches are treated as coming from the same source
* If the frequencies are similar we hear a single melody; if they are far apart we separate them into 2 auditory streams and hear one high and one low
42
Auditory stream segregation in music:
Interleaved sequences of low (blue) and high (red) notes are perceived as two melodies, one high and one low
43
Gestalt Principles:
How humans group similar elements, recognize patterns, and simplify complex percepts. The perceptual whole is greater than the sum of its parts (ex: you may perceive a face in your waffle if blueberries land in the right spots)
44
Grouping by timbre (Gestalt = similarity)
Timbre: the perceptual attribute relating to the quality of a sound. Sounds with similar timbres usually arise from the same source.
* When successive sounds share the same timbre, they are grouped by frequency...
* ...but when successive sounds have different timbres, they separate according to timbre.
45
Grouping by perceptual restoration (Gestalt = good continuation):
Listeners continue to hear intact sound (e.g., speech) in the presence of noise * Experience sound as continuing through the noise
46
Grouping by onset (Gestalt = common fate)
* Sound components that begin at the same time (e.g., such as the harmonics of a speech sound) will tend to be heard as coming from the same source
47
Grouping by repetition (Gestalt = proximity)
* Sound components that repeat over time (e.g., the siren of an ambulance) will tend to be heard as coming from the same source
48
Grouping by familiarity
* Listeners make use of experience and familiarity to separate different sound sources
49
Acoustic startle reflex:
the very rapid motor response to a sudden, loud sound. * Humans and non-human animals * Muscle twitches follow the sound by as little as 10 ms (milliseconds)! * Auditory nerve → cochlear nucleus → caudal pontine reticular nucleus (PnC) → motor neurons! * Being afraid increases the acoustic startle response * Indirect projections from the amygdala to the acoustic startle pathway in the brainstem * Unselective (almost any sound will do)
50
Auditory Attention
When we are attending to a sound, we are being intentionally selective. If we know we will be tested on it, we pay more attention.
51
Inattentional deafness:
the failure to notice fully audible, but unexpected sound because attention was engaged on a different auditory stream. Represents an extreme example of auditory scene analysis Note: listeners are less accurate at understanding what they heard when switching between streams (2 streams can’t be processed simultaneously) cant listen to 2 convos at same time, can pick out parts of both and use context cues