Week 7 - The Audition System Flashcards

1
Q

What are the two key reasons why sound is important?

A

-Communication
-To locate and identify objects

Both of these things are key to survival!

2
Q

What is the stimulus for audition?

A

-Acoustic energy (mechanical displacement of molecules in a medium caused by changing pressure)

-A moving object displaces the molecules around it, and these affect nearby molecules, creating a ripple effect

3
Q

Can sound travel in a vacuum?

A

-No, there must be some sort of medium (some array of molecules) that sound can disrupt in order to travel
-Sound can travel through gases (air), liquids and solids (audition is unique among the sensory modalities in this respect)

4
Q

What are sound waves?

A

Visual representations of acoustic energy

5
Q

What are the two key features of sound waves?

A

-Amplitude (displacement from baseline on the y axis; a large disturbance of air molecules = large amplitude)

-Frequency (the number of cycles per unit time; on the x axis the distance between crests is the period, and a shorter period means a higher frequency)

6
Q

What two perceptual features of sound do amplitude and frequency correspond to?

A

-Amplitude = loudness (measured in decibels)
-Frequency = pitch (measured in hertz); see the worked example below
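
A rough illustration of both units (a sketch I am adding, not from the lecture): loudness in decibels is computed on a logarithmic scale relative to the standard reference pressure of 20 µPa, and frequency in hertz is the reciprocal of the period.

```python
import math

def spl_db(pressure_pa, p_ref=20e-6):
    """Sound pressure level in dB SPL relative to 20 micropascals (approximate threshold of hearing)."""
    return 20 * math.log10(pressure_pa / p_ref)

def frequency_hz(period_s):
    """Frequency in hertz is the reciprocal of the period (time between crests)."""
    return 1.0 / period_s

print(spl_db(20e-6))        # 0 dB SPL (the reference pressure itself)
print(spl_db(0.02))         # ~60 dB SPL, roughly conversational speech
print(frequency_hz(0.001))  # a tone with a 1 ms period has a frequency of 1000 Hz
```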

7
Q

Human range for amplitude and frequency…

A

-We can perceive loudness across a huge range of amplitudes

-We can perceive pitch across a very specific range of frequencies (20-20,000 Hz)

8
Q

Does the acoustic range for humans remain static across the lifespan?

A

-No, changes for both amplitude (loudness) and frequency (pitch)

-For example, extremely high pitches (greater than 17,000 Hz) are usually only perceptible by people under 25. This led to the creation of the “sonic teenager deterrent” (a blaring tone at this frequency used to disperse crowds at parties)

-Diminished sensitivity to higher frequencies is considered a normal part of ageing

9
Q

What is an audiogram?

A

A graph of the absolute threshold of hearing across frequencies

In other words, the minimum intensity (in decibels) at which a tone of a given pitch can be presented to someone and still be perceived.

10
Q

Individual variation in hearing

A

-Sensitivity differs across frequencies (as shown in audiograms) not just among the elderly but also in younger individuals

Fletcher & Munson (1933) - equal loudness contours:
-Present pure tones of various intensities
-Start at 1000 Hz and increase/decrease the frequency
-Ask the participant to adjust a reference tone (always at the same frequency: 1000 Hz) until its loudness matches the test tone (Fechner’s method of adjustment; a toy simulation of this procedure follows below)
-The lowest equal-loudness contour represents the quietest audible tone (absolute threshold for hearing)
-The highest equal-loudness contour represents the pain threshold
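
A minimal sketch of this matching procedure. The listener model below is a made-up toy for illustration (it simply assumes sensitivity is best near a few kHz); it is not Fletcher & Munson's data.

```python
import math

def perceived_loudness(freq_hz, level_db):
    # Toy listener model (an assumption for illustration, not real data):
    # sensitivity is best near ~3.5 kHz, so the same dB SPL sounds louder there.
    sensitivity_loss_db = abs(math.log10(freq_hz / 3500.0)) * 15
    return level_db - sensitivity_loss_db

def method_of_adjustment(test_freq_hz, ref_freq_hz=1000, ref_level_db=40, step_db=1):
    """Lower the test tone's level until it no longer sounds as loud as the 1000 Hz reference."""
    target = perceived_loudness(ref_freq_hz, ref_level_db)
    level = 80.0  # start well above the expected match point
    while perceived_loudness(test_freq_hz, level - step_db) >= target:
        level -= step_db
    return level  # the equal-loudness level for this frequency

for f in (100, 1000, 4000, 10000):
    print(f"{f} Hz matches the 40 dB reference at ~{method_of_adjustment(f):.0f} dB")
# Low and very high frequencies need more dB to sound equally loud: one equal-loudness contour.
```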

11
Q

When does the process of hearing start?

A

-When acoustic energy (sound waves) reaches your outer ear: pinna or auricle

12
Q

Important features of the outer ear/ functions

A

-Shaped to collect sound and funnel it into the ear canal
-Ear canal amplifies sound
-Sound then reaches ear drum (aka tympanic membrane)

13
Q

What happens when acoustic energy reaches the tympanic membrane?

A

Vibrates:
-vibrations match frequency of incoming sound wave
-vibrations channeled through 3 ossicles (bones of the middle ear)

14
Q

What are the three ossicles: how are they connected?

A

-Malleus (hammer) is connected to the tympanic membrane: movement of membrane moves malleus

-The malleus articulates with the incus: movement of the malleus moves the incus

-The incus articulates with the stapes: movement of the incus moves the stapes

15
Q

What is the stapes connected to? What does movement of the stapes cause?

A

-The oval window of the cochlea
-The stapes strikes the oval window of the cochlea, sending vibrations through the fluid inside the cochlea

16
Q

Cochlea

A

-A snail-shaped, fluid-filled chamber in the inner ear
-If unfurled, it would be 34 mm long
-Two canals of the cochlea: the scala vestibuli and the scala tympani. These are separated by the cochlear duct.

17
Q

What is inside the cochlear duct?

A

-The organ of Corti
-This houses many hair cells that sit on the basilar membrane

18
Q

What causes the basilar membrane to bend, what also subsequently bends?

A

-Sound driven vibration travels as a wave along the basilar membrane, causing it to bend
-The hairs (cilia) on the hair cells then bend as well

19
Q

What happens when the cochlear hair cells bend?

A

-Cochlear hair cells are mechanoreceptors- when hairs (cilia) on the cells bend, the cells fire (signal transduction)

-They synapse onto spiral ganglion cells, whose axons are part of the cochlear nerve (aka auditory branch of CNVIII)

-Signals can then make their way to the brain

20
Q

Why can’t sound waves just act directly on the cochlea?

A

-Need many steps for amplification

-Sound waves travel through air to reach the ear; however, the cochlea is filled with fluid. Initiating pressure disturbances in liquid requires more energy, so we need to amplify the signal as much as possible before it reaches the cochlea!

21
Q

How is amplification achieved?

A

-The tympanic membrane is a lot bigger than the oval window (about 20x the surface area). Because the oval window is much smaller, the force collected over the tympanic membrane is concentrated onto a small area of the cochlea, giving the best chance of disturbing the fluid and setting off the chain of events leading to signal transduction.

-Additionally, the ossicles work to transfer the force applied to the tympanic membrane to the oval window (a rough calculation of the resulting gain follows below)
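
A back-of-the-envelope check: the 20x area ratio is from the card; expressing that gain in decibels is a standard conversion I am adding, not something stated in the lecture.

```python
import math

area_ratio = 20              # tympanic membrane area / oval window area (from the card)
pressure_gain = area_ratio   # the same force over 1/20th the area gives ~20x the pressure
gain_db = 20 * math.log10(pressure_gain)
print(f"~{pressure_gain}x pressure gain, i.e. about {gain_db:.0f} dB")  # ~26 dB
```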

22
Q

Acoustic reflex

A

-An involuntary contraction of muscles attached to the stapes and malleus

-Happens in response to intense sound stimuli (anywhere from 70-100 dB, depending on the individual)

-Muscle contractions serve to reduce movement of the ossicles so they can’t transmit force efficiently to the cochlea

-Decreases transmission of pressure to the cochlea (by 15-30 dB)

Note: this isn’t all that much, and it takes 10-100 ms to initiate the reflex, so it certainly doesn’t provide full protection against loud noise. It can’t protect against a sudden intense noise, e.g. a gunshot

23
Q

What needs to happen for amplification to occur?

A

-Air pressure needs to be equal either side of the tympanic membrane

-This happens because the eustachian tube allows outside air to pass from the throat to the middle ear

-This mechanism goes wrong if the eustachian tube gets blocked (infection, tumour), resulting in hearing loss (due to lack of amplification) and intense pain

24
Q

What happens if the tympanic membrane is damaged?

A

-Known as having a burst eardrum
-Hearing loss: no longer have equal pressure on both sides so can’t get amplification
-Pain

25
Q

How do we parse out different frequencies and amplitudes?

A

-Pressure waves in the cochlea cause the basilar membrane to vibrate

-Physical properties of the membrane make it more/less likely to vibrate maximally at certain locations

-The membrane is narrowest and stiffest at the base, widest and floppiest at the apex

-These properties mean the membrane responds best to high frequencies at the base and low frequencies at the apex

-Two terms describe this: ‘frequency-to-place conversion’ and ‘tonotopic mapping’ (see the sketch after this list)

-Hair cells active in the displaced region of the membrane send frequency-specific neural signals to the brain, resulting in pitch perception

-Additionally, a greater SPL (sound pressure level) causes a greater amplitude of membrane displacement, which causes more hair cells to fire more frequently, resulting in loudness perception
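
A rough numerical sketch of frequency-to-place conversion. The Greenwood function below is a standard textbook approximation of the human basilar membrane's place-frequency map; it is not from the flashcards, and the constants are the commonly quoted ones.

```python
def greenwood_hz(x, A=165.4, a=2.1, k=0.88):
    """Approximate best frequency at fractional distance x from the apex (0 = apex, 1 = base)."""
    return A * (10 ** (a * x) - k)

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"{x:.2f} of the way from apex to base -> ~{greenwood_hz(x):,.0f} Hz")
# The apex responds best to ~20 Hz and the base to ~20,000 Hz,
# matching the low-at-apex / high-at-base pattern described above.
```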

26
Q

What two coding schemes are involved in parsing out different amplitudes and frequencies of sound?

A

-Population coding = pitch perception
-Rate coding = stimulus intensity (loudness); see the sketch below
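
A toy illustration (something I am adding, not lecture code) of how one set of auditory nerve fibres can carry both codes: which fibres respond most marks the frequency (population/place code), while overall firing rate scales with intensity (rate code). The tuning function and all the numbers are made up for illustration.

```python
import math

cfs_hz = [250, 500, 1000, 2000, 4000, 8000]  # characteristic frequencies of six toy fibres

def firing_rate(cf_hz, tone_hz, level_db):
    # Gaussian-ish tuning on a log-frequency axis (an assumed shape, not physiology)
    tuning = math.exp(-(math.log2(tone_hz / cf_hz) ** 2) / 0.5)
    return tuning * max(level_db, 0)  # arbitrary spikes/s scale

for level in (40, 80):
    rates = {cf: round(firing_rate(cf, tone_hz=1000, level_db=level)) for cf in cfs_hz}
    print(f"{level} dB tone at 1000 Hz -> {rates}")
# The peak stays on the 1000 Hz fibre (population code for pitch),
# while every rate roughly doubles with the louder tone (rate code for loudness).
```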

27
Q

What are the 2 types of hair cells? What do they each synapse with?

A

-There are inner and outer hair cells
-Inner hair cells synapse with afferent fibres of CNVIII (to the brain)
-Outer hair cells receive input from efferent fibres of CNVIII (from the brain)

28
Q

Inner hair cells function

A

-Sensory receptors: they perform mechanoreception when the basilar membrane vibrates. They are required for signal transduction and therefore audition!

29
Q

Outer hair cells function

A

-Under low SPL conditions the brain sends signals via CNVIII to outer hair cells, making them contract

-Their hairs (cilia) are embedded in the tectorial membrane: when they contract, the tectorial and basilar membranes are pulled closer together

-This pulls the basilar membrane taut, which amplifies its vibration. It also bends the cilia on inner hair cells, which depolarizes them and makes them more likely to fire

-In this way, the outer hair cells act as a cochlear amplifier that boosts inner hair cell firing from low SPL signals

Note: this mechanism of pulling the basilar membrane stiffens it, so it responds best to high-frequency input. As a result, low frequency sound is not amplified

30
Q

After inner hair cells have synapsed with spiral ganglion cells whose axons are part of the auditory nerve (CNVIII) what happens?

A

-The nerve travels to the cochlear nuclei (in the medulla of the brainstem)

31
Q

Tonotopic mapping is maintained all the way

A

-Specific cells of the auditory nerve convey signals generated at specific sites along the basilar membrane (and therefore convey info about sound of specific frequencies i.e. A place code)

-A1 receives point-by-point input from the MGN that preserves tonotopic organisation. Different frequencies of sound are represented in cortical columns of A1 from anterior (low-frequency) to posterior (high-frequency).

32
Q

Frequency tuning curves

A

-Plots of the activity of specific nerve fibres in response to sounds of specific SPL/frequency

-Fibres are active across a range of frequencies, but exhibit their lowest thresholds at highly specific frequencies

33
Q

Ascending auditory pathway

A

-From ear via the auditory nerve (CN VIII) to the brain

-Cochlear nucleus in the medulla of the brain stem (bilateral: there are two, one on each side). Projections are mainly contralateral but some are ipsilateral to the…

-Superior olivary nucleus in the medulla of the brain stem. Projections are ipsilateral only to the…

-Inferior colliculus in the midbrain of the brain stem. Projections go ipsilaterally to the medial geniculate nucleus of the thalamus, and then to the…

-Primary auditory cortex (A1)

34
Q

Where is the first part of the pathway to receive information from both ears?

A

-The superior olivary nucleus, as it receives information from the contralateral cochlear nucleus but also some from the ipsilateral cochlear nucleus

-Important for sound localization (interaural time difference, interaural level difference)

35
Q

Interaural time difference

A

-Sound arrives at one ear first and at the other ear a fraction of a millisecond later (at most roughly 0.6-0.7 ms), provided the sound is not coming from directly in front, behind or above/below

-Neurons of the medial superior olive receive binaural input (both from ipsilateral and contralateral cochlear nucleus)

-Some cells within the MSO respond best when sound is coming from the left, others when it is coming from the right (this has to do with conduction distance: the further a group of cells is from the ear that the input is coming from, the longer the delay before that input arrives)

-Because of the combination of the different locations of groups of cells within the MSO (and hence their different conduction delays from each ear) and the delay between when each ear receives sound from a given source, coincident input can occur (input from the left and right ears arrives at a group of cells within the MSO at the same time!)

-Coincident input means that the group of cells in question is more likely to fire (two ‘depolarisation hits’ as opposed to one)

-When firing occurs, information can reach the brain, allowing for coincidence detection and subsequently the computation of the interaural time difference (a rough calculation of its size follows below).
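
A rough geometric sketch of how large the ITD can be. The head radius, speed of sound, and Woodworth's spherical-head formula are standard approximations I am adding; they are not values from the cards.

```python
import math

SPEED_OF_SOUND_M_S = 343.0   # approximate speed of sound in air
HEAD_RADIUS_M = 0.0875       # roughly half the ear-to-ear distance

def itd_seconds(azimuth_deg):
    """Woodworth's spherical-head approximation of the interaural time difference."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND_M_S) * (theta + math.sin(theta))

for az in (0, 30, 60, 90):
    print(f"{az:>2} deg off the midline -> ITD ~{itd_seconds(az) * 1e6:.0f} microseconds")
# Straight ahead gives no ITD; a source directly to one side gives the maximum,
# on the order of 600-700 microseconds.
```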

36
Q

Interaural level difference

A

-The head masks sound coming from one side

-This results in an acoustic shadow whereby the ear opposite to the source of sound does not receive the same sound pressure level as the other ear

-Neurons of the lateral superior olivary nucleus receive ipsilateral excitation and contralateral inhibition, so they fire more when sound is presented only (or mainly) to the ipsilateral ear, as there is little inhibition via the contralateral ear

-The relative activity of the LSO on each side tells us whether a sound is coming from the left or the right (see the sketch below)

-The lateral SON is therefore involved in sound localization
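
A toy LSO unit (an illustrative sketch with made-up numbers, not a physiological model): ipsilateral excitation minus contralateral inhibition, compared across the two sides.

```python
def lso_response(ipsi_db, contra_db):
    """Ipsilateral excitation minus contralateral inhibition, floored at zero."""
    return max(0.0, ipsi_db - contra_db)

# A sound off to the right: the right ear gets ~70 dB, and the head shadow
# leaves only ~62 dB at the left ear (hypothetical levels).
right_lso = lso_response(ipsi_db=70, contra_db=62)  # right LSO fires strongly
left_lso = lso_response(ipsi_db=62, contra_db=70)   # left LSO is suppressed
print(right_lso, left_lso)  # 8.0 vs 0.0 -> the asymmetry localizes the sound to the right
```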

37
Q

‘Where’ information versus ‘what’ information, how does information travel in these streams?

A

-Where information travels both contralaterally and ipsilaterally
-What information is believed to be projected monaurally (i.e. ipsilaterally).

38
Q

What does the inferior colliculus trigger if sound is unexpected and loud?

A

-Startle reflex via descending projections to the spinal cord

-If incoming sound isn’t severe, signals will continue ascending to the thalamus (MGN)

39
Q

Medial geniculate nucleus of the thalamus

A

-A final relay station of the ascending auditory pathway

-Subregions of the MGN are devoted to “where” and “what” processing. These keep the streams separate and relay them to the auditory cortex

40
Q

Auditory cortex

A

-Central core area (primary auditory region, A1) = responds to specific frequencies and simple tones (lower order): the “individual building blocks”

-Secondary auditory cortex (belts) and tertiary/association regions (parabelts) = respond to complex sounds and sort the features of sound (higher order): “putting the building blocks together to create a perception”

-Flow of information is from the core to the belts to the parabelts

41
Q

Why, in the tonotopic organisation of A1, are the mid frequencies overrepresented?

A

-These are the frequencies used in human speech/the sounds that humans make

42
Q

Dorsal/ ventral streams after A1?

A

-Dorsal stream = where information (binaural integration, audiovisual integration, localization). Neurons here are tuned to spatial features and combine visual and auditory information.
Note: audiovisual integration = matching what you are hearing with what you are seeing

-Ventral stream = what information (sorting sound based on patterns, content, duration and location: identification). Neurons here are more responsive to object features, with specific areas devoted to specific objects (e.g. voice vs. non-voice, vowel sounds vs. non-vowel sounds). Looking for meaning.

-The what and where streams project to different zones of the prefrontal cortex (perhaps so we can attend and respond to each stream differently?)

43
Q

Parabelts function

A

-Sound recognition requires matching incoming sounds to memory based on pitch, loudness, duration etc. There is evidence for an auditory memory store in the parabelts region

44
Q

Belts function

A

-Spatial localization via the dorsal belts
-Sound recognition via the ventral belts
-Signal-to-noise optimization also via the ventral belts

(these are the three basic levels of processing after A1/ simple order processing)

45
Q

Signal to noise optimization

A

-If non-signal acoustic energy (noise) is of similar or greater amplitude this can mask the signal

-If non-signal acoustic energy (noise) is of a similar frequency it can mask the signal

-The worst situation is when both the amplitude and the frequency of the non-signal acoustic energy are similar to the signal (greatest masking)

To optimise signal to noise ratio:
-We can use the head shadow effect to localize sound (it is presented better to one ear than the other). This means the brain can simply listen to the ear with the better signal-to-noise ratio; or, once the sound is localized, we can turn so that both ears face the localized sound (see the sketch below).

-Alternatively, the brain can ‘unmask’ the signal using interaural time differences and interaural level differences. These won’t be big for background noise, as it is typically presented roughly equally to both ears, so it produces smaller ITDs and ILDs. Differences between signal and noise are determined in the brainstem, and once they reach the cortex we can attend to them and process their features, i.e. selectively attend to sounds that have bigger ITDs and ILDs, as these are likely to be the signal (a person speaking to you on one side, for example).
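
A minimal sketch of the "listen to the better ear" idea. The dB definition of SNR is standard; the pressure values are hypothetical, not from the lecture.

```python
import math

def snr_db(signal_rms, noise_rms):
    """Signal-to-noise ratio in decibels."""
    return 20 * math.log10(signal_rms / noise_rms)

# Hypothetical RMS pressures (arbitrary units) for a talker off to the left:
# the head shadow attenuates the talker at the right ear more than the diffuse noise.
left_snr = snr_db(signal_rms=0.040, noise_rms=0.020)   # ~6 dB
right_snr = snr_db(signal_rms=0.020, noise_rms=0.018)  # ~1 dB

better_ear = "left" if left_snr > right_snr else "right"
print(f"left {left_snr:.1f} dB, right {right_snr:.1f} dB -> attend to the {better_ear} ear")
```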

46
Q

What can the superior olives be split into?

A

-MSO (medial: important for the interaural time difference)
-LSO (lateral: important for the interaural level difference)

47
Q

Franssen effect

A

-Auditory illusion

-The cortex ‘decides’ that the sound is coming from one direction, and then, despite the location of the sound gradually changing, you still think it is coming from the original direction

-We are sensitive to the onset of the sound and its localization, and then form a stable perception (this illustrates the role of the cortex)

48
Q

Cocktail party phenomena

A

-Amidst all the background noise, if you hear one familiar word such as your name, you will immediately turn

-Binaural masking effect? The background chatter is masked, but some portion of the auditory memory bank will respond to a particularly salient cue such as your name

-Then a connection to the prefrontal cortex results in attending to that information