Binaural Hearing and Sound Localization Flashcards

1
Q

What are some binaural Summations/Benefits for localization? (4)

A
  1. Increased loudness
  2. Improvement in the differential limen
  3. Better perception in noise: spatial filtering
  4. Binaural fusion and beats
2
Q

How does Binaural summation benefit loudness perception?

A

Increased loudness perception near threshold: ~3 dB (perfect summation); 2:1, or ~6 dB, above threshold

3
Q

What is binaural fusion?

A

It is one example of binaural interaction in hearing; other examples include central masking, contralateral suppression, co-modulation masking release, etc.

4
Q

What does this graph show related to binaural summation for loudness?

A

Binaural hearing increases loudness by ~6 dB

5
Q

What is a differential limen?

A

The smallest change in stimulation that a person can detect

6
Q

What is the difference between unilateral and binaural hearing in discrimination?

A

Binaural hearing is better than unilateral in discrimination, especially at low sensation levels (SL) due to binaural summation in loudness

7
Q

What is the effect of binaural hearing on frequency and intensity discrimination?

A

There is a binaural advantage in frequency and intensity discrimination.

(The binaural benefit cannot be attributed to binaural summation of loudness, because it would require more than a 30 dB difference in loudness to produce such a difference in discrimination.)

8
Q

According to Pickle and Harris (1955), what is the difference in the differential limen between unilateral and binaural hearing?

A

They adjusted the level to account for the loudness advantage; the difference in discrimination threshold between binaural and unilateral hearing disappeared at low SL.

9
Q

According to Jesteadt (1977), what is the effect of binaural hearing on differential limen of intensity and frequency?

A

Binaural hearing causes better intensity and frequency discrimination (at 70 dB), not due to loudness advantage.

10
Q

It is understandable why the improved discrimination is related to ________________________________: discrimination is NOT level dependent if signals are _________________________; better discrimination is seen in ________________________.

A

It is understandable why the improved discrimination is related to binaural summation on loudness: discrimination is NOT level dependent if signals are well above the threshold in SL; better discrimination is seen in binaural hearing.

11
Q

What are the benefits of the potential mechanisms for binaural hearing in noise? (4)

A

Separates the target sound from noise (spatial filtering)
Improves discrimination
Improves stream tracking of the target sound
Unmasking (via efferent control and other mechanisms)

12
Q

Binaural fusion is related to: (2)

A

Binaural cues for the acoustic image in space: Binaural differences in intensity, spectrum, and timing

Fused image

13
Q

Fused image from binaural fusion is related to:

A

The fact that we do not perceive the two ears as working separately; dichotic signals can be different or similar, yet they are connected (fused) in certain ways.

14
Q

_________________ is required for binaural fusion

A

Commonality.
Model by Cherry and Sayers: binaural fusion arises when the two ears receive similar signals (commonalities).

15
Q

What are three examples of commonality for binaural fusion?

A

Co-modulation of harmonics presented dichotically (different components go to each ear).
Different speech components to each ear: complementary for speech.
Residual-pitch harmonics presented dichotically.

16
Q

Binaural beats (BB) occur in the ________ while monaural beats (MB) in the ________.

BB are in the ________________ frequency range than MB.

BB can occur at a ________________ and ____________________between the two tones; one tone can be______________________.

A
  1. BB occur in the central auditory system (CAS), while MB occur in the cochlea.
  2. BB occur in a lower frequency range than MB.
  3. BB can occur at a larger level difference and a larger frequency difference between the two tones; one tone can even be below the audible level.
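As an illustration of the dichotic stimulus behind binaural beats, here is a minimal Python sketch (not from the cards) that generates a slightly different tone for each ear; the 500/504 Hz carriers and sample rate are assumed values, and the 4 Hz beat arises centrally rather than in the cochlea.

```python
import numpy as np

def binaural_beat(f_left=500.0, f_right=504.0, dur=2.0, fs=44100):
    """Dichotic tone pair: the |f_right - f_left| beat arises centrally, not in the cochlea."""
    t = np.arange(int(dur * fs)) / fs
    left = np.sin(2 * np.pi * f_left * t)    # tone presented to the left ear only
    right = np.sin(2 * np.pi * f_right * t)  # slightly higher tone to the right ear only
    return np.column_stack([left, right])    # two-channel (stereo) signal

stereo = binaural_beat()
print(stereo.shape)  # (88200, 2); perceived beat rate = |504 - 500| = 4 Hz
```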
17
Q

Gestalt principle states that:

A

When you’re presented with a set of ambiguous or complex objects, your brain will make them appear as simple as possible.

  1. Grouping units (sounds) together
  2. More than simple addition
  3. The whole is larger than the simple sum of all parts

*Principle can be seen in vision

18
Q

In a complex auditory task like tracking a target talker at a cocktail party, what cues may be used? (5)

A
  1. Spectrum profile of the talker’s speech
  2. Temporal stream of the speech
  3. Spatial separation/identification
  4. Many more (such as familiarity, dynamic cues, etc.)
  5. Bottom-up and top-down processes involved.
19
Q

What does this graph show related to the segregation of sounds in binaural hearing?

A

Segregation of the streams by presentation speed

20
Q

What can we see from Streams 1 and 2 about the effects of both speed and frequency difference?

A

When the frequency separation is larger, you hear two streams at higher speeds.
However, you always hear one (galloping) stream when the frequency separation is small.
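A minimal sketch of the kind of tone sequence used in such streaming demonstrations (a van Noorden-style ABA- pattern); the exact frequencies, durations, and semitone separations below are assumed for illustration. Larger frequency separation and faster presentation favor hearing two streams; small separation yields one galloping stream.

```python
import numpy as np

def aba_sequence(f_a=500.0, delta_semitones=7, tone_dur=0.08, n_triplets=10, fs=44100):
    """Build an ABA- triplet sequence; larger delta and faster rate favor two streams."""
    f_b = f_a * 2 ** (delta_semitones / 12)   # B tone, delta semitones above A
    t = np.arange(int(tone_dur * fs)) / fs
    tone = lambda f: np.sin(2 * np.pi * f * t)
    gap = np.zeros_like(t)                    # silent slot completing the 'ABA-' rhythm
    triplet = np.concatenate([tone(f_a), tone(f_b), tone(f_a), gap])
    return np.tile(triplet, n_triplets)

one_stream = aba_sequence(delta_semitones=1)    # small separation: one galloping stream
two_streams = aba_sequence(delta_semitones=12)  # large separation: tends to split into two
```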

21
Q

Give other phenomena and terms in signal processing.

A

Proximity (similarity): e.g., similar signals allow easy dichotic fusion

Common fate (e.g., components that turn on and off together are likely attributed to the same source)

Good continuation

Primitive process: bottom-up

Schema-based process: top-down

22
Q

What is Schema based learning?

A

Schema-based learning is an active process in which learners construct new ideas or concepts based on their existing knowledge

23
Q

Importance of common onset: example of common fate

A

A: Simple masking: one band of masker on the signal.
B: Co-modulation masking release (CMR): reduced masking when the noise in the signal band and the side bands are co-modulated.
C: CMR disappears when the onset of the noise in the signal band and the side bands is mismatched.

24
Q

Give an example of a primitive process:

A

Bottom-up
Frequency Modulation leads to vowel sensation

25
Q

Example of good continuation in vision

A

Gliding tone: an acoustic example of good continuation; the tone is heard as continuous through the blocking noise ("fence") rather than as a gap (picket fence effect)

26
Q

What is the picket fence effect?

A

It is a combination of bottom-up and top-down processes that causes a signal blocked by a "fence" (noise) to be heard as continuous rather than as a blank gap.

The top-down process is involved especially in the shape perception in the rightmost graph.

(example with gliding tone)

27
Q

What are the two planes for sound localization?

A

Azimuth (more critical) vs. elevation (vertical plane)
(our system combines the information from these planes to form the location of the sound source)

28
Q

Azimuth vs Elevation

A

Azimuth = horizontal plane
Elevation = median (vertical) plane

29
Q

We investigate the sound source localization ability in two measurements: _________________ and _________________

A

localization and discrimination

30
Q

What is localization error and what is the task associated with this measure?

A

The difference between apparent location and physical location

The subject must point to the sound source with no reference, just by hearing the sound

31
Q

What is spatial discrimination and its task?

A

It is measured as the minimum audible angle.
(Direction is the major task; distance and moving sources can also be tested.)

32
Q

What are the two listening fields we use to measure sound localization ability?

A

Open field (closer to reality)
Closed field

33
Q

What is closed field testing in sound localization?

A

We use headphones, which leads to intracranial lateralization.

(The sound image is trapped inside the head, likely due to the loss of the normal ear/pinna resonance effects.)

34
Q

What is open field testing in sound localization?

A

We use loudspeakers (stereophony), which leads to extracranial localization to both ears: the sound source is felt to be outside the head (closer to reality)

35
Q

What are the two general issues (questions) we ask ourselves about sound localization?

A

What are the cues for sound localization, and how are they used?

36
Q

What are the approaches to answer these considerations for sound localization? (2)

A

Behavioral studies and neurological mechanisms

37
Q

What are the cues for sound localization?

A

ITD/IPD (phase difference)
IID/ILD

38
Q

What is the duplex theory for localization in the azimuth plane?

A

ITD/IPD: determined by the size of the head, which is roughly spherical (diameter about 22-23 cm); the larger the head, the larger the ITD.

IID or ILD: from the shadow effect of the head.

39
Q

The ITD is highest when sound is at ____________________________, with the largest time difference between the two ears of _______ μs and a time-difference sensitivity of ____ μs across frequency.

A

ITD is highest when the sound is at a 90-degree angle (fully lateralized to the side of the head); the largest time difference between the ears is ~660 μs.
Time-difference sensitivity: ~10 μs difference across frequency.
Time converts to angle and phase.
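As a hedged illustration of where the ~660 μs figure comes from, a common spherical-head approximation (Woodworth's formula) can be evaluated; the head radius and speed of sound below are assumed round values, not taken from the card.

```python
import math

def itd_sphere(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Spherical-head ITD estimate: (a / c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return head_radius_m / c * (theta + math.sin(theta))

print(round(itd_sphere(90) * 1e6))  # ~656 microseconds, close to the 660 us in the card
print(round(itd_sphere(45) * 1e6))  # smaller ITD for a less lateral source (~381 us)
```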

40
Q

Shadow effect is larger at_____________________________________________________

A

High frequencies, because the wavelength is short relative to the head.

(When the frequency is low, the longer wavelengths can go around obstacles; high-frequency sounds are attenuated by obstacles.)
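A quick arithmetic check of the wavelength argument (the speed of sound and head diameter below are assumed round values): when the wavelength is much longer than the head, sound diffracts around it and casts little shadow.

```python
c = 343.0        # speed of sound in air, m/s (assumed)
head_d = 0.175   # approximate head diameter, m (assumed)

for f in (250, 500, 2000, 6000):
    wavelength = c / f
    effect = "diffracts around the head" if wavelength > head_d else "head casts a shadow"
    print(f"{f} Hz: wavelength {wavelength * 100:.1f} cm -> {effect}")
```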

41
Q

How does the shadow effect change with frequency?

A

For higher frequencies, the ILD becomes larger and larger depending on the location of the sound source (largest at 90 degrees); for low frequencies there is almost no shadow effect and no level difference.

(Must consider the combined effect of distance and shadow.)

42
Q

What is the effect of frequency on ILD?

A

ILD is larger at higher frequencies (shorter wavelengths).
ILD can reach about 20 dB.

43
Q

ITD summary: ITD comes from __________________ and is determined by ________________ and is influenced by ____________________________.

A
  1. ITD comes from the difference in distance to the two ears
  2. Determined by azimuth (angle), with little effect of distance
  3. Influenced by the shape of the head; changes with frequency
44
Q

ITD produces IPD for __________________________________

A

For periodic signals

(Interaural time difference can be converted into phase delay)

45
Q

IPD depends on periodic signals and also on:

A

On frequency: for a given ITD, a higher frequency will have a larger IPD value.
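The relation the card describes is IPD = 360° × f × ITD; a minimal sketch using an illustrative 500 μs ITD:

```python
def ipd_degrees(itd_s, freq_hz):
    """Interaural phase difference produced by a given ITD at a given frequency."""
    return 360.0 * freq_hz * itd_s

itd = 500e-6  # 500 microseconds (illustrative value)
for f in (250, 500, 1000, 2000):
    print(f, "Hz ->", round(ipd_degrees(itd, f)), "degrees")
# 250 Hz -> 45, 500 Hz -> 90, 1000 Hz -> 180, 2000 Hz -> 360 (ambiguous)
```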

46
Q

Larger IPD ≠

A

stronger signal

47
Q

For a fixed IPD, ITD ___________________________________

A

ITD decreases with frequency

48
Q

IPD is a useful cue when it is __________ over that it will ___________

A

Below 360 degrees; over that, it will repeat itself (a sine repeats every 360°).
When 360° > IPD > 180°, the IPD acts the same as an IPD < 180°.

49
Q

To make a clear decision based on IPD, ½ period must be:

A

Larger than the maximal time difference (MTD, ~660 microseconds), so that IPD < 180°.

50
Q

How does the frequency affect ITD and IPD?

A

A 360° phase difference corresponds to a different time difference at each frequency.

The higher the frequency, the shorter the time difference for which IPD < 180°.

51
Q

MTD and the highest frequency for useful IPD (to generate useful IPD):

A

To make the ½ period longer than the MTD, the frequency must be smaller than a limiting value.

If MTD = 0.7 ms (or 660 μs), what is the maximum frequency that ensures this?
To make period/2 larger than 0.7 ms: 1000/1.4 = 714 Hz.
Therefore, the maximum frequency for IPD < 180° is ~700 Hz.
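The same arithmetic as a one-line formula: the highest frequency whose half period still exceeds the maximal time difference.

```python
def max_freq_for_unambiguous_ipd(mtd_s):
    """Highest frequency whose half period exceeds the maximal time difference (MTD)."""
    return 1.0 / (2.0 * mtd_s)

print(round(max_freq_for_unambiguous_ipd(0.7e-3)))  # 714 Hz for MTD = 0.7 ms
print(round(max_freq_for_unambiguous_ipd(660e-6)))  # ~758 Hz for MTD = 660 us
```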

52
Q

What does this graph show related to the Intensity difference/time change with frequency?

A

Curves a and b show the ITD at 90 and 45 degrees: the shorter the ITD, the higher the frequency at which a 180° phase difference is reached.

Limiting conditions for intensity and phase.
For phase: the curves tell how much ITD will produce the targeted phase difference, as a function of frequency.

53
Q

Why are low frequencies better for ITD and IPD processing?

A

Temporal coding is better at low frequencies.
No ILD is available at low frequencies.

54
Q

Smaller animals use _____, big animals ______ because MSO neurons ____________________________________.

A

Smaller animals use ILD; big animals use ITD, because in big animals MSO neurons are more dominant, and these require low frequencies to use ITD to localize a sound source.

55
Q

What are the elements for identifying sound sources at low frequencies?

A
  • Identity circle
  • IPD must be < 180 degrees; the limit changes with frequency:
    At 90° azimuth, ITD 650 μs = ½ cycle of 770 Hz
    At 45° azimuth, ITD 350 μs = ½ cycle of 1400 Hz
    Close to 0° azimuth, ITD → 0 and the frequency limit increases, but low-frequency signals still make strong IPD cues!
56
Q

Sound localization accuracy in azimuth is always the best at 0 ____________
No matter what types of cues are used and largest ITD/IPD/ILD at ____________
but larger ITD/IPD/ILD does not mean _________________

A

Always the best at 0° azimuth
No matter what types of cues are used
Largest ITD/IPD/ILD at 90 degrees
But a larger ITD/IPD/ILD does not mean stronger stimulation

57
Q

Why are we better at localization at 0 degrees related to neurons?

A

More of the neurons for localization are organized so that they work best at 0° azimuth

58
Q

What is the duplex theory?

A

High frequencies: rely on ILD.
Low frequencies: rely on ITD, predictable from the sphere model.

(At high frequencies, ITD also plays a role.
At low frequencies, ITD differs from the sphere model: < 500 Hz, ITD = 800-820 μs.)

59
Q

The duplex theory depends on studies using ____________________________________

A

It depends on studies using pure tones, whereas real sounds are normally complex.

60
Q

What can we say about our localization accuracy across frequency?

A

The higher the line, the poorer our accuracy: ITD/IPD is better at low frequencies; ILD is better at high frequencies.

If our localization depends on the duplex theory (pure tones), then performance is poorer at middle frequencies (which is notable, since our system is rarely poor at middle frequencies).

61
Q

What are the limitations of duplex theory? (3)

A
  • We do not rely upon pure tones for localization
  • High-frequency sounds can carry time cues, e.g., when modulated by a low frequency
  • Front-back confusion and the identity circle are broken down by pinna cues, which the duplex theory does not account for
62
Q

Why should we use earphones instead of speakers?

A

Speakers require more equipment, and the testing environment is stricter.

Earphones:
- can avoid these problems
- reduce the burden of equipment
- let us change parameters such as phase and level independently in each ear

63
Q

With earphones, Halverson studied using:

A

A 500 Hz tone, with a 0-180° phase change converted to a 0-90° azimuth.

Found the frequency limit for useful IPD: a phase change leads to a change in image position when the frequency is < 1400 Hz.

64
Q

When using earphones:
The relative effectiveness _______________________________
Sound trapped in the head due to _____________________
______ more important than______ when using earphones
ITD: only works at _______________
IPD: works for ________________

A

The relative effectiveness of ITD and ILD can be evaluated
Sound trapped in the head due to the loss of the pinna effect
ITD is more important than IPD when using earphones
ITD: only works at onset and offset
IPD: works for continuous signals

65
Q

Explain the contributions from the ITD at onset and offset:

A

In virtual hearing (via earphone): early onset in the near ear leads to sound coming from the nearer ear (the effect of onset discrepancy), whereas early offset in the near ear leads to sounds coming from the farther ear (the effect of offset discrepancy)

Overall, the onset ITD is dominant: in real hearing, we hear sound based upon the onset discrepancy.

Also, there are studies comparing the effect of onset ITD and ILD: in virtual hearing, the near ear can have early onset but weak sound.

66
Q

In research, what major progress with improvements in technology showed that virtual hearing (earphones) can better mimic real hearing? (3)

A
  1. From simple sounds to complex signals
  2. Use of headphones: lost spectrum cues
  3. Digital technology can put back the spectrum cues
67
Q

Explain MAA (Minimum Audible Angle)

A

Smallest detectable difference between the incident directions of two sound sources
Much more accurate (smaller) than the localization error
Largest around 1-3 kHz
Smallest at 0° azimuth
Concurrent MAA (CMAA): two signals presented at the same time

68
Q

MAA according to Yost states:

A
  1. IPD shift required for image shift remains constant when f<900 Hz
  2. IPD shift required for image shift increases with original IPD
  3. Upper freq limit: 1200 Hz
69
Q

What does this graph show related to the IDL threshold and frequency in lateralization?

A

Y-axis: the sound level required for the subject to tell where the sound is. The IDL threshold does not change with frequency.

0 dB: middle line (0° azimuth) = better performance
9 dB: 45° = poorer performance
15 dB: 90° = poorest performance

70
Q

What does this graph show related to the IPD in lateralization using earphones?

A

No effect at 2000 Hz; the performance is so poor it is not useful.
For lower frequencies, we can see the change in phase difference required to identify the sound source: a smaller value around 0° azimuth and a poorer (larger) value at 90°.

71
Q

The Minimal angle for earphones remains _____________________________and is proportional to the original (or baseline) phase difference at _________ a just detectable phase angle change is _________________________________________

A

The minimal angle for earphones remains constant up to 900 Hz and is proportional to the original (or baseline) phase difference; at 500 Hz, a just-detectable phase angle change is 2 degrees, or 11 microseconds.

72
Q

In Lateralization:
Just noticeable phase diff at 500Hz: ________ or _______
At 1200 Hz: _________ or ___________
In Localization:
just noticeable phase difference at ____________ or ____________
MAA is __________________localization error

A

In lateralization:
Just-noticeable phase difference at 500 Hz: 2° or 11 μs
At 1200 Hz: 12° or 27 μs
In localization:
Just-noticeable phase difference at 100 Hz: 3° or 5.83 μs
MAA is much smaller than the localization error
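The degree/microsecond pairs for lateralization follow from the same phase-to-time conversion; a quick check (formula only, values as in the card):

```python
def phase_to_time_us(phase_deg, freq_hz):
    """Convert a phase difference to the equivalent time difference in microseconds."""
    return phase_deg / (360.0 * freq_hz) * 1e6

print(round(phase_to_time_us(2, 500), 1))    # ~11.1 us at 500 Hz
print(round(phase_to_time_us(12, 1200), 1))  # ~27.8 us at 1200 Hz
```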

73
Q

What does this graph show related to the MAA change with frequency?

A

Three different locations.
At 60 degrees, the MAA is huge at middle frequencies.
It is smallest at 0° azimuth, roughly 2 degrees.

74
Q

What occurs in front-back confusion in sound localization?

A

Sound sources on the cone of confusion are equidistant from the left and right ears and thus provide identical ITD and ILD to the listener, so the duplex theory does not work.

At any point on the cone surface, binaural cues are the same.

75
Q

To get rid of the confusion cone effect, we need to use:

A

Spectral cues

76
Q

What are the spectral cues and their use?

A

We use them for localization in the vertical plane and to avoid errors in azimuth (the confusion cone effect); they arise from the pinna effect and ear canal resonance.

77
Q

What are dynamic cues?

A

Cues from a moving (dynamic) versus stationary sound source (relative to the listener).
Stationary stimuli can become dynamic when the head is moving.
We can break front-back confusion by moving the head.
Head movement also helps monaural localization (small effects reported).

78
Q

What are the cues for estimating the distance of a sound source? (5)

A

Sound Level
The ratio of direct-to-reverberant energy
Spectral Shape
Binaural Cues (ILD)
Familiarity or Experience

79
Q

What is the precedence effect?

A

The sound that is heard first takes the dominant role in localization and is the main contributor (the onset time difference is more important than the offset).

80
Q

Which type of sound was used to test the precedence effect?

A

Found from a classical click experiment; fusion occurs when the click interval is less than 5 ms.

Summing localization: fused when the inter-click interval is < 1 ms.

Localization dominance: when the click interval is 2-5 ms and the interval between pairs is 10-100 ms.

This also causes discrimination suppression.

81
Q

Explain localization dominance.

A

τ1 = interval within the first pair of clicks
τ2 = interval within the second pair of clicks
τ3 = interval between the τ1 pair and the τ2 pair

When τ1, τ2, and τ3 are small enough (τ1, τ2 < 5 ms; 10 ms < τ3 < 100 ms), the listener reports that the sound comes from the left ear (the summing localization of the leading pair dominates the result).

Summary: The first pair is left-leading and the second pair is right-leading. Playing the first pair alone, the subject feels that the two clicks come from the left. Playing the two pairs together, the subject hears a fused image from the left.

82
Q

Explain how discrimination is suppressed from the interval between clicks.

A

a) Summing localization: the interval between the click pairs is very small, less than 1 ms (fused).
b) Localization dominance: the interval is about 3 ms; still fused, lateralized to the leading ear.
c) No fusion: the interval between the click pairs is large enough, about 15 ms (not fused).
d) Left-leading clicks suppress discrimination of localization for the right-leading clicks (poorer performance).

83
Q

Discrimination suppression occurs when _____________________________________________________.

A

the leading ear, receiving the sound first, causes suppression of the sound in the lagging ear.

84
Q

What cues do we use for localization? (summary)

A

Binaural cues - azimuth - divided into ITD (low frequency, big animals) and ILD (high frequency, small animals)

Spectral cues - vertical plane, and to avoid the cone of confusion; from the pinna effect and EAC resonance

Dynamic cues - moving the head or sound source for better localization, to break down front-back confusion, or for monaural localization

85
Q

What is head-related transfer function?

A
  • How our head changes the transmission of sound depending on the sound's location
  • Directionally related to the sound source (the spectrum changes with direction)
  • Causes timbre changes
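A minimal sketch of how an HRTF is commonly applied in virtual (headphone) hearing: convolve a mono source with left- and right-ear head-related impulse responses measured for the desired direction. The random impulse responses below are placeholders for real measured HRIRs; scipy is assumed to be available.

```python
import numpy as np
from scipy.signal import fftconvolve

def spatialize(mono, hrir_left, hrir_right):
    """Render a mono signal at the direction encoded by the HRIR pair."""
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    return np.column_stack([left, right])

fs = 44100
mono = np.random.randn(fs)                # 1 s of noise as a stand-in source
hrir_left = np.random.randn(256) * 0.01   # placeholder HRIRs; real ones are measured
hrir_right = np.random.randn(256) * 0.01
binaural = spatialize(mono, hrir_left, hrir_right)  # 2-channel output with direction-dependent spectrum
```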
86
Q

What is binaural fusion?

A

It is one example of binaural interaction in hearing, in which the inputs to the two ears are fused on the basis of many sound cues

87
Q

Explain the masking level difference between dichotic and diotic presentations.

A

S: signal, N: noise, o: in phase, π: out of phase (180 degrees), τ: time delay

a) Signal and noise monaural (reference condition)
b) Signal and noise binaural, both in phase (diotic)
c) Signal monaural, noise in both ears (dichotic): masking level goes down by 9 dB, attributed to efferent effects
d) Signal diotic (in phase in both ears), noise out of phase between the ears: 13 dB masking level difference
e) LARGEST: signal out of phase between the ears (dichotic), noise in phase in both ears (diotic): 15 dB masking level difference
f) Signal diotic in phase, noise with a time delay between the ears: 3-10 dB

88
Q

In which presentation is the masking level difference larger than in the others? Why?

A

SπNo

LARGEST: signal out of phase between the ears (dichotic), noise in phase in both ears (diotic): 15 dB masking level difference
- Indicating mechanisms related to phase locking
- Supported by the change of MLD with frequency (larger MLD for low frequencies)
- Larger for a higher spectrum level of noise

89
Q

What does this graph show us related to the masking level difference depending on the frequency of signals?

A

There is a larger masking level difference (release from masking) for low-frequency signals

90
Q

Give the binaural responses of neurons.
(ipsilateral stimulation)

A

Table Below

91
Q

Give the spatial distribution of binaural neurons in the MSO and LSO.

A

Low Frequency neurons in MSO and IC: EE type dominant - ITD

High Frequency neurons in LSO: IE type dominant; in IC: EI type dominant - ILD

92
Q

How do neurons code sound time difference from both ears?

A

Binaural processing:
Periodic excitation and inhibition (curve = sound wave)

ITD: onset disparity
IPD: ongoing disparity
Maximal interaural delay: 0.8 ms
IPD depends on frequency

93
Q

What does this slide show related to the fact that MSO neurons have a characteristic delay?

A

This slide shows the time delay of neuron responses to contralateral versus ipsilateral stimuli.

The response to ipsilateral stimuli (neuron in white) has a shorter latency.

The response to contralateral stimuli (neuron in black) has a longer latency, due to the longer travel path from the contralateral cochlea.

94
Q

What would happen if we made the stimulation to the contralateral ear earlier (near ear)?

A

If the contralateral ear is the near ear (stimulated earlier), the shorter acoustic travel time from the speaker offsets the longer neural delay to the neuron.

Jeffress's coincidence theory:
The interaural delay is countered by the neural delay to produce coincidence.
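A minimal sketch of the Jeffress idea as stated here: an array of coincidence detectors, each with its own internal (neural) delay, and the detector whose internal delay cancels the external ITD responds most strongly. The spike trains, resolution, and delay range below are illustrative assumptions.

```python
import numpy as np

def jeffress_best_delay(left_spikes, right_spikes, candidate_delays):
    """Return the internal delay (in samples) that maximizes left/right coincidence."""
    scores = []
    for d in candidate_delays:
        shifted = np.roll(left_spikes, d)               # apply the internal (neural) delay
        scores.append(np.sum(shifted * right_spikes))   # count coincident spikes
    return candidate_delays[int(np.argmax(scores))]

fs = 100_000                        # 10 us resolution (assumed)
itd_samples = 66                    # external ITD of ~660 us; the right ear leads
right = (np.random.rand(fs) < 0.01).astype(int)
left = np.roll(right, itd_samples)  # left ear gets the same spike train, delayed
delays = np.arange(-100, 101)
print(jeffress_best_delay(left, right, delays))  # -66: the neural delay counters the ITD
```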