Final Written Study Guide Flashcards
(152 cards)
Explain the process of converting an analog electrical signal to a digital signal
Sound goes to the analog-to-digital converter (ADC), which converts the analog electrical signal into a digital signal
ADC - when converting an analog signal to a digital signal, the signal is sampled at discrete intervals (the sampling rate)
The ADC takes snapshots of the analog audio - at least 2 snapshots per wave cycle are needed to accurately represent the frequency (one at the peak and one at the trough) - the higher the sample rate, the more precise the recreation of the original audio
The quality of the captured audio sample is determined by
sample rate - how many samples of the original signal are taken in periods of time
bit depth - determines the number of possible amplitude values that can be recorded for each sample; the smaller the bit depth, the more noise you get; the greater the bit depth, the greater the detail in the audio
Quantization - each sampled value is rounded to the nearest value within a set of discrete levels and these levels are defined by the resolution of the ADC
Quantization error = noise floor
results in the front-end distortion we see, because each bit provides only 6 dB of dynamic range
16-bit digital word = DR of 96 dB
Digital signal now has numbers associated w/ it and moves to DSP
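The sampling-and-quantization steps above can be sketched in a few lines of Python. This is a toy illustration, not a real hearing-aid API; the function and parameter names are made up:

```python
import math

def adc(signal_func, duration_s, sample_rate_hz, bit_depth):
    """Sketch of an ADC: sample at discrete intervals, then quantize.

    signal_func is the continuous analog signal as a function of time.
    """
    levels = 2 ** bit_depth                # number of discrete amplitude steps
    n_samples = int(duration_s * sample_rate_hz)
    samples = []
    for n in range(n_samples):
        t = n / sample_rate_hz             # snapshot at an evenly spaced moment
        x = signal_func(t)                 # analog value in [-1.0, 1.0]
        # quantization: round to the nearest of the available discrete values
        q = round(x * (levels / 2 - 1))
        samples.append(q)
    return samples

# 1 kHz tone sampled at 16 kHz with 16-bit depth
tone = lambda t: math.sin(2 * math.pi * 1000 * t)
digital = adc(tone, duration_s=0.001, sample_rate_hz=16000, bit_depth=16)
print(len(digital))  # 16 samples in one millisecond
```

The digital signal is now just a list of numbers (16-bit words spanning -32767 to +32767) that the DSP can operate on.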
Explain the process of converting a digital signal back to electrical
The DAC converts the digital signal back into an analog electrical signal; the receiver then converts the amplified electrical signal back into an acoustic signal
The armature (a flexible strip of metal balanced between two magnets, like a diving board) is magnetized as the electrical current flows through the coil. It moves up toward the positive magnet and down toward the negative magnet, mimicking the electrical sine wave. The armature is attached to the diaphragm above it, so as the armature moves, so does the diaphragm. This diaphragm movement pushes and pulls the air, creating an acoustic signal.
what is sampling rate
number of times per second an analog signal is sampled to create a digital signal - how we capture frequency information
Sampling rate takes regular snapshots of the continuous analog electric signal wave at evenly spaced moments in time & DSP only uses those snapshots (sampled points) and ignores everything else
In order to capture the wave correctly, how many snapshots per cycle are needed
2
one at the highest point and one at the lowest point
If you do not take enough snapshots, the wave's frequency cannot be accurately recognized
what sampling rate do you want
high rate
the more snapshots taken, the more accurately the original continuous wave is represented, due to more sampling points
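What goes wrong with too few snapshots can be shown numerically: sampled below 2 snapshots per cycle, a high tone produces exactly the same sample values as a lower one (aliasing). The frequencies below are arbitrary illustration values:

```python
import math

fs = 4000          # sample rate (Hz), deliberately too low for a 3 kHz tone
n_samples = 8

# snapshots of a 3 kHz tone and a 1 kHz tone at the same sample instants
tone_3k = [math.cos(2 * math.pi * 3000 * n / fs) for n in range(n_samples)]
tone_1k = [math.cos(2 * math.pi * 1000 * n / fs) for n in range(n_samples)]

# with fewer than 2 snapshots per cycle, the two tones are indistinguishable
identical = all(abs(a - b) < 1e-9 for a, b in zip(tone_3k, tone_1k))
print(identical)  # True: the 3 kHz tone aliases to 1 kHz
```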
what is the nyquist theorem
The Nyquist rate is 2x the given frequency to be measured accurately
It is the minimum sample rate for the highest frequency you want to measure
what is bit depth
Measures the amplitude of the signal - the vertical measurement (sampling rate covers the horizontal, time axis)
Higher bit depth = more possible vertical amplitude values & more precisely the exact amplitude of a given sample can be recorded
It also means a wider dynamic range
what is Quantization (Bit Resolution)
Depending on the bit depth, the exact amplitude value is rounded up or down to the nearest available value - this rounding is quantization
Higher bit resolution = smoother, more fluid sample
Lower bit resolution = choppier sample
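The rounding step can be sketched directly. This is a simplified model (a real ADC quantizes in hardware); the 3-bit vs 16-bit comparison shows why low bit depth makes a choppier sample:

```python
def quantize(x, bit_depth):
    """Round an amplitude x in [-1.0, 1.0] to the nearest discrete level."""
    steps = 2 ** (bit_depth - 1) - 1   # positive levels available
    return round(x * steps) / steps

# a 3-bit ADC has far fewer levels than a 16-bit one, so rounding is coarser
sample = 0.3
print(abs(quantize(sample, 3) - sample))    # large quantization error
print(abs(quantize(sample, 16) - sample))   # tiny quantization error
```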
what is quantization error
The difference between the original acoustic signal and the transduced digital signal
Creates noise = noise floor
Every bit only has ___ dB of dynamic range
6
16-bit digital word = DR of
96 dB
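The 6 dB-per-bit relationship is simple multiplication:

```python
def dynamic_range_db(bit_depth, db_per_bit=6):
    """Each bit contributes ~6 dB of front-end dynamic range."""
    return bit_depth * db_per_bit

print(dynamic_range_db(16))  # 96: a 16-bit digital word gives 96 dB of DR
```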
Noise floor in the hearing aid is noise created in the circuit due to quantization error
TRUE
what is an algorithm
Analytical calculations applied to the digital signal
They add, subtract or multiply strings of digital words
It creates a step by step set of decisions to achieve the desired result
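A minimal example of "multiplying strings of digital words": applying gain to a digitized signal. The sample values below are hypothetical, and real hearing-aid algorithms are far more elaborate:

```python
def apply_gain(digital_words, gain):
    """A minimal 'algorithm': multiply each digital word by a gain factor."""
    return [round(w * gain) for w in digital_words]

signal = [100, -250, 400]      # hypothetical 16-bit sample values
print(apply_gain(signal, 2))   # [200, -500, 800]
```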
explain how algorithm works
The HA listens to the environment and the acoustic scene. When a given combination of characteristics occurs (spectral, temporal, amplitude), it automatically changes programs and adjusts the digital signal processing to match the listening environment.
Explain front end limitations associated with 16-bit processing, and how this impacts microphone sensitivities. How are these limitations resolved?
The smaller the bit depth, the more noise you get
The greater the bit depth, the greater the detail in the audio
16-bit processing has a dynamic range of 96 dB - it can capture sounds up to this level, but beyond it the sound becomes distorted
We get this dynamic range because each digital bit increases the front-end dynamic range by 6 dB
Solution: the dynamic range window shifts - the 96 dB window is lifted higher to capture louder sounds, but at the cost of sacrificing the softest sounds
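The window shift is simple arithmetic (the 113 dB ceiling below is just an illustrative number, not a product spec):

```python
def input_window(ceiling_db_spl, dynamic_range_db=96):
    """The 16-bit front end spans a fixed 96 dB window; raising the ceiling
    to admit louder sounds raises the floor, sacrificing the softest sounds."""
    return (ceiling_db_spl - dynamic_range_db, ceiling_db_spl)

print(input_window(96))   # (0, 96): default window
print(input_window(113))  # (17, 113): higher ceiling, sounds below 17 dB lost
```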
what is an auditory filter, its function, effect of LF masking on a damaged cochlea and frequencies impacted by noise
The cochlea is a series of overlapping band-pass filters (frequencies grouped together because they are close together on the cochlea) that allow certain regions to respond to a specific frequency region while ignoring frequencies outside the band
The filters have a lot of overlap with each other so HF bands pick up LF signals from adjacent critical bands and as a result, noise can mask signals from adjacent critical bands
In normal hearing, sharp tuning curves allow for precise frequency discrimination & perception of sounds
w/ HL, the curve is broader and noise can easily affect perception of the desired signal
Broadening of the filters is mainly on the LF side, leading to increased LF masking. The LF noise masks the region it normally would, but also spills over into the overlapping HF bands around it, making it more difficult for the patient to use the cues to understand speech over the noise.
how many bands are in the auditory system
25 bands
LF bandwidths are narrow - only 160 Hz wide
HF bandwidths are wide - up to 2500 Hz
what is upward spread of masking
Intense 250 Hz LF noise will mask that frequency region, but the masking will also spill over to the overlapping HF critical bands
Noise energy peaks around 250 Hz, but upward spread of masking impacts audibility up to about 1500 Hz
Types of Noise that Impact Intelligibility
Steady state signals
Random noise with an intensity/frequency spectrum like speech (speech-shaped noise)
10-talker babble
4 talker babble
2 talker babble
Which is harder to hear in?
Room reverberation
what are the methods of sound cleaning technology
spatial domain
temporal domain
spectral domain
Differentiate modulation rate and depth for speech and noise.
Speech & noise signals have time differences
Modulation rates
Speed of the signal
Speech = slow rate
Noise = fast rate
Modulation depth
Amplitude variations bw loudest and softest portions of the signal
Intensity of the variations
Speech = highly variable
Noise = steady over time
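Modulation depth can be computed from a signal's envelope. The envelopes below are toy illustrations, not measured speech, and the (max - min)/(max + min) index is one common formulation:

```python
def modulation_depth(envelope):
    """Depth = how far the envelope swings between loudest and softest parts.

    Returns (max - min) / (max + min): near 1 for deep modulation,
    near 0 for a steady signal.
    """
    hi, lo = max(envelope), min(envelope)
    return (hi - lo) / (hi + lo)

speech_like = [0.9, 0.2, 1.0, 0.1, 0.8]   # big swings between syllables
steady_noise = [0.5, 0.52, 0.49, 0.51]    # nearly constant, like a fan

print(modulation_depth(speech_like))   # close to 1: deep modulation
print(modulation_depth(steady_noise))  # close to 0: shallow modulation
```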
How is poor SNR determined in a hearing aid?
it looks at the mod rate and depth
for noise, the mod rate is fast and the mod depth is steady (low) over time, so the aid identifies this pattern and reduces it from the signal
how does digital noise reduction work
Steady state noise
Idling engine, hair dryer, vacuum etc.
Only acts on fast mod rates & low mod depths
Varying degrees can be applied to each frequency range
doesn’t improve speech intelligibility
Can improve listening comfort, reduce listening effort, reduce cognitive load
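The per-channel decision can be sketched as a simple rule. The threshold and reduction amounts below are illustrative assumptions, not values from any real product:

```python
def dnr_gain_db(channel_mod_depth, threshold=0.3, max_reduction_db=12):
    """Sketch of digital noise reduction: a channel whose envelope has a low
    modulation depth looks like steady-state noise and gets its gain reduced."""
    if channel_mod_depth < threshold:
        return -max_reduction_db   # steady-state: turn the channel down
    return 0                       # speech-like modulation: leave it alone

print(dnr_gain_db(0.05))  # -12 dB, treated as noise (e.g., an idling engine)
print(dnr_gain_db(0.8))   # 0 dB, treated as speech
```

Because the rule only turns noise-dominated channels down, it improves comfort and listening effort rather than speech intelligibility, matching the limits described above.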
does DNR improve speech intelligibility
no
doesn’t improve speech intelligibility
Can improve listening comfort, reduce listening effort, reduce cognitive load