Lecture 9 - High Level Hearing Flashcards

1
Q

hearing in the cortex

A
  • A1 (primary auditory cortex) responds selectively to sound frequency
  • neurons in A1 show no precise spatial mapping of sound location
  • auditory cortex is divided into 3 subareas
    1. core - contains A1, projects to belt
    2. belt - contains secondary auditory areas, projects to parabelt
    3. parabelt - contains tertiary areas, higher order level processing of complex contents of sound, processing info from both core and belt
  • Rauschecker et al. (1995) measured neural responses to sounds more complex than simple pure-tone stimuli
    > neurons in the belt area showed a preference for complex sounds over pure tones
    > responses increased as the frequency range (bandwidth) of the sound expanded
  • Recanzone (2000) measured the spatial tuning of neurons in A1 vs the posterior belt. monkeys pressed a lever when a sound changed location; responses were assessed for changes in azimuth and elevation
    > spatial tuning in the belt is sharper than in A1
    > posterior belt neurons were more sensitive to horizontal (azimuth) changes in sound position and better predicted the monkeys' perception
  • Rauschecker and Tian (2000): separate pathways in auditory cortex, comparable to the ventral/dorsal dissociation in the human visual system
    1. 'where' path: posterior belt, projects to parietal and frontal lobes, localises sounds
    2. 'what' path: anterior belt, projects to temporal and frontal lobes, recognises sounds
  • evidence from Lomber and Malhotra (2008): cortical cooling to deactivate one of two areas (anterior or posterior auditory cortex) in cats
    > posterior deactivation = impaired precise spatial localisation, but no marked impairment in frequency/temporal pattern discrimination
    > anterior deactivation = no marked impairment in spatial localisation, but poorer frequency/temporal pattern discrimination
  • suggests a division between processing what a sound is and where it is
    > posterior areas for spatial localisation
    > anterior areas for sound-pattern identification
  • but auditory perception requires integration of both
2
Q

perceptual organisation

A
  • gestalt laws = the perceptual system defaults to the most likely interpretation of ambiguous input; similar grouping principles (e.g. proximity, similarity, good continuation) operate in audition as in vision
3
Q

auditory segmentation

A
  • the auditory system relies on cues tied to sound position, e.g. interaural time differences (ITDs) and spectral cues, which help when sound sources are in different locations (a worked ITD calculation follows this card's notes)
  • listeners are still able to segment sound sources that share the same spatial origin, so spatial cues cannot be the whole story
  • Bregman & Campbell (1971) identified auditory stream segregation: how acoustic information is segmented into streams affects performance on sound-judgement tasks
    > listeners could judge the order of notes accurately only within the same auditory stream
  • Bregman (1990), auditory scene analysis: the ability to group and segregate auditory input into discrete mental representations. argued that streams are segregated according to the perceptual distance between successive auditory components
  • based on the weighting of acoustic dimensions, e.g. time and frequency: the greater the perceptual similarity, the more likely components are combined into a single meaningful auditory object
  • perceptual coherence is often measured with ABA-ABA sequences, where A and B are tones of different frequency and the dash marks a silent pause (a generation sketch follows this card's notes). listeners judge whether they hear one stream or two. a temporal displacement of tone B becomes harder to detect as the frequency difference between A and B increases
  • the effect is stronger with longer sequences of sounds, suggesting:
    > the auditory system begins with the assumption of one stream (fusion)
    > fission (multiple streams) is perceived as evidence accumulates
  • is segmentation affected by attention?
  • Thompson presented an ABA triplet task to one ear and a distractor to the other; listeners judged whether tone B was temporally offset from A. accuracy was better in the unattended (switched) condition, where listeners performed a noise task for 10 s before switching to the ABA task
  • this suggests attention drives segregation: attending builds up the separation of A and B into two streams, and that segregation impairs the cross-stream timing judgement the ABA task requires
  • other ways of segregating auditory streams?
  • schema-based segregation: top-down, based on stored templates of sounds; distinct from primitive (bottom-up) segregation
    > Bey and McAdams (2002): performance is better when listeners hear the unmixed melody first
  • schema effects in segregation show that the brain can exploit knowledge of what to listen for
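
The ITD cue above can be made concrete with Woodworth's spherical-head approximation, ITD = (r/c)(theta + sin theta). A minimal Python sketch; the head radius and speed of sound are standard textbook values rather than figures from the lecture:

```python
import math

def itd_woodworth(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Approximate interaural time difference (in seconds) for a distant
    source at the given azimuth, using Woodworth's spherical-head model:
    ITD = r/c * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return head_radius_m / speed_of_sound * (theta + math.sin(theta))

# A source 90 degrees to one side gives an ITD of roughly 650 microseconds,
# close to the maximum that human head geometry allows.
print(f"{itd_woodworth(90) * 1e6:.0f} us")
```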
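The ABA-ABA streaming stimulus can likewise be sketched in a few lines. This assumes NumPy and a 44.1 kHz sample rate; the tone duration and the 6-semitone A-B separation are illustrative choices, not parameters from the lecture:

```python
import numpy as np

def aba_sequence(freq_a=500.0, df_semitones=6, tone_ms=100, n_triplets=4,
                 sample_rate=44100):
    """Build an ABA-ABA- ... galloping sequence (Bregman-style).
    The silent slot after each triplet is the pause; a larger
    df_semitones makes fission into two streams more likely."""
    freq_b = freq_a * 2 ** (df_semitones / 12)    # B is df semitones above A
    n = int(sample_rate * tone_ms / 1000)
    t = np.arange(n) / sample_rate
    tone = lambda f: np.sin(2 * np.pi * f * t)
    silence = np.zeros(n)
    triplet = np.concatenate([tone(freq_a), tone(freq_b), tone(freq_a), silence])
    return np.tile(triplet, n_triplets)

signal = aba_sequence()  # write to a WAV file to hear the gallop (or two streams)
```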
4
Q

speech perception

A
  • phonemes = units of speech which, when changed, can change the meaning of a word; they vary across languages
  • vowel sounds occur when air moves freely through the mouth and vocal tract. changing the shape of the mouth changes the resonances of the vocal tract
  • formants distinguish vowels
    > peaks in the frequency spectrum
    > the /ae/ vowel has formants at roughly three frequencies: 500 Hz, 1700 Hz and 2500 Hz
  • harmonics are integer multiples of the fundamental frequency; formants are differences in how strongly the vocal tract amplifies those harmonics, and they vary across people (a synthesis sketch follows this card's notes)
  • rapid changes in frequency on either side of a formant are formant transitions; these are cues to consonants
  • the acoustic signal for the same phoneme varies with context
  • coarticulation = the interaction between overlapping speech sounds
  • a further difficulty is variation between speakers, e.g. dialects
  • it is important to distinguish voiced and voiceless phonemes: voiced consonants involve vibration of the vocal cords
  • voice onset time (VOT) is a strong cue for distinguishing voiced from voiceless sounds (the time between the beginning of the sound and the onset of voicing)
  • experimentally manipulating VOT reveals the phonetic boundary (the point that separates two phonemes); labelling flips abruptly at the boundary (a psychometric sketch follows this card's notes)
  • listeners also use top-down knowledge to understand speech
  • Miller & Isard measured signal-to-noise ratio thresholds for three types of sentence (grammatical, anomalous and ungrammatical); participants had to repeat what they heard
  • the signal-to-noise ratio threshold is lower (better) for grammatical sentences: listeners expect speech to follow these rules
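
To make the harmonic/formant distinction concrete, here is a crude source-filter sketch: a harmonic series at integer multiples of a fundamental, with each harmonic weighted by its proximity to the /ae/ formant peaks quoted above. It assumes NumPy; the fundamental, bandwidth and gain function are illustrative assumptions, not the lecture's model:

```python
import numpy as np

def synth_vowel(f0=120.0, formants=(500, 1700, 2500), dur_s=0.5,
                sample_rate=44100, bandwidth=80.0):
    """Crude source-filter vowel: sum harmonics of f0 (integer multiples
    of the fundamental), weighting each by its closeness to the formant
    peaks, here the rough /ae/ formants from the notes."""
    t = np.arange(int(sample_rate * dur_s)) / sample_rate
    signal = np.zeros_like(t)
    for k in range(1, int(5000 / f0) + 1):        # harmonics up to ~5 kHz
        f = k * f0
        # resonance-like gain: large when harmonic f lies near a formant
        gain = sum(1.0 / (1.0 + ((f - fm) / bandwidth) ** 2) for fm in formants)
        signal += gain * np.sin(2 * np.pi * f * t)
    return signal / np.max(np.abs(signal))        # normalise to [-1, 1]

vowel = synth_vowel()  # its spectrum peaks near 500, 1700 and 2500 Hz
```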
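And a small sketch of how manipulating VOT reveals a phonetic boundary: labelling along a VOT continuum is modelled with a logistic psychometric function and flips category at the 50% point. The boundary value (25 ms) and slope are hypothetical illustration values, not data:

```python
import numpy as np

def proportion_voiceless(vot_ms, boundary_ms=25.0, slope=0.5):
    """Logistic psychometric function: probability of labelling a syllable
    as voiceless ('pa' rather than 'ba') as VOT increases. boundary_ms and
    slope are illustrative, not fitted to real data."""
    return 1.0 / (1.0 + np.exp(-slope * (vot_ms - boundary_ms)))

for vot in range(0, 61, 10):
    p = proportion_voiceless(vot)
    print(f"VOT {vot:2d} ms -> P(voiceless) = {p:.2f} ({'pa' if p > 0.5 else 'ba'})")
# The phonetic boundary is where the curve crosses 0.5: labels flip
# abruptly rather than gradually (categorical perception).
```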
5
Q

how do we segment the sound into different words

A
  • inspecting the waveform suggests no straightforward relationship between acoustic transitions and speech transitions: breaks in the signal do not correspond to word boundaries
  • often the same acoustic signal can be interpreted in very different ways
  • listeners rely on transitional probabilities between syllables to segment speech into words (a worked example follows this card's notes)
  • Saffran et al. (1996) showed this in 8-month-old infants: infants pay more attention to novel syllable sequences, showing they are sensitive to the transitional probabilities in the speech stimuli
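
A minimal sketch of the transitional-probability computation, TP(y|x) = count(xy) / count(x). The syllable stream below uses made-up words in the style of Saffran et al.'s stimuli; within-word pairs come out with high TPs and pairs spanning a word boundary come out low:

```python
from collections import Counter

def transitional_probabilities(syllables):
    """TP(y | x) = count(x followed by y) / count(x), over a syllable
    stream. Within words TPs are high; across word boundaries they drop,
    which is the statistical cue infants can exploit."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {pair: c / first_counts[pair[0]] for pair, c in pair_counts.items()}

# Continuous stream built from three made-up words: bidaku, golabu, tupiro
stream = "bi da ku go la bu tu pi ro go la bu bi da ku tu pi ro bi da ku".split()
for (x, y), tp in sorted(transitional_probabilities(stream).items()):
    print(f"{x} -> {y}: {tp:.2f}")
# Within-word pairs (bi->da, da->ku, ...) print 1.00; boundary pairs
# (ku->go, bu->tu, ...) print 0.50.
```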
6
Q

neural evidence

A
  • Broca's aphasia: slow, ungrammatical speech with mild impairment in understanding; frontal cortex
  • Wernicke's aphasia: fluent but incoherent speech with widespread impairment in understanding; temporal cortex
  • neuroimaging shows a voice-selective area in the superior temporal sulcus involved in speech processing and complex aspects of sound
  • dual-stream theory suggests:
  • speech recognition (ventral stream)
    > recognises and comprehends speech
    > maps sensory and phonological representations onto lexical-conceptual representations
    > ascribes meaning
  • speech production (dorsal stream)
    > supports production, repetition and phonological working memory
    > maps sensory or phonological representations onto articulatory-motor representations