4- Multi-sensory processing Flashcards
What are multisensory interactions?
interactions in which the brain combines bits of information across multiple sensory modalities; useful illusions can illustrate how we put this information together
Sensory processing:
Historically, how were senses viewed?
as separate ‘modules’,
-each operating independently to provide us with unique information about the external world.
This modular view has been fuelled by findings of distinct cortical areas specialised for processing different types of sensory input.
-However, there are also areas of overlap between cortical areas (e.g. those representing the spatial position of objects and the timing of events)
Finish the sentence:
Our perception is actually shaped by interactions between different … modalities from different cortical areas of the brain.
sensory
Name all of the sensory modality areas:
Multisensory interactions:
Which effect occurs when the auditory component of one speech sound is paired with the visual component of another, leading to the perception of a third, distinct sound?
eg. if you hear the sound “ba” while watching a video of someone saying “ga,” you might perceive the sound as “da” or “tha.”
what you hear depends on what you see
the McGurk effect
Multisensory reactions in spatial perception:
Which effect occurs when visual and auditory stimuli are presented simultaneously at different spatial locations, and the perceived location of the auditory stimulus is shifted towards the visual location?
Ventriloquists exploit this to create the impression that their voice is emanating from the dummy’s mouth. We also experience it whenever we go to the cinema or watch television
(eg. ventriloquists)
Spatial ventriloquism
-location of sound is pulled towards location of visual event
What does it mean that, in the ventriloquist effect, the perceived auditory location shifts towards the visual location, while the perceived visual location is largely unaffected by the auditory stimulus?
the effect is typically asymmetric
Multisensory reactions in temporal (time) perception:
What term is used to describe this process:
The perceived timing of visual stimuli is biased towards that of asynchronously presented auditory stimuli
The visual stimuli have little or no effect on the perceived timing of auditory stimuli
So the direction of the interaction is reversed
(eg. sequences of brief auditory tones/clicks, alongside flashing lights. Observer will experience the visual timing of the event as being pulled towards the nearest auditory event)
Temporal ventriloquism
Which hypothesis states that when sensory estimates disagree (e.g. a spatial or temporal mismatch, as in ventriloquism), the conflict should be resolved in favour of the modality that is most appropriate for the task at hand (i.e. we rely on whichever sense is better at making that particular judgement)?
Modality appropriateness hypothesis
According to the Modality appropriateness hypothesis:
… is typically much better than auditory spatial acuity, so vision ‘captures’ the spatial location of auditory stimuli.
Visual spatial acuity
According to the Modality appropriateness hypothesis:
… is typically much better than visual temporal acuity, so audition ‘captures’ the timing of visual stimuli.
Auditory temporal acuity
Which sensory modality has spatial and temporal acuity intermediate between those of audition and vision?
Touch
What rules govern multisensory interactions?
Recent research has shown that these rules are quite flexible, and depend on the balance of … for a given judgement
unimodal sensitivities
-For example, Alais & Burr (2004) measured participants’ ability to localise auditory clicks and visual blobs of different sizes
-Depending on blob size, visual sensitivity was better than (4°), equivalent to (32°), or worse than (64°) auditory sensitivity
-Next they measured the perceived location of pairs of spatially discrepant auditory and visual stimuli (i.e. the ventriloquist effect)
-They found the perceived location of the bimodal stimulus was determined by vision when small blobs were used (4°), but by audition when the visual stimulus was large and hard to localise (64°). When sensitivities were matched (32°), perceived location was mid-way between the two.
Is how we weight the different sensory cues fixed?
No
-the weighting is not fixed (e.g. vision does not always dominate spatial judgements); it also depends on each modality’s sensitivity at that particular moment in time
Which theory proposes that we form a weighted average of what our senses are telling us, and provides a framework for understanding the integration of sensory cues?
-proposes that the brain forms a weighted average of the estimates obtained from each sensory modality, with weights depending on each modality’s sensitivity (eg. giving vision more weight if it is more spatially sensitive than audition)
-theory can predict performance over a range of different tasks
Maximum Likelihood Estimation
eg. when an auditory stimulus is presented at one location, we make an estimate of the sound’s position, but that estimate will have some uncertainty associated with it.
If we also present a small visual stimulus, we make a visual estimate, but generally our uncertainty will be lower (we point to its location more consistently across different trials)
-Using the maximum likelihood formula, we can predict exactly where the combined auditory/visual stimulus will be localised; this combined estimate lies closer to the visual location (the cue with the lower uncertainty)
What is the use of the maximum likelihood formula?
It allows us to predict exactly where the combined auditory/visual stimulus will be localised when presented together,
making a best estimate nearer to the cue with the lower uncertainty
Maximum Likelihood Estimation:
Finish the sentence:
Larger weight is assigned to estimates with … uncertainty (when sensitivity is high).
(weight is inversely proportional to uncertainty)
low uncertainty
Maximum Likelihood Estimation:
Finish the sentence:
Smaller weight is assigned to estimates with … uncertainty (when sensitivity is low).
(weight is inversely proportional to uncertainty)
high uncertainty
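The inverse-proportionality rule above can be sketched numerically. A minimal Python sketch of MLE cue combination, assuming Gaussian noise on each unimodal estimate; the function name `mle_combine` and the locations/variances used are hypothetical illustration values, not data from the studies cited:

```python
def mle_combine(est_a, var_a, est_v, var_v):
    """Combine auditory and visual estimates by inverse-variance (MLE) weighting."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)  # auditory weight
    w_v = 1 - w_a                                # visual weight
    combined = w_a * est_a + w_v * est_v
    combined_var = 1 / (1 / var_a + 1 / var_v)   # always <= min(var_a, var_v)
    return combined, combined_var

# Hypothetical example: auditory estimate at 10 deg with high uncertainty,
# visual estimate at 2 deg with low uncertainty.
loc, var = mle_combine(est_a=10.0, var_a=4.0, est_v=2.0, var_v=1.0)
# The combined estimate (3.6 deg) is pulled towards the more reliable
# visual cue, and its variance (0.8) is lower than either unimodal variance.
```

Note how this captures both benefits listed below: the combined estimate sits between the two cues (resolving the discrepancy), and its variance is smaller than that of either cue alone (greater precision).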
Benefits of multisensory integration:
When a single event gives rise to sensory cues in different modalities, what are the two main benefits of combining these cues?
1) resolves discrepancies associated with internal (neural) and external (environmental) noise, thus helping to maintain a unified percept of the world
(noise arises as the sensory modalities pick up information from the environment; integration effectively filters it out, eg. we are not usually aware that sights and sounds from one event arrive at slightly different times)
2) increases the precision (i.e. reduce the uncertainty) of perceptual judgements
(the combined estimate we end up with is more precise and less uncertain compared to individual estimates themselves)
What is the name for the delays between when light and when sound from an event reach our senses?
intrinsic delays
Why do we need to be careful when combining information across multiple senses?
Combining cues is only beneficial if they relate to a common cause out there in the world
When sensory cues relate to different sources, integrating them would not be beneficial (and could be hazardous in some scenarios).
eg. Visual estimate of dog location
but Auditory estimate of meow location
=separate estimates should be formed based on each sensory modality.
In the scenario of having a visual estimate of dog location but having an auditory estimate of meow location, separate estimates should be formed based on each sensory modality.
But how does the brain decide when to integrate and when to segregate?
By looking at how different those estimates are via balancing the benefits and costs.
-Integration is restricted to multisensory signals that occur close together in space and time
Fill in the sentence:
Multisensory interactions occur when conflicting information presented to one sensory modality alter … presented to another sensory modality
perception of a stimulus
Fill in the sentence:
The … effect = visual lip movements alter the perception of auditory speech
McGurk effect