Lecture 4 Flashcards
(20 cards)
Production of facial expressions?
- Large repertoire of facial expressions
- Humans and apes produce graded facial expressions; monkeys produce discrete facial displays that are far more stereotyped, with less overlap between expressions
- Controlled by two different neural systems: the emotional motor system and the cortical control system differentially control the upper and lower parts of the face, and the cortical system has less control over the upper face than the lower, e.g. a Duchenne (genuine) smile can be told apart from a fake smile via the top half of the face
What is the universality of facial expressions?
- Darwin: emotional expressions are innate and universal - independent of culture and times
- Has been debated
- Ekman distinguishes six basic and universal expressions: happiness, anger, fear, sadness, disgust and surprise
- Current view on universality is less clear
What are the dimensions of facial expressions?
- Most of what we know about facial expression perception comes from studies of the six basic emotions
- Facial expressions are examined within a two-dimensional space of valence and arousal
What was a study looking at the six basic emotions being on two dimensions?
- Looked at how perceptual features of expression map onto the two dimensions that underpin core affect
- Performed a PCA (identifies a small number of factors that represent complex visual information)
- Looked at how the images differ along these factors; ratings of the faces along the dimensions of valence and arousal were correlated with the two main factors of the PCA
- Perceptual representation of facial expressions may mirror psychological representation of underlying emotions
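The PCA step above can be sketched as follows; the data and variable names are hypothetical stand-ins, not material from the study:

```python
import numpy as np

def pca_factors(images, n_factors=2):
    """Reduce flattened face images to scores on a few principal factors.

    images: (n_images, n_pixels) array; returns (n_images, n_factors).
    """
    centred = images - images.mean(axis=0)           # remove the mean face
    # SVD of the centred data gives the principal axes of pixel variation
    u, s, vt = np.linalg.svd(centred, full_matrices=False)
    return centred @ vt[:n_factors].T                # project onto top factors

# Toy data: 20 "images" of 100 pixels each
rng = np.random.default_rng(0)
imgs = rng.normal(size=(20, 100))
scores = pca_factors(imgs)

# In the study, such factor scores were correlated with valence and
# arousal ratings of the same faces (ratings here are random placeholders)
valence_ratings = rng.normal(size=20)
r = np.corrcoef(scores[:, 0], valence_ratings)[0, 1]
print(scores.shape, round(float(r), 3))
```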
How to separate identity and expression?
- Monkey electrophysiology showed that when a monkey viewed face images, some neurons fired when identity changed (inferior temporal cortex), others when facial expression changed (STS), and some neurons responded to both
- Acquired prosopagnosia: early studies suggested intact expression recognition - patients could not say who a face was, but could say what expression it showed
- Congenital prosopagnosia: some patients struggle with identity but show spared facial expression recognition
What was a study looking at acquired prosopagnosia and facial expression perception?
- Extremely heterogeneous pattern: the brain damage varies, and the perceptual pattern reflects that
- The few studies showing spared expression recognition may overestimate performance - no simple dissociation
- No double dissociation: if the two abilities were underpinned by separate mechanisms, some patients should be impaired on one and spared on the other, and vice versa - the reverse pattern does not exist
- This means the simple dissociation could result from differences in task difficulty
Congenital prosopagnosia and facial expression perception?
- Stronger evidence for a simple dissociation, but still no double dissociation
- Difficulty of using developmental conditions to infer typical brain functioning
- Dissociation logic does not apply because there is no typical system to begin with
What is the neuroimaging of identity and expression perception?
- Asks whether different parts of the brain respond to expression and identity
- Used an fMRI paradigm that changed the relationship between successively presented faces along either expression or identity: after the first face, the second face showed either the same identity with the same/different expression, or a different identity with the same/different expression
- Looked at FFA, posterior STS and mid-STS
- Compared conditions where facial identity stayed the same (independently of emotion) with conditions where it changed: when identity changed, the FFA responded more to the second face, so the FFA responds to facial identity; the posterior STS showed a similar pattern
- The same effect was not seen for facial expression in these areas
- The mid-STS was the mirror image: it responded to changes in facial expression independently of facial identity
- Later studies have not replicated this finding
What did Ganel do that contradicted the neuroimaging study?
- Participants viewed blocks of face images and attended either to identity or to expression
- In blocks where identity was attended, expression also changed, and vice versa for expression blocks
- When participants attended to identity, FFA activity was larger when facial expression varied than when it did not, and the same was true for the STS - even when people focus on identity, both areas respond to changes in facial expression
- The FFA responds to both, but the STS responds mostly to facial expressions
What did Fox do that contradicted the neuroimaging study? (Morphs)
- Used face morphs, either from one identity to another or gradual changes from one emotion to another; we perceive such gradual changes categorically, as seen with the Marilyn-to-Maggie morph
- Presented two faces and asked what happens when people perceive them as different identities or as different emotions
- Measured the response to the second image, depending on whether it was perceived as another identity, and likewise for facial expression
- The FFA's response to the second face tracked the subjective perception of both kinds of change, so the FFA is sensitive to both identity and expression
- The mid-STS showed a pattern indicating that facial expression is its main interest
What is behavioural performance? (Is there a relationship between identity and expressions)
- Asks whether there is a close relationship between people's ability to recognise facial identities and to discriminate facial expressions
- Measured identity recognition ability and an emotion matching task, finding a moderate relationship
- A second, more detailed study showed a strong relationship between people's ability to recognise facial identities and to perceive facial expressions, suggesting shared mechanisms underpin both abilities
Are facial expressions coded by a norm-based code?
- Varied the distance between adaptor and target to contrast two models: as you move the adaptor to make it more extreme, the aftereffect should become larger according to the norm-based model, whereas an example-based model predicts the exact opposite
- Started with normal facial expressions, generated average faces with average facial expressions, and created anti-faces (the same difference from the average, but in the opposite direction)
- Before the experiment, asked participants which faces they would expect to see in the real world
- Adapted participants to anti-faces and looked at their perception of the average faces
- The more extreme the adaptor becomes, the more strongly the average face should be perceived as the expression opposite to the adaptor, according to the norm-based model
- Both experiments found this result: the size of the aftereffect depends on the extremity of the adaptor
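The contrasting predictions of the two models can be sketched with toy numbers (purely illustrative, not data from the experiments):

```python
# How far the anti-face adaptor sits from the average face (arbitrary units)
extremity = [1.0, 2.0, 3.0]

# Norm-based model: aftereffect grows as the adaptor becomes more extreme
norm_based = [0.2 * d for d in extremity]

# Example-based model: the opposite prediction - aftereffect shrinks
example_based = [0.6 / d for d in extremity]

print(norm_based, example_based)
```

The experiments found the increasing pattern, supporting the norm-based account.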
Expression-specific mechanisms?
- Adaptation studies suggest a unified framework to represent facial expressions
- Contrasts with category-based approach to emotion and facial expression perception
- Is loosely consistent with dimensional account of emotion and facial expression perception
- More complicated picture: there are also perceptual mechanisms and neural structures associated with processing specific expressions
How to decode facial expressions? (bubble task)
- Asks what information we use to recognise different facial expressions: for identities we use the whole face - is the same true for facial expressions?
- Show participants a face through a mask with openings, so they can only see parts of the face, and ask them to perform a task, e.g. is this a happy or a neutral face
- Performing the task is easier when the visible parts include the diagnostic features of the emotion
- Across many trials with different windows, separate out the trials where the participant correctly identified the expression, and combine all the masks from those correct trials
- The masks from correct trials mainly revealed the mouth
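The logic of averaging masks from correct trials can be sketched as follows; the behaviour is simulated and the mouth region, probabilities, and sizes are all made-up assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, h, w = 500, 16, 16

# Assumed diagnostic region: the mouth (bottom rows of a 16x16 face)
diagnostic = np.zeros((h, w), dtype=bool)
diagnostic[12:, 4:12] = True

# Random aperture masks: True where the face is visible on that trial
masks = rng.random((n_trials, h, w)) < 0.2

# Simulated behaviour: trials revealing more of the mouth are correct more often
reveals = masks[:, diagnostic].mean(axis=1)
correct = rng.random(n_trials) < np.where(reveals > reveals.mean(), 0.9, 0.4)

# Classification image: average mask over correct trials minus the average
# over all trials; positive values mark the informative regions
class_image = masks[correct].mean(axis=0) - masks.mean(axis=0)
print(class_image[diagnostic].mean() > class_image[~diagnostic].mean())
```

Because correct trials disproportionately revealed the mouth, the classification image peaks there, mirroring the study's finding.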
What was a more complex decoding experiment?
- Start with a face, but rather than windows of one size, vary the spatial frequency of the windows - either small or larger
- Develop normalised masks that together contain all spatial frequencies
- Used this to look at different facial expressions: we use different features for different expressions
- Correlating the masks across all expressions showed that the features we use to recognise facial expressions are uncorrelated and thus very distinct
How to encode emotion and its decoding?
- First there is an emotion; the brain encodes it so it can be presented to others, and others infer it, i.e. decode it
- Used a computational model that gets the same inputs as humans and has to do the same tasks, with access to all the information needed to solve them optimally
- The model compares the similarity between masked and unmasked images to perform the task
- The ideal observer model shows which features should be used to perform well on the task: not identical to the features humans use, but the two are relatively closely correlated
- Encoding and decoding are closely linked
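A minimal sketch of such an ideal observer, assuming similarity is measured by pixel correlation over the visible region (the templates and mask here are toy data):

```python
import numpy as np

def ideal_observer(masked_face, mask, templates):
    """Classify a masked face by correlating its visible pixels with each
    unmasked template; return the index of the best-matching expression."""
    visible = mask.astype(bool).ravel()
    scores = [np.corrcoef(masked_face.ravel()[visible],
                          t.ravel()[visible])[0, 1] for t in templates]
    return int(np.argmax(scores))

rng = np.random.default_rng(2)
happy = rng.normal(size=(8, 8))      # toy "happy" template
fear = rng.normal(size=(8, 8))       # toy "fear" template
mask = rng.random((8, 8)) < 0.5      # random apertures, as in the bubbles task

# A masked "happy" image still correlates best with the happy template
label = ideal_observer(happy, mask, [happy, fear])
print(label)
```

Which features drive the model's correct responses can then be compared against the features humans rely on.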
What is the background for fear perception?
- Amygdala damage leads to specific difficulty in recognising facial expressions of fear
- Consistent with PET/fMRI studies
- Amygdala important in coding fearful facial expressions
- Patient SM had bilateral amygdala damage and used the map/bubble method
- Does the patient have a problem with fear perception per se, or do they fail to use the correct features of the face?
- Compared healthy controls to SM: the control group used the eyes to recognise fear but SM did not - also seen in eye-tracking studies
- The amygdala's role is to help us orient to salient social information: without it, we do not pick up the key information from the face needed to recognise a fearful facial expression
What is the background for disgust perception?
- Huntington’s disease affects striatum in basal ganglia
- Leads to specific difficulty in recognising disgust facial expressions
- Consistent with imaging studies that show striatum and insula activation in response to disgust facial expression
- Insula supports idea that disgust is evolutionarily linked to distaste
What is multimodal emotion perception?
- Difficulties in prosopagnosia are face-specific: recognising identity through other cues, such as voices, is spared
- Emotion perception is multimodal: patient NM with bilateral amygdala damage shows difficulty in recognising fear in faces, body postures and emotional sounds
- HD patients show difficulty recognising disgust in faces and emotional sounds - the deficit is multimodal and does not affect only faces
- Context is important: facial expression perception is affected by context and links to higher-level social cognition and mental state attribution - the main purpose of reading faces is to understand others' emotions
Perception and experience of emotions:
- There is a link between ability to perceive emotions in others and experiencing own emotions
- Patient NK, with highly focal damage to the basal ganglia and insula, had multimodal difficulties in perceiving and experiencing disgust
- Patient NM, with bilateral amygdala damage, had multimodal difficulties in perceiving and experiencing fear
- Simulation of others' sensorimotor and somatosensory states might be an important part of recognising emotions: we use our own emotional states to simulate what others feel