exam 1 Flashcards

(115 cards)

1
Q

inattentional blindness

A

failing to see objects when attention is focused elsewhere (e.g. missing a gorilla in a video)

2
Q

change blindness

A

failing to notice changes in a scene due to attention limitations

3
Q

what are ensemble statistics in the nonselective pathway

A

the nonselective pathway extracts overall scene properties (e.g. color, texture, layout)

4
Q

what is attentional extinction (due to parietal damage)

A

inability to notice stimuli on the neglected side when competing stimuli are present

5
Q

what is attentional neglect (due to parietal damage)

A

ignoring one side of space (often left) after right parietal damage

6
Q

how do neurons implement attention

A
  • enhancement
  • sharper tuning
  • altered tuning
7
Q

enhancement (neurons implementing attention)

A

stronger response to attended stimuli

8
Q

sharper tuning (neurons implementing attention)

A

more precise focus on relevant features

9
Q

altered tuning (neurons implementing attention)

A

changes in preferred stimulus properties

10
Q

how does attention affect neural activity?

A
  • enhances firing rates of neurons responding to attended stimuli
  • suppresses irrelevant stimuli
11
Q

attentional blink

A

a gap in perception when detecting two targets in rapid succession (the second target is often missed if it appears 200-500 ms after the first)

12
Q

rapid serial visual presentation (RSVP) paradigm

A

stimuli appear in quick succession at the same location (used to study the attentional blink and the limits of perception)

13
Q

feature integration theory (FIT)

A

features (color, shape) are processed separately and must be bound together by attention

14
Q

the binding problem

A

how does the brain combine features into a unified perception?

15
Q

what is guidance in visual search?

A

attention is guided by salient features and prior knowledge

16
Q

feature search

A
  • fast, parallel
  • target “pops out”
17
Q

conjunction search

A
  • slower, serial
  • target shares features with distractors
18
Q

spatial configuration search

A
  • even slower
  • requires recognizing relationships
19
Q

visual search paradigm

A

a task where participants find a target among distractors. The search efficiency is measured by reaction time vs. number of distractors
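
To make "search efficiency" concrete: plot RT against set size and fit a line; the slope (ms per item) is the efficiency measure (near zero for pop-out feature search, much steeper for conjunction search). Below is a minimal sketch with made-up RT values; the numbers and variable names are illustrative, not data from the course.

```python
import numpy as np

# Hypothetical mean reaction times (ms) for displays of 4, 8, 16, 32 items
set_sizes = np.array([4, 8, 16, 32])
rt_feature = np.array([452, 455, 451, 458])       # "pop-out" search: flat RTs
rt_conjunction = np.array([520, 610, 790, 1150])  # serial-like search: steep RTs

for label, rts in [("feature", rt_feature), ("conjunction", rt_conjunction)]:
    slope, intercept = np.polyfit(set_sizes, rts, 1)  # RT = intercept + slope * set size
    print(f"{label} search: {slope:.1f} ms/item (intercept {intercept:.0f} ms)")
# Near-zero slope -> efficient, parallel search; steep slope -> inefficient, serial-like search
```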

20
Q

the “spotlight” metaphor of attention

A

attention is like a beam of light, enhancing what it illuminates. However, attention can split, shift, or diffuse, challenging the metaphor

21
Q

differences between endogenous and exogenous attention cues

A
  • endogenous –> voluntary, based on goals (e.g. looking for a friend in a crowd)
  • exogenous –> involuntary, driven by sudden stimuli (e.g. flashing light)
22
Q

what is the Posner cueing task

A

a test where a cue directs attention to a location before a target appears
- valid cues (correct location) –> faster responses
- invalid cues (incorrect location) –> slower responses
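
The size of this cueing effect is usually summarized as the validity effect: mean invalid-cue RT minus mean valid-cue RT. A tiny sketch with invented reaction times:

```python
# Toy Posner cueing analysis; reaction times (ms) below are made up for illustration.
from statistics import mean

valid_rts = [310, 295, 320, 305, 300]
invalid_rts = [355, 370, 340, 365, 350]

validity_effect = mean(invalid_rts) - mean(valid_rts)
print(f"validity effect: {validity_effect:.0f} ms")
# A positive effect (slower responses after invalid cues) is the behavioral signature
# that attention had already shifted to the cued location before the target appeared.
```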

23
Q

are different kinds of scenes processed differently

A

Yes, the brain categorizes scenes (e.g. urban vs. natural). Some areas, like the PPA (parahippocampal place area), specialize in scene recognition

24
Q

how much do we actually notice or remember from what we see

A
  • very little detail (most information is filtered out)
  • we retain the gist, rather than specific details
  • change blindness and inattentional blindness reveal our limited awareness
25
if we can only attend to a few things at once, why does the world seem rich and detailed?
- "gist perception" allows us to quickly understand scenes - the brain fills in missing details based on experience - rapid shifts of attention create an illusion of full perception
26
what changes in the brain when we "pay attention"?
  • increased activity in visual and parietal cortex
  • enhanced neural responses to attended stimuli
  • suppression of irrelevant stimuli
27
how do we find what we are looking for?
  • we use visual search and guidance from memory and expectations
  • the brain prioritizes salient features (e.g. color, shape)
  • top-down (goal-directed) and bottom-up (stimulus-driven) processes guide search
28
is attention really like a spotlight?
Yes, in some ways (it selects specific locations, objects, or features). But not exactly (attention can split, diffuse, or track multiple things at once)
29
"zoom lens" model of attention
suggests attention can widen/narrow as needed
30
why can't we process everything at once?
  • the brain has limited cognitive resources and selects relevant information
  • too much information would cause overload and slow down processing
  • attention prioritizes what's most important for survival or goals
31
visual problems that lead to abnormal development and stereoblindness
  • strabismus (misaligned eyes)
  • amblyopia (lazy eye)
  • congenital cataracts
32
strabismus
  • misalignment of the eyes (crossed or wandering eye)
  • the misalignment disrupts stereopsis
33
amblyopia
the weaker eye ("lazy eye") is suppressed in the brain
34
congenital cataracts
block visual input, preventing normal depth perception development
35
stages in normal development of stereopsis
  • birth: no stereopsis
  • 3-6 months: binocular coordination improves
  • 4-6 months: stereopsis emerges
  • childhood: refined depth perception
36
stereopsis
the ability of the brain to combine the two images of the same object, one perceived by each eye, into a single 3D image (depth perception)
37
binocular rivalry
when two different images are shown to each eye, perception switches between them
38
suppression
the brain ignores conflicting information from one eye (e.g. in amblyopia)
39
how can misapplying depth cues lead to illusions
  • Ames room (distorted size due to forced perspective)
  • Ponzo illusion (identical lines look different in length because converging lines imply depth)
  • hollow-face illusion (the brain expects convex faces)
40
how does the Bayesian approach apply to depth perception
the brain combines prior knowledge with sensory input to infer depth (helps resolve ambiguous or conflicting depth cues)
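
One standard way to formalize this is reliability-weighted cue combination: the prior and each depth cue are treated as noisy estimates and averaged with weights proportional to their reliability (1/variance). The sketch below uses invented numbers and cue names purely to illustrate the arithmetic, not an actual model fit.

```python
# Reliability-weighted ("Bayesian", Gaussian-assumption) combination of depth estimates.
# All numbers are invented for illustration.
estimates = {
    "prior (typical viewing distance)": (1.5, 0.8),  # (distance in m, variance)
    "binocular disparity":              (2.0, 0.1),  # reliable cue -> small variance
    "texture gradient":                 (2.6, 0.4),  # less reliable cue
}

weights = {name: 1.0 / var for name, (_, var) in estimates.items()}
total_weight = sum(weights.values())
combined = sum(weights[name] * est for name, (est, _) in estimates.items()) / total_weight

print(f"combined depth estimate: {combined:.2f} m")
# The most reliable source (disparity) dominates; the prior matters most when the cues are noisy.
```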
41
physiological basis of stereopsis and depth perception
  • binocular neurons in V1 and beyond detect disparity
  • MT (middle temporal area) processes motion and depth cues
  • parietal cortex integrates depth information for spatial awareness
42
what is the correspondence problem in stereoscopic vision
the brain must match corresponding points in the two eyes' images; this is solved by:
  • feature matching (edges, textures)
  • global processing (analyzing whole scenes)
43
how do stereoscopes and stereograms create depth?
  • present slightly different images to each eye to mimic binocular disparity
  • the brain fuses them, creating 3D perception
44
retinal disparity
differences in images between right and left eyes
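
For a rough sense of how disparity relates to distance, a camera-style triangulation (depth = baseline × focal length / disparity) is often used; this is a simplified stand-in, not how the brain literally computes depth, and all values below are assumed.

```python
# Camera-style triangulation: depth = baseline * focal_length / disparity.
# Values below are assumed for illustration (not physiological constants).
baseline_m = 0.063        # separation between the two "eyes"/cameras, ~6.3 cm
focal_length_px = 800.0   # focal length expressed in pixels
for disparity_px in (40.0, 20.0, 10.0):
    depth_m = baseline_m * focal_length_px / disparity_px
    print(f"disparity {disparity_px:4.0f} px -> depth {depth_m:.2f} m")
# Larger disparity -> nearer object; as disparity shrinks toward zero the depth estimate
# blows up, which is why stereopsis is most precise for nearby objects.
```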
45
crossed disparity
produced by objects closer than the fixation point (the eyes would need to converge further, i.e. cross, to fixate them)
46
uncrossed disparity
produced by objects farther than the fixation point (the eyes would need to diverge, i.e. uncross, to fixate them)
47
accommodation & convergence as triangulation cues
  • accommodation = the lens adjusts shape to focus on objects at different distances
  • convergence = the eyes rotate inward for near objects, outward for far objects
48
motion parallax as a triangulation cue
as you move, near objects move faster across your vision than far objects. The brain uses this difference to estimate depth and distance
49
difference between metrical and nonmetrical depth cues
  • metrical cues provide exact distance (e.g. binocular disparity)
  • nonmetrical cues provide relative depth but not exact distance (e.g. occlusion)
50
recognizing pictorial depth cues
  • occlusion
  • relative size
  • linear perspective
  • texture gradient
  • shading & shadows
51
occlusion (pictorial depth cues)
objects blocking others appear closer
52
relative size (pictorial depth cues)
smaller objects appear further away
53
linear perspective (pictorial depth cues)
converging lines suggest depth
54
texture gradient (pictorial depth cues)
texture elements appear smaller and more densely packed with distance; coarser, more widely spaced texture appears closer
55
shading and shadows (pictorial depth cues)
suggest 3D shape and position
56
how does 3D vision develop?
  • infants rely more on monocular cues at birth
  • binocular coordination develops around 3-6 months
  • stereopsis emerges between 4-6 months
57
how does the brain compute binocular depth cues
  • uses binocular disparity
  • disparity is processed in the visual cortex to determine depth
58
how does the brain combine monocular depth cues
  • uses relative size, interposition, texture gradient, motion parallax, shading, and perspective
  • integrates multiple cues for consistency in depth perception
59
why do we have two eyes
  • provides binocular disparity, which enhances depth perception (stereopsis)
  • expands the field of view
  • offers redundancy (one eye compensates if the other is damaged)
60
how does the brain reconstruct a 3D world from 2D retinal images?
  • the brain combines monocular and binocular depth cues
  • uses perspective, shading, texture gradients, motion, and disparities between the two eyes' images
  • relies on experience and prior knowledge (Bayesian inference)
61
how does color influence perceived flavor?
  • expectation effect (we expect a red drink to be cherry-flavored)
  • cross-modal interactions (the brain combines visual and taste signals)
  • marketing impact (people rate foods as tasting better when they have expected colors)
62
why is color vision useful?
  • helps in finding food (ripe vs. unripe)
  • aids in recognizing objects & emotions
  • improves camouflage detection
63
color constancy and how it works
  • the brain adjusts perceived color to remain consistent under different lighting
  • uses context, memory, and lighting cues
64
predicting negative afterimage colors
after staring at a color, the opponent color appears in the afterimage (red --> green, and vice versa)
65
how can context influence color perception?
  • color contrast (a color looks different depending on surrounding colors)
  • color constancy (the brain adjusts for lighting to keep colors looking stable)
66
what is synesthesia?
a condition where one sense triggers another (e.g. seeing colors when hearing sound)
67
forms of anomalous color vision
  • protanopia
  • deuteranopia
  • tritanopia
68
protanopia
missing L-cones (red-green deficiency)
69
deuteranopia
missing M-cones (red-green deficiency)
70
tritanopia
missing S-cones (blue-yellow deficiency)
71
does language influence color perception
some languages have fewer/more color words, which may affect perception (e.g. Russian has separate words for light and dark blue)
72
opponent color theory
it states that colors are perceived in opposing pairs (red-green, blue-yellow)
73
color cancellation experiments
adding a color cancels out its opponent (e.g. adding green cancels red)
74
how is 3D color space represented?
  • RGB (red, green, blue) is used in screens and digital media
  • HSB (hue, saturation, brightness) is used in color design
  • CIE color space is a more precise mapping of all visible colors
75
four ways cone outputs are pitted against each other in cone-opponent cells
  • L-M (red-green opponent) compares long- and medium-wavelength cones
  • M-L (green-red opponent) is the opposite of L-M
  • S-(L+M) (blue-yellow opponent) compares short-wavelength cones to the sum of L and M
  • L+M (luminance channel) measures overall brightness
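
These four comparisons are just sums and differences of cone responses, which is easy to sketch. The cone activation values below are made up; this illustrates the bookkeeping, not a physiological model.

```python
# Toy cone-opponent channels from assumed cone activations (arbitrary units).
L, M, S = 0.8, 0.5, 0.2  # made-up responses of L-, M-, and S-cones

channels = {
    "L - M (red vs. green)":         L - M,
    "M - L (green vs. red)":         M - L,
    "S - (L + M) (blue vs. yellow)": S - (L + M),
    "L + M (luminance)":             L + M,
}
for name, value in channels.items():
    print(f"{name}: {value:+.2f}")
# Positive L - M signals "reddish", negative signals "greenish"; S - (L + M) works the same
# way for blue vs. yellow, and L + M carries overall brightness.
```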
76
additive color mixing
  • process: mixing light (e.g. red + green = yellow)
  • example: computer screen
77
subtractive color mixing
  • process: mixing pigments (e.g. cyan + yellow = green)
  • example: paint
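
A rough way to see the difference between the two processes in code: additive mixing adds RGB light channels, while subtractive mixing (pigments/filters) roughly multiplies how much of each channel gets through. This is a crude sketch, not a colorimetric model.

```python
# Rough sketch of additive vs. subtractive mixing with RGB values in [0, 1].
def additive(c1, c2):
    # Mixing lights: channels add (clipped at 1).
    return tuple(min(a + b, 1.0) for a, b in zip(c1, c2))

def subtractive(c1, c2):
    # Mixing pigments/filters: each pigment passes only part of the light,
    # so transmittances multiply channel by channel (a crude approximation).
    return tuple(a * b for a, b in zip(c1, c2))

red, green = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
cyan, yellow = (0.0, 1.0, 1.0), (1.0, 1.0, 0.0)

print(additive(red, green))       # (1.0, 1.0, 0.0) -> yellow light
print(subtractive(cyan, yellow))  # (0.0, 1.0, 0.0) -> green pigment
```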
78
Young-Helmholtz trichromatic theory of color vision
color vision results from three cone types corresponding to different wavelengths. The brain compares their responses to determine color
79
principle of univariance
a single photoreceptor cannot distinguish changes in wavelength from changes in intensity; its response reflects only the total amount of light it absorbs
80
principle of metamers
two different light spectra can produce the same color perception because they stimulate cones the same way
81
spectral sensitivities of the three cone types
  • S-cones: blue, ~420 nm peak
  • M-cones: green, ~530 nm peak
  • L-cones: red, ~560 nm peak
82
three types of cones that contribute to color vision
  • S-cones: short wavelengths, blue
  • M-cones: medium wavelengths, green
  • L-cones: long wavelengths, red
83
three steps to color perception
  1. detection
  2. discrimination
  3. appearance
84
detection (color perception)
cones in the retina detect light
85
discrimination (color perception)
brain compares signals from different cones
86
appearance (color perception)
brain interprets and assigns stable color
87
if an orange looks green, will it taste different?
color affects the expectation of taste, but the actual taste depends on the chemical composition; still, perception can be influenced by color, making us think it tastes different
88
why does an orange look orange in real life and in a photo, even though the physical basis is different?
in real life, we see the orange due to wavelengths of light reflecting off it. In a photo, the camera captures those wavelengths and translates them into digital color signals that our screen displays. The brain interprets both using color constancy mechanisms
89
does everyone see the same color?
most people see similar colors, but color vision varies due to genetics, lighting, and experience; it is also affected by color blindness and by cultural & linguistic differences
90
what is color for?
color vision helps us detect and recognize objects more effectively. It enhances contrast between objects and backgrounds
91
attention
a large set of selective mechanisms that enable us to focus on some stimuli at the expense of others
92
why are faces a special case of object recognition
they are processed holistically (not as separate features). The FFA (fusiform face area) is specialized for face perception
93
object labels at different levels of description

  • superordinate = "animal"
  • entry-level = "dog"
  • subordinate = "golden retriever"
94
strengths and weaknesses of object recognition models
pandemonium model:
  • strength = explains feature-based recognition
  • weakness = too simple for complex objects
template model:
  • strength = works well for specific images
  • weakness = fails with variations
structural description model:
  • strength = describes objects as 3D parts (geons)
  • weakness = doesn't explain textures, lighting effects
deep neural networks:
  • strength = excellent for real-world images
  • weakness = requires large datasets
95
subtraction methods in the brain
compares brain activity with and without a stimulus
96
decoding methods in the brain
uses machine learning to interpret brain activity
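
As a toy illustration of decoding (not a real fMRI pipeline): train a classifier to tell two stimulus conditions apart from simulated "voxel" patterns; above-chance cross-validated accuracy means the patterns carry information about the stimulus. Assumes numpy and scikit-learn are available; all data below are simulated.

```python
# Toy "decoding" demo on simulated voxel patterns (requires numpy and scikit-learn).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 100, 50

# Condition A and condition B evoke slightly different mean patterns plus trial noise.
pattern_a = rng.normal(0.0, 1.0, n_voxels)
pattern_b = pattern_a + rng.normal(0.0, 0.5, n_voxels)
X = np.vstack([pattern_a + rng.normal(0, 1, (n_trials, n_voxels)),
               pattern_b + rng.normal(0, 1, (n_trials, n_voxels))])
y = np.array([0] * n_trials + [1] * n_trials)

acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
print(f"decoding accuracy: {acc:.2f} (chance = 0.50)")
```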
97
receptive field properties of neurons that process objects and faces
  • IT cortex (recognizes complex shapes and objects)
  • FFA (specialized for faces)
98
Bayesian approach in perception
the brain combines prior knowledge and sensory input to interpret the world
99
methods the visual system uses to deal with occlusion
  • edge interpolation (filling in missing parts)
  • surface completion (assuming hidden parts continue)
  • contour continuation (using Gestalt rules)
100
define figure-ground assignment
determines what is the object (figure) and what is the background (ground)
101
principles of figure-ground assignment
  • surroundedness (the enclosed region is usually the figure)
  • size (smaller objects are typically figures)
  • symmetry (symmetric shapes are often figures)
102
accidental viewpoints in perception
occurs when an object aligns in a way that misleads perception (e.g. a person standing in front of the Eiffel tower may appear to be holding it)
103
Gestalt psychology
a school of psychology whose principles state that perception is organized by innate grouping rules (the whole is more than the sum of its parts)
104
Gestalt principles
  • proximity (close things group together)
  • similarity (similar things group together)
  • closure (we fill in the missing information)
  • good continuation (we see smooth, continuous lines)
105
define midlevel (or middle) vision
the stage between low-level (edges, contrast) and high-level (objects, faces) processing. It organizes elements into coherent shapes
106
challenges in object recognition for the visual system
  • viewpoint changes (objects look different from different angles)
  • occlusion (objects can be partially hidden)
  • lighting variations (shadows can distort appearance)
107
feed-forward processing
information flows one way (retina --> V1 --> IT)
108
reverse-hierarchy theory
higher areas can send feedback signals to refine early processing
109
visual agnosia
a condition where a person cannot recognize objects despite normal vision (caused by damage to the ventral stream, inferior temporal cortex)
110
dorsal pathway
- "where/how" - location and action - parietal lobe - speed is fast - damage causes optic ataxia (impaired grasping)
111
ventral pathway
- "what" - object recognition - temporal lobe - speed is slower - damage causes visual agnosia
112
concept of border ownership
cells in V2 assign edges to specific objects rather than seeing them as just standalone lines
113
differences between extrastriate cortex and striate cortex
  • striate cortex = V1, processes basic features (edges, orientation)
  • extrastriate cortex = V2-V5, processes higher-level aspects (shapes, motion, object identity)
114
how could a computer recognize objects?
  • feature detection
  • template matching
  • deep learning
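
A minimal sketch of the template-matching idea using plain numpy: slide a small template over a synthetic image, score each location by normalized correlation, and take the best match. Real recognition systems (and the deep-learning approach) are far more robust to viewpoint, lighting, and occlusion; everything here is illustrative.

```python
# Minimal template matching by normalized correlation (numpy only; synthetic data).
import numpy as np

rng = np.random.default_rng(1)
image = rng.random((20, 20))          # synthetic "image"
template = image[5:9, 12:16].copy()   # the patch we will search for

def norm_corr(patch, tmpl):
    # normalized cross-correlation between two equal-sized patches
    p = patch - patch.mean()
    t = tmpl - tmpl.mean()
    denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
    return (p * t).sum() / denom if denom > 0 else 0.0

h, w = template.shape
scores = np.array([[norm_corr(image[r:r + h, c:c + w], template)
                    for c in range(image.shape[1] - w + 1)]
                   for r in range(image.shape[0] - h + 1)])
best_row, best_col = np.unravel_index(scores.argmax(), scores.shape)
print(f"best match at row {best_row}, col {best_col}")  # recovers row 5, col 12
```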
115