Brightness/Lightness and Edges LOs Flashcards

1
Q

What are the units for quantifying light, and what terms describe our experience of it?

A

Lumen = light produced by a standard candle, describes radiance (radiant power emitted by a light source)

Lux = 1 lumen per square meter of area, describes illuminance (amount of light falling on a surface)

Nit = 1 candela per square meter of area, describes luminance (amount of light reflected from a surface)

Percent reflectance / albedo = (luminance / illuminance) x 100, describes reflectance (the proportion of light reflected from a matte, non-glossy surface); a worked example follows at the end of this card

Brightness is our perceptual impression of the intensity of a light source

Lightness is our perceptual impression of surface greyness (the psychological counterpart to reflectance)
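
A worked example with made-up numbers, using the card's formula as written (real photometry adds a geometric factor, ignored here): if 200 lux of illuminance falls on a matte surface and its luminance is 50 nits, then reflectance = 50 / 200 x 100 = 25% (an albedo of 0.25). Doubling the illumination to 400 lux doubles the luminance to 100 nits, but the reflectance stays 25%, which is why lightness can stay constant across lighting changes.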

2
Q

How does the visual system arrive at lightness constancy?

A

Ratio principle: the percentage (ratio) of light reflected, rather than the absolute amount, determines perceived lightness (see the sketch at the end of this card)

Lightness constancy: the ability to perceive the true reflectance properties of an object no matter what the illumination is

To filter out all but the correct interpretation of reflectance, we use properties that are generally true of the world (natural constraints)

Retinex theory is based on two natural constraints:
Variations in illumination are gradual within an object

Changes in reflectance are abrupt between objects, and this abrupt change defines an edge

Feature detectors act like filters: they match a point of light across several positions in the visual field
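
A minimal Python sketch of the ratio principle under these constraints. The reflectance and illumination values (and the `luminance` helper) are made up for illustration, not taken from the lectures: absolute luminances change with illumination, but the ratio across surfaces does not, so lightness judged from ratios stays constant.

```python
# Ratio principle sketch: luminance reaching the eye = reflectance * illumination.
# Reflectance and illumination values below are made up for illustration.

def luminance(reflectance, illumination):
    return reflectance * illumination

paper, coal = 0.90, 0.05                      # reflectances (proportion reflected)
for illum in (100.0, 10000.0):                # dim room vs. bright sunlight
    lp, lc = luminance(paper, illum), luminance(coal, illum)
    # Absolute luminances change 100-fold, but the paper/coal ratio is constant,
    # so lightness judged from the ratio does not change (lightness constancy).
    print(f"illumination={illum:8.0f}  paper={lp:7.1f}  coal={lc:6.1f}  ratio={lp/lc:.0f}")
```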

3
Q

How do natural constraints in brightness perception relate to the detection of edges?

A

Variations in illuminance are gradual within an object

Changes in reflectance are abrupt between objects (defines an edge)

Convolution can help detect edges: it will not signal an edge for a gradual luminance change, but an abrupt luminance change produces a strong response and is detected as an edge (see the sketch below)
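
A minimal 1-D illustration of this point (a Python/NumPy sketch with made-up values; the simple difference kernel is an illustrative choice, not course material): the same kernel barely responds to a gradual ramp but responds strongly at an abrupt step.

```python
import numpy as np

# 1-D luminance profiles: a gradual ramp (slow illumination change within an
# object) and an abrupt step (reflectance change between objects).
ramp = np.linspace(0.2, 0.8, 50)
step = np.concatenate([np.full(25, 0.2), np.full(25, 0.8)])

# A simple difference kernel; a large output marks an "edge".
kernel = np.array([-1.0, 0.0, 1.0])

# mode="valid" avoids spurious responses from zero-padding at the ends.
ramp_edges = np.abs(np.convolve(ramp, kernel, mode="valid"))
step_edges = np.abs(np.convolve(step, kernel, mode="valid"))

print("strongest response to gradual ramp:", ramp_edges.max())  # ~0.02, no edge
print("strongest response to abrupt step :", step_edges.max())  # ~0.6, edge found
```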

4
Q

What are convolution and reconstitution?

A

Convolution: matching a certain receptive field/feature detector with an image; this gives us the DoG (difference-of-Gaussians) response profile
Convolution smooths out noise and small differences in illumination, and it can also detect edges (see the sketch below)

Reconstitution: how the regions between edges are filled in ("to construct anew"); it occurs through deconvolution, in which activity spreads out from an edge, smoothing over all the small changes in illumination; the process does distort things slightly
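
A minimal Python/NumPy sketch of a 1-D DoG filter (illustrative only; the `gaussian` helper and the centre/surround sigmas 1.5 and 3.0 are assumed values): a narrow centre Gaussian minus a broad surround Gaussian, which smooths small fluctuations away and responds strongly only at the one real edge.

```python
import numpy as np

def gaussian(x, sigma):
    g = np.exp(-x**2 / (2 * sigma**2))
    return g / g.sum()

# DoG kernel: narrow centre Gaussian minus broad surround Gaussian,
# mimicking a centre-surround receptive field.
x = np.arange(-10, 11)
dog = gaussian(x, 1.5) - gaussian(x, 3.0)

# Noisy 1-D "image" with a single reflectance edge in the middle.
rng = np.random.default_rng(0)
image = np.concatenate([np.full(50, 0.3), np.full(50, 0.7)])
image += rng.normal(0, 0.02, image.size)      # small fluctuations in illumination

response = np.convolve(image, dog, mode="valid")

# Noise is smoothed away; the strongest response sits at the edge.
print("strongest response at index:", np.argmax(np.abs(response)))
print("strongest response value  :", np.abs(response).max())
```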

5
Q

What low-, mid-, and high-level factors contribute to contrast illusions?

A

Contrast illusions result in incorrect lightness perception due to a breakdown of lightness constancy, arising from an error in reconstitution

Low-level: vision at the retina; includes light adaptation and centre-surround receptive fields.
Lateral inhibition works here to produce contrast illusions: activated receptors inhibit their neighbours, so a region receiving a lot of light exerts strong inhibition on its neighbours (making them look darker) while a region receiving less light exerts weak inhibition (making its neighbours look lighter), exaggerating the difference in lightness between the bars (see the sketch at the end of this card).
This can explain lightness perception for Mondrians, but it cannot account for a world more complex than Mondrians.
Cylinders give a weak illusion: we see more of the same grey and assume the light is coming from the left.
Squares give a strong illusion: we see the left and right squares as different shades of grey.

Mid-level: ill-defined; involves surfaces, contours, grouping, etc.
Connected to the Gestalt approach.
Koffka's ring: appears a single uniform grey when connected; when split by a line down the middle, the two half-rings appear slightly different shades of grey (the one on the dark background looks lighter than the one on the light background); when the halves are shifted, a different perceptual organization occurs, since the spatial configuration affects simultaneous contrast.

High-level: cognitive processes; includes knowledge about objects, materials, and scenes.
Junctions (where two or more contours come together) provide cues about surface shading and reflectance.
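
A minimal sketch of the low-level lateral inhibition story (Python; the subtractive scheme and the 0.1 inhibition weight are assumptions for illustration, not a physiological model): each receptor's output is its own input minus a fraction of its neighbours' inputs, which exaggerates the contrast right at a light/dark border.

```python
import numpy as np

# Two uniform regions meeting at a border (made-up luminance values).
luminance = np.concatenate([np.full(10, 40.0), np.full(10, 60.0)])

inhibition_weight = 0.1
output = luminance.copy()
for i in range(1, len(luminance) - 1):
    output[i] = luminance[i] - inhibition_weight * (luminance[i - 1] + luminance[i + 1])

# Interior of dark region:  40 - 0.1*(40+40) = 32
# Dark side of the border:  40 - 0.1*(40+60) = 30  (looks even darker)
# Light side of the border: 60 - 0.1*(40+60) = 50  (looks even lighter)
# Interior of light region: 60 - 0.1*(60+60) = 48
print(output)
```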

6
Q

What is Marr's two-stage approach?

A

Raw primal sketch: detects edge segments
A representation of the image in which intensity differences are made explicit -> reveals global structures
Representations of primitives (features) and place tokens (their locations) are determined -> works out which contours may belong together
Colour does not matter; it is more likely handled by a separate system
Uses the Gaussian function (first derivative)

Full primal sketch: represents higher-level boundaries
Boundaries and regions found by grouping primitive elements together
Uses Gestalt principles/features

7
Q

How is the raw primal sketch obtained, and what is the result of this processing?

A

An averaging process is used because luminance is not constant and fluctuates imperceptibly (photon noise)
The averaging process is like passing the image through a spatial filter (low-pass: preserves gross features; band-pass: preserves intermediate details; high-pass: preserves fine details)

Creating the best raw primal sketch: obtain multiple representations in parallel, each the product of a different spatial filter; this captures fine details, gross features, and everything in between

Marr's algorithm (a minimal code sketch follows at the end of this card):

1) Smooth out the small, unimportant variations that are due to differences in illumination (take an average around each spot) -> use a Gaussian function so that points closer to the centre get the greatest weighting in the average and peripheral points contribute the least

2) Extract edge information from the representations -> use differentiation (the rate of change of one variable with respect to another) to detect the abrupt change between dark and light areas; apply the differentiation transformation to the filtered representations of the image
Rate of change of light intensity over space: an edge produces a peak in the first derivative (derivative of the Gaussian) and a zero crossing in the second derivative (Laplacian operator)

The raw primal sketch consists of a combination of all the representations rather than picking out the best one; important boundaries should appear at multiple levels of detail

The output of the raw primal sketch is a representation of the significant gradients of light intensity in a given image
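
A minimal 1-D Python/NumPy sketch of the two steps above (a toy version, not Marr's actual implementation; the `gaussian_kernel` helper, the sigma of 4, and the noise level are assumed): smooth with a Gaussian, differentiate, and mark an edge where the second derivative crosses zero.

```python
import numpy as np

def gaussian_kernel(sigma):
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    return g / g.sum()

# Noisy 1-D luminance profile with one reflectance edge at index 100.
rng = np.random.default_rng(1)
image = np.concatenate([np.full(100, 0.3), np.full(100, 0.7)])
image += rng.normal(0, 0.02, image.size)        # photon-noise-like fluctuations

# Step 1: smooth with a Gaussian so small illumination variations average out.
kernel = gaussian_kernel(sigma=4)
padded = np.pad(image, len(kernel) // 2, mode="edge")
smoothed = np.convolve(padded, kernel, mode="valid")

# Step 2: differentiate. The edge gives a peak in the first derivative and a
# zero crossing in the second derivative.
first = np.gradient(smoothed)
second = np.gradient(first)

# Candidate edges = zero crossings of the second derivative; keep only those
# with a strong first derivative so residual noise wiggles are ignored.
crossings = np.where(np.diff(np.sign(second)) != 0)[0]
edges = [int(i) for i in crossings if abs(first[i]) > 0.5 * np.abs(first).max()]
print("edge located near index:", edges)   # expected: close to 100
```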

8
Q

How does this approach make up for the shortcomings of feature-template theory?

A

Feature-template theory: a bar is detected by one simple cell, which receives connections from a number of similarly sized centre-surround receptive fields

Using the Marr-Hildreth algorithm: cells of many different sizes communicate with each other; feature detection depends on edge detection, and an edge is detected by taking the activity of several simple cells of differing sizes. They must agree amongst themselves AND between the light-side (on-centre/off-surround) and dark-side (off-centre/on-surround) detector groups (see the sketch below)
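
A minimal Python/NumPy sketch of the size-agreement idea only (the `edge_positions` helper, the sigmas 2/4/8, and the positional tolerance are assumptions; the light-side/dark-side detector groups are not modelled here): each "detector size" reports its zero-crossing edges, and an edge is accepted only where all sizes agree.

```python
import numpy as np

def edge_positions(signal, sigma):
    # One "detector size": smooth with a Gaussian of this sigma, differentiate
    # twice, and return zero crossings that also have a strong first derivative
    # (the same recipe as the raw-primal-sketch example above).
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()
    smoothed = np.convolve(np.pad(signal, radius, mode="edge"), g, mode="valid")
    first = np.gradient(smoothed)
    second = np.gradient(first)
    crossings = np.where(np.diff(np.sign(second)) != 0)[0]
    return {int(i) for i in crossings if abs(first[i]) > 0.5 * np.abs(first).max()}

# Noisy 1-D luminance profile with one real edge at index 100.
rng = np.random.default_rng(2)
signal = np.concatenate([np.full(100, 0.3), np.full(100, 0.7)])
signal += rng.normal(0, 0.02, signal.size)

# Run detectors of several sizes; accept an edge only where all sizes agree
# (within a small positional tolerance).
per_scale = [edge_positions(signal, sigma) for sigma in (2, 4, 8)]
agreed = [i for i in per_scale[0]
          if all(any(abs(i - j) <= 2 for j in rest) for rest in per_scale[1:])]
print("edges agreed on by all detector sizes:", sorted(agreed))
```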
