Post-Midterm Flashcards
Define discontinuity-based segmentation
A segmentation technique that identifies abrupt changes in intensity (where values jump significantly compared to their neighbours)
What is the goal of discontinuity-based segmentation
To separate objects from background by finding boundary pixels (edges/lines/points) that mark transitions between regions
In order to detect edges, an important mathematical foundation needed is _________________
The derivative
A positive derivative means the intensity is __________ as x increases. Vice versa for a negative derivative.
Increasing
(dark → bright when moving to the right)
How does a first-order derivative (gradient) behave in an edge detection algorithm
Gradients highlight where large intensity changes occur and, via their sign, indicate which side of the transition is darker or lighter
How does a second-order derivative (Laplacian) behave in an edge detection algorithm
Laplacians reinforce fine details and help locate edges precisely (the zero crossing of the second derivative marks the centre of the edge). The sign of the second derivative also reveals whether an edge goes from dark-to-light or light-to-dark
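A minimal 1-D NumPy sketch of the two derivatives across a ramp edge (the intensity profile below is made up purely for illustration):

import numpy as np

# Hypothetical 1-D intensity profile containing a dark-to-bright ramp edge
profile = np.array([10, 10, 10, 12, 20, 40, 60, 62, 62, 62], dtype=float)

# First derivative (gradient): large positive values mark the dark-to-bright ramp
first = np.diff(profile)        # f(x+1) - f(x)

# Second derivative: positive at the start of the ramp, negative at the end;
# the zero crossing between the two lobes localizes the edge centre
second = np.diff(profile, n=2)  # f(x+1) - 2 f(x) + f(x-1)

print("1st derivative:", first)
print("2nd derivative:", second)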
What are the three different types of discontinuity
Points, lines, & edges
Describe a point discontinuity
Single pixel that differs sharply from neighbours (detected by the Laplacian)
Describe a line discontinuity
1-2 pixel wide structures differing in intensity from surroundings (detected by directional masks or second derivatives)
Describe an edge discontinuity
Transition zones (ideal: step edges; real: blurred or ramp edges)
What are the advantages of discontinuity segmentation
Can directly locate boundaries. It’s good for images where objects exhibit strong contrast against the background
What are the challenges of discontinuity segmentation
It can be sensitive to noise (derivatives amplify noise). Images often require smoothing (pre-filtering) and careful threshold selection to avoid false edges or fragmented edges
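A minimal pre-filtering sketch, assuming SciPy is available (sigma = 1.0 and the random image are placeholder choices):

import numpy as np
from scipy.ndimage import gaussian_filter

# Derivatives amplify noise, so smooth before differentiating
image = np.random.rand(64, 64)                 # placeholder grayscale image
smoothed = gaussian_filter(image, sigma=1.0)   # sigma chosen for illustration
# Point/line/edge detection is then applied to 'smoothed' instead of 'image'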
Describe what a ‘point’ is in the context of point detection
An isolated pixel whose intensity differs significantly from its immediate neighbours. It typically appears as a bright or dark ‘spot’ in a relatively uniform background
What are the steps involved to implement a point detection algorithm
Step 1: Apply second-order derivative (Laplacian) filter
Step 2: Take the absolute value of the response
Step 3: Threshold the absolute response
Step 4: Label isolated points
Describe how to accomplish the first step, ‘Apply Laplacian Filter’, when implementing a point detection algorithm
Convolve the kernel below with an image to obtain a filter response
Second-Order 3x3 Laplacian kernel:
0 1 0
1 -4 1
0 1 0
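A minimal sketch of this step, assuming SciPy/NumPy (the random image stands in for a real grayscale image):

import numpy as np
from scipy.ndimage import convolve

# The 3x3 Laplacian kernel from the card above
laplacian = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

image = np.random.rand(64, 64)   # placeholder grayscale image
# Step 1: convolve to obtain the filter response Z(x, y)
response = convolve(image, laplacian, mode='reflect')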
Describe how to accomplish the second step, ‘Take the absolute value of the response’, when implementing a point detection algorithm following the application of the Laplacian filter
After the first step, the Laplacian response can be positive or negative. By taking the absolute value of the response, we get a magnitude that indicates how large the change is, regardless of sign
Describe how to accomplish the third step, ‘Threshold the absolute response’, when implementing a point detection algorithm after taking the absolute value of the Laplacian response
Choose a threshold, typically as a percentage of the maximum response magnitude (e.g., 90% of the maximum), so that only prominent ‘spikes’ exceed it and get labelled
Describe how to accomplish the fourth and final step, ‘Label isolated points’, when implementing a point detection algorithm after thresholding the absolute response
If the magnitude of the response Z(x,y) exceeds the threshold, declare (x,y) an isolated point. Store that pixel as 1 (or ‘white’) in an output binary image; all other pixels are stored as 0 (or ‘black’)
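Putting the four steps together, a minimal end-to-end sketch (the 90% threshold fraction and the function name detect_points are illustrative choices, not from the cards):

import numpy as np
from scipy.ndimage import convolve

def detect_points(image, frac=0.9):
    # Step 1: Laplacian filter response Z(x, y)
    laplacian = np.array([[0,  1, 0],
                          [1, -4, 1],
                          [0,  1, 0]], dtype=float)
    z = convolve(image.astype(float), laplacian, mode='reflect')

    # Step 2: magnitude of the response, regardless of sign
    mag = np.abs(z)

    # Step 3: threshold as a fraction of the maximum magnitude
    threshold = frac * mag.max()

    # Step 4: isolated points become 1 (white); everything else 0 (black)
    return (mag > threshold).astype(np.uint8)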
Describe a ‘line’ in the context of line detection
A set of connected pixels with similar intensity, often just 1-2 pixels in thickness, differing in intensity from its background
Briefly describe the process to apply line detection to an image
Convolve the image with a second-order derivative filter or with specialized directional filters, as shown below. After convolution, threshold the filter response to isolate line pixels.
Vertical kernel:
-1 2 -1
-1 2 -1
-1 2 -1
Horizontal kernel:
-1 -1 -1
2 2 2
-1 -1 -1
45-degree kernel:
2 -1 -1
-1 2 -1
-1 -1 2
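A minimal sketch applying these directional kernels (assumes SciPy/NumPy; the 50% threshold fraction and the function name detect_lines are arbitrary illustrative choices):

import numpy as np
from scipy.ndimage import convolve

# Directional line-detection kernels from the cards above
kernels = {
    "vertical":   np.array([[-1, 2, -1],
                            [-1, 2, -1],
                            [-1, 2, -1]], dtype=float),
    "horizontal": np.array([[-1, -1, -1],
                            [ 2,  2,  2],
                            [-1, -1, -1]], dtype=float),
    "diagonal":   np.array([[ 2, -1, -1],
                            [-1,  2, -1],
                            [-1, -1,  2]], dtype=float),
}

def detect_lines(image, kernel, frac=0.5):
    # Convolve with a directional kernel, then threshold the response magnitude
    response = convolve(image.astype(float), kernel, mode='reflect')
    mag = np.abs(response)
    return (mag > frac * mag.max()).astype(np.uint8)

# Example usage on a placeholder grayscale image:
# vertical_lines = detect_lines(image, kernels["vertical"])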
Define ‘edge’ in the context of edge detection
A boundary between two distinct regions of intensity or texture
What are the different types of edges?
Step edge - Sudden transition in intensity (ideal)
Ramp edge - Gradual transition (common)
Roof edge - Intensity rises (or falls) to a new value and then quickly returns to the original (typical of thin lines or object ridges)
True or False: Image noise/blur do not cause step edges to turn into ramp edges
False
What is ‘clustering’ in clustering segmentation
The clustering approach in segmentation involves grouping pixels based on intensity, colour, or feature similarity without requiring labelled data
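A minimal clustering-segmentation sketch using k-means on pixel intensities (scikit-learn's KMeans, k = 3, and the function name cluster_segment are assumptions for illustration):

import numpy as np
from sklearn.cluster import KMeans

def cluster_segment(image, k=3):
    # One feature per pixel: its intensity (colour or texture features could be added)
    pixels = image.reshape(-1, 1).astype(float)
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(pixels)
    return labels.reshape(image.shape)   # per-pixel cluster index = segment map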