PCA and Hough Transform Flashcards
(22 cards)
What is the input for the Hough Transform?
The input for the Hough Transform is edge points from an edge detection stage.
What does the Hough Transform detect?
It detects shapes like lines and circles by transforming edge points into parameters in a parameter space.
What is the Hough Transform’s output?
The output is a parametric model, such as line parameters (ρ, θ) or circle parameters (a, b, r).
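As a minimal sketch (the image path, edge thresholds, and vote threshold below are placeholder assumptions), OpenCV's HoughLines returns exactly these (ρ, θ) pairs from an edge map:

```python
import cv2
import numpy as np

# Load a grayscale image (placeholder path) and get edge points with Canny.
img = cv2.imread("shapes.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 50, 150)

# Accumulator resolution: 1 pixel for rho, 1 degree for theta;
# a line must collect at least 100 votes to be reported.
lines = cv2.HoughLines(edges, rho=1, theta=np.pi / 180, threshold=100)

# Each detected line comes back in normal form (rho, theta).
if lines is not None:
    for rho, theta in lines[:, 0]:
        print(f"rho={rho:.1f}, theta={np.degrees(theta):.1f} deg")
```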
Why is the normal form (ρ,θ) preferred for line detection?
It’s preferred because ρ and θ are bounded (θ spans [0, π) and ρ is limited by the image diagonal), whereas the slope m in the slope-intercept form (m, b) is unbounded for near-vertical lines, so the normal form needs only a finite accumulator with fewer bins.
How does knowing the radius r simplify circle detection in the Hough Transform?
Knowing the radius reduces the parameter space from 3D (a, b, r) to 2D (a, b), making the search more efficient.
How does the edge gradient direction (θ) help in circle detection?
The circle centre must lie along the gradient direction at each edge point, so the point votes only along that direction (just two candidate centres when r is known) instead of over an entire circle, making circle detection more efficient.
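A rough NumPy/OpenCV sketch of this idea, assuming a known radius r and using the gradient direction so each edge point casts only two votes (the function name, thresholds, and single-peak readout are illustrative):

```python
import cv2
import numpy as np

def hough_circle_fixed_r(gray, r):
    """Vote for circle centres (a, b) given a known radius r."""
    edges = cv2.Canny(gray, 50, 150)
    # Sobel gradients give the edge-normal direction at every pixel.
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1)

    acc = np.zeros(gray.shape, dtype=np.int32)
    ys, xs = np.nonzero(edges)
    theta = np.arctan2(gy[ys, xs], gx[ys, xs])

    # The centre lies r pixels away along the gradient, on one side or the other.
    for sign in (+1, -1):
        a = np.round(xs + sign * r * np.cos(theta)).astype(int)
        b = np.round(ys + sign * r * np.sin(theta)).astype(int)
        ok = (a >= 0) & (a < gray.shape[1]) & (b >= 0) & (b < gray.shape[0])
        np.add.at(acc, (b[ok], a[ok]), 1)

    # Centres show up as peaks in the accumulator; here we take the global maximum.
    b_best, a_best = np.unravel_index(acc.argmax(), acc.shape)
    return (a_best, b_best), acc
```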
How does the Hough Transform detect shapes in the parameter space?
Shapes are detected as local maxima in the parameter space, where votes from edge points accumulate above a threshold.
What is a limitation of the Hough Transform?
It struggles with complex shapes due to the computational cost of handling higher-order boundaries and the large number of parameters required for these shapes.
What does PCA do?
PCA simplifies a dataset by reducing its dimensionality, increasing interpretability without losing significant information.
What does PCA use to reduce dimensionality?
PCA uses principal components, which are uncorrelated variables that represent the most significant variance in the data.
How are eigenvectors used in PCA?
Eigenvectors represent the directions of the greatest variance in the data and form the basis of the principal components. They are also orthogonal to each other, making them uncorrelated.
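A minimal NumPy sketch of these steps (centre the data, eigendecompose the covariance matrix, keep the top-k eigenvectors); the function name and return values are illustrative:

```python
import numpy as np

def pca(X, k):
    """Project X (n_samples x n_features) onto its top-k principal components."""
    # Centre the data so the covariance is computed about the mean.
    mean = X.mean(axis=0)
    Xc = X - mean

    # Covariance matrix (features x features) and its eigendecomposition.
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigh: covariance is symmetric

    # Sort eigenvectors by decreasing eigenvalue (largest variance first).
    order = np.argsort(eigvals)[::-1]
    components = eigvecs[:, order[:k]]

    # Principal-component scores of the centred data.
    return Xc @ components, components, mean
```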
What does covariance tell us in PCA?
Covariance measures how two dimensions vary together, showing whether they increase/decrease together, move in opposite directions, or are uncorrelated.
What are eigenvectors and eigenvalues in PCA?
Eigenvectors are directions in the data, and eigenvalues are scaling factors that represent the variance captured by each eigenvector.
Why are eigenvalues important in PCA?
Larger eigenvalues represent more variance along the corresponding eigenvectors, indicating their importance in explaining data variability.
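As an illustrative follow-on (the 0.95 target is an example choice, not from the cards), the sorted eigenvalues can be used to decide how many components to keep:

```python
import numpy as np

def choose_k(eigvals, target=0.95):
    """Smallest number of components whose eigenvalues cover the target
    fraction of total variance."""
    vals = np.sort(eigvals)[::-1]
    cumulative = np.cumsum(vals) / vals.sum()
    return int(np.searchsorted(cumulative, target) + 1)
```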
What are eigenfaces in facial recognition?
Eigenfaces are basis faces derived from the eigenvectors of the covariance matrix of training faces.
How are eigenfaces used in face recognition?
A face is projected onto the eigenfaces to obtain a weight vector in ‘face space,’ and the Euclidean distance between weight vectors is used to find the closest match.
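A sketch, assuming the training faces are already flattened into rows of a matrix; the SVD of the centred data is used here as a shortcut equivalent to eigendecomposing the covariance matrix, and the function names are illustrative:

```python
import numpy as np

def build_eigenfaces(faces, k):
    """faces: (n_images, n_pixels) matrix of flattened training faces.
    Returns the mean face and the top-k eigenfaces."""
    mean_face = faces.mean(axis=0)
    A = faces - mean_face
    # Right singular vectors of A are the eigenvectors of the covariance
    # matrix (up to a constant factor), i.e. the eigenfaces.
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    return mean_face, Vt[:k]  # each row of Vt[:k] is one eigenface

def project(face, mean_face, eigenfaces):
    """Weight vector of a face in 'face space'."""
    return eigenfaces @ (face - mean_face)
```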
How are faces reconstructed using eigenfaces?
Faces are reconstructed by adding the weighted sum of eigenfaces (each scaled by its respective weight) back to the mean face.
How is Euclidean distance used in face recognition?
It measures the similarity between weight vectors of faces, and the closest match is chosen based on the smallest distance.
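Continuing the same sketch (names and shapes assumed as above), reconstruction and matching then look like this:

```python
import numpy as np

def reconstruct(weights, mean_face, eigenfaces):
    """Rebuild a face as the mean face plus the weighted sum of eigenfaces."""
    return mean_face + weights @ eigenfaces

def closest_match(query_weights, gallery_weights):
    """Index of the gallery face whose weight vector has the smallest
    Euclidean distance to the query's weight vector."""
    dists = np.linalg.norm(gallery_weights - query_weights, axis=1)
    return int(np.argmin(dists))
```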
What’s the difference between face reconstruction and recognition?
Face reconstruction involves creating a face from eigenfaces, while recognition compares the weight vectors to identify the closest match.
How does the Hough Transform handle complex shapes like circles?
For circles, the parametric equation (x − a)² + (y − b)² = r² maps each edge point to votes in parameter space, so detection reduces to finding peaks over (a, b, r), or over (a, b) when the radius is known.
How does PCA reduce dimensionality?
PCA reduces dimensionality by identifying principal components that represent the directions of the largest variance in the data, simplifying the dataset while retaining important information.
How do eigenfaces differ from regular eigenvectors in PCA?
Eigenfaces are specifically the eigenvectors of the covariance matrix derived from face images, used to represent and recognize faces.