Recognition Flashcards

1
Q

Explain face recognition with eigenfaces

A

Initially, the images are subsampled and each one is flattened into a single vector.
Each vector is zero-centred by subtracting the mean face.
Hence, in a single big matrix, we collect the differences from the mean vector; we call it T.

Compute the covariance matrix of T.

Compute its eigenvalues and eigenvectors.
The eigenvectors are ordered by eigenvalue: the eigenvectors with the smallest eigenvalues are the ones that explain little of the variance, so they can be discarded.

Finally, we perform the encoding by projecting the mean-subtracted image onto the retained eigenvectors, which yields a vector of weights,

and we decode by multiplying the weights with the transposed eigenvector matrix and adding the mean back.

Finally, through this compression, we can perform the recognition by comparing weight vectors.
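The steps above can be sketched with NumPy; the array sizes, variable names, and the tiny random "face" dataset are illustrative assumptions, not part of the card:

```python
# Minimal eigenfaces sketch (illustrative data: 20 fake faces of 64 pixels).
import numpy as np

rng = np.random.default_rng(0)
faces = rng.random((20, 64))      # each row: one subsampled, flattened image

mean_face = faces.mean(axis=0)
T = faces - mean_face             # differences from the mean vector

# Covariance matrix of T and its eigendecomposition
cov = T.T @ T / (len(faces) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)

# Order eigenvectors by decreasing eigenvalue, keep the top k
order = np.argsort(eigvals)[::-1]
k = 10
E = eigvecs[:, order[:k]]         # 64 x k basis of eigenfaces

# Encode: project a mean-subtracted face onto the basis -> weights
weights = (faces[0] - mean_face) @ E

# Decode: weights times the transposed basis, plus the mean
reconstruction = weights @ E.T + mean_face

print(weights.shape, reconstruction.shape)   # → (10,) (64,)
```

Recognition then compares the 10 weights of a new face against the stored weights of each database face, instead of comparing 64 raw pixels.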

2
Q

How do we usually recognize an object?

A

For a given image, we usually try to segment the image, then extract some sort of feature from each region, then classify based on those features.

3
Q

What is pattern matching?

A

For a given template window, we try to match it all over the image by defining a matching criterion.

For instance, we may slide the template as in a convolution and use the correlation as the matching score.
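A sketch of this sliding-window matching, scoring each position with normalized cross-correlation (the image, template size, and function name are illustrative assumptions):

```python
# Template matching: slide the template over the image and score each
# position with normalized cross-correlation.
import numpy as np

def match_template(image, template):
    H, W = image.shape
    h, w = template.shape
    t = template - template.mean()
    scores = np.full((H - h + 1, W - w + 1), -1.0)
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            patch = image[y:y+h, x:x+w]
            p = patch - patch.mean()
            denom = np.sqrt((p**2).sum() * (t**2).sum())
            if denom > 0:
                scores[y, x] = (p * t).sum() / denom
    return scores

rng = np.random.default_rng(1)
img = rng.random((12, 12))
tmpl = img[4:7, 5:8].copy()          # template cut out of the image itself
scores = match_template(img, tmpl)
best = np.unravel_index(np.argmax(scores), scores.shape)
print(best)                          # → (4, 5): found where it was cut out
```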

4
Q

What is the limitation of pattern matching?

A

It is not invariant to changes in scale, rotation, or illumination: the template only matches patterns that look almost exactly like it, and the exhaustive search over all positions is computationally expensive.
5
Q

Explain eigenfaces in a very general way

A

We have a database of faces and we want to recognize new faces.

The issue here is that the feature space is big; hence, using PCA, we want to extract the components (the eigenfaces) that best describe our data.

We make an approximation: we consider a feature space composed of the grey level of every pixel in a face.

One face in the database is simply one point in the feature space; then, for a given new image, we simply compute its distance to the stored points.

Hence we use PCA to compress: the eigenvectors of the face covariance matrix give a small basis onto which every face is projected.

6
Q

What are the downsides and advantages of eigenfaces?

A

The advantage is a very compact representation: recognition reduces to comparing a few weights instead of all pixels.
The downside is sensitivity to illumination, pose, and alignment, since the principal components capture the largest sources of variance, which are not necessarily identity.

7
Q

Explain the nearest neighbours

A

For a given feature space, we classify an unknown sample by determining its class from a majority vote among its K nearest neighbours in that space.
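A minimal sketch of this majority vote (the 2-D points, labels, and K = 3 are made-up illustrative values):

```python
# k-nearest-neighbours classification by majority vote.
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    # Euclidean distance from x to every training point
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(dists)[:k]
    # Majority vote among the k nearest neighbours
    votes = Counter(y_train[i] for i in nearest)
    return votes.most_common(1)[0][0]

X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1], [1.2, 0.8]])
y = ["a", "a", "b", "b", "b"]
print(knn_predict(X, y, np.array([0.05, 0.1])))  # → a
print(knn_predict(X, y, np.array([1.0, 0.9])))   # → b
```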

8
Q

What might be an issue with nearest neighbours?

A

The choice of distance measure might be an issue: features on different scales dominate the distance unless they are normalised.

9
Q

Explain the decision tree

A

An iterative way of splitting the feature space:
the space is split feature by feature, and we choose each split to maximise the reduction in entropy from before to after the split (the information gain).

For instance, the entropy of the whole space is high, whereas a region containing mostly one class has low entropy.
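The entropy comparison before and after a split can be sketched as follows (the labels, feature values, and threshold are illustrative assumptions):

```python
# Entropy and information gain for choosing a decision-tree split.
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(labels, feature, threshold):
    left = labels[feature <= threshold]
    right = labels[feature > threshold]
    n = len(labels)
    # Weighted entropy of the two regions after the split
    after = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(labels) - after

labels = np.array([0, 0, 0, 1, 1, 1])
feature = np.array([1.0, 1.5, 2.0, 8.0, 8.5, 9.0])

print(entropy(labels))                          # → 1.0 (maximally mixed)
print(information_gain(labels, feature, 5.0))   # → 1.0 (perfect split)
```

A split at 5.0 separates the classes perfectly, so the entropy drops from 1.0 to 0 and the gain is maximal.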

10
Q

What is the disadvantage of the decision tree?

A

Although it is very fast,
we can rapidly end up with a very large decision tree;

hence we may want to keep the split information, and we may add pruning to limit the depth (the number of levels) of the tree.
It is also very sensitive to noise, which makes it unstable.

11
Q

What is generalisation?

A

The ability of the model to perform well on new, unseen data rather than only on the training set.

12
Q

Explain bag of visual words

A

An extension of the bag-of-words model:
in an image, we detect points of interest (e.g. corners) and extract a small window (patch) around each one.

We can then use clustering to group patches that are similar,

then we build a vocabulary by picking the centre of each cluster as a visual word.

Then, for a given image, we extract its patches, compare each patch to the dictionary, and count the frequencies of the visual words; this histogram is the signature of the image.
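The pipeline can be sketched as below; the tiny k-means loop, the random 8-dimensional patch descriptors, and the vocabulary size of 5 are all illustrative assumptions:

```python
# Bag of visual words: cluster patch descriptors into a vocabulary, then
# describe an image by its histogram of visual-word frequencies.
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each descriptor to its nearest centre
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def signature(patches, vocabulary):
    d = np.linalg.norm(patches[:, None] - vocabulary[None], axis=2)
    words = d.argmin(axis=1)                  # nearest visual word per patch
    hist = np.bincount(words, minlength=len(vocabulary))
    return hist / hist.sum()                  # normalised word frequencies

rng = np.random.default_rng(2)
training_patches = rng.random((200, 8))       # descriptors from many images
vocab = kmeans(training_patches, k=5)         # 5 visual words

image_patches = rng.random((30, 8))           # patches from one new image
sig = signature(image_patches, vocab)
print(sig.shape, round(float(sig.sum()), 6))  # → (5,) 1.0
```

Two images can then be compared by the distance between their signatures rather than by raw pixels.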

13
Q

Explain the edge histogram

A

We use some sort of local descriptor, where we compute a histogram of the directions of the gradient.
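A sketch of such a descriptor, with the gradient directions binned and weighted by gradient magnitude (the bin count and random test image are illustrative assumptions):

```python
# Edge-orientation histogram: compute image gradients, then histogram
# their directions, weighted by gradient magnitude.
import numpy as np

def edge_histogram(image, bins=8):
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)
    angle = np.arctan2(gy, gx)                 # direction in [-pi, pi]
    hist, _ = np.histogram(angle, bins=bins, range=(-np.pi, np.pi),
                           weights=magnitude)
    total = hist.sum()
    return hist / total if total > 0 else hist

rng = np.random.default_rng(3)
img = rng.random((16, 16))
h = edge_histogram(img)
print(h.shape, round(float(h.sum()), 6))       # → (8,) 1.0
```

Weighting by magnitude means strong edges dominate the descriptor, which is the same idea used in HOG-style descriptors.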
