Support Vector Machines (SVMs) Flashcards

(13 cards)

1
Q

What are support vectors?

A

The training examples that lie exactly on the margin, i.e. the points closest to the decision boundary. They alone determine the optimal hyperplane.

2
Q

What is a consequence of having a small margin γ?

A

The classifier is more sensitive to noise near the boundary, so unseen examples are more likely to be misclassified (poor generalisation)

3
Q

Why do we maximise the margin?

A

To avoid overfitting: a larger margin makes the classifier less sensitive to noise and helps it generalise better to unseen data

4
Q

What is the margin?

A

The margin, γ, is the perpendicular distance between the decision boundary and the closest training example.

5
Q

What value do we assign to x0 in SVMs?

A

We don’t assign a value to x0 in SVMs: the bias is kept as a separate term b (= w0) rather than being folded into w via a dummy input x0 = 1

6
Q

What is the formula for the margin?

A

dist(h, x(n)) = |h(x(n))| / ∥w∥
where ∥w∥ = √(wᵀw) is the Euclidean norm of w

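The distance formula above can be checked numerically. This is a minimal sketch (the hyperplane w, b and the test point are made-up values, not from the deck): for h(x) = wᵀx + b, the perpendicular distance from x to the hyperplane h(x) = 0 is |h(x)| / ∥w∥.

```python
import numpy as np

def distance_to_hyperplane(w, b, x):
    """Perpendicular distance from x to the hyperplane h(x) = w.x + b = 0."""
    return abs(w @ x + b) / np.linalg.norm(w)

# Toy 2-D hyperplane: x1 + x2 - 1 = 0
w = np.array([1.0, 1.0])
b = -1.0

# The point (1, 1) has h(x) = 1, so its distance is 1 / sqrt(2)
d = distance_to_hyperplane(w, b, np.array([1.0, 1.0]))
print(round(d, 4))  # 0.7071
```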
7
Q

What does w0 represent in SVMs?

A

The bias term (often written b), which shifts the hyperplane away from the origin

8
Q

What is the constraint when calculating the margin?

A

y(n).h(x(n)) > 0, ∀(x(n), y(n))∈𝒯
(All training examples have to be correctly classified)

9
Q

What is the effect of scaling w and b on the hyperplane?

A

There is no effect on the position of the hyperplane: multiplying w and b by the same positive constant rescales h(x) but does not change where h(x) = 0

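The scaling invariance can be demonstrated directly. A minimal sketch (w, b, and the data points are illustrative values, not from the deck): scaling w and b by the same positive factor changes the scores h(x) but never their signs, so every point stays on the same side of the hyperplane.

```python
import numpy as np

w = np.array([2.0, -1.0])
b = 0.5
X = np.array([[1.0, 0.0], [0.0, 3.0], [-2.0, -1.0]])

h = X @ w + b                     # original scores
h_scaled = X @ (5 * w) + 5 * b    # scale w and b by the same positive factor

# Scores change, but their signs (the predicted classes) do not:
print(np.sign(h) == np.sign(h_scaled))  # [ True  True  True]
```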
10
Q

What changes can we make to the constraint for SVMs?

A

By dividing w and b by min n y(n)·h(x(n)), we can rescale the classifier so that the constraint becomes an equality at 1:
min n y(n)·h(x(n)) = 1, ∀(x(n), y(n))∈𝒯

11
Q

What’s the equation for maximising the margin?
(for the constraint min n y(n).h(x(n)) = 1, ∀(x(n), y(n))∈𝒯)

A

argmax w,b {1/∥w∥} = argmin w,b {∥w∥}
(with the constraint tight, the margin equals 1/∥w∥, so maximising the margin is the same as minimising ∥w∥)

12
Q

How can we relax the constraint so that, at the optimal solution, it holds with equality for at least one training example?

A

min n y(n).h(x(n)) >= 1, ∀(x(n), y(n))∈𝒯

This is equivalent to the equality constraint, because the smallest value of ∥w∥ is achieved when the constraint is tight (equal to 1) for at least one example.

13
Q

What’s the equation for maximising the margin?
(for the constraint min n y(n).h(x(n)) >= 1, ∀(x(n), y(n))∈𝒯)

A

argmin w,b {(1/2)∥w∥²}
(squaring and halving ∥w∥ does not change the minimiser, but it turns the problem into a convex quadratic programme)

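The final formulation, minimise (1/2)∥w∥² subject to y(n)·h(x(n)) ≥ 1, can be illustrated on a tiny example. A minimal sketch (the two data points and the candidate hyperplanes are made-up values, not from the deck): both hyperplanes below satisfy the constraint, but the one where the constraint is tight (y·h(x) = 1 on the support vectors) has the smaller objective, i.e. the larger margin.

```python
import numpy as np

# Toy separable data: one point per class
X = np.array([[-1.0, 0.0], [1.0, 0.0]])
y = np.array([-1.0, 1.0])

def objective(w):
    """The SVM objective (1/2) * ||w||^2."""
    return 0.5 * np.dot(w, w)

def feasible(w, b):
    """Constraint: y(n) * h(x(n)) >= 1 for every training example."""
    return bool(np.all(y * (X @ w + b) >= 1 - 1e-12))

w_opt = np.array([1.0, 0.0]); b_opt = 0.0  # constraint tight: y*h(x) = 1
w_big = np.array([3.0, 0.0]); b_big = 0.0  # feasible, but larger ||w||

print(feasible(w_opt, b_opt), objective(w_opt))  # True 0.5
print(feasible(w_big, b_big), objective(w_big))  # True 4.5
```

Both candidates classify the data correctly, but the optimiser prefers w_opt, whose margin 1/∥w∥ = 1 is three times larger.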