SVM Flashcards

(30 cards)

1
Q

What is the goal of a Maximal Margin Classifier (MMC)?

A

To find the widest possible margin that separates two linearly separable classes.

2
Q

What assumption does the MMC make about the data?

A

That the data is perfectly linearly separable.

3
Q

What is the equation of a hyperplane used in MMC and SVM?

A

w · x - b = 0

4
Q

How is a class label predicted in SVM?

A

By taking the sign of (w · x - b)
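
As a minimal sketch of this prediction rule (the weight vector `w` and offset `b` below are hypothetical placeholders, not values from the deck):

```python
# SVM prediction rule: label = sign(w · x - b).
# w and b here are illustrative placeholders only.

def predict(w, x, b):
    """Return +1 or -1 depending on which side of the hyperplane x falls."""
    score = sum(wi * xi for wi, xi in zip(w, x)) - b
    return 1 if score >= 0 else -1

w = [2.0, -1.0]   # hypothetical weight vector
b = 0.5           # hypothetical offset
print(predict(w, [1.0, 0.0], b))   # score = 1.5, positive side -> 1
print(predict(w, [0.0, 2.0], b))   # score = -2.5, negative side -> -1
```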

5
Q

What is the constraint for a positive class point in MMC?

A

w · x - b ≥ 1

6
Q

What is the constraint for a negative class point in MMC?

A

w · x - b ≤ -1

7
Q

What is the optimization goal of MMC?

A

Minimize ||w||² subject to margin constraints.

8
Q

What is a support vector?

A

A training point that lies on or inside the margin and defines the decision boundary.

9
Q

Why are support vectors important?

A

They are the only points that influence the position of the hyperplane.

10
Q

What is the limitation of MMC?

A

It cannot handle overlapping classes or outliers.

11
Q

What does a Soft-Margin Classifier allow?

A

Margin violations and some misclassifications.

12
Q

What loss function is used in soft-margin SVMs?

A

Hinge loss.

13
Q

When is hinge loss zero?

A

When a point is correctly classified and lies on or outside the margin.

14
Q

What is the effect of hinge loss on points inside the margin?

A

It increases the cost linearly with the margin violation.
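
A minimal sketch of hinge loss for a labeled point with y in {+1, -1} (the separator `w`, `b` below is a hypothetical example):

```python
# Hinge loss: zero for points correctly classified on or outside the margin,
# growing linearly with the size of the margin violation.

def hinge_loss(w, b, x, y):
    score = sum(wi * xi for wi, xi in zip(w, x)) - b
    return max(0.0, 1.0 - y * score)

w, b = [1.0, 0.0], 0.0                     # hypothetical separator
print(hinge_loss(w, b, [2.0, 0.0], +1))    # outside the margin  -> 0.0
print(hinge_loss(w, b, [0.5, 0.0], +1))    # inside the margin   -> 0.5
print(hinge_loss(w, b, [-1.0, 0.0], +1))   # misclassified       -> 2.0
```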

15
Q

What does the regularization term in soft-margin SVM control?

A

The trade-off between margin width and misclassification penalty.

16
Q

What does a high regularization parameter imply in SVM?

A

It allows more margin violations in exchange for a wider margin and a simpler model (more bias).

17
Q

What does a low regularization parameter imply in SVM?

A

It penalizes margin violations heavily, yielding fewer violations and a narrower margin (more variance).

18
Q

What is the general name for the model that includes kernels?

A

Support Vector Machine (SVM).

19
Q

Why can’t linear SVM handle non-linearly separable data?

A

Because it can only produce linear (hyperplane) decision boundaries.

20
Q

What is the kernel trick?

A

A method to compute dot products in higher-dimensional space without explicitly transforming the data.

21
Q

What does the kernel trick allow SVMs to do?

A

Handle non-linear decision boundaries efficiently.

22
Q

What is an example of a polynomial kernel?

A

K(x, y) = (x · y + 1)^d
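
A sketch verifying the kernel trick for this kernel with d = 2 in two dimensions: the kernel value equals a dot product in a 6-dimensional feature space, computed without ever forming that space (the explicit map `phi` is shown only for comparison):

```python
import math

def poly_kernel(x, y, d=2):
    """Polynomial kernel K(x, y) = (x . y + 1)^d, computed in input space."""
    return (sum(a * b for a, b in zip(x, y)) + 1) ** d

def phi(x):
    """Explicit degree-2 feature map for 2-D input (comparison only)."""
    x1, x2 = x
    r2 = math.sqrt(2)
    return [x1 * x1, x2 * x2, r2 * x1 * x2, r2 * x1, r2 * x2, 1.0]

x, y = [1.0, 2.0], [3.0, 0.5]
implicit = poly_kernel(x, y)                            # kernel trick
explicit = sum(a * b for a, b in zip(phi(x), phi(y)))   # explicit mapping
print(implicit, explicit)   # both equal 25.0
```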

23
Q

What is the effect of increasing the degree in a polynomial kernel?

A

It allows more complex, curved decision boundaries.

24
Q

What is the RBF (Gaussian) kernel good for?

A

Highly flexible, non-linear decision boundaries in complex datasets.
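
A minimal sketch of the RBF (Gaussian) kernel: similarity decays with squared distance, controlled by a width parameter gamma (the value below is a hypothetical choice, not from the deck):

```python
import math

def rbf_kernel(x, y, gamma=0.5):
    """RBF kernel K(x, y) = exp(-gamma * ||x - y||^2)."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

print(rbf_kernel([0.0, 0.0], [0.0, 0.0]))   # identical points -> 1.0
print(rbf_kernel([0.0, 0.0], [3.0, 4.0]))   # distant points -> near 0
```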

25
Q

What is the relationship between MMC and SMC?

A

SMC is a relaxed version of MMC that allows soft constraints.

26
Q

Is an SVM with no kernel an MMC or SMC?

A

It is an SMC: a linear SVM with soft margins.

27
Q

Can a support vector be misclassified?

A

Yes. In soft-margin SVMs, support vectors can lie inside the margin or even on the wrong side of the hyperplane.

28
Q

Do all training points affect the SVM boundary?

A

No, only the support vectors do.

29
Q

What kind of learner is an SVM?

A

A margin-based discriminative classifier.

30
Q

What does the SVM aim to maximize in general?

A

The margin between classes while minimizing classification error.