2.2 Naive Bayes Flashcards

1
Q

How does a supervised classifier differ from an unsupervised one?

A

It uses class labels during training.

2
Q

Bayes' rule can be used to express the probability of a class C given features X.

What is the formula?

A

P(C|X) = P(X|C) · P(C) / P(X)

where P(X|C) is the likelihood of the features X given class C, P(C) is the prior probability of the class, and P(X) is the probability of the features (the evidence).
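A quick numeric sketch of Bayes' rule in Python, using made-up spam-filter numbers (the scenario and all values below are hypothetical):

```python
# Hypothetical scenario: C = "message is spam", X = "message contains 'free'".
p_c = 0.4          # prior P(C): fraction of messages that are spam (assumed)
p_x_given_c = 0.7  # likelihood P(X|C): 'free' appears in spam (assumed)
p_x = 0.5          # evidence P(X): 'free' appears in any message (assumed)

# Bayes' rule: posterior = likelihood * prior / evidence
p_c_given_x = p_x_given_c * p_c / p_x
print(p_c_given_x)  # 0.56 = P(C|X), the posterior
```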
3
Q

Which term is the prior probability of a class C?

A

P(C) is the prior probability of class C. This is the probability of observing class C in general, regardless of features X.

4
Q

Which term is the posterior probability?

A

The posterior probability is the left-hand side of the equation, P(C|X). This is the probability of class C given the features X.

5
Q

Which are true of a naive Bayes classifier? (Select all that are true)

A

Naive Bayes is a supervised classifier – it uses class labels during training.

It assumes that attributes are conditionally independent (the “naive” assumption).

It computes the relative probabilities of classes given the attributes, but not their exact probabilities (the denominator of the Bayes' rule formula is omitted).

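A minimal Python sketch of these properties, with assumed priors and per-attribute conditional probabilities: it multiplies P(C) by the product of P(x_i|C) (the naive independence assumption) and never divides by P(X), so the scores are relative rather than exact posteriors:

```python
from math import prod

# Hypothetical, pre-estimated model: two classes, two observed attribute values.
priors = {"spam": 0.4, "ham": 0.6}   # P(C), assumed values
cond_probs = {                       # P(x_i | C) for the observed x_i, assumed values
    "spam": [0.7, 0.2],
    "ham":  [0.1, 0.5],
}

def relative_score(c):
    # Naive assumption: P(X|C) = product over attributes of P(x_i|C).
    # The denominator P(X) is omitted, so this is a relative score only.
    return priors[c] * prod(cond_probs[c])

scores = {c: relative_score(c) for c in priors}
print(scores)                       # {'spam': 0.056, 'ham': 0.03}
print(max(scores, key=scores.get))  # 'spam' wins on relative probability
```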
6
Q

One option to avoid zero-value probabilities in naive Bayes is Laplace smoothing. What does this involve?

A

When computing probabilities, all counts are increased by a small value (typically 1), so that an attribute value never seen in training does not force a probability estimate of zero.

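A minimal sketch of Laplace smoothing for one categorical attribute, assuming add-one smoothing and k possible attribute values (the attribute and data below are made up):

```python
from collections import Counter

def smoothed_prob(value, observed, k, alpha=1):
    """Estimate P(value | class) from the values observed within one class.
    Every count is raised by alpha (add-one smoothing when alpha=1), so
    values never seen in training still get a small non-zero probability."""
    counts = Counter(observed)
    return (counts[value] + alpha) / (len(observed) + alpha * k)

# 'colour' attribute with k=3 possible values; 'green' was never observed.
observed_colours = ["red", "red", "blue"]
print(smoothed_prob("red", observed_colours, k=3))    # (2+1)/(3+3) = 0.5
print(smoothed_prob("green", observed_colours, k=3))  # (0+1)/(3+3) ≈ 0.167, not 0
```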