2.2 Naive Bayes Flashcards
How does a supervised classifier differ from an unsupervised one?
It uses class labels during training; an unsupervised learner does not.
Bayes' rule can be used to express the probability of a class C given features X.
What is the formula?
P(C|X) = P(X|C) P(C) / P(X)
Which term is the prior probability of a class C?
P(C) is the prior probability of class C. This is the probability of observing class C in general, regardless of features X.
Which term is the posterior probability?
The posterior probability is the left-hand side of the equation, P(C|X). This is the probability of class C given the features X.
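As a minimal numeric sketch of Bayes' rule (the probability values here are invented purely for illustration):

```python
# Hypothetical values, chosen only to illustrate the formula.
p_c = 0.3          # P(C): prior probability of class C
p_x_given_c = 0.8  # P(X|C): likelihood of features X given class C
p_x = 0.5          # P(X): probability of observing features X

# Bayes' rule: P(C|X) = P(X|C) * P(C) / P(X)
posterior = p_x_given_c * p_c / p_x
print(posterior)  # 0.48
```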
Which are true of a naive Bayes classifier? (Select all that are true)
Naive Bayes is a supervised classifier – it uses class labels during training.
It assumes that attributes are conditionally independent (the “naive” assumption).
It computes the relative probabilities of classes given the attributes, not their exact probabilities: the denominator of Bayes' rule, P(X), is the same for every class and is therefore omitted.
One option to avoid zero-value probabilities in naive Bayes is Laplace smoothing. What does this involve?
When computing probabilities, every count is increased by a small value (typically 1), so that no conditional probability is exactly zero.
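A minimal sketch of Laplace smoothing for a conditional probability estimate (the counts here are hypothetical):

```python
def smoothed_prob(count, class_total, n_values, alpha=1):
    """P(attribute value | class) with Laplace (add-alpha) smoothing.

    Every count is increased by alpha, and the denominator grows by
    alpha times the number of possible attribute values, so an unseen
    value gets a small nonzero probability instead of zero.
    """
    return (count + alpha) / (class_total + alpha * n_values)

# Attribute with 2 possible values, observed in a class of 3 examples:
print(smoothed_prob(0, 3, 2))  # unseen value: (0+1)/(3+2) = 0.2, not 0
print(smoothed_prob(3, 3, 2))  # seen value:   (3+1)/(3+2) = 0.8
```

The smoothed estimates still sum to 1 over all values of the attribute, which is what makes this a valid fix rather than an ad-hoc one.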