Classification Flashcards

1
Q

In Supervised Learning, what is the goal?

A

To learn a function f̂ that maps input x to output y, based on training pairs (xᵢ, yᵢ)

2
Q

What is the difference between Classification and Regression in Supervised Learning?

A

Classification: y belongs to discrete classes (e.g., binary or multiclass)
Regression: y is a continuous value

3
Q

What is the probabilistic view of Supervised Learning?

A

Specify a model P(X, Y | θ), estimate the parameters θ from the training data, and predict the output using the estimated model

4
Q

What is a Generative classifier?

A

A classifier that models the joint distribution P(X, Y) and uses Bayes’ rule for prediction
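A minimal sketch of this idea, using one binary feature, two classes, and made-up probabilities (all the numbers below are hypothetical, purely for illustration):

```python
# Toy generative model over one binary feature x and two classes.
# All probabilities are invented illustrative numbers.
prior = {"spam": 0.4, "ham": 0.6}   # P(Y)
likelihood = {"spam": 0.7, "ham": 0.1}  # P(X = 1 | Y)

def posterior(x, cls):
    """P(Y = cls | X = x) via Bayes' rule: P(X | Y) P(Y) / P(X)."""
    def joint(c):
        p_x_given_c = likelihood[c] if x == 1 else 1 - likelihood[c]
        return p_x_given_c * prior[c]        # P(X, Y) = P(X | Y) P(Y)
    evidence = sum(joint(c) for c in prior)  # P(X) = sum over classes
    return joint(cls) / evidence

# Predict the class with the highest posterior for x = 1.
pred = max(prior, key=lambda c: posterior(1, c))
```

The joint model P(X, Y) is what makes the classifier "generative": Bayes' rule then converts it into the posterior P(Y | X) needed for prediction.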

5
Q

What is the Naive Bayes assumption?

A

The features xᵢ are conditionally independent given the class, so P(x | c) = ∏ᵢ P(xᵢ | c)
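Under this assumption the class-conditional likelihood factorizes into a product of per-feature terms. A tiny sketch with invented Bernoulli parameters for one class:

```python
from math import prod

# Hypothetical per-feature probabilities P(x_i = 1 | c) for a single
# class c and three binary features.
theta = [0.9, 0.2, 0.5]

def likelihood(x, theta):
    """P(x | c) = prod_i P(x_i | c), using conditional independence."""
    return prod(t if xi == 1 else 1 - t for xi, t in zip(x, theta))

p = likelihood([1, 0, 1], theta)  # 0.9 * 0.8 * 0.5
```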

6
Q

How does the Naive Bayes classifier represent documents in text classification?

A

Using a bag of words model, where each document is represented by a binary vector of word presence/absence
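A minimal encoder for this representation (the vocabulary below is an arbitrary example):

```python
# Binary bag-of-words: each document becomes a 0/1 vector over a
# fixed vocabulary, recording word presence/absence (not counts).
vocab = ["free", "money", "meeting", "project"]  # example vocabulary

def bow_vector(doc, vocab):
    """1 if the word occurs anywhere in the document, else 0."""
    words = set(doc.lower().split())
    return [1 if w in words else 0 for w in vocab]

vec = bow_vector("Free money free money now", vocab)
```

Note that word order and repetition are discarded: "free" appearing twice still yields a single 1.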

7
Q

What is the curse of dimensionality in classification?

A

For d binary features, modeling the full joint distribution requires 2^d − 1 parameters per class, so the number of parameters grows exponentially with the number of features
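A quick sketch of the contrast with Naive Bayes, which needs only one Bernoulli parameter per feature:

```python
# Parameters for a full joint distribution over d binary features,
# for one class: 2^d outcomes, minus 1 (probabilities sum to 1).
def full_joint_params(d):
    return 2 ** d - 1

# Under the naive assumption: one parameter P(x_i = 1 | c) per feature.
def naive_bayes_params(d):
    return d

growth = [(d, full_joint_params(d), naive_bayes_params(d))
          for d in (5, 10, 20)]
```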

8
Q

How does Gaussian Naive Bayes differ from a full multivariate Gaussian classifier?

A

Gaussian Naive Bayes assumes the covariance matrix Σc is diagonal (features are independent given the class)
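With a diagonal Σc, the multivariate density collapses to a product of univariate Gaussians, one per dimension. A sketch with arbitrary illustrative means and variances:

```python
from math import exp, pi, sqrt

def gauss_pdf(x, mu, var):
    """Univariate Gaussian density N(x; mu, var)."""
    return exp(-(x - mu) ** 2 / (2 * var)) / sqrt(2 * pi * var)

# Illustrative class-conditional parameters: the diagonal of Sigma_c
# is just one variance per feature.
mu = [0.0, 2.0]
var = [1.0, 4.0]

def diag_gaussian_density(x):
    """Product of per-dimension densities = multivariate density
    under a diagonal covariance."""
    d = 1.0
    for xi, m, v in zip(x, mu, var):
        d *= gauss_pdf(xi, m, v)
    return d

density = diag_gaussian_density([0.0, 2.0])  # evaluated at the mean
```

A full multivariate Gaussian classifier would instead need the off-diagonal covariances, i.e. O(d²) parameters per class rather than O(d).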

9
Q

What are the steps for classifying using MAP (Maximum A Posteriori) in a Naive Bayes classifier?

A

1. Estimate the prior P(c) and the conditional likelihoods P(xᵢ | c) from the training data.
2. For a new input x, compute P(c) ∏ᵢ P(xᵢ | c) for each class c (in practice, as a sum of logs for numerical stability).
3. Predict the class with the highest posterior: ĉ = argmax_c P(c) ∏ᵢ P(xᵢ | c).
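The MAP decision rule can be sketched as follows, using hypothetical Bernoulli parameters for two classes and two binary features:

```python
from math import log

# Hypothetical Bernoulli Naive Bayes model: priors P(c) and
# per-feature probabilities P(x_i = 1 | c).
priors = {"pos": 0.5, "neg": 0.5}
theta = {"pos": [0.8, 0.6], "neg": [0.3, 0.1]}

def map_predict(x):
    """argmax_c [ log P(c) + sum_i log P(x_i | c) ]."""
    def score(c):
        s = log(priors[c])
        for xi, t in zip(x, theta[c]):
            s += log(t if xi == 1 else 1 - t)
        return s
    return max(priors, key=score)

pred = map_predict([1, 1])
```

Summing log-probabilities rather than multiplying raw probabilities avoids numerical underflow when there are many features.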

10
Q

What are two methods for estimating parameters in Naive Bayes?

A

Maximum Likelihood Estimation (MLE) and Maximum A Posteriori (MAP) estimation
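For a Bernoulli parameter P(xᵢ = 1 | c), MLE is the raw frequency, while MAP with a Beta prior adds pseudo-counts; α = 2 below gives classic Laplace (add-one) smoothing. A sketch of the contrast:

```python
# MLE: raw frequency of the feature being on within class c.
def mle(count_on, total):
    return count_on / total

# MAP with a symmetric Beta(alpha, alpha) prior:
# (count + alpha - 1) / (total + 2*(alpha - 1)).
# alpha = 2 is add-one (Laplace) smoothing.
def map_estimate(count_on, total, alpha=2):
    return (count_on + alpha - 1) / (total + 2 * (alpha - 1))

p_mle = mle(0, 10)           # 0.0 -- a zero wipes out the whole product
p_map = map_estimate(0, 10)  # (0 + 1) / (10 + 2), never exactly zero
```

The smoothed estimate matters in practice: an MLE of exactly zero for any feature zeroes out the entire Naive Bayes product for that class.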
