ML | Linear models | Priority Flashcards

1
Q

What is linear regression? What do the terms p-value, coefficient, and r-squared value mean? What is the significance of each of these components?

Statistics q4 p9

A

(See source material.) \mu = \sum_k \beta_k x_k
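
A hedged sketch of the standard definitions, since the deck's answer points to an external source: each coefficient \beta_k is the expected change in the response per one-unit change in x_k with the other predictors held fixed; its p-value tests the null hypothesis \beta_k = 0; and the r-squared value is the fraction of variance explained,

R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2}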

2
Q

What are the assumptions required for linear regression?

Statistics q5 p10

A

(See source material.) MRLH mnemonic (plausibly: no Multicollinearity, normally distributed Residuals, Linearity, and Homoscedasticity)

3
Q

What is linear regression?

ML q15 p42

A

(See source material.) \mu = \sum_k \beta_k x_k

4
Q

What are the drawbacks of the linear model?

ML q16 p43

A

(See source material.)

5
Q

Say you are running a multiple linear regression and believe there are several predictors that are correlated. How will the results of the regression be affected if they are indeed correlated? How would you deal with this problem?

A

(See source material.)
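
A hedged note on the standard answer, not from the deck's source: correlated predictors inflate the variance of the coefficient estimates, so individual coefficients become unstable and their signs can flip, even though overall predictions may remain fine. A common diagnostic is the variance inflation factor,

\mathrm{VIF}_j = \frac{1}{1 - R_j^2}

where R_j^2 comes from regressing x_j on the other predictors; common remedies are dropping or combining the correlated predictors, or using ridge (L2) regularization.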

6
Q

Equation for correlation coefficient r.

A

(See source material.)
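
A hedged reconstruction of the standard Pearson formula, since the source equation isn't included:

r = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_i (x_i - \bar{x})^2}\,\sqrt{\sum_i (y_i - \bar{y})^2}}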

7
Q

How does naive Bayes assign a class c to a document d (basic equation)? Why is this an example of a generative model?

A

(See source material.) Eq. (5.1). Naive Bayes uses a likelihood term, which expresses how to generate the features of a document if we knew it was of class c; modeling that generative process is what makes it a generative model.
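
A hedged reconstruction of Eq. (5.1) in its standard form (notation assumed, not copied from the source):

\hat{c} = \arg\max_{c \in C} \overbrace{P(d \mid c)}^{\text{likelihood}} \; \overbrace{P(c)}^{\text{prior}}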

Jurafsky SLP3E 5. Logistic regression

8
Q

3 equations to standardize input features.

A

(See source material.) Eq. (5.8)
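
A hedged reconstruction of the standard z-score recipe (mean, standard deviation, standardized value), assuming M observations of feature x_i:

\mu_i = \frac{1}{M}\sum_{j=1}^{M} x_i^{(j)} \qquad \sigma_i = \sqrt{\frac{1}{M}\sum_{j=1}^{M}\left(x_i^{(j)} - \mu_i\right)^2} \qquad z_i = \frac{x_i - \mu_i}{\sigma_i}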

Jurafsky SLP3E 5. Logistic regression

9
Q

Equation to normalize input features.

A

(See source material.) Eq. (5.9)
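
A hedged reconstruction of the standard min-max normalization, which rescales each feature to lie in [0, 1]:

x_i' = \frac{x_i - \min(x_i)}{\max(x_i) - \min(x_i)}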

Jurafsky SLP3E 5. Logistic regression

10
Q

Equation for softmax.

A

(See source material.) Eq. (5.15)
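
A hedged reconstruction of the standard softmax over a vector z of K scores:

\mathrm{softmax}(z_i) = \frac{\exp(z_i)}{\sum_{j=1}^{K} \exp(z_j)} \qquad 1 \le i \le K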

Jurafsky SLP3E 5. Logistic regression: 5.2.4 Choosing a classifier

11
Q

Equation for L2 regularized objective.

A

(See source material.) Eq. (5.37)
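
A hedged reconstruction of the standard form (w for the weights and \alpha for the regularization strength are assumed notation):

\hat{w} = \arg\max_{w}\left[\sum_{i=1}^{m} \log P\!\left(y^{(i)} \mid x^{(i)}\right)\right] - \alpha \sum_{j=1}^{n} w_j^2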

Jurafsky SLP3E 5. Logistic regression: 5.2.4 Choosing a classifier

12
Q

Equation for L1 regularized objective.

A

(See source material.) Eq. (5.39)
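
A hedged reconstruction of the standard form, differing from the L2 objective only in the penalty term:

\hat{w} = \arg\max_{w}\left[\sum_{i=1}^{m} \log P\!\left(y^{(i)} \mid x^{(i)}\right)\right] - \alpha \sum_{j=1}^{n} |w_j|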

Jurafsky SLP3E 5. Logistic regression: 5.2.4 Choosing a classifier

13
Q

Equation for a Gaussian prior on weights. How does this relate to regularization?

A

(See source material.) Eq. (5.40). L2 regularization corresponds to assuming that weights are distributed according to a Gaussian distribution with mean \mu = 0.
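
A hedged reconstruction of the standard Gaussian prior on each weight w_j:

P(w_j) = \frac{1}{\sqrt{2\pi\sigma_j^2}} \exp\!\left(-\frac{(w_j - \mu_j)^2}{2\sigma_j^2}\right)

With \mu_j = 0, maximizing the posterior penalizes large weights exactly as the L2 term does.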

Jurafsky SLP3E 5. Logistic regression: 5.2.4 Choosing a classifier

14
Q

Multinomial logistic regression: The loss function for a single example x

A

The negative sum over the K output classes of the true one-hot label y_k times the log of the predicted probability for that class (Eq. 5.44). This turns out to be just the negative log probability of the correct class c (Eq. 5.45). (See source material.)
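
A hedged reconstruction of the standard cross-entropy loss, with y_k the one-hot true label and \hat{y}_k the predicted probability:

L_{CE}(\hat{y}, y) = -\sum_{k=1}^{K} y_k \log \hat{y}_k = -\log \hat{y}_c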

Jurafsky SLP3E 5. Logistic regression: 5.2.4 Choosing a classifier
