MLA FA Flashcards

1
Q

The two phases of the supervised ML process: Training, ________.

A

PREDICTING

2
Q

Logistic Regression is an example of a regression algorithm.

A

FALSE

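Note: a minimal sketch of why this is FALSE, using scikit-learn (the toy dataset and its parameters are assumptions): despite its name, logistic regression predicts discrete class labels, so it is a classifier.

```python
# Despite its name, LogisticRegression is a classifier.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=100, random_state=0)  # toy data (assumed)
clf = LogisticRegression().fit(X, y)
print(clf.predict(X[:5]))  # discrete labels such as [0 1 ...], not continuous values
```
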
3
Q

The _____ refers to the error from having wrong or overly simple assumptions in the learning algorithm.

A

BIAS

4
Q

Its primary objective is to map the input variable to the output variable.

A

Supervised Learning

5
Q

These concepts help us understand how well a model performs: Overfitting, Underfitting, _________.

A

GENERALIZATION

6
Q

If your model performs well on the training set but poorly on the validation set, this is a sign of:

A

Overfitting

7
Q

When the model fits too closely to the training dataset.

A

Overfitting

8
Q

In k-NN, High Model Complexity is underfitting.

A

FALSE

9
Q

K-nearest neighbors makes a prediction for a new data point by finding the training data points that exactly match it.

A

FALSE

10
Q

In k-NN, Low Model Complexity is:

A

Underfitting

11
Q

In k-NN, when you choose a small value of k (e.g., k=1), the model becomes more complex.

A

TRUE

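Note: a hedged sketch of this effect (the moons dataset and k values are assumptions): at k=1 the model memorizes the training set; larger k smooths the decision boundary.

```python
# Smaller k means a more complex k-NN model; k=1 fits the training data perfectly.
from sklearn.datasets import make_moons
from sklearn.neighbors import KNeighborsClassifier

X, y = make_moons(n_samples=100, noise=0.25, random_state=3)  # toy data (assumed)
for k in (1, 3, 15):
    clf = KNeighborsClassifier(n_neighbors=k).fit(X, y)
    print(f"k={k}: training accuracy = {clf.score(X, y):.2f}")  # 1.00 at k=1
```
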
12
Q

There is a regression variant of the k-nearest neighbors algorithm.

A

TRUE

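Note: a minimal sketch of that variant (the tiny dataset is made up): scikit-learn's KNeighborsRegressor predicts the mean target of the k nearest neighbors.

```python
# KNeighborsRegressor is the regression variant of k-NN.
from sklearn.neighbors import KNeighborsRegressor

X = [[0.0], [1.0], [2.0], [3.0]]  # made-up 1-D inputs
y = [0.0, 0.5, 2.0, 3.0]          # made-up targets
reg = KNeighborsRegressor(n_neighbors=2).fit(X, y)
print(reg.predict([[1.5]]))  # mean of the two nearest targets: (0.5 + 2.0) / 2 = 1.25
```
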
13
Q

In k-NN, High Model Complexity is:

A

Overfitting

14
Q

When comparing training set and test set scores, we find that we predict very accurately on the training set, but the R² on the test set is much worse. This is a sign of underfitting.

A

FALSE

15
Q

When comparing training set and test set scores, we find that we predict very accurately on the training set, but the R² on the test set is much worse. This is a sign of:

A

Overfitting
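
Note: a hedged sketch of this diagnosis (the synthetic data and choice of model are assumptions): score the fitted model on both splits and compare R².

```python
# Training R^2 near 1.0 but a much worse test R^2 signals overfitting.
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor

X, y = make_regression(n_samples=100, n_features=1, noise=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
reg = KNeighborsRegressor(n_neighbors=1).fit(X_train, y_train)  # k=1 memorizes
print("training R^2:", reg.score(X_train, y_train))  # 1.0
print("test R^2:", reg.score(X_test, y_test))        # much lower
```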

16
Q

Ridge regression is a linear regression model that controls complexity to avoid overfitting.

A

TRUE

17
Q

Lasso uses L1 Regularization.

A

TRUE
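
Note: a combined sketch for these two cards (alpha and the dataset are arbitrary assumptions): Ridge applies an L2 penalty and Lasso an L1 penalty; the L1 penalty can set coefficients exactly to zero, which is what makes Lasso models easier to interpret.

```python
# Ridge (L2) shrinks weights; Lasso (L1) can zero some of them out entirely.
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, Ridge

X, y = make_regression(n_samples=100, n_features=20, noise=10, random_state=0)
ridge = Ridge(alpha=1.0).fit(X, y)  # alpha chosen arbitrarily
lasso = Lasso(alpha=1.0).fit(X, y)
print("ridge coefficients at zero:", sum(c == 0 for c in ridge.coef_))
print("lasso coefficients at zero:", sum(c == 0 for c in lasso.coef_))
```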

18
Q

The ‘slope’ parameter is also called _______ or coefficients.

A

Weight

19
Q

Linear Regression is also known as Ordinal Least Squares.

A

FALSE

20
Q

In supervised learning, market trend analysis is an example of:

A

REGRESSION

21
Q

A model that performs poorly on both training and new data because it hasn’t learned enough from the training data.

A

Underfitting

22
Q

Classification algorithms address classification problems where the output variable is categorical.

A

TRUE

23
Q

This refers to the error resulting from sensitivity to the noise in the training data.

A

Variance (not among the card's original options)
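
Note: cards 3 and 23 name the two halves of the standard bias-variance decomposition; for reference, the usual textbook statement (not part of this deck) is:

$$\mathbb{E}\big[(y - \hat{f}(x))^2\big] = \underbrace{\mathrm{Bias}\big[\hat{f}(x)\big]^2}_{\text{too-simple assumptions}} + \underbrace{\mathrm{Var}\big[\hat{f}(x)\big]}_{\text{sensitivity to noise}} + \underbrace{\sigma^2}_{\text{irreducible noise}}$$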

24
Q

The ‘k’ in k-nearest neighbors refers to the new closest data point.

A

FALSE

25
Q

In k-NN, Euclidean distance is used by default as the distance measure.

A

TRUE
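
Note: for reference, the standard Euclidean distance between points x and x′ with n features (the formula itself is not spelled out in the deck) is:

$$d(x, x') = \sqrt{\sum_{i=1}^{n} \left(x_i - x'_i\right)^2}$$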

26
Q

The ________ is the sum of the squared differences between the predictions and the true values.

A

Mean Squared Error
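
Note: the card follows the loose textbook phrasing; strictly, the plain sum of squared differences is the residual sum of squares, while the mean squared error averages it over the n samples:

$$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2$$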

27
Q

In Ridge regression, if α (alpha) is smaller, the penalty becomes larger.

A

FALSE
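
Note: a hedged sketch of the actual relationship (the dataset and alpha values are assumptions): increasing alpha strengthens the penalty, which shrinks the weight vector.

```python
# Larger alpha gives a stronger penalty, so the weights' L2 norm shrinks.
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge

X, y = make_regression(n_samples=60, n_features=10, noise=5, random_state=0)
for alpha in (0.1, 1.0, 10.0):
    r = Ridge(alpha=alpha).fit(X, y)
    norm = sum(c ** 2 for c in r.coef_) ** 0.5
    print(f"alpha={alpha}: ||w|| = {norm:.2f}")  # decreases as alpha grows
```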

28
Q

Ridge is generally preferred over Lasso, but if you want a model that is easy to analyze and understand, use Lasso.

A

TRUE

29
Q

Dichotomous classes mean Yes or No.

A

TRUE

30
Q

Linear models make a prediction using a linear function of the input features.

A

TRUE

31
Q

The ‘offset’ parameter is also called slope.

A

FALSE
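
Note: a closing sketch tying cards 18, 30, and 31 together (the tiny dataset is made up): in a linear model ŷ = w·x + b, scikit-learn stores the slope/weights in coef_ and the offset (intercept) in intercept_.

```python
# The slope w lives in coef_ (weights/coefficients), the offset b in intercept_.
from sklearn.linear_model import LinearRegression

X = [[0.0], [1.0], [2.0]]  # made-up inputs
y = [1.0, 3.0, 5.0]        # exactly y = 2*x + 1
lr = LinearRegression().fit(X, y)
print("slope / weights (coef_):", lr.coef_)               # ~[2.0]
print("offset / intercept (intercept_):", lr.intercept_)  # ~1.0
```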