Practice Exam (Oefententamen) Flashcards

1
Q

What is the purpose of regularization in linear regression?

A

Regularization penalizes the size of weights to reduce overfitting by discouraging large weights. This results in a model that generalizes better to unseen data.

2
Q

Which regularization terms are commonly used in linear regression?

A

L1 regularization (Lasso): lambda * SUM |w_i| and L2 regularization (Ridge): lambda * SUM (w_i)^2. L1 encourages sparsity (some weights zero), while L2 encourages small weights.
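As a small sketch of the two penalty terms above (the weight vector and lambda value are illustrative, not from any specific model):

```python
# L1 (Lasso) and L2 (Ridge) penalty terms for a weight vector.

def l1_penalty(weights, lam):
    """Lasso penalty: lambda * sum of absolute weights."""
    return lam * sum(abs(w) for w in weights)

def l2_penalty(weights, lam):
    """Ridge penalty: lambda * sum of squared weights."""
    return lam * sum(w * w for w in weights)

weights = [0.5, -2.0, 0.0, 1.5]
lam = 0.1
print(l1_penalty(weights, lam))  # 0.1 * (0.5 + 2.0 + 0.0 + 1.5) = 0.4
print(l2_penalty(weights, lam))  # 0.1 * (0.25 + 4.0 + 0.0 + 2.25) = 0.65
```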

3
Q

How can we find an optimal regularization weight in a regularized model?

A

Cross-validation techniques, like 10-fold or leave-one-out cross-validation, are used to determine the best regularization weight by measuring performance across different subsets of data.
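A hedged sketch of this idea, assuming a one-dimensional, zero-intercept ridge model with the closed-form fit w = Σxy / (Σx² + λ); the data and the lambda grid are illustrative:

```python
# Select lambda by k-fold cross-validation for 1-D ridge regression.

def ridge_fit(xs, ys, lam):
    """Closed-form ridge weight for y = w * x (no intercept)."""
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + lam)

def cv_error(xs, ys, lam, k=5):
    """Average held-out squared error over k folds."""
    n = len(xs)
    total = 0.0
    for i in range(k):
        test_idx = set(range(i, n, k))  # every k-th point forms the test fold
        train = [(x, y) for j, (x, y) in enumerate(zip(xs, ys)) if j not in test_idx]
        test = [(x, y) for j, (x, y) in enumerate(zip(xs, ys)) if j in test_idx]
        w = ridge_fit([x for x, _ in train], [y for _, y in train], lam)
        total += sum((y - w * x) ** 2 for x, y in test) / len(test)
    return total / k

xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1, 12.2, 13.8, 16.1, 17.9, 20.2]  # roughly y = 2x
best_lam = min([0.01, 0.1, 1.0, 10.0], key=lambda lam: cv_error(xs, ys, lam))
print(best_lam)
```

The lambda with the lowest average held-out error wins; heavy over-regularization (a very large lambda) shrinks the weight too far and shows up as high cross-validation error.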

4
Q

When should you increase or decrease the regularization parameter lambda in linear regression?

A

If the model is overfitting (fitting the training data too closely), increase lambda. If it is underfitting (too smooth, missing patterns), decrease lambda.

5
Q

How can you determine independence of variables in a multivariate Gaussian distribution?

A

Variables X and Y are independent if their covariance Cov(X, Y) is zero. This is represented in the covariance matrix by having zero off-diagonal elements.
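As an illustrative sketch, the (sample) covariance can be computed directly; the data below are made up so that one pair co-varies and the other has exactly zero sample covariance:

```python
# Sample covariance as the quantity checked for (Gaussian) independence.

def covariance(xs, ys):
    """Unbiased sample covariance of two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)

x = [1.0, 2.0, 3.0, 4.0]
y_dep = [2.0, 4.0, 6.0, 8.0]  # perfectly correlated with x
y_ind = [5.0, 3.0, 3.0, 5.0]  # constructed so its covariance with x is zero

print(covariance(x, y_dep))  # positive: the variables co-vary
print(covariance(x, y_ind))  # 0.0 for this sample
```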

6
Q

What is an unbiased estimator?

A

An estimator “theta hat” is unbiased if its expected value equals the true parameter value: E[theta hat] = theta. This means it does not systematically overestimate or underestimate the parameter.
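A quick empirical sketch of this definition: the sample mean is an unbiased estimator of the true mean, so averaging it over many repeated samples approaches the true value. The distribution, sample size, and repetition count below are illustrative.

```python
# Empirical check of unbiasedness: E[theta hat] should equal theta.
import random

random.seed(0)
true_mean = 3.0
estimates = []
for _ in range(5000):
    sample = [random.gauss(true_mean, 1.0) for _ in range(10)]
    estimates.append(sum(sample) / len(sample))  # theta hat = sample mean

avg_estimate = sum(estimates) / len(estimates)  # approximates E[theta hat]
print(round(avg_estimate, 1))  # 3.0
```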

7
Q

What assumptions are made about the noise in linear regression?

A

Noise assumptions include: constant variance (homoscedasticity), independence of errors, and, often, Gaussian (normally distributed) errors.

8
Q

Define conjugate prior in Bayesian inference.

A

A prior is conjugate if the posterior distribution is in the same family as the prior. This simplifies computation as the functional form of the distribution remains consistent.
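As an illustrative sketch (the Beta-Bernoulli pair is a standard example of conjugacy, not necessarily the one used in this course): a Beta(a, b) prior updated with Bernoulli observations yields another Beta distribution, so only the two parameters change.

```python
# Conjugacy: Beta prior + Bernoulli likelihood -> Beta posterior.

def beta_posterior(a, b, successes, failures):
    """Posterior parameters of a Beta(a, b) prior after Bernoulli data."""
    return a + successes, b + failures

prior = (2, 2)  # Beta(2, 2) prior on a coin's bias (illustrative)
posterior = beta_posterior(*prior, successes=7, failures=3)
print(posterior)  # (9, 5): same family as the prior, only parameters updated
```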

9
Q

Explain Maximum Likelihood Estimation (MLE) in simple terms.

A

MLE finds the parameter values that maximize the likelihood of the observed data. In practice this is done by differentiating the log-likelihood with respect to the parameter, setting the derivative to zero, and solving.
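A hedged sketch, assuming i.i.d. Gaussian data with known variance: differentiating the log-likelihood in mu and setting it to zero gives the closed-form maximizer mu_hat = sample mean. The data values are illustrative.

```python
# MLE of a Gaussian mean: the sample mean maximizes the log-likelihood.
import math

def log_likelihood(data, mu, sigma=1.0):
    """Log-likelihood of i.i.d. Gaussian data with mean mu."""
    return sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
               - (x - mu) ** 2 / (2 * sigma ** 2) for x in data)

data = [2.0, 3.5, 4.0, 2.5, 3.0]
mle = sum(data) / len(data)  # closed-form maximizer of the log-likelihood

print(mle)  # 3.0
print(log_likelihood(data, mle) > log_likelihood(data, mle + 0.5))  # True
```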

10
Q

How is uncertainty reflected in the Bayesian posterior distribution?

A

The Bayesian posterior is a full distribution over the parameters rather than a point estimate: its spread reflects the uncertainty arising from the unknown parameters and from noise in both the training data and unseen data.
