Lecture 5 Flashcards

1
Q

What is model comparison?

A

The process of evaluating several candidate models fitted to the same data, balancing goodness of fit against simplicity.

2
Q

What is the Akaike Information Criterion (AIC)?

A

A measure of model quality that penalizes complexity.

3
Q

What are Akaike weights and evidence ratios?

A
  • Akaike weights: Probabilities that a model is the best among tested models.
  • Evidence ratios: Quantify relative support for different models by comparing their Akaike weights.
4
Q

Why is it important to use the same dataset for model comparison?

A

To ensure a fair comparison: AIC values are only comparable between models fitted to the same observations.

5
Q

What does a lower AIC value indicate?

A

That the model has a better balance of fit and simplicity compared to models with higher AIC values.

6
Q

What is AICc, and why is it used?

A

A corrected version of AIC that adjusts for small sample sizes to reduce bias.

9
Q

What is the trade-off involved in model comparison?

A

More complex models may fit data better but risk overfitting, while simpler models may generalize better but fail to capture nuances.

10
Q

What is the goal of model comparison?

A

To identify the best model by balancing goodness of fit and model complexity to avoid overfitting.

11
Q

What is Akaike Information Criterion (AIC)?

A

AIC is a statistical measure that evaluates the goodness of fit of a model while penalizing for the number of parameters. The lower the AIC, the better the model.

12
Q

What is the formula for AIC?

A

AIC = -2 ln(L) + 2k, where L is the maximized likelihood of the model given the observations, and k is the number of fitted parameters.
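The formula can be sketched in Python; the two log-likelihood values and parameter counts below are hypothetical, chosen only to illustrate the fit-versus-complexity trade-off:

```python
def aic(log_likelihood, k):
    """Akaike Information Criterion: -2 ln(L) + 2k."""
    return -2.0 * log_likelihood + 2.0 * k

# Two hypothetical models fitted to the same dataset:
# model A: log-likelihood -100.0 with 3 parameters
# model B: log-likelihood -98.5  with 6 parameters
aic_a = aic(-100.0, 3)  # 206.0
aic_b = aic(-98.5, 6)   # 209.0
# Model A has the lower AIC: B fits slightly better, but its
# extra parameters do not pay for their complexity penalty.
```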

13
Q

Why is AICc recommended for small sample sizes?

A

AICc includes a correction term (2k(k+1)/(n-k-1)) that reduces bias in small sample sizes, making the AIC more accurate.
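A minimal sketch of the correction, using a hypothetical log-likelihood and a small hypothetical sample size of n = 20:

```python
def aicc(log_likelihood, k, n):
    """Small-sample corrected AIC: AIC + 2k(k+1)/(n-k-1)."""
    aic = -2.0 * log_likelihood + 2.0 * k
    return aic + (2.0 * k * (k + 1)) / (n - k - 1)

# With only n = 20 observations the correction is noticeable:
# AIC = 206.0, correction = 24/16 = 1.5, so AICc = 207.5.
value = aicc(-100.0, 3, 20)
# As n grows the correction term shrinks toward 0, so AICc -> AIC.
```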

14
Q

What are delta AICc values?

A

Delta AICc values represent the difference between the AICc of each model and the best model. They are used to rank models.
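The ranking step can be sketched with a dictionary of hypothetical AICc values (model names and numbers below are invented for illustration):

```python
# Hypothetical AICc values for three candidate models
aicc_values = {"M1": 210.0, "M2": 207.5, "M3": 215.0}

best = min(aicc_values.values())
deltas = {name: v - best for name, v in aicc_values.items()}
# deltas == {"M1": 2.5, "M2": 0.0, "M3": 7.5}
# The best model always has delta = 0; larger deltas mean less support.
```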

15
Q

What are Akaike weights?

A

Akaike weights are probabilities that indicate how likely a model is to be the best among those compared, based on the observations.
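Given delta values, the weights follow from exp(-delta/2), normalized so they sum to 1. The deltas below are hypothetical:

```python
import math

# Hypothetical delta AICc values for three candidate models
deltas = {"M1": 2.5, "M2": 0.0, "M3": 7.5}

raw = {m: math.exp(-d / 2.0) for m, d in deltas.items()}
total = sum(raw.values())
weights = {m: r / total for m, r in raw.items()}
# Weights sum to 1; the best model (delta = 0) gets the largest weight,
# interpretable as its probability of being the best of the candidate set.
```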

16
Q

How are evidence ratios calculated?

A

Evidence ratios compare the likelihood of one model versus another by taking the ratio of their Akaike weights.
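A one-line sketch with hypothetical weight values:

```python
# Hypothetical Akaike weights for the two models being compared
w_best, w_other = 0.71, 0.20

evidence_ratio = w_best / w_other
# ~3.55: the first model is about 3.55 times more likely than the
# second to be the best model in the candidate set.
```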

17
Q

What is k-fold cross-validation?

A

A technique that divides the dataset into k parts and validates the model on one part while calibrating on the others, repeating this process k times and averaging the errors.
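A minimal, dependency-free sketch of the splitting step. The index-based folds here are illustrative; real workflows typically shuffle the data first:

```python
def k_fold_splits(n, k):
    """Yield (validation_indices, calibration_indices) for k folds."""
    # Spread the n observations as evenly as possible over k folds.
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        val = list(range(start, start + size))
        cal = list(range(0, start)) + list(range(start + size, n))
        yield val, cal
        start += size

# 10 observations, 5 folds: each observation is used for
# validation exactly once and for calibration k-1 times.
splits = list(k_fold_splits(10, 5))
# validation folds: [0,1], [2,3], [4,5], [6,7], [8,9]
```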

18
Q

What are some challenges with k-fold cross-validation?

A

Issues include choosing the appropriate value for k, increased computation time, and potential dependencies between subsets.

19
Q

What does Occam’s razor suggest in model comparison?

A

It suggests preferring simpler models when they have similar goodness of fit, as they make fewer assumptions.

20
Q

Why must all models be fitted on the same dataset for valid AIC comparison?

A

To ensure consistency in comparisons, since differences in datasets could lead to biased evaluations.