Lecture 5 Flashcards
What is model comparison?
A process that balances simplicity and accuracy when evaluating multiple models.
What is the Akaike Information Criterion (AIC)?
A measure of model quality that penalizes complexity.
What are Akaike weights and evidence ratios?
- Akaike weights: Probabilities that a model is the best among tested models.
- Evidence ratios: Quantify relative support for different models by comparing their Akaike weights.
Why is it important to use the same dataset for model comparison?
Because information criteria are only comparable when the models are fit to the same observations; AIC values computed on different datasets cannot be meaningfully compared.
What does a lower AIC value indicate?
That the model has a better balance of fit and simplicity compared to models with higher AIC values.
What is AICc, and why is it used?
A corrected version of AIC that adjusts for small sample sizes to reduce bias.
What is the trade-off involved in model comparison?
More complex models may fit data better but risk overfitting, while simpler models may generalize better but fail to capture nuances.
What is the goal of model comparison?
To identify the best model by balancing goodness of fit and model complexity to avoid overfitting.
What is Akaike Information Criterion (AIC)?
AIC is a statistical measure that evaluates the goodness of fit of a model while penalizing for the number of parameters. The lower the AIC, the better the model.
What is the formula for AIC?
AIC = -2L + 2k, where L is the maximized log-likelihood of the model parameters given the observations, and k is the number of fitted parameters.
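The formula translates directly into code. A minimal sketch (the function name and example values are illustrative, not from the lecture):

```python
def aic(log_likelihood, k):
    """Akaike Information Criterion: AIC = -2L + 2k,
    where L is the maximized log-likelihood and k the number
    of fitted parameters."""
    return -2.0 * log_likelihood + 2 * k

# e.g. a model with maximized log-likelihood -120.5 and 3 parameters
print(aic(-120.5, 3))  # -> 247.0
```

Note that a better (higher) log-likelihood lowers the AIC, while each extra parameter raises it by 2: this is the fit-versus-complexity balance described above.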
Why is AICc recommended for small sample sizes?
AICc adds a correction term, 2k(k+1)/(n-k-1), where n is the sample size. This reduces the small-sample bias of AIC; as n grows, the correction shrinks toward zero and AICc converges to AIC.
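A sketch of the corrected criterion, extending the plain AIC formula (function name and sample values are illustrative):

```python
def aicc(log_likelihood, k, n):
    """Small-sample corrected AIC: AICc = AIC + 2k(k+1)/(n - k - 1)."""
    aic = -2.0 * log_likelihood + 2 * k
    return aic + (2 * k * (k + 1)) / (n - k - 1)

# With only n = 20 observations and k = 3 parameters,
# the correction is 24/16 = 1.5 points:
print(aicc(-120.5, 3, 20))  # -> 248.5
```

With n = 2000 instead, the same correction would be about 0.012, which is why AICc and AIC give essentially the same ranking on large samples.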
What are delta AICc values?
Delta AICc values are the differences between each model's AICc and that of the best (lowest-AICc) model, so the best model has a delta of zero. They are used to rank the candidate models.
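Computing and ranking by delta AICc is a one-liner per model. A sketch with hypothetical scores (the model names and AICc values are made up for illustration):

```python
# Hypothetical AICc scores for three candidate models fit to the same data.
aicc_scores = {"linear": 248.5, "quadratic": 245.1, "cubic": 246.9}

best = min(aicc_scores.values())
deltas = {name: score - best for name, score in aicc_scores.items()}

# Rank models from best (delta = 0) upward.
for name, delta in sorted(deltas.items(), key=lambda kv: kv[1]):
    print(f"{name}: delta AICc = {delta:.1f}")
```

The best model always prints a delta of 0.0; the other deltas show how far each model falls behind it.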
What are Akaike weights?
Akaike weights are probabilities, computed from the delta AICc values, that indicate how likely each model is to be the best among those compared, given the observations. They sum to 1 across the candidate set.
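The weights follow from the deltas via the standard formula w_i = exp(-delta_i / 2) / sum_j exp(-delta_j / 2), and an evidence ratio is simply the ratio of two weights. A sketch using hypothetical deltas:

```python
import math

# Hypothetical delta AICc values for three candidate models.
deltas = {"quadratic": 0.0, "cubic": 1.8, "linear": 3.4}

# Akaike weight: relative likelihood of each model, normalized to sum to 1.
rel = {name: math.exp(-d / 2) for name, d in deltas.items()}
total = sum(rel.values())
weights = {name: r / total for name, r in rel.items()}

for name, w in weights.items():
    print(f"{name}: weight = {w:.3f}")

# Evidence ratio: relative support for one model over another.
print(weights["quadratic"] / weights["cubic"])  # exp(0.9), roughly 2.46
```

Because the weights share a common denominator, the evidence ratio between two models reduces to exp((delta_j - delta_i) / 2), so it can be read straight off the deltas as well.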