Metrics Flashcards
What does R² (Coefficient of Determination) measure in regression?
R² measures the proportion of variance in the actual values that the model's predictions explain; 1.0 indicates a perfect fit, while 0.0 means the model does no better than always predicting the mean.
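A minimal sketch using scikit-learn's `r2_score`; the arrays are small made-up values purely for illustration:

```python
from sklearn.metrics import r2_score

y_true = [3.0, 5.0, 2.5, 7.0]   # actual values
y_pred = [2.8, 5.1, 2.9, 6.8]   # model predictions

# Proportion of variance in y_true explained by y_pred (1.0 = perfect fit)
print(r2_score(y_true, y_pred))
```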
What is Mean Absolute Error (MAE) used for in regression?
MAE measures the average absolute difference between predicted and actual values, indicating prediction accuracy.
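A quick sketch with scikit-learn's `mean_absolute_error`, reusing the same illustrative arrays:

```python
from sklearn.metrics import mean_absolute_error

y_true = [3.0, 5.0, 2.5, 7.0]
y_pred = [2.8, 5.1, 2.9, 6.8]

# Average of |actual - predicted|, in the same units as the target
print(mean_absolute_error(y_true, y_pred))  # 0.225
```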
What is Mean Squared Error (MSE), and why is it used?
MSE calculates the average squared differences between predicted and actual values, penalizing larger errors more heavily.
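The same toy data through scikit-learn's `mean_squared_error`:

```python
from sklearn.metrics import mean_squared_error

y_true = [3.0, 5.0, 2.5, 7.0]
y_pred = [2.8, 5.1, 2.9, 6.8]

# Squaring the residuals weights large errors more heavily than MAE does
print(mean_squared_error(y_true, y_pred))  # 0.0625
```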
What does Root Mean Squared Error (RMSE) represent in regression?
RMSE is the square root of MSE, providing error measurements in the same units as the target variable.
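Since RMSE is just the square root of MSE, one version-agnostic sketch is to take the root with NumPy:

```python
import numpy as np
from sklearn.metrics import mean_squared_error

y_true = [3.0, 5.0, 2.5, 7.0]
y_pred = [2.8, 5.1, 2.9, 6.8]

# RMSE = sqrt(MSE), putting the error back in the target's units
print(np.sqrt(mean_squared_error(y_true, y_pred)))  # 0.25
```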
What is Adjusted R², and how does it differ from R²?
Adjusted R² penalizes R² for the number of predictors in the model, so it increases only when a new predictor improves the fit by more than chance would; plain R² never decreases when predictors are added.
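scikit-learn has no built-in adjusted R², so here is a hand-rolled sketch of the standard formula; the function name and arguments are ours, not a library API:

```python
from sklearn.metrics import r2_score

def adjusted_r2(y_true, y_pred, n_features):
    # Adjusted R² = 1 - (1 - R²) * (n - 1) / (n - p - 1),
    # where n = number of samples and p = number of predictors
    r2 = r2_score(y_true, y_pred)
    n = len(y_true)
    return 1 - (1 - r2) * (n - 1) / (n - n_features - 1)
```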
What does Accuracy measure in classification tasks?
Accuracy is the proportion of correctly classified instances out of the total instances.
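A minimal sketch with scikit-learn's `accuracy_score` on toy labels:

```python
from sklearn.metrics import accuracy_score

y_true = [0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 1]

# 4 of the 5 predicted labels match the actual labels
print(accuracy_score(y_true, y_pred))  # 0.8
```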
What is Precision in classification metrics?
Precision is the ratio of true positive predictions to all positive predictions, reflecting how many of the predicted positives are actually positive (exactness).
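The same toy labels through scikit-learn's `precision_score`:

```python
from sklearn.metrics import precision_score

y_true = [0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 1]

# TP / (TP + FP): 2 true positives, 0 false positives
print(precision_score(y_true, y_pred))  # 1.0
```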
What does Recall indicate in classification?
Recall, or sensitivity, is the ratio of true positive predictions to all actual positives, measuring how many of the real positives the model captures (completeness).
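The matching sketch with scikit-learn's `recall_score`:

```python
from sklearn.metrics import recall_score

y_true = [0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 1]

# TP / (TP + FN): 2 of the 3 actual positives were found
print(recall_score(y_true, y_pred))  # ~0.667
```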
What is the F1-Score used for in classification?
The F1-Score is the harmonic mean of Precision and Recall, balancing the two; it is especially informative when classes are imbalanced and accuracy alone is misleading.
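Continuing the same toy example with scikit-learn's `f1_score`:

```python
from sklearn.metrics import f1_score

y_true = [0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 1]

# Harmonic mean of precision (1.0) and recall (~0.667)
print(f1_score(y_true, y_pred))  # 0.8
```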
What does the Area Under ROC Curve (AUC) signify?
AUC measures a classifier's ability to distinguish between classes across all decision thresholds; 1.0 is perfect separation and 0.5 is no better than random guessing.
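A sketch with scikit-learn's `roc_auc_score`; note it takes predicted scores or probabilities, not hard labels (the values here are made up):

```python
from sklearn.metrics import roc_auc_score

y_true   = [0, 0, 1, 1]
y_scores = [0.1, 0.4, 0.35, 0.8]  # predicted probabilities of the positive class

print(roc_auc_score(y_true, y_scores))  # 0.75
```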
What is the Area Under Precision-Recall Curve (PR AUC)?
PR AUC evaluates classifier performance for imbalanced datasets by focusing on Precision and Recall trade-offs.
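In scikit-learn, `average_precision_score` is the usual single-number summary of the Precision-Recall curve (a step-wise approximation of PR AUC); same illustrative scores as above:

```python
from sklearn.metrics import average_precision_score

y_true   = [0, 0, 1, 1]
y_scores = [0.1, 0.4, 0.35, 0.8]

print(average_precision_score(y_true, y_scores))  # ~0.83
```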
What information does a Confusion Matrix provide?
A Confusion Matrix summarizes classification predictions, showing counts of true positives, false positives, false negatives, and true negatives.
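A sketch with scikit-learn's `confusion_matrix` on the earlier toy labels:

```python
from sklearn.metrics import confusion_matrix

y_true = [0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 1]

# Rows are actual classes, columns are predicted classes:
# [[TN, FP],
#  [FN, TP]]
print(confusion_matrix(y_true, y_pred))  # [[2 0], [1 2]]
```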
What is Mean Average Precision (MAP) used for in ranking tasks?
MAP averages each query's Average Precision, i.e., the precision computed at the rank of every relevant result, across all queries, summarizing overall ranking relevance.
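There is no single standard library call for MAP over queries, so here is a small hand-rolled sketch; each query is represented as 0/1 relevance flags in ranked order, and the function names are ours:

```python
def average_precision(ranked_relevance):
    # Precision at each rank where a relevant item appears, averaged
    hits, total = 0, 0.0
    for rank, rel in enumerate(ranked_relevance, start=1):
        if rel:
            hits += 1
            total += hits / rank
    return total / hits if hits else 0.0

def mean_average_precision(queries):
    # MAP = mean of the per-query Average Precision values
    return sum(average_precision(q) for q in queries) / len(queries)

print(mean_average_precision([[1, 0, 1], [0, 1, 1]]))  # ~0.71
```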
What does Mean Reciprocal Rank (MRR) measure?
MRR calculates the average of the reciprocal ranks of the first relevant item in each query's ranked results, indicating how early the first useful result tends to appear.
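A matching hand-rolled sketch under the same 0/1 relevance-flag convention (again, the function name is ours):

```python
def mean_reciprocal_rank(queries):
    # For each query, score only the first relevant result: 1 / its rank
    total = 0.0
    for ranked in queries:
        for rank, rel in enumerate(ranked, start=1):
            if rel:
                total += 1.0 / rank
                break
    return total / len(queries)

print(mean_reciprocal_rank([[0, 1, 0], [1, 0, 0]]))  # (1/2 + 1/1) / 2 = 0.75
```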
What is the Silhouette Score in clustering?
The Silhouette Score measures how well data points fit within their clusters compared to other clusters, ranging from -1 to 1.
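A sketch with scikit-learn's `silhouette_score`, using synthetic blob data and k-means labels purely for illustration:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# Three well-separated synthetic clusters
X, _ = make_blobs(n_samples=100, centers=3, random_state=0)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Closer to 1 = points sit firmly inside their own cluster
print(silhouette_score(X, labels))
```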