Metrics Flashcards

1
Q

What does R² (Coefficient of Determination) measure in regression?

A

R² measures the proportion of variance in the actual values that is explained by the model’s predictions, indicating goodness of fit; 1 means a perfect fit, while 0 means the model does no better than predicting the mean.
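A minimal sketch using scikit-learn’s r2_score (the toy arrays are made up for illustration):

```python
from sklearn.metrics import r2_score

y_true = [3.0, 5.0, 7.0, 9.0]    # actual values (toy data)
y_pred = [2.8, 5.3, 7.1, 8.6]    # model predictions (toy data)
print(r2_score(y_true, y_pred))  # closer to 1.0 means more variance explained
```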

2
Q

What is Mean Absolute Error (MAE) used for in regression?

A

MAE measures the average absolute difference between predicted and actual values, indicating prediction accuracy.
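A minimal sketch with scikit-learn’s mean_absolute_error (toy values for illustration):

```python
from sklearn.metrics import mean_absolute_error

y_true = [3.0, 5.0, 7.0, 9.0]
y_pred = [2.8, 5.3, 7.1, 8.6]
print(mean_absolute_error(y_true, y_pred))  # mean of |actual - predicted|
```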

3
Q

What is Mean Squared Error (MSE), and why is it used?

A

MSE calculates the average of the squared differences between predicted and actual values, penalizing larger errors more heavily.
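A minimal sketch with scikit-learn’s mean_squared_error (toy values):

```python
from sklearn.metrics import mean_squared_error

y_true = [3.0, 5.0, 7.0, 9.0]
y_pred = [2.8, 5.3, 7.1, 8.6]
print(mean_squared_error(y_true, y_pred))  # squaring weights large errors more
```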

4
Q

What does Root Mean Squared Error (RMSE) represent in regression?

A

RMSE is the square root of MSE, providing error measurements in the same units as the target variable.
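One way to sketch RMSE is to take the square root of MSE by hand (toy values; newer scikit-learn versions also provide a dedicated RMSE helper):

```python
import numpy as np
from sklearn.metrics import mean_squared_error

y_true = [3.0, 5.0, 7.0, 9.0]
y_pred = [2.8, 5.3, 7.1, 8.6]
rmse = np.sqrt(mean_squared_error(y_true, y_pred))  # same units as the target
print(rmse)
```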

5
Q

What is Adjusted R², and how does it differ from R²?

A

Adjusted R² penalizes R² for the number of predictors in the model, so it increases only when an added predictor improves the fit more than would be expected by chance, preventing overestimation of model performance.
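scikit-learn has no built-in Adjusted R², so this sketch applies the standard formula by hand (n samples, p predictors; the numbers are toy values):

```python
from sklearn.metrics import r2_score

y_true = [3.0, 5.0, 7.0, 9.0, 11.0]
y_pred = [2.8, 5.3, 7.1, 8.6, 11.4]
n, p = len(y_true), 2                          # sample count and predictor count (assumed)
r2 = r2_score(y_true, y_pred)
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)  # penalizes extra predictors
print(adj_r2)
```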

6
Q

What does Accuracy measure in classification tasks?

A

Accuracy is the proportion of correctly classified instances out of the total instances.
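A minimal sketch with scikit-learn’s accuracy_score (toy labels):

```python
from sklearn.metrics import accuracy_score

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 1]
print(accuracy_score(y_true, y_pred))  # fraction of predictions that match
```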

7
Q

What is Precision in classification metrics?

A

Precision is the ratio of true positive predictions to all positive predictions (TP / (TP + FP)), reflecting how many of the predicted positives are actually correct.
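A minimal sketch with scikit-learn’s precision_score (toy labels):

```python
from sklearn.metrics import precision_score

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 1, 0, 1, 0, 1]
print(precision_score(y_true, y_pred))  # TP / (TP + FP)
```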

8
Q

What does Recall indicate in classification?

A

Recall, or sensitivity, is the ratio of true positive predictions to all actual positives (TP / (TP + FN)), measuring how completely the model captures the positive class.
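A minimal sketch with scikit-learn’s recall_score (same toy labels as above):

```python
from sklearn.metrics import recall_score

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 1, 0, 1, 0, 1]
print(recall_score(y_true, y_pred))  # TP / (TP + FN)
```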

9
Q

What is the F1-Score used for in classification?

A

The F1-Score is the harmonic mean of Precision and Recall, balancing the two in cases of class imbalance.
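A minimal sketch with scikit-learn’s f1_score (toy labels):

```python
from sklearn.metrics import f1_score

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 1, 0, 1, 0, 1]
print(f1_score(y_true, y_pred))  # harmonic mean of precision and recall
```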

10
Q

What does the Area Under ROC Curve (AUC) signify?

A

AUC measures a classifier’s ability to distinguish between classes across all decision thresholds, with 1 indicating perfect separation and 0.5 indicating random guessing.
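A minimal sketch with scikit-learn’s roc_auc_score, which takes predicted scores or probabilities rather than hard labels (toy values):

```python
from sklearn.metrics import roc_auc_score

y_true  = [0, 0, 1, 1]
y_score = [0.1, 0.4, 0.35, 0.8]        # predicted probabilities (toy values)
print(roc_auc_score(y_true, y_score))  # 1.0 = perfect ranking, 0.5 = random
```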

11
Q

What is the Area Under Precision-Recall Curve (PR AUC)?

A

PR AUC evaluates classifier performance for imbalanced datasets by focusing on Precision and Recall trade-offs.
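One common way to summarize the precision-recall curve is average precision; a sketch with scikit-learn’s average_precision_score (toy values):

```python
from sklearn.metrics import average_precision_score

y_true  = [0, 0, 1, 1]
y_score = [0.1, 0.4, 0.35, 0.8]
print(average_precision_score(y_true, y_score))  # summarizes the PR curve
```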

12
Q

What information does a Confusion Matrix provide?

A

A Confusion Matrix summarizes classification predictions, showing counts of true positives, false positives, false negatives, and true negatives.
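A minimal sketch with scikit-learn’s confusion_matrix for the binary case (toy labels):

```python
from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 1, 0, 1, 0, 1]
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()  # binary layout
print(tn, fp, fn, tp)
```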

13
Q

What is Mean Average Precision (MAP) used for in ranking tasks?

A

MAP computes, for each query, the precision at each rank where a relevant item appears (average precision) and then averages these values across all queries, measuring ranking relevance.
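A hand-rolled sketch of MAP over a couple of toy queries (the average_precision helper and the query data are illustrative, not a standard API):

```python
def average_precision(relevant, ranked):
    """Average the precision at each rank where a relevant item appears."""
    hits, precisions = 0, []
    for rank, item in enumerate(ranked, start=1):
        if item in relevant:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(relevant) if relevant else 0.0

# Each query: (set of relevant items, ranked results returned by the system)
queries = [({"a", "b"}, ["a", "c", "b"]), ({"d"}, ["e", "d"])]
map_score = sum(average_precision(rel, ranked) for rel, ranked in queries) / len(queries)
print(map_score)
```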

14
Q

What does Mean Reciprocal Rank (MRR) measure?

A

MRR calculates the average of reciprocal ranks for the first relevant item in ranked results, indicating query response quality.
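A hand-rolled sketch of MRR (the reciprocal_rank helper and query data are illustrative):

```python
def reciprocal_rank(relevant, ranked):
    """Return 1 / rank of the first relevant item, or 0 if none appears."""
    for rank, item in enumerate(ranked, start=1):
        if item in relevant:
            return 1.0 / rank
    return 0.0

queries = [({"a"}, ["b", "a", "c"]), ({"d"}, ["d", "e"])]
mrr = sum(reciprocal_rank(rel, ranked) for rel, ranked in queries) / len(queries)
print(mrr)  # (1/2 + 1/1) / 2 = 0.75
```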

15
Q

What is the Silhouette Score in clustering?

A

The Silhouette Score measures how well data points fit within their clusters compared to other clusters, ranging from -1 to 1.
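A minimal sketch that clusters toy 2-D points with KMeans and scores the result with scikit-learn’s silhouette_score:

```python
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

X = [[1, 2], [1, 3], [1, 1], [8, 8], [9, 9], [9, 8]]  # toy 2-D points
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(silhouette_score(X, labels))  # values near 1 mean well-separated clusters
```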

16
Q

What is the purpose of Macro-Averaged Precision in multi-class classification?

A

Macro-Averaged Precision computes the unweighted average Precision across all classes, treating each class equally.
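A minimal sketch with precision_score and average="macro" on toy multi-class labels:

```python
from sklearn.metrics import precision_score

y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 2, 2, 2, 1, 1]
print(precision_score(y_true, y_pred, average="macro"))  # unweighted mean over classes
```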

17
Q

What is Micro-Averaged Recall used for in multi-class classification?

A

Micro-Averaged Recall aggregates true positives and false negatives across all classes before computing recall, giving equal weight to each instance rather than each class.
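A minimal sketch with recall_score and average="micro" on the same toy labels:

```python
from sklearn.metrics import recall_score

y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 2, 2, 2, 1, 1]
print(recall_score(y_true, y_pred, average="micro"))  # pools counts over all instances
```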

18
Q

What does Macro-Averaged F1-Score indicate in multi-class metrics?

A

Macro-Averaged F1-Score calculates the F1-Score for each class and averages them, providing a balanced view of performance.
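A minimal sketch with f1_score and average="macro" (toy labels):

```python
from sklearn.metrics import f1_score

y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 2, 2, 2, 1, 1]
print(f1_score(y_true, y_pred, average="macro"))  # per-class F1, then averaged
```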

19
Q

What does a Multi-Class Confusion Matrix show?

A

A Multi-Class Confusion Matrix shows prediction counts for every actual-versus-predicted class combination, highlighting which classes are confused with one another.
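A minimal sketch printing a multi-class confusion matrix with scikit-learn (toy labels):

```python
from sklearn.metrics import confusion_matrix

y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 2, 2, 2, 1, 1]
print(confusion_matrix(y_true, y_pred))  # rows = actual class, columns = predicted class
```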