Interpreting ML Book Cards Flashcards

1
Q

What does each coefficient represent in a linear model?

A

Each coefficient quantifies the change in the outcome variable for a one-unit change in that feature, assuming other variables are held constant.

2
Q

Why is standardization important in linear models?

A

Standardization (scaling features to have zero mean and unit variance) helps in comparing the relative importance of features by making coefficients comparable.
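A minimal sketch of the effect, assuming scikit-learn and a small synthetic dataset (feature and variable names here are illustrative, not from the card):

```python
# Compare coefficients before and after standardization.
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

X, y = make_regression(n_samples=200, n_features=3, noise=10.0, random_state=0)
X[:, 0] *= 1000  # put one feature on a much larger scale than the others

# Raw features: coefficient sizes reflect the features' original units.
raw_coefs = LinearRegression().fit(X, y).coef_

# Standardized features: coefficients are on a comparable scale.
X_std = StandardScaler().fit_transform(X)
std_coefs = LinearRegression().fit(X_std, y).coef_

print("raw coefficients:         ", raw_coefs)
print("standardized coefficients:", std_coefs)
```

The rescaled feature gets a tiny raw coefficient even if it matters a lot; after standardization the coefficients can be compared directly.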

3
Q

What are the key assumptions for interpreting linear models?

A

Linearity, no multicollinearity, homoscedasticity (constant variance of errors), and normality of residuals.

4
Q

How does high multicollinearity affect linear models?

A

High multicollinearity makes coefficient estimates unstable (small changes in the data can produce large changes in the coefficients) and inflates their standard errors, making individual coefficients hard to interpret; this can be mitigated with regularization techniques like ridge or lasso regression.

5
Q

What is the structure of a decision tree?

A

Decision trees split data based on feature values to create nodes and branches, leading to leaf nodes that provide the final prediction.

6
Q

How do decision trees handle interpretability?

A

Decision trees use if-then-else conditions that correspond to paths from the root to leaf nodes, making them easy to visualize and understand.
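A minimal sketch of those if-then rules, assuming scikit-learn and its built-in iris dataset:

```python
# Print a fitted decision tree as root-to-leaf if-then rules.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(iris.data, iris.target)

# Each printed path from the root to a leaf is one if-then-else rule.
print(export_text(tree, feature_names=list(iris.feature_names)))
```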

7
Q

What is a common limitation of decision trees?

A

Decision trees can overfit the data, capturing noise rather than underlying patterns, which can be mitigated by pruning.

8
Q

What is LIME and what does it do?

A

LIME (Local Interpretable Model-agnostic Explanations) explains predictions of any black-box model by creating an interpretable local approximation of the model around a specific instance.

9
Q

How does LIME create explanations?

A

LIME perturbs the instance by making small random changes to its feature values and fits a simple model (e.g., linear) to approximate the complex model’s behavior around that instance.
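A minimal sketch, assuming the `lime` package, scikit-learn, and the iris dataset (the model and settings are illustrative):

```python
# Explain one prediction with LIME's local linear surrogate.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

iris = load_iris()
model = RandomForestClassifier(random_state=0).fit(iris.data, iris.target)

explainer = LimeTabularExplainer(
    iris.data,
    feature_names=iris.feature_names,
    class_names=iris.target_names,
    mode="classification",
)

# Perturb one instance, fit a simple local model, and list the feature weights.
explanation = explainer.explain_instance(iris.data[0], model.predict_proba, num_features=4)
print(explanation.as_list())
```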

10
Q

What are some limitations of LIME?

A

LIME’s explanations are local, not global, and its effectiveness depends on how well the perturbed data samples the local space of the instance.

11
Q

What are Shapley values?

A

Shapley values are derived from game theory to fairly attribute the output of a model to its input features, showing each feature’s contribution to a prediction.
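For reference, the standard game-theoretic formula (not spelled out on the card) for the value attributed to feature i, given a value function v over the full feature set N:

```latex
\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}}
            \frac{|S|!\,(|N| - |S| - 1)!}{|N|!}
            \bigl[\, v(S \cup \{i\}) - v(S) \,\bigr]
```

Here v(S) can be read as the model's expected prediction when only the features in S are known.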

12
Q

What are the key properties of Shapley values?

A

Efficiency, symmetry, dummy, and additivity—these properties ensure fair attribution of the model’s output to features.

13
Q

What are the limitations of Shapley values?

A

They are computationally intensive, especially with many features, and assume feature independence, which may not always hold.

14
Q

What is SHAP and how does it relate to Shapley values?

A

SHAP (SHapley Additive exPlanations) is an implementation of Shapley values tailored for machine learning models, offering efficient computation and a unified interpretation framework.

15
Q

How does SHAP provide interpretations?

A

SHAP values represent additive feature attribution, decomposing a prediction into contributions from each feature, with specific methods like Kernel SHAP, Tree SHAP, and Deep SHAP.
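A minimal sketch of the additive decomposition, assuming the `shap` package and a scikit-learn tree model (the exact shape of `expected_value` can vary between shap versions):

```python
# Additive SHAP attributions for a tree-based regressor.
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True)
model = RandomForestRegressor(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)    # Tree SHAP: exact values for tree models
shap_values = explainer.shap_values(X)   # shape (n_samples, n_features)

# Base value + per-feature contributions should add back up to the prediction.
base = float(np.ravel(explainer.expected_value)[0])
print(base + shap_values[0].sum())       # close to...
print(model.predict(X[:1])[0])           # ...the model's prediction for row 0
```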

16
Q

What are some visualization tools used with SHAP?

A

Force plots, summary plots, and dependence plots are used to visualize how features contribute to individual predictions and overall model behavior.
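A minimal sketch of the three plot types, assuming the `shap` package and a fitted tree model as in the earlier sketch (the "bmi" feature belongs to the diabetes dataset used here):

```python
# Summary, dependence, and force plots for a tree-based regressor.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

data = load_diabetes()
X, y, names = data.data, data.target, data.feature_names
model = RandomForestRegressor(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Summary plot: distribution and magnitude of each feature's impact.
shap.summary_plot(shap_values, X, feature_names=names)

# Dependence plot: a feature's SHAP values against its own values.
shap.dependence_plot("bmi", shap_values, X, feature_names=names)

# Force plot: additive contributions for a single prediction.
shap.force_plot(explainer.expected_value, shap_values[0], X[0],
                feature_names=names, matplotlib=True)
```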

17
Q

What are the benefits of using SHAP?

A

SHAP provides consistent and unbiased explanations, handles feature interactions, and has fast computation methods, especially for tree-based models.

18
Q

What are the limitations of SHAP?

A

SHAP values can be overwhelming when a model has many features, require a good understanding of the data and the model, and are sensitive to the data distribution and to feature-independence assumptions.

19
Q

How do you interpret the coefficient of a feature in a standardized linear model?

A

The coefficient represents the change in the outcome for a one standard deviation increase in the feature, holding all other variables constant (measured in standard deviations of the outcome if the outcome is standardized as well).

20
Q

What is homoscedasticity and why is it important in linear models?

A

Homoscedasticity means that the variance of the errors is constant across all levels of the independent variables; it matters because the standard errors, confidence intervals, and significance tests for the coefficients are only reliable when this assumption holds.
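A minimal sketch of a diagnostic check, assuming statsmodels and a synthetic dataset whose error variance grows with the feature:

```python
# Breusch-Pagan test for heteroscedasticity on a simple OLS fit.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 2.0 * x + rng.normal(scale=1.0 + np.abs(x), size=200)  # variance grows with |x|

X = sm.add_constant(x)
results = sm.OLS(y, X).fit()

# Null hypothesis: constant error variance. A small p-value suggests heteroscedasticity.
lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(results.resid, X)
print(f"Breusch-Pagan p-value: {lm_pvalue:.4f}")
```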

21
Q

What methods can be used to handle multicollinearity in linear models?

A

Ridge regression (adds L2 regularization) and Lasso regression (adds L1 regularization) are common methods to stabilize coefficients and reduce multicollinearity effects.
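A minimal sketch, assuming scikit-learn and two nearly collinear synthetic features (the alpha values are illustrative):

```python
# Ridge and lasso stabilize coefficients under near-perfect collinearity.
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression, Ridge

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = x1 + rng.normal(scale=0.01, size=200)   # almost a copy of x1
X = np.column_stack([x1, x2])
y = 3.0 * x1 + rng.normal(size=200)

print("OLS:  ", LinearRegression().fit(X, y).coef_)  # large, offsetting coefficients
print("Ridge:", Ridge(alpha=1.0).fit(X, y).coef_)    # L2 shrinks and stabilizes
print("Lasso:", Lasso(alpha=0.1).fit(X, y).coef_)    # L1 can zero one of the pair out
```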

22
Q

What is the Gini impurity in decision trees?

A

Gini impurity measures the likelihood of an incorrect classification of a randomly chosen element if it were randomly classified according to the distribution of labels in the node. A lower Gini impurity indicates a purer node.
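A minimal sketch of the calculation from a node's labels:

```python
# Gini impurity: 1 minus the sum of squared class proportions.
import numpy as np

def gini_impurity(labels):
    """0 for a perfectly pure node; larger values mean more mixing."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

print(gini_impurity([0, 0, 0, 0]))  # 0.0  (pure node)
print(gini_impurity([0, 0, 1, 1]))  # 0.5  (maximally mixed for two classes)
```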

23
Q

How do decision trees determine feature importance?

A

Decision trees determine feature importance based on how effectively each feature splits the data, using metrics like Gini impurity or information gain. Features that result in more homogeneous splits are deemed more important.
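A minimal sketch of reading those importances from a fitted scikit-learn tree:

```python
# Impurity-based feature importances from a fitted decision tree.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()
tree = DecisionTreeClassifier(random_state=0).fit(iris.data, iris.target)

# Importances sum to 1; larger values mean the feature produced purer splits.
for name, importance in zip(iris.feature_names, tree.feature_importances_):
    print(f"{name}: {importance:.3f}")
```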

24
Q

What is pruning in decision trees and why is it used?

A

Pruning involves cutting off parts of the tree that have little predictive power to reduce overfitting and improve the tree’s ability to generalize to new data.
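A minimal sketch using scikit-learn's cost-complexity pruning (the `ccp_alpha` value is illustrative):

```python
# Compare an unpruned tree with a cost-complexity-pruned one.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

unpruned = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
pruned = DecisionTreeClassifier(random_state=0, ccp_alpha=0.01).fit(X_train, y_train)

# The pruned tree is smaller and often generalizes better.
print("unpruned leaves:", unpruned.get_n_leaves(), "test acc:", unpruned.score(X_test, y_test))
print("pruned leaves:  ", pruned.get_n_leaves(), "test acc:", pruned.score(X_test, y_test))
```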

25
Q

What is information gain and how is it used in decision trees?

A

Information gain measures the reduction in entropy or uncertainty from a node split. It is used in decision trees to choose the best feature that splits the data most effectively.
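A minimal sketch of the calculation for a single binary split:

```python
# Information gain: parent entropy minus the weighted entropy of the children.
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(parent, left, right):
    n = len(parent)
    weighted = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(parent) - weighted

# A split that perfectly separates two balanced classes gains 1 bit.
print(information_gain([0, 0, 1, 1], [0, 0], [1, 1]))  # 1.0
```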

26
Q

What are surrogate models in the context of LIME?

A

Surrogate models in LIME are simpler models (like linear models) that approximate the behavior of complex models locally around a specific instance to make the model’s predictions interpretable.

27
Q

How does LIME handle categorical features during perturbation?

A

LIME handles categorical features by sampling values from the observed distribution of the feature in the dataset, ensuring the perturbed data remains realistic and relevant to the specific instance.

28
Q

Why is it important to choose appropriate perturbations in LIME?

A

Appropriate perturbations are crucial in LIME because they ensure that the local surrogate model accurately captures the complex model’s decision boundary around the instance being explained.

29
Q

What are the computational challenges associated with Shapley values?

A

Calculating Shapley values requires evaluating the model’s output across all possible subsets of features, making it computationally expensive, especially for models with many features or complex interactions.
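A minimal from-scratch sketch of why the cost is exponential; `value` here is a hypothetical value function (in practice it would be the model's expected output given only the features in the subset):

```python
# Exact Shapley values by enumerating every feature subset.
from itertools import combinations
from math import factorial

def shapley_values(features, value):
    n = len(features)
    phi = {}
    for i in features:
        others = [f for f in features if f != i]
        total = 0.0
        for size in range(n):                       # all subsets S not containing i
            for S in combinations(others, size):
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += weight * (value(set(S) | {i}) - value(set(S)))
        phi[i] = total
    return phi

# Toy value function: payoff is the sum of the included features' weights.
weights = {"a": 1.0, "b": 2.0, "c": 3.0}
print(shapley_values(list(weights), lambda S: sum(weights[f] for f in S)))
# For an additive game each feature's Shapley value is just its own weight,
# but the loop still visits 2^(n-1) subsets per feature.
```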

30
Q

How does Kernel SHAP differ from Tree SHAP?

A

Kernel SHAP is model-agnostic and uses a weighted linear regression approach to estimate Shapley values, while Tree SHAP is optimized specifically for tree-based models, providing exact Shapley values by leveraging the tree structure.
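A minimal sketch comparing the two on the same model, assuming the `shap` package; Kernel SHAP is given a small background sample because its cost grows quickly with data size and feature count:

```python
# Kernel SHAP (model-agnostic estimate) vs Tree SHAP (exact for trees).
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True)
model = RandomForestRegressor(random_state=0).fit(X, y)

tree_values = shap.TreeExplainer(model).shap_values(X[:1])         # exact, fast for trees

background = shap.sample(X, 50)                                     # summarize the data
kernel_values = shap.KernelExplainer(model.predict, background).shap_values(X[:1])

print(tree_values[0])    # the two attributions should be close for this instance
print(kernel_values[0])
```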

31
Q

How does Deep SHAP extend Shapley values to neural networks?

A

Deep SHAP combines Shapley values with DeepLIFT, a method for feature attribution in neural networks, to provide explanations for deep learning models by backpropagating contributions through the network layers.

32
Q

What does a force plot in SHAP show?

A

A force plot in SHAP shows how each feature pushes the prediction higher or lower compared to the average prediction, visually representing the additive contributions of each feature to the final prediction.

33
Q

How do SHAP dependence plots help in interpreting feature interactions?

A

SHAP dependence plots show the relationship between SHAP values of a feature and the feature’s value, allowing you to see how the impact of a feature changes in relation to other features, revealing interaction effects.

34
Q

What makes SHAP values unbiased and consistent for feature attribution?

A

SHAP values satisfy properties like efficiency, symmetry, dummy, and additivity, which ensure fair and consistent attribution of the prediction to input features across different subsets and combinations of features.

35
Q

How do SHAP summary plots provide insights into model behavior?

A

SHAP summary plots aggregate SHAP values across all instances in the dataset, showing the distribution and magnitude of feature impacts, with colors indicating feature values, helping to identify key features and patterns in the data.