Week 7 Flashcards

1
Q

Interpretability

A

The degree to which a human can understand the cause of a decision and consistently predict the result of a model.

2
Q

Explainable

A

When feature values of instances can be related to model prediction in a humanly understandable way.

3
Q

What levels of interpretability / explainability does Molnar state?

A

Interpretability is at the global level of the model; explainability is concerned with an individual prediction.

4
Q

Intrinsically interpretable models

A

Provide all means necessary for explaining their decisions: the model is interpretable due to its simplicity and carries all the needed information itself.

5
Q

Model agnostic explainable AI method

A

Explains any model, no matter the type

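As an illustration (not part of the card itself): permutation importance is one example of a model-agnostic method, because it only queries the fitted model's predictions and never inspects its internals. The dataset and model below are placeholders.

```python
# Sketch: permutation importance works for any fitted estimator,
# because it only needs predictions, not model internals.
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

X, y = load_diabetes(return_X_y=True)
model = RandomForestRegressor(random_state=0).fit(X, y)  # could be any model

result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
print(result.importances_mean)  # one importance score per feature
```
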
6
Q

Model specific explainable AI method

A

Accesses and uses the model internals.

7
Q

Counterfactual explanation

A

The instance closest to the one whose prediction we are explaining that, with minimal changes to its feature values, receives a different prediction.

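A toy sketch of the idea (the data, model, and search strategy below are invented for illustration; the course may use a different search method): perturb one feature at a time, trying the smallest changes first, until the model's prediction flips.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def find_counterfactual(model, x, step_sizes):
    """Smallest single-feature change (tried smallest first) that flips the prediction."""
    original = model.predict([x])[0]
    for step in step_sizes:
        for j in range(len(x)):
            for sign in (+1, -1):
                candidate = x.copy()
                candidate[j] += sign * step
                if model.predict([candidate])[0] != original:
                    return candidate
    return None

# Toy data: the class depends on the sum of two features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)

x = np.array([-0.2, -0.1])  # predicted as class 0
cf = find_counterfactual(model, x, np.linspace(0.05, 2.0, 40))
print("counterfactual:", cf)  # minimal single-feature change that flips the class
```
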
8
Q

CNN (abbreviation)

A

Convolutional neural network

9
Q

Expressive power

A

What is the structure of the explanations?
E.g. is it ‘if-then’ rules, a tree, natural language…

10
Q

Translucency

A

Describes how much the explanation method relies on looking into the machine learning model

11
Q

Portability (property of explanation method)

A

Measures the range of machine learning models with which the explanation can be used.

12
Q

Algorithmic complexity (property of explanation method)

A

The computational complexity of the explanation method.

13
Q

Accuracy (property of explanation)

A

How well does an explanation predict unseen data?

14
Q

Fidelity (property of explanation)

A

How well does the explanation approximate the prediction of the black box model?

15
Q

Certainty (property of explanation)

A

Does the explanation reflect the certainty of the machine learning model?

16
Q

Comprehensibility (property of an explanation)

A

How well do humans understand the explanations? How convincing are they?

17
Q

How is fidelity measured?

A

Objectively

18
Q

How is plausibility measured?

A

By comprehensibility: it requires a user study.

19
Q

How is simulatability measured?

A

By measuring the degree to which a human can calculate or predict the model’s outcome, given the explanation.

20
Q

What should a good explanation be?

A
  • Contrastive
  • Selective
  • Social
  • Truthful
  • General and probable
  • Consistent with prior beliefs.
21
Q

Black-box models

A

Require post-hoc explanations; such explanations cannot have perfect fidelity.

22
Q

What are the two families of interpretable models that we’re focusing on?

A
  1. Linear models
  2. Decision trees and decision rules
23
Q

Give the hierarchy of linear models from big to small:

A
  1. Generalized additive models
  2. Generalized linear models
  3. Linear models
  4. Scoring systems
24
Q

Scoring system

A

A specialised type of linear model that gives an integer in a range as output.

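A minimal sketch of what such a model can look like (the rules and point values are invented for illustration, not taken from the course):

```python
# Hypothetical scoring system: integer points per condition, summed to a bounded score.
def risk_score(age, smoker, bmi):
    score = 0
    if age >= 60:
        score += 2
    if smoker:
        score += 3
    if bmi >= 30:
        score += 1
    return score  # always an integer in the range 0..6

print(risk_score(age=65, smoker=True, bmi=28))  # -> 5
```
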
25
Q

Generalized linear model

26
Q

Multivariate linear model

27
Q

Multivariate polynomial model

28
Q

Linearity

A

f(x + y) = f(x) + f(y) and f(c·x) = c·f(x)

29
Q

Homoscedasticity

A

Constant variance.

30
Q

Name five assumptions for linear modeling:

A
  1. Normality of the target variable
  2. Homoscedasticity
  3. Independent instance distribution
  4. Absence of multicollinearity
  5. Linearity

31
Q

Multicollinearity

A

When there are pairs of strongly correlated features in the data, i.e. columns of the data matrix are correlated.

32
Q

What does it mean when we have a modular view in the interpretation of linear models?

A

We assume all remaining feature values are fixed, so a change in a particular feature will be reflected in the outcome.

33
Q

Numerical feature weight (in the interpretation of linear models)

A

When all other features are held constant, it is the change in outcome when the feature value is increased by one unit.

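A small numerical check of this interpretation (synthetic data; the coefficients are only illustrative):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic data where the true relationship is y = 3*x1 - 2*x2 + noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.1, size=500)

model = LinearRegression().fit(X, y)
print(model.coef_)  # roughly [3, -2]

# Interpretation: holding x2 fixed, raising x1 by one unit changes the
# prediction by the first weight (about +3).
x = np.array([[1.0, 1.0]])
print(model.predict(x + [1, 0]) - model.predict(x))  # about +3
```
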
34
Q

Binary feature weight (in the interpretation of linear models)

A

The contribution to the model outcome of the feature when it is set to one.

35
Q

Categorical feature with L categories (interpretation of linear models)

36
Q

One-hot-encoding

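The answer is missing on this card. As a reminder of the standard technique (category values invented for the example), one-hot encoding turns a categorical feature with L categories into L, or L-1, binary indicator columns:

```python
import pandas as pd

# One categorical feature with three categories.
df = pd.DataFrame({"color": ["red", "green", "blue", "red"]})

# One column per category: 1 if the instance has that category, 0 otherwise.
print(pd.get_dummies(df["color"]))

# drop_first=True keeps L-1 columns, the usual choice in linear models
# to avoid perfect multicollinearity with the intercept.
print(pd.get_dummies(df["color"], drop_first=True))
```
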
37
Q

Feature effect

A

Multiplication of the estimated weight and the normalized feature value.

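A tiny sketch of the computation (the weights and the normalized instance are hypothetical values):

```python
import numpy as np

# Hypothetical fitted weights and one (normalized) instance.
weights = np.array([0.8, -1.2, 0.3])
x_normalized = np.array([1.5, 0.2, -2.0])

effects = weights * x_normalized
print(effects)        # per-feature contribution: [ 1.2, -0.24, -0.6 ]
print(effects.sum())  # plus the intercept, this gives the prediction
```
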
38
Q

How can we model nonlinear component functions fj(xj) in GAM models?

A

We learn them greedily and use splines.

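A sketch of one way to realize this in code (it uses scikit-learn's SplineTransformer in front of an ordinary linear model as a stand-in; the greedy fitting procedure mentioned in the answer is not reproduced here):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import SplineTransformer

# Nonlinear ground truth in each feature.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(400, 2))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + rng.normal(scale=0.1, size=400)

# Each feature is expanded into a spline basis; the linear model then learns
# one additive, nonlinear component function f_j(x_j) per feature.
gam_like = make_pipeline(
    SplineTransformer(n_knots=8, degree=3),
    LinearRegression(),
).fit(X, y)

print(gam_like.score(X, y))  # should fit the nonlinear pattern well
```
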
39
Q

Splines

40
Q

Indicator functions

A

They are binary: given a condition, one output is returned if the condition is met, and another output otherwise.

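A tiny illustration (the feature name and threshold are invented):

```python
# Indicator function: 1 if the condition holds, 0 otherwise.
def indicator_age_over_40(age):
    return 1 if age > 40 else 0

print(indicator_age_over_40(35))  # -> 0
print(indicator_age_over_40(52))  # -> 1
```
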
41