Explainable AI Flashcards
1
Q
feature importance
A
- Feature importance helps you understand why your models make their predictions.
WHY?
- Explanations of model behavior are key for debugging, auditing, and understanding potential failure areas.
- They help you prioritize work to address data drift, because not all drift is created equal.
- They let you set up real-time alerting (e.g. texts, emails).
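One common way to compute feature importance is permutation importance; a minimal sketch with scikit-learn (the library choice and toy dataset are assumptions for illustration):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Toy dataset: 5 features, only 2 carry real signal.
X, y = make_classification(n_samples=300, n_features=5, n_informative=2,
                           n_redundant=0, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure the drop in score:
# a large drop means the model relied on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: {imp:.3f}")
```

The informative features should show clearly higher importances than the noise features.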
2
Q
causal inference
A
- Most ANNs (and ML models generally) learn correlations, not causation.
METHODS:
- Double/Debiased ML: deconfound each input, then estimate the causal effect of that feature.
- Train a model that adds a noise feature (as a baseline for comparison).
- Only works when you can identify the confounding inputs.
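The Double/Debiased ML idea can be sketched on simulated data: residualize both treatment and outcome on the confounders, then regress the residuals. This is a simplified hedged illustration (the simulation, the choice of random forests, and the use of out-of-fold predictions for cross-fitting are all assumptions):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))                       # observed confounders
T = X[:, 0] + rng.normal(size=n)                  # treatment driven by a confounder
Y = 2.0 * T + 3.0 * X[:, 0] + rng.normal(size=n)  # true effect of T on Y is 2.0

# Stage 1: deconfound. Predict treatment and outcome from the confounders
# and keep the residuals (out-of-fold predictions avoid overfitting bias).
T_res = T - cross_val_predict(RandomForestRegressor(random_state=0), X, T, cv=5)
Y_res = Y - cross_val_predict(RandomForestRegressor(random_state=0), X, Y, cv=5)

# Stage 2: regress outcome residuals on treatment residuals.
effect = LinearRegression().fit(T_res.reshape(-1, 1), Y_res).coef_[0]
print(round(effect, 2))  # roughly 2.0 under this simulation
```

A naive regression of Y on T would be biased upward here because the confounder drives both; the residualized regression removes that bias.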
3
Q
feature redundancy
A
- DEFINITION: Feature redundancy is when one feature can be used to determine another (e.g. they are correlated).
- The con: redundant features make variables dependent on each other; removing them helps address that dependence.
- Methods like SHAP work better on causal questions when features are strongly independent.
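A simple way to flag redundant features is to scan the pairwise correlation matrix; a minimal sketch with pandas (the 0.95 threshold and the toy data are arbitrary assumptions):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
a = rng.normal(size=500)
df = pd.DataFrame({
    "a": a,
    "b": a * 2 + rng.normal(scale=0.01, size=500),  # redundant with "a"
    "c": rng.normal(size=500),                       # independent
})

corr = df.corr().abs()
# Keep only the upper triangle so each pair is checked once.
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
to_drop = [col for col in upper.columns if (upper[col] > 0.95).any()]
print(to_drop)  # ["b"]
```

Correlation only catches linear redundancy; nonlinear dependence needs other tools (e.g. mutual information).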
4
Q
uniform manifold approximation and projection (UMAP)
A
Uniform Manifold Approximation and Projection (UMAP) is a visualization technique for embeddings, used to map high-dimensional data into a low-dimensional space.
- Works for unstructured data if you break it down (e.g. into certain types of patterns, such as hair color)