8.27 Flashcards
What is the goal of Explainable AI (XAI)?
To make machine learning models understandable to humans, improving transparency and trust in AI systems.
What are the two main approaches to achieving model understanding?
Building inherently explainable models, and generating post-hoc explanations for already-trained (black-box) models.
What are some examples of inherently explainable models?
Decision Trees, Linear Regression, and Rule-Based Systems.
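As an illustrative sketch (not from the course materials), a shallow scikit-learn decision tree can print its learned rules directly, which is what makes it inherently explainable; the dataset and tree depth below are assumptions chosen for readability.

```python
# Minimal sketch: an inherently explainable model whose learned rules
# can be read directly. Dataset and hyperparameters are illustrative.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
feature_names = load_iris().feature_names

# A shallow tree keeps the rule set small enough for a human to audit.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# export_text renders the full decision logic as if/else rules,
# so the model is its own explanation.
print(export_text(tree, feature_names=feature_names))
```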
What are LIME and SHAP used for?
They are post-hoc explanation methods for interpreting the predictions of complex (black-box) models: LIME fits a simple local surrogate model around a single prediction, while SHAP attributes a prediction to individual features using Shapley values.
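A minimal sketch of how SHAP might be applied post hoc, assuming the `shap` package is installed; the model and dataset are illustrative and not taken from the cards.

```python
# Minimal sketch of a post-hoc explanation with SHAP (assumes the
# `shap` package is installed; model and data are illustrative).
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes Shapley-value attributions for tree ensembles:
# how much each feature pushed a given prediction up or down.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:10])
print(shap_values)  # per-feature contributions for the first 10 rows
```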
Why is model understanding important?
It helps with debugging, detecting biases, providing recourse, and assessing when to trust model predictions.
What is the main challenge with explainability in AI?
There is little consensus on what constitutes explainability or how to evaluate it effectively.
What are the three types of explainability evaluation?
Application-grounded, human-grounded, and functionally-grounded evaluations.
What does application-grounded evaluation involve?
It uses real humans performing real tasks: domain experts evaluate explanations on the exact application task, or on a simplified version of it.
What does human-grounded evaluation involve?
It involves real humans (often laypeople) completing simplified tasks; it is less expensive than application-grounded evaluation because domain experts are not required.
What does functionally-grounded evaluation involve?
It uses proxy measures of interpretability instead of human studies; it suits model classes that have already been validated in human experiments, or cases where human experiments would be unethical.
What are some key motivations for XAI?
Safety, nondiscrimination, and the right to explanation in high-stakes ML applications.
What is an example of a post-hoc explanation tool?
LIME (Local Interpretable Model-agnostic Explanations) or SHAP (SHapley Additive exPlanations).
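A comparable sketch with LIME, again assuming the `lime` package is installed and using an illustrative model and dataset.

```python
# Minimal sketch of a LIME tabular explanation (assumes the `lime`
# package is installed; model and data are illustrative).
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# LIME perturbs one instance and fits a simple local surrogate model
# to approximate the black-box model's behavior around that point.
exp = explainer.explain_instance(data.data[0], model.predict_proba, num_features=4)
print(exp.as_list())  # (feature condition, local weight) pairs
```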
Why might inherently explainable models be preferred?
They are transparent by design, so their decision process can be inspected directly; this matters most in high-stakes settings, where any trade-off between accuracy and explainability must be weighed carefully.
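For instance, a linear regression model explains itself through its coefficients; the sketch below uses an assumed scikit-learn dataset purely for illustration.

```python
# Minimal sketch: a linear model explains itself through its coefficients
# (data and features are illustrative assumptions, not course material).
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = LinearRegression().fit(X, y)

# Each coefficient is the change in the prediction per unit change in
# that feature, holding the others fixed: a global explanation for free.
for name, coef in zip(X.columns, model.coef_):
    print(f"{name}: {coef:+.2f}")
print("intercept:", round(model.intercept_, 2))
```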
What is the course objective for DSCI 789?
To learn and implement state-of-the-art (SOTA) XAI methods, understand when and why interpretability is needed, and conduct a research project.
What should students expect to do in each lecture?
Attend instructor-led lectures, participate in group discussions, and present their findings on XAI models.