8.27 Flashcards

1
Q: What are the goals of the course?
A: Learn and implement state-of-the-art (SOTA) XAI models, understand interpretability and explainability, read and discuss work on XAI models, implement algorithms, and conduct a research project.

2
Q: Who is the instructor for this course?
A: Nidhi Rastogi, Assistant Professor in the Department of Software Engineering, GCCIS, RIT.

3
Q: What is Explainable AI (XAI)?
A: A subfield of AI that focuses on making the outputs of machine learning models understandable to humans.

4
Q: How do inherently explainable models differ from post-hoc explanations?
A: Inherently explainable models are designed to be transparent from the start, while post-hoc explanations are applied after training to interpret complex, already-built models. (A code sketch follows this card.)

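To make the contrast concrete, here is a minimal sketch, assuming scikit-learn and an illustrative dataset (neither is prescribed by the course): a logistic regression whose coefficients can be read directly (inherently explainable) versus a random forest treated as a black box and explained after training with permutation importance (post-hoc).

```python
# Hedged sketch: inherently explainable vs. post-hoc (illustrative dataset and models).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Inherently explainable: the scaled coefficients of a logistic regression are
# read directly as each feature's contribution to the log-odds.
transparent = make_pipeline(StandardScaler(), LogisticRegression(max_iter=5000))
transparent.fit(X_train, y_train)
coefs = transparent.named_steps["logisticregression"].coef_[0]
print(sorted(zip(X.columns, coefs), key=lambda t: -abs(t[1]))[:3])

# Post-hoc: a random forest is treated as a black box and explained after
# training via permutation importance on held-out data.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
result = permutation_importance(black_box, X_test, y_test, n_repeats=10, random_state=0)
print(sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])[:3])
```

Permutation importance stands in here for any model-agnostic post-hoc method; LIME or SHAP (card 6) would fit in the same slot.
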
5
Q: Why is model understanding important?
A: It helps in debugging, detecting biases, providing recourse, and assessing when to trust model predictions.

6
Q: What are some tools used in XAI?
A: LIME, SHAP, and TensorFlow Explain are commonly used tools for explaining model predictions. (A usage sketch follows this card.)

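A minimal usage sketch for two of the tools named above, assuming the `shap` and `lime` Python packages and an illustrative scikit-learn regressor (none of these specifics come from the course materials):

```python
# Hedged sketch: post-hoc explanations with SHAP and LIME on an illustrative model.
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

data = load_diabetes()
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# SHAP: additive feature attributions; TreeExplainer handles tree ensembles efficiently.
shap_values = shap.TreeExplainer(model).shap_values(X_test)  # shape (n_samples, n_features)
print("SHAP attributions for the first test row:", shap_values[0])

# LIME: fit a simple local surrogate model around a single prediction.
lime_explainer = LimeTabularExplainer(
    X_train, feature_names=data.feature_names, mode="regression")
explanation = lime_explainer.explain_instance(X_test[0], model.predict, num_features=5)
print("LIME local feature weights:", explanation.as_list())
```
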
7
Q: What are some key research interests of the instructor?
A: Cybersecurity, Artificial Intelligence, Explainability, Graph Networks, and Autonomous Systems.

8
Q: What are some examples of when model understanding is needed?
A: Debugging model errors, detecting biases, offering recourse for individuals, and deciding model suitability for deployment.

9
Q: What types of evaluations are used in explainability research?
A: Application-grounded evaluation, human-grounded evaluation, and functionally-grounded evaluation.

10
Q: What are the main auxiliary criteria in XAI?
A: Safety, nondiscrimination, and the right to explanation.

11
Q: What is the difference between incompleteness and uncertainty in XAI?
A: Incompleteness refers to gaps in problem formalization that cannot be quantified, while uncertainty (for example, from learning on a small dataset) can be quantified and modeled.

12
Q: What are the two main approaches to achieving model understanding?
A: Build inherently explainable models or use post-hoc explanations for existing models.

13
Q: What are some conferences relevant to XAI?
A: ICML, NeurIPS, ICLR, UAI, AISTATS, KDD, AAAI, FAccT, AIES, CHI, CSCW, and HCOMP.

14
Q: What is the significance of explainability in high-stakes settings?
A: It ensures that ML systems are not just accurate but also safe and non-discriminatory, and that they provide a right to explanation.

15
Q: What are some examples of inherently explainable models?
A: Decision trees, linear models, and rule-based systems. (A code sketch follows this card.)

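As one concrete illustration of the card above, a shallow decision tree whose learned rules can be printed and read directly; a minimal sketch assuming scikit-learn and the bundled iris dataset (illustrative choices, not from the course):

```python
# Hedged sketch: a shallow decision tree is its own explanation.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(data.data, data.target)

# The fitted model is a handful of if/then rules that can be inspected in full.
print(export_text(tree, feature_names=list(data.feature_names)))
```

A linear model's coefficients or a rule list would serve the same purpose; the defining property is that the model's own structure is the explanation.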