Explainability, Trust, Responsible AI Flashcards

1
Q

What is the main goal of Explainable AI (XAI)?

A

The goal of Explainable AI is to make the decision-making process of AI systems understandable and transparent to human users. It aims to provide clear explanations for why an AI model arrived at a specific outcome, building trust in AI systems.

2
Q

What is the “black box problem” in AI?

A

The “black box problem” refers to the opacity of many AI systems, especially deep learning models, where even the developers or engineers cannot fully explain how the model arrives at its decision. This lack of transparency makes it difficult to trust and audit AI models, particularly in high-stakes situations.

3
Q

What are the three main components of Explainable AI?

A
  1. Prediction Accuracy – Ensures that the AI model’s predictions are reliable and match known outcomes in training data.
  2. Traceability – Allows us to track how individual decisions were made, often by observing the behavior of neural networks (e.g., using DeepLIFT).
  3. Decision Understanding – Provides human users with understandable explanations of AI’s decisions, often in the form of dashboards or visualizations.
4
Q

What is Local Interpretable Model-agnostic Explanations (LIME)?

A

LIME is a technique in Explainable AI that provides local explanations for individual predictions made by an AI model. It approximates the complex model around a specific instance, allowing humans to understand why a particular decision was made. LIME helps in understanding black-box models by simplifying them locally around each prediction.
e.g. select an instance in the data set and then display as explanations:
(1) the model’s prediction for that instance
(2) the contribution of each feature
(3) the actual value for each feature
(A minimal code sketch follows.)
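A minimal sketch of this workflow using the lime package, assuming scikit-learn is installed; the dataset and model here are placeholders chosen only for illustration:

```python
# LIME sketch: explain one prediction of a black-box classifier locally.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

instance = data.data[0]                      # the instance to explain
exp = explainer.explain_instance(instance, model.predict_proba, num_features=5)

print("Predicted class:", data.target_names[model.predict([instance])[0]])
for feature, contribution in exp.as_list():  # local feature contributions
    print(f"{feature}: {contribution:+.3f}")
```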

5
Q

What is DeepLIFT?

A

DeepLIFT (Deep Learning Important FeaTures) is a traceability technique that compares the activation of each neuron in a neural network to a reference activation. This helps to track which features of the input data contributed most to the model’s decision, enhancing traceability and transparency.
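One way to try DeepLIFT in practice is via Captum’s implementation; a minimal sketch, assuming PyTorch and Captum are installed and using a stand-in model and a zero baseline as the reference:

```python
# DeepLIFT attributions with Captum (illustrative model; zero input as reference).
import torch
import torch.nn as nn
from captum.attr import DeepLift

net = nn.Sequential(                        # stand-in for a trained classifier
    nn.Linear(4, 16), nn.ReLU(),
    nn.Linear(16, 3),
)

x = torch.rand(1, 4, requires_grad=True)    # one input instance
baseline = torch.zeros_like(x)              # reference activations

dl = DeepLift(net)
attributions = dl.attribute(x, baselines=baseline, target=0)
print(attributions)                         # per-feature contribution relative to the reference
```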

6
Q

Why is Explainable AI important for trust?

A

Explainable AI builds trust by providing users with clear, understandable reasons for AI decisions. This transparency enables users to rely on AI systems, particularly in critical fields like healthcare, finance, and criminal justice, where decisions can significantly impact lives.

7
Q

What role does Explainable AI play in responsible AI?

A

Explainable AI is a key component of responsible AI, which involves ensuring that AI systems are developed and deployed ethically. XAI helps to identify biases, ensure fairness, and maintain transparency in AI decisions, which are essential for accountability and regulatory compliance.

8
Q

How does Explainable AI help mitigate AI biases?

A

By making AI decisions more transparent, XAI allows stakeholders to identify and correct biases in AI models. If an AI model is found to disproportionately favor certain groups or outcomes, explainability helps to trace the root causes and refine the model to ensure fairness.

9
Q

What is performance drift in AI, and why is it a concern?

A

Performance drift refers to the degradation in the performance of an AI model over time, often caused by differences between the training data and real-world data (i.e., data drift). This can lead to incorrect predictions and decisions. XAI can help monitor performance and trigger alerts when models deviate from expected behavior.
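A minimal sketch of one common drift check, comparing a feature’s training-time distribution with its live distribution; the data and the alert threshold are illustrative assumptions:

```python
# Data-drift check: compare a feature's training vs. production distribution.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_feature = rng.normal(0.0, 1.0, 5_000)   # distribution seen at training time
live_feature = rng.normal(0.4, 1.0, 5_000)    # distribution observed in production

stat, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.01:                            # threshold is a design choice
    print(f"Possible drift (KS={stat:.3f}, p={p_value:.4f}): trigger an alert / retraining review")
```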

10
Q

How does Explainable AI support regulatory compliance?

A

Regulatory frameworks like the GDPR require that automated decisions be explainable to affected individuals. XAI ensures that organizations can provide clear explanations for decisions made by AI systems, helping them comply with legal and ethical standards.

11
Q

Why is transparency in AI important for public trust?

A

Transparency in AI allows the public to understand how AI models operate, what data they use, and how decisions are made. Without transparency, AI systems can be seen as opaque or manipulative, undermining trust. By ensuring transparency, XAI helps increase public confidence in AI technologies.

12
Q

What is prediction accuracy, and how does it contribute to Explainable AI?

A

Prediction accuracy is a measure of how well an AI model’s predictions match the true outcomes from training data. High accuracy means the AI system is making reliable decisions. In XAI, prediction accuracy is assessed to ensure that the AI is functioning as expected, which helps establish trust.
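A small worked example of checking prediction accuracy on held-out data; dataset and model are illustrative placeholders:

```python
# Evaluate prediction accuracy on a held-out test split.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Accuracy:", accuracy_score(y_test, model.predict(X_test)))
```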

13
Q

What is SHAP (Shapley Additive Explanations), and how is it used in Explainable AI?

A

SHAP is a method in XAI that assigns a contribution value to each feature (input variable) based on its impact on the model’s output. SHAP values are derived from cooperative game theory, ensuring fair distribution of contributions among features. This helps users understand the relative importance of each feature in driving a model’s decision.
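A minimal sketch with the shap package, assuming a tree-based scikit-learn classifier; the dataset, model, and plots are illustrative choices, not part of the original card:

```python
# SHAP sketch: global and local feature contributions for a tree model.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

explainer = shap.Explainer(model, X)   # picks a suitable SHAP algorithm for the model
shap_values = explainer(X.iloc[:100])

shap.plots.bar(shap_values)            # global: average feature importance
shap.plots.waterfall(shap_values[0])   # local: contribution of each feature to one prediction
```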

14
Q

How does DeepLIFT improve traceability in AI?

A

DeepLIFT (Deep Learning Important FeaTures) improves traceability by comparing the activation levels of neurons in the network against a reference activation. This helps to understand how the features of the input data influenced the final decision made by the AI model.

15
Q

What is a decision understanding dashboard, and how does it help users?

A

A decision understanding dashboard is a visual tool used in Explainable AI to show how key input features contributed to a model’s decision. For example, it may display the factors that led an AI system to flag a transaction as fraudulent, helping users understand the logic behind the AI’s decision.
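A minimal sketch of such a view: plotting per-feature contributions for one flagged transaction as a bar chart. The feature names and contribution values below are invented for illustration (they could come from SHAP or LIME):

```python
# "Why was this transaction flagged?" - a minimal decision-understanding view.
import matplotlib.pyplot as plt

features = ["amount_vs_history", "foreign_merchant", "night_time", "new_device", "velocity_1h"]
contributions = [0.42, 0.31, 0.12, 0.08, -0.05]   # e.g. SHAP/LIME values for one prediction

plt.barh(features, contributions)
plt.axvline(0, color="black", linewidth=0.8)
plt.xlabel("Contribution to 'fraud' score")
plt.title("Why was this transaction flagged?")
plt.tight_layout()
plt.show()
```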

16
Q

Why is human understanding a critical part of Explainable AI?

A

Human understanding is essential because it ensures that AI systems are not only technically accurate but also interpretable to non-experts. Without decision understanding tools like dashboards or visualizations, users would have no way of knowing why an AI made a certain decision, which undermines trust and accountability.

17
Q

What is automation bias, and why is it a problem in AI systems?

A

Automation bias refers to the tendency of humans to over-rely on automated systems, even when those systems make errors. This can lead to critical mistakes in fields like healthcare or finance, where AI recommendations might be accepted without questioning their accuracy.

18
Q

What is algorithm aversion bias, and how does it affect trust in AI?

A

Algorithm aversion bias is the tendency of people to distrust automated systems, even when those systems are more accurate than humans. This bias can cause people to reject AI systems, leading to missed opportunities or suboptimal decisions, especially in domains like healthcare where AI can enhance diagnostic accuracy.

19
Q

How can we achieve the “Goldilocks” level of trust in AI?

A

The “Goldilocks” level of trust refers to finding the right balance between too much and too little trust in AI systems. This can be achieved by educating users on how AI works, addressing both automation bias (overtrust) and algorithm aversion (distrust), and fostering an environment where AI is seen as a supportive tool rather than an infallible decision-maker.

20
Q

Why is it important to balance trust in AI, particularly in healthcare?

A

In healthcare, balancing trust is crucial because overreliance on AI could lead to errors, such as misdiagnoses, while undertrust could result in AI’s potential being underutilized, such as missing out on valuable insights. Understanding the strengths and limitations of AI is vital to ensuring that it is used effectively.

21
Q

How do biases in data impact AI systems?

A

AI systems are trained on historical data, and if that data contains biases (e.g., racial, gender, or age biases), the AI model will likely learn and reproduce those biases. This can result in discriminatory outcomes, such as biased hiring practices or unfair loan approvals.

22
Q

Why is it essential to have human oversight in AI decision-making?

A

Human oversight ensures that AI decisions are checked and validated, especially in high-stakes domains like criminal justice or healthcare. A “human-in-the-loop” system helps to correct mistakes and ensures that AI recommendations are aligned with human values and ethical standards.

23
Q

What are some potential risks if AI is not transparent?

A

Without transparency, AI systems could make biased or harmful decisions that go unnoticed, leading to legal, ethical, and societal consequences. For example, an opaque AI system used for credit scoring might unintentionally discriminate against certain groups, but without transparency, it would be difficult to identify and address the issue.

24
Q

How does Explainable AI help with regulatory compliance?

A

Explainable AI helps organizations comply with regulations like GDPR and CCPA, which require that AI systems’ decisions be explainable to users. By providing clear explanations of decisions, XAI ensures that organizations meet the legal requirements of transparency and accountability.

25
Q

Why is explainability alone not enough to ensure trust in AI?

A

While explainability helps to understand AI decisions, it does not address underlying issues like bias, fairness, and accountability. Additional mechanisms such as regular audits, bias detection tools, and compliance checks are needed to ensure AI systems are trustworthy and ethical.

26
Q

What additional frameworks are needed alongside Explainable AI?

A

AI systems should be subject to accountability frameworks, including independent audits, ethical guidelines, and fairness evaluations. These frameworks help ensure that AI systems are not only transparent but also ethically sound and legally compliant.

27
Q

What role do independent audits play in AI trust?

A

Independent audits provide an external review of AI systems, ensuring they meet ethical and regulatory standards. Auditors can assess whether AI systems are operating fairly, without bias, and in line with societal values, helping to mitigate risks and improve accountability.

28
Q

What is the IBM AI Explainability 360 Toolkit?

A

The IBM AI Explainability 360 Toolkit is a set of open-source tools designed to help organizations interpret and explain the decisions made by machine learning models. It offers algorithms tailored to different stakeholders, such as data scientists, business users, and consumers.

29
Q

How does the Explainability 360 toolkit help data scientists?

A

For data scientists, the toolkit provides techniques for analyzing and visualizing the behavior of AI models. It helps them identify which features are most influential in the model’s predictions and diagnose any issues with model accuracy or fairness.

30
Q

How does the Explainability 360 toolkit assist loan officers?

A

The toolkit helps loan officers understand the reasoning behind credit approval decisions by comparing applicants to similar individuals. This enables them to make more informed decisions about whether to approve or deny credit applications.

31
Q

What are the benefits of using Explainability 360 for end-users?

A

For end-users, the toolkit provides simple, understandable explanations for complex AI decisions. This transparency helps users trust the outcomes and provides them with a means of challenging or appealing decisions when necessary.

32
Q

What is the basic definition of trust?

A

Trust is a directional transaction between two parties, where one party (the trustor) believes another party (the trustee) will act in their best interest, accepting vulnerability as a result.
(Jacovi et al., 2021)

33
Q

What factors are essential in defining interpersonal trust?

A

Key factors include belief, anticipation, reliance, and the predictability of the other party’s actions, i.e. that they do what they say.
(American Psychological Association APA)

34
Q

How does trust in technology differ from interpersonal trust?

A

Trust in technology focuses on users’ confidence in the reliability and accuracy of technological systems, rather than their intrinsic honesty.

35
Q

Who is the trustor in an AI context?

A

The trustor is the individual or entity placing trust in the AI system, typically the user.

36
Q

What are the components of trustworthiness?

A

Trustworthiness involves:
* perceptions of integrity (moral norms)
* benevolence (goodwill)
* competence (skills and expertise)

37
Q

What is a trust-relevant (trust-diagnostic) situation?

A

Not every situation is trust-relevant; only scenarios in which trust is required because of some kind of risk, vulnerability, and stakes (such as relying on AI for medical diagnoses or credit scoring).

38
Q

What is affect-based trust?

A

A form of trust derived from perceptions of warmth, benevolence, and adherence to social norms.

39
Q

How does cognition-based trust differ from affect-based trust?

A

Cognition-based trust relies on perceptions of competence, reliability, and predictability rather than emotional factors.

40
Q

Why is cognition-based trust more relevant for AI?

A

Because users must understand the AI’s competence and predictability to evaluate its reliability ethically and effectively.

41
Q

What is the Propensity to Trust in Technology Scale?

A

A six-item questionnaire assessing beliefs about the general trustworthiness of technology, such as reliability and problem-solving capability.

42
Q

What does the General Trust Scale measure?

A

It measures general trust in people, focusing on honesty, kindness, and reciprocity.

43
Q

What does the Trust in Automation (TiA) model explain?

A

It explains how trust in automated systems develops, emphasizing the balance of affective (emotional) and cognitive (rational) trust factors.
(Körber, 2018, from Lee & See, 2004)

44
Q

What are factors of perceived trustworthiness in the Model of Trust in Automation (TiA)?

A
  • Competence / Reliability
  • Understandability / Predictability
  • Intention of Developers
  • Familiarity (prior experience)
45
Q

Why should cognitive trust components be prioritized in AI systems?

A

To ensure users can reliably assess the AI’s competence and suitability for the task at hand.

46
Q

What ethical questions arise from designing AI with anthropomorphic features?

A

Whether it is justifiable to elicit affective trust (emotional reliance) through design cues when cognitive trust (rational understanding) should be the focus.

47
Q

Are there situations in which we might not want to trust AI? If so, what are they?

A
  • Over-sharing personal information, placing too much trust in self-driving cars, …
  • Social hacking (e.g. a robot being let into a security area; Wolfert et al., 2020)
  • Over-reliance on a mushroom identification app (Leichtmann et al., 2024)
48
Q

Are there situations in which it is harmful not to trust AI enough? If so, what are they?

A

Yes: for example, when the AI has more information available for the decision (e.g. from infrared sensors) and more knowledge than the human, not trusting it enough means this advantage goes unused.

49
Q

What is “calibrated trust”?

A

Calibrated trust shifts the question from “How can we increase user trust?” to “How can we help people establish an appropriate level of trust?”, i.e. trust calibrated to the actual performance (trustworthiness) of a system.
(De Visser et al., 2019)

50
Q

What is over-trust?

A

The perceived trustworthiness of a system is higher than its objective trustworthiness.
-> leads to misuse of the system
-> has to be calibrated (trust dampening)

51
Q

What is under-trust?

A

The perceived trustworthiness of a system is lower than its objective trustworthiness.
-> leads to disuse of the system, reluctance, fear
-> has to be calibrated (trust repair)

52
Q

How is objective trustworthiness of a system measured?

A

With metrics such as accuracy, precision, data quality, and fairness.
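A small illustrative example of such metrics, using invented labels, predictions, and a protected group attribute; the fairness gap shown here is a simple demographic-parity difference:

```python
# Objective trustworthiness metrics: accuracy, precision, and a simple fairness gap.
import numpy as np
from sklearn.metrics import accuracy_score, precision_score

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])   # protected attribute

print("Accuracy :", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))

# Demographic-parity gap: difference in positive-prediction rates between groups.
rate_a = y_pred[group == "A"].mean()
rate_b = y_pred[group == "B"].mean()
print("Demographic parity gap:", abs(rate_a - rate_b))
```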

53
Q

How is the perceived trustworthiness of a system measured?

A
  • questionnaires (how trustworthy a person judges an AI system to be)
  • behavioural measures, e.g. how often the person follows the AI’s suggestions (see the sketch below)
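A tiny sketch of the behavioural measure, with invented trial data, counting how often a participant follows the AI’s suggestion:

```python
# Behavioural proxy for perceived trustworthiness: AI-advice compliance rate.
ai_suggestions = ["edible", "toxic", "edible", "toxic", "edible"]
user_decisions = ["edible", "toxic", "toxic",  "toxic", "edible"]

followed = sum(a == u for a, u in zip(ai_suggestions, user_decisions))
compliance_rate = followed / len(ai_suggestions)
print(f"Followed AI advice in {compliance_rate:.0%} of trials")
```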
54
Q

What does the European Industrial Policy on Artificial Intelligence and Robotics (2019) highlight about explainability?

A

“the importance of the explainability of AI systems’ outputs, processes and values, making them understandable to non-technical audiences and providing them with meaningful information, which is necessary to evaluate fairness and gain trust”

55
Q

What is stated in the Ethics Guidelines for Trustworthy AI (HLEG AI, 2019) about explainability?

A

“Whenever an AI system has a significant impact on people’s lives, it should be possible to demand a suitable explanation of the AI system’s decision-making process. Such explanation should be timely and adapted to the expertise of the stakeholder concerned (e.g. layperson, regulator or researcher).”

56
Q

What is Explainable AI (XAI)?

A

“Given an audience, an explainable Artificial Intelligence is one that produces details or reasons to make its functioning clear or easy to understand.”
(Arrieta et al., 2020)
“[…] the term ‘Explainable AI’ loosely refers to an explanatory agent revealing underlying causes to its or another agent’s decision making.”
(Miller, 2019)

57
Q

What are major challenges of Explainable AI?

A
  • the “black box” problem
  • different target groups
58
Q

Explain the “black box” problem, which is a challenge of Explainable AI

A

Models often have on the order of 100 million parameters, so it is difficult to determine which factors affect a decision; even fully transparent code does not help to draw conclusions about individual outputs.

59
Q

Explain the problem with different target groups, which is a challenge of Explainable AI

A

Not everyone understands explanations equally well; different target groups (operators, affected people, programmers, …) have different requirements and need different styles of explanation.

60
Q

Give some examples for Explainable AI

A

Provide an interface with explanations, e.g. the attributes by which a cat is classified (fur, whiskers, claws, ear shape, …), the attributes found in an angry face (lowered brow, flared nostrils, raised mouth, …), and which images are similar to the input and which are not.

61
Q

What was researched in the HOXAI project?

A

Hands-On-Explainable-AI project, 2021-2023:
* Interdisciplinary team: LIT Robopsychology Lab & Visual Data Science Lab
* Research question: effects of visual XAI methods on decision-making behavior and trust in AI
* Use case: AI-supported mushroom identification
* Implementation: interactive mushroom picking game
* Surveys: three consecutive experimental studies (N total = 1,239)

62
Q

Explain the HOXAI project

A
  • AI model for classification of 18 mushroom species based on photos
  • Training dataset: 3,480 mushroom images
  • Accuracy of the classifier: 71%
  • Difference in outcomes (correctly identified mushrooms) between AI and XAI conditions: participants who received explanations outperformed those without and classified more mushrooms correctly; errors of the AI were easier to notice (although participants with high trust in AI were also less likely to detect when the AI made an error)
63
Q

What is a high-stakes decision-making task in the context of XAI?

A

Tasks where incorrect decisions can have severe consequences, such as identifying toxic mushrooms with AI assistance.

64
Q

What are visual explanations in AI?

A

Graphical outputs that clarify an AI’s decision-making process, making it easier for users to understand and trust the system.

65
Q

How do visual explanations affect user trust?

A

They help calibrate trust to align with the AI’s actual capabilities, leading to better decision accuracy and more appropriate reliance on the system.

66
Q

What is the Forestly AI system?

A

A machine learning model trained on mushroom images to classify species, using layered architecture to identify patterns and output predictions with confidence scores.

67
Q

What does the layered model architecture of Forestly do?

A

It processes data in stages, recognizing simple features like lines first, and then more complex patterns like mushroom caps or lamellae.
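A minimal sketch of such a layered classifier in PyTorch; Forestly itself is not public, so the layer sizes, image size, and the 18-class output below are purely illustrative assumptions:

```python
# Layered image classifier sketch: early layers pick up simple features (lines, edges),
# later layers more complex patterns (caps, lamellae); softmax gives confidence scores.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),   # simple features
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),  # more complex patterns
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 56 * 56, 18),                              # 18 mushroom species (illustrative)
)

image = torch.rand(1, 3, 224, 224)              # one input photo
logits = model(image)
confidences = torch.softmax(logits, dim=1)      # confidence score per species
top_prob, top_class = confidences.max(dim=1)
print(f"Predicted species index {top_class.item()} with confidence {top_prob.item():.2f}")
```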

68
Q

Why are confidence scores important in AI predictions?

A

They indicate the level of certainty in the AI’s predictions, helping users assess reliability.

69
Q

What is adequate trust in the context of AI systems?

A

Trust that is appropriately matched to the AI’s actual capabilities, avoiding over-reliance or underestimation.