Explainability & Human Trust in AI Flashcards
What is a common basic definition of trust?
Trust is a directional transaction between two parties where A believes that B will act in A’s best interest, accepting vulnerability to B’s actions.
(Jacovi et al., 2021)
According to the APA, how is interpersonal trust defined?
Interpersonal trust is the reliance on or confidence in the dependability of someone or something, specifically the degree to which each party can depend on the other to do what they say they will do.
What are trust-relevant situations characterized by?
Trust-relevant situations involve vulnerability and require some stake, such as social, financial, personal, or organizational risk.
(Parkhe & Miller, 2000)
What is affect-based (or social) trust?
Affect-based trust is formed when a trustee is perceived as well-minded, warm-hearted, and adhering to social norms.
What is cognition-based (or performance-based) trust?
Cognition-based trust is formed when a trustee is perceived as competent, understandable, and predictable in terms of the performance required.
What is the potential issue with inappropriate over-trust in AI systems?
Inappropriate over-trust may lead a human operator to overestimate a system’s abilities and rely on it too heavily.
What is the significance of cognitive components of trust in AI?
Cognitive components of trust are arguably more relevant for robotics/AI than affective components, since users must be able to assess the system’s competence to perform the required tasks.
What does Explainable AI (XAI) aim to achieve?
XAI aims to make AI decision-making processes understandable to humans, moving away from the ‘black box’ nature of traditional AI models.
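As a minimal illustration of this idea (a sketch with hypothetical data, not a method from the flashcards): for a linear model, a prediction can be decomposed exactly into per-feature contributions, one of the simplest explanation forms that XAI builds on.

```python
# Sketch: explaining a linear model's prediction by decomposing it
# into per-feature contributions. Weights and input are hypothetical.
weights = {"age": 0.8, "income": 1.5, "debt": -2.0}  # learned coefficients
bias = 0.5
sample = {"age": 0.2, "income": 0.6, "debt": 0.9}    # one scaled input

# Each feature's contribution is weight * value; the contributions
# plus the bias sum back to the model's raw score exactly.
contributions = {f: weights[f] * sample[f] for f in weights}
score = bias + sum(contributions.values())

# Present features in order of influence, as a simple explanation.
for feature, c in sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True):
    print(f"{feature}: {c:+.2f}")
print(f"score: {score:.2f}")
```

Because the decomposition sums back to the score, the explanation is faithful by construction; for non-linear models, XAI methods approximate this kind of attribution instead.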
List the key themes associated with Explainable AI.
- Trust
- Accountability
- Regulatory Compliance
- Challenges in Achieving Explainability
- Approaches to XAI
What are the two mnemonics mentioned for remembering key concepts in AI trust and explainability?
- TRUST: Transparency, Reasons, Understanding, Simplified models, Tools
- ABCD: Avoid over-reliance, Beware of under-trust, Cultivate balanced trust, Develop transparency and education
What does Automation Bias refer to?
Automation Bias refers to the tendency of individuals to over-rely on automated systems, ignoring contradictory information.
What is Algorithm Aversion Bias?
Algorithm Aversion Bias involves a preference for human judgment over algorithmic recommendations, even when algorithms are more accurate.
What is the ‘Goldilocks Zone’ of Trust in AI?
The ‘Goldilocks Zone’ of Trust refers to a balanced level of trust in AI that is neither excessive nor insufficient.
What are barriers to trustworthiness in AI?
- Lack of Transparency
- Bias in Data
- Overconfidence
What components are essential for developing trustworthy AI systems?
- Transparency
- Robustness
- Uncertainty Quantification
- Safeguarding against biases
- Thorough testing and validation
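Of the components above, uncertainty quantification is the most directly computable. One common minimal measure (an assumption here, not prescribed by the flashcards) is the Shannon entropy of a model’s predicted class probabilities: low entropy signals a confident prediction, high entropy a case where the system might defer to a human.

```python
import math

def predictive_entropy(probs):
    """Shannon entropy (in bits) of a predicted class distribution.
    Higher entropy means a more uncertain prediction."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

confident = [0.97, 0.02, 0.01]   # model strongly prefers one class
uncertain = [0.34, 0.33, 0.33]   # nearly uniform: model is unsure

print(f"confident: {predictive_entropy(confident):.3f} bits")
print(f"uncertain: {predictive_entropy(uncertain):.3f} bits")
```

A deployed system could compare this value against a threshold and route high-entropy cases to human review, supporting the balanced (“Goldilocks”) trust discussed above.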