lesson_10_flashcards
What is Responsible AI?
The practice of building AI systems that are ethical, safe, transparent, and fair, while ensuring compliance with societal and legal standards.
Why is Responsible AI important?
It mitigates risks, ensures public trust, complies with regulations, and aligns with moral imperatives.
What are the three pillars of Responsible AI?
Safety, fairness, and accountability.
What is bias in AI systems?
Unwanted or unfair preferences learned by models due to skewed training data or systemic issues in data collection.
What is fairness in AI?
Ensuring equitable treatment and outcomes for all individuals or groups, often evaluated on dimensions like gender, race, or age.
What is equity in AI?
Going beyond fairness to address historical and societal inequalities to create just outcomes.
What is calibration in fairness?
A measure of how well a classifier’s predicted probabilities align with actual outcomes across different groups.
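The per-group calibration check above can be sketched in a few lines. All data and names here are hypothetical, purely for illustration: for each group, compare the mean predicted probability against the observed positive rate; a gap near zero means the scores are well calibrated for that group.

```python
import numpy as np

# Hypothetical predicted probabilities and actual outcomes for two groups.
preds = {
    "group_a": (np.array([0.2, 0.8, 0.7, 0.3]), np.array([0, 1, 1, 0])),
    "group_b": (np.array([0.6, 0.9, 0.4, 0.8]), np.array([1, 1, 0, 1])),
}

def calibration_gap(probs, outcomes):
    """Mean predicted probability minus observed positive rate.
    Values near 0 indicate good calibration for that group."""
    return float(probs.mean() - outcomes.mean())

for group, (probs, outcomes) in preds.items():
    print(group, round(calibration_gap(probs, outcomes), 3))
```

Comparing these gaps across groups (rather than just overall) is what makes this a fairness check and not only an accuracy check.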
What is the Fairness Impossibility Theorem?
A result showing that, when groups differ in base rates, no non-trivial classifier can simultaneously satisfy multiple common fairness criteria (e.g., calibration within groups and equal false positive/negative rates across groups).
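One standard formalization (following Kleinberg, Mullainathan & Raghavan, 2016, and Chouldechova, 2017) states the three conditions that cannot all hold at once when base rates differ; this is a sketch of the criteria, with $S$ the score, $Y$ the outcome, and $G$ the group:

```latex
\begin{align}
& \text{Calibration within groups:} &
  \Pr(Y = 1 \mid S = s,\, G = g) &= s \quad \forall g \\
& \text{Balance for the negative class:} &
  \mathbb{E}[S \mid Y = 0,\, G = g_1] &= \mathbb{E}[S \mid Y = 0,\, G = g_2] \\
& \text{Balance for the positive class:} &
  \mathbb{E}[S \mid Y = 1,\, G = g_1] &= \mathbb{E}[S \mid Y = 1,\, G = g_2]
\end{align}
```

When $\Pr(Y = 1 \mid G = g_1) \neq \Pr(Y = 1 \mid G = g_2)$, only a perfect predictor can satisfy all three, so practitioners must choose which criterion to prioritize.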
What is interpretability in AI?
Understanding which features a model uses and how they influence predictions.
What is explainability in AI?
The ability to provide human-understandable explanations for specific model predictions.
What is transparency in AI?
Providing insight into a model’s data, design, training process, and decision-making pipeline.
What is governance in Responsible AI?
A structured process to ensure accountability, compliance, and ethical decision-making in AI system development and deployment.
What is the role of provenance in AI systems?
Tracking the origin and usage of data to ensure auditability and accountability.
What is differential privacy?
A privacy-preserving framework that adds calibrated noise to query results so that the presence or absence of any single individual's record has a provably bounded effect on the output, preventing re-identification of individuals in datasets.
What is redress in AI systems?
Mechanisms that allow users to seek remedies for errors or harms caused by AI predictions.