lesson_10_flashcards
What is Responsible AI?
The practice of building AI systems that are ethical, safe, transparent, and fair, while ensuring compliance with societal and legal standards.
Why is Responsible AI important?
It mitigates risks, ensures public trust, complies with regulations, and aligns with moral imperatives.
What are the three pillars of Responsible AI?
Safety, fairness, and accountability.
What is bias in AI systems?
Unwanted or unfair preferences learned by models due to skewed training data or systemic issues in data collection.
What is fairness in AI?
Ensuring equitable treatment and outcomes for all individuals or groups, often evaluated on dimensions like gender, race, or age.
What is equity in AI?
Going beyond fairness to address historical and societal inequalities to create just outcomes.
What is calibration in fairness?
A measure of how well a classifier’s predicted probabilities align with actual outcomes across different groups.
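Per-group calibration can be checked by comparing the mean predicted probability to the observed positive rate within each group. The sketch below uses invented scores and labels for two hypothetical groups, purely to show the arithmetic:

```python
# Sketch: per-group calibration check on hypothetical binary predictions.
# A model is calibrated for a group if, among examples it scores near p,
# roughly a fraction p are actually positive.

def calibration_gap(scores, labels):
    """Mean predicted probability minus observed positive rate."""
    n = len(scores)
    return sum(scores) / n - sum(labels) / n

# Hypothetical (scores, true labels) for two demographic groups.
group_a = ([0.9, 0.8, 0.7, 0.2], [1, 1, 0, 0])
group_b = ([0.9, 0.6, 0.3, 0.2], [1, 0, 1, 0])

print(calibration_gap(*group_a))  # positive gap: overconfident on group A
print(calibration_gap(*group_b))  # near zero: well calibrated on group B
```

A gap near zero for every group is the (coarsest) calibration-fairness criterion; practical audits bin scores and compare within bins.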
What is the Fairness Impossibility Theorem?
A result stating that when groups have different base rates, no classifier can simultaneously satisfy certain combinations of fairness metrics (e.g., calibration within groups and equal false positive/false negative rates).
What is interpretability in AI?
Understanding which features a model uses and how they influence predictions.
What is explainability in AI?
The ability to provide human-understandable explanations for specific model predictions.
What is transparency in AI?
Providing insight into a model’s data, design, training process, and decision-making pipeline.
What is governance in Responsible AI?
A structured process to ensure accountability, compliance, and ethical decision-making in AI system development and deployment.
What is the role of provenance in AI systems?
Tracking the origin and usage of data to ensure auditability and accountability.
What is differential privacy?
A privacy-preserving technique that adds noise to query results to prevent re-identification of individuals in datasets.
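A standard instance is the Laplace mechanism: for a counting query (sensitivity 1), adding Laplace(sensitivity / ε) noise to the true count yields ε-differential privacy. The dataset and ε below are hypothetical:

```python
# Sketch of the Laplace mechanism for a counting query (sensitivity 1).
# The difference of two rate-1 exponential samples is a Laplace(0, 1) draw.
import random

def private_count(records, predicate, epsilon, sensitivity=1.0):
    true_count = sum(1 for r in records if predicate(r))
    noise = random.expovariate(1) - random.expovariate(1)  # Laplace(0, 1)
    return true_count + noise * (sensitivity / epsilon)

# Hypothetical ages; query: how many people are over 40?
ages = [34, 29, 41, 52, 38, 27, 45]
print(private_count(ages, lambda a: a > 40, epsilon=0.5))
# true count is 3; the released value is randomized around it
```

Smaller ε means more noise and stronger privacy; the analyst trades accuracy for a formal guarantee that any single individual's presence barely changes the output distribution.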
What is redress in AI systems?
Mechanisms that allow users to seek remedies for errors or harms caused by AI predictions.
What is GDPR and how does it impact AI?
The General Data Protection Regulation, an EU law governing personal data; it applies to any organization, anywhere in the world, that processes the personal data of individuals in the EU.
What is CCPA?
The California Consumer Privacy Act, a California law giving residents rights over their personal data (e.g., to know, delete, and opt out of its sale); because many companies do business in California, its effects reach nationwide.
What is the role of trust in AI privacy policies?
Trust ensures users understand and feel confident in how their data is collected, processed, and used by AI systems.
What is the importance of transparency in privacy compliance?
Transparency helps users understand AI’s decision-making and data handling, ensuring trust and adherence to privacy laws.
What is an example of bias in word embeddings?
Gender bias, where analogies like ‘man is to surgeon as woman is to nurse’ reflect societal stereotypes present in training data.
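The analogy test is just vector arithmetic: compute surgeon − man + woman and find the nearest word. The tiny 2-D "embeddings" below are invented to illustrate the mechanics (dimension 0 plays a gender role, dimension 1 a profession role); real embeddings have hundreds of dimensions learned from text:

```python
# Toy illustration of embedding-analogy bias with hand-made 2-D vectors.
# These vectors are hypothetical; the point is only the arithmetic.
vecs = {
    "man":     (1.0, 0.0),
    "woman":   (-1.0, 0.0),
    "surgeon": (1.0, 1.0),   # biased: carries a "male" component
    "nurse":   (-1.0, 1.0),  # biased: carries a "female" component
    "teacher": (0.0, 1.0),
}

def closest(target, exclude):
    def sq_dist(word):
        return sum((a - b) ** 2 for a, b in zip(vecs[word], target))
    return min((w for w in vecs if w not in exclude), key=sq_dist)

# surgeon - man + woman -> which remaining word is nearest?
query = tuple(s - m + w for s, m, w in
              zip(vecs["surgeon"], vecs["man"], vecs["woman"]))
print(closest(query, exclude={"surgeon", "man", "woman"}))  # -> nurse
```

Because the profession vectors inherit a gender component from co-occurrence statistics, the analogy lands on "nurse" rather than a gender-neutral answer.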
What is federated learning?
A machine learning approach that trains models across decentralized devices, preserving user privacy by keeping data local.
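The averaging step can be sketched in miniature: each client fits a one-parameter linear model (y = w·x) on its own data, and the server averages the client weights, weighted by local dataset size (the FedAvg-style aggregation). All client data here is hypothetical and, in this scheme, never leaves the client:

```python
# Minimal federated-averaging sketch for a 1-D linear model y = w * x.

def local_fit(xs, ys):
    """Closed-form least squares for y = w * x on one client's data."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def fed_avg(clients):
    """Average client weights, weighted by number of local examples."""
    total = sum(len(xs) for xs, _ in clients)
    return sum(local_fit(xs, ys) * len(xs) for xs, ys in clients) / total

# Two clients whose private data follows roughly y = 2x.
clients = [
    ([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]),
    ([1.0, 2.0], [2.1, 3.9]),
]
print(fed_avg(clients))  # close to 2.0
```

Only the scalar weights travel to the server; real deployments iterate this round many times and often add secure aggregation or differential privacy on top, since model updates themselves can leak information.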
How do academic incentives affect the study of bias in AI?
Publishing pressures may lead to findings that don’t reproduce, highlighting the need for rigorous evaluation of biases in AI models.