Module 11: AI Ethics and A Brief Introduction to Morality Flashcards
Identify and explain the three core components of AI Trust
- Understanding – Define hazards and thresholds
- Action – Guardrails to mitigate hazard likelihood & severity
- Explanations – Communicate risk behaviors & events, e.g., compliance documentation & visualizations
List the five defined measurements of performance
- Data Quality
- Accuracy
- Robustness
- Stability
- Speed
List the five defined measurements of operations
- Monitoring
- Compliance
- Security
- Humility
- Business Rules
List the four defined measurements of ethics
- Interpretability
- Bias/Fairness
- Governance
- Social Impact Assessment
The dataset is skewed towards a certain group or may not reflect the real world
Skewed Sample
Feature collection for certain groups may be less informative or less reliable
Limited Features
Labels are unreliable or reflect historical bias
Tainted Examples
Do we have enough data?
Sample Size Disparity
Zip code or school can be a proxy for race; school or sports activity can be a proxy for gender.
Proxies
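One way to screen for proxies is to check how strongly each feature tracks the protected attribute. Below is a minimal sketch under that assumption; the column names and data are hypothetical, and correlation is only a first-pass signal (a proxy can also be non-linear or arise from feature combinations).

```python
# Minimal sketch for spotting potential proxy features, assuming numeric-encoded
# columns: rank features by absolute correlation with the protected attribute.
import pandas as pd

df = pd.DataFrame({                      # hypothetical dataset
    "zip_code_income_rank": [1, 2, 9, 8, 2, 1, 9, 9],
    "years_experience":     [3, 5, 4, 6, 2, 7, 5, 4],
    "race_encoded":         [0, 0, 1, 1, 0, 0, 1, 1],   # protected attribute
})

proxy_strength = (
    df.drop(columns="race_encoded")
      .corrwith(df["race_encoded"])     # correlation of each feature with the attribute
      .abs()
      .sort_values(ascending=False)
)
print(proxy_strength)   # high values flag candidate proxies for manual review
```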
What four suggestions are made for tackling AI bias?
- Identify protected features in your dataset
- Select an appropriate fairness metric for your use case and value system (a sketch of one such metric follows after this list)
- Build insights to identify and understand your model's potential bias
- Mitigate bias uncovered in your data or model
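As an illustration of the "select a fairness metric" step, here is a minimal sketch assuming binary predictions and a single binary protected attribute. Demographic parity difference is used only as one example metric among many; the data below is hypothetical.

```python
# Demographic parity difference: gap in positive-prediction rates between groups.
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-prediction rates between the two groups."""
    rate_a = y_pred[group == 0].mean()   # positive rate for group 0
    rate_b = y_pred[group == 1].mean()   # positive rate for group 1
    return abs(rate_a - rate_b)

# Hypothetical batch: 1 = favorable prediction; `group` encodes the protected attribute.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_difference(y_pred, group))   # 0.0 would mean equal rates
```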
Transform your data so that the target is not correlated with protected attributes
Pre-processing
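A minimal pre-processing sketch, under an assumed approach (not necessarily the course's exact method): remove the component of each feature that is linearly predictable from the protected attribute, so the transformed features are uncorrelated with it.

```python
import numpy as np

def decorrelate(X: np.ndarray, protected: np.ndarray) -> np.ndarray:
    """Residualize every column of X on the (centered) protected attribute."""
    p = protected - protected.mean()
    X_out = X.astype(float).copy()
    for j in range(X.shape[1]):
        beta = (p @ X[:, j]) / (p @ p)      # least-squares slope on the attribute
        X_out[:, j] = X[:, j] - beta * p    # subtract the explained component
    return X_out

rng = np.random.default_rng(0)
protected = rng.integers(0, 2, size=200)
X = rng.normal(size=(200, 3)) + protected[:, None]    # features leak the attribute
X_fair = decorrelate(X, protected)
print(np.corrcoef(X_fair[:, 0], protected)[0, 1])     # ~0 after transformation
```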
Modifying the loss function in your algorithm to include fairness constraints
In-processing
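A minimal in-processing sketch: the usual cross-entropy loss plus a fairness penalty, here the squared gap in mean predicted scores between groups. The penalty weight `lam` and the batch data are hypothetical; this is one illustrative form of a fairness constraint, not the only one.

```python
import numpy as np

def fairness_regularized_loss(scores, y, group, lam=1.0):
    """Binary cross-entropy plus a demographic-parity-style penalty."""
    p = 1.0 / (1.0 + np.exp(-scores))                      # sigmoid probabilities
    bce = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    gap = p[group == 0].mean() - p[group == 1].mean()      # group score gap
    return bce + lam * gap ** 2                            # penalized objective

# Hypothetical batch: raw model scores, labels, and protected-group indicator.
scores = np.array([2.0, -1.0, 0.5, -0.2, 1.5, -2.0])
y      = np.array([1, 0, 1, 0, 1, 0])
group  = np.array([0, 0, 0, 1, 1, 1])
print(fairness_regularized_loss(scores, y, group, lam=0.5))
```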
Changing the model predictions to avoid discrimination.
Post-processing
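A minimal post-processing sketch, assuming one common approach: pick a separate decision threshold per group so that each group's positive-prediction rate matches a target. The scores, groups, and `target_rate` below are hypothetical.

```python
import numpy as np

def group_thresholds(scores, group, target_rate=0.5):
    """Per-group threshold so each group's positive rate is roughly target_rate."""
    thresholds = {}
    for g in np.unique(group):
        s = scores[group == g]
        thresholds[g] = np.quantile(s, 1.0 - target_rate)   # top target_rate share
    return thresholds

scores = np.array([0.9, 0.8, 0.4, 0.3, 0.7, 0.6, 0.2, 0.1])
group  = np.array([0,   0,   0,   0,   1,   1,   1,   1])
thr = group_thresholds(scores, group, target_rate=0.5)
y_hat = np.array([scores[i] >= thr[group[i]] for i in range(len(scores))]).astype(int)
print(thr, y_hat)   # each group gets ~50% positive predictions under its own threshold
```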