Mitigating Algorithmic Discrimination Flashcards
What is the first key consideration when trying to achieve algorithmic fairness?
First, consider the ethical context in which the system will be used.
Why should we not focus only on the algorithm/model when trying to achieve algorithmic fairness?
Mitigating algorithmic discrimination requires attention to the broader context and application of the model.
Why is achieving algorithmic fairness difficult?
Achieving algorithmic fairness is difficult because it requires
- navigating trade-offs between fairness and utility,
- considering the context and application, and
- operationalizing law and policy.
Is there a single measure of fairness that can be applied to all AI/ML-based systems?
No, there is no single measure of fairness, but fairness should be considered as one criterion of quality among others.
Is it possible to prove that a system is fair?
No, fairness cannot be proven, but we can test for discrimination and address issues as they arise.
Why is building diverse teams important for achieving algorithmic fairness?
Diverse teams can help identify blind spots and bring different perspectives to the table.
What are some ways to proactively document fairness considerations in the development of AI/ML-based systems?
- Establish clear goals for the AI/ML system and consider the potential impacts on different groups of people.
- Define fairness metrics and evaluation criteria to measure the performance of the system (see the sketch after this list).
- Ensure that the training data is diverse, representative, and free of biases.
- Document and disclose the technical details of the system, including its design, data sources, and algorithms.
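As a hedged illustration of the "define fairness metrics" point above, the sketch below computes one common metric, the demographic parity difference, i.e. the gap in positive-prediction rates between two groups. The function name and inputs are illustrative assumptions, not part of any particular library.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups.

    y_pred : array of 0/1 model predictions
    group  : array of 0/1 group-membership indicators
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_0 = y_pred[group == 0].mean()  # positive-prediction rate, group 0
    rate_1 = y_pred[group == 1].mean()  # positive-prediction rate, group 1
    return abs(rate_0 - rate_1)

# Toy usage: a value of 0 would indicate demographic parity on this data.
y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(y_pred, group))  # 0.5
```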
When looking critically at data, what are some things to consider with respect to the target variable?
Consider whether a positive prediction will be good or bad for individuals, and whether the target variable is an inherently subjective human decision, an intentional proxy, or an apparently objective, measurable property.
When looking critically at data, what are some things to consider with respect to group representation?
Consider how each group is represented in the data and in the set of positive examples, and be explicit about the worldview used to explain any large observed discrepancies.
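A minimal sketch of this check, assuming a pandas DataFrame with hypothetical `group` and `label` columns:

```python
import pandas as pd

# Hypothetical dataset: `group` is a demographic attribute, `label` the target.
df = pd.DataFrame({
    "group": ["a", "a", "a", "b", "b", "a", "b", "a"],
    "label": [1, 0, 1, 0, 0, 1, 1, 0],
})

# Share of each group in the data overall...
print(df["group"].value_counts(normalize=True))

# ...and in the set of positive examples. A large discrepancy between the
# two tables is exactly the kind of gap the flashcard says to explain.
print(df.loc[df["label"] == 1, "group"].value_counts(normalize=True))
```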
What is one key aspect to consider with respect to data when trying to achieve algorithmic fairness?
High-quality datasets are important, yet the fine-grained demographic data needed to assess fairness may not always be available.
What are pre-processing techniques for mitigating algorithmic discrimination?
Pre-processing techniques involve modifying the input data to mitigate discrimination. These techniques include
- data minimization,
- feature selection, and
- feature engineering.
What is data minimization?
Data minimization is the process of collecting and retaining only the minimum amount of data necessary for a specific purpose.
It can help reduce the risk of discrimination by limiting the potential for sensitive information to be used to make decisions.
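A minimal sketch of the idea, assuming hypothetical applicant records where only two columns are needed for the stated purpose:

```python
import pandas as pd

# Hypothetical applicant records; only some fields are needed for scoring.
raw = pd.DataFrame({
    "income":         [52_000, 31_000, 78_000],
    "debt":           [12_000, 9_000, 4_000],
    "zip_code":       ["60601", "60629", "60611"],  # potential proxy attribute
    "marital_status": ["single", "married", "single"],
})

# Data minimization: retain only the columns required for the specific
# purpose, so sensitive or proxy information never reaches the model.
REQUIRED_FOR_SCORING = ["income", "debt"]
minimized = raw[REQUIRED_FOR_SCORING]
print(minimized)
```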
What is feature selection?
Feature selection is the process of selecting a subset of relevant features from the input data.
Excluding irrelevant or discriminatory features can reduce the risk of bias in the model.
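One simple selection rule, sketched below under the assumption that features highly correlated with a protected attribute may act as proxies for it; the data, column names, and threshold are all illustrative:

```python
import pandas as pd

# Hypothetical feature table plus a protected attribute used only for auditing.
X = pd.DataFrame({
    "income":   [52, 61, 78, 44, 39, 36],
    "debt":     [12, 9, 4, 15, 11, 6],
    "zip_risk": [0.9, 0.2, 0.8, 0.1, 0.3, 0.7],
})
protected = pd.Series([1, 0, 1, 0, 0, 1])

# Drop any feature whose correlation with the protected attribute exceeds
# a threshold, since such features can act as proxies for group membership.
THRESHOLD = 0.8
selected = [c for c in X.columns if abs(X[c].corr(protected)) < THRESHOLD]
print(selected)  # zip_risk is dropped on this toy data
```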
What is feature engineering?
Feature engineering involves creating new features from the input data that may be more informative or less discriminatory than the original features. This can help improve the fairness of the model.
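A minimal sketch: deriving a ratio feature that, in this assumed scenario, captures the quantity of interest more directly than either raw column:

```python
import pandas as pd

# Hypothetical raw features.
df = pd.DataFrame({
    "income": [52_000, 31_000, 78_000],
    "debt":   [12_000, 9_000, 4_000],
})

# Feature engineering: a debt-to-income ratio may reflect repayment capacity
# more directly than either raw column and, in this assumed scenario, be
# less entangled with group membership than raw income.
df["debt_to_income"] = df["debt"] / df["income"]
print(df)
```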
What are in-processing techniques for mitigating algorithmic discrimination?
In-processing techniques involve modifying the learning algorithm itself to mitigate discrimination. These techniques include
- constraint-based methods, which impose fairness constraints during optimization, and
- penalization-based methods, which add a fairness penalty to the training objective (see the sketch below).
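A minimal sketch of a penalization-based method, assuming a logistic-regression model trained by gradient descent with a demographic-parity-style penalty on the squared gap between the groups' mean predicted scores. The synthetic data, penalty form, and hyperparameters are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: features X, labels y, binary group indicator g.
n = 400
X = rng.normal(size=(n, 2))
g = (rng.random(n) < 0.5).astype(float)
y = ((X[:, 0] + 0.5 * g + 0.1 * rng.normal(size=n)) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Penalization-based in-processing: minimize the logistic loss plus
# LAM * (mean score of group 1 - mean score of group 0)^2.
LAM = 2.0   # penalty strength (illustrative)
lr = 0.1    # learning rate
w = np.zeros(X.shape[1])
b = 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)
    # Gradient of the average logistic loss.
    grad_w = X.T @ (p - y) / n
    grad_b = np.mean(p - y)
    # Gradient of the fairness penalty (chain rule through the sigmoid).
    gap = p[g == 1].mean() - p[g == 0].mean()
    s = p * (1 - p)
    dgap_w = ((X[g == 1] * s[g == 1, None]).mean(axis=0)
              - (X[g == 0] * s[g == 0, None]).mean(axis=0))
    dgap_b = s[g == 1].mean() - s[g == 0].mean()
    w -= lr * (grad_w + LAM * 2 * gap * dgap_w)
    b -= lr * (grad_b + LAM * 2 * gap * dgap_b)

p = sigmoid(X @ w + b)
print("score gap after training:", p[g == 1].mean() - p[g == 0].mean())
```

Raising LAM trades predictive fit for a smaller between-group score gap, which is exactly the fairness-utility trade-off noted earlier in these cards.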