Mitigating Algorithmic Discrimination Flashcards

1
Q

What is the first key consideration when trying to achieve algorithmic fairness?

A

Consider the ethical context in which the system will be used.

2
Q

Why should we not focus only on the algorithm/model when trying to achieve algorithmic fairness?

A

Mitigating algorithmic discrimination requires attention to the broader context and application of the model.

3
Q

Why is achieving algorithmic fairness difficult?

A

Achieving algorithmic fairness is difficult because it requires

  • navigating trade-offs between fairness and utility,
  • considering the context and application, and
  • operationalizing law and policy.
4
Q

Is there a single measure of fairness that can be applied to all AI/ML-based systems?

A

No, there is no single measure of fairness, but fairness should be considered as one criterion of quality among others.

5
Q

Is it possible to prove that a system is fair?

A

No, fairness cannot be proven, but we can test for discrimination and address issues as they arise.

6
Q

Why is building diverse teams important for achieving algorithmic fairness?

A

Diverse teams can help identify blind spots and bring different perspectives to the table.

7
Q

What are some ways to proactively document fairness considerations in the development of AI/ML-based systems?

A
  • Establish clear goals for the AI/ML system and consider the potential impacts on different groups of people.
  • Define fairness metrics and evaluation criteria to measure the performance of the system.
  • Ensure that the training data is diverse, representative, and free of biases.
  • Document and disclose the technical details of the system, including its design, data sources, and algorithms.
8
Q

When looking critically at data, what are some things to consider with respect to the target variable?

A

Consider:

  • whether a positive prediction will be good or bad for individuals, and
  • whether the target variable is an inherently subjective human decision, an intentional proxy, or an apparently objective, measurable property.

9
Q

When looking critically at data, what are some things to consider with respect to group representation?

A

Consider how each group is represented in the data and in the set of positive examples, and explain any large observed discrepancies with a worldview.
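
A minimal sketch of such a check, assuming a pandas DataFrame with hypothetical group and label columns, compares each group's share of the data with its share of the positive examples:

    import pandas as pd

    # Hypothetical example data: group membership and a binary outcome label.
    df = pd.DataFrame({
        "group": ["a", "a", "a", "b", "b", "b", "b", "b"],
        "label": [1, 1, 0, 1, 0, 0, 0, 0],
    })

    # How each group is represented in the data overall.
    share_of_data = df["group"].value_counts(normalize=True)

    # How each group is represented among the positive examples.
    share_of_positives = df.loc[df["label"] == 1, "group"].value_counts(normalize=True)

    # Fraction of positive examples within each group.
    positive_rate = df.groupby("group")["label"].mean()

    report = pd.DataFrame({
        "share_of_data": share_of_data,
        "share_of_positives": share_of_positives,
        "positive_rate": positive_rate,
    })
    print(report)
    # Large discrepancies between these columns should be explainable
    # by an explicit worldview rather than left unexamined.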

10
Q

What is one key aspect to consider with respect to data when trying to achieve algorithmic fairness?

A

High-quality datasets are essential, but the fine-grained demographic data needed to assess fairness may not always be available.

11
Q

What are pre-processing techniques for mitigating algorithmic discrimination?

A

Pre-processing techniques involve modifying the input data to mitigate discrimination. These techniques include

  • data minimization,
  • feature selection, and
  • feature engineering.
12
Q

What is data minimization?

A

Data minimization is the process of collecting and retaining only the minimum amount of data necessary for a specific purpose.

It can help reduce the risk of discrimination by limiting the potential for sensitive information to be used to make decisions.
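
A minimal sketch of the idea, with hypothetical fields for a credit-decision purpose: only the columns justified by that purpose are retained, so sensitive or extraneous data is never stored.

    import pandas as pd

    # Hypothetical raw records from an application form.
    raw = pd.DataFrame({
        "applicant_id": [101, 102],
        "income": [42000, 55000],
        "requested_amount": [5000, 12000],
        "religion": ["x", "y"],              # sensitive; not needed for the purpose
        "browsing_history": ["...", "..."],  # extraneous; not needed for the purpose
    })

    # Fields justified by the specific purpose (a credit decision, in this sketch).
    PURPOSE_FIELDS = ["applicant_id", "income", "requested_amount"]

    # Retain only the minimum data necessary; everything else is dropped.
    minimized = raw[PURPOSE_FIELDS]
    print(minimized)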

13
Q

What is feature selection?

A

Feature selection is the process of selecting a subset of relevant features from the input data.

By excluding irrelevant or discriminatory features, the risk of bias in the model can be reduced.
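
A minimal sketch, assuming scikit-learn is available and that domain review has identified which columns are protected attributes or close proxies (the column names here are hypothetical): protected and proxy columns are excluded first, then a standard relevance filter keeps the most informative remaining features.

    import pandas as pd
    from sklearn.feature_selection import SelectKBest, f_classif

    # Hypothetical feature table with a protected attribute and a known proxy.
    X = pd.DataFrame({
        "income": [30, 45, 52, 61, 28, 70],
        "tenure_years": [1, 3, 5, 8, 1, 10],
        "gender": [0, 1, 0, 1, 0, 1],    # protected attribute
        "zip_code": [1, 2, 1, 2, 1, 2],  # assumed proxy for a protected group
    })
    y = pd.Series([0, 1, 0, 1, 0, 1])

    # Step 1: drop protected attributes and known proxies (a domain judgment).
    EXCLUDED = ["gender", "zip_code"]
    X_clean = X.drop(columns=EXCLUDED)

    # Step 2: keep the most relevant remaining feature(s).
    selector = SelectKBest(score_func=f_classif, k=1).fit(X_clean, y)
    print("selected:", list(X_clean.columns[selector.get_support()]))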

14
Q

What is feature engineering?

A

Feature engineering involves creating new features from the input data that may be more informative or less discriminatory than the original features. This can help improve the fairness of the model.
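
A minimal sketch with hypothetical columns: a derived debt-to-income ratio may be more directly tied to the prediction target than either raw column, and less entangled with group membership than cruder proxies.

    import pandas as pd

    # Hypothetical raw features.
    df = pd.DataFrame({
        "monthly_debt": [500, 900, 300],
        "monthly_income": [3000, 3200, 4000],
    })

    # Engineered feature: debt-to-income ratio.
    df["debt_to_income"] = df["monthly_debt"] / df["monthly_income"]
    print(df)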

15
Q

What are in-processing techniques for mitigating algorithmic discrimination?

A

In-processing techniques involve modifying the learning algorithm to mitigate discrimination. These techniques include

  • constraint-based methods and
  • penalization-based methods.

16
Q

What are constraint-based methods?

A

Constraint-based methods involve adding constraints to the learning algorithm to limit the potential for discrimination.

For example, a constraint may be added to ensure that the false positive rate or false negative rate is similar across different groups.
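
One concrete realization is the reductions approach in the fairlearn library, sketched below on synthetic data (assuming fairlearn and scikit-learn are installed): the classifier is trained subject to an equalized-odds constraint, which asks for similar false positive and false negative rates across groups.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from fairlearn.reductions import ExponentiatedGradient, EqualizedOdds

    # Synthetic data: three features, a binary group, and a binary label.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    group = rng.integers(0, 2, size=200)
    y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=200) > 0).astype(int)

    # Train subject to an equalized-odds constraint across groups.
    mitigator = ExponentiatedGradient(
        estimator=LogisticRegression(),
        constraints=EqualizedOdds(),
    )
    mitigator.fit(X, y, sensitive_features=group)
    y_pred = mitigator.predict(X)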

17
Q

What are penalization-based methods?

A

Penalization-based methods involve adding a penalty term to the learning algorithm to discourage discriminatory behavior. For example, a penalty term may be added to reduce the difference in accuracy between different groups.
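
The card mentions penalizing accuracy gaps; the sketch below uses a simpler demographic-parity-style penalty that is easy to differentiate: a logistic regression trained by gradient descent, with a squared penalty on the gap in average predicted score between two groups (the data is synthetic and the penalty weight lam is an assumed value).

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    g = rng.integers(0, 2, size=200)           # group membership
    y = (X[:, 0] + 0.5 * g > 0).astype(int)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    w = np.zeros(3)
    lam, lr = 2.0, 0.1                         # penalty weight (assumed) and step size
    for _ in range(500):
        p = sigmoid(X @ w)
        grad = X.T @ (p - y) / len(y)          # gradient of the log-loss
        gap = p[g == 1].mean() - p[g == 0].mean()
        s = p * (1 - p)                        # derivative of sigmoid w.r.t. its input
        dgap = (X[g == 1] * s[g == 1, None]).mean(axis=0) \
             - (X[g == 0] * s[g == 0, None]).mean(axis=0)
        grad += lam * 2 * gap * dgap           # gradient of lam * gap**2
        w -= lr * grad

    p = sigmoid(X @ w)
    print("score gap after training:", p[g == 1].mean() - p[g == 0].mean())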

18
Q

What is gradient reversal?

A

Gradient reversal is an in-processing technique for neural networks. An adversarial head is trained to predict the protected attribute from an intermediate representation, and the gradient flowing back from that head is reversed during backpropagation, so the rest of the network learns a representation from which the protected attribute cannot easily be recovered.
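
A minimal sketch of a gradient reversal layer in PyTorch, following the common autograd-function pattern (network sizes and data here are hypothetical): the layer is the identity on the forward pass and negates the gradient on the backward pass, so training the adversary to predict the protected attribute pushes the encoder to discard that information.

    import torch
    from torch import nn

    class GradReverse(torch.autograd.Function):
        # Identity forward; gradient multiplied by -lamb on the way back.
        @staticmethod
        def forward(ctx, x, lamb):
            ctx.lamb = lamb
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            return -ctx.lamb * grad_output, None

    encoder = nn.Sequential(nn.Linear(10, 16), nn.ReLU())
    task_head = nn.Linear(16, 2)   # predicts the actual target
    adv_head = nn.Linear(16, 2)    # tries to predict the protected attribute

    x = torch.randn(32, 10)
    z = encoder(x)
    task_logits = task_head(z)
    adv_logits = adv_head(GradReverse.apply(z, 1.0))

    y_task = torch.randint(0, 2, (32,))
    a_protected = torch.randint(0, 2, (32,))
    loss = nn.functional.cross_entropy(task_logits, y_task) \
         + nn.functional.cross_entropy(adv_logits, a_protected)
    loss.backward()  # the reversal flips the adversary's gradient into the encoder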

19
Q

What are post-processing techniques for mitigating algorithmic discrimination?

A

Post-processing techniques involve modifying the output of the model to mitigate discrimination. These techniques include re-labeling and calibration.

20
Q

What is re-labeling?

A

Re-labeling involves changing the output of the model to improve fairness. For example, if the model predicts negative outcomes for members of a protected group at a disproportionately high rate, re-labeling may involve changing some of those predictions to positive outcomes to reduce discrimination.
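
A minimal sketch of one simple re-labeling scheme, group-dependent decision thresholds, with hypothetical scores and assumed threshold values:

    import numpy as np

    scores = np.array([0.30, 0.55, 0.62, 0.45, 0.80, 0.35])
    group = np.array([0, 0, 0, 1, 1, 1])

    # A single global threshold.
    global_pred = (scores >= 0.5).astype(int)

    # Re-labeled predictions: a lower threshold (assumed value) for the group
    # that would otherwise receive too few positive outcomes.
    thresholds = {0: 0.5, 1: 0.4}
    relabeled = (scores >= np.vectorize(thresholds.get)(group)).astype(int)
    print(global_pred, relabeled)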

21
Q

What is calibration?

A

Calibration involves adjusting the output of the model to ensure that its scores are equally interpretable across different groups.

This can help ensure that the model is fair and not biased towards any particular group.
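
A minimal sketch of per-group calibration via Platt scaling, assuming scikit-learn and synthetic data: a separate logistic map from raw score to probability is fit within each group, so equal calibrated scores correspond to similar observed outcome rates in every group.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Synthetic raw scores, group membership, and observed outcomes;
    # the raw model is simulated as miscalibrated for group 1.
    rng = np.random.default_rng(0)
    scores = rng.uniform(size=300)
    group = rng.integers(0, 2, size=300)
    true_prob = np.where(group == 1, 0.6 * scores, scores)
    y = (rng.uniform(size=300) < true_prob).astype(int)

    # Fit a Platt-scaling map (logistic regression on the raw score) per group.
    calibrated = np.empty_like(scores)
    for grp in (0, 1):
        mask = group == grp
        platt = LogisticRegression().fit(scores[mask].reshape(-1, 1), y[mask])
        calibrated[mask] = platt.predict_proba(scores[mask].reshape(-1, 1))[:, 1]
    print(calibrated[:5])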