Fairness Flashcards

1
Q

What are the downsides of human decision-making, and how can computers counteract these downsides? What is the main downside of computers for decision-making?

A

Human decision-making is often inefficient and subjective: it is slow, expensive, limited in how much relevant information it can consider, influenced by contextual factors, and driven by implicit biases.

AI is increasingly used for decision support because AI-supported decision-making promises to be more efficient and more objective: it is cheaper, faster, able to consider more information, and algorithms follow explicit rules. However, AI systems are not as objective as expected, and this is their main downside: biased programmers can create biased algorithms, which are then trained on biased data. Moreover, fairness is a broader concept than bias.

2
Q

Why is discrimination sometimes necessary for decision-making?

A

Prior domain-specific knowledge may be essential for effective decision-making, so not all decisions based on knowledge about participants should be considered problematic.

Classification necessitates differentiation of some kind (e.g. between males and females for medical treatment), not all of which is problematic.

3
Q

What is meant by bias, and why is it not always a problem?

A

Bias means having an a priori tendency or “leaning” in favor of one choice over another. It is not always a problem: sometimes it is legitimate to prefer people from a particular group for a particular position, and an expert’s “hunch” about the right choice for a certain problem is also a form of bias.

4
Q

What types of algorithmic bias and data bias are there?

A

Algorithmic bias:
1. Confirmation bias: encoding (or failing to detect) a biased algorithm because it confirms the developer’s own (implicit) biases
2. Biased objectives: learning algorithm optimizes an (implicitly) biased function (e.g., user engagement with an interface whose imagery is more appealing to individuals of a particular gender)

Data bias:
1. Historical bias (e.g., historical gender imbalance)
2. Representation bias (e.g., data that does not represent the general population)
3. Measurement bias (e.g., not everyone is measured equally often)
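
Representation bias in particular can be checked directly by comparing group shares in the data against a reference population. A minimal sketch in Python (the group labels and reference shares are hypothetical):

```python
from collections import Counter

def representation_gap(sample_groups, population_shares):
    """Compare each group's share of the dataset with its share of the population."""
    counts = Counter(sample_groups)
    total = sum(counts.values())
    return {
        group: counts.get(group, 0) / total - pop_share  # negative => under-represented
        for group, pop_share in population_shares.items()
    }

# Hypothetical example: a group that is ~50% of the population but only 20% of the data.
gaps = representation_gap(["m"] * 80 + ["f"] * 20, {"m": 0.5, "f": 0.5})
print(gaps)  # "m" over-represented by ~0.3, "f" under-represented by ~0.3
```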

5
Q

Give examples of how fairness can be thought of using the consequentialist approach to ethics.

A

From a consequentialist perspective, fairness might be viewed as equal impact: what matters is the result of a particular decision, i.e. whether the affected individuals are better off (the outcome). Determining whether an (AI-driven) decision is fair may therefore require psychological, economic, and other empirical investigations into the decision’s long-term effects on different members of the population. There are, however, many different ways to measure equality statistically, each of which can be computed directly (see the sketch below):
- Statistical parity: All groups have the same overall positive/negative classification rates
- Accuracy equity: Accuracy is the same for all groups.
- Disparate mistreatment: False positive rates are the same in all groups.

Keywords: outcome, empirical research, accuracy, classification, false positives
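
All three of these metrics can be computed from predictions, ground-truth labels, and group membership. A minimal sketch using NumPy (the array names and example values are hypothetical):

```python
import numpy as np

def group_metrics(y_true, y_pred, groups):
    """Per-group positive rate (statistical parity), accuracy (accuracy equity),
    and false positive rate (disparate mistreatment)."""
    out = {}
    for g in np.unique(groups):
        m = groups == g
        yt, yp = y_true[m], y_pred[m]
        out[str(g)] = {
            "positive_rate": yp.mean(),
            "accuracy": (yp == yt).mean(),
            "fpr": yp[yt == 0].mean() if (yt == 0).any() else float("nan"),
        }
    return out

# Hypothetical predictions for two groups "a" and "b".
y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 1])
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
for g, metrics in group_metrics(y_true, y_pred, groups).items():
    print(g, metrics)
```

A system satisfies each criterion to the extent that the corresponding metric is (approximately) equal across groups.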

6
Q

Give examples of how fairness can be thought of using the deontological approach to ethics.

A

From a deontological perspective, fairness might be viewed as equal treatment: what matters are the intentions behind, and the criteria for, a decision. Decisions are unfair when they refer to protected classes, individuated by features such as gender, race, or sexual orientation. Can we define the criteria which should (and should not) be considered in a decision?

Protected attributes, e.g. gender, race, sexual orientation, and religion, are generally attributes over which individuals have no choice or control.

But what about proxy attributes such as reading habits, friend networks, and food choices, which can correlate with protected attributes?

And what about AI-defined attributes such as screen resolution, operating system, click behavior, and response times?

Keywords: equal treatment, intention, attributes
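
The naive way to operationalize equal treatment is to exclude the protected attributes from the decision criteria (sometimes called “fairness through unawareness”, a technique not named in this card). A minimal sketch assuming a pandas DataFrame with hypothetical column names, which also illustrates why proxies remain a problem:

```python
import pandas as pd

# Hypothetical applicant data; the column names are illustrative only.
df = pd.DataFrame({
    "gender": ["f", "m", "f"],
    "religion": ["x", "y", "x"],
    "years_experience": [5, 3, 8],
    "reading_habits": ["fiction", "news", "fiction"],  # a potential proxy attribute
})

PROTECTED = ["gender", "religion"]

# Equal treatment, naive version: drop protected attributes before training.
features = df.drop(columns=PROTECTED)
print(features.columns.tolist())  # ['years_experience', 'reading_habits']

# Caveat: "reading_habits" may correlate with gender, so protected
# information can still enter the decision via this proxy attribute.
```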

7
Q

What can AI engineers do to promote fairness at different stages of the AI development cycle?

A

Pre-processing strategies: Include more (diverse) examples in the data. Review the way data is labeled, preferably not by yourself. Identify, and then re-weight or exclude, problematic input elements.
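
A minimal sketch of the re-weighting step, assuming group labels are available (inverse-frequency weights are one simple choice among many):

```python
import numpy as np

def inverse_frequency_weights(groups):
    """Weight each example inversely to its group's frequency, so that
    under-represented groups contribute as much as over-represented ones."""
    values, counts = np.unique(groups, return_counts=True)
    freq = dict(zip(values, counts / len(groups)))
    return np.array([1.0 / freq[g] for g in groups])

groups = np.array(["m"] * 8 + ["f"] * 2)  # hypothetical imbalanced dataset
weights = inverse_frequency_weights(groups)
print(weights)  # "m" examples weigh 1.25, "f" examples weigh 5.0
# Many learners accept such weights directly, e.g. model.fit(X, y, sample_weight=weights).
```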

In-processing strategies: If you can change the learning algorithm, replace it with a bias-mitigating alternative. Modify the objective function to punish illegitimate bias. Use adversarial debiasing or other techniques to reward unbiased decisions.
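
A minimal sketch of modifying the objective function, assuming a differentiable model trained with PyTorch; the penalty pushes the average predicted scores of the two groups together, and the coefficient lam is a hypothetical knob for how hard illegitimate bias is punished:

```python
import torch
import torch.nn.functional as F

def fair_loss(scores, targets, group_mask, lam=1.0):
    """Standard classification loss plus a penalty on the gap between
    the mean predicted scores of the two groups."""
    base = F.binary_cross_entropy_with_logits(scores, targets)
    gap = scores[group_mask].mean() - scores[~group_mask].mean()
    return base + lam * gap.abs()

# Hypothetical batch: raw model scores, binary targets, boolean group membership.
scores = torch.randn(6, requires_grad=True)
targets = torch.tensor([1.0, 0.0, 1.0, 0.0, 1.0, 0.0])
group_mask = torch.tensor([True, True, True, False, False, False])
loss = fair_loss(scores, targets, group_mask, lam=0.5)
loss.backward()  # gradients now also discourage a between-group score gap
```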

Post-processing strategies: If neither the data nor the algorithm can be changed, it may be necessary to perform a “fairness audit”, and to use a biased system’s output with care, oversight, or not at all.
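
A fairness audit amounts to applying metrics like those in card 5 to the frozen system’s outputs and deciding whether the gaps are acceptable. A minimal sketch (the maximum tolerated gap of 0.1 is a hypothetical policy choice):

```python
import numpy as np

def audit_positive_rates(y_pred, groups, max_gap=0.1):
    """Flag the system if per-group positive classification rates
    differ by more than max_gap."""
    rates = {str(g): y_pred[groups == g].mean() for g in np.unique(groups)}
    gap = max(rates.values()) - min(rates.values())
    return {"rates": rates, "gap": gap, "pass": gap <= max_gap}

# Hypothetical audit of a frozen model's decisions.
y_pred = np.array([1, 1, 1, 0, 0, 0])
groups = np.array(["a", "a", "a", "b", "b", "b"])
print(audit_positive_rates(y_pred, groups))  # gap == 1.0, so the audit fails
```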

Keywords: Review data, bias-mitigation, corrections, objective functions, fairness audit
