Week 4 - Bias In Machine Learning Flashcards

1
Q

How do we learn? Which two systems do we use?

A

System 1: intuition and instinct (95%). Unconscious, fast, associative; automatic pilot.
System 2: rational thinking (5%). Takes effort; slow, logical, lazy, indecisive.

2
Q

What does associative strength depend on?

A

Conditional probability, distinctiveness, and utility (e.g. fear, pain).
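
A minimal sketch (all counts hypothetical, pure Python) of the conditional-probability part: associative strength between a cue and an outcome can be read as P(outcome | cue) estimated from co-occurrence counts, which distinctiveness and utility (fear, pain) then amplify.

    # Hypothetical co-occurrence counts of (cue, outcome) pairs.
    counts = {("dog", "bite"): 3, ("dog", "play"): 47}

    def conditional_probability(cue, outcome, counts):
        # P(outcome | cue): share of all observations of the cue in which this outcome followed.
        cue_total = sum(n for (c, _), n in counts.items() if c == cue)
        return counts.get((cue, outcome), 0) / cue_total if cue_total else 0.0

    # "bite" follows "dog" in only 6% of the hypothetical observations, yet its high
    # utility (fear, pain) can make the association feel much stronger.
    print(conditional_probability("dog", "bite", counts))  # 0.06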

3
Q

What do cognitive biases lead to?

A

Generalizations that diverge systematically from general truths

4
Q

What are associations? What is their consequence?

A

They are biased towards the past, overemphasize differences and obliterate similarities between groups, and emphasize danger and risk. As a consequence, our generalizations are inherently biased.

5
Q

What is the danger of generalizations?

A

Even if not biased, they can lead to unjust and harmful behaviour.

6
Q

How can bias be harmful?

A

Allocative harm: resources and opportunities are distributed unfairly.

Representational harm: groups/identities are represented in a less favorable or demeaning way, or are not recognized at all.

7
Q

Is bias [in technology] just reflecting back our societal, cultural biases?

A

Yes and no:
- GIGO (garbage in, garbage out): biased data in, biased results out
- Newly produced information reinforces and fuels existing biases and stereotypes
- It's a design problem -> design is done by humans -> humans are biased

8
Q

How should we address bias in AI?

A

Algorithms and ML are neither objective nor fair - they are human-made technology. Looking for a technical fix is still believing in the illusion that it is scientific and neutral.

9
Q

What are solutions to bias in ML beyond technical solutions?

A
  1. (Self-)regulation: strict rules and practices for training and bias benchmarks + enormous commercial pressure on the field
  2. Policy making: bias is a socio-technological problem; use interpretable models instead of black-box ones (see the sketch after this list)
  3. Challenge the power structures: AI harms power minorities + oppression is not always conscious; force awareness and educate
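
A minimal, hypothetical sketch of the interpretability point in item 2: a linear score whose per-feature contributions can be printed and audited, which a black-box model does not offer. Feature names and weights are invented for illustration only.

    # Hypothetical interpretable scoring model: every contribution is visible and auditable.
    WEIGHTS = {"years_of_experience": 0.5, "relevant_degree": 0.25, "referral": 0.25}

    def score_with_explanation(applicant):
        # Return the total score plus a per-feature breakdown for auditing.
        contributions = {f: w * applicant.get(f, 0) for f, w in WEIGHTS.items()}
        return sum(contributions.values()), contributions

    total, breakdown = score_with_explanation(
        {"years_of_experience": 4, "relevant_degree": 1, "referral": 0}
    )
    print(total)      # 2.25
    print(breakdown)  # {'years_of_experience': 2.0, 'relevant_degree': 0.25, 'referral': 0.0}
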
10
Q

What does our ability to deal with the world around us effectively rely on?

A

Our ability to make good (aka effective) generalizations (i.e. to learn)

11
Q

How can bias be harmful?

A
  1. Allocative harm = resources and opportunities are distributed unfairly (COMPAS; see the sketch below)
  2. Representational harm = groups/identities are represented in a less favorable or demeaning way, or are not recognized at all (stereotyping, under-representation)
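
A minimal sketch (hypothetical predictions, not COMPAS data) of how allocative harm can be made measurable: compare the rate at which each group receives the favorable outcome; a large gap signals that a resource or opportunity is being distributed unfairly.

    # Hypothetical model decisions: 1 = favorable outcome (e.g. low-risk label, loan granted).
    decisions = [
        {"group": "A", "favorable": 1}, {"group": "A", "favorable": 1},
        {"group": "A", "favorable": 1}, {"group": "A", "favorable": 0},
        {"group": "B", "favorable": 1}, {"group": "B", "favorable": 0},
        {"group": "B", "favorable": 0}, {"group": "B", "favorable": 0},
    ]

    def favorable_rate(records, group):
        # Share of a group's cases that received the favorable outcome.
        rows = [r for r in records if r["group"] == group]
        return sum(r["favorable"] for r in rows) / len(rows)

    rate_a, rate_b = favorable_rate(decisions, "A"), favorable_rate(decisions, "B")
    print(rate_a, rate_b, rate_a - rate_b)  # 0.75 0.25 0.5 -> a large allocative gap
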
12
Q

Why is it neither fair nor responsible to leave the issue of the harm that bias in technology causes to AI engineers, researchers, and companies?

A

Because bias is a socio-technological problem that society needs to deal with; it is a societal and human problem, and AI is not going to solve it for us.
