Week 4 - Bias In Machine Learning Flashcards
How do we learn? Which two systems do we use?
System 1: intuition and instinct (~95%). Unconscious, fast, associative; runs on automatic pilot.
System 2: rational thinking (~5%). Effortful, slow, logical, lazy, indecisive.
What does associative strength depend on?
Conditional probability, distinctiveness, utility (fear, pain)
What do cognitive biases lead to?
Generalizations that diverge systematically from general truths
What are associations? What is their consequence?
Associations are biased towards the past; they overemphasize differences and obliterate similarities between groups, and they emphasize danger and risk. As a consequence, our generalizations are inherently biased.
What is the danger of generalizations?
Even if not biased, they can lead to unjust and harmful behaviour.
How can bias be harmful?
Allocative harm: resources and opportunities are distributed unfairly.
Representational harm: groups/identities are represented in a less favorable or demeaning way, or are not recognized at all.
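Allocative harm can be made concrete with a simple bias benchmark. Below is a minimal sketch that measures the gap in favorable-outcome rates between two groups (a demographic parity difference); all function names and data are illustrative, not from the course material.

```python
# Minimal sketch of one bias benchmark: the gap in favorable-outcome
# rates between two groups (demographic parity difference).
# Names and data are hypothetical, for illustration only.

def selection_rate(decisions):
    """Fraction of favorable (positive) decisions, e.g. loans approved."""
    return sum(decisions) / len(decisions)

def demographic_parity_difference(group_a, group_b):
    """Absolute gap in favorable-outcome rates; 0.0 means parity."""
    return abs(selection_rate(group_a) - selection_rate(group_b))

# Hypothetical loan decisions (1 = approved, 0 = denied)
group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # 6/8 approved = 0.75
group_b = [1, 0, 0, 1, 0, 0, 0, 1]  # 3/8 approved = 0.375

gap = demographic_parity_difference(group_a, group_b)
print(f"Demographic parity difference: {gap:.3f}")  # → 0.375
```

A large gap flags a possible allocative harm, but as the later cards stress, no single metric settles whether a system is fair; that judgment is socio-technological, not purely technical.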
Is bias [in technology] just reflecting back our societal, cultural biases?
Yes and no:
- GIGO (garbage in, garbage out): biased data produces biased output
- Newly produced information reinforces and fuels existing biases and stereotypes
- It's a design problem -> design is done by humans -> humans are biased
How should we address bias in AI?
Algorithms and ML are neither objective nor fair; they are human-made technology. Looking for a purely technical fix means still believing the illusion that AI is scientific and neutral.
What are solutions to bias in ML beyond technical solutions?
- (Self-)regulation: strict rules and practices for training and bias benchmarks, though there is enormous commercial pressure on the field
- Policy making: bias is a socio-technological problem; use interpretable models instead of black-box ones
- Challenge the power structures: AI harms power minorities, and oppression is not always conscious; force awareness and educate
What does our ability to deal with the world around us effectively rely on?
Our ability to make good (i.e. effective) generalizations, in other words, our ability to learn.
How can bias be harmful?
- Allocative harm = resources and opportunities are distributed unfairly (COMPAS)
- Representational harm = groups/identities are represented in a less favorable or demeaning way, or are not recognized at all (stereotyping, under-representation)
Why is it neither fair nor responsible to leave the issue of the harm bias in technology causes to AI engineers, researchers and companies?
Because it is a socio-technological problem that society as a whole needs to deal with; it is a societal and human problem, and AI is not going to solve it for us.