Lecture 5 Flashcards
Prejudice in AI
AI and the algorithms we use are trained on information produced by human society, and then generate outputs that consumers of AI use
This may be a pathway through which systemic forms of bias are re-presented to humans
Bias towards societal stereotypes
Where do these patterns come from?
Algorithms are created by people; because people have biases, algorithms will be biased in some way
o They reflect and reinforce stereotypes
o Biased results are often attributed to human search tendencies, but they also reflect algorithm design
Based on marketing/business priorities
Where do these patterns come from, according to different disciplines?
SOCIOLOGISTS
Patterns reflect existing societal structures and disparities
COMPUTER SCIENTISTS
Patterns emerge from the data & AI computations
PSYCHOLOGISTS
Have not been part of this conversation, but they need to be.
Patterns reflect human social cognition and decision processes as they interface with societal structures & AI computation
o It's about individual humans: how prejudice affects social cognition, and how those individuals then interact with both society and data
Is gender inequality reflected in search output?
In countries with more gender inequality, image searches for "person" more often returned men
Result:
Exposure to search engine results from a high-inequality country led participants to assume that men are more likely to be hired for jobs
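A minimal sketch of this kind of country-level analysis: correlate an inequality measure with how male-skewed the search output is. All numbers below are illustrative placeholders, not the study's data, and the sketch assumes scipy is available.

```python
# Toy version of the analysis: correlate a gender inequality index
# with the share of male faces in "person" image search results.
# Values are made-up placeholders, not study data.
from scipy.stats import pearsonr

gii = [0.10, 0.25, 0.40, 0.55, 0.70]        # hypothetical inequality index per country
prop_male = [0.48, 0.55, 0.61, 0.66, 0.74]  # hypothetical share of male faces in top results

r, p = pearsonr(gii, prop_male)
print(f"r = {r:.2f}, p = {p:.3f}")
```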
Cycle of bias
Societal structures shape algorithms, algorithms shape users, and users act in ways that maintain those societal structures (a toy simulation of this loop follows below)
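A toy simulation of the feedback loop, under assumed update rules: the amplification and belief-update coefficients below are illustrative assumptions, not values from the lecture.

```python
# Toy simulation of the cycle: society -> algorithm -> users -> society.
# The 1.05 amplification factor and the 50/50 belief update are
# assumptions for illustration only.
societal_bias = 0.60  # initial share of society holding a stereotype

for step in range(5):
    algorithm_bias = min(1.0, societal_bias * 1.05)           # algorithm trained on biased data amplifies it slightly
    user_belief = 0.5 * societal_bias + 0.5 * algorithm_bias  # users shift their beliefs toward algorithm output
    societal_bias = user_belief                               # user behavior feeds back into societal structures
    print(f"step {step + 1}: societal bias = {societal_bias:.3f}")
```

Even a small per-step amplification compounds over iterations, which is the point of the "cycle": the loop maintains or strengthens the bias rather than letting it decay.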
Bias in NLP
NLP relies on digitized texts
o Biased toward whose texts get digitized
o The further back in time you go, the more overt sexism, prejudice, and racism you find
o Training data may have systemic bias and race proxies built in (see the embedding sketch below)
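One standard way to show this kind of bias is to compare word-embedding similarities between gendered terms and occupation terms. The 4-dimensional vectors below are made-up toy values for illustration; real embeddings (e.g., word2vec, GloVe) are learned from text corpora.

```python
# Minimal sketch of measuring stereotype associations in word
# embeddings via cosine similarity. Vectors are toy values.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

emb = {
    "he":       np.array([0.9, 0.1, 0.3, 0.0]),
    "she":      np.array([0.1, 0.9, 0.3, 0.0]),
    "engineer": np.array([0.8, 0.2, 0.4, 0.1]),
    "nurse":    np.array([0.2, 0.8, 0.4, 0.1]),
}

for job in ("engineer", "nurse"):
    gap = cosine(emb[job], emb["he"]) - cosine(emb[job], emb["she"])
    print(f"{job}: male-association gap = {gap:+.2f}")
```

A positive gap means the occupation sits closer to "he" than to "she" in the embedding space, i.e., the stereotype in the training texts is baked into the vectors.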
Bias in computer vision
- Bias enters through the data used to create and train algorithms
- Social/cultural biases of the human annotators involved in dataset creation
Does dataset diversity affect race classification accuracy?
More data and more diverse data = higher accuracy for all races
> rather than higher accuracy for white faces compared to other races (see the disaggregated-accuracy sketch below)
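Findings like this depend on disaggregated evaluation: computing accuracy separately per demographic group rather than one overall number. A minimal sketch, with placeholder labels, predictions, and group tags:

```python
# Minimal sketch of disaggregated evaluation: accuracy per group.
# All values below are illustrative placeholders.
from collections import defaultdict

y_true = ["A", "B", "A", "B", "A", "B", "A", "B"]
y_pred = ["A", "B", "A", "A", "A", "B", "B", "B"]
group  = ["g1", "g1", "g1", "g1", "g2", "g2", "g2", "g2"]  # demographic group per sample

correct, total = defaultdict(int), defaultdict(int)
for t, p, g in zip(y_true, y_pred, group):
    total[g] += 1
    correct[g] += int(t == p)

for g in sorted(total):
    print(f"{g}: accuracy = {correct[g] / total[g]:.2f}")
```

The same per-group breakdown is what reveals the annotator effect in the next card: comparing models trained on different annotator pools, group by group.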
Annotator prejudice effects
A model trained on images annotated by low-prejudice annotators was more accurate at classifying race from the faces of Black individuals