Stereotypes, Biases, Fairness Flashcards
What causes AI systems to exhibit bias?
Bias in training data and algorithm design, shaped by societal and historical prejudices
How do gender stereotypes manifest in NLP?
Word embeddings associate men with professions like “doctor” and women with “nurse,” reinforcing stereotypes
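This association can be measured with cosine similarity between word vectors. A minimal sketch, using tiny hypothetical 3-dimensional vectors (real embeddings such as word2vec have hundreds of dimensions; the numbers here are invented purely to illustrate the effect):

```python
import math

# Toy "embeddings" with hypothetical values; in a biased space,
# "doctor" sits closer to "man" and "nurse" closer to "woman".
emb = {
    "man":    [0.9, 0.1, 0.2],
    "woman":  [0.1, 0.9, 0.2],
    "doctor": [0.8, 0.2, 0.5],
    "nurse":  [0.2, 0.8, 0.5],
}

def cosine(a, b):
    # Cosine similarity: dot product divided by the product of norms.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

print(cosine(emb["man"], emb["doctor"]) > cosine(emb["woman"], emb["doctor"]))  # True
print(cosine(emb["woman"], emb["nurse"]) > cosine(emb["man"], emb["nurse"]))    # True
```

The same comparison run on embeddings trained from real text is how studies detected profession-gender stereotypes in NLP models.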
What was the key finding of the 2018 facial recognition study (Gender Shades) by Joy Buolamwini and Timnit Gebru?
Facial recognition systems were less accurate for women and individuals with darker skin tones
Why is unbiased data crucial for AI systems?
It ensures fair and accurate outcomes, preventing discrimination
What is “algorithmic accountability”?
A framework to ensure transparency and fairness in AI decision-making
How does AI amplify existing biases?
By learning and perpetuating prejudices from biased data
Why is diversity in AI development teams important?
Diverse perspectives help identify and reduce biases during system design
What did ProPublica’s investigation reveal about “risk assessment” algorithms?
They falsely flagged Black defendants as likely future criminals at nearly twice the rate of white defendants
What is interaction bias?
Bias introduced during user interactions with AI systems
Give an example of latent bias in AI
Algorithms associating physicists predominantly with male images due to historical data
What causes selection bias in AI?
Non-representative or skewed training datasets
Why is addressing AI bias a complex issue?
Bias can enter through multiple stages, from data collection to algorithm design
What are the three types of bias in AI?
- Interaction Bias: Bias introduced during user interaction with AI systems.
- Latent Bias: Bias stemming from historical stereotypes in training data.
- Selection Bias: Bias caused by non-representative training datasets.
What is the Gender Shades project?
A study evaluating accuracy disparities in facial recognition systems across gender and skin tone
Which group faced the highest error rates in Gender Shades findings?
Darker-skinned females
Why is inclusive testing important for AI systems?
To ensure fairness and accuracy across all demographic groups
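Inclusive testing amounts to computing accuracy separately for each demographic group rather than one aggregate score. A minimal sketch with hypothetical counts (the numbers below are illustrative, not the published Gender Shades figures):

```python
# Hypothetical per-group results on a toy face-classification test set.
results = {
    "lighter-skinned males":  {"correct": 99, "total": 100},
    "darker-skinned females": {"correct": 65, "total": 100},
}

# An aggregate accuracy score would hide the gap; per-group error
# rates make the disparity visible.
for group, r in results.items():
    error = 1 - r["correct"] / r["total"]
    print(f"{group}: error rate {error:.0%}")
```

A single overall accuracy of 82% here would look acceptable while one group experiences a far higher error rate, which is exactly the disparity Gender Shades exposed.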
What action did IBM take following the Gender Shades findings?
IBM updated its Watson Visual Recognition API to address biases
What is “first-person fairness”?
Ensuring an AI responds to a user fairly, without bias triggered by personal cues such as the user's name
How does ChatGPT exhibit bias in name-based queries?
By generating stereotyped responses linked to gender or ethnicity
What strategies can mitigate bias in AI models like ChatGPT?
Preprocessing training data, diversity-aware algorithms, and continuous monitoring
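One common preprocessing step is inverse-frequency reweighting, so that an overrepresented group does not dominate training. A minimal sketch with a hypothetical skewed label set (group names and counts are invented for illustration):

```python
from collections import Counter

# Hypothetical skewed training labels: group "A" is overrepresented
# (a form of selection bias).
labels = ["A"] * 80 + ["B"] * 20

counts = Counter(labels)
n, k = len(labels), len(counts)

# weight = n / (k * count): each group then contributes equally
# in aggregate (80 * 0.625 == 20 * 2.5 == 50).
weights = {g: n / (k * c) for g, c in counts.items()}
print(weights)  # {'A': 0.625, 'B': 2.5}
```

This is the same "balanced" weighting scheme offered by libraries such as scikit-learn; continuous monitoring then checks that the rebalanced model stays fair on new data.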
How did GPT-4o improve over GPT-3.5 in terms of bias?
GPT-4o significantly reduced stereotyping rates in responses.
“People judge subjectively and exhibit numerous bias effects in their decisions.
Decisions can even be influenced by situational moods or unconscious beliefs.” (True/False)
True
“AI systems, on the other hand, are objective, fair and able to judge neutrally.” (True/False)
That’s not (automatically) true, since AI is made by us and learns from us.
What is a stereotype?
A stereotype is an oversimplified and generalized idea about a group of people, ignoring individual differences.