Stereotypes, Biases, Fairness Flashcards
What causes AI systems to exhibit bias?
Bias in training data and in algorithm design, shaped by societal and historical biases
How do gender stereotypes manifest in NLP?
Word embeddings associate men with professions like “doctor” and women with “nurse,” reinforcing stereotypes
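One common way such associations are measured is cosine similarity between a profession's vector and gendered word vectors. The tiny 3-dimensional vectors below are invented purely for illustration; real embeddings (e.g. word2vec or GloVe) have hundreds of dimensions learned from large text corpora.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Toy "embeddings" (illustrative only, not learned from data).
emb = {
    "he":     [0.9, 0.1, 0.3],
    "she":    [0.1, 0.9, 0.3],
    "doctor": [0.8, 0.2, 0.4],
    "nurse":  [0.2, 0.8, 0.4],
}

# A simple bias probe: does the profession sit closer to "he" or to "she"?
bias_doctor = cosine(emb["doctor"], emb["he"]) - cosine(emb["doctor"], emb["she"])
bias_nurse = cosine(emb["nurse"], emb["he"]) - cosine(emb["nurse"], emb["she"])

print(bias_doctor > 0)  # "doctor" leans toward "he" in these toy vectors
print(bias_nurse < 0)   # "nurse" leans toward "she"
```

The same probe applied to real embeddings is what revealed the "doctor"/"nurse" gender associations described above.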
What was the key finding of the 2018 facial recognition study by Joy Buolamwini?
Facial recognition systems were less accurate for women and individuals with darker skin tones
Why is unbiased data crucial for AI systems?
It ensures fair and accurate outcomes, preventing discrimination
What is “algorithmic accountability”?
A framework to ensure transparency and fairness in AI decision-making
How does AI amplify existing biases?
By learning and perpetuating prejudices from biased data
Why is diversity in AI development teams important?
Diverse perspectives help identify and reduce biases during system design
What did ProPublica’s investigation reveal about “risk assessment” algorithms?
They falsely flagged Black defendants as future criminals at nearly twice the rate of white defendants
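The disparity ProPublica reported is a gap in per-group false positive rates. A minimal sketch of that computation, using small synthetic records (not ProPublica's actual data):

```python
# Synthetic records (group, predicted_high_risk, actually_reoffended) --
# invented for illustration only.
records = [
    ("A", True, False), ("A", True, False), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", False, False), ("B", False, False), ("B", True, True),
]

def false_positive_rate(group):
    """Share of people who did NOT reoffend but were flagged as high risk."""
    negatives = [r for r in records if r[0] == group and not r[2]]
    false_pos = [r for r in negatives if r[1]]
    return len(false_pos) / len(negatives)

print(false_positive_rate("A"))  # 2/3: group A is flagged twice as often
print(false_positive_rate("B"))  # 1/3
```

Comparing these rates across demographic groups is a standard first step in auditing a risk-assessment model.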
What is interaction bias?
Bias introduced during user interactions with AI systems
Give an example of latent bias in AI
Algorithms associating physicists predominantly with male images due to historical data
What causes selection bias in AI?
Non-representative or skewed training datasets
Why is addressing AI bias a complex issue?
Bias can enter through multiple stages, from data collection to algorithm design
What are the three types of bias in AI?
- Interaction Bias: Bias introduced during user interaction with AI systems.
- Latent Bias: Bias stemming from historical stereotypes in training data.
- Selection Bias: Bias caused by non-representative training datasets.
What is the Gender Shades project?
A study evaluating accuracy disparities in facial recognition systems across gender and skin tone
Which group faced the highest error rates in Gender Shades findings?
Darker-skinned females
Why is inclusive testing important for AI systems?
To ensure fairness and accuracy across all demographic groups
What action did IBM take following the Gender Shades findings?
IBM updated its Watson Visual Recognition API to address biases
What is “first-person fairness”?
Ensuring the AI treats the user themselves fairly in direct interactions, e.g. responses that do not change based on cues such as the user's name
How does ChatGPT exhibit bias in name-based queries?
By generating stereotyped responses linked to gender or ethnicity
What strategies can mitigate bias in AI models like ChatGPT?
Preprocessing training data, diversity-aware algorithms, and continuous monitoring
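One concrete preprocessing strategy is reweighting in the style of Kamiran & Calders: give each (group, label) combination a weight that makes group and label statistically independent in the training data. A minimal sketch with hypothetical counts:

```python
from collections import Counter

# Hypothetical training labels per group (1 = positive outcome).
data = [("men", 1), ("men", 1), ("men", 1), ("men", 0),
        ("women", 1), ("women", 0), ("women", 0), ("women", 0)]

n = len(data)
group_counts = Counter(g for g, _ in data)
label_counts = Counter(y for _, y in data)
pair_counts = Counter(data)

def weight(group, label):
    """weight = P(group) * P(label) / P(group, label)."""
    expected = (group_counts[group] / n) * (label_counts[label] / n)
    observed = pair_counts[(group, label)] / n
    return expected / observed

print(weight("men", 1))    # < 1: over-represented combination is down-weighted
print(weight("women", 1))  # > 1: under-represented combination is up-weighted
```

Training with these sample weights reduces the model's incentive to tie the positive label to group membership; continuous monitoring then checks that the effect holds on live traffic.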
How did GPT-4o improve over GPT-3.5 in terms of bias?
GPT-4o significantly reduced stereotyping rates in responses.
“People judge subjectively and have numerous bias effects in their decisions.
Decisions can even be influenced by situational moods or unconscious beliefs.” (True/False)
True
“AI systems, on the other hand, are objective, fair and able to judge neutrally.” (True/False)
That’s not (automatically) true, since AI is made by us and learns from us.
What is a stereotype?
A stereotype is an oversimplified and generalized idea about a group of people, ignoring individual differences.
How does bias differ from stereotypes?
Stereotypes are the content of biased thinking, while bias is the mechanism by which preferences or prejudices are applied.
What is statistical bias?
Statistical bias refers to a systematic error that skews results due to issues like non-representative samples, flawed measurements, or omitted variables.
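A quick sketch of how a non-representative sample produces a systematic error: if only younger people respond to a survey, the estimated mean age falls below the true population mean (all numbers here are simulated, not real survey data).

```python
import random

random.seed(0)

# Simulated population: ages 18-90, uniformly distributed.
population = [random.randint(18, 90) for _ in range(10_000)]
true_mean = sum(population) / len(population)

# Biased sample: only people under 40 respond (e.g. an online-only survey).
biased_sample = [age for age in population if age < 40][:500]
biased_mean = sum(biased_sample) / len(biased_sample)

print(round(true_mean, 1))    # near the population midpoint of 54
print(round(biased_mean, 1))  # systematically below the true mean
```

No amount of extra sampling fixes this: drawing more respondents from the same skewed pool shrinks random error but leaves the systematic error intact.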
What is unconscious bias?
Unconscious bias is an implicit, learned belief about others that operates outside conscious awareness.
How can unconscious bias affect decisions?
Unconscious biases influence actions and decisions, often in ways incompatible with conscious values.
What factors can amplify unconscious bias?
Multi-tasking and working under time pressure can increase the impact of unconscious biases.
What is biased AI?
Biased AI refers to systems that unfairly favour certain outcomes or groups due to factors like biased data or algorithms.
How does Mehrabi et al. (2022) define fairness?
Fairness is “the absence of any prejudice or favouritism toward an individual or group based on their inherent or acquired characteristics.”
What is historical societal bias in AI?
Historical societal bias occurs when AI reflects stereotypes from historical data, such as gendered CEO predictions.
What is labelling bias?
Labelling bias happens when annotator prejudices or cultural influences affect how training data is labelled, leading to skewed AI results.
What is sampling bias?
Sampling bias arises when training data does not represent the real-world population, such as a medical AI trained on younger patients being inaccurate for older ones.
What is data aggregation bias?
Data aggregation bias occurs when models assume a “one-size-fits-all” approach, failing to account for subgroup differences.
How does design bias manifest in AI?
Design bias emerges from design choices like feminized digital assistants that reinforce stereotypes.
What is interaction bias?
Interaction bias arises when user interactions influence AI to adopt and reinforce societal stereotypes.
What are recommended strategies to keep human bias out of AI?
- Awareness of biased AI and an understanding of its various potential causes
- More diverse and interdisciplinary teams developing AI
- Integration of technical bias-detection strategies into AI development
- Bias impact statements as a self-regulatory practice for AI creators and companies
- Ethical governance standards
How can diversity in development teams mitigate bias?
Diverse teams bring varied perspectives, helping identify and reduce bias during AI development.
What is the purpose of bias detection strategies in AI?
Bias detection strategies identify and address potential sources of bias in data, algorithms, and interactions.
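One widely used detection check is demographic parity: compare the rate of positive decisions across groups and flag gaps above a tolerance. A minimal sketch with hypothetical predictions (group names and the 0.1 threshold are illustrative choices, not a standard):

```python
# Hypothetical model decisions: (group, decision), 1 = positive outcome.
predictions = [("group_x", 1), ("group_x", 1), ("group_x", 0), ("group_x", 1),
               ("group_y", 1), ("group_y", 0), ("group_y", 0), ("group_y", 0)]

def positive_rate(group):
    """Share of positive decisions the model gives this group."""
    decisions = [y for g, y in predictions if g == group]
    return sum(decisions) / len(decisions)

# Demographic parity gap: difference in positive-decision rates.
gap = abs(positive_rate("group_x") - positive_rate("group_y"))
threshold = 0.1  # illustrative tolerance

print(gap)              # 0.75 - 0.25 = 0.5
print(gap > threshold)  # True: flagged for review
```

Checks like this run both during development and as part of continuous monitoring after deployment.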
What is a bias impact statement?
A bias impact statement is a self-regulatory practice where AI developers assess and document the potential impacts of bias in their systems.