Stereotypes, Biases, Fairness Flashcards
What causes AI systems to exhibit bias?
Biased training data and algorithm design, which reflect societal and historical prejudices
How do gender stereotypes manifest in NLP?
Word embeddings associate men with professions like “doctor” and women with “nurse,” reinforcing stereotypes
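A minimal sketch of how such associations can be measured, using tiny made-up vectors (real analyses, e.g. Bolukbasi et al. 2016, use pretrained embeddings such as word2vec or GloVe): profession vectors are projected onto a "he − she" gender direction.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two vectors.
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Toy 3-d embeddings, invented purely for illustration.
vectors = {
    "he":     np.array([ 1.0, 0.1, 0.2]),
    "she":    np.array([-1.0, 0.1, 0.2]),
    "doctor": np.array([ 0.6, 0.8, 0.1]),
    "nurse":  np.array([-0.7, 0.8, 0.1]),
}

# A "gender direction" is often approximated as he - she.
gender_direction = vectors["he"] - vectors["she"]

for word in ("doctor", "nurse"):
    proj = cosine(vectors[word], gender_direction)
    print(f"{word}: projection onto gender direction = {proj:+.2f}")
# Positive values lean "male", negative lean "female" - the pattern
# reported for many profession words in real embeddings.
```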
What was the key finding of the 2018 facial recognition study by Joy Buolamwini and Timnit Gebru?
Facial recognition systems were less accurate for women and individuals with darker skin tones
Why is unbiased data crucial for AI systems?
It ensures fair and accurate outcomes, preventing discrimination
What is “algorithmic accountability”?
A framework to ensure transparency and fairness in AI decision-making
How does AI amplify existing biases?
By learning and perpetuating prejudices from biased data
Why is diversity in AI development teams important?
Diverse perspectives help identify and reduce biases during system design
What did ProPublica’s investigation reveal about “risk assessment” algorithms?
The COMPAS tool falsely flagged Black defendants as future criminals at nearly twice the rate of white defendants
What is interaction bias?
Bias introduced during user interactions with AI systems
Give an example of latent bias in AI
Algorithms associating physicists predominantly with male images due to historical data
What causes selection bias in AI?
Non-representative or skewed training datasets
Why is addressing AI bias a complex issue?
Bias can enter through multiple stages, from data collection to algorithm design
What are the three types of bias in AI?
- Interaction Bias: Bias introduced during user interaction with AI systems.
- Latent Bias: Bias stemming from historical stereotypes in training data.
- Selection Bias: Bias caused by non-representative training datasets.
What is the Gender Shades project?
A study evaluating accuracy disparities in facial recognition systems across gender and skin tone
Which group faced the highest error rates in Gender Shades findings?
Darker-skinned females
Why is inclusive testing important for AI systems?
To ensure fairness and accuracy across all demographic groups
What action did IBM take following the Gender Shades findings?
IBM updated its Watson Visual Recognition API to address biases
What is “first-person fairness”?
Fairness toward the user in direct interactions, i.e. ensuring responses are not biased by the user's own identity cues, such as their name
How does ChatGPT exhibit bias in name-based queries?
By generating stereotyped responses linked to gender or ethnicity
What strategies can mitigate bias in AI models like ChatGPT?
Preprocessing training data, diversity-aware algorithms, and continuous monitoring
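One concrete preprocessing technique is reweighing (Kamiran & Calders): each training instance is weighted so that group membership and label appear statistically independent. A minimal sketch on a made-up toy dataset:

```python
from collections import Counter

# Hypothetical (group, label) pairs; a real pipeline would take these
# columns from the actual training set.
data = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]

n = len(data)
group_counts = Counter(g for g, _ in data)
label_counts = Counter(y for _, y in data)
pair_counts = Counter(data)

# weight(g, y) = P(g) * P(y) / P(g, y): under-represented combinations
# are upweighted, over-represented ones downweighted.
weights = {
    (g, y): (group_counts[g] / n) * (label_counts[y] / n) / (pair_counts[(g, y)] / n)
    for (g, y) in pair_counts
}

for (g, y), w in sorted(weights.items()):
    print(f"group={g}, label={y}: weight={w:.2f}")
# Here ("A", 1) and ("B", 0) get weight 0.75, the rarer combinations 1.50.
```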
How did GPT-4o improve over GPT-3.5 in terms of bias?
GPT-4o significantly reduced stereotyping rates in responses.
“People judge subjectively and have numerous bias effects in their decisions.
Decisions can even be influenced by situational moods or unconscious beliefs.” (True/False)
True
“AI systems, on the other hand, are objective, fair and able to judge neutrally.” (True/False)
False, or at least not automatically true: AI systems are built by humans and learn from human-generated data, so they can inherit human biases.
What is a stereotype?
A stereotype is an oversimplified and generalized idea about a group of people, ignoring individual differences.
How does bias differ from stereotypes?
Stereotypes are the content of biased thinking, while bias is the mechanism by which preferences or prejudices are applied.
What is statistical bias?
Statistical bias refers to a systematic error that skews results due to issues like non-representative samples, flawed measurements, or omitted variables.
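A small simulation of one such systematic error, with invented numbers: a survey that only reaches younger people misestimates the population mean age, and collecting more data does not help.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: ages uniform between 18 and 90.
population = rng.uniform(18, 90, size=100_000)

# Flawed measurement: only people under 40 answer the survey.
sample = population[population < 40]

print(f"population mean age: {population.mean():.1f}")  # ~54
print(f"biased sample mean:  {sample.mean():.1f}")      # ~29
# Unlike random noise, this gap is systematic: repeating the survey
# with a bigger sample reproduces the same error.
```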
What is unconscious bias?
Unconscious bias is an implicit, learned belief about others that operates outside conscious awareness.
How can unconscious bias affect decisions?
Unconscious biases influence actions and decisions, often in ways incompatible with conscious values.
What factors can amplify unconscious bias?
Multi-tasking and working under time pressure can increase the impact of unconscious biases.
What is biased AI?
Biased AI refers to systems that unfairly favour certain outcomes or groups due to factors like biased data or algorithms.
How does Mehrabi et al. (2022) define fairness?
Fairness is “the absence of any prejudice or favouritism toward an individual or group based on their inherent or acquired characteristics.”
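One common way to operationalize this definition is a group fairness metric such as demographic parity: favourable outcomes should not depend on group membership. A minimal check with invented predictions; a gap of 0 would indicate no group-level favouritism.

```python
import numpy as np

# Hypothetical model outputs: 1 = favourable decision.
preds  = np.array([1, 0, 1, 1, 0, 1, 0, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

# Favourable-outcome rate per group.
rates = {str(g): float(preds[groups == g].mean()) for g in np.unique(groups)}
gap = abs(rates["A"] - rates["B"])

print(rates)                                  # {'A': 0.75, 'B': 0.25}
print(f"demographic parity gap: {gap:.2f}")   # 0.50
```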
What is historical societal bias in AI?
Historical societal bias occurs when AI reflects stereotypes from historical data, such as gendered CEO predictions.
What is labelling bias?
Labelling bias happens when annotator prejudices or cultural influences affect how training data is labelled, leading to skewed AI results.
What is sampling bias?
Sampling bias arises when training data does not represent the real-world population, such as a medical AI trained on younger patients being inaccurate for older ones.
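A sketch of that medical example with invented numbers: a linear model fit only on patients under 45 looks fine in its training range but extrapolates poorly to older patients.

```python
import numpy as np

rng = np.random.default_rng(1)

def true_risk(age):
    # Hypothetical ground truth: risk grows non-linearly with age.
    return 0.002 * age ** 2

ages = rng.uniform(20, 90, size=2_000)
risk = true_risk(ages) + rng.normal(0, 0.5, size=ages.size)

# Sampling bias: the training set contains only patients under 45.
young = ages < 45
slope, intercept = np.polyfit(ages[young], risk[young], deg=1)

for age in (30, 80):
    pred = slope * age + intercept
    print(f"age {age}: predicted {pred:5.2f}, true {true_risk(age):5.2f}")
# Roughly accurate at 30, but badly underestimates risk at 80.
```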
What is data aggregation bias?
Data aggregation bias occurs when models assume a “one-size-fits-all” approach, failing to account for subgroup differences.
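A small invented illustration: an anomaly threshold tuned on pooled data fits the majority subgroup but misfires for a minority subgroup whose normal range differs.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical biomarker with different healthy ranges per subgroup:
# subgroup X (90% of the data) centres on 5.0, subgroup Y on 8.0.
x = rng.normal(5.0, 1.0, size=900)
y = rng.normal(8.0, 1.0, size=100)
pooled = np.concatenate([x, y])

# One-size-fits-all rule: flag values far from the pooled mean.
mu, sigma = pooled.mean(), pooled.std()

fp_x = np.mean(np.abs(x - mu) > 2 * sigma)
fp_y = np.mean(np.abs(y - mu) > 2 * sigma)
print(f"false-positive rate, subgroup X: {fp_x:.1%}")  # ~1%
print(f"false-positive rate, subgroup Y: {fp_y:.1%}")  # ~50%
# Healthy members of the minority subgroup are flagged far more often.
```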
How does design bias manifest in AI?
Design bias emerges from design choices like feminized digital assistants that reinforce stereotypes.
What is interaction bias?
Interaction bias arises when user interactions influence AI to adopt and reinforce societal stereotypes.
What are recommended strategies to keep human bias out of AI?
- Awareness of biased AI and an understanding of its potential causes
- More diverse and interdisciplinary teams developing AI
- Integration of technical bias detection strategies in AI development
- Bias impact statements as a self-regulatory practice of AI creators/companies
- Ethical governance standards
How can diversity in development teams mitigate bias?
Diverse teams bring varied perspectives, helping identify and reduce bias during AI development.
What is the purpose of bias detection strategies in AI?
Bias detection strategies identify and address potential sources of bias in data, algorithms, and interactions.
What is a bias impact statement?
A bias impact statement is a self-regulatory practice where AI developers assess and document the potential impacts of bias in their systems.
Why are ethical governance standards important?
Ethical governance ensures that AI development and deployment prioritize fairness, accountability, and non-discrimination.
What is the focus of the “Gender Shades” study?
It evaluates intersectional disparities in the accuracy of commercial gender classification systems.
What is intersectionality?
The analysis of overlapping social categories, such as gender and race, and their combined effects on outcomes.
What is the Pilot Parliaments Benchmark (PPB)?
A dataset designed to balance representation of gender and skin tone for benchmarking algorithms.
How are darker-skinned females affected by gender classification systems?
They are the most misclassified group, with error rates of up to 34.7%.
What are “error disparities” in the context of AI?
Differences in algorithmic performance accuracy across demographic subgroups.
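In code, auditing error disparities amounts to reporting per-subgroup error rates instead of a single overall score. A sketch with invented labels and predictions ("LM" = lighter-skinned males, "DF" = darker-skinned females; all values hypothetical):

```python
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
group  = np.array(["LM", "LM", "LM", "LM", "LM",
                   "DF", "DF", "DF", "DF", "DF"])

print(f"overall error: {np.mean(y_true != y_pred):.0%}")  # 50%
for g in np.unique(group):
    m = group == g
    print(f"{g}: error rate {np.mean(y_true[m] != y_pred[m]):.0%}")
# DF: 100%, LM: 0% - the overall figure hides that one subgroup
# absorbs all the mistakes, the pattern Gender Shades exposed.
```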
What role does phenotypic representation play in dataset design?
It ensures datasets include a diverse range of observable traits, such as skin tone, to reduce bias.
Why is algorithmic transparency important?
It provides insights into performance metrics across demographics, fostering trust and accountability.
What is algorithmic accountability?
Actions taken to identify and reduce bias in AI systems, ensuring fairness and equity.
How did the PPB (Pilot Parliaments Benchmark) improve upon existing datasets?
By offering balanced representation of lighter and darker skin tones and genders.
What is the Fitzpatrick skin type classification?
A six-point scale used to categorize skin tone, commonly employed in dermatology and AI phenotypic studies.
Why are intersectional evaluations crucial for AI fairness?
They identify disparities that are not apparent when analyzing single demographic categories independently.
Which commercial systems were evaluated in “Gender Shades”?
Gender classifiers from Microsoft, IBM, and Face++.
What were the key findings about lighter-skinned males in gender classification?
They had the lowest error rates, with some classifiers achieving near-perfect accuracy.
How do default camera settings affect skin tone representation in datasets?
Cameras are often calibrated for lighter skin tones, leading to poorly exposed images of darker-skinned individuals.
What steps can improve fairness in gender classification systems?
Using inclusive datasets, conducting intersectional evaluations, and increasing algorithmic transparency and accountability.
What are the broader implications of bias in facial recognition and classification?
Biased systems can perpetuate discrimination, especially in high-stakes domains like law enforcement and healthcare.
What are Socially Assistive Robots (SARs)?
Robots designed to support users through social interaction, often used in rehabilitation or therapeutic contexts.
What are the three components of Basic Psychological Needs (BPN)?
- Autonomy (control over actions)
- Competence (mastery of tasks)
- Relatedness (social connection)
How do gender stereotypes affect the design of assistive robots?
They often default to female-gendered designs, reinforcing the traditional association of caregiving roles with women.
What is the significance of norm-breaking robots?
These robots challenge gender stereotypes and promote diversity in perceptions of caregiving roles.
How does the gender of a robot impact men’s psychological need satisfaction?
Male-gendered robots are more likely than female-gendered robots to satisfy men’s needs for competence and relatedness.
What is Intention to Use (ITU)?
It measures a user’s willingness to adopt and engage with a robot.
Why is addressing gender stereotypes important in Socially Assistive Robots (SAR) design?
To ensure ethical alignment, foster inclusivity, and improve acceptance across diverse user groups.
What findings did the study reveal about caregiver gender preferences?
Exposure to male-gendered robots increased men’s likelihood of choosing male or gender-neutral human caregivers.
How does the self-determination theory relate to Socially Assistive Robots (SAR) design?
It emphasizes fulfilling autonomy, competence, and relatedness to enhance user motivation and well-being.
What role does anthropomorphism play in the perception of Socially Assistive Robots (SARs)?
Users often attribute human-like traits to SARs, influencing their acceptance and effectiveness.
How can Socially Assistive Robots (SARs) contribute to reducing gender stereotypes?
By presenting non-traditional gender roles, such as male caregivers, SARs can shift societal perceptions and expectations.
What ethical considerations arise in the gendering of robots?
Designers must avoid perpetuating harmful stereotypes and instead promote equity and inclusivity.
What challenges exist in designing non-binary or gender-neutral robots?
Limited established practices and user biases toward binary gender attributions.
How do cultural differences impact the perception of gendered robots?
Perceptions vary based on societal norms, making it essential to consider cultural context in robot design.
What is the potential benefit of male-gendered robots in healthcare?
They may positively influence men’s engagement, challenge stereotypes, and improve perceptions of caregiving roles.
What is selective attention?
Selective attention is the ability to focus on certain details while unconsciously ignoring others.
Why can selective attention be problematic?
It can create blind spots, causing us to miss significant details or changes.
What does the “Whodunnit?” video illustrate?
It shows how distractions can lead us to overlook major changes in our environment.
What do humans often overestimate about their perception?
Humans overestimate their ability to see the “whole picture.”
How do cognitive biases affect awareness?
Cognitive biases unconsciously shape how we perceive and interpret reality.
How does AI inherit human bias?
AI systems inherit biases from their creators and training data.
What can happen if an AI model is trained on biased data?
It may exclude minority perspectives or misinterpret crucial contexts.
What are key strategies to mitigate AI bias?
Awareness, diverse datasets, and rigorous testing.
What stereotype does the surgeon’s riddle reveal?
The unconscious association of professional roles with men.
How can language reduce bias in the riddle?
Using gender-neutral terms like “child” instead of “son” reduces bias.
What biases are common in AI image generators?
Stereotypes in training data and limited representation of global cultures.
What example highlights bias in AI-generated images?
Wedding attire prompts producing Western, female-centric images.
What is the purpose of the Implicit Association Test (IAT)?
To measure unconscious associations between concepts.
How does the Implicit Association Test (IAT) work?
By comparing response times when pairing related or unrelated concepts.
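A simplified sketch of the computation behind the IAT’s D-score: the mean response-time difference between pairing conditions, divided by the pooled standard deviation (all timings invented).

```python
import numpy as np

# Hypothetical response times in milliseconds.
congruent   = np.array([620, 650, 600, 640, 610.0])  # stereotype-consistent pairing
incongruent = np.array([780, 820, 750, 800, 770.0])  # stereotype-inconsistent pairing

pooled_sd = np.concatenate([congruent, incongruent]).std(ddof=1)
d_score = (incongruent.mean() - congruent.mean()) / pooled_sd

print(f"D-score: {d_score:.2f}")
# Larger positive values = slower responses when the pairing cuts
# against the learned association, read as a stronger implicit bias.
```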
What is a criticism of the Implicit Association Test (IAT)?
It may lack predictive validity and reliability.
What cognitive function do stereotypes serve?
Stereotypes act as heuristics to simplify complex environments.
How are stereotypes learned?
Through family, peers, experiences, media, and culture.
What role do power dynamics play in stereotypes?
They justify hierarchies and reinforce dominant group cohesion.
Why did the soap dispenser fail for darker skin tones?
Infrared sensors reflected less light from darker skin, making detection difficult.
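A sketch of that failure mode with entirely invented reflectance values: a fixed trigger threshold calibrated only on higher-reflectance (lighter) skin never fires for lower-reflectance (darker) skin.

```python
# Invented IR reflectance values (fraction of emitted light returned).
REFLECTANCE = {"lighter skin": 0.60, "darker skin": 0.25}

# Threshold chosen during testing on lighter-skinned hands only.
DETECTION_THRESHOLD = 0.40

def hand_detected(reflectance: float) -> bool:
    # The sensor fires only if enough infrared light bounces back.
    return reflectance >= DETECTION_THRESHOLD

for skin, r in REFLECTANCE.items():
    print(f"{skin}: dispense soap = {hand_detected(r)}")
# lighter skin -> True, darker skin -> False: testing across the full
# reflectance range would have caught this before shipping.
```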
What design flaw led to the soap dispenser failure?
The device was likely tested only on lighter-skinned individuals.
What lesson does the soap dispenser example teach?
Inclusive design and diverse testing are essential for equitable technology.
What is a stereotype?
An oversimplified belief that ignores individual differences
Stereotypes are always negative (True/False)
False
Stereotypes are the content of biased thinking, while biases are the mechanisms through which prejudices are applied (True/False)
True
What is one major cause of bias in AI systems?
Human prejudices in training data
One type of latent bias is historical/societal bias, which is defined as …
… bias that originates from past societal norms reflected in training data.
Write down another type of AI bias
Interaction bias, sampling bias, design bias, labelling bias
According to the Gender Shades study, the demographic group with the most inaccuracies in facial recognition systems was:
Darker-skinned females
What types of biases were most likely responsible for higher error rates for darker-skinned women?
Sampling bias and algorithmic bias
What stereotype is reinforced by AI voice assistants designed with female voices?
Women as compliant and service-oriented