Stereotypes, Biases, Fairness Flashcards

1
Q

What causes AI systems to exhibit bias?

A

Bias in training data and algorithms influenced by societal and historical biases

2
Q

How do gender stereotypes manifest in NLP?

A

Word embeddings associate men with professions like “doctor” and women with “nurse,” reinforcing stereotypes

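The doctor/nurse association can be made concrete with a minimal sketch of the "gender direction" style of embedding analysis. The 3-d vectors below are invented for illustration (real analyses use pretrained embeddings such as word2vec); only the method — projecting profession words onto a he−she axis — mirrors the published approach.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 3-d "embeddings" (illustrative values, not from a real model)
vectors = {
    "man":    np.array([ 1.0, 0.2, 0.1]),
    "woman":  np.array([-1.0, 0.2, 0.1]),
    "doctor": np.array([ 0.6, 0.8, 0.3]),
    "nurse":  np.array([-0.6, 0.8, 0.3]),
}

# A "gender direction": the difference between gendered word pairs
gender_dir = vectors["man"] - vectors["woman"]

for word in ("doctor", "nurse"):
    proj = cosine(vectors[word], gender_dir)
    print(f"{word}: projection on gender direction = {proj:+.2f}")
```

With these toy values, "doctor" projects onto the male side of the axis and "nurse" onto the female side — the pattern the card describes.
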
3
Q

What was the key finding of the 2018 facial recognition study by Joy Buolamwini?

A

Facial recognition systems were less accurate for women and individuals with darker skin tones

4
Q

Why is unbiased data crucial for AI systems?

A

It ensures fair and accurate outcomes, preventing discrimination

5
Q

What is “algorithmic accountability”?

A

A framework to ensure transparency and fairness in AI decision-making

6
Q

How does AI amplify existing biases?

A

By learning and perpetuating prejudices from biased data

7
Q

Why is diversity in AI development teams important?

A

Diverse perspectives help identify and reduce biases during system design

8
Q

What did ProPublica’s investigation reveal about “risk assessment” algorithms?

A

They falsely flagged Black defendants as future criminals at nearly twice the rate of white defendants

9
Q

What is interaction bias?

A

Bias introduced during user interactions with AI systems

10
Q

Give an example of latent bias in AI

A

Algorithms associating physicists predominantly with male images due to historical data

11
Q

What causes selection bias in AI?

A

Non-representative or skewed training datasets

12
Q

Why is addressing AI bias a complex issue?

A

Bias can enter through multiple stages, from data collection to algorithm design

13
Q

What are the three types of bias in AI?

A
  • Interaction Bias: Bias introduced during user interaction with AI systems.
  • Latent Bias: Bias stemming from historical stereotypes in training data.
  • Selection Bias: Bias caused by non-representative training datasets.
14
Q

What is the Gender Shades project?

A

A study evaluating accuracy disparities in facial recognition systems across gender and skin tone

15
Q

Which group faced the highest error rates in Gender Shades findings?

A

Darker-skinned females

16
Q

Why is inclusive testing important for AI systems?

A

To ensure fairness and accuracy across all demographic groups

17
Q

What action did IBM take following the Gender Shades findings?

A

IBM updated its Watson Visual Recognition API to address biases

18
Q

What is “first-person fairness”?

A

Ensuring an AI's responses to a user are not biased by that user's personal information, such as their name

19
Q

How does ChatGPT exhibit bias in name-based queries?

A

By generating stereotyped responses linked to gender or ethnicity

20
Q

What strategies can mitigate bias in AI models like ChatGPT?

A

Preprocessing training data, diversity-aware algorithms, and continuous monitoring

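One concrete form of the "continuous monitoring" strategy is tracking a fairness metric over model outputs. The sketch below computes a simple demographic parity gap; the decisions and group labels are made-up illustrative data, and the function name is ours, not a standard API.

```python
# Minimal sketch of one monitoring check: demographic parity.
def demographic_parity_gap(decisions, groups):
    """Largest difference in positive-decision rate between any two groups."""
    by_group = {}
    for d, g in zip(decisions, groups):
        by_group.setdefault(g, []).append(d)
    rates = {g: sum(v) / len(v) for g, v in by_group.items()}
    return max(rates.values()) - min(rates.values()), rates

decisions = [1, 1, 0, 1, 0, 0, 1, 0]   # 1 = favourable outcome
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap, rates = demographic_parity_gap(decisions, groups)
print(rates)  # {'a': 0.75, 'b': 0.25}
print(gap)    # 0.5
```

A gap persistently above some tolerance would trigger review of the data or model — libraries such as Fairlearn provide production-grade versions of this metric.
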
21
Q

How did GPT-4o improve over GPT-3.5 in terms of bias?

A

GPT-4o significantly reduced stereotyping rates in responses.

22
Q

“People judge subjectively and have numerous bias effects in their decisions.
Decisions can even be influenced by situative moods or unconscious beliefs.” (True/False)

A

True

23
Q

“AI systems, on the other hand, are objective, fair and able to judge neutrally.” (True/False)

A

That’s not (automatically) true, since AI is made by us and learns from us.

24
Q

What is a stereotype?

A

A stereotype is an oversimplified and generalized idea about a group of people, ignoring individual differences.

25
Q

How does bias differ from stereotypes?

A

Stereotypes are the content of biased thinking, while bias is the mechanism by which preferences or prejudices are applied.

26
Q

What is statistical bias?

A

Statistical bias refers to a systematic error that skews results due to issues like non-representative samples, flawed measurements, or omitted variables.

27
Q

What is unconscious bias?

A

Unconscious bias is an implicit, learned belief about others that operates outside conscious awareness.

28
Q

How can unconscious bias affect decisions?

A

Unconscious biases influence actions and decisions, often in ways incompatible with conscious values.

29
Q

What factors can amplify unconscious bias?

A

Multi-tasking and working under time pressure can increase the impact of unconscious biases.

30
Q

What is biased AI?

A

Biased AI refers to systems that unfairly favour certain outcomes or groups due to factors like biased data or algorithms.

31
Q

How does Mehrabi et al. (2022) define fairness?

A

Fairness is “the absence of any prejudice or favouritism toward an individual or group based on their inherent or acquired characteristics.”

32
Q

What is historical societal bias in AI?

A

Historical societal bias occurs when AI reflects stereotypes from historical data, such as gendered CEO predictions.

33
Q

What is labelling bias?

A

Labelling bias happens when annotator prejudices or cultural influences affect how training data is labelled, leading to skewed AI results.

34
Q

What is sampling bias?

A

Sampling bias arises when training data does not represent the real-world population, such as a medical AI trained on younger patients being inaccurate for older ones.

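The younger-patients example can be shown numerically: estimating a quantity from an age-skewed sample gives a systematically wrong answer for the full population. The data below is synthetic and purely illustrative — an outcome that rises with age, plus noise.

```python
import random

random.seed(0)  # reproducible noise

# Synthetic population: ages 20-90, outcome rises with age plus noise
population = [(age, age * 0.5 + random.gauss(0, 5)) for age in range(20, 91)]

# Biased sample: only patients under 40, like training data skewed young
young_only = [(a, y) for a, y in population if a < 40]

def mean_outcome(data):
    return sum(y for _, y in data) / len(data)

print("mean outcome, full population:", round(mean_outcome(population), 1))
print("mean outcome, young-only sample:", round(mean_outcome(young_only), 1))
```

The young-only estimate falls well below the population value, so a model calibrated on it would under-predict for older patients — the failure mode the card describes.
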
35
Q

What is data aggregation bias?

A

Data aggregation bias occurs when models assume a “one-size-fits-all” approach, failing to account for subgroup differences.

36
Q

How does design bias manifest in AI?

A

Design bias emerges from design choices like feminized digital assistants that reinforce stereotypes.

37
Q

What is interaction bias?

A

Interaction bias arises when user interactions influence AI to adopt and reinforce societal stereotypes.

38
Q

What are recommended strategies to keep human bias out of AI?

A
  • Awareness of biased AI and an understanding of its various potential causes
  • More diverse and interdisciplinary teams developing AI
  • Integration of technical bias detection strategies in AI development
  • Bias impact statements as a self-regulatory practice of AI creators/companies
  • Ethical governance standards
39
Q

How can diversity in development teams mitigate bias?

A

Diverse teams bring varied perspectives, helping identify and reduce bias during AI development.

40
Q

What is the purpose of bias detection strategies in AI?

A

Bias detection strategies identify and address potential sources of bias in data, algorithms, and interactions.

41
Q

What is a bias impact statement?

A

A bias impact statement is a self-regulatory practice where AI developers assess and document the potential impacts of bias in their systems.

42
Q

Why are ethical governance standards important?

A

Ethical governance ensures that AI development and deployment prioritize fairness, accountability, and non-discrimination.

43
Q

What is the focus of the “Gender Shades” study?

A

It evaluates intersectional disparities in the accuracy of commercial gender classification systems.

44
Q

What is intersectionality?

A

The analysis of overlapping social categories, such as gender and race, and their combined effects on outcomes.

45
Q

What is the Pilot Parliaments Benchmark (PPB)?

A

A dataset designed to balance representation of gender and skin tone for benchmarking algorithms.

46
Q

How are darker-skinned females affected by gender classification systems?

A

They are the most misclassified group, with error rates reaching up to 34.7%.

47
Q

What are “error disparities” in the context of AI?

A

Differences in algorithmic performance accuracy across demographic subgroups.
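
Measuring an error disparity amounts to computing error rates per subgroup rather than one overall accuracy, which is the core computation behind findings like Gender Shades. The records below are invented for illustration.

```python
# (subgroup, true_label, predicted_label) — made-up illustrative data
records = [
    ("lighter_male",  "male",   "male"),
    ("lighter_male",  "male",   "male"),
    ("darker_female", "female", "male"),
    ("darker_female", "female", "female"),
    ("darker_female", "female", "male"),
]

errors, totals = {}, {}
for group, truth, pred in records:
    totals[group] = totals.get(group, 0) + 1
    errors[group] = errors.get(group, 0) + (truth != pred)

for group in totals:
    print(f"{group}: error rate {errors[group] / totals[group]:.0%}")
```

Here overall accuracy is 60%, but it splits into 0% error for one subgroup and 67% for the other — a disparity that a single aggregate number hides.
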

48
Q

What role does phenotypic representation play in dataset design?

A

It ensures datasets include a diverse range of observable traits, such as skin tone, to reduce bias.

49
Q

Why is algorithmic transparency important?

A

It provides insights into performance metrics across demographics, fostering trust and accountability.

50
Q

What is algorithmic accountability?

A

Actions taken to identify and reduce bias in AI systems, ensuring fairness and equity.

51
Q

How did the PPB (Pilot Parliaments Benchmark) improve upon existing datasets?

A

By offering balanced representation of lighter and darker skin tones and genders.

52
Q

What is the Fitzpatrick skin type classification?

A

A six-point scale used to categorize skin tone, commonly employed in dermatology and AI phenotypic studies.

53
Q

Why are intersectional evaluations crucial for AI fairness?

A

They identify disparities that are not apparent when analyzing single demographic categories independently.

54
Q

Which commercial systems were evaluated in “Gender Shades”?

A

Gender classifiers from Microsoft, IBM, and Face++.

55
Q

What were the key findings about lighter-skinned males in gender classification?

A

They had the lowest error rates, with some classifiers achieving near-perfect accuracy.

56
Q

How do default camera settings affect skin tone representation in datasets?

A

Cameras are often calibrated for lighter skin tones, leading to poorly exposed images of darker-skinned individuals.

57
Q

What steps can improve fairness in gender classification systems?

A

Using inclusive datasets, conducting intersectional evaluations, and increasing algorithmic transparency and accountability.

58
Q

What are the broader implications of bias in facial recognition and classification?

A

Biased systems can perpetuate discrimination, especially in high-stakes domains like law enforcement and healthcare.

59
Q

What are Socially Assistive Robots (SARs)?

A

Robots designed to support users through social interaction, often used in rehabilitation or therapeutic contexts.

60
Q

What are the three components of Basic Psychological Needs (BPN)?

A
  • Autonomy (control over actions)
  • Competence (mastery of tasks)
  • Relatedness (social connection)
61
Q

How do gender stereotypes affect the design of assistive robots?

A

They often default to female-gendered designs, reinforcing the traditional association of caregiving roles with women.

62
Q

What is the significance of norm-breaking robots?

A

These robots challenge gender stereotypes and promote diversity in perceptions of caregiving roles.

63
Q

How does the gender of a robot impact men’s psychological need satisfaction?

A

Male-gendered robots are more likely to satisfy men’s needs for competence and relatedness, compared to female-gendered robots.

64
Q

What is Intention to Use (ITU) in the context of Socially Assistive Robots (SARs)?

A

It measures a user’s willingness to adopt and engage with a robot.

65
Q

Why is addressing gender stereotypes important in Socially Assistive Robots (SAR) design?

A

To ensure ethical alignment, foster inclusivity, and improve acceptance across diverse user groups.

66
Q

What findings did the study reveal about caregiver gender preferences?

A

Exposure to male-gendered robots increased men’s likelihood of choosing male or gender-neutral human caregivers.

67
Q

How does the self-determination theory relate to Socially Assistive Robots (SAR) design?

A

It emphasizes fulfilling autonomy, competence, and relatedness to enhance user motivation and well-being.

68
Q

What role does anthropomorphism play in the perception of Socially Assistive Robots (SARs)?

A

Users often attribute human-like traits to SARs, influencing their acceptance and effectiveness.

69
Q

How can Socially Assistive Robots (SARs) contribute to reducing gender stereotypes?

A

By presenting non-traditional gender roles, such as male caregivers, SARs can shift societal perceptions and expectations.

70
Q

What ethical considerations arise in the gendering of robots?

A

Designers must avoid perpetuating harmful stereotypes and instead promote equity and inclusivity.

71
Q

What challenges exist in designing non-binary or gender-neutral robots?

A

Limited established practices and user biases toward binary gender attributions.

72
Q

How do cultural differences impact the perception of gendered robots?

A

Perceptions vary based on societal norms, making it essential to consider cultural context in robot design.

73
Q

What is the potential benefit of male-gendered robots in healthcare?

A

They may positively influence men’s engagement, challenge stereotypes, and improve perceptions of caregiving roles.

74
Q

What is selective attention?

A

Selective attention is the ability to focus on certain details while unconsciously ignoring others.

75
Q

Why can selective attention be problematic?

A

It can create blind spots, causing us to miss significant details or changes.

76
Q

What does the “Whodunnit?” video illustrate?

A

It shows how distractions can lead us to overlook major changes in our environment.

77
Q

What do humans often overestimate about their perception?

A

Humans overestimate their ability to see the “whole picture.”

78
Q

How do cognitive biases affect awareness?

A

Cognitive biases unconsciously shape how we perceive and interpret reality.

79
Q

How does AI inherit human bias?

A

AI systems inherit biases from their creators and training data.

80
Q

What can happen if an AI model is trained on biased data?

A

It may exclude minority perspectives or misinterpret crucial contexts.

81
Q

What are key strategies to mitigate AI bias?

A

Awareness, diverse datasets, and rigorous testing.

82
Q

What stereotype does the surgeon’s riddle reveal?

A

The unconscious association of professional roles with men.

83
Q

How can language reduce bias in the riddle?

A

Using gender-neutral terms like “child” instead of “son” reduces bias.

84
Q

What biases are common in AI image generators?

A

Stereotypes in training data and limited representation of global cultures.

85
Q

What example highlights bias in AI-generated images?

A

Wedding attire prompts producing Western, female-centric images.

86
Q

What is the purpose of the Implicit Association Test (IAT)?

A

To measure unconscious associations between concepts.

87
Q

How does the Implicit Association Test (IAT) work?

A

By comparing response times when pairing related or unrelated concepts.
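
The comparison at the heart of the IAT can be sketched in a few lines. The latencies below are invented; real IAT scoring also normalizes the difference by response-time variability (the D-score), which this sketch omits.

```python
# Invented response latencies (milliseconds) for two pairing blocks
congruent_ms   = [620, 580, 650, 600]   # stereotype-consistent pairings
incongruent_ms = [780, 810, 740, 790]   # stereotype-inconsistent pairings

def mean(xs):
    return sum(xs) / len(xs)

# Faster responses on congruent pairings (positive difference) are
# interpreted by the IAT as evidence of an implicit association.
diff = mean(incongruent_ms) - mean(congruent_ms)
print(f"mean latency difference: {diff:.0f} ms")
```
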

88
Q

What is a criticism of the Implicit Association Test (IAT)?

A

It may lack predictive validity and reliability.

89
Q

What cognitive function do stereotypes serve?

A

Stereotypes act as heuristics to simplify complex environments.

90
Q

How are stereotypes learned?

A

Through family, peers, experiences, media, and culture.

91
Q

What role do power dynamics play in stereotypes?

A

They justify hierarchies and reinforce dominant group cohesion.

92
Q

Why did the soap dispenser fail for darker skin tones?

A

Infrared sensors reflected less light from darker skin, making detection difficult.
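
A toy model of that sensor makes the failure mode explicit: a fixed detection threshold, calibrated only on high-reflectance hands, silently excludes lower-reflectance ones. All values below are illustrative, not measured.

```python
THRESHOLD = 0.4   # detection threshold, chosen during (non-diverse) testing
EMITTED = 1.0     # normalized emitted IR intensity

def hand_detected(skin_reflectance):
    # Sensor fires only if enough emitted IR bounces back
    return EMITTED * skin_reflectance > THRESHOLD

print(hand_detected(0.7))  # higher IR reflectance -> True
print(hand_detected(0.3))  # lower IR reflectance  -> False
```
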

93
Q

Soap dispenser failure: what design flaw led to this issue?

A

The device was likely tested only on lighter-skinned individuals.

94
Q

What lesson does the soap dispenser example teach?

A

Inclusive design and diverse testing are essential for equitable technology.

95
Q

What is a stereotype?

A

An oversimplified belief that ignores individual differences

96
Q

Stereotypes are always negative (True/False)

A

False

97
Q

Stereotypes are the content of biased thinking, while biases are the mechanisms through which prejudices are applied (True/False)

A

True

98
Q

What is one major cause of bias in AI systems?

A

Human prejudices in training data

99
Q

One type of latent bias is historical/societal bias, which is defined as …

A

… bias that originates from past societal norms reflected in training data.

100
Q

Write down another type of AI bias

A

E.g., interaction bias, sampling bias, design bias, or labelling bias

101
Q

According to the Gender Shades study, the demographic group with the most inaccuracies in facial recognition systems was:

A

darker-skinned females

102
Q

What types of biases were most likely responsible for higher error rates in darker skinned women?

A

sampling bias, algorithmic bias

103
Q

What stereotype is reinforced by AI voice assistants designed with female voices?

A

Women as compliant and service-oriented
