Week 5 Flashcards

1
Q

Which of the following best describes algorithmic bias?
a) AI systems are designed to intentionally discriminate against certain groups.
b) AI systems reflect and amplify existing societal inequalities through their training data.
c) AI systems produce random errors unrelated to their input data.
d) Bias occurs only in AI systems trained on text data.

A

b) AI systems reflect and amplify existing societal inequalities through their training data.

2
Q

What is the “Great AI Trade-Off” described in Chapter 7?
a) The balance between using AI’s benefits and ensuring electricity for its operations.
b) The decision to use AI extensively despite its unpredictable errors, biases, and lack of transparency.
c) Choosing between symbolic and connectionist AI approaches.
d) Whether to prioritize AI for creative versus repetitive tasks.

A

b) The decision to use AI extensively despite its unpredictable errors, biases, and lack of transparency.

3
Q

What factor was found to influence Google image search results in high-inequality nations, according to Vlasceanu and Amodio (2022)?
a) The country’s GDP.
b) Gender inequality index scores.
c) The prevalence of smartphone usage.
d) Regional preferences in AI algorithms.

A

b) Gender inequality index scores.

4
Q

What is a major risk of using biased datasets in training AI models, as highlighted in the presentation?
a) Increased computational costs during training.
b) Lower accuracy in recognizing White faces.
c) Disparities in classification accuracy for underrepresented groups.
d) The inability to perform basic predictive tasks.

A

c) Disparities in classification accuracy for underrepresented groups.
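
The disparity named in this answer can be made concrete with a short sketch (the group names and predictions below are hypothetical, not from the presentation): a model trained on skewed data can score well overall while misclassifying an underrepresented group far more often.

```python
# Illustrative sketch: computing classification accuracy per group.
# The groups and prediction records here are invented for illustration.
from collections import defaultdict

def accuracy_by_group(records):
    """records: list of (group, true_label, predicted_label)."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, truth, pred in records:
        total[group] += 1
        if truth == pred:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Hypothetical outputs of a model trained mostly on "majority" examples:
records = [
    ("majority", 1, 1), ("majority", 0, 0), ("majority", 1, 1), ("majority", 0, 0),
    ("minority", 1, 0), ("minority", 0, 0), ("minority", 1, 1), ("minority", 1, 0),
]
print(accuracy_by_group(records))
# {'majority': 1.0, 'minority': 0.5} — equal-sized groups, unequal accuracy
```

Reporting accuracy per group, rather than one aggregate number, is what exposes this kind of disparity before deployment.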

5
Q

Which of the following ethical concerns is associated with facial recognition systems?
a) Improving public safety.
b) Reducing human oversight in decision-making.
c) Misidentifying individuals, especially among marginalized groups.
d) Increasing computational efficiency.

A

c) Misidentifying individuals, especially among marginalized groups.

6
Q

How does societal inequality influence algorithmic outputs, and how do these outputs, in turn, affect user behavior?

A

Societal inequality impacts the training data used in AI, leading to biased outputs that reflect existing disparities. These biased outputs reinforce stereotypes, shaping user decisions and behaviors, thereby perpetuating a cycle of inequality.
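
The cycle described in this answer can be sketched as a toy simulation (the amplification factor and starting share are assumptions for illustration, not figures from the chapter): if a system's outputs over-represent the already-dominant group, and those outputs feed back into future training data, the skew compounds.

```python
# Toy sketch of the inequality feedback loop: biased outputs shape
# user behavior, which shapes the next round of training data.
# The 10% amplification per round is a hypothetical parameter.
def feedback_loop(initial_share, amplification=1.1, rounds=5):
    """Track one group's share of the data across retraining rounds."""
    share = initial_share
    history = [share]
    for _ in range(rounds):
        share = min(1.0, share * amplification)  # outputs skew data further
        history.append(share)
    return history

print(feedback_loop(0.6))
# the dominant group's share grows every round until it saturates at 1.0
```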

7
Q

Describe the role of annotators in creating bias in AI models. How can their influence be mitigated?

A

Annotators influence AI models through their biases when labeling data. For instance, prejudiced annotators might reinforce stereotypes in datasets, leading to biased models. To mitigate this, datasets should use diverse annotators, provide clear labeling guidelines, and evaluate models for biases before deployment.
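
One common aggregation step behind the "diverse annotators" mitigation can be sketched as a majority vote over independent labels (the annotator labels below are hypothetical), so that no single annotator's bias decides the final label:

```python
# Sketch: aggregating labels from several annotators by majority vote,
# one simple way to dilute an individual annotator's bias.
from collections import Counter

def majority_label(annotator_labels):
    """Return the most common label among the annotators."""
    return Counter(annotator_labels).most_common(1)[0][0]

# Three annotators; the single outlier does not determine the label.
print(majority_label(["positive", "positive", "negative"]))  # positive
```

Majority voting is only a partial fix: if the annotator pool shares the same bias, the vote reproduces it, which is why diverse pools and clear guidelines matter first.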

8
Q

Explain the “value alignment problem” in AI and its relevance to machine morality.

A

The value alignment problem is the challenge of ensuring that AI systems act according to human ethical standards. It is central to machine morality: misaligned values can produce decisions that conflict with societal norms, as when a self-driving car must decide whom to prioritize in an accident.

9
Q

What are the main trade-offs between embracing AI’s benefits and addressing its limitations?

A

AI offers immense benefits like efficiency, creativity, and life-saving applications. However, its unpredictable errors, susceptibility to bias, hacking vulnerabilities, and lack of transparency pose significant risks. Balancing these aspects requires careful regulation and responsible AI development.

10
Q

True or False: AI models trained on racially diverse datasets are less likely to exhibit bias in facial recognition.

A

True

11
Q

True or False: The trolley problem illustrates how self-driving cars can make morally neutral decisions in all scenarios.

A

False

12
Q

True or False: Societal biases influence AI systems through the data and decisions used in their training.

A

True

13
Q

True or False: Algorithmic bias is entirely eliminated by using large datasets.

A

False

14
Q

True or False: Gender bias in AI image search results correlates with the level of gender inequality in a country.

A

True
