Domain 2 Flashcards

Understanding AI Impacts on People and Responsible AI Principles

1
Q

What are the OECD AI Principles?

A
  1. Inclusive growth, sustainable development and well-being
  2. Human-centred values and fairness
  3. Transparency and explainability
  4. Robustness, security and safety
  5. Accountability
2
Q

Give examples of documented AI ethical considerations.

A
  1. CNIL
  2. OECD AI Principles (2019)
  3. 2022: White House Office of Science and Technology Policy Blueprint for an AI Bill of Rights
  4. UNESCO Principles
  5. Asilomar AI Principles
  6. CNIL AI Action Plan
  7. IEEE Initiative on Ethics of Autonomous and Intelligent Systems
3
Q

Common principles

A
  1. Data minimization or collection limitation
  2. Use limitation
  3. Safeguards/security
  4. Notice/openness
  5. Access/individual participation
  6. Accountability
  7. Purpose specification
  8. Data quality and relevance
4
Q

An organization is deploying an AI system to assist with hiring decisions. During the initial testing phase, the system exhibits bias against certain demographic groups, raising ethical concerns. What should the organization do to ensure fairness and accountability in the development and use of the AI system?

Options:
A. Focus solely on improving the accuracy of the AI model through additional data.
B. Outsource the entire project to a third-party AI vendor.
C. Develop a cross-functional and demographically diverse oversight body to monitor and address issues related to bias and ethics.
D. Rely on the AI vendor’s documentation and guidelines without internal oversight.

A

C. Develop a cross-functional and demographically diverse oversight body to monitor and address issues related to bias and ethics.

5
Q

An organization plans to implement an AI-powered facial recognition system for security purposes. During initial planning, concerns are raised about privacy risks, potential misuse, and compliance with regulations. What is the most appropriate next step for the organization?

Options:
A. Proceed with deployment and address issues as they arise through user feedback.
B. Focus exclusively on technical testing to ensure the AI system functions correctly.
C. Assess whether the organization has appropriate policies and procedures in place to manage the associated risks before deployment.
D. Delegate responsibility for compliance and risk management entirely to the AI vendor.

A

C. Assess whether the organization has appropriate policies and procedures in place to manage the associated risks before deployment.

6
Q

Who may be affected by core risks and harms posed by an AI system?

A
  1. individuals
  2. groups
  3. society
  4. companies/institutions
  5. ecosystems
7
Q

Bias types

A
  1. implicit bias
  2. sampling bias
  3. temporal bias
  4. overfitting to training data
  5. edge cases and outliers
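The sampling-bias entry above can be sketched as a simple check: compare the demographic mix of a training sample against the population it is meant to represent. This is an illustrative sketch only; the group labels, function name, and tolerance threshold are hypothetical, not from the deck.

```python
# Minimal sampling-bias check (illustrative; names and thresholds are
# hypothetical): flag groups whose share of the training sample deviates
# from their expected population share by more than a tolerance.

from collections import Counter

def sampling_bias_report(sample, population_shares, tolerance=0.10):
    """Return {group: (observed_share, expected_share)} for every group
    whose observed share differs from the expected share by > tolerance."""
    counts = Counter(sample)
    total = len(sample)
    flagged = {}
    for group, expected in population_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            flagged[group] = (observed, expected)
    return flagged

# A training set drawn 80/20 from a population that is actually 50/50:
train = ["A"] * 80 + ["B"] * 20
print(sampling_bias_report(train, {"A": 0.5, "B": 0.5}))
# {'A': (0.8, 0.5), 'B': (0.2, 0.5)}
```

A report like this would typically be run before training, so an imbalanced dataset is caught before it produces the discriminatory outcomes described in the cards below.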
8
Q

What is an example of temporal bias in an AI system?

Options:
A. An AI model trained on outdated data fails to recognize new slang in a sentiment analysis task.
B. An AI system gives preference to one demographic group over another due to an imbalanced dataset.
C. An AI-powered hiring tool consistently favors male candidates for leadership roles.
D. An AI chatbot struggles to understand dialects from different regions.

A

A. An AI model trained on outdated data fails to recognize new slang in a sentiment analysis task.

9
Q

Temporal bias

A

This occurs when an AI system is based on data that becomes outdated over time, leading to decreased accuracy and relevance.
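One way to picture this definition: a sentiment model whose lexicon was frozen at training time degrades as language drifts. The lexicon, words, and scoring scheme below are hypothetical, chosen only to illustrate the effect.

```python
# Hedged sketch of temporal bias as vocabulary drift: a sentiment lexicon
# frozen at training time (contents are hypothetical) misses slang that
# appeared after the training cutoff, so coverage and reliability decay.

LEXICON_2015 = {"great": 1, "terrible": -1, "cool": 1}

def sentiment(text, lexicon):
    """Return (score, coverage): the summed word scores and the fraction
    of input words the lexicon actually recognizes."""
    words = text.lower().split()
    known = [lexicon[w] for w in words if w in lexicon]
    coverage = len(known) / len(words) if words else 0.0
    return sum(known), coverage

print(sentiment("this movie is great", LEXICON_2015))  # (1, 0.25)
print(sentiment("this movie is mid", LEXICON_2015))    # (0, 0.0)
```

The falling coverage number is the measurable symptom: the model is not wrong about the new slang so much as blind to it, matching the "decreased accuracy and relevance" in the definition above.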

10
Q

Noise is an example of what kind of bias?

A

Edge cases

11
Q

Examples of Individual harms

A
  1. bias and discrimination
  2. Privacy concerns
  3. economic opportunity and job loss
12
Q

Part 1:
An AI-powered hiring system rejects a qualified candidate for a managerial position. After an investigation, it is found that the algorithm disproportionately favors candidates from certain universities, leading to an unfair disadvantage for others. What type of individual harm does this scenario represent?

Options:
A. Discrimination in employment and hiring
B. Discrimination in credit approval
C. Discrimination in housing applications
D. Discrimination in education opportunities

A

A. Discrimination in employment and hiring

13
Q

What should the organization do to mitigate employment and hiring discrimination and ensure fairness in its hiring process?

Options:
A. Regularly audit the AI system for bias and ensure the training data is representative of diverse educational backgrounds.
B. Disable the AI system and revert to manual hiring decisions.
C. Focus solely on increasing the speed of the AI’s decision-making process.
D. Outsource the hiring process to an external recruitment agency.

A

A. Regularly audit the AI system for bias and ensure the training data is representative of diverse educational backgrounds.

14
Q

What group harms may be posed by an AI system?

A
  1. facial recognition
  2. mass surveillance
  3. civil rights
  4. deepening of racial and socio-economic divides
15
Q

Which of the following is NOT considered a group harm posed by an AI system?

Options:
A. Facial recognition leading to racial profiling
B. Mass surveillance disproportionately targeting specific communities
C. Denial of a loan due to biased credit scoring algorithms
D. Deepening of racial and socio-economic divides

A

C. Denial of a loan due to biased credit scoring algorithms

16
Q

What harm posed by AI systems can affect the democratic process?

A. individual harm
B. group harm
C. societal harm
D. ecosystem harm

A

C. societal harm

17
Q

What are examples of societal harms?

A
  1. spread of disinformation
  2. ideological bubbles / echo chambers
  3. deepfakes
  4. safety
18
Q

A lethal autonomous weapon that identifies targets to attack is an example of:

A. group harms; facial recognition
B. societal harms; safety
C. individual harms; privacy concerns
D. group harms; mass surveillance

A

B. societal harms; safety

19
Q

Which of the following is NOT considered a company or institutional harm posed by an AI system?

Options:
A. Reputational damage from biased AI decisions
B. Cultural shifts causing employee dissatisfaction due to AI implementation
C. Economic losses from AI system failures
D. Denial of social benefits to an individual due to algorithmic errors

A

D. Denial of social benefits to an individual due to algorithmic errors

20
Q

What are the ecosystem harms posed by AI systems?

A
  1. high carbon footprint (training a single large model can emit 626,000+ lbs of CO2)
  2. drain on natural resources
21
Q

What are the characteristics of a trustworthy AI system?

A
  1. operates in an expected, legal and fair manner
  2. human-centric
  3. accountability
  4. transparency