AI Ethics & Guidelines Flashcards

1
Q

What is the purpose of UNESCO’s Global AI Ethics and Governance Observatory?

A

The Observatory provides resources for policymakers, regulators, academics, and civil society to address ethical challenges in AI. It showcases countries’ readiness for ethical AI adoption and hosts the AI Ethics and Governance Lab to share research and practices.

2
Q

What are the four core values in UNESCO’s Recommendation on AI Ethics?

A

The four values are:
  • Respect for human rights and dignity.
  • Fostering peaceful, just, and interconnected societies.
  • Ensuring diversity and inclusiveness.
  • Promoting environmental and ecosystem flourishing.

3
Q

What is the Readiness Assessment Methodology (RAM)?

A

RAM is a tool designed to help countries assess their preparedness to implement the Recommendation on AI Ethics and to tailor UNESCO’s capacity-building support accordingly.

4
Q

Explain the Ethical Impact Assessment (EIA) process.

A

EIA is a structured process that allows AI project teams to evaluate the societal impacts of AI systems in collaboration with affected communities. It identifies potential harms and suggests prevention strategies.

5
Q

Why is gender equality emphasized in UNESCO’s AI ethics framework?

A

Gender equality ensures non-discriminatory algorithms, increases representation of women in AI, and reduces biases in AI development and deployment. Initiatives like Women4Ethical AI advocate for these goals.

6
Q

What are some key principles in the Recommendation on AI Ethics?

A

Key principles include proportionality, safety, transparency, accountability, human oversight, sustainability, public awareness, and fairness.

7
Q

What does the Business Council for Ethics of AI aim to achieve?

A

The Council, co-chaired by Microsoft and Telefónica, promotes ethical practices in AI development, strengthens technical capacities, implements Ethical Impact Assessments, and supports regional regulation development.

8
Q

What is the significance of transparency and explainability in AI?

A

Transparency ensures AI systems are understandable and traceable, while explainability helps users and stakeholders comprehend AI decisions, balancing these goals with privacy and security.

9
Q

How does UNESCO define AI in its Recommendation?

A

UNESCO broadly defines AI as systems capable of processing data in ways that resemble intelligent behavior. This flexible definition accommodates rapid technological advancements.

10
Q

What are the potential risks of AI highlighted by UNESCO?

A

Risks include embedding biases, contributing to climate degradation, threatening human rights, and exacerbating inequalities that harm marginalized groups.

11
Q

What role does public awareness play in AI ethics?

A

Public awareness ensures that citizens understand AI’s implications, promoting education, digital skills, and ethical literacy to foster responsible AI use and governance.

12
Q

What does proportionality in AI ethics mean?

A

Proportionality ensures AI systems are used only for legitimate aims and do not cause unnecessary harm, with risks assessed and mitigated appropriately.

13
Q

What are the three main characteristics of trustworthy AI as defined by the European Commission’s High-Level Expert Group?

A

Trustworthy AI should be:
1. Lawful - Complying with all applicable laws and regulations.
2. Ethical - Respecting ethical principles and values.
3. Robust - Technically sound and resilient, while taking its social environment into account.

14
Q

What does human agency and oversight mean in the context of trustworthy AI?

A

Human agency and oversight ensure that AI systems empower individuals to make informed decisions and uphold their fundamental rights. Oversight mechanisms include human-in-the-loop, human-on-the-loop, and human-in-command approaches.
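
A minimal Python sketch (not from the source material; the model interface, threshold, and reviewer hook are illustrative assumptions) of how a human-in-the-loop gate can keep people in control of individual decisions:

  # Hypothetical human-in-the-loop gate: low-confidence predictions are
  # routed to a human reviewer instead of being applied automatically.
  from dataclasses import dataclass
  from typing import Callable, Tuple

  @dataclass
  class Decision:
      label: str
      confidence: float
      decided_by: str  # "model" or "human"

  def decide(case: dict,
             model_predict: Callable[[dict], Tuple[str, float]],
             ask_human: Callable[[dict, str, float], str],
             confidence_threshold: float = 0.9) -> Decision:
      """Accept the model's output only when it is confident enough;
      otherwise defer the final call to a person (human-in-the-loop)."""
      label, confidence = model_predict(case)
      if confidence >= confidence_threshold:
          return Decision(label, confidence, decided_by="model")
      human_label = ask_human(case, label, confidence)
      return Decision(human_label, confidence, decided_by="human")

A human-on-the-loop variant would let the model act immediately but log every case for human monitoring, while human-in-command keeps a person responsible for whether the system is used at all.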

15
Q

How can technical robustness and safety be achieved in AI systems?

A

AI systems should be resilient, secure, and reliable, with fallback mechanisms to address failures. They must minimize unintentional harm and be accurate, reliable, and reproducible.
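
As a hedged illustration (the model objects and the rule are hypothetical, not prescribed by the guidelines), a fallback mechanism can be as simple as catching failures of the primary model and degrading to a conservative, well-understood rule:

  import logging

  logger = logging.getLogger("fallback")

  def classify_with_fallback(x, primary_model, fallback_rule):
      """Try the primary model; on any failure (timeout, malformed input,
      model error), fall back to a simple rule so the system degrades
      gracefully instead of failing silently."""
      try:
          return primary_model.predict(x)
      except Exception as exc:
          logger.warning("Primary model failed (%s); using fallback rule", exc)
          return fallback_rule(x)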

16
Q

Why is privacy and data governance important in AI ethics?

A

Privacy and data governance ensure full respect for data protection, data quality, integrity, and legitimate access. These measures uphold trust in AI systems.

17
Q

How does transparency contribute to trustworthy AI?

A

Transparency involves making AI systems traceable and understandable. Stakeholders should receive explanations tailored to their needs, know they are interacting with AI, and be informed about the system’s capabilities and limitations.
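
One lightweight way to operationalize this, sketched here in Python (the field names are illustrative assumptions, not mandated by the guidelines), is to attach a short system disclosure to every AI-assisted response so users know what they are interacting with:

  # Illustrative "transparency notice" attached to AI-assisted output.
  SYSTEM_DISCLOSURE = {
      "is_ai_system": True,                       # users know it is AI
      "purpose": "loan pre-screening assistant",  # hypothetical example
      "capabilities": ["risk scoring", "document summarization"],
      "limitations": ["not a final decision", "may err on unusual cases"],
      "human_contact": "appeals@example.org",     # route to a human reviewer
  }

  def wrap_response(answer: str) -> dict:
      """Return the answer together with the disclosure so the UI can
      show the system's nature, capabilities, and limitations."""
      return {"answer": answer, "disclosure": SYSTEM_DISCLOSURE}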

18
Q

What is the significance of diversity, non-discrimination, and fairness in AI systems?

A

These principles prevent unfair biases, ensure accessibility, promote diversity, and involve stakeholders throughout the AI lifecycle to avoid marginalization or discrimination.

19
Q

How should AI systems support societal and environmental well-being?

A

AI should benefit humanity and future generations by being sustainable, environmentally friendly, and socially considerate of its broader impacts.

20
Q

What mechanisms ensure accountability in AI systems?

A

Accountability requires auditability of algorithms, data, and design processes. Accessible redress mechanisms must also be in place, particularly for critical applications.
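
A small Python sketch of what auditability can look like in practice (the record fields and file format are assumptions made for illustration): an append-only trail of inputs, model version, and outputs that later audits and redress requests can rely on.

  import json
  import time

  def log_decision(log_path: str, model_version: str,
                   input_summary: dict, output: dict) -> None:
      """Append one decision record to a JSON-lines audit log."""
      record = {
          "timestamp": time.time(),
          "model_version": model_version,
          "input_summary": input_summary,  # avoid logging raw personal data
          "output": output,
      }
      with open(log_path, "a", encoding="utf-8") as f:
          f.write(json.dumps(record) + "\n")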

21
Q

What is the purpose of the assessment list provided in the guidelines?

A

The assessment list helps verify whether AI systems meet the seven key requirements for trustworthiness outlined in the guidelines.

22
Q

What oversight mechanisms are recommended for trustworthy AI?

A

Recommended oversight mechanisms include:
  • Human-in-the-loop: Humans can intervene during AI operation.
  • Human-on-the-loop: Humans monitor AI processes and outcomes.
  • Human-in-command: Humans retain ultimate control over AI decisions.

23
Q

How can AI systems avoid unfair biases?

A

By ensuring diverse and representative data, involving stakeholders, and making AI accessible to all, regardless of disabilities or other barriers.
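
A minimal sketch of the "representative data" point in Python (groups, shares, and tolerance are invented for illustration): compare each group's share in the training data against its share in the population the system will serve.

  from collections import Counter

  def representation_gaps(group_labels, reference_shares, tolerance=0.05):
      """Return groups whose share in the dataset deviates from the
      reference share by more than `tolerance` -- a crude first check,
      not a substitute for a full bias audit."""
      counts = Counter(group_labels)
      total = sum(counts.values())
      gaps = {}
      for group, expected in reference_shares.items():
          observed = counts.get(group, 0) / total if total else 0.0
          if abs(observed - expected) > tolerance:
              gaps[group] = observed - expected
      return gaps

  # Invented example: one group is heavily over-represented.
  print(representation_gaps(["a"] * 90 + ["b"] * 10, {"a": 0.6, "b": 0.4}))
  # -> roughly {'a': +0.3, 'b': -0.3}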

24
Q

What role does sustainability play in AI ethics?

A

AI should promote environmental friendliness, reduce ecological impacts, and consider the needs of future generations and other living beings.

25
Q

How can AI systems foster public trust through transparency?

A

By ensuring traceability, providing clear explanations, and informing users about the AI system’s nature, capabilities, and limitations.

26
Q

What is the primary consideration for computing professionals according to the ACM Code of Ethics?

A

The public good is the primary consideration.

27
Q

What is the role of the ACM Code of Ethics?

A

To inspire and guide ethical conduct, ensure accountability, and serve as a basis for remediation in case of violations.

28
Q

How does the Code address ethical dilemmas?

A

It encourages thoughtful consideration of fundamental principles, recognizing that multiple principles may apply differently to an issue.

29
Q

What does “contributing to society and human well-being” entail?

A

Promoting human rights, minimizing harm, prioritizing marginalized groups, and supporting environmental sustainability.

30
Q

How should computing professionals handle unintended harm?

A

They should mitigate or undo the harm to the extent possible and ensure systems are designed to minimize risks.

31
Q

Why is honesty important in computing ethics?

A

Honesty builds trust and ensures transparency in professional practices and system capabilities.

32
Q

What are examples of discriminatory behavior prohibited by the ACM Code?

A

Discrimination based on age, race, gender, religion, disability, or other inappropriate factors, as well as harassment and bullying.

33
Q

How does the Code suggest addressing intellectual property?

A

By respecting copyrights, patents, and other protections, while encouraging public contributions where beneficial.

34
Q

What are the responsibilities of computing professionals concerning privacy?

A

Use personal data responsibly, ensure transparency, obtain informed consent, and collect only the minimum necessary data.
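
A hedged Python sketch of data minimization and consent checking at intake (the purpose label, required fields, and consent registry are hypothetical):

  from typing import Optional

  REQUIRED_FIELDS = {"credit_score", "income"}  # minimum necessary data
  PURPOSE = "loan_prescreening"                 # hypothetical stated purpose

  def minimize(record: dict, consented_purposes: set) -> Optional[dict]:
      """Drop every field not strictly needed for the stated purpose,
      and refuse to process records without informed consent for it."""
      if PURPOSE not in consented_purposes:
          return None  # no informed consent for this use -> do not process
      return {k: v for k, v in record.items() if k in REQUIRED_FIELDS}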

35
Q

When is it acceptable to disclose confidential information?

A

Only when it is ethically or legally justified, such as evidence of violations of laws or ethical codes.

36
Q

A software engineer discovers a security vulnerability in a public-facing system but their manager decides not to act. What should they do?

A

They should consider whistleblowing responsibly after assessing the situation to reduce potential harm, as per the principle of avoiding harm.

37
Q

An AI system being designed excludes a significant portion of the population due to accessibility issues. Which principle does this violate?

A

The principle of fairness and non-discrimination.

38
Q

A company wants to repurpose customer data collected for one project into another. What must they ensure?

A

They must obtain informed consent from users, ensure transparency, and respect privacy guidelines.

39
Q

How can computing professionals promote sustainability according to the ACM Code?

A

By designing systems that are environmentally friendly and consider the impact on future generations.

40
Q

An individual claims authorship of open-source code created by another developer. What principle is violated?

A

Respect for intellectual work and property.

41
Q

What must professionals ensure regarding the quality of their work?

A

They must deliver high-quality work to respect stakeholders and resist pressures to compromise quality

42
Q

Why is transparent communication important in professional work?

A

It ensures all stakeholders are informed about the project’s progress and potential impacts

43
Q

What are the components of professional competence?

A

Technical skills, social awareness, communication abilities, and ethical reasoning

44
Q

How can professionals maintain their competence over time?

A

Through ongoing education, attending conferences, and independent study

45
Q

When is it ethical to challenge a rule?

A

When the rule lacks moral basis or causes harm, and efforts to challenge it through appropriate channels have been made

46
Q

What should professionals consider if they violate a rule?

A

They must accept responsibility and weigh the potential consequences of their actions

47
Q

Why is professional review essential?

A

It ensures high-quality work by leveraging peer and stakeholder feedback at all stages

48
Q

What is the responsibility of a professional when reviewing others’ work?

A

To provide constructive and critical feedback

49
Q

What is a professional’s responsibility in evaluating systems?

A

To objectively identify risks, provide credible assessments, and recommend alternatives

50
Q

What should be done if a system’s risks cannot be reliably predicted?

A

The system should undergo frequent reassessment or not be deployed

51
Q

What should a professional do if they lack expertise for a task?

A

They must disclose this to the employer or client and suggest alternatives, such as additional training or involving a qualified individual

52
Q

Who decides whether a professional should take on a task outside their expertise?

A

The professional’s ethical judgment should guide the decision

53
Q

Why should professionals communicate technical knowledge to the public?

A

To foster awareness of computing’s impacts, limitations, and opportunities

54
Q

How should professionals address misleading information?

A

Respectfully and with clear, accurate information

55
Q

When is unauthorized access to systems ethically acceptable?

A

When it serves the public good, such as disrupting malicious systems, and precautions are taken to avoid harm

56
Q

Is a system being publicly accessible sufficient to imply authorization?

A

No, authorization must be explicitly given

57
Q

Why must security be a primary consideration in system design?

A

Because security breaches can cause harm to users and stakeholders

58
Q

How should professionals handle data breaches?

A

They should notify affected parties promptly and provide appropriate guidance and remediation

59
Q

What should be done if misuse or harm from a system is predictable and unavoidable?

A

The system should not be implemented

60
Q

A professional is asked to design a system beyond their expertise. What should they do?

A

Disclose their limitations, suggest acquiring additional training, or recommend involving a more qualified individual

61
Q

A company decides not to address known risks in a deployed system. What should a professional do?

A

Report the risks to the appropriate parties and ensure they are documented, following ethical guidelines

62
Q

An individual accesses a public database for research without explicit authorization. Is this ethical?

A

No, explicit authorization is required, even if the database is publicly accessible

63
Q

A security breach compromises user data. What actions should a professional take?

A

Notify affected parties promptly, ensure transparency, and provide clear remediation steps

64
Q

A professional identifies misleading information about a computing system in the media. What should they do?

A

Respectfully correct the misinformation with accurate, clear details

65
Q

What should be the central concern in all phases of computing work?

A

The public good, including the impacts on users, colleagues, and broader society

66
Q

How should the public good be incorporated into professional decisions?

A

By explicitly considering it during research, design, testing, deployment, and other stages

67
Q

What is the role of leaders in promoting social responsibility?

A

To encourage full participation in ethical practices, raise awareness, and reduce harm to society

68
Q

How can organizations promote ethical conduct?

A

By fostering attitudes and procedures that prioritize transparency, quality, and societal welfare

69
Q

How should leaders enhance the quality of working life?

A

By prioritizing accessibility, safety, psychological well-being, and professional development

70
Q

Why are ergonomic standards important?

A

To ensure a safe and efficient workplace that supports workers’ health and productivity

71
Q

What must leaders ensure about organizational policies?

A

That they reflect the principles of the ACM Code and are clearly communicated

72
Q

What actions should leaders take against unethical policies?

A

They should discourage, reform, or challenge them as ethically unacceptable

73
Q

What types of opportunities should leaders provide?

A

Training in technical skills, ethical practices, and understanding the complexities of computing systems

74
Q

Why is familiarity with system limitations important?

A

To enable professionals to anticipate risks, address errors, and responsibly manage complex systems

75
Q

Why should leaders take care when retiring or changing systems?

A

To minimize disruptions for users and support transitions to alternative systems

76
Q

What is the leader’s responsibility regarding legacy systems?

A

To explore alternatives and assist users in migrating or adapting to new solutions

77
Q

Why do leaders have added responsibilities for societal infrastructure systems?

A

Because these systems impact commerce, healthcare, education, and other critical areas of society

78
Q

How should leaders address system accessibility?

A

By establishing policies that ensure fair access, especially for historically excluded groups

79
Q

How can professionals contribute to the Code’s principles?

A

By adhering to them, addressing ethical breaches, and proposing improvements to the Code

80
Q

What should a professional do when they notice an ethical violation?

A

Take reasonable action to address the issue, including discussing it with those involved

81
Q

What is expected of ACM members regarding the Code?

A

They should encourage adherence and report violations to the ACM when necessary

82
Q

What might happen if a violation is reported to the ACM?

A

The ACM may take remedial action as outlined in its enforcement policy

83
Q

A leader is considering retiring a legacy system. What steps should they take?

A

Investigate alternatives, notify users of risks, and provide support for a smooth migration

84
Q

A system designed for healthcare becomes critical infrastructure. How should leaders respond?

A

Monitor the system’s integration, ensure fair access, and adapt ethical responsibilities as adoption evolves

85
Q

A professional notices their organization’s policy violates the Code. What should they do?

A

Raise the issue with the appropriate stakeholders and work toward reforming the policy

86
Q

An ACM member sees a colleague violate the Code. What should they consider doing?

A

Discuss the issue with the colleague and, if necessary, report the violation to the ACM

87
Q

A leader is implementing a new workplace policy. How can they ensure it aligns with the Code?

A

Articulate the policy clearly, ensure consistency with ethical principles, and reward compliance

88
Q

What are AI Ethics Guidelines, and why are they important?

A

AI Ethics Guidelines are frameworks designed to ensure the ethical development and use of AI systems. They promote responsible innovation, appropriate trust, global cooperation, and policy guidance to benefit humanity without harm.

89
Q

Who creates AI Ethics Guidelines?

A

These guidelines are created by public institutions (e.g., UNESCO, EU), standards organizations (e.g., ISO), professional associations (e.g., IEEE, ACM), and private companies (e.g., Google, Microsoft).

90
Q

What is ethics washing in AI?

A

Ethics washing refers to exaggerating or falsely claiming ethical AI practices to distract from harmful activities, such as promoting “AI for good” while selling unethical technologies.

91
Q

Who is Timnit Gebru, and why is her case significant?

A

Timnit Gebru is an AI ethics researcher who raised concerns about bias, environmental impact, and societal harm in AI. Her controversial departure from Google in 2020 highlighted issues of “ethics washing” and corporate accountability.

92
Q

What are the EU Ethics Guidelines for Trustworthy AI?

A
  • The EU Ethics Guidelines for Trustworthy AI are one of the core pieces of the European AI strategy and were published in April 2019.
  • Created by an independent High-Level Expert Group on AI (HLEG AI), consisting of 52 experts from academia, civil society, and industry.
  • Published after several drafts and a public consultation that gathered more than 500 comments.
  • Informed the European Union’s (EU) policies and legislation on AI (the “AI Act”).
93
Q

List the EU’s 7 key requirements for Trustworthy AI

A
  1. Human agency and oversight
  2. Technical robustness and safety
  3. Privacy and data governance
  4. Transparency
  5. Diversity, non-discrimination and fairness
  6. Societal and environmental well-being
  7. Accountability
94
Q

What is the significance of the UNESCO AI Ethics Recommendation?

A

It is the first global standard on AI ethics, emphasizing human rights and providing actionable principles like proportionality, safety, privacy, fairness, and sustainability.

95
Q

What is the UNESCO Recommendation on the Ethics of AI?

A
  • First-ever global standard on AI ethics, published in 2021.
  • Created with input from experts from UNESCO member states and applicable to all 194 member states.
  • Protection of human rights and dignity is the cornerstone of the Recommendation.
  • Practical applicability through defined Policy Action Areas, which allow the core values and principles to be translated into action.
96
Q

What are the 10 core principles of UNESCO Ethics of AI?

A
  1. PROPORTIONALITY AND DO NO HARM
    Use AI systems only to the extent necessary to achieve a legitimate aim, with risk assessment to prevent harm.
  2. SAFETY AND SECURITY
    Avoid and address unwanted harms (safety risks) as well as vulnerabilities to attack (security risks).
  3. RIGHT TO PRIVACY AND DATA PROTECTION
    Privacy must be protected and promoted throughout the AI lifecycle, and adequate data protection frameworks should be established.
  4. MULTI-STAKEHOLDER AND ADAPTIVE GOVERNANCE AND COLLABORATION
    Respect international law and national sovereignty in the use of data (i.e., states can regulate data generated within or passing through their territories), and ensure the participation of diverse stakeholders for inclusive approaches to AI governance.
  5. RESPONSIBILITY AND ACCOUNTABILITY
    AI systems should be auditable and traceable, with oversight, impact assessment, audit, and due diligence mechanisms in place to avoid conflicts with human rights norms and threats to environmental wellbeing.
  6. TRANSPARENCY AND EXPLAINABILITY
    The ethical deployment of AI systems depends on their transparency and explainability; for example, people should be made aware when a decision is informed by AI. The appropriate level of transparency and explainability depends on the context, as there may be tensions with other principles such as privacy, safety, and security.
  7. HUMAN OVERSIGHT AND DETERMINATION
    Member States should ensure that AI systems do not displace ultimate human responsibility and accountability.
  8. SUSTAINABILITY
    AI technologies should be assessed against their impacts on ‘sustainability’, understood as a set of constantly evolving goals, including those set out in the UN’s Sustainable Development Goals.
  9. AWARENESS AND LITERACY
    Public understanding of AI and data should be promoted through open and accessible education, civic engagement, digital skills and AI ethics training, and media and information literacy.
  10. FAIRNESS AND NON-DISCRIMINATION
    AI actors should promote social justice, fairness, and non-discrimination, taking an inclusive approach to ensure AI’s benefits are accessible to all.
97
Q

Define “proportionality” in AI ethics.

A

Proportionality ensures that AI use does not exceed what is necessary to achieve a legitimate goal, with risk assessments to prevent harm.

98
Q

What does “explainability” in AI mean?

A

Explainability refers to making the logic behind AI decisions interpretable by experts and understandable to users, avoiding opaque “black box” systems.
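
As a hedged sketch (scikit-learn is an assumption not named in the flashcards; the data is synthetic), one common way to approach explainability is to use or distil an interpretable model whose learned rules and feature importances can be shown to users:

  from sklearn.tree import DecisionTreeClassifier, export_text

  X = [[25, 30_000], [40, 80_000], [35, 52_000], [50, 95_000]]  # age, income
  y = [0, 1, 0, 1]                                              # synthetic labels
  feature_names = ["age", "income"]

  model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

  # Global explanation: which features drive decisions, and the learned rules.
  print(dict(zip(feature_names, model.feature_importances_)))
  print(export_text(model, feature_names=feature_names))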

99
Q

What is machine learning, and why can it be problematic?

A

Machine learning enables AI to learn from data and improve over time. However, it can introduce bias, as seen in cases where algorithms perform unequally across different demographic groups.
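
A minimal Python illustration of the "unequal performance across groups" point (data and group names are invented): compute accuracy separately per demographic group and compare.

  from collections import defaultdict

  def accuracy_by_group(y_true, y_pred, groups):
      """Accuracy per group; large gaps between groups are one simple
      signal of disparate performance."""
      correct, total = defaultdict(int), defaultdict(int)
      for t, p, g in zip(y_true, y_pred, groups):
          total[g] += 1
          correct[g] += int(t == p)
      return {g: correct[g] / total[g] for g in total}

  # Invented example: the model is far more accurate for group A than B.
  y_true = [1, 0, 1, 1, 0, 1, 0, 0]
  y_pred = [1, 0, 1, 0, 1, 0, 1, 1]
  groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
  print(accuracy_by_group(y_true, y_pred, groups))  # {'A': 0.75, 'B': 0.0}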

100
Q

What was the breakthrough provision in UNESCO’s recommendation?

A

The recommendation included a prohibition against AI use for social scoring and mass surveillance.

101
Q

What is the role of multi-stakeholder governance in AI ethics?

A

It ensures diverse participation in AI regulation, respecting international law and national sovereignty.

102
Q

How does the EU AI Act relate to the Ethics Guidelines for Trustworthy AI?

A

The EU AI Act is informed by the guidelines and aims to create enforceable regulations for ethical and robust AI systems.

103
Q

What are safety risks and security risks?

A
  • Safety risks: unwanted harms
  • Security risks: vulnerabilities to attack
104
Q

What are the eleven key areas for policy action?

A
  1. Ethical Impact assessment
  2. Ethical governance and stewardship
  3. Data policy
  4. Development and international cooperation
  5. Environment and ecosystems
  6. Gender
  7. Culture
  8. Education and research
  9. Communication and information
  10. Economy and labour
  11. Health and social wellbeing
105
Q

What are the two methodologies for implementing the recommendation?

A
  • Readiness Assessment Methodology (RAM)
  • Ethical Impact Assessment (EIA)
106
Q

Explain Readiness Assessment Methodology (RAM)

A

The RAM consists of a series of quantitative and qualitative questions that gather information about different dimensions of a country’s AI ecosystem (legal and regulatory, social and cultural, economic, scientific and educational, and technological and infrastructural).
It is carried out by an independent consultant or research organization, supported by a national team drawn from a variety of stakeholders (e.g., UNESCO personnel, the country’s government, the academic community, civil society, and the private sector).
The final output is a country report that gives an overview of the status of AI readiness in the country and offers concrete policy recommendations on how to address governance gaps.

107
Q

Explain Ethical Impact Assessment (EIA)

A

The EIA covers the entire process of designing, developing, and deploying an AI system, assessing risks both before and after the system is released to the public.
1. Scoping questions: assess the fundamentals of the AI project and whether the team is in a position to continue with the rest of the EIA (it should be established that the project is not prohibited by the Recommendation, that the approach is proportionate to the intended aims, and that stakeholders will be involved in line with the guidelines).
2. Implementing the UNESCO principles: assess whether the design, development, and deployment of the AI system will result in processes and outcomes consistent with the UNESCO principles for ethical AI. For each principle, document:
a. whether sufficient procedural safeguards have been put in place;
b. the (potential) positive outcomes and adverse impacts that may arise from the procurement and deployment of the system, specific to its context of use.
The EIA is a living document that is filled out progressively and iteratively at different stages, including:
  • project research, design, development, and pre-procurement (e.g., reflecting on the scope of the project, its legitimate aims, and whether AI is an appropriate solution);
  • the procurement process (helping to select a supplier and to formulate contractual obligations);
  • regular intervals after project deployment, when the EIA is revisited (answers may change over time).

108
Q

What should a trustworthy AI be according to the EU High-Level Expert Group’s Guidelines?

A

(1) lawful - respecting all applicable laws and regulations
(2) ethical - respecting ethical principles and values
(3) robust - both from a technical perspective and with regard to its social environment

109
Q

What are the 7 key requirements that a trustworthy AI should fulfil according to the EU High-Level Expert Group’s Guidelines? (list)

A
  • Human agency and oversight
  • Technical Robustness and safety
  • Privacy and data governance
  • Transparency
  • Diversity, non-discrimination and fairness
  • Societal and environmental well-being
  • Accountability