AI Ethics & Guidelines Flashcards
What is the purpose of UNESCO’s Global AI Ethics and Governance Observatory?
The Observatory provides resources for policymakers, regulators, academics, and civil society to address ethical challenges in AI. It showcases countries’ readiness for ethical AI adoption and hosts the AI Ethics and Governance Lab to share research and practices.
What are the four core values in UNESCO’s Recommendation on AI Ethics?
The four values are:
o Respect for human rights and dignity.
o Fostering peaceful, just, and interconnected societies.
o Ensuring diversity and inclusiveness.
o Promoting environmental and ecosystem flourishing.
What is the Readiness Assessment Methodology (RAM)?
RAM is a tool designed to help countries assess their preparedness to implement the Recommendation on AI Ethics and to tailor UNESCO’s capacity-building support accordingly.
Explain the Ethical Impact Assessment (EIA) process.
EIA is a structured process that allows AI project teams to evaluate the societal impacts of AI systems in collaboration with affected communities. It identifies potential harms and suggests prevention strategies.
Why is gender equality emphasized in UNESCO’s AI ethics framework?
Gender equality ensures non-discriminatory algorithms, increases representation of women in AI, and reduces biases in AI development and deployment. Initiatives like Women4Ethical AI advocate for these goals.
What are some key principles in the Recommendation on AI Ethics?
Key principles include proportionality, safety, transparency, accountability, human oversight, sustainability, public awareness, and fairness.
What does the Business Council for Ethics of AI aim to achieve?
The Council, co-chaired by Microsoft and Telefónica, promotes ethical practices in AI development, strengthens technical capacities, implements Ethical Impact Assessments, and supports regional regulation development.
What is the significance of transparency and explainability in AI?
Transparency ensures AI systems are understandable and traceable, while explainability helps users and stakeholders comprehend AI decisions, balancing these goals with privacy and security.
How does UNESCO define AI in its Recommendation?
UNESCO broadly defines AI as systems capable of processing data in ways that resemble intelligent behavior. This flexible definition accommodates rapid technological advancements.
What are the potential risks of AI highlighted by UNESCO?
Risks include embedding biases, contributing to climate degradation, threatening human rights, and exacerbating inequalities that harm marginalized groups.
What role does public awareness play in AI ethics?
Public awareness ensures that citizens understand AI’s implications, promoting education, digital skills, and ethical literacy to foster responsible AI use and governance.
What does proportionality in AI ethics mean?
Proportionality ensures AI systems are used only for legitimate aims and do not cause unnecessary harm, with risks assessed and mitigated appropriately.
What are the three main characteristics of trustworthy AI as defined by the European Commission’s High-Level Expert Group?
Trustworthy AI should be:
1. Lawful - Complying with laws and regulations.
2. Ethical - Respecting ethical principles and values.
3. Robust - Sound from both a technical and a social perspective, since even well-intentioned systems can cause unintentional harm.
What does human agency and oversight mean in the context of trustworthy AI?
Human agency and oversight ensure that AI systems empower individuals to make informed decisions and uphold their fundamental rights. Oversight mechanisms include human-in-the-loop, human-on-the-loop, and human-in-command approaches.
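A minimal Python sketch of a human-in-the-loop gate, assuming a hypothetical route_decision helper and an arbitrary confidence threshold (neither is prescribed by the guidelines), can make the idea concrete: low-confidence outputs are routed to a human reviewer instead of being applied automatically.

```python
# Illustrative human-in-the-loop sketch: predictions below a confidence
# threshold are sent to a human reviewer rather than acted on automatically.
# Function names and the threshold are assumptions, not part of any guideline.

def human_review(case_id: str, prediction: str) -> str:
    """Placeholder for a real review workflow (ticket queue, review UI, etc.)."""
    answer = input(f"Case {case_id}: model suggests '{prediction}'. Approve? [y/n] ")
    return prediction if answer.strip().lower() == "y" else "escalated"

def route_decision(case_id: str, prediction: str, confidence: float,
                   threshold: float = 0.9) -> str:
    # High-confidence outputs pass through; everything else requires a human.
    if confidence >= threshold:
        return prediction
    return human_review(case_id, prediction)

if __name__ == "__main__":
    print(route_decision("A-17", "approve_loan", confidence=0.62))
```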
How can technical robustness and safety be achieved in AI systems?
AI systems should be resilient, secure, and reliable, with fallback mechanisms to address failures. They must minimize unintentional harm and be accurate, reliable, and reproducible.
Why is privacy and data governance important in AI ethics?
Privacy and data governance ensure full respect for data protection, data quality, integrity, and legitimate access. These measures uphold trust in AI systems.
How does transparency contribute to trustworthy AI?
Transparency involves making AI systems traceable and understandable. Stakeholders should receive explanations tailored to their needs, know they are interacting with AI, and be informed about the system’s capabilities and limitations.
What is the significance of diversity, non-discrimination, and fairness in AI systems?
These principles prevent unfair biases, ensure accessibility, promote diversity, and involve stakeholders throughout the AI lifecycle to avoid marginalization or discrimination.
How should AI systems support societal and environmental well-being?
AI should benefit humanity and future generations by being sustainable, environmentally friendly, and socially considerate of its broader impacts.
What mechanisms ensure accountability in AI systems?
Accountability requires auditability of algorithms, data, and design processes. Accessible redress mechanisms must also be in place, particularly for critical applications.
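One way to picture auditability is a decision log that captures enough context for later review and redress. The sketch below is only illustrative; the field names and file format are assumptions, not a standard schema.

```python
# Illustrative audit-record sketch: each automated decision is appended to a
# log with inputs, model version, output, and timestamp so it can be reviewed
# later. Field names are assumptions, not a mandated format.
import json
from datetime import datetime, timezone

def log_decision(path: str, model_version: str, inputs: dict, output: str) -> None:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,          # consider redacting personal data here
        "output": output,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

if __name__ == "__main__":
    log_decision("decisions.jsonl", "credit-model-1.3",
                 {"income_band": "B", "region": "north"}, "approve")
```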
What is the purpose of the assessment list provided in the guidelines?
The assessment list helps verify whether AI systems meet the seven key requirements for trustworthiness outlined in the guidelines.
What oversight mechanisms are recommended for trustworthy AI?
Recommended oversight mechanisms include:
o Human-in-the-loop: Humans can intervene in every decision cycle of the system.
o Human-on-the-loop: Humans can intervene during the design cycle and monitor the system's operation.
o Human-in-command: Humans oversee the system's overall activity and decide when and how to use it.
How can AI systems avoid unfair biases?
By ensuring diverse and representative data, involving stakeholders, and making AI accessible to all, regardless of disabilities or other barriers.
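A very simple check along these lines is comparing positive-outcome rates across groups in a labelled dataset. The sketch below is one illustrative check, not a complete fairness audit, and the function name and toy data are assumptions.

```python
# Illustrative fairness check: compares positive-outcome rates across groups.
# A large gap can signal unrepresentative data or a biased model; it is only
# one of many possible checks.
from collections import defaultdict

def positive_rate_by_group(records):
    """records: iterable of (group, outcome) pairs, where outcome is 0 or 1."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

if __name__ == "__main__":
    data = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
    rates = positive_rate_by_group(data)
    print(rates, "gap:", max(rates.values()) - min(rates.values()))
```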
What role does sustainability play in AI ethics?
AI should promote environmental friendliness, reduce ecological impacts, and consider the needs of future generations and other living beings.
How can AI systems foster public trust through transparency?
By ensuring traceability, providing clear explanations, and informing users about the AI system’s nature, capabilities, and limitations.
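In practice this information is often surfaced as a short "fact sheet" or notice shown to users. The sketch below assumes a hypothetical SystemFactSheet structure; its fields are illustrative, not a mandated disclosure format.

```python
# Illustrative transparency notice: a small fact sheet telling users they are
# interacting with an AI system and describing its purpose and limitations.
# Field names are assumptions, not a required format.
from dataclasses import dataclass, field

@dataclass
class SystemFactSheet:
    name: str
    purpose: str
    is_ai_system: bool
    known_limitations: list = field(default_factory=list)
    contact_for_redress: str = ""

    def to_notice(self) -> str:
        limits = "; ".join(self.known_limitations) or "none documented"
        return (f"You are interacting with {self.name} (AI system: {self.is_ai_system}). "
                f"Purpose: {self.purpose}. Known limitations: {limits}. "
                f"Questions or appeals: {self.contact_for_redress}.")

if __name__ == "__main__":
    sheet = SystemFactSheet("SupportBot", "answer billing questions", True,
                            ["may give outdated prices"], "support@example.org")
    print(sheet.to_notice())
```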
What is the primary consideration for computing professionals according to the ACM Code of Ethics?
The public good is the primary consideration.
What is the role of the ACM Code of Ethics?
To inspire and guide ethical conduct, ensure accountability, and serve as a basis for remediation in case of violations.
How does the Code address ethical dilemmas?
It encourages thoughtful consideration of fundamental principles, recognizing that multiple principles may apply differently to an issue.
What does “contributing to society and human well-being” entail?
Promoting human rights, minimizing harm, prioritizing marginalized groups, and supporting environmental sustainability.
How should computing professionals handle unintended harm?
They should mitigate or undo the harm to the extent possible and ensure systems are designed to minimize risks.
Why is honesty important in computing ethics?
Honesty builds trust and ensures transparency in professional practices and system capabilities.
What are examples of discriminatory behavior prohibited by the ACM Code?
Discrimination based on age, race, gender, religion, disability, or other inappropriate factors, as well as harassment and bullying.
How does the Code suggest addressing intellectual property?
By respecting copyrights, patents, and other protections, while encouraging public contributions where beneficial.
What are the responsibilities of computing professionals concerning privacy?
Use personal data responsibly, ensure transparency, obtain informed consent, and collect only the minimum necessary data.
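Data minimization can be illustrated with a small sketch that keeps only the fields needed for a declared purpose and refuses processing without recorded consent. The purpose-to-fields mapping below is a made-up example, not a legal or organizational template.

```python
# Illustrative data-minimization sketch: retain only the fields needed for a
# declared purpose and require recorded consent before processing.
ALLOWED_FIELDS = {
    "shipping": {"name", "address", "postcode"},
    "newsletter": {"email"},
}

def minimise(raw: dict, purpose: str, consent: bool) -> dict:
    if not consent:
        raise PermissionError("No informed consent recorded for this purpose.")
    allowed = ALLOWED_FIELDS.get(purpose, set())
    return {k: v for k, v in raw.items() if k in allowed}

if __name__ == "__main__":
    raw = {"name": "A. User", "address": "1 Main St", "postcode": "0000",
           "email": "a@example.org", "birthdate": "1990-01-01"}
    print(minimise(raw, "shipping", consent=True))  # email and birthdate dropped
```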