Module 7 Flashcards

1
Q

What are the core objectives of the EU AI Act?

A
  • Ensure AI systems are safe and respect EU values and fundamental rights
  • Ensure legal certainty to promote investment in innovation in AI across the EU
2
Q

What are the important dates related to the EU AI Act?

A
  • April 2021 – The European Commission first published its proposal
  • December 2022 – The Council of the European Union published their position on the AI Act
  • June 2023 – The European Parliament agreed on their final negotiated position
  • Summer 2023 – Trilogue negotiations begin (expected to take a few months)
3
Q

What are Trilogue negotiations?

A

3-way negotiations between the EU Commission, Council of the EU and European Parliament to determine the final version of the Act

4
Q

According to the EU Commission’s original proposal, what is the definition of AI?

A
  • They defined AI very broadly
  • AI is any software that is developed with specific techniques and approaches and that can, for a given set of human-defined objectives, generate outputs such as content, recommendations or predictions which influence the environments they interact with
  • They also refer to a range of software-based techniques such as machine learning, logic and knowledge-based systems, and statistical approaches
5
Q

What part of the EU Commission’s definition of AI was considered controversial?

A

Statistical approaches
- It potentially encompasses a broad range of technologies – as such, the Council and the European Parliament seek to narrow the definition of AI to focus more on machine learning

6
Q

According to the EU Commission’s original proposal, what is an AI provider?

A

An entity that develops AI systems to sell or otherwise make available

7
Q

According to the EU Commission’s original proposal, what is an AI user?

A
  • An entity that uses an AI system under its authority
  • A customer of the AI provider that uses the AI system for a specific objective
8
Q

Who is responsible for the majority of the compliance obligations and requirements in the Act?

A

The AI provider

9
Q

Can the AI Act apply to providers and users based outside the EU?

A

Yes, the AI Act is extraterritorial

10
Q

List the exception in the EU AI Act

A

Military context

11
Q

How does the Council of the EU propose to broaden the exceptions in the EU AI Act?

A
  • Widen Military context to cover national security and defense
  • Add Research & development, for example for products in the private sector
12
Q

List the 4 risk classification levels in the EU AI Act

A
  • Unacceptable risk
  • High risk
  • Limited risk
  • Minimal risk
13
Q

According to the EU AI Act, which risk level(s) is/are prohibited?

A

Unacceptable risk

14
Q

According to the EU AI Act, which AI techniques are considered prohibited?

A
  • Subliminal techniques
  • Exploitation
  • Social credit scores
  • Real-time biometric identification in public spaces by law enforcement
15
Q

What are subliminal techniques in AI models?

A

AI systems that deploy subliminal techniques beyond the individual’s consciousness in order to materially distort a person’s behavior in a manner that is likely to cause harm

16
Q

What is exploitation in AI models?

A

AI systems which exploit the vulnerabilities of a group due to their age, physical or mental disability in order to distort a group-member’s behavior in a manner that is likely to cause harm

17
Q

What are social credit scores in AI models?

A

AI systems typically used by public authorities to score people based on their behavior in the social sphere and then either remove or grant benefits based on that behavior

18
Q

Provide an example of real-time biometric identification in public spaces by law enforcement according to the EU AI Act

A

Facial recognition

19
Q

What exceptions exist in relation to real-time biometric identification in public spaces by law enforcement according to the EU AI Act?

A
  • Prevention of terrorist attacks
  • Finding missing children
20
Q

What do you have to do to use the exception for real-time biometric identification in public spaces by law enforcement according to the EU AI Act?

A
  • Judicial authorization
  • Safeguards have to be put in place
21
Q

What prohibited techniques did the European Parliament suggest be added to the EU AI Act?

A
  • Predictive policing systems
  • Emotion recognition systems in law enforcement, educational institutions and workplaces
  • Any real-time biometric identification systems in public spaces (not just in law enforcement as suggested by the Commission)
  • Scraping facial images for databases for facial recognition models
22
Q

To which classification level do the majority of the EU AI Act’s requirements apply?

A

High risk

23
Q

What are the 2 categories of high risk AI systems according to the EU AI Act?

A
  • 8 different high risk areas listed in Annex III
  • AI systems that are a safety component of a product, or are themselves products, covered by EU safety laws
24
Q

Provide examples of AI systems that are a safety component of a product, or are themselves products, covered by EU safety laws

A
  • Machinery
  • Medical devices
  • Motor vehicles
  • Toys
25
Q

List the 8 high risk areas listed in Annex III of the EU AI Act

A
  1. Biometrics
  2. Critical infrastructure
  3. Education and vocational training
  4. Employment, workers management and access to self-employment
  5. Access to and enjoyment of essential private services and essential public services and benefits
  6. Law enforcement
  7. Migration, asylum and border control management
  8. Administration of justice and democratic processes
26
Q

What 2 items is the European Parliament seeking to add to the list of high-risk systems?

A
  • Influence of voters and the outcome of elections
  • Systems used by social media platforms
27
Q

What important amendment to the definition of high-risk systems was proposed by the Council of the EU?

A

AI should only be considered high-risk when the output of the AI system has a high degree of importance (not purely accessory to the relevant action or decision)

28
Q

According to the EU AI Act, which systems are considered limited risk?

A
  • Designed to interact with people – for example, chatbots
  • Used to detect emotions, or to determine associations with social categories based on biometric data (emotion detection systems)
  • Able to generate or manipulate content (for example, producing deep-fake videos)
29
Q

According to the EU AI Act, what are the requirements for limited risk systems?

A
  • Inform individuals interacting with, or being assessed or classified by, these systems
  • Disclose that AI generated the content (some exceptions for artistic expression and law enforcement)
30
Q

According to the EU AI Act, what are the requirements for minimal risk systems?

A
  • No obligations or rules as to how these systems are developed or used
  • Codes of conduct may be created to encourage organizations to apply requirements for high-risk systems to lower-risk systems; this is voluntary
31
Q

According to the EU AI Act, what are the requirements for high risk systems?

A
  1. Implement a risk-management system
  2. Data and data governance requirements
  3. Technical documentation must be created and maintained
  4. Record-keeping logging of AI system
  5. Requirements for transparency
  6. Requirements for human oversight
  7. Requirements for accuracy, robustness and cybersecurity
  8. Implement a quality management system and perform a conformity assessment
32
Q

Describe the following requirement of the EU AI Act:
1. Implement a risk-management system

A

The provider has to:
- Identify and analyze the different risks that could be posed by the system
- Put in place risk management measures and mitigations such as AI system testing which takes into account the state of the AI in the field at the time

33
Q

Describe the following requirement of the EU AI Act:
2. Data and data governance requirements

A
  • The aim is for the data used to train, test and validate AI systems to be of as high quality as possible
  • Input data should be relevant, representative, free of errors and complete
  • Robust data governance and management should be put in place – data collection, labelling, cleansing
34
Q

Describe the following requirement of the EU AI Act:
3. Technical documentation must be created and maintained

A
  • There is a range of documents and evidence which the provider has to put together before they can put the system on the market, and they have to maintain it post-deployment
  • The purpose is to demonstrate how the AI system is complying with all of these requirements such as setting out the purpose, information about the risk management system, the conformity assessment, information about how the system was developed, its architecture and model, training data, etc.
35
Q

Describe the following requirement of the EU AI Act:
4. Record-keeping logging of AI system

A
  • High risk AI systems should automatically record events such as inputs that the system is receiving and the outputs that the system is generating (prediction, content, etc.)
  • AI systems and their functioning should be traceable: you should be able to go back and understand what the system was doing and how it was working at any given point in time
36
Q

Describe the following requirement of the EU AI Act:
5. Requirements for transparency

A
  • Providers have to put together an instructions for use document with clear, accessible and relevant information that is intended for the user
  • For example, system maintenance, capabilities and limitations, how you can implement human oversight, information about the provider and how they built the system
37
Q

Describe the following requirement of the EU AI Act:
6. Requirements for human oversight

A
  • Humans should be able to understand how the AI system works, its capacities and limitations, and, crucially, humans should be able to understand and interpret the AI system’s outputs (explainability)
  • Humans should also be able to intervene, stop, and override AI outputs
38
Q

Describe the following requirement of the EU AI Act:
7. Requirements for accuracy, robustness and cybersecurity

A
  • High-risk systems must maintain a high level of accuracy, robustness and cybersecurity
  • AI system should perform consistently, be tested regularly and be resilient to cybersecurity threats
39
Q

Describe the following requirement of the EU AI Act:
8. Implement a quality management system and perform a conformity assessment

A
  • Quality management system should cover the strategy for regulatory compliance, technical build specifications or standards, and post-deployment monitoring
  • Conformity assessment is meant to formally demonstrate how the AI system is compliant prior to putting it on the market
  • Once a declaration of conformity is completed, the provider should affix the CE marking on the AI system, similarly to traditional products
40
Q

According to the EU AI Act, what are the requirements for users/deployers of high-risk AI systems?

A
  • Follow the instructions for use
  • Monitor high-risk systems and suspend use if there are any serious issues
  • Update the provider about serious incidents or malfunctioning
  • Keep automatically generated logs
  • Assign human oversight
41
Q

According to the EU AI Act, what are the requirements for importers/distributors of high-risk AI systems?

A
  • Ensure the conformity assessment is completed and marked on the product
  • Ensure all technical documentation is available, including instructions for use
  • Not place a product on the market if the high-risk system does not conform to requirements
42
Q

According to the EU AI Act, how will registration be managed?

A
  • The Act establishes an EU-wide database for high-risk AI systems
  • Public database accessible to anyone, operated and maintained by the commission, with data provided by the AI providers
  • Providers will have to register an AI system prior to placing it on the market
  • Information that will be included – things like contact information, details about the provider, intended purpose of the system, copy of the EU declaration of conformity, copy of the instructions for use
43
Q

According to the EU AI Act, how will notification be managed?

A
  • Providers must establish and document a post-market monitoring system
  • Keeping track of how the AI system is performing and what it is doing once it is in use
  • Key requirement is that providers must report serious incidents or malfunctioning which could breach obligations under EU law to protect fundamental rights to their local market surveillance authority
44
Q

Under the EU AI Act, what are the reporting requirements?

A

Serious incidents or malfunctioning of high-risk AI systems must be reported within 15 days of the provider becoming aware of the issue

45
Q

According to the Council of the EU, what is general purpose AI?

A

General purpose AI includes systems that can have many downstream tasks and use cases

46
Q

Is general purpose AI included in the EU AI Act?

A

Not at the moment

47
Q

How does the Council of the EU want to deal with general purpose AI?

A

The Council wants a future implementing act to stipulate which of the high-risk AI requirements should apply to general-purpose AI, and for that act to come only after a consultation and detailed impact assessment

48
Q

What is the European Parliament’s position on which requirements should be applied to general-purpose AI and foundation models?

A
  • They say that providers of foundation models must assess and mitigate the model’s risks and they should register their models in the EU database prior to release on the EU market
  • They also want greater transparency requirements for providers of foundation models (for example, disclosure and labels that the content is AI generated, as well as publicly available and detailed summaries of copyrighted data that was used in training the models)
49
Q

In the EU AI Act originally proposed by the EU Commission, who is responsible for enforcing the Act?

A
  • Member states would have to designate a national supervisory authority or authorities to enforce the Act
  • These could be new or existing authorities
  • It is likely that multiple authorities may be responsible for governance of the Act within a member state because, according to the Act, existing market surveillance authorities (for financial services, medical devices, motor vehicles, etc.) will continue to be the market surveillance authorities in relation to AI Act requirements
  • Some member states could potentially designate a central coordinating supervisory authority in these cases
50
Q

Who did the European Parliament propose to enforce the Act?

A

The European Parliament proposed that there should be only one national supervisory authority

51
Q

Describe the current discussions around the creation of a European AI Board

A
  • It would be composed of the national supervisory authorities from each member state and chaired by the European Commission
  • The idea is to have some sort of EU board which ensures alignment and consistency in application across the member states
  • The Commission and the Parliament have weighed in and have a slightly different view on what this organization should do
  • The Commission calls it the AI Board, the Parliament calls it the AI Office
  • So, there will be some sort of body at the EU level but precisely what it does, and the scope of its powers are to be determined
52
Q

What penalties were included in the original EU AI Act proposed by the European Commission?

A
  • Up to 30 million euros or six percent of global turnover for the preceding year, whichever is higher
  • The highest fine would be reserved for use of prohibited AI
  • For most instances of noncompliance, the penalty would be up to 20 million euros or four percent of global turnover for the preceding year, whichever is higher
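
The proposed two-tier penalty structure is a simple calculation. As a rough sketch (the function name and the "whichever is higher" cap rule from the Commission's proposal are assumptions for illustration, not text from the Act):

```python
# Hypothetical sketch of the penalty caps in the Commission's original proposal.
# Assumption: the cap is the HIGHER of the fixed amount and the turnover share.
def penalty_cap(global_turnover_eur: float, prohibited_ai: bool) -> float:
    """Maximum fine in euros under the proposed two-tier structure."""
    if prohibited_ai:
        # Top tier: 30 million EUR or 6% of global annual turnover
        return max(30_000_000, 0.06 * global_turnover_eur)
    # Most other non-compliance: 20 million EUR or 4% of global annual turnover
    return max(20_000_000, 0.04 * global_turnover_eur)

# Example: a company with 1 billion EUR turnover using prohibited AI
print(penalty_cap(1_000_000_000, prohibited_ai=True))  # 60000000.0
```

Note how the percentage cap only bites for companies whose turnover exceeds 500 million EUR; below that, the fixed amount is the ceiling.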
53
Q

What amendment to the EU AI Act penalties were proposed by the Council of the EU?

A

Make fines more proportionate for start-ups and small- or medium-sized enterprises - so, there could be additional, lower, caps for those organizations

54
Q

What are the 6 top tips for complying with the EU AI Act?

A
  1. Identify and understand which of your AI systems will likely be classified as high-risk by the Act
  2. Determine whether the systems are within the territorial scope of the Act
  3. Determine if your organization is the provider or a user of the high-risk system
  4. Consider the AI procurement policies and processes used (because many organizations will be users of AI systems, the only way to ensure the provider is doing the right thing is to have in place a robust procurement process)
  5. Perform a gap analysis comparing your existing AI policies, processes and standards with the Act’s requirements – and over time, plug the gaps
  6. Keep up to date on technical standards from international and European standards organizations
55
Q

Which jurisdictions currently have laws directly related to AI?

A
  • Australia
  • Canada
  • China
  • EU
  • South Africa
  • UK
56
Q

Which jurisdictions have proposed national laws related to AI?

A
  • Brazil
  • US
57
Q

Which jurisdictions have proposed laws that would be in addition to existing laws?

A
  • Canada
  • EU
58
Q

Which locality has NOT taken a risk-based approach to governing AI?

A

China

59
Q

Which locality has outright banned certain AI systems?

A

EU

60
Q

What have many laws included as a key way to address AI accountability?

A

Transparency

61
Q

Provide examples of how you can include transparency in AI governance law

A
  • Recordkeeping and disclosure requirements
  • Requiring audits and publicly available audit results, to describe how models are performing
62
Q

Which localities have proposed AI laws that include recordkeeping and disclosure requirements?

A
  • Canada
  • US
63
Q

Which locality includes a requirement to audit and provide publicly available audit results?

A

New York City

64
Q

What Act was introduced in 2022 in Canada?

A

The Digital Charter Implementation Act of 2022

65
Q

What 3 Acts are included in the Digital Charter Implementation Act of 2022?

A
  • Consumer Privacy Protection Act
  • Personal Information and Data Protection Tribunal Act
  • Artificial Intelligence and Data Act
66
Q

In the Canadian Artificial Intelligence and Data Act, how is AI defined?

A

The definition is broad, including automated decision-making widely

67
Q

What is the approach of the Canadian Artificial Intelligence and Data Act?

A
  • Identifies many regulated activities that will be affected
  • Focused on high risk systems
68
Q

How are high risk systems identified in the Canadian Artificial Intelligence and Data Act?

A
  • The nature and severity of the potential harms
  • The scale of use of the system
  • The extent to which individual consumers have the ability to opt out of, or control, their interaction with the system
  • The imbalances of the economic and social status of the individuals who interact with the system and their autonomy or authority to make a different choice
  • The degree to which risks are already regulated through other contexts (industry, consumer protection in general, etc.)
69
Q

How would the Canadian Artificial Intelligence and Data Act be enforced?

A

Canada has proposed creating a federal AI and Data Commissioner who would have significant powers for accessing information and ordering AI systems to cease operations in some circumstances

70
Q

What approach has Singapore taken to address AI governance?

A
  • Actively leveraging AI’s benefits for productivity and growth
  • But they base this on the idea of managing risk and ensuring people can trust in the systems they are using
71
Q

What advisory council has Singapore established?

A
  • Industry-level advisory council on ethical use of AI and data
  • Council advises the government on issues that are brought up by commercial uses, applications, features and services that AI is included in
72
Q

When was the first AI governance framework issued in Asia?

A

2019 - Singapore

73
Q

What frameworks has Singapore produced?

A
  • AI governance framework
  • Model framework (voluntary) to guide organizations in deploying AI responsibly
74
Q

What aspect of AI do the Singapore frameworks not address?

A

Development of models

75
Q

What 2 fundamental principles are embedded into Singapore’s frameworks?

A
  • The decision-making process should be explainable, transparent and fair
  • It should be human-centric (by being able to appeal, having human review of output, or having alternative paths, etc.)
76
Q

What toolkit did Singapore release?

A

AI Verify in 2023

77
Q

What is the purpose of AI Verify?

A
  • Support AI governance testing and oversight
  • Foster interoperability of these systems
  • Enable common approaches and standards
78
Q

How does AI Verify work?

A
  • It is a testing toolkit that companies can use to test their own systems but provides outputs that can then be shared at the discretion of the company
  • Rather than setting specific standards, AI Verify is designed to allow the developers and owners of a system to state what their claim is about the performance of their system and then demonstrate that they are complying
79
Q

What approach has China taken to AI governance?

A

Rights-based approach

80
Q

What rights are included in China’s national AI governance legislation?

A
  • Clear notice
  • Ability in all cases to opt out of personalized recommendations
  • Prohibition on price differentiation
81
Q

Name the 3 areas where China has included clear directives in their AI governance legislation

A
  • Online recommendations
  • Social media
  • Gaming
82
Q

What features are seen in municipal AI governance laws in China, in addition to national requirements?

A
  • Oversight for compliance in development
  • Auditing
  • Bans (or contemplated bans) for certain aspects of AI usage, particularly when they see it as a threat to national security, personal privacy, health or discrimination
  • Potentially include bans on the development of metaverse-related technologies as well
83
Q

How is AI governance addressed in South Africa?

A
  • Protection of Personal Information Act (broadly similar to the GDPR)
  • Includes 8 conditions for the lawful processing of data which largely mirror FIPPs
84
Q

How is AI governance addressed in Brazil?

A
  • Proposed a risk-based approach
  • Requiring human oversight if fundamental rights may be implicated
  • Currently contains no binding limitations or enforcement mechanisms
85
Q

How is AI governance addressed in India?

A
  • Strongly promoting AI and big data use in their economy
  • They have created AI governance principles and established 4 committees at the national level to develop a policy framework on AI addressing the legal questions, insurance, contractual controls, governance and safety parameters
86
Q

How is AI governance addressed in Japan?

A
  • Published non-binding AI guidelines
  • Created a national AI strategy including governance guidelines and principles
  • Very focused on innovation and building trustworthy, reliable and fair systems
87
Q

How is AI governance received by companies in Japan?

A

Although the requirements are non-binding, corporate compliance is generally expected to align with the guidance that is being provided

88
Q

Provide examples of the guidelines issued by Japan

A
  • Contract guidelines for AI and data use, including model clauses
  • Machine learning management guidelines
  • Cloud services with specific guidance on AI safety and reliability
89
Q

What is the focus of Japan’s machine learning guidelines?

A

Quality management processes

90
Q

How is AI governance addressed in South Korea?

A
  • Non-binding guidance
  • National strategy
  • People-centric, human-centric approach
91
Q

What is South Korea’s goal in regards to AI governance?

A
  • Prevent AI dysfunction, i.e., negative impacts on individuals
  • Encourage expansion and innovation
92
Q

What areas of consensus need to emerge globally to effectively govern AI?

A
  • How to enforce and penalize AI violations
  • Whether to criminalize aspects of AI governance
  • The impact or role of third-party liability
  • Intellectual property issues
93
Q

What is the difference between an AI governance framework and AI governance principles?

A
  • AI governance principles are a set of values put forth by an organization
  • An AI governance framework is how to operationalize those values
94
Q

What should be considered when using AI governance frameworks?

A
  • Industry or sector
  • Jurisdiction(s) in which you operate
  • Risk tolerance levels
  • AI business strategy (is it managed by procurement or sales…)
  • Resources available to implement the framework
  • Use cases you want to implement
95
Q

What core AI ethical principles are most frameworks based on?

A
  • Fairness
  • Transparency
  • Human accountability
  • Oversight
  • Accuracy
96
Q

What should your AI governance framework cover?

A
  • AI development
  • AI procurement
  • Use of AI
97
Q

How can the guardrails imposed by an AI governance framework be a benefit to the organization (other than purely compliance)?

A
  • Makes it easier to develop; you know what is expected and it is predictable
  • Your timelines are less likely to change
  • It provides the structure required to streamline operations
98
Q

What steps could you take to select the right framework?

A
  1. Identify what AI systems are already built
  2. Evaluate their risks
  3. Plan risk mitigations
  4. Review previous incidents and lessons learned
  5. Speak to “Red team” (staff authorized to test an organization’s defenses to identify vulnerabilities)
  6. Choose the right framework for your sector and use case
99
Q

List 5 examples of AI governance frameworks

A
  • ISO 31000:2018
  • Singapore’s Model AI Framework
  • NIST’s AI Risk Management Framework
  • HUDERIA
  • The EU AI Act
100
Q

What is the approach used in the ISO 31000:2018 Risk Management Guidelines?

A

Not intended to eliminate risks, but identify risks and implement mitigation strategies and processes

101
Q

What is included in the ISO 31000:2018 Risk Management Guidelines?

A
  • Breaks risk management down into principles, a framework and a process
  • Eight risk management principles
  • Six areas of focus
  • A process
102
Q

List the 8 risk management principles included in the ISO 31000:2018 Risk Management Guidelines

A
  • Inclusive
  • Dynamic
  • Best available information
  • Human and cultural factors
  • Continuous improvement
  • Integration
  • Structured and comprehensive
  • Customized
103
Q

List the 6 areas of focus included in the ISO 31000:2018 Risk Management Guidelines

A
  • Leadership
  • Integration
  • Design
  • Implementation
  • Evaluation
  • Improvement
104
Q

What are the high-level steps in the process included in the ISO 31000:2018 Risk Management Guidelines?

A
  • Identify risks
  • Evaluate the probability of a risk event occurring
  • Determine the severity of the effects of a risk event occurring
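
The three steps above can be sketched as a simple risk-scoring pass. This is a minimal illustration, not part of ISO 31000 itself; the risk names and numbers are made up, and "probability × severity" is one common risk-matrix heuristic:

```python
# Illustrative only: rank identified risks by expected impact,
# where expected impact = probability x severity.
risks = [
    # (risk name, probability 0-1, severity 1-5) -- hypothetical values
    ("training data bias",          0.6, 4),
    ("model drift post-deployment", 0.7, 3),
    ("adversarial input attack",    0.2, 5),
]

def risk_score(probability: float, severity: int) -> float:
    # Common risk-matrix heuristic: likelihood times impact
    return probability * severity

# Highest-scoring risks get mitigation priority
ranked = sorted(risks, key=lambda r: risk_score(r[1], r[2]), reverse=True)
for name, p, s in ranked:
    print(f"{name}: {risk_score(p, s):.1f}")
```

The point of the sketch is the ordering, not the numbers: a likely-but-moderate risk can outrank a severe-but-rare one, which is why ISO 31000 asks for both probability and severity rather than severity alone.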
105
Q

List NIST’s 7 characteristics of trustworthy AI

A
  • Valid and reliable
  • Safe
  • Secure and resilient
  • Accountable and transparent
  • Explainable and interpretable
  • Privacy-enhanced
  • Fair, with harmful bias managed
106
Q

What are the high-level steps in the governance process proposed by NIST?

A
  • Map: Identify use and risks related to use
  • Measure: Assess, analyze and track risks
  • Manage: Prioritize risks and act based on projected impact
107
Q

What are the key steps proposed by NIST to embed within organizational processes and activities to assess and manage risk?

A
  • Test
  • Evaluate
  • Verify
  • Validate
108
Q

What general guidance is included in HUDERIA’s framework for AI systems?

A
  • Develop impact assessment models that incorporate human rights with AI-centered approaches
  • Apply a risk-based approach based on specific principles
  • Formulate a methodology of impact assessments that follow the proportionality principle
  • Develop a method for assessing and grading the likelihood and extent of risks associated with an AI system
109
Q

What does HUDERIA consider when developing methods for assessing and grading the likelihood and extent of risks associated with AI systems?

A
  • Use-contexts and purposes
  • Underlying technology
  • Stage of development
  • Stakeholders
110
Q

List the 8 principles in HUDERIA’s framework for AI systems

A
  • Human dignity
  • Human freedom and autonomy
  • Prevention of harm
  • Non-discrimination
  • Transparency and explainability
  • Data protection and the right to privacy
  • Democracy
  • Rule of Law
111
Q

Describe the human dignity principle, included in the HUDERIA framework for AI systems

A

Humans should be treated as moral subjects and not as objects to be algorithmically manipulated

112
Q

Describe the human freedom and autonomy principle, included in the HUDERIA framework for AI systems

A

Humans should be informed and empowered to act; systems should enrich humans, not control or condition them

113
Q

Describe the prevention of harm principle, included in the HUDERIA framework for AI systems

A

AI systems must not be permitted to adversely impact humans’ mental or physical health or planetary health

114
Q

Describe the non-discrimination principle, included in the HUDERIA framework for AI systems

A

AI systems must be fair, equitable and inclusive in their beneficial impacts and in the distribution of risks

115
Q

Describe the transparency and explainability principle, included in the HUDERIA framework for AI systems

A

AI use and the rationale for this use must be made clear to affected individuals

116
Q

Describe the data protection and the right to privacy principle, included in the HUDERIA framework for AI systems

A

Informed, freely given and unambiguous consent for AI use when personal information is involved

117
Q

Describe the democracy principle, included in the HUDERIA framework for AI systems

A

Transparent and inclusive oversight mechanisms to ensure the safeguarding of the above principles

118
Q

Describe the rule of law principle, included in the HUDERIA framework for AI systems

A

AI systems must not undermine judicial independence, due process, etc.

119
Q

What is included in the process proposed in the HUDERIA framework for AI systems

A
  • Identify relevant human rights that could be adversely impacted
  • Assess the impact on those rights
  • Assess governance mechanisms to ensure the mitigation of risks, stakeholder involvement, effective remedy, accountability and transparency
  • Monitor and evaluate the system continuously for sufficient response to changes in context and operation
120
Q

List the 5 principles proposed in the EU AI Act (per Parliament)

A
  • Safe
  • Transparent
  • Traceable
  • Non-discriminatory
  • Environmentally friendly
121
Q

List the 12 principles included in Singapore’s Model AI Framework

A
  • Transparency
  • Explainability
  • Repeatability/reproducibility
  • Safety
  • Security
  • Robustness
  • Fairness
  • Data governance
  • Accountability
  • Human agency and oversight
  • Inclusive growth
  • Societal and environmental well-being
122
Q

List the 5 principles set forth by the OECD

A
  • Impartiality (Objective): Inclusive growth, sustainable development and well-being, diversity
  • Human-centered values and fairness (non-discrimination)
  • Transparency and explainability
  • Robustness, security and safety
  • Accountability
123
Q

List 2 major challenges for AI governance programs

A
  • Change management
  • Cultural change
124
Q

How can you address change management and cultural change in relation to AI?

A
  • Ensure continuous involvement and support from leadership
  • Refine and streamline processes
  • Get feedback from individuals who might be affected by an AI system