Module 7: Existing and Emerging AI Laws and Standards: the EU AI Act Flashcards

1
Q

What is the EU AI Act, and what are the aims of the act?

A

The EU AI Act is the world’s first comprehensive AI regulation. It aims to:
1) Ensure that AI systems in the EU are safe with respect to fundamental rights and EU values
2) Stimulate AI investment and innovation in Europe by providing legal certainty

2
Q

How does the EU AI Act define “AI Provider”?

A

An entity that develops AI systems to sell or otherwise make available.

3
Q

How does the EU AI Act define “AI User”?

A

An entity that uses an AI system under its authority.

4
Q

To whom does the EU AI Act apply?

A

The EU AI Act has extraterritorial scope. It can apply to AI providers and users outside of the EU in some cases (e.g., if the AI system is placed on the market in the EU, or if the output generated by the AI system is used in the EU).

5
Q

What are the exemptions to the applicability of the EU AI Act?

A

AI used in:
- A military context (national security and defense)
- Research and development (including R&D for products in the private sector)

6
Q

What does the EU AI Act require of AI Providers (and in some cases AI Deployers)?

A
  • Manage AI use in accordance with its risk level
  • Document AI use
  • Audit AI use
7
Q

What are the 4 classifications of risk under the EU AI Act?

A

1) Unacceptable risk
2) High risk
3) Limited risk
4) Minimal or no risk
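
The four tiers above can be illustrated with a toy triage sketch. The tier names and example use cases are drawn from these cards; the `classify` function and its mapping are hypothetical teaching aids, not an official classification tool:

```python
# Toy illustration of the EU AI Act's four risk tiers (hypothetical helper,
# not an official or complete classification of any real system).
RISK_TIERS = {
    "unacceptable": ["social credit scoring", "untargeted facial image scraping"],
    "high": ["critical infrastructure management", "employment screening"],
    "limited": ["chatbot", "deepfake generator"],
    "minimal": ["spam filter", "ai-enabled video game"],
}

def classify(use_case: str) -> str:
    """Return the risk tier whose example list contains the use case."""
    for tier, examples in RISK_TIERS.items():
        if use_case in examples:
            return tier
    return "unclassified"

print(classify("spam filter"))           # minimal
print(classify("employment screening"))  # high
```

In practice, classification depends on the system's intended purpose and context of use, not a simple lookup; the sketch only shows that each use falls into exactly one tier.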

8
Q

Which techniques, systems and uses are deemed to have an unacceptable risk level under the EU AI Act?

A
  • Social credit scoring systems
  • Emotion recognition systems in the areas of workplace and education institutions
  • AI that exploits a person’s vulnerabilities, such as age or disability
  • Behavioral manipulation and techniques that circumvent a person’s free will
  • Untargeted scraping of facial images to use for facial recognition
  • Biometric categorization systems using sensitive characteristics
  • Specific predictive policing applications
  • Real-time biometric identification by law enforcement in publicly accessible spaces, except certain limited, pre-authorized situations
9
Q

What are the 8 high risk areas set forth in Annex III of the EU AI Act?

A

1) Biometric identification and categorization of natural persons
2) Management and operation of critical infrastructure (such as gas and electricity)
3) Education and vocational training
4) Employment, worker management and access to self-employment
5) Access to and enjoyment of essential private services and public services and benefits (e.g., emergency services dispatching)
6) Law enforcement
7) Migration, asylum and border control management
8) Administration of justice and democratic processes (e.g., assistance in legal interpretation and application of the law)

10
Q

What are the requirements for Providers and Deployers of Limited Risk AI Systems?

A
  • Providers must inform people from the outset that they will be interacting with an AI system (e.g., chatbots).
  • Deployers must:
    • Inform and obtain the consent of those exposed to permitted emotion recognition or biometric categorization systems
    • Disclose and clearly label visual or audio deepfake content that was manipulated by AI
11
Q

The requirements for Limited Risk AI Systems apply to which techniques, systems, and uses?

A
  • Systems designed to interact with people (e.g., chatbots)
  • Systems that can generate or manipulate content
  • Large language models (e.g., ChatGPT)
  • Systems that create deepfakes
12
Q

Provide some examples of minimal or no risk AI systems.

A
  • Spam filters
  • AI-enabled video games
  • Inventory management systems
13
Q

What are the requirements for Providers of high risk AI systems under the EU AI Act?

A
  • Input data should be relevant, representative, free of errors, and complete.
  • Robust data governance and management should be used.
  • High risk AI systems should automatically record events.
  • Providers must create instructions for the use of an AI system.
  • High risk AI systems should be able to be effectively overseen by humans
  • AI systems should perform consistently, be tested regularly, and be resilient to cybersecurity threats
  • The quality management system should cover strategy for regulatory compliance, technical build specifications, and plans for post-deployment monitoring
  • Demonstrate compliance prior to putting the AI system on the market (via a conformity assessment)
  • Report any incidents or malfunctioning that could affect fundamental rights to their local market surveillance authority within 15 days of discovery
14
Q

What are the requirements for Users/Deployers of high risk systems under the EU AI Act?

A
  • Users must follow the instructions for use
  • Users must monitor high risk AI systems and suspend the use of them if there are any serious issues
  • Users must update the Provider about serious incidents or malfunctioning
  • Users must keep automatically generated logs
  • Users must assign human oversight to the appropriate individuals
  • Cooperate with regulators
15
Q

What are the requirements for Importers/Distributors of high risk systems under the EU AI Act?

A
  • Ensure the conformity assessment is completed and marked on the product
  • Ensure all technical documentation is available
  • Refrain from putting a product on the market that does not conform to requirements
16
Q

What are the registration and notification requirements for Providers under the EU AI Act?

A

Providers must:
- Register the system in the EU-wide database for high risk AI systems (contact info, conformity assessment, and instructions)
- Establish and document a post market monitoring system
- Report any incidents or malfunctioning that could affect fundamental rights to their local market surveillance authority within 15 days of discovery

17
Q

What is the definition of General Purpose AI (GPAI)?

A

An AI model that displays significant generality and can perform a wide range of distinct tasks, regardless of how the model is released on the market
- Can be integrated into a variety of downstream systems or applications
- A new categorization was created: high-impact GPAI models with systemic risk, and all other GPAI

18
Q

What are the obligations of GPAI models with systemic risk (where classification is based on computing power, and which carry substantial compliance requirements)?

A
  • Assessing model performance
  • Assessing and mitigating systemic risks
  • Documenting and reporting serious incidents and action(s) taken
  • Conducting adversarial training of the model (also known as “red teaming”)
  • Ensuring security and physical protections are in place
  • Reporting the model’s energy consumption
19
Q

What are the obligations of “all other” GPAI?

A
  • Maintaining technical documentation
  • Making information available to downstream providers who integrate the GPAI model into their AI systems
  • Complying with EU copyright law
  • Providing summaries of training data
20
Q

What are the key elements of the EU AI Act governance?

A
  • All relevant EU laws still apply
  • European AI Office and AI Board established centrally at the EU level
  • Sectoral regulators will enforce the AI Act for their sector
  • Providers can combine or embed AI Act requirements in existing oversight where possible, to prevent duplication and ease compliance
21
Q

What are the penalties for noncompliance with the EU AI Act?

A
  • Up to 7 percent of global annual turnover or 35 million euros for prohibited AI use.
  • Up to 3 percent of global annual turnover or 15 million euros for most other violations of the act's obligations.
  • Up to 1 percent of global annual turnover or 7.5 million euros for supplying incorrect information to authorities.
  • There will be more proportionate caps on fines for startups and small or medium-sized enterprises.
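
For companies, each cap is generally the higher of the percentage of turnover and the fixed amount. A minimal sketch of that arithmetic, using the tier amounts from this card (the function name and tier keys are illustrative):

```python
# Penalty caps per tier: (share of global annual turnover, fixed amount in euros).
PENALTY_CAPS = {
    "prohibited_use": (0.07, 35_000_000),
    "other_violations": (0.03, 15_000_000),
    "incorrect_information": (0.01, 7_500_000),
}

def max_fine(tier: str, global_turnover_eur: float) -> float:
    """Upper bound of the fine: the higher of the percentage and fixed caps."""
    pct, fixed = PENALTY_CAPS[tier]
    return max(pct * global_turnover_eur, fixed)

# A firm with 1 billion euros turnover: 7% (70M) exceeds the 35M fixed cap.
print(max_fine("prohibited_use", 1_000_000_000))  # 70000000.0
```
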
22
Q

When will the EU AI Act apply?

A

2 years after its entry into force, with some exceptions for specific provisions, such as for prohibited AI.

23
Q

What is the intention of the EU AI Pact?

A

A voluntary commitment of industry to begin complying with EU AI Act requirements before legal enforcement begins.

24
Q

How can organizations prepare for the EU AI Act?

A

1) Identify which of your AI systems will likely be classified as high risk by the act
2) Determine whether the AI systems are within the territorial scope of the act
3) Determine whether your organization is a Provider or User/Deployer
4) Consider the AI procurement policies and processes used
5) Perform a gap analysis comparing your existing AI policies, processes and standards with the act’s requirements
6) Keep up to date on technical standards from international and European standards organizations
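
The preparation steps above amount to an inventory-and-gap-analysis exercise. A minimal sketch of how an organization might track it — all field names and the example record are hypothetical, not prescribed by the act:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """Hypothetical inventory entry for one AI system (fields are illustrative)."""
    name: str
    in_eu_scope: bool       # step 2: within the act's territorial scope?
    role: str               # step 3: "provider" or "deployer"
    likely_high_risk: bool  # step 1: Annex III screening result
    gaps: list = field(default_factory=list)  # step 5: gap analysis findings

def needs_action(record: AISystemRecord) -> bool:
    """Flag systems that are in scope and are high risk or have open gaps."""
    return record.in_eu_scope and (record.likely_high_risk or bool(record.gaps))

hiring_tool = AISystemRecord(
    name="cv-screening-tool", in_eu_scope=True,
    role="deployer", likely_high_risk=True,
    gaps=["no human oversight assignment", "logs not retained"],
)
print(needs_action(hiring_tool))  # True
```

A record like this can then drive steps 4 and 6: reviewing procurement processes for each flagged system and tracking which technical standards apply to it.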