Module 7: Existing and Emerging AI Laws and Standards: the EU AI Act Flashcards
What is the EU AI Act, and what are its aims?
The EU AI Act is the world’s first comprehensive AI regulation. It aims to:
1) Ensure that AI systems in the EU are safe with respect to fundamental rights and EU values
2) Stimulate AI investment and innovation in Europe by providing legal certainty
How does the EU AI Act define “AI Provider”?
An entity that develops AI systems to sell or otherwise make available.
How does the EU AI Act define “AI User”?
An entity that uses an AI system under its authority. (The final text of the Act uses the term "deployer" rather than "user".)
To whom does the EU AI Act apply?
The EU AI Act has extraterritorial scope. It can apply to AI providers and users outside the EU in some cases (e.g., if the AI system is placed on the market in the EU, or if the output generated by the AI system is used in the EU).
What are the exemptions to the applicability of the EU AI Act?
AI used in:
- A military context (national security and defense)
- Research and development (including R&D for products in the private sector)
What does the EU AI Act require of AI Providers (and in some cases AI Deployers)?
- Manage AI use in accordance with the system's risk level
- Document AI use
- Audit AI use
What are the 4 classifications of risk under the EU AI Act?
1) Unacceptable risk
2) High risk
3) Limited risk
4) Minimal or no risk
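The tiered structure can be sketched as a simple lookup. The tier names come from the Act, but the one-line obligation summaries below are illustrative shorthand for study purposes, not official text:

```python
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative one-line summary of the headline consequence for each tier;
# the actual obligations are set out in the Act itself.
OBLIGATIONS = {
    RiskLevel.UNACCEPTABLE: "prohibited",
    RiskLevel.HIGH: "conformity assessment, logging, human oversight",
    RiskLevel.LIMITED: "transparency duties (disclose AI interaction, label deepfakes)",
    RiskLevel.MINIMAL: "no mandatory obligations (voluntary codes of conduct)",
}

def obligations_for(level: RiskLevel) -> str:
    """Return the headline consequence attached to a given risk tier."""
    return OBLIGATIONS[level]
```

A flashcard self-quiz could call `obligations_for(RiskLevel.HIGH)` to recall that high-risk systems face conformity assessment, logging, and human-oversight duties.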
Which techniques, systems and uses are deemed to have an unacceptable risk level under the EU AI Act?
- Social credit scoring systems
- Emotion recognition systems in the workplace and in educational institutions
- AI that exploits a person’s vulnerabilities, such as age or disability
- Behavioral manipulation and techniques that circumvent a person’s free will
- Untargeted scraping of facial images to build facial recognition databases
- Biometric categorization systems using sensitive characteristics
- Specific predictive policing applications
- Real-time biometric identification by law enforcement in publicly accessible spaces, except certain limited, pre-authorized situations
What are the 8 high risk areas set forth in Annex III of the EU AI Act?
1) Biometric identification and categorization of natural persons
2) Management and operation of critical infrastructure (such as gas and electricity)
3) Education and vocational training
4) Employment, worker management and access to self-employment
5) Access to and enjoyment of essential private services and public services and benefits (e.g., emergency services dispatching)
6) Law enforcement
7) Migration, asylum and border control management
8) Administration of justice and democratic processes (e.g., assistance in legal interpretation and application of the law)
What are the requirements for Providers and Deployers of Limited Risk AI Systems?
- Providers must inform people from the outset that they will be interacting with an AI system (e.g., chatbots).
- Deployers must:
- Inform and obtain the consent of those exposed to permitted emotion recognition or biometric categorization systems
- Disclose and clearly label visual or audio deepfake content that was manipulated by AI
The requirements for Limited Risk AI Systems apply to which techniques, systems, and uses?
- Systems designed to interact with people (e.g., chatbots)
- Systems that can generate or manipulate content
- Large language models (e.g., ChatGPT)
- Systems that create deepfakes
Provide some examples of minimal or no risk AI systems.
- Spam filters
- AI-enabled video games
- Inventory management systems
What are the requirements for Providers of high risk AI systems under the EU AI Act?
- Training, validation, and testing data should be relevant, representative, and, to the best extent possible, free of errors and complete.
- Robust data governance and management practices should be used.
- High risk AI systems should automatically record events (logging).
- Providers must create instructions for the use of an AI system.
- High risk AI systems should be able to be effectively overseen by humans.
- AI systems should perform consistently, be tested regularly, and be resilient to cybersecurity threats.
- The quality management system should cover strategy for regulatory compliance, technical build specifications, and plans for post-deployment monitoring.
- Providers must demonstrate compliance before placing the AI system on the market (via a conformity assessment).
- Providers must report any serious incidents or malfunctioning that could affect fundamental rights to the relevant market surveillance authority within 15 days of becoming aware of them.
What are the requirements for Users/Deployers of high risk systems under the EU AI Act?
- Users must follow the instructions for use
- Users must monitor high risk AI systems and suspend the use of them if there are any serious issues
- Users must update the Provider about serious incidents or malfunctioning
- Users must keep automatically generated logs
- Users must assign human oversight to the appropriate individuals
- Users must cooperate with regulators
What are the requirements for Importers/Distributors of high risk systems under the EU AI Act?
- Ensure the conformity assessment is completed and marked on the product
- Ensure all technical documentation is available
- Refrain from putting a product on the market that does not conform to requirements