Module 7 Flashcards
What are the core objectives of the EU AI Act?
- Ensure AI systems are safe and respect EU values and fundamental rights
- Ensure legal certainty to promote investment in innovation in AI across the EU
What are the important dates related to the EU AI Act?
- April 2021 – The European Commission first published its proposal
- December 2022 – The Council of the European Union published their position on the AI Act
- June 2023 – The European Parliament agreed on their final negotiated position
- Summer 2023 – Trilogue negotiations begin (expected to last a few months)
What are Trilogue negotiations?
3-way negotiations between the EU Commission, Council of the EU and European Parliament to determine the final version of the Act
According to the EU Commission’s original proposal, what is the definition of AI?
- They defined AI very broadly
- AI is any software that is developed with specific techniques and approaches that can, for a given set of human defined objectives, generate outputs like content, recommendations or predictions which influence the environments they interact with
- They also refer to a range of software-based techniques such as machine learning, logic and knowledge-based systems, and statistical approaches
What part of the EU Commission’s definition of AI was considered controversial?
Statistical approaches
- It potentially encompasses a broad range of technologies – as such, the Council and the European Parliament seek to narrow the definition of AI to focus more on machine learning
According to the EU Commission’s original proposal, what is an AI provider?
An entity that develops AI systems to sell or otherwise make available
According to the EU Commission’s original proposal, what is an AI user?
- An entity that uses an AI system under its authority
- A customer of the AI provider that uses the AI system for a specific objective
Who is responsible for the majority of the compliance obligations and requirements in the Act?
The AI provider
Can the AI Act apply to providers and users based outside the EU?
Yes, the AI Act is extraterritorial
List the exception in the EU AI Act
Military context
How does the Council of the EU propose to broaden the exceptions in the EU AI Act?
- Widen Military context to cover national security and defense
- Add Research & development, for example for products in the private sector
List the 4 risk classification levels in the EU AI Act
- Unacceptable risk
- High risk
- Limited risk
- Minimal risk
According to the EU AI Act, which risk level(s) is/are prohibited?
Unacceptable risk
According to the EU AI Act, which AI techniques are considered prohibited?
- Subliminal techniques
- Exploitation
- Social credit scores
- Real-time biometric identification in public spaces by law enforcement
What are subliminal techniques in AI models?
AI systems that deploy subliminal techniques beyond the individual’s consciousness in order to materially distort a person’s behavior in a manner that is likely to cause harm
What is exploitation in AI models?
AI systems which exploit the vulnerabilities of a group due to their age or physical or mental disability in order to distort a group member’s behavior in a manner that is likely to cause harm
What are social credit scores in AI models?
AI systems typically used by public authorities to score people based on their behavior in the social sphere and then either remove or grant benefits based on that behavior
Provide an example of real-time biometric identification in public spaces by law enforcement according to the EU AI Act
Facial recognition
What exceptions exist in relation to real-time biometric identification in public spaces by law enforcement according to the EU AI Act?
- Prevention of terrorist attacks
- Finding missing children
What do you have to do to use the exception for real-time biometric identification in public spaces by law enforcement according to the EU AI Act?
- Judicial authorization
- Safeguards have to be put in place
What prohibited techniques did the European Parliament suggest be added to the EU AI Act?
- Predictive policing systems
- Emotion recognition systems in law enforcement, educational institutions and workplaces
- Any real-time biometric identification systems in public spaces (not just in law enforcement as suggested by the Commission)
- Scraping facial images to build databases for facial recognition models
To which classification level do the majority of the EU AI Act’s requirements apply?
High risk
What are the 2 categories of high risk AI systems according to the EU AI Act?
- 8 different high risk areas listed in Annex III
- AI systems that are a safety component of a product, or are themselves a product, covered by EU safety laws
Provide examples of products covered by EU safety laws in which AI systems may be a safety component, or may themselves be the product
- Machinery
- Medical devices
- Motor vehicles
- Toys
List the 8 high risk areas listed in Annex III of the EU AI Act
- Biometrics
- Critical infrastructure
- Education and vocational training
- Employment, workers management and access to self-employment
- Access to and enjoyment of essential private services and essential public services and benefits
- Law enforcement
- Migration, asylum and border control management
- Administration of justice and democratic processes
What 2 items is the European Parliament seeking to add to the list of high-risk systems?
- Systems intended to influence voters and the outcome of elections
- Systems used by social media platforms
What important amendment to the definition of high risks systems was proposed by the Council of the EU?
AI should only be considered high-risk when the output of the AI system has a high degree of importance (not purely accessory to the relevant action or decision)
According to the EU AI Act, which systems are considered limited risk?
- Designed to interact with people – for example, chatbots
- Used to detect emotions, or to determine associations with social categories based on biometric data (emotion recognition systems)
- Able to generate or manipulate content (for example, producing deep-fake videos)
According to the EU AI Act, what are the requirements for limited risk systems?
- Inform individuals interacting with, or being assessed or classified by, these systems
- Disclose that AI generated the content (some exceptions for artistic expression and law enforcement)
According to the EU AI Act, what are the requirements for minimal risk systems?
- No obligations or rules as to how these systems are developed or used
- Codes of conduct may be created to encourage organizations to apply requirements for high-risk systems to lower-risk systems; this is voluntary
According to the EU AI Act, what are the requirements for high risk systems?
- Implement a risk-management system
- Data and data governance requirements
- Technical documentation must be created and maintained
- Record-keeping and logging of AI systems
- Requirements for transparency
- Requirements for human oversight
- Requirements for accuracy, robustness and cybersecurity
- Implement a quality management system and perform a conformity assessment
Describe the following requirement of the EU AI Act:
1. Implement a risk-management system
The provider has to:
- Identify and do an analysis of the different risks which could be posed by the system
- Put in place risk management measures and mitigations, such as testing the AI system, taking into account the state of the art in the field at the time
Describe the following requirement of the EU AI Act:
2. Data and data governance requirements
- The aim is for the data used to train, test and validate AI systems to be of the highest possible quality
- Input data should be relevant, representative, free of errors and complete
- Robust data governance and management should be put in place – data collection, labelling, cleansing
Describe the following requirement of the EU AI Act:
3. Technical documentation must be created and maintained
- There is a range of documents and evidence which the provider has to put together before they can put the system on the market, and they have to maintain it post-deployment
- The purpose is to demonstrate how the AI system is complying with all of these requirements such as setting out the purpose, information about the risk management system, the conformity assessment, information about how the system was developed, its architecture and model, training data, etc.
Describe the following requirement of the EU AI Act:
4. Record-keeping and logging of AI systems
- High risk AI systems should automatically record events such as inputs that the system is receiving and the outputs that the system is generating (prediction, content, etc.)
- AI systems and their functioning should be traceable: it should be possible to go back and understand what the system was doing and how it was working at any given point in time
Describe the following requirement of the EU AI Act:
5. Requirements for transparency
- Providers have to put together an instructions for use document with clear, accessible and relevant information that is intended for the user
- For example, system maintenance, capabilities and limitations, how you can implement human oversight, information about the provider and how they built the system
Describe the following requirement of the EU AI Act:
6. Requirements for human oversight
- Humans should be able to understand how the AI system works, its capacities and limitations, and, crucially, be able to interpret the AI system’s output (explainability)
- Humans should also be able to intervene, stop, and override AI outputs
Describe the following requirement of the EU AI Act:
7. Requirements for accuracy, robustness and cybersecurity
- High-risk AI systems must achieve a high level of accuracy, robustness and cybersecurity
- AI system should perform consistently, be tested regularly and be resilient to cybersecurity threats
Describe the following requirement of the EU AI Act:
8. Implement a quality management system and perform a conformity assessment
- Quality management system should cover the strategy for regulatory compliance, technical build specifications or standards, and post-deployment monitoring
- Conformity assessment is meant to formally demonstrate how the AI system is compliant prior to putting it on the market
- Once a declaration of conformity is completed, the provider should affix the CE marking on the AI system, similarly to traditional products
According to the EU AI Act, what are the requirements for users/deployers of high-risk AI systems?
- Follow the instructions for use
- Monitor high-risk systems and suspend use if there are any serious issues
- Update the provider about serious incidents or malfunctioning
- Keep automatically generated logs
- Assign human oversight
According to the EU AI Act, what are the requirements for importers/distributors of high-risk AI systems?
- Ensure the conformity assessment is completed and marked on the product
- Ensure all technical documentation is available, including instructions for use
- Not place a product on the market if the high-risk system does not conform to requirements
According to the EU AI Act, how will registration be managed?
- The Act establishes an EU-wide database for high-risk AI systems
- Public database accessible to anyone, operated and maintained by the Commission, with data provided by the AI providers
- Providers will have to register an AI system prior to placing it on the market
- Information that will be included: contact information and details about the provider, the intended purpose of the system, a copy of the EU declaration of conformity, and a copy of the instructions for use
According to the EU AI Act, how will notification be managed?
- Providers must establish and document a post-market monitoring system
- Keeping track of how the AI system is performing and what it is doing after it has been deployed
- Key requirement: providers must report to their local market surveillance authority any serious incidents or malfunctioning that could breach obligations under EU law to protect fundamental rights
Under the EU AI Act, what are the reporting requirements?
Serious incidents or malfunctioning of high-risk AI systems must be reported within 15 days of the provider becoming aware of the issue
According to the Council of the EU, what is general purpose AI?
General purpose AI includes systems that can have many downstream tasks and use cases
Is general purpose AI included in the EU AI Act?
Not at the moment
How does the Council of the EU want to deal with general purpose AI?
The Council wants a future act stipulating which requirements for high-risk AI should apply to general-purpose AI and that this implementing act should come following a consultation and detailed impact assessment
What is the European Parliament’s position on which requirements should be applied to general-purpose AI and foundation models?
- They say that providers of foundation models must assess and mitigate the model’s risks and they should register their models in the EU database prior to release on the EU market
- They also want greater transparency requirements for providers of foundation models (for example, disclosure and labels that the content is AI generated, as well as publicly available and detailed summaries of copyrighted data that was used in training the models)
In the EU AI Act originally proposed by the EU Commission, who is responsible for enforcing the Act?
- Member states would have to designate a national supervisory authority (or authorities) to enforce the Act
- These could be new or existing authorities
- It is likely that multiple authorities will be responsible for governance of the Act within a member state because, according to the Act, existing market surveillance authorities (for financial services, medical devices, motor vehicles, etc.) will continue to act as the market surveillance authorities in relation to AI Act requirements
- Some member states could potentially designate a central coordinating supervisory authority in these cases