7. Existing and Emerging Laws Flashcards
What is the world's first comprehensive regulation for AI?
The EU AI Act that reached provisional agreement on December 8, 2023.
What is the EU AI Act?
- Is a risk-based regulation: the higher the risk, the stricter the rules.
- Has far-reaching provisions for organizations that use, design or deploy AI systems. (Like the GDPR's impact on the processing of personal data, the Act is expected to have a global impact.)
- Aligns with the approach proposed by the OECD to ensure the definition of an AI system provides clear criteria for distinguishing AI from simpler software systems.
What is the scope of the EU AI Act?
The regulation applies to all systems placed in the EU market or used in the EU, including
those from providers who are not located in the EU.
What is the purpose of the EU AI Act?
- Regulate AI
- Address potential harms
- Ensure AI systems reflect EU values and fundamental rights
- Ensure legal certainty to promote investment and innovation
- Align organizations’ use of AI with EU core values and rights of individuals:
  - Protect individuals from harm
  - Provide organizations with legal bases for using AI in its current state and as the technology advances
What is the EU AI Act’s applicability?
Applies to:
- All providers and users situated in EU member states
- Providers not located in the EU but providing products for use in the EU
- Operators located outside of the EU producing output to be used in the EU
What is the differentiation between “providers” and “deployers” under the EU AI Act?
Providers:
- Develop AI systems (usually to place on the market or put into service)
- Sell AI systems for use or make available through other means
- Majority of compliance obligations and requirements will apply to providers
Deployers:
- Organizations, individuals or other entities that use AI systems for specific purposes or goals
- AI system is considered “under the user’s authority,” except where the system is used for personal, non-professional activities
- May also be referred to as “users”
What are exemptions to the EU AI Act?
Exemptions to the Act include:
- AI used in a military context, including national security and defense
- AI used in research and development, including in the private sector
- AI used by public authorities in third countries and international organizations under international agreements for law enforcement or judicial cooperation
- AI used by people for non-professional reasons
- Open-source AI (in some cases)
What are the four risk categories classified by the EU AI Act?
- Unacceptable risk
- High risk
- Limited risk
- Minimal or no risk
What are unacceptable risks under the EU AI Act?
- Social credit scoring systems
- Emotion-recognition systems used in law enforcement, border patrol and educational institutions
- AI that exploits a person’s vulnerabilities, such as age or disability
- Behavioral manipulation and techniques that circumvent a person’s free will
- Untargeted scraping of facial images to use for facial recognition
- Biometric categorization systems using sensitive characteristics
- Specific predictive policing applications
- Real-time biometric identification by law enforcement in publicly accessible spaces, except certain limited, pre-authorized situations
What is important to know about the EU AI Act’s risk categories?
- Each risk level has a different level of compliance obligation
- Provides flexibility and adaptability for the Act
- Provides clear guidance for organizations
- Providers and, in some cases, users/deployers will be required to:
  - Process AI use in accordance with the risk level
  - Document AI use
  - Audit AI use
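The tiered structure above can be pictured as a simple lookup from risk level to example obligations. This is only an illustrative sketch: the enum names and obligation lists below are my own shorthand, not wording taken from the Act.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk categories defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # bulk of the Act's obligations apply
    LIMITED = "limited"            # mainly transparency duties
    MINIMAL = "minimal"            # no new obligations

# Illustrative tier -> example obligations; the actual obligations
# are spelled out in the Act's articles, not reproduced here.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited from the EU market"],
    RiskTier.HIGH: ["risk management system", "conformity assessment",
                    "EU database registration", "logging", "human oversight"],
    RiskTier.LIMITED: ["transparency notices to users"],
    RiskTier.MINIMAL: [],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the example obligations attached to a risk tier."""
    return OBLIGATIONS[tier]
```

The escalating length of each list mirrors the flashcard's point: the higher the risk, the stricter (and more numerous) the rules.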
What are high risks under the EU AI Act?
Majority of the Act will apply to AI that falls into the high-risk category
- Specific articles within the Act outline requirements
- Will require conformity assessments (CAs), among other obligations, to ensure the system is safe prior to it going on the market or into use
What are two high-risk subcategories under the EU AI Act?
- Product safety
  - AI used as a safety component of a product covered by EU legislation, such as toys, machinery, medical devices, aviation, vehicles and railways
- Systems that pose a significant risk of harm to health, safety or fundamental rights
  - Biometric identification and categorization of natural persons
  - Management and operation of critical infrastructure (such as gas and electricity)
  - Education and vocational training
  - Employment, worker management and access to self-employment
  - Access to and enjoyment of essential private services and public services and benefits (e.g., emergency service dispatching)
  - Law enforcement
  - Migration, asylum and border control management
  - Administration of justice and democratic processes
What are the provider requirements for managing high-risk under the EU AI Act?
- Implement a risk management system
  - Identify and analyze risks posed by the AI system, and add measures to minimize and mitigate risks
  - Provide technical documentation of the risks and mitigatory processes
  - Maintain and update documentation even after product release
- Manage data and data governance
  - Ensure input data is relevant for the purpose, free of errors, representative and complete
  - Robust data management: collection, annotation, labelling, cleaning; examination for biases (providers may process special category personal data to monitor, detect and correct bias)
  - Monitor performance and safety; take corrective steps for nonconforming systems
- Register in the public EU database of high-risk AI systems before placing them on the market
- Keep logs in an automatic, documented manner (e.g., inputs and outputs should be traceable)
- Comply with transparency measures about the provider and how the system was built, and provide instructions for use
  - Must be clear, concise and relevant
  - How to use the system safely: system maintenance, capabilities and limitations, how to implement human oversight
- Develop the system in a way that allows for human oversight
  - Humans must be able to oversee processes and understand how the system works, understand and interpret output, and intervene to stop or override the AI outputs
- Ensure the system performs consistently to achieve its intended purpose
  - Test regularly for accuracy and robustness
  - Build with resilience to cybersecurity threats
- Create a quality management system and undertake a conformity assessment
  - Quality management: strategy for regulatory compliance, build standards, post-market monitoring
  - Fundamental rights impact assessment (conformity assessment: demonstrate compliance prior to marketing; may be self-assessed, or may require third-party assessment, depending on various factors)
- Report serious incidents and malfunctions that lead to a breach of fundamental rights
What are high-risk areas requiring registration under the EU AI Act?
EU-wide database for high-risk AI systems
- Public, accessible by anyone
- Operated and owned by the European Commission
- Data provided by providers
- Providers must register prior to placing system on the market
What are high-risk areas requiring notification under the EU AI Act?
Providers must establish and document a post-market monitoring system
- Track how the AI system is performing (what the AI system is doing after it has been sold)
- Report any serious incident or malfunctioning which is, may be, or could become a breach of the obligations to protect fundamental rights (if an incident occurs, the provider must report it to the local market surveillance authority)