EU Law Flashcards
AI Compliance with the EU AI Act
What is the AI Act?
The AI Act is the EU’s first comprehensive legal framework regulating artificial intelligence. It establishes standards for data quality, transparency, human oversight, and accountability across AI systems used within the EU.
What are the main goals of the AI Act?
The AI Act aims to create a consistent approach to AI regulation, protect fundamental rights, prevent discriminatory practices, and ensure transparency and safety in AI usage.
How does the AI Act classify AI systems?
The Act classifies AI systems by risk level: unacceptable, high, limited, and minimal, each with corresponding compliance requirements.
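The four tiers above can be sketched as a simple lookup. This is purely illustrative: the tier names come from the Act, but the obligation summaries are paraphrased from these flashcards, not legal text.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # strict compliance requirements
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no mandatory restrictions

# Paraphrased obligation summaries per tier (illustrative, not legal text)
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "Prohibited: may not be placed on the EU market.",
    RiskTier.HIGH: "Strict requirements: data quality, documentation, "
                   "human oversight, accountability.",
    RiskTier.LIMITED: "Transparency: users must be told they are "
                      "interacting with an AI.",
    RiskTier.MINIMAL: "No mandatory restrictions; voluntary codes of conduct.",
}

def obligations_for(tier: RiskTier) -> str:
    """Return the paraphrased obligation summary for a risk tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.LIMITED))
```

A real classification decision depends on the system's intended purpose and the Act's annexes; this mapping only captures the tier-to-obligation structure described above.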
Does the AI Act affect non-EU companies?
Yes, the AI Act has extraterritorial reach, meaning it applies to any company, inside or outside the EU, whose AI systems are used in or impact individuals within the EU.
What are “high-risk” AI applications, and why do they matter?
High-risk AI applications are those that could significantly impact people’s lives, such as in healthcare, employment, and access to essential services. These systems face strict requirements for data quality, transparency, oversight, and accountability.
What AI applications fall under “unacceptable risk”?
AI systems that manipulate individuals' behavior, exploit vulnerabilities related to age or disability, or infringe on fundamental rights are classified as unacceptable risk and are prohibited.
What is required for “limited-risk” AI systems?
Limited-risk AI, like chatbots, must clearly inform users they are interacting with an AI. This transparency helps manage expectations and maintain trust.
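As an illustration of the disclosure requirement above, a chatbot might state up front that it is automated. The wording here is hypothetical, not prescribed by the Act.

```python
def greeting() -> str:
    # Transparency disclosure (limited-risk tier): tell the user they are
    # interacting with an AI system before any other content is shown.
    # The exact wording is an assumption for illustration.
    return ("You are chatting with an automated AI assistant, "
            "not a human agent. How can I help?")

print(greeting())
```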
Are there any restrictions on minimal-risk AI applications?
No. Minimal-risk applications, such as spam filters, face no mandatory requirements under the Act, though companies may voluntarily adopt codes of conduct.
What is required for non-profits using high-risk AI systems in services?
Non-profits must ensure that AI systems used in determining eligibility or distributing aid undergo a fundamental rights impact assessment and adhere to transparency and data accuracy standards.
How does the AI Act affect supported housing organizations using AI?
Supported housing providers using AI for tenant eligibility assessments or prioritizing services must maintain transparency, provide documentation on how decisions are made, and ensure that human oversight prevents potential biases.
How can charities comply with the AI Act’s transparency requirements?
Charities should clearly document their AI systems, explain decision-making processes to beneficiaries, and maintain records that detail how data is collected, processed, and used in AI models.
How does the AI Act impact AI used in rehab or healthcare settings?
AI systems in healthcare, like those assessing patient needs, are high-risk and must comply with strict data quality, documentation, and human oversight requirements to ensure reliable and ethical use.
Why is human oversight important in rehab facilities using AI?
Human oversight ensures that AI decisions are reviewed for ethical implications and that clients’ individual needs are met, helping to prevent discrimination or errors in service delivery.
What are the key compliance requirements for pharmaceutical companies using AI?
Pharmaceutical companies must ensure high data quality, regular testing, and thorough documentation for AI models used in diagnostics or patient care. This includes accuracy checks, clear explanations of AI outputs, and robust security measures.
Are pharmaceutical companies required to perform impact assessments under the AI Act?
Yes, for high-risk AI, including diagnostics, companies must conduct impact assessments that evaluate how the AI system could affect patient rights, data privacy, and outcomes.