EU law Flashcards

AI compliance with EU law

1
Q

What is the AI Act?

A

The AI Act is the EU’s first comprehensive legal framework regulating artificial intelligence. It establishes standards for data quality, transparency, human oversight, and accountability across AI systems used within the EU.

2
Q

What are the main goals of the AI Act?

A

The AI Act aims to create a consistent approach to AI regulation, protect fundamental rights, prevent discriminatory practices, and ensure transparency and safety in AI usage.

3
Q

How does the AI Act classify AI systems?

A

The Act classifies AI systems into four risk levels: unacceptable, high, limited, and minimal. Each level carries corresponding compliance requirements.
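
As a rough study aid, the four tiers and their headline obligations can be sketched as a simple lookup table. The Python below is a hypothetical illustration; the tier names come from the Act, but the obligation summaries are shorthand, not legal text.

    from enum import Enum

    class RiskLevel(Enum):
        UNACCEPTABLE = "unacceptable"  # prohibited outright
        HIGH = "high"                  # strict compliance requirements
        LIMITED = "limited"            # transparency obligations
        MINIMAL = "minimal"            # no mandatory requirements

    # Illustrative summaries only; the Act's actual obligations are far more detailed.
    OBLIGATIONS = {
        RiskLevel.UNACCEPTABLE: "Prohibited: may not be placed on the EU market.",
        RiskLevel.HIGH: "Data quality, documentation, human oversight, impact assessments.",
        RiskLevel.LIMITED: "Tell users they are interacting with an AI system.",
        RiskLevel.MINIMAL: "No mandatory requirements; voluntary codes of conduct.",
    }

    print(OBLIGATIONS[RiskLevel.LIMITED])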

4
Q

Does the AI Act affect non-EU companies?

A

Yes, the AI Act has extraterritorial reach, meaning it applies to any company, inside or outside the EU, whose AI systems are used in the EU or affect individuals within the EU.

5
Q

What are “high-risk” AI applications, and why do they matter?

A

High-risk AI applications are those that could significantly impact people’s lives, such as in healthcare, employment, and access to essential services. These systems face strict requirements for data quality, transparency, oversight, and accountability.

6
Q

What AI applications fall under “unacceptable risk”?

A

AI systems that manipulate individuals' behaviors, exploit vulnerabilities related to age or disability, or infringe on fundamental rights are prohibited and fall under unacceptable risk.

7
Q

What is required for “limited-risk” AI systems?

A

Limited-risk AI, like chatbots, must clearly inform users they are interacting with an AI. This transparency helps manage expectations and maintain trust.
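
A minimal sketch of what that disclosure might look like in practice; the wording and function names below are hypothetical, chosen only to illustrate the transparency obligation.

    # Limited-risk transparency: tell users up front that they are talking to an AI.
    DISCLOSURE = "You are chatting with an AI assistant, not a human."

    def reply(user_message: str, first_turn: bool) -> str:
        answer = "Thanks for your question; here is some information."  # placeholder answer
        return f"{DISCLOSURE}\n{answer}" if first_turn else answer

    print(reply("How do I apply for support?", first_turn=True))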

8
Q

Are there any restrictions on minimal-risk AI applications?

A

No, minimal-risk applications, like spam filters, are mostly unrestricted, though companies may follow voluntary codes of conduct if they choose.

9
Q

What is required for non-profits using high-risk AI systems in services?

A

Non-profits must ensure that AI systems used in determining eligibility or distributing aid undergo a fundamental rights impact assessment and adhere to transparency and data accuracy standards.

10
Q

How does the AI Act affect supported housing organizations using AI?

A

Supported housing providers using AI for tenant eligibility assessments or prioritizing services must maintain transparency, provide documentation on how decisions are made, and ensure that human oversight prevents potential biases.

11
Q

How can charities comply with the AI Act’s transparency requirements?

A

Charities should clearly document their AI systems, explain decision-making processes to beneficiaries, and maintain records that detail how data is collected, processed, and used in AI models.

12
Q

How does the AI Act impact AI used in rehab or healthcare settings?

A

AI systems in healthcare, like those assessing patient needs, are high-risk and must comply with strict data quality, documentation, and human oversight requirements to ensure reliable and ethical use.

13
Q

Why is human oversight important in rehab facilities using AI?

A

Human oversight ensures that AI decisions are reviewed for ethical implications and that clients’ individual needs are met, helping to prevent discrimination or errors in service delivery.

14
Q

What are the key compliance requirements for pharmaceutical companies using AI?

A

Pharmaceutical companies must ensure high data quality, regular testing, and thorough documentation for AI models used in diagnostics or patient care. This includes accuracy checks, clear explanations of AI outputs, and robust security measures.

15
Q

Are pharmaceutical companies required to perform impact assessments under the AI Act?

A

Yes, for high-risk AI, including diagnostics, companies must conduct impact assessments that evaluate how the AI system could affect patient rights, data privacy, and outcomes.

16
Q

How does the AI Act ensure transparency in AI systems?

A

The Act requires organizations to maintain documentation, disclose AI-driven interactions to users, and provide clear explanations of AI system functionality and decision-making processes.

17
Q

What is “human oversight,” and why is it required?

A

Human oversight means that humans are involved in the monitoring and review of AI decisions. This is required to ensure that AI outcomes align with ethical standards, and it prevents AI from making unchecked, potentially harmful decisions.

18
Q

What types of documentation are required under the AI Act?

A

Organizations must document the purpose, design, and functionality of their AI systems, keep records of decisions, and provide user information detailing how the AI operates and what data it uses.
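
One way to keep such records consistent is a structured template. The field names below are a hypothetical sketch for internal record-keeping, not the Act's official documentation schema.

    from dataclasses import dataclass, field

    @dataclass
    class AISystemRecord:
        """Hypothetical documentation record for one AI system."""
        name: str
        purpose: str            # intended purpose of the system
        risk_level: str         # e.g. "high", "limited", "minimal"
        data_sources: list      # what data the system collects and uses
        human_oversight: str    # who reviews the system's outputs
        decision_log: list = field(default_factory=list)  # record of decisions and reviews

    record = AISystemRecord(
        name="Eligibility screening model",
        purpose="Prioritize applications for supported housing",
        risk_level="high",
        data_sources=["application forms", "housing register"],
        human_oversight="Caseworker reviews every automated recommendation",
    )
    record.decision_log.append("2025-05-01: caseworker overrode the model's recommendation")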

19
Q

What are the potential penalties for non-compliance with the AI Act?

A

Non-compliant organizations may face fines of up to 35 million euros or 7% of global annual revenue, whichever is higher, for the most serious violations such as using prohibited AI; lower fine tiers apply to less severe breaches.
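
The "whichever is higher" rule is simple arithmetic. A minimal sketch, assuming a hypothetical revenue figure:

    # Top fine tier under the AI Act: the higher of a fixed cap and a revenue share.
    FIXED_CAP_EUR = 35_000_000
    REVENUE_SHARE = 0.07  # 7% of global annual revenue

    def max_fine(global_annual_revenue_eur: float) -> float:
        """Upper bound of the fine for the most serious violations."""
        return max(FIXED_CAP_EUR, REVENUE_SHARE * global_annual_revenue_eur)

    # Example: EUR 1 billion revenue, so 7% (EUR 70 million) exceeds the EUR 35 million floor.
    print(f"{max_fine(1_000_000_000):,.0f} EUR")  # 70,000,000 EUR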

20
Q

How can early compliance with the AI Act benefit organizations?

A

Early compliance helps organizations avoid future legal challenges, enhances reputation, builds trust with users, and streamlines internal processes for better efficiency and reliability.

21
Q

How can organizations start complying with the AI Act?

A

Organizations should conduct risk assessments for their AI systems, implement human oversight protocols, develop transparent documentation, and provide training on AI ethics and compliance.

22
Q

What are “fundamental rights impact assessments,” and when are they required?

A

These assessments evaluate the AI system’s potential impact on privacy, equality, and access to essential services. They are required for high-risk AI applications that can significantly affect individuals’ rights.

23
Q

Are there exemptions under the AI Act?

A

Yes, the AI Act exempts AI systems used solely for scientific research and development, as well as open-source projects, unless they qualify as high-risk or prohibited AI.

24
Q

How does the AI Act relate to the GDPR?

A

The AI Act complements GDPR, meaning that any personal data processed through AI must also comply with GDPR’s data privacy and protection requirements.

25
Q

What is expected for organizations using general-purpose AI?

A

Organizations using general-purpose AI models must disclose AI use, ensure compliance with risk-specific requirements, and possibly perform additional testing if the AI poses high or systemic risk.

26
Q

How can you, as a consultant, help organizations comply with the AI Act?

A

I offer services like risk assessment, compliance strategy development, transparency documentation, and human oversight training to ensure that organizations meet all AI Act requirements effectively and ethically.

27
Q

What kind of support do you provide for high-risk AI applications?

A

I assist in conducting impact assessments, setting up data quality checks, developing human oversight structures, and ensuring transparency in high-stakes areas like healthcare and service eligibility.