Chapter 3: High-Risk AI Systems Flashcards

1

Q: What are the rules for classifying an AI system as high-risk under the EU AI Act?

A: An AI system is high-risk if it is intended to be used as a safety component of a product, or is itself a product, covered by the Union harmonisation legislation listed in Annex I, and that product is required to undergo a third-party conformity assessment under that legislation. AI systems in the use cases listed in Annex III are also considered high-risk, subject to the exceptions in the next card.

2

Q: What are some exceptions to an AI system being classified as high-risk under Annex III?

A: An AI system under Annex III is not high-risk if it does not pose a significant risk of harm to health, safety or fundamental rights and it only performs a narrow procedural task, improves the result of a previously completed human activity, detects decision-making patterns without replacing or influencing the prior human assessment, or performs a preparatory task to an assessment. An Annex III system that performs profiling of natural persons is, however, always considered high-risk.
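
Illustration (not part of the deck): cards 1 and 2 together describe a decision procedure, sketched below in Python under the simplifying assumption that each legal test can be reduced to a boolean. Every name is a hypothetical label for a legal test, not a term defined in the Act.

```python
# Hypothetical sketch of the classification logic in cards 1 and 2.
from dataclasses import dataclass

@dataclass
class AISystem:
    annex_i_safety_component_or_product: bool = False
    third_party_assessment_required: bool = False
    listed_in_annex_iii: bool = False
    performs_profiling: bool = False            # profiling of natural persons
    poses_significant_risk: bool = False
    narrow_procedural_task: bool = False
    improves_completed_human_activity: bool = False
    detects_patterns_without_replacing_assessment: bool = False
    preparatory_task_only: bool = False

def is_high_risk(s: AISystem) -> bool:
    # Card 1: safety component or product under Annex I legislation that
    # requires third-party conformity assessment.
    if s.annex_i_safety_component_or_product and s.third_party_assessment_required:
        return True
    if not s.listed_in_annex_iii:
        return False
    # Card 2: an Annex III system that performs profiling is always high-risk.
    if s.performs_profiling:
        return True
    # Card 2: exception for systems posing no significant risk that only
    # play one of the four narrow roles.
    narrow_role = (s.narrow_procedural_task
                   or s.improves_completed_human_activity
                   or s.detects_patterns_without_replacing_assessment
                   or s.preparatory_task_only)
    return not (narrow_role and not s.poses_significant_risk)

# Example: an Annex III system doing only a preparatory task, no significant risk.
print(is_high_risk(AISystem(listed_in_annex_iii=True, preparatory_task_only=True)))  # False
```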

3

Q: What obligations apply to providers of high-risk AI systems?

A: Providers must ensure compliance with requirements in Section 2; indicate their name and contact details; have a quality management system; keep documentation; keep logs; ensure conformity assessment; draw up an EU declaration of conformity; affix a CE marking; comply with registration obligations; take necessary corrective actions; demonstrate conformity to authorities; and ensure accessibility.

4

Q: What must a provider’s quality management system for high-risk AI include?

A: It must include a strategy for regulatory compliance; procedures for design, development, testing and validation; systems and procedures for data management; a risk management system; a post-market monitoring system; procedures for serious incident reporting and for communication with competent authorities and other parties; record-keeping; resource management; and an accountability framework. Implementation must be proportionate to the size of the provider's organisation.

5

Q: What are the record-keeping obligations for high-risk AI providers?

A: Providers must keep the technical documentation, quality management system documentation, changes approved by notified bodies, decisions from notified bodies, and the EU declaration of conformity for 10 years after the AI system is placed on the market or put into service.

6

Q: What are the logging obligations for high-risk AI systems?

A: High-risk AI systems must automatically record events over their lifetime that are relevant for identifying situations that may result in risks or in a substantial modification, for facilitating post-market monitoring, and for monitoring the system's operation; remote biometric identification systems are subject to additional minimum logging capabilities. Providers must keep the logs under their control for a period appropriate to the intended purpose, of at least six months.
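
Illustration: one way a provider might wire up the automatic event recording and six-month minimum retention described above. The JSON-lines format, file name and field names are assumptions of this sketch, not requirements of the Act.

```python
# Illustrative event logger for a high-risk AI system (card 6).
import json
import time

RETENTION_SECONDS = 183 * 24 * 3600  # at least six months (183 days)

def record_event(log_path: str, event_type: str, detail: str) -> None:
    """Append one automatically recorded event as a JSON line."""
    entry = {"timestamp": time.time(), "event_type": event_type, "detail": detail}
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

def prune_expired(log_path: str) -> None:
    """Drop entries older than the retention window (never shorter than 6 months)."""
    cutoff = time.time() - RETENTION_SECONDS
    with open(log_path, encoding="utf-8") as f:
        entries = [json.loads(line) for line in f]
    kept = [e for e in entries if e["timestamp"] >= cutoff]
    with open(log_path, "w", encoding="utf-8") as f:
        for e in kept:
            f.write(json.dumps(e) + "\n")

record_event("ai_events.jsonl", "risk_situation", "confidence below threshold")
```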

7

Q: What information must providers of high-risk AI include in instructions for use?

A: Instructions must include the identity and contact details of the provider; the characteristics, capabilities and limitations of performance of the system; human oversight measures; the expected lifetime and any necessary maintenance and care measures; and a description of the mechanisms included for collecting, storing and interpreting logs. They must be provided in an appropriate digital format or otherwise, and be concise, complete, correct and clear.

8

Q: What are the human oversight requirements for high-risk AI systems?

A: High-risk AI must be designed to enable effective human oversight through appropriate human-machine interface tools and oversight measures implemented by the provider or deployer. Natural persons must be enabled to understand, monitor, interpret, decide to use or not use, and intervene in the operation of the system as appropriate.

9

Q: What are the accuracy, robustness and cybersecurity requirements for high-risk AI?

A: High-risk AI systems must achieve an appropriate level of accuracy, robustness and cybersecurity and perform consistently in those respects throughout their lifecycle. The accuracy levels and relevant accuracy metrics must be declared in the instructions for use, the systems must be as resilient as possible to errors, faults and inconsistencies, and they must include technical solutions to prevent, detect and respond to attacks that attempt to manipulate training data, inputs or the model itself.

10

Q: Who is considered the “provider” of a high-risk AI system in various circumstances?

A: The provider is whoever develops an AI system, or has one developed, and places it on the market or puts it into service under their own name or trademark. A distributor, importer, deployer or other third party is considered the provider of a high-risk AI system if they put their name or trademark on it, make a substantial modification to it, or modify its intended purpose so that it becomes high-risk. For AI that is a safety component of a product, the product manufacturer is the provider.
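
Illustration: the allocation of the provider role in this card, reduced to a small function. The parameter names and boolean simplifications are ours, not terms defined in the Act.

```python
# Sketch of the provider-allocation rules in card 10.
from typing import Optional

def responsible_provider(original_provider: str,
                         third_party: Optional[str] = None,
                         puts_own_name_on_system: bool = False,
                         substantially_modifies: bool = False,
                         repurposes_to_high_risk: bool = False,
                         product_manufacturer: Optional[str] = None) -> str:
    # Safety component of a product: the product manufacturer is the provider.
    if product_manufacturer is not None:
        return product_manufacturer
    # A distributor, importer, deployer or other third party becomes the
    # provider if any of the three listed conditions applies.
    if third_party is not None and (puts_own_name_on_system
                                    or substantially_modifies
                                    or repurposes_to_high_risk):
        return third_party
    # Otherwise: whoever placed the system on the market under their own name.
    return original_provider

print(responsible_provider("Acme AI", third_party="ReBrand Ltd",
                           puts_own_name_on_system=True))  # ReBrand Ltd
```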

11

Q: What are the obligations of deployers of high-risk AI systems?

A: Deployers must use high-risk AI in accordance with its instructions for use, assign human oversight to competent natural persons, monitor its operation, inform the provider and suspend use when they identify a risk or a serious incident, keep the automatically generated logs under their control, inform workers and their representatives before putting the system into use at the workplace, comply with registration obligations, cooperate with authorities, and, for some systems, perform a fundamental rights impact assessment.

12

Q: What must the fundamental rights impact assessment conducted by some deployers include?

A: It must include a description of the deployer’s processes, period/frequency of use, categories of affected persons, risks to those categories taking into account information from the provider, human oversight measures, and measures to take if risks materialize. It is done before deployment, can rely on prior assessments, and is notified to the market surveillance authority.
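
Illustration: the required contents of the assessment, captured as a data structure. The field names are invented for this sketch; only the list of elements comes from the card.

```python
# The elements a fundamental rights impact assessment must contain (card 12).
from dataclasses import dataclass

@dataclass
class FundamentalRightsImpactAssessment:
    deployer_process_description: str       # how the system fits the deployer's processes
    period_and_frequency_of_use: str
    affected_person_categories: list[str]
    risks_to_those_categories: list[str]    # informed by the provider's information
    human_oversight_measures: list[str]
    measures_if_risks_materialise: list[str]
    relies_on_prior_assessment: bool = False       # may build on an earlier assessment
    notified_to_market_surveillance_authority: bool = False
```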

13

Q: What are the notification and designation requirements for conformity assessment bodies (notified bodies)?

A: Member States designate notifying authorities to assess, notify and monitor conformity assessment bodies. Notified bodies must meet organizational, quality, resource, process, independence, confidentiality and competence requirements. Notifying authorities notify the Commission and Member States of bodies meeting the requirements.

14

Q: What are the operational obligations of notified bodies conducting conformity assessments?

A: Notified bodies must verify the conformity of high-risk AI systems in accordance with the relevant conformity assessment procedures, avoid unnecessary burdens for providers, and make documentation available to the notifying authority. They must inform that authority of certificates issued, refused, suspended or withdrawn, and inform other notified bodies of quality management system approvals they have refused, suspended or withdrawn, as well as of relevant assessment results.

15

Q: How can harmonised standards and common specifications be used to demonstrate conformity with the requirements for high-risk AI?

A: High-risk AI systems that conform to harmonised standards or parts of them published in the Official Journal are presumed to comply with the corresponding requirements in the Act. The Commission can also adopt common specifications that providers can conform to in the absence of harmonised standards.

16

Q: What are the different conformity assessment procedures for high-risk AI systems?

A: For high-risk AI under Annex III point 1 (biometrics), if the provider has applied harmonised standards or common specifications, it may choose between conformity assessment based on internal control and assessment of the quality management system and technical documentation by a notified body; if it has not, the notified-body procedure is required. For systems under Annex III points 2 to 8, conformity assessment based on internal control is used.
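
Illustration: the procedure choice in this card as routing logic. The point numbers come from Annex III as cited in the card; the returned labels are shorthand, not the official procedure names.

```python
# Routing logic for the conformity assessment procedures in card 16.
def assessment_routes(annex_iii_point: int, standards_applied: bool) -> list[str]:
    if annex_iii_point == 1:  # biometrics
        if standards_applied:
            # The provider may choose either route.
            return ["internal control",
                    "notified-body assessment of the quality management system "
                    "and technical documentation"]
        return ["notified-body assessment of the quality management system "
                "and technical documentation"]
    if 2 <= annex_iii_point <= 8:
        return ["internal control"]
    raise ValueError("not an Annex III point covered by this card")

print(assessment_routes(1, standards_applied=False))
```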

17

Q: What information must notified bodies exchange?

A: Notified bodies must inform the notifying authority of certificates issued, modified, withdrawn or refused and any circumstances affecting the scope of their notification. They must also inform other notified bodies of quality system approvals refused, suspended or withdrawn and assessment results.

18

Q: In what cases can an EU Member State derogate from the conformity assessment requirements?

A: Upon a duly justified request, a market surveillance authority may authorise the placing on the market or putting into service of a specific high-risk AI system for exceptional reasons of public security, the protection of life and health, environmental protection, or the protection of key industrial and infrastructural assets. In duly justified cases of urgency, the system may be put into use before the authorisation is granted, provided the authorisation is requested without undue delay. If objections are raised against a Member State's authorisation, the Commission decides whether it is justified.

19

Q: What must an EU declaration of conformity for a high-risk AI system include?

A: The EU declaration of conformity must state the AI system complies with requirements in Section 2, contain the information in Annex V, and identify the system. A copy is kept for 10 years and submitted to authorities on request.

20

Q: What are the CE marking requirements for high-risk AI systems?

A: The CE marking must be affixed visibly, legibly and indelibly to the high-risk AI system or, where that is not possible or not warranted, to its packaging or accompanying documentation. For AI systems provided digitally, a digital CE marking may be used. The CE marking indicates that the system complies with the applicable requirements.

21

Q: What are the registration obligations for providers and deployers of high-risk AI systems?

A: Providers of high-risk AI listed in Annex III, except systems under point 2 (critical infrastructure, which are registered at national level instead), must register themselves and their system in the EU database before placing it on the market or putting it into service. Deployers of those systems who are public authorities, or who act on their behalf, must also register. For systems under Annex III points 1, 6 and 7 in the areas of law enforcement, migration, asylum and border control, registration takes place in a secure non-public section of the database.
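
Illustration: where registration happens according to this card, as a small routing function. The `sensitive_area` flag is a hypothetical stand-in for the law enforcement, migration, asylum and border control condition, and the returned strings paraphrase the card.

```python
# Illustrative routing of the registration duties in card 21.
def registration_target(annex_iii_point: int, sensitive_area: bool = False) -> str:
    if annex_iii_point == 2:
        # The exception noted in the card: critical infrastructure systems
        # are registered at national level instead of in the EU database.
        return "national-level registration"
    if annex_iii_point in (1, 6, 7) and sensitive_area:
        # Law enforcement, migration, asylum or border control.
        return "secure non-public section of the EU database"
    return "public EU database"

print(registration_target(6, sensitive_area=True))
```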