Chapter 3 HIGH-RISK AI SYSTEMS Flashcards
Q: What are the rules for classifying an AI system as high-risk under the EU AI Act?
A: An AI system is high-risk if both of the following are met: it is intended to be used as a safety component of a product, or is itself a product, covered by the Union harmonisation legislation listed in Annex I; and that product or system is required to undergo a third-party conformity assessment under that legislation. AI systems listed in Annex III are also considered high-risk.
Q: What are some exceptions to an AI system being classified as high-risk under Annex III?
A: An AI system under Annex III is not high-risk if it does not pose a significant risk of harm to health, safety or fundamental rights, and it performs a narrow procedural task, improves the result of a previously completed human activity, detects decision-making patterns without replacing or influencing human assessment, or performs a preparatory task to an assessment. The exception never applies to Annex III systems that perform profiling of natural persons; those are always high-risk.
Q: What obligations apply to providers of high-risk AI systems?
A: Providers must ensure compliance with the requirements in Section 2 of Chapter III; indicate their name, trade name or trade mark and contact address; have a quality management system; keep the required documentation; keep automatically generated logs under their control; ensure the system undergoes the relevant conformity assessment; draw up an EU declaration of conformity; affix the CE marking; comply with registration obligations; take necessary corrective actions; demonstrate conformity upon reasoned request of a competent authority; and ensure the system complies with accessibility requirements.
Q: What must a provider’s quality management system for high-risk AI include?
A: It must include a strategy for regulatory compliance; design, development and validation procedures; data management systems and procedures; a risk management system; a post-market monitoring system; procedures for incident reporting and communication with authorities; record-keeping; resource management; and an accountability framework. Implementation must be proportionate to the size of the provider's organisation.
Q: What are the record-keeping obligations for high-risk AI providers?
A: Providers must keep the technical documentation, quality management system documentation, changes approved by notified bodies, decisions from notified bodies, and the EU declaration of conformity for 10 years after the AI system is placed on the market or put into service.
Q: What are the logging obligations for high-risk AI systems?
A: High-risk AI systems must automatically record events (logs) over their lifetime, at least those relevant for identifying situations that may result in risk or in a substantial modification, for facilitating post-market monitoring, and for monitoring the system's operation; remote biometric identification systems have additional minimum logging requirements. Providers must keep logs under their control for a period appropriate to the intended purpose, of at least six months.
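Note (illustrative only): a minimal Python sketch of how a provider might structure automatic event logging with the Act's six-month retention floor. The EventLog class, the event labels and the 183-day approximation are all hypothetical choices for this sketch, not terms from the Act.

    import datetime as dt

    # assumed calendar approximation of the "at least six months" floor
    RETENTION = dt.timedelta(days=183)

    class EventLog:
        """Hypothetical append-only store of automatically recorded events."""
        def __init__(self):
            self._events = []

        def record(self, kind: str, detail: str) -> None:
            # illustrative event kinds: "risk_situation", "substantial_modification",
            # "post_market_monitoring" -- labels invented for this sketch
            self._events.append((dt.datetime.now(dt.timezone.utc), kind, detail))

        def purge_expired(self) -> None:
            # drop entries older than the retention floor; other Union or
            # national law may require keeping logs longer
            cutoff = dt.datetime.now(dt.timezone.utc) - RETENTION
            self._events = [e for e in self._events if e[0] >= cutoff]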
Q: What information must providers of high-risk AI include in instructions for use?
A: Instructions must include the identity and contact details of the provider; the characteristics, capabilities and limitations of performance of the system; human oversight measures; expected lifetime and necessary maintenance; and a description of the mechanisms included for logging. They must be provided in an appropriate digital format or otherwise, and be concise, complete, correct and clear.
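Note (illustrative only): the required content items can be read as a checklist; a hypothetical Python dataclass makes that structure concrete. The field names are invented for this sketch.

    from dataclasses import dataclass

    @dataclass
    class InstructionsForUse:
        # one field per content item named in the card above
        provider_identity: str         # identity and contact details of the provider
        characteristics: str           # characteristics, capabilities and limitations
        oversight_measures: str        # human oversight measures
        lifetime_and_maintenance: str  # expected lifetime and maintenance
        logging_description: str       # description of the logging mechanisms

        def is_complete(self) -> bool:
            # crude completeness check: every required item must be filled in
            return all(vars(self).values())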
Q: What are the human oversight requirements for high-risk AI systems?
A: High-risk AI systems must be designed and developed so that natural persons can effectively oversee them, through appropriate human-machine interface tools and oversight measures either built into the system by the provider or identified by the provider for implementation by the deployer. The persons assigned oversight must be enabled to understand the system's capacities and limitations, monitor its operation, correctly interpret its output, decide not to use it or to disregard or override its output, and intervene in or interrupt its operation.
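Note (illustrative only): one concrete form an oversight measure can take is a wrapper that lets the assigned person halt the system and inspect its raw output before acting on it. This class and its names are hypothetical, not an interface the Act prescribes.

    class OversightWrapper:
        # wraps any callable model; `halted` is toggled from the overseer's interface
        def __init__(self, model):
            self.model = model
            self.halted = False

        def stop(self) -> None:
            # the "stop" control: overrides the system regardless of its state
            self.halted = True

        def predict(self, x):
            if self.halted:
                raise RuntimeError("system interrupted by human overseer")
            # return the raw output so the overseer can interpret it and
            # decide to use, disregard or override it
            return self.model(x)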
Q: What are the accuracy, robustness and cybersecurity requirements for high-risk AI?
A: High-risk AI systems must achieve an appropriate level of accuracy, robustness and cybersecurity, and perform consistently in those respects throughout their lifecycle. The relevant accuracy metrics must be declared in the accompanying instructions for use, the systems must be resilient to errors, faults and inconsistencies, and technical solutions must address AI-specific vulnerabilities such as data poisoning, model poisoning and adversarial (model-evasion) attacks.
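Note (illustrative only): a hypothetical record of the kind of metrics a provider might declare. The metric names, values and attack labels are invented for this sketch, not prescribed by the Act.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class DeclaredMetrics:
        accuracy: float          # e.g. top-1 accuracy on a held-out test set
        robustness_score: float  # e.g. accuracy under input perturbation
        attacks_tested: tuple    # e.g. AI-specific attacks evaluated against

    declared = DeclaredMetrics(accuracy=0.94, robustness_score=0.88,
                               attacks_tested=("data_poisoning", "model_evasion"))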
Q: Who is considered the “provider” of a high-risk AI system in various circumstances?
A: The provider is whoever develops an AI system (or has one developed) and places it on the market or puts it into service under their own name or trademark. A distributor, importer, deployer or other third party is considered the provider if they put their name or trademark on an already-marketed high-risk system, make a substantial modification to it, or modify its intended purpose such that it becomes high-risk. For an AI system that is a safety component of a product covered by Annex I legislation, the product manufacturer is considered the provider in certain circumstances.
Q: What are the obligations of deployers of high-risk AI systems?
A: Deployers must use high-risk AI in accordance with its instructions for use, assign human oversight to competent persons, monitor operation, inform the provider and relevant authorities and suspend use when they have reason to consider that use may present a risk, report serious incidents, keep automatically generated logs under their control, inform workers' representatives and affected workers before use at the workplace, comply with registration obligations, cooperate with competent authorities, and for certain systems perform a fundamental rights impact assessment.
Q: What must the fundamental rights impact assessment conducted by some deployers include?
A: It must include a description of the deployer’s processes, period/frequency of use, categories of affected persons, risks to those categories taking into account information from the provider, human oversight measures, and measures to take if risks materialize. It is done before deployment, can rely on prior assessments, and is notified to the market surveillance authority.
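Note (illustrative only): the assessment's required elements map naturally onto a structured record; this hypothetical Python dataclass mirrors the elements listed in the card above, with invented field names.

    from dataclasses import dataclass, field

    @dataclass
    class FundamentalRightsImpactAssessment:
        deployer_processes: str    # description of the deployer's processes
        period_and_frequency: str  # period and frequency of intended use
        affected_categories: list = field(default_factory=list)  # categories of affected persons
        risks_by_category: dict = field(default_factory=dict)    # risks, informed by provider information
        oversight_measures: str = ""   # human oversight measures
        mitigation_measures: str = ""  # measures to take if risks materialize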
Q: What are the notification and designation requirements for conformity assessment bodies (notified bodies)?
A: Member States designate notifying authorities to assess, notify and monitor conformity assessment bodies. Notified bodies must meet organizational, quality, resource, process, independence, confidentiality and competence requirements. Notifying authorities notify the Commission and Member States of bodies meeting the requirements.
Q: What are the operational obligations of notified bodies conducting conformity assessments?
A: Notified bodies must verify the conformity of high-risk AI systems according to the relevant conformity assessment procedures, avoid unnecessary burdens for providers, and make relevant documentation available to their notifying authority. They must inform that authority of certificates issued, refused, suspended, restricted or withdrawn, and inform other notified bodies of quality management system approvals they have refused, suspended or withdrawn and, on request, of approvals and assessment results they have issued.
Q: How can harmonised standards and common specifications be used to demonstrate conformity with the requirements for high-risk AI?
A: High-risk AI systems that conform to harmonised standards, or parts of them, whose references are published in the Official Journal of the European Union are presumed to comply with the corresponding requirements of the Act. In the absence of such harmonised standards, the Commission can adopt common specifications; conformity with those likewise gives a presumption of compliance with the requirements they cover.