Law and Tech: AI Act Flashcards

1
Q

when was the final version of the AI Act adopted?

A

March 13, 2024

2
Q

when did the EU Commission propose the AI Act?

A

2021

3
Q

what is the purpose of the AI Act?

A

1) to improve the functioning of the internal market;
2) to promote the uptake of human-centric and trustworthy AI.

4
Q

how does AI affect the environment?

A

1) carbon footprint: the energy required to train AI models is growing, which means more greenhouse gas emissions;
2) e-waste: disposal of hardware contaminates soil and water;
3) robotics: drones can threaten animals and birds.

5
Q

what article of the AI Act contains the purpose?

A

Art 1(2)

6
Q

what article of the AI Act contains the scope?

A

Art 2

7
Q

who does the AI Act apply to?

A

providers, deployers, importers and distributors of AI systems; product manufacturers; authorised representatives of providers; and affected persons located in the Union. (Essentially, the AI Act applies to those who want to commercialise AI.)

8
Q

what is reasonably foreseeable misuse?

A

the use of an AI system in a way that is not in accordance with its intended purpose, but which may result from reasonably foreseeable human behaviour or interaction with other AI systems.

9
Q

what is an AI system?

A

A machine-based system that is designed to operate with varying levels of autonomy and that infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments.

10
Q

what is a general-purpose AI model?

A

An AI model that displays significant generality and is capable of competently performing a wide range of distinct tasks, regardless of the way the model is placed on the market.

11
Q

what is a general-purpose AI system?

A

An AI system which is based on a general-purpose AI model and which has the capability to serve a variety of purposes.

12
Q

what is a provider?

A

A natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model, or that has one developed, and places it on the market or puts it into service under its own name or trademark.

13
Q

what is a deployer?

A

A natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal, non-professional activity.

14
Q

what is an authorised representative?

A

A natural or legal person located or established in the Union who has received a written mandate from a provider of an AI system or a general-purpose AI model to perform, on the provider's behalf, the obligations and procedures established by this Regulation.

15
Q

what is a distributor?

A

A natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market.

16
Q

what is an importer?

A

A natural or legal person located or established in the Union that places on the market an AI system that bears the name or trademark of a natural or legal person established in a third country.

17
Q

what is an operator?

A

A provider, product manufacturer, deployer, authorised representative, importer or distributor.

18
Q

which article of the AI Act contains the prohibited AI systems?

A

Art 5

19
Q

what are the prohibited AI systems?

A
  1. systems that deploy subliminal, manipulative or deceptive techniques;
  2. systems that exploit vulnerabilities;
  3. social scoring systems;
  4. criminal risk assessment systems;
  5. systems that create facial recognition databases through the untargeted scraping of facial images;
  6. systems that infer the emotions of people in workplaces and education institutions;
and two biometrics-related systems:
  7. biometric categorisation systems that categorise individuals on the basis of their biometric data;
  8. real-time remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement.

20
Q

what is Article 5(2) about?

A

Remote biometric identification systems (RBIS): their real-time use for law enforcement is permitted only in narrow cases, such as the targeted search for victims of abduction, human trafficking or sexual exploitation. The use of these systems must also comply with safeguards in accordance with national law, because the AI Act categorises RBIS as high-risk AI systems.

21
Q

what is Article 27 about?

A

Fundamental Rights Impact Assessment (FRIA): an assessment of how an AI system might affect fundamental rights (fundamental rights are not the same as human rights!). It does not apply to all AI systems, mainly just high-risk ones.

22
Q

what is Article 49 about?

A

Registering the AI system in the EU database. This is done before placing the system on the market, by providers (and, for certain systems, by deployers). However, not all AI systems have to be registered.

23
Q

what is each use of RBIS subject to?

A

Art 5(3): a prior authorisation granted by:
1) a judicial authority; or
2) an independent administrative authority whose decision is binding,
of the Member State in which the use is to take place.

24
Q

when is an AI system deemed to be high risk?

A

a) the system is intended to be used as a safety component of a product covered by the Union harmonisation legislation in Annex I; and
b) the product whose safety component is the AI system is required to undergo a third-party conformity assessment, with a view to the placing on the market or the putting into service of that product pursuant to the Union harmonisation legislation in Annex I (the AI system needs the third-party conformity assessment because it is covered by the legislation in Annex I).
* both of these conditions must be met.

25
Q

what is in Section A of Annex I?

A

machinery; safety of toys; boats for leisure; lifts; equipment in explosive atmospheres; radio equipment; pressure equipment; PPE; medical devices

26
Q

what is in Section B of Annex I?

A

marine equipment; motor vehicles; quadricycles and agricultural vehicles.

27
Q

what makes an AI system low risk?

A

if they perform a narrow procedural task, or if they are intended to improve the result of a previously completed human activity.

28
Q

in Annex III, what are the 8 additional areas in which systems may be considered high-risk?

A

biometric systems; critical infrastructure; education and vocational training; employment; access to essential private and public services; law enforcement; migration; and administration of justice.

29
Q

when are the systems in Annex III not high-risk?

A

if they do not pose a risk of harm to the health, safety or fundamental rights of natural persons.

30
Q

what is article 6(4) about?

A

providers whose system is listed in Annex III but who consider it not to be high-risk.

31
Q

what does article 6(4) require?

A

the provider must document written evidence of why the system is not high-risk before it is placed on the market; it is the provider who decides whether the system is high-risk or not.

32
Q

what is article 49(2) about?

A

it places an obligation on providers who believe their AI system is not high-risk to register the system in the EU database referred to in Art 71.

33
Q

what are we still unclear about?

A

what makes a system high-risk, as the EU Commission has still not provided guidelines.

34
Q

what should a high-risk AI system always have?

A

a risk management system (RMS)

35
Q

which articles contain the requirements for high-risk AI systems?

A

Arts 8-15

36
Q

what should the risk management system include?

A

1) identification of the reasonably foreseeable risks;
2) estimation of the risks that may emerge when the system is used;
3) evaluation of other possibly arising risks;
4) adoption of appropriate risk management measures to deal with the risks.

37
Q

what does article 13 say that high-risk systems should be?

A

they should be sufficiently transparent to enable deployers to interpret a system’s output. They should also be accompanied by instructions for use.

38
Q

what is article 14 about?

A

high-risk AI systems have to be designed in such a way that natural persons can effectively oversee them.