Module 6 Flashcards

1
Q

AI adoption generally falls into one of two broad categories; name them

A
  1. To perform an existing function in a new way
  2. To accomplish a new process that has not been done or was not possible before AI
2
Q

When using AI to perform an existing function in a new way, what laws and regulations would it have to adhere to?

A
  • Safety standards
  • Software liability
  • Consumer protection requirements
  • Data retention and disclosure rules
  • Any other existing frameworks it would otherwise be accountable under if the work were conducted manually by a human
3
Q

List 3 important questions that AI has raised in the context of law and regulation

A
  • How do the principles and protections of copyright laws apply to AI?
  • Can the output of AI be considered original and therefore warrant copyright protection?
  • How much human intervention/participation is necessary to meet the threshold of invention or development that can be patented? Where is the line and how do we measure it?
4
Q

What issues have been raised in relation to how the principles and protections of copyright laws apply to AI?

A

Data scraping and collection practices used to train generative AI systems are already putting pressure on our understanding of, and expectations around, intellectual property protections

5
Q

What decision was recently made in relation to AI and patents in the US?

A

A recent U.S. federal court decision determined AIs cannot be listed as “inventors” for the purposes of obtaining a patent

6
Q

What laws in the US will have to be interpreted to determine how and when they apply to AI technologies?

A

Related to employment:
  • Title VII
  • EEOC regulations
Related to consumer finance:
  • The Equal Credit Opportunity Act
  • The Fair Credit Reporting Act
  • SR 11-7
Related to safety:
  • OSHA’s guidelines for robotics safety and “hazard analysis”
  • The Food and Drug Administration’s approval processes for software as a medical device

7
Q

What is SR 11-7?

A

Supervisory guidance issued by the U.S. Federal Reserve that addresses model risk management

8
Q

A joint statement was published by the FTC and other US agencies clarifying that existing legal authorities apply to automated systems and innovative new technologies. Which authorities did they list?

A
  • The Consumer Financial Protection Bureau
  • The Department of Justice’s Civil Rights Division
  • The Equal Employment Opportunity Commission
  • The Federal Trade Commission
9
Q

What laws in the EU will have to be interpreted to determine how and when they apply to AI technologies?

A
  • European Union’s Digital Services Act
  • Local intellectual property and competition laws
  • Product safety laws
10
Q

Provide 2 examples where the EU Digital Services Act overlaps the GDPR with regard to transparency

A
  • Recommender systems (ML that recommends content or products): online platforms should ensure users are informed about how recommender systems affect the way information is displayed, and how and what information is presented
  • Online advertising: recipients should have information directly accessible from the online interface where an ad is presented, such as the parameters used to determine why the ad was directed to them (the logic used and whether it was based on profiling)
11
Q

When was the GDPR enacted?

A

May 2018

12
Q

What is the subject of Article 22 of the GDPR?

A

Automated decision-making

13
Q

When does Article 22 of the GDPR apply?

A

Where there is an impact for individuals that might be adverse or material

14
Q

What is the requirement of Article 22 of the GDPR?

A

The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her

15
Q

Describe the prohibition on automated decision-making in Article 22 of the GDPR

A
  • There is a general prohibition on automated decision-making that can have a serious effect on an individual
  • Some exceptions exist, for example fulfilment of a contract, explicit consent or necessity, but otherwise the prohibition is broad
  • It is not easy to pin down what a significant effect might be; effects are evaluated on a case-by-case basis
  • Understanding is still developing through court cases and through how the article is applied within different organizations
16
Q

What are the challenges in implementing Article 22 of the GDPR?

A
  • Getting explicit consent
  • Data Subject Rights
  • Automated decision-making that requires a manual review of the AI decision
17
Q

Describe how getting explicit consent may be challenging, in the context of Article 22 of the GDPR

A

For GDPR compliance, consent must be explicit, freely given and informed, and there must be an option to opt out
- Where will you obtain that consent, and where will individuals be able to opt out?
- What is the context of the decision? Fairness, lawfulness and transparency must be interpreted broadly so the data subject knows they are talking to a chatbot or robot and is fully aware of the implications of continuing and providing personal information

18
Q

Describe how data subject rights may be challenging, in the context of Article 22 of the GDPR

A

How will you enforce accuracy, correction and the right to erasure?
- If you remove or correct data in the training set, currently the only way to correct the model is to retrain it
- AI models do not dynamically update their inference based on new training data without going through a formalized training process

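The retraining constraint above can be seen in a minimal sketch (illustrative only, not from the course): a model's parameters are frozen at fit time, so deleting a record from the training set changes nothing until a formal retraining step is run.

```python
# Illustrative sketch: a trivial "model" whose parameters are fixed at
# training time, showing why honoring an erasure request requires a full
# retrain rather than a live update.

class MeanModel:
    """Predicts the mean of its training data; frozen after fit()."""
    def __init__(self):
        self.mean = None

    def fit(self, data):
        self.mean = sum(data) / len(data)
        return self

    def predict(self):
        return self.mean

training_set = [10.0, 20.0, 90.0]      # 90.0 belongs to the data subject
model = MeanModel().fit(training_set)
print(model.predict())                 # 40.0 — reflects the subject's data

# Deleting the record from the training set does NOT change the model...
training_set.remove(90.0)
print(model.predict())                 # still 40.0

# ...only a formalized retraining step propagates the erasure.
model.fit(training_set)
print(model.predict())                 # 15.0
```

Real AI systems behave the same way at much larger scale: inference reads learned parameters, not the raw training set, so erasure only takes effect after retraining (or a machine-unlearning procedure).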
19
Q

Describe how automated decision-making that requires a manual review of the AI decision may be challenging, in the context of Article 22 of the GDPR

A
  • The reviewer must be competent with the AI technology to know what to look for and to judge accurately whether the AI decision should be overturned
  • If the AI is a black box, it is very difficult for anyone to honor the right to review an automated decision, because the reviewer cannot understand how the AI reached it
20
Q

What is the process of redress?

A

A way for data subjects to register a formal complaint or request a review of an automated decision

21
Q

What is the subject of Article 35 of the GDPR?

A

DPIAs of high-risk processing

22
Q

The current draft of the EU AI Act includes a requirement to perform what assessment?

A

AI conformity assessment

23
Q

What does the requirement to perform an AI conformity assessment in the EU AI Act depend on?

A

The risk to health, safety and fundamental rights of individuals

24
Q

In the EU AI Act, can an AI conformity assessment be required when there is no personal information being processed?

A

Yes

25
Q

What makes DPIAs and AI conformity assessments similar?

A

Both are at heart a method for providing accountability in developing new technology and using data

26
Q

What is the focus of a DPIA?

A
  • The DPIA is focused on the processing of personal information, which is its underlying requirement
  • But also on the scenarios where that processing may result in a high risk to the rights and freedoms of individuals
27
Q

Are DPIAs always required?

A
  • No, because they are tied to personal information
  • But they are a best practice to understand the implications of AI processing
28
Q

What makes AI conformity assessments different?

A
  • They are also intended as an accountability tool
  • They are much broader and holistic in the way they analyze the technology
29
Q

What should you identify in an AI conformity assessment?

A
  • How the AI was developed
  • Its data set
  • How the learning process was developed
  • The AI behavior and potential impacts
30
Q

How do you ensure your DPIA and AI conformity assessments are effective?

A

Ensure they include:
- A privacy risk threat model
- Identification of potential harms and their mitigations
- Coverage of business, technical and risk aspects

31
Q

Why should your DPIA include a data flow diagram?

A

To ensure your mitigations are targeted and narrow (and therefore more effective)

32
Q

Which is more technical, the DPIA or the AI conformity assessment?

A

AI conformity assessment

33
Q

Which assessment might include a logic diagram, the DPIA or AI conformity assessment?

A

AI conformity assessment

34
Q

What is the subject of Recital 26 of the GDPR?

A

Pseudonymization and anonymization

35
Q

What are some of the issues to consider when deidentifying data for use in AI systems?

A
  • How can you deidentify at scale?
  • When do you do it (pre-processing, within the algorithm, on the output)?
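One common pre-processing answer to the scale question is keyed hashing of direct identifiers, sketched below (an assumption for illustration — the key handling and record shape are hypothetical). Note that under GDPR Recital 26 such tokens are pseudonymized rather than anonymized, because re-identification remains possible while the key exists.

```python
# Illustrative sketch: pseudonymizing a direct identifier at the
# pre-processing stage with a keyed hash (HMAC), so the same person maps
# to the same token without storing the raw value alongside the data.
import hashlib
import hmac

SECRET_KEY = b"rotate-and-store-separately"  # hypothetical key management


def pseudonymize(identifier: str) -> str:
    """Deterministic token; reversal requires access to the key."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()


records = [{"email": "alice@example.com", "score": 0.9}]
deidentified = [
    {"subject": pseudonymize(r["email"]), "score": r["score"]} for r in records
]
```

Because the function is deterministic, it scales to batch pipelines and still allows joining a subject's records across datasets without exposing the raw identifier.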
36
Q

List the 2 distinct approaches to product liability

A
  • Fault liability regimes
  • Strict liability regimes (sometimes referred to as no-fault liability regimes)
37
Q

Describe fault liability regimes

A
  • The individual must prove an action or inaction by the product maker caused the harm
  • For example, non-compliance with product safety law, or some form of negligence where they fail to exercise due care
38
Q

Describe strict liability regimes

A
  • The individual does not need to prove intentional wrongdoing or fault, only that a product defect caused harm
39
Q

In a strict liability regime, what is the process for determining whether compensation is required?

A
  • The AI product was defective
  • The victim suffered some sort of damage
  • The product’s defectiveness caused that damage
40
Q

What are the 2 big reasons why it is difficult to prove liability and to compensate for AI induced harm?

A
  • It is difficult to attribute harm due to the autonomous, constantly evolving and changing nature of AI systems
  • AI systems are highly complex and technical in nature, so it can be difficult to understand what led to harm
41
Q

Which 2 proposals did the EU publish in September 2022 to make it easier for victims to prove liability and receive compensation?

A
  • Reform of the EU 1985 Product Liability Directive
  • EU AI Liability Directive
42
Q

What important change did the EU make when reforming the EU 1985 Product Liability Directive?

A

AI and other software and digital products fall under the scope of regulated products

43
Q

Is the reformed EU 1985 Product Liability Directive a fault or strict liability regime?

A

Strict

44
Q

In a strict liability regime, who has the burden of proof?

A

The victim

45
Q

List the categories of damage covered by the reformed EU 1985 Product Liability Directive in relation to AI

A
  • Injury, death or psychological harm
  • Property or financial damage
  • Data loss or corruption
46
Q

According to the reformed EU 1985 Product Liability Directive, can a victim claim compensation for harms caused by products produced in another country?

A

Yes, it is extraterritorial

47
Q

Is the EU AI Liability Directive a fault or strict liability regime?

A

Fault

47
Q

Which Act does the EU AI Liability Directive reinforce?

A

The EU AI Act

48
Q

What 2 core measures are brought into force by the EU AI Liability Directive?

A
  • Courts are going to be empowered to order the disclosure of evidence about high-risk AI systems from providers
  • Courts will be able to presume a causal link between non-compliance with relevant laws and AI-induced harm
49
Q

Provide examples of the information courts will be able to order the disclosure of, according to the EU AI Liability Directive

A
  • Technical documentation the provider maintains
  • Information about how the AI system was developed
  • Which data it was trained on
  • What its purpose is
  • What model was used
  • What its architecture is
  • What testing and monitoring is being performed
50
Q

In a fault liability regime, who has the burden of proof?

A

The defendant

51
Q

Are the following directives in force? The reform of the EU 1985 Product Liability Directive, and the EU AI Liability Directive.

A

No, they remain under consideration and negotiation by EU institutions

52
Q

How are product liability laws managed in the US?

A
  • Determined at the state level, vary widely
  • Much of the law is court-made; because there is little case law regarding AI, there is a lot of uncertainty about how product liability applies to AI
53
Q

List 3 types of liability claim in the US

A
  • Strict liability, where victims have to prove they were harmed by a defective product
  • Negligence, where the product maker has failed to exercise due care, and this led to harm
  • Breach of warranty, where promises about a product have not been met and some sort of harm has been caused
54
Q

What is the key difference between product liability laws in the EU and the US?

A

Within the US system, it is not clear whether AI systems are products (in the EU they have made this very clear)

55
Q

What is the importance of the Rodgers v. Christie case in the US in 2020?

A
  • The AI system recommended that an inmate be released pre-trial; shortly afterward, the plaintiff’s son was murdered
  • The U.S. District Court of New Jersey ruled that AI software does not qualify as a product
  • The court held that information, guidance, ideas and recommendations are not products
  • Because the AI system was not classified as a product, the court was unable to assign product liability
56
Q

What is the importance of the ongoing Connecticut Fair Housing Center v. CoreLogic Rental Property Solutions case in the US?

A
  • The plaintiff argued that the AI model used to reject the plaintiff’s disabled son’s tenancy application breached fair housing requirements
  • CoreLogic is the AI vendor providing the technology landlords use to decide who should and shouldn’t be able to live in certain properties
  • The way the case is going, the vendor may be held liable
57
Q

What has the FTC in the US communicated in relation to product liability and AI?

A
  • FTC standards on non-deception and fair use
  • Recently the FTC warned that unsubstantiated claims about the accuracy or efficacy of biometric information technologies, such as facial recognition software, may violate the FTC Act
57
Q

What is the US federal government doing to address product liability and AI?

A
  • The White House recently published the Blueprint for an AI Bill of Rights
  • There are several important guidance documents from the FTC and FDA
  • Application of the NIST AI Risk Management Framework
  • Publication of Presidential Executive Order 14091 on advancing racial equity, which states that federal agencies must root out bias in the design and use of AI
58
Q

How can organizations prepare in the absence of legislation and regulation in relation to product liability and AI in the US?

A
  • As the law develops, organizations will increasingly be exposed to risk of litigation, compensating victims and disclosing sensitive information on AI systems and practices
  • Organizations need to be educated about possible harms and relevant legislation, and they need to take product safety seriously