7. Existing and Emerging Laws Flashcards

1
Q

What is the world's first comprehensive regulation of AI?

A

The EU AI Act, which reached provisional agreement on December 8, 2023.

2
Q

What is the EU AI Act?

A
  1. Is a risk-based regulation: the higher the risk, the stricter the rules.
  2. Has far-reaching provisions for organizations that use, design or deploy AI systems. (Like the GDPR’s impact on the processing of personal data, the Act is expected to have a
    global impact.)
  3. Aligns with the approach proposed by the OECD to ensure the definition of an AI system provides clear criteria for distinguishing AI from simpler software systems.
3
Q

What is the scope of the EU AI Act?

A

The regulation applies to all systems placed in the EU market or used in the EU, including
those from providers who are not located in the EU.

4
Q

What is the purpose of the EU AI Act?

A
  1. Regulate AI
  2. Address potential harms
  3. Ensure AI systems reflect EU values and fundamental rights
  4. Ensure legal certainty to promote investment and innovation
  5. Align organizations’ use of AI with EU core values and rights of individuals:
    - Protect individuals from harm
    - Provide organizations with legal bases for using AI in its current state and as the
    technology advances
5
Q

What is the EU AI Act's applicability?

A

Applies to:

  • All providers and users situated in EU member states
  • Providers not located in the EU but providing products for use in the EU
  • Operators located outside of the EU producing output to be used in the EU
6
Q

What is the differentiation between “providers” and “deployers” under the EU AI Act?

A

Providers:

  • Develop AI systems (usually to place on the market or put into service)
  • Sell AI systems for use or make available through other means
  • Majority of compliance obligations and requirements will apply to providers

Deployers:

  • Organizations, individuals or other entities that use AI systems for specific purposes or goals
  • AI system is considered “under the user’s authority,” except where the system is used for
    personal, non-professional activities
  • May also be referred to as “users”
7
Q

What are exemptions to the EU AI Act?

A

Exemptions to the Act include:

  1. AI used in a military context, including national security and defense
  2. AI used in research and development, including in the private sector
  3. AI used by public authorities in third countries and international organizations under international agreements for law enforcement or judicial cooperation
  4. AI used by people for non-professional reasons
  5. Open-source AI (in some cases)
8
Q

What are the four risk categories classified by the EU AI Act?

A
  1. Unacceptable risk
  2. High risk
  3. Limited risk
  4. Minimal or no risk
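As a study aid, the four tiers and the broad shape of their obligations can be captured in a small lookup table. The tier names follow the card above; the one-line summaries are paraphrases for revision purposes, not the Act's wording:

```python
# Hypothetical study aid, not part of the Act: map each of the four
# risk tiers named in the EU AI Act to a one-line paraphrase of the
# kind of obligation that tier carries.

RISK_TIERS = {
    "unacceptable": "prohibited -- may not be placed on the EU market",
    "high": "strict obligations: conformity assessment, registration, logging, human oversight",
    "limited": "transparency obligations (e.g., disclose that a chatbot is AI)",
    "minimal": "no mandatory obligations; voluntary codes of conduct",
}

def obligation_summary(tier: str) -> str:
    """Look up the obligation summary for a risk tier (case-insensitive)."""
    return RISK_TIERS[tier.strip().lower()]
```

A deck reviewer could quiz themselves with `obligation_summary("limited")` and check the answer against the cards that follow.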
9
Q

What are unacceptable risks under the EU AI Act?

A

  1. Social credit scoring systems
  2. Emotion-recognition systems used in law enforcement, border patrol and educational institutions
  3. AI that exploits a person's vulnerabilities, such as age or disability
  4. Behavioral manipulation and techniques that circumvent a person's free will
  5. Untargeted scraping of facial images to use for facial recognition
  6. Biometric categorization systems using sensitive characteristics
  7. Specific predictive policing applications
  8. Real-time biometric identification by law enforcement in publicly accessible spaces, except certain limited, pre-authorized situations
10
Q

What is important to know about the EU AI Act’s risk categories?

A
  1. Each risk level has a different level of compliance obligation
  2. Provides flexibility and adaptability for the Act
  3. Provides clear guidance for organizations
  4. Providers and, in some cases, users/deployers, will be required to:
    - Process AI use in accordance with the risk level
    - Document AI use
    - Audit AI use
11
Q

What are high risks under the EU AI Act?

A

Majority of the Act will apply to AI that falls into the high-risk category

  • Specific articles within the Act outline requirements
  • Will require conformity assessments (CAs), among other obligations, to ensure the system is safe before it goes on the market or into use
12
Q

What are two high-risk subcategories under the EU AI Act?

A
  1. Product safety: AI used as a safety component of a product covered by EU legislation, such as toys, machinery, medical devices, aviation, vehicles and railways
  2. Systems that pose a significant risk of harm to health, safety or fundamental rights:
  • Biometric identification and categorization of natural persons
  • Management and operation of critical infrastructure (such as gas and electricity)
  • Education and vocational training
  • Employment, worker management and access to self-employment
  • Access to and enjoyment of essential private services and public services and benefits (e.g., emergency service dispatching)
  • Law enforcement
  • Migration, asylum and border control management
  • Administration of justice and democratic processes
13
Q

What are the provider requirements for managing high-risk under the EU AI Act?

A
  1. Implement a risk management system
    - Identify and analyze risks posed by the AI system, and add measures to minimize and
    mitigate risks
    - Provide technical documentation of the risks and mitigatory processes
    - Maintain and update documentation even after product release
  2. Manage data and data governance
    - Ensure input data is relevant for the purpose, free of errors, representative and complete
    - Robust data management: collection, annotation, labelling, cleaning; examination for biases
    (providers may process special category personal data to monitor, detect and correct bias)
    - Monitor performance and safety; take corrective steps for nonconforming systems
  3. Register in the public EU database of high-risk AI systems before placing them on the market
  4. Keep logs in an automatic, documented manner (e.g., inputs and outputs should be traceable)
  5. Comply with transparency measures about the provider and how the system was built, and provide instructions for use
    - Must be clear, concise and relevant
    - How to use system safely: system maintenance, capabilities and limitations, how to implement human oversight
  6. Develop the system in a way that allows for human oversight
    - Humans must be able to oversee processes and understand how the system works,
    understand and interpret output and intervene to stop or override the AI outputs
  7. Ensure the system performs consistently to achieve its intended purpose
    - Test regularly for accuracy and robustness
    - Build with resilience to cybersecurity threats
  8. Create a quality management system and undertake a conformity assessment
    - Quality management: strategy for regulatory compliance, build standards, post-market
    monitoring
    - Conformity assessment (CA): demonstrate compliance prior to marketing; may be self-assessed, or may require a third-party assessment, depending on various factors
  9. Report serious incidents and malfunctions that lead to breach of fundamental rights
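Item 4 in the list above (keeping logs in an automatic, documented manner so that inputs and outputs are traceable) can be sketched as a minimal logging helper. This is a hypothetical illustration for study purposes, not a compliance implementation; the record fields and function name are assumptions, not part of the Act.

```python
# Hypothetical sketch of automatic, traceable logging for a high-risk
# AI system: every inference is recorded with a unique id, a UTC
# timestamp, the model version, and the exact inputs and outputs.
import datetime
import json
import uuid

def log_inference(log: list, model_version: str, inputs: dict, outputs: dict) -> str:
    """Append one traceable inference record to the log and return its id."""
    record_id = str(uuid.uuid4())
    log.append(json.dumps({
        "record_id": record_id,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "outputs": outputs,
    }))
    return record_id

audit_log: list = []
rid = log_inference(audit_log, "v1.0",
                    {"text": "loan application"},
                    {"decision": "refer to human reviewer"})
```

Serializing each record as JSON keeps the log machine-readable, which supports the Act's audit and post-market monitoring duties described in the surrounding cards.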
14
Q

What are high-risk areas requiring registration under the EU AI Act?

A

EU-wide database for high-risk AI systems

  • Public, accessible by anyone
  • Operated and owned by the European Commission
  • Data provided by providers
  • Providers must register prior to placing system on the market
15
Q

What are high-risk areas requiring notification under the EU AI Act?

A

Providers must establish and document a post-market monitoring system

  • Track how the AI system is performing (What the AI system is doing after it has been sold)
  • Report any serious incident or malfunctioning which is, may be, or could become a breach of the obligations to protect fundamental rights (If an incident occurs: required to report to local market surveillance authority)
16
Q

What requirements apply to deployers, importers and distributors for high-risk systems under the EU AI Act?

A
  • Complete an FRIA before putting AI system into use (for services of general interest like banks, schools, hospitals and insurers, for high-risk systems)
    -Verify compliance with the Act and ensure required documentation is available, including
    instructions for use
  • Communicate with the provider and regulator as required
  • Ensure CA has been completed and is marked on the product
  • Do not put the product on the market if there is reason to believe it does not conform with the provider’s requirements
  • Deploy in accordance with the instructions for use
  • Monitor AI systems and suspend use if any serious issues occur (as defined in the Act)
  • Update provider or distributor about serious incidents or malfunctions
  • Maintain automatically generated logs
  • Assign human oversight to appropriate individuals
  • Cooperate with regulators as necessary
  • Comply with GDPR where relevant
  • Ensure input data is relevant to the use of the system
  • Inform people when they might be subject to the use of high-risk AI
17
Q

What are EU AI Act requirements for limited risks?

A

Primary compliance focuses on transparency:

  1. Providers must inform people from the outset that they will be interacting with an AI system (e.g., chatbots)
  2. Deployers must:
    - Inform, and obtain the consent of, those exposed to permitted emotion recognition or biometric categorization systems
    - Disclose and clearly label visual or audio deepfake content that was manipulated by AI
  3. Applies to the following techniques, systems and uses:
    - Systems designed to interact with people (e.g., chatbots)
    - Systems that can generate or manipulate content
      * Large language models (e.g., ChatGPT)
      * Systems that can create deepfakes
18
Q

What are EU AI Act requirements for minimal or no risk?

A

Examples include:

  • Spam filters
  • AI-enabled video games
  • Inventory management systems

Codes of conduct may eventually be created by industry/specific use; these would be voluntary

19
Q

What are penalties under the EU AI Act?

A
  • Highest penalty is reserved for using prohibited AI: up to €35 million or 7% of global turnover for the preceding fiscal year, whichever is higher
  • Penalty for most other instances of noncompliance is lower, but can still reach up to €15 million or 3% of global turnover for the preceding fiscal year, whichever is higher
  • More proportionate caps on fines apply to startups and small and medium-sized enterprises
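The "whichever is higher" rule in the bullets above is simple arithmetic: the fine ceiling is the maximum of a fixed euro cap and a percentage of global turnover. A minimal sketch, using the widely reported EUR 35 million / 7% figures for prohibited AI as assumptions:

```python
# Hypothetical illustration of the "whichever is higher" fine ceiling.
# The default figures (EUR 35M cap, 7% of global turnover) are the
# widely reported values for prohibited-AI violations; treat them as
# assumptions for study purposes, not legal advice.

def fine_ceiling(turnover_eur: float,
                 cap_eur: float = 35_000_000,
                 pct: float = 0.07) -> float:
    """Return the higher of the fixed cap and pct of global turnover."""
    return max(cap_eur, pct * turnover_eur)

# A company with EUR 1 billion turnover: 7% (EUR 70M) exceeds the cap,
# so the percentage governs; at EUR 100 million turnover, the fixed
# EUR 35M cap governs instead.
```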
20
Q

When will the EU AI Act come into effect?

A

The provisional EU AI Act agreement provides that the Act should apply two years after it comes into effect, with some exceptions for specific provisions.

21
Q

What are foundation/general-purpose AI models and systems under the EU AI Act?

A
  • Usually referred to as GPAI (general-purpose AI)
  • An AI model that displays significant generality and can perform a wide range of distinct tasks, regardless of how the model is released on the market (can be integrated into a variety of downstream systems or applications)
  • A new categorization was created: "high-impact GPAI models with systemic risk" and all other GPAI
  • For "all other" GPAI, obligations include:
    - Maintaining technical documentation
    - Making information available to downstream providers who integrate the GPAI model into their AI systems
    - Complying with EU copyright law
    - Providing summaries of training data
  • For GPAI "with systemic risk" (a definition based on computing power), compliance requirements are substantial: the four items above, plus:
    - Assessing model performance
    - Assessing and mitigating systemic risks
    - Documenting and reporting serious incidents and action(s) taken
    - Conducting adversarial testing of the model (also known as "red-teaming")
    - Ensuring security and physical protections are in place
    - Reporting the model's energy consumption
22
Q

What are the approaches of emerging AI regulations?

A
  1. Specific areas of focus:
    * Automated decision-making
    * Industry-based: e.g., health care, finance, transportation
    * Employment
  2. Overarching regulations: e.g., the EU AI Act
  3. Amending existing laws and regulations; e.g., Brazil
23
Q

What do proposed AI regulatory frameworks build on?

A

Proposed regulatory frameworks often build on existing data protection and privacy laws:

  • Requiring similar risk assessments and auditing processes
  • Transparency is a primary concern
24
Q

What countries have emerging AI legislation?

A

Australia, Canada, China, the EU, South Africa and the UK.

25
Q

What is Canada’s emerging AI legislation?

A

The Digital Charter Implementation Act of 2022 (Bill C-27)

26
Q

What does Bill C-27 add to privacy legislation?

A

The Artificial Intelligence and Data Act

27
Q

What are the main specifications of Canada’s Artificial Intelligence and Data Act?

A
  1. Applies to the private sector
  2. Allows for provincial laws and further federal legislation
  3. Looks at automated decision-making as a whole, not just AI
  4. Risk-based, focusing on high-impact AI
    * Nature and severity of harms
    * Scale of use of the system
    * Extent to which individuals can opt out or control interaction
    * Imbalance of economic and social status of individuals interacting with the system
  5. Considers whether the risks are already regulated through other means
28
Q

What federal AI guidance is in place in the US?

A
  1. Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial
    Intelligence
    * Provides standards around privacy, security and safety in AI use
    * Seeks to protect workers and equity while also encouraging competition
  2. Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People
    * Five principles to guide the design, use and deployment of automated systems
  3. Executive Order on Further Advancing Racial Equity and Support for Underserved Communities
    Through the Federal Government
    * Directs federal agencies to identify and remove bias in their design and use of new
    technologies
  4. No specific overarching AI laws, but rather focused on specific sectors and areas of research:
    * John S. McCain National Defense Authorization Act for Fiscal Year 2019
      - Directed the Department of Defense to appoint a coordinator to oversee AI
    * National AI Initiative Act of 2020
      - Focused on expanding AI research and development between defense/intelligence communities and civilian federal agencies
29
Q

What are existing AI regulatory requirements in the US?

A

In the short term, AI may be regulated primarily by existing laws and regulatory agencies, for example:

  • Federal Trade Commission: considers Section 5 of the FTC Act (unfair and deceptive business practices) to apply to AI/ML

  • Consumer Financial Protection Bureau: creditors must explain the specific reasons why an adverse credit decision was taken (applies to "black box" models and other complex algorithmic models)

30
Q

What are state privacy laws in the US?

A

New York City’s Local Law 144 requires bias audits of AI-enabled employment tools

California’s B.O.T. Act prohibits the undisclosed use of bots to incentivize a sale

Comprehensive state privacy laws also include the right to opt out of automated decision-making

31
Q

What is the objective of the Cyberspace Administration of China?

A
  1. Oversees cyberspace security and internet content regulations
  2. Its AI rules apply to services available to the general public in China
  3. Research institutions are exempt
  4. Requires generative AI service providers to conduct security reviews and register algorithms with the government if the service is capable of influencing public opinion or can “mobilize” the public
  5. Rights-based approach
    * Overview of end-user rights, e.g., clear notice, ability to opt out of personalized recommendations, prohibition on price differentiation
    * Clear directives for companies using AI for online recommendations, social media and gaming
32
Q

Do municipalities in China have AI governance oversight?

A

Yes.

  • Oversight for compliance and development, including audits
  • Have enacted, or are contemplating, bans on AI that threatens national security, personal privacy or health, or that discriminates
  • Potentially ban development or use of “metaverse-related” technology, i.e., technology used to create and manage digital entities, such as virtual assistants and chatbots