Module 7: Existing and Emerging AI Laws and Standards: Standards and Risk Management Frameworks Flashcards

1
Q

Name some standards and risk management frameworks organizations can utilize to help govern their AI programs.

A
  • EU AI Act
  • Singapore’s Model AI Governance Framework
  • ISO/IEC 42001:2023
  • ISO 31000:2018
  • NIST AI RMF
  • HUDERIA
  • IEEE 7000-21
  • ISO/IEC Guide 51:2014
2
Q

ISO/IEC 42001:2023 - Artificial Intelligence Management System

A
  • Provides guidance for using AI responsibly and effectively, covering the various aspects of AI and the different applications an organization may use. Takes an integrated approach to managing AI projects, from risk assessment through to effective treatment of those risks.
  • Applies to organizations of any size and industry involved in developing, providing or using AI-based products or services.

Process:
- Integrate the AI management system into the organization’s processes and overall management structure.
- Consider specific issues related to AI in the design of processes, information systems and controls, such as:
  - Determining organizational objectives, involvement of interested parties and organizational policy
  - Managing risks and opportunities
  - Processes for managing concerns related to the trustworthiness of AI systems
  - Processes to manage suppliers, partners and third parties that provide or develop AI systems for the organization
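
As a concrete illustration only, the Python sketch below shows how an organization might keep a simple register of its AI systems, interested parties, risks, treatments and third-party providers to support such a management system. The record fields, the one-to-one pairing of risks with treatments and the example entries are assumptions made for this sketch, not requirements of ISO/IEC 42001.

```python
# Hypothetical AI system register supporting an AI management system.
# Field names and the risk-to-treatment pairing are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    objective: str                 # organizational objective the system serves
    interested_parties: list[str]  # e.g., customers, regulators, employees
    supplier: str | None = None    # third party providing or developing the system
    risks: list[str] = field(default_factory=list)
    treatments: list[str] = field(default_factory=list)  # treatment for each risk, in order

    def untreated_risks(self) -> list[str]:
        """Risks recorded without a corresponding treatment."""
        return self.risks[len(self.treatments):]

register = [
    AISystemRecord(
        name="resume-screening-model",
        objective="shortlist job applicants",
        interested_parties=["applicants", "HR", "works council"],
        supplier="Vendor X",
        risks=["discriminatory outcomes", "opaque decisions for applicants"],
        treatments=["bias testing before each release"],
    ),
]

for record in register:
    for risk in record.untreated_risks():
        print(f"{record.name}: risk '{risk}' has no documented treatment")
```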

3
Q

ISO 31000:2018 - Risk Management - Guidelines

A

Breaks risk management down into principles, a framework and a process.

8 risk management principles:
1) Inclusive
2) Dynamic
3) Best available information
4) Human and cultural factors
5) Continuous improvement
6) Integration
7) Structured and comprehensive
8) Customized

6 areas of focus:
1) Leadership
2) Integration
3) Design
4) Implementation
5) Evaluation
6) Improvement

Process:
- Identify risks
- Evaluate the probability of a risk event occurring
- Determine the severity of the effects of a risk event occurring
- Not intended to eliminate risks, but to identify them and implement mitigation strategies and processes.
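
To make this flow concrete, here is a minimal risk-register sketch. The 1-5 likelihood and severity scales and the likelihood x severity score are assumptions for illustration; ISO 31000 does not prescribe a scoring formula.

```python
# Illustrative risk register: identify risks, evaluate likelihood, determine
# severity, then prioritize and mitigate. Scales and scoring are assumed,
# not taken from ISO 31000.
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    likelihood: int   # 1 (rare) to 5 (almost certain)
    severity: int     # 1 (negligible) to 5 (critical)
    mitigation: str = ""

    @property
    def score(self) -> int:
        return self.likelihood * self.severity

risks = [
    Risk("training data includes personal data without a lawful basis", 3, 5),
    Risk("model drift degrades accuracy over time", 4, 3,
         mitigation="scheduled re-evaluation against a holdout set"),
]

# Highest-scoring risks first; the aim is mitigation, not elimination.
for r in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"[{r.score:>2}] {r.description} -> {r.mitigation or 'no mitigation recorded'}")
```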

4
Q

United States National Institute of Standards and Technology, AI Risk Management Framework (NIST AI RMF)

A

Provides practical guidance on risk management activities for AI, organized around characteristics of trustworthy AI and four core functions:

7 characteristics of “trustworthy” AI:
1) Valid and reliable
2) Safe
3) Secure and resilient
4) Accountable and transparent
5) Explainable and interpretable
6) Privacy-enhanced
7) Fair, with harmful bias managed

4 core functions:

Govern: organizational processes and activities to assess and manage risk

Map: identify use and risks related to use

Measure: assess, analyze and track risks

Manage: prioritize risks and act based on projected impact (see the sketch below)

Key steps (TEVV):
- Test
- Evaluate
- Verify
- Validate
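
As an illustration of how a single use case might move through these functions, the sketch below is a hypothetical example; the dictionary layout, the failure-rate threshold and the TEVV status fields are assumptions, not part of the NIST AI RMF.

```python
# Hypothetical walk-through of one AI use case across Map, Measure and Manage.
# Structure, field names and the 5% threshold are illustrative assumptions.

use_case = {
    "system": "customer-support chatbot",
    # Map: identify the use and the risks related to that use
    "mapped_risks": ["hallucinated answers", "leakage of customer data"],
    # Measure: assess, analyze and track each risk (hypothetical TEVV results)
    "measurements": {
        "hallucinated answers": {"tested": True, "failure_rate": 0.08},
        "leakage of customer data": {"tested": False, "failure_rate": None},
    },
}

# Manage: prioritize risks and act based on projected impact
for risk in use_case["mapped_risks"]:
    result = use_case["measurements"].get(risk, {})
    if not result.get("tested"):
        print(f"MANAGE: '{risk}' has not completed test/evaluate/verify/validate")
    elif result["failure_rate"] > 0.05:  # illustrative tolerance, not from NIST
        print(f"MANAGE: '{risk}' exceeds tolerance ({result['failure_rate']:.0%}); treat first")
```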

5
Q

Council of Europe’s Human Rights, Democracy, and the Rule of Law Assurance Framework for AI Systems (HUDERIA)

A

General guidance:
- Develop impact assessment models that incorporate human rights with AI-centered approaches
- Apply a risk-based approach based on specific principles
- Formulate a methodology of impact assessments that follow the proportionality principle
- Develop a method for assessing and grading the likelihood and extent of risks associated with an AI system
- Consider the contexts and purposes of use, the underlying technology, the stage of development and the stakeholders involved

8 Principles:
1) Human dignity: Humans should be treated as moral subjects and not as objects to be algorithmically manipulated
2) Human freedom and autonomy: Humans should be informed and empowered to act; systems should enrich humans, not control or condition them
3) Prevention of harm: AI systems must not be permitted to adversely impact humans’ mental or physical health, or planetary health
4) Non-discrimination: AI systems must be fair, equitable and inclusive in their beneficial impacts and in the distribution of risks
5) Transparency and explainability: AI use and the rationale for this use must be made clear to affected individuals
6) Data protection and the right to privacy: Informed, freely given and unambiguous consent for AI use when personal information is involved
7) Democracy: Transparent and inclusive oversight mechanisms to ensure the safeguarding of the above principles
8) Rule of Law: AI systems must not undermine judicial independence, due process, etc.

Process:
- Identify relevant human rights that could be adversely impacted
- Assess the impact on those rights
- Assess governance mechanisms to ensure the mitigation of risks, stakeholder involvement, effective remedy, accountability and transparency
- Monitor and evaluate the system continuously for sufficient response to changes in context and operation
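
One way to picture the output of this process is the hypothetical assessment record sketched below; the field names, grading labels and example impacts are illustrative assumptions, not part of the HUDERIA guidance.

```python
# Hypothetical human rights impact assessment record for one AI system.
# Rights, grading labels and mitigations are illustrative assumptions.

assessment = {
    "system": "automated benefits eligibility screening",
    "impacts": [
        {"right": "non-discrimination", "likelihood": "high", "extent": "severe",
         "mitigations": ["bias audit", "human review of denials"]},
        {"right": "privacy", "likelihood": "medium", "extent": "moderate",
         "mitigations": []},
    ],
    "monitoring": "quarterly review for changes in context and operation",
}

# Flag impacts that still lack a governance mechanism to mitigate them
for impact in assessment["impacts"]:
    if not impact["mitigations"]:
        print(f"Unmitigated impact on the right to {impact['right']} "
              f"({impact['likelihood']} likelihood, {impact['extent']} extent)")
```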

6
Q

IEEE 7000-21 Standard Model Process for Addressing Ethical Concerns during System Design

A

Outlines how organizations can consider ethical values throughout system design.

- Purpose: enable organizations to design systems with explicit consideration of individual and societal ethical values, such as transparency, sustainability, privacy, fairness and accountability, as well as values typically considered in systems engineering, such as efficiency and effectiveness
- Projects conforming to this standard balance management commitments to time and budget constraints with long-term values of social responsiveness and accountability
- Emphasizes management and engineering communication with stakeholders
- Describes processes for traceability of ethical values in the concept of operations, ethical requirements and ethical risk-based design (see the sketch below)
- Relevant to organizations of all sizes and types using their own life cycle models

Defines the term “ethical” as supporting the realization of positive values or reduction of negative values.
The ethically-aligned processes described are performed during two stages in the system life cycle:
1) Concept exploration
2) Development
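
The sketch below illustrates one possible (hypothetical) way to record traceability from elicited values to ethical requirements and design elements; the example values and mapping are assumptions, not content from the standard.

```python
# Hypothetical value traceability map: each value is traced to ethical
# requirements and then to concrete design elements; gaps are easy to spot.

value_traceability = {
    "transparency": {
        "ethical_requirements": ["users are told when they interact with an AI system"],
        "design_elements": ["disclosure banner in the chat UI"],
    },
    "privacy": {
        "ethical_requirements": ["conversation logs retained no longer than 30 days"],
        "design_elements": [],  # value elicited but not yet traced into the design
    },
}

for value, trace in value_traceability.items():
    if not trace["design_elements"]:
        print(f"Value '{value}' has ethical requirements but no traced design element")
```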

7
Q

ISO/IEC Guide 51:2014

A

Intended for those who draft standards.

Provides guidance on including safety aspects in standards and serves as a reference for other stakeholders

Goals:
- Reduce risk arising from the use of products or systems
- Reduce risk across design, production, distribution, use and destruction or disposal of systems or products
- Achieve tolerable risk for people, property and the environment

Note: The EU AI Act uses similar terminology and processes in identifying risk factors.
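
Tolerable risk is typically reached iteratively: estimate the risk, apply protective measures, and re-evaluate until the residual risk is tolerable. The sketch below illustrates that loop; the numeric scores, the threshold and the assumed effect of each measure are invented for illustration and are not taken from the guide.

```python
# Illustrative "reduce risk until tolerable" loop. Scores, threshold and the
# assumed reduction from each protective measure are invented for this sketch.

TOLERABLE = 4      # hypothetical tolerance threshold
risk_score = 20    # hypothetical initial estimate (likelihood x severity)

protective_measures = [
    ("inherently safe design", 10),
    ("guards and protective devices", 4),
    ("information for use and warnings", 2),
]

for measure, reduction in protective_measures:
    if risk_score <= TOLERABLE:
        break
    risk_score -= reduction
    print(f"Applied '{measure}': residual risk = {risk_score}")

print("Tolerable risk achieved" if risk_score <= TOLERABLE else "Further risk reduction needed")
```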

8
Q

When creating a risk management strategy, what factors should be taken into consideration?

A
  • Jurisdictional requirements
  • Requirements that align with your organization’s principles and purpose for using AI