6. AI Governance and Risk Mitigation Flashcards

1
Q

What are the risks that AI algorithms and models pose?

A
  1. Security and operational risk
  2. Privacy risk
  3. Business risk
2
Q

What are the common risks associated with generative AI?

A
  1. Hallucinations: The generative AI creates content that contradicts the source or is factually incorrect, presented under the appearance of fact
  2. Deepfakes: Audio, video or images that have been generated or altered to the extent that they portray a different reality
  3. Training data poisoning: Altering the training data set, leading to an overall reduction in model performance due to “bad” input; commonly happens when attackers use AI to attack other AI models
  4. Data leakage: Unauthorized disclosure of data to a third party; common with federated learning
  5. Filter bubbles/echo chambers: The generative AI repeats back to the user what they already believe or have already told it, providing no new insights or information
3
Q

What security risks does AI pose?

A
  1. AI can concentrate power in the hands of a few individuals or organizations, leading to the erosion of individual freedom
  2. Overreliance on AI leads to a false sense of security
    * Security programs that use AI include those that look for malicious activity or for patterns that indicate a denial-of-service attack
    * Users can become so reliant on these systems that they miss security holes
  3. AI systems are vulnerable to adversarial machine learning attacks (a minimal sketch follows this list)
    * The attacker manipulates an AI model's input data to change the model's output
    * This causes AI systems to make incorrect decisions, which can lead to security breaches or further data loss
  4. Misuse of AI may lead to major security risks, for example:
    * Transfer learning: repurposing a trained model for unintended uses
    * Algorithms: An attacker trains and uses the algorithm to hack other systems (e.g., other AI systems and health care systems)
    * Storing training data in a less secure environment outside of production (e.g., sandbox, development or QA)
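
A minimal, illustrative sketch of the adversarial input-manipulation idea from item 3: the fast gradient sign method nudges each input feature in the direction that increases the model's loss, weakening the model's output. The toy logistic-regression weights and the epsilon value below are assumptions made up for the example, not from any particular system.

    import numpy as np

    # Toy logistic-regression "model" with illustrative, hard-coded weights.
    w = np.array([1.5, -2.0, 0.5])
    b = 0.1

    def predict(x):
        """Probability that input x belongs to class 1."""
        return 1.0 / (1.0 + np.exp(-(w @ x + b)))

    def fgsm_perturb(x, y_true, eps=0.3):
        """Fast gradient sign method: shift each feature of x a small step
        (eps) in the direction that increases the model's loss."""
        p = predict(x)
        grad = (p - y_true) * w  # d(cross-entropy loss)/dx for logistic regression
        return x + eps * np.sign(grad)

    x = np.array([0.2, -0.4, 1.0])
    x_adv = fgsm_perturb(x, y_true=1.0)
    print(predict(x))      # ~0.85: model is confident the input is class 1
    print(predict(x_adv))  # ~0.62: a small perturbation erodes that confidence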
4
Q

What are the operational risks associated with AI algorithms?

A
  1. High costs
    * Hardware: AI systems require powerful hardware, including high-end central processing units (CPUs) and specialized processors such as graphics processing units (GPUs)
    * Storage: AI systems require large amounts of training data; a single training set may contain over 500,000 pieces of data
    * High-speed network: 10 GbE or faster
    * Skilled professionals: No-code and low-code systems exist, but an organization developing its own AI model will need data scientists, who typically command high salaries and must be hired, retained and trained to keep their skills current
    * Environmental: harm to the environment (e.g., an increased carbon footprint, or greater resource utilization leading to natural resource depletion), as well as the cost of running green/environmentally friendly operations
  2. Data corruption and poisoning
    * Happens if data is insecure or lacks proper guardrails (e.g., poor identity and access management)
    * Can lead to bad data-driven decision-making, such as inaccurate health care decisions
5
Q

What are the privacy risks associated with AI?

A
  1. Data persistence:
    * Data can exist longer than the human subjects who created it; however, this should not happen
    * Good practice is to delete the data after the human subject is gone unless there is consent or a purpose for the data to be retained (e.g., a family wishes to keep access to photos or social media, or retention is a legal necessity)
    * Data persistence occurs when an organization keeps data beyond the lifespan of the data subject
  2. Data repurposing:
    * Data being used beyond its originally specified purpose
    * May be intentional or unintentional (data users may not be trained to know which purposes are aligned with each other and which require additional supervision, verification, etc.)
  3. Spillover data:
    * Data is collected on people who are not the target of the data collection, e.g., through surveillance
  4. Data collected/derived from the AI algorithm/model itself:
    * Challenges include informed consent (transparency with the data subject and consent that is freely given), providing the data subject with the option to opt out, limiting data collection, limiting the creation of certain pieces of derived data, describing the nature of the AI processing to the data subject, and deleting personal data upon request (part of the data subject's rights)
6
Q

What are the threats associated with generative AI?

A
  1. Threat to democracy
    * Can cause erosion of confidence in government and public institutions
    * AI algorithms do not know what is and is not fact
  2. Misuse of pattern analysis
    * AI can detect patterns, but this ability can be misused
    * Example: facial recognition software used to identify individuals at a protest march
  3. Profiling/tracking
    * Identifies shared characteristics and behaviors across platforms
    * Can carry over to non-users of systems or to users who did not consent (Example: When a user shops on multiple websites, a profile is created that links all the user's activities on these sites; however, this profile may carry over to other family members using the same device or account and visiting different websites)
  4. Overreliance on predictive analytics
    * Leads to the creation of records on people with little or no direct interaction or consent
    * Uses a device's IP address, MAC address or hardware serial number to identify the user and create a record about them
7
Q

What are business risks associated with generative AI?

A
  1. Bias and discrimination, which can be fed by:
    * Poor-quality training data; bad or missing labeling practices or transformation practices
    * Poor-quality AI algorithms, which may result from a lack of tuning or from bad tuning
  2. Job displacement:
    * AI can automate tasks and jobs
    * Not just manual jobs, but also processes
  3. Dependence on AI vendors:
    * Many startups want your AI business
    * Vendor lock-in makes it difficult to switch from one vendor to another
    * Vendor failure is possible (e.g., bankruptcy)
    * If the vendor is bought out, does the new owner get all of your data, and what can they do with it?
    * Vagueness around liability and accountability to the final customer, who may be the organization or the data subject
  4. Lack of transparency
    * Avoid treating AI as a “black box”
    * Document the logic of the AI and the envisioned risks to the data subject and the business
  5. Intellectual property infringement
    * Relates to copyright, patents and trademarks, etc.
    * If the AI scrapes the internet, it may use somebody else’s intellectual property and claim it as its own
8
Q

What are the regulatory and legal risks associated with generative AI?

A
  1. Compliance with laws and regulations
  2. Liability for harm caused by the AI systems
  3. Intellectual property disputes
  4. Human rights violations
  5. Reputational damage
  6. Socioeconomic inequality
  7. Social manipulation
  8. Opaque decision-making
  9. Lack of human oversight
9
Q

What ethical considerations should businesses keep in mind?

A
  1. Businesses are racing to be first in the marketplace, but this can result in the release of unethical, irresponsible and potentially malicious AI systems into the world
  2. We as humans configure these AI models, and our biases, morals and ethical values are mirrored in the AI systems we develop
  3. Human biases, morals and ethical values instilled in AI systems can affect AI decision-making, which can have significant consequences for the data subject
10
Q

Why is it important to align organizational AI risk management strategies?

A

All of an organization's risk management strategies need to be aligned; if they do not intersect, there will be gaps between them, and those gaps may be exploited and surface through incidents

An organization may have a security/operational risk strategy, privacy risk strategy and business risk strategy, all with an AI component to them, or it may have a holistic AI risk management strategy

11
Q

What is a harms taxonomy?

A

A list of negative consequences that could befall the data subject or organization if certain pieces of information are leaked or misused

An ontological map of individual harms - it breaks down harms into their constituent components or attributes (sketched as a data structure below)
* Example: What is the capacity of the attacker to complete that harm? What is the capability? What is the opportunity?
* Looks at the dimensions of the harm
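
As an illustrative sketch only (the field names are assumptions, not a published taxonomy), a harms-taxonomy entry can be modeled as a small data structure capturing those dimensions:

    from dataclasses import dataclass

    @dataclass
    class HarmEntry:
        """One entry in a hypothetical harms taxonomy."""
        name: str                  # the harm itself
        affected_party: str        # data subject, organization or society
        attacker_capacity: str     # can the attacker complete the harm?
        attacker_capability: str   # skills and resources required
        attacker_opportunity: str  # access or conditions needed

    entry = HarmEntry(
        name="re-identification from leaked training data",
        affected_party="data subject",
        attacker_capacity="high: leaked records are linkable",
        attacker_capability="moderate: basic record-linkage skills",
        attacker_opportunity="requires access to the leaked data set",
    )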

12
Q

Why is a harms taxonomy important?

A

Privacy laws, directives and regulations focus on the right to the protection of personal data and principles surrounding it, which is helpful within a legal context. To understand why these rights matter, you must understand the concept of harm; a harms taxonomy allows privacy professionals to focus on the consequences of privacy rights infringements — for individuals and society as a whole.

It enhances empathy for data subjects - customers and people from whom personal data is collected

Once harms are broken down, organizations can perform targeted control selection to drive down a specific type of risk (security, privacy, business)

13
Q

What do AI governance professionals need to balance?

A

As with any evolving technology, AI governance practitioners must balance the benefits of AI use with potential harms and risks to users.

Risk management strategies must evolve to include AI, and new procedures may be necessary to address the unique risks AI poses.

By carefully examining new AI technologies to determine areas of vulnerability and mapping them to new areas of potential harm, organizations can create the strategies necessary to mitigate risks and update existing risk management strategies.

14
Q

What are the organizational considerations for AI adoption?

A
  1. Pro-innovation mindset
  2. Consensus-driven planning and design
  3. Outcome-focused teams
  4. Law-, industry-, and technology-agnostic frameworks
  5. Non-prescriptive approach to allow for intelligent self-management
  6. Risk-centric governance
  7. End-to-end accountability in third-party risk management
15
Q

How can AI risk be scored?

A

Risk score = Severity of harm × Probability of occurrence
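
A minimal sketch of this formula in Python, assuming an illustrative 1-5 ordinal scale for both factors (the scale and function name are not from any specific framework):

    def risk_score(severity: int, probability: int) -> int:
        """AI risk score = severity of harm x probability of occurrence.

        Both inputs use an illustrative 1-5 scale:
        1 = negligible/rare, 5 = catastrophic/almost certain.
        """
        if not (1 <= severity <= 5 and 1 <= probability <= 5):
            raise ValueError("severity and probability must be 1-5")
        return severity * probability

    # A high-severity (4) but unlikely (2) harm scores 8 out of a possible 25.
    print(risk_score(severity=4, probability=2))  # -> 8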

16
Q

What is a common tool for understanding the severity of mapped risks?

A

Impact assessments

17
Q

What can be done with an AI risk score?

A

The score can be placed on a risk scale, which may be qualitative (e.g., red, amber, green) or may entail more detailed simulations or economic approaches
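
One illustrative way to map such a score onto a qualitative red/amber/green scale; the thresholds below are assumptions to be tuned to an organization's declared risk tolerance, not a standard:

    def rag_rating(score: int) -> str:
        """Map a 1-25 risk score (severity x probability) to red/amber/green."""
        if score >= 15:
            return "red"    # treat, escalate or halt
        if score >= 8:
            return "amber"  # mitigate and monitor
        return "green"      # accept within tolerance

    print(rag_rating(8))  # -> "amber"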

18
Q

What can be done with AI risks once identified?

A

Once risks are identified and scored, they can be governed with appropriate legal, policy
and technical controls to drive operational decisions

19
Q

What are examples of context-specific AI risks?

A
  1. Owner and operator
  2. Specific industry and use case
  3. Potential social impacts
  4. Timing and use of AI
  5. Jurisdictional control
20
Q

What is a fundamental question an AI risk assessment should explore?

A

Is the AI producing the desired outcome?

21
Q

What does an AI risk assessment identify?

A

Which AI systems or parts of the AI system need additional governance measures.

22
Q

In addition to risk scoring, what criteria can organizations use in risk assessment to
determine if the outcome of developing and using AI is appropriate?

A
  1. Business purpose and planned uses of the AI
    * What is the intended task of the AI?
    * What is it going to achieve?
    * Has the organization been sufficiently transparent around how the AI works and
    what the intended consequences might be?
  2. Potential harms, including false positives and negative predictions
  3. Descriptions of the data used to train the AI, including sensitive data
  4. Functionality
    * How does it function?
    * Is it robust, i.e., scalable? Can it withstand greater or lesser use?
  5. Performance metrics
  6. Benchmarking the AI against established and known processes; i.e., whether the AI has or will be evaluated against alternate approaches
  7. Third-party risks: Determine and include the risks raised by involving third parties
23
Q

What are the AI risk qualification levels used under the EU model?

A
  1. Unacceptable risk (Ban on applications like social credit scoring systems and real-time remote facial recognition systems in public spaces)
  2. High risk (AI systems that have the potential to harm the safety or fundamental rights of
    people; Mandatory requirements for high-risk AI systems that require compliance)
  3. Limited risk (Risks associated with the AI systems are limited and perhaps only transparency
    requirements are prescribed)
  4. Minimal risk (Risk level is identified as minimal and there are no mandatory requirements;
    nonetheless, the developers of such AI systems may voluntarily choose to follow
    industry standards)
24
Q

Describe the NIST AI Risk Management Framework.

A

For organizations developing and using AI, allocating roles, responsibilities and authority to the relevant stakeholders and providing the resources they need is essential

  • Use the NIST AI Risk Management Framework as a guide
  • Senior and independent oversight should review the risk measurement structures and hold them accountable, including by declaring risk tolerances for development or use
  • Determine and document roles and responsibilities
  • Support AI risk management and risk executives by ensuring appropriate authority and resources are allocated to perform risk management throughout the organization; equip the right people with the right tools (delegate authority to personnel involved in the design, development, deployment, assessment and monitoring of AI)