Module 4 Flashcards

1
Q

List 5 security risks related to generative AI

A
  • Hallucinations
  • Deepfakes
  • Training data poisoning
  • Data leakage
  • Filter bubbles (echo chambers)
2
Q

What are hallucinations?

A

Instances where a generative AI model produces content that either contradicts its source or presents factually incorrect output under the appearance of fact

3
Q

What is the problem with deepfakes?

A

You don’t know what is real and what is not

4
Q

What is training data poisoning?

A
  • A common approach when using AI to attack other AI models
  • Attackers try to poison your training data pools with corrupted or mislabeled samples (a minimal sketch follows below)
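A minimal sketch of one common poisoning technique, label flipping, assuming scikit-learn and NumPy are available; the toy dataset and the 30% poisoning rate are illustrative assumptions, not from the source material:

```python
# Label-flipping poisoning: an attacker corrupts part of the training
# pool so the model learned from it degrades on clean data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy two-class training pool: class 0 near (-1, -1), class 1 near (+1, +1).
X = np.vstack([rng.normal(-1, 1, (200, 2)), rng.normal(1, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)
X_test = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
y_test = np.array([0] * 100 + [1] * 100)

# Baseline model trained on the clean pool.
clean_acc = LogisticRegression().fit(X, y).score(X_test, y_test)

# The attacker flips the labels of 30% of the training pool.
y_poisoned = y.copy()
idx = rng.choice(len(y), size=int(0.3 * len(y)), replace=False)
y_poisoned[idx] = 1 - y_poisoned[idx]

poisoned_acc = LogisticRegression().fit(X, y_poisoned).score(X_test, y_test)
print(f"clean accuracy:    {clean_acc:.2f}")
print(f"poisoned accuracy: {poisoned_acc:.2f}")
```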
5
Q

Where is data leakage common?

A

Federated learning

6
Q

What are filter bubbles (or echo chambers)?

A
  • The AI model repeats back to you what you already believe and what you have already told it
  • It is not producing new insights
7
Q

List 6 security risks of general AI

A
  • AI can concentrate power in the hands of a few individuals or organizations
  • Overreliance on AI, which provides a false sense of security
  • AI systems are vulnerable to attack
  • Misuse of AI
  • AI algorithms that are used to attack other AI systems
  • Storing training data outside of production
8
Q

Why is it bad for AI to concentrate power in the hands of a few individuals or organizations?

A

It erodes individual freedom

9
Q

What do you call it when an attacker manipulates input data to obtain a different output?

A

Adversarial machine learning attack

10
Q

What is the potential impact of an adversarial machine learning attack?

A

It can lead to incorrect decisions, which in turn could result in security breaches or data loss
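A minimal sketch of the idea behind such an attack, assuming a toy linear classifier with illustrative hand-picked weights; the perturbation follows the sign of the weights, which is the intuition behind gradient-sign methods such as FGSM:

```python
# Evasion attack on a linear classifier: a small, targeted nudge to the
# input flips the model's output without changing the input much.
import numpy as np

w = np.array([0.9, -0.4, 0.7])  # weights of a (hypothetical) trained model
b = -0.1

def predict(x):
    """Classify as 1 when the linear score w.x + b is positive, else 0."""
    return int(w @ x + b > 0)

x = np.array([0.2, 0.5, -0.3])  # a benign input; score -0.33, class 0

# Nudge every feature slightly in the direction that raises the score.
eps = 0.3
x_adv = x + eps * np.sign(w)    # score 0.27, now class 1

print(predict(x), "->", predict(x_adv))  # 0 -> 1: the decision flips
```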

11
Q

What is the potential impact of the misuse of AI?

A
  • It may lead to security risks
  • For example, in transfer learning attacks
12
Q

Why should you never store training data outside of production?

A
  • Non-production environments do not have the same level of security
  • You may expose the data in a less secure environment
13
Q

What operational risks are associated with AI?

A

How you use the AI:
- High cost to run the AI algorithm: hardware (CPUs, GPUs, …)

Operating and running the AI:
- Environmental costs
- Data corruption and poisoning

14
Q

What environmental costs could AI generate?

A
  • Increased carbon footprint
  • Greater resource utilization
  • Costs of running green (sustainable) operations
15
Q

List 4 privacy risks associated with AI

A
  • Data persistence
  • Data repurposing
  • Data spillover
  • Data collected or derived from AI itself
16
Q

Describe privacy issues related to data persistence

A
  • Data can exist longer than the human subjects who created it
  • Once the data subject is gone, best practice is to delete the data
17
Q

Describe privacy issues related to data repurposing

A
  • Data being used beyond the originally specified purpose
  • May be intentional or not
18
Q

Describe privacy issues related to data spillover

A
  • Data is collected on people who are not the target of the data collection
  • This can happen in contexts such as surveillance
19
Q

Describe privacy issues related to data that is collected from the AI itself

A
  • How can you obtain informed, freely given consent when the AI itself is collecting the data
  • How does the individual opt out
  • How do you limit data collection or limit the creation of certain pieces of derived data
  • How do you describe the nature of the AI process to the data subject
  • How do you go back to delete data on request
20
Q

List 9 regulation and legal risks related to AI

A
  • Compliance with laws and regulations
  • Liability for harm caused by the AI system
  • Intellectual property disputes
  • Human rights violations
  • Reputational damage
  • Socio-economic inequality
  • Social manipulation
  • Opaque decision making
  • Lack of human oversight
21
Q

What did Citron and Solove develop?

A

An approach to identifying privacy harms which includes multiple categories and sub-categories of harm

22
Q

List the categories of harm according to Citron and Solove

A
  • Physical Harms
  • Economic Harms
  • Reputational Harms
  • Psychological Harms
  • Autonomy Harms
  • Discrimination Harms
  • Relationship Harms
23
Q

List the sub-categories of psychological harms according to Citron and Solove

A
  • Emotional distress
  • Disturbance
24
Q

List the sub-categories of autonomy harms according to Citron and Solove

A
  • Coercion
  • Manipulation
  • Failure to Inform
  • Thwarted Expectations
  • Lack of Control
  • Chilling Effects
25
Q

List 8 types of prospective (potential) AI harms to use as a starting point

A
  • Use of force – use of autonomous weapons systems
  • Safety and certification – the role of government in preventing humans from experiencing harms
  • Privacy – shielding an individual’s information from society
  • Personhood – development of self, assigning human rights and responsibilities to non-humans
  • Displacement of labour – technology replacing humans in the workforce and not just manual labour but processes as well
  • Justice system – effect of technology on the operation of the courts
  • Accountability – responsibility for pecuniary (financial) and non-pecuniary harms
  • Using labels to discriminate – classification of individuals (peoples, nationalities, etc.)
26
Q

List 7 organizational considerations in implementing AI risk management

A
  • Adopt a pro-innovation mindset
  • Ensure planning and design is consensus-driven
  • Ensure the team is outcome-focused
  • Ensure the framework is law-, industry- and technology-agnostic
  • Adopt a non-prescriptive approach to allow for intelligent self-management
  • Ensure governance is risk-centric
  • Create policies to manage third party risk, to ensure end-to-end accountability
27
Q

How do you adopt a pro-innovation mindset?

A
  • Does not refer to “innovation for the sake of innovation”
  • Be prepared for changes, new products and possibilities
  • Consider these questions:
    • Will the new product fill a gap or meet a need?
    • Does it align with principles?
    • Is it fiscally responsible?
28
Q

How do you ensure planning and design is consensus-driven?

A
  • Does not refer to “best two out of three”
  • Consider these questions:
    • Have you involved all necessary stakeholders?
    • Did you include people from various teams across the organization?
    • Does each stakeholder understand the needs vs. risks?
29
Q

How do you ensure the team is outcome-focused?

A
  • Does not refer to only the “bottom line”
  • Consider these questions:
    • Does the team understand what the desired outcome is from the AI product?
    • Will the product serve the purpose for which it is being created/utilized/designed/applied?
    • Is there a better way to achieve the goal than that which is being proposed?
30
Q

How do you ensure the framework is law-, industry- and technology-agnostic?

A

The framework should:
- Be interoperable across systems
- Ideally be flexible, able to solve a business “problem,” and able to explain why an approach was taken
- Not be biased toward a specific business process or practice, law, industry or technology

31
Q

How do you adopt a non-prescriptive approach to allow for intelligent self-management?

A

Risks should be approached in a context-specific, use-case-driven manner to allow for adjustment and evolution as needs and uses change

32
Q

How do you create policies to manage third party risk, to ensure end-to-end accountability?

A
  • Identify the purpose for AI and ensure the program meets the need
  • Determine who needs access to the AI programs and specify use
  • Identify risks from the specific program or use, and work to mitigate those risks
  • Be clear on who owns the output of the AI process, especially once the contract or use is complete/terminated
33
Q

How are risks assessed?

A

Risk = severity of harm × probability of occurrence
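A minimal sketch of this formula in Python; the harms, the 1–5 severity scale, and the probabilities below are illustrative assumptions:

```python
# Risk = severity of harm x probability of occurrence.
harms = [
    # (harm, severity on a 1-5 scale, probability of occurrence)
    ("Data leakage from training pool", 4, 0.30),
    ("Adversarial input manipulation",  5, 0.10),
    ("Biased model output",             3, 0.50),
]

# Rank harms by their risk score, highest first.
for harm, severity, probability in sorted(
        harms, key=lambda h: h[1] * h[2], reverse=True):
    print(f"{harm:32s} risk = {severity * probability:.2f}")
```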

34
Q

What does the NIST Risk Management Framework suggest to ensure proper oversight?

A

Senior and independent oversight within a company is required to review and hold the risk management structures to account

35
Q

What does the NIST Risk Management Framework include to support oversight?

A

- Declaring risk tolerances
- Supporting AI risk management and risk executives:
  - Ensuring appropriate authority and resources are available to perform risk management
  - Equipping the right people with the right tools
  - Determining and documenting roles and responsibilities and delegating authority

36
Q

Provide the detailed steps for calculating risk

A
  1. Determine who needs to be involved
  2. Understand the business purposes and planned usage of the AI
  3. Enumerate the potential harms
  4. Describe the data
  5. Determine whether the AI has or will be evaluated against alternative approaches
  6. Evaluate the harm against the potential benefits (a worked sketch follows this list)
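A minimal sketch that captures these six steps as a single reviewable record, assuming Python 3.9+; the field names and example values are illustrative assumptions, not a standard schema:

```python
from dataclasses import dataclass

@dataclass
class AIRiskAssessment:
    stakeholders: list[str]       # step 1: who needs to be involved
    business_purpose: str         # step 2: purpose and planned usage
    potential_harms: list[str]    # step 3: enumerated harms
    data_description: str         # step 4: what data is used and how
    alternatives_evaluated: bool  # step 5: compared against alternatives?
    harm_benefit_notes: str       # step 6: harms weighed against benefits

assessment = AIRiskAssessment(
    stakeholders=["privacy", "security", "legal"],
    business_purpose="Summarize support tickets for triage",
    potential_harms=["data leakage", "hallucinated ticket details"],
    data_description="Customer ticket text, retained for 90 days",
    alternatives_evaluated=True,
    harm_benefit_notes="Triage speedup outweighs residual risk after review",
)
print(assessment)
```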
37
Q

How do you assess fairness in an AI system?

A
  • What is the intended task
  • How does it function – what are its performance metrics (see the sketch below)
  • Is it robust – is it scalable, can it withstand greater or less use
  • Have we been sufficiently transparent around how it works and what the intended consequences might be
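This card lists questions rather than a specific metric; as one hedged example of a fairness-oriented performance metric, here is a minimal sketch of demographic parity difference (the gap in positive-outcome rates between two groups), with illustrative data:

```python
import numpy as np

predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])  # model outputs
groups = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

# Positive-outcome rate per group; a large gap suggests disparate treatment.
rate_a = predictions[groups == "a"].mean()
rate_b = predictions[groups == "b"].mean()
print(f"demographic parity difference: {abs(rate_a - rate_b):.2f}")
```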
38
Q

What should you consider when establishing AI governance and strategy?

A
  • Look to existing stakeholders
  • Establish and understand the roles and responsibilities
  • Assist personnel to understand their roles, where to seek assistance, how to empower themselves in the development and release process
  • Communicate with research, data scientists, AI and ML engineers, and non-AI engineers
  • Decide who will maintain and update an inventory of AI applications and algorithms
  • As practitioners begin to build AI assessment processes, look to external frameworks
  • Work with internal stakeholders to establish the organizational risk strategy and tolerance
  • Focus on key AI risks and needs tied to the organization’s AI principles, values and standards
39
Q

Who should you include as stakeholders when establishing AI governance and strategy?

A
  • Privacy
  • Security
  • Accessibility
  • Digital safety
40
Q

What are 3 aspects to consider when engaging with leadership?

A
  • Identify leadership who are already using AI and who would support improved governance and structure to streamline activities
  • Identify where and how responsible AI is a differentiator
  • Show why and how your organization and leadership can get ahead of AI
41
Q

How can responsible AI be a differentiator?

A

When current programs and public- or customer-focused information are insufficient
(for example, when privacy and security programs do not sufficiently address the risks of AI bias, improved forms of AI-focused transparency, grounded in governance, can make products more appealing to customers and the public)

42
Q

How can you show why and how your organization and leadership can get ahead of AI?

A
  • Explain current legislation coming to the fore that will impact the organization
  • Showcase existing regulatory statements
  • Express concerns and statements on AI to emphasize why a strong governance program can help mitigate AI risks and demonstrate the organization’s commitment to building trustworthy products
43
Q

List 3 different types of AI governance models

A
  • Centralized - controlled by a single team in the organization
  • Federated - hybrid
  • Decentralized - highly technical and integrated into development tools
44
Q

How can you incentivize effective and safe AI?

A
  • Highlighting customer value and increased trust by customers
  • Strongly defining responsible AI as a discipline to reinforce the value of AI for an organization
  • Engaging HR to identify work roles and success measures for AI practitioners, so that they are rewarded and can help highlight the value of responsible AI, foster a strong governance community, and develop responsibly minded AI engineers