Domain 3: How to Govern AI Flashcards

1
Q

AI Development Lifecycle (4 stages)

A

Plan
Design
Develop
Deploy

2
Q

AI Dev Lifecycle - Plan stage

A

Business problem
Mission
Gaps
Data
Scope
Governance

3
Q

AI Dev Lifecycle - Design stage: Data strategy

A

Data quality
Data gathering
Data wrangling/prep
Data cleansing
Data labeling
Data privacy

4
Q

Scope considerations for AI Dev planning (3)

A

Impact
Effort
Fit

5
Q

Data wrangling 5 V’s

A

Variety
Value
Velocity
Veracity
Volume

6
Q

AI Dev Lifecycle - Design: Considerations

A

Desired accuracy, interpretability
Objective of data
Business problem
Compliance and business requirements
Constraints (time, money, expertise)

7
Q

AI Dev Lifecycle - Development

A

Define the features of the model

8
Q

AI Dev Lifecycle - Dev: Purpose of engineering features

A

Features explain how the model reaches decisions; engineering them serves to:
- Improve model performance
- Reduce computational costs
- Boost model explainability
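
A minimal sketch of what engineering a feature can look like in practice (hypothetical columns and pandas usage; the card itself does not prescribe a tool):

```python
# Hypothetical feature engineering: derive one informative feature from two raw columns.
import pandas as pd

raw = pd.DataFrame({"income": [60_000, 40_000], "debt": [15_000, 30_000]})
raw["debt_to_income"] = raw["debt"] / raw["income"]  # engineered ratio: cheaper to compute on
print(raw)                                           # and easier to explain than two raw numbers
```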

9
Q

Key principles of explainability

A

Fairness
Privacy
Reliability
Robustness
Trust

10
Q

Types of Impact Assessments (5)

A

Algorithmic
Privacy or Data Protection
Human rights, Democracy, and Rule of Law
Fundamental rights
Ethical

11
Q

HUDERIA: Preliminary Context-based Risk Analysis (PCRA)

A

Provides an initial indication of context-based risk to human rights, freedoms and democracy

12
Q

HUDERIA: Risk Index Number (RIN)

A

Severity = Gravity potential + Rights-holders affected

RIN = Severity x Likelihood

13
Q

HUDERIA: Risk Index Number (RIN): Gravity

A

Gravity potential
1: moderate
2: serious
3: critical
4: catastrophic

14
Q

HUDERIA: Risk Index Number (RIN): Rights-holders affected

A

Rights-holders affected
0.5: <= 10k
1.0: 10k < x <= 100k
1.5: 100k < x <= 1m
2.0: > 1m

15
Q

HUDERIA: Risk Index Number (RIN): Likelihood

A

Likelihood
0: not applicable
1: unlikely
2: possible
3: likely
4: very likely

16
Q

HUDERIA: Risk Index Number (RIN): Risk levels

A

Low: <=5
Moderate: 5.5-6
High: 6.5-7.5
Very high: >=8
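
Putting the RIN cards together, a worked sketch in Python (the numeric scales are copied from the cards above; the function names are illustrative, not part of HUDERIA):

```python
# Worked RIN example using the scales from the cards above.
GRAVITY = {"moderate": 1, "serious": 2, "critical": 3, "catastrophic": 4}
LIKELIHOOD = {"not applicable": 0, "unlikely": 1, "possible": 2, "likely": 3, "very likely": 4}

def rights_holders_score(affected: int) -> float:
    if affected <= 10_000:
        return 0.5
    if affected <= 100_000:
        return 1.0
    if affected <= 1_000_000:
        return 1.5
    return 2.0  # more than 1 million rights-holders

def risk_index_number(gravity: str, affected: int, likelihood: str) -> float:
    severity = GRAVITY[gravity] + rights_holders_score(affected)  # Severity = gravity + rights-holders
    return severity * LIKELIHOOD[likelihood]                      # RIN = Severity x Likelihood

def risk_level(rin: float) -> str:
    if rin <= 5:
        return "Low"
    if rin <= 6:
        return "Moderate"
    if rin <= 7.5:
        return "High"
    return "Very high"

# Example: serious gravity (2), 50,000 rights-holders (1.0), likely (3) -> (2 + 1.0) * 3 = 9.0
rin = risk_index_number("serious", 50_000, "likely")
print(rin, risk_level(rin))  # 9.0 Very high
```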

17
Q

HUDERIA: Mitigation categories (4)

A

Avoid
Reduce
Restore (rehab)
Compensate

18
Q

Confusion matrix

A

Actual vs. predicted (rows = predicted, columns = actual):

                 Actual Pos     Actual Neg
Predicted Pos    True Pos    |  False Pos
Predicted Neg    False Neg   |  True Neg
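
A small plain-Python illustration (made-up labels) of how the four cells and the usual derived metrics are counted:

```python
# Count confusion-matrix cells from actual vs. predicted labels (1 = positive, 0 = negative).
actual    = [1, 1, 0, 0, 1, 0, 1, 0]
predicted = [1, 0, 0, 1, 1, 0, 1, 0]

tp = sum(a == 1 and p == 1 for a, p in zip(actual, predicted))  # true positives
fp = sum(a == 0 and p == 1 for a, p in zip(actual, predicted))  # false positives
fn = sum(a == 1 and p == 0 for a, p in zip(actual, predicted))  # false negatives
tn = sum(a == 0 and p == 0 for a, p in zip(actual, predicted))  # true negatives

precision = tp / (tp + fp)       # of everything predicted positive, how much was right
recall    = tp / (tp + fn)       # of everything actually positive, how much was found
accuracy  = (tp + tn) / len(actual)
print(tp, fp, fn, tn, precision, recall, accuracy)  # 3 1 1 3 0.75 0.75 0.75
```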

19
Q

Risk classifications (4)

A

Prohibitive
Major
Moderate
Low-risk

20
Q

Severity levels for risk (3)

A

Critical
Moderate
Marginal

21
Q

Probability levels for risk (3)

A

Probable
Occasional
Improbable

22
Q

Risk scoring

A

assigning a quantitative value to a risk

Severity x Probability
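
A minimal sketch of risk scoring using the severity and probability levels from the cards above; the numeric weights are illustrative assumptions, not from the source:

```python
# Hypothetical weights for the three severity and three probability levels above.
SEVERITY    = {"critical": 3, "moderate": 2, "marginal": 1}
PROBABILITY = {"probable": 3, "occasional": 2, "improbable": 1}

def risk_score(severity: str, probability: str) -> int:
    return SEVERITY[severity] * PROBABILITY[probability]  # Severity x Probability

print(risk_score("critical", "occasional"))  # 3 * 2 = 6
```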

23
Q

Operational controls

A

Assign system responsibilities
Conduct audits and reviews
Establish feedback mechanisms
Respond to feedback and appeals
Elevate issues within the org
Assign “kill switch” responsibilities

24
Q

Three approaches to global AI regulations

A

Specific area of focus (ADM, sector)
Amend existing laws (Brazil)
Comprehensive regs (EU AI Act)

25
Q

Brazil AI Law

A

Comprehensive (proposed)
Risk-based
ADM rights, human rights
Tiered approach

26
Q

Japan AI Law (2022)

A

Non-binding
Defers to private-sector self-regulation
ML management, cloud services

27
Q

India

A

AI governance principles
National committee to write a policy framework
Ministry of Electronics and Info Tech
Advisories:
- AI cannot facilitate illegal content
- Unreliable models must be labeled

28
Q

Canada: Digital Charter Implementation Act (C-27)

A

Died in Parliament (did not pass)
Risk-based

29
Q

Canada: Consumer Privacy Protection Act

A

PIPEDA replacement
Privacy regime with higher fines

30
Q

Canada: Personal Information and Data Protection Tribunal Act

A

Data protection tribunal for appeals and penalties

31
Q

Canada: Code of Conduct

A

Voluntary code to demonstrate responsible AI

32
Q

Cyberspace Administration of China (CAC)

A

Regulates cyberspace, security, and the internet

33
Q

China: Generative AI Measures (2023)

A

Regulates the use of GenAI offered to the public
Extraterritorial
Rights-based approach (the only one)
Municipal actions

34
Q

Singapore: Model AI Governance Framework (2019)

A

First AI governance framework in Asia
Risk-based
Voluntary, industry-led council
Existing authorities
Human-centric and transparent

35
Q

Singapore: AI in Healthcare (2021)

A

Supports patient safety and improves trust

36
Q

Monetary Authority of Singapore (MAS)

A

Central bank of Singapore
First regulatory authority to act on AI
FEAT (2018):
- Fairness
- Ethics
- Accountability
- Transparency

37
Q

Veritas Framework

A

Singapore
Method to put FEAT into AI solutions

38
Q

National AI Strategy (NAIS)

A

Singapore
Plan for implementing FEAT and Veritas

39
Q

National AI Strategy (NAIS) 2.0

A

Singapore
Advance AI and value creation while empowering individuals
Art 13: established regular reviews and adjustments of the framework

40
Q

Singapore: Personal Data Protection Commission (PDPC)

A

Regulates privacy of personal data

41
Q

AI Verify Foundation

A

Singapore
Governance testing framework via a software toolkit to demonstrate compliance

42
Q

Singapore: Model AI Principles

A

Transparency / explainability
Repeatability / reproducibility
Robustness, safety, security
Fairness
Data governance
Accountability
Human agency and oversight
Inclusive growth and well-being

43
Q

Three lines of defense (human oversight)

A

First line: establish and implement controls, by the business owner
Second line: identify and mitigate risks daily, by the risk management committee or compliance officer
Third line: provide independent assurance through internal audits

44
Q

Feedforward neural network (FNN)

A

Straightforward data processing where data flows in one direction

45
Q

Convolutional neural network (CNN)

A

Multiple, distinct layers of FNNs
Think classification, object detection, spatial data or images

46
Q

Recurrent neural network (RNN)

A

Bi-directional data model
Think memory, speech recognition, text or sequential data

47
Q

Graph neural network (GNN)

A

Analyzes how data points are connected
Think social network connections

48
Q

Transformers and Attention

A

Self-attention to learn relationships between components
GenAI LLMs use this to map relationships between words

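A toy NumPy sketch of scaled dot-product self-attention (random weights and tiny dimensions; real transformers add learned per-layer projections, multiple heads, and masking):

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Scaled dot-product self-attention over a sequence x of shape (tokens, dim)."""
    q, k, v = x @ wq, x @ wk, x @ wv            # project tokens to queries, keys, values
    scores = q @ k.T / np.sqrt(k.shape[-1])     # how strongly each token attends to each other token
    weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ v                          # each output is a weighted mix of all values

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                     # 4 tokens, 8-dim embeddings (toy example)
wq, wk, wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(x, wq, wk, wv).shape)      # (4, 8)
```
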
49
Q

Ensemble methods

A

Stacking: train multiple models and synthesize them
Bagging (or bootstrap aggregation): train the same model on different subsets of the data, then average the outputs
Boosting: sequentially build simple models, each improving on the last

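A scikit-learn sketch of the three styles on synthetic data (the specific estimators chosen here are illustrative assumptions, not prescribed by the card):

```python
# Bagging, boosting, and stacking on toy synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, GradientBoostingClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)

bagging  = BaggingClassifier(DecisionTreeClassifier(), n_estimators=25)   # same model, resampled data
boosting = GradientBoostingClassifier(n_estimators=25)                    # sequential error-correcting trees
stacking = StackingClassifier(                                            # different models, meta-learner on top
    estimators=[("tree", DecisionTreeClassifier()), ("lr", LogisticRegression())],
    final_estimator=LogisticRegression(),
)

for name, model in [("bagging", bagging), ("boosting", boosting), ("stacking", stacking)]:
    print(name, model.fit(X, y).score(X, y))
```
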
50
Q

What to test for in AI models

A

Bias
Accuracy
Reliability
Robustness
Privacy
Interpretability
Safety

51
Q

How to test an AI model

A

Repeatability assessment
Adversarial testing (red teaming)
Threat modeling

52
Q

Testing resources

A

OECD Catalog of Tools
OECD Catalog of Metrics
AI Incident Database

53
Q

What to document

A

Decisions
Testing and outcomes
Risks
Responses to outcomes
Remediation of adverse impacts
Obligations

54
Q

Counterfactual Explanations (CFE)

A

Explain how small changes to inputs lead to changes in outputs

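A toy illustration of the idea (the "model" and the numbers are made up purely for the example):

```python
# Hypothetical loan model: approve when income minus debt reaches a fixed threshold.
def loan_model(income: float, debt: float) -> str:
    return "approved" if income - debt >= 50_000 else "denied"

applicant = {"income": 60_000, "debt": 15_000}
print(loan_model(**applicant))                       # denied (60k - 15k = 45k)

counterfactual = {"income": 65_000, "debt": 15_000}  # smallest change that flips the outcome
print(loan_model(**counterfactual))                  # approved (65k - 15k = 50k)
# CFE statement: "Had income been $5,000 higher, the application would have been approved."
```
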
55
Q

Model cards

A

Transparency documents, like nutrition labels
Info about the AI model, its training, and its data
Improve model credibility

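A skeleton of the kind of fields a model card typically carries, expressed as structured data (field names follow common model-card practice; the values are placeholders, not a real model):

```python
# Minimal model-card skeleton as a Python dict.
model_card = {
    "model_details": {"name": "example-classifier", "version": "0.1", "owner": "ML team"},
    "intended_use": "Illustrative example only; not for high-risk decisions.",
    "training_data": "Describe sources, collection dates, and known gaps here.",
    "evaluation_data": "Describe held-out data and how it differs from training data.",
    "metrics": {"accuracy": None, "false_positive_rate": None},  # fill in from testing
    "ethical_considerations": "Known biases, affected groups, mitigations.",
    "caveats_and_recommendations": "Conditions under which the model should not be used.",
}
```
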
56
Q

Data sheets for data sets

A

Document the base data pile:
- Motivation
- Composition
- Collection process
- Recommended use

57
Q

Conformity Assessments

A

Apply to high-risk systems to comply with the EU AI Act
Framework of technical and non-technical assessments and documentation
Conducted by providers
Identifies:
- development
- data
- training process
- behavior
- potential impacts
Applies to BCMEEEAL

58
Q

Conformity assessment requirements

A

Risk management
Data and data governance
Technical documentation
Record-keeping
Transparency
Human oversight
Accuracy, robustness, cybersecurity

59
Q

Data Protection Impact Assessment (DPIA)

A

Privacy assessment under the GDPR

60
Q

Why systems fail (6)

A

Brittleness
Embedded bias
Uncertainty
Catastrophic forgetting
False positives
Hallucination

61
Q

Communication plan and products

A

Business use case
Project plan and timeline
Model cards
User interface copy
Documentation for regulators, consumers
Data sheets for data sets
Acceptable Use Policy (AUP)