Module 5 Flashcards

1
Q

What should stakeholders do early in the AI system development process?

A
  • Define and agree on the goal for using AI
  • Assess whether AI is suitable for the mission and purpose
  • Set parameters defining what is important to success
  • Determine how frequently the stakeholder group needs to meet to evaluate success and mitigate issues during the development lifecycle
  • Establish who is ultimately responsible for any risks and mitigations, and for any failures of the system once it is implemented
  • Follow the organization’s guide (or create one!)
2
Q

What questions can the stakeholder group answer to define the business case?

A
  • What is the cost/benefit analysis?
  • What are the trade-offs in using AI/ML vs. other solutions?
  • What will the organization’s declared position on AI use be, both externally and internally?
3
Q

Who should be the stakeholders?

A
  • AI governance officers
  • Privacy experts
  • Security experts
  • Procurement experts (sometimes)
  • Subject matter experts
  • Legal team
4
Q

What steps must stakeholders take to evaluate whether the AI system is meeting its goals appropriately?

A

Know:
- The data needed for the training algorithm
- What policies are applicable
- What happens if the AI performs poorly, and what impacts this could have on individuals and the organization
- What is the organization’s risk tolerance (ensure the stakeholder group agrees on it and document any decisions)
- Have different methodologies to evaluate risk that you can use routinely

5
Q

List 4 risk evaluation methodologies

A
  • Probability and severity harms matrix
  • HUDERIA risk index number
  • Risk mitigation hierarchy
  • Confusion matrix
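
Of these, the confusion matrix is the most mechanical to build. A minimal sketch for a binary classifier (the labels and predictions are hypothetical illustration data):

    # Confusion matrix counts for binary labels (1 = positive class).
    def confusion_matrix(y_true, y_pred):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
        tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
        return tp, fp, fn, tn

    y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # hypothetical ground truth
    y_pred = [1, 0, 0, 1, 0, 1, 1, 0]   # hypothetical model output
    tp, fp, fn, tn = confusion_matrix(y_true, y_pred)
    print(f"TP={tp} FP={fp} FN={fn} TN={tn}")   # TP=3 FP=1 FN=1 TN=3

The split between false positives and false negatives feeds directly into a probability-and-severity view of harm: the error type that causes the more severe harm deserves the stronger mitigation.
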
6
Q

Who should be included in your communication plan?

A
  • Regulators
  • Consumers
7
Q

What should you communicate to regulators?

A
  • Compliance and disclosure obligations
  • Explainability
  • Risks and mitigation processes
  • Data and risk classifications
8
Q

What should you communicate to consumers?

A
  • Transparency about the functionality of the AI
  • What data will be used and how
9
Q

What should be included in an AI algorithmic assessment?

A
  • Data issues
  • Decisions the stakeholder group has made (risks and mitigations, individual responsible, etc.)
  • Documentation of the appropriate uses of the AI system
10
Q

How do stakeholders identify risks?

A
  • Conduct a risk analysis and determine the contributing factors
  • Classify risks appropriately
  • Determine what risks can be mitigated
11
Q

How should you determine what tests to run on your AI system?

A

The risks you have identified should inform your testing, which should also consider:
- Purpose
- Algorithm type
- Whether you are integrating with third-party tools
- The regulations applicable to your sector
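
One way to make this concrete is a simple mapping from identified risks to the test types they motivate (card 12 lists the types). The risk names below are hypothetical placeholders, not a standard taxonomy:

    # Hypothetical mapping from identified risks to tests to run.
    RISK_TO_TESTS = {
        "discriminatory outcomes":  ["bias", "accuracy"],
        "adversarial manipulation": ["robustness", "safety"],
        "personal data exposure":   ["privacy"],
        "opaque decisions":         ["interpretability"],
        "inconsistent behavior":    ["reliability"],
    }

    identified_risks = ["personal data exposure", "opaque decisions"]
    planned_tests = sorted({t for r in identified_risks for t in RISK_TO_TESTS[r]})
    print(planned_tests)   # ['interpretability', 'privacy']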

12
Q

List 7 types of testing

A
  • Accuracy
  • Robustness
  • Reliability
  • Privacy
  • Interpretability
  • Safety
  • Bias
13
Q

List 3 types of bias

A
  • Computational bias (sketched below)
  • Cognitive bias
  • Societal bias
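
Of the three, computational bias is the most directly measurable. One common check is a disparate impact ratio; the sketch below uses hypothetical group data, and the 0.8 threshold echoes the informal "four-fifths" rule of thumb:

    # Disparate impact ratio: selection rate of one group divided by
    # another's. Group data below is hypothetical.
    def selection_rate(outcomes):
        return sum(outcomes) / len(outcomes)   # share of favorable (1) outcomes

    group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # favorable-outcome flags, group A
    group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # favorable-outcome flags, group B

    ratio = selection_rate(group_b) / selection_rate(group_a)
    print(f"disparate impact ratio: {ratio:.2f}")   # 0.50
    if ratio < 0.8:   # informal four-fifths threshold
        print("flag for review: possible computational bias")
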
14
Q

What can you do to ensure your testing is comprehensive?

A
  • Include cases the AI has not previously seen (“edge” cases) and “unseen” data (data not part of the training data set)
  • Include potentially malicious data in the test
  • Conduct repeatability assessments to determine if the AI produces the same (or a similar) outcome consistently
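
A repeatability assessment can be as simple as re-running the same inputs several times and checking that the outputs agree. A minimal sketch, assuming a model object with a hypothetical predict() method:

    # Repeatability check: identical inputs should yield identical
    # (or acceptably similar) outputs. `model.predict` is hypothetical.
    def repeatability_check(model, inputs, runs=5):
        baseline = model.predict(inputs)
        for _ in range(runs - 1):
            if model.predict(inputs) != baseline:
                return False   # unstable or non-deterministic output
        return True

    # Usage: assert repeatability_check(my_model, edge_case_inputs)

For models with legitimate randomness, the strict equality test would be replaced with a tolerance on how far outputs may drift.
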
15
Q

What are counterfactual explanations?

A

A counterfactual is essentially a statement of how the world would have to change in order to achieve a different outcome
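
A toy sketch of finding a counterfactual: hold all other inputs fixed and ask how much one feature must change to flip the decision. The loan rule, features, and threshold here are all hypothetical:

    # Toy counterfactual search on a stand-in decision rule.
    def approve(income, debt):
        return income - debt > 50_000   # hypothetical rule

    applicant = {"income": 60_000, "debt": 15_000}
    print(approve(**applicant))   # False: application denied

    income = applicant["income"]
    while not approve(income, applicant["debt"]):
        income += 1_000   # perturb one feature, all else held fixed
    delta = income - applicant["income"]
    print(f"Counterfactual: would be approved if income were {delta} higher")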

16
Q

List 4 AI-specific risks that you should consider

A
  • Inversion
  • Extraction
  • Poisoning
  • Evasion
17
Q

How can you mitigate the risks introduced when new data enters the algorithm?

A
  • Keep snapshots of an algorithm as well as its outputs, so if there is an issue you can go back to a previous iteration
  • Use documentation to know what changed between iterations
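
A minimal sketch of the snapshot idea: persist each model iteration with sample outputs and a note of what changed. The file layout and serialization choices are illustrative assumptions:

    # Save a model iteration plus outputs and a change note, so a bad
    # release can be rolled back and compared. Layout is illustrative.
    import json, pickle, time
    from pathlib import Path

    def snapshot(model, sample_outputs, change_note, root="snapshots"):
        stamp = time.strftime("%Y%m%d-%H%M%S")
        folder = Path(root) / stamp
        folder.mkdir(parents=True, exist_ok=True)
        (folder / "model.pkl").write_bytes(pickle.dumps(model))
        (folder / "meta.json").write_text(json.dumps({
            "created": stamp,
            "change_note": change_note,        # what changed vs. last iteration
            "sample_outputs": sample_outputs,  # kept for later comparison
        }, indent=2))
        return folder
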
18
Q

What is your best mitigation for AI not performing as it should?

A

Consider it an incident and have an incident response plan

19
Q

What should you include in your incident response plan?

A
  • Description of the issue
  • Model version and data set used
  • Who it should be reported to (internal/external)
  • Mitigations
  • Communications
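
These five elements map naturally onto a structured record. A sketch; the field names are assumptions, not a standard schema:

    # Incident record mirroring the five plan elements above.
    from dataclasses import dataclass, field

    @dataclass
    class AIIncident:
        description: str                  # description of the issue
        model_version: str                # model version involved
        dataset_id: str                   # data set used
        report_to: list = field(default_factory=list)   # internal/external
        mitigations: list = field(default_factory=list)
        communications: list = field(default_factory=list)

    incident = AIIncident(
        description="Spike in false positives after retraining",
        model_version="v2.3.1",
        dataset_id="train-2024-06",
        report_to=["AI governance officer", "regulator (if required)"],
    )
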
20
Q

Why should you document the model version and data set related to an incident?

A
  • Allows for challenger models to be accurately created
  • Allows for transparency with regulatory agencies and consumers
21
Q

What should you put in place to monitor systems?

A
  • Continuously improve the system
  • Have a procedure in place to deactivate a system or localize it as needed
  • Create a “challenger model”
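
A champion/challenger setup can be sketched as scoring both models on the same fresh data and deciding whether to promote, keep, or deactivate. The model objects, accuracy metric, and thresholds are hypothetical:

    # Champion/challenger monitoring sketch. `predict`, the margin,
    # and the deactivation floor are illustrative assumptions.
    def accuracy(model, inputs, labels):
        preds = model.predict(inputs)
        return sum(p == y for p, y in zip(preds, labels)) / len(labels)

    def review(champion, challenger, inputs, labels, margin=0.02):
        champ = accuracy(champion, inputs, labels)
        chall = accuracy(challenger, inputs, labels)
        if chall > champ + margin:
            return "promote challenger"
        if champ < 0.5:   # hypothetical deactivation floor
            return "deactivate and investigate"
        return "keep champion"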