Module 5 Flashcards
What should stakeholders do early in the AI system development process?
- Define and agree on the goal for using AI
- Assess whether AI is suitable for the mission and purpose
- Set parameters defining what is important to success
- Determine how frequently the stakeholder group needs to meet to evaluate success and mitigate issues during the development lifecycle
- Establish who is ultimately responsible for risks and mitigations, and for any failures of the system once it is implemented
- Follow the organization’s guide (or create one!)
What questions can the stakeholder group answer to define the business case?
- What is the cost/benefit analysis?
- What are the trade-offs in using AI/ML vs. other solutions?
- What will the organization’s declared position on AI use be, both externally and internally?
Who should be the stakeholders?
- AI governance officers
- Privacy experts
- Security experts
- Procurement experts (sometimes)
- Subject matter experts
- Legal team
What steps must stakeholders take to evaluate whether the AI system is meeting its goals appropriately?
Know:
- The data needed for the training algorithm
- What policies are applicable
- What happens if the AI performs poorly, and what impacts this could have on individuals and the organization
- What is the organization’s risk tolerance (ensure the stakeholder group agrees on it and document any decisions)
- Have multiple risk evaluation methodologies that you can apply routinely
List 4 risk evaluation methodologies
- Probability and severity harms matrix
- HUDERIA risk index number
- Risk mitigation hierarchy
- Confusion matrix
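
A minimal sketch of two of these methodologies, a probability-and-severity harms matrix and a confusion matrix; the likelihood/severity labels, 1–3 scales, and example data are illustrative assumptions, not prescribed values:

```python
# Probability-and-severity harms matrix: risk score = likelihood x severity.
# The labels and 1-3 scales here are illustrative assumptions.
LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3}
SEVERITY = {"minor": 1, "moderate": 2, "severe": 3}

def harm_score(likelihood, severity):
    """Score a harm by multiplying its likelihood and severity ratings."""
    return LIKELIHOOD[likelihood] * SEVERITY[severity]

# Confusion matrix: counts of correct/incorrect classifications, which
# surface the error rates that feed a risk discussion.
def confusion_matrix(y_true, y_pred):
    """Tally TP/FP/TN/FN for a binary classifier."""
    cm = {"tp": 0, "fp": 0, "tn": 0, "fn": 0}
    for t, p in zip(y_true, y_pred):
        if t == 1 and p == 1:
            cm["tp"] += 1
        elif t == 0 and p == 1:
            cm["fp"] += 1
        elif t == 0 and p == 0:
            cm["tn"] += 1
        else:
            cm["fn"] += 1
    return cm

print(harm_score("likely", "severe"))                # 9 -> highest-risk cell
print(confusion_matrix([1, 0, 1, 0], [1, 1, 1, 0]))  # {'tp': 2, 'fp': 1, 'tn': 1, 'fn': 0}
```
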
Who should be included in your communication plan?
- Regulators
- Consumers
What should you communicate to regulators?
- Compliance and disclosure obligations
- Explainability
- Risks and mitigation processes
- Data and risk classifications
What should you communicate to consumers?
- Transparency as to the functionality of AI
- What data will be used and how
What should be included in an AI algorithmic assessment?
- Data issues
- Decisions the stakeholder group has made (risks and mitigations, responsible individual, etc.)
- Documentation of appropriate uses of the AI system
How do stakeholders identify risks?
- Conduct a risk analysis and determine the contributing factors
- Classify risks appropriately
- Determine what risks can be mitigated
How should you determine what tests to run on your AI system?
The risks you have identified should inform your testing, which should consider:
- Purpose
- Algorithm type
- Whether you are integrating with third-party tools
- The regulations applicable to your sector
List 7 types of testing
- Accuracy
- Robustness
- Reliability
- Privacy
- Interpretability
- Safety
- Bias
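
A minimal sketch of what two of these tests (accuracy and robustness) might look like; the stand-in model, noise level, and data are assumptions for illustration:

```python
import random

def accuracy(model, inputs, labels):
    """Fraction of inputs the model classifies correctly."""
    return sum(model(x) == y for x, y in zip(inputs, labels)) / len(inputs)

def robustness(model, inputs, labels, noise=0.1, trials=100):
    """Average accuracy under small random perturbations of the inputs."""
    random.seed(0)  # make the perturbations repeatable
    total = 0.0
    for _ in range(trials):
        noisy = [x + random.uniform(-noise, noise) for x in inputs]
        total += accuracy(model, noisy, labels)
    return total / trials

# Stand-in model: classify positive numbers as 1, everything else as 0.
model = lambda x: int(x > 0)
xs, ys = [-2.0, -0.05, 0.05, 2.0], [0, 0, 1, 1]
print(accuracy(model, xs, ys))    # 1.0 on clean data
print(robustness(model, xs, ys))  # lower: points near the boundary flip
```
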
List 3 types of bias
- Computational bias
- Cognitive bias
- Societal bias
What can you do to ensure your testing is comprehensive?
- Include cases the AI has not previously seen (“edge” cases) and “unseen” data (data not part of the training data set)
- Include potentially malicious data in the test
- Conduct repeatability assessments to determine if the AI produces the same (or a similar) outcome consistently
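
A toy version of the repeatability assessment in the last point; the flaky stand-in model and its glitch rate are assumptions:

```python
import random
from collections import Counter

def repeatability(predict, x, runs=20):
    """Fraction of runs that agree with the most common outcome."""
    outcomes = Counter(predict(x) for _ in range(runs))
    return outcomes.most_common(1)[0][1] / runs

# Stand-in for a nondeterministic system that glitches ~10% of the time.
random.seed(1)
flaky = lambda x: int(x > 0) if random.random() > 0.1 else -1
print(repeatability(flaky, 2.0))  # below 1.0 flags inconsistent behavior
```
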
What are counterfactual explanations?
A counterfactual is essentially a statement of how the world would have to change in order to achieve a different outcome
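
A toy sketch of how a counterfactual might be computed for a hypothetical loan-approval rule; the feature names, decision rule, and step size are all illustrative assumptions:

```python
# Hypothetical decision rule: approve when income minus debt exceeds 50,000.
def approve(income, debt):
    return income - debt > 50_000

def counterfactual_income(income, debt, step=1_000.0, cap=1_000_000.0):
    """Smallest income increase (in `step` increments) that flips a denial."""
    delta = 0.0
    while not approve(income + delta, debt) and delta < cap:
        delta += step
    return delta

print(counterfactual_income(income=40_000, debt=10_000))
# 21000.0 -> "had your income been $21,000 higher, you would have been approved"
```
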
List 4 AI-specific risks that you should consider
- Inversion
- Extraction
- Poisoning
- Evasion
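
A toy illustration of one of these risks, poisoning, where injected mislabeled points shift a nearest-centroid classifier's decision; the data and classifier are stand-ins:

```python
def centroid(xs):
    return sum(xs) / len(xs)

def fit(data):
    """Nearest-centroid 'training': one centroid per class label."""
    return {label: centroid(xs) for label, xs in data.items()}

def predict(model, x):
    return min(model, key=lambda label: abs(x - model[label]))

clean = {0: [1.0, 2.0, 3.0], 1: [7.0, 8.0, 9.0]}
# Poisoning: attacker injects class-1-looking points labeled as class 0.
poisoned = {0: clean[0] + [7.0, 7.0], 1: clean[1]}

print(predict(fit(clean), 5.5))     # 1: closer to the class-1 centroid (8.0)
print(predict(fit(poisoned), 5.5))  # 0: poisoned centroid moved to 4.0
```
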
How can you mitigate the risk introduced by new data entering the algorithm?
- Keep snapshots of an algorithm as well as its outputs, so if there is an issue you can go back to a previous iteration
- Use documentation to know what changed between iterations
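
A minimal sketch of the snapshot-and-rollback idea; the directory layout, file names, and metadata fields are illustrative assumptions:

```python
import json
import pickle
import time
from pathlib import Path

def snapshot(model, version, dataset_id, notes, outdir="snapshots"):
    """Persist the model plus a metadata record for this iteration."""
    path = Path(outdir) / version
    path.mkdir(parents=True, exist_ok=True)
    with open(path / "model.pkl", "wb") as f:
        pickle.dump(model, f)
    meta = {"version": version, "dataset_id": dataset_id,
            "notes": notes, "saved_at": time.time()}
    (path / "meta.json").write_text(json.dumps(meta, indent=2))
    return path

def rollback(version, outdir="snapshots"):
    """Reload a previous iteration if the current one misbehaves."""
    with open(Path(outdir) / version / "model.pkl", "rb") as f:
        return pickle.load(f)
```
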
What is your best mitigation for AI not performing as it should?
Consider it an incident and have an incident response plan
What should you include in your incident response plan?
- Description of the issue
- Model version and data set used
- Who it should be reported to (internal/external)
- Mitigations
- Communications
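
One possible way to capture these fields as a structured record; the schema below is an illustrative assumption, not a prescribed format:

```python
from dataclasses import dataclass, field

@dataclass
class AIIncident:
    description: str                  # what went wrong
    model_version: str                # exact model iteration involved
    dataset_id: str                   # data set that model was trained on
    report_to: list                   # internal/external parties to notify
    mitigations: list = field(default_factory=list)
    communications: list = field(default_factory=list)

incident = AIIncident(
    description="Elevated false positives after retraining",
    model_version="v2.3.1",
    dataset_id="train-2024-06",
    report_to=["AI governance officer", "regulator (if required)"],
)
```
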
Why should you document the model version and data set related to an incident?
- Allows for challenger models to be accurately created
- Allows for transparency with regulatory agencies and consumers
What should you put in place to monitor systems?
- Continuously improve the system
- Have a procedure in place to deactivate a system or localize it as needed
- Create a “challenger model”
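
A minimal champion/challenger monitoring sketch covering the last two points; the accuracy metric, floor threshold, and stand-in models are assumptions:

```python
def monitor(champion, challenger, inputs, labels, floor=0.8):
    """Compare models on recent data; deactivate if neither clears the floor."""
    acc = lambda m: sum(m(x) == y for x, y in zip(inputs, labels)) / len(inputs)
    champ_acc, chall_acc = acc(champion), acc(challenger)
    if champ_acc < floor and chall_acc < floor:
        return "deactivate"          # neither model is safe to keep running
    if chall_acc > champ_acc:
        return "promote challenger"  # challenger becomes the new champion
    return "keep champion"

champion = lambda x: int(x > 0)
challenger = lambda x: int(x >= 0)
print(monitor(champion, challenger, [-1.0, 0.0, 1.0], [0, 1, 1]))
# -> promote challenger: it handles the boundary case the champion misses
```
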