Module 5: Implementing AI Projects and Systems: Managing and Monitoring AI Systems after Deployment Flashcards
What are some questions to ask during the system monitoring phase?
- Were the goals achieved?
- Has human interpretation and oversight been incorporated into the evaluation to avoid automation bias?
- Are there secondary or unintended outputs from the system? If so:
  - Do these result in additional risks or harms that need to be addressed?
  - Can these risks be predicted using a challenger model? (One approach is sketched below.)
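A challenger model is typically a second model, trained or configured differently, that is scored against the deployed "champion" on the same held-out data. Below is a minimal, illustrative sketch using scikit-learn; the models, synthetic data, and metric are assumptions for demonstration, not a prescribed setup.

```python
# Illustrative champion-vs-challenger comparison; the models, data and
# metric are assumptions for this sketch, not a prescribed setup.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2_000, random_state=0)
X_train, X_holdout, y_train, y_holdout = train_test_split(
    X, y, test_size=0.3, random_state=0
)

champion = LogisticRegression(max_iter=1_000).fit(X_train, y_train)  # stands in for the deployed model
challenger = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Score both on the same held-out data so the comparison is fair; a
# challenger that consistently wins is a candidate to replace the champion.
for name, model in [("champion", champion), ("challenger", challenger)]:
    auc = roc_auc_score(y_holdout, model.predict_proba(X_holdout)[:, 1])
    print(f"{name}: holdout AUC = {auc:.3f}")
```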
What should an organization do when AI is not performing as it should?
- Consider it an incident and respond. Include third parties in the response.
- If the risk is great, have a human shut down the algorithm.
What should be included in an AI incident response plan?
Document the model version and the dataset used for the model (a minimal record-keeping sketch follows below).
- This allows challenger models to be accurately created.
- This also allows for transparency with regulatory agencies and consumers.
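One lightweight way to capture the model version and dataset together is a deployment record that includes a cryptographic fingerprint of the training data. The sketch below is illustrative; the schema, field names, and file path are assumptions for this example, not a standard.

```python
# Illustrative deployment record pairing a model version with a
# fingerprint of the exact dataset used to train it. The schema and
# names are assumptions for this sketch, not a standard.
import hashlib
import json
from dataclasses import asdict, dataclass
from datetime import datetime, timezone
from pathlib import Path

def dataset_fingerprint(path: Path) -> str:
    """SHA-256 of the raw training file, so the dataset behind any model
    version can be verified later (e.g., to rebuild a challenger model
    or to answer a regulator's question)."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

@dataclass
class DeploymentRecord:
    model_name: str
    model_version: str
    dataset_path: str
    dataset_sha256: str
    deployed_at: str

# Tiny stand-in dataset so the example is self-contained.
data_file = Path("train_sample.csv")
data_file.write_text("age,income,label\n34,52000,1\n29,41000,0\n")

record = DeploymentRecord(
    model_name="credit_scorer",   # illustrative name
    model_version="2.4.1",        # illustrative version
    dataset_path=str(data_file),
    dataset_sha256=dataset_fingerprint(data_file),
    deployed_at=datetime.now(timezone.utc).isoformat(),
)
print(json.dumps(asdict(record), indent=2))
```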
Respond to internal and external risks.
- Prioritize and determine the risk level and appropriate response; create a "risk score" (a minimal scoring sketch follows this list).
- Conduct internal or external red-teaming exercises for generative AI systems (these may also be done pre-deployment).
- Consider bug bashes/bug bounties to generate user engagement and extensive feedback.
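A common way to turn prioritization into a "risk score" is a likelihood-times-impact matrix. The sketch below assumes a 1-5 scale and illustrative response bands; actual scales and thresholds should come from the organization's own risk framework.

```python
# Illustrative likelihood-x-impact risk score on a 1-5 scale; the
# scale, bands, and thresholds are assumptions, not a mandated method.
def risk_score(likelihood: int, impact: int) -> int:
    """Both inputs on a 1 (low) to 5 (high) scale; result is 1..25."""
    assert 1 <= likelihood <= 5 and 1 <= impact <= 5
    return likelihood * impact

def priority(score: int) -> str:
    if score >= 15:
        return "critical: escalate, consider shutting the system down"
    if score >= 8:
        return "high: respond per the incident response plan"
    return "moderate: monitor and document"

score = risk_score(likelihood=4, impact=5)
print(score, "->", priority(score))  # 20 -> critical: ...
```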
What are some considerations for monitoring AI systems?
1) Inventory all AI systems and attach a risk score to each.
2) Continuously improve the system.
3) Have a procedure in place to deactivate a system or localize it as needed (see the kill-switch sketch after this list).
4) Create a challenger model.
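For item 3, the deactivation procedure is often implemented as a "kill switch" checked before every model call, with an option to localize the shutdown to specific regions. The sketch below is illustrative: the flag store and function names are assumptions (in practice this might be a feature-flag service or configuration database), and the model call is a placeholder.

```python
# Illustrative kill-switch gate in front of an AI system; the flag
# store, names, and fallback routing are assumptions for this sketch.
AI_FLAGS = {"credit_scorer": {"enabled": True, "regions_disabled": {"EU"}}}

def model_predict(features: dict) -> float:
    return 0.5  # placeholder for the real model call

def predict_with_guard(system: str, region: str, features: dict) -> dict:
    flag = AI_FLAGS[system]
    if not flag["enabled"] or region in flag["regions_disabled"]:
        # System is shut down globally or localized away from this
        # region: fall back to human review instead of the model.
        return {"decision": None, "route": "human_review"}
    return {"decision": model_predict(features), "route": "model"}

print(predict_with_guard("credit_scorer", "EU", {"age": 34}))  # human_review
print(predict_with_guard("credit_scorer", "US", {"age": 34}))  # model
```

Keeping the guard outside the model itself means a human can flip the flag and stop the algorithm without a code change or redeployment.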
What are some potential negative downstream consequences of AI?
- Resentment over poorly implemented interventions.
- A lack of transparency and clarity about decisions can result in a perception of unfairness, arbitrariness or ideological influence.
- Use of superficial policies and guardrails that meet the letter of the law but not its spirit.
- False sense of safety and privacy.
- Researchers and reviewers believing all possible risks are addressed, and overlooking something significant. This may be more likely if there are incentives to mask or reframe some risks.
- Relying on a one-time evaluation of risk rather than continuous monitoring of changing AI risks over time.
- Unintended consequences. For example, if researchers and developers are asked to reflect on potential misuse, an unintended consequence could be the creation of a “roadmap” malicious actors could use.
What does effective AI governance require?
A comprehensive understanding and documentation of an AI system’s purpose, risks and impacts.
What should be implemented by organizations to maintain effective AI governance?
- Monitoring and fine-tuning processes (see the drift-monitoring sketch after this list)
- Documentation of AI purpose and limitations
- Preparation to adapt models for new purposes
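As referenced in the first bullet, one widely used monitoring signal is the Population Stability Index (PSI), which quantifies how far the data a model sees in production has drifted from the data it was trained on. The sketch below uses commonly cited rule-of-thumb thresholds; treat them as assumptions to be calibrated, not a universal standard.

```python
# Illustrative Population Stability Index (PSI) check; thresholds are
# common rules of thumb, not a universal standard.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Compare the live ("actual") distribution against the training
    ("expected") distribution, binned on the training data's range."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid division by zero / log(0) in sparse bins.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
train_scores = rng.normal(0.0, 1.0, 10_000)  # distribution at training time
live_scores = rng.normal(0.3, 1.2, 10_000)   # shifted live distribution
print(f"PSI = {psi(train_scores, live_scores):.3f}")
# Rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 major drift.
```

In practice, a check like this would run on a schedule over each input feature and the model's output scores, with alerts wired to the thresholds so drift triggers review, fine-tuning, or deactivation.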