Module 5: Implementing AI Projects and Systems: Managing and Monitoring AI Systems after Deployment Flashcards

1
Q

What are some questions to ask during the system monitoring phase?

A
  • Were the goals achieved?
  • Has human interpretation and oversight been incorporated into the evaluation to avoid automation bias?
  • Are there secondary or unintended outputs from the system? If so:
    • Do these result in additional risks or harms that need to be addressed?
    • Can these risks be predicted using a challenger model? (See the sketch after this list.)
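
A challenger-model comparison can be automated. Below is a minimal Python sketch, assuming a challenger has been trained alongside the deployed ("champion") model; all names and the threshold are illustrative, not from the source:

```python
# Minimal sketch: compare a deployed champion model's predictions against a
# challenger model's predictions on the same production sample, and flag
# high disagreement for human review. Names and threshold are hypothetical.
import numpy as np

def disagreement_rate(champion_preds: np.ndarray, challenger_preds: np.ndarray) -> float:
    """Fraction of cases where the challenger disagrees with the champion."""
    return float(np.mean(champion_preds != challenger_preds))

champion_preds = np.array([1, 0, 1, 1, 0, 1])
challenger_preds = np.array([1, 1, 1, 0, 0, 1])

if disagreement_rate(champion_preds, challenger_preds) > 0.15:  # illustrative threshold
    print("High champion/challenger disagreement; route for human review.")
```

Routing flagged cases to a human reviewer also supports the oversight question above and helps guard against automation bias.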
2
Q

What should an organization do when AI is not performing as it should?

A
  • Consider it an incident and respond. Include third parties in the response.
  • If the risk is severe, have a human shut down the algorithm (see the kill-switch sketch below).
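
A minimal sketch of the human shutdown path, assuming a simple operator-controlled flag file; the path, file, and names are hypothetical, not a standard mechanism:

```python
# Minimal kill-switch sketch: a human operator creates a flag file to halt
# the model; the serving code checks the flag before every prediction.
from pathlib import Path

KILL_SWITCH = Path("/etc/ai/disable_model")  # hypothetical path, set by an operator

def predict_or_halt(model, features):
    if KILL_SWITCH.exists():
        # A human has shut the algorithm down; fall back to a manual process.
        raise RuntimeError("Model disabled by operator.")
    return model.predict(features)
```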
3
Q

What should be included in an AI incident response plan?

A

Document the model version and the dataset used to train the model.

 - This allows challenger models to be created accurately.

 - It also allows for transparency with regulatory agencies and consumers.

Respond to internal and external risks.

 - Prioritize and determine the risk level and the appropriate response; create a “risk score” (a scoring sketch follows this list).

 - Conduct internal or external red-teaming exercises for generative AI systems (these may also be done pre-deployment).

 - Consider bug bashes/bug bounties to generate user engagement and extensive feedback.
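
The “risk score” can be made concrete with the common likelihood-times-impact risk-matrix heuristic. A minimal sketch; the scales and response tiers are illustrative, not from the source:

```python
# Minimal risk-scoring sketch: score = likelihood x impact, then map the
# score to a response tier. Scales (1-5) and thresholds are illustrative.
def risk_score(likelihood: int, impact: int) -> int:
    """Both inputs on a 1-5 scale; the score ranges from 1 to 25."""
    assert 1 <= likelihood <= 5 and 1 <= impact <= 5
    return likelihood * impact

def response_tier(score: int) -> str:
    if score >= 15:
        return "critical: escalate; consider shutting the system down"
    if score >= 8:
        return "high: prioritized remediation"
    return "low: monitor and log"

print(response_tier(risk_score(likelihood=4, impact=5)))  # -> critical
```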
4
Q

What are some considerations for monitoring AI systems?

A

1) Inventory all AI systems and attach a risk score to each.
2) Continuously improve the system.
3) Have a procedure in place to deactivate a system, or to localize it, as needed.
4) Create a challenger model. (A sketch combining these follows.)
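
A minimal sketch tying these considerations together: an inventory record with a per-system risk score and a deactivation hook. All fields, names, and values are hypothetical:

```python
# Minimal AI-system inventory sketch: each record carries a risk score and a
# deactivation procedure; recording the model version and dataset also
# supports building an accurate challenger model. All values are hypothetical.
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str
    model_version: str
    dataset_id: str      # enables an accurate challenger model and transparency
    risk_score: int      # e.g., likelihood x impact on a 1-25 scale
    active: bool = True

    def deactivate(self, reason: str) -> None:
        """Procedure to take the system offline (or localize it) as needed."""
        self.active = False
        print(f"{self.name} deactivated: {reason}")

inventory = [
    AISystemRecord("loan-approval", "v2.3", "apps-2024Q4", risk_score=20),
    AISystemRecord("chat-summarizer", "v1.1", "tickets-2024", risk_score=6),
]
# Review the highest-risk systems first.
for system in sorted(inventory, key=lambda s: s.risk_score, reverse=True):
    print(system.name, system.risk_score)
```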

5
Q

What are some potential negative downstream consequences of AI?

A
  • Resentment over poorly implemented interventions.
  • A lack of transparency and clarity about decisions can create a perception of unfairness, arbitrariness or ideological influence.
  • Superficial policies and guardrails that meet the letter of the law but not its spirit.
  • A false sense of safety and privacy.
  • Researchers and reviewers believing all possible risks have been addressed and overlooking something significant. This is more likely if there are incentives to mask or reframe some risks.
  • Relying on a one-time evaluation of risk rather than continuous monitoring of AI risks as they change over time.
  • Unintended consequences. For example, if researchers and developers are asked to reflect on potential misuse, an unintended consequence could be the creation of a “roadmap” that malicious actors could use.
6
Q

What does effective AI governance require?

A

A comprehensive understanding and documentation of an AI system’s purpose, risks and impacts.

7
Q

What should be implemented by organizations to maintain effective AI governance?

A
  • Monitoring and fine-tuning processes
  • Documentation of AI purpose and limitations
  • Preparation to adapt models for new purposes