Implementing AI Projects and Systems Flashcards
What questions help identify the data needed to train the algorithm?
- What is the data source?
- Is the data accurate?
- Is the data fully representative of the population the AI system is intended to serve?
- Is the data biased?
What can help identify data gaps?
Statistical sampling
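One way statistical sampling surfaces data gaps is by comparing category shares in the training sample against a reference population and flagging under-represented groups. A minimal sketch (the function name, data, and 0.5 threshold are illustrative assumptions, not a standard method):

```python
from collections import Counter

def find_data_gaps(population, sample, threshold=0.5):
    """Flag categories whose share of the sample falls below
    `threshold` times their share of the reference population."""
    pop_counts = Counter(population)
    samp_counts = Counter(sample)
    n_pop, n_samp = len(population), len(sample)
    gaps = []
    for category, count in pop_counts.items():
        expected = count / n_pop               # population share
        observed = samp_counts.get(category, 0) / n_samp  # sample share
        if observed < threshold * expected:
            gaps.append(category)
    return gaps

# Hypothetical data: group "C" is under-sampled relative to the population.
population = ["A"] * 500 + ["B"] * 300 + ["C"] * 200
sample = ["A"] * 60 + ["B"] * 35 + ["C"] * 5
print(find_data_gaps(population, sample))  # → ['C']
```

In practice a formal test (e.g., chi-square goodness of fit) would replace the fixed threshold, but the comparison of observed versus expected shares is the core idea.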
What are some example considerations when defining the business case?
- What is the cost/benefit analysis? What are the tradeoffs in using AI/ML vs. other solutions?
- What will the organization’s declared position on AI use be, both externally and internally?
What areas can stakeholders assist with?
- Identifying data needs
- Determining policies (utilizing sector-specific laws)
- Considering system options (including redress)
- Documenting AI uses
- Evaluating “what if” scenarios
- Assessing risk tolerance
- Test, evaluation, verification, and validation cycles
What are some considerations for a Communication Plan by audience?
For regulators:
- Compliance and disclosure obligations
- Explainability
- Risks and mitigation processes
- Data and risk classifications
For consumers:
- Transparency as to the functionality of AI
- What data will be used and how
What are the steps involved in identifying the risks of AI use?
- Conduct a risk analysis and determine the contributing factors
- Classify risks appropriately (prohibited, major, moderate, no risk) and manage them proportionately
- Determine what risks can be mitigated
- Assess the organization’s risk tolerance
Name some tools for identifying risk.
- HUDERIA
- Probability and Severity Harms Matrix
- Confusion Matrix
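A confusion matrix tallies a classifier's true/false positives and negatives, which feeds directly into risk evaluation (e.g., how often the system raises false alarms). A minimal sketch for the binary case, with hypothetical labels:

```python
def confusion_matrix(actual, predicted):
    """Count true/false positives and negatives for a binary classifier
    where 1 = positive class and 0 = negative class."""
    tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
    tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)
    fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
    fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
    return {"tp": tp, "tn": tn, "fp": fp, "fn": fn}

# Hypothetical ground-truth labels and model predictions.
actual    = [1, 0, 1, 1, 0, 0, 1, 0]
predicted = [1, 0, 0, 1, 1, 0, 1, 0]
m = confusion_matrix(actual, predicted)
print(m)  # → {'tp': 3, 'tn': 3, 'fp': 1, 'fn': 1}
print("false positive rate:", m["fp"] / (m["fp"] + m["tn"]))  # → 0.25
```

The false positive rate derived here is one of the AI-specific attributes the testing considerations below call out.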
What does HUDERIA stand for? What is its basis?
Human Rights, Democracy, and the Rule of Law Impact Assessment. It is based on relevant Council of Europe standards.
What are some considerations for testing and validating AI systems?
- Use cases: Align testing data and processes for the specific use case
- Resources: Understand what resources you have and where best to put them to address risks and mitigations.
- Conduct adversarial testing and threat modeling to identify security threats.
- Establish multiple layers of mitigation to stop system errors or failures at different levels or modules of the AI system.
- Evaluate AI systems for attributes unique to them, such as brittleness, hallucinations, embedded bias, uncertainty and false positives.
- Understand trade-offs among mitigation strategies.
- PETs: Apply PETs to training and testing data.
- Documentation: Document all decisions the stakeholders group makes.
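As one illustration of applying a PET to training or testing data, differential privacy adds calibrated noise to query results so that no single record can be inferred. A minimal sketch of a differentially private count using Laplace noise (the function name, epsilon, and data are illustrative assumptions; this is not a production mechanism):

```python
import math
import random

def dp_count(values, predicate, epsilon=1.0):
    """Differentially private count: the true count plus Laplace noise
    with scale 1/epsilon (sensitivity of a count query is 1)."""
    true_count = sum(1 for v in values if predicate(v))
    # Inverse-CDF sampling from a Laplace(0, 1/epsilon) distribution.
    u = random.random() - 0.5
    noise = -(1 / epsilon) * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_count + noise

random.seed(0)
ages = [23, 45, 31, 62, 28, 54, 37, 41]
# Noisy count of records with age >= 40; the true count is 4.
print(dp_count(ages, lambda a: a >= 40))
```

Smaller epsilon values add more noise (stronger privacy) at the cost of accuracy, which is exactly the kind of mitigation trade-off noted above.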
What are some questions to ask when implementing tests during system monitoring?
1) Were the goals achieved?
- Automation bias: do not rely solely on output to determine this; human interpretation and oversight should be included in the evaluation.
2) As the system is in use, are there secondary or unintended outputs?
- Do these result in additional risks or harms that need to be addressed?
- Can these or others be predicted by using a challenger model?
What are some elements of an AI response plan?
1) Document the model version and the dataset used for the model
- Allows for challenger models to be accurately created
- Allows for transparency with regulatory agencies and consumers
2) Respond to internal and external risks
- Prioritize and determine the risk level and appropriate response; create a “risk score”
- Conduct internal or external red teaming exercises for generative AI systems (may also be done pre-deployment)
- Consider bug bashing/bug bounties to generate user engagement and extensive feedback
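The "risk score" above can be sketched as a probability-and-severity harms matrix: likelihood times severity, bucketed into response levels. All scales and thresholds below are hypothetical placeholders an organization would calibrate to its own risk tolerance:

```python
def risk_score(probability, severity):
    """Harms-matrix score: likelihood x severity, each on a 1-5 scale."""
    return probability * severity

def risk_level(score):
    """Map a 1-25 score to an illustrative response tier."""
    if score >= 15:
        return "major"
    if score >= 8:
        return "moderate"
    return "low"

# Hypothetical risk: likely (4/5) and severe (5/5).
score = risk_score(probability=4, severity=5)
print(score, risk_level(score))  # → 20 major
```

Higher tiers would trigger the proportionate responses described above, such as red teaming or escalation.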
What are some considerations for monitoring AI systems?
1) Continuously improve the system: Retrain with new data as needed and with human input and feedback.
2) Have a procedure in place to deactivate a system or localize it as needed (helpful for legal requirements and performance issues).
3) Create a “Challenger Model” to test and compare against the existing model (or “Champion Model”) to assess drift, unexpected results, etc.
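The champion/challenger comparison can be as simple as tracking how often the two models disagree on the same inputs; a rising disagreement rate is one signal of drift or unexpected results. A minimal sketch (the function name, predictions, and 10% alert threshold are illustrative assumptions):

```python
def disagreement_rate(champion_preds, challenger_preds):
    """Fraction of inputs on which the challenger's prediction
    differs from the champion's."""
    diffs = sum(1 for c, ch in zip(champion_preds, challenger_preds) if c != ch)
    return diffs / len(champion_preds)

# Hypothetical predictions on the same batch of inputs.
champion   = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
challenger = [1, 0, 0, 1, 0, 1, 1, 0, 1, 1]
rate = disagreement_rate(champion, challenger)
print(rate)  # → 0.2
if rate > 0.1:  # hypothetical alert threshold
    print("investigate drift")
```

A fuller monitoring setup would also compare each model against ground truth as labels arrive, but disagreement alone is cheap to compute continuously.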
Name some common stakeholders involved in scoping an AI project.
- AI governance officers
- Privacy experts
- Security experts
- Procurement experts
- Subject matter experts
- Legal team
Name some different types of bias.
- Computational
- Cognitive
- Societal
What are the OECD AI Principles?
- Inclusive growth, sustainable development and well-being
- Human-centered values and fairness
- Transparency and explainability
- Robustness, security and safety
- Accountability