Implementing AI Projects and Systems Flashcards

1
Q

What questions should be asked when identifying the data needed to train the algorithm?

A
  • What is the data source?
  • Is the data accurate?
  • Is the data fully representative of the intended data for the AI process?
  • Is the data biased?
2
Q

What can help identify data gaps?

A

Statistical sampling
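The idea of using sampling statistics to surface data gaps can be sketched in a few lines (an illustrative example, not from the source; the function, threshold, and category names are all hypothetical):

```python
from collections import Counter

def find_data_gaps(training_labels, reference_shares, threshold=0.05):
    """Flag categories whose share in the training data deviates from a
    reference share (e.g., known population proportions) by more than
    `threshold` -- a simple proxy for under- or over-representation."""
    total = len(training_labels)
    counts = Counter(training_labels)
    gaps = {}
    for category, expected in reference_shares.items():
        observed = counts.get(category, 0) / total
        if abs(observed - expected) > threshold:
            gaps[category] = {"observed": round(observed, 3), "expected": expected}
    return gaps

# Example: age groups over- and under-represented in a training set
labels = ["18-30"] * 70 + ["31-50"] * 25 + ["51+"] * 5
print(find_data_gaps(labels, {"18-30": 0.40, "31-50": 0.40, "51+": 0.20}))
```

In practice a statistical test (e.g., chi-square goodness of fit) would replace the fixed threshold, but the comparison of observed versus expected shares is the core of the technique.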

3
Q

What are a couple of examples of defining the business case?

A
  • What is the cost/benefit analysis? What are the tradeoffs in using AI/ML vs. other solutions?
  • What will the organization’s declared position on AI use be, both externally and internally?
4
Q

What areas can stakeholders assist with?

A
  • Identifying data needs
  • Determining policies (utilizing sector-specific laws)
  • Considering system options (including redress)
  • Documenting AI uses
  • Evaluating “what if” scenarios
  • Assessing risk tolerance
  • Supporting test, evaluation, verification, and validation (TEVV) cycles
5
Q

What are some considerations for a Communication Plan by audience?

A

For regulators:
- Compliance and disclosure obligations
- Explainability
- Risks and mitigation processes
- Data and risk classifications

For consumers:
- Transparency as to the functionality of AI
- What data will be used and how

6
Q

What are the steps involved in identifying the risks of AI use?

A
  • Conduct a risk analysis and determine the contributing factors
  • Classify risks appropriately (prohibitive, major, moderate, no-risk) and create proportionate risk management
  • Determine what risks can be mitigated
  • Assess the organization’s risk tolerance
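The classification step above can be sketched as a simple probability-times-severity scoring (an illustrative example only; the score bands and ratings are assumptions, not from any standard):

```python
def classify_risk(probability, severity):
    """Map a probability x severity rating (each 0-5) to the risk
    classes named above. The band cut-offs are illustrative."""
    score = probability * severity
    if score == 0:
        return "no-risk"
    if score >= 16:
        return "prohibitive"
    if score >= 9:
        return "major"
    return "moderate"

print(classify_risk(4, 5))  # prohibitive (score 20)
print(classify_risk(3, 3))  # major (score 9)
print(classify_risk(2, 2))  # moderate (score 4)
```

This is the same shape as a probability-and-severity harms matrix: each cell of the matrix corresponds to one score band, which then drives a proportionate response.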
7
Q

Name some tools for identifying risk.

A
  • HUDERIA
  • Probability and Severity Harms Matrix
  • Confusion Matrix
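The last tool listed, a confusion matrix, can be computed in a few lines for a binary classifier (an illustrative sketch; real projects would typically use a library routine):

```python
def confusion_matrix(actual, predicted):
    """Count true/false positives/negatives for binary labels (0/1),
    the four cells of a 2x2 confusion matrix."""
    tp = sum(1 for a, p in zip(actual, predicted) if a and p)
    tn = sum(1 for a, p in zip(actual, predicted) if not a and not p)
    fp = sum(1 for a, p in zip(actual, predicted) if not a and p)
    fn = sum(1 for a, p in zip(actual, predicted) if a and not p)
    return {"TP": tp, "TN": tn, "FP": fp, "FN": fn}

actual    = [1, 1, 0, 0, 1, 0]
predicted = [1, 0, 0, 1, 1, 0]
print(confusion_matrix(actual, predicted))  # {'TP': 2, 'TN': 2, 'FP': 1, 'FN': 1}
```

For risk identification, the false-positive and false-negative cells are the interesting ones: they quantify the two distinct ways a model's errors can cause harm.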
8
Q

What does HUDERIA stand for? What is its basis?

A

Human Rights, Democracy, and the Rule of Law Impact Assessment. It is based on relevant Council of Europe standards.

9
Q

What are some considerations for testing and validating AI systems?

A
  • Use cases: Align testing data and processes for the specific use case
  • Resources: Understand what resources you have and where best to apply them to address risks and mitigations.
  • Conduct adversarial testing and threat modeling to identify security threats.
  • Establish multiple layers of mitigation to stop system errors or failures at different levels or modules of the AI system.
  • Evaluate AI systems for attributes unique to them, such as brittleness, hallucinations, embedded bias, uncertainty and false positives.
  • Understand trade-offs among mitigation strategies.
  • PETs: Apply privacy-enhancing technologies (PETs) to training and testing data.
  • Documentation: Document all decisions the stakeholders group makes.
10
Q

What are some questions to ask when implementing tests during system monitoring?

A

1) Were the goals achieved?
- Automation bias: do not rely solely on output to determine this; human interpretation and oversight should be included in the evaluation.
2) As the system is in use, are there secondary or unintended outputs?
- Do these result in additional risks or harms that need to be addressed?
- Can these or others be predicted by using a challenger model?

11
Q

What are some elements of an AI response plan?

A

1) Document the model version and the dataset used for the model
- Allows for challenger models to be accurately created
- Allows for transparency with regulatory agencies and consumers
2) Respond to internal and external risks
- Prioritize and determine the risk level and appropriate response; create a “risk score”
- Conduct internal or external red teaming exercises for generative AI systems (may also be done pre-deployment)
- Consider bug bashing/bug bounties to generate user engagement and extensive feedback

12
Q

What are some considerations for monitoring AI systems?

A

1) Continuously improve the system: Retrain with new data as needed and with human input and feedback.
2) Have a procedure in place to deactivate a system or localize it as needed (helpful for legal requirements and performance issues).
3) Create a “Challenger Model” to test and compare against the existing model (or “Champion Model”) to assess drift, unexpected results, etc.
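The champion/challenger comparison in point 3 can be sketched as follows (an illustrative example; the toy models, data, and drift threshold are assumptions, not from the source):

```python
def accuracy(model, data):
    """Fraction of (input, label) pairs the model predicts correctly."""
    return sum(model(x) == y for x, y in data) / len(data)

def compare_models(champion, challenger, recent_data, drift_threshold=0.05):
    """Flag suspected drift when a challenger (e.g., retrained on newer
    data) outperforms the deployed champion by more than the threshold."""
    champ_acc = accuracy(champion, recent_data)
    chall_acc = accuracy(challenger, recent_data)
    return {
        "champion": champ_acc,
        "challenger": chall_acc,
        "drift_suspected": chall_acc - champ_acc > drift_threshold,
    }

# Toy models: classify a reading as high (1) if above a cutoff
champion = lambda x: int(x > 10)    # trained on older data
challenger = lambda x: int(x > 12)  # retrained on recent data
recent = [(9, 0), (11, 0), (13, 1), (15, 1), (12, 0)]
print(compare_models(champion, challenger, recent))
```

If the challenger consistently wins on recent data, that is evidence the champion has drifted and should be retrained or replaced.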

13
Q

Name some common stakeholders involved in scoping an AI project.

A
  • AI governance officers
  • Privacy experts
  • Security experts
  • Procurement experts
  • Subject matter experts
  • Legal team
14
Q

Name some different types of bias.

A
  • Computational
  • Cognitive
  • Societal
15
Q

What are the OECD AI Principles?

A
  • Inclusive growth, sustainable development and well-being
  • Human-centered values and fairness
  • Transparency and explainability
  • Robustness, security and safety
  • Accountability
16
Q

What are some factors to consider when evaluating an AI system’s potential risks?

A
  • Algorithm type
  • Third-party tools
  • Regulations
  • Industry-specific considerations
  • Intended purpose
17
Q

What are some AI-specific risks an organization should consider?

A
  • Model inversion
  • Model extraction
  • Data poisoning
  • Evasion attacks
18
Q

What should you do if an AI model is not working properly?

A

Consider it an incident and implement the incident response plan.

19
Q

What are some potential negative downstream consequences of AI research and development?

A
  • Resentment caused by poorly implemented interventions.
  • A lack of transparency and clarity about decisions can result in a perception of unfairness, arbitrariness or ideological influence.
  • Superficial policies and guardrails that meet the letter of the law but not its spirit.
  • A false sense of safety and privacy: researchers and reviewers may believe all possible risks are addressed but overlook something significant. This is especially dangerous if there are incentives to mask or reframe some risks.
  • Relying on a one-time evaluation of risk rather than continuous monitoring of changing AI risks over time.
  • Unintended consequences. Example: if researchers and developers are required to reflect on potential misuses of their work, the resulting documentation could serve as a “roadmap” for malicious actors.
20
Q

What does effective AI governance require?

A

A comprehensive understanding and documentation of an AI system’s purpose, risks and impacts.

21
Q

What should be implemented by organizations to maintain effective AI governance?

A
  • Monitoring and fine-tuning processes
  • Documentation of AI purpose and limitations
  • Preparation to adapt models for new purposes