7. AI Governance Strategy Flashcards
What do you need to understand to build AI governance?
The organization’s:
- Operations
- Incentive structures
- Sector
- Role as a developer, deployer, or user of AI
What are the considerations for engaging leadership to support behavioral and cultural change?
- Identify leadership already using AI
- Understand where and how responsible AI is a differentiator
- Show why and how the organization and leadership can get ahead of AI
What are the steps to involve stakeholders (privacy, security, legal, accessibility, and digital safety) in AI governance?
Assist stakeholders to understand:
1. Their specific roles
2. Where to seek assistance
3. How to empower themselves in the AI development and release process
How do you establish organizational risk strategy and tolerance?
Determine the level of risk the organization is willing to accept and develop mitigation strategies accordingly
What are the 3 types of governance models?
- Centralized (a governance model in which one team or person is responsible for AI-related affairs; all other persons or organizations flow through this point)
- Hybrid (a governance model that combines centralized and local governance; typically seen when a large organization assigns a main individual responsibility for AI-related affairs, and local entities then fulfill and support the policies and directives from the central governing body)
- Decentralized (also known as “local governance,” a governance model that delegates decision-making authority to lower levels in the organization, away from a central authority; there are fewer tiers in the organizational structure, allowing for a wider span of control and a bottom-to-top flow of decision-making and ideas)
What should you consider in an AI assessment process?
- External frameworks (e.g., the NIST AI Risk Management Framework), internal and academic publications
- Adapt the framework for external procurement or internal development of AI-based solutions
- Focus on key AI risks and needs based on the organization’s AI principles, values, and standards
- Contrast the assessment against existing assessments
Why should you contrast proposed AI assessments against existing assessments such as privacy reviews?
- Identifying areas of commonality
- Simplifying the overall compliance processes expected for AI products
- Reinforcing leadership support as they see processes being optimized and de-duplicated to keep product release timelines aligned with the pace of market developments
How do you create a culture of responsible AI within the organization?
- Highlight customer value and increased customer trust
- Recognize cultural variations (Ensure that diversity is included and encouraged)
- Define responsible AI as a discipline (Reinforce the value of AI for an organization)
- Engage HR to identify work roles and success measures for practitioners so they are rewarded
- Provide knowledge resources and training to personnel (Foster a culture that continuously promotes ethical behavior)
- Set common AI terms and taxonomy for the organization
How do you map, plan, and scope AI projects?
- Determine the stakeholders
- Involve stakeholders early in the process
- Define the business case
- Work with the stakeholders to assess whether AI is the solution
- Determine stakeholder meeting frequency
- Determine the risks for internal and external AI use
How do you identify the data needed for training the algorithm?
- Determine where the data originates and verify it is accurate
- Is the data fully representative of the data the AI process is intended to use?
- Is the data biased?
- Statistical sampling can help identify data gaps (see the sketch below)
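To make the statistical-sampling point concrete, the sketch below runs a chi-square goodness-of-fit check of training-data group shares against a reference population. The group names, counts, reference shares, and the 5% flagging threshold are all illustrative assumptions, not prescribed values.

```python
# Illustrative sketch: chi-square goodness-of-fit test to flag groups that are
# under- or over-represented in training data relative to a reference
# population. Group names and proportions are assumed for the example.
from scipy.stats import chisquare

# Observed record counts per group in the training set (assumed numbers)
observed = {"group_a": 7200, "group_b": 1900, "group_c": 900}

# Reference proportions for the population the model will serve (assumed)
expected_share = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}

total = sum(observed.values())
expected = [expected_share[g] * total for g in observed]

stat, p_value = chisquare(list(observed.values()), f_exp=expected)
print(f"chi-square={stat:.1f}, p={p_value:.3g}")

# Flag any group whose share deviates notably from the reference
# (the 5% threshold is a judgment call, not a standard)
for group, count in observed.items():
    gap = count / total - expected_share[group]
    if abs(gap) > 0.05:
        print(f"{group}: share differs from reference by {gap:+.1%} -- investigate")
```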
When mapping, planning and scoping AI projects, what should you take into account?
- Data needed to train the algorithm
- Applicable legislation
- Available system options, including redress
- Document appropriate uses of the AI to prevent use for a purpose different from the one it was created for
- Evaluate what happens if the AI performs poorly
- Assess the organization’s risk tolerance
- Build a timeline to include sufficient test, evaluation, verification and validation cycles
What are the different methodologies to identify risk?
- Probability and severity harms matrix (see the sketch below)
- HUDERIA risk index number
- Risk mitigation hierarchy
- Confusion matrix (a four-square matrix comparing predicted against actual outcomes: true positives, false positives, false negatives and true negatives)
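A probability-and-severity harms matrix is straightforward to operationalize. The sketch below is a minimal illustration; the four-point scales, the multiplicative score, the banding cut-offs, and the example harms are assumptions to adapt to your own risk taxonomy, not a prescribed standard.

```python
# Minimal sketch of a probability-and-severity harms matrix. Scales, labels,
# and the rule "risk score = probability x severity" are common conventions,
# not a standard; adapt both axes to your organization's risk taxonomy.

PROBABILITY = {"rare": 1, "possible": 2, "likely": 3, "frequent": 4}
SEVERITY = {"negligible": 1, "moderate": 2, "major": 3, "severe": 4}

def risk_score(probability: str, severity: str) -> int:
    """Return a simple ordinal risk score for one identified harm."""
    return PROBABILITY[probability] * SEVERITY[severity]

def risk_band(score: int) -> str:
    """Map a score onto bands that drive mitigation priority (illustrative cut-offs)."""
    if score >= 12:
        return "high - mitigate before release"
    if score >= 6:
        return "medium - mitigate or formally accept"
    return "low - monitor"

# Example harms from a hypothetical hiring-screening model
harms = [
    ("biased ranking of candidates", "likely", "major"),
    ("service outage during peak use", "possible", "moderate"),
]
for name, p, s in harms:
    score = risk_score(p, s)
    print(f"{name}: score={score} ({risk_band(score)})")
```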
What is a critical step following risk identification and mitigation?
Communicate them to stakeholders, the organization, partners, and people with whom your organization will share data results.
Tailor the communication method to each audience.
Can you leverage PIAs to assess AI projects?
Yes:
1. Consider performing a privacy impact assessment on the underlying training data
2. A PIA may not cover everything you need to have in an AI governance document, so you
may also want to do a data protection impact assessment
3. Tailor the organization’s existing PIAs and DPIAs to cover AI
What is one drawback of a PIA or DPIA?
They are not AI specific.
Ensure you identify gaps between existing processes and what you need for an algorithmic
impact assessment.
What should an algorithmic impact assessment cover?
An algorithmic impact assessment should cover the data issues and document decisions your
stakeholder group makes. This may include risk identification and mitigation or identifying who
approves and accepts risk on behalf of your organization.
What are the types of testing to validate AI?
Types of testing can include:
* Accuracy
* Robustness
* Reliability
* Privacy
* Interpretability
* Safety
* Bias
How do you address privacy in AI?
One way to address privacy in AI is to use PETs applied to training and testing data along with
other privacy protective measures. Some common PETs include homomorphic encryption, differential privacy, de-identification/obfuscation techniques and federated learning (see the differential privacy sketch below).
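Among the PETs listed, differential privacy is the simplest to illustrate in a few lines. The sketch below applies the Laplace mechanism to a count query over toy data; epsilon and the data are assumed values, and a production system should use a vetted DP library rather than hand-rolled noise.

```python
# Illustrative sketch of the Laplace mechanism for differential privacy.
# Epsilon and the data are assumed values; use a vetted DP library in practice.
import numpy as np

rng = np.random.default_rng(seed=42)

def dp_count(values: list[int], predicate, epsilon: float) -> float:
    """Differentially private count of items matching `predicate`.

    A count query has sensitivity 1 (adding or removing one person changes
    the count by at most 1), so Laplace noise with scale 1/epsilon
    satisfies epsilon-differential privacy.
    """
    true_count = sum(1 for v in values if predicate(v))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

ages = [34, 29, 41, 52, 38, 27, 45, 61, 33, 48]  # toy training data
print(dp_count(ages, lambda age: age > 40, epsilon=0.5))
```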
What are AI testing and validation challenges?
Resources:
* Understand what resources you have and where best to put them to address risks and
mitigations
* Higher-risk areas (e.g., AI used in aviation) should have higher resources put toward mitigation
* Lower-risk areas (e.g., an algorithm that determines which pictures of a cat will get clicks) will have
lower testing, validation and security requirements
* Within an organization, this may mean dedicating more resources to HR’s use of AI than
marketing’s use of AI to send emails
Why is it important to document AI testing, outcomes, and what was changed based on testing?
Compliance may require audits
Document all decisions and updates — these will be critical for informing future audits
Do you need to test and validate AI during deployment?
Testing and continuous evaluation are always needed for an algorithm. It is important to align
testing data and process to the use case and not use the same testing and evaluation for each
algorithm.
* Use cases may need differing amounts of detail. Some may also require more security or privacy,
depending on the algorithm’s purpose.
* Include cases the AI has not previously seen; i.e., “edge” cases
* Include “unseen” data (data not part of the training data set).
* Include potentially malicious data in the test.
* You may need to do a more intense search of what kinds of bias issues and mitigations exist.
* Conduct repeatability assessments to ensure the AI consistently produces the same (or a similar) outcome (see the sketch below).
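To illustrate edge-case and repeatability checks, the sketch below runs both against a stand-in predictor. The stub model, the example inputs, and the tolerance are assumptions; in practice you would point the same checks at the deployed model under test.

```python
# Minimal sketch of edge-case and repeatability checks. The predictor here is
# a stand-in stub; in practice you would run the same checks against the
# deployed model.
import numpy as np

def predict(features: np.ndarray) -> float:
    """Stand-in for the deployed model: a fixed logistic score over the inputs."""
    z = np.clip(float(np.nansum(features)) * 0.01, -50, 50)
    return float(1.0 / (1.0 + np.exp(-z)))

# Edge cases: inputs unlike anything in the training set
edge_cases = [
    np.zeros(10),               # all-zero record
    np.full(10, 1e9),           # absurdly large values
    np.array([np.nan] * 10),    # record with missing data
]
for x in edge_cases:
    score = predict(x)
    assert 0.0 <= score <= 1.0, f"score {score} out of range for input {x}"

# Repeatability: the same input should yield the same (or a very similar) output
x = np.random.default_rng(0).normal(size=10)
scores = [predict(x) for _ in range(20)]
assert max(scores) - min(scores) < 1e-6, "model output is not repeatable"
print("edge-case and repeatability checks passed")
```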
What do you do when AI is not responding as intended?
When determining what tests to use, ensure you fully understand risks identified by stakeholders.
Conduct adversarial testing and threat modeling to identify security threats (see the sketch below).
* How does the AI/ML program behave when provided malicious or inadvertently harmful input?
* What are the security threats to the system?
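One lightweight form of adversarial testing is to perturb a valid input within a small budget and flag disproportionate swings in the output. The sketch below shows the idea against a stand-in predictor; the perturbation budget and tolerance are assumed values, and real adversarial testing would also cover deliberately crafted malicious inputs.

```python
# Illustrative adversarial probe: perturb a valid input within a small budget
# and flag outputs that move disproportionately. The stand-in predictor,
# budget, and tolerance are assumptions for the example.
import numpy as np

def predict(features: np.ndarray) -> float:
    """Stand-in for the model under test: a fixed logistic score."""
    z = np.clip(float(features.sum()) * 0.01, -50, 50)
    return float(1.0 / (1.0 + np.exp(-z)))

rng = np.random.default_rng(1)
x = rng.normal(size=10)        # a known-good input
baseline = predict(x)

unstable = 0
for _ in range(100):
    perturbed = x + rng.uniform(-0.1, 0.1, size=x.shape)  # small perturbation budget
    if abs(predict(perturbed) - baseline) > 0.05:          # assumed tolerance
        unstable += 1
print(f"{unstable}/100 perturbations moved the output beyond tolerance")
```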
Do you need to establish multiple layers of mitigation to stop system errors/failures at different levels or modules of the system?
Yes. Evaluating AI system performance is not just about accuracy and validity. It also includes
attributes unique to these systems, such as brittleness, hallucinations, embedded bias,
uncertainty and false positives.
Why do you need to review previous incidents to identify risks?
To understand the breadth of potential issues
To understand what testing and validation has been completed, and in which areas.
How do you document decisions the stakeholder group makes during the development life cycle of an algorithm?
- Standard documents and templates
- Model cards or fact sheets (Provide standardized information about the model and its function/output; see the sketch below)
- Contextual explanations (Provide details on how new or different input may affect the output of the AI process)
- Determine the level of impact required for remediation
- Determine method of deployment
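As a concrete illustration, a minimal model card can be kept as structured data and published alongside the model. The field names below follow the commonly used model-card pattern, and every value is hypothetical.

```python
# Hypothetical minimal model card as structured data; the field names follow
# the common model-card pattern and all values are illustrative only.
import json

model_card = {
    "model_name": "resume_screener",          # hypothetical model
    "version": "2.1.0",
    "intended_use": "Rank resumes for recruiter review; not for automated rejection",
    "out_of_scope_uses": ["credit decisions", "automated hiring decisions"],
    "training_data": "Internal applications 2019-2023, de-identified",
    "evaluation": {
        "accuracy": 0.87,                     # assumed test-set figure
        "bias_checks": "Demographic parity gap below 0.03 across reported groups",
    },
    "limitations": ["Degrades on resumes in languages other than English"],
    "risk_owner": "AI governance board",      # who approves and accepts risk
}

print(json.dumps(model_card, indent=2))
```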