Practical Cases Flashcards
Use the following to answer the next question:
ABC Corp. is a leading insurance provider offering a range of coverage options to individuals. ABC has decided to utilize artificial intelligence to streamline and improve its customer acquisition and underwriting process, including the accuracy and efficiency of pricing policies.
ABC has engaged a cloud provider to utilize and fine-tune its pre-trained, general-purpose large language model (“LLM”). In particular, ABC intends to use its historical customer data (including applications, policies, and claims) and its proprietary pricing and risk strategies to provide an initial qualification assessment of potential customers, which would then be routed to a human underwriter for final review.
ABC and the cloud provider have completed training and testing the LLM, performed a readiness assessment, and made the decision to deploy the LLM into production. ABC has designated an internal compliance team to monitor the model during the first month, specifically to evaluate the accuracy, fairness, and reliability of its output. After the first month in production, ABC realizes that the LLM declines a higher percentage of women’s loan applications, due primarily to women historically receiving lower salaries than men.
What is the best strategy to mitigate the bias uncovered in the loan applications?
A. Retrain the model with data that reflects demographic parity.
B. Procure a third-party statistical bias assessment tool.
C. Document all instances of bias in the data set.
D. Delete all gender-based data in the data set.
A. Retrain the model with data that reflects demographic parity.
Justification:
Addressing Systemic Bias:
The issue stems from historical data reflecting gender-based income disparities. Retraining the model with a dataset that reflects demographic parity ensures the model learns patterns that are fair and unbiased (a code sketch follows this justification).
Improved Model Performance:
By introducing balanced or synthetic data that corrects for biases in the training dataset, the model can improve its predictions, ensuring equitable outcomes across demographic groups.
Ethical AI Practices:
Retraining aligns with ethical AI principles, including fairness and non-discrimination. It ensures that the AI solution does not perpetuate historical biases, a common issue in AI systems trained on real-world data.
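To make the retraining approach concrete, here is a minimal Python sketch of one common pre-processing technique, reweighing (in the style of Kamiran and Calders), which weights training records so the approval label becomes statistically independent of gender before the model is refit. The file name, column names, and choice of classifier are hypothetical placeholders, not ABC’s actual pipeline.

```python
# Hedged sketch: reweighing toward demographic parity before retraining.
# "applications.csv", the columns, and the classifier are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("applications.csv")          # hypothetical historical data
group, label = df["gender"], df["approved"]

# Weight each record by its expected cell probability under independence
# divided by its observed cell probability.
p_group = group.value_counts(normalize=True)
p_label = label.value_counts(normalize=True)
p_cell = df.groupby(["gender", "approved"]).size() / len(df)
weights = df.apply(
    lambda r: p_group[r["gender"]] * p_label[r["approved"]]
    / p_cell[(r["gender"], r["approved"])],
    axis=1,
)

# Retrain on the reweighted data (assumes the remaining features are numeric).
X = df.drop(columns=["approved", "gender"])
model = LogisticRegression(max_iter=1000)
model.fit(X, label, sample_weight=weights)
```

Reweighing is only one route to demographic parity; resampling or adding balanced synthetic records, as the justification notes, serves the same goal.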
Why not the other options?
B. Procure a third-party statistical bias assessment tool:
While bias assessment tools can identify and quantify bias, they do not actively mitigate or correct the bias within the model. This is a complementary step, not the solution itself.
C. Document all instances of bias in the data set:
Documenting bias is important for transparency but does not solve the issue of biased outcomes. It is part of the broader process but not sufficient on its own.
D. Delete all gender-based data in the data set:
Deleting gender-based data (a process called “fairness through unawareness”) may lead to proxy discrimination, where the model indirectly uses other correlated attributes (e.g., occupation or location) to replicate biased outcomes.
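The proxy-discrimination point can be demonstrated with a simple audit: if a classifier can recover the deleted gender attribute from the remaining features, those features act as proxies and the bias can persist. This is a hedged sketch with hypothetical file and column names.

```python
# Hedged sketch of a proxy audit after "fairness through unawareness".
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("applications.csv")            # hypothetical data
X = df.drop(columns=["gender", "approved"])     # features left after deletion
y = (df["gender"] == "female").astype(int)      # the attribute we removed

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, probe.predict_proba(X_te)[:, 1])

# AUC well above 0.5 means gender remains recoverable from correlated
# attributes (e.g., occupation or location), so the bias can persist.
print(f"Gender recoverable from remaining features: AUC = {auc:.2f}")
```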
ABC Corp. is a leading insurance provider offering a range of coverage options to individuals. ABC has decided to utilize artificial intelligence to streamline and improve its customer acquisition and underwriting process, including the accuracy and efficiency of pricing policies.
ABC has engaged a cloud provider to utilize and fine-tune its pre-trained, general-purpose large language model (“LLM”). In particular, ABC intends to use its historical customer data (including applications, policies, and claims) and its proprietary pricing and risk strategies to provide an initial qualification assessment of potential customers, which would then be routed to a human underwriter for final review.
ABC and the cloud provider have completed training and testing the LLM, performed a readiness assessment, and made the decision to deploy the LLM into production. ABC has designated an internal compliance team to monitor the model during the first month, specifically to evaluate the accuracy, fairness, and reliability of its output. After the first month in production, ABC realizes that the LLM declines a higher percentage of women’s loan applications, due primarily to women historically receiving lower salaries than men.
The best approach to enable a customer who wants information on the AI model’s parameters for underwriting purposes is to provide?
A. Transparency notice.
B. An opt-out mechanism.
C. Detailed terms of service.
D. Customer service support.
A. Transparency notice.
Reference:
According to the AIGP Body of Knowledge, transparency in AI systems is essential to ensure that stakeholders, including customers, understand how their data is being used and how decisions are made. This aligns with ethical principles of AI governance, ensuring that customers are informed and can make knowledgeable decisions regarding their interactions with AI systems.
Justification:
Purpose of a Transparency Notice:
A transparency notice is designed to provide customers with clear, accessible, and relevant information about how an AI system operates, including the parameters and factors it uses in decision-making processes such as underwriting.
It promotes trust and ensures compliance with ethical and regulatory standards by explaining the AI model’s role in assessing applications.
Alignment with Regulatory Expectations:
Transparency notices are often required under data protection laws such as the GDPR (for European customers) or similar frameworks, which mandate that individuals understand how automated systems affect them.
In this case, the notice should explain how the AI system evaluates applications, the types of data it uses, and how final decisions involve human oversight.
Providing Meaningful Information:
A transparency notice helps ensure customers are informed about the decision-making criteria without overwhelming them with overly technical or irrelevant details.
Why not the other options?
B. An opt-out mechanism:
While opt-out mechanisms are valuable for customers who prefer not to have their data processed by AI, they do not address the specific need for information about the AI model’s parameters.
C. Detailed terms of service:
Terms of service are often lengthy and legalistic, making them an ineffective means of communicating information about AI parameters to customers in an accessible way.
D. Customer service support:
Customer service can assist with general inquiries, but providing detailed and consistent information about AI parameters is better achieved through a structured transparency notice.
ABC Corp. is a leading insurance provider offering a range of coverage options to individuals. ABC has decided to utilize artificial intelligence to streamline and improve its customer acquisition and underwriting process, including the accuracy and efficiency of pricing policies.
ABC has engaged a cloud provider to utilize and fine-tune its pre-trained, general-purpose large language model (“LLM”). In particular, ABC intends to use its historical customer data (including applications, policies, and claims) and its proprietary pricing and risk strategies to provide an initial qualification assessment of potential customers, which would then be routed to a human underwriter for final review.
ABC and the cloud provider have completed training and testing the LLM, performed a readiness assessment, and made the decision to deploy the LLM into production. ABC has designated an internal compliance team to monitor the model during the first month, specifically to evaluate the accuracy, fairness, and reliability of its output. After the first month in production, ABC realizes that the LLM declines a higher percentage of women’s loan applications, due primarily to women historically receiving lower salaries than men.
Which of the following is the most important reason to train the underwriters on the model prior to deployment?
A. To provide a reminder of the right to appeal.
B. To solicit ongoing feedback on model performance.
C. To apply their own judgment to the initial assessment.
D. To ensure they provide transparency to applicants about the model.
C. To apply their own judgment to the initial assessment.
Reference:
The AIGP Body of Knowledge emphasizes the importance of human oversight in AI systems, particularly in high-stakes areas such as underwriting and loan approvals. Human underwriters can provide a critical review and ensure that the model’s assessments are accurate and fair, integrating their expertise and understanding of complex cases.
Justification:
Human Oversight:
Training underwriters to apply their own judgment ensures that the decisions made by the AI model are not blindly followed. Human judgment can help correct or adjust for biases or errors in the model’s initial assessment.
Mitigating Bias:
In this case, the AI model has demonstrated bias by declining a higher percentage of women’s loan applications. Trained underwriters can identify and address such issues by overriding or questioning the AI’s output when appropriate.
Regulatory and Ethical Compliance:
Many regulations and ethical guidelines require that AI systems be used with human oversight, particularly in high-stakes decisions like underwriting. Training underwriters ensures compliance with these standards.
Why not the other options?
A. To provide a reminder of the right to appeal:
While informing customers of their right to appeal is important, this does not address the need for underwriters to make informed, independent judgments about the AI’s output.
B. To solicit ongoing feedback on model performance:
Feedback is valuable for refining the model, but the immediate priority before deployment is to ensure that underwriters can independently evaluate and make decisions based on the model’s output.
D. To ensure they provide transparency to applicants about the model:
Transparency is essential, but it is typically achieved through notices or disclosures, not through training underwriters. The underwriters’ primary role is to ensure fair and accurate application assessments.
ABC Corp. is a leading insurance provider offering a range of coverage options to individuals. ABC has decided to utilize artificial intelligence to streamline and improve its customer acquisition and underwriting process, including the accuracy and efficiency of pricing policies.
ABC has engaged a cloud provider to utilize and fine-tune its pre-trained, general-purpose large language model (“LLM”). In particular, ABC intends to use its historical customer data (including applications, policies, and claims) and its proprietary pricing and risk strategies to provide an initial qualification assessment of potential customers, which would then be routed to a human underwriter for final review.
ABC and the cloud provider have completed training and testing the LLM, performed a readiness assessment, and made the decision to deploy the LLM into production. ABC has designated an internal compliance team to monitor the model during the first month, specifically to evaluate the accuracy, fairness, and reliability of its output. After the first month in production, ABC realizes that the LLM declines a higher percentage of women’s loan applications, due primarily to women historically receiving lower salaries than men.
Each of the following steps would support fairness testing by the compliance team during the first month in production EXCEPT?
A. Validating a similar level of decision-making across different demographic groups.
B. Providing the loan applicants with information about the model’s capabilities and limitations.
C. Identifying if additional training data should be collected for specific demographic groups.
D. Using tools to help understand factors that may account for differences in decision-making.
B. Providing the loan applicants with information about the model’s capabilities and limitations.
Justification:
Purpose of Fairness Testing:
Fairness testing focuses on ensuring that the AI system produces equitable outcomes across different demographic groups and does not discriminate unfairly. It involves evaluating outputs, identifying biases, and addressing disparities.
Why Option B is Not Relevant:
While transparency about the model’s capabilities and limitations is important, it is more related to ethical AI practices and customer communication than to fairness testing. Providing this information to loan applicants does not directly address or test for fairness in the AI system’s outputs.
Explanation of Other Options:
A. Validating a similar level of decision-making across different demographic groups:
This directly tests for fairness by ensuring consistent treatment across groups (see the sketch after this list).
C. Identifying if additional training data should be collected for specific demographic groups:
Addressing gaps in the training data is a key step in mitigating bias and improving fairness.
D. Using tools to help understand factors that may account for differences in decision-making:
This supports fairness testing by identifying potential sources of bias or discriminatory outcomes.
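As an illustration of option A, a fairness check of this kind can be as simple as comparing the model’s approval rates across demographic groups. The sketch below uses hypothetical file and column names and the informal “four-fifths” disparity ratio as a screening heuristic, not a definitive test.

```python
# Hedged sketch: compare approval rates across groups and compute a
# disparity ratio as a screening heuristic. Names are hypothetical.
import pandas as pd

decisions = pd.read_csv("production_decisions.csv")
rates = decisions.groupby("gender")["approved"].mean()  # selection rate per group

ratio = rates.min() / rates.max()   # "four-fifths rule" style ratio
print(rates)
print(f"Disparity ratio: {ratio:.2f} (values below 0.8 often warrant review)")
```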
ABC Corp. is a leading insurance provider offering a range of coverage options to individuals. ABC has decided to utilize artificial intelligence to streamline and improve its customer acquisition and underwriting process, including the accuracy and efficiency of pricing policies.
ABC has engaged a cloud provider to utilize and fine-tune its pre-trained, general-purpose large language model (“LLM”). In particular, ABC intends to use its historical customer data (including applications, policies, and claims) and its proprietary pricing and risk strategies to provide an initial qualification assessment of potential customers, which would then be routed to a human underwriter for final review.
ABC and the cloud provider have completed training and testing the LLM, performed a readiness assessment, and made the decision to deploy the LLM into production. ABC has designated an internal compliance team to monitor the model during the first month, specifically to evaluate the accuracy, fairness, and reliability of its output. After the first month in production, ABC realizes that the LLM declines a higher percentage of women’s loan applications, due primarily to women historically receiving lower salaries than men.
During the first month when ABC monitors the model for bias, it is most important to?
A. Continue disparity testing.
B. Analyze the quality of the training and testing data.
C. Compare the results to human decisions prior to deployment.
D. Seek approval from management for any changes to the model.
A. Continue disparity testing.
Reference:
Regular disparity testing is highlighted in the AIGP Body of Knowledge as a critical practice for maintaining the fairness and reliability of AI models. By continuously monitoring for and addressing disparities, organizations can ensure their AI systems remain compliant with ethical and legal standards, and mitigate any unintended biases that may arise in production.
Justification:
Purpose of Disparity Testing:
Disparity testing involves evaluating the model’s outputs to identify potential biases or unequal treatment across demographic groups, such as gender or race. This testing ensures that the AI model operates fairly and equitably.
Importance in Monitoring for Bias:
In the scenario, the AI model has already exhibited bias against women. Continuing disparity testing is crucial during the first month of monitoring to quantify the bias, understand its scope, and implement measures to address it.
Maintaining Model Fairness:
Disparity testing ensures that the AI model complies with fairness principles and relevant regulations. Without this ongoing evaluation, bias might persist or worsen, leading to unfair decisions and potential legal or reputational consequences.
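A minimal sketch of what ongoing disparity testing might look like during the monitoring month: recompute the gap in decline rates across groups for each batch of production decisions and alert the compliance team when it exceeds a tolerance. The file name, column names, and the 5-point tolerance are hypothetical.

```python
# Hedged sketch of ongoing disparity testing on production decision logs.
import pandas as pd

TOLERANCE = 0.05  # maximum acceptable gap in decline rates between groups

def disparity_alert(batch: pd.DataFrame) -> bool:
    """Return True when the decline-rate gap across genders exceeds TOLERANCE."""
    decline_rates = batch.groupby("gender")["declined"].mean()
    gap = decline_rates.max() - decline_rates.min()
    print(f"Decline rates by group:\n{decline_rates}\ngap = {gap:.3f}")
    return gap > TOLERANCE

weekly = pd.read_csv("week_01_decisions.csv")   # hypothetical weekly export
if disparity_alert(weekly):
    print("ALERT: escalate to the compliance team for review")
```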
Why not the other options?
B. Analyze the quality of the training and testing data:
While analyzing the training and testing data is important, it is more relevant during the model development phase. During the monitoring phase, focus shifts to identifying and addressing biases in the model’s outputs.
C. Compare the results to human decisions prior to deployment:
While this can provide useful insights, it does not directly address ongoing bias detection and mitigation during production. Disparity testing is more targeted for this purpose.
D. Seek approval from management for any changes to the model:
Management approval is necessary for implementing significant changes, but it is not the most immediate or critical action during bias monitoring. Identifying and quantifying the bias comes first.
A mid-size US healthcare network has decided to develop an AI solution to detect a type of cancer that is most likely to arise in adults. Specifically, the healthcare network intends to create a recognition algorithm that will perform an initial review of all imaging and then route records to a radiologist for secondary review pursuant to agreed-upon criteria (e.g., a confidence score below a threshold).
To date, the healthcare network has taken the following steps: defined its AI ethical principles; conducted discovery to identify the intended uses and success criteria for the system; established an AI governance committee; assembled a broad, cross-functional team with clear roles and responsibilities; and created policies and procedures to document standards, workflows, timelines, and risk thresholds during the project.
The healthcare network intends to retain a cloud provider to host the solution and a consulting firm to help develop the algorithm using the healthcare network’s existing data and de-identified data that is licensed from a large US clinical research partner.
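For illustration only, the triage criterion described in the scenario might look like the following sketch; the threshold value, identifiers, and function name are hypothetical, not the network’s actual criteria.

```python
# Illustration of the scenario's routing criterion: records scoring below an
# agreed confidence threshold go to a radiologist for secondary review.
THRESHOLD = 0.85  # hypothetical agreed-upon confidence threshold

def route(record_id: str, confidence: float) -> str:
    """Triage a screened imaging record based on model confidence."""
    if confidence < THRESHOLD:
        return f"{record_id}: route to radiologist for secondary review"
    return f"{record_id}: proceed under the standard workflow"

print(route("img-001", 0.62))   # below threshold -> secondary review
print(route("img-002", 0.97))   # at/above threshold -> standard workflow
```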
Which stakeholder group is most important in selecting the specific type of algorithm?
A. The cloud provider.
B. The consulting firm.
C. The healthcare network’s data science team.
D. The healthcare network’s Al governance committee.
C. The healthcare network’s data science team.
Justification:
Technical Expertise in Algorithm Design:
The data science team possesses the technical expertise necessary to evaluate, design, and select the specific type of algorithm. They understand the data’s characteristics, the model’s requirements, and the nuances of implementing machine learning techniques.
Alignment with Clinical Objectives:
The data science team bridges the gap between clinical needs and technical feasibility. They ensure that the chosen algorithm aligns with the healthcare network’s objectives, such as detecting cancer with high sensitivity and specificity.
Customization and Optimization:
Given that the algorithm needs to integrate with existing data and the de-identified data, the data science team is best positioned to customize and optimize the algorithm for the unique requirements of the project.
Why not the other options?
A. The cloud provider:
The cloud provider supports hosting and scalability but does not play a direct role in selecting the specific algorithm. Their role is technical infrastructure, not algorithm design.
B. The consulting firm:
The consulting firm provides support and expertise but is secondary to the healthcare network’s internal stakeholders. They may advise on options but do not have the same level of vested interest or detailed knowledge of the network’s specific goals and constraints.
D. The healthcare network’s AI governance committee:
The governance committee oversees ethical and strategic alignment, but it does not have the technical expertise to select the specific algorithm. Its role is to approve or review decisions, not to make detailed technical selections.
A mid-size US healthcare network has decided to develop an AI solution to detect a type of cancer that is most likely to arise in adults. Specifically, the healthcare network intends to create a recognition algorithm that will perform an initial review of all imaging and then route records to a radiologist for secondary review pursuant to agreed-upon criteria (e.g., a confidence score below a threshold).
To date, the healthcare network has taken the following steps: defined its AI ethical principles; conducted discovery to identify the intended uses and success criteria for the system; established an AI governance committee; assembled a broad, cross-functional team with clear roles and responsibilities; and created policies and procedures to document standards, workflows, timelines, and risk thresholds during the project.
The healthcare network intends to retain a cloud provider to host the solution and a consulting firm to help develop the algorithm using the healthcare network’s existing data and de-identified data that is licensed from a large US clinical research partner.
In the design phase, what is the most important step for the healthcare network to take when mapping its existing data to the clinical research partner data?
A. Apply privacy-enhancing technologies to the data.
B. Identify fits and gaps in the combined data.
C. Ensure the data is labeled and formatted.
D. Evaluate the country of origin of the data.
B. Identify fits and gaps in the combined data.
Justification:
Data Quality and Completeness:
Identifying fits and gaps ensures that the datasets are aligned for the intended use. This step involves evaluating whether the combined data covers all necessary attributes, fields, and values required to train the AI model effectively.
Ensuring Data Compatibility:
The healthcare network’s data and the clinical research partner’s data might have differences in structure, content, or context. Mapping these datasets requires identifying overlaps (fits) and discrepancies (gaps) to ensure the final dataset is cohesive and usable.
Accuracy and Bias Avoidance:
Addressing gaps helps avoid introducing bias into the AI model, as mismatched or incomplete data could lead to flawed training outcomes, reducing the system’s accuracy and fairness.
Foundation for Other Steps:
Fit and gap analysis is a prerequisite for other activities, such as applying privacy-enhancing technologies, labeling, formatting, or evaluating legal considerations. Without identifying gaps, downstream processes cannot proceed effectively.
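As a concrete illustration of a fit/gap check, the sketch below compares the attributes available in the two sources and flags what is shared versus missing. File and column names are hypothetical; a real analysis would also compare value ranges, units, and coding schemes, not just column names.

```python
# Hedged sketch of a fit/gap check across the two data sources.
import pandas as pd

network = pd.read_csv("network_imaging_records.csv")
partner = pd.read_csv("partner_deidentified_records.csv")

net_cols, part_cols = set(network.columns), set(partner.columns)
print("Fits (shared attributes):", sorted(net_cols & part_cols))
print("Gaps (only in network data):", sorted(net_cols - part_cols))
print("Gaps (only in partner data):", sorted(part_cols - net_cols))
```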
Why not the other options?
A. Apply privacy-enhancing technologies to the data:
This step is important but secondary to identifying fits and gaps. Privacy techniques like encryption or anonymization protect the data but do not address compatibility or completeness issues.
C. Ensure the data is labeled and formatted:
Labeling and formatting are necessary for machine learning, but they depend on first understanding fits and gaps. Without fit-gap analysis, labeling or formatting efforts may be misaligned with the combined data’s needs.
D. Evaluate the country of origin of the data:
While important for legal and compliance reasons, the country of origin is a secondary concern when ensuring the data is compatible and suitable for the AI model’s purpose.
A mid-size US healthcare network has decided to develop an AI solution to detect a type of cancer that is most likely to arise in adults. Specifically, the healthcare network intends to create a recognition algorithm that will perform an initial review of all imaging and then route records to a radiologist for secondary review pursuant to agreed-upon criteria (e.g., a confidence score below a threshold).
To date, the healthcare network has taken the following steps: defined its AI ethical principles; conducted discovery to identify the intended uses and success criteria for the system; established an AI governance committee; assembled a broad, cross-functional team with clear roles and responsibilities; and created policies and procedures to document standards, workflows, timelines, and risk thresholds during the project.
The healthcare network intends to retain a cloud provider to host the solution and a consulting firm to help develop the algorithm using the healthcare network’s existing data and de-identified data that is licensed from a large US clinical research partner.
In the design phase, which of the following steps is most important in gathering the data from the clinical research partner?
A. Perform a privacy impact assessment.
B. Combine only anonymized data.
C. Segregate the data sets.
D. Review the terms of use.
D. Review the terms of use.
Explanation:
In the design phase of an AI solution, ensuring legal and ethical compliance when using data from third parties is critical. The following rationale explains why reviewing the terms of use is the most important step when gathering data from the clinical research partner:
Perform a privacy impact assessment (A):
While a privacy impact assessment (PIA) is important for identifying risks related to privacy and data protection, it is not specific to the initial step of gathering data. A PIA typically evaluates the broader system’s compliance with privacy laws, not the specific contractual terms or conditions for obtaining the data.
Combine only anonymized data (B):
Combining anonymized data can help mitigate privacy risks, but the question does not specify whether the licensed data is already anonymized or simply de-identified. Furthermore, data anonymization alone does not address the legal or contractual limitations on data use.
Segregate the data sets (C):
Segregating datasets might be relevant during processing or storage to prevent data mixing, but it does not address the fundamental step of ensuring the data can legally and ethically be used as intended.
Review the terms of use (D):
Reviewing the terms of use of the licensed data is the most critical initial step. The terms dictate the legal scope of what the healthcare network can do with the data (e.g., usage restrictions, sublicensing, or conditions for derivative works). Ensuring compliance with these terms is essential to avoid legal liabilities and to align the AI solution with ethical and contractual standards.
A mid-size US healthcare network has decided to develop an AI solution to detect a type of cancer that is most likely to arise in adults. Specifically, the healthcare network intends to create a recognition algorithm that will perform an initial review of all imaging and then route records to a radiologist for secondary review pursuant to agreed-upon criteria (e.g., a confidence score below a threshold).
To date, the healthcare network has taken the following steps: defined its AI ethical principles; conducted discovery to identify the intended uses and success criteria for the system; established an AI governance committee; assembled a broad, cross-functional team with clear roles and responsibilities; and created policies and procedures to document standards, workflows, timelines, and risk thresholds during the project.
The healthcare network intends to retain a cloud provider to host the solution and a consulting firm to help develop the algorithm using the healthcare network’s existing data and de-identified data that is licensed from a large US clinical research partner.
The most significant risk from combining the healthcare network’s existing data with the clinical research partner data is?
A. Privacy risk.
B. Security risk.
C. Operational risk.
D. Reputational risk.
A. Privacy risk.
Justification:
Data Handling and Identification Risks:
Although the clinical research partner’s data is described as “de-identified,” combining it with the healthcare network’s data might lead to re-identification risks, particularly if the data sets contain overlapping information or sufficient indirect identifiers (see the sketch below).
Regulatory Compliance:
Under U.S. healthcare regulations such as HIPAA, the use and sharing of data must comply with stringent privacy requirements. Any failure in safeguarding patient data or re-identification could lead to legal and financial penalties.
Data Governance and Consent Issues:
It is crucial to ensure that the clinical research partner’s data was de-identified properly and that all uses are within the scope of patients’ informed consent. Misalignment could lead to a violation of ethical and legal obligations.
Impact of a Breach:
A privacy breach involving healthcare data has severe consequences, including loss of trust, regulatory investigations, and fines, which could overshadow other types of risks (e.g., security, operational, or reputational).
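To illustrate the re-identification concern, a k-anonymity-style check on the combined data counts how many records are unique on a set of quasi-identifiers; the smaller the groups, the higher the risk. The file name and quasi-identifier columns below are hypothetical.

```python
# Hedged sketch of a k-anonymity-style re-identification check.
import pandas as pd

combined = pd.read_csv("combined_records.csv")
quasi_identifiers = ["zip_code", "birth_year", "sex"]  # hypothetical QIs

group_sizes = combined.groupby(quasi_identifiers).size()
print(f"Smallest quasi-identifier group size k = {int(group_sizes.min())}")
print(f"{int((group_sizes == 1).sum())} records are unique on "
      f"{quasi_identifiers} and carry elevated re-identification risk")
```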
A mid-size US healthcare network has decided to develop an AI solution to detect a type of cancer that is most likely to arise in adults. Specifically, the healthcare network intends to create a recognition algorithm that will perform an initial review of all imaging and then route records to a radiologist for secondary review pursuant to agreed-upon criteria (e.g., a confidence score below a threshold).
To date, the healthcare network has taken the following steps: defined its AI ethical principles; conducted discovery to identify the intended uses and success criteria for the system; established an AI governance committee; assembled a broad, cross-functional team with clear roles and responsibilities; and created policies and procedures to document standards, workflows, timelines, and risk thresholds during the project.
The healthcare network intends to retain a cloud provider to host the solution and a consulting firm to help develop the algorithm using the healthcare network’s existing data and de-identified data that is licensed from a large US clinical research partner.
Which of the following steps can best mitigate the possibility of discrimination prior to training and testing the AI solution?
A. Procure more data from clinical research partners.
B. Engage a third party to perform an audit.
C. Perform an impact assessment.
D. Create a bias bounty program.
C. Perform an impact assessment.
Explanation:
An impact assessment is the most effective step to mitigate the possibility of discrimination before training and testing an AI solution. It involves evaluating the potential risks, including bias and discrimination, and establishing safeguards to ensure fairness, accuracy, and compliance with legal and ethical standards.
Analysis of Each Option:
Procure more data from clinical research partners (A):
While increasing the dataset size and diversity may help reduce bias, simply acquiring more data does not guarantee that it will be representative or free from bias. An impact assessment is necessary to evaluate the data’s quality and fairness before proceeding.
Engage a third party to perform an audit (B):
Audits are valuable for evaluating systems after development or deployment. However, prior to training and testing, an audit is premature. Identifying risks through an impact assessment is a more foundational step at this stage.
Perform an impact assessment (C):
An impact assessment, such as a bias impact assessment or a data protection impact assessment (DPIA), helps identify risks of discrimination by examining the data, intended use cases, and potential consequences of the AI solution. This ensures proactive measures are in place to address bias before the system is developed or tested.
Create a bias bounty program (D):
A bias bounty program is an innovative mechanism to identify biases by inviting external contributors to test the system. However, this is typically implemented after the system is operational and would not be as effective in the early stages.
Good Values Corporation (GVC) is a U.S. educational services provider that employs teachers to create and deliver enrichment courses for high school students. GVC has learned that many of its teacher employees are using generative AI to create the enrichment courses, and that many of the students are using generative AI to complete their assignments.
In particular, GVC has learned that the teachers it employs used open-source large language models (“LLMs”) to develop an online tool that customizes study questions for individual students. GVC has also discovered that an art teacher has expressly incorporated the use of generative AI into the curriculum to enable students to use prompts to create digital art.
GVC has started to investigate these practices and develop a process to monitor any use of generative AI, including by teachers and students, going forward.
All of the following may be copyright risks from teachers using generative AI to create course content EXCEPT?
A. Content created by an LLM may be protectable under U.S. intellectual property law.
B. Generative AI is generally trained using intellectual property owned by third parties.
C. Students must expressly consent to this use of generative AI.
D. Generative AI often creates content without attribution.
C. Students must expressly consent to this use of generative AI.
Explanation:
The question asks which of the options is not a copyright risk associated with the use of generative AI by teachers in creating course content. Let’s break down each option:
A. Content created by an LLM may be protectable under U.S. intellectual property law: This option is about the uncertainty around whether or not content generated by AI can be copyrighted. The current legal framework in the U.S. does not clearly grant copyright protection to AI-generated content, which can present a copyright-related challenge for users of generative AI. Thus, this could be considered a potential risk.
B. Generative AI is generally trained using intellectual property owned by third parties: This is a significant copyright concern. Generative AI models are often trained on vast datasets that may include copyrighted content. If the generated outputs are deemed to have copied or derived from these protected works, it could create legal risks for users like teachers and educational institutions.
C. Students must expressly consent to this use of generative AI: This statement is not directly related to copyright risks. Instead, it pertains to privacy and consent issues, which are important but not specifically tied to copyright law. Therefore, it is not a copyright risk, making it the correct answer.
D. Generative AI often creates content without attribution: This can present a copyright issue because if the generated content includes or is derived from protected works, the lack of attribution could lead to claims of copyright infringement. This is indeed a copyright-related risk.
Thus, C is the answer because it is related more to the consent and privacy of students rather than being a direct copyright risk associated with using generative AI.
Good Values Corporation (GVC) is a U.S. educational services provider that employs teachers to create and deliver enrichment courses for high school students. GVC has learned that many of its teacher employees are using generative AI to create the enrichment courses, and that many of the students are using generative AI to complete their assignments.
In particular, GVC has learned that the teachers it employs used open-source large language models (“LLMs”) to develop an online tool that customizes study questions for individual students. GVC has also discovered that an art teacher has expressly incorporated the use of generative AI into the curriculum to enable students to use prompts to create digital art.
GVC has started to investigate these practices and develop a process to monitor any use of generative AI, including by teachers and students, going forward.
Which of the following risks should be of the highest concern to individual teachers using generative AI to ensure students learn the course material?
A. Copyright infringement.
B. Model accuracy.
C. Technical complexity.
D. Financial cost.
B. Model accuracy.
Justification:
Impact on Student Learning:
The primary concern for teachers using generative AI is ensuring that the AI-generated content is accurate and aligns with the educational objectives. Inaccurate or misleading content could undermine students’ understanding of the material, impeding their learning.
Dependence on AI:
If students rely on AI-generated study questions or digital art tools, inaccuracies in the material could propagate misconceptions. Teachers must review and verify AI outputs to ensure educational quality.
Alignment with Educational Goals:
Teachers are responsible for delivering content that supports students’ mastery of course material. Ensuring the accuracy of generative AI outputs is critical to maintaining the integrity of the learning process.
Why not the other options?
A. Copyright infringement:
Copyright issues are a valid concern, particularly with generative AI tools trained on third-party intellectual property. However, this is a legal and ethical concern rather than directly impacting the immediate learning of course material.
C. Technical complexity:
While some tools may require technical expertise, most generative AI tools are user-friendly. The complexity of the tools is less critical compared to the accuracy of the generated content.
D. Financial cost:
Many generative AI tools, especially open-source ones, are either free or have minimal costs. Financial considerations are not likely to outweigh the importance of ensuring the accuracy of the material.
Good Values Corporation (GVC) is a U.S. educational services provider that employs teachers to create and deliver enrichment courses for high school students. GVC has learned that many of its teacher employees are using generative AI to create the enrichment courses, and that many of the students are using generative AI to complete their assignments.
In particular, GVC has learned that the teachers it employs used open-source large language models (“LLMs”) to develop an online tool that customizes study questions for individual students. GVC has also discovered that an art teacher has expressly incorporated the use of generative AI into the curriculum to enable students to use prompts to create digital art.
GVC has started to investigate these practices and develop a process to monitor any use of generative AI, including by teachers and students, going forward.
What is the best reason for GVC to offer students the choice to utilize generative AI in limited, defined circumstances?
A. To enable students to learn about performing research.
B. To enable students to learn how to use AI as a supportive educational tool.
C. To enable students to learn about practical applications of AI.
D. To enable students to learn how to manage their time.
B. To enable students to learn how to use AI as a supportive educational tool.
Justification:
AI as a Supportive Educational Tool:
Allowing students to use generative AI in limited, defined circumstances helps them understand how to leverage AI to enhance their learning process, such as brainstorming ideas, solving problems, or creating content within ethical and practical boundaries.
Building Essential Skills:
By introducing AI as a supportive tool, students develop critical skills for integrating AI into their work responsibly, a competency that will be essential in their future academic and professional endeavors.
Focus on Responsible Usage:
Limiting and defining the circumstances under which generative AI can be used teaches students to use the technology appropriately and understand its limitations (e.g., verifying accuracy, avoiding overreliance).
Why not the other options?
A. To enable students to learn about performing research:
While generative AI can assist in research, it is not a primary or reliable research tool due to the potential for errors or hallucinated information. This is a secondary benefit, not the primary reason.
C. To enable students to learn about practical applications of AI:
While understanding practical applications is important, it is broader and less focused than the goal of using AI as a supportive educational tool to directly enhance learning outcomes.
D. To enable students to learn how to manage their time:
Generative AI may save time by automating tasks, but this is an indirect benefit and not the primary reason for integrating AI into an educational context.
A local police department in the United States procured an AI system to monitor and analyze social media feeds, online marketplaces, and other sources of public information to detect evidence of illegal activities (e.g., sale of drugs or stolen goods). The AI system works by surveilling the public sites in order to identify individuals who are likely to have committed a crime. It cross-references the individuals against data maintained by law enforcement and then assigns a percentage score of the likelihood of criminal activity based on certain factors like previous criminal history, location, time, race, and gender.
The police department retained a third-party consultant to assist in the procurement process, specifically to evaluate two finalists. Each of the vendors provided information about their system’s accuracy rates, the diversity of their training data, and how their system works. The consultant determined that the first vendor’s system has a higher accuracy rate and, based on this information, recommended this vendor to the police department.
The police department chose the first vendor and implemented its AI system. As part of the implementation, the department and the consultant created a usage policy for the system, which includes training police officers on how the system works and how to incorporate it into their investigation process.
The police department has now been using the AI system for a year. An internal review has found that every time the system scored a likelihood of criminal activity at or above 90%, the police investigation subsequently confirmed that the individual had, in fact, committed a crime. Based on these results, the police department wants to forego investigations for cases where the AI system gives a score of at least 90% and proceed directly with an arrest.
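The review’s finding amounts to a precision estimate at a score threshold, and its strength depends on how many cases actually scored at or above 90%. Here is a hedged sketch of that calculation, with hypothetical file and column names.

```python
# Hedged sketch: precision of the system at the 90% threshold. Perfect
# precision on a small sample does not justify skipping investigations.
# Assumes a 0/1 "crime_confirmed" column; names are hypothetical.
import pandas as pd

cases = pd.read_csv("reviewed_cases.csv")
high = cases[cases["score"] >= 0.90]

precision = high["crime_confirmed"].mean()  # fraction later confirmed
print(f"{len(high)} cases scored >= 90%; precision = {precision:.2%}")
```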
During the procurement process, what is the most likely reason that the third-party consultant asked each vendor for information about the diversity of their datasets?
A. To comply with applicable law.
B. To assess the fairness of the AI system.
C. To evaluate the reliability of the AI system.
D. To determine the explainability of the AI system.
B. To assess the fairness of the AI system.
Explanation:
The diversity of the datasets used in training an AI system is crucial for ensuring that the model is fair and does not disproportionately target or misclassify individuals based on attributes like race, gender, or other characteristics. If the training data is not diverse, the AI system may learn biased patterns, which can lead to unfair outcomes—such as over-representing certain groups as being more likely to engage in criminal activity.
The consultant’s request for information about the diversity of training data was likely motivated by the need to assess whether the system is designed in a way that reduces bias and ensures fair treatment across different demographic groups. This is especially important in law enforcement contexts, where biased predictions can have significant ethical and legal implications.
Here’s why the other options are less suitable:
A. To comply with applicable law: While compliance with anti-discrimination laws is important, the direct request for information about dataset diversity is more likely aimed at understanding fairness rather than merely complying with legal requirements.
C. To evaluate the reliability of the AI system: Reliability refers to the consistency and accuracy of the system’s results, but this is generally assessed through metrics like accuracy rates and error rates, not necessarily through data diversity. Data diversity impacts fairness more than reliability.
D. To determine the explainability of the AI system: Explainability concerns how easily humans can understand the AI system’s decision-making process. It relates more to how the system’s decisions are communicated rather than the nature of the training data itself.
Thus, B is the best answer because assessing dataset diversity helps ensure that the AI system treats different demographic groups equitably, which is key to preventing biased outcomes in sensitive applications like law enforcement.
A local police department in the United States procured an AI system to monitor and analyze social media feeds, online marketplaces, and other sources of public information to detect evidence of illegal activities (e.g., sale of drugs or stolen goods). The AI system works by surveilling the public sites in order to identify individuals who are likely to have committed a crime. It cross-references the individuals against data maintained by law enforcement and then assigns a percentage score of the likelihood of criminal activity based on certain factors like previous criminal history, location, time, race, and gender.
The police department retained a third-party consultant to assist in the procurement process, specifically to evaluate two finalists. Each of the vendors provided information about their system’s accuracy rates, the diversity of their training data, and how their system works. The consultant determined that the first vendor’s system has a higher accuracy rate and, based on this information, recommended this vendor to the police department.
The police department chose the first vendor and implemented its AI system. As part of the implementation, the department and the consultant created a usage policy for the system, which includes training police officers on how the system works and how to incorporate it into their investigation process.
The police department has now been using the AI system for a year. An internal review has found that every time the system scored a likelihood of criminal activity at or above 90%, the police investigation subsequently confirmed that the individual had, in fact, committed a crime. Based on these results, the police department wants to forego investigations for cases where the AI system gives a score of at least 90% and proceed directly with an arrest.
The best human oversight mechanism for the police department to implement is that a police officer should?
A. Explain to the accused how the AI system works.
B. Confirm the AI recommendation prior to sentencing.
C. Ensure an accused is given notice that the AI system was used.
D. Consider the AI recommendation as part of the criminal investigation.
D. Consider the AI recommendation as part of the criminal investigation.
Justification:
Human Oversight in High-Stakes Decision-Making:
The AI system’s score is only a tool to assist in investigations, not a replacement for human judgment. The best oversight mechanism ensures that the AI recommendation is used as a supplementary input rather than the sole basis for an arrest or legal action.
Why Consider the AI as Part of the Investigation?
Human police officers need to evaluate the context, corroborate evidence, and determine the appropriateness of the AI’s findings within the broader investigation. This ensures fairness, accountability, and reduces reliance on potentially biased or incorrect AI outputs.
Why not the other options?
A. Explain to the accused how the AI system works:
While transparency is important, this does not qualify as an oversight mechanism. Explaining the system to the accused is more related to fairness and procedural justice.
B. Confirm the AI recommendation prior to sentencing:
Sentencing is the court’s responsibility, not the police department’s. This option conflates the roles of law enforcement and judiciary oversight.
C. Ensure an accused is given notice that the AI system was used:
Providing notice is a legal and ethical obligation but does not directly involve human oversight of the AI system’s output during the investigative process.
Key Consideration:
The AI system is a tool, and human officers must retain control over the decision-making process to ensure ethical and legal standards are upheld.