General Questions Flashcards
Training data is best defined as a subset of data that is used to?
A. Enable a model to detect and learn patterns.
B. Fine-tune a model to improve accuracy and prevent overfitting.
C. Detect the initial sources of biases to mitigate prior to deployment.
D. Resemble the structure and statistical properties of production data.
A. Enable a model to detect and learn patterns.
Training data is used to enable a model to detect and learn patterns. During the training phase, the model learns from the labeled data, identifying patterns and relationships that it will later use to make predictions on new, unseen data. This process is fundamental in building an AI model’s capability to perform tasks accurately. Reference: AIGP Body of Knowledge on Model Training and Pattern Recognition.
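To make the idea concrete, here is a minimal, illustrative sketch (not part of the AIGP material) using scikit-learn and synthetic labeled data: fitting a classifier on a training subset is exactly the "detect and learn patterns" step described above.

```python
# Minimal sketch (illustrative, not from the AIGP material): synthetic labeled
# data and a simple classifier stand in for real training data and a real model.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic labeled data plays the role of the training subset.
X_train, y_train = make_classification(n_samples=500, n_features=10, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)  # the "detect and learn patterns" step

print(model.coef_.shape)  # the learned weights encode the detected patterns
```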
Testing data is best defined as a subset of data that is used to?
A. Assess a model’s on-going performance in production.
B. Enable a model to discover and learn patterns.
C. Provide a robust evaluation of a final model.
D. Evaluate a model’s handling of randomized edge cases.
C. Provide a robust evaluation of a final model.
Testing data is a subset of data used to provide a robust evaluation of a final model. After training the model on training data, it is essential to test its performance on unseen data (testing data) to ensure it generalizes well to new, real-world scenarios. This step helps in assessing the model’s accuracy, reliability, and ability to handle various data inputs.
Reference: AIGP Body of Knowledge on Model Validation and Testing.
Explanation:
Training and testing data serve distinct purposes in the machine learning (ML) workflow, and testing data specifically is designed to evaluate the performance of a trained model.
Assess a model’s on-going performance in production (A):
This refers to monitoring in production environments, not testing during development. Testing data is used prior to deployment to validate the model’s accuracy and generalization, not for ongoing production evaluation.
Enable a model to discover and learn patterns (B):
This describes the purpose of training data, which is used during the training phase to allow the model to learn patterns and relationships in the data. Testing data, by contrast, is not used for learning.
Provide a robust evaluation of a final model (C):
Testing data is a reserved subset of the data used to evaluate the model’s performance after training. It helps measure how well the model generalizes to unseen data, ensuring it performs robustly on new or unknown cases.
Evaluate a model’s handling of randomized edge cases (D):
While testing data may include edge cases, its primary purpose is broader: to evaluate overall model performance. Edge-case testing is typically a more specific task within robustness testing or adversarial testing.
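A minimal sketch of that held-out evaluation step, assuming scikit-learn and synthetic data (both illustrative choices, not part of the source material): the testing subset is withheld during training and used once to provide a robust evaluation of the final model.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# Hold out 20% of the data as the testing subset; it is never seen during training.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Robust evaluation of the final model on unseen data.
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```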
To maintain fairness in a deployed system, it is most important to?
A. Protect against loss of personal data in the model.
B. Monitor for data drift that may affect performance and accuracy.
C. Detect anomalies outside established metrics that require new training data.
D. Optimize computational resources and data to ensure efficiency and scalability
B. Monitor for data drift that may affect performance and accuracy.
To maintain fairness in a deployed system, it is crucial to monitor for data drift that may affect performance and accuracy. Data drift occurs when the statistical properties of the input data change over time, which can lead to a decline in model performance. Continuous monitoring and updating of the model with new data ensure that it remains fair and accurate, adapting to any changes in the data distribution. Reference: AIGP Body of Knowledge on Post-Deployment Monitoring and Model Maintenance.
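As an illustration of what such monitoring can look like in practice, here is a minimal sketch assuming SciPy and synthetic feature values (the alerting threshold and variable names are hypothetical, not prescribed by the AIGP material): a two-sample test compares a production feature's distribution against its training-time baseline.

```python
# Compare a production feature's distribution against its training-time baseline
# with a two-sample Kolmogorov-Smirnov test; a small p-value suggests data drift.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
baseline_feature = rng.normal(loc=0.0, scale=1.0, size=5000)    # seen at training time
production_feature = rng.normal(loc=0.4, scale=1.0, size=5000)  # shifted in production

statistic, p_value = ks_2samp(baseline_feature, production_feature)
if p_value < 0.01:  # illustrative alerting threshold
    print(f"Possible data drift (KS statistic={statistic:.3f}); review fairness and accuracy.")
```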
When monitoring the functional performance of a model that has been deployed into production, all of the following are concerns EXCEPT?
A. Feature drift.
B. System cost.
C. Model drift.
D. Data loss.
Correct Answer: B. System cost.
When monitoring the functional performance of a model deployed into production, concerns typically include feature drift, model drift, and data loss. Feature drift refers to changes in the input features that can affect the model’s predictions. Model drift is when the model’s performance degrades over time due to changes in the data or environment. Data loss can impact the accuracy and reliability of the model. However, system cost, while important for budgeting and financial planning, is not a direct concern when monitoring the functional performance of a deployed model. Reference: AIGP Body of Knowledge on Model Monitoring and Maintenance.
After completing model testing and validation, which of the following is the most important step that an organization takes prior to deploying the model into production?
A. Perform a readiness assessment.
B. Define a model-validation methodology.
C. Document maintenance teams and processes.
D. Identify known edge cases to monitor post-deployment.
Correct Answer: A. Perform a readiness assessment.
After completing model testing and validation, the most important step prior to deploying the model into production is to perform a readiness assessment. This assessment ensures that the model is fully prepared for deployment, addressing any potential issues related to infrastructure, performance, security, and compliance. It verifies that the model meets all necessary criteria for a successful launch. Other steps, such as defining a model-validation methodology, documenting maintenance teams and processes, and identifying known edge cases, are also important but come secondary to confirming overall readiness. Reference: AIGP Body of Knowledge on Deployment Readiness.
Which type of existing assessment could best be leveraged to create an AI impact assessment?
A. A safety impact assessment.
B. A privacy impact assessment.
C. A security impact assessment.
D. An environmental impact assessment.
Correct Answer: B. A privacy impact assessment.
A privacy impact assessment (PIA) can be effectively leveraged to create an AI impact assessment. A PIA evaluates the potential privacy risks associated with the use of personal data and helps in implementing measures to mitigate those risks. Since AI systems often involve processing large amounts of personal data, the principles and methodologies of a PIA are highly applicable and can be extended to assess broader impacts, including ethical, social, and legal implications of AI. Reference: AIGP Body of Knowledge on Impact Assessments.
What is the primary purpose of an AI impact assessment?
A. To define and evaluate the legal risks associated with developing an AI system.
B. To anticipate and manage the potential risks and harms of an AI system.
C. To define and document the roles and responsibilities of AI stakeholders.
D. To identify and measure the benefits of an AI system.
B. To anticipate and manage the potential risks and harms of an AI system.
Explanation:
The primary purpose of an AI impact assessment is to identify, evaluate, and manage the potential risks and harms associated with the deployment and use of an AI system. This process helps ensure that the AI system is developed and used in a way that minimizes negative consequences and aligns with ethical and legal standards.
Key aspects of an AI impact assessment include:
Identifying potential risks: Understanding how the AI system could cause harm to individuals, groups, or society.
Managing risks: Developing strategies to mitigate those risks and ensure that the AI system is safe, fair, and aligned with the organization’s values.
Considering broader impacts: Taking into account the social, ethical, and environmental implications of deploying the AI system.
Which of the following steps occurs in the design phase of the AI life cycle?
A. Data augmentation.
B. Model explainability.
C. Risk impact estimation.
D. Performance evaluation.
C. Risk impact estimation.
In the design phase, the focus is on planning and identifying potential risks and impacts of the AI system. Risk impact estimation involves assessing the potential consequences of deploying the model, including ethical, legal, and operational risks. The other steps typically occur in later stages of the AI life cycle:
A. Data augmentation happens during the data preparation phase.
B. Model explainability is often addressed during model development or validation.
D. Performance evaluation occurs after the model is trained, during testing and validation.
During the planning and design phases of the AI development life cycle, bias can be reduced by all of the following EXCEPT?
A. Stakeholder involvement.
B. Feature selection.
C. Human oversight.
D. Data collection.
B. Feature selection.
While feature selection is an important step in AI model development, it typically occurs during the modeling phase, not the planning or design phases. Bias can be reduced during planning and design through A. Stakeholder involvement, C. Human oversight, and D. Data collection, which ensure that diverse perspectives and appropriate data are considered early on. Feature selection focuses more on refining the model’s inputs and is not directly related to bias reduction at the planning and design stages.
Which of the following use cases would be best served by a non-AI solution?
A. A non-profit wants to develop a social media presence.
B. An e-commerce provider wants to make personalized recommendations.
C. A business analyst wants to forecast future cost overruns and underruns.
D. A customer service agency wants to automate answers to common questions.
A. A non-profit wants to develop a social media presence.
Building a social media presence typically involves content creation, scheduling posts, and engagement strategies, which can be handled effectively with standard tools and human effort rather than requiring AI. The other use cases—such as personalized recommendations, forecasting, and automating customer service—are more suited to AI-driven solutions that can leverage data and machine learning models.
All of the following are elements of establishing a global AI governance infrastructure EXCEPT?
A. Providing training to foster a culture that promotes ethical behavior.
B. Creating policies and procedures to manage third-party risk.
C. Understanding differences in norms across countries.
D. Publicly disclosing ethical principles.
Answer: D. Publicly disclosing ethical principles.
Establishing a global AI governance infrastructure involves several key elements, including providing training to foster a culture that promotes ethical behavior, creating policies and procedures to manage third-party risk, and understanding differences in norms across countries. While publicly disclosing ethical principles can enhance transparency and trust, it is not a core element necessary for the establishment of a governance infrastructure. The focus is more on internal processes and structures rather than public disclosure. Reference: AIGP Body of Knowledge on AI Governance and Infrastructure.
Which of the following would be the least likely step for an organization to take when designing an integrated compliance strategy for responsible AI?
A. Conducting an assessment of existing compliance programs to determine overlaps and integration points.
B. Employing a new software platform to modernize existing compliance processes across the organization.
C. Consulting experts to consider the ethical principles underpinning the use of AI within the organization.
D. Launching a survey to understand the concerns and interests of potentially impacted stakeholders.
Answer: B. Employing a new software platform to modernize existing compliance processes across the organization.
When designing an integrated compliance strategy for responsible AI, the least likely step would be employing a new software platform to modernize existing compliance processes. While modernizing compliance processes is beneficial, it is not as directly related to the strategic integration of ethical principles and stakeholder concerns. More critical steps include conducting assessments of existing compliance programs to identify overlaps and integration points, consulting experts on ethical principles, and launching surveys to understand stakeholder concerns. These steps ensure that the compliance strategy is comprehensive and aligned with responsible AI principles.
A company initially intended to use a large data set containing personal information to train an AI model. After consideration, the company determined that it can derive enough value from the data set without any personal information and permanently obfuscated all personal data elements before training the model.
This is an example of applying which privacy-enhancing technique (PET)?
A. Anonymization.
B. Pseudonymization.
C. Differential privacy.
D. Federated learning.
A. Anonymization.
Justification:
Definition of Anonymization:
Anonymization is the process of irreversibly transforming personal data so that individuals can no longer be identified, directly or indirectly. In this case, the company permanently obfuscated all personal data elements, ensuring that the data set no longer contains any personally identifiable information (PII).
Key Characteristics of Anonymization:
The process is irreversible.
The data set cannot be used to identify individuals, even when combined with other data sets.
It ensures compliance with privacy laws like GDPR, which treats anonymized data as no longer subject to data protection regulations.
Why not the other options?
B. Pseudonymization:
Pseudonymization replaces personal identifiers with pseudonyms (e.g., a unique ID) but does not make the data irreversible. Pseudonymized data can still be linked back to individuals with additional information, unlike anonymization.
C. Differential privacy:
Differential privacy involves adding statistical noise to the data to protect individual privacy while allowing insights at an aggregate level. It does not obfuscate or remove personal data entirely.
D. Federated learning:
Federated learning trains machine learning models across multiple decentralized data sets without sharing raw data. It does not involve obfuscating or removing personal data in a single data set.
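The contrast between the first two techniques is easier to see in code. The sketch below is illustrative only (the field names, keyed-hash token scheme, and pandas usage are assumptions, not part of the AIGP material): anonymization permanently removes the identifying fields, while pseudonymization swaps them for tokens that a key holder could still reverse.

```python
import hashlib

import pandas as pd

records = pd.DataFrame({
    "name": ["Alice", "Bob"],
    "email": ["alice@example.com", "bob@example.com"],
    "age": [34, 41],
    "purchase_total": [120.0, 75.5],
})

# Anonymization: permanently drop the identifying fields (irreversible).
anonymized = records.drop(columns=["name", "email"])

# Pseudonymization: replace identifiers with keyed tokens; whoever holds the key
# (or the token-to-identity mapping) can still re-identify individuals.
SECRET_KEY = "rotate-me"  # hypothetical key, held separately by the controller
pseudonymized = records.assign(
    subject_id=[
        hashlib.sha256((SECRET_KEY + email).encode()).hexdigest()[:12]
        for email in records["email"]
    ]
).drop(columns=["name", "email"])

print(anonymized, pseudonymized, sep="\n\n")
```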
The planning phase of the AI life cycle articulates all of the following EXCEPT the?
A. Objective of the model.
B. Approach to governance.
C. Choice of the architecture.
D. Context in which the model will operate.
Answer: B. Approach to governance.
The planning phase of the AI life cycle typically includes defining the objective of the model, choosing the appropriate architecture, and understanding the context in which the model will operate. However, the approach to governance is usually established as part of the overall AI governance framework, not specifically within the planning phase. Governance encompasses broader organizational policies and procedures that ensure AI development and deployment align with legal, ethical, and operational standards
What is the best reason for a company to adopt a policy that prohibits the use of generative AI?
A. Avoid using technology that cannot be monetized.
B. Avoid needing to identify and hire qualified resources.
C. Avoid the time necessary to train employees on acceptable use.
D. Avoid accidental disclosure of its confidential and proprietary information.
Correct Answer: D. Avoid accidental disclosure of its confidential and proprietary information.
The primary concern for a company adopting a policy prohibiting the use of generative AI is the risk of accidental disclosure of confidential and proprietary information. Generative AI tools can inadvertently leak sensitive data during the creation process or through data sharing. This risk outweighs the other reasons listed, as protecting sensitive information is critical to maintaining the company’s competitive edge and legal compliance. This rationale is discussed in the sections on risk management and data privacy in the IAPP AIGP Body of Knowledge.
Which of the following is an example of a high-risk application under the EU AI Act?
A. A resume scanning tool that ranks applicants.
B. An AI-enabled inventory management tool.
C. A government-run social scoring tool.
D. A customer service chatbot tool.
Correct Answer: C A government-run social scoring tool.
The EU AI Act categorizes certain applications of AI as high-risk due to their potential impact on fundamental rights and safety. High-risk applications include those used in critical areas such as employment, education, and essential public services. A government-run social scoring tool, which assesses individuals based on their social behavior or perceived trustworthiness, falls under this category because of its profound implications for privacy, fairness, and individual rights. This contrasts with other AI applications like resume scanning tools or customer service chatbots, which are generally not classified as high-risk under the EU AI Act.
All of the following are penalties and enforcements outlined in the EU AI Act EXCEPT?
A. Fines for SMEs and startups will be proportionally capped.
B. Rules on General Purpose AI will apply after 6 months as a specific provision.
C. The AI Pact will act as a transitional bridge until the Regulations are fully enacted.
D. Fines for violations of banned Al applications will be €35 million or 7% global annual turnover (whichever is higher).
C. The AI Pact will act as a transitional bridge until the Regulations are fully enacted.
The EU AI Act outlines specific penalties and enforcement mechanisms to ensure compliance with its regulations. Among these, fines for violations of banned AI applications can be as high as €35 million or 7% of the global annual turnover of the offending organization, whichever is higher. Proportional caps on fines are applied to SMEs and startups to ensure fairness. General Purpose AI rules are to apply after a 6-month period as a specific provision to ensure that stakeholders have adequate time to comply. However, there is no provision for an “AI Pact” acting as a transitional bridge until the regulations are fully enacted, making option C the correct answer.
According to the EU AI Act, providers of what kind of machine learning systems will be required to register with an EU oversight agency before placing their systems in the EU market?
A. AI systems that are harmful based on a legal risk-utility calculation.
B. AI systems that are “strong” general intelligence.
C. AI systems trained on sensitive personal data.
D. AI systems that are high-risk.
D. AI systems that are high-risk.
Explanation:
The EU AI Act introduces a regulatory framework aimed at ensuring the safe and responsible deployment of AI systems in the European Union. A key provision of the Act is the classification of AI systems into risk categories: unacceptable risk, high risk, limited risk, and minimal risk.
High-Risk AI Systems:
Definition of High-Risk AI Systems:
AI systems are considered high-risk if they:
Affect fundamental rights, health, safety, or access to opportunities.
Are used in critical areas such as healthcare, law enforcement, education, employment, and biometric identification.
Registration Requirement:
Providers of high-risk AI systems must:
Register their systems in an EU database managed by an oversight agency before placing them on the EU market.
Demonstrate compliance with strict requirements, including risk management, data governance, transparency, and human oversight.
Why the Other Options Are Incorrect:
AI systems that are harmful based on a legal risk-utility calculation (A):
While harm is a consideration, the Act focuses on predefined risk categories rather than requiring a general risk-utility calculation. “High-risk” classification depends on the system’s application and sector.
AI systems that are “strong” general intelligence (B):
The Act does not specifically regulate systems with “strong” or “general” intelligence. Current regulations are focused on specific use cases and risks rather than theoretical advancements in AI.
AI systems trained on sensitive personal data (C):
The Act regulates how personal data is processed within AI systems but does not require registration solely based on the type of training data. Compliance with the GDPR governs data protection aspects.
All of the following may be permissible uses of an AI system under the EU AI Act EXCEPT?
A. To detect an individual’s intent for law enforcement purposes.
B. To promote equitable distribution of welfare benefits.
C. To implement social scoring.
D. To manage border control.
C. To implement social scoring.
Justification:
Prohibition of Social Scoring:
The EU AI Act explicitly prohibits the use of AI systems for social scoring, especially when it involves evaluating individuals based on behavior, predicted personality traits, or social circumstances in ways that result in discriminatory or unfair treatment.
Permissible Uses Under the EU AI Act:
A. Law enforcement purposes: AI can be used under strict regulations for specific law enforcement purposes, such as detecting intent, provided it complies with safeguards.
B. Welfare distribution: AI may assist in ensuring equitable welfare distribution by analyzing eligibility or managing resources.
D. Border control: AI systems can be deployed for border management tasks like verifying identities or analyzing risks, subject to safeguards against misuse.
Why Social Scoring is the Exception:
Social scoring, often associated with surveillance and discriminatory practices (e.g., the “social credit” systems used in some regions), is inconsistent with EU principles of fairness, privacy, and non-discrimination.
What is the best method to proactively train an LLM so that there is mathematical proof that no specific piece of training data has more than a negligible effect on the model or its output?
A. Clustering.
B. Transfer learning.
C. Differential privacy.
D. Data compartmentalization.
C. Differential privacy.
Explanation:
Differential privacy is the best method to ensure that no specific piece of training data has a significant effect on the model or its output. This technique involves adding noise to the data or the training process in a controlled manner, such that it becomes mathematically provable that the model’s output does not change significantly due to the inclusion or exclusion of any single data point.
Key reasons why differential privacy is suitable:
It provides mathematical guarantees that the contribution of individual data points is limited.
It helps ensure data privacy because the model cannot be used to infer whether any specific data point was present in the training set.
Here’s why the other options are less suitable:
A. Clustering: Clustering is a method for grouping similar data points together but does not inherently protect individual data points’ influence on the model or provide mathematical guarantees about privacy.
B. Transfer learning: Transfer learning involves using a pre-trained model and fine-tuning it on new data, but it does not focus on ensuring that individual data points have a minimal impact on the overall model output.
D. Data compartmentalization: This is a method for organizing and isolating data into segments but does not directly address controlling the influence of specific data points on the model.
Differential privacy is specifically designed for scenarios where it is important to ensure that the presence or absence of any single piece of data cannot be detected or inferred from the model, making it the best choice for this purpose.
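To show the kind of mathematical guarantee involved, here is a minimal sketch of the Laplace mechanism, the simplest differentially private primitive (the epsilon value and data below are illustrative assumptions; training an LLM with this guarantee in practice typically uses DP-SGD, which adds noise to gradients rather than to a single query).

```python
import numpy as np

def dp_count(flags, epsilon):
    """Release a count with epsilon-differential privacy (a count query has sensitivity 1)."""
    true_count = sum(flags)
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

flags = [1, 0, 1, 1, 0, 1, 1]           # 1 = record has some property
print(dp_count(flags, epsilon=0.5))     # noisy count; any single record's influence is provably bounded
```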
Machine learning is best described as a type of algorithm by which?
A. Systems can mimic human intelligence with the goal of replacing humans.
B. Systems can automatically improve from experience through predictive patterns.
C. Statistical inferences are drawn from a sample with the goal of predicting human intelligence.
D. Previously unknown properties are discovered in data and used to predict and make improvements in the data.
B. Systems can automatically improve from experience through predictive patterns.
Explanation:
Machine learning (ML) is a branch of artificial intelligence (AI) that focuses on building algorithms and systems that can learn from data and improve their performance over time without being explicitly programmed for each specific task. ML algorithms learn from past data (experience) to identify patterns and make predictions or decisions.
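A minimal sketch of "improving from experience" (the learner, synthetic data, and sample sizes below are illustrative assumptions): the same algorithm generally scores better on held-out data as it is given more training examples.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# More experience (training examples) typically yields better held-out performance.
for n in (50, 200, 1500):
    model = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    print(f"trained on {n:>4} examples -> test accuracy {model.score(X_test, y_test):.3f}")
```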
You asked a generative AI tool to recommend new restaurants to explore in Boston, Massachusetts that have a specialty Italian dish made in a traditional fashion without spinach and wine. The generative AI tool recommended five restaurants for you to visit.
After looking up the restaurants, you discovered one restaurant did not exist and two others did not have the dish.
This information provided by the generative AI tool is an example of what is commonly called?
A. Prompt injection.
B. Model collapse.
C. Hallucination.
D. Overfitting.
C. Hallucination.
Explanation:
In the context of AI, hallucination refers to instances where a generative AI model produces information that is false, inaccurate, or fabricated. This means the model might generate responses that seem plausible or detailed but are not grounded in reality.
In your case, the generative AI tool recommended a restaurant that does not exist and suggested dishes that were not actually available at the other restaurants. This is a classic example of hallucination, where the model produces responses based on patterns it has learned, even though those responses do not correspond to real-world facts.
Here’s why the other options are incorrect:
A. Prompt injection: This occurs when a user manipulates the prompt to alter or exploit the AI’s behavior. It’s not relevant here, as the issue is about the AI providing inaccurate information, not about how the prompt influenced it.
B. Model collapse: This refers to a situation where a model’s performance deteriorates over time, often due to training issues. It’s not related to the generation of incorrect information.
D. Overfitting: Overfitting happens when a model learns too closely from its training data, resulting in poor performance on new, unseen data. It is not related to the generation of false information like recommending non-existent restaurants.
Each of the following actors is typically engaged in the AI development life cycle EXCEPT?
A. Data architects.
B. Government regulators.
C. Socio-cultural and technical experts.
D. Legal and privacy governance experts.
B. Government regulators.
Explanation:
In the context of the AI development life cycle, various stakeholders are typically involved, such as:
A. Data architects: They play a crucial role in designing the data infrastructure, preparing and structuring data, and ensuring it is suitable for training and testing AI models.
C. Socio-cultural and technical experts: These experts help ensure that the AI system is developed with consideration for its social and cultural impact and that it aligns with technical best practices and societal values.
D. Legal and privacy governance experts: These professionals ensure that the AI system complies with laws and regulations regarding data privacy, security, and ethical considerations throughout its development.
B. Government regulators, however, are generally not directly involved in the AI development process itself. Instead, they play a role in setting standards, creating regulations, and ensuring compliance after the AI system is deployed or during audits. They might interact with organizations to ensure adherence to laws, but they are not typically part of the internal development process.
A company is working to develop a self-driving car that can independently decide the appropriate route to take the driver after the driver provides an address.
If they want to make this self-driving car “strong” AI, as opposed to “weak,” the engineers would also need to ensure?
A. That the AI has full human cognitive abilities that can independently decide where to take the driver.
B. That they have obtained appropriate intellectual property (IP) licenses to use data for training the AI.
C. That the AI has strong cybersecurity to prevent malicious actors from taking control of the car.
D. That the AI can differentiate among ethnic backgrounds of pedestrians.
A. That the AI has full human cognitive abilities that can independently decide where to take the driver.
Explanation:
The distinction between “strong” AI (also known as Artificial General Intelligence, or AGI) and “weak” AI (also known as narrow AI) lies in the scope of cognitive abilities.
Weak AI is designed to perform a specific task or set of tasks, such as driving a car or playing chess. It does not possess general understanding or reasoning beyond its designated functions.
Strong AI, or AGI, would have the ability to understand, learn, and reason across a wide range of topics, similar to a human. It would be capable of making decisions autonomously in a manner that reflects broad human-like understanding.
In the context of a self-driving car, making the car “strong” AI would require it to have the capability to independently decide where to take the driver even without a specific address, reflecting human-like judgment and understanding of complex situations.
Which of the following is NOT a common type of machine learning?
A. Deep learning.
B. Cognitive learning.
C. Unsupervised learning.
D. Reinforcement learning.
B. Cognitive learning.
Explanation:
Cognitive learning is not a standard term used to describe a type of machine learning. It generally refers to human learning processes, such as understanding, applying knowledge, and thinking. It is not specifically related to machine learning algorithms or methods.
The other options are common types of machine learning:
A. Deep learning: A subset of machine learning that uses neural networks with many layers (deep neural networks) to learn from large amounts of data. It is particularly effective in complex tasks like image and speech recognition.
C. Unsupervised learning: A type of machine learning where the model is trained on data without labeled outcomes. It is used to find patterns or groupings within the data, such as clustering and association.
D. Reinforcement learning: A type of machine learning where an agent learns to make decisions by taking actions in an environment to maximize cumulative rewards. It is commonly used in robotics, game playing, and autonomous systems.
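For contrast with supervised approaches, here is a minimal sketch of unsupervised learning (k-means on synthetic, unlabeled points; the cluster count and data are illustrative assumptions): the algorithm finds groupings with no target labels at all.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)  # the generated labels are discarded

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(kmeans.cluster_centers_)  # structure discovered without any labeled outcomes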
An EU bank intends to launch a multi-modal AI platform for customer engagement and automated decision-making to assist with the opening of bank accounts. The platform has been subject to thorough risk assessments and testing, where it proved effective in not discriminating against any individual on the basis of a protected class.
What additional obligations must the bank fulfill prior to deployment?
A. The bank must obtain explicit consent from users under the Privacy Directive.
B. The bank must disclose how the AI system works under the EU Digital Services Act.
C. The bank must subject the AI system to an adequacy decision and publish its appropriate safeguards.
D. The bank must disclose the use of the AI system and implement suitable measures for users to contest.
D. The bank must disclose the use of the AI system and implement suitable measures for users to contest.
Explanation:
Under the EU AI Act and other relevant EU regulations, when deploying an AI system that is used in high-stakes contexts like customer engagement and automated decision-making for opening bank accounts, the bank has certain obligations:
Transparency: The bank is required to disclose to customers that an AI system is being used in the decision-making process. This ensures that users are aware that decisions affecting them are partially or wholly automated.
User Rights: The bank must also implement mechanisms for users to contest decisions made by the AI system. This means that if a customer disagrees with a decision made by the AI (e.g., rejection of a bank account application), they should have a way to seek a human review or appeal the decision.
Random forest algorithms are in what type of machine learning model?
A. Symbolic.
B. Generative.
C. Discriminative.
D. Natural language processing.
C. Discriminative.
Explanation:
Random forest algorithms fall under the category of discriminative models in machine learning. Discriminative models are designed to classify or predict a target outcome by learning the boundary between different classes based on the features in the data.
Here’s why the other options are not correct:
A. Symbolic: Symbolic AI involves rule-based systems where knowledge is encoded in symbols and rules. Random forests do not follow this approach; they are based on data-driven learning of decision trees.
B. Generative: Generative models focus on modeling the joint probability of the input features and the output labels, allowing them to generate new data instances. Random forests do not attempt to model the joint probability; instead, they learn to differentiate between classes based on input features.
D. Natural language processing: Natural Language Processing (NLP) is a field of AI focused on interactions between computers and human language. Random forest is a type of algorithm that can be applied to NLP tasks, but it is not a category of machine learning itself.
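As an illustration of the discriminative behavior described above, here is a minimal sketch with scikit-learn's RandomForestClassifier on synthetic data (the dataset and hyperparameters are illustrative assumptions): the forest learns to separate classes from input features rather than modeling how the data was generated.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# The forest predicts class membership for new inputs, i.e., it discriminates between classes.
print("predicted class probabilities for one sample:", forest.predict_proba(X_test[:1]))
```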
Under the NIST AI Risk Management Framework, all of the following are defined as characteristics of trustworthy AI EXCEPT?
A. Tested and Effective.
B. Secure and Resilient.
C. Explainable and Interpretable.
D. Accountable and Transparent.
A. Tested and Effective.
Explanation:
Under the NIST AI Risk Management Framework (NIST AI RMF), the focus is on ensuring that AI systems are developed and deployed in a way that makes them trustworthy. Trustworthiness is defined through several key characteristics, including:
B. Secure and Resilient: Ensuring that AI systems are protected against adversarial attacks, vulnerabilities, and can recover from unexpected events is a key aspect of trustworthiness.
C. Explainable and Interpretable: It is important for AI systems to provide outputs that can be understood and explained to human users, especially in high-stakes environments. This ensures that stakeholders understand how decisions are made.
D. Accountable and Transparent: Trustworthy AI systems require clear accountability structures and transparency around how decisions are made, ensuring that stakeholders can understand and hold the AI system accountable.
Pursuant to the White House Executive Order of November 2023, who is responsible for creating guidelines to conduct red-teaming tests of AI systems?
A. National Institute of Standards and Technology (NIST).
B. National Science and Technology Council (NSTC).
C. Office of Science and Technology Policy (OSTP).
D. Department of Homeland Security (DHS).
A. National Institute of Standards and Technology (NIST).
Explanation:
According to the White House Executive Order on AI issued in November 2023, the National Institute of Standards and Technology (NIST) is tasked with developing guidelines for conducting red-teaming tests of AI systems. These guidelines are intended to provide a framework for testing and evaluating the robustness, security, and trustworthiness of AI systems, particularly to identify vulnerabilities and risks associated with their deployment.
Red-teaming involves subjecting AI models to rigorous testing, often simulating adversarial conditions, to assess their performance under various challenging scenarios. NIST’s role is to ensure that these guidelines are comprehensive and aligned with standards that promote the safe and responsible use of AI.
The other options are less relevant for this specific responsibility:
B. National Science and Technology Council (NSTC): This body coordinates science and technology policy across federal agencies but is not specifically tasked with creating guidelines for red-teaming.
C. Office of Science and Technology Policy (OSTP): The OSTP plays a role in setting overall policy direction and priorities for AI but does not directly create testing guidelines like those developed by NIST.
D. Department of Homeland Security (DHS): The DHS is involved in matters of national security and could be concerned with the implications of AI in that context but is not responsible for creating technical testing guidelines for AI systems.
According to November 2023 White House Executive Order, which of the following best describes the guidance given to governmental agencies on the use of generative AI as a workplace tool?
A. Limit access to specific uses of generative AI.
B. Impose a general ban on the use of generative AI.
C. Limit access of generative AI to engineers and developers.
D. Impose a ban on the use of generative AI in agencies that protect national security.
A. Limit access to specific uses of generative AI.
Justification:
White House Executive Order on AI Guidance:
The November 2023 White House Executive Order emphasizes responsible use of generative AI, focusing on limiting its use to specific applications that align with governmental priorities and ensuring its deployment is ethical, secure, and fair.
Context of Guidance:
Rather than implementing a blanket ban, the guidance seeks to control specific use cases to minimize risks such as misuse, security breaches, or ethical concerns.
Why Not Other Options:
B. Impose a general ban: The order does not call for a general ban but promotes responsible and controlled use.
C. Limit access to engineers and developers: The focus is on use-case restrictions, not limiting it to specific roles.
D. Impose a ban on use in agencies that protect national security: Instead of banning use in certain agencies, the order likely includes additional security protocols for high-risk environments.
The White House Executive Order from November 2023 requires companies that develop dual-use foundation models to provide reports to the federal government about all of the following EXCEPT?
A. Any current training or development of dual-use foundation models.
B. The results of red-team testing of each dual-use foundation model.
C. Any environmental impact study for each dual-use foundation model.
D. The physical and cybersecurity protection measures of their dual-use foundation models.
C. Any environmental impact study for each dual-use foundation model.
Explanation:
The Executive Order issued by the White House in October 2023 mandates that companies developing dual-use foundation models provide reports to the federal government on several aspects of their AI systems. These requirements include:
Ongoing or Planned Activities: Companies must disclose any ongoing or planned activities related to the training, development, or production of dual-use foundation models.
Red-Team Testing Results: Developers are required to report the outcomes of red-team testing—structured efforts to identify flaws and vulnerabilities in AI systems—based on guidelines developed by the National Institute of Standards and Technology (NIST).
Physical and Cybersecurity Measures: Companies must detail the physical and cybersecurity protections implemented to safeguard the integrity of the training process against potential threats.
However, the Executive Order does not require companies to provide reports on environmental impact studies for each dual-use foundation model. While environmental considerations are important, they are not specified as a reporting requirement in this particular Executive Order.