Questions Flashcards

1
Q

Fiona leads the data science team at a media company. She is developing a new machine learning model to
improve the company's recommender systems. She has a large volume of input data about user preferences, but not
all users have actively confirmed their likes and dislikes. What would be the most suitable ML approach for Fiona
to use?
A. Supervised learning.
B. Unsupervised learning.
C. Reinforcement learning.
D. Semi-supervised learning.

A
The correct answer is D. The use of semi-supervised learning is the most suitable approach in this situation
given the combination of labeled and unlabeled data in the full dataset.
    Body of Knowledge Domain I, Competency B
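For study purposes, here is a minimal sketch of the semi-supervised setup, assuming scikit-learn; the feature matrix, labels and 70% unlabeled ratio are synthetic stand-ins for Fiona's preference data:

```python
# Semi-supervised learning sketch: unlabeled examples are marked with -1,
# and a self-training wrapper pseudo-labels them using a base classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))             # user-preference features (synthetic)
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # 1 = "like", 0 = "dislike"
y[rng.random(200) < 0.7] = -1             # most users never confirmed a label

model = SelfTrainingClassifier(LogisticRegression())
model.fit(X, y)                           # learns from labeled + unlabeled rows
print(model.predict(X[:5]))
```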
2
Q
Which of the following is a method that helps to ensure AI integrity and that data is representative,
    accurate and unbiased?
    A. Tracking data lineage.
    B. Assuming the data is accurate.
    C. Purchasing data from a vendor.
    D. Reversioning the data internally.
A
The correct answer is A. Data lineage tracks data over time from the source to any other intervening
    programs or uses, and ultimately to the AI program or process utilizing it. Knowing where and how data has
    been used and manipulated before it is incorporated into an AI program or process helps ensure the data
    being used is accurate and appropriate.
    Body of Knowledge Domain VI, Competency D
3
Q
You are tasked with designing an AI system to predict customer attrition for a telecommunications
    company. During the design phase, you are focusing on developing the data strategy. Which of the
    following statements best describes the critical importance of data gathering in this phase?
    A. It primarily aims to reduce data storage costs, prioritizing low-cost repositories over data quality or
    relevance.
    B. It ensures the AI system has access to high-quality and relevant data, which is the fuel for training
    effective models.
    C. It focuses on designing attractive user interfaces for data input, catering to user experience over data
    completeness or accuracy.
D. It is only important for regulatory compliance.
A
The correct answer is B. Implementing a data strategy during the AI system design phase is crucial because
it ensures access to high-quality and relevant data, which is fundamental for training effective AI models.
The quality and relevance of data directly impact the performance and accuracy of AI systems. The other
options describe aspects that are not central to an AI system’s design phase.
    Body of Knowledge Domain V, Competency B
4
Q
Which of the following is included in the Canadian Artificial Intelligence and Data Act’s risk-based approach?
    A. Source of the artificial intelligence technology.
    B. Elimination of data privacy impact assessments.
    C. Consideration of the nature and severity of harms.
    D. Equal application to companies operating in and outside of Canada.
A
The correct answer is C. Bill C-27 bases the risk evaluation on:
    a. Nature and severity of harms.
    b. Scale of use of the system.
    c. Extent to which individuals can opt out or control interactions.
    d. Imbalance of economic and social status of the individual interacting with the system.
    Body of Knowledge Domain IV, Competency B
5
Q
In the context of workforce readiness for AI, what is an essential consideration for businesses to fully
    leverage AI benefits responsibly?
    A. Focusing on training a select IT task force in advanced AI techniques.
    B. Avoiding employee training on AI to prevent over-reliance on technology.
    C. Implementing comprehensive training programs across various departments.
    D. Outsourcing AI-related tasks to third-party specialists to reduce training needs.
A
The correct answer is C. To maximize the benefits of AI, businesses should implement comprehensive
    training programs that encompass not just technical AI skills but also ethical considerations of AI usage. This
    approach ensures a broad-based understanding of AI across the organization, enabling more effective and
    responsible use of AI technologies.
    Body of Knowledge Domain VII, Competency B
6
Q
The Human Rights, Democracy and Rule of Law Impact Assessment builds on a history of impact
assessments used extensively in other domains, such as data protection regulation. What is
    one of the main objectives of this type of impact assessment?
    A. To produce a document with mandatory requirements and enforcement mechanisms for harmful
    design choices.
    B. To provide a process for assessing and grading the likelihood of risks associated with potentially
    harmful outcomes.
    C. To expand on the proposals and guidelines for specific sectors and AI applications decided solely by the
    Council of Europe.
    D. To voluntarily endorse business practices, technology or policies after they have been deployed or
    when most convenient to developers.
A
The correct answer is B. The Human Rights, Democracy and the Rule of Law Impact Assessment provides an
    opportunity for project teams and engaged stakeholders to come together to produce detailed evaluations
    of the potential and actual harmful impacts that the design, development and use of an AI system could
    have on human rights, fundamental freedoms, elements of democracy and the rule of law.
    Body of Knowledge Domain IV, Competency C
7
Q
The manager of a call center decides to use an AI system to record and conduct sentiment analysis on
    interactions between customers and call center agents. Prior to using this data to evaluate the
    performance of the agents, what step should be implemented to minimize error in the evaluations?
    A. A security review of the AI system.
    B. A manual review of the recordings.
    C. An estimate of call rates to the center.
    D. A retention policy for aggregate reports.
A
The correct answer is B. Human review or supervision of the recorded call should be done to ensure the
sentiment analysis of the AI system is accurate prior to using the data to evaluate the agents’ performance.
    Retention of data is important, but aggregate report data will not create risk for an individual. An estimate of
    call rates may help in staffing or workforce planning but won’t directly affect an individual’s performance
    rating. A security review of the AI will ensure security of the system, but also does not directly affect the
    individual or how their performance is evaluated.
    Body of Knowledge Domain III, Competency B
8
Q
You are a privacy consultant working with a tech company developing an AI-driven health-monitoring
    application. The application collects user data to provide personalized health insights. The company is
    dedicated to ensuring user privacy and seeks your advice. In the context of developing a privacy-enhanced
    AI health-monitoring application, which of the following measures would be most effective in aligning with
    privacy standards?
    A. Implementing user consent mechanisms and anonymizing collected data to protect user identities.
    B. Utilizing social media integration to enrich user profiles with additional lifestyle information and offers.
    C. Sharing raw user data with various pharmaceutical companies to facilitate targeted drug development.
    D. Storing user health data on a publicly accessible server for easy collaboration with health care
    researchers.
A
The correct answer is A. Obtaining consent from users empowers individuals to make informed decisions
    about the use of their health data. Furthermore, anonymizing involves removing personally identifiable
    information from the collected data, such as names, addresses and contact details.
    Body of Knowledge Domain II, Competency B
9
Q
Which of the following is an important consideration for mitigating AI system errors?
    A. Excluding a review of previous incidents.
    B. Focusing exclusively on legal compliance.
    C. Adopting a uniform approach for all errors.
    D. Understanding the AI use case and the data type.
A
The correct answer is D. The AI use case and the data type involved in system errors will guide the method
    of mitigation needed. This allows for an appropriate response and helps to guide any updates or changes
    needed to avoid similar issues in the future.
    Body of Knowledge Domain VI, Competency E
10
Q
Large language models are considered massive neural networks that can generate human-like text (e.g.,
    emails, poetry, stories, etc.). These models also have the ability to predict the text that is likely to come
    next. How is machine learning related to LLMs?
    A. LLMs do not use ML but rely on a fixed answer database.
    B. ML and LLMs are unrelated; LLMs use manual programming, not ML.
    C. LLMs learn from text data and use ML algorithms to generate responses.
    D. ML and LLMs are separate; ML is for numerical analysis, and LLMs are for graphics.
A
The correct answer is C. Large language models cannot be built without training on the datasets
required for the model to learn, make decisions and predict. LLMs are built using a machine learning
    algorithm trained to make classifications or predictions.
    Body of Knowledge Domain I, Competency A
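As a study aid, the next-token idea can be shown with a toy bigram model; real LLMs are vastly larger neural networks, but the underlying ML pattern of learning from text and predicting the likely next token is the same (the corpus below is invented):

```python
# Toy next-token predictor: count word-pair frequencies, then predict the
# most common follower. LLMs do this with learned neural representations.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word: str) -> str:
    return bigrams[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat"
```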
11
Q
In the design phase of AI development, privacy-enhancing technologies play a critical role in addressing
    data privacy concerns. Among these technologies, differential privacy and federated learning are
    significant. Given this context, which of the following best illustrates the application of differential privacy in
    the AI design phase?
A. An AI model is trained on a centralized dataset in which individual data points are anonymized before
    being added to the dataset.
    B. An AI system enables multiple models to learn from separate datasets, then combines them to improve
    the overall model without sharing individual data points.
    C. An AI application uses encryption to secure data in transit between the client and server, ensuring that
    intercepted data cannot be read by unauthorized parties.
    D. An AI system adds random noise to the raw data it collects, ensuring that any output reflects general
    trends in the dataset but does not compromise individual data privacy.
A
The correct answer is D. This option directly aligns with the principles of differential privacy. Differential
    privacy is a technique used to ensure the privacy of individual data points in a dataset by adding statistical
    noise. This method allows the dataset to be used for analysis or AI model training while maintaining the
    confidentiality of individual data entries. The key aspect of differential privacy is that it provides insights into
    the general patterns or trends of the data without revealing sensitive information about any specific
    individual.
    Body of Knowledge Domain V, Competency B
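A minimal sketch of the Laplace mechanism behind option D, assuming NumPy; the epsilon value and the count query are illustrative choices, not a production calibration:

```python
# Differentially private count: for a count query the sensitivity is 1,
# so Laplace noise with scale 1/epsilon masks any single individual's
# presence while preserving the aggregate trend.
import numpy as np

def dp_count(records: list, epsilon: float = 1.0) -> float:
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return len(records) + noise

records = list(range(1000))
print(dp_count(records, epsilon=0.5))  # noisy value near the true count, 1000
```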
12
Q
When implementing responsible AI governance and risk management, what is a critical consideration for
    testing and evaluation in alignment with a specific use case?
    A. Include potentially malicious data in the test.
    B. There is no need for repeatability assessments for novel cases.
    C. Use the same testing and evaluation approach for all algorithms.
    D. Exclude cases that the AI has not encountered during its training.
A
The correct answer is A. Incorporating potentially malicious data into the testing process would enable the
    assessment of the AI’s robustness and resilience against adversarial inputs and ensure that it behaves safely
    in real-world scenarios. The other options are not aligned with the best practices for responsible AI testing.
    Body of Knowledge Domain VI, Competency E
13
Q
The Digital Services Act aims to regulate the digital marketplace in the EU. Which of the following best
    describes how the DSA will empower users?
    A. Online marketplaces will have to comply with special obligations to combat the sale of illegal products
    and services.
    B. Very large online search engines will be held accountable for their role in disseminating illegal and
    harmful content.
    C. Very large online platforms will have to be more transparent in how they select the companies allowed
    to advertise on their platform.
    D. Platforms will have to provide clear information on why content is shown and give users the right to opt
    out of profiling-based recommendations.
A
The correct answer is D. Platforms will be required to disclose to users why the user is being shown specific
    content, as well as provide users with the right to opt out of being shown content that was derived through
    profiling. This is the only option that provides users with control over their content.
    Body of Knowledge Domain III, Competency A
14
Q
When assessing the success of an AI system in meeting its objectives, which of the following approaches best
    aligns with the requirement to ensure a comprehensive evaluation while avoiding automation bias?
    A. Evaluate and align predefined benchmarks that will provide evidence of having achieved your system
    goals.
    B. Rely on the AI system’s output to determine if the goals were achieved, as this will provide a reliable
    and objective measure.
    C. Conduct a review that includes the AI system’s output, human interpretation, and potential secondary
    or unintended outputs.
D. Focus on user feedback about the AI system’s performance to directly measure the system’s
effectiveness in meeting the objectives.
A
The correct answer is C. This is a comprehensive approach as it includes different testing initiatives. By
    integrating the AI system’s output, human interpretation and potential secondary or unintended outputs,
    you will avoid automation bias (when only relying on the output), as well as the limitation of having only
    human feedback that might not fully capture AI specificities. Additionally, this process avoids a narrow
    focus, which would be the result of using only a benchmark as a comparison item.
    Body of Knowledge Domain VI, Competency E
15
Q
Which of the following pairs provides examples of potential group harms associated with AI technologies?
    A. Safety and deep fakes.
    B. Acceleration and reputational.
    C. Mass surveillance and facial recognition.
    D. Carbon dioxide emission and agricultural.
A
The correct answer is C. Mass surveillance and facial recognition can lead to group harm given the potential
use of AI technology to target and/or discriminate against specific groups or demographics. Safety and deep
fakes are examples of societal harm, which differs from group harm in that it is not targeted
against a specific group or demographic. Acceleration refers to the potential harm caused by AI advancing
beyond the ability to safeguard it properly. Reputational harm refers to the potential risk to organizations or
individuals from poorly managed AI or competitors creating deep fakes. Carbon dioxide emissions and
agricultural harm refer to the possibility of increased reliance on computing affecting the environment.
    Body of Knowledge Domain II, Competency A
16
Q
What are the initial steps in the AI system planning phase?
A. (1) Plan the team; (2) determine specific use cases the organization wants the AI system to solve; (3) identify gaps to achieve the use cases; (4) identify necessary data for the AI system.
B. (1) Gain stakeholder buy-in; (2) determine specific use cases the organization wants the AI system to solve; (3) identify gaps to achieve the use cases; (4) identify necessary data for the AI system.
C. (1) Determine the budget; (2) determine specific use cases the organization wants the AI system to solve; (3) identify gaps to achieve the use cases; (4) identify necessary data for the AI system.
D. (1) Determine the business problem to be solved by the AI system; (2) determine the specific use cases; (3) identify gaps to achieve the use cases; (4) identify necessary data for the AI system.
A
The correct answer is D. Identifying the business problem to be solved is always the first step in any
    significant technology investment. The steps listed in other answers may be relevant but are not initial
    considerations.
    Body of Knowledge Domain V, Competency A
17
Q
During the AI winter of the 1970s, funding declined partly due to which of the following?
    A. Failing to meet expectations from initial hype and promises.
    B. Government agencies reallocating budgets to other sciences.
    C. Hardware restrictions on complex neural network simulations.
    D. Mathematical proofs showing capabilities were overestimated.
A
The correct answer is A. Disillusionment set in when early hopes did not match the actual capability delivered.
Mathematical theories did not yet fully characterize limitations.
    Body of Knowledge Domain I, Competency D
18
Q
Which of the following is NOT true about the relationship of explainability and transparency as they relate
    to AI?
    A. They are synonymous.
    B. They support trustworthy AI.
    C. They are distinct characteristics that support each other.
    D. They enable users of the system to understand the outcomes.
A
The correct answer is A. Both are types of documentation that help users understand how the system
produces its outputs. Both support trustworthy AI as defined by NIST, OECD and others. “Explainability and
transparency are distinct characteristics that support each other. Transparency can answer the question of
‘what happened’ in the system. Explainability can answer the question of ‘how’ a decision was made in the
system” (NIST AI RMF, 3.5 Explainable and Interpretable). They are not synonymous: the words do not have
the same meaning, though they are sometimes presented in ways that make them appear interchangeable.
    Body of Knowledge Domain II, Competency B
19
Q
A regulator has ruled that the AI model used by some features of your product should not be used in its
    jurisdiction. What is the most robust way of reacting to this situation?
    A. Shut down the service in the jurisdiction to pressure the regulator to change its position.
    B. Create a second, separate version of the application and deploy it to users in the jurisdiction.
    C. Implement a feature flag to enable or disable the features based on a user’s country of origin.
    D. Retrain and deploy your AI model immediately to prove you can react quickly to regulatory demands.
A
The correct answer is C. Feature flags can be toggled without major engineering effort, and keeping the
feature disabled in that jurisdiction during regulatory negotiations avoids unnecessary confrontation with
the regulator and the need for a redeploy.
    Body of Knowledge Domain VI, Competency F
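A minimal sketch of the jurisdiction-based feature flag in option C; the country code and flag store are hypothetical placeholders (a real deployment would read the flag from a configuration service so it can be toggled without a redeploy):

```python
# Gate the AI feature per jurisdiction and fall back to non-AI behavior.
FLAG_DISABLED_COUNTRIES = {"XX"}  # hypothetical jurisdiction code

def ai_feature_enabled(country: str) -> bool:
    return country not in FLAG_DISABLED_COUNTRIES

def recommend(user: dict) -> str:
    if ai_feature_enabled(user["country"]):
        return "ai-ranked results"   # stand-in for the AI model call
    return "rule-based results"      # compliant fallback for that region

print(recommend({"country": "XX"}))  # -> rule-based results
```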
20
Q
Which of the following is often used as a defense to a claim for copyright infringement with respect to AI?
    A. Fair use.
    B. Ethical use.
    C. Patentability of AI models.
    D. AI models made by the government.
A
The correct answer is A. The fair use doctrine states that excerpts of copyrighted material may, under certain
circumstances, be copied verbatim for purposes such as criticism, news reporting, teaching and research
without the need for permission from or payment to the copyright holder. Many AI manufacturers rely on
fair use as a copyright infringement defense, alleging that the use of copyrighted material for training an AI
model is “transformative” (does not merely replicate the material) and does not harm the commercial
market for the copyright holder’s works.
    Body of Knowledge Domain III, Competency A
21
Q
What does the term “generalization” mean regarding an AI model?
    A. The model is a good data classifier.
    B. The model is effective with new data.
    C. The model can solve a variety of business problems.
    D. The model can produce good outputs from few inputs.
A
The correct answer is B. A model’s generalization is its ability to perform well on new data, in addition to
    training and initial test data.
    Body of Knowledge Domain V, Competency C
22
Q
How should an organization allocate resources to effectively address AI risks?
    A. Allocate resources to highest risk areas.
    B. Allocate resources to highest risk factors.
    C. Allocate resources to highest risk tolerance.
    D. Allocate resources to highest risk outcomes.
A
The correct answer is A. After performing a risk assessment to determine where AI risk resides in an
    organization, priority should be given to the highest risk areas in terms of resources allocated. Risk
    tolerance relates to organizational risk appetite, risk factors are inputs to determining a risk level and risk
    outcomes relate to adverse scenarios that might transpire. They are related concepts, but when it comes to
    prioritizing and allocating resources, best practice suggests it is done by risk area.
    Body of Knowledge Domain VI, Competency A
23
Q
Why should an AI developer separate data into training and test sets?
    A. To create models that can be trained quickly.
    B. To reduce the inherent biases in stochastic gradient descent.
    C. To avoid overfitting to the specific characteristics of the training data.
    D. To improve the test validation scores by presenting multiple datasets.
A
The correct answer is C. It is best practice to separate training and test sets to ensure any random biases in
    the training set do not carry over into the final model. In machine learning, overfitting occurs when an
    algorithm fits too closely or even exactly to its training data, resulting in a model that can’t make accurate
    predictions or conclusions from any data other than the training data.
    Body of Knowledge Domain VI, Competency F
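A minimal sketch of the train/test separation, assuming scikit-learn and synthetic data; a noticeable gap between the two scores is the overfitting signal described above:

```python
# Hold out a test set so the model is scored on data it never saw.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = (X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)  # noisy labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = DecisionTreeClassifier().fit(X_tr, y_tr)  # will also fit training noise
# A large train/test gap indicates overfitting to the training data.
print("train:", model.score(X_tr, y_tr), "test:", model.score(X_te, y_te))
```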
24
Q
When considering the implementation of AI systems, how should companies educate users to best address
    their concerns?
    A. Abort the AI system to avoid addressing user concerns.
    B. Assume that users will self-educate through online resources.
    C. Provide highly technical details relevant only to AI professionals.
    D. Offer comprehensive information on AI functionalities and limitations.
A
The correct answer is D. To effectively address user concerns, companies should provide comprehensive
    and accessible information regarding the AI system. This includes educating users on the capabilities and
    limitations of AI to ensure that they have a balanced and realistic understanding of the AI technology in
    place.
    Body of Knowledge Domain VII, Competency B
25
Q
Which of the following is TRUE of a “weak” AI system?
A. It closely mimics the operation of the human brain.
B. It boosts productivity by enabling smarter decision-making.
C. It outperforms humans across a comprehensive range of tasks.
D. It can perform complex tasks and achieve goals in different contexts.

A
The correct answer is B. Weak AI systems boost productivity by enabling smarter decision-making. The other responses are examples of strong AI systems. “Strong” AI refers to artificial general intelligence, which does not yet exist. All current AI systems are considered “weak” AI, suitable for supporting human decision-making and automating repetitive tasks but not equivalent to human intelligence in terms of understanding context and meaning.
Body of Knowledge Domain I, Competency B
26
Q
Alex is working for a public transit agency that is building out its AI program. He is collaborating with a local university on an AI project that involves what the NIST AI Risk Management Framework defines as social responsibility. Which proposal below best aligns with the concepts of social responsibility?
A. Using AI-assisted software to analyze anonymized ridership data to fulfill government reporting requirements.
B. Attaching sensors on buses to determine and assess heavy traffic periods on specific routes to provide accurate travel time to passengers.
C. Analyzing video from station surveillance cameras to determine when office trash cans need to be emptied to save staff from unnecessary trips.
D. Developing an AI-based mobile application that provides pseudonymized assistance to disabled customers booking point-to-point ride services.

A
The correct answer is D. NIST refers to the organization’s social responsibility as considering the “impacts of its decisions and activities on society and the environment through transparent and ethical behavior.” While each of these is important or useful in one way or another, providing ride assistance to disabled customers meets the goal of ethical social behavior. By pseudonymizing the information and the service itself, the company is providing a socially responsible service that protects the individual.
Body of Knowledge Domain IV, Competency C
27
Q
The engineering team is working on training an AI model with a large, new dataset culled from social media. The team has built out the model to predict trends in car buying for the coming year based on social media posts. The AI model has a clear bias toward one particular automotive model. It turns out the recent data input was pulled when the car was featured in a blockbuster Hollywood movie. This is an example of which type of AI bias?
A. Human-cognitive.
B. Systemic and directed.
C. Computational and statistical.
D. Psychological and sociological.

A
The correct answer is C. Computational and statistical biases may stem from errors due to non-representative samples. In this example, the data is skewed based on the movie release.
Body of Knowledge Domain VI, Competency E
28
Q
What is the AI Liability Directive proposed by the European Commission?
A. A directive that will regulate organizations’ implementation of AI systems.
B. A directive that will provide for legal responsibility by organizations using AI systems.
C. A directive that will minimize and regulate the development and adoption of AI systems.
D. A directive that will reduce organizational legal risks and costs associated with AI systems.

A
The correct answer is B. The AI Liability Directive is a proposal by the European Commission for a directive on civil liability for AI systems that complements and modernizes the EU liability framework to introduce new rules specific to damages caused by AI systems.
Body of Knowledge Domain III, Competency C
29
Q
George is developing an advertising campaign for Splendide Company’s new product. George wants to incorporate colorful, eye-catching modern art as the backdrop for the product in print and digital ads. He uses a generative AI application to create the campaign and plans to use the prompt, “colorful abstract paintings in the style of contemporary artist Alexandra Artiste.” George determines that, because he wants to reference a specific artist, he should obtain a copyright license from the artist. To protect Splendide from liability, he wants to include terms in the license that provide exceptions to copyright infringement indemnification, as is commonly done with copyright infringement licensing contracts. Which of the following is a common exception to copyright infringement indemnification that does NOT work well in an AI model?
A. Identifying the licensee as the owner of the resulting work.
B. Requiring the licensee to use the artwork to train the model.
C. Identifying the output from the AI model as an original work.
D. Combining the licensed artwork with other artwork in the AI model.

A
The correct answer is D. Typical exceptions to intellectual property infringement indemnification in IP license agreements include exceptions for modifications to the licensed work, unauthorized combinations of the licensed work and other works, and use of the licensed work beyond the scope authorized in the license agreement. Because AI models modify, combine and use the works on which they are trained in other contexts, these exceptions to infringement indemnification do not work.
Body of Knowledge Domain VII, Competency A
30
Q
Nolan is in the IT procurement department of Happy Customer Inc., a provider of call center solutions. Other companies are using Happy Customer Inc. services to help their clients with easy, first-level support requests. Happy Customer Inc. wants to offer its B2B clients a new text-based chatbot solution that offers the right answers to a given set of questions. What type of AI model should Nolan look for?
A. A robotics model.
B. A statistical model.
C. A decision tree model.
D. A speech recognition model.

A
The correct answer is C. A decision tree model predicts an outcome based on a flowchart of questions and answers. A statistical model is used to model the relationships between two variables. A speech recognition model is used to analyze speech, e.g., for voice assistants. A robotics model is based on a multidisciplinary field that encompasses the design, construction, operation and programming of robots and allows AI systems and software to interact with the physical world without human intervention.
Body of Knowledge Domain I, Competency C
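A toy illustration of the flowchart idea behind a decision tree chatbot; the questions, answers and routing below are invented for illustration:

```python
# Each node either asks a question (branching on the reply) or gives an answer.
TREE = {
    "question": "Is the issue about billing?",
    "yes": {"answer": "Route to the billing FAQ."},
    "no": {
        "question": "Can you log in?",
        "yes": {"answer": "See the troubleshooting guide."},
        "no": {"answer": "Reset your password via the account page."},
    },
}

def traverse(node: dict, replies: list) -> str:
    if "answer" in node:
        return node["answer"]
    return traverse(node[replies[0]], replies[1:])

print(traverse(TREE, ["no", "no"]))  # -> "Reset your password via the account page."
```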
31
Q
Which of the following best describes feature engineering?
A. Transforming or manipulating features to enhance the performance of the model.
B. Consulting with subject matter experts about features to include in the training data.
C. Anonymizing raw data so personal information is not included in the features for the model.
D. Annotating the raw data to create features with categorical labels that align with internal terminology.

A
The correct answer is A. While the other options are all actions that are performed in the AI development life cycle, feature engineering refers specifically to the transformation of “features” used as inputs for the model.
Body of Knowledge Domain V, Competency C
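A minimal sketch of feature engineering as described in option A, assuming pandas and NumPy; the raw columns and derived features are invented examples:

```python
# Transform raw inputs into model-friendly features: log-scale a skewed
# amount and derive an hour-of-day feature from a timestamp.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "amount": [12.0, 250.0, 3100.0],
    "timestamp": pd.to_datetime(
        ["2024-01-01 09:15", "2024-01-01 23:50", "2024-01-02 03:05"]),
})
df["log_amount"] = np.log1p(df["amount"])  # tame heavy-tailed values
df["hour"] = df["timestamp"].dt.hour       # capture time-of-day behavior
print(df[["log_amount", "hour"]])
```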
32
Q
A company uses automated decision-making via an AI system to evaluate and subsequently accept or deny loan applications from potential customers. A customer (data subject) who is denied might use the process of redress because of the denial. Which of the following is a component of the right to redress?
A. Review the privacy notice.
B. Register a formal complaint.
C. Sign a contract with the company.
D. Provide consent for data processing.

A
The correct answer is B. The data subject would use the process of redress to make a formal complaint or request a review of the automated decision.
Body of Knowledge Domain III, Competency B
33
Q
Which of the following is an important reason to continuously improve and maintain deployed models by tuning and retraining them with new data and human feedback?
A. To comply with various legal and regulatory requirements.
B. To maintain the system’s relevance and effectiveness over time.
C. To reduce the initial cost of training the model by adding new data iteratively.
D. To ensure perfect accuracy and eliminate any possibility of errors in the model.

A
The correct answer is B. This option best captures one of the key objectives of continuously improving and maintaining models. Other reasons include adapting the model to evolving data and addressing potential bias. No model can achieve perfect accuracy, and continuous improvement aims to minimize errors, not eliminate them entirely. While continuous training can improve model performance over time, it is not primarily a cost-saving measure. Laws and regulations might necessitate responsible AI practices that are aided by training on new data, but the concept of continuous improvement focuses more on enhancing the model’s real-world effectiveness and fairness.
Body of Knowledge Domain VI, Competency F
34
Q
Olive is on a team of governance, risk and compliance professionals building out an AI policy for her company, which sells customizable baby strollers. She has been assigned to review ISO/IEC Guide 51, “Safety aspects — Guidelines for their inclusion in standards,” to incorporate areas of safety for its main product line. Which item below is NOT one of the four areas of safety to consider?
A. Basic safety.
B. Group safety.
C. Product safety.
D. Development safety.

A
The correct answer is D. The four safety standards set forth in ISO/IEC Guide 51 are: basic safety, group safety, product safety and standards containing safety aspects. Development safety is not one of the four standards.
Body of Knowledge Domain IV, Competency C
35
Q
Olive presents her work to the GRC AI policy group and receives positive feedback from the team. The group asks Olive to build out a questionnaire for departments with potential AI projects using the ISO/IEC Guide 51 questions around “the purpose of the standard.” With that as a starting point, which question below should be removed from the survey?
A. Will AI impact any safety aspects of your project?
B. Will large AI-processed datasets be used as part of the project?
C. Will AI be used as a basis for conformity assessments as part of the project?
D. Will AI be used during any of the phases of product testing or model building?

A
The correct answer is B. The other options are all questions that should be asked to determine how the use of AI could introduce or increase existing risk to safety. While understanding what datasets may be used is important for privacy, it is not part of the ISO/IEC Guide 51 safety aspects.
Body of Knowledge Domain IV, Competency C
36
Q
The Turing test was introduced by mathematician and cryptographer Alan Turing in 1950. What did the test originally attempt to identify?
A. A human can often demonstrate unintelligent behavior.
B. A machine can be built that is as intelligent as a human.
C. A machine can demonstrate behavior indistinguishable from a human.
D. A machine can be built that is able to converse intelligently with a human.

A
The correct answer is C. When Turing posed the question, “Can machines think?” he formulated a situation in which a human evaluator would try to distinguish between a computer and a human text response.
Body of Knowledge Domain I, Competency A
37
Q
A newly developed AI system in your organization is almost ready to deploy. The engineers who collaborated on the project are the most appropriate personnel to ensure which of the following is in place before the system is deployed?
A. A change management plan to support widespread internal adoption.
B. A new company policy to address AI developer, deployer and user risks.
C. A method for continuous monitoring for issues that affect performance.
D. A set of documented roles and responsibilities for an AI governance program.

A
The correct answer is C. This is the only option that describes one of the four key steps in the implementation phase: monitoring. The prompt states the actors involved are engineers who worked directly on the project. These engineers can be assumed to have the technical and project knowledge necessary to establish continuous monitoring for deviations in accuracy or irregular decisions made by the model. The other options listed should also be in place before the system is deployed but are best addressed by the chief privacy officer, AI governance committee, office for responsible AI, ethics board or other steering groups.
Body of Knowledge Domain V, Competency D
38
Q
Morgan has just started their position as an AI governance lead for a pharmaceutical company, Pharma Co. They have been tasked with ensuring AI governance principles are integrated across the company. What is one of the principles that should be applied consistently at Pharma Co.?
A. Adopt a dissenting viewpoint.
B. Select a team that is process focused.
C. Ensure planning and design is consensus-driven.
D. Prioritize the financial considerations of the program.

A
The correct answer is C. Core objectives to ensure AI governance principles apply an integrated focus include adopting a pro-innovation mindset, ensuring the AI ethics framework is law, remaining industry and technology agnostic, focusing the team on outcomes and ensuring there is consensus in planning and design. Financial considerations are important, but they should not be the primary consideration. Adopting a dissenting viewpoint will likely only silo AI governance rather than integrate it into the company. Finally, while the team can follow a process, it should focus on the outcomes. There may be many pathways to integrate AI governance into the company to achieve successful outcomes.
Body of Knowledge Domain VI, Competency B
39
Q
WonderTemp, a temporary staffing company, terminated most of its employees after implementing a virtual assistant to help write emails, policies, marketing materials and responses to client questions. Three months later, WonderTemp learns that four of its biggest clients are not renewing their contracts because the work product from WonderTemp is full of inaccuracies. Based on these facts, what types of harm has WonderTemp suffered because of its reliance on the virtual assistant?
A. Legal and regulatory harm.
B. Individual and societal harm.
C. Economic and reputational harm.
D. Disinformation and reportable harm.

A
The correct answer is C. The company suffers economic harm when the four clients choose not to renew, and it suffers reputational harm because its clients found its work to be full of inaccuracies. Legal and regulatory harm does not apply here, as WonderTemp has not been sued, there are no assertions that the company violated laws or regulations, and it is not in a highly regulated industry. Societal harm is also incorrect, as this type of harm is not experienced by WonderTemp. Disinformation is incorrect, as disinformation is false information that is deliberately intended to mislead — intentionally misstating the facts — and that is not what is happening here.
Body of Knowledge Domain II, Competency A
40
Q
You work for a large online retailer, and the engineering team comes to you with a proposal to implement an AI system for matching customers with products they might want to buy. For the next two items, identify the type of use case described in the situation.
Use Case 1: A customer inputs a picture of the product they are looking for and receives a link to the page where they can purchase the product.
A. Detection.
B. Recognition.
C. Optimization.
D. Personalization.
Use Case 2: A marketer predicts and displays items a known customer might want to buy on the landing page based on the customer’s profile.
A. Detection.
B. Recognition.
C. Optimization.
D. Personalization.

A
Use Case 1: The correct answer is B. The system needs to recognize and identify the item shown in the customer’s picture to match it with products in the company catalog. Recognition models are also used for tasks like facial recognition, identification of defects in manufacturing, and deepfake detection.
Body of Knowledge Domain I, Competency A
Use Case 2: The correct answer is D. When a company is developing a customer profile, it is using information about the customer’s previous behavior to predict future purchasing behavior. Personalization models aim to create a unique experience based on the needs and preferences of each customer.
Body of Knowledge Domain I, Competency A
41
Q
How do you ensure your AI system does not develop bias over time?
A. You train a specialized customer service representative to handle any complaints about bias.
B. You implement an automated test suite to check for bias and monitor the results continuously.
C. You provide users with a survey every quarter asking if the respondents have observed any bias.
D. You personally test the system every week to ensure your recommendations remain the same or improve.

A
The correct answer is B. Automated testing is the only method that ensures full coverage of the user base. Not all users respond to surveys or contact customer service. Checking the system yourself does not cover all corner cases.
Body of Knowledge Domain VI, Competency F
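A minimal sketch of an automated bias check that could run in such a test suite; the group data is stubbed, and the 0.8 threshold mirrors the common "four-fifths" heuristic, used here only as an illustrative assumption:

```python
# Compare positive-outcome rates across two groups and fail the suite
# if the disparity exceeds the configured threshold.
def disparate_impact_ratio(outcomes_a, outcomes_b):
    rate_a = sum(outcomes_a) / len(outcomes_a)
    rate_b = sum(outcomes_b) / len(outcomes_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

def test_no_group_bias():
    group_a = [1, 1, 0, 1, 0, 1]  # model decisions for group A (stub data)
    group_b = [1, 0, 1, 0, 1]     # model decisions for group B (stub data)
    assert disparate_impact_ratio(group_a, group_b) >= 0.8

test_no_group_bias()  # would run continuously in CI against live model output
```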
42
Q
The recruiting department for Company X wants to use AI to help it identify the best external candidates and fast-track the hiring process for open positions. It trains the AI program using the resumes of top-performing software engineers across the company. The AI determines that men between the ages of 25 and 35 are the best candidates. This is an example of what type of bias in the AI system?
A. Sampling bias.
B. Temporal bias.
C. Projection bias.
D. Confirmation bias.

A
The correct answer is A. The data is skewed toward a subset of individuals due to the limited group selected (internal employees) to train the program — the sample. The sample data should be broader than just those individuals currently employed by the company. The larger the sample, the more diverse the results will be. Temporal bias does not apply because the biased outcome is not due to unbalanced data sampling over time. Confirmation bias does not apply because the recruiters did not seek out information to confirm existing beliefs, but trained the AI using performance statistics. Projection bias does not apply, as this type of bias is based on assuming others share our preferences, beliefs and attitudes, and there is no indication that this is the case.
Body of Knowledge Domain II, Competency B
43
Q
Which of the following is the best reason why it is important to conduct red team reviews of models before deployment?
A. To uncover potential biases in the models before they are deployed to external users.
B. To eliminate all flaws and weaknesses in the model before they impact real-world users.
C. To avoid the need to conduct formal risk assessments for various models before deployment.
D. To improve traditional security testing methods that are not effective when testing AI models.

A
The correct answer is A. This option effectively captures a key purpose of red teaming, i.e., proactively identifying biases. B is incorrect because red teaming cannot guarantee that a model will perform perfectly, as real-world deployment can present new flaws or issues. C is incorrect because red teaming should complement, not replace, formal risk assessments. D is incorrect because red teaming should complement, not replace, traditional security testing methods, which remain important for broader system security.
Body of Knowledge Domain VI, Competency F
44
Q
What is the role of U.S. federal agencies in regulating and overseeing the development and use of AI systems?
A. To discourage the development and adoption of AI systems, including private use.
B. To regulate the development and use of AI systems, including various liability issues.
C. To reduce the legal risks and costs associated with AI systems, including copyright issues.
D. To change the accountability of parties who cause damage with AI systems, including social media companies.

A
The correct answer is B. The role of U.S. federal agencies is to regulate and oversee the development and use of AI systems and to address the liability issues and challenges arising from AI systems. U.S. federal agencies have been involved in various initiatives and activities related to AI, such as issuing guidance, standards and best practices; conducting research and analysis; providing funding and support; and enforcing laws and regulations. For example, the National Artificial Intelligence Initiative Office, created in January 2021, is responsible for coordinating and advancing the federal government’s efforts and investments in AI research, development, innovation and education.
Body of Knowledge Domain III, Competency C
45
Q
Which of the following is an example of societal harm in AI?
A. Reputational harm to an organization.
B. Housing discrimination against a family.
C. Spread of disinformation over the internet.
D. Insurance discrimination against a demographic.

A
The correct answer is C. The spread of disinformation causes harm to a society by making it difficult or impossible for individuals to verify whether the information they are accessing is accurate. This undermines individuals’ faith in their societal structure, causing social polarization, unrest and mistrust in the decision-making process of policymakers. The internet has made the spread of disinformation a much more rapid process, complicating these issues further. Reputational harm to an organization is an example of organizational harm; housing discrimination is an example of individual harm; and insurance discrimination is an example of AI bias that causes individual harm.
Body of Knowledge Domain II, Competency A
46
Q
AbleSoft developed an AI-based screening system to identify potential insider trading. As part of adopting Singapore’s Model AI Governance Framework, which of these steps did AbleSoft take?
A. Contracted with outside experts to audit algorithms for bias risks.
B. Set up IT security policies limiting access to source code and data.
C. Joined industry groups defining standards for financial AI systems.
D. Established an approval process for AI systems based on their risk level.

A
The correct answer is D. The Singapore Framework has guidance on implementing policies and processes for accountable and ethical AI development aligned to risk levels.
Body of Knowledge Domain I, Competency C
47
Q
Starnet Ltd. is a multinational company that provides a virtual assistant. Patrick, a user from the EU, decides to generate an image of himself. To his surprise, the assistant generates an image that is clearly derived from his private social media profile. Patrick believes that Starnet Ltd. has performed untargeted scraping of the internet to obtain facial images for the dataset used by its LLM. To which authority should Patrick report this incident, and, if Starnet Ltd. is liable under the EU AI Act, what would be the threshold of the penalty to be imposed?
A. The incident should be reported to the EDPS. The penalty would be a fine of up to 15 million euros or three percent of the total worldwide turnover.
B. The incident should be reported to the EU AI office. The penalty would be a fine of up to 15 million euros or three percent of the total worldwide turnover.
C. The incident should be reported to the data protection regulator. The penalty would be a fine of up to 7.5 million euros or 1.5 percent of the total worldwide turnover.
D. The incident should be reported to the designated national authority. The penalty would be a fine of up to 35 million euros or seven percent of the total worldwide turnover.

A
The correct answer is D. According to the latest release by the European Commission on the EU AI Act, Member States must designate a national authority to serve as a point of contact for the complaints of citizens. Furthermore, the use of AI for untargeted scraping of the internet for facial images to build up or expand databases is classified as a prohibited use case of AI and, as such, any AI system used for this purpose will be banned. Therefore, if Starnet Ltd. has used an AI system for this purpose, Starnet Ltd., as a multinational company, would face a penalty threshold of a fine of up to 35 million euros or seven percent of the total worldwide turnover.
Body of Knowledge Domain IV, Competency A
48
Q
Your company is developing an AI system for global markets. This system involves personal data processing and some decision-making that may affect the rights of individuals. Part of your role as an AI governance professional is to ensure that the AI system complies with various international regulatory requirements. Which of the following approaches best aligns with a comprehensive understanding of and compliance with AI regulatory requirements in this situation?
A. Focus on the EU AI Act, as it is the most comprehensive regulation, since compliance with it will ensure compliance with all other international regulations that apply to AI.
B. Develop a compliance strategy based on the strictest requirements in various regulations, including the EU AI Act, GDPR and HIPAA, then harmonize it into a unified compliance framework.
C. Consider that if the AI system complies with local regulations of the country where the company is based, it is not necessary to comply with international regulations, as AI is adaptable globally.
D. Adopt a region-specific compliance strategy where the AI is modified to meet regulatory requirements of each region independently, disregarding commonalities or overlaps in international regulations.

A
The correct answer is B. This option represents a balanced and comprehensive approach to AI regulatory compliance by acknowledging the need to understand and integrate multiple international requirements, ensuring adherence to the highest standards of AI compliance, data privacy and other regulatory considerations.
Body of Knowledge Domain VI, Competency C
49
Q
As technology continues to advance, companies, scientists and technologists are adopting a human-centric approach to developing AI. What specifically characterizes an AI system as “human-centric”?
A. It prioritizes technical advancement without considering human input, needs or experiences.
B. It is designed with a focus on human input, providing an approach that addresses human needs and experiences.
C. It disregards technological advancements and societal technological needs and relies entirely on human input.
D. It is developed internally by companies without review of existing technologies and without collaboration with external technologists.

A
The correct answer is B. Human-centric AI is AI designed to put humans before the machine by acknowledging human abilities and ingenuity. HCAI is AI that enhances and augments human abilities rather than displacing them. HCAI aims to preserve human control to ensure that AI meets human needs. “Human centeredness asserts firstly, that we must always put people before machines, however complex or elegant that machine might be, and, secondly, it marvels and delights at the ability and ingenuity of human beings. In the Human Centered System, there exists a symbiotic relation between the human and the machine, in which the human being would handle the qualitative subjective judgements and the machine the quantitative elements. It involves a radical redesign of the interface technologies and at a philosophical level, the objective is to provide tools (in the Heidegger sense) which would support human skill and ingenuity rather than machines which would objectivize that knowledge.” Mike Cooley, On Human-Machine Symbiosis, 2008.
Body of Knowledge Domain II, Competency B
50
Q
A financial institution aims to develop a model for detecting fraudulent credit card transactions. It possesses a history of transaction data in which instances of fraud and non-fraud are clearly labeled and explicitly identified. What is the appropriate machine learning technique the financial institution should adopt?
A. Supervised.
B. Unsupervised.
C. Reinforcement.
D. Semi-supervised.

A
The correct answer is A. In this situation, where the financial institution has a labeled dataset with instances of fraud and non-fraud clearly identified, the appropriate machine learning technique is supervised learning. In supervised learning, the model is trained on labeled data, and it learns to make predictions or classifications based on the provided labels. The goal is to identify patterns in the labeled data that can be used to accurately predict the labels for new, unseen instances.
Body of Knowledge Domain I, Competency B
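A minimal sketch of the supervised setup, assuming scikit-learn; the transaction features and fraud labels are synthetic stand-ins for the institution's labeled history:

```python
# Supervised learning: the model is fit on explicitly labeled examples
# (1 = fraud, 0 = non-fraud) and evaluated on held-out labeled data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 4))                           # transaction features
y = (X[:, 0] + rng.normal(size=1000) > 1.5).astype(int)  # fraud labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
clf = LogisticRegression().fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```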
51
Q
BankC has created an artificial intelligence algorithm that filters loan applications based on criteria such as current zip code, applicant job industry and name, in addition to the expected criteria. One staff person noticed that several loan applicants who passed through the filter were white, as compared with a distribution that was representative of the community’s distribution prior to use of the algorithm. If BankC proceeds with using this algorithm in the U.S., which of the below federal laws must its compliance department be aware of (in addition to others not listed)?
A. The Dodd-Frank Act.
B. Fair Credit Reporting Act.
C. Equal Credit Opportunity Act.
D. Americans with Disabilities Act.

A
The correct answer is C. The Equal Credit Opportunity Act prohibits discrimination against credit applicants based on race, color, national origin, sex, marital status, age, existing income from a public assistance program, or because an applicant has, in good faith, exercised any right under the Consumer Credit Protection Act. Another relevant law, which is broader, is section 5 of the Federal Trade Commission Act, which provides that “unfair or deceptive acts or practices in or affecting commerce … are … declared unlawful.” 15 U.S.C. Sec. 45(a)(1).
Body of Knowledge Domain III, Competency A
52
Q
When initiating an AI governance program, which of the following statements best reflects a strategic approach to effectively engaging leadership?
A. Identify leaders already using AI who see opportunities for differentiation through improved governance.
B. Postpone leadership engagement until the AI program is fully implemented to avoid an overly complicated journey.
C. Prioritize leaders with extensive business experience to help minimize discussions about responsible AI and simplify your decision-making process.
D. Get buy-in from leaders whose primary interest centers around the ability of AI to allow the organization to get ahead of its competition.

A
The correct answer is A. In the context of AI governance, it is crucial to engage leaders who not only have experience with AI, but also recognize the potential for differentiation through responsible practices. This approach ensures leadership alignment with the AI governance program goals and helps in understanding the value it brings.
Body of Knowledge Domain VI, Competency C
53
Q
The EU AI Act and the NIST AI Risk Management Framework both pursue the dual purposes of promoting the uptake of AI and addressing the risks associated with its use. Which of the following is one of the main differences between the EU AI Act and the NIST AI RMF?
A. The EU AI Act is underpinned by a risk-based approach, while the NIST AI RMF is a horizontal regulation.
B. The EU AI Act is applicable to EU-based companies only, while the NIST AI RMF has an extraterritorial effect.
C. The EU AI Act is relevant to large- and medium-sized companies, while the NIST AI RMF applies to companies of all sizes.
D. The EU AI Act imposes mandatory compliance requirements, while the NIST AI RMF is a voluntary AI governance framework.

A
The correct answer is D. The NIST AI Risk Management Framework is intended for voluntary use and to improve the ability to incorporate trustworthiness considerations into the design, development, use and evaluation of AI products, services and systems. The EU AI Act imposes comprehensive mandatory compliance requirements on high-risk AI with respect to, among other things, risk mitigation, data governance, detailed documentation, human oversight, transparency, robustness, accuracy and cybersecurity. The EU AI Act requires mandatory conformity assessments and fundamental rights impact assessments. It also provides a right for citizens to launch complaints about AI systems and receive explanations about decisions based on high-risk AI systems that impact their rights.
Body of Knowledge Domain IV, Competency A
76. Robotic process automation is an emerging type of artificial intelligence. Which of the following does NOT describe RPA?
A. Machines that mimic human actions on digital systems.
B. Software robots that are used to automate rule-based tasks.
C. Machines designed to perform tasks relying on human intervention.
D. Software robots that are augmented by other types of artificial intelligence.
76. The correct answer is C. The intent of robotic process automation is to “mimic” human behavior by leveraging intelligent systems. The goal is to decrease human intervention by automating rule-based tasks, increasing the efficiency of business processes rather than relying on humans to complete them.
Body of Knowledge Domain I, Competency B
77. A retailer plans to implement facial recognition in physical stores. Which of the following measures should the retailer take to comply with regulatory requirements?
A. Disclose the practice of facial recognition to consumers.
B. Train the employees not to reveal the use of facial recognition technology.
C. Ensure as many images as possible are placed into the facial-matching system.
D. Consult with a trusted privacy professional network to source a facial-recognition vendor.
77. The correct answer is A. Transparency is a key value in privacy and responsible AI. Failure to disclose the practice, and its risks of harm to consumers, can also lead to violations of section 5 of the FTC Act, existing privacy laws such as the GDPR and, potentially, emerging laws. Companies that use facial recognition in physical stores should, at a minimum, consider disclosing the practice to consumers.
Body of Knowledge Domain III, Competency A
78. Company N works in the field of disability benefits and wants to use AI to determine who is most likely to benefit from its services by filtering applications. Company N’s privacy officer is concerned about unintended serious impacts of filtering applications. What is a negative impact the privacy officer should consider when analyzing the proposed project?
A. Impacting an employee’s job.
B. Making individuals uncomfortable.
C. Discriminating against a particular group.
D. Creating a false sense of security for applicants.
78. The correct answer is C. The privacy officer should examine the project carefully to determine if there will be an unintended result of discrimination against a particular group of individuals who may share a trait that will accidentally cause the AI to filter out their applications for disability benefits, even though they would normally qualify.
Body of Knowledge Domain II, Competency A
79. A new generative AI (GenAI) service will be launched in China and may be used by minors. Developers will manually tag the data used to train and develop the AI models. Before the launch of the GenAI service, which of the following actions will the GenAI service NOT need to take to be compliant with China’s Interim Measures for the Management of Generative Artificial Intelligence?
A. Carry out assessments to assess the quality of the tagging.
B. Provide training to personnel on how to properly tag the data.
C. Provide notices to parents of minors identifying risks of addiction.
D. Employ measures to prevent minors from being addicted to the GenAI service.
79. The correct answer is C. The Interim Measures require providers to take effective measures to prevent minors from becoming addicted to generative AI services; no notice to parents of minors is required.
Body of Knowledge Domain IV, Competency B
80. Kim is expanding their governance, risk and compliance role at their organization to include the NIST AI Risk Management Framework. Kim is using the four core functions to lay out their project to present to management. Which of the following is the step for the measure function?
A. Risk management is cultivated and present.
B. Identified risks are assessed, analyzed or tracked.
C. Risk is prioritized and acted upon based on projected impact.
D. Context is recognized and risks related to context are identified.
80. The correct answer is B. The measure function involves quantifying risks and their related impacts, including tracking trustworthiness and social impacts. Through this stage, organizations can balance any trade-offs necessary and ensure the risks are mitigated where possible.
Body of Knowledge Domain IV, Competency C
81. A meat processing facility and warehouse in the U.S. is planning to use automated, driverless forklifts to supplement its workforce to increase efficiencies and reduce operating costs. Which agency’s guidelines and regulations should the facility review and analyze to maximize employee safety in connection with implementing this program?
A. The Food and Drug Administration.
B. The Consumer Product Safety Commission.
C. The Equal Employment Opportunity Commission.
D. The Occupational Safety and Health Administration.
81. The correct answer is D. The Occupational Safety and Health Administration establishes the relevant regulations and guidelines for safety and acceptable potential hazards in the workplace. The other agencies have also issued guidelines relevant to developing and implementing AI systems, but those guidelines and regulations are not focused on employee safety in the workplace.
Body of Knowledge Domain III, Competency A
82. Which of the following outline some of the necessary components of creating an AI governance structure?
A. (1) Provide the IT organization with full AI governance responsibility; (2) Document AI governance decisions; (3) Identify an executive champion.
B. (1) Federate all governance responsibility throughout the organization; (2) Document AI governance decisions; (3) Identify an executive champion.
C. (1) Provide the compliance organization with full AI governance responsibility; (2) Document AI governance decisions; (3) Identify an executive champion.
D. (1) Determine whether there is an AI governance structure; (2) Identify and document who will maintain and implement the AI governance structure; (3) Identify an executive champion.
82. The correct answer is D. These components provide a foundational approach to establishing an effective AI governance structure. Neither a specific organizational unit, such as the compliance department or IT, nor a federated approach to governance is sufficient to support the complex organizational and governance needs of an AI system.
Body of Knowledge Domain V, Competency A
83. SOS Soft, an EU-based company, has created an AI system for classifying calls made to emergency services. The user only needs to select which emergency service they require before the AI system routes them to the appropriate service nearest to their location. As the intended purpose of this system is only to route the caller to the appropriate emergency service location, recordings of emergency calls were not collected to train the system. During the development phase, it was decided that synthetic data should be used in the validation dataset to detect negative bias and correct it. Prior to the system’s release, it was tested during a two-day trial, with the permission of a local municipality, using real citizens. Would the EU AI Act apply to SOS Soft, and, if yes, would the company be in compliance with the procedures for testing AI systems under the EU AI Act?
A. The EU AI Act will apply to SOS Soft, and the company is in full compliance with the procedures for testing AI systems.
B. The EU AI Act will apply to SOS Soft, and the company is not in compliance with the procedures for testing AI systems under the EU AI Act.
C. SOS Soft would not be regulated under the EU AI Act because the system was tested with the permission of the municipality, meaning it was trialed in a simulated environment.
D. The EU AI Act will not apply to SOS Soft, as it was trialed in a simulated environment; however, if this system is released on the market, it would not be in compliance with the procedures for testing AI systems under the EU AI Act.
83. The correct answer is A. Per the EU AI Act, the regulation shall not apply to research, testing and development activities, but testing in real-world conditions is NOT included in this exemption. The definitions included in the Act refer to real-world conditions outside of a laboratory or otherwise simulated environment. In this case, since the testing took place in a manner where real citizens made use of the AI system for the purpose of contacting emergency services, the testing is “real-world testing,” so the EU AI Act does apply to SOS Soft. Furthermore, under the data governance requirements of Article 10(5), special categories of personal data may be processed for bias detection and correction only where, per clause (a), that detection and correction cannot be effectively fulfilled by processing other data, including synthetic data; SOS Soft’s use of synthetic data in the validation dataset to detect and correct bias is therefore consistent with the Act.
Body of Knowledge Domain IV, Competency A
84. What is the purpose of a risk assessment in AI governance?
A. Test and validate the AI system.
B. Establish priorities and allocate resources.
C. Compile feedback from impacted AI actors.
D. Establish appropriate roles and responsibilities.
84. The correct answer is B. The purpose of a risk assessment is to identify the areas of greatest risk in an AI system so risk mitigation resources can be allocated appropriately. Risk assessments can also be used to generate metrics for tracking and managing priorities throughout the model development life cycle.
Body of Knowledge Domain VI, Competency D
85. The use of deidentification or anonymization within AI systems will mitigate or remove risks to individuals under the GDPR. However, what is a challenge likely created by using these methods?
A. They reduce the utility of data.
B. They produce security concerns.
C. They restrict access to information.
D. They generate a lack of transparency.
85. The correct answer is A. Deidentifying or anonymizing data reduces its utility within AI systems and is difficult to do with large datasets. Using these methods will not create additional concerns related to security and transparency or restrict access to the data.
Body of Knowledge Domain III, Competency B
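To make the utility trade-off concrete, here is a minimal sketch (assuming pandas; the column names, age bands and data are hypothetical) that generalizes quasi-identifiers. The coarsened values are harder to re-identify, but fine-grained patterns a model could otherwise have learned are discarded.

import pandas as pd

df = pd.DataFrame({
    "age": [23, 37, 41, 58],
    "zip": ["10001", "10002", "94103", "94105"],
    "churned": [0, 1, 0, 1],
})

anonymized = pd.DataFrame({
    # Generalize exact age into coarse bands: harder to re-identify,
    # but fine-grained age effects are no longer learnable.
    "age_band": pd.cut(df["age"], bins=[0, 30, 50, 120],
                       labels=["<30", "30-49", "50+"]),
    # Truncate ZIP codes to a 3-digit prefix for the same reason.
    "zip3": df["zip"].str[:3],
    "churned": df["churned"],
})
print(anonymized)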
86. John is the AI governance lead for his EU-based company, XYZ. XYZ provides users with a virtual assistant. When exiting the system, users receive a notice informing them that they have interacted with an AI system and that its responses may include hallucinations that should be reviewed by the user. John approves the notice and the way it is presented to users. Based on the information provided, why is XYZ not in compliance with the transparency requirements set out by the EU AI Act?
A. The notice is not GDPR compliant.
B. The notice was not displayed in a timely manner.
C. The notice is not accessible to vulnerable persons.
D. The notice is not adequate in terms of the information provided.
86. The correct answer is B. Per the transparency obligations set forth in the EU AI Act, the notice that informs users they are using an AI system shall be provided, at the latest, at the time of first interaction with or exposure to the AI system in question. In this case, the notice was provided only when users exited the system, rather than being displayed at the time of first interaction or exposure, before they entered a prompt.
Body of Knowledge Domain IV, Competency A
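As a purely hypothetical illustration of the timing point (the function and names below are illustrative, not taken from the Act or any real product), a compliant session would surface the disclosure before accepting the first prompt, not at exit:

AI_DISCLOSURE = (
    "You are interacting with an AI system. Responses may contain "
    "hallucinations and should be reviewed before you rely on them."
)

def run_session(generate_reply):
    # Display the notice at first exposure, before any prompt is accepted.
    print(AI_DISCLOSURE)
    while True:
        prompt = input("> ")
        if prompt.strip().lower() == "exit":
            break
        print(generate_reply(prompt))

# Usage: run_session(lambda p: f"(model reply to: {p})")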
87. Which of the following most directly led to renewed interest and funding in AI research in the 1980s?
A. Early successes of neural networks.
B. Academic conferences bringing together AI researchers.
C. The development of expert systems and natural language processing.
D. Advances in computer processing power and availability of large datasets.
87. The correct answer is C. The commercial success of expert systems, together with progress in natural language processing, most directly revived interest and funding in AI research during the 1980s; advances in processing power and the availability of large datasets drove the later resurgence of the 2000s and 2010s.
Body of Knowledge Domain I, Competency D
88. DPIAs and AI conformity assessments are both important tools in evaluating privacy risk. Which of the following is a key component of both?
A. A targeted mitigation plan.
B. An effective consent model.
C. A strategy to anonymize data.
D. A policy for personal data use.
88. The correct answer is A. Both assessments should involve an assessment of risks, as well as a plan to mitigate such risks. Consent and anonymization may be needed depending on the process or system involved. While personal data is a component of a DPIA, it may or may not be for an AI conformity assessment.
Body of Knowledge Domain III, Competency B
89. How can human oversight influence the reduction of bias or discrimination in AI/ML models (algorithmic bias) in the initial stages of selecting data and defining goals?
A. Using only data scientists to handle data selection in AI/ML models ensures success in meeting set goals.
B. Working with a variety of social experts ensures inclusive and unbiased data during the AI/ML model design.
C. Having software engineers manage AI/ML bias reduction without social science input provides clarity in data selection.
D. Allowing government regulators to provide the sole definition of AI/ML model data will remove all bias and discrimination.
89. The correct answer is B. Collaboration with social experts, such as psychologists, linguists and sociologists, provides diverse insights into human behavior and societal dynamics essential for identifying potential biases that may not be obvious to data scientists or software engineers.
Body of Knowledge Domain I, Competency A
90. What is the purpose of liability reform regarding liability for AI systems?
A. To discourage the development and adoption of AI systems.
B. To increase the legal risks and costs associated with AI systems.
C. To reduce the legal certainty for developers and users of AI systems.
D. To change the rules governing accountability for damages caused by AI systems.
90. The correct answer is D. Liability reform is the process of changing the legal rules and principles that govern the responsibility and accountability of parties who cause or contribute to damage or harm through AI systems.
Body of Knowledge Domain III, Competency C
91. When building an AI model, which of the following is a best practice for feature selection?
A. Raw data is preferred to processed data.
B. The more features used, the better the model performs.
C. Personal information should never be used as a feature.
D. The same features must be used for training and testing.
91. The correct answer is D. Training and testing data must use the same features to ensure accuracy. Adding model features involves performance trade-offs, and adding too many features can cause overfitting to the training data. Raw data needs to be processed before being added to the model, at the very least for data cleaning purposes. Finally, personal information can be used as a feature, but only where necessary.
Body of Knowledge Domain V, Competency C
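A minimal sketch of this practice, assuming scikit-learn and pandas (the feature names and data are invented), is to fix one feature list and apply it to both sides of the train/test split:

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# One agreed feature list, used for both training and testing.
FEATURES = ["tenure_months", "monthly_spend", "support_tickets"]

df = pd.DataFrame({
    "tenure_months": [3, 24, 12, 48, 6, 36],
    "monthly_spend": [20.0, 55.5, 30.0, 80.0, 25.0, 60.0],
    "support_tickets": [4, 0, 2, 1, 5, 0],
    "churned": [1, 0, 1, 0, 1, 0],
})

# Split first, then select the same columns on both sides of the split.
train, test = train_test_split(df, test_size=0.33, random_state=0,
                               stratify=df["churned"])
model = LogisticRegression().fit(train[FEATURES], train["churned"])
print(model.score(test[FEATURES], test["churned"]))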
92. Company XBR is developing an AI system for a virtual assistant to be used in households. How would the development team incorporate the principles of human-centric design throughout the entire process, from conception to deployment?
A. Ignoring privacy concerns to maximize data collection for refining the AI algorithms.
B. Designing the virtual assistant to operate independently, without requiring user input.
C. Prioritizing user feedback and involving potential users in the early stages of development.
D. Focusing on technical advancements and automation to streamline the virtual assistant’s capabilities.
92. The correct answer is C. The development team should rely on users’ participation and feedback to incorporate the principles of human-centric design throughout the process, from development to deployment of the AI system. Human-centered AI learns from human input and collaboration, and continuously improves based on human feedback. Prioritizing user feedback and involving users in development is known as “human-centered design,” an approach to interactive systems development that aims to make systems usable and useful by focusing on the users and their needs and requirements, and by applying human factors/ergonomics and usability knowledge and techniques. This approach enhances effectiveness and efficiency; improves human well-being, user satisfaction, accessibility and sustainability; and counteracts possible adverse effects of use on human health, safety and performance. See ISO 9241-210:2019(E).
Body of Knowledge Domain II, Competency B
93. Which one of the following options contains elements that should be considered when building an AI incident response plan?
A. The security incident plan, which covers all necessary requirements, includes the required incident actions so the security team can oversee AI incident management.
B. The third-party tools that are integrated into your system should be identified because, in the case of an incident, you might need to notify those third parties.
C. An automated tool should be in place to guarantee the algorithm is shut down immediately, avoiding dependence on human action to avoid delays or human error.
D. The existing privacy breach response tools in your organization need to be reused, as the data privacy team should be the reporting body overseeing all data-related incidents.
93. The correct answer is B. To understand third-party risk in the organization, it is necessary to identify all third-party tools that are integrated into the system. If there is an incident, the organization might need to notify those third parties, as the incident might not impact only the tool owned by the organization.
Body of Knowledge Domain VI, Competency F
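One hedged sketch of the inventory piece of such a plan (the tool names, contacts and structure below are hypothetical) is to record each integrated third-party tool alongside an agreed incident-notification contact:

from dataclasses import dataclass

@dataclass
class ThirdPartyTool:
    name: str
    purpose: str
    vendor_contact: str  # incident-notification contact agreed in the contract

INTEGRATIONS = [
    ThirdPartyTool("VectorStoreCo", "embedding storage", "security@vectorstore.example"),
    ThirdPartyTool("LLMGatewayInc", "model hosting", "incidents@llmgateway.example"),
]

def notification_targets(affected_components: set) -> list:
    """Return vendor contacts for tools touched by the incident."""
    return [t.vendor_contact for t in INTEGRATIONS if t.name in affected_components]

# Usage: notification_targets({"LLMGatewayInc"}) -> ["incidents@llmgateway.example"]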
94. The HR department of the EU-based CompanyX purchases an AI-enabled tool to optimize efficiency and resource allocation. The tool collects personal data, including special categories of personal data under the GDPR, for an automated rating of workers’ performance and assignment of work. Given the context, the tool might be qualified as a high-risk AI system under the EU AI Act. Consequently, CompanyX must conduct tests against preliminarily defined metrics and probabilistic thresholds to identify risks and the most appropriate risk management measures. Since the tool will be used for automated decision-making affecting the privacy rights of employees, what must CompanyX do?
A. Carry out a conformity assessment procedure after working with a representative in the EU.
B. Draft the technical documentation and the required instructions for use once the tool is ready for use.
C. Perform a data protection impact assessment to determine the impact to personal data before launching the tool.
D. Inform appropriate national authorities of the Member States of all the risk mitigations taken as soon as the tool is in use.
94. The correct answer is C. CompanyX is an EU-based AI user and, therefore, it will need to comply with the requirements of the EU AI Act. The EU AI Act classifies AI systems used in employment and worker management as high risk and imposes rigorous obligations on the providers of high-risk AI systems. Users of high-risk AI tools must comply with less stringent requirements, such as using the product according to provider instructions; conducting impact assessments, including data protection impact assessments; monitoring the operation of the system; and notifying the provider or distributor of system malfunctions. Carrying out a data protection impact assessment in this context is a GDPR requirement, since the type of processing activity described creates a likelihood of a high-risk outcome to the rights and freedoms of the employees.
Body of Knowledge Domain IV, Competency A
95. The U.S. Federal Trade Commission is urging what tort reform initiative?
A. Encourage AI developers to limit contractual liability to AI users.
B. Protect trade secrets by maintaining confidentiality of AI algorithms.
C. Ensure robust, empirically sound, ethical, transparent and fair decisions.
D. Favor agency enforcement of AI safety violations over private right of action.
95. The correct answer is C. The FTC guidance on “Using Artificial Intelligence and Algorithms” urges businesses to be transparent with consumers; ensure decisions are fair, robust and empirically sound; and hold themselves accountable for compliance, ethics, fairness and non-discrimination.
Body of Knowledge Domain VII, Competency A
96. CompanyZ is a U.S.-based hospital system with a clinic serving a particular community affected by a chronic disease that is fatal in the long term. CompanyZ’s IT department wants to create an artificial intelligence tool that will ingest clinical data from electronic medical records, then analyze progression of the disease to anticipate a patient’s significant clinical decline before it becomes a medical emergency. The IT department has asked for the privacy officer’s help in determining how to make the tool compliant. Which law below must the privacy officer consider?
A. Colorado Privacy Act.
B. Gramm-Leach-Bliley Act.
C. Federal Trade Commission Health Breach Notification Rule.
D. Health Insurance Portability and Accountability Act of 1996.
96. The correct answer is D. The Health Insurance Portability and Accountability Act of 1996 requires that protected health information be protected and not disclosed without a patient’s consent or knowledge, except in certain limited circumstances. The privacy officer must determine how to ensure that the proposed project protects patient information, whether the project should allow for prior patient authorization of the use of artificial intelligence, and whether it must allow patients to decline use of their data in the project without affecting their ability to obtain care.
Body of Knowledge Domain III, Competency A
97. A consulting company will be using a new recruiting platform that will prescreen candidates using an automated employment decision tool (AEDT) as part of its hiring process. The tool will score candidates based on their applications submitted through the company’s website, and recruiters will see the top candidates for open job roles. The consulting company will be hiring candidates from New York. Which of the following actions is the consulting company NOT required to take to comply with New York City’s Local Law 144?
A. Publish the results of the bias audit on its public website.
B. Provide a transparent notice to employees or candidates about the use of the AEDT.
C. Conduct a bias audit within one year prior to using the tool to screen candidates for employment.
D. Notify the NYC Department of Consumer and Worker Protection of the results of any bias audit.
97. The correct answer is D. Notifying the Department of Consumer and Worker Protection of bias audit results is not a requirement of Local Law 144; the results of the bias audit must instead be published on the company’s public website.
Body of Knowledge Domain IV, Competency B
98. An organization wants to build a model to analyze content on social media by considering both textual content and associated images or videos. Which type of machine learning modeling technique is suitable in this scenario?
A. Regression.
B. Multi-modal.
C. Reinforcement.
D. Anomaly detection.
98. The correct answer is B. Multi-modal models are designed to process and integrate information from multiple modalities, making them well suited for tasks that involve diverse types of data, such as text, images and videos. In the given scenario, where the organization wants to analyze content on social media by developing a model that processes both textual content and associated images or videos, the resulting model would be regarded as multi-modal.
Body of Knowledge Domain I, Competency B
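A minimal sketch of the idea, assuming PyTorch (the embedding dimensions and class count are illustrative), fuses pre-computed text and image embeddings before a shared classification head:

import torch
import torch.nn as nn

class MultiModalClassifier(nn.Module):
    def __init__(self, text_dim=768, image_dim=512, hidden=256, n_classes=3):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, hidden)    # project text embedding
        self.image_proj = nn.Linear(image_dim, hidden)  # project image embedding
        self.head = nn.Linear(hidden * 2, n_classes)    # classify fused features

    def forward(self, text_emb, image_emb):
        fused = torch.cat([torch.relu(self.text_proj(text_emb)),
                           torch.relu(self.image_proj(image_emb))], dim=-1)
        return self.head(fused)

# Usage with pre-computed embeddings (e.g., from separate text and vision encoders):
model = MultiModalClassifier()
logits = model(torch.randn(4, 768), torch.randn(4, 512))
print(logits.shape)  # torch.Size([4, 3])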
99. The engineering team within your company has been working on an AI tool for the past several months. After completion of planning, design and development, what is the next step?
A. Perform a readiness assessment.
B. Define the business requirements.
C. Determine the governance structure.
D. Verify the data used to test the system.
99. The correct answer is A. Although the other options reference some part of the AI development life cycle, these steps should be completed before commencement of the implementation phase and during system planning, design and development. After the completion of these activities, a readiness assessment will determine whether the technology is ready to deploy.
Body of Knowledge Domain V, Competency D
100. Which of the following activities is a best practice method to identify and mitigate predictable risks of secondary or unintended uses of AI models?
A. Developing model cards for AI systems.
B. Implementing an AI governance strategy.
C. Conducting data protection impact assessments.
D. Engaging in bug bashing and red teaming exercises.
100. The correct answer is A. Model cards are short summary documents that explain the purpose of an AI model and provide information about the model’s development and performance, as well as additional details for model transparency purposes. These cards are considered a best-practice method to identify and mitigate predictable risks associated with any secondary or unintended AI model use.
Body of Knowledge Domain VI, Competency F
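As a hedged sketch (the fields follow the general model-card pattern but are not an official schema; the model name and values are invented), a card that records out-of-scope uses makes unintended-use risks explicit at documentation time:

from dataclasses import dataclass, field

@dataclass
class ModelCard:
    model_name: str
    intended_use: str
    out_of_scope_uses: list = field(default_factory=list)
    known_limitations: list = field(default_factory=list)
    performance_notes: str = ""

card = ModelCard(
    model_name="churn-predictor-v2",
    intended_use="Rank existing customers by churn likelihood for retention offers.",
    out_of_scope_uses=[
        "Credit or lending decisions",
        "Employment screening",
    ],
    known_limitations=["Trained on one region's data; may not generalize."],
)
print(card.out_of_scope_uses)  # flags secondary uses the model was never vetted for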