Multiple-choice questions (AI Regulatory Avalanche of Flashcards)

1
Q

Which of the following is a principle of Privacy by Design?
A) Privacy should be an afterthought in AI system development
B) AI systems should not require human intervention
C) Privacy should be embedded into the system by default
D) Privacy concerns should be addressed only when users complain

A

Correct Answer: C) Privacy should be embedded into the system by default

Explanation: Privacy by Design ensures that privacy safeguards are proactively built into AI systems as a core feature, rather than being added later.

2
Q

What defines a High-Risk AI System under the EU AI Act?
A) AI that has a significant impact on safety or fundamental rights
B) AI models trained using deep reinforcement learning
C) Open-source AI models available for general use
D) AI used in experimental academic research

A

A) AI that has a significant impact on safety or fundamental rights
Explanation: High-risk AI systems must comply with strict transparency and risk management requirements.

3
Q

How does the EU AI Act ensure transparency in AI decision-making?
A. By mandating that all AI-driven decisions be reviewed by humans
B. By requiring AI deployers to provide clear documentation on how AI systems reach decisions
C. By banning all AI models that are not explainable
D. By allowing AI systems to operate freely without disclosure requirements

A

Correct Answer: B. By requiring AI deployers to provide clear documentation on how AI systems reach decisions

Explanation: Transparency measures ensure that AI-driven decisions can be explained and challenged, especially in high-risk applications.

4
Q

Which metric is commonly used to measure the explainability of an AI model?
A) Accuracy
B) SHAP (Shapley Additive Explanations)
C) Precision
D) F1-score

A

B) SHAP (Shapley Additive Explanations)
Explanation: Unlike accuracy, precision, and F1-score, which measure predictive performance, SHAP quantifies each feature's contribution to a prediction, making model decisions explainable.
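The idea behind SHAP can be shown without the `shap` library: for a small model, Shapley values can be computed exactly by enumerating feature coalitions. A minimal sketch, where the linear "credit score" model, its weights, and the baseline are all illustrative:

```python
from itertools import combinations
from math import factorial

# Toy linear model: score = sum of weight * feature value (illustrative numbers).
WEIGHTS = {"income": 0.5, "debt": -0.3, "age": 0.1}
BASELINE = {"income": 40.0, "debt": 10.0, "age": 35.0}

def predict(x):
    return sum(WEIGHTS[f] * x[f] for f in WEIGHTS)

def shapley_values(x):
    """Exact Shapley values via brute-force coalition enumeration.
    Features absent from a coalition are held at their baseline value."""
    features = list(WEIGHTS)
    n = len(features)
    phi = {}
    for i in features:
        others = [f for f in features if f != i]
        total = 0.0
        for size in range(n):
            for coalition in combinations(others, size):
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = {f: x[f] if f in coalition or f == i else BASELINE[f] for f in features}
                without_i = {f: x[f] if f in coalition else BASELINE[f] for f in features}
                total += weight * (predict(with_i) - predict(without_i))
        phi[i] = total
    return phi

applicant = {"income": 60.0, "debt": 30.0, "age": 45.0}
phi = shapley_values(applicant)
# Efficiency property: contributions sum to prediction minus baseline prediction.
assert abs(sum(phi.values()) - (predict(applicant) - predict(BASELINE))) < 1e-9
```

For a linear model each feature's Shapley value reduces to weight times the feature's deviation from baseline; the final check confirms the contributions account exactly for the gap between the applicant's score and the baseline score.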

5
Q

According to EO 14110, AI developers must:
A) Share safety test results with the U.S. government
B) Obtain a federal AI safety license before deployment
C) Implement carbon neutrality standards for AI infrastructure
D) Disclose training datasets used in AI systems

A

A) Share safety test results with the U.S. government
Explanation: The order emphasizes AI risk management, requiring transparency in AI safety testing.

6
Q

Which of the following is not covered under copyright law?
A) Software source code
B) Facts and ideas
C) Literary works
D) Architectural designs

A

B) Facts and ideas
Explanation: Copyright law protects the expression of ideas, not the ideas themselves; facts and ideas are not copyrightable.

7
Q

What is the main function of adversarial testing in AI security?
A) To train AI models to resist attacks and manipulation
B) To improve AI-generated images
C) To increase AI system complexity
D) To validate AI model accuracy

A

Correct Answer: A) To train AI models to resist attacks and manipulation

Explanation: Adversarial testing evaluates AI robustness by simulating attacks, helping detect vulnerabilities before real-world deployment.

8
Q

Which data format is easiest to analyze using traditional SQL-based tools?
A) Structured data
B) Unstructured data
C) Semi-structured data
D) Streaming data

A

A) Structured data
Explanation: Structured data, stored in relational databases with tables and columns, is ideal for SQL-based analysis.
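This can be illustrated with Python's built-in `sqlite3` module; the table name and rows are invented for illustration:

```python
import sqlite3

# Structured data: rows with typed columns in a relational table,
# directly queryable with SQL (table and values are illustrative).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE audits (system TEXT, risk_level TEXT, passed INTEGER)")
conn.executemany(
    "INSERT INTO audits VALUES (?, ?, ?)",
    [("hiring-tool", "high", 1), ("chatbot", "limited", 1), ("biometric-id", "high", 0)],
)
rows = conn.execute(
    "SELECT risk_level, COUNT(*) FROM audits GROUP BY risk_level ORDER BY risk_level"
).fetchall()
print(rows)  # [('high', 2), ('limited', 1)]
```

Unstructured data (free text, images) has no such column schema, which is why it needs preprocessing before SQL-style analysis.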

9
Q

Which of the following is an essential aspect of AI governance?
A) Maximizing model accuracy
B) Ensuring AI compliance with legal and ethical guidelines
C) Selecting the most efficient deep learning model
D) Reducing data storage costs

A

B) Ensuring AI compliance with legal and ethical guidelines
Explanation: AI governance focuses on risk assessment, accountability, and regulatory compliance.

10
Q

Which AI paradigm best describes reinforcement learning?
A) AI learns from labeled data
B) AI extracts patterns from data without supervision
C) AI learns through rewards and penalties
D) AI generates data rather than classifies it

A

Correct Answer: C) AI learns through rewards and penalties

Explanation: Reinforcement learning is based on feedback loops, where AI takes actions and receives rewards or punishments, allowing it to optimize decision-making over time.
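The reward-and-penalty loop can be sketched in a few lines of tabular Q-learning; the corridor environment, rewards, and hyperparameters below are all illustrative:

```python
import random

# Minimal Q-learning sketch on a 5-state corridor: the agent starts at
# state 0 and is rewarded only for reaching state 4 (setup is illustrative).
N_STATES, ACTIONS = 5, (-1, +1)          # move left / move right
ALPHA, GAMMA, EPISODES = 0.5, 0.9, 500
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
rng = random.Random(0)

for _ in range(EPISODES):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit the best-known action, sometimes explore
        a = rng.choice(ACTIONS) if rng.random() < 0.1 else max(ACTIONS, key=lambda a: Q[(s, a)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else -0.01   # reward at the goal, small step penalty
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# After training, the greedy policy moves right in every non-terminal state.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
assert policy == [1, 1, 1, 1]
```

The agent is never shown the correct action; it discovers the rightward policy purely from accumulated rewards and penalties, which is the defining feedback loop of reinforcement learning.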

11
Q

How does AI explainability relate to compliance with both proprietary AI governance and the NIST AI RMF?
A) The NIST AI RMF emphasizes explainability, while proprietary AI models are often black-box systems
B) Explainability is only relevant for AI models used in the financial sector
C) The EU AI Act does not require AI explainability
D) Proprietary AI models are always fully transparent

A

A) The NIST AI RMF emphasizes explainability, while proprietary AI models are often black-box systems
Explanation: The NIST AI RMF promotes explainability as a best practice, whereas proprietary AI models often lack transparency, complicating risk assessment and compliance.

12
Q

What is a key governance challenge in auditing proprietary AI models?
A) The algorithms are too simple to require auditing
B) The model owners do not allow external audits
C) Proprietary models are always safe to deploy
D) Open-source models face stricter regulations

A

B) The model owners do not allow external audits
Explanation: Proprietary AI providers often limit external audits, making it difficult to evaluate bias, safety, and ethical concerns.

13
Q

How does the EU AI Act’s risk-based approach impact AI compliance for proprietary AI models?
A) Proprietary AI models must be assessed for risk level and comply with relevant transparency and accountability obligations
B) The EU AI Act does not classify AI risks
C) Proprietary AI models are always classified as low-risk
D) The EU AI Act does not regulate proprietary AI models

A

A) Proprietary AI models must be assessed for risk level and comply with relevant transparency and accountability obligations
Explanation: The EU AI Act enforces obligations based on an AI system’s risk level, including proprietary models used in high-risk applications.

14
Q

Which industry is most likely to be affected by AI-driven job automation?
A. Healthcare
B. Finance
C. Manufacturing
D. Retail

A

Correct Answer: C. Manufacturing

Explanation: AI-driven automation significantly impacts manufacturing jobs by replacing repetitive, manual tasks with intelligent robotic systems.

15
Q

Which of the following is not a typical AI deployment challenge?
A) Latency issues
B) Model packaging compatibility
C) Unlimited scalability
D) Security vulnerabilities

A

C) Unlimited scalability
Explanation: AI deployment often faces scalability limits, latency issues, and security risks—but unlimited scalability is not a typical challenge.

16
Q

Which proprietary AI model sparked debates over intellectual property and fair use?
A) OpenAI’s GPT
B) Google’s BERT
C) Meta’s LLaMA
D) IBM Watson

A

A) OpenAI’s GPT
Explanation: GPT models have faced lawsuits regarding their use of copyrighted data in training without explicit consent.

17
Q

Which of the following best defines a foundation model?
A) A neural network trained on a massive dataset for broad applicability
B) A traditional rule-based AI system
C) An AI system specifically designed for real-time decision-making
D) An AI model optimized for small-scale data analysis

A

Correct Answer: A) A neural network trained on a massive dataset for broad applicability

Explanation: Foundation models, such as large language models, are trained on extensive datasets and can be fine-tuned for specific applications.

18
Q

Which type of AI model architecture is best suited for image recognition tasks?
A) Recurrent Neural Networks (RNN)
B) Convolutional Neural Networks (CNN)
C) Decision Trees
D) Reinforcement Learning

A

B) Convolutional Neural Networks (CNN)
Explanation: CNNs are specialized for processing image data by detecting patterns through convolutional and pooling layers.
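The pattern-detection step can be illustrated without a deep-learning framework: a minimal 2D convolution in plain Python, applying a vertical-edge kernel to a toy grayscale image (image values and kernel choice are illustrative):

```python
# A kernel sliding over an image is the core CNN operation.
IMAGE = [  # 4x4 image: dark left half, bright right half
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]
KERNEL = [  # Sobel-like vertical-edge detector
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]

def conv2d(img, ker):
    """Valid (no-padding) 2D convolution over nested lists."""
    kh, kw = len(ker), len(ker[0])
    out_h, out_w = len(img) - kh + 1, len(img[0]) - kw + 1
    return [
        [
            sum(img[i + di][j + dj] * ker[di][dj] for di in range(kh) for dj in range(kw))
            for j in range(out_w)
        ]
        for i in range(out_h)
    ]

feature_map = conv2d(IMAGE, KERNEL)
print(feature_map)  # strong responses where the dark-to-bright edge lies
```

A CNN learns the kernel weights from data rather than hand-coding them, and stacks many such filters with pooling layers to build up from edges to higher-level patterns.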

19
Q

Which ‘V’ of big data refers to the accuracy and reliability of data?
A) Variety
B) Velocity
C) Veracity
D) Volume

A

C) Veracity
Explanation: The 5 Vs of big data include Variety (data types), Velocity (processing speed), Volume (amount), Veracity (accuracy), and Value (usefulness).

20
Q

What is the role of the Market Surveillance Authority (MSA)?
A) Monitor AI compliance and enforce penalties at the national level
B) Develop AI governance guidelines for private companies
C) Approve all AI model releases before deployment
D) Conduct AI research for regulatory bodies

A

A) Monitor AI compliance and enforce penalties at the national level
Explanation: MSAs enforce AI Act compliance within their respective EU member states.

21
Q

Which of the following best defines the OECD AI Transparency Principle?
A) AI systems should explain their decision-making processes
B) AI models should be open-source
C) AI should be regulated by a single international body
D) AI-generated content should always be labeled

A

A) AI systems should explain their decision-making processes
Explanation: Transparency ensures that users and stakeholders can understand and trust AI outputs.

22
Q

How does the principle of AI accountability in the NIST AI RMF align with compliance requirements under the EU AI Act?
A) Both frameworks mandate that AI models must be decentralized
B) AI accountability requires clear roles, governance structures, and compliance measures in both frameworks
C) The EU AI Act does not include accountability as a regulatory principle
D) AI accountability is only necessary for AI models used in cybersecurity

A

B) AI accountability requires clear roles, governance structures, and compliance measures in both frameworks
Explanation: Both frameworks emphasize governance structures that ensure AI accountability, requiring clear policies and compliance measures.

23
Q

Under GDPR, which of the following is considered a legal basis for data processing?
A) User consent
B) Data ownership transfer
C) AI system autonomy
D) Automated bias detection

A

Correct Answer: A) User consent

Explanation: Organizations must obtain valid user consent or rely on another legal basis (e.g., contract necessity or legal obligation) before processing personal data.

24
Q

Under the EU AI Act, what penalty can be imposed for deploying a prohibited AI system?
A. No penalty, as AI development is encouraged
B. Up to €35 million or 7% of global annual turnover
C. A written warning with a 5-year grace period
D. Immediate ban without any financial consequences

A

Correct Answer: B. Up to €35 million or 7% of global annual turnover

Explanation: The EU AI Act imposes significant financial penalties on companies violating AI regulations, particularly for deploying prohibited AI applications.

25
Q

Which authority is responsible for supervising General Purpose AI (GPAI) models at the EU level?
A) European AI Office
B) Market Surveillance Authority
C) European Patent Office
D) OECD AI Governance Committee

A

A) European AI Office
Explanation: The European AI Office oversees compliance and risk assessments for GPAI models.

26
Q

What is a shared governance challenge for proprietary AI models and high-risk AI applications under the EU AI Act?
A) Both require clear accountability structures, transparency, and bias mitigation
B) Proprietary AI models are automatically classified as low-risk
C) The EU AI Act does not regulate high-risk AI models
D) Proprietary AI models are exempt from all compliance requirements

A

A) Both require clear accountability structures, transparency, and bias mitigation
Explanation: Both proprietary AI and high-risk AI applications must meet transparency and accountability standards to align with governance frameworks.

27
Q

NYC Local Law 144 requires that AI-based hiring tools:
A) Undergo bias audits before deployment
B) Be entirely transparent about their decision-making process
C) Receive federal certification before use
D) Be manually reviewed by human HR professionals before making hiring decisions

A

A) Undergo bias audits before deployment
Explanation: Local Law 144 mandates that AI hiring systems be audited for bias and that employers notify candidates when such tools are used in hiring decisions.

28
Q

Which governance measure helps limit liability when deploying AI models?
A) Open-sourcing all AI models
B) Implementing technical documentation, AUPs, and compliance reviews
C) Using only on-premise AI infrastructure
D) Allowing AI models to function independently without audits

A

B) Implementing technical documentation, AUPs, and compliance reviews
Explanation: Organizations limit liability by maintaining detailed documentation, Acceptable Use Policies (AUPs), and compliance checks.

29
Q

What is a key requirement for deployers using High-Risk AI Systems?
A) Provide real-time access to system logs to all users
B) Conduct a Fundamental Rights Impact Assessment (FRIA)
C) Open-source all AI models
D) Ensure AI models do not use deep learning

A

B) Conduct a Fundamental Rights Impact Assessment (FRIA)
Explanation: Deployers of certain high-risk AI systems must conduct a FRIA to evaluate risks to fundamental rights before putting the system into use.

30
Q

What is the primary risk associated with black-box AI models?
A) They are too slow for real-world deployment
B) Their decision-making processes are not interpretable
C) They cannot process unstructured data
D) They require constant internet connectivity

A

Correct Answer: B) Their decision-making processes are not interpretable

Explanation: Black-box AI models, such as deep neural networks, lack transparency, making it difficult to understand how decisions are made.

31
Q

Which of the following is a physical AI risk control?
A. Secure data centers for AI model storage
B. Algorithmic fairness audits
C. Ethical AI policies
D. Bias impact assessments

A

Correct Answer: A. Secure data centers for AI model storage

Explanation: Physical controls include secure locations, locked data centers, and restricted access areas to prevent unauthorized AI tampering.

32
Q

How do the EU AI Act and NIST AI RMF approach AI accountability differently?
A) The EU AI Act mandates accountability measures, while NIST AI RMF provides voluntary guidelines
B) Both frameworks enforce strict penalties for non-compliance
C) NIST AI RMF requires government audits, while the EU AI Act does not
D) AI accountability is only relevant to proprietary AI models

A

A) The EU AI Act mandates accountability measures, while NIST AI RMF provides voluntary guidelines
Explanation: The EU AI Act enforces legally binding accountability rules, while NIST AI RMF provides flexible, voluntary best practices for AI risk management.

33
Q

How do the risk classification systems of the EU AI Act and NIST AI RMF fundamentally differ?
A) The EU AI Act enforces legally binding risk categories, while NIST AI RMF provides adaptable risk frameworks
B) Both frameworks classify AI risks in the exact same way
C) NIST AI RMF does not include AI risk classifications
D) The EU AI Act does not regulate AI risk

A

A) The EU AI Act enforces legally binding risk categories, while NIST AI RMF provides adaptable risk frameworks
Explanation: The EU AI Act categorizes AI into mandatory risk levels, while NIST AI RMF allows for flexible, organization-specific risk management strategies.

34
Q

What key obligation does the EU AI Act impose on deployers of high-risk AI?
A. They must delete AI models every six months
B. They must implement and document human oversight of AI systems
C. They must prevent all users from interacting with AI decision-making tools
D. They must make AI systems open-source

A

Correct Answer: B. They must implement and document human oversight of AI systems

Explanation: The EU AI Act mandates human intervention where necessary to ensure fairness and accountability in high-risk AI decision-making.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
35
Q

Which risk is associated with AI data drift?
A) AI model becomes more accurate over time
B) AI outputs become unpredictable due to changing input distributions
C) AI models self-correct errors in real-time
D) AI models only work in real-time applications

A

B) AI outputs become unpredictable due to changing input distributions
Explanation: Data drift occurs when the statistical properties of input data change, leading to unreliable AI performance.
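A toy drift check can make this concrete: compare a live feature's mean against the training distribution in standard-error units. The distributions and the threshold are illustrative; production monitoring typically uses richer tests such as PSI or Kolmogorov-Smirnov:

```python
import random
from statistics import mean, stdev

rng = random.Random(0)
train = [rng.gauss(50, 5) for _ in range(1000)]   # feature distribution at training time
live = [rng.gauss(58, 5) for _ in range(1000)]    # production inputs have shifted upward

def drifted(reference, current, z_threshold=3.0):
    """Flag drift when the current mean sits far from the reference mean,
    measured in standard errors of the reference sample."""
    se = stdev(reference) / len(reference) ** 0.5
    z = abs(mean(current) - mean(reference)) / se
    return z > z_threshold

assert not drifted(train, train)   # no shift: not flagged
assert drifted(train, live)        # clear mean shift: flagged
```

Catching such shifts early lets teams retrain or recalibrate before model performance degrades silently.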

36
Q

What is the primary characteristic that differentiates Artificial Narrow Intelligence (ANI) from Artificial General Intelligence (AGI)?
A) ANI can generalize knowledge across multiple domains, AGI cannot
B) ANI performs a single function within defined constraints, while AGI exhibits human-level intelligence
C) ANI is entirely theoretical, while AGI is already widely deployed
D) ANI and AGI have the same capabilities, but AGI is more efficient

A

Correct Answer: B) ANI performs a single function within defined constraints, while AGI exhibits human-level intelligence

Explanation: ANI is designed for specific tasks, such as facial recognition or language translation, whereas AGI is capable of reasoning, problem-solving, and adapting across various domains like a human.

37
Q

Which AI deployment approach is best suited for real-time AI inference?
A) Batch processing
B) Edge computing
C) Data lake storage
D) Offline processing

A

B) Edge computing
Explanation: Edge computing enables real-time inference by running AI models directly on local devices.

38
Q

How does AI governance under the NIST AI RMF align with compliance requirements in the EU AI Act?
A) Both frameworks emphasize risk management and accountability
B) The EU AI Act requires open-source AI models, while NIST AI RMF does not
C) NIST AI RMF does not address AI governance principles
D) The EU AI Act does not require AI accountability

A

A) Both frameworks emphasize risk management and accountability
Explanation: The NIST AI RMF and EU AI Act share governance principles, requiring organizations to define AI risks, implement monitoring mechanisms, and ensure ethical accountability.

39
Q

What is a major security concern with proprietary AI models?
A) They consume too much computational power
B) They can contain undisclosed vulnerabilities that lead to adversarial attacks
C) They can never be updated after deployment
D) They cannot be used in regulated industries

A

B) They can contain undisclosed vulnerabilities that lead to adversarial attacks
Explanation: Since proprietary AI models are closed-source, security vulnerabilities may exist without public awareness or scrutiny.

40
Q

What is the main risk of using synthetic data in AI model training?
A) It is legally restricted in most jurisdictions
B) It lacks diversity compared to real-world data
C) It always produces biased AI models
D) It requires deep learning models to be effective

A

B) It lacks diversity compared to real-world data
Explanation: Synthetic data may not fully represent the diversity of real-world data, leading to biased or inaccurate AI models.

41
Q

Which AI risk assessment framework focuses on continuous evaluation and adaptation?
A) ISO 9001
B) NIST AI RMF
C) IEEE 7000-21
D) SOC 2 Compliance

A

B) NIST AI RMF
Explanation: The NIST AI Risk Management Framework (AI RMF) promotes continuous risk evaluation and mitigation in AI systems.

42
Q

How do proprietary AI models pose challenges in compliance with the NIST AI RMF and the EU AI Act?
A) Proprietary AI models always comply with both frameworks
B) Lack of transparency in proprietary models makes it difficult to assess risks and biases
C) Proprietary models cannot be used in regulated industries
D) Proprietary AI models must be open-source to comply with both frameworks

A

B) Lack of transparency in proprietary models makes it difficult to assess risks and biases
Explanation: Proprietary models often do not disclose their training data or decision-making processes, creating challenges for compliance with transparency and risk assessment requirements in both frameworks.

43
Q

The case “Silverman v. OpenAI” primarily revolved around which legal question?
A) Whether AI-generated content is protected under copyright law
B) Whether training AI models using copyrighted books without permission constitutes infringement
C) Whether AI-generated books can receive fair use protection
D) Whether AI systems can autonomously apply fair use defenses

A

B) Whether training AI models using copyrighted books without permission constitutes infringement
Explanation: The plaintiffs claimed OpenAI used their copyrighted books to train its AI models without consent; the court dismissed several ancillary claims while the core question of whether such training constitutes infringement continued to be litigated.

44
Q

How does the EU AI Act’s risk-based approach impact the deployment of General Purpose AI (GPAI)?
A) GPAI models require additional compliance measures because of their wide applicability
B) The EU AI Act classifies GPAI models as low-risk by default
C) GPAI models are not subject to any AI regulation
D) The EU AI Act does not address GPAI compliance

A

A) GPAI models require additional compliance measures because of their wide applicability
Explanation: The EU AI Act enforces transparency and risk management requirements on GPAI due to their broad integration across multiple applications.

45
Q

Which of the following is NOT a primary reason to pursue AIGP certification?
A. To gain expertise in AI governance
B. To understand AI model development in-depth
C. To advance career opportunities
D. To support responsible AI deployment

A

Correct Answer: B. To understand AI model development in-depth

Explanation: AIGP certification focuses on AI governance, compliance, and risk management rather than technical AI development itself.

46
Q

Which of the following is a key principle in the NIST AI RMF?
A) AI systems should be entirely automated
B) AI risk management should be flexible and context-dependent
C) AI regulation should be uniform across all industries
D) AI models should not require any human oversight

A

B) AI risk management should be flexible and context-dependent
Explanation: The NIST AI RMF emphasizes that risk management approaches should adapt based on the AI system’s context and use case.

47
Q

Which case study best exemplifies reinforcement learning AI under OECD’s framework?
A) AlphaGo
B) ChatGPT-4
C) IBM Watson
D) Google Translate

A

A) AlphaGo
Explanation: AlphaGo used reinforcement learning to optimize gameplay strategies, aligning with the framework’s AI model classification.

48
Q

What is a common governance challenge for both proprietary AI models and high-risk AI applications under the EU AI Act?
A) The requirement for AI models to be open-source
B) The need for robust documentation, bias audits, and transparency reporting
C) The exemption of proprietary AI from AI regulations
D) The requirement to train AI models only on European datasets

A

B) The need for robust documentation, bias audits, and transparency reporting
Explanation: Both proprietary AI models and high-risk AI applications must adhere to compliance measures that involve transparency, accountability, and bias mitigation.

49
Q

Which NIST framework aims to ensure AI systems are safe, explainable, and fair?
A) NIST AI RMF
B) NIST Cybersecurity Framework
C) IEEE 7000-21
D) ISO 22989:2022

A

A) NIST AI RMF
Explanation: The NIST AI Risk Management Framework (AI RMF) provides a structure to ensure AI safety, fairness, and transparency.

50
Q

How does the concept of AI bias mitigation in the NIST AI RMF relate to high-risk AI applications under the EU AI Act?
A) The EU AI Act requires bias mitigation only for proprietary AI models
B) The NIST AI RMF provides guidelines for detecting and mitigating bias, while the EU AI Act mandates bias audits for high-risk AI systems
C) AI bias is not addressed by either framework
D) The EU AI Act requires all AI models to be bias-free before deployment

A

B) The NIST AI RMF provides guidelines for detecting and mitigating bias, while the EU AI Act mandates bias audits for high-risk AI systems
Explanation: The NIST AI RMF suggests best practices for bias detection, while the EU AI Act legally requires audits and documentation for high-risk AI models.

51
Q

Which principle of Fair Information Practices aligns with the concept of ‘data minimization’?
A) Use Limitation
B) Data Quality and Relevance
C) Collection Limitation
D) Safeguards

A

Correct Answer: C) Collection Limitation

Explanation: Collection Limitation (also known as Data Minimization) ensures that only necessary data is collected and retained for as long as needed.

52
Q

Which U.S. executive order mandates AI developers to report safety test results to the government?
A) EO 14091
B) EO 14110
C) EO 13960
D) EO 13769

A

B) EO 14110
Explanation: Executive Order 14110 strengthens AI safety by requiring developers to share risk assessments with the federal government.

53
Q

How does the EU AI Act define ‘General-Purpose AI Models’ (GPAI)?
A. AI systems that are designed for a single, highly specialized task
B. AI models trained on large datasets that can perform a broad range of tasks
C. AI applications that are used exclusively for governmental purposes
D. AI models that are exempt from all transparency requirements

A

Correct Answer: B. AI models trained on large datasets that can perform a broad range of tasks

Explanation: GPAI models, also known as foundation models, are trained on extensive datasets and can be adapted for multiple downstream applications.

54
Q

What is the maximum fine for GDPR violations?
A) €10 million or 2% of global turnover
B) €20 million or 4% of global turnover
C) €30 million or 6% of global turnover
D) No financial penalty, only a warning

A

Correct Answer: B) €20 million or 4% of global turnover

Explanation: GDPR imposes strict penalties for non-compliance, with maximum fines reaching 4% of a company’s global annual revenue or €20 million, whichever is higher.
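The "whichever is higher" rule is simple arithmetic; a sketch of the cap stated on this card (the function name is ours, and the turnover figures are invented examples):

```python
def gdpr_max_fine(global_turnover_eur: float) -> float:
    """Maximum GDPR fine: EUR 20 million or 4% of global annual turnover,
    whichever is higher (per the rule on this card)."""
    return max(20_000_000, 0.04 * global_turnover_eur)

# A company with EUR 2 billion turnover: 4% (EUR 80M) exceeds the EUR 20M floor.
assert gdpr_max_fine(2_000_000_000) == 80_000_000
# A firm with EUR 100 million turnover: the EUR 20M floor applies.
assert gdpr_max_fine(100_000_000) == 20_000_000
```

Note the floor means small companies are not shielded by low turnover: the EUR 20 million figure applies regardless of revenue.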

55
Q

Which of the following is not a key component of AI project planning?
A) Business problem
B) Mission
C) System architecture
D) Data considerations

A

C) System architecture
Explanation: AI project planning includes defining the business problem, mission, gaps, data considerations, and governance. System architecture is determined in the design phase.

56
Q

Which of the following AI liability models applies no-fault responsibility to developers?
A) Fault-based liability
B) Negligence-based liability
C) Strict liability
D) Common carrier liability

A

C) Strict liability
Explanation: Under strict liability, AI developers may be held responsible for harm caused by their products regardless of intent or negligence.

57
Q

Under GDPR, when is a DPIA required?
A) When deploying AI models with minimal risk
B) When processing high-risk personal data
C) Only when requested by regulators
D) Only for organizations operating outside the EU

A

Correct Answer: B) When processing high-risk personal data

Explanation: DPIAs are mandatory under GDPR for high-risk AI systems, ensuring potential privacy harms are evaluated before data processing begins.

58
Q

What is the primary function of the EU AI Act’s Articles?
A) Provide supplementary technical specifications
B) Explain the reasoning behind legislative provisions
C) Establish substantive rules, rights, and obligations
D) Define AI governance for member states

A

C) Establish substantive rules, rights, and obligations
Explanation: Articles in the EU AI Act set out legally binding rules and obligations that directly affect AI operators.

59
Q

Under the EU AI Act, what is required of AI providers placing high-risk AI systems on the EU market?
A. No specific requirements, as AI is self-regulated
B. Conformity assessments, risk management, and post-market monitoring
C. Full government oversight and pre-approval of all AI systems
D. Exemptions from all liability if AI models fail

A

Correct Answer: B. Conformity assessments, risk management, and post-market monitoring

Explanation: High-risk AI providers must conduct assessments to ensure compliance, track risks, and continuously monitor the AI’s impact on users and society.

60
Q

How does the principle of explainable AI (XAI) in the NIST AI RMF impact compliance with proprietary AI governance?
A) Explainability is not relevant for proprietary AI models
B) Proprietary AI models must implement XAI techniques to meet transparency requirements
C) NIST AI RMF prohibits the use of black-box AI models
D) The EU AI Act does not require explainability for any AI models

A

B) Proprietary AI models must implement XAI techniques to meet transparency requirements
Explanation: The NIST AI RMF emphasizes the importance of explainability, which helps ensure compliance with transparency and accountability regulations.

61
Q

Which U.S. Executive Order establishes guidelines for AI safety and security?
A) Executive Order 14110
B) Executive Order 13960
C) Executive Order 13769
D) Executive Order 14091

A

A) Executive Order 14110
Explanation: EO 14110 mandates AI safety measures, requiring companies to share test results and align AI development with national security priorities.

62
Q

NYC Local Law 144 requires that AI-based hiring tools:
A) Undergo bias audits before deployment
B) Be entirely transparent about their decision-making process
C) Receive federal certification before use
D) Be manually reviewed by human HR professionals before making hiring decisions

A

A) Undergo bias audits before deployment
Explanation: Local Law 144 mandates that AI hiring systems be audited for bias and that employers notify candidates when such tools are used in hiring decisions.
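
The audit idea can be made concrete. Below is a minimal Python sketch of the impact-ratio calculation commonly reported in such bias audits; the group names and rates are hypothetical, and the 4/5 (0.8) flag is the traditional EEOC rule of thumb, not a threshold fixed by Local Law 144 itself.

```python
def impact_ratios(selection_rates):
    """Selection rate of each group divided by the highest group's rate."""
    best = max(selection_rates.values())
    return {group: rate / best for group, rate in selection_rates.items()}

# Hypothetical audit data: 40% of group_a and 28% of group_b were selected.
ratios = impact_ratios({"group_a": 0.40, "group_b": 0.28})

# Flag groups below the traditional 4/5 (0.8) rule of thumb.
flagged = [g for g, r in ratios.items() if r < 0.8]
```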

63
Q

Under the Digital Services Act (DSA), what requirement must AI-powered recommender systems comply with?
A) They must avoid collecting personal data for targeted advertising.
B) They must clearly inform users how recommendations are generated.
C) They must provide an alternative manual content ranking system.
D) They must be certified by the European Data Protection Board.

A

B) They must clearly inform users how recommendations are generated.
Explanation: The DSA enforces transparency requirements for recommender systems, ensuring users understand how content is prioritized and displayed.

64
Q

What does ‘Fairness in AI’ primarily aim to address?
A) Ensuring AI models operate without bias or discrimination
B) Reducing AI system efficiency
C) Allowing unrestricted AI decision-making
D) Preventing AI governance regulations

A

Correct Answer: A) Ensuring AI models operate without bias or discrimination

Explanation: Fairness in AI ensures that models do not reinforce biases, protecting users from discriminatory outcomes in hiring, finance, healthcare, and law enforcement.

65
Q

Under the EU AI Act, what is the primary responsibility of an Authorized Representative?
A) Conduct AI system audits before market entry
B) Ensure compliance on behalf of a non-EU provider
C) Monitor AI applications in regulated industries
D) Deploy AI systems for public authorities

A

B) Ensure compliance on behalf of a non-EU provider
Explanation: Authorized Representatives act as the compliance liaison for AI providers outside the EU.

66
Q

What is a key benefit of AI containerization in deployment?
A) Increases model accuracy
B) Packages code, dependencies, and configurations for easy deployment
C) Prevents all security vulnerabilities in AI models
D) Requires minimal infrastructure for model execution

A

B) Packages code, dependencies, and configurations for easy deployment
Explanation: Containers ensure AI models are portable and compatible across multiple environments.
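
As an illustration of the packaging idea, a minimal container recipe might look like the following; the base image, file names, and port are assumptions for the sketch, not a standard layout.

```dockerfile
# Illustrative only: image, paths, and port are assumptions.
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY model/ model/
COPY serve.py .
EXPOSE 8000
CMD ["python", "serve.py"]
```

The same image then runs identically on a laptop, a CI runner, or a cloud host, which is the portability benefit the card describes.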

67
Q

What is the primary concern of continuous AI monitoring post-deployment?
A) Ensuring real-time feedback from users
B) Evaluating AI accuracy and detecting irregular decisions
C) Increasing the processing power of AI models
D) Reducing dataset size to optimize speed

A

B) Evaluating AI accuracy and detecting irregular decisions
Explanation: Continuous monitoring involves tracking model performance, detecting deviations, and mitigating risks such as data drift.
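
One common way to quantify the deviation mentioned above is the Population Stability Index (PSI) between training-time and live feature distributions. A minimal pure-Python sketch follows; the 0.2 drift threshold noted in the comment is a widely used rule of thumb, not a standard.

```python
import math

def psi(baseline, live, bins=10):
    """Population Stability Index between a baseline and a live sample.

    Bins are derived from the baseline range; empty buckets are smoothed
    with a half count so the logarithm stays defined.
    """
    lo, hi = min(baseline), max(baseline)
    width = (hi - lo) / bins or 1.0

    def proportions(xs):
        counts = [0] * bins
        for x in xs:
            i = min(max(int((x - lo) / width), 0), bins - 1)
            counts[i] += 1
        return [(c or 0.5) / len(xs) for c in counts]

    b, lv = proportions(baseline), proportions(live)
    return sum((li - bi) * math.log(li / bi) for bi, li in zip(b, lv))

# A common (but tunable) rule of thumb: PSI above 0.2 signals drift.
```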

68
Q

How does the NIST AI RMF’s focus on continuous risk monitoring relate to proprietary AI models in high-risk sectors?
A) Proprietary AI models do not require risk monitoring
B) The lack of external auditing for proprietary AI models makes continuous risk monitoring more difficult
C) Continuous risk monitoring applies only to AI models used in social media
D) NIST AI RMF does not support AI risk monitoring

A

B) The lack of external auditing for proprietary AI models makes continuous risk monitoring more difficult
Explanation: Continuous risk monitoring is essential, but proprietary AI models often restrict access, making ongoing evaluations harder.

69
Q

What distinguishes generative models from discriminative models?
A) Generative models create new data, while discriminative models classify existing data
B) Discriminative models predict outcomes, while generative models cannot
C) Generative models rely on supervised learning, while discriminative models do not
D) Discriminative models require deep learning, whereas generative models do not

A

Correct Answer: A) Generative models create new data, while discriminative models classify existing data

Explanation: Generative models, such as GPT and GANs, generate new samples from learned distributions, while discriminative models classify or separate data into categories.
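
The distinction can be shown in a few lines of Python: the generative half learns p(x | y) and can sample new data, while the discriminative half learns only a decision rule (here a simple midpoint threshold standing in for, say, logistic regression). The data and model choices are toy assumptions for illustration.

```python
import math
import random

random.seed(42)
# Toy 1-D data: class 0 centered at 0.0, class 1 centered at 3.0.
data = ([(random.gauss(0.0, 1.0), 0) for _ in range(200)]
        + [(random.gauss(3.0, 1.0), 1) for _ in range(200)])

def fit_gaussian(xs):
    # Learn p(x | y) as a Gaussian: this is the generative part.
    mu = sum(xs) / len(xs)
    sigma = math.sqrt(sum((x - mu) ** 2 for x in xs) / len(xs))
    return mu, sigma

params = {y: fit_gaussian([x for x, label in data if label == y]) for y in (0, 1)}

# Generative models can *sample* new data from the learned distribution.
fresh_class1_point = random.gauss(*params[1])

# A discriminative model learns only the decision boundary; it classifies
# existing inputs but cannot generate new samples.
threshold = (params[0][0] + params[1][0]) / 2

def classify(x):
    return 1 if x > threshold else 0
```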

70
Q

Which case set the precedent that AI-generated works cannot be copyrighted?
A) Burrow-Giles Lithographic v. Sarony (1884)
B) Thaler v. Vidal (2023)
C) Silverman v. OpenAI
D) Rogers v. Christie

A

B) Thaler v. Vidal (2023)
Explanation: Thaler v. Vidal confirmed that only natural persons can be named as inventors on a U.S. patent; courts and the U.S. Copyright Office have applied the same human-authorship requirement to deny copyright protection for AI-generated works.

71
Q

How do General Purpose AI (GPAI) models impact regulatory compliance under both the EU AI Act and the NIST AI RMF?
A) GPAI models are automatically classified as low-risk AI
B) GPAI models require additional compliance measures because they can be integrated into multiple applications
C) The EU AI Act does not apply to GPAI models
D) NIST AI RMF prohibits the use of GPAI models in risk management

A

B) GPAI models require additional compliance measures because they can be integrated into multiple applications
Explanation: The EU AI Act introduces specific regulations for GPAI due to their broad applicability, while the NIST AI RMF provides adaptable risk management strategies.

72
Q

How does differential privacy help improve AI security?
A) By encrypting AI models before deployment
B) By adding statistical noise to data to protect individual privacy
C) By limiting AI model access to public networks
D) By ensuring AI systems never store user data

A

Correct Answer: B) By adding statistical noise to data to protect individual privacy

Explanation: Differential privacy prevents AI models from learning identifiable patterns by adding controlled randomness, improving privacy protection.
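
The "controlled randomness" above is typically the Laplace mechanism. A minimal sketch for a counting query follows (sensitivity of a count is 1, so the noise scale is 1/epsilon; the query and data are illustrative).

```python
import math
import random

def dp_count(values, epsilon: float) -> float:
    """Differentially private count via the Laplace mechanism.

    A counting query has sensitivity 1, so the noise scale is 1 / epsilon.
    Smaller epsilon means stronger privacy and noisier answers.
    """
    u = random.random() - 0.5
    # Inverse-CDF sampling of Laplace(0, 1/epsilon).
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return len(values) + noise

# The true count is 100; the released value is perturbed.
released = dp_count([1] * 100, epsilon=0.5)
```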

73
Q

What is a key requirement for AI deployers under the EU AI Act?
A. They must conduct mandatory fairness audits every 5 years
B. They must ensure human oversight and compliance with transparency rules
C. They must create their own AI risk assessment methodology
D. They must submit all AI decisions for approval by the European Commission

A

Correct Answer: B. They must ensure human oversight and compliance with transparency rules

Explanation: Deployers must monitor AI decisions, ensure human oversight where required, and comply with transparency obligations to protect users’ rights.

74
Q

Which of the following best describes an AI model license agreement?
A) A contract that allows users to modify and redistribute the model
B) A legal document outlining how an AI model can be used
C) A formal recognition that an AI model is public domain
D) A government certification of AI fairness

A

B) A legal document outlining how an AI model can be used
Explanation: Proprietary AI models require license agreements that define usage rights, limitations, and compliance requirements.

75
Q

Which of the following is a primary challenge in AI risk management frameworks?
A) AI models evolve unpredictably over time
B) AI models never make mistakes
C) AI risks are fixed at the development stage
D) AI systems are inherently unbiased

A

A) AI models evolve unpredictably over time
Explanation: One key challenge in AI risk management is tracking and adapting to emergent risks as AI systems evolve.

76
Q

What is the primary challenge of embedding AI models into applications?
A) High computational costs
B) Integration complexity and API management
C) Limited internet access
D) Insufficient training data

A

B) Integration complexity and API management
Explanation: Embedding AI requires managing APIs, data flows, and ensuring compatibility with application environments.

77
Q

What does the Consumer Financial Protection Bureau (CFPB) regulate?
A) Fair hiring practices and AI bias audits
B) Data privacy for online social media platforms
C) Consumer protection in the financial sector, including AI-driven credit decisions
D) Automated investment systems in financial markets

A

C) Consumer protection in the financial sector, including AI-driven credit decisions
Explanation: The CFPB ensures transparency and fairness in financial services, including AI-driven lending models and credit scoring.

78
Q

Who developed the concept of Privacy by Design (PbD)?
A) Alan Turing
B) Ann Cavoukian
C) Geoffrey Hinton
D) Tim Berners-Lee

A

Correct Answer: B) Ann Cavoukian

Explanation: Ann Cavoukian, former Information and Privacy Commissioner of Ontario, developed Privacy by Design to proactively integrate privacy into system design and governance policies.

79
Q

Which governance principle is emphasized in both proprietary AI models and the EU AI Act’s high-risk AI classification?
A) Automated bias audits for all AI models
B) Transparency, documentation, and human oversight
C) Requiring proprietary models to be fully explainable
D) Mandatory open-source licensing for AI providers

A

B) Transparency, documentation, and human oversight
Explanation: Both proprietary AI models and high-risk AI applications must implement transparency, documentation, and accountability structures to comply with governance frameworks.

80
Q

Why is AI explainability particularly challenging for proprietary AI models under the NIST AI RMF and the EU AI Act?
A) Proprietary AI models always have built-in explainability
B) The lack of access to model architecture and training data makes it harder to verify fairness and risk
C) Explainability is only required for AI models used in financial applications
D) Proprietary models are exempt from risk management frameworks

A

B) The lack of access to model architecture and training data makes it harder to verify fairness and risk
Explanation: Proprietary AI models often restrict access to their internal workings, making it difficult to ensure compliance with explainability and risk management standards.

81
Q

Which AI category represents human-level intelligence across multiple domains?
A) Artificial Narrow Intelligence (ANI)
B) Artificial General Intelligence (AGI)
C) Artificial Super Intelligence (ASI)
D) Machine Learning (ML)

A

Correct Answer: B) Artificial General Intelligence (AGI)

Explanation: AGI exhibits reasoning, learning, and adaptability across diverse fields, similar to human intelligence.

82
Q

According to the Equal Credit Opportunity Act (ECOA), what must creditors provide when making an adverse decision using AI-based credit scoring?
A) The raw data used by the AI to evaluate the applicant
B) A full audit report of the AI model
C) A specific explanation of the factors leading to the adverse decision
D) The name of the AI developer responsible for the model

A

C) A specific explanation of the factors leading to the adverse decision
Explanation: The ECOA requires creditors to explain specific reasons for denying credit applications, even when using AI, to prevent discrimination and ensure transparency.

83
Q

What is a key challenge in applying the EU AI Act’s risk classification system to proprietary AI models?
A) The lack of transparency in proprietary AI models makes it difficult to assess their risk level
B) Proprietary AI models are automatically classified as low-risk
C) The EU AI Act does not regulate proprietary AI models
D) Proprietary AI models are always banned under the EU AI Act

A

A) The lack of transparency in proprietary AI models makes it difficult to assess their risk level
Explanation: Proprietary AI models do not always disclose their internal mechanisms, making it harder to classify them under the EU AI Act’s predefined risk levels.

84
Q

How does the NIST AI RMF’s ‘Measure’ function apply to proprietary AI models?
A) It encourages risk assessment methodologies, which are often difficult to implement due to proprietary AI’s closed nature
B) Proprietary AI models are automatically risk-free
C) The ‘Measure’ function does not apply to AI governance
D) The EU AI Act does not require risk measurement for AI

A

A) It encourages risk assessment methodologies, which are often difficult to implement due to proprietary AI’s closed nature
Explanation: The ‘Measure’ function in NIST AI RMF focuses on risk assessment, but proprietary AI models often limit access to necessary data, complicating compliance.

85
Q

Which principle ensures AI systems provide clear explanations for their decisions?
A) Transparency
B) Data Portability
C) Algorithmic Invisibility
D) Model Complexity

A

Correct Answer: A) Transparency

Explanation: Transparency ensures that AI models explain their decision-making processes, allowing stakeholders to evaluate fairness, accountability, and compliance.

86
Q

Which phase of the NIST AI RMF involves developing AI risk metrics and evaluation processes?
A) Measure
B) Map
C) Manage
D) Govern

A

A) Measure
Explanation: The ‘Measure’ function focuses on developing risk assessment methods and evaluation strategies for AI models.

87
Q

What is the primary limitation of using unstructured data in AI systems?
A) It is difficult to store in cloud databases
B) It cannot be processed by AI models
C) It requires significant pre-processing and transformation
D) It is inherently biased

A

C) It requires significant pre-processing and transformation
Explanation: Unstructured data, such as images and text, must be formatted and labeled before AI models can analyze it.
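
A toy example of that pre-processing step for text data (lowercasing, stripping punctuation, tokenizing); real pipelines add stop-word removal, stemming, labeling, and similar steps.

```python
import re

def preprocess(text):
    """Minimal text normalization: lowercase, strip punctuation, tokenize."""
    text = re.sub(r"[^a-z0-9\s]", " ", text.lower())
    return text.split()

preprocess("Unstructured data: e.g., FREE-form text!")
# -> ['unstructured', 'data', 'e', 'g', 'free', 'form', 'text']
```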

88
Q

Which of the following is not classified as a Prohibited AI System under the EU AI Act?
A) Predictive policing models that target specific demographics
B) AI models that manipulate human behavior subliminally
C) AI for customer service chatbots
D) Social credit scoring AI

A

C) AI for customer service chatbots
Explanation: Customer service AI is considered low-risk, while social scoring and manipulative AI are banned.

89
Q

Why do General Purpose AI (GPAI) models present compliance risks under both the NIST AI RMF and the EU AI Act?
A) Their adaptability allows them to be used in multiple applications, making risk assessments more complex
B) GPAI models are automatically risk-free
C) The EU AI Act does not classify GPAI as a regulatory concern
D) GPAI models are only used in low-risk AI applications

A

A) Their adaptability allows them to be used in multiple applications, making risk assessments more complex
Explanation: Because GPAI can be applied in various domains, ensuring compliance with transparency and accountability standards becomes challenging.

90
Q

Which compliance requirement is common to both proprietary AI governance and high-risk AI applications under the EU AI Act?
A) AI models must be fully explainable and open-source
B) AI providers must conduct risk assessments, maintain documentation, and ensure model transparency
C) AI providers are required to register with the European Patent Office
D) AI providers must train their models on strictly European data

A

B) AI providers must conduct risk assessments, maintain documentation, and ensure model transparency
Explanation: Both proprietary AI governance and high-risk AI applications require compliance with transparency, documentation, and risk management obligations.

91
Q

Which AI technique ensures that sensitive data remains protected while allowing analysis?
A) Differential Privacy
B) Data Augmentation
C) Feature Selection
D) Reinforcement Learning

A

A) Differential Privacy
Explanation: Differential privacy adds noise to data to protect individual privacy while maintaining useful insights.

92
Q

Which of the following AI applications is prohibited under the EU AI Act?
A. AI-based fraud detection in financial services
B. Real-time biometric surveillance without explicit legal justification
C. AI-powered recommendation engines in e-commerce
D. AI tools for grammar and spell checking

A

Correct Answer: B. Real-time biometric surveillance without explicit legal justification

Explanation: Real-time biometric identification in public spaces is banned unless strictly necessary for national security or law enforcement in limited cases.

93
Q

What is the role of an AI importer under the EU AI Act?
A. To develop and deploy AI models within the EU
B. To verify that AI systems comply with EU regulations before being placed on the market
C. To oversee AI fairness testing in academic research
D. To ensure all AI systems are developed using open-source data

A

Correct Answer: B. To verify that AI systems comply with EU regulations before being placed on the market

Explanation: Importers must ensure AI models meet EU compliance requirements before distribution and are held accountable for regulatory breaches.

94
Q

Which of the following is not a core function of the NIST AI RMF?
A) Govern
B) Map
C) Measure
D) Explain

A

D) Explain
Explanation: The four core functions of the NIST AI RMF are Map, Measure, Manage, and Govern. ‘Explain’ is not a separate function but falls under transparency efforts.

95
Q

What does ‘Accountability’ mean under Fair Information Practices?
A) Organizations take responsibility for ensuring compliance with data protection policies
B) Data subjects are responsible for securing their personal data
C) AI systems automatically self-regulate
D) There is no need for data security measures

A

Correct Answer: A) Organizations take responsibility for ensuring compliance with data protection policies

Explanation: Accountability ensures organizations implement data governance policies, adhere to regulations, and enforce privacy protections.

96
Q

What is the primary purpose of the EU AI Act?
A) Promote AI adoption with minimal regulation
B) Ensure AI is safe, trustworthy, and respects fundamental rights
C) Allow unrestricted development of AI technologies
D) Prevent all AI applications in high-risk sectors

A

B) Ensure AI is safe, trustworthy, and respects fundamental rights
Explanation: The EU AI Act seeks to balance AI innovation with fundamental rights protections.

97
Q

Under U.S. law, which of the following statements about AI-generated intellectual property is correct?
A) AI can be named as an inventor if it demonstrates autonomous creativity.
B) AI-generated works cannot be copyrighted because they lack human authorship.
C) AI systems must have a legal personality to file patents in the U.S. and Europe.
D) The European Patent Office has granted patents to AI-generated inventions.

A

B) AI-generated works cannot be copyrighted because they lack human authorship.
Explanation: U.S. copyright law requires human authorship. Thaler v. Vidal confirmed that AI cannot be named as a patent inventor, and the U.S. Copyright Office has repeatedly refused copyright protection for AI-generated works.

98
Q

Which risk does AI continuous monitoring help mitigate?
A) Data drift
B) Hyperparameter misconfiguration
C) Model training errors
D) Increased processing speed

A

A) Data drift
Explanation: Continuous monitoring ensures that AI models maintain accuracy by detecting data drift and adapting to changes over time.

99
Q

Which AI principle is NOT explicitly included in the OECD AI Principles?
A) Privacy and Data Protection
B) Robustness and Security
C) Explainability and Trustworthiness
D) Algorithmic Bias Compensation

A

D) Algorithmic Bias Compensation
Explanation: While OECD AI principles emphasize fairness and non-discrimination, they do not explicitly require AI to actively compensate for bias.

100
Q

What is the biggest security risk when deploying proprietary AI models?
A) Data poisoning attacks
B) Open-source licensing
C) Poor API documentation
D) High infrastructure costs

A

A) Data poisoning attacks
Explanation: Data poisoning attacks occur when malicious data is introduced to manipulate AI model behavior.

101
Q

What is a key legal challenge in applying product liability laws to AI-based systems?
A) AI systems cannot be held liable because they lack legal personhood.
B) AI operates autonomously, making it difficult to attribute harm to a specific party.
C) AI-generated errors are covered by fair use laws.
D) AI decision-making is fully transparent, making liability easy to determine.

A

B) AI operates autonomously, making it difficult to attribute harm to a specific party.
Explanation: Because AI systems function independently, determining responsibility—whether it’s the developer, the user, or the AI itself—is legally complex.

102
Q

Which of the following is NOT a key component of AI governance frameworks?
A) Accountability
B) Bias mitigation
C) AI-generated policies
D) Transparency

A

Correct Answer: C) AI-generated policies

Explanation: AI governance frameworks are created by human regulators and organizations to ensure AI fairness, accountability, and transparency.

103
Q

What is a strategic advantage of proprietary AI models over open-source models?
A) They offer greater transparency
B) They provide better security and exclusive access to optimized performance
C) They are always more accurate
D) They do not require compliance with AI regulations

A

B) They provide better security and exclusive access to optimized performance
Explanation: Proprietary models often leverage proprietary data and optimizations, leading to better security and performance advantages for their owners.

104
Q

Which AI risk factor is most associated with black-box models?
A) High computational cost
B) Lack of transparency in decision-making
C) Open-source vulnerabilities
D) Lack of hardware compatibility

A

B) Lack of transparency in decision-making
Explanation: Black-box models, like deep neural networks, make decisions that are difficult to interpret, creating transparency and accountability challenges.

105
Q

Which of the following is NOT a type of AI risk control?
A. Administrative controls
B. Technical controls
C. Physical controls
D. AI-generated self-regulatory controls

A

Correct Answer: D. AI-generated self-regulatory controls

Explanation: AI governance relies on human-defined risk controls, such as administrative policies, technical measures, and physical safeguards.

106
Q

What is a key requirement of AI providers under the EU AI Act?
A) Conduct periodic bias audits and risk assessments
B) Register AI models as public domain technologies
C) Ensure AI operates without human oversight
D) Use only EU-developed datasets for training

A

A) Conduct periodic bias audits and risk assessments
Explanation: AI providers must regularly assess bias, transparency, and risks before deployment.

107
Q

Under the EU AI Act, what is a key requirement for AI systems used in law enforcement?
A. No specific requirements, as law enforcement AI is exempt from regulation
B. Strict compliance with transparency, fairness, and bias mitigation requirements
C. Full immunity from legal liability for any wrongful AI-based decisions
D. Automatic approval for deployment without assessment

A

Correct Answer: B. Strict compliance with transparency, fairness, and bias mitigation requirements

Explanation: AI used in law enforcement is subject to high-risk regulations, requiring human oversight, fairness testing, and impact assessments.

108
Q

What is a common challenge when applying AI governance frameworks to proprietary AI models?
A) Proprietary AI models often limit external oversight, making risk audits and compliance assessments difficult
B) Proprietary AI models always comply with all AI governance standards
C) The EU AI Act does not regulate proprietary AI models
D) The NIST AI RMF prohibits the use of proprietary AI models

A

A) Proprietary AI models often limit external oversight, making risk audits and compliance assessments difficult
Explanation: Many proprietary AI models operate as black boxes, making compliance with AI governance principles more challenging.

109
Q

Which of the following is not a primary consideration in AI deployment?
A) Environment
B) Packaging
C) Model explainability
D) Accessibility

A

C) Model explainability
Explanation: Explainability is important but is primarily an AI governance concern, while deployment considerations focus on infrastructure, integration, and access.

111
Q

Which term refers to AI models trained on massive datasets with self-supervised learning?
A) General Purpose AI (GPAI)
B) Narrow AI
C) Expert AI Systems
D) Reinforcement AI

A

A) General Purpose AI (GPAI)
Explanation: GPAI models, also called foundation models, are trained on large-scale data and exhibit broad generalization capabilities.

112
Q

Which OECD AI principle focuses on promoting transparency and explainability?
A) Inclusive Growth
B) Human Rights
C) Accountability
D) Transparency and Explainability

A

D) Transparency and Explainability
Explanation: The OECD AI Principles include this category to ensure AI systems are understandable and their decision-making processes are clear.

113
Q

Which principle ensures that personal data is kept accurate and up to date?
A) Data Quality and Accuracy
B) Data Portability
C) Collection Limitation
D) Automated Privacy Controls

A

Correct Answer: A) Data Quality and Accuracy

Explanation: Data Quality and Accuracy require organizations to maintain updated and correct personal data to prevent errors and misinformation.

114
Q

Which consideration does not belong to AI project scoping?
A) Impact
B) Effort
C) Fit
D) Model accuracy

A

D) Model accuracy
Explanation: AI project scope includes Impact, Effort, and Fit, whereas accuracy is an evaluation metric in the development phase.

115
Q

Which international treaty aims to regulate AI’s impact on human rights and democracy?
A) The Geneva Convention
B) Council of Europe AI Convention
C) United Nations AI Ethics Treaty
D) AI for Good Global Pact

A

B) Council of Europe AI Convention
Explanation: This treaty aims to govern AI development while ensuring fundamental rights, democracy, and rule of law.

116
Q

Which AI governance framework focuses on privacy and compliance?
A. NIST AI RMF
B. GDPR
C. ISO 42001
D. IEEE AI Ethics Guidelines

A

Correct Answer: B. GDPR

Explanation: The General Data Protection Regulation (GDPR) mandates strict AI data privacy protections and user rights regarding AI-driven decisions.

117
Q

Which of the following is an example of classification-based AI?
A) Predicting future stock prices
B) Identifying whether an email is spam or not
C) Recommending personalized movie content
D) Forecasting annual product demand

A

B) Identifying whether an email is spam or not
Explanation: Classification models categorize inputs into discrete labels, such as ‘spam’ or ‘not spam’.
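
A toy illustration of a classifier assigning those discrete labels; the keyword list is purely illustrative, since real spam filters learn weights from labeled data.

```python
# Purely illustrative keyword list; real filters learn weights from data.
SPAM_HINTS = {"free", "winner", "prize", "urgent", "click"}

def classify_email(text: str) -> str:
    words = set(text.lower().split())
    # Two or more hint words pushes the email into the 'spam' label.
    return "spam" if len(words & SPAM_HINTS) >= 2 else "not spam"

classify_email("Click now, you are a WINNER")  # -> "spam"
classify_email("Meeting moved to 3pm")         # -> "not spam"
```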

118
Q

Which of the following is NOT a requirement in ISO/IEC 42001:2023 for AI governance?
A) Risk assessment and mitigation
B) Compliance with international laws
C) AI system lifecycle documentation
D) Mandatory open-source AI models

A

D) Mandatory open-source AI models
Explanation: ISO/IEC 42001:2023 focuses on risk management and governance, but it does not require AI models to be open-source.

119
Q

The Digital Services Act (DSA) requires online platforms to:
A) Prevent all forms of misinformation on their websites
B) Label all AI-generated content as synthetic media
C) Provide transparency about content moderation and recommendation algorithms
D) Register with the European Data Protection Board before operating in the EU

A

C) Provide transparency about content moderation and recommendation algorithms
Explanation: The DSA aims to enhance accountability of online platforms by ensuring transparency in content recommendation and moderation policies.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
120
Q

How does the principle of continuous monitoring in the NIST AI RMF apply to AI deployment under the EU AI Act?
A) The EU AI Act does not require AI monitoring after deployment
B) The EU AI Act mandates continuous monitoring for high-risk AI systems, aligning with the NIST AI RMF’s emphasis on ongoing risk assessment
C) NIST AI RMF applies only to pre-deployment AI models
D) Continuous monitoring is optional in both frameworks

A

B) The EU AI Act mandates continuous monitoring for high-risk AI systems, aligning with the NIST AI RMF’s emphasis on ongoing risk assessment
Explanation: Both frameworks recognize the importance of continuously assessing AI risks and performance post-deployment to ensure compliance and safety.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
121
Q

What risk category applies to AI chatbots under the EU AI Act?
A) High Risk
B) Minimal Risk
C) Limited Risk
D) Unacceptable Risk

A

C) Limited Risk
Explanation: AI chatbots fall under the Limited Risk category, requiring only transparency obligations.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
122
Q

Which of the following is a key governance responsibility in AI projects?
A) Selecting training algorithms
B) Implementing a risk assessment framework
C) Optimizing model accuracy
D) Monitoring hyperparameter tuning

A

B) Implementing a risk assessment framework
Explanation: AI governance involves establishing policies, overseeing risk management, and ensuring regulatory compliance.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
123
Q

What is a shared limitation of applying the EU AI Act’s risk categories to proprietary AI models?
A) Proprietary AI models often do not disclose risk factors, making classification and compliance difficult
B) Proprietary AI models are automatically compliant with EU AI Act risk levels
C) The EU AI Act does not classify AI models based on risk levels
D) The EU AI Act bans the use of proprietary AI models

A

A) Proprietary AI models often do not disclose risk factors, making classification and compliance difficult
Explanation: Proprietary AI models lack transparency, complicating their classification under the EU AI Act’s risk-based approach.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
124
Q

How does the NIST AI RMF framework relate to the EU AI Act’s risk classification?
A) Both frameworks classify AI risks in the same way
B) The EU AI Act focuses on regulatory compliance, while NIST AI RMF provides voluntary risk management guidance
C) NIST AI RMF is legally binding, while the EU AI Act is optional
D) The NIST AI RMF prohibits high-risk AI applications

A

B) The EU AI Act focuses on regulatory compliance, while NIST AI RMF provides voluntary risk management guidance
Explanation: The EU AI Act classifies AI systems into different risk levels with mandatory compliance, while the NIST AI RMF provides flexible risk management principles.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
125
Q

Which AI risk factor is most associated with data drift?
A) AI systems operating without human supervision
B) Changes in real-world data affecting AI accuracy
C) AI using open-source datasets
D) Reduction in computational efficiency

A

B) Changes in real-world data affecting AI accuracy
Explanation: Data drift occurs when input data changes over time, reducing model reliability.
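
A minimal drift check can be sketched by comparing live inputs against the training distribution (the two-standard-deviation threshold is an arbitrary choice for this example):

```python
import statistics

def detect_drift(reference, live, threshold=2.0):
    """Flag drift when the live-data mean shifts more than `threshold`
    reference standard deviations away from the training mean."""
    ref_mean = statistics.mean(reference)
    ref_std = statistics.stdev(reference)
    shift = abs(statistics.mean(live) - ref_mean) / ref_std
    return shift > threshold
```

Production systems typically use richer tests (e.g., Kolmogorov-Smirnov or population stability index), but the principle is the same: compare what the model sees now to what it was trained on.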

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
126
Q

How does the NIST AI RMF recommend handling AI risk mitigation over time?
A) Risk mitigation should be a continuous, iterative process
B) Risks should only be assessed once before AI deployment
C) AI risks can be ignored if accuracy is above a certain threshold
D) AI risk mitigation should only focus on security vulnerabilities

A

A) Risk mitigation should be a continuous, iterative process
Explanation: The NIST AI RMF stresses the importance of ongoing monitoring, assessment, and refinement of AI risk management strategies.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
127
Q

Which of the following is NOT a core principle of Fair Information Practices (FIPs)?
A) Data Minimization
B) Purpose Specification
C) AI Autonomy
D) Accountability

A

Correct Answer: C) AI Autonomy

Explanation: Fair Information Practices (FIPs) focus on privacy, security, and fairness. AI autonomy is not part of FIPs, as these principles emphasize human oversight and accountability.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
128
Q

What is a shared limitation of applying NIST AI RMF’s risk monitoring principles to proprietary AI models?
A) Proprietary AI models often restrict access to internal operations, making risk monitoring difficult
B) NIST AI RMF does not require risk monitoring for AI models
C) Proprietary AI models are always compliant with risk frameworks
D) The NIST AI RMF prohibits the use of proprietary AI

A

A) Proprietary AI models often restrict access to internal operations, making risk monitoring difficult
Explanation: Proprietary models are typically closed-source, which complicates efforts to assess and monitor AI risks as required by NIST AI RMF.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
129
Q

What does AIGP certification cover?
A. AI deployment and use governance
B. AI governance laws, standards, and frameworks
C. Foundations of AI governance
D. All of the above

A

Correct Answer: D. All of the above

Explanation: AIGP certification provides comprehensive knowledge of AI governance, including legal, ethical, and operational aspects.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
130
Q

What is the purpose of AI risk assessments?
A) To ensure AI systems always function perfectly
B) To identify potential risks and mitigation strategies for AI deployment
C) To increase AI system complexity
D) To eliminate the need for AI regulations

A

Correct Answer: B) To identify potential risks and mitigation strategies for AI deployment

Explanation: AI risk assessments evaluate factors such as bias, fairness, and security vulnerabilities, ensuring responsible AI deployment.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
131
Q

Which of the following is not an AI operator category under the EU AI Act?
A) Importer
B) Data Controller
C) Distributor
D) Authorized Representative

A

B) Data Controller
Explanation: AI operators include Providers, Deployers, Importers, Distributors, and Authorized Representatives, but ‘Data Controller’ is a GDPR term.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
132
Q

What is the biggest challenge in AI transparency?
A. AI systems evolving too slowly
B. Lack of legal AI frameworks
C. The complexity of AI models making decisions difficult to explain
D. The cost of AI deployment

A

Correct Answer: C. The complexity of AI models making decisions difficult to explain

Explanation: Complex AI systems, such as deep learning models, often operate as ‘black boxes,’ making their decision-making processes hard to interpret.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
133
Q

Which of the following is NOT a requirement for AI compliance in regulated industries?
A) Bias audits
B) Explainability documentation
C) Open-source licensing
D) Security assessments

A

C) Open-source licensing
Explanation: While bias audits, security, and explainability are required, AI models do not have to be open-source to comply with regulations.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
134
Q

How does AI risk classification in the EU AI Act compare with the risk assessment approach in the NIST AI RMF?
A) The EU AI Act categorizes AI into strict risk levels, while NIST AI RMF provides adaptable risk management strategies
B) Both frameworks use the exact same risk classification system
C) The NIST AI RMF bans all high-risk AI applications, while the EU AI Act does not
D) The EU AI Act does not classify AI risks, while NIST AI RMF does

A

A) The EU AI Act categorizes AI into strict risk levels, while NIST AI RMF provides adaptable risk management strategies
Explanation: The EU AI Act mandates specific obligations based on an AI system’s risk level, while the NIST AI RMF allows organizations to determine how they manage AI risks.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
135
Q

Which AI governance approach is emphasized in both the NIST AI RMF and the EU AI Act for ensuring AI fairness?
A) Preventing the use of AI in commercial applications
B) Conducting algorithmic impact assessments and bias audits
C) Mandating the use of explainable AI (XAI) for all models
D) Banning the use of AI in predictive analytics

A

B) Conducting algorithmic impact assessments and bias audits
Explanation: Both frameworks emphasize fairness through impact assessments and bias audits to identify and mitigate discriminatory AI behavior.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
136
Q

Which of the following AI applications would likely be classified as ‘Limited Risk’ under the EU AI Act?
A. AI used in hiring processes
B. AI-generated chatbots and virtual assistants
C. AI-powered criminal sentencing recommendations
D. AI in facial recognition for real-time law enforcement

A

Correct Answer: B. AI-generated chatbots and virtual assistants

Explanation: Limited-risk AI applications require transparency measures, such as informing users that they are interacting with AI, but do not face strict regulatory compliance obligations.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
137
Q

Which of the following is a technical AI risk control?
A. Employee AI ethics training
B. Using firewalls and encryption to protect AI systems
C. Implementing AI decision-making policies
D. Deploying AI in an experimental environment

A

Correct Answer: B. Using firewalls and encryption to protect AI systems

Explanation: Technical controls ensure AI security through mechanisms such as encryption, firewalls, and adversarial robustness testing.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
138
Q

Which component of AI architecture is primarily responsible for identifying patterns in image data?
A) Decision Trees
B) Recurrent Neural Networks (RNNs)
C) Convolutional Neural Networks (CNNs)
D) Reinforcement Learning

A

C) Convolutional Neural Networks (CNNs)
Explanation: CNNs are optimized for processing visual data, using convolutional layers to detect spatial hierarchies of features.
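
The convolution operation at the heart of a CNN layer can be sketched in plain Python: a small kernel slides over the image and responds strongly where a spatial pattern (here, a vertical edge) appears. The image and kernel values are invented for the example.

```python
def conv2d(image, kernel):
    """Valid-mode 2D convolution (cross-correlation, as in CNN layers):
    slide the kernel over the image and sum elementwise products."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            s = sum(image[i + a][j + b] * kernel[a][b]
                    for a in range(kh) for b in range(kw))
            row.append(s)
        out.append(row)
    return out

# A 4x4 image with a bright right half; the [-1, 1] kernel responds
# only at the column where intensity jumps left-to-right.
image = [[0, 0, 9, 9]] * 4
edge_map = conv2d(image, [[-1, 1]])
```

A trained CNN learns many such kernels, stacking them into layers that detect progressively more abstract spatial features.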

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
139
Q

Which of the following is NOT a high-risk AI application under the EU AI Act?
A. AI used in employment recruitment and hiring
B. AI-driven medical diagnostic tools
C. AI-powered virtual assistants for general consumer use
D. AI used in law enforcement predictive policing

A

Correct Answer: C. AI-powered virtual assistants for general consumer use

Explanation: AI virtual assistants fall under limited-risk AI, which only requires transparency measures. High-risk AI applications impact fundamental rights and require stringent compliance measures.

140
Q

Which organization provides the AIGP certification?
A. ISO
B. IAPP
C. IEEE
D. NIST

A

Correct Answer: B. IAPP

Explanation: The International Association of Privacy Professionals (IAPP) offers the AIGP certification to professionals specializing in AI governance and compliance.

141
Q

What must AI providers of high-risk AI systems do before placing their product on the market?
A. Conduct internal AI risk assessments only
B. Submit AI models for public testing
C. Undergo third-party conformity assessments and ensure compliance with transparency and risk management requirements
D. Rely on AI-generated compliance self-reports

A

Correct Answer: C. Undergo third-party conformity assessments and ensure compliance with transparency and risk management requirements

Explanation: The EU AI Act mandates that high-risk AI providers submit to external audits and maintain detailed compliance documentation.

142
Q

Which factor is most critical in managing AI model drift?
A) Model complexity
B) Continuous monitoring and retraining
C) Increasing dataset size
D) Reducing the number of training epochs

A

B) Continuous monitoring and retraining
Explanation: AI drift occurs when data patterns shift over time, requiring constant monitoring and periodic model retraining.
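
In practice, continuous monitoring often reduces to a check like the following sketch, where the baseline accuracy and tolerance are invented example values:

```python
def needs_retraining(accuracy_window, baseline=0.90, tolerance=0.05):
    """Continuous-monitoring check: trigger retraining when mean
    recent accuracy drops more than `tolerance` below the accuracy
    measured at deployment time (`baseline`)."""
    recent = sum(accuracy_window) / len(accuracy_window)
    return recent < baseline - tolerance
```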

143
Q

Which AI deployment option provides the highest control over data privacy?
A) Cloud hosting
B) On-premise deployment
C) Edge computing
D) Serverless AI

A

B) On-premise deployment
Explanation: On-premise AI ensures full control over data, making it the best option for privacy-sensitive applications.

144
Q

Which AI model type is best suited for classification tasks such as spam detection?
A) Regression models
B) Recommendation systems
C) Decision trees
D) Clustering algorithms

A

C) Decision trees
Explanation: Classification problems, such as spam detection, are best solved using supervised learning models like decision trees, SVMs, or deep learning classifiers.

145
Q

Which AI risk level under the EU AI Act requires only transparency obligations?
A) High Risk
B) Minimal Risk
C) Limited Risk
D) Unacceptable Risk

A

C) Limited Risk
Explanation: Limited-risk AI systems must comply with transparency requirements but do not face stringent compliance rules like high-risk systems.

146
Q

What does AI deployment packaging refer to?
A) Storing training data in cloud repositories
B) Defining how AI models, dependencies, and configurations are bundled
C) Reducing AI model size for faster processing
D) Encrypting AI models for security

A

B) Defining how AI models, dependencies, and configurations are bundled
Explanation: AI deployment packaging ensures compatibility across environments by bundling dependencies and configurations.
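
One way to picture a deployment package is as a manifest that travels with the model; all file names and version pins below are hypothetical:

```python
import json

# Hypothetical deployment manifest: the model artifact, its pinned
# dependencies, and its runtime configuration bundled into one record
# so any target environment can reproduce the same setup.
manifest = {
    "model": "sentiment-clf-v3.onnx",
    "dependencies": {"onnxruntime": "1.17.0", "numpy": "1.26.4"},
    "config": {"batch_size": 16, "device": "cpu"},
}

def package(manifest):
    """Serialize the bundle description for shipping alongside the model."""
    return json.dumps(manifest, sort_keys=True)
```

Container images (e.g., Docker) serve the same purpose at the operating-system level: model, dependencies, and configuration frozen together.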

147
Q

How does the concept of AI risk management in the NIST AI RMF apply to proprietary AI models?
A) NIST AI RMF emphasizes continuous risk assessment, which is difficult for proprietary AI due to lack of transparency
B) Proprietary AI models are automatically risk-free under NIST AI RMF
C) The EU AI Act does not regulate proprietary AI models
D) AI risk management is only relevant for open-source models

A

A) NIST AI RMF emphasizes continuous risk assessment, which is difficult for proprietary AI due to lack of transparency
Explanation: Proprietary AI models often limit access to their internal workings, making compliance with risk management frameworks more challenging.

148
Q

The FTC’s primary role in AI regulation involves:
A) Creating AI safety standards for high-risk models
B) Ensuring AI tools comply with consumer protection and fair competition laws
C) Conducting pre-market approval for all AI-powered products
D) Licensing AI-based hiring and credit decision models

A

B) Ensuring AI tools comply with consumer protection and fair competition laws
Explanation: The FTC enforces consumer protection and fair competition laws, ensuring AI models do not engage in deceptive practices.

149
Q

Which entity is responsible for ensuring AI compliance when an AI model is developed outside the EU but deployed within the EU?
A. The European Commission
B. The AI deployer or importer within the EU
C. The AI provider from the foreign country
D. The end users of the AI system

A

Correct Answer: B. The AI deployer or importer within the EU

Explanation: Importers and deployers must ensure that AI systems comply with EU laws before deployment, even if the model was developed outside the EU.

150
Q

How does the NIST AI RMF’s ‘Manage’ function relate to regulatory compliance for high-risk AI models in the EU AI Act?
A) NIST AI RMF’s ‘Manage’ function encourages organizations to implement continuous risk mitigation strategies, which aligns with the EU AI Act’s high-risk AI obligations
B) The EU AI Act does not regulate risk management for high-risk AI
C) NIST AI RMF does not require risk mitigation
D) High-risk AI models do not require risk management under any framework

A

A) NIST AI RMF’s ‘Manage’ function encourages organizations to implement continuous risk mitigation strategies, which aligns with the EU AI Act’s high-risk AI obligations
Explanation: Both frameworks emphasize the importance of actively managing and mitigating AI risks through structured processes.

151
Q

What is a major challenge in enforcing AI governance under the EU AI Act?
A. AI providers refusing to develop new models
B. Difficulty in monitoring AI compliance across various jurisdictions
C. Over-reliance on AI-generated regulatory guidelines
D. AI being completely banned in the EU

A

Correct Answer: B. Difficulty in monitoring AI compliance across various jurisdictions

Explanation: Since AI systems can be developed outside the EU but deployed within it, enforcing compliance requires coordinated monitoring and oversight mechanisms.

152
Q

Which of the following is a compliance requirement for proprietary models under the EU AI Act?
A) Disclose their full source code
B) Provide transparency reports on AI risk assessments
C) Ensure free access to their training datasets
D) Remove all proprietary rights before deployment

A

B) Provide transparency reports on AI risk assessments
Explanation: The EU AI Act mandates that providers of high-risk AI models disclose risk assessments and mitigation strategies.

153
Q

Which of the following best describes an AI Provider under the EU AI Act?
A) An entity that imports AI systems from outside the EU
B) An individual or company that develops or places an AI system on the EU market
C) A regulatory body overseeing AI compliance
D) A customer purchasing AI services

A

B) An individual or company that develops or places an AI system on the EU market
Explanation: AI Providers create, modify, or distribute AI models and are subject to the highest regulatory obligations.

154
Q

What is the primary goal of AI regulatory compliance?
A) To maximize AI autonomy
B) To ensure AI systems adhere to legal, ethical, and operational standards
C) To eliminate all AI-related risks
D) To allow AI systems to self-regulate

A

Correct Answer: B) To ensure AI systems adhere to legal, ethical, and operational standards

Explanation: AI regulatory compliance ensures that AI applications align with legal requirements and ethical principles, protecting users and organizations.

155
Q

Which of the following best describes ISO 31000:2018?
A) A risk management framework for AI governance
B) A financial risk assessment methodology
C) A cybersecurity compliance standard
D) A global AI regulation law

A

A) A risk management framework for AI governance
Explanation: ISO 31000:2018 offers guidelines for risk assessment across industries, including AI.

156
Q

Why does the lack of transparency in proprietary AI models create compliance challenges under both the EU AI Act and NIST AI RMF?
A) It makes auditing, risk assessment, and bias mitigation harder
B) Proprietary AI models are always transparent by default
C) The EU AI Act does not require transparency in AI models
D) NIST AI RMF does not recommend transparency as a risk management principle

A

A) It makes auditing, risk assessment, and bias mitigation harder
Explanation: Proprietary models restrict access to decision-making processes, making it difficult to ensure compliance with transparency and fairness requirements.

157
Q

Which deployment strategy exposes AI models to external applications?
A) API integration
B) Edge computing
C) Containerized deployment
D) On-premise hosting

A

A) API integration
Explanation: REST APIs enable external applications and users to interact with AI models, facilitating broader adoption.

158
Q

Which U.S. federal agency developed the Blueprint for an AI Bill of Rights?
A) National Institute of Standards and Technology (NIST)
B) White House Office of Science and Technology Policy (OSTP)
C) Federal Communications Commission (FCC)
D) Consumer Financial Protection Bureau (CFPB)

A

B) White House Office of Science and Technology Policy (OSTP)
Explanation: The OSTP introduced the Blueprint for an AI Bill of Rights to protect privacy, ensure fairness, and promote AI safety.

159
Q

What is the legal effect of an EU AI Act Article?
A) Non-binding but offers interpretative guidance
B) Legally binding, setting out rights and obligations
C) Applicable only to AI developed within the EU
D) A recommendation rather than a requirement

A

B) Legally binding, setting out rights and obligations
Explanation: Articles establish substantive legal rules and obligations with direct effect.

160
Q

Which of the following is NOT a recognized privacy law?
A) GDPR
B) CCPA
C) AI Bill of Rights
D) Biometric Information Privacy Act (BIPA)

A

Correct Answer: C) AI Bill of Rights

Explanation: While the AI Bill of Rights provides ethical guidelines for AI development, it is not a legally binding privacy law like GDPR, CCPA, or BIPA.

161
Q

According to ISO 42001:2023, AI risk assessments should focus on:
A) Algorithm complexity
B) AI system lifecycle stages
C) AI’s environmental impact
D) Public perception of AI risk

A

B) AI system lifecycle stages
Explanation: ISO 42001:2023 provides a structured approach to managing risks across AI development, deployment, and maintenance stages.

162
Q

Which key privacy principle requires organizations to collect only necessary personal data?
A) Purpose Limitation
B) Data Minimization
C) Data Portability
D) Algorithmic Transparency

A

Correct Answer: B) Data Minimization

Explanation: Data Minimization ensures that organizations collect only the data needed for specific purposes, reducing privacy risks and potential misuse.
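
Data Minimization can be sketched as an allow-list applied at collection time; the field names and purpose below are invented for the example:

```python
# Fields needed for the stated purpose (order fulfilment, in this
# invented example); everything else is dropped before storage.
REQUIRED_FIELDS = {"name", "shipping_address", "email"}

def minimize(record: dict) -> dict:
    """Keep only the fields necessary for the declared purpose."""
    return {k: v for k, v in record.items() if k in REQUIRED_FIELDS}
```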

163
Q

What is the maximum fine for non-compliance with the EU AI Act for deploying banned AI systems?
A. €2 million
B. €10 million or 2% of global turnover
C. €35 million or 7% of global turnover
D. No fines, only a warning

A

Correct Answer: C. €35 million or 7% of global turnover

Explanation: The EU AI Act enforces heavy penalties for serious violations, such as deploying prohibited AI systems that threaten fundamental rights.

164
Q

Which security measure helps protect AI models from unauthorized access?
A) Data encryption
B) Model compression
C) Reducing dataset size
D) Increasing computational power

A

A) Data encryption
Explanation: Encrypting AI models helps prevent unauthorized access and protects sensitive training data.
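
For illustration only, a toy XOR cipher shows the encrypt/decrypt round trip on model bytes; production systems must use vetted algorithms (e.g., AES-GCM via a maintained cryptography library), never a hand-rolled cipher:

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # Toy XOR cipher for illustration ONLY -- not secure for reuse;
    # real deployments rely on vetted, audited encryption libraries.
    return bytes(b ^ k for b, k in zip(data, key))

model_weights = b"\x01\x02\x03\x04"          # stand-in for a model file
key = secrets.token_bytes(len(model_weights))  # one-time random key
ciphertext = xor_bytes(model_weights, key)     # stored at rest
restored = xor_bytes(ciphertext, key)          # decrypted for serving
```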

165
Q

Which regulatory requirement from the EU AI Act aligns with the ‘Govern’ function of the NIST AI RMF?
A) The requirement to make AI systems open-source
B) The obligation to conduct Fundamental Rights Impact Assessments (FRIA)
C) The requirement to remove bias from all AI models before deployment
D) The ban on all AI models classified as high-risk

A

B) The obligation to conduct Fundamental Rights Impact Assessments (FRIA)
Explanation: The ‘Govern’ function of NIST AI RMF focuses on establishing policies and accountability structures, which aligns with the EU AI Act’s requirement for impact assessments in high-risk AI applications.

166
Q

Which principle best defines the ‘Pannu Factors’ used in determining patent inventorship?
A) Novelty, nonobviousness, and utility
B) Significance, quality, and addition of something new
C) Human involvement, economic impact, and originality
D) Algorithm complexity, predictability, and applicability

A

B) Significance, quality, and addition of something new
Explanation: The ‘Pannu Factors’ were established in Pannu v. Iolab Corp. (Fed. Cir. 1998) to determine whether a person made a substantial contribution to an invention.

167
Q

Which of the following is a fundamental principle of responsible AI development?
A) Maximizing AI autonomy
B) Ensuring fairness, accountability, and transparency (FAT)
C) Eliminating all human oversight
D) Prioritizing AI model complexity over usability

A

Correct Answer: B) Ensuring fairness, accountability, and transparency (FAT)

Explanation: Fairness, accountability, and transparency (FAT) principles guide responsible AI development to prevent discrimination and promote ethical AI use.

168
Q

Which of the following is NOT one of the five categories in the OECD AI Classification Framework?
A) People and Planet
B) Data and Input
C) Algorithmic Complexity
D) Economic Context

A

C) Algorithmic Complexity
Explanation: The framework includes ‘People and Planet,’ ‘Data and Input,’ ‘AI Model,’ ‘Tasks and Output,’ and ‘Economic Context,’ but not ‘Algorithmic Complexity’.

169
Q

What is a Deployer under the EU AI Act?
A) An entity responsible for AI risk assessment and bias audits
B) Any organization or individual using an AI system under their authority
C) The European body responsible for monitoring AI compliance
D) A company that builds AI infrastructure for cloud platforms

A

B) Any organization or individual using an AI system under their authority
Explanation: Deployers use AI models in operational settings and must ensure responsible usage.

170
Q

Which AI liability model holds developers responsible even if they were not negligent?
A) Fault-based liability
B) Strict liability
C) Risk-aware liability
D) Data-driven liability

A

B) Strict liability
Explanation: Strict liability applies when AI causes harm, regardless of whether negligence was involved.

171
Q

Which term refers to the risk of AI producing misleading, incorrect, or unexpected results?
A) Model explainability
B) Hallucination
C) Reinforcement learning bias
D) Transfer learning

A

B) Hallucination
Explanation: Hallucination occurs when AI generates responses that are not based on real or accurate data.

172
Q

Which AI deployment strategy offers the highest flexibility and scalability?
A) Cloud
B) On-premise
C) Edge
D) Local machine hosting

A

A) Cloud
Explanation: Cloud deployment allows for scalability, flexibility, and resource sharing, making it ideal for large-scale AI models.

173
Q

Which of the following is an example of AI model retraining?
A) Increasing hardware processing power
B) Updating training data and fine-tuning the model
C) Reducing dataset size
D) Using older model versions

A

B) Updating training data and fine-tuning the model
Explanation: Retraining AI involves updating datasets and adjusting model parameters to improve accuracy.

174
Q

What is a key governance challenge when deploying proprietary AI models?
A) Lack of transparency
B) Reduced infrastructure costs
C) Public access to training data
D) Open-source compatibility

A

A) Lack of transparency
Explanation: Proprietary AI models are often ‘black boxes,’ making it difficult to audit or explain their decision-making processes.

175
Q

What is the purpose of ‘Privacy by Design’ in AI governance?
A) Embedding privacy measures throughout the software development life cycle
B) Applying privacy measures only after system deployment
C) Allowing AI to self-regulate privacy concerns
D) Removing all restrictions on AI data processing

A

Correct Answer: A) Embedding privacy measures throughout the software development life cycle

Explanation: Privacy by Design (PbD) integrates privacy protections from the initial stages of AI system design rather than applying them retroactively.

176
Q

Which of the following is not a phase in the AI Development Lifecycle?
A) Plan
B) Design
C) Monitor
D) Deploy

A

C) Monitor
Explanation: The AI Development Lifecycle consists of four phases: Plan, Design, Develop, and Deploy. Monitoring is part of the ‘Adapt and Govern’ phase post-deployment.

177
Q

Which principle applies to both proprietary AI models and high-risk AI compliance under the EU AI Act?
A) AI models must undergo impact assessments and document risks
B) All proprietary AI models must be open-source
C) AI models are exempt from regulatory oversight
D) Proprietary AI models are not considered high-risk

A

A) AI models must undergo impact assessments and document risks
Explanation: Both proprietary AI and high-risk AI applications must comply with transparency, accountability, and risk assessment obligations to meet legal and ethical standards.

178
Q

Which of the following best describes explainable AI (XAI)?
A) AI models that operate independently without human oversight
B) AI models designed to provide transparent and understandable decision-making
C) AI that eliminates all risks and uncertainties
D) AI that generates random outputs

A

Correct Answer: B) AI models designed to provide transparent and understandable decision-making

Explanation: Explainable AI (XAI) enhances trust by ensuring AI decisions are interpretable and justifiable, particularly in high-stakes applications like healthcare and finance.

179
Q

What is the primary objective of OECD AI Classification?
A) To determine AI system efficiency
B) To identify policy implications of deploying AI
C) To establish technical AI performance benchmarks
D) To regulate AI systems globally

A

B) To identify policy implications of deploying AI
Explanation: The classification framework helps policymakers assess risks and ethical concerns associated with AI deployment.

180
Q

Which compliance requirement is common to both high-risk AI models under the EU AI Act and AI risk management under the NIST AI RMF?
A) Prohibition of black-box AI models
B) Documentation of AI decision-making processes
C) Automatic exemption of proprietary AI models from regulations
D) Requirement to train AI models exclusively in the EU

A

B) Documentation of AI decision-making processes
Explanation: Both frameworks emphasize the need for detailed documentation to ensure AI accountability, transparency, and risk management.

181
Q

What does Privacy by Default ensure in data protection?
A) The highest level of privacy settings is applied automatically
B) Users must manually adjust their privacy settings
C) AI systems are exempt from privacy regulations
D) Privacy settings are irrelevant in AI systems

A

Correct Answer: A) The highest level of privacy settings is applied automatically

Explanation: Privacy by Default ensures that users are protected by default settings that uphold strong privacy protections, reducing the risk of data misuse.

182
Q

How do General Purpose AI (GPAI) models present unique compliance challenges under both the NIST AI RMF and EU AI Act?
A) GPAI models are always classified as minimal risk
B) The EU AI Act regulates GPAI at the EU level, while NIST AI RMF provides voluntary guidance on managing their risks
C) GPAI models do not require any regulatory oversight
D) NIST AI RMF does not apply to foundation models

A

B) The EU AI Act regulates GPAI at the EU level, while NIST AI RMF provides voluntary guidance on managing their risks
Explanation: The EU AI Act establishes specific regulatory requirements for foundation models like GPAI, while NIST AI RMF offers adaptable risk assessment approaches.

183
Q

Which governance measure helps prevent AI bias in decision-making?
A) Implementing explainability and transparency mechanisms
B) Allowing AI models to operate without human oversight
C) Reducing AI training datasets
D) Preventing all AI applications in critical sectors

A

A) Implementing explainability and transparency mechanisms
Explanation: Ensuring AI transparency helps identify and mitigate biased decision-making.

184
Q

What is the key objective of federated learning?
A) Training a model across multiple devices without data centralization
B) Encrypting AI models for better security
C) Running AI models faster in cloud environments
D) Training models only on structured datasets

A

A) Training a model across multiple devices without data centralization
Explanation: Federated learning allows multiple devices to train AI models collaboratively while keeping data decentralized.
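
The core of federated learning — averaging locally trained weights instead of pooling raw data — can be sketched in a few lines (FedAvg with equal client weighting, a simplifying assumption):

```python
def federated_average(client_weights):
    """FedAvg sketch: each client trains locally and shares only its
    model weights; the server averages them coordinate-wise. Raw
    training data never leaves the clients' devices."""
    n = len(client_weights)
    dim = len(client_weights[0])
    return [sum(w[i] for w in client_weights) / n for i in range(dim)]
```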

185
Q

What is the primary role of machine perception in robotics?
A) Enhancing a robot’s ability to learn new tasks
B) Providing sensory input for AI to interpret environmental data
C) Limiting a robot’s interaction with its surroundings
D) Eliminating human oversight in robotic decision-making

A

Correct Answer: B) Providing sensory input for AI to interpret environmental data

Explanation: Machine perception allows robots to process sensory data such as vision, sound, and touch, enabling autonomous interaction with their environment.

186
Q

What major challenge does AI ‘memorization’ pose for copyright law?
A) AI is unable to recall training data precisely
B) It is difficult to determine whether a model has memorized copyrighted material
C) AI models store information exactly as found in copyrighted books
D) AI developers are legally required to delete all training data after use

A

B) It is difficult to determine whether a model has memorized copyrighted material
Explanation: AI models store distributed representations of data, making it difficult to assess whether copyrighted works have been memorized.

187
Q

Which AI use case does NOT fall under the ‘Unacceptable Risk’ category?
A. AI for subliminal manipulation influencing human behavior
B. AI-driven hiring tools with bias detection capabilities
C. AI used for indiscriminate biometric categorization
D. Social scoring AI used for assessing individual behaviors

A

Correct Answer: B. AI-driven hiring tools with bias detection capabilities

Explanation: While AI in hiring is considered high-risk, it is not banned. AI systems that manipulate individuals or categorize people based on biometric data in a discriminatory way fall under unacceptable risk.

188
Q

What is the primary function of AI monitoring dashboards?
A) Improve AI model accuracy
B) Track performance metrics and detect anomalies
C) Reduce data storage costs
D) Ensure AI models are publicly accessible

A

B) Track performance metrics and detect anomalies
Explanation: AI dashboards allow real-time monitoring to ensure model stability and detect performance issues.
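
The anomaly-detection logic behind such a dashboard can be sketched as a rolling-baseline check. This is a simplified illustration; the window size, threshold, and daily accuracy values are assumptions for the example.

```python
# Flag a metric reading that drifts more than 3 standard deviations
# from its recent rolling baseline (a simple monitoring rule).
from statistics import mean, stdev

def detect_anomalies(readings, window=5, threshold=3.0):
    alerts = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma and abs(readings[i] - mu) > threshold * sigma:
            alerts.append(i)  # index of the anomalous reading
    return alerts

# Model accuracy per day; day 7 shows a sudden performance drop
accuracy = [0.91, 0.92, 0.90, 0.91, 0.93, 0.92, 0.91, 0.62, 0.90]
print(detect_anomalies(accuracy))  # [7]
```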

189
Q

Which EU directive applies strict liability to AI systems?
A) The General Product Safety Regulation
B) The AI Liability Directive
C) The Digital Services Act
D) The Reformed Product Liability Directive

A

D) The Reformed Product Liability Directive
Explanation: The Reformed PLD applies strict liability to AI, meaning developers and deployers are automatically responsible if an AI product is found defective, without requiring proof of negligence.

191
Q

How does the NIST AI RMF’s continuous risk monitoring requirement align with post-deployment monitoring obligations in the EU AI Act?
A) Both frameworks emphasize the need for ongoing monitoring of AI model performance and risk factors
B) Post-deployment monitoring is only required for proprietary AI models
C) The EU AI Act does not require AI monitoring after deployment
D) NIST AI RMF does not address AI monitoring

A

A) Both frameworks emphasize the need for ongoing monitoring of AI model performance and risk factors
Explanation: Continuous monitoring ensures that AI models remain compliant, fair, and free of unintended risks over time.

192
Q

Which privacy principle ensures that data is used only for the specified purpose?
A) Purpose Limitation
B) Fairness and Accountability
C) AI Autonomy
D) Automated Decision-Making

A

Correct Answer: A) Purpose Limitation

Explanation: Purpose Limitation ensures that organizations clearly define and adhere to the intended purpose of collected personal data.

193
Q

What is the primary purpose of red teaming in AI security?
A) Improve AI model accuracy
B) Identify AI system vulnerabilities through simulated attacks
C) Reduce AI inference costs
D) Increase the speed of AI decision-making

A

B) Identify AI system vulnerabilities through simulated attacks
Explanation: Red teaming tests AI security by simulating real-world attacks to identify weaknesses.
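
The red-teaming idea can be illustrated with a deliberately naive content filter probed by adversarial inputs. The filter and probe strings are hypothetical; real red teaming uses far richer attack suites.

```python
# Toy red-teaming sketch: probe a naive keyword filter with
# adversarial variants of a blocked phrase to expose weaknesses.

def naive_filter(text):
    return "blocked" if "secret" in text.lower() else "allowed"

probes = ["secret", "s e c r e t", "s3cret"]  # simulated attack inputs
for p in probes:
    print(p, "->", naive_filter(p))
# The spaced and leetspeak variants slip through: vulnerabilities found
```

Cataloguing which probes bypass the control is exactly the "identify weaknesses through simulated attacks" step the card describes.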

194
Q

Why is data governance a critical challenge for AI models regulated under both the EU AI Act and the NIST AI RMF?
A) Data governance ensures that AI models comply with fairness, transparency, and privacy regulations
B) Data governance is optional under both frameworks
C) Data governance only applies to AI used in government agencies
D) Data governance is only relevant for open-source AI models

A

A) Data governance ensures that AI models comply with fairness, transparency, and privacy regulations
Explanation: Effective data governance is essential for ensuring compliance with legal, ethical, and operational requirements for AI models.

195
Q

How does NIST AI RMF’s risk classification approach differ from the EU AI Act’s fixed risk levels?
A) The EU AI Act assigns AI systems to specific risk categories, while NIST AI RMF allows organizations to define their own risk management approach
B) Both frameworks classify AI risks using the same methodology
C) NIST AI RMF prohibits the use of high-risk AI applications
D) The EU AI Act does not classify AI risks

A

A) The EU AI Act assigns AI systems to specific risk categories, while NIST AI RMF allows organizations to define their own risk management approach
Explanation: The EU AI Act mandates compliance based on predefined risk levels, while NIST AI RMF promotes a flexible, context-based risk management strategy.

196
Q

How does the NIST AI RMF’s ‘Manage’ function compare to the EU AI Act’s obligations for high-risk AI applications?
A) The NIST AI RMF encourages voluntary risk management, while the EU AI Act mandates specific risk mitigation steps
B) The NIST AI RMF bans the use of high-risk AI models
C) The EU AI Act does not regulate risk mitigation strategies
D) Both frameworks require AI models to be open-source

A

A) The NIST AI RMF encourages voluntary risk management, while the EU AI Act mandates specific risk mitigation steps
Explanation: While NIST AI RMF provides flexible guidance, the EU AI Act legally requires compliance with structured risk mitigation obligations for high-risk AI models.

197
Q

Which governance best practice can help organizations ensure AI transparency and accountability?
A) Keeping AI model decisions confidential
B) Using explainability tools and conducting bias audits
C) Relying solely on third-party auditors
D) Reducing compliance checks to speed up AI deployment

A

B) Using explainability tools and conducting bias audits
Explanation: Transparency involves making AI decision-making processes interpretable and conducting audits to detect biases.

198
Q

Which U.S. agency is responsible for enforcing laws related to AI in employment discrimination?
A) The Federal Trade Commission (FTC)
B) The Equal Employment Opportunity Commission (EEOC)
C) The Securities and Exchange Commission (SEC)
D) The Consumer Financial Protection Bureau (CFPB)

A

B) The Equal Employment Opportunity Commission (EEOC)
Explanation: The EEOC enforces laws prohibiting workplace discrimination and has issued guidance on AI-based hiring tools to ensure compliance with Title VII of the Civil Rights Act.

199
Q

Which method can help reduce bias in AI models?
A) Using diverse and representative training datasets
B) Eliminating all historical data
C) Preventing human oversight in AI decision-making
D) Allowing AI models to self-train without constraints

A

Correct Answer: A) Using diverse and representative training datasets

Explanation: Ensuring diverse datasets helps reduce bias, improving AI model fairness and preventing discriminatory outcomes.

200
Q

In AI planning, which of the following is not a key question to ask?
A) What are the business objectives?
B) How can AI improve efficiency?
C) How many hidden layers should the model have?
D) What KPIs should be tracked?

A

C) How many hidden layers should the model have?
Explanation: AI planning focuses on business objectives, KPIs, and efficiency rather than specific technical model configurations.

201
Q

Which of the following best describes ‘algorithmic transparency’ in AI governance?
A) Making AI decision-making processes explainable and understandable
B) Allowing AI systems to self-regulate
C) Restricting all AI-driven decisions
D) Eliminating the need for human oversight

A

Correct Answer: A) Making AI decision-making processes explainable and understandable

Explanation: Algorithmic transparency ensures that AI decisions are interpretable, allowing stakeholders to understand how AI systems generate outcomes.

202
Q

Which AI principle ensures that AI systems remain under human control?
A) AI autonomy
B) Human-in-the-loop oversight
C) AI self-regulation
D) Automated governance

A

Correct Answer: B) Human-in-the-loop oversight

Explanation: Human-in-the-loop AI requires human intervention in AI decision-making, ensuring safety and ethical alignment in critical applications like healthcare and law enforcement.

203
Q

What is the purpose of AI governance?
A. To ensure responsible AI development and deployment
B. To replace human oversight in AI decision-making
C. To increase AI autonomy in business operations
D. To develop AI without regulatory constraints

A

Correct Answer: A. To ensure responsible AI development and deployment

Explanation: AI governance establishes ethical, legal, and operational frameworks to guide AI development and mitigate risks.

204
Q

What is the primary characteristic of a proprietary AI model?
A) It is freely available to the public
B) It is controlled by a private entity with restricted access
C) It is always open-source
D) It operates exclusively on publicly available datasets

A

B) It is controlled by a private entity with restricted access
Explanation: Proprietary AI models are owned and controlled by private companies, restricting access to their architecture and training data.

205
Q

How does AI fairness relate to both proprietary AI models and compliance with the NIST AI RMF?
A) Proprietary AI models are always unbiased and do not require fairness audits
B) The NIST AI RMF recommends fairness evaluations, while proprietary AI models may lack transparency, making fairness assessments difficult
C) Fairness is only required for AI models deployed by public institutions
D) Fairness requirements only apply to AI models developed in the EU

A

B) The NIST AI RMF recommends fairness evaluations, while proprietary AI models may lack transparency, making fairness assessments difficult
Explanation: Proprietary models often do not disclose their data sources, making fairness audits harder, while NIST AI RMF recommends organizations implement fairness evaluation practices.

206
Q

Which of the following AI applications is classified as high-risk under the EU AI Act?
A) AI-powered loan approval systems
B) AI-generated movie recommendations
C) AI used in gaming chatbots
D) AI-driven photo editing tools

A

Correct Answer: A) AI-powered loan approval systems

Explanation: AI-driven credit scoring and loan approvals impact financial decision-making, requiring fairness, transparency, and risk assessments under the EU AI Act.

207
Q

What does Article 16 of the EU AI Act require AI providers to do?
A) Conduct technical documentation and risk management
B) Apply for AI patents before market release
C) Ensure AI systems operate independently without oversight
D) Limit AI model training to European datasets

A

A) Conduct technical documentation and risk management
Explanation: AI providers must maintain records, assess risks, and document system capabilities.

208
Q

Which of the following is an example of AI explainability?
A) Providing clear justifications for an AI-generated credit approval decision
B) Encrypting all AI models for security
C) Increasing AI model complexity for better performance
D) Reducing AI bias by eliminating training data

A

Correct Answer: A) Providing clear justifications for an AI-generated credit approval decision

Explanation: AI explainability ensures that users and regulators can understand how AI decisions are made, improving trust and accountability.

209
Q

Which AI technique is commonly used for fraud detection?
A) Generative adversarial networks (GANs)
B) Anomaly detection algorithms
C) Neural style transfer
D) Reinforcement learning

A

Correct Answer: B) Anomaly detection algorithms

Explanation: Anomaly detection identifies patterns that deviate from normal behavior, making it effective for fraud detection in financial transactions.
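
A minimal fraud screen along these lines can be sketched with a z-score rule: score each new transaction by its distance from the customer's historical mean, in units of standard deviation. The threshold and spending history are illustrative assumptions.

```python
# Flag transactions that deviate sharply from a customer's history.
from statistics import mean, stdev

def is_suspicious(history, amount, threshold=3.0):
    mu, sigma = mean(history), stdev(history)
    z = (amount - mu) / sigma
    return abs(z) > threshold

history = [42.0, 55.0, 38.0, 60.0, 47.0, 51.0]  # typical card spend
print(is_suspicious(history, 49.0))    # False: in line with habit
print(is_suspicious(history, 950.0))   # True: extreme outlier
```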

210
Q

How does AI governance differ from AI ethics?
A) AI governance refers to regulatory compliance, while AI ethics focuses on moral AI principles
B) AI governance eliminates all ethical concerns
C) AI ethics is legally binding, while AI governance is voluntary
D) AI governance and AI ethics are the same

A

Correct Answer: A) AI governance refers to regulatory compliance, while AI ethics focuses on moral AI principles

Explanation: AI governance ensures compliance with laws and industry standards, while AI ethics addresses broader societal and moral concerns.

211
Q

What is the formula for a standard AI risk score?
A. AI accuracy * security measures
B. Severity of harm * probability of occurrence
C. Bias level * dataset size
D. AI system complexity * number of stakeholders

A

Correct Answer: B. Severity of harm * probability of occurrence

Explanation: AI risk is typically calculated by assessing the severity of harm an AI system may cause and the likelihood of such harm occurring.
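
The card's formula can be applied directly in code. The 1-5 severity scale, the probability values, and the example systems are illustrative assumptions, not prescribed by any framework.

```python
# Risk score = severity of harm * probability of occurrence.

def risk_score(severity, probability):
    """Severity on a 1-5 scale, probability of occurrence in [0, 1]."""
    return severity * probability

systems = {
    "credit-scoring model": risk_score(severity=5, probability=0.4),
    "movie recommender":    risk_score(severity=1, probability=0.6),
}
# Higher scores warrant stricter mitigation and oversight
print(sorted(systems.items(), key=lambda kv: kv[1], reverse=True))
```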

212
Q

What is the primary function of the EU AI Act’s risk-based classification system?
A. To ban all AI technologies in Europe
B. To categorize AI systems based on their potential risks and apply appropriate regulations
C. To give AI providers full autonomy without regulatory interference
D. To ensure only European-developed AI systems can operate in the EU

A

Correct Answer: B. To categorize AI systems based on their potential risks and apply appropriate regulations

Explanation: The EU AI Act classifies AI systems into unacceptable, high, limited, and minimal risk categories, imposing stricter requirements on higher-risk applications.

213
Q

Which regulatory principle in the EU AI Act aligns with the NIST AI RMF’s emphasis on explainability and trustworthiness?
A) The requirement for AI providers to conduct bias audits and transparency assessments
B) AI models must always operate without human oversight
C) The EU AI Act does not require AI explainability
D) The NIST AI RMF does not recommend AI trustworthiness measures

A

A) The requirement for AI providers to conduct bias audits and transparency assessments
Explanation: Both frameworks emphasize the need for AI systems to be explainable and trustworthy, particularly in high-risk applications.

214
Q

Which AI regulation focuses on data protection and privacy rights?
A) GDPR
B) ISO 31000
C) IEEE AI Ethics Guidelines
D) NIST AI RMF

A

Correct Answer: A) GDPR

Explanation: The General Data Protection Regulation (GDPR) enforces strict data privacy protections, requiring AI systems to comply with transparency and user rights.

215
Q

What is a primary challenge associated with unsupervised learning?
A) The need for extensive labeled data
B) Difficulty in evaluating model performance due to the lack of predefined categories
C) The inability to cluster similar data points
D) The requirement for pre-programmed decision trees

A

Correct Answer: B) Difficulty in evaluating model performance due to the lack of predefined categories

Explanation: Unlike supervised learning, unsupervised learning lacks labeled data, making it challenging to assess model accuracy and effectiveness.
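
The evaluation problem can be made concrete: without labels, clustering quality can only be judged by internal metrics such as within-cluster spread, never by accuracy against a ground truth. The 1-D toy data below is assumed for illustration.

```python
# Compare two candidate clusterings using an internal metric (inertia):
# sum of squared distances from each point to its cluster's mean.
from statistics import mean

def inertia(clusters):
    total = 0.0
    for points in clusters:
        c = mean(points)
        total += sum((p - c) ** 2 for p in points)
    return total

good_split = [[1.0, 1.2, 0.9], [9.8, 10.1, 10.0]]
bad_split = [[1.0, 1.2, 9.8], [0.9, 10.1, 10.0]]
print(inertia(good_split) < inertia(bad_split))  # True: tighter clusters
```

A lower inertia suggests tighter clusters, but it cannot tell us whether the clusters are *meaningful*, which is precisely the challenge the card highlights.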

216
Q

Which factor is critical when evaluating third-party AI vendors?
A) Explainability and documentation
B) Low-cost licensing
C) Open-source status
D) Model complexity

A

A) Explainability and documentation
Explanation: Third-party AI vendors must provide detailed documentation and explainability to ensure compliance and reduce risks.

217
Q

Which of the following is NOT a key factor in AI risk assessment?
A. Business purpose
B. AI’s ability to function independently
C. Training data and potential biases
D. Social impacts

A

Correct Answer: B. AI’s ability to function independently

Explanation: AI risk assessment focuses on identifying and mitigating potential harms, considering business use, data integrity, and societal implications.

218
Q

What is the primary purpose of the ‘Right to Erasure’ (Right to be Forgotten) under GDPR?
A) Allows individuals to request deletion of their personal data
B) Requires companies to store data indefinitely
C) Prevents AI systems from learning user preferences
D) Grants organizations unrestricted access to user data

A

Correct Answer: A) Allows individuals to request deletion of their personal data

Explanation: The Right to Erasure ensures individuals can request the removal of their personal data if it is no longer necessary or processed unlawfully.

219
Q

Which AI risk category under the EU AI Act is strictly prohibited?
A) High Risk
B) Minimal Risk
C) Unacceptable Risk
D) Limited Risk

A

C) Unacceptable Risk
Explanation: AI systems deemed Unacceptable Risk, such as social scoring and mass biometric surveillance, are banned under the AI Act.

220
Q

What is a key legal issue with AI-based decision-making in credit and lending?
A) AI systems must disclose their algorithms to consumers
B) AI can be sued directly for discriminatory outcomes
C) AI models may inadvertently create biases in credit decisions
D) AI-generated credit scores are automatically compliant with the Fair Credit Reporting Act (FCRA)

A

C) AI models may inadvertently create biases in credit decisions
Explanation: AI-based lending tools must comply with the FCRA and ECOA, ensuring fair credit decisions and preventing algorithmic bias.

221
Q

The FTC Algorithmic Disgorgement principle refers to:
A) The requirement for AI companies to delete models trained on unauthorized data
B) The obligation of AI developers to disclose their training data sources
C) The necessity for AI tools to be tested before commercial deployment
D) The financial penalties imposed on AI companies violating intellectual property laws

A

A) The requirement for AI companies to delete models trained on unauthorized data
Explanation: The FTC has ordered companies to delete both datasets and models built on improperly obtained personal data, as seen in cases against Amazon and Cambridge Analytica.

222
Q

Which of the following is not a component of AI readiness assessment?
A) Opportunity discovery
B) Data pipeline and governance
C) Model hyperparameter tuning
D) IT environment and security

A

C) Model hyperparameter tuning
Explanation: Hyperparameter tuning is part of AI development, whereas readiness assessment focuses on opportunity discovery, governance, and infrastructure needs.

223
Q

Which phase of the AI Development Lifecycle involves defining the business problem and mission?
A) Plan
B) Design
C) Develop
D) Deploy

A

A) Plan
Explanation: The ‘Plan’ phase focuses on defining the business problem, mission, data requirements, and governance before moving to system design.

224
Q

Which of the following is not an exception under the EU AI Act’s scope?
A) AI for military applications
B) AI used for research and development
C) Open-source AI models with high-risk potential
D) AI deployed by non-professional users

A

C) Open-source AI models with high-risk potential
Explanation: Open-source AI models are exempt unless classified as high-risk, in which case they must comply with AI Act provisions.

225
Q

What does the AI Bill of Rights primarily focus on?
A) AI performance optimization
B) Protecting users from harmful AI outcomes
C) Reducing AI model costs
D) Standardizing AI software development

A

B) Protecting users from harmful AI outcomes
Explanation: The AI Bill of Rights is designed to safeguard privacy, prevent bias, and ensure transparency in AI applications.

226
Q

Which ISO standard provides foundational terminology for AI governance?
A) ISO 31000:2018
B) ISO/IEC 22989:2022
C) ISO 27001
D) IEEE 7000-21

A

B) ISO/IEC 22989:2022
Explanation: This standard ensures consistent AI terminology to aid policymakers, developers, and auditors.

227
Q

What is a common governance challenge for organizations implementing both the NIST AI RMF and EU AI Act compliance measures?
A) The frameworks contradict each other on AI transparency
B) Both frameworks require organizations to define AI risks and implement governance policies, which can be complex across different jurisdictions
C) The NIST AI RMF does not require risk assessments, making compliance easier
D) The EU AI Act requires AI models to be closed-source, while NIST AI RMF mandates open-source implementation

A

B) Both frameworks require organizations to define AI risks and implement governance policies, which can be complex across different jurisdictions
Explanation: Both frameworks emphasize governance and accountability, requiring organizations to define risk management approaches that comply with different regulatory standards.

228
Q

Which key deployment challenge arises when using AI in regulated industries like healthcare and finance?
A) Increased training time
B) Compliance with industry laws and ethical guidelines
C) Higher model inference costs
D) Lack of available cloud providers

A

B) Compliance with industry laws and ethical guidelines
Explanation: AI models in regulated industries must comply with privacy, security, and ethical standards such as GDPR and HIPAA.

229
Q

What distinguishes high-risk AI from general-purpose AI (GPAI) under the EU AI Act?
A. High-risk AI requires more rigorous compliance measures due to its potential impact on rights and safety
B. General-purpose AI is banned from the EU market
C. High-risk AI operates only in industrial automation, whereas GPAI applies to social AI systems
D. High-risk AI systems are automatically exempt from regulation

A

Correct Answer: A. High-risk AI requires more rigorous compliance measures due to its potential impact on rights and safety

Explanation: High-risk AI must comply with transparency, risk mitigation, and documentation requirements, while GPAI has specific transparency and accountability measures but not as stringent as high-risk AI.

230
Q

How does the concept of ‘explainability’ in NIST AI RMF apply to General Purpose AI (GPAI) under the EU AI Act?
A) GPAI models face transparency and documentation requirements under the EU AI Act, aligning with NIST AI RMF’s best practices
B) Explainability is only required for AI models used in national security applications
C) The EU AI Act does not require any transparency for GPAI models
D) NIST AI RMF prohibits the use of AI models without explainability

A

A) GPAI models face transparency and documentation requirements under the EU AI Act, aligning with NIST AI RMF’s best practices
Explanation: The EU AI Act mandates transparency for GPAI, while NIST AI RMF suggests explainability as a best practice for AI risk management.

231
Q

Which of the following is not a transparency obligation for Limited-Risk AI Systems?
A) Inform users they are interacting with AI
B) Log every AI system decision publicly
C) Label AI-generated content in machine-readable format
D) Process personal data in compliance with GDPR

A

B) Log every AI system decision publicly
Explanation: AI providers must ensure transparency but are not required to make all AI decisions public.

232
Q

Which of the following best describes supervised learning?
A) AI is trained with labeled data
B) AI independently clusters similar data
C) AI receives no feedback during training
D) AI updates itself continuously without labeled data

A

Correct Answer: A) AI is trained with labeled data

Explanation: Supervised learning involves training models with labeled input-output pairs, allowing AI to map inputs to desired outputs.
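
The labeled input-output idea can be shown with a tiny 1-nearest-neighbor classifier. The training pairs and labels below are toy assumptions.

```python
# Supervised learning in miniature: fit on labeled (input, label) pairs,
# then map a new input to the label of its closest training example.

def predict(labeled_pairs, x):
    """Return the label of the training input nearest to x."""
    nearest = min(labeled_pairs, key=lambda pair: abs(pair[0] - x))
    return nearest[1]

# Labeled training set: transaction amount -> "ok" / "flag"
training = [(10, "ok"), (25, "ok"), (40, "ok"), (900, "flag"), (1200, "flag")]
print(predict(training, 30))    # "ok"
print(predict(training, 1000))  # "flag"
```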

233
Q

What is a major challenge in AI model fairness?
A) Ensuring fairness across diverse demographic groups
B) Increasing AI model efficiency
C) Reducing AI processing speed
D) Optimizing AI for high-performance computing

A

Correct Answer: A) Ensuring fairness across diverse demographic groups

Explanation: AI models can exhibit bias if trained on unrepresentative data, making fairness audits crucial in AI governance.

234
Q

Which regulatory framework requires AI models to be transparent and explainable?
A) GDPR
B) IEEE 7000-21
C) ISO 27001
D) SOC 2

A

A) GDPR
Explanation: The General Data Protection Regulation (GDPR) mandates explainability for AI models that process personal data.

235
Q

Which ISO standard provides a global AI governance framework?
A) ISO/IEC 22989:2022
B) ISO/IEC 42001:2023
C) ISO 9001
D) IEEE 7000-21

A

B) ISO/IEC 42001:2023
Explanation: ISO/IEC 42001:2023 is the first international AI management system standard, providing governance structures for AI risk management and compliance.

236
Q

Why are proprietary models often criticized in AI ethics discussions?
A) They require excessive computational resources
B) They lack transparency and accountability
C) They are always biased
D) They are exclusively used in government applications

A

B) They lack transparency and accountability
Explanation: Since proprietary models do not disclose their architecture or training data, they raise concerns about fairness, accountability, and potential biases.

237
Q

What penalty applies to providers of AI systems that violate the EU AI Act’s prohibited practices?
A) 5 million euros or 2% of annual global turnover
B) 35 million euros or 7% of annual global turnover
C) Lifetime ban from AI development
D) Removal of AI system from the EU market only

A

B) 35 million euros or 7% of annual global turnover
Explanation: Severe violations result in heavy fines to ensure compliance with AI governance rules.

238
Q

What is the purpose of the ‘Right to Data Portability’ under GDPR?
A) Allows individuals to transfer personal data between service providers
B) Grants organizations unrestricted access to personal data
C) Requires AI systems to operate without human oversight
D) Restricts users from retrieving their own data

A

Correct Answer: A) Allows individuals to transfer personal data between service providers

Explanation: The Right to Data Portability ensures users can obtain and reuse their personal data across different services in a structured, machine-readable format.

239
Q

Which U.S. law applies to AI’s impact on hiring decisions and employment bias?
A) Equal Credit Opportunity Act (ECOA)
B) Title VII of the Civil Rights Act
C) Section 230 of the Communications Decency Act
D) The AI Liability Directive

A

B) Title VII of the Civil Rights Act
Explanation: This law prohibits employment discrimination, including cases where AI hiring systems create bias.

240
Q

Why is data governance critical for compliance with both NIST AI RMF and the EU AI Act?
A) Both frameworks require organizations to implement data governance policies to ensure AI fairness and privacy
B) Data governance is only relevant for AI models used in cybersecurity
C) The EU AI Act does not address data governance in AI compliance
D) NIST AI RMF does not require organizations to manage AI data risks

A

A) Both frameworks require organizations to implement data governance policies to ensure AI fairness and privacy
Explanation: Data governance is essential for mitigating AI risks related to fairness, bias, privacy, and compliance with AI regulations.

241
Q

How does the principle of risk-based AI governance in the NIST AI RMF compare with the risk classification approach in the EU AI Act?
A) The NIST AI RMF provides flexible risk management guidance, while the EU AI Act classifies AI systems into fixed risk levels
B) Both frameworks require mandatory government audits before AI deployment
C) The NIST AI RMF does not address AI risks
D) The EU AI Act does not classify AI systems into risk levels

A

A) The NIST AI RMF provides flexible risk management guidance, while the EU AI Act classifies AI systems into fixed risk levels
Explanation: The EU AI Act enforces strict risk categories, while the NIST AI RMF allows organizations to tailor risk management approaches based on context.

242
Q

Which of the following is an AI governance principle outlined by NIST AI RMF?
A. AI should function without human intervention
B. AI must be traceable, explainable, and accountable
C. AI models must always prioritize business profit
D. AI should be able to make legally binding decisions

A

Correct Answer: B. AI must be traceable, explainable, and accountable

Explanation: NIST AI RMF emphasizes trustworthy AI by ensuring AI decisions can be traced, explained, and held accountable to human oversight.

243
Q

Which risk is most associated with black-box AI models?
A) Overfitting
B) Lack of interpretability
C) Slow processing speed
D) Inability to use deep learning

A

B) Lack of interpretability
Explanation: Black-box models, such as deep neural networks, lack transparency, making it difficult to understand their decision-making process.

244
Q

What is the first phase of the AI development lifecycle?
A) Design
B) Develop
C) Deploy
D) Plan

A

D) Plan
Explanation: The AI development lifecycle consists of four stages—Plan, Design, Develop, and Deploy. The planning phase involves defining the business problem, mission, data requirements, and governance.

245
Q

Which type of AI bias is specifically addressed by the NIST AI RMF?
A) Cognitive bias in human decision-making
B) Statistical bias in AI training data
C) Bias in economic forecasting models
D) Bias in search engine optimization

A

B) Statistical bias in AI training data
Explanation: The NIST AI RMF provides guidelines for detecting and mitigating bias introduced through AI training data.
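
One simple statistical-bias check on training data is comparing each group's share of positive labels. This sketch is illustrative only; the group names and records are assumed, and real bias audits use many more metrics.

```python
# Per-group positive-label rates: a large gap between groups can
# signal skewed training data before a model is ever trained.
from collections import defaultdict

def positive_rates(rows):
    """rows: (group, label) pairs with label in {0, 1}."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, label in rows:
        totals[group] += 1
        positives[group] += label
    return {g: positives[g] / totals[g] for g in totals}

data = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
        ("B", 0), ("B", 0), ("B", 1), ("B", 0)]
rates = positive_rates(data)
print(rates)  # group A's rate is triple group B's: investigate the data
```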

246
Q

How does the NIST AI RMF’s ‘Govern’ function align with the EU AI Act’s obligations for AI providers?
A) Both require organizations to establish clear accountability structures and risk assessment processes
B) The NIST AI RMF bans the use of high-risk AI applications
C) The EU AI Act does not require AI providers to establish governance policies
D) AI governance is only required for proprietary AI models

A

A) Both require organizations to establish clear accountability structures and risk assessment processes
Explanation: The ‘Govern’ function in NIST AI RMF and the EU AI Act’s AI provider obligations both emphasize structured governance and compliance frameworks.

247
Q

What is a key ‘legal challenge’ in applying product liability laws to AI-based systems?
A) AI systems cannot be held liable because they lack legal personhood.
B) AI operates autonomously, making it difficult to attribute harm to a specific party.
C) AI-generated errors are covered by fair use laws.
D) AI decision-making is fully transparent, making liability easy to determine.

A

B) AI operates autonomously, making it difficult to attribute harm to a specific party.
Explanation: Because AI systems function independently, determining responsibility—whether it’s the developer, the user, or the AI itself—is legally complex.

248
Q

What is the main ethical concern regarding bias in AI decision-making?
A) AI bias only affects automated marketing decisions
B) AI bias can lead to unfair and discriminatory outcomes
C) AI bias is unavoidable and has no real consequences
D) AI bias is irrelevant in regulated industries

A

Correct Answer: B) AI bias can lead to unfair and discriminatory outcomes

Explanation: AI bias can result in discriminatory hiring, lending, and policing practices, making fairness and bias mitigation crucial in AI governance.

249
Q

Which AI deployment method is best suited for low-latency applications such as real-time facial recognition?
A) Cloud
B) On-premise
C) Edge
D) Containerization

A

C) Edge
Explanation: Edge computing processes data locally on devices, reducing latency and improving privacy but is limited by hardware constraints.

250
Q

Which factor is not part of the three considerations when determining AI project scope?
A) Impact
B) Effort
C) Fit
D) Scalability

A

D) Scalability
Explanation: The three key considerations when determining AI project scope are Impact, Effort, and Fit. Scalability is a later-stage consideration.

251
Q

Which of the following best describes ‘Unacceptable Risk’ AI systems under the EU AI Act?
A. AI applications that are strictly banned due to their potential to cause harm
B. AI systems that are encouraged with strong government support
C. AI solutions used for research and development without commercial deployment
D. AI models that do not require transparency and accountability

A

Correct Answer: A. AI applications that are strictly banned due to their potential to cause harm

Explanation: Unacceptable risk AI includes systems like social credit scoring, subliminal manipulation, and real-time biometric categorization in public spaces.

252
Q

Why is data governance a critical compliance factor in both the EU AI Act and NIST AI RMF?
A) Data governance ensures compliance with fairness, bias mitigation, and privacy regulations
B) Data governance is only required for AI models used in government
C) The EU AI Act does not address AI data governance
D) NIST AI RMF prohibits the use of external datasets in AI training

A

A) Data governance ensures compliance with fairness, bias mitigation, and privacy regulations
Explanation: Both frameworks highlight the importance of responsible data management to prevent AI bias, privacy violations, and fairness issues.

253
Q

Which of the following is not a typical limitation of proprietary AI models?
A) Limited transparency
B) High licensing costs
C) Guaranteed fairness and bias elimination
D) Dependence on the model provider

A

C) Guaranteed fairness and bias elimination
Explanation: Proprietary AI models are often criticized for lack of transparency and potential bias, as their decision-making processes are not publicly accessible.

254
Q

Which of the following is a key responsibility in the ‘Govern’ function of the NIST AI RMF?
A) Conducting AI performance benchmarking
B) Establishing policies and accountability structures for AI risk management
C) Developing AI infrastructure
D) Ensuring all AI models use supervised learning

A

B) Establishing policies and accountability structures for AI risk management
Explanation: The ‘Govern’ function ensures that organizations have policies and structures in place for AI oversight and risk management.

255
Q

Which AI risk factor is most relevant to third-party AI model procurement?
A) Lack of technical documentation
B) Increased model accuracy
C) Low training costs
D) Fast inference speed

A

A) Lack of technical documentation
Explanation: Organizations must review vendor policies, documentation, and security measures when using third-party AI models.

256
Q

What is a Distributor under the EU AI Act?
A) An entity that makes AI available in the EU after importation
B) A regulatory agency overseeing AI risks
C) A company that develops AI for commercial use
D) An organization deploying AI in a high-risk sector

A

A) An entity that makes AI available in the EU after importation
Explanation: Under the EU AI Act, a distributor is a supply-chain actor, other than the provider or importer, that makes an AI system available on the EU market after importation.

257
Q

Which deployment environment offers maximum control but requires significant investment?
A) Cloud
B) On-premise
C) Edge
D) Hybrid

A

B) On-premise
Explanation: On-premise AI deployment allows greater data control but requires higher upfront costs for infrastructure and maintenance.

258
Q

What is the main purpose of the IEEE 7000-21 standard?
A) Regulate AI performance benchmarking
B) Provide ethical guidelines for AI design
C) Improve AI cybersecurity standards
D) Define AI patentability requirements

A

B) Provide ethical guidelines for AI design
Explanation: IEEE 7000-21 integrates ethics into AI development, ensuring that systems align with societal values.

259
Q

What is the primary goal of AI governance post-deployment?
A) Increase model accuracy
B) Reduce infrastructure costs
C) Ensure compliance, fairness, and accountability
D) Eliminate AI bias completely

A

C) Ensure compliance, fairness, and accountability
Explanation: AI governance post-deployment focuses on compliance, monitoring bias, and ensuring ethical AI operations.

260
Q

What distinguishes High-Risk AI Systems under the EU AI Act?
A) They must undergo rigorous compliance measures before deployment
B) They are banned from being used in the EU
C) They are only used in non-commercial applications
D) They require open-source licensing

A

A) They must undergo rigorous compliance measures before deployment
Explanation: High-risk AI systems must meet strict requirements for transparency, testing, and oversight.

261
Q

What is the primary goal of data wrangling in AI projects?
A) Training AI models
B) Formatting raw data into structured data
C) Eliminating all bias in data
D) Selecting the best features for training

A

B) Formatting raw data into structured data
Explanation: Data wrangling (or preparation) involves cleaning, structuring, and transforming raw data into a usable format.
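
A minimal Python sketch of this step, using made-up raw records (the field names and formats are illustrative assumptions):

```python
# Data wrangling: turn messy raw text records into structured rows.
raw = [
    "  Alice , 34, berlin ",
    "BOB,29,Paris",
    "carol , , london",      # missing age
]

def wrangle(line):
    name, age, city = (part.strip() for part in line.split(","))
    return {
        "name": name.title(),
        "age": int(age) if age else None,  # keep missing values explicit
        "city": city.title(),
    }

rows = [wrangle(line) for line in raw]
```

Each raw string becomes a clean, uniformly typed record, ready for feature selection or model training.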

262
Q

What is a key purpose of the Generative AI Profile within NIST’s AI RMF?
A) Regulating deepfake content
B) Establishing AI patent laws
C) Providing guidance on risk management for generative AI
D) Replacing GDPR in AI regulation

A

C) Providing guidance on risk management for generative AI
Explanation: The Generative AI Profile helps organizations identify and mitigate risks associated with AI-generated content.

263
Q

What is a Data Protection Impact Assessment (DPIA)?
A) A tool for assessing privacy risks associated with data processing
B) A strategy for improving AI training data
C) A method to optimize AI model accuracy
D) A technique for encrypting sensitive data

A

Correct Answer: A) A tool for assessing privacy risks associated with data processing

Explanation: A DPIA identifies, evaluates, and mitigates potential privacy risks before deploying AI systems that handle personal data.

264
Q

How does NIST AI RMF’s risk measurement approach impact the deployment of proprietary AI models in regulated sectors?
A) Proprietary AI models may lack sufficient documentation to meet NIST AI RMF risk measurement standards
B) NIST AI RMF does not apply to AI models used in regulated sectors
C) Proprietary AI models are always compliant with NIST AI RMF risk measurement standards
D) NIST AI RMF does not require measuring AI risks

A

A) Proprietary AI models may lack sufficient documentation to meet NIST AI RMF risk measurement standards
Explanation: The limited access to proprietary AI model documentation creates challenges in assessing risks using NIST AI RMF methodologies.

265
Q

What is the main purpose of transfer learning in AI?
A) Allowing an AI model to apply knowledge from one task to another similar task
B) Training AI models from scratch for each new problem
C) Using unsupervised learning to develop generalized AI
D) Restricting AI applications to specific industries

A

Correct Answer: A) Allowing an AI model to apply knowledge from one task to another similar task

Explanation: Transfer learning enables models to leverage pre-trained knowledge, improving efficiency when adapting to new tasks with limited data.

266
Q

Which of the following risk assessment approaches is emphasized in the NIST AI Risk Management Framework (AI RMF)?
A) Binary risk evaluation
B) Continuous risk monitoring
C) Static risk classification
D) Algorithmic bias prioritization

A

B) Continuous risk monitoring
Explanation: NIST AI RMF advocates continuous assessment of AI risks throughout its lifecycle rather than static or one-time evaluations.

267
Q

Which international organization developed the AI Risk Management Framework (AI RMF)?
A) OECD
B) NIST
C) IEEE
D) ISO

A

B) NIST
Explanation: The National Institute of Standards and Technology (NIST) created AI RMF to help organizations manage AI risks and compliance.

268
Q

According to the Equal Credit Opportunity Act (ECOA), what must creditors provide when making an adverse decision using AI-based credit scoring?
A) The raw data used by the AI to evaluate the applicant
B) A full audit report of the AI model
C) A specific explanation of the factors leading to the adverse decision
D) The name of the AI developer responsible for the model

A

C) A specific explanation of the factors leading to the adverse decision
Explanation: The ECOA requires creditors to explain specific reasons for denying credit applications, even when using AI, to prevent discrimination and ensure transparency.

269
Q

What defines a General Purpose AI (GPAI) Model under the EU AI Act?
A) A model trained with reinforcement learning only
B) An AI system optimized for a single use case
C) A foundation model with broad applicability across tasks
D) An AI model that operates solely on structured data

A

C) A foundation model with broad applicability across tasks
Explanation: GPAI models, also called foundation models, display generality and can be integrated into multiple applications.

270
Q

How many questions are included in the AIGP certification exam?
A. 50
B. 75
C. 100
D. 150

A

Correct Answer: C. 100

Explanation: The AIGP exam consists of 100 multiple-choice questions, designed to assess knowledge of AI governance and risk management.

271
Q

What does Article 50 of the EU AI Act require for certain AI systems?
A) Mandatory bias audits for all AI models
B) Transparency obligations for providers and deployers
C) A restriction on AI systems generating biometric data
D) A ban on using AI in commercial applications

A

B) Transparency obligations for providers and deployers
Explanation: Article 50 requires AI providers to disclose system capabilities, risks, and transparency measures.

272
Q

How does the NIST AI RMF define AI risk?
A) The probability that an AI model will fail operationally
B) The potential for an AI system to cause unintended harm or impact trustworthiness
C) The financial cost of deploying an AI model
D) The computational requirements needed for AI processing

A

B) The potential for an AI system to cause unintended harm or impact trustworthiness
Explanation: AI risk refers to unintended consequences, such as bias, security vulnerabilities, and loss of trust in AI systems.

273
Q

How does the principle of AI transparency in the NIST AI RMF align with proprietary AI model governance?
A) Proprietary models are fully transparent by default
B) Proprietary models often lack transparency, making compliance with NIST AI RMF more challenging
C) NIST AI RMF does not require AI transparency
D) Transparency is only relevant to AI models used in government applications

A

B) Proprietary models often lack transparency, making compliance with NIST AI RMF more challenging
Explanation: The NIST AI RMF emphasizes transparency, but proprietary AI models may limit access to information about their decision-making processes.

274
Q

Which AI governance principle is common to both the EU AI Act and NIST AI RMF?
A) Requiring all AI models to be open-source
B) Implementing continuous risk monitoring and documentation
C) Mandating real-time AI decision-making
D) Eliminating human oversight from AI operations

A

B) Implementing continuous risk monitoring and documentation
Explanation: Both frameworks stress the importance of ongoing AI risk assessments, audits, and documentation to ensure compliance and responsible AI deployment.

275
Q

How does AI explainability relate to compliance with the EU AI Act and NIST AI RMF?
A) The EU AI Act mandates explainability for certain risk categories, while NIST AI RMF only suggests it as a best practice
B) NIST AI RMF prohibits the use of explainable AI
C) The EU AI Act does not require AI transparency in any risk category
D) Both frameworks require explainability only for proprietary AI models

A

A) The EU AI Act mandates explainability for certain risk categories, while NIST AI RMF only suggests it as a best practice
Explanation: The EU AI Act mandates transparency for Limited and High-Risk AI models, while the NIST AI RMF recommends explainability as a key principle in risk management.

276
Q

Which of the following is not a key feature of General Purpose AI (GPAI)?
A) Performs diverse tasks across different applications
B) Operates exclusively in high-risk sectors
C) Can be integrated into downstream applications
D) Trained on large datasets with self-supervised learning

A

B) Operates exclusively in high-risk sectors
Explanation: GPAI models are used across various domains, not just high-risk applications.

277
Q

Under U.S. law, which of the following statements about AI-generated intellectual property is correct?
A) AI can be named as an inventor if it demonstrates autonomous creativity.
B) AI-generated works cannot be copyrighted because they lack human authorship.
C) AI systems must have a legal personality to file patents in the U.S. and Europe.
D) The European Patent Office has granted patents to AI-generated inventions.

A

B) AI-generated works cannot be copyrighted because they lack human authorship.
Explanation: U.S. copyright law requires human authorship. Thaler v. Perlmutter (2023) upheld the Copyright Office’s refusal to register an AI-generated work, and Thaler v. Vidal (2022) confirmed that AI cannot be named as a patent inventor.

278
Q

Which of the following best describes risk management?
A. Eliminating all risks associated with AI
B. Identifying, assessing, and mitigating risks
C. Allowing AI systems to manage their own risks
D. Focusing only on financial risks

A

Correct Answer: B. Identifying, assessing, and mitigating risks

Explanation: Risk management is a structured approach to assessing potential AI-related harms and implementing controls to mitigate them.

279
Q

How do expert systems support decision-making?
A) By simulating human reasoning through a structured knowledge base
B) By randomly generating decisions based on probability
C) By eliminating human intervention in decision-making
D) By requiring massive deep learning datasets

A

Correct Answer: A) By simulating human reasoning through a structured knowledge base

Explanation: Expert systems use inference engines and rule-based logic to mimic human decision-making in specific domains, such as medical diagnosis or tax preparation.
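
A toy forward-chaining sketch of this idea in Python (the rules and facts are invented for illustration, not drawn from any real medical system):

```python
# Expert-system sketch: forward chaining over if-then rules.
rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "high_risk_patient"}, "recommend_doctor_visit"),
]

def infer(facts):
    """Fire rules repeatedly until no new conclusion can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts
```

Chaining the two rules mimics an expert’s multi-step reasoning from symptoms to a recommendation.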

280
Q

Which of the following is a key characteristic of reinforcement learning?
A) It requires extensive labeled datasets
B) It uses feedback in the form of rewards and penalties
C) It cannot adapt to real-world environments
D) It eliminates the need for human oversight

A

Correct Answer: B) It uses feedback in the form of rewards and penalties

Explanation: Reinforcement learning optimizes decision-making by learning from a series of rewards and punishments.
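
A minimal tabular sketch of reward-and-penalty feedback in Python (the two-action setup and reward values are illustrative assumptions):

```python
import random

random.seed(0)
q = {0: 0.0, 1: 0.0}   # value estimates for two actions
alpha = 0.5            # learning rate

def reward(action):
    return 1.0 if action == 1 else -1.0  # action 1 rewarded, action 0 penalized

for _ in range(50):
    action = random.choice([0, 1])
    # Nudge the estimate toward the observed reward: no labeled dataset needed.
    q[action] += alpha * (reward(action) - q[action])
```

After a few dozen trials, q[1] rises toward +1 and q[0] falls toward -1, so a greedy agent learns to prefer action 1 purely from feedback.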

281
Q

Which privacy regulation is most closely aligned with Privacy by Design principles?
A) General Data Protection Regulation (GDPR)
B) California Consumer Privacy Act (CCPA)
C) Biometric Information Privacy Act (BIPA)
D) Digital Millennium Copyright Act (DMCA)

A

Correct Answer: A) General Data Protection Regulation (GDPR)

Explanation: GDPR incorporates Privacy by Design as a core requirement, mandating that organizations implement data protection throughout system development and operations.

282
Q

Which EU authority is responsible for supervising AI compliance at the national level?
A) European Data Protection Board (EDPB)
B) Market Surveillance Authority (MSA)
C) AI Risk Council (AIRC)
D) European Patent Office (EPO)

A

B) Market Surveillance Authority (MSA)
Explanation: MSAs are national regulatory bodies that enforce AI Act compliance and conduct audits.

283
Q

Which of the following describes the ‘Purpose Specification’ principle in Fair Information Practices?
A) Data should be accurate and relevant
B) Organizations should specify why data is collected
C) Individuals should have the right to access their data
D) Data should be retained indefinitely

A

Correct Answer: B) Organizations should specify why data is collected

Explanation: Purpose Specification requires that organizations define the reasons for collecting personal data and limit its use to those specified purposes.

284
Q

Why is risk-based regulation an essential aspect of the EU AI Act?
A. It ensures that all AI systems are banned to prevent risks
B. It provides differentiated requirements based on the potential impact of AI systems
C. It allows AI developers to self-regulate without oversight
D. It only applies to AI systems that interact with humans

A

Correct Answer: B. It provides differentiated requirements based on the potential impact of AI systems

Explanation: The EU AI Act applies stricter rules to high-risk AI while imposing fewer obligations on minimal and limited-risk systems, ensuring a balanced approach to regulation.

285
Q

What does the ‘Map’ function of the NIST AI RMF focus on?
A) Identifying and contextualizing AI risks
B) Implementing AI models for production use
C) Optimizing AI model performance
D) Ensuring data privacy compliance

A

A) Identifying and contextualizing AI risks
Explanation: The ‘Map’ function helps organizations understand and document AI risks within their operational context.

286
Q

What is a shared challenge between proprietary AI models and General Purpose AI (GPAI) in the context of regulatory compliance?
A) The need for increased transparency and disclosure of training data
B) Proprietary AI models are automatically compliant, while GPAI requires special approval
C) GPAI models are not subject to any AI regulations
D) Proprietary AI models do not require risk assessments

A

A) The need for increased transparency and disclosure of training data
Explanation: Both proprietary AI models and GPAI raise concerns about the lack of transparency in their training data and decision-making processes, impacting compliance with risk management frameworks.

287
Q

What distinguishes Annexes in the EU AI Act?
A) They are legally binding like Articles and provide technical details
B) They only offer non-binding guidance
C) They function as executive orders for AI governance
D) They serve as country-specific regulations

A

A) They are legally binding like Articles and provide technical details
Explanation: Annexes contain technical specifications referenced in Articles and are updated more frequently.

288
Q

What is the primary role of an Artificial Intelligence Governance Professional (AIGP)?
A. Develop AI models for business use
B. Implement responsible AI practices and risk management
C. Create AI marketing strategies
D. Train AI models for natural language processing

A

Correct Answer: B. Implement responsible AI practices and risk management

Explanation: AIGPs focus on AI ethics, compliance, risk management, and ensuring AI systems align with legal and organizational standards.

289
Q

Which AI-related risk is most concerning for consumer protection?
A. AI hallucinations
B. Biased decision-making
C. High compute power usage
D. AI development speed

A

Correct Answer: B. Biased decision-making

Explanation: Biased AI can lead to unfair outcomes, discrimination, and violations of consumer protection laws, making it a critical governance concern.

290
Q

Which AI deployment strategy allows AI models to run locally on user devices?
A) Cloud computing
B) On-premise hosting
C) Edge computing
D) Serverless AI

A

C) Edge computing
Explanation: Edge computing allows AI models to run on local devices like smartphones, reducing latency and improving privacy.

291
Q

According to copyright law, which factor is not part of the four-factor fair use test?
A) The commercial or nonprofit nature of the use
B) The purpose and character of the use
C) The amount and substantiality of the portion used
D) Whether the work is classified as a derivative

A

D) Whether the work is classified as a derivative
Explanation: The four factors of fair use are: (1) purpose and character of use, (2) nature of the copyrighted work, (3) amount and substantiality, and (4) market impact. Derivative works are a separate concept in copyright law.

292
Q

Which AI principle ensures that AI systems are safe, reliable, and function as intended?
A) Transparency and Explainability
B) Robustness, Security, and Safety
C) Inclusive Growth
D) Algorithmic Optimization

A

B) Robustness, Security, and Safety
Explanation: This principle emphasizes that AI systems should be resilient to attacks, reliable under stress, and secure throughout their lifecycle.

293
Q

Which AI operator places AI systems on the EU market but does not develop them?
A) Provider
B) Importer
C) Deployer
D) Distributor

A

B) Importer
Explanation: Importers ensure that AI systems from third countries comply with EU AI Act standards before entering the market.

294
Q

Which of the following is NOT a responsibility of an AIGP?
A. Monitoring AI systems throughout their lifecycle
B. Implementing AI-driven cybersecurity protocols
C. Addressing AI bias, discrimination, and privacy concerns
D. Establishing AI governance frameworks

A

Correct Answer: B. Implementing AI-driven cybersecurity protocols

Explanation: While AIGPs handle AI governance and risk management, cybersecurity is typically managed by security professionals specializing in AI threats.

295
Q

What is the purpose of data labeling in AI model training?
A) To transform unstructured data into structured data
B) To remove duplicate data
C) To categorize and tag data for supervised learning
D) To encrypt personal data

A

C) To categorize and tag data for supervised learning
Explanation: Labeled data enables supervised learning models to map input to output correctly.

296
Q

What is the primary purpose of the NIST AI Risk Management Framework (AI RMF)?
A) Establishing AI licensing requirements
B) Providing voluntary guidelines for managing AI risks
C) Restricting AI development in high-risk sectors
D) Certifying AI models before deployment

A

B) Providing voluntary guidelines for managing AI risks
Explanation: The NIST AI RMF is a voluntary framework that helps organizations identify, assess, and mitigate AI risks.

297
Q

What is a shared governance challenge between General Purpose AI (GPAI) models and proprietary AI models under the EU AI Act?
A) The lack of transparency in both models makes accountability and compliance more difficult
B) GPAI models are always exempt from EU AI Act compliance
C) Proprietary AI models are automatically classified as low-risk AI
D) GPAI models are banned under the EU AI Act

A

A) The lack of transparency in both models makes accountability and compliance more difficult
Explanation: Both GPAI and proprietary AI models often restrict access to their training data and internal operations, complicating compliance with EU AI Act requirements.

298
Q

Which term best describes NIST’s ARIA framework?
A) Adversarial Resilience in AI
B) Assessing Risks and Impacts of AI
C) AI Reliability and Integrity Assessment
D) Automated Regulatory Intelligence for AI

A

B) Assessing Risks and Impacts of AI
Explanation: NIST’s ARIA framework focuses on risk and impact evaluation of AI models, including security concerns and bias.

299
Q

What is a major difference between AI governance requirements in the EU AI Act and the NIST AI RMF?
A) The EU AI Act mandates compliance for AI providers, while NIST AI RMF provides voluntary guidance
B) Both frameworks have legally binding rules for AI governance
C) The NIST AI RMF bans the use of proprietary AI models
D) The EU AI Act does not regulate AI governance

A

A) The EU AI Act mandates compliance for AI providers, while NIST AI RMF provides voluntary guidance
Explanation: The EU AI Act enforces legally binding AI governance obligations, while the NIST AI RMF serves as a voluntary risk management framework.

300
Q

What is the primary purpose of assessing AI readiness before deployment?
A) To increase computational speed
B) To identify AI’s business value and data requirements
C) To optimize model accuracy
D) To evaluate user experience

A

B) To identify AI’s business value and data requirements
Explanation: Readiness assessment ensures AI deployment aligns with business objectives, data governance, and compliance requirements.

301
Q

How does the concept of AI fairness in the NIST AI RMF relate to bias auditing requirements in the EU AI Act?
A) NIST AI RMF provides best practices for AI fairness, while the EU AI Act mandates bias audits for high-risk AI systems
B) AI fairness is only relevant for models used in financial services
C) The EU AI Act does not require bias audits
D) NIST AI RMF bans the use of biased AI models

A

A) NIST AI RMF provides best practices for AI fairness, while the EU AI Act mandates bias audits for high-risk AI systems
Explanation: While NIST AI RMF suggests fairness strategies, the EU AI Act legally requires bias testing and mitigation for high-risk AI applications.

302
Q

What is the biggest limitation of edge computing for AI deployment?
A) Increased latency
B) Higher infrastructure costs
C) Limited processing power
D) Requires internet connectivity

A

C) Limited processing power
Explanation: Edge computing is constrained by device hardware, which limits the complexity of AI models that can be deployed.

303
Q

What is the purpose of Fundamental Rights Impact Assessments (FRIA) in AI deployment?
A. To improve AI model accuracy through user feedback
B. To assess how AI affects privacy, discrimination risks, and fundamental rights
C. To justify exemptions for AI providers from regulatory requirements
D. To allow AI systems to self-certify their fairness

A

Correct Answer: B. To assess how AI affects privacy, discrimination risks, and fundamental rights

Explanation: FRIA helps evaluate potential human rights concerns before AI deployment, ensuring compliance with ethical and legal standards.

304
Q

The case “Silverman v. OpenAI” primarily revolved around which legal question?
A) Whether AI-generated content is protected under copyright law
B) Whether training AI models using copyrighted books without permission constitutes infringement
C) Whether AI-generated books can receive fair use protection
D) Whether AI systems can autonomously apply fair use defenses

A

B) Whether training AI models using copyrighted books without permission constitutes infringement
Explanation: The plaintiffs alleged that OpenAI trained its models on their copyrighted books without consent; the court dismissed several ancillary claims but allowed the core direct infringement claim to proceed, making the case a key test of whether training on copyrighted works constitutes infringement.

305
Q

Which international AI regulatory framework is specifically focused on human rights, democracy, and rule of law?
A) HUDERAF
B) NIST AI RMF
C) IEEE 7000-21
D) ISO 31000:2018

A

A) HUDERAF
Explanation: The Human Rights, Democracy, and the Rule of Law Assurance Framework for AI Systems (HUDERAF) was developed to assess AI’s societal impact.

306
Q

Which AI sector is considered High-Risk under the EU AI Act?
A) AI-powered video game assistants
B) Automated credit scoring systems
C) AI-generated music composition tools
D) AI used in personal assistants

A

B) Automated credit scoring systems
Explanation: AI used in credit scoring, healthcare, and biometric identification is classified as high-risk.

307
Q

Under Fair Information Practices, which principle ensures individuals can access and amend their personal data?
A) Notice
B) Data Minimization
C) Access and Individual Participation
D) Accountability

A

Correct Answer: C) Access and Individual Participation

Explanation: This principle ensures data subjects have the right to view and modify their personal data, enhancing transparency and control over their information.

308
Q

Which AI framework focuses on fairness and algorithmic discrimination prevention?
A) NIST AI RMF
B) The AI Bill of Rights
C) ISO 31000:2018
D) IEEE 7000-21

A

B) The AI Bill of Rights
Explanation: The AI Bill of Rights includes specific protections against bias and unfair algorithmic decision-making.

309
Q

What is the main objective of the EU AI Act?
A. To impose strict bans on all AI applications
B. To ensure AI development and deployment is safe, transparent, and respects fundamental rights
C. To allow unrestricted AI innovation within the EU market
D. To create a unified AI model for all European countries

A

Correct Answer: B. To ensure AI development and deployment is safe, transparent, and respects fundamental rights

Explanation: The EU AI Act aims to regulate AI based on risk levels while fostering innovation and protecting fundamental rights.

310
Q

Which AI compliance requirement ensures AI fairness?
A) Transparency reports
B) Bias testing and mitigation
C) Reducing model complexity
D) Increasing dataset size

A

B) Bias testing and mitigation
Explanation: Bias audits help organizations identify and mitigate unfair AI model behaviors.

311
Q

Which risk mitigation strategy applies to both the NIST AI RMF and the EU AI Act?
A) Prohibiting all AI models trained with large datasets
B) Establishing risk governance structures and transparency reports
C) Allowing AI models to operate without regulatory oversight
D) Mandating AI models to function without human involvement

A

B) Establishing risk governance structures and transparency reports
Explanation: Both frameworks emphasize governance, requiring organizations to define AI risks and document transparency measures.

312
Q

How does the concept of continuous AI monitoring under the NIST AI RMF relate to post-deployment obligations in the EU AI Act?
A) Both frameworks require ongoing risk evaluation and model monitoring to prevent AI failures
B) AI monitoring is only relevant before deployment
C) The EU AI Act does not require post-deployment monitoring
D) NIST AI RMF prohibits AI monitoring

A

A) Both frameworks require ongoing risk evaluation and model monitoring to prevent AI failures
Explanation: Both frameworks emphasize that AI models should be continuously evaluated to detect biases, risks, and performance shifts.

313
Q

Which of the following is NOT considered a legally binding component of the EU AI Act?
A. Articles
B. Annexes
C. Recitals
D. Market Surveillance Authority reports

A

Correct Answer: C. Recitals

Explanation: Recitals provide context and objectives for the legislation but are not legally binding. Articles and annexes, however, carry legal weight and define obligations.

314
Q

What is the primary focus of ‘explainable AI’ (XAI)?
A) Ensuring AI models provide interpretable and transparent decisions
B) Increasing AI system speed and performance
C) Restricting access to AI decision-making
D) Eliminating the need for AI regulation

A

Correct Answer: A) Ensuring AI models provide interpretable and transparent decisions

Explanation: Explainable AI (XAI) improves trust and accountability by making AI decision-making understandable to users and regulators.
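
A small model-agnostic sketch of one XAI technique, permutation importance, in Python (the toy model and data are assumptions for illustration):

```python
import random

random.seed(1)
points = [(random.random(), random.random()) for _ in range(200)]
data = [((x1, x2), 1 if x1 > 0.5 else 0) for x1, x2 in points]

def model(features):
    return 1 if features[0] > 0.5 else 0   # "black box" that ignores feature 1

def accuracy(rows):
    return sum(model(f) == y for f, y in rows) / len(rows)

base = accuracy(data)
importances = []
for j in (0, 1):                           # shuffle each feature in turn
    col = [f[j] for f, _ in data]
    random.shuffle(col)
    permuted = []
    for (f, y), s in zip(data, col):
        f2 = list(f)
        f2[j] = s                          # replace feature j with shuffled value
        permuted.append((tuple(f2), y))
    importances.append(base - accuracy(permuted))
```

A large accuracy drop when feature 0 is shuffled, and none for feature 1, reveals what the model actually relies on, without opening the model itself.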

315
Q

What is the main goal of alignment in AI systems?
A) Ensuring AI models optimize for the fastest response time
B) Matching AI behavior to human goals, ethical values, and societal expectations
C) Maximizing AI autonomy while reducing human intervention
D) Eliminating uncertainty in AI predictions

A

Correct Answer: B) Matching AI behavior to human goals, ethical values, and societal expectations

Explanation: AI alignment ensures that AI systems act in ways that are consistent with human values, safety requirements, and ethical considerations.

316
Q

Which AI governance principle is emphasized in both proprietary AI models and the NIST AI RMF’s ‘Govern’ function?
A) Open-source development is mandatory
B) Organizations must establish clear accountability and compliance structures
C) AI models should not require human oversight
D) AI risk management applies only to General Purpose AI (GPAI)

A

B) Organizations must establish clear accountability and compliance structures
Explanation: Both proprietary AI governance and the NIST AI RMF stress the need for organizational accountability and structured AI governance policies.

317
Q

Which of the following is an example of a high-risk AI application?
A) Email spam filtering
B) Predictive hiring assessments
C) Online shopping recommendations
D) AI-generated artwork

A

B) Predictive hiring assessments
Explanation: AI in hiring decisions can introduce bias and legal risks, making it a high-risk application requiring strict governance.

318
Q

Which of the following is NOT an AI governance framework?
A) NIST AI RMF
B) ISO 42001
C) The AI Bill of Rights
D) The Turing Test

A

Correct Answer: D) The Turing Test

Explanation: The Turing Test measures AI’s ability to mimic human intelligence but is not an AI governance framework.

319
Q

What distinguishes supervised learning from unsupervised learning?
A) Supervised learning requires labeled data, while unsupervised learning does not
B) Unsupervised learning is more accurate than supervised learning
C) Supervised learning is used only in robotics
D) Unsupervised learning requires human annotations

A

Correct Answer: A) Supervised learning requires labeled data, while unsupervised learning does not

Explanation: Supervised learning maps input-output relationships using labeled data, whereas unsupervised learning identifies patterns without predefined labels.
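The distinction above can be sketched in a few lines. This is a toy illustration with hypothetical data, not a production algorithm: the supervised side copies the label of the nearest labeled example, while the unsupervised side has only raw inputs and must discover structure (here, a naive two-group split at the mean).

```python
# Supervised: labeled (input, output) pairs drive learning.
labeled = [(1.0, "low"), (1.2, "low"), (9.8, "high"), (10.1, "high")]

def predict(x):
    # 1-nearest-neighbour: copy the label of the closest training input
    return min(labeled, key=lambda pair: abs(pair[0] - x))[1]

# Unsupervised: raw inputs only; structure must be discovered.
unlabeled = [1.0, 1.2, 9.8, 10.1]
split = sum(unlabeled) / len(unlabeled)      # naive 2-cluster split at the mean
clusters = [[x for x in unlabeled if x <= split],
            [x for x in unlabeled if x > split]]

print(predict(9.5))   # "high" — learned from labels
print(clusters)       # [[1.0, 1.2], [9.8, 10.1]] — found without labels
```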

320
Q

Which AI governance structure best ensures accountability in AI projects?
A) Having only the development team oversee AI risks
B) Creating a dedicated AI ethics and governance board
C) Relying on government regulations alone
D) Allowing self-regulation by AI developers

A

B) Creating a dedicated AI ethics and governance board
Explanation: AI governance boards help oversee compliance, risk management, and ethical AI implementation.

321
Q

Which regulation primarily affects the use of proprietary AI models in Europe?
A) The AI Bill of Rights
B) The EU AI Act
C) The Digital Millennium Copyright Act (DMCA)
D) The Federal Trade Commission (FTC) Guidelines

A

B) The EU AI Act
Explanation: The EU AI Act places strict obligations on proprietary AI providers, especially in high-risk applications.

322
Q

What is the primary purpose of adversarial testing in AI security?
A) Optimize AI model performance
B) Protect AI models against manipulation and attacks
C) Increase the speed of model inference
D) Reduce training dataset size

A

B) Protect AI models against manipulation and attacks
Explanation: Adversarial testing helps identify vulnerabilities where attackers can manipulate AI model outputs.

323
Q

What is the primary goal of the ‘Manage’ function in the NIST AI RMF?
A) Establishing governance policies
B) Mitigating and responding to AI risks
C) Mapping AI development lifecycles
D) Improving computational efficiency

A

B) Mitigating and responding to AI risks
Explanation: The ‘Manage’ function focuses on taking action to reduce and manage AI-related risks effectively.

324
Q

Why is model versioning important in AI deployment?
A) Ensures backward compatibility
B) Helps track model changes and roll back if issues arise
C) Reduces training time
D) Prevents AI bias

A

B) Helps track model changes and roll back if issues arise
Explanation: Versioning allows organizations to maintain model integrity and revert to previous versions if needed.
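The rollback idea can be sketched with a tiny in-memory registry (the `ModelRegistry` class and its API are hypothetical, standing in for real tooling such as a model registry service): each deployment appends a version, and a problem with the latest version triggers a revert to the previous one.

```python
class ModelRegistry:
    def __init__(self):
        self._versions = []          # append-only history of (tag, model)

    def register(self, tag, model):
        self._versions.append((tag, model))

    def current(self):
        return self._versions[-1]

    def rollback(self):
        # revert to the previous version if the latest one misbehaves
        if len(self._versions) > 1:
            self._versions.pop()
        return self.current()

registry = ModelRegistry()
registry.register("v1", "baseline-model")
registry.register("v2", "retrained-model")
registry.rollback()                  # issue found in v2 → revert
print(registry.current())            # ("v1", "baseline-model")
```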

325
Q

How does the use of proprietary AI models impact compliance with the NIST AI RMF’s ‘Measure’ function?
A) Proprietary AI models provide full access to their training data, making risk measurement easier
B) Limited transparency in proprietary AI models makes it difficult to assess risks and biases
C) The NIST AI RMF does not require measuring AI risk
D) AI risk measurement is only required for models deployed in government applications

A

B) Limited transparency in proprietary AI models makes it difficult to assess risks and biases
Explanation: The ‘Measure’ function in the NIST AI RMF focuses on risk evaluation, which is hindered by the closed nature of proprietary AI models.

326
Q

Which neural network architecture is best suited for time-series prediction?
A) CNN (Convolutional Neural Network)
B) RNN (Recurrent Neural Network)
C) GAN (Generative Adversarial Network)
D) Decision Trees

A

B) RNN (Recurrent Neural Network)
Explanation: RNNs process sequential data and are commonly used in speech recognition, time-series forecasting, and natural language processing.
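What makes an RNN suit sequential data is its hidden state, which carries context from earlier time steps. A single-unit recurrent step can be shown in pure Python (the weights are illustrative, not trained):

```python
import math

# Toy single-unit RNN step: h carries information from earlier time steps.
w_x, w_h, b = 0.5, 0.9, 0.0

def rnn_step(h, x):
    return math.tanh(w_x * x + w_h * h + b)

series = [0.1, 0.2, 0.3, 0.4]
h = 0.0
for x in series:
    h = rnn_step(h, x)               # each step folds in the new observation

# h now summarizes the whole sequence and could feed a prediction head
print(round(h, 3))
```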

327
Q

Which organization developed the OECD AI Classification Framework?
A) United Nations
B) World Economic Forum
C) Organisation for Economic Co-operation and Development (OECD)
D) National Institute of Standards and Technology (NIST)

A

C) Organisation for Economic Co-operation and Development (OECD)
Explanation: The OECD developed this framework to classify AI systems based on factors like economic context, data inputs, and potential risks.

328
Q

Which AI governance framework emphasizes fairness, accountability, and transparency in AI development?
A) ISO 9001
B) IEEE 7000-21
C) OECD AI Principles
D) SOC 2 Compliance

A

C) OECD AI Principles
Explanation: The OECD AI Principles emphasize fairness, transparency, and accountability to promote responsible AI.

329
Q

Which compliance measure for AI governance is emphasized in both proprietary AI models and NIST AI RMF risk mitigation?
A) Open-source licensing requirements
B) Continuous AI model monitoring and bias audits
C) Requirement for AI models to be decentralized
D) Limiting AI models to specific industries

A

B) Continuous AI model monitoring and bias audits
Explanation: Both proprietary AI models and AI governance frameworks stress the importance of monitoring AI performance and conducting regular bias audits.
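Continuous monitoring can be reduced to a simple pattern: compare tracked performance and fairness metrics against a baseline and flag drift beyond a tolerance. The metrics, baseline values, and `audit` helper below are hypothetical, a sketch of the shape such a check takes:

```python
baseline = {"accuracy": 0.92, "positive_rate_gap": 0.03}

def audit(current, tolerance=0.05):
    # return the metrics that drifted past the allowed band
    return [m for m in baseline
            if abs(current[m] - baseline[m]) > tolerance]

print(audit({"accuracy": 0.91, "positive_rate_gap": 0.04}))  # [] — within band
print(audit({"accuracy": 0.80, "positive_rate_gap": 0.12}))  # both drifted
```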

330
Q

How does the NIST AI RMF’s ‘Govern’ function relate to compliance with proprietary AI governance?
A) It recommends AI governance policies that ensure accountability, transparency, and ethical compliance
B) Proprietary AI models are automatically exempt from governance requirements
C) The EU AI Act does not regulate AI governance
D) The ‘Govern’ function does not apply to AI governance

A

A) It recommends AI governance policies that ensure accountability, transparency, and ethical compliance
Explanation: The ‘Govern’ function in the NIST AI RMF provides best practices for structuring AI risk management policies, including accountability and transparency measures.

331
Q

What common compliance principle applies to both proprietary AI models and General Purpose AI (GPAI) under the EU AI Act?
A) Both must comply with transparency, documentation, and risk management obligations
B) GPAI models are automatically exempt from all AI regulations
C) Proprietary AI models are always classified as minimal risk
D) The EU AI Act does not regulate GPAI models

A

A) Both must comply with transparency, documentation, and risk management obligations
Explanation: The EU AI Act enforces specific compliance requirements for GPAI and proprietary AI models, including transparency and accountability obligations.

332
Q

The “FTC Algorithmic Disgorgement” principle refers to:
A) The requirement for AI companies to delete models trained on unauthorized data
B) The obligation of AI developers to disclose their training data sources
C) The necessity for AI tools to be tested before commercial deployment
D) The financial penalties imposed on AI companies violating intellectual property laws

A

A) The requirement for AI companies to delete models trained on unauthorized data
Explanation: The FTC has ordered companies to delete both datasets and models built on improperly obtained personal data, as seen in cases against Amazon and Cambridge Analytica.

333
Q

What is the main goal of risk-based AI regulation?
A) Applying stricter rules to AI systems with higher risks
B) Treating all AI applications the same regardless of risk level
C) Removing all AI governance frameworks
D) Banning AI development

A

Correct Answer: A) Applying stricter rules to AI systems with higher risks

Explanation: Risk-based AI regulation tailors oversight requirements based on the potential impact of AI applications, imposing stricter measures on high-risk systems.

334
Q

What is a shared compliance challenge for AI deployers under both the EU AI Act and NIST AI RMF?
A) Implementing continuous AI monitoring and bias detection
B) Ensuring all AI models are developed in Europe
C) The requirement to disclose all proprietary AI models to the public
D) The prohibition of AI in high-risk industries

A

A) Implementing continuous AI monitoring and bias detection
Explanation: Both frameworks emphasize ongoing monitoring and risk assessments to ensure that AI systems remain compliant, fair, and unbiased over time.

335
Q

What does the ‘garbage in, garbage out’ principle mean in AI?
A) AI can only function correctly if trained on high-quality data
B) AI systems should not be trained on large datasets
C) AI models never produce incorrect outputs
D) AI can make inferences without any training data

A

A) AI can only function correctly if trained on high-quality data
Explanation: Poor-quality data leads to unreliable AI predictions, reinforcing the need for data cleansing and validation.
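Data cleansing in practice often starts with simple validity checks before any training happens. A minimal sketch, with a hypothetical record schema and `is_valid` rule: records failing basic quality checks are dropped rather than learned from.

```python
raw = [
    {"age": 34, "income": 52000},
    {"age": -5, "income": 48000},    # impossible value
    {"age": 29, "income": None},     # missing field
]

def is_valid(rec):
    return (rec["income"] is not None) and (0 <= rec["age"] <= 120)

clean = [r for r in raw if is_valid(r)]
print(len(clean))   # only the first record survives
```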

336
Q

Which of the following statements is true regarding AI-driven decision-making?
A) AI decisions are always objective and free from bias
B) AI decisions require explainability to ensure transparency and fairness
C) AI can operate independently without any form of governance
D) AI systems do not impact human decision-making

A

Correct Answer: B) AI decisions require explainability to ensure transparency and fairness

Explanation: Explainability is crucial for understanding AI-driven decisions, ensuring fairness, and mitigating biases.
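For a linear scorer, explainability is direct: each feature's weight times its value is its contribution to the decision, so the largest contribution names the biggest driver. The weights and applicant values below are hypothetical:

```python
weights = {"income": 0.6, "debt": -0.8, "tenure": 0.3}
applicant = {"income": 0.9, "debt": 0.7, "tenure": 0.5}

# per-feature contributions show *why* the decision was reached
contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())
decision = "approve" if score > 0 else "deny"

print(decision)
print(max(contributions, key=lambda f: abs(contributions[f])))  # biggest driver
```

Methods like SHAP generalize this weight-times-value idea to non-linear models.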

337
Q

How does AI governance under the NIST AI RMF align with compliance requirements in the EU AI Act?
A) Both frameworks emphasize risk management and accountability
B) The EU AI Act requires open-source AI models, while NIST AI RMF does not
C) NIST AI RMF does not address AI governance principles
D) The EU AI Act does not require AI accountability

A

A) Both frameworks emphasize risk management and accountability
Explanation: The NIST AI RMF and EU AI Act share governance principles, requiring organizations to define AI risks, implement monitoring mechanisms, and ensure ethical accountability.

338
Q

How does the risk classification approach in the EU AI Act compare with the risk management principles in the NIST AI RMF?
A) The EU AI Act classifies AI risks into categories with mandatory obligations, while NIST AI RMF provides a flexible approach to managing AI risks
B) NIST AI RMF mandates strict bans on all high-risk AI applications
C) The EU AI Act is entirely voluntary, just like the NIST AI RMF
D) NIST AI RMF does not account for AI risks

A

A) The EU AI Act classifies AI risks into categories with mandatory obligations, while NIST AI RMF provides a flexible approach to managing AI risks
Explanation: The EU AI Act assigns AI applications to risk levels (Minimal, Limited, High, and Unacceptable), while NIST AI RMF provides adaptable guidelines for organizations to assess and mitigate AI risks.

339
Q

Which of the following is not a characteristic of the EU AI Act’s Recitals?
A) They explain the context and objectives of the legislation
B) They are not legally binding
C) They provide guidance for interpreting Articles
D) They establish enforceable obligations

A

D) They establish enforceable obligations
Explanation: Recitals explain legislative intent but are not legally binding like Articles.

340
Q

How does reinforcement learning differ from supervised learning?
A) Reinforcement learning relies on labeled data, while supervised learning does not
B) Reinforcement learning optimizes actions based on feedback, while supervised learning learns from pre-labeled datasets
C) Supervised learning uses real-time feedback, while reinforcement learning does not
D) Reinforcement learning only applies to static environments

A

Correct Answer: B) Reinforcement learning optimizes actions based on feedback, while supervised learning learns from pre-labeled datasets

Explanation: Reinforcement learning focuses on sequential decision-making with trial-and-error learning, while supervised learning uses labeled data for training.
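The contrast can be made concrete with a two-armed bandit sketch (the reward values and exploration rule are hypothetical): the agent is never handed labeled examples; it tries actions, observes rewards, and updates its value estimates from that feedback.

```python
rewards = {"a": 0.2, "b": 0.8}       # environment feedback, unknown to the agent
values = {"a": 0.0, "b": 0.0}
counts = {"a": 0, "b": 0}

def choose():
    # try any untried action first, then exploit the best estimate
    for act in values:
        if counts[act] == 0:
            return act
    return max(values, key=values.get)

for _ in range(10):
    act = choose()
    r = rewards[act]                  # feedback, not a pre-assigned label
    counts[act] += 1
    values[act] += (r - values[act]) / counts[act]   # running-mean update

print(max(values, key=values.get))    # "b": learned by trial and reward
```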

341
Q

Which of the following is not one of the 5 Vs of big data?
A) Variety
B) Velocity
C) Visibility
D) Veracity

A

C) Visibility
Explanation: The 5 Vs of big data are Variety, Velocity, Volume, Veracity, and Value.

342
Q

Why is risk alignment critical in AI risk management?
A. It ensures AI risks align with an organization’s overall risk strategy
B. It focuses only on financial AI risks
C. It prevents AI from evolving independently
D. It guarantees AI will always operate safely

A

Correct Answer: A. It ensures AI risks align with an organization’s overall risk strategy

Explanation: AI risk management must be integrated with an organization’s broader risk framework to maintain consistency and regulatory compliance.

343
Q

Which of the following is not an AI testing method?
A) Bias testing
B) Hyperparameter tuning
C) Adversarial testing
D) Robustness testing

A

B) Hyperparameter tuning
Explanation: Hyperparameter tuning is an optimization process, not a testing method. AI testing methods include bias testing, robustness testing, and adversarial testing.
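The distinction shows up clearly in code: grid search over hyperparameters picks the configuration that minimizes a validation loss, which is optimization, not a robustness or bias test. The loss function below is a toy stand-in for "train model, evaluate on held-out data":

```python
def validation_loss(learning_rate, depth):
    # stand-in for training a model and scoring it on held-out data
    return abs(learning_rate - 0.1) + abs(depth - 4) * 0.01

grid = [(lr, d) for lr in (0.01, 0.1, 1.0) for d in (2, 4, 8)]
best = min(grid, key=lambda cfg: validation_loss(*cfg))
print(best)   # (0.1, 4) — the config with the lowest validation loss
```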

344
Q

Which global framework first introduced the concept of Fair Information Practices (FIPs)?
A) General Data Protection Regulation (GDPR)
B) The OECD Guidelines (1980)
C) California Consumer Privacy Act (CCPA)
D) NIST Privacy Framework

A

Correct Answer: B) The OECD Guidelines (1980)

Explanation: The OECD Guidelines established the foundational principles of Fair Information Practices, which have influenced various privacy regulations globally.

345
Q

Which of the following cases challenged the use of copyrighted books to train AI?
A) Burrow-Giles Lithographic v. Sarony
B) Silverman v. OpenAI
C) Thaler v. Vidal
D) Rogers v. Christie

A

B) Silverman v. OpenAI
Explanation: This case involved allegations that OpenAI used copyrighted books to train its AI models without consent, raising questions about AI training data and fair use.

346
Q

Which of the following is not an objective of the EU AI Act?
A) Ensuring AI respects fundamental rights
B) Encouraging unrestricted AI experimentation
C) Providing legal certainty for innovation
D) Prohibiting certain AI practices

A

B) Encouraging unrestricted AI experimentation
Explanation: The Act ensures AI safety while setting legal boundaries on unacceptable practices.

347
Q

Which category of AI systems is subject to the strictest regulations under the EU AI Act?
A. Minimal risk AI systems
B. Limited risk AI systems
C. High-risk AI systems
D. General-purpose AI models

A

Correct Answer: C. High-risk AI systems

Explanation: High-risk AI systems must comply with strict requirements such as risk management, data governance, human oversight, and transparency to minimize risks to safety and fundamental rights.