AI Challenges and Responsibilities Flashcards

1
Q

What is explainability in Responsible AI?

A

The understanding of the nature and behavior of an ML model, explaining outputs without knowing internal workings.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
2
Q

What does interpretability mean in Responsible AI?

A

A human can understand the cause of a model’s decision, answering ‘why’ and ‘how’.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
3
Q

What does privacy and security mean in Responsible AI?

A

Ensuring individuals control when and if their data is used by models.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
4
Q

What is transparency in Responsible AI?

A

Being open about how AI models work and how decisions are made.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
5
Q

What is veracity and robustness in Responsible AI?

A

The system remains reliable even in unexpected or adverse conditions.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
6
Q

What is AI governance?

A

Policies, processes, and tools that ensure AI is developed and used responsibly.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
7
Q

What is the aim of safety in Responsible AI?

A

To ensure algorithms are safe and beneficial to individuals and society.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
8
Q

What does controllability in Responsible AI refer to?

A

Aligning AI models to human values and intent.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
9
Q

What is Amazon Bedrock Guardrails used for?

A

Filtering content, redacting PII, enhancing safety, and blocking harmful content.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
10
Q

What can SageMaker Clarify evaluate?

A

Accuracy, robustness, toxicity, and bias in foundational models.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
11
Q

How does Data Wrangler help with bias?

A

Using Augment Data to balance datasets by generating new instances for underrepresented groups.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
12
Q

What does SageMaker Model Monitor do?

A

Performs quality analysis on models in production.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
13
Q

What is Amazon A2I (Augmented AI) used for?

A

Allows human review of ML predictions with low confidence.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
14
Q

What does SageMaker Role Manager help with?

A

Implements user-level security for model governance.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
15
Q

What is the purpose of Model Cards in AWS?

A

To document models including use cases, limitations, and metrics.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
16
Q

What are AWS AI Service Cards?

A

Responsible AI documentation for AWS services with use cases, limitations, and design choices.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
17
Q

What is a high-interpretability model example?

A

A decision tree – easy to interpret and visualize.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
18
Q

What is a trade-off in model interpretability and performance?

A

Higher interpretability often means lower performance and vice versa.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
19
Q

What is Human-Centered Design (HCD) in Responsible AI?

A

Designing AI systems to prioritize human needs.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
20
Q

What does amplified decision making focus on in HCD?

A

Designing for clarity, simplicity, and usability in high-pressure decisions.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
21
Q

What is unbiased decision making in HCD?

A

Recognizing and mitigating bias in datasets and decision processes.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
22
Q

What is cognitive apprenticeship in HCD?

A

AI learns from human experts (e.g., RLHF), and humans learn from AI with personalization.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
23
Q

What is user-centered design in Responsible AI?

A

Ensuring a wide range of users can access and benefit from AI systems.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
24
Q

What makes it difficult to regulate Generative AI?

A

Its complexity, black-box nature, and non-deterministic outputs make regulation challenging.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
25
Q

What are examples of social risks associated with Generative AI?

A

Spreading misinformation and contributing to toxic or biased content.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
26
Q

What is non-determinism in GenAI?

A

It means the same prompt can produce different outputs each time.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
27
Q

What is toxicity in the context of GenAI?

A

Generating offensive, inappropriate, or disturbing content.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
28
Q

What makes defining toxicity difficult?

A

It’s subjective and depends on cultural, contextual, and situational factors.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
29
Q

How can toxicity be mitigated?

A

By curating training data and using guardrails to filter outputs.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
30
Q

What are hallucinations in GenAI?

A

Factual errors where the model confidently generates false information.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
31
Q

What causes hallucinations in LLMs?

A

The probabilistic nature of next-word prediction, not fact-checking.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
32
Q

How can hallucinations be reduced?

A

User education, independent verification, and marking outputs as unverified.

33
Q

What is plagiarism in the GenAI context?

A

Copying or producing content without attribution or original effort.

34
Q

What is prompt misuse (poisoning)?

A

Introducing harmful or biased data to influence model behavior.

35
Q

What is prompt injection?

A

Embedding hidden instructions in input prompts to manipulate model output.

36
Q

What is an example of a harmful prompt injection?

A

Asking the model to generate malware or biased essays.

37
Q

What is exposure in GenAI?

A

Unintended output of sensitive or private user data.

38
Q

What is prompt leaking?

A

Revealing previously submitted prompts, potentially disclosing private information.

39
Q

What is jailbreaking a GenAI model?

A

Bypassing built-in ethical or safety filters to unlock restricted behaviors.

40
Q

What is few-shot jailbreaking?

A

Using a few crafted prompt-output pairs to trick the model into responding inappropriately.

41
Q

What is many-shot jailbreaking?

A

Using a large number of prompt-output examples to weaken or bypass safety mechanisms.

42
Q

Why is detecting AI-generated content important?

A

To prevent plagiarism, misinformation, and protect authenticity.

43
Q

What’s a challenge with detecting AI-generated outputs?

A

It’s difficult to trace the source or verify accuracy of generated content.

44
Q

What is an example of GenAI being misused in education?

A

Students using LLMs to write essays and cheat on assignments.

45
Q

How do some tools detect AI-generated content?

A

By analyzing writing patterns, metadata, or embedded watermarks.

46
Q

Which industries typically require an extra level of compliance when it comes to AI workloads?

A

Industries such as financial services, healthcare, and aerospace require stricter compliance measures.

47
Q

What does it mean for a workload to be considered ‘regulated’ in terms of compliance?

A

A regulated workload must adhere to specific rules, audits, archival requirements, and security standards set by regulatory bodies.

48
Q

What are some challenges of implementing compliance in AI systems?

A

Challenges include complexity and opacity, dynamism, emergent capabilities, algorithmic and human biases, and difficulty in auditing AI decision-making.

49
Q

What is meant by ‘dynamism and adaptability’ in AI systems, and why does it present a compliance challenge?

A

AI systems continually evolve and adapt, meaning their decision-making processes may change over time, complicating audit and compliance efforts.

50
Q

What are emergent capabilities in AI, and how can they impact compliance?

A

Emergent capabilities are unintended functionalities that develop in a system, potentially introducing risks beyond the original scope and complicating regulatory oversight.

51
Q

How can algorithmic bias affect compliance in AI?

A

Algorithmic bias can perpetuate discrimination or unfair outcomes if the training data isn’t representative, which violates fairness and non-discrimination regulations.

52
Q

What is a model card, and why is it important for AI compliance?

A

A model card is a standardized document that details key aspects of a machine learning model—its training data, performance metrics, biases, and limitations—to support transparency and audits.

53
Q

How do AWS services help in achieving compliance for regulated workloads?

A

AWS provides over 140 security standards and compliance certifications (e.g., NIST, ISO, HIPAA, GDPR, PCI), as well as tools like SageMaker model cards and service cards to document AI systems.

54
Q

What role do service cards play in compliance for AWS AI services?

A

AWS AI Service Cards provide documentation on intended use cases, limitations, design choices, and performance optimizations, aiding in transparency and regulatory compliance.

55
Q

Why is documenting data sources and licensing important for AI compliance?

A

Documenting the origin, licenses, and potential biases in training data is crucial for demonstrating compliance, ensuring data quality, and identifying any risk factors.

56
Q

What are some specific compliance frameworks mentioned in the lecture?

A

Examples include standards from NIST, the EU Agency for Cybersecurity, ISO, SOC, HIPAA, GDPR, and PCI DSS.

57
Q

What is the connection between human bias and AI system compliance?

A

Human bias introduced during model development can affect outputs, and if unchecked, may result in unfair or non-compliant AI decisions.

58
Q

What is the bottom line when it comes to implementing compliance on AI according to the lecture?

A

Understanding your regulatory obligations and leveraging appropriate tools and documentation on platforms like AWS is essential for compliance.

59
Q

What are the main AI governance strategies?

A

Policies, Review Cadence, Review Strategy, Safety Checks, Decision-Making Framework, Transparency Standards, Stakeholder Feedback, Training and Certification

60
Q

What are the key data governance strategies?

A

Responsible AI Framework, Data Governance Council, Data Sharing and Collaboration, Data Lifecycle Management, Data Logging, Data Monitoring, Data Analysis, Data Retention, Data Lineage, Data Cataloging

61
Q

What are AWS tools that support AI governance and compliance?

A

AWS Config, Amazon Inspector, AWS Audit Manager, AWS Artifact, AWS CloudTrail, AWS Trusted Advisor

62
Q

What are core policy areas for responsible AI?

A

Data Management, Model Training, Output Validation, Safety, Human Oversight, Bias Mitigation, Privacy Protection, Intellectual Property

63
Q

What are types of AI reviews?

A

Technical Review, Non-Technical Review

64
Q

What are key roles in AI/data governance?

A

Governance Board, Subject Matter Experts, Legal & Compliance Teams, Data Stewards, Data Owners, Data Custodians

65
Q

What are data lifecycle stages in governance?

A

Collection, Processing, Storage, Consumption, Archival

66
Q

What are key AI security strategies?

A

Threat Detection, Vulnerability Management, Infrastructure Protection, Prompt Injection Prevention, Data Encryption

67
Q

What are key AI monitoring areas?

A

Model Performance Metrics, Infrastructure Monitoring, Bias and Fairness, Compliance and Responsible AI

68
Q

What are important data security engineering practices?

A

Data Quality Assessment, Data Profiling and Monitoring, Data Lineage

69
Q

What are privacy enhancing technologies?

A

Data Masking, Data Obfuscation, Encryption, Tokenization

70
Q

What are key data access control strategies?

A

Role-Based Access Control, Fine-Grained Permissions, IAM Solutions, SSO and MFA, Access Logging and Review

71
Q

What are core data integrity strategies?

A

Backup and Recovery, Audit Trails, Lineage Tracking, Integrity Monitoring and Testing

72
Q

What is the main goal of MLOps?

A

To ensure models are deployed, monitored, and retrained systematically—not just developed once and forgotten.

73
Q

What are the key principles of MLOps?

A

Version control, automation, CI/CD, continuous retraining, continuous monitoring

74
Q

What should be version controlled in MLOps?

A

Data, code, and models

75
Q

Why is automation important in MLOps?

A

To streamline data ingestion, preprocessing, training, evaluation, deployment, and monitoring

76
Q

What is the purpose of continuous monitoring in MLOps?

A

To detect model drift, bias, and performance issues in production

77
Q

What are the key stages of an ML pipeline?

A

Data preparation, model building, evaluation, selection, deployment, and monitoring

78
Q

What types of repositories are used in MLOps?

A

Data repository, code repository, and model registry (with version control)

79
Q

What is a benefit of automating MLOps processes?

A

Increased confidence in model development and deployment