AI Flashcards

1
Q

What are Foundation Models?

A

Foundation models are large-scale pre-trained neural network architectures, like BERT or GPT, serving as bases for various AI tasks. They’re fine-tuned for specific applications like language understanding, generation, and more.
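As a hedged illustration (not part of the original card), the sketch below shows the "pre-train once, adapt per task" idea using the Hugging Face transformers library: a pre-trained BERT checkpoint is loaded and given a new two-class head for a downstream sentiment task. The model name and task are assumptions chosen for the example.

```python
# Minimal sketch: adapting a pre-trained foundation model (BERT) to a
# downstream classification task. Assumes transformers and torch are installed.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # reuse pre-trained weights, add a new task head
)

inputs = tokenizer("This product works great.", return_tensors="pt")
outputs = model(**inputs)  # logits for the two (not-yet-fine-tuned) classes
print(outputs.logits)
```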

2
Q

watsonx.ai

A

With watsonx.ai, you can train, validate, tune and deploy foundation and machine learning models with ease.

3
Q

What are the 3 components of the watsonx platform?

A

Watsonx features watsonx.ai for foundation models and generative AI, watsonx.data for a flexible data store, and watsonx.governance for responsible, transparent AI workflows.

4
Q

Governed data and AI

A

It refers to the technology, tools, and processes that monitor and maintain the trustworthiness of data and AI solutions.

Companies must be able to direct and monitor their AI to ensure it is working as intended and in compliance with regulations.

5
Q

5 AI pillars of trust (trustworthiness)

A
  • Transparency
  • Explainability
  • Fairness
  • Robustness
  • Privacy
6
Q

Privacy

A

AI must ensure privacy at every turn, not only of raw data, but of the insights gained from that data. Data belongs to its human creators and AI must ensure privacy with the highest integrity.

7
Q

Robustness

A

An AI solution must be robust enough to handle exceptional conditions effectively and to minimize security risk. AI must be able to withstand attacks and maintain its integrity while under attack.

8
Q

Fairness

A

Fairness in an AI solution means the reduction of human bias and the equitable treatment of individuals and groups of individuals.
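One common way to quantify group fairness is a disparate-impact ratio. The sketch below is a minimal illustration on hypothetical loan-approval predictions, not a method prescribed by the card.

```python
# Minimal sketch (hypothetical data): checking group fairness with a
# disparate-impact ratio, i.e. one group's approval rate divided by another's.
def approval_rate(predictions, groups, group):
    members = [p for p, g in zip(predictions, groups) if g == group]
    return sum(members) / len(members)

preds = [1, 0, 1, 1, 0, 1, 0, 0]  # 1 = approved
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

ratio = approval_rate(preds, groups, "b") / approval_rate(preds, groups, "a")
print(f"Disparate impact ratio: {ratio:.2f}")  # well below 1.0 suggests group b is disadvantaged
```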

9
Q

Explainability

A

Simple and straightforward explanations are needed for how AI is used. People are entitled to understand how AI arrived at a conclusion, especially when those conclusions impact decisions about their employability, their credit worthiness, or their potential.
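As one illustration (an assumed tool, not the card's prescribed approach), permutation importance from scikit-learn can surface which features drive a model's decisions:

```python
# Minimal sketch (synthetic data): surfacing which features drive a
# credit-style classifier's decisions using permutation importance.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))            # hypothetical features: income, debt, age
y = (X[:, 0] - X[:, 1] > 0).astype(int)  # outcome actually driven by income minus debt

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["income", "debt", "age"], result.importances_mean):
    print(f"{name}: {score:.3f}")        # higher score = more influence on predictions
```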

10
Q

Transparency

A

The best way to promote transparency is through disclosure. It allows the AI technology to be easily inspected and means that the algorithms used in AI solutions are not hidden or unable to be looked at more closely.

11
Q

Open and Diverse Ecosystem

A

The teams building AI solutions must be made up of people from different backgrounds and closely resemble the gender, racial, and cultural diversity of the societies which those solutions serve.

A culture of diversity, inclusion, and shared responsibility, reinforced in an open ecosystem, is imperative for building and managing AI.

12
Q

3 Principles of AI in an organization - Foundational Components of Ethics

A
  • The purpose of AI is to augment human intelligence (not replace it)
  • Data and the insights belong to their creator
  • Technology must be transparent and explainable
13
Q

AI governance

A

It involves managing and overseeing AI processes, people, and systems to ensure they align with organizational goals, stakeholder expectations, and regulatory compliance throughout the AI lifecycle.

14
Q

Affinity bias

A

Seeking out or preferring those who seem similar to you

15
Q

Availability bias

A

Overestimating the importance of an event with greater “availability” in memory, like an event that happened most recently or was highly unusual

Recent exposure, emotions, or media influence make vivid data seem more significant, affecting decisions

16
Q

Confirmation bias

A

Seeking only information that confirms what you already believe is true

17
Q

Halo effect

A

Interpreting another person’s actions through a positive lens because you feel favorably toward them

18
Q

Status quo bias

A

Opting to maintain the current situation even when better alternatives exist

19
Q

Biases in AI

A

It’s a systematic error, introduced into the system intentionally or not, that may generate unfair outcomes. Bias can be present both in an AI system’s algorithm and in the data used to train and test the system.

20
Q

The 5 phases of the AI lifecycle

A
  • Scope and plan
  • Collect and organize
  • Build and train
  • Validate and deploy
  • Monitor and manage
21
Q

Scope and plan

A

During the scope and plan stage, you define the project and evaluate potential ethical issues, including bias, by answering questions about topics like business expectations for fairness and transparency, regulation, and sensitive data handling.

  • Unconscious biases

22
Q

Collect and organize

A

During the collect and organize stage, you gather and prepare the data that will be used to train your model. Because models depend on data to learn, data quality is critical to AI fairness – yet there are many ways for bias to arise in this stage.

  • Sampling bias
  • Exclusion bias
23
Q

Build and train

A

During the build and train stage, you develop models and begin feeding them the data gathered and prepared in the collect and organize stage.

  • Observer bias/Confirmation bias
  • Aggregation bias
24
Q

Validate and deploy

A

During the validate and deploy stage, you evaluate the model’s performance and deploy it into production if it passes validation.

  • Evaluation bias
  • Deployment bias
25
Q

Monitor and Manage

A

During the monitor and manage stage, you monitor and manage the model after it is deployed, tracking quality, performance, bias, fairness, accuracy, and data drift (a minimal drift check is sketched after the list below).

  • Evaluation bias
  • Deployment bias
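As a hedged illustration of drift monitoring (the test, data, and threshold are assumptions, not from the card), a two-sample Kolmogorov-Smirnov test from SciPy can flag when a feature's production distribution has shifted away from its training distribution:

```python
# Minimal sketch (synthetic data): flagging data drift by comparing a feature's
# training distribution against recent production data with a KS test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_feature = rng.normal(loc=0.0, scale=1.0, size=1000)  # distribution at training time
prod_feature = rng.normal(loc=0.5, scale=1.0, size=1000)   # shifted distribution in production

stat, p_value = ks_2samp(train_feature, prod_feature)
if p_value < 0.01:
    print(f"Possible drift detected (KS statistic={stat:.3f}, p={p_value:.4f})")
else:
    print("No significant drift detected")
```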
26
Q

This bias:
If you label or classify data according to your subjective judgment rather than objective guidelines, you risk imprinting your own perspectives on the system.

A

Confirmation bias/Observer bias

For example, if you label images of lions as “cats” but your teammate does not, the system will be less accurate because of the inconsistently applied labels.
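A hedged sketch of one way to catch this before training (the agreement measure and labels are illustrative assumptions): compare how consistently two annotators labeled the same items.

```python
# Minimal sketch (hypothetical labels): measuring inter-annotator agreement
# to catch inconsistently applied labels before they reach the training data.
labels_a = ["cat", "cat", "lion", "dog", "cat"]
labels_b = ["cat", "lion", "lion", "dog", "cat"]

agreement = sum(a == b for a, b in zip(labels_a, labels_b)) / len(labels_a)
print(f"Inter-annotator agreement: {agreement:.0%}")  # low values signal observer bias at work
```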

27
Q

This bias:
Occurs when the data sample used to train a model is not representative of a population, overrepresenting or underrepresenting one or more classes in that population. Sampling bias can lead to inaccurate and sometimes discriminatory outcomes.

A

Sampling bias

For example, darker faces are often underrepresented in data used to train facial recognition systems.
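A minimal representativeness check (the groups and population shares below are illustrative assumptions): compare each group's share of the training sample against its share of the target population.

```python
# Minimal sketch (hypothetical data): comparing group representation in a
# training sample against the target population it is meant to represent.
from collections import Counter

train_groups = ["light", "light", "light", "light", "light", "dark"]
population_share = {"light": 0.6, "dark": 0.4}

counts = Counter(train_groups)
total = len(train_groups)
for group, expected in population_share.items():
    observed = counts[group] / total
    print(f"{group}: {observed:.2f} of sample vs {expected:.2f} of population")
```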

28
Q

This bias:
Occurs during data preprocessing, when a human removes seemingly extraneous or irrelevant data from the sample, skewing the data that trains the model.

A

Exclusion bias

For example, you might remove customer location data, believing it to be irrelevant to a model, when in reality that data might help the model detect geographically influenced patterns.

29
Q

This bias:
It happens when different populations are inappropriately aggregated in order to build a single model, even though that model might not fit all populations equitably.

A

Aggregation bias

For example, aggregation bias often arises in medicine, when predictive models are built that ignore varying prevalence rates among different ethnic populations.
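One hedged way to surface this (illustrative data, not from the card) is to evaluate the single model separately on each subgroup rather than only in aggregate:

```python
# Minimal sketch (hypothetical data): per-group evaluation of a single model,
# which can expose aggregation bias hidden by the overall accuracy number.
def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 1, 1, 0, 1, 1]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]

print("overall:", accuracy(y_true, y_pred))
for g in sorted(set(group)):
    idx = [i for i, grp in enumerate(group) if grp == g]
    print(g, accuracy([y_true[i] for i in idx], [y_pred[i] for i in idx]))
```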

30
Q

This bias:
Can occur after a model is built but before it is deployed, if it is assessed against benchmarks that are not representative of the population it is intended to assess.

A

Evaluation bias

For example, a model that predicts election outcomes might exhibit bias if it was evaluated using local election data but is deployed to predict national elections.

31
Q

This bias:
If a system is used in a new or unanticipated context, it could generate biased outcomes.

A

Deployment bias

For example, a vision recognition system trained to recognize cars in the US might be biased when used in Europe, since the makes and models of American and European cars often differ.

32
Q

Benefits of Governance in AI

A
  • Trust: Aligning AI with values builds transparent, fair, and trustworthy systems, enhancing client satisfaction and brand reputation.
  • Efficiency: Standardized and optimized AI activities accelerate development, reducing time-to-market.
  • Compliance: Managed and monitored AI activities ease alignment with industry regulations and legal requirements, reducing compliance burdens.
33
Q

Foundation model

A

An AI model that can be adapted to a wide range of downstream tasks. Foundation models are typically large-scale (e.g. billions of parameters) generative models trained on unlabeled data using self-supervision.

34
Q

Foundation model-based AI system

A

An AI system built on one or more Foundation Models.

35
Q

Generative AI

A

A class of Machine Learning techniques whose purpose is to generate content or data of many kinds such as audio, code, images, text, simulations, 3D objects, and videos. Foundation models are generative in nature.
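As a hedged example (the model choice is an assumption), the Hugging Face pipeline API can generate text with a small pre-trained generative model:

```python
# Minimal sketch: text generation with a small pre-trained generative model.
# Assumes the transformers library is installed; gpt2 is an illustrative choice.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("Foundation models are", max_new_tokens=20)
print(result[0]["generated_text"])
```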

36
Q

Ethics by Design (EbD)

IBM

A

A framework of principles and best practices designed to support the development of trustworthy AI and other technologies at IBM

37
Q

Goal of Ethics by Design

IBM

A

Support IBM’s commitment to responsible development and deployment of technologies in all IBM products and services across all business units and geographies.

38
Q

AI Ethics Board

A

It’s a committee of diverse and knowledgeable stakeholders from across the organization whose responsibility is to enforce the Trust and Transparency Principles, prioritizing awareness, education, and accountability for ethical AI development and deployment within the company’s framework and values.

39
Q

What is AI Ethics?

A

A multidisciplinary field that studies how to optimize AI’s beneficial impact while reducing risks and adverse outcomes.

40
Q

AI Lifecycle

A

The AI lifecycle consists of a sequence of stages that bring together various personas and best practices, which include:

  • Scope and plan
  • Collect and organize
  • Build and train
  • Validate and deploy
  • Monitor and manage