AI Study Cards Flashcards

1
Q

GANs are made up of what two network types?
What are the two parts called?
How does each part work?

A

GANs are made up of two neural networks: a generator and a discriminator. The two networks compete with each other, with the generator creating an output based on some input and the discriminator trying to determine if the output is real or fake. The generator then fine-tunes its output based on the discriminator’s feedback, and the cycle continues until it stumps the discriminator.
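A purely conceptual Python sketch of this feedback loop; every function below is a hypothetical placeholder rather than a real neural network:

# Conceptual sketch of the GAN training cycle described above.
# All functions are hypothetical stand-ins, not a real implementation.

def generate(noise):
    """Generator: turns random input into a candidate (fake) sample."""
    return f"fake-sample-{noise}"

def discriminate(sample):
    """Discriminator: returns a probability that the sample is real."""
    return 0.4  # placeholder score

def update(network_name, feedback):
    """Placeholder for adjusting a network's parameters based on feedback."""
    pass

for step in range(3):                  # real training runs many thousands of steps
    fake = generate(noise=step)        # generator creates an output
    realism = discriminate(fake)       # discriminator judges real vs. fake
    update("generator", realism)       # generator fine-tunes using that feedback
    update("discriminator", realism)   # discriminator improves too
# The cycle continues until the generator's outputs reliably fool the discriminator.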

2
Q

Transformer Models create …’s based on….. data rather than …. … ….
How does it help the model?

This approach helps the model …. … context and is why it’s used to … or … text.

A
  • Create outputs based on sequential data (like sentences or paragraphs) rather than individual data points.
  • This approach helps the model efficiently process context and is why it’s used to generate or translate text.
    Like ChatGPT (which stands for Chat Generative Pretrained Transformer).
3
Q

Some types of Generative AI Models

A
  1. GANs
  2. Transformer
  3. Variational Autoencoders (VAEs)
  4. Neural Radiance Fields (NeRFs)
4
Q

What type of model creates 2D & 3D Images?

A

NeRFs

5
Q

Name two neural network models

A

GANs
VAEs

6
Q

What does ChatGPT stand for?

A

Chat Generative Pretrained Transformer

7
Q

What does LLM stand for?

A

Large Language Model

8
Q

What type of model is CodeGen?
What does it do?
It democratizes software engineering by helping…

A

Is an LLM that turns English prompts into executable code. - Democratizes software engineering by helping users turn simple English prompts into executable code

9
Q

What is Conversational AI?
What does it enable?
….’s enabling …. ….’s between … … and … …, via a ….
Conducted … … ….’s native language.

A

Technologies enabling natural interactions between a human and a computer, via a conversation conducted in the human’s native language.

10
Q

Some examples of Conversational AI Products

A
  1. Chatbots
  2. Voice Assistants
  3. Virtual Agents
11
Q

4 Types of Programming

A
  1. Classical
  2. Automatic
  3. Interactive
  4. Conversational AI
12
Q

Classical programming is:

A

Traditionally, a programmer breaks a problem down into smaller sub-problems, defines a requirement, and then drafts a piece of code, which is then revised until it solves the given problem.
In 1945, this is how the first programmable machine, the ENIAC, was programmed using plugboard wiring.
Today, this is how programs are written using formal languages with higher abstraction such as C, Python, or Java.
The classical, fundamental paradigm of specifying a problem in natural language and iteratively refining a solution in a formal or programming language until the specification is satisfied remains the predominant method of programming today.

13
Q

Automatic programming is:

Coders … in a …-… …, and a … … …-… code.

A

Most of today’s popular computer languages are like this; coders write in a higher-level language, and a compiler generates low-level code; this saves time and effort, since we humans don’t have to worry about all the low-level details.
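A small, runnable illustration of the higher-level-to-lower-level idea, using Python's standard dis module (rather than a C compiler) to show the lower-level bytecode generated for us:

import dis

def add_tax(price):
    # One line of high-level code...
    return price * 1.08

# ...is automatically translated into lower-level bytecode instructions
# (e.g., LOAD_FAST, BINARY_MULTIPLY or BINARY_OP, RETURN_VALUE).
dis.dis(add_tax)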

14
Q

Interactive programming is when the person:

A

Codes a program (or parts of a program) on-the-fly, while that program is running.

15
Q

Conversational AI programming:
Combines … .. and … programming.

We call what CodeGen does … … programming.

A

Combines conversational AI (interactive human-to-machine dialogue) and automatic programming (the system automatically creates the program based on a higher-level language: your conversation!).

We call what CodeGen does conversational AI programming.

The advent of machine learning urges us to rethink the classical paradigm. Instead of a human doing the programming, can a machine learn to program itself, with the human providing high-level guidance? Can human and machine establish an interactive discourse to write a program?

The answer, as our research reveals, is a resounding yes.

16
Q

What is Few Shot Learning?

A

Few-shot learning is a type of fine-tuning using a small number of task-specific examples in the prompt, enabling the model to perform better on a task. We can already do this with prompt design and the base LLM. We include instructions and sometimes several examples in a prompt; in a sense, we prefeed the prompt with a small dataset that is relevant to the task.

Fine-tuning improves on few-shot learning by training on a much larger set of examples than can fit in the prompt. This extended training can result in better performance on specific tasks. After a model has been fine-tuned, you won’t need to provide as many examples in the prompt. This saves costs and enables faster requests and responses.
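A minimal sketch of prefeeding a prompt with a few task-specific examples; the reviews and the call_llm function are hypothetical stand-ins, not a real API:

# Few-shot prompt: an instruction plus a handful of labeled examples,
# followed by the new input we want the model to handle.
few_shot_prompt = """Classify the sentiment of each review as Positive or Negative.

Review: "The setup took five minutes and it just works."
Sentiment: Positive

Review: "The battery died after two days."
Sentiment: Negative

Review: "Support answered immediately and fixed my issue."
Sentiment:"""

def call_llm(prompt):
    """Hypothetical stand-in for a real LLM API call."""
    return "Positive"

print(call_llm(few_shot_prompt))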

17
Q

Overfitting is?
When a model becomes….

A

When a model becomes too adapted to the new dataset

18
Q

How to adjust for overfitting? (Regularization) - 3 Techniques

A
  1. Dropout
  2. Weight Decay
  3. Layer Normalization
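A minimal PyTorch-flavored sketch of where these three techniques typically appear (assumes a torch environment; the layer sizes and values are arbitrary examples):

import torch
from torch import nn

hidden_size = 128

model = nn.Sequential(
    nn.Linear(hidden_size, hidden_size),
    nn.LayerNorm(hidden_size),   # 3. layer normalization stabilizes activations
    nn.ReLU(),
    nn.Dropout(p=0.1),           # 1. dropout randomly zeroes activations during training
    nn.Linear(hidden_size, 2),
)

# 2. weight decay is applied through the optimizer
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.01)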
19
Q

What is a concern of overfitting when fine tuning?

A

A major concern in fine-tuning is when a model is trained too closely on a small dataset. It might perform exceptionally well on that dataset but poorly on unseen data.

20
Q

What is Catastrophic Forgetting?

A

Incorrect fine-tuning might cause the model to “forget” some of its previous general knowledge, making it less effective outside the specialized domain.

21
Q

4 Types of Dataset Bias:

A
  1. Selection
  2. Sampling
  3. Label
  4. Historical
22
Q

What is Selection Bias?
The … selected for …-… does … … the … … of the … space.

A

The data selected for fine-tuning does not represent the full diversity of the problem space.

23
Q

What is Sampling Bias?
When .. is … in a way that some… .. .. … …. Are less … to be … than …

A

The data is collected in a way that some members of the intended population are less likely to be included than others.

24
Q

What is Label Bias?
The … or labels provided in the fine-tuning dataset are … by … …s or ….

A

The annotations or labels provided in the fine-tuning dataset are influenced by subjective opinions or stereotypes.

25
Q

What is Historical Bias?
The data … … or … …s that are … … or …

A

The data reflects historical or societal inequities that are inherently unfair or problematic.

26
Q

What happens if you choose the wrong settings during hyperparameter selection while fine-tuning?

A

The wrong hyperparameter settings used while fine-tuning can hinder the model’s performance or even make it untrainable.

27
Q

Dataset Splitting is when you separate, or partition, your data into what 3 sets?

A

1. Training set
2. Validation set
3. Test set
The model trains on the training set, hyperparameters are tuned using the validation set, and performance is evaluated on the test set.
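A small sketch of such a split using scikit-learn, assuming it is installed; the 70/15/15 ratio is only an example:

from sklearn.model_selection import train_test_split

examples = list(range(1000))  # stand-in for a task-specific dataset

# 70% training, 30% held out...
train_set, holdout = train_test_split(examples, test_size=0.3, random_state=42)
# ...then split the holdout evenly into validation (15%) and test (15%)
val_set, test_set = train_test_split(holdout, test_size=0.5, random_state=42)

print(len(train_set), len(val_set), len(test_set))  # 700 150 150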

28
Q

What are the first 3 steps to prepare your dataset for fine tuning?

A

Data Collection
Data Cleaning
Dataset Splitting

29
Q

What are the 3 parts (or steps) of data splitting?

A
  1. Training
  2. Validation
  3. Testing
30
Q

Model is trained on what type of set?

A

Training set

31
Q

Hyperparameters are tuned using what type of set?

A

Validation Set

32
Q

Some examples of LLM architecture are:

A

GPT-3.5
BERT
RoBERTa

33
Q

Things to consider when choosing a model:

A

Whether it fits your specific task
Input & output size of the model
Dataset size
Whether your technical infrastructure is suitable for the computing power required for fine-tuning

34
Q

What does architecture selection allow you to do in fine tuning?

A

Adjust certain components depending on the task.

35
Q

What types of architectural components can you adjust in fine tuning?

A

Final Layer for classification tasks

36
Q

3 Techniques for Monitoring & Evaluating your model are?

A

Track Loss & Metrics
Early Stopping
Evaluation Metrics

37
Q

How do you adjust for performance after fine tuning a model?

A

Calibrate &
Create a feedback loop

38
Q

Predictive AI Analyzes?

A

Historical data to predict future possible outcomes.

39
Q

Generative AI focuses its model on generating what 4 types of things?

A

Generate images, texts, video, and even software code based on user input, demonstrating its potential for creative applications.

40
Q

Difference between Generative and Predictive AI

A

At their foundation, both generative AI and predictive AI use machine learning. However, generative AI turns machine learning inputs into content, whereas predictive AI uses machine learning to determine the future and boost positive outcomes by using data to better understand market trends.

41
Q

NLU is the acronym for…

A

Natural Language Understanding

42
Q

When planning bot conversations which topics are important to focus on?

A

Context
Personality
Conversation Design

43
Q

Which terms are important to know before you build your bot?

A

Variables
Dialogs
Dialog Intents
Entities

44
Q

Which License do you need for bots?

A

Service Cloud

45
Q

To get started with bots what should you set up first?

A

Service Cloud
Salesforce Sites
Chat Guided Setup Flow

46
Q

What is Artificial intelligence (AI)?

A

A branch of computer science in which computer systems use data to draw inferences, perform tasks, and solve problems with human-like reasoning.

47
Q

Bias is:
… and …. … in a computer system that create … … in … … from the … function of the system, due to … … in the … … process.

A

Systematic and repeatable errors in a computer system that create unfair outcomes, in ways different from the intended function of the system, due to inaccurate assumptions in the machine learning process.

48
Q

Corpus is?

A

A large collection of textual datasets used to train an LLM.

49
Q

Domain adaptation is the process through which?

A

The process through which organization-specific knowledge is added into the prompt and the foundation model

50
Q

Fine-tuning
Is the process of … a …-… … for a … … by training it on a …, …-… ….

A

The process of adapting a pre-trained language model for a specific task by training it on a smaller, task-specific dataset.

51
Q

Generative AI gateway in Salesforce is sometimes referred to as:

Alternate terms:

A

Einstein gateway, the gateway

52
Q

Generative Pre-Trained Transformer (GPT) is a family of …. … That’s trained on a … … of text … so that they can … …-… …

A

A family of language models that’s trained on a large body of text data so that they can generate human-like text.

53
Q

Grounding is the process through which …-… … and … … is added to the prompt to give … … the … it needs to… more ….

A

The process through which domain-specific knowledge and customer information is added to the prompt to give the model the context it needs to respond more accurately.

54
Q

Hallucination is a type of … where the model generates … … text that is … … or makes … to … …, given the …

A

A type of output where the model generates semantically correct text that is factually incorrect or makes little to no sense, given the context.

55
Q

HITL stands for?

A

Human In the Loop

56
Q

HITL requires?

A

A model that requires human interaction.

57
Q

What does a hyperparameter do?
And where do they sit?

A

A parameter used to control the training process. Hyperparameters sit outside the generated model.

58
Q

Inference is the process of?

A

The process of requesting a model to generate content.

59
Q

Inference pipelines are a sequence of …. … …. ?
They are … together to … … … …
This includes these 4 things:
Resolving … ….
Passing .. … an …
Moderating … …
And sending … … to the …

A

A sequence of reusable generative steps stitched together to accomplish a generation task. This includes resolving prompt instructions, passing the prompt to an LLM, moderating the results, and sending the results back to the user.
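A purely conceptual Python sketch of such a pipeline; each function is a hypothetical placeholder for a real step:

def resolve_prompt(template, data):
    """Resolve prompt instructions: fill the template with business data."""
    return template.format(**data)

def call_llm(prompt):
    """Hypothetical LLM call."""
    return f"Generated reply for: {prompt}"

def moderate(response):
    """Placeholder moderation step (e.g., toxicity screening)."""
    return response

def inference_pipeline(template, data):
    prompt = resolve_prompt(template, data)   # 1. resolve prompt instructions
    raw = call_llm(prompt)                    # 2. pass the prompt to an LLM
    safe = moderate(raw)                      # 3. moderate the results
    return safe                               # 4. send results back to the user

print(inference_pipeline("Write a thank-you note to {name}.", {"name": "Ada"}))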

60
Q

What is Intent?

A

An end user’s goal for interacting with an AI assistant.

61
Q

What does a Large Language Model (LLM) consist of?

A

A language model consists of a neural network with many parameters trained on large quantities of text.

62
Q

Machine learning is a ?
It specializes in ?
These are designed to ?

A

A subfield of AI specializing in computer systems that are designed to learn, adapt, and improve based on feedback and inferences from data, rather than explicit instruction.

63
Q

Model cards are?
What 5 things does it include?

A

Documents detailing a model's performance.
They include inputs, outputs, training method, conditions under which the model works best, and ethical considerations in use.

64
Q

Natural Language Processing (NLP) is?
What is an example of one of its models?

A

A branch of AI that uses machine learning to understand language as written by people. Large language models are one of many approaches to NLP.

65
Q

Parameter size is the ?

A

The number of parameters the model uses to process and generate data.

66
Q

A Prompt is a?

A

A natural language description of the task to be accomplished. An input to the LLM.

67
Q

Prompt chaining is the method of ?

A

The method of breaking up complex tasks into several intermediate steps and then tying it back together so that the AI generates a more concrete, customized, and better result.
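A tiny sketch of prompt chaining, where one step's output feeds the next; call_llm is a hypothetical stand-in:

def call_llm(prompt):
    """Hypothetical LLM call; returns canned text for illustration."""
    return f"<output for: {prompt}>"

# Step 1: an intermediate step that breaks the complex task down.
outline = call_llm("List the three main points for a product announcement.")

# Step 2: tie it back together by feeding the intermediate result into the next prompt.
announcement = call_llm(f"Write a short announcement covering these points: {outline}")

print(announcement)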

68
Q

Prompt design is the process of…

A

Prompt design is the process of creating prompts that improve the quality and accuracy of the model’s responses. Many models expect a certain prompt structure, so it’s important to test and iterate them on the model you’re using. After you understand what structure works best for a model, you can optimize your prompts for the given use case.

69
Q

Prompt engineering is ?
What is it focused on?
How does it work?

A

An emerging discipline within AI focused on maximizing the performance and reliability of models by crafting prompts in a systematic and scientifically rigorous way.

70
Q

Prompt injection is a …?
With this …, users and third parties attempt to …?

A

A method used to control or manipulate the model’s output by giving it certain prompts. With this method, users and third parties attempt to get around restrictions and perform tasks that the model wasn’t designed for.

71
Q

Prompt instructions are ?

A

Prompt instructions are natural language instructions entered into a prompt template. Your user just has to send an instruction to Einstein. Instructions have a verb-noun structure and a task for the LLM, such as “Write a description no longer than 500 characters.” Your user’s instructions are added to the app’s prompt template, and then relevant CRM data replaces the template’s placeholders. The prompt template is now a grounded prompt and is sent to the LLM.

72
Q

Prompt management

A

The suite of tools used to effectively build, manage, package, and share prompts.

73
Q

Retrieval-augmented generation (RAG)

A

A form of grounding that uses an information retrieval system like a knowledge base to enrich a prompt with relevant context, for inference or training.

74
Q

Prompt template

A

A string with placeholders that are replaced with business data values to generate a final text instruction that is sent to the LLM.
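A minimal sketch of a template whose placeholders are replaced with business data; the field names and values are made up for illustration:

# Template with placeholders for business data values.
template = (
    "Write a friendly follow-up email to {contact_name} at {account_name} "
    "about their open case #{case_number}. Keep it under 100 words."
)

# Hypothetical values pulled from CRM records.
record = {"contact_name": "Jordan", "account_name": "Acme Corp", "case_number": "00012345"}

final_instruction = template.format(**record)  # the grounded text that is sent to the LLM
print(final_instruction)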

75
Q

Semantic retrieval

A

A scenario that allows an LLM to use similar and relevant historical business data that exists in a customer’s CRM data.

76
Q

System cards are?

A

An expansion of the concepts within, and application of, model cards to address the complexity of an overall AI system, which may integrate multiple models. For LLM-based systems this includes the core components of model cards (for example, performance, use cases, ethical considerations) plus how the system operates, what models it uses, as well as how content is chosen, generated, and delivered.

77
Q

Temperature is what type of parameter?

A

A parameter that controls how predictable and varied the model’s outputs are. A model with a high temperature generates random and diverse responses. A model with a low temperature generates focused and more consistent responses.
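A worked sketch of the underlying math: dividing the model's raw scores (logits) by the temperature before softmax sharpens or flattens the probability distribution the model samples from:

import math

def softmax_with_temperature(logits, temperature):
    """Convert raw scores into probabilities; higher temperature -> flatter, more random."""
    scaled = [x / temperature for x in logits]
    exps = [math.exp(x) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]  # example scores for three candidate tokens

print(softmax_with_temperature(logits, temperature=0.2))  # low temperature: focused, consistent
print(softmax_with_temperature(logits, temperature=2.0))  # high temperature: diverse, more random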

78
Q

Toxicity

A

A term describing many types of discourse, including but not limited to offensive, unreasonable, disrespectful, unpleasant, harmful, abusive, or hateful language.

79
Q

Trusted AI

A

Guidelines created by Salesforce that are focused on the responsible development and implementation of AI.

80
Q

5 Data Ethics Best Practices are:

A
  1. Use and collect individual information appropriately.
  2. Provide a clear exchange of value for data.
  3. Treat sensitive data carefully.
  4. Collect and use only what you need.
  5. Choose 3rd party partners carefully.
81
Q

Einstein Discovery Use cases:

A
  1. Regression
  2. Binary Classification
  3. Multiclass Classification
82
Q

Steps to Implement an Einstein Discovery Solution (7)

A
  1. Target Outcome
  2. Prepare Data
  3. Create Model
  4. Evaluate Model
  5. Explore Insights
  6. Deploy Model
  7. Predict & Improve
83
Q

What is RAG?

A

Retrieval Augmented Generation

84
Q

What 5 types of Risk Mitigation Strategies do you need to think about?

A
  1. Technical & Security
  2. Data & Privacy
  3. Ethical & Safety
  4. Operational
  5. Compliance & Legal
85
Q

What are 2 examples of risk mitigation strategies?

A
  1. Access Controls
  2. Change Management
86
Q

In addition to calculating the business value, organizations should assess difficulty of an AI project. What should they keep in mind? 5 things

A
  1. Technical feasibility
  2. Operational readiness
  3. Data readiness
  4. Risks
  5. Executive buy-in
87
Q

Offline preparation for RAG in Data Cloud involves these 4 steps:

A
  1. Ingest
  2. Chunk
  3. Vectorize
  4. Index
88
Q

Integration and Runtime use in prompts has these 5 steps:

A
  1. Query from prompt
  2. Vectorize
  3. Relevant Content from Knowledge Store
  4. Augment the prompt
  5. Prompt submitted to the LLM
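A toy end-to-end sketch covering both the offline preparation above (ingest, chunk, vectorize, index) and this runtime flow; the "vectors" here are trivial word sets rather than real embeddings, purely to show the sequence:

# --- Offline preparation: ingest, chunk, vectorize, index ---
documents = {
    "returns": "Customers can return items within 30 days with a receipt.",
    "shipping": "Standard shipping takes 3-5 business days.",
}

def vectorize(text):
    """Toy stand-in for an embedding model: a set of lowercase words."""
    return set(text.lower().split())

index = {name: vectorize(chunk) for name, chunk in documents.items()}  # knowledge store

# --- Runtime: query from prompt, vectorize, retrieve, augment, submit ---
def retrieve(question, top_k=1):
    query_vec = vectorize(question)
    ranked = sorted(index, key=lambda name: len(index[name] & query_vec), reverse=True)
    return [documents[name] for name in ranked[:top_k]]

def call_llm(prompt):
    """Hypothetical LLM call."""
    return f"<answer grounded in: {prompt}>"

question = "How many days do I have to return an item?"
context = retrieve(question)                                    # relevant content from the store
augmented_prompt = f"Context: {context}\nQuestion: {question}"  # augment the prompt
print(call_llm(augmented_prompt))                               # prompt submitted to the LLM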
89
Q

Why might you want to use RAG in your prompts?

A

Because RAG improves the accuracy and relevance of your LLM responses.

90
Q

In a prompt template, what do you add to trigger RAG to search for specific information?

A

Einstein Search retriever

91
Q

SMART Goals are?

A
  1. Specific
  2. Measurable
  3. Attainable
  4. Relevant
  5. Temporal
92
Q

What are the 3 main risk areas?

A
  1. Data leaks
  2. Regulatory Requirements
  3. Reputation Harm
93
Q

3 Types of Guardrails for AI Risks

A
  1. Security
  2. Technical
  3. Ethical
94
Q

What is toxicity in an AI model?

A

Toxicity is when an AI model generates hateful, abusive and profane (HAP) or obscene content.

95
Q

Bias is

A

Bias is when AI reflects harmful stereotypes, such as racial or gender stereotypes.

96
Q

What guardrails prevent against data leaks?

A

Data Masking
Secure Data retrieval and grounding

97
Q

5 Key components of an agent

A
  1. Role
  2. Knowledge
  3. Actions
  4. Guardrails
  5. Channels
98
Q

5 Key Features of Agentforce Reasoning

A
  1. Multi-turn chat
  2. Topic classification
  3. Instructions and Actions
  4. Knowledge Retrieval
  5. Searchable public data
99
Q

How does an Agent take action?

A
  1. Receives a trigger
  2. Uses an LLM and natural language descriptions to identify the context and select a topic that best fits the job.
  3. Depending on the task, an agent selects and chains actions.
  4. Agents dynamically plan and execute the tasks.
100
Q

What are two Agentforce guardrails?

A
  1. Natural-language instructions telling the agent what it can and can’t do.
  2. Built-in security features in the Einstein Trust Layer
101
Q

5 Attributes of an Agent

A
  1. Role
  2. Data
  3. Actions
  4. Guardrails
  5. Channel
102
Q

Agent Role means..

A

What job they do

103
Q

Agent Data means…

A

What can they access

104
Q

What is the Einstein Trust Layer?

A

The Einstein Trust Layer is a sequence of gateways and retrieval mechanisms that together enable trusted and open generative AI.

105
Q

What are the 3 controls of Generation that make up the Einstein Trust Layer?

A
  1. Audit Trail
  2. Data Demasking
  3. Toxic Language
106
Q

What are the 5 controls of a Prompt that make up the Einstein Trust Layer?

A
  1. Secure Data Retrieval
  2. Dynamic Grounding
  3. Semantic Retrieval
  4. Data Masking
  5. Prompt Defense
107
Q

Einstein Trust Layer works with 4 controls: What are they?

A
  1. Retrieves info via a prompt from Salesforce Apps
  2. Prompt Trust Controls (5)
  3. Sends it to a secure Gateway for Hosted Models that have (1-2) controls.
  4. Generates with (3) controls - sending it back to the SF Apps.
108
Q

Why did Salesforce create the Einstein Trust Layer?
1. To help your company make its own LLM
2. To provide a toolbox of features to protect customer and company data.
3. To help you build prompts.
4. To help you access all data in your org.

A
  1. To provide a toolbox of features to protect customer and company data.
109
Q

What is the #1 Salesforce value?
1. CRM
2. Happiness
3. Trust
4. AI

A

Trust

110
Q

What is Semantic Retrieval?
(not generally available yet)

A

Semantic Retrieval uses ML and search methods to find relevant information in other data sources that can be automatically included in the prompt.

111
Q

In Data Masking what is done to each value so it has a placeholder based on what it represents?

A

Each value that should be masked is given a token (tokenized) based on what it represents, so that the LLM can maintain the context with whomever it is interacting and still generate a relevant response.
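A simplified sketch of the mask-then-demask idea; the regex and token names are illustrative, not the actual Trust Layer implementation:

import re

def mask(text):
    """Replace sensitive values with placeholder tokens and keep a map for demasking."""
    replacements = {}
    def _sub(match):
        token = f"{{EMAIL_{len(replacements) + 1}}}"
        replacements[token] = match.group(0)
        return token
    masked = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", _sub, text)
    return masked, replacements

def demask(text, replacements):
    """Restore the original values after the response comes back."""
    for token, original in replacements.items():
        text = text.replace(token, original)
    return text

prompt, mapping = mask("Email jordan@example.com about their open case.")
print(prompt)                   # the LLM sees only the placeholder token
print(demask(prompt, mapping))  # the original value is restored inside the trust boundary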

112
Q

Prompt Defense takes the form of what?

A

Guardrails in the form of further instructions to the LLM about how to behave in certain situations to decrease the likelihood it outputs something unintended or harmful.

113
Q

What is a prompt injection?

A

It is a form of attack that hackers and employees use to get around restrictions and attempt to perform tasks or manipulate the model's output in ways the model was not designed for.

114
Q

How does dynamic grounding improve a prompt’s context?
1. It overrides a prompt template.
2. It hides any customer data.
3. It gives an agent full access to all data in a customer’s org.
4. It pulls authorized, secure, relevant data directly from the customer’s org, and includes it in the prompt.

A

It pulls authorized, secure, relevant data directly from the customer’s org, and includes it in the prompt.

115
Q

What do the Trust Layer’s prompt defense guardrails help protect against?
1. Data Masking
2. Prompt Injection attacks
3. Tokenized data
4. New Prompt templates

A

Prompt Injection attacks

116
Q

What does “Zero Data Retention” mean when working with models created outside of Salesforce?

A

It's an agreement to keep the entire interaction safe. The agreement means that no customer data, including prompt text and generated responses, is stored outside of Salesforce.

117
Q

What are the first two things that happen when the conversation or data is passed back into the Salesforce Trust Boundary from the LLM?

A

Toxic Language Detection and Data Demasking

118
Q

Toxic Language Detection works how?

A

The Salesforce assessment tool, which is built with a set of deep learning models, scans the response for anything toxic.

119
Q

How does the toxic assessment tool work?

A

It looks for toxic, hateful, violent, sexual, identifiable, physical, and profane responses.

The tool scores the initial response along these categories, and sends the response back to the application that called it.
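A conceptual sketch of category-based scoring; the category names mirror the list above, while the scores and threshold are made-up illustrations:

# Hypothetical per-category scores returned by a toxicity classifier for one response.
scores = {
    "toxic": 0.02, "hate": 0.01, "violent": 0.00, "sexual": 0.00,
    "identifiable": 0.03, "physical": 0.00, "profane": 0.01,
}

overall = max(scores.values())   # one simple way to summarize the categories
is_safe = overall < 0.5          # illustrative threshold

print(f"overall={overall:.2f}, safe={is_safe}")
# The scored response is then sent back to the application that called the tool.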

120
Q

Data Demasking

A

The Trust Layer uses the same tokenized data we saved when we originally masked the data to demask it before sharing the data.

121
Q

Feedback Framework is?
(not GA yet) but it is….

A

Before sending a generated response to the customer, the user sees that they can choose to accept it as is, edit it before sending it, or ignore it.

They can also (1) give qualitative feedback in the form of a thumbs up or thumbs down, and (2) if the response wasn't helpful, specify a reason why. This feedback is collected, and in the future, it can be securely used to improve the quality of the prompts.

122
Q

Audit Trail does?

A

Everything that has transpired during the interaction is tracked via timestamped metadata that is collected in an audit trail.
This includes the prompt, the original unfiltered response, any toxic language scores, and feedback collected along the way.

123
Q

What does zero data retention mean?

A. The person who receives the response deletes the prompt and all its data.
B. The response remains masked.
C. The prompt remains masked.
D. The prompt, its responses, and all related data are forgotten by the LLM.

A

D. The prompt, its responses, and all related data are forgotten by the LLM.

124
Q

How is a response scanned for toxic language?

A. The end user reviews the response.
B. The response is sent to a panel of reviewers.
C. The Salesforce assessment tool is applied to the response.
D. A third-party LLM checks and stores the response.

A

The Salesforce assessment tool is applied to the response.

125
Q

Einstein 1 Platform includes what four parts?

A

CRM Apps
Einstein AI with its actions
Data Cloud
Trust Layer

126
Q

Which are the three current waves of the AI revolution?

A

Predictive AI
Generative AI
Autonomous & Agent AI

127
Q

What is the 4th wave coming up?

A

Artificial General Intelligence

128
Q

What is Einstein Data Prism?

A

Einstein Data Prism is a grounding solution for generative AI apps within Salesforce; it improves the accuracy of generative AI solutions that use its grounding capabilities.

With Einstein Data Prism, you automatically ground your Large Language Models (LLMs) so you get more accurate and relevant responses to utterances. Einstein Data Prism is automatically enabled in integrated apps, such as Einstein Segment Creation and Einstein Copilot.

129
Q

What is Einstein Opportunity Insights?

A

A feature within Salesforce that uses artificial intelligence to analyze historical sales data and customer engagement to identify patterns and predict the likelihood of closing deals. It provides insights to sales reps on which opportunities are on track or at risk and what actions they can take to improve their win rates, leveraging machine learning and sentiment analysis.

130
Q

What is Einstein Email Composer and what does it do?

A

Einstein Email Composer refers to a feature within Salesforce where users can leverage artificial intelligence (AI) powered by Einstein to automatically generate personalized emails for leads and contacts, composing emails with the help of AI directly within the Salesforce email interface. It's designed to streamline the email drafting process for sales reps by creating tailored content based on customer data.

131
Q

The LLM securely retrieves data and provides a personalized response using the pulled data. This process is known as:

A

Dynamic Grounding

132
Q

What is an acronym for Dynamic Grounding?

A

RAG

133
Q

What are some samples of Attributes you can mask for sensitive data?

A

Credit Card
Email Address
IBAN Code
Company Name
Passport
Name
Phone Number
US Drivers License
US ITIN
US SSN
Birthdate

134
Q

What is SF’s Zero Retention Policy?

A

After the LLM has processed your prompt, any data referenced won’t be stored by the LLM provider.

135
Q

What does generative AI Audit data allow us to do?

A

Track data masking and detect toxicity to make sure any generative responses are accurate.

136
Q

The Einstein Generative AI & Feedback Data Dashboard

https://salesforce.vidyard.com/watch/wiYSd1PqxYRzQykbgjgB8C?

A

Contains visualizations

137
Q

Which feature of the Einstein Trust Layer helps limit hallucinations and decrease the likelihood of unintended outputs?

Dynamic Grounding with Secure Data Retrieval
Prompt Defense
Toxicity Scoring

A

Prompt Defense
Prompt Defense refers to system policies that help limit hallucinations and decrease the likelihood of harmful outputs.

138
Q

What is one way the Einstein Trust Layer ensures data privacy?

-The Einstein Trust Layer detects and masks sensitive information before sending it to the large language model (LLM).
-The Einstein Trust Layer assigns role-based access controls to regulate data access.
-The Einstein Trust Layer enhances firewall protections to prevent unauthorized access.

A

-The Einstein Trust Layer detects and masks sensitive information before sending it to the large language model (LLM).

139
Q

A healthcare company is implementing Salesforce Einstein to enhance its customer service operations but is highly concerned about data privacy and healthcare regulation compliance. The company requires that no patient data is used for model training or product improvements. What feature of the Einstein Trust Layer addresses the organization’s data privacy concerns?

Zero-Data Retention Policy
Dynamic Grounding
Prompt Defense

A

Zero-Data Retention Policy

140
Q

From where is the Einstein Generative AI Audit and Feedback Data Report package accessed?

Data Cloud
Marketing Cloud
Sales Cloud

A

Data Cloud

141
Q

Zero-Data Retention Policy

A

Ensures no third-party retention, LLM training, or human data access.

142
Q

Toxicity Scoring

A

Evaluates content for toxicity and logs scores in Data Cloud as part of an audit trail.

143
Q

Prompt Defense

A

System policies designed to limit AI hallucinations and reduce the risk of unintended outputs.

144
Q

Data Masking

A

Detects and replaces sensitive data with placeholder text before it is sent to the LLM

145
Q

The Einstein Trust Layer section makes up what percentage of the Salesforce AI Specialist Certification exam?

A

15%

146
Q

Which key topic does the Einstein Trust Layer section of the exam cover?

Configure standard, custom, and BYO-LLM generative models.
Given a scenario, identify the correct generative AI feature in Einstein for Service
Explain the processes and policies that protect customer data.
Leverage standard Copilot actions, and create custom Copilot actions.

A

Explain the processes and policies that protect customer data.

147
Q

A sales team wants to use AI to prioritize outreach to potential customers, based on their likelihood to convert. Which feature of Einstein for Sales Cloud should the sales team use?

Einstein Opportunity Scoring
Einstein Activity Capture
Einstein Lead Scoring

A

Einstein Lead Scoring
Lead Scoring uses AI to score leads by how well they fit the company’s successful conversion patterns, allowing sales teams to prioritize their leads based on these scores.

148
Q

A company is preparing to reach out to potential leads who have shown interest in the company’s latest product. The company wants to send personalized emails based on each lead’s interactions and interests. Which feature should the company use?

Einstein Sales Emails
Einstein Service Replies
Einstein Automated Contacts

A

Einstein Sales Emails
Sales Emails uses data from Salesforce to generate email content that is tailored to the recipient’s interests and previous interactions.

149
Q

A customer support team manages a high volume of customer inquiries daily. The team wants to leverage generative AI to decrease the time it takes to draft and send responses to customers. Which feature should the company use?

Einstein Case Classification
Einstein Call Summaries
Einstein Service Replies for Email

A

Einstein Service Replies for Email
Service Replies for Email helps users draft and send personalized email responses to customers based on recommended Knowledge articles.

150
Q

A support team manager wants to implement a feature that will help agents quickly catch up on ongoing customer conversations. The manager needs a solution that helps agents create an outline of completed conversations within a case, including the issue and resolution. Which feature meets these requirements?

Einstein Service Replies for Chat
Einstein Work Summaries
Einstein Article Recommendations

A

Einstein Work Summaries
Work Summaries provides real-time summaries of ongoing conversations, including the issue and resolution.

151
Q

Call Explorer

A

Sales Cloud feature that enables users to quickly gather information about voice and video calls.

152
Q

Call Summaries

A

Sales Cloud feature for creating and sharing editable summaries of voice and video calls

153
Q

Einstein Service Replies

A

Service Cloud feature that generates email and chat responses based on knowledge-base data.

154
Q

Work Summaries

A

Service Cloud feature that predicts and fills a summary, issue, and resolution after customer conversations.

155
Q

Snapshot of Einstein for Service Features

A

Einstein Article Recommendations: Recommends relevant knowledge articles to agents on open cases.
Einstein Bots: Automatically resolves common issues in conversations on chat and messaging channels. Bot conversations can be purchased via the Digital Engagement SKU, or as an add-on.
Einstein Case Classification: Predicts field values like Priority, Reason, or Type for classifying incoming cases based on the text a customer presents in the case Subject and Description.
Einstein Case Routing: Works with Einstein Case Classification to triage and route cases to the right agent or queue.
Einstein Case Wrap-Up: Lets chat agents complete cases fast, with greater accuracy and consistency.
Einstein Conversation Mining: Transforms conversation data into service insights and builds bot intents. This is included in the Service Intelligence SKU, not in the Einstein for Service SKU.
Einstein Knowledge Creation: Grows your knowledge base and captures information in the flow of work with AI-generated article drafts.
Einstein Next Best Action: Uses data insights and business rules to recommend offers and actions for an agent to take.
Einstein Reply Recommendations: Analyzes chat transcripts to recommend relevant replies during chat and messaging sessions.
Einstein Service Replies: Drafts and recommends fluent, courteous, and relevant replies using generative AI for your agents to review, edit, and post.
Einstein Work Summaries: Drafts a summary, issue, and resolution using generative AI, based on a Chat conversation between an agent and customer.
Service Analytics: Provides insights into contact center operations, helping you deliver enhanced customer experiences.

156
Q

For creating prompts in Prompt Builder what does CTA mean?

A

Create
Test
Activate
