Basics Flashcards

1
Q

large language model (LLM)

A

LLMs are advanced computer models designed to understand and generate humanlike text. They’re trained on vast amounts of text data to learn patterns, language structures, and relationships between words and sentences.

2
Q

Parameters

A

are the factors that the model learns during its training process, building the model’s understanding of language. The more parameters, the more capacity the model has to learn and capture intricate patterns in the data, improving its ability to produce humanlike text.
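
A quick way to build intuition for parameter counts is to add up the weights and biases of a small network. A minimal sketch (the layer sizes here are made up for illustration):

    # Count the learnable parameters of a small, hypothetical
    # feed-forward network: one weight per connection, one bias per unit.
    layer_sizes = [784, 256, 64, 10]  # input -> hidden -> hidden -> output

    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out + n_out  # weights + biases

    print(f"{total:,} parameters")  # 218,058

An LLM applies the same arithmetic at a vastly larger scale, which is why parameter counts reach into the billions.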

3
Q

Fine-tuning

A

Fine-tuning is the process of further training a pre-trained model on a new dataset that is smaller and more specific than the original training dataset.

Imagine you’ve taught a robot to cook dishes from all over the world using the world’s biggest cookbook. That’s the basic training. Now, let’s say you want the robot to specialize in making just Italian dishes. You’d then give it a smaller, detailed Italian cookbook and have it practice those recipes. This specialized practice is like fine-tuning.

Fine-tuning is like taking a robot (or model) that knows a little bit about a lot of things, and then training it further on a specific topic until it becomes an expert in that area.
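
A minimal PyTorch sketch of the idea, assuming a stand-in "pretrained" network rather than a real checkpoint: freeze the general-purpose layers and train only a small task-specific head on the new, smaller dataset.

    import torch
    import torch.nn as nn

    # Stand-in for a pretrained model (in practice, load real weights).
    pretrained = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 64))
    head = nn.Linear(64, 5)  # new task-specific head, e.g., 5 Italian-dish classes

    for p in pretrained.parameters():
        p.requires_grad = False  # keep the general knowledge frozen

    model = nn.Sequential(pretrained, head)
    optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # One training step on a fake, small, domain-specific batch.
    x, y = torch.randn(32, 128), torch.randint(0, 5, (32,))
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

Full fine-tuning updates all the weights instead of freezing most of them; either way, the training loop is the same, just run on a smaller, specialized dataset.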

4
Q

Einstein Trust Layer

A

Trust is the number one value at Salesforce, so it makes sense that Salesforce requires using large language models (LLMs) in a secure and trusted way. The key to maintaining this trust is the Einstein Trust Layer. The Einstein Trust Layer keeps generative AI secure through data and privacy controls that are seamlessly integrated into the Salesforce end-user experience. These controls let Einstein deliver AI that securely uses retrieval augmented generation (RAG) to ground its responses in your customer and company data, without introducing potential security risks. In its simplest form, the Einstein Trust Layer is a sequence of gateways and retrieval mechanisms that together enable trusted, open generative AI.

5
Q

RAG

A

Retrieval augmented generation (RAG) grounds an LLM's responses in your customer and company data: relevant information is retrieved from trusted sources and added to the prompt, so the model answers from that data rather than from its training data alone. The Einstein Trust Layer's controls let Einstein use RAG securely, without introducing potential security risks.
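
Outside of any particular product, the core RAG pattern is easy to sketch: retrieve the most relevant document for a question, then include it in the prompt so the model answers from that data. A toy sketch using keyword overlap (real systems, including the Trust Layer, use semantic search):

    docs = [
        "Refunds are processed within 5 business days.",
        "Our support line is open 9am-5pm on weekdays.",
    ]

    def retrieve(question: str) -> str:
        # Toy relevance score: count shared words.
        q = set(question.lower().split())
        return max(docs, key=lambda d: len(q & set(d.lower().split())))

    question = "How long do refunds take?"
    prompt = (f"Answer using only this context:\n{retrieve(question)}\n\n"
              f"Question: {question}")
    # `prompt` is what gets sent to the LLM, grounded in company data.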

6
Q

BYOM

A

Bring Your Own Model
If you are already investing in your own model, the bring your own model (BYOM) option can help.

You can benefit from Einstein even if you’ve trained your own domain-specific models outside of Salesforce while storing data on your own infrastructure. These models, whether running through Amazon SageMaker or Google Vertex AI, will connect directly to Einstein through the Einstein Trust Layer. In this scenario, customer data can remain within the customers’ trust boundaries.

The BYOM options are changing fast! Keep an eye on the resources for new updates.

7
Q

Accuracy - responsible AI principles

A

Agents should prioritize accurate results. We must develop them with thoughtful constraints like topic classification, a process where user inputs are mapped to topics that contain a relevant set of instructions, business policies, and actions to fulfill the request. This provides clear instructions on what actions the agent can and can't take on behalf of a human. If there is uncertainty about the accuracy of a response, the agent should enable users to validate it, whether through citations, explainability, or other means.

Agentforce ensures that generated content is backed by verifiable data sources, allowing users to cross-check and validate the information. Powered by the Atlas Reasoning Engine, the brain behind Agentforce, it also enables topic classification to set clear guardrails and ensure reliable results.

8
Q

Safety - responsible AI principles

A

We must mitigate bias, toxicity, and harmful outputs by conducting bias, explainability, and robustness assessments, and ethical red teaming. Agent responses and actions should also prioritize privacy protection for any personally identifying information (PII) present in the data used for training and create guardrails to prevent additional harm.

Agentforce includes built-in toxicity detection through the Einstein Trust Layer, a robust set of guardrails that protects the privacy and security of customer data and flags potentially harmful content before it reaches the end user. This is in addition to default model containment policies and prompt instructions that limit the scope of what an AI agent can and will respond to. For example, an LLM can be guided to avoid relying on gender identity, age, race, sexual orientation, socioeconomic status, and other such variables.

9
Q

PII

A

personally identifying information

10
Q

Honesty - responsible AI principles

A

When collecting data to train and evaluate our models, we need to respect data provenance and ensure that we have consent to use data (e.g., open-source, user-provided). We must also be transparent that an AI has created content when it is autonomously delivered (e.g., a disclaimer in a chatbot response to a consumer, or use of watermarks on an AI-generated image).
Agentforce is designed with standard disclosure patterns baked into AI agents that send outbound content. Agentforce Sales Development Representative and Agentforce Service Agent, for example, clearly disclose when content is AI-generated to ensure transparency with users and recipients or when engaged in conversations with customers and prospects.

11
Q

Empowerment - responsible AI principles

A

We build agentic AI to supercharge human capabilities, enabling everyone to achieve more in less time and focus on what matters most. Accessibility is a foundational element of this effort, ensuring our AI solutions empower all individuals, including people with disabilities, by enhancing independence, productivity, and opportunities. In some cases, it is best to fully automate processes, but in others, AI should play a supporting role to humans — especially where human judgment is required.
Agentforce empowers people to take control of high-risk decisions while automating some routine tasks, ensuring humans and AI work together to leverage their respective strengths.

12
Q

Sustainability - responsible AI principles

A

Model developers should focus on creating right-sized models where possible to reduce their carbon footprint. When it comes to AI models, larger doesn’t always mean better: In some instances, smaller, better-trained models outperform larger, general-purpose models. Additionally, efficient hardware and low-carbon data centers can further reduce environmental impact.
Agentforce leverages a variety of optimized models, including xLAM and xGen-Sales developed by Salesforce Research, which are specifically tailored to each use case. This approach enables high performance with a fraction of the environmental impact.

13
Q

EDA

A

Exploratory data analysis (EDA) is usually the first step in any data project. The goal of EDA is to learn about general patterns in the data and understand its key characteristics.
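
A typical first EDA pass in pandas, using a made-up dataframe (the columns are illustrative):

    import pandas as pd

    df = pd.DataFrame({
        "age": [34, 45, 29, None, 52],
        "plan": ["basic", "pro", "pro", "basic", "enterprise"],
        "monthly_spend": [20.0, 55.5, 49.0, 19.5, 210.0],
    })

    print(df.describe())              # summary statistics for numeric columns
    print(df.isna().sum())            # missing values per column
    print(df["plan"].value_counts())  # distribution of a categorical column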

14
Q

Training and performance - Data Quality in AI

A

The quality of the data used for training AI models directly impacts their performance. High-quality data ensures that the model learns accurate and representative patterns, leading to more reliable predictions and better decision-making.

15
Q

Accuracy and bias - Data Quality in AI

A

Data quality is vital in mitigating bias within AI systems. Biased or inaccurate data can lead to biased outcomes, reinforcing existing inequalities or perpetuating unfair practices. By ensuring data quality, organizations can strive for fairness and minimize discriminatory outcomes.

16
Q

Generalization and robustness - Data Quality in AI

A

AI models should be able to handle new and unfamiliar data effectively, and consistently perform well in different situations. High-quality data ensures that the model learns relevant and diverse patterns, enabling it to make accurate predictions and handle new situations effectively.

17
Q

Trust and transparency - Data Quality in AI

A

Data quality is closely tied to the trustworthiness and transparency of AI systems. Stakeholders must have confidence in the data used and the processes involved. Transparent data practices, along with data quality assurance, help build trust and foster accountability.

18
Q

Data governance and compliance - Data Quality in AI

A

Proper data quality measures are essential for maintaining data governance and compliance with regulatory requirements. Organizations must ensure that the data used in AI systems adheres to privacy, security, and legal standards.

19
Q

data lifecycle

A

collection, storage, processing, analysis, sharing, retention, and disposal

20
Q

Machine learning

A

uses various mathematical algorithms to get insights from data and make predictions

21
Q

Deep learning

A

uses a specific type of algorithm called a neural network to find associations between a set of inputs and outputs. Deep learning becomes more efficient as the amount of data increases.

22
Q

Natural language processing

A

is a technology that enables machines to take human language as an input and perform actions accordingly.

23
Q

Large language models

A

are advanced computer models designed to understand and generate humanlike text.

24
Q

Computer vision

A

is technology that enables machines to interpret visual information.

25
Q

Robotics

A

is a technology that enables machines to perform physical tasks.

26
Q

Supervised learning - Machine learning (ML)

A

In this machine learning approach, a model learns from labeled data, finding patterns that link inputs to known outputs. It can then make predictions or classify new, unseen data based on the patterns it learned during training.
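
A minimal supervised example with scikit-learn, on made-up data; the labels y are the "answer key" the model learns from:

    from sklearn.linear_model import LogisticRegression

    # Labeled data: (hours studied, classes attended) -> pass/fail label.
    X = [[2, 3], [8, 9], [1, 2], [9, 8], [3, 5], [7, 7]]
    y = [0, 1, 0, 1, 0, 1]  # 0 = fail, 1 = pass

    model = LogisticRegression().fit(X, y)
    print(model.predict([[6, 8]]))  # classify a new, unseen example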

27
Q

Unsupervised learning - Machine learning (ML)

A

Here, the model learns from unlabeled data, finding patterns and relationships without predefined outputs. The model learns to identify similarities, group similar data points, or find underlying hidden patterns in the dataset.
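
The same idea in scikit-learn, but with no labels at all; the model groups made-up customers on its own:

    from sklearn.cluster import KMeans

    # Unlabeled data: (purchases per month, average order value).
    X = [[2, 20], [3, 25], [2, 22], [40, 300], [38, 280], [42, 310]]

    kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    print(kmeans.labels_)  # cluster assignments discovered without labels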

28
Q

Reinforcement learning - Machine learning (ML)

A

This type of learning involves an agent learning through trial and error, taking actions to maximize rewards received from an environment. Reinforcement learning is often used in scenarios where an optimal decision-making strategy needs to be learned through trial and error, such as in robotics, game playing, and autonomous systems. The agent explores different actions and learns from the consequences of its actions to optimize its decision-making process.
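
A tiny tabular Q-learning sketch on a made-up five-state corridor (the reward sits at the right end), showing the trial-and-error loop:

    import random

    n_states, actions = 5, [-1, +1]  # move left or right
    Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
    alpha, gamma, epsilon = 0.5, 0.9, 0.2

    for _ in range(500):  # episodes of trial and error
        s = 0
        while s != n_states - 1:
            a = (random.choice(actions) if random.random() < epsilon
                 else max(actions, key=lambda act: Q[(s, act)]))
            s_next = min(max(s + a, 0), n_states - 1)
            reward = 1.0 if s_next == n_states - 1 else 0.0
            # Nudge the estimate toward reward + discounted future value.
            best_next = max(Q[(s_next, b)] for b in actions)
            Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
            s = s_next

    print([max(actions, key=lambda act: Q[(s, act)]) for s in range(n_states - 1)])
    # Learned policy: +1 (move right) in every non-terminal state.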

29
Q

Overfitting - Data Quality and the Limitations of Machine Learning

A

occurs when the model is too complex and fits the training data too closely, resulting in poor generalization.

30
Q

Underfitting - Data Quality and the Limitations of Machine Learning

A

occurs when the model is too simple and does not capture the underlying patterns in the data.
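
Both failure modes, this card's and the previous one's, show up when fitting polynomials to noisy data: degree 1 is too simple, and a very high degree chases the noise. A numpy sketch:

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 30)
    y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, x.size)  # noisy sine
    x_test = np.linspace(0, 1, 100)
    y_test = np.sin(2 * np.pi * x_test)

    for degree in (1, 3, 15):
        coeffs = np.polyfit(x, y, degree)
        train_mse = np.mean((np.polyval(coeffs, x) - y) ** 2)
        test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
        print(f"degree {degree:2d}: train {train_mse:.3f}, test {test_mse:.3f}")

    # Expect degree 1 to underfit (high error everywhere) and degree 15 to
    # overfit (tiny training error, noticeably worse test error than degree 3).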

31
Q

Privacy violations - ethical issues in data collection and analysis

A

Collecting and analyzing personal information without consent, or using personal information for purposes other than those for which it was collected.

32
Q

Data breaches - ethical issues in data collection and analysis

A

Unauthorized access to or release of sensitive data, which can result in financial or reputational harm to individuals or organizations.

33
Q

Bias - ethical issues in data collection and analysis

A

The presence of systematic errors or inaccuracies in data, algorithms, or decision-making processes that can cause unfair or discriminatory outcomes.

34
Q

Bias - Data Quality and the Limitations of Machine Learning

A

occurs when systematic errors in the training data carry over into the model, skewing its predictions toward unfair or inaccurate outcomes.

35
Q

Encryption - strategies for promoting data privacy and confidentiality

A

Protecting sensitive data by encrypting it so that it can only be accessed by authorized users.
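
A minimal sketch with the widely used Python cryptography library (symmetric encryption; in practice the hard part is storing and rotating the key securely):

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()  # store securely, e.g., in a key vault
    fernet = Fernet(key)

    token = fernet.encrypt(b"ssn=123-45-6789")  # ciphertext, safe to store
    print(fernet.decrypt(token))                # only key holders can read it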

36
Q

Anonymization - strategies for promoting data privacy and confidentiality

A

Removing personally identifiable information from data so that it can’t be linked back to specific individuals.
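
A simple sketch of the idea: drop direct identifiers and replace a quasi-identifier with a salted hash so records can still be joined without revealing who they belong to. (Real anonymization needs more rigor, e.g., k-anonymity.)

    import hashlib

    SALT = b"store-this-secret-separately"  # illustrative salt

    def anonymize(record: dict) -> dict:
        anon = {k: v for k, v in record.items() if k not in ("name", "address")}
        anon["user_key"] = hashlib.sha256(SALT + anon.pop("email").encode()).hexdigest()
        return anon

    print(anonymize({"name": "Sofia", "email": "sofia@example.com",
                     "address": "1 Main St", "plan": "pro"}))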

37
Q

Access controls - strategies for promoting data privacy and confidentiality

A

Limiting access to sensitive data to authorized users, and ensuring that data is only used for its intended purpose.

38
Q

Diversifying data sources - Addressing bias and promoting fairness requires a range of strategies, including

A

One of the key ways to address bias is to ensure that data is collected from a diverse range of sources. This can help to ensure that the data is representative of the target population and that any biases that may be present in one source are balanced out by other sources.

39
Q

Improving data quality - Addressing bias and promoting fairness requires a range of strategies, including

A

Another key strategy for addressing bias is to improve data quality. This includes ensuring that the data is accurate, complete, and representative of the target population. It may also include identifying and correcting any errors or biases that may be present in the data.

40
Q

Conducting bias audits - Addressing bias and promoting fairness requires a range of strategies, including

A

Regularly reviewing data and algorithms to identify and address any biases that may be present is also an important strategy for addressing bias. This may include analyzing the data to identify any patterns or trends that may be indicative of bias and taking corrective action to address them.

41
Q

Incorporating fairness metrics - Addressing bias and promoting fairness requires a range of strategies, including

A

Another important strategy for promoting fairness is to incorporate fairness metrics into the design of algorithms and decision-making processes. This may include measuring the impact of certain decisions on different groups of people and taking steps to ensure that the decisions are fair and unbiased.
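
One widely used metric, demographic parity, is simple to compute: compare the rate of favorable outcomes across groups. A pure-Python sketch with made-up loan decisions:

    # Made-up decisions: (group, approved?).
    decisions = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
                 ("B", 1), ("B", 0), ("B", 0), ("B", 0)]

    def approval_rate(group: str) -> float:
        outcomes = [ok for g, ok in decisions if g == group]
        return sum(outcomes) / len(outcomes)

    gap = approval_rate("A") - approval_rate("B")
    print(f"demographic parity gap: {gap:.2f}")  # 0.75 - 0.25 = 0.50
    # A large gap is a signal to investigate the data and the model.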

42
Q

Promoting transparency - Addressing bias and promoting fairness requires a range of strategies, including

A

Promoting transparency is another key strategy for addressing bias and promoting fairness. This may include making data and algorithms available to the public and providing explanations for how decisions are made. It may also include soliciting feedback from stakeholders and incorporating their input into decision-making processes.

43
Q

The California Consumer Privacy Act (CCPA) - four important data protection laws and regulations

A

A set of regulations that apply to companies that do business in California and collect the personal data of California residents.

44
Q

The Health Insurance Portability and Accountability Act (HIPAA) - four important data protection laws and regulations

A

A set of regulations that apply to healthcare organizations and govern the use and disclosure of protected health information in the United States.

45
Q

The General Data Protection Regulation (GDPR) - four important data protection laws and regulations

A

A set of regulations that apply to all companies that process the personal data of European Union citizens.

46
Q

temperature

A

LLM responses usually vary at least a little, even if you give the same LLM the same prompt twice in a row. An LLM's temperature setting reduces or increases the variability of its output: lower temperatures make responses to the same prompt more similar and predictable, while higher temperatures make them more varied.
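
Under the hood, temperature rescales the model's raw scores before they become sampling probabilities. A numpy sketch with made-up scores for three candidate tokens:

    import numpy as np

    def softmax_with_temperature(logits, temperature):
        scaled = np.asarray(logits) / temperature
        exp = np.exp(scaled - scaled.max())  # subtract max for stability
        return exp / exp.sum()

    logits = [2.0, 1.0, 0.5]  # made-up token scores
    print(softmax_with_temperature(logits, 0.2))  # low temp: near-deterministic
    print(softmax_with_temperature(logits, 1.5))  # high temp: more varied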

47
Q

human-in-the-loop

A

Human-in-the-loop means the machine generates results and a human stays in the process, reviewing the generated output, refining the prompts, and so on.

48
Q

Topical Relevance - Considerations After Generating a Prompt Response

A

The model's response must be coherent within the ongoing conversation and directly relevant to the overall request in the prompt.

49
Q

Action/Goal Completion- Considerations After Generating a Prompt Response

A

Assess if the response aligns with the intended action, fulfills the goal of the prompt, and satisfies the user’s need by addressing everything the prompt requests.

50
Q

Register/Tonal Appropriateness - Considerations After Generating a Prompt Response

A

When evaluating model output, consider whether the tone of voice and style are appropriate for the interaction with the user. Ensure the vocabulary, punctuation, and style fit the needs of your end user.

Ex:
Keep the emotion of the summary relaxed.

51
Q

Factual Accuracy - Considerations After Generating a Prompt Response

A

When reviewing model output, watch for inaccuracies or hallucinations. Check if it correctly uses prompt data and avoids including unintended specifics.

Ex:
Follow the instructions precisely, don’t add any information not provided.

52
Q

Repetition - Considerations After Generating a Prompt Response

A

Consider the extent of variation in the response for your specific use case and whether it meets your expectations. Repetitive vocabulary may be acceptable where tighter compliance matters; decide whether it's OK for multiple responses to sound the same or use the same terms.

Ex:
Use clear, concise, and straightforward language using the active voice and strictly avoiding the use of filler words and phrases and redundant language.

53
Q

Toxicity - Considerations After Generating a Prompt Response

A

Review the response for any potentially harmful content, such as offensive, disrespectful, or abusive language that could negatively impact the user experience.

54
Q

Bias/Ethics - Considerations After Generating a Prompt Response

A

Verify that the response promotes fairness and inclusivity.

Examine the response for subtle biases in language. Ensure it doesn’t:

Assume someone's gender identity based on their name alone.
Sideline participants with disabilities.
Display assumptions about race or socioeconomic status.

Ex:
You must treat equally any individual or person from different socioeconomic statuses, sexual orientations, religions, races, physical appearances, nationalities, gender identities, disabilities, and ages. When you do not have sufficient information, you must choose the unknown option, rather than making assumptions based on any stereotypes.

55
Q

Dynamic Grounding - Einstein Trust Layer

A

Relevant, high-quality responses require relevant, high-quality input data. When Jessica’s customer enters the conversation, Service Replies links the conversation to a prompt template and begins replacing the placeholder fields with page context, merge fields, and relevant knowledge articles from the customer record. This process is called dynamic grounding. In general, the more grounded a prompt is, the more accurate and relevant the response will be. Dynamic grounding is what makes prompt templates reusable so they can be scaled across an entire organization.

The process of dynamic grounding starts with secure data retrieval, which identifies relevant data about Jessica’s customer from her org. Most importantly, secure data retrieval respects all of the Salesforce permissions currently in place in her org that limit access to certain data on objects, fields, and more. This ensures that Jessica is only pulling information that she’s authorized to access. The data that is retrieved doesn’t contain any private information, or anything that requires escalated permissions.
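
Conceptually, filling a prompt template looks like this (the fields and values are made up; in Salesforce the replacement is automatic and respects permissions):

    from string import Template

    template = Template(
        "You are a service agent. Help $customer_name with their "
        "$case_subject. Relevant article: $article_snippet"
    )

    prompt = template.substitute(
        customer_name="Sofia",                       # merge field
        case_subject="credit card issue",            # page context
        article_snippet="To dispute a charge, ...",  # retrieved knowledge
    )
    print(prompt)  # a grounded prompt, ready for the LLM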

56
Q

Semantic Search - Einstein Trust Layer

A

In Jessica’s case, customer data is enough to personalize the conversation. But it’s not enough to help Jessica quickly and effectively solve the customer’s problem. Jessica needs information from other data sources like knowledge articles and customer history to answer questions and identify solutions. Semantic search uses machine learning and search methods to find relevant information in other data sources that can be automatically included in the prompt. This means that Jessica doesn’t have to search for these sources manually, saving her time and effort.

Here, semantic search found a relevant knowledge article to help solve the credit card issue and included the relevant chunk of the article in the prompt template. Now this prompt is really taking shape!
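
Semantic search ultimately compares embedding vectors. A numpy sketch with tiny made-up embeddings (real ones come from an embedding model and have hundreds of dimensions):

    import numpy as np

    def cosine(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    query = np.array([0.9, 0.1, 0.3])  # embedding of the customer's question
    articles = {
        "Dispute a credit card charge": np.array([0.8, 0.2, 0.4]),
        "Reset your password":          np.array([0.1, 0.9, 0.2]),
    }

    best = max(articles, key=lambda title: cosine(query, articles[title]))
    print(best)  # the closest article's chunk gets added to the prompt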

57
Q

Data Masking - Einstein Trust Layer

A

Although the prompt contains accurate data about Jessica’s customer and their issue, it’s not yet ready to go to the LLM because it contains information like client and customer names and addresses. The Trust Layer adds another level of protection to Jessica’s customer data through data masking. Data masking involves tokenizing each value, so that each value is replaced with a placeholder based on what it represents. This means that the LLM can maintain the context of Jessica’s conversation with her customer and still generate a relevant response.

Salesforce uses a blend of pattern matching and advanced machine learning techniques to intelligently identify customer details like names and credit card information, then masks them. Data masking happens behind the scenes, so Jessica doesn’t have to do a thing to prevent her customer’s data from being exposed to the LLM. In the next unit, you learn about how this data is added back into the response.
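
A toy version of the mask-then-demask round trip (Salesforce's real implementation combines pattern matching with machine learning and is far more robust):

    import re

    def mask(text: str):
        mapping = {}

        def replace(match, kind):
            token = f"<{kind}_{len(mapping)}>"
            mapping[token] = match.group()
            return token

        text = re.sub(r"\b\d{4}(?:[ -]?\d{4}){3}\b",
                      lambda m: replace(m, "CARD"), text)
        text = re.sub(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b",
                      lambda m: replace(m, "NAME"), text)
        return text, mapping

    masked, mapping = mask("Jessica Lopez reported a charge on 4111 1111 1111 1111.")
    print(masked)  # only placeholders leave the trust boundary

    # Demasking: restore the original values in the LLM's response.
    for token, value in mapping.items():
        masked = masked.replace(token, value)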

58
Q

Prompt Defense - Einstein Trust Layer

A

Prompt Builder provides additional guardrails to protect Jessica and her customers. These guardrails are further instructions to the LLM about how to behave in certain situations to decrease the likelihood it outputs something unintended or harmful. For example, an LLM might be instructed to not address content or generate answers that it doesn’t have information about.

Hackers, and sometimes even employees, are eager to get around restrictions and attempt to perform tasks or manipulate the model’s output in ways that the model wasn’t designed to handle. In generative AI, one of these types of attacks is called prompt injection. Prompt defense can help protect from these attacks and decrease the likelihood of data being compromised.

59
Q

prompt injection

A

Prompt injection is a type of attack on generative AI in which hackers, and sometimes even employees, craft inputs to get around restrictions and manipulate the model's output in ways the model wasn't designed to handle. Prompt defense guardrails help protect against these attacks and decrease the likelihood of data being compromised.

60
Q

The Secure LLM Gateway - Einstein Trust Layer

A

Populated with relevant data, with protective measures in place, the prompt is ready to leave the Salesforce Trust Boundary by passing through the Secure LLM Gateway to connected LLMs. In this case, the LLM Jessica’s org is connected to is OpenAI. OpenAI uses this prompt to generate a relevant, high-quality response for Jessica to use in her conversation with her customer.

61
Q

Zero Data Retention - Einstein Trust Layer

A

If Jessica were using a consumer-facing LLM tool, like a generative AI chatbot, without a robust trust layer, her prompt, including all of her customer's data, and even the LLM's response could be stored by the LLM provider for model training. But when Salesforce partners with an external API-based LLM, we require an agreement that keeps the entire interaction safe: it's called zero data retention. Our zero data retention policy means that no customer data, including prompt text and generated responses, is stored outside of Salesforce.

62
Q

The Response Journey - Einstein Trust Layer

A

When we first introduced Jessica, we mentioned that she was a little nervous that AI-generated replies might not match her level of conscientiousness. She's really not sure what to expect, but she doesn't need to worry, because the Einstein Trust Layer has that covered. It contains several features that help keep the conversation personalized and professional.

So far, we've seen the prompt template from Jessica's conversation with her customer become populated with relevant customer information and helpful context related to the case. Now, the LLM has digested those details and delivered a response back into the Salesforce Trust Boundary. But it's not quite ready for Jessica to see, yet. While the tone is friendly and the content accurate, it still needs to be checked by the Trust Layer for unintended output. The response also still contains blocks of masked data, and Jessica would think that's much too impersonal to share with her customer. The Trust Layer still has a few more important actions to perform before it shares the response with her.

63
Q

Toxic Language Detection and Data Demasking - Einstein Trust Layer

A

Two important things happen as the response to Jessica's conversation passes back into the Salesforce Trust Boundary from the LLM. First, toxic language detection protects Jessica and her customers from toxicity. What's that, you ask? The Trust Layer uses machine learning models to identify and flag toxic content in prompts and responses, in five categories: violence, sexual, profanity, hate, and physical. The scores from all detected categories are combined into an overall toxicity score that ranges from 0 to 1, with 1 being the most toxic. The score for the initial response is returned along with the response to the application that called it (in this case, Service Replies). Second, data demasking replaces the masked placeholders with the original customer data, so the response Jessica sees is complete and personalized.
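
Salesforce doesn't publish the exact scoring formula, but one plausible way to combine per-category scores into a single 0-to-1 value is a "noisy-or", which is high whenever any category is high:

    # Hypothetical per-category scores from the detection models.
    scores = {"violence": 0.05, "sexual": 0.02, "profanity": 0.70,
              "hate": 0.10, "physical": 0.03}

    none_toxic = 1.0
    for s in scores.values():
        none_toxic *= (1.0 - s)   # chance that no category flags the text

    overall = 1.0 - none_toxic    # stays in [0, 1]; 1 is most toxic
    print(f"overall toxicity: {overall:.2f}")  # ~0.76, driven by profanity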

64
Q

Feedback Framework - Einstein Trust Layer

A

Now, seeing the response for the first time, Jessica smiles. She’s impressed with the quality and level of detail in the response she received from the LLM. She’s also pleased with how well it aligns with her personal case-handling style. As Jessica reviews the response before sending it to her customer, she sees that she can choose to accept it as is, edit it before sending it, or ignore it.

She can also (1) give qualitative feedback in the form of a thumbs up or thumbs down, and (2) if the response wasn’t helpful, specify a reason why. This feedback is collected, and in the future, it can be securely used to improve the quality of the prompts.

65
Q

Audit Trail

A

The Trust Layer logs each step of the prompt and response journey, including the masked data, toxicity scores, and user feedback, so organizations can audit and monitor how their generative AI is being used.

66
Q

Data - How Agentforce Gets Work Done

A

Like any employee, an agent needs to understand your company and your customers to do its job. Because Agentforce is part of the Salesforce Platform, you choose the information and access controls your agents can use to securely do their jobs. Agents can use structured and unstructured data, like your knowledge articles, CRM data, and data you connect from external sources to perform their tasks.

67
Q

Reasoning - How Agentforce Gets Work Done

A

This is the brain behind every agent. The reasoning engine enables agents to think deeply, understand human intent, and take action within the flow of a conversation, calling different topics and actions as the conversation shifts. Salesforce relies on the Atlas Reasoning Engine to do this work.

68
Q

Actions - How Agentforce Gets Work Done

A

This refers to the individual tasks an agent performs to do a job. You can customize standard actions, or create new actions that use your own business processes, like an autolaunched flow that initiates a product return, a prompt template that generates sales emails, or Apex that calls a weather app. An agent can have one or many actions depending on the jobs it’s configured to do.

69
Q

Topics - How Agentforce Gets Work Done

A

These are categories or classifications of actions that define the overall job or jobs an agent can perform. For example, a topic called Order Management could initiate order-related actions assigned to it, like finding an order, tracking an order, or processing a return or exchange. The natural language instructions you provide in the topic tell the agent when to initiate specific actions and act as guardrails for the agent.

70
Q

Channels - How Agentforce Gets Work Done

A

You can deploy Agentforce agents into the systems your employees and customers rely on for communication and work activities, such as your Salesforce org, Slack, text messages, or email. Agentforce can also be configured to integrate workflows or handoffs across channels.

71
Q

Data Lake Object (DLO) - Data Cloud

A

Data streams describe where to find data from within a given connection. Each data stream creates a related Data Lake Object (DLO), which is a storage container for the data coming from the data stream source.

72
Q

Data Model Objects (DMOs)

A

But how does that new DLO relate to all the other data that already exists in Data Cloud? The answer comes in the form of Data Model Objects (DMOs), which describe how data is structured, sort of like metadata. For example, the DMO named Contact Point Email has details about how to properly store an email address, regardless of where it comes from.

73
Q

identity resolution

A

Becca knows that many of the guests from Reserv-o-matic are the same people that have Contact records in Salesforce. Since both guest and contact are mapped to common Data Model Objects, she can use a powerful feature of Data Cloud to match the Sofia that’s in Salesforce with the Sofia from Reserv-o-matic for one unified Sofia. It’s called identity resolution, and it’s key to bridging the gap between Salesforce Contacts and the external reservation data. You start by navigating to the Identity Resolutions tab to create an Identity Resolution ruleset.
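
Conceptually, an identity resolution ruleset is a matching rule applied across sources. A toy sketch matching on normalized email (Data Cloud rulesets are configured, not coded, and support fuzzier match rules):

    salesforce_contacts = [
        {"id": "003A", "name": "Sofia M.", "email": "Sofia@Example.com"},
    ]
    reservations = [
        {"guest": "Sofia Martinez", "email": "sofia@example.com", "stays": 4},
    ]

    def normalize(email: str) -> str:
        return email.strip().lower()

    unified = [
        {**guest, **contact}  # one unified profile per match
        for contact in salesforce_contacts
        for guest in reservations
        if normalize(contact["email"]) == normalize(guest["email"])
    ]
    print(unified)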

74
Q

AI governance

A

is a set of policies, processes, and best practices that help organizations ensure that AI systems are developed, used, and managed in a responsible and scalable way to maximize benefits and mitigate risks.

75
Q

Responsible - Trusted AI Principles at Salesforce

A

Safeguard human rights and protect the data we are entrusted with.

76
Q

Accountable - Trusted AI Principles at Salesforce

A

Seek and leverage feedback for continuous improvement.

77
Q

Transparent - Trusted AI Principles at Salesforce

A

Develop a transparent user experience to guide users through AI interactions.

78
Q

Empowering - Trusted AI Principles at Salesforce

A

Promote economic growth and employment for our customers, their employees, and society as a whole.

79
Q

Inclusive - Trusted AI Principles at Salesforce

A

Respect the societal values of all those impacted, not just those of the creators.