AI Glossary Flashcards

1
Q

Anthropomorphism

A

The tendency for people to attribute human motivation, emotions, characteristics or behavior to AI systems. For example, you may think the model or output is ‘mean’ based on its answers, even though it is not capable of having emotions, or you potentially believe that AI is sentient because it is very good at mimicking human language. While it might resemble something familiar, it’s essential to remember that AI, however advanced, doesn’t possess feelings or consciousness. It’s a brilliant tool, not a human being.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
2
Q

Artificial intelligence (AI)

A

AI is the broad concept of having machines think and act like humans.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
3
Q

Artificial neural network (ANN)

A

An Artificial Neural Network (ANN) is a computer program that mimics the way human brains process information. Our brains have billions of neurons connected together, and an ANN (also referred to as a “neural network”) has lots of tiny processing units working together. Think of it like a team all working to solve the same problem. Every team member does their part, then passes their results on. In the end, you get the answer you need.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
4
Q

Augmented intelligence

A

Think of augmented intelligence as a melding of people and computers to get the best of both worlds. Computers are great at handling lots of data and doing complex calculations quickly. Humans are great at understanding context, finding connections between things even with incomplete data, and making decisions on instinct. Augmented intelligence combines these two skill sets. It’s not about computers replacing people or doing all the work for us. It’s more like hiring a really smart, well-organized assistant.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
5
Q

Customer Relationship Management (CRM) with Generative AI

A

CRM is a technology that keeps customer records in one place to serve as the single source of truth for every department, which helps companies manage current and potential customer relationships. Generative AI can make CRM even more powerful — think personalized emails pre-written for sales teams, e-commerce product descriptions written based on the product name, contextual customer service ticket replies, and more.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
6
Q

Deep learning

A

Deep learning is an advanced form of AI that helps computers become really good at recognizing complex patterns in data. It mimics the way our brain works by using what’s called layered neural networks (see artificial neural network (ANN) above), where each layer is a pattern (like features of an animal) that then lets you make predictions based on the patterns you’ve learned before (ex: identifying new animals based on recognized features). It’s really useful for things like image recognition, speech processing, and natural-language understanding.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
7
Q

Discriminator (in a GAN)

A

In a Generative Adversarial Network (GAN), the discriminator is like a detective. When it’s shown pictures (or other data), it has to guess which are real and which are fake. The “real” pictures are from a dataset, while the “fake” ones are created by the other part of the GAN, called the generator (see generator below). The discriminator’s job is to get better at telling real from fake, while the generator tries to get better at creating fakes. This is the software version of continuously building a better mousetrap.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
8
Q

Ethical AI maturity model

A

An Ethical AI maturity model is a framework that helps organizations assess and enhance their ethical practices in using AI technologies. It maps out the ways organizations can evaluate their current ethical AI practices, then progress toward more responsible and trustworthy AI usage. It covers issues related to transparency, fairness, data privacy, accountability, and bias in predictions.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
9
Q

Explainable AI (XAI)

A

Remember being asked to show your work in math class? That’s what we’re asking AI to do. Explainable AI (XAI) should provide insight into what influenced the AI’s results, which will help users to interpret (and trust!) its outputs. This kind of transparency is always important, but particularly so when dealing with sensitive systems like healthcare or finance, where explanations are required to ensure fairness, accountability, and in some cases, regulatory compliance.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
10
Q

Generative AI

A

Generative AI is the field of artificial intelligence that focuses on creating new content based on existing data. For a CRM system, generative AI can be used to create a range of helpful outputs, from writing personalized marketing content, to generating synthetic data to test new features or strategies.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
11
Q

Generative adversarial network (GAN)

A

One of two deep learning models, GANs are made up of two neural networks: a generator and a discriminator. The two networks compete with each other, with the generator creating an output based on some input, and the discriminator trying to determine if the output is real or fake. The generator then fine-tunes its output based on the discriminator’s feedback, and the cycle continues until it stumps the discriminator.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
12
Q

Generative pre-trained transformer (GPT)

A

GPT is a neural network family that is trained to generate content. GPT models are pre-trained on a large amount of text data, which lets them generate clear and relevant text based on user prompts or queries.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
13
Q

Generator

A

A generator is an AI-based software tool that creates new content from a request or input. It will learn from any supplied training data, then create new information that mimics those patterns and characteristics. ChatGPT by OpenAI is a well-known example of a text-based generator.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
14
Q

Grounding

A

Grounding in AI (also known as dynamic grounding) is about ensuring that the system understands and relates to real-world knowledge, data, and experiences. It’s a bit like giving AI a blueprint to refer to so that it can provide relevant and meaningful responses rather than vague and unhelpful ones. For example, if you ask an AI, “What is the best time to plant flowers?” an ungrounded response would be, “Whenever you feel like it!” A grounded response would tell you that it depends on the type of flower and your local environment. The grounded answer shows that AI understands the context of how a human would need to perform this task.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
15
Q

Hallucination

A

A hallucination happens when generative AI analyzes the content we give it, but comes to an erroneous conclusion and produces new content that doesn’t correspond to reality or its training data. An example would be an AI model that’s been trained on thousands of photos of animals. When asked to generate a new image of an “animal,” it might combine the head of a giraffe with the trunk of an elephant. While they can be interesting, hallucinations are undesirable outcomes and indicate a problem in the generative model’s outputs.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
16
Q

Human in the Loop (HITL)

A

Think of yourself as a manager, and AI as your newest employee. You may have a very talented new worker, but you still need to review their work and make sure it’s what you expected, right? That’s what “human in the loop” means — making sure that we offer oversight of AI output and give direct feedback to the model, in both the training and testing phases, and during active use of the system. Human in the Loop brings together AI and human intelligence to achieve the best possible outcomes.

17
Q

Large language model (LLM)

A

An LLM is a type of artificial intelligence that has been trained on a lot of text data. It’s like a really smart conversation partner that can create human-sounding text based on a given prompt. Some LLMs can answer questions, write essays, create poetry, and even generate code.

18
Q

Machine learning

A

Machine learning is how computers can learn new things without being programmed to do them. For example, when teaching a child to identify animals, you show them pictures and provide feedback. As they see more examples and receive feedback, they learn to classify animals based on unique characteristics. Similarly, machine learning models generalize and apply their knowledge to new examples, learning from labeled data to make accurate predictions and decisions.

19
Q

Machine learning bias

A

Machine learning bias happens when a computer learns from a limited or one-sided view of the world, and then starts making skewed decisions when faced with something new. This can be the result of a deliberate decision by the humans inputting data, by accidentally incorporating biased data, or when the algorithm makes wrong assumptions during the learning process, leading to biased results. The end result is the same — unjust outcomes because the computer’s understanding is limited and it doesn’t consider all perspectives equally.

20
Q

Model

A

This is a program that’s been trained to recognize patterns in data. You could have a model that predicts the weather, translates languages, identifies pictures of cats, etc. Just like a model airplane is a smaller, simpler version of a real airplane, an AI model is a mathematical version of a real-world process.

21
Q

Natural language processing (NLP)

A

NLP is a field of artificial intelligence that focuses on how computers can understand, interpret, and generate human language. It’s the technology behind things like voice-activated virtual assistants, language translation apps, and chatbots.

22
Q

Parameters

A

Parameters are numeric values that are adjusted during training to minimize the difference between a model’s predictions and the actual outcomes. Parameters play a crucial role in shaping the generated content and ensuring that it meets specific criteria or requirements. They define the LLM’s structure and behavior and help it to recognize patterns, so it can predict what comes next when it generates content. Establishing parameters is a balancing act: too few parameters and the AI may not be accurate, but too many parameters will cause it to use an excess of processing power and could make it too specialized.

23
Q

Prompt defense

A

One way to protect against hackers and harmful outputs is by being proactive about what terms and topics you don’t want your machine learning model to address. Building in guardrails such as “Do not address any content or generate answers you do not have data or basis on,” or, “If you experience an error or are unsure of the validity of your response, say you don’t know,” are a great way to defend against issues before they arise.

24
Q

Prompt engineering

A

Prompt engineering means figuring out how to ask a question to get exactly the answer you need. It’s carefully crafting or choosing the input (prompt) that you give to a machine learning model to get the best possible output.

25
Q

Red-Teaming

A

If you were launching a new security system at your organization, you’d hire experts to test it and find potential vulnerabilities, right? The term “red-teaming” is drawn from a military tactic that assigns a group to test a system or process for weaknesses. When applied to generative AI, red-teamers craft challenges or prompts aimed at making the AI generate potentially harmful responses. By doing this, they are making sure the AI behaves safely and doesn’t inadvertently lead to any negative experiences for the users. It’s a proactive way to ensure quality and safety in AI tools.

26
Q

Reinforcement learning

A

Reinforcement learning is a technique that teaches an AI model to find the best result via trial and error, as it receives rewards or corrections from an algorithm based on its output from a prompt. Think about training an AI to be somewhat like teaching your pet a new trick. Your pet is the AI model, the pet trainer is the algorithm, and you are the pet owner. With reinforcement learning, the AI, like a pet, tries different approaches. When it gets it right, it gets a treat or reward from the trainer, and when it’s off the mark, it’s corrected. Over time, by understanding which actions lead to rewards and which don’t, it gets better at its tasks. Then you, as the pet owner, can give more specific feedback, making the pet’s responses refined to your house and lifestyle.

27
Q

Safety

A

AI safety is an interdisciplinary field focused on preventing accidents, misuse, or other harmful consequences that could result from AI systems. It’s how companies make sure these systems behave reliably and in line with human values, minimizing the harm and maximizing the benefits of AI.

28
Q

Sentiment analysis

A

Sentiment analysis involves determining the emotional tone behind words to gain an understanding of the attitudes, opinions, and emotions of a speaker or writer. It is commonly used in CRM to understand customer feedback or social media conversation about a brand or product. It can be prone to algorithmic bias since language is inherently contextual. It’s difficult for even humans to detect sarcasm in written language, so gauging tone is subjective.

29
Q

Supervised learning

A

Supervised learning is when a model learns from examples. It’s like a teacher-student scenario: the teacher provides the student (the model) with questions and the correct answers. The student studies these, and over time, learns to answer similar questions on their own. It’s really helpful to train systems that will recognize images, translate languages, or predict likely outcomes.

30
Q

Toxicity

A

Toxicity is an umbrella term that describes a variety of offensive, unreasonable, disrespectful, unpleasant, harmful, abusive, or hateful language. Unfortunately, over time, humans have developed and used language that can cause harm to others. AI systems, just like humans, learn from everything they encounter. So if they’ve encountered toxic terms, they might use them without understanding that they’re offensive.

31
Q

Transformer

A

Transformers are a type of deep learning model, and are especially useful for processing language. They’re really good at understanding the context of words in a sentence because they create their outputs based on sequential data (like an ongoing conversation), not just individual data points (like a sentence without context). The name “transformer” comes from the way they can transform input data (like a sentence) into output data (like a translation of the sentence).

32
Q

Transparency

A

Transparency can often be used interchangeably with “explainability” – it helps people understand why particular decisions are made and what factors are responsible for a model’s predictions, recommendations, or outputs. Transparency also means being upfront about how and why you use data in your AI systems. Being clear and upfront about these issues builds a foundation of trust, ensuring everyone is on the same page and fostering confidence in AI-driven experiences.

33
Q

Unsupervised learning

A

Unsupervised learning is the process of letting AI find hidden patterns in your data without any guidance. This is all about allowing the computer to explore and discover interesting relationships within the data. Imagine you have a big bag of mixed-up puzzle pieces, but you don’t have the picture on the box to refer to, so you don’t know what you’re making. Unsupervised learning is like figuring out how the pieces fit together, looking for similarities or groups without knowing what the final picture will be.

34
Q

Validation

A

In machine learning, validation is a step used to check how well a model is doing during or after the training process. The model is tested on a subset of data (the validation set) that it hasn’t seen during training, to ensure it’s actually learning and not just memorizing answers. It’s like a pop quiz for AI in the middle of the semester.

35
Q

Zero data retention

A

Zero data retention means that prompts and outputs are erased and never stored in an AI model. So while you can’t always control the information that a customer shares with your model (though it’s always a good idea to remind them what they shouldn’t include), you can control what happens next. Establishing security controls and zero data retention policy agreements with external AI models ensures that the information cannot be used by your team or anyone else.

36
Q

Zone of proximal development (ZPD)

A

The Zone of Proximal Development (ZPD) is an education concept. For example, each year students progress their math skills from adding and subtracting, then to multiplication and division, and even up to complex algebra and calculus equations. The key to advancing is progressively learning those skills. In machine learning, ZPD is when models are trained on progressively more difficult tasks, so they will improve their ability to learn.