Fundamentals of ML and AI Flashcards

1
Q

Artificial Intelligence

A

The theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.

AI is a broad field that encompasses the development of intelligent systems capable of performing tasks that typically require human intelligence, such as perception, reasoning, learning, problem-solving, and decision-making. AI serves as an umbrella term for various techniques and approaches, including machine learning, deep learning, and generative AI, among others.

2
Q

Machine Learning

A

The use and development of computer systems that are able to learn and adapt without following explicit instructions, by using algorithms and statistical models to analyze and draw inferences from patterns in data.

ML is a type of AI for understanding and building methods that make it possible for machines to learn. These methods use data to improve computer performance on a set of tasks.

3
Q

Deep Learning

A

A type of machine learning based on artificial neural networks in which multiple layers of processing are used to extract progressively higher level features from data.

Deep learning uses the concept of neurons and synapses similar to how our brain is wired. An example of a deep learning application is Amazon Rekognition, which can analyze millions of images and streaming and stored videos within seconds.

4
Q

Generative AI

A

Generative artificial intelligence (AI) is a type of AI that can create new content, such as images, music, text, videos, and audio. It uses deep neural networks to learn from large datasets and produce new content that’s similar to the data it’s learned from.

Generative AI is a subset of deep learning because it can adapt models built using deep learning, but without retraining or fine-tuning.

Generative AI systems are capable of generating new data based on the patterns and structures learned from training data.

5
Q

Sentience

A

Sentience is the ability to feel or perceive, which allows a being to experience sensations and emotions. This would necessarily include consciousness.

Sentience is feeling; sapience is thinking.

6
Q

Sapience

A

Sapience is the capacity for intelligence, wisdom, and logic, along with the ability to solve problems, learn, and understand.

Sentience doesn’t even require self-awareness. Sapience, on the other hand, is often described as consciousness, or the ability to reason.

Sapience is generally the quality that would differentiate an intelligent species from animals.

7
Q

Building a machine learning model involves…

A

Building a machine learning model involves data collection and preparation, selecting an appropriate algorithm, training the model on the prepared data, and evaluating its performance through testing and iteration.

8
Q

Labeled data

A

Labeled data is a dataset where each instance or example is accompanied by a label or target variable that represents the desired output or classification. These labels are typically provided by human experts or obtained through a reliable process.

Example: In an image classification task, labeled data would consist of images along with their corresponding class labels (for example, cat, dog, car).

9
Q

Unlabeled data

A

Unlabeled data is a dataset where the instances or examples do not have any associated labels or target variables. The data consists only of input features, without any corresponding output or classification.

Example: A collection of images without any labels or annotations.

10
Q

Structured data

A

Structured data refers to data that is organized and formatted in a predefined manner, typically in the form of tables or databases with rows and columns. This type of data is suitable for traditional machine learning algorithms that require well-defined features and labels. The following are types of structured data.

Tabular data: This includes data stored in spreadsheets, databases, or CSV files, with rows representing instances and columns representing features or attributes.

Time-series data: This type of data consists of sequences of values measured at successive points in time, such as stock prices, sensor readings, or weather data.
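
As a small illustration (assuming the pandas library, which the card above does not mention), a tabular dataset can be sketched like this:

```python
# A tiny illustration of structured, tabular data using pandas (an assumed
# library): each row is an instance, each column is a feature or attribute.
import pandas as pd

df = pd.DataFrame({
    "age": [34, 29, 45],
    "income": [55000, 48000, 72000],
    "churned": [0, 1, 0],     # could serve as a label column for supervised ML
})
print(df)
```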

11
Q

Unstructured data

A

Unstructured data is data that lacks a predefined structure or format, such as text, images, audio, and video. This type of data requires more advanced machine learning techniques to extract meaningful patterns and insights.

Text data: This includes documents, articles, social media posts, and other textual data.

Image data: This includes digital images, photographs, and video frames.

12
Q

Supervised Learning

A

In supervised learning, the algorithms are trained on labeled data. The goal is to learn a mapping function that can predict the output for new, unseen input data.
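
A minimal sketch of the idea, assuming scikit-learn (an illustrative library choice): the model is fit on labeled examples and then predicts labels for new inputs.

```python
# Minimal supervised-learning sketch using scikit-learn (an assumed library).
from sklearn.linear_model import LogisticRegression

X_train = [[1.0, 2.0], [2.0, 1.0], [8.0, 9.0], [9.0, 8.0]]  # input features
y_train = [0, 0, 1, 1]                                       # labels supplied with the data

model = LogisticRegression()
model.fit(X_train, y_train)                      # learn the mapping from inputs to labels
print(model.predict([[1.5, 1.5], [8.5, 8.5]]))   # predictions for new, unseen inputs
```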

13
Q

Unsupervised Learning

A

Unsupervised learning refers to algorithms that learn from unlabeled data. The goal is to discover inherent patterns, structures, or relationships within the input data.
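
A minimal sketch assuming scikit-learn's KMeans (an illustrative choice): the algorithm groups the inputs without being given any labels.

```python
# Minimal unsupervised-learning sketch with scikit-learn's KMeans (an assumed library).
from sklearn.cluster import KMeans

X = [[1.0, 1.1], [0.9, 1.0], [8.0, 8.2], [8.1, 7.9]]   # features only, no labels
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)   # cluster assignments discovered from the data's structure
```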

14
Q

Reinforcement Learning

A

In reinforcement learning, the machine learns through trial-and-error interactions with its environment, guided only by a performance score rather than labeled examples. Feedback is provided in the form of rewards or penalties for its actions, and the machine learns from this feedback to improve its decision-making over time.
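
A toy, library-free sketch of the loop described above; the two-action environment and reward values are hypothetical.

```python
# Toy reinforcement-learning loop: act, receive a reward, update value estimates.
import random

values = {"left": 0.0, "right": 0.0}            # estimated value of each action

def get_reward(action):
    return 1.0 if action == "right" else 0.0    # "right" is the rewarded action here

for step in range(200):
    if random.random() < 0.1:                   # explore occasionally
        action = random.choice(list(values))
    else:                                       # otherwise pick the best-known action
        action = max(values, key=values.get)
    reward = get_reward(action)                 # feedback from the environment
    values[action] += 0.1 * (reward - values[action])   # learn from the feedback

print(values)   # the estimate for "right" typically ends up close to 1.0
```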

15
Q

Inferencing

A

Inferencing begins after the model has been trained: it is the process of using the information that the model has learned to make predictions or decisions on new data.

16
Q

Batch inferencing

A

Batch inferencing is when the computer takes a large amount of data, such as images or text, and analyzes it all at once to provide a set of results. This type of inferencing is often used for tasks like data analysis, where the speed of the decision-making process is not as crucial as the accuracy of the results.

17
Q

Real-time inferencing

A

Real-time inferencing is when the computer has to make decisions quickly, in response to new information as it comes in. This is important for applications where immediate decision-making is critical, such as in chatbots or self-driving cars. The computer has to process the incoming data and make a decision almost instantaneously, without taking the time to analyze a large dataset.
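
A hypothetical sketch contrasting the two inferencing styles; `model` and its `predict` method stand in for any trained model and are not a real API.

```python
# Hypothetical contrast between batch and real-time inferencing.

def batch_inference(model, dataset):
    # Analyze a large collection all at once; throughput and accuracy matter
    # more than the latency of any single result.
    return [model.predict(item) for item in dataset]

def real_time_inference(model, request):
    # Respond to one incoming request immediately; low latency is critical.
    return model.predict(request)
```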

18
Q

Artificial neural networks

A

Computational models that are designed to mimic the way the human brain processes information.

Neural networks have lots of tiny units called nodes that are connected together. These nodes are organized into layers. The layers include an input layer, one or more hidden layers, and an output layer.
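
A minimal forward-pass sketch assuming NumPy, with made-up weights, showing data flowing from the input layer through one hidden layer to the output layer.

```python
# Minimal forward pass through a tiny neural network using NumPy (an assumed library).
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 4))                          # 4 input features (input layer)
w_hidden, b_hidden = rng.normal(size=(4, 8)), np.zeros(8)
w_out, b_out = rng.normal(size=(8, 2)), np.zeros(2)

hidden = relu(x @ w_hidden + b_hidden)               # hidden layer of 8 nodes
output = hidden @ w_out + b_out                      # output layer of 2 nodes
print(output)
```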

19
Q

Computer Vision

A

Computer vision is a field of artificial intelligence that makes it possible for computers to interpret and understand digital images and videos. Deep learning has revolutionized computer vision by providing powerful techniques for tasks such as image classification, object detection, and image segmentation.

20
Q

Natural Language Processing (NLP)

A

Natural language processing (NLP) is a branch of artificial intelligence that deals with the interaction between computers and human languages. Deep learning has made significant strides in NLP, making possible tasks such as text classification, sentiment analysis, machine translation, and language generation.

21
Q

Amazon Bedrock provides access to…

A

Amazon Bedrock provides access to a choice of high-performing FMs from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon.

22
Q

FM lifecycle

A

The foundation model lifecycle is a comprehensive process that involves several stages, each playing a crucial role in developing and deploying effective and reliable foundation models.

It’s important to note that the FM lifecycle is an iterative process, where lessons learned and insights gained from each stage can inform and improve subsequent iterations.

23
Q

Data selection

A

Step 1 of the Foundation Model Lifecycle:
Data selection

Unlabeled data can be used at scale for pre-training because it is much easier to obtain compared to labeled data. Unlabeled data includes raw data, such as images, text files, or videos, with no meaningful informative labels to provide context. FMs require training on massive datasets from diverse sources.

24
Q

Pre-training

A

Step 2 of the Foundation Model Lifecycle:
Pre-training

Although traditional ML models rely on supervised, unsupervised, or reinforcement learning patterns, FMs are typically pre-trained through self-supervised learning. With self-supervised learning, labeled examples are not required. Self-supervised learning makes use of the structure within the data to autogenerate labels.

During the initial pre-training stage, the FM’s algorithm can learn the meaning, context, and relationship of the words in the datasets. For example, the model might learn whether drink means beverage, the noun, or swallowing the liquid, the verb.

After the initial pre-training, the model can be further pre-trained on additional data. This is known as continuous pre-training. The goal is to expand the model’s knowledge base and improve its ability to understand and generalize across different domains or tasks.

25
Q

Optimization

A

Step 3 of the Foundation Model Lifecycle:
Optimization

Pre-trained language models can be optimized through techniques like prompt engineering, retrieval-augmented generation (RAG), and fine-tuning on task-specific data. These methods will vary in complexity and cost and will be discussed later in this lesson.

26
Q

Evaluation

A

Step 4 of the Foundation Model Lifecycle:
Evaluation

Whether you fine-tune a model or use a pre-trained model off the shelf, the next logical step is to evaluate it. An FM’s performance can be measured using appropriate metrics and benchmarks. Evaluating the model’s performance and its ability to meet business needs is important.

27
Q

Deployment

A

Step 5 of the Foundation Model Lifecycle:
Deployment

When the FM meets the desired performance criteria, it can be deployed in the target production environment. Deployment can involve integrating the model into applications, APIs, or other software systems.

28
Q

Feedback and continuous improvement

A

Step 6 of the Foundation Model Lifecycle:
Feedback and continuous improvement

After deployment, the model’s performance is continuously monitored, and feedback is collected from users, domain experts, or other stakeholders. This feedback, along with model monitoring data, is used to identify areas for improvement, detect potential biases or drift, and inform future iterations of the model. The feedback loop permits continuous enhancement of the foundation model through fine-tuning, continuous pre-training, or re-training, as needed.

29
Q

Large Language Model (LLM)

A

A type of foundation model (FM).

Large language models (LLMs) can be based on a variety of architectures, but the most common architecture in today’s state-of-the-art models is the transformer architecture. Transformer-based LLMs are powerful models that can understand and generate human-like text. They are trained on vast amounts of text data from the internet, books, and other sources, and learn patterns and relationships between words and phrases.

LLMs use tokens, embeddings, and vectors to understand and generate text. The models can capture complex relationships in language, so they can generate coherent and contextually appropriate text, answer questions, summarize information, and even engage in creative writing.

30
Q

Tokens

A

Tokens are the basic units of text that the model processes. Tokens can be words, phrases, or individual characters like a period. Tokens also provide standardization of input data, which makes it easier for the model to process.

As an example, the sentence “A puppy is to dog as a kitten is to cat.” might be broken up into the following tokens: “A” “puppy” “is” “to” “dog” “as” “a” “kitten” “is” “to” “cat.”
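
A deliberately simple whitespace tokenizer that reproduces this example; real LLM tokenizers typically split text into subword units (for example, byte-pair encoding) instead.

```python
# Simplistic whitespace tokenization, for illustration only.
sentence = "A puppy is to dog as a kitten is to cat."
tokens = sentence.split()
print(tokens)
# ['A', 'puppy', 'is', 'to', 'dog', 'as', 'a', 'kitten', 'is', 'to', 'cat.']
```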

31
Q

Embeddings and Vectors

A

Embeddings are numerical representations of tokens, where each token is assigned a vector (a list of numbers) that captures its meaning and relationships with other tokens. These vectors are learned during the training process and allow the model to understand the context and nuances of language.

For example, the embedding vector for the token “cat” might be close to the vectors for “feline” and “kitten” in the embedding space, indicating that they are semantically related. This way, the model can understand that “cat” is similar to “feline” and “kitten” without being explicitly programmed with those relationships.
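
An illustrative sketch with made-up three-dimensional embeddings (real models use hundreds or thousands of dimensions), using cosine similarity to compare tokens; NumPy is an assumed library choice.

```python
# Made-up embeddings; cosine similarity gauges how related two tokens are.
import numpy as np

embeddings = {
    "cat":    np.array([0.90, 0.10, 0.30]),
    "kitten": np.array([0.85, 0.15, 0.35]),
    "car":    np.array([0.10, 0.90, 0.20]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(embeddings["cat"], embeddings["kitten"]))  # high: semantically related
print(cosine(embeddings["cat"], embeddings["car"]))     # lower: less related
```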

32
Q

Diffusion models

A

Diffusion is a deep learning architecture that starts with pure noise or random data. The models gradually add more and more meaningful information to this noise until they end up with a clear and coherent output, like an image or a piece of text. Diffusion models learn through a two-step process of forward diffusion and reverse diffusion.

33
Q

Forward diffusion

A

Using forward diffusion, the system gradually introduces a small amount of noise to an input image until only the noise is left over.
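
A toy sketch of the idea, assuming NumPy; the noise schedule and scaling are simplified stand-ins for what real diffusion models use.

```python
# Toy forward diffusion: blend a small amount of Gaussian noise into an image,
# step after step, until almost nothing but noise remains.
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((8, 8))                                  # stand-in for an input image
for t in range(300):
    noise = rng.normal(size=x.shape)
    x = np.sqrt(0.99) * x + np.sqrt(0.01) * noise       # keep variance roughly stable
# x is now essentially pure noise; reverse diffusion trains a model to undo these steps.
```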

34
Q

Reverse diffusion

A

In the subsequent reverse diffusion step, the noise is gradually removed from the noisy image until a new image is generated.

35
Q

Multimodal models

A

Instead of just relying on a single type of input or output, like text or images, multimodal models can process and generate multiple modes of data simultaneously. For example, a multimodal model could take in an image and some text as input, and then generate a new image and a caption describing it as output.

These kinds of models learn how different modalities like images and text are connected and can influence each other. Multimodal models can be used for automating video captioning, creating graphics from text instructions, answering questions more intelligently by combining text and visual info, and even translating content while keeping relevant visuals.

36
Q

Generative adversarial networks (GANs)

A

GANs are a type of generative model that involves two neural networks competing against each other in a zero-sum game framework. The two networks are the generator and the discriminator.

Generator: This network generates new synthetic data (for example, images, text, or audio) by taking random noise as input and transforming it into data that resembles the training data distribution.

Discriminator: This network takes real data from the training set and synthetic data generated by the generator as input. Its goal is to distinguish between the real and generated data.

During training, the generator tries to generate data that can fool the discriminator into thinking it’s real, while the discriminator tries to correctly classify the real and generated data. This adversarial process continues until the generator produces data that is indistinguishable from the real data.
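
A minimal sketch of that adversarial loop, assuming PyTorch; the toy 2-D "real" data distribution and network sizes are illustrative only.

```python
# Minimal GAN training loop sketch with PyTorch (an assumed framework).
import torch
from torch import nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
discriminator = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(1000):
    real = torch.randn(64, 2) * 0.5 + 2.0              # samples from the "real" distribution
    fake = generator(torch.randn(64, 8))                # synthetic data from random noise

    # Discriminator: label real data 1 and generated data 0, and learn to tell them apart.
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: try to fool the discriminator into predicting "real" for generated data.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```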

37
Q

Variational autoencoders (VAEs)

A

VAEs are a type of generative model that combines ideas from autoencoders (a type of neural network) and variational inference (a technique from Bayesian statistics). In a VAE, the model consists of two parts:

Encoder: This neural network takes the input data (for example, an image) and maps it to a lower-dimensional latent space, which captures the essential features of the data.

Decoder: This neural network takes the latent representation from the encoder and generates a reconstruction of the original input data.

The key aspect of VAEs is that the latent space is encouraged to follow a specific probability distribution (usually a Gaussian distribution), which allows for generating new data by sampling from this latent space and passing the samples through the decoder.
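
A conceptual sketch of the encoder/decoder structure, assuming PyTorch; the architecture is heavily simplified and the training loss is omitted.

```python
# Conceptual VAE sketch with PyTorch (an assumed framework).
import torch
from torch import nn

class TinyVAE(nn.Module):
    def __init__(self, data_dim=784, latent_dim=8):
        super().__init__()
        self.encoder = nn.Linear(data_dim, latent_dim * 2)   # outputs mean and log-variance
        self.decoder = nn.Linear(latent_dim, data_dim)       # reconstructs the input

    def forward(self, x):
        mean, log_var = self.encoder(x).chunk(2, dim=-1)
        z = mean + torch.exp(0.5 * log_var) * torch.randn_like(mean)  # sample the latent vector
        return self.decoder(z), mean, log_var

vae = TinyVAE()
x = torch.rand(4, 784)                                   # e.g. four flattened 28x28 images
reconstruction, mean, log_var = vae(x)
# New data can be generated by sampling from the latent space and decoding it:
new_sample = vae.decoder(torch.randn(1, 8))
```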

38
Q

Prompt engineering

A

Part of Step 3 of the FM Lifecycle (Optimization)

Typically the fastest and lowest cost option.

Prompts act as instructions for foundation models. Prompt engineering focuses on developing, designing, and optimizing prompts to enhance the output of FMs for your needs. It gives you a way to guide the model’s behavior to the outcomes that you want to achieve.

A prompt’s form depends on the task that you are giving to a model. As you explore prompt engineering examples, you will review prompts containing some or all of the following elements:

(1) Instructions: This is a task for the FM to do. It provides a task description or instruction for how the model should perform.

(2) Context: This is external information to guide the model.

(3) Input data: This is the input for which you want a response.

(4) Output indicator: This is the output type or format.
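
An illustrative prompt assembled from the four elements; the task and wording are hypothetical.

```python
# Hypothetical prompt showing the four elements combined into one string.
instructions = "Summarize the customer review below in one sentence."      # (1) Instructions
context = "The review was posted on a consumer electronics retail site."   # (2) Context
input_data = ("Review: The headphones arrived quickly and sound great, "
              "but the ear pads feel cheap.")                              # (3) Input data
output_indicator = "Respond with a single plain-text sentence."            # (4) Output indicator

prompt = "\n".join([instructions, context, input_data, output_indicator])
print(prompt)
```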

39
Q

Fine-tuning

A

Although FMs are pre-trained through self-supervised learning and have an inherent ability to understand information, fine-tuning the FM base model can improve performance. Fine-tuning is a supervised learning process that involves taking a pre-trained model and training it further on specific, smaller datasets. Adding these narrower datasets modifies the weights of the model to better align with the task.

There are two ways to fine-tune a model:

(1) Instruction fine-tuning uses examples of how the model should respond to a specific instruction. Prompt tuning is a type of instruction fine-tuning.

(2) Reinforcement learning from human feedback (RLHF) provides human feedback data, resulting in a model that is better aligned with human preferences.

Consider this use case for fine-tuning. If you are working on a task that requires industry knowledge, you can take a pre-trained model and fine-tune the model with industry data. If the task involves medical research, for example, the pre-trained model can be fine-tuned with articles from medical journals to achieve more contextualized results.
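
A conceptual sketch of fine-tuning, assuming PyTorch; the stand-in "pre-trained" model and task data are made up, and only the unfrozen weights are updated.

```python
# Conceptual fine-tuning sketch with PyTorch (an assumed framework).
import torch
from torch import nn

pretrained = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))  # stand-in base model
for p in pretrained[0].parameters():
    p.requires_grad = False                      # optionally freeze early layers

optimizer = torch.optim.Adam(
    (p for p in pretrained.parameters() if p.requires_grad), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

task_inputs = torch.randn(32, 16)                # small labeled dataset for the new task
task_labels = torch.randint(0, 4, (32,))

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(pretrained(task_inputs), task_labels)
    loss.backward()                              # gradients only for the unfrozen weights
    optimizer.step()
```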

40
Q

Retrieval-augmented generation (RAG)

A

Retrieval-augmented generation (RAG) is a technique that supplies domain-relevant data as context to produce responses based on that data. This technique is similar to fine-tuning. However, rather than having to fine-tune an FM with a small set of labeled examples, RAG retrieves a small set of relevant documents and uses them to provide context to answer the user prompt. RAG does not change the weights of the foundation model, whereas fine-tuning does change the model weights.
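
A minimal RAG sketch with a made-up in-memory document store and a toy relevance score; production systems use an embedding model and a vector database for retrieval.

```python
# Minimal RAG sketch; the documents, question, and scoring are illustrative.
documents = [
    "Our return policy allows returns within 30 days of purchase.",
    "Shipping to Europe typically takes 5 to 7 business days.",
    "Premium support is available 24/7 for enterprise customers.",
]

def retrieve(query, docs, top_k=1):
    # Toy relevance score: count shared words (a stand-in for vector similarity).
    query_words = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(query_words & set(d.lower().split())), reverse=True)
    return ranked[:top_k]

question = "How long do I have to return a product?"
context = "\n".join(retrieve(question, documents))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
# The prompt (with the retrieved context) is sent to the FM; the FM's weights are unchanged.
print(prompt)
```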

41
Q

A company wants to develop a system that can accurately recognize and classify handwritten digits from images.

Which of the following options best describes the use of neural networks for this task?

(a) Neural networks are a type of decision tree algorithm that can be trained on image data to create a set of rules for classifying handwritten digits.

(b) Neural networks are a form of linear regression that can be used to map pixel values from images to corresponding digit labels.

(c) Neural networks are a type of deep learning model inspired by the structure and function of the human brain. They consist of interconnected nodes that can learn to recognize patterns in data, such as images of handwritten digits.

(d) Neural networks are a type of database system that can store and retrieve images of handwritten digits based on their pixel values and associated labels.

A

(c) Neural networks are a type of deep learning model inspired by the structure and function of the human brain. They consist of interconnected nodes that can learn to recognize patterns in data, such as images of handwritten digits.

That’s correct! Neural networks are a powerful deep learning technique that is particularly well suited for tasks involving pattern recognition and classification, such as recognizing and classifying handwritten digits from images. They are inspired by the biological neural networks in the human brain.

42
Q

A company is developing an artificial intelligence (AI) system to control a self-driving car. The system learns through trial-and-error interactions with the driving environment, receiving rewards for safe and efficient actions.

Which machine learning (ML) approach is being used in this scenario?

(a) Supervised learning

(b) Reinforcement learning

(c) Unsupervised learning

(d) Self-supervised learning

A

(b) Reinforcement learning

That’s correct! In this scenario, the AI system interacts with a dynamic environment and must learn the optimal actions to take based on reinforcement learning. For example, the system could receive positive rewards for safe and efficient driving, and negative penalties for collisions or traffic violations.

43
Q

A company is developing a large language model (LLM) for natural language processing tasks, such as text generation, summarization, and question answering.

Which of the following best describes the role of embeddings, in the context of LLMs?

(a) Embeddings are numerical representations of words or tokens, where semantically similar words have similar vector representations.

(b) Embeddings are the preprocessing techniques used to clean and tokenize the text data before feeding it into the LLM for training or inference.

(c) Embeddings are the ensemble methods used to combine multiple LLMs to improve the overall performance and robustness of the system.

(d) Embeddings are the linguistic rules and grammar patterns extracted from the text data to aid the LLM in understanding and generating language.

A

(a) Embeddings are numerical representations of words or tokens, where semantically similar words have similar vector representations.

That’s correct! Embeddings play a crucial role in representing and understanding the meaning of words and language. LLMs are typically trained on vast amounts of text data, and embeddings are used to represent the words or tokens in this data as numerical vectors.

44
Q

A company has pre-trained a large language model on a vast corpus of text data. They want to adapt this pre-trained model to perform specific tasks such as sentiment analysis and document summarization.

Which of the following best describes the process of fine-tuning?

(a) Fine-tuning involves training the pre-trained language model from scratch.

(b) Fine-tuning refers to the process of further training the pre-trained language model on labeled data for the specific tasks.

(c) Fine-tuning is a technique used to preprocess and clean the task-specific data before feeding it into the pre-trained language model.

(d) Fine-tuning is an ensemble method that combines the pre-trained language model with task-specific models to improve the overall performance.

A

(b) Fine-tuning refers to the process of further training the pre-trained language model on labeled data for the specific tasks.

That’s correct! Fine-tuning is the process of adapting a pre-trained language model to perform specific tasks by further training it on labeled data for those tasks.

45
Q

A team is tasked with choosing a generative artificial intelligence (AI) model that can recognize and interpret different forms of input data, such as text, images, and audio.

Which of the following model architectures is best suited for this task?

(a) Large language model

(b) Diffusion model

(c) Multimodal model

(d) Foundation model

A

(c) Multimodal model

That’s correct! Multimodal models are specifically designed to handle inputs from multiple modalities, such as text, images, audio, and video. These models can fuse and process information from different input sources.

46
Q

Amazon SageMaker

A

Build Machine Learning Models

The AWS AI/ML services stack starts at the ML frameworks layer. At the core of this layer is Amazon SageMaker.

SageMaker is a fully managed machine learning service that you can use to build, train, and deploy your own custom models. SageMaker provides tools and infrastructure to accelerate your ML development and deployment lifecycle.

With SageMaker, you can build, train, and deploy ML models for any use case with fully managed infrastructure, tools, and workflows. SageMaker removes the heavy lifting from each step of the ML process to make it easier to develop high-quality models. SageMaker provides all the components used for ML in a single toolset, so models get to production faster with much less effort and at lower cost.
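
A hedged sketch of the build-train-deploy flow using the SageMaker Python SDK (an assumption); the training script, IAM role ARN, S3 path, and instance types are placeholders, and the exact estimator arguments should be verified against the SageMaker documentation.

```python
# Illustrative SageMaker Python SDK sketch; all identifiers below are placeholders.
from sagemaker.sklearn.estimator import SKLearn

estimator = SKLearn(
    entry_point="train.py",                    # your training script (placeholder)
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    instance_type="ml.m5.large",
    instance_count=1,
    framework_version="1.2-1",
)
estimator.fit({"train": "s3://my-example-bucket/training-data/"})   # train the model
predictor = estimator.deploy(initial_instance_count=1,              # deploy an endpoint
                             instance_type="ml.m5.large")
```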

47
Q

Amazon Comprehend for natural language processing

A

Understand Unstructured Data

Amazon Comprehend uses ML and natural language processing (NLP) to help you uncover the insights and relationships in your unstructured data. This service performs the following functions:

(1) Identifies the language of the text

(2) Extracts key phrases, places, people, brands, or events

(3) Understands how positive or negative the text is

(4) Analyzes text using tokenization and parts of speech

(5) Automatically organizes a collection of text files by topic
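
A hedged boto3 sketch of sentiment detection; the Region and sample text are placeholders, and the response keys should be verified against the Amazon Comprehend documentation.

```python
# Illustrative Amazon Comprehend call via boto3.
import boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")
result = comprehend.detect_sentiment(
    Text="The support team resolved my issue quickly. Great experience!",
    LanguageCode="en",
)
print(result["Sentiment"])        # e.g. POSITIVE
print(result["SentimentScore"])   # confidence scores for each sentiment class
```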

48
Q

Amazon Translate for language translation

A

Language Translation

Amazon Translate is a neural machine translation service that delivers fast, high-quality, and affordable language translation. Neural machine translation is a form of language translation automation that uses deep learning models to deliver more accurate and more natural-sounding translation than traditional statistical and rule-based translation algorithms. With Amazon Translate, you can localize content such as websites and applications for your diverse users, translate large volumes of text for analysis, and efficiently implement cross-lingual communication between users.
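
A hedged boto3 sketch; the Region and language codes are illustrative choices.

```python
# Illustrative Amazon Translate call via boto3.
import boto3

translate = boto3.client("translate", region_name="us-east-1")
result = translate.translate_text(
    Text="Hello, how can I help you today?",
    SourceLanguageCode="en",
    TargetLanguageCode="es",
)
print(result["TranslatedText"])
```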

49
Q

Amazon Textract for extracting data from scanned documents.

A

Document Extraction

Amazon Textract is a service that automatically extracts text and data from scanned documents. Amazon Textract goes beyond optical character recognition (OCR) to also identify the contents of fields in forms and information stored in tables.
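
A hedged boto3 sketch; the S3 bucket and object key are placeholders.

```python
# Illustrative Amazon Textract call via boto3.
import boto3

textract = boto3.client("textract", region_name="us-east-1")
result = textract.detect_document_text(
    Document={"S3Object": {"Bucket": "my-example-bucket", "Name": "scans/invoice-001.png"}}
)
for block in result["Blocks"]:
    if block["BlockType"] == "LINE":
        print(block["Text"])      # each detected line of text
```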

50
Q

Amazon Lex, which you can use to build conversational interfaces powered by the same deep learning technologies that drive Amazon Alexa.

A

Chatbots

Amazon Lex is a fully managed AI service to design, build, test, and deploy conversational interfaces into any application using voice and text. Amazon Lex provides the advanced deep learning functionalities of automatic speech recognition (ASR) for converting speech to text, and natural language understanding (NLU) to recognize the intent of the text. This permits you to build applications with highly engaging user experiences and lifelike conversational interactions, and create new categories of products. With Amazon Lex, the same deep learning technologies that power Amazon Alexa are now available to any developer. You can efficiently build sophisticated, natural-language conversational bots and voice-enabled interactive voice response (IVR) systems.

51
Q

Amazon Polly for text-to-speech

A

Text-to-Speech

Amazon Polly is a service that turns text into lifelike speech. Amazon Polly lets you create applications that talk, so you can build entirely new categories of speech-enabled products. Amazon Polly is an AI service that uses advanced deep learning technologies to synthesize speech that sounds like a human voice. Amazon Polly includes a wide selection of lifelike voices spread across dozens of languages, so you can select the ideal voice and build speech-enabled applications that work in many different countries.
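
A hedged boto3 sketch; the voice ID and output format are illustrative choices.

```python
# Illustrative Amazon Polly call via boto3.
import boto3

polly = boto3.client("polly", region_name="us-east-1")
response = polly.synthesize_speech(
    Text="Welcome! Your order has shipped.",
    OutputFormat="mp3",
    VoiceId="Joanna",
)
with open("welcome.mp3", "wb") as f:
    f.write(response["AudioStream"].read())   # write the synthesized audio to a file
```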

52
Q

Amazon Transcribe for speech-to-text

A

Speech-to-Text

Amazon Transcribe is an automatic speech recognition (ASR) service for automatically converting speech to text. The service can transcribe audio files stored in common formats, like WAV and MP3, with time stamps for every word so that you can quickly locate the audio in the original source by searching for the text. You can also send a live audio stream to Amazon Transcribe and receive a stream of transcripts in real time. Amazon Transcribe is designed to handle a wide range of speech and acoustic characteristics, including variations in volume, pitch, and speaking rate. Customers can use Amazon Transcribe for a variety of business applications, including the following:

(1) Transcription of voice-based customer service calls

(2) Generation of subtitles on audio and video content

(3) Conducting text-based content analysis on audio and video content
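
A hedged boto3 sketch of starting an asynchronous transcription job; the job name and S3 URI are placeholders.

```python
# Illustrative Amazon Transcribe job via boto3.
import boto3

transcribe = boto3.client("transcribe", region_name="us-east-1")
transcribe.start_transcription_job(
    TranscriptionJobName="support-call-0001",
    Media={"MediaFileUri": "s3://my-example-bucket/calls/support-call-0001.mp3"},
    MediaFormat="mp3",
    LanguageCode="en-US",
)
# Poll get_transcription_job() until the job completes, then download the transcript.
```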

53
Q

Amazon Rekognition, a deep learning-based computer vision service that can analyze images and videos for a wide range of applications.

A

Analyze images and videos

Amazon Rekognition facilitates adding image and video analysis to your applications. It uses proven, highly scalable, deep learning technology that requires no ML expertise to use. With Amazon Rekognition, you can identify objects, people, text, scenes, and activities in images and videos, and even detect inappropriate content. Amazon Rekognition also provides highly accurate facial analysis and facial search capabilities. You can use it to detect, analyze, and compare faces for a wide variety of user verification, people counting, and public safety use cases.
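
A hedged boto3 sketch of label detection; the bucket and key are placeholders.

```python
# Illustrative Amazon Rekognition call via boto3.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")
result = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "my-example-bucket", "Name": "photos/street.jpg"}},
    MaxLabels=5,
)
for label in result["Labels"]:
    print(label["Name"], round(label["Confidence"], 1))   # detected object and confidence
```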

54
Q

Amazon Kendra reimagines enterprise search for websites and applications so that individuals can readily find the content they are looking for.

A

Search

Amazon Kendra is an intelligent search service powered by ML. Amazon Kendra reimagines enterprise search for your websites and applications. Your employees and customers can conveniently find the content that they are looking for, even when it’s scattered across multiple locations and content repositories within your organization.
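
A hedged boto3 sketch of querying a Kendra index; the index ID is a placeholder, and the result fields should be checked against the service documentation.

```python
# Illustrative Amazon Kendra query via boto3.
import boto3

kendra = boto3.client("kendra", region_name="us-east-1")
response = kendra.query(
    IndexId="11111111-2222-3333-4444-555555555555",   # placeholder index ID
    QueryText="What is our parental leave policy?",
)
for item in response["ResultItems"]:
    print(item.get("DocumentTitle", {}).get("Text"))   # title of each matching document
```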

55
Q

Amazon Personalize for real-time personalization and recommendations.

A

Recommendations

Amazon Personalize is an ML service that developers can use to create individualized recommendations for customers who use their applications.

With Amazon Personalize, you provide an activity stream from your application (page views, signups, purchases, and so forth). You also provide an inventory of the items that you want to recommend, such as articles, products, videos, or music. You can choose to provide Amazon Personalize with additional demographic information from your users, such as age or geographic location. Amazon Personalize processes and examines the data, identifies what is meaningful, selects the right algorithms, and trains and optimizes a personalization model that is customized for your data.

56
Q

AWS DeepRacer, a fully autonomous 1/18th scale race car that lets you get hands-on experience with reinforcement learning.

A

Reinforcement learning.

AWS DeepRacer is a fully autonomous 1/18th scale race car that gives you an interesting and fun way to get started with reinforcement learning (RL). RL is an advanced ML technique that takes a very different approach to training models than other ML methods. Its superpower is that it learns very complex behaviors without requiring any labeled training data, and it can make short-term decisions while optimizing for a longer-term goal.

57
Q

Amazon SageMaker JumpStart, which provides a set of solutions for the most common use cases.

A

Amazon SageMaker JumpStart, which provides a set of solutions for the most common use cases.

SageMaker JumpStart helps you quickly get started with ML. To facilitate getting started, SageMaker JumpStart provides a set of solutions for the most common use cases, which can be readily deployed. The solutions are fully customizable and showcase the use of AWS CloudFormation templates and reference architectures so that you can accelerate your ML journey. SageMaker JumpStart also supports one-click deployment and fine-tuning of more than 150 popular open-source models such as natural language processing, object detection, and image classification models.

58
Q

Amazon Bedrock is a fully managed service that makes FMs from Amazon and leading AI startups available through an API. With Amazon Bedrock, you can quickly get started, experiment with FMs, privately customize them with your own data, and seamlessly integrate and deploy FMs into AWS applications. If you’d prefer to experiment with building AI applications, you can get hands-on experience by using PartyRock, an Amazon Bedrock Playground.

A

Amazon Bedrock is a fully managed service that makes FMs from Amazon and leading AI startups available through an API. With Amazon Bedrock, you can quickly get started, experiment with FMs, privately customize them with your own data, and seamlessly integrate and deploy FMs into AWS applications. If you’d prefer to experiment with building AI applications, you can get hands-on experience by using PartyRock, an Amazon Bedrock Playground.
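
A hedged boto3 sketch using the Bedrock runtime Converse API; the model ID, Region, and response parsing are illustrative and should be verified against the Bedrock documentation for your chosen model.

```python
# Illustrative Amazon Bedrock invocation via boto3.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",   # example model identifier
    messages=[{"role": "user",
               "content": [{"text": "In one sentence, what is a foundation model?"}]}],
)
print(response["output"]["message"]["content"][0]["text"])
```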

59
Q

Amazon Q, a generative AI–powered assistant designed for work that can be tailored for a business’s data.

A

Amazon Q, a generative AI–powered assistant designed for work that can be tailored for a business’s data.

Amazon Q can help you get fast, relevant answers to pressing questions, solve problems, generate content, and take actions using the data and expertise found in your company’s information repositories, code, and enterprise systems. When you chat with Amazon Q, it provides immediate, relevant information and advice to help streamline tasks, speed decision-making, and help spark creativity and innovation.

60
Q

Amazon Q Developer, providing ML–powered code recommendations to accelerate development in a variety of programming languages and applications.

A

Amazon Q Developer, providing ML–powered code recommendations to accelerate development in a variety of programming languages and applications.

Designed to improve developer productivity, Amazon Q Developer provides ML–powered code recommendations to accelerate development of C#, Java, JavaScript, Python, and TypeScript applications. The service integrates with multiple integrated development environments (IDEs) and helps developers write code faster by generating entire functions and logical blocks of code—often consisting of more than 10–15 lines of code.

61
Q

Advantages of using AWS services #1:
Accelerated development and deployment

A

Accelerated development and deployment

Amazon Q Developer (previously Amazon CodeWhisperer) can generate code in real time. Amazon ran a productivity challenge during the preview of CodeWhisperer. Participants who used the service were 27 percent more likely to complete tasks successfully and did so an average of 57 percent faster than those who did not use CodeWhisperer.

SageMaker handles tasks such as data preprocessing, model training, and deployment. So developers can focus on the application logic and user experience.

Amazon Bedrock provides access to pre-trained models and APIs. So developers can quickly integrate AI capabilities into their applications without the need for extensive training or specialized hardware. This accelerates the development process and permits faster iteration cycles, reducing the time to market for AI-powered applications.

62
Q

Advantages of using AWS services #2:
Scalability and cost optimization

A

With pay-as-you-go pricing models, businesses only pay for the resources that they consume. This reduces upfront costs and facilitates efficient resource utilization.

AWS global infrastructure and distributed computing capabilities permit applications to scale seamlessly across regions and handle large datasets or high-volume traffic.

63
Q

Advantages of using AWS services #3:
Flexibility and access to models

A

AWS continuously updates and expands its AI services, providing access to the latest advancements in machine learning models, techniques, and algorithms.

Amazon Bedrock offers a choice of high-performing FMs from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and AWS, through a single API.

64
Q

Advantages of using AWS services #4:
Integration with AWS tools and services

A

Services like Amazon Comprehend and Amazon Rekognition offer ready-to-use AI capabilities that can be readily incorporated into applications.

AWS AI services seamlessly integrate with other AWS services, so developers can build end-to-end solutions that use multiple cloud services.

The AWS ecosystem provides a wide range of tools, SDKs, and APIs, so developers can incorporate AI capabilities into their existing applications or build entirely new AI-driven applications.

65
Q

Cost Considerations #1:
Responsiveness and availability

A

AWS generative AI services are designed to be highly responsive and available. However, higher levels of responsiveness and availability often come at an increased cost. For example, services with lower latency and higher availability (for example, multi-Region deployment) will typically have higher pricing compared to alternatives with lower performance and availability guarantees.

66
Q

Cost Considerations #2:
Redundancy and Regional coverage

A

To ensure redundancy and high availability, AWS generative AI services can be deployed across multiple Availability Zones or even across multiple AWS Regions. This redundancy comes with an additional cost, because resources have to be provisioned and data replicated across multiple locations.

67
Q

Cost Considerations #3:
Performance

A

AWS offers different compute options (for example, CPU, GPU, and custom hardware accelerators) for generative AI services. Higher-performance options, such as GPU instances, generally come at a higher cost but can provide significant performance improvements for certain workloads.

68
Q

Cost Considerations #4:
Token-based pricing

A

Many AWS generative AI services, such as Amazon Q Developer and Amazon Bedrock, use a token-based pricing model. This means that you pay for the number of tokens (a unit of text or code) generated or processed by the service. The more tokens you generate or process, the higher the cost.
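
A back-of-the-envelope estimate with made-up prices (actual per-token pricing varies by service, model, and Region):

```python
# Hypothetical token-based cost estimate; the prices below are assumptions.
input_tokens = 2_000
output_tokens = 500
price_per_1k_input = 0.003     # assumed USD per 1,000 input tokens
price_per_1k_output = 0.015    # assumed USD per 1,000 output tokens

cost = (input_tokens / 1_000) * price_per_1k_input + (output_tokens / 1_000) * price_per_1k_output
print(f"Estimated request cost: ${cost:.4f}")   # -> $0.0135
```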

69
Q

Cost Considerations #5:
Provisioned throughput

A

Some AWS generative AI services, like Amazon Polly and Amazon Transcribe, let you provision a specific amount of throughput (for example, audio or text processing capacity) in advance. Higher provisioned throughput levels typically come at a higher cost but can ensure predictable performance for time-sensitive workloads.

70
Q

Cost Considerations #6:
Custom models

A

AWS provides pre-trained models for various generative AI tasks, but you can also bring your own custom models or fine-tune existing models. Training and deploying custom models can incur additional costs, depending on the complexity of the model, the training data, and the compute resources required.

71
Q

A company has a large collection of customer support emails and chat transcripts. They want to analyze the sentiment expressed in these messages and identify common issues or topics discussed by their customers.

Which AWS service would be most appropriate for this task?

(a) Amazon Transcribe

(b) Amazon Kendra

(c) Amazon Polly

(d) Amazon Comprehend

A

(d) Amazon Comprehend

That’s correct! Amazon Comprehend is a natural language processing service that can analyze text and extract insights such as sentiment, entities, key phrases, and topics.

72
Q

A retail company has accumulated a large volume of customer transaction data, including purchase history, product preferences, and demographic information. The company wants to use this data to build machine learning models that can provide personalized product recommendations to customers and improve their overall shopping experience.

Which AWS service would be most suitable for the retail company to build, train, and deploy machine learning models for personalized product recommendations?

(a) Amazon SageMaker

(b) Amazon Bedrock

(c) Amazon Lex

(d) Amazon Q Developer

A

(a) Amazon SageMaker

That’s correct! Amazon SageMaker is a fully managed service that provides a complete machine learning lifecycle, including data preparation, model building, training, tuning, and deployment.