Fundamentals of ML and AI Flashcards

1
Q

What are neural networks?

A

Neural networks are made up of many small units called nodes that are connected together. These nodes are organized into layers: an input layer, one or more hidden layers, and an output layer.
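That layered structure can be sketched in a few lines of Python; the layer sizes, weights, and biases below are arbitrary illustrative values, not a trained model.

```python
import math

def dense(inputs, weights, biases):
    """One layer: each node takes a weighted sum of its inputs, then a sigmoid."""
    return [
        1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(ws, inputs)) + b)))
        for ws, b in zip(weights, biases)
    ]

# Input layer (2 values) -> hidden layer (3 nodes) -> output layer (1 node)
x = [0.5, -1.0]
hidden = dense(x, [[0.1, 0.4], [-0.3, 0.2], [0.6, -0.1]], [0.0, 0.1, -0.2])
output = dense(hidden, [[0.3, -0.5, 0.8]], [0.05])
```

Each node connects to every node in the previous layer, which is why the weights form one row per node.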

2
Q

Computer vision

A

Computer vision is a field of artificial intelligence that makes it possible for computers to interpret and understand digital images and videos.

3
Q

Natural Language Processing

A

Natural language processing (NLP) is a branch of artificial intelligence that deals with the interaction between computers and human languages. NLP includes tasks such as text classification, sentiment analysis, machine translation, and language generation.

4
Q

Diffusion models

A

- Make new data by applying controlled changes to an initial sample.
- Diffusion is a deep learning architecture that starts with pure noise or random data.
- The model gradually adds meaningful information to this noise until it ends up with a clear and coherent output, like an image or a piece of text.
- Diffusion models learn through a two-step process of forward diffusion and reverse diffusion.
- Not a type of transformer model.

5
Q

Forward diffusion

A

Using forward diffusion, the system gradually introduces a small amount of noise to an input image until only the noise is left over.
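A minimal numpy sketch of the forward process, assuming the common closed-form noising step x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * noise; the noise-schedule values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
x0 = rng.normal(size=(8, 8))           # stand-in for an input image
betas = np.linspace(1e-4, 0.2, 100)    # per-step noise amounts (illustrative)
alpha_bar = np.cumprod(1.0 - betas)    # cumulative fraction of signal kept

def forward_diffuse(x0, t):
    """Noise the input up to step t: mostly signal early, pure noise at the end."""
    noise = rng.normal(size=x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * noise

slightly_noisy = forward_diffuse(x0, 5)
pure_noise = forward_diffuse(x0, 99)   # almost no signal remains at the last step
```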

6
Q

Reverse diffusion

A

In the subsequent reverse diffusion step, the noisy image is gradually denoised until a new image is generated.

7
Q

Multimodal models

A

Multimodal models can process and generate multiple modes of data simultaneously. For example, a multimodal model could take in an image and some text as input, and then generate a new image and a caption describing it as output. Use cases include automating video captioning, creating graphics from text instructions, answering questions more intelligently by combining text and visual information, and translating content while keeping relevant visuals.

8
Q

Generative adversarial networks (GANs)

A

GANs are a type of generative model in which two neural networks compete against each other in a zero-sum game framework.

Generator: Creates new synthetic data (for example, images, text, or audio) by taking random noise as input and transforming it into data that resembles the training data distribution.

Discriminator: Takes real data from the training set and synthetic data generated by the generator as input. Its goal is to distinguish between the real and generated data.
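The zero-sum objective can be shown with the two losses alone; the discriminator scores below are made-up stand-ins for real network outputs.

```python
import math

def bce(prob, target):
    """Binary cross-entropy for one probability: low when prob matches target."""
    return -(target * math.log(prob) + (1 - target) * math.log(1 - prob))

# Hypothetical discriminator outputs: estimated probability a sample is real.
d_real = 0.9   # score on a real training sample
d_fake = 0.2   # score on a generator sample

# Discriminator wants real -> 1 and fake -> 0.
d_loss = bce(d_real, 1) + bce(d_fake, 0)
# Generator wants its fakes mistaken for real (fake -> 1): the zero-sum tension.
g_loss = bce(d_fake, 1)
```

Training alternates between the two updates until the generator's samples are hard to tell apart from real data.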

9
Q

Large language models (LLMs)

A

LLMs are deep learning models pre-trained on vast amounts of text data to understand and generate human language. They represent text as tokens, and each token is mapped to a vector (a numerical embedding) that captures its meaning.
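A toy illustration of tokens and vectors; the vocabulary and the 4-dimensional embedding values are invented for the example (real LLMs use subword tokenizers and learned embeddings with far more dimensions).

```python
# Map text to tokens, tokens to ids, and ids to vectors (embeddings).
vocab = {"the": 0, "cat": 1, "sat": 2, "<unk>": 3}
embeddings = [             # one hypothetical 4-dim vector per token id
    [0.1, 0.0, 0.3, -0.2],
    [0.5, -0.1, 0.2, 0.4],
    [-0.3, 0.2, 0.1, 0.0],
    [0.0, 0.0, 0.0, 0.0],  # unknown-token vector
]

def embed(text):
    tokens = text.lower().split()                        # crude whitespace tokenizer
    ids = [vocab.get(t, vocab["<unk>"]) for t in tokens]
    return [embeddings[i] for i in ids]

vectors = embed("The cat purred")  # "purred" falls back to <unk>
```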

10
Q

Variational Autoencoders (VAEs)

A

VAEs are a type of generative model that combines ideas from autoencoders (a type of neural network) and variational inference (a technique from Bayesian statistics). In a VAE, the model consists of two parts, and the latent space follows a probability distribution.

Encoder: This neural network takes the input data (for example, an image) and maps it to a lower-dimensional latent space, which captures the essential features of the data.

Decoder: This neural network takes the latent representation from the encoder and generates a reconstruction of the original input data.
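The probabilistic part can be sketched with the reparameterization trick: the encoder outputs a mean and log-variance per latent dimension, and the latent code is sampled from that distribution. The numbers below are hypothetical encoder outputs.

```python
import math
import random

def sample_latent(mu, log_var):
    """Draw z ~ N(mu, sigma^2) by scaling and shifting standard Gaussian noise."""
    eps = random.gauss(0.0, 1.0)
    return mu + math.exp(0.5 * log_var) * eps

# Hypothetical encoder output for one input: a 2-dimensional latent distribution.
mu, log_var = [0.5, -1.0], [0.0, -2.0]
z = [sample_latent(m, lv) for m, lv in zip(mu, log_var)]
# z is the latent code the decoder would reconstruct the input from.
```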

11
Q

Reinforcement learning from human feedback (RLHF)

A

RLHF provides human feedback data during training, resulting in a model that is better aligned with human preferences.

12
Q

Instruction fine-tuning

A

Instruction fine-tuning uses examples of how the model should respond to a specific instruction. Prompt tuning is a type of instruction fine-tuning.

13
Q

Retrieval-augmented generation (RAG)

A

RAG is a technique that supplies domain-relevant data as context to produce responses based on that data. Rather than fine-tuning an FM with a small set of labeled examples, RAG retrieves a small set of relevant documents and uses them to provide context for answering the user prompt. RAG does not change the weights of the foundation model, whereas fine-tuning does.
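A toy end-to-end RAG flow; word overlap stands in for the vector search a real system would use, and the documents are invented examples.

```python
documents = [
    "Amazon Polly turns text into lifelike speech.",
    "Amazon Lex builds conversational interfaces using voice and text",
    "Amazon Kendra is an intelligent search service powered by ML.",
]

def retrieve(query, docs, k=1):
    """Rank documents by word overlap with the query (stand-in for vector search)."""
    q = set(query.lower().split())
    return sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)[:k]

def build_prompt(query):
    # The FM's weights stay untouched; the retrieved text rides along in the prompt.
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("Which service turns text into speech?")
```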

14
Q

FM (Foundation Model) Lifecycle

A

The foundation model lifecycle is a comprehensive process that involves:
1. Data selection
2. Pre-training
3. Optimization
4. Evaluation
5. Deployment
6. Feedback and continuous improvement

15
Q

Amazon SageMaker

A

With SageMaker, you can build, train, and deploy ML models for any use case with fully managed infrastructure, tools, and workflows. SageMaker provides all the components used for ML in a single toolset.

16
Q

Amazon Comprehend

A

Amazon Comprehend uses ML and natural language processing (NLP) to help you uncover the insights and relationships in your unstructured data. This service performs the following functions:

Identifies the language of the text
Extracts key phrases, places, people, brands, or events
Understands how positive or negative the text is
Analyzes text using tokenization and parts of speech
Automatically organizes a collection of text files by topic
17
Q

Amazon Textract

A

Amazon Textract is a service that automatically extracts text and data from scanned documents.

18
Q

Amazon Polly

A

Amazon Polly turns text into lifelike speech.

18
Q

Amazon Translate

A

Amazon Translate is a neural machine translation service that delivers fast, high-quality, and affordable language translation. (can localize content)

19
Q

Amazon Lex

A

Amazon Lex is a fully managed AI service to design, build, test, and deploy conversational interfaces into any application using voice and text. (For example, the same kind of conversational technology that powers Alexa.)

20
Q

Amazon Transcribe

A

Amazon Transcribe is an automatic speech recognition (ASR) service for automatically converting speech to text.

21
Q

Amazon Rekognition

A

Amazon Rekognition facilitates adding image and video analysis to your applications.

22
Q

Amazon Kendra

A

Amazon Kendra is an intelligent search service powered by ML. Amazon Kendra reimagines enterprise search for your websites and applications.

23
Q

Amazon Personalize

A

Amazon Personalize is an ML service that developers can use to create individualized recommendations for customers who use their applications.

24
Q

AWS DeepRacer

A

AWS DeepRacer is a 1/18th scale race car that gives you an interesting and fun way to get started with reinforcement learning (RL).

25
Q

Reinforcement Learning

A

RL is an advanced ML technique that takes a very different approach to training models than other ML methods. Its superpower is that it learns very complex behaviors without requiring any labeled training data, and it can make short-term decisions while optimizing for a longer-term goal.

26
Q

Amazon SageMaker Jumpstart

A

SageMaker JumpStart helps you quickly get started with ML. To facilitate getting started, SageMaker JumpStart provides a set of solutions for the most common use cases, which can be readily deployed.

27
Q

Amazon Bedrock

A

Amazon Bedrock is a fully managed service that makes FMs from Amazon and leading AI startups available through an API.

28
Q

Amazon Q

A

Amazon Q is a generative AI–powered assistant that can answer questions, generate content, and complete tasks.

29
Q

Amazon Q Developer

A

Designed to improve developer productivity, Amazon Q Developer provides ML–powered code recommendations to accelerate development of C#, Java, JavaScript, Python, and TypeScript applications.

30
Q

Amazon SageMaker Ground Truth

A

Amazon SageMaker Ground Truth helps you quickly build highly accurate training datasets for machine learning.

31
Q

What is a Transformer based model?

A

The transformer-based generative AI model builds upon the encoder and decoder concepts of VAEs. Transformer-based models add more layers to the encoder to improve performance on text-based tasks like comprehension, translation, and creative writing. Transformer-based models use a self-attention mechanism. They weigh the importance of different parts of an input sequence when processing each element in the sequence.
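Self-attention can be sketched in numpy: every token's query is compared against every token's key, the scores are softmaxed into weights, and those weights mix the value vectors. The sizes and random projection matrices are illustrative.

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Scaled dot-product self-attention over a sequence of token embeddings."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])         # each token scored against every other
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: each row sums to 1
    return weights @ v, weights

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                         # 4 tokens, 8-dimensional embeddings
wq, wk, wv = (rng.normal(size=(8, 8)) for _ in range(3))
out, weights = self_attention(x, wq, wk, wv)
```

Row i of `weights` is how much token i attends to each token in the sequence when building its output representation.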

32
Q

5 types of machine learning

A

1) Supervised
2) Unsupervised
3) Semi-Supervised (small amount of labeled data and large amount of unlabeled data)
4) Reinforcement Learning
5) Deep Learning

33
Q

What is inference?

A

- The stage where a trained machine learning model is deployed to make predictions or generate outputs based on new input data.
- The model uses the patterns and relationships it learned during training to provide accurate and meaningful results.

34
Q

self-supervised learning

A

It works by providing models vast amounts of raw, almost entirely or completely unlabeled data; the models then generate the labels themselves.

35
Q

Convolutional neural network (CNN)

A

A deep learning model primarily designed for processing image data:
- Designed specifically for image recognition and processing tasks
- Highly effective for analyzing visual data

36
Q

pseudo-labeling

A

The step in semi-supervised learning where the model labels the unlabeled data itself
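A minimal sketch of the pseudo-labeling step, with a hypothetical toy classifier standing in for the model trained on the small labeled set.

```python
def pseudo_label(model, unlabeled, threshold=0.9):
    """Keep only confident predictions as new (pseudo-labeled) training pairs."""
    labeled = []
    for x in unlabeled:
        label, confidence = model(x)
        if confidence >= threshold:
            labeled.append((x, label))
    return labeled

# Hypothetical classifier: very confident only about positive inputs.
def toy_model(x):
    return (1, 0.95) if x > 0 else (0, 0.6)

new_data = pseudo_label(toy_model, [3, -2, 7, -1])  # keeps only 3 and 7
```

The confident pairs are then added to the labeled set and the model is retrained.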

37
Q

Image processing vs. computer vision

A

Image processing focuses on enhancing and manipulating images for visual quality, whereas computer vision involves interpreting and understanding the content of images to make decisions.

38
Q

What type of learning do foundation models use?

A

Self-supervised learning (they create their own labels from input data)

39
Q

How does a transformer model work?

A

Transformer models use a self-attention mechanism and implement contextual embeddings.

40
Q

Examples of supervised learning

A

1) Regression (logistic or linear)
2) Neural networks
3) Decision tree

41
Q

Fine-tuning

A

1) Fine-tuning involves further training a pre-trained language model on a specific task or domain-specific dataset
2) Supervised learning
3) Does change the weights of a model