AI Practice Test #1 Nikolai Flashcards
Amazon Q Business
Fully managed infrastructure with no need for code
Amazon Q Business offers a fully managed infrastructure that does not require users to write or manage any code. This simplifies adoption, letting businesses focus on their operations rather than the underlying technical setup.
Easy-to-use interface for deployment and configuration
The service is designed with a user-friendly interface that makes deployment and configuration straightforward, even for users without deep technical expertise. This accessibility enhances productivity and reduces the time needed to get the service up and running.
AWS Glue
Glue is a fully managed ETL (extract, transform, load) service used for preparing and transforming data
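To illustrate what extract, transform, load means, here is a conceptual sketch in plain Python; real AWS Glue jobs typically run PySpark scripts against the Glue Data Catalog, so the data and logic below are toy illustrations only:

```python
# Conceptual ETL sketch (toy data, not actual Glue/PySpark code).
raw = ["alice,34", "bob,", "carol,29"]  # extract: raw records from a source

def transform(record):
    """Clean one record: title-case the name, parse the age, drop bad rows."""
    name, _, age = record.partition(",")
    return {"name": name.title(), "age": int(age)} if age else None

cleaned = [row for row in (transform(r) for r in raw) if row]  # transform
warehouse = {row["name"]: row["age"] for row in cleaned}       # load
print(warehouse)  # {'Alice': 34, 'Carol': 29}
```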
Amazon Translate
To ensure brand names and specific terms are accurately translated
Using Amazon Translate’s custom terminology feature allows users to ensure that brand names and specific terms are accurately translated. This is important for maintaining consistency and accuracy in translations for specific terms that may not have direct equivalents in other languages.
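A hedged sketch of how a custom terminology is referenced in a translate call. The terminology name "brand-terms" is a hypothetical example; it would need to be created beforehand (e.g., via the ImportTerminology API) from a file mapping brand names to their fixed translations. Only the request parameters are built here, so no AWS credentials are needed:

```python
# Sketch: request parameters for Amazon Translate with a custom terminology.
# "brand-terms" is a placeholder terminology name, assumed to exist already.
def build_translate_request(text, source_lang, target_lang, terminology):
    """Build the kwargs for boto3's translate.translate_text call."""
    return {
        "Text": text,
        "SourceLanguageCode": source_lang,
        "TargetLanguageCode": target_lang,
        "TerminologyNames": [terminology],  # custom terminology to apply
    }

params = build_translate_request("Try AnyCo Widgets today!", "en", "de", "brand-terms")
# In a real call: boto3.client("translate").translate_text(**params)
print(params["TerminologyNames"])
```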
Variational Autoencoders (VAEs)
VAEs generate data by sampling from a learned distribution
VAEs generate data by sampling from a learned distribution, typically a Gaussian distribution in the latent space. This sampling process allows VAEs to generate new data points that are similar to the training data.
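The sampling step above can be sketched with the reparameterization trick, where a latent vector is drawn from the learned Gaussian as z = mu + sigma * eps. This toy version uses the standard library rather than a real VAE framework:

```python
import random

def sample_latent(mu, sigma):
    """Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, 1).
    Sampling from the learned Gaussian lets the decoder produce new data."""
    return [m + s * random.gauss(0.0, 1.0) for m, s in zip(mu, sigma)]

# With sigma = 0 the sample collapses to the learned mean.
z = sample_latent([0.5, -1.2], [0.0, 0.0])
print(z)  # [0.5, -1.2]
```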
Generative Adversarial Networks (GANs)
GANs use adversarial training between two networks
GANs use adversarial training between a generator network that creates fake data samples and a discriminator network that tries to distinguish between real and fake samples.
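The adversarial objective can be sketched numerically. Here the discriminator's outputs on one real and one fake sample are stand-in numbers, not a trained network; the point is the opposing losses, computed with binary cross-entropy:

```python
import math

def bce(p, label):
    """Binary cross-entropy for a single predicted probability."""
    return -math.log(p) if label == 1 else -math.log(1.0 - p)

# Toy stand-ins: the discriminator D outputs P(sample is real).
d_real = 0.9   # D's score on a real sample
d_fake = 0.2   # D's score on a generated (fake) sample

# Discriminator loss: push real toward 1 and fake toward 0.
d_loss = bce(d_real, 1) + bce(d_fake, 0)
# Generator loss: fool D into scoring fakes as real (label 1).
g_loss = bce(d_fake, 1)
print(round(d_loss, 3), round(g_loss, 3))
```

As the generator improves, d_fake rises, shrinking the generator's loss while growing the discriminator's: the two networks train against each other.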
Forward diffusion
Forward diffusion is the first phase of a diffusion model: small amounts of Gaussian noise are added to a data sample over many steps until it is indistinguishable from pure noise. This gradual, fixed corruption process defines what the model will later learn to reverse.
Backward diffusion
Backward diffusion (also called reverse diffusion) is the generative phase of a diffusion model: starting from pure noise, a learned network removes noise step by step, gradually refining the sample until new data resembling the training distribution emerges.
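A minimal sketch of one forward (noising) step, using the standard update x_t = sqrt(1 - beta) * x_{t-1} + sqrt(beta) * eps. The input vector and beta value are toy examples; the reverse process would be a trained network predicting and subtracting this noise:

```python
import random

def forward_diffusion_step(x, beta):
    """One forward (noising) step of a diffusion model:
    x_t = sqrt(1 - beta) * x_{t-1} + sqrt(beta) * eps, eps ~ N(0, 1)."""
    return [(1 - beta) ** 0.5 * v + beta ** 0.5 * random.gauss(0.0, 1.0)
            for v in x]

# With beta = 0 no noise is added; as beta grows, the signal fades into noise.
x = [1.0, -2.0, 0.5]
print(forward_diffusion_step(x, 0.0))  # [1.0, -2.0, 0.5]
```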
Tokenization
Tokenization is a process of breaking down text or data into smaller units called tokens, which can be words, phrases, or symbols. While tokenization is a common preprocessing step in natural language processing tasks, it is not specifically related to the process of diffusion models in AI.
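A minimal word-level tokenizer sketch; production NLP systems typically use subword tokenizers (e.g., byte-pair encoding), but this shows the basic idea of splitting text into units:

```python
import re

def tokenize(text):
    """Split text into lowercase word tokens, dropping punctuation."""
    return re.findall(r"[a-z0-9']+", text.lower())

print(tokenize("Break this sentence, please!"))
# → ['break', 'this', 'sentence', 'please']
```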
Data augmentation
Data augmentation is a technique used to artificially increase the size of a dataset by creating modified or augmented versions of existing data samples. While data augmentation is a common practice in machine learning to improve model performance, it is not directly related to the process of diffusion models.
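A toy sketch of augmentation on 1-D numeric samples: mirrored and scaled copies triple the dataset without collecting new data. Real pipelines use richer transforms (crops, rotations, noise for images; synonym swaps for text), which this deliberately simplifies:

```python
def augment(samples):
    """Toy augmentation: add mirrored and slightly scaled copies of each
    sample so the dataset grows without new data collection."""
    mirrored = [list(reversed(s)) for s in samples]
    scaled = [[0.9 * v for v in s] for s in samples]
    return samples + mirrored + scaled

data = [[1.0, 2.0, 3.0]]
augmented = augment(data)
print(len(augmented))  # 3x the original size
```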
A company wants to use Amazon Comprehend to analyze stored documents for key phrases and sentiment. What method should they use?
Batch processing jobs
To analyze stored documents for key phrases and sentiment, the company should use batch processing jobs. This method allows for the analysis of large volumes of documents stored in S3.
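A hedged sketch of the request parameters for one such batch job (key phrase detection). The bucket name, prefix, and IAM role ARN are placeholders, not real resources; only the kwargs are built here, so nothing is submitted to AWS:

```python
# Sketch: kwargs for an asynchronous Amazon Comprehend batch job.
# Bucket, prefix, and role ARN below are illustrative placeholders.
def build_key_phrases_job(bucket, prefix, role_arn):
    """Build kwargs for boto3's comprehend.start_key_phrases_detection_job."""
    return {
        "InputDataConfig": {
            "S3Uri": f"s3://{bucket}/{prefix}",
            "InputFormat": "ONE_DOC_PER_FILE",  # one document per S3 object
        },
        "OutputDataConfig": {"S3Uri": f"s3://{bucket}/output/"},
        "DataAccessRoleArn": role_arn,  # role Comprehend assumes to read S3
        "LanguageCode": "en",
    }

job = build_key_phrases_job("my-docs-bucket", "reviews/",
                            "arn:aws:iam::123456789012:role/ComprehendRole")
# In a real call: boto3.client("comprehend").start_key_phrases_detection_job(**job)
print(job["InputDataConfig"]["S3Uri"])
```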
Discriminator in a Generative Adversarial Network (GAN)
To evaluate and classify data as real or fake
The correct role of the discriminator in a Generative Adversarial Network (GAN) is to evaluate and classify data as real or fake. By providing feedback to the generator based on its classification, the discriminator helps improve the quality of the generated data over time.
Net Promoter Score (NPS)
High user satisfaction
A high NPS indicates high user satisfaction, as it measures the likelihood of users recommending the AI application to others.
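NPS is computed from 0-10 survey responses: the percentage of promoters (9-10) minus the percentage of detractors (0-6), with passives (7-8) counted only in the total. A small worked example with made-up survey data:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# 5 promoters, 3 passives (7-8), 2 detractors out of 10 responses:
print(nps([10, 9, 9, 10, 9, 8, 7, 8, 5, 6]))  # → 30.0
```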
How does Amazon Bedrock evaluate the pricing for image models?
Based on the number of images generated
Amazon Bedrock prices image models based on the number of images generated. This means the cost is determined by the quantity of images produced with the model, rather than by factors such as input tokens or compute time.
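A trivial cost sketch of per-image pricing; the rate below is a made-up illustrative figure, not an actual Amazon Bedrock price:

```python
# Per-image pricing: cost scales with image count, not tokens or compute time.
def image_generation_cost(num_images, price_per_image):
    return num_images * price_per_image

cost = image_generation_cost(250, 0.04)  # 250 images at a hypothetical $0.04
print(f"${cost:.2f}")  # → $10.00
```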