AI Practice Test #5 Flashcards
Amazon Q Developer
Amazon Q Developer assists developers and IT professionals with all their tasks—from coding, testing, and upgrading applications, to diagnosing errors, performing security scanning and fixes, and optimizing AWS resources.
Amazon Q Developer (capabilities)
(1) Understand and manage your cloud infrastructure on AWS
Amazon Q Developer helps you understand and manage your cloud infrastructure on AWS. With this capability, you can list and describe your AWS resources using natural language prompts, minimizing friction in navigating the AWS Management Console and compiling all information from documentation pages.
For example, you can ask Amazon Q Developer, “List all of my Lambda functions”. Amazon Q Developer then returns a list of your AWS Lambda functions as requested, along with deep links so you can navigate to each resource easily.
(2) Get answers to your AWS account-specific cost-related questions using natural language
Amazon Q Developer can get answers to AWS cost-related questions using natural language. This capability works by retrieving and analyzing cost data from AWS Cost Explorer.
Amazon Q Developer (team’s development efforts)
Amazon Q Developer can suggest code snippets, providing developers with recommendations for code based on specific tasks or requirements.
This is the correct option because Amazon Q Developer is designed to assist developers by providing code suggestions and recommendations that align with their coding tasks. It leverages machine learning models trained on vast datasets to suggest code snippets, optimize code efficiency, and help developers follow best practices. This functionality helps speed up development processes and enhances productivity.
Amazon Q Developer is not specifically designed to handle the full deployment process of applications.
Self-supervised learning
Models are provided vast amounts of raw, largely or completely unlabeled data, and then generate the labels themselves.
Foundation models use self-supervised learning to create labels from input data. In self-supervised learning, models are provided vast amounts of raw, completely unlabeled data, and the models then generate the labels themselves. This means no one has instructed or trained the model with labeled training data sets.
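The label-generation idea above can be sketched in a few lines of Python. This is an illustrative toy (next-word prediction on raw text), not any AWS API: the "label" for each position is simply the next word already present in the unlabeled data.

```python
# Self-supervised sketch: derive (context, target) training pairs from raw,
# unlabeled text. No human labeling is involved -- the label at each
# position is just the next word in the data itself.

def make_self_supervised_pairs(raw_text):
    """Turn unlabeled text into (context, target) training pairs."""
    words = raw_text.split()
    return [(words[:i], words[i]) for i in range(1, len(words))]

pairs = make_self_supervised_pairs("the model labels its own data")
for context, target in pairs:
    print(context, "->", target)
```

Each pair uses the preceding words as the input and the following word as the self-generated label, which is the core trick foundation models exploit at scale.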
Supervised learning
In supervised learning, models are supplied with labeled and defined training data to assess for correlations. The sample data specifies both the input and the output for the model. For example, images of handwritten figures are annotated to indicate which number they correspond to. A supervised learning system could recognize the clusters of pixels and shapes associated with each number, given sufficient examples.
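A minimal sketch of the labeled-data idea, assuming made-up features: a 1-nearest-neighbour classifier where each training example supplies both the input (an ink-pixel count standing in for a handwritten image) and the output (the digit).

```python
# Supervised-learning sketch: labeled training data specifies both input
# and output. The features and labels below are illustrative only.

def predict(training_data, x):
    """Return the label of the training example closest to x."""
    nearest = min(training_data, key=lambda pair: abs(pair[0] - x))
    return nearest[1]

# Each example is (ink-pixel count, digit label) -- input and output given.
training_data = [(10, "1"), (35, "8"), (12, "1"), (40, "8")]

print(predict(training_data, 11))  # near the "1" examples
print(predict(training_data, 38))  # near the "8" examples
```

Given sufficient labeled examples, the model associates feature values with their annotated outputs, exactly as the handwritten-digit example describes.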
Unsupervised learning
Unsupervised learning algorithms train on unlabeled data. They scan through the data looking for meaningful patterns, with no predetermined outputs to match against. They can spot patterns and categorize data. For example, unsupervised algorithms could group news articles from different news sites into common categories like sports, crime, etc. They can use natural language processing to comprehend meaning and emotion in the article.
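The article-grouping example can be sketched with a toy clustering rule, assuming a made-up vocabulary-overlap heuristic: documents that share words end up in the same group, with no labels or predefined categories supplied.

```python
# Unsupervised sketch: group documents by shared vocabulary. No labels and
# no predetermined categories -- clusters emerge from the data itself.

def cluster_by_overlap(docs, min_shared=1):
    """Greedily group documents sharing at least `min_shared` words."""
    clusters = []
    for doc in docs:
        words = set(doc.lower().split())
        for cluster in clusters:
            if len(words & cluster["vocab"]) >= min_shared:
                cluster["docs"].append(doc)
                cluster["vocab"] |= words
                break
        else:
            clusters.append({"docs": [doc], "vocab": set(words)})
    return [c["docs"] for c in clusters]

articles = [
    "local team wins match",
    "match highlights and team news",
    "police report rising crime",
]
print(cluster_by_overlap(articles))  # sports articles group together
```

Real unsupervised methods (k-means, topic models) use richer representations, but the principle is the same: structure is inferred, not given.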
Amazon SageMaker Model Cards
Describes how a model should be used in a production environment
Use Amazon SageMaker Model Cards to document critical details about your machine learning (ML) models in a single place for streamlined governance and reporting.
Catalog details such as the intended use and risk rating of a model, training details and metrics, evaluation results and observations, and additional call-outs such as considerations, recommendations, and custom information.
Model cards provide prescriptive guidance on what information to document and include fields for custom information. Specifying the intended uses of a model helps ensure that model developers and users have the information they need to train or deploy the model responsibly.
The intended uses of a model go beyond technical details and describe how a model should be used in production, the scenarios in which it is appropriate to use a model, and additional considerations such as the type of data to use with the model or any assumptions made during development.
Machine Learning models
Machine Learning models can be deterministic, probabilistic, or a mix of both
Machine Learning models can be deterministic, probabilistic, or a mix of both, depending on their nature and how they are designed to operate.
Deterministic models always produce the same output given the same input. Their behavior is predictable and consistent. Example: Decision Trees: Given the same input data, a decision tree will always follow the same path and produce the same output.
Probabilistic models provide a distribution of possible outcomes rather than a single output. They incorporate uncertainty and randomness in their predictions. Example: Bayesian Networks: These models represent probabilistic relationships among variables and provide probabilities for different outcomes.
Some models combine both deterministic and probabilistic elements, such as neural networks and random forests.
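The distinction can be sketched with two toy "models" (both illustrative, not real trained models): a deterministic rule always returns the same output for the same input, while a probabilistic model returns a distribution over outcomes.

```python
# Deterministic vs probabilistic sketch. Both models below are toys.

def deterministic_model(x):
    """A decision-tree-like rule: same input, same output, every time."""
    return "high" if x > 0.5 else "low"

def probabilistic_model(x):
    """Returns a probability distribution over outcomes, not one answer."""
    p_high = min(max(x, 0.0), 1.0)
    return {"high": p_high, "low": 1.0 - p_high}

assert deterministic_model(0.7) == deterministic_model(0.7)  # always identical
print(probabilistic_model(0.7))  # a distribution over outcomes
```

A decision tree behaves like the first function; a Bayesian network behaves like the second, assigning probabilities rather than committing to a single output.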
Dynamic prompt engineering
Implement dynamic prompt engineering to customize responses based on user characteristics like age
Dynamic prompt engineering involves modifying the input prompts to the Large Language Model (LLM) to customize the chatbot’s responses based on the user’s age. By altering the prompt dynamically, you can provide specific instructions or context to the LLM to generate age-appropriate responses. For example, if the user is a child, the prompt might include instructions to use simpler language or a friendly tone. This approach does not require changing the model itself and leverages Amazon Bedrock’s ability to interpret context from customized prompts effectively.
To provide custom responses via an LLM chatbot built using Amazon Bedrock based on the user’s age, you can implement a strategy that dynamically adjusts the chatbot’s responses according to the age group of the user. For the given use case, you can leverage Amazon Bedrock to build custom prompt logic for the LLM that dynamically adjusts the input prompt based on the user’s age category, like the following example in Python:
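A sketch of that prompt logic follows. The age thresholds and the wording of each instruction are illustrative assumptions, not part of Amazon Bedrock itself:

```python
# Dynamic prompt engineering sketch: prepend an age-appropriate
# instruction to the user's question before sending it to the LLM.
# Age bands and phrasing below are illustrative choices.

def build_prompt(user_age, question):
    if user_age < 13:
        style = ("Explain in very simple words with a friendly tone, "
                 "suitable for a child.")
    elif user_age < 18:
        style = "Explain clearly and casually, suitable for a teenager."
    else:
        style = "Explain thoroughly and professionally, suitable for an adult."
    return f"{style}\n\nUser question: {question}"

print(build_prompt(10, "Why is the sky blue?"))
print(build_prompt(35, "Why is the sky blue?"))
```

The model itself is unchanged; only the instructions wrapped around the user's question vary by age group.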
Then, use the Amazon Bedrock API to send the customized prompts to the foundation model. The Bedrock service will generate responses based on the context provided in each prompt, adapting the output to fit the desired style and tone for the specific age group.
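Sending the customized prompt might look like the sketch below, assuming an Anthropic Claude model on Bedrock; the model ID and request shape follow Anthropic's Messages format and should be adjusted for whichever foundation model you actually use.

```python
import json

# Hedged sketch of a Bedrock runtime request body (Anthropic Messages
# format). The model ID and field values are illustrative assumptions.

def build_request(prompt, max_tokens=300):
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(body)

request_body = build_request("Explain in simple words: why is the sky blue?")

# With AWS credentials configured, the call would look like:
#   import boto3
#   client = boto3.client("bedrock-runtime")
#   response = client.invoke_model(
#       modelId="anthropic.claude-3-haiku-20240307-v1:0",
#       body=request_body,
#   )
print(request_body)
```

The service then generates a response conditioned on the age-specific context carried in the prompt, as the flashcard describes.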
Retrieval-Augmented Generation (RAG)
RAG is a technique that combines a retrieval mechanism (which fetches relevant documents or data from a knowledge base) with a generation model to provide more factual and context-rich responses. While RAG can enhance response accuracy by adding external context, it is not specifically designed for customizing responses based on user characteristics like age. RAG focuses on improving the relevance and factual accuracy of outputs, not on adapting the style or complexity of the language to suit different age groups.
Model re-training
Re-training the model involves using a large dataset to update the entire model’s parameters, which is time-consuming, costly, and unnecessary for simply tailoring responses based on user age. Amazon Bedrock provides access to pre-trained foundation models that are already capable of generating diverse outputs based on the input prompts. Re-training is overkill for this task and is not the appropriate solution for generating age-specific responses dynamically.
Fine-tuning
Fine-tuning involves training the LLM on a specialized dataset to improve its performance on specific tasks or domains. However, this method is more suited for developing domain-specific expertise in the model rather than adjusting the style or tone of responses based on user age. Fine-tuning can be resource-intensive and time-consuming, and it is not necessary for generating age-appropriate responses when prompt engineering can dynamically handle the customization without modifying the model itself.
AWS Audit Manager
AWS Audit Manager helps automate the collection of evidence to continuously audit your AWS usage. It simplifies the process of assessing risk and compliance with regulations and industry standards, making it an essential tool for governance in AI systems.
AWS Artifact
AWS Artifact provides on-demand access to AWS’ compliance reports and online agreements. It is useful for obtaining compliance documentation but does not provide continuous auditing or automated evidence collection.
AWS Trusted Advisor
AWS Trusted Advisor offers guidance to help optimize your AWS environment for cost savings, performance, security, and fault tolerance. While it provides recommendations for best practices, it does not focus on auditing or evidence collection for compliance.
AWS CloudTrail
AWS CloudTrail records AWS API calls for auditing purposes and delivers log files for compliance and operational troubleshooting. It is crucial for tracking user activity but does not automate compliance assessments or evidence collection.
Amazon Q in Connect
Amazon Connect is the contact center service from AWS. Amazon Q helps customer service agents provide better customer service. Amazon Q in Connect uses real-time conversation with the customer along with relevant company content to automatically recommend what to say or what actions an agent should take to better assist customers.
Amazon Q Business
Amazon Q Business is a fully managed, generative-AI powered assistant that you can configure to answer questions, provide summaries, generate content, and complete tasks based on your enterprise data. It allows end users to receive immediate, permissions-aware responses from enterprise data sources with citations, for use cases such as IT, HR, and benefits help desks.
Amazon Q in QuickSight
With Amazon Q in QuickSight, customers get a generative BI assistant that allows business analysts to use natural language to build BI dashboards in minutes and easily create visualizations and complex calculations.