SAA L2P 701-800 v24.021 Flashcards
QUESTION 800
A city has deployed a web application running on Amazon EC2 instances behind an Application
Load Balancer (ALB). The application’s users have reported sporadic performance issues that
appear to be related to DDoS attacks originating from random IP addresses. The city needs a
solution that requires minimal configuration changes and provides an audit trail for the DDoS
sources.
Which solution meets these requirements?
A. Enable an AWS WAF web ACL on the ALB, and configure rules to block traffic from unknown
sources.
B. Subscribe to Amazon Inspector. Engage the AWS DDoS Response Team (DRT) to integrate
mitigating controls into the service.
C. Subscribe to AWS Shield Advanced. Engage the AWS DDoS Response Team (DRT) to integrate
mitigating controls into the service.
D. Create an Amazon CloudFront distribution for the application, and set the ALB as the origin.
Enable an AWS WAF web ACL on the distribution, and configure rules to block traffic from
unknown sources.
C. Subscribe to AWS Shield Advanced. Engage the AWS DDoS Response Team (DRT) to integrate
mitigating controls into the service.
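As a hedged boto3 sketch of this answer (the ALB ARN and role name are hypothetical; the DRT is now called the Shield Response Team, but the API still uses the DRT name):

```python
import boto3

ALB_ARN = "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/city-web/abc123"

shield = boto3.client("shield")

# Subscribe the account to AWS Shield Advanced (a one-time, account-level action).
shield.create_subscription()

# Protect the ALB so Shield Advanced mitigates DDoS attacks against it and
# records attack diagnostics (the audit trail of DDoS sources).
shield.create_protection(Name="city-alb-protection", ResourceArn=ALB_ARN)

# Grant the Shield Response Team a role so they can apply mitigations
# on the account's behalf.
shield.associate_drt_role(
    RoleArn="arn:aws:iam::111122223333:role/ShieldResponseTeamAccess"
)
```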
QUESTION 799
A gaming company wants to launch a new internet-facing application in multiple AWS Regions.
The application will use the TCP and UDP protocols for communication. The company needs to
provide high availability and minimum latency for global users.
Which combination of actions should a solutions architect take to meet these requirements?
(Choose two.)
A. Create internal Network Load Balancers in front of the application in each Region.
B. Create external Application Load Balancers in front of the application in each Region.
C. Create an AWS Global Accelerator accelerator to route traffic to the load balancers in each
Region.
D. Configure Amazon Route 53 to use a geolocation routing policy to distribute the traffic.
E. Configure Amazon CloudFront to handle the traffic and route requests to the application in each
Region.
A. Create internal Network Load Balancers in front of the application in each Region.
C. Create an AWS Global Accelerator accelerator to route traffic to the load balancers in each
Region.
When you add an internal Load Balancer or an Amazon EC2 instance endpoint in AWS Global
Accelerator, you enable internet traffic to flow directly to and from the endpoint in Virtual Private
Clouds (VPCs) by targeting it in a private subnet. The VPC that contains the load balancer or
EC2 instance must have an internet gateway attached to it, to indicate that the VPC accepts
internet traffic. However, you don’t need public IP addresses on the load balancer or EC2
instance. You also don’t need an associated internet gateway route for the subnet.
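A minimal boto3 sketch of the chosen design, assuming hypothetical NLB ARNs and ports (a TCP listener would be created the same way as the UDP one shown):

```python
import boto3

# Global Accelerator is a global service; its API is served from us-west-2.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

# Hypothetical internal NLB ARNs, one per Region.
NLB_ARNS = {
    "us-east-1": "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/game/abc",
    "eu-west-1": "arn:aws:elasticloadbalancing:eu-west-1:111122223333:loadbalancer/net/game/def",
}

acc = ga.create_accelerator(Name="game-accelerator", IpAddressType="IPV4", Enabled=True)
listener = ga.create_listener(
    AcceleratorArn=acc["Accelerator"]["AcceleratorArn"],
    Protocol="UDP",
    PortRanges=[{"FromPort": 7777, "ToPort": 7777}],
)

# One endpoint group per Region, each pointing at that Region's internal NLB.
for region, nlb_arn in NLB_ARNS.items():
    ga.create_endpoint_group(
        ListenerArn=listener["Listener"]["ListenerArn"],
        EndpointGroupRegion=region,
        EndpointConfigurations=[{"EndpointId": nlb_arn, "Weight": 128}],
    )
```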
QUESTION 798
A company has an application that uses Docker containers in its local data center. The
application runs on a container host that stores persistent data in a volume on the host. The
container instances use the stored persistent data.
The company wants to move the application to a fully managed service because the company
does not want to manage any servers or storage infrastructure.
Which solution will meet these requirements?
A. Use Amazon Elastic Kubernetes Service (Amazon EKS) with self-managed nodes. Create an
Amazon Elastic Block Store (Amazon EBS) volume attached to an Amazon EC2 instance. Use the
EBS volume as a persistent volume mounted in the containers.
B. Use Amazon Elastic Container Service (Amazon ECS) with an AWS Fargate launch type. Create
an Amazon Elastic File System (Amazon EFS) volume. Add the EFS volume as a persistent
storage volume mounted in the containers.
C. Use Amazon Elastic Container Service (Amazon ECS) with an AWS Fargate launch type. Create
an Amazon S3 bucket. Map the S3 bucket as a persistent storage volume mounted in the
containers.
D. Use Amazon Elastic Container Service (Amazon ECS) with an Amazon EC2 launch type. Create
an Amazon Elastic File System (Amazon EFS) volume. Add the EFS volume as a persistent
storage volume mounted in the containers.
B. Use Amazon Elastic Container Service (Amazon ECS) with an AWS Fargate launch type. Create
an Amazon Elastic File System (Amazon EFS) volume. Add the EFS volume as a persistent
storage volume mounted in the containers.
Explanation:
Mounting an S3 bucket as a container volume is not natively supported on Fargate; you would
have to build that integration yourself. EFS, by contrast, is fully supported with Fargate.
https://stackoverflow.com/questions/66391791/how-to-mount-s3-bucket-to-ecs-fargate-container
https://docs.aws.amazon.com/AmazonECS/latest/bestpracticesguide/storage.html
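For illustration, a hedged boto3 sketch of a Fargate task definition with an EFS volume (file system ID, image, and sizes are hypothetical):

```python
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="app-with-persistent-data",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="512",
    memory="1024",
    volumes=[{
        "name": "app-data",
        "efsVolumeConfiguration": {
            "fileSystemId": "fs-0123456789abcdef0",
            "transitEncryption": "ENABLED",
        },
    }],
    containerDefinitions=[{
        "name": "app",
        "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/app:latest",
        "essential": True,
        # The EFS volume is mounted into the container as persistent storage.
        "mountPoints": [{"sourceVolume": "app-data", "containerPath": "/data"}],
    }],
)
```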
QUESTION 797
A company is deploying a new application to Amazon Elastic Kubernetes Service (Amazon EKS)
with an AWS Fargate cluster. The application needs a storage solution for data persistence. The
solution must be highly available and fault tolerant. The solution also must be shared between
multiple application containers.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create Amazon Elastic Block Store (Amazon EBS) volumes in the same Availability Zones where
EKS worker nodes are placed. Register the volumes in a StorageClass object on an EKS cluster.
Use EBS Multi-Attach to share the data between containers.
B. Create an Amazon Elastic File System (Amazon EFS) file system. Register the file system in a
StorageClass object on an EKS cluster. Use the same file system for all containers.
C. Create an Amazon Elastic Block Store (Amazon EBS) volume. Register the volume in a
StorageClass object on an EKS cluster. Use the same volume for all containers.
D. Create Amazon Elastic File System (Amazon EFS) file systems in the same Availability Zones
where EKS worker nodes are placed. Register the file systems in a StorageClass object on an EKS cluster. Create an AWS Lambda function to synchronize the data between file systems.
B. Create an Amazon Elastic File System (Amazon EFS) file system. Register the file system in a
StorageClass object on an EKS cluster. Use the same file system for all containers.
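A boto3 sketch of the underlying EFS resources (subnet and security group IDs are hypothetical); on the cluster itself, a StorageClass for the EFS CSI driver then references the file system ID:

```python
import boto3

efs = boto3.client("efs")

# A Regional EFS file system spans multiple AZs, giving high availability
# and fault tolerance with no infrastructure to manage.
fs = efs.create_file_system(PerformanceMode="generalPurpose", Encrypted=True)

# One mount target per subnet that the Fargate pods run in.
for subnet_id in ["subnet-0aaa", "subnet-0bbb"]:
    efs.create_mount_target(
        FileSystemId=fs["FileSystemId"],
        SubnetId=subnet_id,
        SecurityGroups=["sg-0ccc"],
    )

# The EFS CSI driver's StorageClass points at fs["FileSystemId"]; every pod
# that mounts a PersistentVolumeClaim backed by that class shares the data.
```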
QUESTION 796
A solutions architect creates a VPC that includes two public subnets and two private subnets. A
corporate security mandate requires the solutions architect to launch all Amazon EC2 instances
in a private subnet. However, when the solutions architect launches an EC2 instance that runs a
web server on ports 80 and 443 in a private subnet, no external internet traffic can connect to the
server.
What should the solutions architect do to resolve this issue?
A. Attach the EC2 instance to an Auto Scaling group in a private subnet. Ensure that the DNS record
for the website resolves to the Auto Scaling group identifier.
B. Provision an internet-facing Application Load Balancer (ALB) in a public subnet. Add the EC2
instance to the target group that is associated with the ALB. Ensure that the DNS record for the
website resolves to the ALB.
C. Launch a NAT gateway in a private subnet. Update the route table for the private subnets to add a
default route to the NAT gateway. Attach a public Elastic IP address to the NAT gateway.
D. Ensure that the security group that is attached to the EC2 instance allows HTTP traffic on port 80
and HTTPS traffic on port 443. Ensure that the DNS record for the website resolves to the public
IP address of the EC2 instance.
B. Provision an internet-facing Application Load Balancer (ALB) in a public subnet. Add the EC2
instance to the target group that is associated with the ALB. Ensure that the DNS record for the
website resolves to the ALB.
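A hedged boto3 sketch of this setup (all IDs and names are placeholders); the instance stays in its private subnet while only the ALB is public:

```python
import boto3

elbv2 = boto3.client("elbv2")

alb = elbv2.create_load_balancer(
    Name="web-alb",
    Scheme="internet-facing",
    Subnets=["subnet-public-a", "subnet-public-b"],
    SecurityGroups=["sg-alb"],
    Type="application",
)["LoadBalancers"][0]

tg = elbv2.create_target_group(
    Name="web-servers",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123",
    TargetType="instance",
)["TargetGroups"][0]

# Register the private-subnet web server as a target.
elbv2.register_targets(
    TargetGroupArn=tg["TargetGroupArn"],
    Targets=[{"Id": "i-0123456789abcdef0"}],
)

elbv2.create_listener(
    LoadBalancerArn=alb["LoadBalancerArn"],
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
)
```

The DNS record for the site is then an alias to the ALB's DNS name, and an HTTPS (port 443) listener would be added with an ACM certificate.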
QUESTION 795
A company needs to provide customers with secure access to its data. The company processes
customer data and stores the results in an Amazon S3 bucket.
All the data is subject to strong regulations and security requirements. The data must be
encrypted at rest. Each customer must be able to access only their data from their AWS account.
Company employees must not be able to access the data.
Which solution will meet these requirements?
A. Provision an AWS Certificate Manager (ACM) certificate for each customer. Encrypt the data
client-side. In the private certificate policy, deny access to the certificate for all principals except an IAM role that the customer provides.
B. Provision a separate AWS Key Management Service (AWS KMS) key for each customer. Encrypt
the data server-side. In the S3 bucket policy, deny decryption of data for all principals except an
IAM role that the customer provides.
C. Provision a separate AWS Key Management Service (AWS KMS) key for each customer. Encrypt
the data server-side. In each KMS key policy, deny decryption of data for all principals except an
IAM role that the customer provides.
D. Provision an AWS Certificate Manager (ACM) certificate for each customer. Encrypt the data
client-side. In the public certificate policy, deny access to the certificate for all principals except an
IAM role that the customer provides.
C. Provision a separate AWS Key Management Service (AWS KMS) key for each customer. Encrypt
the data server-side. In each KMS key policy, deny decryption of data for all principals except an
IAM role that the customer provides.
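A minimal sketch of a per-customer key in boto3, assuming hypothetical account IDs and role names. This is only an outline of the key policy idea: the company keeps key *management* permissions but not kms:Decrypt, while the customer-provided role gets decrypt access (a production policy would also need statements letting the company's processing role encrypt, e.g. kms:GenerateDataKey for SSE-KMS uploads):

```python
import boto3, json

ACCOUNT_ID = "111122223333"                                           # company account
CUSTOMER_ROLE = "arn:aws:iam::444455556666:role/CustomerDataAccess"   # customer-provided

key_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Employees can administer the key but cannot use it to decrypt data.
            "Sid": "KeyAdministration",
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{ACCOUNT_ID}:root"},
            "Action": ["kms:Create*", "kms:Describe*", "kms:Put*", "kms:Enable*",
                       "kms:Disable*", "kms:Delete*", "kms:ScheduleKeyDeletion"],
            "Resource": "*",
        },
        {
            # Only the customer's role can decrypt.
            "Sid": "CustomerDecrypt",
            "Effect": "Allow",
            "Principal": {"AWS": CUSTOMER_ROLE},
            "Action": ["kms:Decrypt"],
            "Resource": "*",
        },
    ],
}

kms = boto3.client("kms")
kms.create_key(
    Policy=json.dumps(key_policy),
    Description="Per-customer data key for account 444455556666",
)
```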
QUESTION 794
A company is building a microservices-based application that will be deployed on Amazon Elastic
Kubernetes Service (Amazon EKS). The microservices will interact with each other. The company
wants to ensure that the application is observable to identify performance issues in the future.
Which solution will meet these requirements?
A. Configure the application to use Amazon ElastiCache to reduce the number of requests that are
sent to the microservices.
B. Configure Amazon CloudWatch Container Insights to collect metrics from the EKS clusters.
Configure AWS X-Ray to trace the requests between the microservices.
C. Configure AWS CloudTrail to review the API calls. Build an Amazon QuickSight dashboard to
observe the microservice interactions.
D. Use AWS Trusted Advisor to understand the performance of the application.
B. Configure Amazon CloudWatch Container Insights to collect metrics from the EKS clusters.
Configure AWS X-Ray to trace the requests between the microservices.
Explanation:
Amazon CloudWatch Container Insights: This service provides monitoring and troubleshooting
capabilities for containerized applications. It collects and aggregates metrics, logs, and events
from Amazon EKS clusters and containers. This helps in monitoring the performance and health
of microservices.
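A hedged sketch of both halves (cluster name is hypothetical; the X-Ray daemon or an ADOT collector must also run alongside the pods for traces to be delivered):

```python
import boto3

eks = boto3.client("eks")

# The CloudWatch observability add-on turns on Container Insights metrics
# collection for the cluster.
eks.create_addon(
    clusterName="prod-cluster",
    addonName="amazon-cloudwatch-observability",
)

# Inside each microservice, the X-Ray SDK traces requests between services
# (requires: pip install aws-xray-sdk).
from aws_xray_sdk.core import xray_recorder, patch_all

xray_recorder.configure(service="orders-service")
patch_all()  # auto-instruments boto3, requests, and other supported libraries

@xray_recorder.capture("process_order")
def process_order(order_id: str) -> None:
    # Business logic here; the decorator records a timed subsegment.
    ...
```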
QUESTION 793
A company is building a shopping application on AWS. The application offers a catalog that changes once each month and needs to scale with traffic volume. The company wants the lowest
possible latency from the application. Data from each user’s shopping cart needs to be highly
available. User session data must be available even if the user is disconnected and reconnects.
What should a solutions architect do to ensure that the shopping cart data is preserved at all
times?
A. Configure an Application Load Balancer to enable the sticky sessions feature (session affinity) for
access to the catalog in Amazon Aurora.
B. Configure Amazon ElastiCache for Redis to cache catalog data from Amazon DynamoDB and
shopping cart data from the user’s session.
C. Configure Amazon OpenSearch Service to cache catalog data from Amazon DynamoDB and
shopping cart data from the user’s session.
D. Configure an Amazon EC2 instance with Amazon Elastic Block Store (Amazon EBS) storage for
the catalog and shopping cart. Configure automated snapshots.
B. Configure Amazon ElastiCache for Redis to cache catalog data from Amazon DynamoDB and
shopping cart data from the user’s session.
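For illustration, a sketch of the cart-caching pattern with the redis client (the endpoint name is hypothetical; for the cart to survive node failure, the ElastiCache cluster would run as a Multi-AZ replication group):

```python
# Requires: pip install redis
import json
import redis

r = redis.Redis(host="carts.abc123.ng.0001.use1.cache.amazonaws.com", port=6379)

def save_cart(session_id: str, cart: dict) -> None:
    # Keep the cart for 7 days so a disconnected user can reconnect and resume.
    r.setex(f"cart:{session_id}", 7 * 24 * 3600, json.dumps(cart))

def load_cart(session_id: str) -> dict:
    raw = r.get(f"cart:{session_id}")
    return json.loads(raw) if raw else {}
```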
QUESTION 792
A company has a web application that includes an embedded NoSQL database. The application
runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The instances run in
an Amazon EC2 Auto Scaling group in a single Availability Zone.
A recent increase in traffic requires the application to be highly available and for the database to
be eventually consistent.
Which solution will meet these requirements with the LEAST operational overhead?
A. Replace the ALB with a Network Load Balancer. Maintain the embedded NoSQL database with its
replication service on the EC2 instances.
B. Replace the ALB with a Network Load Balancer. Migrate the embedded NoSQL database to
Amazon DynamoDB by using AWS Database Migration Service (AWS DMS).
C. Modify the Auto Scaling group to use EC2 instances across three Availability Zones. Maintain the
embedded NoSQL database with its replication service on the EC2 instances.
D. Modify the Auto Scaling group to use EC2 instances across three Availability Zones. Migrate the
embedded NoSQL database to Amazon DynamoDB by using AWS Database Migration Service
(AWS DMS).
D. Modify the Auto Scaling group to use EC2 instances across three Availability Zones. Migrate the
embedded NoSQL database to Amazon DynamoDB by using AWS Database Migration Service
(AWS DMS).
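The Auto Scaling change is a single API call; a hedged boto3 sketch with hypothetical names (the database move itself is performed separately by an AWS DMS replication task targeting DynamoDB):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Spread the web tier across three Availability Zones by listing one
# subnet per AZ.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    VPCZoneIdentifier="subnet-az-a,subnet-az-b,subnet-az-c",
    MinSize=3,  # at least one instance per AZ
)
```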
QUESTION 791
A company is deploying an application in three AWS Regions using an Application Load
Balancer. Amazon Route 53 will be used to distribute traffic between these Regions.
Which Route 53 configuration should a solutions architect use to provide the MOST high-
performing experience?
A. Create an A record with a latency policy.
B. Create an A record with a geolocation policy.
C. Create a CNAME record with a failover policy.
D. Create a CNAME record with a geoproximity policy.
A. Create an A record with a latency policy.
Explanation:
Latency-based routing (LBR) is an Amazon Route 53 feature that helps you improve your
application’s performance for a global audience. You can run applications in multiple AWS
Regions, and Route 53, using dozens of edge locations worldwide, will route end users to
the AWS Region that provides the lowest latency.
https://aws.amazon.com/route53/faqs/
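A boto3 sketch of latency records for two of the Regions (hosted zone ID and domain are hypothetical; the AliasTarget HostedZoneId is the ALB's canonical zone ID for its Region, not the domain's zone):

```python
import boto3

route53 = boto3.client("route53")

records = [
    ("us-east-1", "Z35SXDOTRQ7X7K", "my-alb-use1.us-east-1.elb.amazonaws.com"),
    ("eu-west-1", "Z32O12XQLNTSW2", "my-alb-euw1.eu-west-1.elb.amazonaws.com"),
]

changes = [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
        "Name": "app.example.com",
        "Type": "A",
        "SetIdentifier": f"app-{region}",  # required when a routing policy is used
        "Region": region,                  # marks this as a latency record
        "AliasTarget": {
            "HostedZoneId": zone_id,
            "DNSName": dns_name,
            "EvaluateTargetHealth": True,
        },
    },
} for region, zone_id, dns_name in records]

route53.change_resource_record_sets(
    HostedZoneId="Z0HYPOTHETICAL",
    ChangeBatch={"Changes": changes},
)
```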
QUESTION 790
A solutions architect is designing a shared storage solution for a web application that is deployed
across multiple Availability Zones. The web application runs on Amazon EC2 instances that are in
an Auto Scaling group. The company plans to make frequent changes to the content. The
solution must have strong consistency in returning the new content as soon as the changes
occur.
Which solutions meet these requirements? (Choose two.)
A. Use AWS Storage Gateway Volume Gateway Internet Small Computer Systems Interface (iSCSI)
block storage that is mounted to the individual EC2 instances.
B. Create an Amazon Elastic File System (Amazon EFS) file system. Mount the EFS file system on
the individual EC2 instances.
C. Create a shared Amazon Elastic Block Store (Amazon EBS) volume. Mount the EBS volume on
the individual EC2 instances.
D. Use AWS DataSync to perform continuous synchronization of data between EC2 hosts in the Auto
Scaling group.
E. Create an Amazon S3 bucket to store the web content. Set the metadata for the Cache-Control
header to no-cache. Use Amazon CloudFront to deliver the content.
B. Create an Amazon Elastic File System (Amazon EFS) file system. Mount the EFS file system on
the individual EC2 instances.
E. Create an Amazon S3 bucket to store the web content. Set the metadata for the Cache-Control
header to no-cache. Use Amazon CloudFront to deliver the content.
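The Cache-Control half of answer E is a single upload option; a small sketch with a hypothetical bucket name:

```python
import boto3

s3 = boto3.client("s3")

# Cache-Control: no-cache makes CloudFront revalidate with the S3 origin
# before serving, so new content is returned as soon as it changes.
s3.put_object(
    Bucket="example-web-content",
    Key="index.html",
    Body=open("index.html", "rb"),
    ContentType="text/html",
    CacheControl="no-cache",
)
```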
QUESTION 789
A company regularly uploads GB-sized files to Amazon S3. After the company uploads the files,
the company uses a fleet of Amazon EC2 Spot Instances to transcode the file format. The
company needs to scale throughput when the company uploads data from the on-premises data
center to Amazon S3 and when the company downloads data from Amazon S3 to the EC2
instances.
Which solutions will meet these requirements? (Choose two.)
A. Use the S3 bucket access point instead of accessing the S3 bucket directly.
B. Upload the files into multiple S3 buckets.
C. Use S3 multipart uploads.
D. Fetch multiple byte-ranges of an object in parallel.
E. Add a random prefix to each object when uploading the files.
C. Use S3 multipart uploads.
D. Fetch multiple byte-ranges of an object in parallel.
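Both techniques in a short boto3 sketch (bucket and file names are hypothetical; only one byte range is shown, but several would run in parallel threads):

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Upload: multipart with parallel part uploads scales throughput to S3.
config = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,  # switch to multipart above 64 MB
    multipart_chunksize=64 * 1024 * 1024,
    max_concurrency=10,                    # parts uploaded in parallel
)
s3.upload_file("video.mov", "example-media-bucket", "video.mov", Config=config)

# Download: fetch byte ranges in parallel to scale throughput to the
# EC2 Spot Instances.
part = s3.get_object(
    Bucket="example-media-bucket",
    Key="video.mov",
    Range="bytes=0-67108863",  # first 64 MB
)["Body"].read()
```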
QUESTION 788
A company wants to standardize its Amazon Elastic Block Store (Amazon EBS) volume
encryption strategy. The company also wants to minimize the cost and configuration effort
required to operate the volume encryption check.
Which solution will meet these requirements?
A. Write API calls to describe the EBS volumes and to confirm the EBS volumes are encrypted. Use
Amazon EventBridge to schedule an AWS Lambda function to run the API calls.
B. Write API calls to describe the EBS volumes and to confirm the EBS volumes are encrypted. Run
the API calls on an AWS Fargate task.
C. Create an AWS Identity and Access Management (IAM) policy that requires the use of tags on
EBS volumes. Use AWS Cost Explorer to display resources that are not properly tagged. Encrypt
the untagged resources manually.
D. Create an AWS Config rule for Amazon EBS to evaluate if a volume is encrypted and to flag the
volume if it is not encrypted.
D. Create an AWS Config rule for Amazon EBS to evaluate if a volume is encrypted and to flag the
volume if it is not encrypted.
Explanation:
You could use a managed rule to quickly start assessing whether your Amazon Elastic Block
Store (Amazon EBS) volumes are encrypted or whether specific tags are applied to your
resources.
https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config_use-managed-rules.html
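Deploying the managed rule is one API call; a sketch with a hypothetical rule name:

```python
import boto3

config = boto3.client("config")

# The ENCRYPTED_VOLUMES managed rule flags attached EBS volumes that are
# not encrypted; there is no custom code to write or schedule.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "ebs-volumes-encrypted",
        "Source": {"Owner": "AWS", "SourceIdentifier": "ENCRYPTED_VOLUMES"},
        "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Volume"]},
    }
)
```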
QUESTION 787
A company manages AWS accounts in AWS Organizations. AWS IAM Identity Center (AWS
Single Sign-On) and AWS Control Tower are configured for the accounts. The company wants to
manage multiple user permissions across all the accounts.
The permissions will be used by multiple IAM users and must be split between the developer and
administrator teams. Each team requires different permissions. The company wants a solution
that includes new users that are hired on both teams.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create individual users in IAM Identity Center for each account. Create separate developer and
administrator groups in IAM Identity Center. Assign the users to the appropriate groups. Create a
custom IAM policy for each group to set fine-grained permissions.
B. Create individual users in IAM Identity Center for each account. Create separate developer and
administrator groups in IAM Identity Center. Assign the users to the appropriate groups. Attach
AWS managed IAM policies to each user as needed for fine-grained permissions.
C. Create individual users in IAM Identity Center. Create new developer and administrator groups in
IAM Identity Center. Create new permission sets that include the appropriate IAM policies for each
group. Assign the new groups to the appropriate accounts. Assign the new permission sets to the
new groups. When new users are hired, add them to the appropriate group.
D. Create individual users in IAM Identity Center. Create new permission sets that include the
appropriate IAM policies for each user. Assign the users to the appropriate accounts. Grant
additional IAM permissions to the users from within specific accounts. When new users are hired,
add them to IAM Identity Center and assign them to the accounts.
C. Create individual users in IAM Identity Center. Create new developer and administrator groups in
IAM Identity Center. Create new permission sets that include the appropriate IAM policies for each
group. Assign the new groups to the appropriate accounts. Assign the new permission sets to the
new groups. When new users are hired, add them to the appropriate group.
Explanation:
https://docs.aws.amazon.com/controltower/latest/userguide/sso.html
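A hedged boto3 sketch of one team's setup (instance ARN, group ID, account ID, and the choice of PowerUserAccess are all illustrative):

```python
import boto3

sso_admin = boto3.client("sso-admin")

INSTANCE_ARN = "arn:aws:sso:::instance/ssoins-hypothetical"
DEV_GROUP_ID = "identity-store-guid-of-developer-group"
ACCOUNT_ID = "111122223333"

# One permission set per team, reused across every account.
ps = sso_admin.create_permission_set(
    InstanceArn=INSTANCE_ARN,
    Name="DeveloperAccess",
)["PermissionSet"]

sso_admin.attach_managed_policy_to_permission_set(
    InstanceArn=INSTANCE_ARN,
    PermissionSetArn=ps["PermissionSetArn"],
    ManagedPolicyArn="arn:aws:iam::aws:policy/PowerUserAccess",
)

# Assign the developer *group* (not individual users) to the account;
# new hires only need to be added to the group.
sso_admin.create_account_assignment(
    InstanceArn=INSTANCE_ARN,
    TargetId=ACCOUNT_ID,
    TargetType="AWS_ACCOUNT",
    PermissionSetArn=ps["PermissionSetArn"],
    PrincipalType="GROUP",
    PrincipalId=DEV_GROUP_ID,
)
```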
QUESTION 786
A company that uses AWS needs a solution to predict the resources needed for manufacturing
processes each month. The solution must use historical values that are currently stored in an
Amazon S3 bucket. The company has no machine learning (ML) experience and wants to use a
managed service for the training and predictions.
Which combination of steps will meet these requirements? (Choose two.)
A. Deploy an Amazon SageMaker model. Create a SageMaker endpoint for inference.
B. Use Amazon SageMaker to train a model by using the historical data in the S3 bucket.
C. Configure an AWS Lambda function with a function URL that uses Amazon SageMaker endpoints
to create predictions based on the inputs.
D. Configure an AWS Lambda function with a function URL that uses an Amazon Forecast predictor
to create a prediction based on the inputs.
E. Train an Amazon Forecast predictor by using the historical data in the S3 bucket.
D. Configure an AWS Lambda function with a function URL that uses an Amazon Forecast predictor
to create a prediction based on the inputs.
E. Train an Amazon Forecast predictor by using the historical data in the S3 bucket.
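A sketch of the Lambda half, assuming a forecast has already been generated from the trained predictor (the forecast ARN and item key are hypothetical):

```python
import json
import boto3

# forecastquery is the runtime client for querying a generated forecast.
forecast_query = boto3.client("forecastquery")
FORECAST_ARN = "arn:aws:forecast:us-east-1:111122223333:forecast/resource-demand"

def lambda_handler(event, context):
    # Function URL events carry the caller's payload in the request body.
    body = json.loads(event.get("body", "{}"))
    result = forecast_query.query_forecast(
        ForecastArn=FORECAST_ARN,
        Filters={"item_id": body["item_id"]},
    )
    return {
        "statusCode": 200,
        "body": json.dumps(result["Forecast"]["Predictions"]),
    }
```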
QUESTION 785
A company is creating an application. The company stores data from tests of the application in
multiple on-premises locations.
The company needs to connect the on-premises locations to VPCs in an AWS Region in the
AWS Cloud. The number of accounts and VPCs will increase during the next year. The network
architecture must simplify the administration of new connections and must provide the ability to
scale.
Which solution will meet these requirements with the LEAST administrative overhead?
A. Create a peering connection between the VPCs. Create a VPN connection between the VPCs and
the on-premises locations.
B. Launch an Amazon EC2 instance. On the instance, include VPN software that uses a VPN
connection to connect all VPCs and on-premises locations.
C. Create a transit gateway. Create VPC attachments for the VPC connections. Create VPN
attachments for the on-premises connections.
D. Create an AWS Direct Connect connection between the on-premises locations and a central VPC.
Connect the central VPC to other VPCs by using peering connections.
C. Create a transit gateway. Create VPC attachments for the VPC connections. Create VPN
attachments for the on-premises connections.
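A transit gateway scales as a hub: each new VPC or location is one more attachment rather than a new mesh of peering connections. A hedged boto3 sketch with hypothetical IDs:

```python
import boto3

ec2 = boto3.client("ec2")

tgw = ec2.create_transit_gateway(Description="hub for VPCs and on-premises sites")
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# One attachment per VPC.
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw_id,
    VpcId="vpc-0123",
    SubnetIds=["subnet-0aaa", "subnet-0bbb"],
)

# One VPN attachment per on-premises location: the VPN connection targets
# the transit gateway instead of a virtual private gateway.
ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId="cgw-0123",
    TransitGatewayId=tgw_id,
)
```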
QUESTION 784
A company’s ecommerce website has unpredictable traffic and uses AWS Lambda functions to
directly access a private Amazon RDS for PostgreSQL DB instance. The company wants to
maintain predictable database performance and ensure that the Lambda invocations do not
overload the database with too many connections.
What should a solutions architect do to meet these requirements?
A. Point the client driver at an RDS custom endpoint. Deploy the Lambda functions inside a VPC.
B. Point the client driver at an RDS proxy endpoint. Deploy the Lambda functions inside a VPC.
C. Point the client driver at an RDS custom endpoint. Deploy the Lambda functions outside a VPC.
D. Point the client driver at an RDS proxy endpoint. Deploy the Lambda functions outside a VPC.
B. Point the client driver at an RDS proxy endpoint. Deploy the Lambda functions inside a VPC.
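The proxy pools and shares database connections, so bursts of Lambda invocations do not exhaust the PostgreSQL connection limit. A sketch of creating the proxy with boto3 (all ARNs, IDs, and names are hypothetical):

```python
import boto3

rds = boto3.client("rds")

rds.create_db_proxy(
    DBProxyName="ecommerce-proxy",
    EngineFamily="POSTGRESQL",
    Auth=[{
        "AuthScheme": "SECRETS",
        "SecretArn": "arn:aws:secretsmanager:us-east-1:111122223333:secret:db-creds",
        "IAMAuth": "DISABLED",
    }],
    RoleArn="arn:aws:iam::111122223333:role/rds-proxy-secret-access",
    VpcSubnetIds=["subnet-0aaa", "subnet-0bbb"],
)

# Register the DB instance behind the proxy; the Lambda functions (running
# inside the VPC) then point their client driver at the proxy endpoint.
rds.register_db_proxy_targets(
    DBProxyName="ecommerce-proxy",
    DBInstanceIdentifiers=["ecommerce-postgres"],
)
```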
QUESTION 783
A company wants to migrate its web applications from on premises to AWS. The company is
located close to the eu-central-1 Region. Because of regulations, the company cannot launch
some of its applications in eu-central-1. The company wants to achieve single-digit millisecond
latency.
Which solution will meet these requirements?
A. Deploy the applications in eu-central-1. Extend the company’s VPC from eu-central-1 to an edge
location in Amazon CloudFront.
B. Deploy the applications in AWS Local Zones by extending the company’s VPC from eu-central-1 to
the chosen Local Zone.
C. Deploy the applications in eu-central-1. Extend the company’s VPC from eu-central-1 to the
regional edge caches in Amazon CloudFront.
D. Deploy the applications in AWS Wavelength Zones by extending the company’s VPC from eu-
central-1 to the chosen Wavelength Zone.
B. Deploy the applications in AWS Local Zones by extending the company’s VPC from eu-central-1 to
the chosen Local Zone.
Explanation:
AWS Local Zones are a type of AWS infrastructure deployment that place compute, storage,
database, and other select services closer to large population, industry, and IT centers, enabling
you to deliver applications that require single-digit millisecond latency to end-users.
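A hedged sketch of extending the VPC into a Local Zone with boto3 (the Hamburg zone group name is illustrative; available Local Zones vary, and the account must opt in first):

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")

# Opt the account in to the Local Zone group.
ec2.modify_availability_zone_group(
    GroupName="eu-central-1-ham-1", OptInStatus="opted-in"
)

# Create a subnet for the existing VPC inside the Local Zone; instances
# launched there run close to the users while remaining part of the
# eu-central-1 VPC.
ec2.create_subnet(
    VpcId="vpc-0123",
    CidrBlock="10.0.128.0/24",
    AvailabilityZone="eu-central-1-ham-1a",
)
```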
QUESTION 782
A company is migrating its multi-tier on-premises application to AWS. The application consists of
a single-node MySQL database and a multi-node web tier. The company must minimize changes
to the application during the migration. The company wants to improve application resiliency after
the migration.
Which combination of steps will meet these requirements? (Choose two.)
A. Migrate the web tier to Amazon EC2 instances in an Auto Scaling group behind an Application
Load Balancer.
B. Migrate the database to Amazon EC2 instances in an Auto Scaling group behind a Network Load
Balancer.
C. Migrate the database to an Amazon RDS Multi-AZ deployment.
D. Migrate the web tier to an AWS Lambda function.
E. Migrate the database to an Amazon DynamoDB table.
A. Migrate the web tier to Amazon EC2 instances in an Auto Scaling group behind an Application
Load Balancer.
C. Migrate the database to an Amazon RDS Multi-AZ deployment.
Explanation:
Web Tier Migration (Option A): Migrating the web tier to Amazon EC2 instances in an Auto
Scaling group behind an Application Load Balancer (ALB) provides horizontal scalability,
automatic scaling, and improved resiliency. Auto Scaling helps in managing and maintaining the
desired number of EC2 instances based on demand, and the ALB distributes incoming traffic
across multiple instances.
Database Migration to Amazon RDS Multi-AZ (Option C): Migrating the database to Amazon RDS
in a Multi-AZ deployment provides high availability and automatic failover. In a Multi-AZ
deployment, Amazon RDS maintains a standby replica in a different Availability Zone, and in the
event of a failure, it automatically promotes the replica to the primary instance. This enhances the
resiliency of the database.
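For the database half, the target is created with Multi-AZ enabled and the data is moved with a migration tool such as AWS DMS or a native dump/restore. A boto3 sketch with hypothetical identifiers and sizing:

```python
import boto3

rds = boto3.client("rds")

# MultiAZ=True keeps a synchronous standby in another AZ with automatic
# failover, improving resiliency without application code changes.
rds.create_db_instance(
    DBInstanceIdentifier="app-mysql",
    Engine="mysql",
    DBInstanceClass="db.m6g.large",
    AllocatedStorage=100,
    MultiAZ=True,
    MasterUsername="admin",
    ManageMasterUserPassword=True,  # credentials kept in Secrets Manager
)
```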
QUESTION 781
A company needs a solution to enforce data encryption at rest on Amazon EC2 instances. The
solution must automatically identify noncompliant resources and enforce compliance policies on
findings.
Which solution will meet these requirements with the LEAST administrative overhead?
A. Use an IAM policy that allows users to create only encrypted Amazon Elastic Block Store (Amazon
EBS) volumes. Use AWS Config and AWS Systems Manager to automate the detection and
remediation of unencrypted EBS volumes.
B. Use AWS Key Management Service (AWS KMS) to manage access to encrypted Amazon Elastic
Block Store (Amazon EBS) volumes. Use AWS Lambda and Amazon EventBridge to automate the
detection and remediation of unencrypted EBS volumes.
C. Use Amazon Macie to detect unencrypted Amazon Elastic Block Store (Amazon EBS) volumes. Use AWS Systems Manager Automation rules to automatically encrypt existing and new EBS
volumes.
D. Use Amazon Inspector to detect unencrypted Amazon Elastic Block Store (Amazon EBS) volumes.
Use AWS Systems Manager Automation rules to automatically encrypt existing and new EBS
volumes.
A. Use an IAM policy that allows users to create only encrypted Amazon Elastic Block Store (Amazon
EBS) volumes. Use AWS Config and AWS Systems Manager to automate the detection and
remediation of unencrypted EBS volumes.
Explanation:
By creating an IAM policy that allows users to create only encrypted EBS volumes, you
proactively prevent the creation of unencrypted volumes. Using AWS Config, you can set up rules
to detect noncompliant resources, and AWS Systems Manager Automation can be used for
automated remediation. This approach provides a proactive and automated solution.
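The preventive half of answer A can be expressed with the ec2:Encrypted condition key; a hedged sketch of such a policy in boto3 (policy name is hypothetical, and the detection half would pair the encrypted-volumes AWS Config rule with an SSM Automation remediation):

```python
import boto3, json

# Deny creation of unencrypted EBS volumes; a deny-unless-encrypted
# statement is a common way to express "allow only encrypted volumes".
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnencryptedVolumes",
        "Effect": "Deny",
        "Action": "ec2:CreateVolume",
        "Resource": "*",
        "Condition": {"Bool": {"ec2:Encrypted": "false"}},
    }],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="require-encrypted-ebs",
    PolicyDocument=json.dumps(policy_document),
)
```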
QUESTION 780
A company uses Amazon EC2 instances and stores data on Amazon Elastic Block Store
(Amazon EBS) volumes. The company must ensure that all data is encrypted at rest by using
AWS Key Management Service (AWS KMS). The company must be able to control rotation of the
encryption keys.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create a customer managed key. Use the key to encrypt the EBS volumes.
B. Use an AWS managed key to encrypt the EBS volumes. Use the key to configure automatic key
rotation.
C. Create an external KMS key with imported key material. Use the key to encrypt the EBS volumes.
D. Use an AWS owned key to encrypt the EBS volumes.
A. Create a customer managed key. Use the key to encrypt the EBS volumes.
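A short boto3 sketch of the answer, with the optional extra step of making the key the account default for EBS encryption (description is hypothetical):

```python
import boto3

kms = boto3.client("kms")
ec2 = boto3.client("ec2")

# A customer managed key keeps rotation under the company's control.
key = kms.create_key(Description="EBS encryption key")
key_id = key["KeyMetadata"]["KeyId"]

# Opt in to automatic annual rotation.
kms.enable_key_rotation(KeyId=key_id)

# Encrypt all new EBS volumes with this key by default.
ec2.enable_ebs_encryption_by_default()
ec2.modify_ebs_default_kms_key_id(KmsKeyId=key_id)
```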
QUESTION 779
A solutions architect needs to copy files from an Amazon S3 bucket to an Amazon Elastic File
System (Amazon EFS) file system and another S3 bucket. The files must be copied continuously.
New files are added to the original S3 bucket consistently. The copied files should be overwritten
only if the source file changes.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create an AWS DataSync location for both the destination S3 bucket and the EFS file system.
Create a task for the destination S3 bucket and the EFS file system. Set the transfer mode to
transfer only data that has changed.
B. Create an AWS Lambda function. Mount the file system to the function. Set up an S3 event
notification to invoke the function when files are created and changed in Amazon S3. Configure the
function to copy files to the file system and the destination S3 bucket.
C. Create an AWS DataSync location for both the destination S3 bucket and the EFS file system.
Create a task for the destination S3 bucket and the EFS file system. Set the transfer mode to
transfer all data.
D. Launch an Amazon EC2 instance in the same VPC as the file system. Mount the file system.
Create a script to routinely synchronize all objects that changed in the origin S3 bucket to the
destination S3 bucket and the mounted file system.
A. Create an AWS DataSync location for both the destination S3 bucket and the EFS file system.
Create a task for the destination S3 bucket and the EFS file system. Set the transfer mode to
transfer only data that has changed.
Explanation:
AWS DataSync is designed for efficient and reliable copying of data between different storage
solutions. By setting up an AWS DataSync task with the transfer mode set to transfer only data
that has changed, you ensure that only the new or modified files are copied. This minimizes data
transfer and operational overhead.
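A hedged sketch of the S3-to-S3 task in boto3 (bucket ARNs, roles, and the hourly schedule are hypothetical; a second task pointing at an EFS location created with create_location_efs covers the file system destination):

```python
import boto3

datasync = boto3.client("datasync")

src = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::source-bucket",
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::111122223333:role/datasync-s3"},
)
dst = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::destination-bucket",
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::111122223333:role/datasync-s3"},
)

datasync.create_task(
    SourceLocationArn=src["LocationArn"],
    DestinationLocationArn=dst["LocationArn"],
    Name="s3-to-s3-changed-only",
    # TransferMode=CHANGED copies only files whose content or metadata
    # differ from the destination, matching the overwrite-on-change rule.
    Options={"TransferMode": "CHANGED"},
    # An hourly schedule keeps the copies near-continuous.
    Schedule={"ScheduleExpression": "rate(1 hour)"},
)
```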
QUESTION 778
A company wants to back up its on-premises virtual machines (VMs) to AWS. The company’s
backup solution exports on-premises backups to an Amazon S3 bucket as objects. The S3
backups must be retained for 30 days and must be automatically deleted after 30 days.
Which combination of steps will meet these requirements? (Choose three.)
A. Create an S3 bucket that has S3 Object Lock enabled.
B. Create an S3 bucket that has object versioning enabled.
C. Configure a default retention period of 30 days for the objects.
D. Configure an S3 Lifecycle policy to protect the objects for 30 days.
E. Configure an S3 Lifecycle policy to expire the objects after 30 days.
F. Configure the backup solution to tag the objects with a 30-day retention period.
A. Create an S3 bucket that has S3 Object Lock enabled.
C. Configure a default retention period of 30 days for the objects.
E. Configure an S3 Lifecycle policy to expire the objects after 30 days.
Explanation:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/batch-ops-retention-date.html
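A sketch of steps C and E in boto3, assuming a bucket that was created with Object Lock enabled (the bucket name and COMPLIANCE mode choice are illustrative; Object Lock also requires versioning on the bucket):

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "vm-backups-example"

# Default retention: objects cannot be deleted or overwritten for 30 days.
s3.put_object_lock_configuration(
    Bucket=BUCKET,
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
    },
)

# Lifecycle expiration: objects are deleted automatically after 30 days.
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [{
            "ID": "expire-after-30-days",
            "Status": "Enabled",
            "Filter": {},
            "Expiration": {"Days": 30},
        }]
    },
)
```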
QUESTION 777
A company stores sensitive data in Amazon S3. A solutions architect needs to create an
encryption solution. The company needs to fully control the ability of users to create, rotate, and
disable encryption keys with minimal effort for any data that must be encrypted.
Which solution will meet these requirements?
A. Use default server-side encryption with Amazon S3 managed encryption keys (SSE-S3) to store
the sensitive data.
B. Create a customer managed key by using AWS Key Management Service (AWS KMS). Use the
new key to encrypt the S3 objects by using server-side encryption with AWS KMS keys (SSE-
KMS).
C. Create an AWS managed key by using AWS Key Management Service (AWS KMS). Use the new
key to encrypt the S3 objects by using server-side encryption with AWS KMS keys (SSE-KMS).
D. Download S3 objects to an Amazon EC2 instance. Encrypt the objects by using customer
managed keys. Upload the encrypted objects back into Amazon S3.
B. Create a customer managed key by using AWS Key Management Service (AWS KMS). Use the
new key to encrypt the S3 objects by using server-side encryption with AWS KMS keys (SSE-
KMS).
Explanation:
This option allows you to create a customer managed key using AWS KMS. With a customer
managed key, you have full control over key lifecycle management, including the ability to create,
rotate, and disable keys with minimal effort. SSE-KMS also integrates with AWS Identity and
Access Management (IAM) for fine-grained access control.
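A boto3 sketch of the lifecycle controls the company keeps, plus default SSE-KMS on the bucket (bucket name and description are hypothetical):

```python
import boto3

kms = boto3.client("kms")
s3 = boto3.client("s3")

key_id = kms.create_key(Description="sensitive-data key")["KeyMetadata"]["KeyId"]
kms.enable_key_rotation(KeyId=key_id)   # rotate: automatic annual rotation
# kms.disable_key(KeyId=key_id)         # disable: immediate kill switch

# Encrypt new objects with the customer managed key by default (SSE-KMS).
s3.put_bucket_encryption(
    Bucket="sensitive-data-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": key_id,
            },
            "BucketKeyEnabled": True,  # reduces KMS request costs
        }]
    },
)
```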
QUESTION 776
A company is developing an application that will run on a production Amazon Elastic Kubernetes
Service (Amazon EKS) cluster. The EKS cluster has managed node groups that are provisioned
with On-Demand Instances.
The company needs a dedicated EKS cluster for development work. The company will use the
development cluster infrequently to test the resiliency of the application. The EKS cluster must
manage all the nodes.
Which solution will meet these requirements MOST cost-effectively?
A. Create a managed node group that contains only Spot Instances.
B. Create two managed node groups. Provision one node group with On-Demand Instances.
Provision the second node group with Spot Instances.
C. Create an Auto Scaling group that has a launch configuration that uses Spot Instances. Configure
the user data to add the nodes to the EKS cluster.
D. Create a managed node group that contains only On-Demand Instances.
A. Create a managed node group that contains only Spot Instances.
https://aws.amazon.com/blogs/containers/amazon-eks-now-supports-provisioning-and-managing-ec2-spot-instances-in-managed-node-groups/
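A hedged boto3 sketch (cluster name, role ARN, subnets, and instance types are hypothetical); capacityType=SPOT gives a fully managed node group, with EKS handling Spot interruption draining, at the lowest cost for an infrequently used cluster:

```python
import boto3

eks = boto3.client("eks")

eks.create_nodegroup(
    clusterName="dev-cluster",
    nodegroupName="dev-spot-nodes",
    capacityType="SPOT",
    instanceTypes=["m5.large", "m5a.large", "m4.large"],  # diversify for Spot
    subnets=["subnet-0aaa", "subnet-0bbb"],
    nodeRole="arn:aws:iam::111122223333:role/eks-node-role",
    scalingConfig={"minSize": 0, "maxSize": 3, "desiredSize": 1},
)
```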
QUESTION 775
A company’s application uses Network Load Balancers, Auto Scaling groups, Amazon EC2
instances, and databases that are deployed in an Amazon VPC. The company wants to capture
information about traffic to and from the network interfaces in near real time in its Amazon VPC.
The company wants to send the information to Amazon OpenSearch Service for analysis.
Which solution will meet these requirements?
A. Create a log group in Amazon CloudWatch Logs. Configure VPC Flow Logs to send the log data to
the log group. Use Amazon Kinesis Data Streams to stream the logs from the log group to
OpenSearch Service.
B. Create a log group in Amazon CloudWatch Logs. Configure VPC Flow Logs to send the log data to
the log group. Use Amazon Kinesis Data Firehose to stream the logs from the log group to
OpenSearch Service.
C. Create a trail in AWS CloudTrail. Configure VPC Flow Logs to send the log data to the trail. Use
Amazon Kinesis Data Streams to stream the logs from the trail to OpenSearch Service.
D. Create a trail in AWS CloudTrail. Configure VPC Flow Logs to send the log data to the trail. Use
Amazon Kinesis Data Firehose to stream the logs from the trail to OpenSearch Service.
B. Create a log group in Amazon CloudWatch Logs. Configure VPC Flow Logs to send the log data to
the log group. Use Amazon Kinesis Data Firehose to stream the logs from the log group to
OpenSearch Service.
Explanation:
VPC Flow Logs capture information about the IP traffic going to and from network interfaces in a
VPC. By configuring VPC Flow Logs to send the log data to a log group in Amazon CloudWatch
Logs, you can then use Amazon Kinesis Data Firehose to stream the logs from the log group to
Amazon OpenSearch Service for analysis. This approach provides near real-time streaming of
logs to the analytics service.
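A sketch of the two wiring steps in boto3 (log group, roles, VPC ID, and the Firehose stream, whose destination is the OpenSearch domain, are all hypothetical):

```python
import boto3

ec2 = boto3.client("ec2")
logs = boto3.client("logs")

# 1. Flow logs for the VPC's network interfaces into a CloudWatch log group.
ec2.create_flow_logs(
    ResourceIds=["vpc-0123"],
    ResourceType="VPC",
    TrafficType="ALL",
    LogDestinationType="cloud-watch-logs",
    LogGroupName="/vpc/flow-logs",
    DeliverLogsPermissionArn="arn:aws:iam::111122223333:role/flow-logs-to-cw",
)

# 2. A subscription filter streams every log event to the Kinesis Data
#    Firehose delivery stream that feeds OpenSearch Service.
logs.put_subscription_filter(
    logGroupName="/vpc/flow-logs",
    filterName="to-opensearch",
    filterPattern="",  # empty pattern forwards all events
    destinationArn="arn:aws:firehose:us-east-1:111122223333:deliverystream/vpc-flow-to-opensearch",
    roleArn="arn:aws:iam::111122223333:role/cwlogs-to-firehose",
)
```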
QUESTION 774
A company uses Amazon EC2 instances and Amazon Elastic Block Store (Amazon EBS)
volumes to run an application. The company creates one snapshot of each EBS volume every
day to meet compliance requirements. The company wants to implement an architecture that
prevents the accidental deletion of EBS volume snapshots. The solution must not change the
administrative rights of the storage administrator user.
Which solution will meet these requirements with the LEAST administrative effort?
A. Create an IAM role that has permission to delete snapshots. Attach the role to a new EC2
instance. Use the AWS CLI from the new EC2 instance to delete snapshots.
B. Create an IAM policy that denies snapshot deletion. Attach the policy to the storage administrator
user.
C. Add tags to the snapshots. Create retention rules in Recycle Bin for EBS snapshots that have the
tags.
D. Lock the EBS snapshots to prevent deletion.
D. Lock the EBS snapshots to prevent deletion.
The EBS snapshot lock feature prevents accidental or malicious deletion of snapshots. A lock
is set per snapshot, providing a straightforward and effective way to meet the requirements
without changing the administrative rights of the storage administrator user.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-snapshot-lock.html
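Locking a snapshot is a single call; a sketch with a hypothetical snapshot ID (the LockSnapshot API was introduced in late 2023, so this assumes a recent boto3 version):

```python
import boto3

ec2 = boto3.client("ec2")

# Governance mode blocks deletion for the lock duration but still lets
# suitably privileged users manage the lock itself.
ec2.lock_snapshot(
    SnapshotId="snap-0123456789abcdef0",
    LockMode="governance",
    LockDuration=30,  # days
)
```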
QUESTION 773
A company runs a web application on Amazon EC2 instances in an Auto Scaling group. The
application uses a database that runs on an Amazon RDS for PostgreSQL DB instance. The
application performs slowly when traffic increases. The database experiences a heavy read load
during periods of high traffic.
Which actions should a solutions architect take to resolve these performance issues? (Choose
two.)
A. Turn on auto scaling for the DB instance.
B. Create a read replica for the DB instance. Configure the application to send read traffic to the read
replica.
C. Convert the DB instance to a Multi-AZ DB instance deployment. Configure the application to send
read traffic to the standby DB instance.
D. Create an Amazon ElastiCache cluster. Configure the application to cache query results in the
ElastiCache cluster.
E. Configure the Auto Scaling group subnets to ensure that the EC2 instances are provisioned in the
same Availability Zone as the DB instance.
B. Create a read replica for the DB instance. Configure the application to send read traffic to the read
replica.
D. Create an Amazon ElastiCache cluster. Configure the application to cache query results in the
ElastiCache cluster.
Explanation:
By creating a read replica, you offload read traffic from the primary DB instance to the replica,
distributing the load and improving overall performance during periods of heavy read traffic.
Amazon ElastiCache can be used to cache frequently accessed data, reducing the load on the
database. This is particularly effective for read-heavy workloads, as it allows the application to
retrieve data from the cache rather than making repeated database queries.
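Creating the read replica is one call; a boto3 sketch with hypothetical identifiers:

```python
import boto3

rds = boto3.client("rds")

rds.create_db_instance_read_replica(
    DBInstanceIdentifier="webapp-postgres-replica",
    SourceDBInstanceIdentifier="webapp-postgres",
)
# The application then sends SELECT traffic to the replica's endpoint
# and writes to the primary's endpoint.
```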
QUESTION 772
A company runs an SMB file server in its data center. The file server stores large files that the
company frequently accesses for up to 7 days after the file creation date. After 7 days, the
company needs to be able to access the files with a maximum retrieval time of 24 hours.
Which solution will meet these requirements?
A. Use AWS DataSync to copy data that is older than 7 days from the SMB file server to AWS.
B. Create an Amazon S3 File Gateway to increase the company’s storage space. Create an S3
Lifecycle policy to transition the data to S3 Glacier Deep Archive after 7 days.
C. Create an Amazon FSx File Gateway to increase the company’s storage space. Create an
Amazon S3 Lifecycle policy to transition the data after 7 days.
D. Configure access to Amazon S3 for each user. Create an S3 Lifecycle policy to transition the data
to S3 Glacier Flexible Retrieval after 7 days.
B. Create an Amazon S3 File Gateway to increase the company’s storage space. Create an S3
Lifecycle policy to transition the data to S3 Glacier Deep Archive after 7 days.
Explanation:
S3 File Gateway supports SMB, and S3 Glacier Deep Archive can retrieve data within 12 hours, which meets the 24-hour requirement.
https://aws.amazon.com/storagegateway/file/s3/
https://docs.aws.amazon.com/prescriptive-guidance/latest/backup-recovery/amazon-s3-glacier.html
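The lifecycle rule behind the File Gateway's bucket is a short boto3 call (bucket name is hypothetical):

```python
import boto3

s3 = boto3.client("s3")

# Files land in S3 through the S3 File Gateway's SMB share; this rule moves
# them to Glacier Deep Archive 7 days after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="smb-file-share-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "deep-archive-after-7-days",
            "Status": "Enabled",
            "Filter": {},
            "Transitions": [{"Days": 7, "StorageClass": "DEEP_ARCHIVE"}],
        }]
    },
)
```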
QUESTION 771
A marketing company receives a large amount of new clickstream data in Amazon S3 from a
marketing campaign. The company needs to analyze the clickstream data in Amazon S3 quickly.
Then the company needs to determine whether to process the data further in the data pipeline.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create external tables in a Spark catalog. Configure jobs in AWS Glue to query the data.
B. Configure an AWS Glue crawler to crawl the data. Configure Amazon Athena to query the data.
C. Create external tables in a Hive metastore. Configure Spark jobs in Amazon EMR to query the
data.
D. Configure an AWS Glue crawler to crawl the data. Configure Amazon Kinesis Data Analytics to
use SQL to query the data.
B. Configure an AWS Glue crawler to crawl the data. Configure Amazon Athena to query the data.
Explanation:
AWS Glue is a fully managed extract, transform, and load (ETL) service, and Athena is a
serverless query service that allows you to analyze data directly in Amazon S3 using SQL
queries. By configuring an AWS Glue crawler to crawl the data, you can create a schema for the
data, and then use Athena to query the data directly without the need to load it into a separate
database. This minimizes operational overhead.
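A hedged boto3 sketch of crawl-then-query (role, database, table, bucket paths, and the sample SQL are all hypothetical):

```python
import boto3

glue = boto3.client("glue")
athena = boto3.client("athena")

# The crawler infers the clickstream schema and registers it in the
# Glue Data Catalog.
glue.create_crawler(
    Name="clickstream-crawler",
    Role="arn:aws:iam::111122223333:role/glue-crawler-role",
    DatabaseName="marketing",
    Targets={"S3Targets": [{"Path": "s3://clickstream-bucket/raw/"}]},
)
glue.start_crawler(Name="clickstream-crawler")

# Athena then queries the cataloged table in place, serverlessly.
athena.start_query_execution(
    QueryString="SELECT page, COUNT(*) AS hits FROM clickstream GROUP BY page",
    QueryExecutionContext={"Database": "marketing"},
    ResultConfiguration={"OutputLocation": "s3://clickstream-bucket/athena-results/"},
)
```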
QUESTION 770
A company runs its applications on Amazon EC2 instances. The company performs periodic
financial assessments of its AWS costs. The company recently identified unusual spending.
The company needs a solution to prevent unusual spending. The solution must monitor costs and
notify responsible stakeholders in the event of unusual spending.
Which solution will meet these requirements?
A. Use an AWS Budgets template to create a zero spend budget.
B. Create an AWS Cost Anomaly Detection monitor in the AWS Billing and Cost Management
console.
C. Create AWS Pricing Calculator estimates for the current running workload pricing details.
D. Use Amazon CloudWatch to monitor costs and to identify unusual spending.
B. Create an AWS Cost Anomaly Detection monitor in the AWS Billing and Cost Management
console.
Explanation:
AWS Cost Anomaly Detection is designed to automatically detect unusual spending patterns
based on machine learning algorithms. It can identify anomalies and send notifications when it
detects unexpected changes in spending. This aligns well with the requirement to prevent
unusual spending and notify stakeholders.
https://aws.amazon.com/aws-cost-management/aws-cost-anomaly-detection/
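A sketch of the monitor and its notification subscription via the Cost Explorer API (names, email address, and the $100 threshold are hypothetical):

```python
import boto3

ce = boto3.client("ce")  # Cost Anomaly Detection lives in the Cost Explorer API

monitor = ce.create_anomaly_monitor(
    AnomalyMonitor={
        "MonitorName": "service-spend-monitor",
        "MonitorType": "DIMENSIONAL",
        "MonitorDimension": "SERVICE",  # one monitor across all AWS services
    }
)

# Email the stakeholders whenever an anomaly's cost impact exceeds $100.
ce.create_anomaly_subscription(
    AnomalySubscription={
        "SubscriptionName": "notify-finance",
        "MonitorArnList": [monitor["MonitorArn"]],
        "Subscribers": [{"Type": "EMAIL", "Address": "finance@example.com"}],
        "Frequency": "DAILY",
        "Threshold": 100.0,
    }
)
```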
QUESTION 769
A company performs tests on an application that uses an Amazon DynamoDB table. The tests
run for 4 hours once a week. The company knows how many read and write operations the
application performs to the table each second during the tests. The company does not currently
use DynamoDB for any other use case. A solutions architect needs to optimize the costs for the
table.
Which solution will meet these requirements?
A. Choose on-demand mode. Update the read and write capacity units appropriately.
B. Choose provisioned mode. Update the read and write capacity units appropriately.
C. Purchase DynamoDB reserved capacity for a 1-year term.
D. Purchase DynamoDB reserved capacity for a 3-year term.
B. Choose provisioned mode. Update the read and write capacity units appropriately.
Explanation:
With provisioned capacity mode, you specify the number of reads and writes per second that you
expect your application to require, and you are billed based on that. Furthermore, if you can
forecast your capacity requirements, you can also reserve a portion of DynamoDB provisioned
capacity and optimize your costs even further.
https://docs.aws.amazon.com/wellarchitected/latest/serverless-applications-lens/capacity.html
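Switching the table to provisioned mode is one call; a sketch with hypothetical capacity figures (set them from the measured per-second rates, then scale down to a minimal floor between the weekly test windows):

```python
import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.update_table(
    TableName="test-results",
    BillingMode="PROVISIONED",
    ProvisionedThroughput={
        "ReadCapacityUnits": 500,
        "WriteCapacityUnits": 200,
    },
)
```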
QUESTION 768
A company runs its databases on Amazon RDS for PostgreSQL. The company wants a secure
solution to manage the master user password by rotating the password every 30 days.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use Amazon EventBridge to schedule a custom AWS Lambda function to rotate the password
every 30 days.
B. Use the modify-db-instance command in the AWS CLI to change the password.
C. Integrate AWS Secrets Manager with Amazon RDS for PostgreSQL to automate password
rotation.
D. Integrate AWS Systems Manager Parameter Store with Amazon RDS for PostgreSQL to
automate password rotation.
C. Integrate AWS Secrets Manager with Amazon RDS for PostgreSQL to automate password
rotation.
Explanation:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/rds-secrets-manager.html
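Two hedged variants in boto3 (identifiers, secret names, and the rotation Lambda ARN are hypothetical). RDS-managed master passwords rotate on a default schedule; for an explicit 30-day schedule, a rotation rule is attached to the secret instead:

```python
import boto3

rds = boto3.client("rds")

# Variant 1: let RDS manage the master password in Secrets Manager.
rds.modify_db_instance(
    DBInstanceIdentifier="app-postgres",
    ManageMasterUserPassword=True,
    ApplyImmediately=True,
)

# Variant 2: rotate a self-managed RDS secret every 30 days using a
# standard Secrets Manager rotation function.
secretsmanager = boto3.client("secretsmanager")
secretsmanager.rotate_secret(
    SecretId="prod/app-postgres/master",
    RotationLambdaARN="arn:aws:lambda:us-east-1:111122223333:function:rds-postgres-rotation",
    RotationRules={"AutomaticallyAfterDays": 30},
)
```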
QUESTION 767
A company created a new organization in AWS Organizations. The organization has multiple
accounts for the company’s development teams. The development team members use AWS IAM
Identity Center (AWS Single Sign-On) to access the accounts. For each of the company’s
applications, the development teams must use a predefined application name to tag resources
that are created.
A solutions architect needs to design a solution that gives the development team the ability to
create resources only if the application name tag has an approved value.
Which solution will meet these requirements?
A. Create an IAM group that has a conditional Allow policy that requires the application name tag to
be specified for resources to be created.
B. Create a cross-account role that has a Deny policy for any resource that has the application name
tag.
C. Create a resource group in AWS Resource Groups to validate that the tags are applied to all
resources in all accounts.
D. Create a tag policy in Organizations that has a list of allowed application names.
D. Create a tag policy in Organizations that has a list of allowed application names.
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_tag-policies.html
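A sketch of a tag policy in boto3 (the tag key, the approved values, the enforced resource types, and the root ID are all hypothetical):

```python
import boto3, json

org = boto3.client("organizations")

# Only approved application names are allowed; enforcement applies to the
# listed resource types.
tag_policy = {
    "tags": {
        "AppName": {
            "tag_key": {"@@assign": "AppName"},
            "tag_value": {"@@assign": ["inventory", "checkout", "catalog"]},
            "enforced_for": {"@@assign": ["ec2:instance", "ec2:volume"]},
        }
    }
}

policy = org.create_policy(
    Name="approved-application-names",
    Description="Allow only approved AppName tag values",
    Type="TAG_POLICY",
    Content=json.dumps(tag_policy),
)

# Attach at the organization root (or an OU) so it applies to all accounts.
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="r-hypo",
)
```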
QUESTION 766
A company is moving its data and applications to AWS during a multiyear migration project. The
company wants to securely access data on Amazon S3 from the company’s AWS Region and
from the company’s on-premises location. The data must not traverse the internet. The company
has established an AWS Direct Connect connection between its Region and its on-premises
location.
Which solution will meet these requirements?
A. Create gateway endpoints for Amazon S3. Use the gateway endpoints to securely access the
data from the Region and the on-premises location.
B. Create a gateway in AWS Transit Gateway to access Amazon S3 securely from the Region and
the on-premises location.
C. Create interface endpoints for Amazon S3. Use the interface endpoints to securely access the
data from the Region and the on-premises location.
D. Use an AWS Key Management Service (AWS KMS) key to access the data securely from the
Region and the on-premises location.
C. Create interface endpoints for Amazon S3. Use the interface endpoints to securely access the
data from the Region and the on-premises location.
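Unlike a gateway endpoint, an interface endpoint's private IP addresses are reachable from on premises over Direct Connect. A boto3 sketch with hypothetical IDs and Region:

```python
import boto3

ec2 = boto3.client("ec2")

ec2.create_vpc_endpoint(
    VpcId="vpc-0123",
    VpcEndpointType="Interface",
    ServiceName="com.amazonaws.us-east-1.s3",
    SubnetIds=["subnet-0aaa", "subnet-0bbb"],
    SecurityGroupIds=["sg-0ccc"],
    # S3 interface endpoints are addressed via endpoint-specific DNS names.
    PrivateDnsEnabled=False,
)
```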
QUESTION 765
A startup company is hosting a website for its customers on an Amazon EC2 instance. The
website consists of a stateless Python application and a MySQL database. The website serves
only a small amount of traffic. The company is concerned about the reliability of the instance and
needs to migrate to a highly available architecture. The company cannot modify the application
code.
Which combination of actions should a solutions architect take to achieve high availability for the
website? (Choose two.)
A. Provision an internet gateway in each Availability Zone in use.
B. Migrate the database to an Amazon RDS for MySQL Multi-AZ DB instance.
C. Migrate the database to Amazon DynamoDB, and enable DynamoDB auto scaling.
D. Use AWS DataSync to synchronize the database data across multiple EC2 instances.
E. Create an Application Load Balancer to distribute traffic to an Auto Scaling group of EC2
instances that are distributed across two Availability Zones.
B. Migrate the database to an Amazon RDS for MySQL Multi-AZ DB instance.
E. Create an Application Load Balancer to distribute traffic to an Auto Scaling group of EC2
instances that are distributed across two Availability Zones.
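A sketch of the Auto Scaling half in boto3 (launch template ID, subnets, and the target group ARN are hypothetical); the ALB and target group are created as in question 796 above:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# An ASG spanning two AZs, registered with the ALB's target group so the
# load balancer only routes to healthy instances.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="website-asg",
    LaunchTemplate={"LaunchTemplateId": "lt-0123", "Version": "$Latest"},
    MinSize=2,
    MaxSize=4,
    VPCZoneIdentifier="subnet-az-a,subnet-az-b",
    TargetGroupARNs=["arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web/abc"],
    HealthCheckType="ELB",
)
```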
QUESTION 764
A company has customers located across the world. The company wants to use automation to
secure its systems and network infrastructure. The company’s security team must be able to track
and audit all incremental changes to the infrastructure.
Which solution will meet these requirements?
A. Use AWS Organizations to set up the infrastructure. Use AWS Config to track changes.
B. Use AWS CloudFormation to set up the infrastructure. Use AWS Config to track changes.
C. Use AWS Organizations to set up the infrastructure. Use AWS Service Catalog to track changes.
D. Use AWS CloudFormation to set up the infrastructure. Use AWS Service Catalog to track
changes.
B. Use AWS CloudFormation to set up the infrastructure. Use AWS Config to track changes.
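The infrastructure itself would be declared in CloudFormation templates; a hedged boto3 sketch of turning on the change-tracking half with AWS Config (role ARN and bucket name are hypothetical):

```python
import boto3

config = boto3.client("config")

# Record configuration changes for all supported resource types.
config.put_configuration_recorder(
    ConfigurationRecorder={
        "name": "default",
        "roleARN": "arn:aws:iam::111122223333:role/aws-config-role",
        "recordingGroup": {"allSupported": True, "includeGlobalResourceTypes": True},
    }
)

# Deliver configuration history and snapshots to S3 for auditing.
config.put_delivery_channel(
    DeliveryChannel={"name": "default", "s3BucketName": "config-audit-bucket"}
)
config.start_configuration_recorder(ConfigurationRecorderName="default")
```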
QUESTION 763
A company has a web application that runs on premises. The application experiences latency
issues during peak hours. The latency issues occur twice each month. At the start of a latency
issue, the application’s CPU utilization immediately increases to 10 times its normal amount.
The company wants to migrate the application to AWS to improve latency. The company also
wants to scale the application automatically when application demand increases. The company
will use AWS Elastic Beanstalk for application deployment.
Which solution will meet these requirements?
A. Configure an Elastic Beanstalk environment to use burstable performance instances in unlimited
mode. Configure the environment to scale based on requests.
B. Configure an Elastic Beanstalk environment to use compute optimized instances. Configure the
environment to scale based on requests.
C. Configure an Elastic Beanstalk environment to use compute optimized instances. Configure the
environment to scale on a schedule.
D. Configure an Elastic Beanstalk environment to use burstable performance instances in unlimited
mode. Configure the environment to scale on predictive metrics.
D. Configure an Elastic Beanstalk environment to use burstable performance instances in unlimited
mode. Configure the environment to scale on predictive metrics.
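Elastic Beanstalk manages the environment's Auto Scaling group; purely as an illustration of the two underlying settings, here is a hedged sketch of the equivalent EC2/Auto Scaling API calls (all names are hypothetical):

```python
import boto3

ec2 = boto3.client("ec2")
autoscaling = boto3.client("autoscaling")

# Burstable instances in unlimited mode can absorb the sudden 10x CPU spike.
ec2.create_launch_template(
    LaunchTemplateName="web-burstable",
    LaunchTemplateData={
        "InstanceType": "t3.large",
        "CreditSpecification": {"CpuCredits": "unlimited"},
    },
)

# Predictive scaling learns the recurring pattern and provisions capacity
# ahead of the spike.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="predictive-cpu",
    PolicyType="PredictiveScaling",
    PredictiveScalingConfiguration={
        "MetricSpecifications": [{
            "TargetValue": 50.0,
            "PredefinedMetricPairSpecification": {
                "PredefinedMetricType": "ASGCPUUtilization"
            },
        }],
        "Mode": "ForecastAndScale",
    },
)
```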
QUESTION 762
A company is developing a new application on AWS. The application consists of an Amazon
Elastic Container Service (Amazon ECS) cluster, an Amazon S3 bucket that contains assets for
the application, and an Amazon RDS for MySQL database that contains the dataset for the
application. The dataset contains sensitive information. The company wants to ensure that only
the ECS cluster can access the data in the RDS for MySQL database and the data in the S3
bucket.
Which solution will meet these requirements?
A. Create a new AWS Key Management Service (AWS KMS) customer managed key to encrypt
both the S3 bucket and the RDS for MySQL database. Ensure that the KMS key policy includes
encrypt and decrypt permissions for the ECS task execution role.
B. Create an AWS Key Management Service (AWS KMS) AWS managed key to encrypt both the S3
bucket and the RDS for MySQL database. Ensure that the S3 bucket policy specifies the ECS
task execution role as a user.
C. Create an S3 bucket policy that restricts bucket access to the ECS task execution role. Create a
VPC endpoint for Amazon RDS for MySQL. Update the RDS for MySQL security group to allow
access from only the subnets that the ECS cluster will generate tasks in.
D. Create a VPC endpoint for Amazon RDS for MySQL. Update the RDS for MySQL security group
to allow access from only the subnets that the ECS cluster will generate tasks in. Create a VPC
endpoint for Amazon S3. Update the S3 bucket policy to allow access from only the S3 VPC
endpoint.
D. Create a VPC endpoint for Amazon RDS for MySQL. Update the RDS for MySQL security group
to allow access from only the subnets that the ECS cluster will generate tasks in. Create a VPC
endpoint for Amazon S3. Update the S3 bucket policy to allow access from only the S3 VPC
endpoint.
Explanation:
This is the most comprehensive solution: it uses VPC endpoints for both Amazon RDS and
Amazon S3, along with network-level controls, to restrict access to only the necessary
resources from the ECS cluster.
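The S3 half of the answer hinges on the aws:SourceVpce condition key; a hedged bucket-policy sketch (endpoint ID and bucket name are hypothetical, and such a deny also blocks console access from outside the VPC):

```python
import boto3, json

s3 = boto3.client("s3")

# Deny any request that does not arrive through the S3 VPC endpoint.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowOnlyFromVpcEndpoint",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": ["arn:aws:s3:::app-assets", "arn:aws:s3:::app-assets/*"],
        "Condition": {"StringNotEquals": {"aws:SourceVpce": "vpce-0123456789abcdef0"}},
    }],
}

s3.put_bucket_policy(Bucket="app-assets", Policy=json.dumps(policy))
```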
QUESTION 761
A company uses AWS Cost Explorer to monitor its AWS costs. The company notices that
Amazon Elastic Block Store (Amazon EBS) storage and snapshot costs increase every month.
However, the company does not purchase additional EBS storage every month. The company
wants to optimize monthly costs for its current storage usage.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use logs in Amazon CloudWatch Logs to monitor the storage utilization of Amazon EBS. Use
Amazon EBS Elastic Volumes to reduce the size of the EBS volumes.
B. Use a custom script to monitor space usage. Use Amazon EBS Elastic Volumes to reduce the
size of the EBS volumes.
C. Delete all expired and unused snapshots to reduce snapshot costs.
D. Delete all nonessential snapshots. Use Amazon Data Lifecycle Manager to create and manage
the snapshots according to the company’s snapshot policy requirements.
D. Delete all nonessential snapshots. Use Amazon Data Lifecycle Manager to create and manage
the snapshots according to the company’s snapshot policy requirements.
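A sketch of the Data Lifecycle Manager policy in boto3 (role ARN, target tag, schedule, and 30-snapshot retention are all hypothetical):

```python
import boto3

dlm = boto3.client("dlm")

# One daily snapshot per tagged volume, keeping only the most recent 30,
# so stale snapshots no longer accumulate cost.
dlm.create_lifecycle_policy(
    ExecutionRoleArn="arn:aws:iam::111122223333:role/AWSDataLifecycleManagerDefaultRole",
    Description="Daily EBS snapshots, 30-day retention",
    State="ENABLED",
    PolicyDetails={
        "ResourceTypes": ["VOLUME"],
        "TargetTags": [{"Key": "backup", "Value": "daily"}],
        "Schedules": [{
            "Name": "daily",
            "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["03:00"]},
            "RetainRule": {"Count": 30},
        }],
    },
)
```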