SAA L2P 601-700 v24.021 Flashcards
QUESTION 700
A company runs its critical database on an Amazon RDS for PostgreSQL DB instance. The company wants to migrate to Amazon Aurora PostgreSQL with minimal downtime and data loss.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create a DB snapshot of the RDS for PostgreSQL DB instance to populate a new Aurora
PostgreSQL DB cluster.
B. Create an Aurora read replica of the RDS for PostgreSQL DB instance. Promote the Aurora read
replica to a new Aurora PostgreSQL DB cluster.
C. Use data import from Amazon S3 to migrate the database to an Aurora PostgreSQL DB cluster.
D. Use the pg_dump utility to back up the RDS for PostgreSQL database. Restore the backup to a
new Aurora PostgreSQL DB cluster.
B. Create an Aurora read replica of the RDS for PostgreSQL DB instance. Promote the Aurora read
replica to a new Aurora PostgreSQL DB cluster.
Explanation:
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraPostgreSQL.Migrating.html
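A minimal boto3 sketch of this migration path (the source ARN and all identifiers are
hypothetical; the replica cluster also needs at least one DB instance before promotion):

import boto3

rds = boto3.client("rds")

# Create an Aurora PostgreSQL read replica cluster of the existing RDS instance.
rds.create_db_cluster(
    DBClusterIdentifier="aurora-replica",
    Engine="aurora-postgresql",
    ReplicationSourceIdentifier="arn:aws:rds:us-east-1:123456789012:db:source-postgres",
)
rds.create_db_instance(
    DBInstanceIdentifier="aurora-replica-1",
    DBInstanceClass="db.r6g.large",
    Engine="aurora-postgresql",
    DBClusterIdentifier="aurora-replica",
)

# Once replica lag reaches zero, promote the replica to a standalone cluster
# and repoint the application at its endpoint.
rds.promote_read_replica_db_cluster(DBClusterIdentifier="aurora-replica")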
QUESTION 699
A company is planning to migrate a TCP-based application into the company’s VPC. The
application is publicly accessible on a nonstandard TCP port through a hardware appliance in the
company’s data center. This public endpoint can process up to 3 million requests per second with
low latency. The company requires the same level of performance for the new public endpoint in
AWS.
What should a solutions architect recommend to meet this requirement?
A. Deploy a Network Load Balancer (NLB). Configure the NLB to be publicly accessible over the
TCP port that the application requires.
B. Deploy an Application Load Balancer (ALB). Configure the ALB to be publicly accessible over the
TCP port that the application requires.
C. Deploy an Amazon CloudFront distribution that listens on the TCP port that the application
requires. Use an Application Load Balancer as the origin.
D. Deploy an Amazon API Gateway API that is configured with the TCP port that the application
requires. Configure AWS Lambda functions with provisioned concurrency to process the
requests.
A. Deploy a Network Load Balancer (NLB). Configure the NLB to be publicly accessible over the
TCP port that the application requires.
Explanation:
The company requires the same level of performance for the new public endpoint in AWS.
A Network Load Balancer functions at the fourth layer of the Open Systems Interconnection (OSI)
model. It can handle millions of requests per second. After the load balancer receives a
connection request, it selects a target from the target group for the default rule. It attempts to
open a TCP connection to the selected target on the port specified in the listener configuration.
https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html
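As a rough boto3 sketch (subnet IDs, VPC ID, and port 8443 stand in for the application's
nonstandard TCP port):

import boto3

elbv2 = boto3.client("elbv2")

# Internet-facing NLB that forwards raw TCP on the application's port.
nlb = elbv2.create_load_balancer(
    Name="tcp-app-nlb",
    Type="network",
    Scheme="internet-facing",
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],
)["LoadBalancers"][0]

tg = elbv2.create_target_group(
    Name="tcp-app-targets",
    Protocol="TCP",
    Port=8443,
    VpcId="vpc-0123456789abcdef0",
    TargetType="instance",
)["TargetGroups"][0]

elbv2.create_listener(
    LoadBalancerArn=nlb["LoadBalancerArn"],
    Protocol="TCP",
    Port=8443,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
)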
QUESTION 698
A company wants to use Amazon Elastic Container Service (Amazon ECS) clusters and Amazon
RDS DB instances to build and run a payment processing application. The company will run the
application in its on-premises data center for compliance purposes.
A solutions architect wants to use AWS Outposts as part of the solution. The solutions architect is
working with the company’s operational team to build the application.
Which activities are the responsibility of the company’s operational team? (Choose three.)
A. Providing resilient power and network connectivity to the Outposts racks
B. Managing the virtualization hypervisor, storage systems, and the AWS services that run on
Outposts
C. Physical security and access controls of the data center environment
D. Availability of the Outposts infrastructure including the power supplies, servers, and networking
equipment within the Outposts racks
E. Physical maintenance of Outposts components
F. Providing extra capacity for Amazon ECS clusters to mitigate server failures and maintenance
events
A. Providing resilient power and network connectivity to the Outposts racks
C. Physical security and access controls of the data center environment
F. Providing extra capacity for Amazon ECS clusters to mitigate server failures and maintenance
events
Explanation:
https://docs.aws.amazon.com/whitepapers/latest/aws-outposts-high-availability-design/aws-outposts-high-availability-design.html
With Outposts, you are responsible for providing resilient power and network connectivity to the
Outpost racks to meet your availability requirements for workloads running on Outposts. You are
responsible for the physical security and access controls of the data center environment. You
must provide sufficient power, space, and cooling to keep the Outpost operational and network
connections to connect the Outpost back to the Region. Since Outpost capacity is finite and
determined by the size and number of racks AWS installs at your site, you must decide how much
EC2, EBS, and S3 on Outposts capacity you need to run your initial workloads, accommodate
future growth, and to provide extra capacity to mitigate server failures and maintenance events.
QUESTION 697
A research company uses on-premises devices to generate data for analysis. The company
wants to use the AWS Cloud to analyze the data. The devices generate .csv files and support
writing the data to an SMB file share. Company analysts must be able to use SQL commands to
query the data. The analysts will run queries periodically throughout the day.
Which combination of steps will meet these requirements MOST cost-effectively? (Choose three.)
A. Deploy an AWS Storage Gateway on premises in Amazon S3 File Gateway mode.
B. Deploy an AWS Storage Gateway on premises in Amazon FSx File Gateway mode.
C. Set up an AWS Glue crawler to create a table based on the data that is in Amazon S3.
D. Set up an Amazon EMR cluster with EMR File System (EMRFS) to query the data that is in
Amazon S3. Provide access to analysts.
E. Set up an Amazon Redshift cluster to query the data that is in Amazon S3. Provide access to
analysts.
F. Set up Amazon Athena to query the data that is in Amazon S3. Provide access to analysts.
A. Deploy an AWS Storage Gateway on premises in Amazon S3 File Gateway mode.
C. Set up an AWS Glue crawler to create a table based on the data that is in Amazon S3.
F. Set up Amazon Athena to query the data that is in Amazon S3. Provide access to analysts.
Explanation:
https://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-etl-format-csv-home.html
https://aws.amazon.com/blogs/aws/amazon-athena-interactive-sql-queries-for-data-in-amazon-s3/
https://aws.amazon.com/storagegateway/faqs/
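A hedged boto3 sketch of steps C and F (bucket, role, database, and crawler names are
hypothetical):

import boto3

glue = boto3.client("glue")
athena = boto3.client("athena")

# Crawl the .csv files that the S3 File Gateway writes to the bucket.
glue.create_crawler(
    Name="sensor-csv-crawler",
    Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",
    DatabaseName="sensor_data",
    Targets={"S3Targets": [{"Path": "s3://research-csv-bucket/devices/"}]},
)
glue.start_crawler(Name="sensor-csv-crawler")

# Analysts then query the crawled table with standard SQL through Athena.
athena.start_query_execution(
    QueryString="SELECT device_id, AVG(reading) FROM devices GROUP BY device_id",
    QueryExecutionContext={"Database": "sensor_data"},
    ResultConfiguration={"OutputLocation": "s3://research-csv-bucket/athena-results/"},
)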
QUESTION 696
A company hosts an internal serverless application on AWS by using Amazon API Gateway and AWS Lambda. The company’s employees report issues with high latency when they begin using
the application each day. The company wants to reduce latency.
Which solution will meet these requirements?
A. Increase the API Gateway throttling limit.
B. Set up scheduled scaling to increase Lambda provisioned concurrency before employees begin
to use the application each day.
C. Create an Amazon CloudWatch alarm to initiate a Lambda function as a target for the alarm at
the beginning of each day.
D. Increase the Lambda function memory.
B. Set up scheduled scaling to increase Lambda provisioned concurrency before employees begin
to use the application each day.
Explanation:
https://aws.amazon.com/blogs/compute/scheduling-aws-lambda-provisioned-concurrency-for-recurring-peak-usage/
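A sketch of the scheduled scaling setup with Application Auto Scaling (the function name,
alias, capacities, and cron times are hypothetical):

import boto3

aas = boto3.client("application-autoscaling")

# Register the function alias as a scalable target for provisioned concurrency.
aas.register_scalable_target(
    ServiceNamespace="lambda",
    ResourceId="function:internal-app:live",
    ScalableDimension="lambda:function:ProvisionedConcurrency",
    MinCapacity=1,
    MaxCapacity=100,
)

# Warm up shortly before the workday starts, and scale down in the evening.
aas.put_scheduled_action(
    ServiceNamespace="lambda",
    ScheduledActionName="warm-up-weekday-mornings",
    ResourceId="function:internal-app:live",
    ScalableDimension="lambda:function:ProvisionedConcurrency",
    Schedule="cron(45 7 ? * MON-FRI *)",
    ScalableTargetAction={"MinCapacity": 100, "MaxCapacity": 100},
)
aas.put_scheduled_action(
    ServiceNamespace="lambda",
    ScheduledActionName="scale-down-evenings",
    ResourceId="function:internal-app:live",
    ScalableDimension="lambda:function:ProvisionedConcurrency",
    Schedule="cron(0 19 ? * MON-FRI *)",
    ScalableTargetAction={"MinCapacity": 1, "MaxCapacity": 1},
)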
QUESTION 695
An ecommerce application uses a PostgreSQL database that runs on an Amazon EC2 instance.
During a monthly sales event, database usage increases and causes database connection issues
for the application. The traffic is unpredictable for subsequent monthly sales events, which
impacts the sales forecast. The company needs to maintain performance when there is an
unpredictable increase in traffic.
Which solution resolves this issue in the MOST cost-effective way?
A. Migrate the PostgreSQL database to Amazon Aurora Serverless v2.
B. Enable auto scaling for the PostgreSQL database on the EC2 instance to accommodate
increased usage.
C. Migrate the PostgreSQL database to Amazon RDS for PostgreSQL with a larger instance type.
D. Migrate the PostgreSQL database to Amazon Redshift to accommodate increased usage.
A. Migrate the PostgreSQL database to Amazon Aurora Serverless v2.
Explanation:
Aurora Serverless v2 provides automatic scaling and high availability, and it is more cost-effective
than the other options when traffic is unpredictable.
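A hedged boto3 sketch of creating such a cluster (identifiers and the ACU range are
hypothetical; instances in a Serverless v2 cluster use the special db.serverless class):

import boto3

rds = boto3.client("rds")

rds.create_db_cluster(
    DBClusterIdentifier="ecommerce-db",
    Engine="aurora-postgresql",
    MasterUsername="dbadmin",
    ManageMasterUserPassword=True,
    # Scales between 0.5 and 64 ACUs as sales-event traffic rises and falls.
    ServerlessV2ScalingConfiguration={"MinCapacity": 0.5, "MaxCapacity": 64},
)
rds.create_db_instance(
    DBInstanceIdentifier="ecommerce-db-1",
    DBClusterIdentifier="ecommerce-db",
    DBInstanceClass="db.serverless",
    Engine="aurora-postgresql",
)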
QUESTION 694
A company’s applications run on Amazon EC2 instances in Auto Scaling groups. The company
notices that its applications experience sudden traffic increases on random days of the week. The
company wants to maintain application performance during sudden traffic increases.
Which solution will meet these requirements MOST cost-effectively?
A. Use manual scaling to change the size of the Auto Scaling group.
B. Use predictive scaling to change the size of the Auto Scaling group.
C. Use dynamic scaling to change the size of the Auto Scaling group.
D. Use scheduled scaling to change the size of the Auto Scaling group.
C. Use dynamic scaling to change the size of the Auto Scaling group.
Explanation:
Dynamic scaling changes the number of EC2 instances automatically in response to real-time
demand signals such as CPU utilization or request count. It is the right choice when there is a
high volume of unpredictable traffic.
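For example, a target tracking policy (a common form of dynamic scaling; the group and
policy names are hypothetical) keeps average CPU near a target by adding and removing
instances:

import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="app-asg",
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)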
QUESTION 693
A company plans to migrate to AWS and use Amazon EC2 On-Demand Instances for its
application. During the migration testing phase, a technical team observes that the application
takes a long time to launch and load memory to become fully productive.
Which solution will reduce the launch time of the application during the next testing phase?
A. Launch two or more EC2 On-Demand Instances. Turn on auto scaling features and make the
EC2 On-Demand Instances available during the next testing phase.
B. Launch EC2 Spot Instances to support the application and to scale the application so it is available during the next testing phase.
C. Launch the EC2 On-Demand Instances with hibernation turned on. Configure EC2 Auto Scaling
warm pools during the next testing phase.
D. Launch EC2 On-Demand Instances with Capacity Reservations. Start additional EC2 instances
during the next testing phase.
C. Launch the EC2 On-Demand Instances with hibernation turned on. Configure EC2 Auto Scaling
warm pools during the next testing phase.
Explanation:
With Amazon EC2 hibernation enabled, you can maintain your EC2 instances in a “pre-warmed”
state so they can reach a productive state faster.
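A minimal sketch of the warm pool piece (the group name is hypothetical; the launch
template behind the group must have hibernation enabled and an encrypted root volume
sized for RAM):

import boto3

autoscaling = boto3.client("autoscaling")

# Keep pre-initialized, hibernated instances ready to join the group quickly.
autoscaling.put_warm_pool(
    AutoScalingGroupName="app-asg",
    PoolState="Hibernated",
    MinSize=2,
)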
QUESTION 692
A solutions architect is designing a highly available solution that is based on Amazon ElastiCache
for Redis. The solutions architect needs to ensure that failures do not result in performance
degradation or loss of data locally and within an AWS Region. The solution needs to provide high
availability at the node level and at the Region level.
Which solution will meet these requirements?
A. Use Multi-AZ Redis replication groups with shards that contain multiple nodes.
B. Use Redis shards that contain multiple nodes with Redis append only files (AOF) turned on.
C. Use a Multi-AZ Redis cluster with more than one read replica in the replication group.
D. Use Redis shards that contain multiple nodes with Auto Scaling turned on.
A. Use Multi-AZ Redis replication groups with shards that contain multiple nodes.
Explanation:
https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/Replication.html
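A hedged sketch of such a replication group (identifiers, node type, and shard counts are
hypothetical; the cluster-mode parameter group name assumes Redis 7):

import boto3

elasticache = boto3.client("elasticache")

# Two shards, each with one primary and two replicas spread across AZs,
# with Multi-AZ automatic failover at the node level.
elasticache.create_replication_group(
    ReplicationGroupId="ha-redis",
    ReplicationGroupDescription="Multi-AZ Redis replication group with shards",
    Engine="redis",
    CacheNodeType="cache.r6g.large",
    NumNodeGroups=2,
    ReplicasPerNodeGroup=2,
    AutomaticFailoverEnabled=True,
    MultiAZEnabled=True,
    CacheParameterGroupName="default.redis7.cluster.on",
)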
QUESTION 691
A company uses AWS and sells access to copyrighted images. The company’s global customer
base needs to be able to access these images quickly. The company must deny access to users
from specific countries. The company wants to minimize costs as much as possible.
Which solution will meet these requirements?
A. Use Amazon S3 to store the images. Turn on multi-factor authentication (MFA) and public bucket
access. Provide customers with a link to the S3 bucket.
B. Use Amazon S3 to store the images. Create an IAM user for each customer. Add the users to a
group that has permission to access the S3 bucket.
C. Use Amazon EC2 instances that are behind Application Load Balancers (ALBs) to store the
images. Deploy the instances only in the countries the company services. Provide customers with
links to the ALBs for their specific country’s instances.
D. Use Amazon S3 to store the images. Use Amazon CloudFront to distribute the images with
geographic restrictions. Provide a signed URL for each customer to access the data in
CloudFront.
D. Use Amazon S3 to store the images. Use Amazon CloudFront to distribute the images with
geographic restrictions. Provide a signed URL for each customer to access the data in
CloudFront.
Explanation:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/georestrictions.html
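A sketch of generating a CloudFront signed URL with botocore, following the documented
signing scheme for CloudFront key pairs (the key-pair ID, key file, and URL are hypothetical;
the private key must belong to a trusted key group on the distribution):

import datetime
from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def rsa_signer(message):
    # Sign the CloudFront policy with the private key of the trusted key pair.
    with open("private_key.pem", "rb") as key_file:
        private_key = serialization.load_pem_private_key(key_file.read(), password=None)
    return private_key.sign(message, padding.PKCS1v15(), hashes.SHA1())

signer = CloudFrontSigner("K2JCJMDEHXQW5F", rsa_signer)
url = signer.generate_presigned_url(
    "https://d1234example.cloudfront.net/images/photo.jpg",
    date_less_than=datetime.datetime.utcnow() + datetime.timedelta(hours=1),
)
print(url)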
QUESTION 690
A company runs a container application by using Amazon Elastic Kubernetes Service (Amazon
EKS). The application includes microservices that manage customers and place orders. The
company needs to route incoming requests to the appropriate microservices.
Which solution will meet this requirement MOST cost-effectively?
A. Use the AWS Load Balancer Controller to provision a Network Load Balancer.
B. Use the AWS Load Balancer Controller to provision an Application Load Balancer.
C. Use an AWS Lambda function to connect the requests to Amazon EKS.
D. Use Amazon API Gateway to connect the requests to Amazon EKS.
B. Use the AWS Load Balancer Controller to provision an Application Load Balancer.
QUESTION 689
A company migrated a MySQL database from the company’s on-premises data center to an
Amazon RDS for MySQL DB instance. The company sized the RDS DB instance to meet the
company’s average daily workload. Once a month, the database performs slowly when the
company runs queries for a report. The company wants to have the ability to run reports and
maintain the performance of the daily workloads.
Which solution will meet these requirements?
A. Create a read replica of the database. Direct the queries to the read replica.
B. Create a backup of the database. Restore the backup to another DB instance. Direct the queries
to the new database.
C. Export the data to Amazon S3. Use Amazon Athena to query the S3 bucket.
D. Resize the DB instance to accommodate the additional workload.
A. Create a read replica of the database. Direct the queries to the read replica.
QUESTION 688
A company runs a web application on Amazon EC2 instances in an Auto Scaling group behind an
Application Load Balancer that has sticky sessions enabled. The web server currently hosts the
user session state. The company wants to ensure high availability and avoid user session state
loss in the event of a web server outage.
Which solution will meet these requirements?
A. Use an Amazon ElastiCache for Memcached instance to store the session data. Update the
application to use ElastiCache for Memcached to store the session state.
B. Use Amazon ElastiCache for Redis to store the session state. Update the application to use
ElastiCache for Redis to store the session state.
C. Use an AWS Storage Gateway cached volume to store session data. Update the application to
use AWS Storage Gateway cached volume to store the session state.
D. Use Amazon RDS to store the session state. Update the application to use Amazon RDS to store
the session state.
B. Use Amazon ElastiCache for Redis to store the session state. Update the application to use
ElastiCache for Redis to store the session state.
Explanation:
ElastiCache Redis provides in-memory caching that can deliver microsecond latency for session
data.
Redis supports replication and multi-AZ which can provide high availability for the cache.
The application can be updated to store session data in ElastiCache Redis rather than locally on
the web servers.
If a web server fails, the user can be routed via the load balancer to another web server which
can retrieve their session data from the highly available ElastiCache Redis cluster.
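A sketch of the application-side change using the redis-py client (the endpoint hostname is
hypothetical; whether TLS is needed depends on how the replication group is configured):

import json
import redis

sessions = redis.Redis(
    host="my-sessions.xxxxxx.ng.0001.use1.cache.amazonaws.com", port=6379
)

def save_session(session_id, data, ttl_seconds=1800):
    # Store the session with an expiry so abandoned sessions clean themselves up.
    sessions.setex(f"session:{session_id}", ttl_seconds, json.dumps(data))

def load_session(session_id):
    raw = sessions.get(f"session:{session_id}")
    return json.loads(raw) if raw else None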
QUESTION 687
An ecommerce company wants a disaster recovery solution for its Amazon RDS DB instances
that run Microsoft SQL Server Enterprise Edition. The company’s current recovery point objective
(RPO) and recovery time objective (RTO) are 24 hours.
Which solution will meet these requirements MOST cost-effectively?
A. Create a cross-Region read replica and promote the read replica to the primary instance.
B. Use AWS Database Migration Service (AWS DMS) to create RDS cross-Region replication.
C. Use cross-Region replication every 24 hours to copy native backups to an Amazon S3 bucket.
D. Copy automatic snapshots to another Region every 24 hours.
D. Copy automatic snapshots to another Region every 24 hours.
Explanation:
Amazon RDS creates and saves automated backups of your DB instance or Multi-AZ DB cluster
during the backup window of your DB instance. RDS creates a storage volume snapshot of your
DB instance, backing up the entire DB instance and not just individual databases. RDS saves the
automated backups of your DB instance according to the backup retention period that you
specify. If necessary, you can recover your DB instance to any point in time during the backup
retention period.
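A sketch of the daily cross-Region copy, which could be triggered on a schedule by Amazon
EventBridge (Regions, the snapshot ARN, and the KMS key alias are hypothetical; a
destination-Region KMS key is required for encrypted snapshots):

import boto3

# Run the copy from the DR Region.
rds_dr = boto3.client("rds", region_name="us-west-2")

rds_dr.copy_db_snapshot(
    SourceDBSnapshotIdentifier=(
        "arn:aws:rds:us-east-1:123456789012:snapshot:rds:sqlserver-db-2024-06-01"
    ),
    TargetDBSnapshotIdentifier="sqlserver-db-dr-copy",
    KmsKeyId="alias/dr-snapshots",
    SourceRegion="us-east-1",
)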
QUESTION 686
A company is designing a solution to capture customer activity in different web applications to process analytics and make predictions. Customer activity in the web applications is
unpredictable and can increase suddenly. The company requires a solution that integrates with
other web applications. The solution must include an authorization step for security purposes.
Which solution will meet these requirements?
A. Configure a Gateway Load Balancer (GWLB) in front of an Amazon Elastic Container Service
(Amazon ECS) container instance that stores the information that the company receives in an
Amazon Elastic File System (Amazon EFS) file system. Authorization is resolved at the GWLB.
B. Configure an Amazon API Gateway endpoint in front of an Amazon Kinesis data stream that
stores the information that the company receives in an Amazon S3 bucket. Use an AWS Lambda
function to resolve authorization.
C. Configure an Amazon API Gateway endpoint in front of an Amazon Kinesis Data Firehose that
stores the information that the company receives in an Amazon S3 bucket. Use an API Gateway
Lambda authorizer to resolve authorization.
D. Configure a Gateway Load Balancer (GWLB) in front of an Amazon Elastic Container Service
(Amazon ECS) container instance that stores the information that the company receives on an
Amazon Elastic File System (Amazon EFS) file system. Use an AWS Lambda function to resolve
authorization.
C. Configure an Amazon API Gateway endpoint in front of an Amazon Kinesis Data Firehose that
stores the information that the company receives in an Amazon S3 bucket. Use an API Gateway
Lambda authorizer to resolve authorization.
Explanation:
https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-use-lambda-authorizer.html
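A minimal REQUEST-type Lambda authorizer sketch: it inspects a header and returns an IAM
policy that allows or denies the API Gateway invocation (the token check is a placeholder for
real validation logic):

def handler(event, context):
    token = event.get("headers", {}).get("authorization", "")
    effect = "Allow" if token == "allow-me" else "Deny"
    return {
        "principalId": "example-user",
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": effect,
                "Resource": event["methodArn"],
            }],
        },
    }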
QUESTION 685
A company has five organizational units (OUs) as part of its organization in AWS Organizations.
Each OU correlates to the five businesses that the company owns. The company’s research and
development (R&D) business is separating from the company and will need its own organization.
A solutions architect creates a separate new management account for this purpose.
What should the solutions architect do next in the new management account?
A. Have the R&D AWS account be part of both organizations during the transition.
B. Invite the R&D AWS account to be part of the new organization after the R&D AWS account has
left the prior organization.
C. Create a new R&D AWS account in the new organization. Migrate resources from the prior R&D
AWS account to the new R&D AWS account.
D. Have the R&D AWS account join the new organization. Make the new management account a
member of the prior organization.
B. Invite the R&D AWS account to be part of the new organization after the R&D AWS account has
left the prior organization.
Explanation:
https://aws.amazon.com/blogs/mt/migrating-accounts-between-aws-organizations-with-consolidated-billing-to-all-features/
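A one-call sketch of the invitation, issued from the new management account after the R&D
account has left the old organization (the account ID is hypothetical):

import boto3

orgs = boto3.client("organizations")

orgs.invite_account_to_organization(
    Target={"Id": "111122223333", "Type": "ACCOUNT"}
)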
QUESTION 684
A solutions architect is designing a disaster recovery (DR) strategy to provide Amazon EC2
capacity in a failover AWS Region. Business requirements state that the DR strategy must
guarantee capacity in the failover Region.
Which solution will meet these requirements?
A. Purchase On-Demand Instances in the failover Region.
B. Purchase an EC2 Savings Plan in the failover Region.
C. Purchase regional Reserved Instances in the failover Region.
D. Purchase a Capacity Reservation in the failover Region.
D. Purchase a Capacity Reservation in the failover Region.
Explanation:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/reserved-instances-scope.html
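A sketch of reserving that capacity (the Region, instance type, AZ, and count are
hypothetical); unlike Reserved Instances and Savings Plans, which are billing constructs, a
Capacity Reservation guarantees the instances can actually launch during a DR event:

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

ec2.create_capacity_reservation(
    InstanceType="m5.xlarge",
    InstancePlatform="Linux/UNIX",
    AvailabilityZone="us-west-2a",
    InstanceCount=10,
)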
QUESTION 683
A company is deploying an application that processes large quantities of data in parallel. The
company plans to use Amazon EC2 instances for the workload. The network architecture must be
configurable to prevent groups of nodes from sharing the same underlying hardware.
Which networking solution meets these requirements?
A. Run the EC2 instances in a spread placement group.
B. Group the EC2 instances in separate accounts.
C. Configure the EC2 instances with dedicated tenancy.
D. Configure the EC2 instances with shared tenancy.
C. Configure the EC2 instances with dedicated tenancy.
Explanation:
Configuring the EC2 instances with dedicated tenancy ensures that each instance will run on
isolated, single-tenant hardware. This meets the requirement to prevent groups of nodes from
sharing underlying hardware.
A spread placement group does place each instance on distinct hardware, but it supports a
maximum of seven running instances per Availability Zone, which is too restrictive for this large
parallel workload.
QUESTION 682
A company has 5 PB of archived data on physical tapes. The company needs to preserve the
data on the tapes for another 10 years for compliance purposes. The company wants to migrate
to AWS in the next 6 months. The data center that stores the tapes has a 1 Gbps uplink internet
connectivity.
Which solution will meet these requirements MOST cost-effectively?
A. Read the data from the tapes on premises. Stage the data in a local NFS storage. Use AWS
DataSync to migrate the data to Amazon S3 Glacier Flexible Retrieval.
B. Use an on-premises backup application to read the data from the tapes and to write directly to
Amazon S3 Glacier Deep Archive.
C. Order multiple AWS Snowball devices that have Tape Gateway. Copy the physical tapes to virtual
tapes in Snowball. Ship the Snowball devices to AWS. Create a lifecycle policy to move the tapes
to Amazon S3 Glacier Deep Archive.
D. Configure an on-premises Tape Gateway. Create virtual tapes in the AWS Cloud. Use backup
software to copy the physical tape to the virtual tape.
C. Order multiple AWS Snowball devices that have Tape Gateway. Copy the physical tapes to virtual
tapes in Snowball. Ship the Snowball devices to AWS. Create a lifecycle policy to move the tapes
to Amazon S3 Glacier Deep Archive.
QUESTION 681
An ecommerce company uses Amazon Route 53 as its DNS provider. The company hosts its
website on premises and in the AWS Cloud. The company’s on-premises data center is near the
us-west-1 Region. The company uses the eu-central-1 Region to host the website. The company
wants to minimize load time for the website as much as possible.
Which solution will meet these requirements?
A. Set up a geolocation routing policy. Send the traffic that is near us-west-1 to the on-premises data
center. Send the traffic that is near eu-central-1 to eu-central-1.
B. Set up a simple routing policy that routes all traffic that is near eu-central-1 to eu-central-1 and
routes all traffic that is near the on-premises data center to the on-premises data center.
C. Set up a latency routing policy. Associate the policy with us-west-1.
D. Set up a weighted routing policy. Split the traffic evenly between eu-central-1 and the on-
premises data center.
A. Set up a geolocation routing policy. Send the traffic that is near us-west-1 to the on-premises data
center. Send the traffic that is near eu-central-1 to eu-central-1.
Explanation:
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy-geo.html
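A sketch of the two geolocation records (the hosted zone ID, domain, and IP addresses are
hypothetical): North American visitors resolve to the on-premises endpoint, European visitors
to eu-central-1.

import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",
    ChangeBatch={"Changes": [
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "www.example.com", "Type": "A", "TTL": 300,
            "SetIdentifier": "north-america",
            "GeoLocation": {"ContinentCode": "NA"},
            "ResourceRecords": [{"Value": "203.0.113.10"}],
        }},
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "www.example.com", "Type": "A", "TTL": 300,
            "SetIdentifier": "europe",
            "GeoLocation": {"ContinentCode": "EU"},
            "ResourceRecords": [{"Value": "198.51.100.20"}],
        }},
    ]},
)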
QUESTION 680
A company runs a stateful production application on Amazon EC2 instances. The application
requires at least two EC2 instances to always be running.
A solutions architect needs to design a highly available and fault-tolerant architecture for the
application. The solutions architect creates an Auto Scaling group of EC2 instances.
Which set of additional steps should the solutions architect take to meet these requirements?
A. Set the Auto Scaling group’s minimum capacity to two. Deploy one On-Demand Instance in one
Availability Zone and one On-Demand Instance in a second Availability Zone.
B. Set the Auto Scaling group’s minimum capacity to four. Deploy two On-Demand Instances in one
Availability Zone and two On-Demand Instances in a second Availability Zone.
C. Set the Auto Scaling group’s minimum capacity to two. Deploy four Spot Instances in one
Availability Zone.
D. Set the Auto Scaling group’s minimum capacity to four. Deploy two On-Demand Instances in one
Availability Zone and two Spot Instances in a second Availability Zone.
B. Set the Auto Scaling group’s minimum capacity to four. Deploy two On-Demand Instances in one
Availability Zone and two On-Demand Instances in a second Availability Zone.
Explanation:
By setting the Auto Scaling group’s minimum capacity to four, with two On-Demand Instances in
each of two Availability Zones, the architect ensures that at least two instances remain running
even if an entire Availability Zone becomes unavailable, keeping the application highly available
and fault tolerant.
QUESTION 679
A company uses locally attached storage to run a latency-sensitive application on premises. The
company is using a lift and shift method to move the application to the AWS Cloud. The company
does not want to change the application architecture.
Which solution will meet these requirements MOST cost-effectively?
A. Configure an Auto Scaling group with an Amazon EC2 instance. Use an Amazon FSx for Lustre
file system to run the application.
B. Host the application on an Amazon EC2 instance. Use an Amazon Elastic Block Store (Amazon
EBS) GP2 volume to run the application.
C. Configure an Auto Scaling group with an Amazon EC2 instance. Use an Amazon FSx for
OpenZFS file system to run the application.
D. Host the application on an Amazon EC2 instance. Use an Amazon Elastic Block Store (Amazon
EBS) GP3 volume to run the application.
D. Host the application on an Amazon EC2 instance. Use an Amazon Elastic Block Store (Amazon
EBS) GP3 volume to run the application.
QUESTION 678
A company runs an application that uses Amazon RDS for PostgreSQL. The application receives
traffic only on weekdays during business hours. The company wants to optimize costs and
reduce operational overhead based on this usage.
Which solution will meet these requirements?
A. Use the Instance Scheduler on AWS to configure start and stop schedules.
B. Turn off automatic backups. Create weekly manual snapshots of the database.
C. Create a custom AWS Lambda function to start and stop the database based on minimum CPU
utilization.
D. Purchase All Upfront reserved DB instances.
A. Use the Instance Scheduler on AWS to configure start and stop schedules.
Explanation:
The Instance Scheduler on AWS solution automates the starting and stopping of Amazon Elastic
Compute Cloud (Amazon EC2) and Amazon Relational Database Service (Amazon RDS)
instances.
This solution helps reduce operational costs by stopping resources that are not in use and
starting them when they are needed. The cost savings can be significant compared to leaving all
instances running continuously.
https://aws.amazon.com/solutions/implementations/instance-scheduler-on-aws/
QUESTION 677
A company deployed a serverless application that uses Amazon DynamoDB as a database layer.
The application has experienced a large increase in users. The company wants to improve
database response time from milliseconds to microseconds and to cache requests to the
database.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use DynamoDB Accelerator (DAX).
B. Migrate the database to Amazon Redshift.
C. Migrate the database to Amazon RDS.
D. Use Amazon ElastiCache for Redis.
A. Use DynamoDB Accelerator (DAX).
Explanation:
Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for
Amazon DynamoDB that delivers up to a 10 times performance improvement - from milliseconds
to microseconds - even at millions of requests per second.
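A hedged sketch of provisioning a DAX cluster in front of the existing table (the cluster name,
node type, role, and subnet group are hypothetical; the application then points its DynamoDB
calls at the DAX endpoint):

import boto3

dax = boto3.client("dax")

dax.create_cluster(
    ClusterName="app-dax",
    NodeType="dax.r5.large",
    ReplicationFactor=3,
    IamRoleArn="arn:aws:iam::123456789012:role/DAXServiceRole",
    SubnetGroupName="app-dax-subnets",
)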
QUESTION 676
A company uses an Amazon CloudFront distribution to serve content pages for its website. The
company needs to ensure that clients use a TLS certificate when accessing the company’s
website. The company wants to automate the creation and renewal of the TLS certificates.
Which solution will meet these requirements with the MOST operational efficiency?
A. Use a CloudFront security policy to create a certificate.
B. Use a CloudFront origin access control (OAC) to create a certificate.
C. Use AWS Certificate Manager (ACM) to create a certificate. Use DNS validation for the domain.
D. Use AWS Certificate Manager (ACM) to create a certificate. Use email validation for the domain.
C. Use AWS Certificate Manager (ACM) to create a certificate. Use DNS validation for the domain.
Explanation:
AWS Certificate Manager (ACM) provides free public TLS/SSL certificates and handles certificate
renewals automatically.
Using DNS validation with ACM is operationally efficient since it automatically makes changes to
Route 53 rather than requiring manual validation steps.
ACM integrates natively with CloudFront distributions for delivering HTTPS content.
CloudFront security policies and origin access controls do not issue TLS certificates.
Email validation requires manual steps to approve the domain validation emails for each renewal.
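A minimal sketch of the request (the domain names are hypothetical; certificates used with
CloudFront must be requested in us-east-1):

import boto3

acm = boto3.client("acm", region_name="us-east-1")

acm.request_certificate(
    DomainName="www.example.com",
    SubjectAlternativeNames=["example.com"],
    ValidationMethod="DNS",
)

With the domain hosted in Route 53, ACM can create the validation CNAME record and then
renew the certificate automatically with no manual steps.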
QUESTION 675
A company is building a RESTful serverless web application on AWS by using Amazon API
Gateway and AWS Lambda. The users of this web application will be geographically distributed,
and the company wants to reduce the latency of API requests to these users.
Which type of endpoint should a solutions architect use to meet these requirements?
A. Private endpoint
B. Regional endpoint
C. Interface VPC endpoint
D. Edge-optimized endpoint
D. Edge-optimized endpoint
Explanation:
An edge-optimized API endpoint typically routes requests to the nearest CloudFront Point of
Presence (POP), which could help in cases where your clients are geographically distributed.
This is the default endpoint type for API Gateway REST APIs.
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-endpoint-types.html
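A one-call sketch of creating such an API (the API name is hypothetical):

import boto3

apigateway = boto3.client("apigateway")

# REST API whose endpoint is fronted by CloudFront edge locations.
apigateway.create_rest_api(
    name="global-web-api",
    endpointConfiguration={"types": ["EDGE"]},
)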
QUESTION 674
A company deploys its applications on Amazon Elastic Kubernetes Service (Amazon EKS)
behind an Application Load Balancer in an AWS Region. The application needs to store data in a
PostgreSQL database engine. The company wants the data in the database to be highly
available. The company also needs increased capacity for read workloads.
Which solution will meet these requirements with the MOST operational efficiency?
A. Create an Amazon DynamoDB database table configured with global tables.
B. Create an Amazon RDS database with Multi-AZ deployments.
C. Create an Amazon RDS database with Multi-AZ DB cluster deployment.
D. Create an Amazon RDS database configured with cross-Region read replicas.
C. Create an Amazon RDS database with Multi-AZ DB cluster deployment.
Explanation:
A Multi-AZ DB cluster deployment has one writer instance and two readable standby instances in
separate Availability Zones, so it provides high availability and additional read capacity without
impacting the write workload.
QUESTION 673
A financial services company launched a new application that uses an Amazon RDS for MySQL
database. The company uses the application to track stock market trends. The company needs to
operate the application for only 2 hours at the end of each week. The company needs to optimize
the cost of running the database.
Which solution will meet these requirements MOST cost-effectively?
A. Migrate the existing RDS for MySQL database to an Aurora Serverless v2 MySQL database
cluster.
B. Migrate the existing RDS for MySQL database to an Aurora MySQL database cluster.
C. Migrate the existing RDS for MySQL database to an Amazon EC2 instance that runs MySQL.
Purchase an instance reservation for the EC2 instance.
D. Migrate the existing RDS for MySQL database to an Amazon Elastic Container Service (Amazon
ECS) cluster that uses MySQL container images to run tasks.
A. Migrate the existing RDS for MySQL database to an Aurora Serverless v2 MySQL database
cluster.
Explanation:
Aurora Serverless v2 scales compute capacity automatically based on actual usage and scales
down to a small configured minimum when the database is idle. This minimizes costs for
intermittent usage.
Since it only runs for 2 hours per week, the application is ideal for a serverless architecture like
Aurora Serverless.
Aurora Serverless v2 charges per second only for the capacity that is consumed, whereas a
provisioned instance is billed for its full size whenever it is running.
Aurora Serverless provides higher availability than self-managed MySQL on EC2 or ECS.
Using reserved EC2 instances or ECS still incurs charges when not in use versus the fine-grained
scaling of serverless.
Provisioned Aurora clusters bill for a fixed instance size around the clock, unlike the auto-scaling
serverless architecture.
QUESTION 672
A company wants to use an event-driven programming model with AWS Lambda. The company
wants to reduce startup latency for Lambda functions that run on Java 11. The company does not
have strict latency requirements for the applications. The company wants to reduce cold starts
and outlier latencies when a function scales up.
Which solution will meet these requirements MOST cost-effectively?
A. Configure Lambda provisioned concurrency.
B. Increase the timeout of the Lambda functions.
C. Increase the memory of the Lambda functions.
D. Configure Lambda SnapStart.
D. Configure Lambda SnapStart.
Explanation:
Lambda SnapStart for Java can improve startup performance for latency-sensitive applications by
up to 10x at no extra cost, typically with no changes to your function code.
https://docs.aws.amazon.com/lambda/latest/dg/snapstart.html
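A minimal sketch of enabling SnapStart (the function name is hypothetical; SnapStart applies
only to published versions of the function):

import boto3

lam = boto3.client("lambda")

lam.update_function_configuration(
    FunctionName="java11-order-processor",
    SnapStart={"ApplyOn": "PublishedVersions"},
)
lam.publish_version(FunctionName="java11-order-processor")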
QUESTION 671
A company runs an application on AWS. The application receives inconsistent amounts of usage.
The application uses AWS Direct Connect to connect to an on-premises MySQL-compatible
database. The on-premises database consistently uses a minimum of 2 GiB of memory.
The company wants to migrate the on-premises database to a managed AWS service. The
company wants to use auto scaling capabilities to manage unexpected workload increases.
Which solution will meet these requirements with the LEAST administrative overhead?
A. Provision an Amazon DynamoDB database with default read and write capacity settings.
B. Provision an Amazon Aurora database with a minimum capacity of 1 Aurora capacity unit (ACU).
C. Provision an Amazon Aurora Serverless v2 database with a minimum capacity of 1 Aurora
capacity unit (ACU).
D. Provision an Amazon RDS for MySQL database with 2 GiB of memory.
C. Provision an Amazon Aurora Serverless v2 database with a minimum capacity of 1 Aurora
capacity unit (ACU).
Explanation:
Aurora Serverless v2 provides auto-scaling so the database can handle inconsistent workloads
and spikes automatically without admin intervention.
It scales down to its configured minimum capacity during quiet periods to minimize costs.
Each Aurora capacity unit (ACU) provides roughly 2 GiB of memory, so the minimum of 1 ACU is
sufficient to replace the on-premises 2 GiB database based on the info given.
Serverless capabilities reduce admin overhead for capacity management.
DynamoDB lacks MySQL compatibility, so the application would have to be redesigned for a
NoSQL data model.
RDS and provisioned Aurora require manually resizing instances to scale, increasing admin
overhead.
QUESTION 670
A company is creating a REST API. The company has strict requirements for the use of TLS. The
company requires TLSv1.3 on the API endpoints. The company also requires a specific public
third-party certificate authority (CA) to sign the TLS certificate.
Which solution will meet these requirements?
A. Use a local machine to create a certificate that is signed by the third-party CA. Import the
certificate into AWS Certificate Manager (ACM). Create an HTTP API in Amazon API Gateway
with a custom domain. Configure the custom domain to use the certificate.
B. Create a certificate in AWS Certificate Manager (ACM) that is signed by the third-party CA.
Create an HTTP API in Amazon API Gateway with a custom domain. Configure the custom
domain to use the certificate.
C. Use AWS Certificate Manager (ACM) to create a certificate that is signed by the third-party CA.
Import the certificate into AWS Certificate Manager (ACM). Create an AWS Lambda function with
a Lambda function URL. Configure the Lambda function URL to use the certificate.
D. Create a certificate in AWS Certificate Manager (ACM) that is signed by the third-party CA.
Create an AWS Lambda function with a Lambda function URL. Configure the Lambda function
URL to use the certificate.
A. Use a local machine to create a certificate that is signed by the third-party CA. Import the
certificate into AWS Certificate Manager (ACM). Create an HTTP API in Amazon API Gateway
with a custom domain. Configure the custom domain to use the certificate.
QUESTION 669
A company has a large workload that runs every Friday evening. The workload runs on Amazon
EC2 instances that are in two Availability Zones in the us-east-1 Region. Normally, the company
runs no more than two instances at all times. However, the company wants to scale up to six
instances each Friday to handle a regularly repeating increased workload.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create a reminder in Amazon EventBridge to scale the instances.
B. Create an Auto Scaling group that has a scheduled action.
C. Create an Auto Scaling group that uses manual scaling.
D. Create an Auto Scaling group that uses automatic scaling.
B. Create an Auto Scaling group that has a scheduled action.
Explanation:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-scheduled-scaling.html
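A sketch of the two recurring scheduled actions (the group name and UTC cron times are
hypothetical):

import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Scale out to six instances every Friday evening...
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="friday-workload-asg",
    ScheduledActionName="friday-scale-out",
    Recurrence="0 22 * * 5",
    MinSize=2, MaxSize=6, DesiredCapacity=6,
)

# ...and back to the normal two on Saturday morning.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="friday-workload-asg",
    ScheduledActionName="saturday-scale-in",
    Recurrence="0 6 * * 6",
    MinSize=2, MaxSize=6, DesiredCapacity=2,
)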
QUESTION 668
An Amazon EventBridge rule targets a third-party API. The third-party API has not received any
incoming traffic. A solutions architect needs to determine whether the rule conditions are being
met and if the rule’s target is being invoked.
Which solution will meet these requirements?
A. Check for metrics in Amazon CloudWatch in the namespace for AWS/Events.
B. Review events in the Amazon Simple Queue Service (Amazon SQS) dead-letter queue.
C. Check for the events in Amazon CloudWatch Logs.
D. Check the trails in AWS CloudTrail for the EventBridge events.
A. Check for metrics in Amazon CloudWatch in the namespace for AWS/Events.
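As a rough illustration (the rule name is hypothetical), the AWS/Events namespace exposes
TriggeredRules, which shows whether the rule matched, and Invocations/FailedInvocations,
which show whether the target was actually called:

import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client("cloudwatch")

for metric in ["TriggeredRules", "Invocations", "FailedInvocations"]:
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/Events",
        MetricName=metric,
        Dimensions=[{"Name": "RuleName", "Value": "third-party-api-rule"}],
        StartTime=datetime.utcnow() - timedelta(hours=24),
        EndTime=datetime.utcnow(),
        Period=3600,
        Statistics=["Sum"],
    )
    print(metric, sum(p["Sum"] for p in stats["Datapoints"]))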
QUESTION 667
A solutions architect is designing the storage architecture for a new web application used for
storing and viewing engineering drawings. All application components will be deployed on the
AWS infrastructure.
The application design must support caching to minimize the amount of time that users wait for
the engineering drawings to load. The application must be able to store petabytes of data.
Which combination of storage and caching should the solutions architect use?
A. Amazon S3 with Amazon CloudFront
B. Amazon S3 Glacier with Amazon ElastiCache
C. Amazon Elastic Block Store (Amazon EBS) volumes with Amazon CloudFront
D. AWS Storage Gateway with Amazon ElastiCache
A. Amazon S3 with Amazon CloudFront
QUESTION 666
A solutions architect is designing a workload that will store hourly energy consumption by
business tenants in a building. The sensors will feed a database through HTTP requests that will
add up usage for each tenant. The solutions architect must use managed services when possible.
The workload will receive more features in the future as the solutions architect adds independent
components.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use Amazon API Gateway with AWS Lambda functions to receive the data from the sensors,
process the data, and store the data in an Amazon DynamoDB table.
B. Use an Elastic Load Balancer that is supported by an Auto Scaling group of Amazon EC2
instances to receive and process the data from the sensors. Use an Amazon S3 bucket to store
the processed data.
C. Use Amazon API Gateway with AWS Lambda functions to receive the data from the sensors,
process the data, and store the data in a Microsoft SQL Server Express database on an Amazon
EC2 instance.
D. Use an Elastic Load Balancer that is supported by an Auto Scaling group of Amazon EC2
instances to receive and process the data from the sensors. Use an Amazon Elastic File System
(Amazon EFS) shared file system to store the processed data.
A. Use Amazon API Gateway with AWS Lambda functions to receive the data from the sensors,
process the data, and store the data in an Amazon DynamoDB table.
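A sketch of the Lambda handler behind API Gateway (the table and attribute names are
hypothetical); DynamoDB's ADD action accumulates each tenant's hourly usage atomically,
and numbers must be passed as Decimal when using the resource interface:

import json
from decimal import Decimal
import boto3

table = boto3.resource("dynamodb").Table("TenantEnergyUsage")

def handler(event, context):
    reading = json.loads(event["body"])
    table.update_item(
        Key={"tenant_id": reading["tenant_id"], "hour": reading["hour"]},
        UpdateExpression="ADD kwh :v",
        ExpressionAttributeValues={":v": Decimal(str(reading["kwh"]))},
    )
    return {"statusCode": 200, "body": json.dumps({"ok": True})}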
QUESTION 665
A company runs multiple Amazon EC2 Linux instances in a VPC across two Availability Zones.
The instances host applications that use a hierarchical directory structure. The applications need
to read and write rapidly and concurrently to shared storage.
What should a solutions architect do to meet these requirements?
A. Create an Amazon S3 bucket. Allow access from all the EC2 instances in the VPC.
B. Create an Amazon Elastic File System (Amazon EFS) file system. Mount the EFS file system
from each EC2 instance.
C. Create a file system on a Provisioned IOPS SSD (io2) Amazon Elastic Block Store (Amazon
EBS) volume. Attach the EBS volume to all the EC2 instances.
D. Create file systems on Amazon Elastic Block Store (Amazon EBS) volumes that are attached to
each EC2 instance. Synchronize the EBS volumes across the different EC2 instances.
B. Create an Amazon Elastic File System (Amazon EFS) file system. Mount the EFS file system
from each EC2 instance.
Explanation:
How is Amazon EFS different than Amazon S3?
Amazon EFS provides shared access to data using a traditional file sharing permissions model
and hierarchical directory structure via the NFSv4 protocol. Applications that access data using a
standard file system interface provided through the operating system can use Amazon EFS to
take advantage of the scalability and reliability of file storage in the cloud without writing any new
code or adjusting applications.
Amazon S3 is an object storage platform that uses a simple API for storing and accessing data.
Applications that do not require a file system structure and are designed to work with object
storage can use Amazon S3 as a massively scalable, durable, low-cost object storage solution.
QUESTION 664
A company has an on-premises MySQL database that handles transactional data. The company
is migrating the database to the AWS Cloud. The migrated database must maintain compatibility
with the company’s applications that use the database. The migrated database also must scale
automatically during periods of increased demand.
Which migration solution will meet these requirements?
A. Use native MySQL tools to migrate the database to Amazon RDS for MySQL. Configure elastic
storage scaling.
B. Migrate the database to Amazon Redshift by using the mysqldump utility. Turn on Auto Scaling
for the Amazon Redshift cluster.
C. Use AWS Database Migration Service (AWS DMS) to migrate the database to Amazon Aurora.
Turn on Aurora Auto Scaling.
D. Use AWS Database Migration Service (AWS DMS) to migrate the database to Amazon
DynamoDB. Configure an Auto Scaling policy.
C. Use AWS Database Migration Service (AWS DMS) to migrate the database to Amazon Aurora.
Turn on Aurora Auto Scaling.
Explanation:
DMS provides an easy migration path from MySQL to Aurora while minimizing downtime.
Aurora is a MySQL-compatible relational database service that will maintain compatibility with the
company’s applications.
Aurora Auto Scaling allows the database to automatically scale up and down based on demand
to handle increased workloads.
RDS MySQL (Option A) does not scale as well as the Aurora architecture.
Redshift (Option B) is for analytics, not transactional data, and may not be compatible.
DynamoDB (Option D) is a NoSQL datastore and lacks MySQL compatibility.
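A hedged sketch of turning on Aurora Auto Scaling for replicas via Application Auto Scaling
(the cluster name, capacities, and CPU target are hypothetical):

import boto3

aas = boto3.client("application-autoscaling")

aas.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId="cluster:ecommerce-aurora",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,
    MaxCapacity=8,
)

# Add or remove Aurora Replicas to keep average reader CPU near 60%.
aas.put_scaling_policy(
    PolicyName="aurora-replica-cpu",
    ServiceNamespace="rds",
    ResourceId="cluster:ecommerce-aurora",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
        },
        "TargetValue": 60.0,
    },
)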
QUESTION 663
A company is building an ecommerce application and needs to store sensitive customer
information. The company needs to give customers the ability to complete purchase transactions
on the website. The company also needs to ensure that sensitive customer data is protected,
even from database administrators.
Which solution meets these requirements?
A. Store sensitive data in an Amazon Elastic Block Store (Amazon EBS) volume. Use EBS
encryption to encrypt the data. Use an IAM instance role to restrict access.
B. Store sensitive data in Amazon RDS for MySQL. Use AWS Key Management Service (AWS
KMS) client-side encryption to encrypt the data.
C. Store sensitive data in Amazon S3. Use AWS Key Management Service (AWS KMS) server-side
encryption to encrypt the data. Use S3 bucket policies to restrict access.
D. Store sensitive data in Amazon FSx for Windows Server. Mount the file share on application
servers. Use Windows file permissions to restrict access.
B. Store sensitive data in Amazon RDS for MySQL. Use AWS Key Management Service (AWS
KMS) client-side encryption to encrypt the data.
Explanation:
RDS MySQL provides a fully managed database service well suited for an ecommerce
application.
AWS KMS client-side encryption allows encrypting sensitive data before it hits the database. The
data remains encrypted at rest.
This protects sensitive customer data from database admins and privileged users.
EBS encryption (Option A) protects data at rest but not in use. IAM roles don’t prevent admin
access.
S3 (Option C) encrypts data at rest on the server side. Bucket policies don’t restrict admin
access.
FSx file permissions (Option D) don’t prevent admin access to unencrypted data.
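A sketch of the client-side (envelope) encryption pattern with KMS (the key alias and storage
of the three values are hypothetical): KMS issues a data key, the plaintext key encrypts the
field locally, and only the encrypted key is stored next to the ciphertext, so database
administrators never see plaintext.

import os
import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

kms = boto3.client("kms")

def encrypt_field(plaintext: bytes):
    key = kms.generate_data_key(KeyId="alias/customer-data", KeySpec="AES_256")
    nonce = os.urandom(12)
    ciphertext = AESGCM(key["Plaintext"]).encrypt(nonce, plaintext, None)
    # Store ciphertext, nonce, and the encrypted data key in the database;
    # the plaintext key never leaves this process.
    return ciphertext, nonce, key["CiphertextBlob"]

def decrypt_field(ciphertext, nonce, encrypted_key):
    key = kms.decrypt(CiphertextBlob=encrypted_key)["Plaintext"]
    return AESGCM(key).decrypt(nonce, ciphertext, None)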
QUESTION 662
A company runs its applications on both Amazon Elastic Kubernetes Service (Amazon EKS)
clusters and on-premises Kubernetes clusters. The company wants to view all clusters and
workloads from a central location.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use Amazon CloudWatch Container Insights to collect and group the cluster information.
B. Use Amazon EKS Connector to register and connect all Kubernetes clusters.
C. Use AWS Systems Manager to collect and view the cluster information.
D. Use Amazon EKS Anywhere as the primary cluster to view the other clusters with native
Kubernetes commands.
B. Use Amazon EKS Connector to register and connect all Kubernetes clusters.
Explanation:
You can use Amazon EKS Connector to register and connect any conformant Kubernetes cluster
to AWS and visualize it in the Amazon EKS console. After a cluster is connected, you can see the
status, configuration, and workloads for that cluster in the Amazon EKS console.
https://docs.aws.amazon.com/eks/latest/userguide/eks-connector.html
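A sketch of registering an on-premises cluster (the cluster name and role ARN are
hypothetical; after registration, the generated connector manifest must be applied to that
cluster to complete the connection):

import boto3

eks = boto3.client("eks")

eks.register_cluster(
    name="onprem-cluster-1",
    connectorConfig={
        "roleArn": "arn:aws:iam::123456789012:role/EKSConnectorAgentRole",
        "provider": "OTHER",
    },
)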
QUESTION 661
A solutions architect needs to ensure that API calls to Amazon DynamoDB from Amazon EC2
instances in a VPC do not travel across the internet.
Which combination of steps should the solutions architect take to meet this requirement?
(Choose two.)
A. Create a route table entry for the endpoint.
B. Create a gateway endpoint for DynamoDB.
C. Create an interface endpoint for Amazon EC2.
D. Create an elastic network interface for the endpoint in each of the subnets of the VPC.
E. Create a security group entry in the endpoint’s security group to provide access.
A. Create a route table entry for the endpoint.
B. Create a gateway endpoint for DynamoDB.
Explanation:
https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-ddb.html
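A sketch covering both steps in one call (the VPC and route table IDs are hypothetical);
passing RouteTableIds makes AWS add the required route entries automatically:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    VpcEndpointType="Gateway",
    RouteTableIds=["rtb-aaaa1111", "rtb-bbbb2222"],
)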