SAA L2P 601-700 v24.021 Flashcards

1
Q

QUESTION 700
A company runs its critical database on an Amazon RDS for PostgreSQL DB instance. The company wants to migrate to Amazon Aurora PostgreSQL with minimal downtime and data loss.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create a DB snapshot of the RDS for PostgreSQL DB instance to populate a new Aurora
PostgreSQL DB cluster.
B. Create an Aurora read replica of the RDS for PostgreSQL DB instance. Promote the Aurora read
replica to a new Aurora PostgreSQL DB cluster.
C. Use data import from Amazon S3 to migrate the database to an Aurora PostgreSQL DB cluster.
D. Use the pg_dump utility to back up the RDS for PostgreSQL database. Restore the backup to a
new Aurora PostgreSQL DB cluster.

A

B. Create an Aurora read replica of the RDS for PostgreSQL DB instance. Promote the Aurora read
replica to a new Aurora PostgreSQL DB cluster.

Explanation:
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraPostgreSQL.Migrating.html
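
A minimal boto3 sketch of this path (identifiers, ARN, and instance class are placeholders):

```python
import boto3

rds = boto3.client("rds")

# Create an Aurora PostgreSQL cluster as a read replica of the RDS instance.
rds.create_db_cluster(
    DBClusterIdentifier="aurora-replica-cluster",
    Engine="aurora-postgresql",
    ReplicationSourceIdentifier="arn:aws:rds:us-east-1:111122223333:db:source-postgres",
)
rds.create_db_instance(
    DBInstanceIdentifier="aurora-replica-instance-1",
    DBInstanceClass="db.r6g.large",
    Engine="aurora-postgresql",
    DBClusterIdentifier="aurora-replica-cluster",
)

# Once replication lag reaches zero, promote the replica to a standalone cluster.
rds.promote_read_replica_db_cluster(DBClusterIdentifier="aurora-replica-cluster")
```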

2
Q

QUESTION 699
A company is planning to migrate a TCP-based application into the company’s VPC. The
application is publicly accessible on a nonstandard TCP port through a hardware appliance in the
company’s data center. This public endpoint can process up to 3 million requests per second with
low latency. The company requires the same level of performance for the new public endpoint in
AWS.
What should a solutions architect recommend to meet this requirement?
A. Deploy a Network Load Balancer (NLB). Configure the NLB to be publicly accessible over the
TCP port that the application requires.
B. Deploy an Application Load Balancer (ALB). Configure the ALB to be publicly accessible over the
TCP port that the application requires.
C. Deploy an Amazon CloudFront distribution that listens on the TCP port that the application
requires. Use an Application Load Balancer as the origin.
D. Deploy an Amazon API Gateway API that is configured with the TCP port that the application
requires. Configure AWS Lambda functions with provisioned concurrency to process the
requests.

A

A. Deploy a Network Load Balancer (NLB). Configure the NLB to be publicly accessible over the
TCP port that the application requires.

Explanation:
The company requires the same level of performance for the new public endpoint in AWS.
A Network Load Balancer functions at the fourth layer of the Open Systems Interconnection (OSI)
model. It can handle millions of requests per second. After the load balancer receives a
connection request, it selects a target from the target group for the default rule. It attempts to
open a TCP connection to the selected target on the port specified in the listener configuration.
https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html
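
A hedged boto3 sketch of the NLB setup; subnet, VPC, and port values are placeholders:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Internet-facing NLB listening on the application's nonstandard TCP port.
nlb = elbv2.create_load_balancer(
    Name="tcp-app-nlb",
    Type="network",
    Scheme="internet-facing",
    Subnets=["subnet-0abc", "subnet-0def"],  # placeholder subnet IDs
)["LoadBalancers"][0]

tg = elbv2.create_target_group(
    Name="tcp-app-targets",
    Protocol="TCP",
    Port=7001,          # placeholder nonstandard port
    VpcId="vpc-0123",
    TargetType="instance",
)["TargetGroups"][0]

elbv2.create_listener(
    LoadBalancerArn=nlb["LoadBalancerArn"],
    Protocol="TCP",
    Port=7001,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
)
```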

3
Q

QUESTION 698
A company wants to use Amazon Elastic Container Service (Amazon ECS) clusters and Amazon
RDS DB instances to build and run a payment processing application. The company will run the
application in its on-premises data center for compliance purposes.
A solutions architect wants to use AWS Outposts as part of the solution. The solutions architect is
working with the company’s operational team to build the application.
Which activities are the responsibility of the company’s operational team? (Choose three.)
A. Providing resilient power and network connectivity to the Outposts racks
B. Managing the virtualization hypervisor, storage systems, and the AWS services that run on
Outposts
C. Physical security and access controls of the data center environment
D. Availability of the Outposts infrastructure including the power supplies, servers, and networking
equipment within the Outposts racks
E. Physical maintenance of Outposts components
F. Providing extra capacity for Amazon ECS clusters to mitigate server failures and maintenance
events

A

A. Providing resilient power and network connectivity to the Outposts racks
C. Physical security and access controls of the data center environment
F. Providing extra capacity for Amazon ECS clusters to mitigate server failures and maintenance
events

Explanation:
https://docs.aws.amazon.com/whitepapers/latest/aws-outposts-high-availability-design/aws-outposts-high-availability-design.html
With Outposts, you are responsible for providing resilient power and network connectivity to the
Outpost racks to meet your availability requirements for workloads running on Outposts. You are
responsible for the physical security and access controls of the data center environment. You
must provide sufficient power, space, and cooling to keep the Outpost operational and network
connections to connect the Outpost back to the Region. Since Outpost capacity is finite and
determined by the size and number of racks AWS installs at your site, you must decide how much
EC2, EBS, and S3 on Outposts capacity you need to run your initial workloads, accommodate
future growth, and to provide extra capacity to mitigate server failures and maintenance events.

4
Q

QUESTION 697
A research company uses on-premises devices to generate data for analysis. The company
wants to use the AWS Cloud to analyze the data. The devices generate .csv files and support
writing the data to an SMB file share. Company analysts must be able to use SQL commands to
query the data. The analysts will run queries periodically throughout the day.
Which combination of steps will meet these requirements MOST cost-effectively? (Choose three.)
A. Deploy an AWS Storage Gateway on premises in Amazon S3 File Gateway mode.
B. Deploy an AWS Storage Gateway on premises in Amazon FSx File Gateway mode.
C. Set up an AWS Glue crawler to create a table based on the data that is in Amazon S3.
D. Set up an Amazon EMR cluster with EMR File System (EMRFS) to query the data that is in
Amazon S3. Provide access to analysts.
E. Set up an Amazon Redshift cluster to query the data that is in Amazon S3. Provide access to
analysts.
F. Set up Amazon Athena to query the data that is in Amazon S3. Provide access to analysts.

A

A. Deploy an AWS Storage Gateway on premises in Amazon S3 File Gateway mode.
C. Set up an AWS Glue crawler to create a table based on the data that is in Amazon S3.
F. Set up Amazon Athena to query the data that is in Amazon S3. Provide access to analysts.

Explanation:
https://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-etl-format-csv-home.html
https://aws.amazon.com/blogs/aws/amazon-athena-interactive-sql-queries-for-data-in-amazon-s3/
https://aws.amazon.com/storagegateway/faqs/
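
A rough boto3 sketch of steps C and F, assuming the S3 File Gateway already writes the .csv
files to the bucket (crawler name, role, paths, and the query are placeholders):

```python
import boto3

glue = boto3.client("glue")
athena = boto3.client("athena")

# Crawl the .csv files so Glue creates a table in the Data Catalog.
glue.create_crawler(
    Name="sensor-csv-crawler",
    Role="arn:aws:iam::111122223333:role/GlueCrawlerRole",  # placeholder role
    DatabaseName="sensor_data",
    Targets={"S3Targets": [{"Path": "s3://sensor-landing-bucket/csv/"}]},
)
glue.start_crawler(Name="sensor-csv-crawler")

# Analysts then query the table with standard SQL through Athena.
athena.start_query_execution(
    QueryString="SELECT device_id, AVG(reading) FROM readings GROUP BY device_id",
    QueryExecutionContext={"Database": "sensor_data"},
    ResultConfiguration={"OutputLocation": "s3://athena-results-bucket/"},
)
```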

5
Q

QUESTION 696
A company hosts an internal serverless application on AWS by using Amazon API Gateway and AWS Lambda. The company’s employees report issues with high latency when they begin using
the application each day. The company wants to reduce latency.
Which solution will meet these requirements?
A. Increase the API Gateway throttling limit.
B. Set up scheduled scaling to increase Lambda provisioned concurrency before employees begin
to use the application each day.
C. Create an Amazon CloudWatch alarm to initiate a Lambda function as a target for the alarm at
the beginning of each day.
D. Increase the Lambda function memory.

A

B. Set up scheduled scaling to increase Lambda provisioned concurrency before employees begin
to use the application each day.

Explanation:
https://aws.amazon.com/blogs/compute/scheduling-aws-lambda-provisioned-concurrency-for-recurring-peak-usage/
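
A sketch of the schedule from the referenced blog post, using the Application Auto Scaling
API; the function alias, capacities, and cron times are placeholders:

```python
import boto3

aas = boto3.client("application-autoscaling")
resource_id = "function:internal-app:live"  # placeholder function:alias

aas.register_scalable_target(
    ServiceNamespace="lambda",
    ResourceId=resource_id,
    ScalableDimension="lambda:function:ProvisionedConcurrency",
    MinCapacity=0,
    MaxCapacity=100,
)

# Warm up before the workday starts (08:30 UTC, Monday-Friday) ...
aas.put_scheduled_action(
    ServiceNamespace="lambda",
    ScheduledActionName="warm-up-weekday-mornings",
    ResourceId=resource_id,
    ScalableDimension="lambda:function:ProvisionedConcurrency",
    Schedule="cron(30 8 ? * MON-FRI *)",
    ScalableTargetAction={"MinCapacity": 50, "MaxCapacity": 100},
)
# ... and scale back down in the evening.
aas.put_scheduled_action(
    ServiceNamespace="lambda",
    ScheduledActionName="scale-down-evenings",
    ResourceId=resource_id,
    ScalableDimension="lambda:function:ProvisionedConcurrency",
    Schedule="cron(0 19 ? * MON-FRI *)",
    ScalableTargetAction={"MinCapacity": 0, "MaxCapacity": 100},
)
```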

6
Q

QUESTION 695
An ecommerce application uses a PostgreSQL database that runs on an Amazon EC2 instance.
During a monthly sales event, database usage increases and causes database connection issues
for the application. The traffic is unpredictable for subsequent monthly sales events, which
impacts the sales forecast. The company needs to maintain performance when there is an
unpredictable increase in traffic.
Which solution resolves this issue in the MOST cost-effective way?
A. Migrate the PostgreSQL database to Amazon Aurora Serverless v2.
B. Enable auto scaling for the PostgreSQL database on the EC2 instance to accommodate
increased usage.
C. Migrate the PostgreSQL database to Amazon RDS for PostgreSQL with a larger instance type.
D. Migrate the PostgreSQL database to Amazon Redshift to accommodate increased usage.

A

A. Migrate the PostgreSQL database to Amazon Aurora Serverless v2.

Explanation:
Aurora Serverless v2 scales capacity automatically to absorb unpredictable traffic spikes, is
highly available, and is cheaper than the other options for this workload.
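
A minimal boto3 sketch of creating such a cluster; identifiers, credentials handling, and the
ACU range are placeholder choices:

```python
import boto3

rds = boto3.client("rds")

# Serverless v2 capacity scales automatically between the ACU bounds.
rds.create_db_cluster(
    DBClusterIdentifier="ecommerce-aurora-sv2",
    Engine="aurora-postgresql",
    MasterUsername="admin_user",
    ManageMasterUserPassword=True,  # let RDS manage the secret
    ServerlessV2ScalingConfiguration={"MinCapacity": 0.5, "MaxCapacity": 64},
)
rds.create_db_instance(
    DBInstanceIdentifier="ecommerce-aurora-sv2-1",
    DBClusterIdentifier="ecommerce-aurora-sv2",
    Engine="aurora-postgresql",
    DBInstanceClass="db.serverless",  # Serverless v2 instance class
)
```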

7
Q

QUESTION 694
A company’s applications run on Amazon EC2 instances in Auto Scaling groups. The company
notices that its applications experience sudden traffic increases on random days of the week. The
company wants to maintain application performance during sudden traffic increases.
Which solution will meet these requirements MOST cost-effectively?
A. Use manual scaling to change the size of the Auto Scaling group.
B. Use predictive scaling to change the size of the Auto Scaling group.
C. Use dynamic scaling to change the size of the Auto Scaling group.
D. Use scheduled scaling to change the size of the Auto Scaling group.

A

C. Use dynamic scaling to change the size of the Auto Scaling group.

Explanation:
Dynamic scaling changes the number of EC2 instances automatically in response to real-time
demand signals such as CPU utilization or request count. It is a good choice when there is a
high volume of unpredictable traffic.
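
For illustration, a target tracking policy is one common form of dynamic scaling; a boto3
sketch with a placeholder group name and target value:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Target tracking keeps average CPU near 50% as traffic rises and falls.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="app-asg",       # placeholder ASG name
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```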

8
Q

QUESTION 693
A company plans to migrate to AWS and use Amazon EC2 On-Demand Instances for its
application. During the migration testing phase, a technical team observes that the application
takes a long time to launch and load memory to become fully productive.
Which solution will reduce the launch time of the application during the next testing phase?
A. Launch two or more EC2 On-Demand Instances. Turn on auto scaling features and make the
EC2 On-Demand Instances available during the next testing phase.
B. Launch EC2 Spot Instances to support the application and to scale the application so it is available during the next testing phase.
C. Launch the EC2 On-Demand Instances with hibernation turned on. Configure EC2 Auto Scaling
warm pools during the next testing phase.
D. Launch EC2 On-Demand Instances with Capacity Reservations. Start additional EC2 instances
during the next testing phase.

A

C. Launch the EC2 On-Demand Instances with hibernation turned on. Configure EC2 Auto Scaling
warm pools during the next testing phase.

Explanation:
With Amazon EC2 hibernation enabled, you can maintain your EC2 instances in a “pre-warmed”
state so they can reach a productive state faster.
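
A boto3 sketch of the warm pool configuration; it assumes the group's launch template already
enables hibernation, and the group name and size are placeholders:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Keep hibernated instances ready; their memory state is restored on launch.
autoscaling.put_warm_pool(
    AutoScalingGroupName="app-asg",       # placeholder ASG name
    PoolState="Hibernated",               # requires hibernation-enabled instances
    MinSize=2,
    InstanceReusePolicy={"ReuseOnScaleIn": True},
)
```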

9
Q

QUESTION 692
A solutions architect is designing a highly available Amazon ElastiCache for Redis based
solution. The solutions architect needs to ensure that failures do not result in performance
degradation or loss of data locally and within an AWS Region. The solution needs to provide high
availability at the node level and at the Region level.
Which solution will meet these requirements?
A. Use Multi-AZ Redis replication groups with shards that contain multiple nodes.
B. Use Redis shards that contain multiple nodes with Redis append only files (AOF) turned on.
C. Use a Multi-AZ Redis cluster with more than one read replica in the replication group.
D. Use Redis shards that contain multiple nodes with Auto Scaling turned on.

A

A. Use Multi-AZ Redis replication groups with shards that contain multiple nodes.

Explanation:
https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/Replication.html
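
A hedged boto3 sketch of such a replication group; node type and shard/replica counts are
placeholder choices:

```python
import boto3

elasticache = boto3.client("elasticache")

# Cluster-mode Redis: 2 shards, 2 replicas per shard, spread across AZs.
elasticache.create_replication_group(
    ReplicationGroupId="ha-redis",
    ReplicationGroupDescription="Multi-AZ Redis with sharded replication",
    Engine="redis",
    CacheNodeType="cache.r6g.large",
    NumNodeGroups=2,
    ReplicasPerNodeGroup=2,
    MultiAZEnabled=True,
    AutomaticFailoverEnabled=True,
)
```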

10
Q

QUESTION 691
A company uses AWS and sells access to copyrighted images. The company’s global customer
base needs to be able to access these images quickly. The company must deny access to users
from specific countries. The company wants to minimize costs as much as possible.
Which solution will meet these requirements?
A. Use Amazon S3 to store the images. Turn on multi-factor authentication (MFA) and public bucket
access. Provide customers with a link to the S3 bucket.
B. Use Amazon S3 to store the images. Create an IAM user for each customer. Add the users to a
group that has permission to access the S3 bucket.
C. Use Amazon EC2 instances that are behind Application Load Balancers (ALBs) to store the
images. Deploy the instances only in the countries the company services. Provide customers with
links to the ALBs for their specific country’s instances.
D. Use Amazon S3 to store the images. Use Amazon CloudFront to distribute the images with
geographic restrictions. Provide a signed URL for each customer to access the data in
CloudFront.

A

D. Use Amazon S3 to store the images. Use Amazon CloudFront to distribute the images with
geographic restrictions. Provide a signed URL for each customer to access the data in
CloudFront.

Explanation:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/georestrictions.html
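
For the signed-URL half, a sketch based on the documented botocore CloudFrontSigner pattern;
the key-pair ID, key path, and distribution domain are placeholders, and the third-party
`rsa` package is just one possible signer:

```python
import datetime

import rsa  # third-party package used here to sign; any RSA signer works
from botocore.signers import CloudFrontSigner

def rsa_signer(message):
    # Placeholder path to the private key of the CloudFront key pair.
    with open("cloudfront-private-key.pem", "rb") as key_file:
        private_key = rsa.PrivateKey.load_pkcs1(key_file.read())
    return rsa.sign(message, private_key, "SHA-1")  # CloudFront expects SHA-1 RSA

signer = CloudFrontSigner("K2JCJMDEHXQW5F", rsa_signer)  # placeholder key ID

# Time-limited URL for one customer; geo restriction is set on the distribution.
url = signer.generate_presigned_url(
    "https://d111111abcdef8.cloudfront.net/images/photo.jpg",
    date_less_than=datetime.datetime.now(datetime.timezone.utc)
    + datetime.timedelta(hours=1),
)
```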

11
Q

QUESTION 690
A company runs a container application by using Amazon Elastic Kubernetes Service (Amazon
EKS). The application includes microservices that manage customers and place orders. The
company needs to route incoming requests to the appropriate microservices.
Which solution will meet this requirement MOST cost-effectively?
A. Use the AWS Load Balancer Controller to provision a Network Load Balancer.
B. Use the AWS Load Balancer Controller to provision an Application Load Balancer.
C. Use an AWS Lambda function to connect the requests to Amazon EKS.
D. Use Amazon API Gateway to connect the requests to Amazon EKS.

A

B. Use the AWS Load Balancer Controller to provision an Application Load Balancer.

12
Q

QUESTION 689
A company migrated a MySQL database from the company’s on-premises data center to an
Amazon RDS for MySQL DB instance. The company sized the RDS DB instance to meet the
company’s average daily workload. Once a month, the database performs slowly when the
company runs queries for a report. The company wants to have the ability to run reports and
maintain the performance of the daily workloads.
Which solution will meet these requirements?
A. Create a read replica of the database. Direct the queries to the read replica.
B. Create a backup of the database. Restore the backup to another DB instance. Direct the queries
to the new database.
C. Export the data to Amazon S3. Use Amazon Athena to query the S3 bucket.
D. Resize the DB instance to accommodate the additional workload.

A

A. Create a read replica of the database. Direct the queries to the read replica.
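
A one-call boto3 sketch (identifiers are placeholders):

```python
import boto3

rds = boto3.client("rds")

# Reporting queries go to the replica endpoint, not the primary.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="mysql-reporting-replica",
    SourceDBInstanceIdentifier="mysql-primary",
)
```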

13
Q

QUESTION 688
A company runs a web application on Amazon EC2 instances in an Auto Scaling group behind an
Application Load Balancer that has sticky sessions enabled. The web server currently hosts the
user session state. The company wants to ensure high availability and avoid user session state
loss in the event of a web server outage.
Which solution will meet these requirements?

A. Use an Amazon ElastiCache for Memcached instance to store the session data. Update the
application to use ElastiCache for Memcached to store the session state.
B. Use Amazon ElastiCache for Redis to store the session state. Update the application to use
ElastiCache for Redis to store the session state.
C. Use an AWS Storage Gateway cached volume to store session data. Update the application to
use AWS Storage Gateway cached volume to store the session state.
D. Use Amazon RDS to store the session state. Update the application to use Amazon RDS to store
the session state.

A

B. Use Amazon ElastiCache for Redis to store the session state. Update the application to use
ElastiCache for Redis to store the session state.

Explanation:
ElastiCache Redis provides in-memory caching that can deliver microsecond latency for session
data.
Redis supports replication and multi-AZ which can provide high availability for the cache.
The application can be updated to store session data in ElastiCache Redis rather than locally on
the web servers.
If a web server fails, the user can be routed via the load balancer to another web server which
can retrieve their session data from the highly available ElastiCache Redis cluster.
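
A sketch of the application-side change using the redis-py client; the endpoint and TTL are
placeholders:

```python
import json

import redis  # redis-py

# Placeholder primary endpoint of the ElastiCache for Redis replication group.
r = redis.Redis(host="sessions.abc123.ng.0001.use1.cache.amazonaws.com", port=6379)

def save_session(session_id, state, ttl_seconds=1800):
    # SETEX stores the session with an expiry instead of on the web server.
    r.setex(f"session:{session_id}", ttl_seconds, json.dumps(state))

def load_session(session_id):
    raw = r.get(f"session:{session_id}")
    return json.loads(raw) if raw else None
```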

14
Q

QUESTION 687
An ecommerce company wants a disaster recovery solution for its Amazon RDS DB instances
that run Microsoft SQL Server Enterprise Edition. The company’s current recovery point objective
(RPO) and recovery time objective (RTO) are 24 hours.
Which solution will meet these requirements MOST cost-effectively?
A. Create a cross-Region read replica and promote the read replica to the primary instance.
B. Use AWS Database Migration Service (AWS DMS) to create RDS cross-Region replication.
C. Use cross-Region replication every 24 hours to copy native backups to an Amazon S3 bucket.
D. Copy automatic snapshots to another Region every 24 hours.

A

D. Copy automatic snapshots to another Region every 24 hours.

Explanation:
Amazon RDS creates and saves automated backups of your DB instance or Multi-AZ DB cluster
during the backup window of your DB instance. RDS creates a storage volume snapshot of your
DB instance, backing up the entire DB instance and not just individual databases. RDS saves the
automated backups of your DB instance according to the backup retention period that you
specify. If necessary, you can recover your DB instance to any point in time during the backup
retention period.
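
A boto3 sketch of one such copy, run against the DR Region; identifiers are placeholders (an
encrypted snapshot would also need KmsKeyId):

```python
import boto3

# Run the copy from the DR Region; the source is referenced by ARN.
rds_dr = boto3.client("rds", region_name="us-west-2")

rds_dr.copy_db_snapshot(
    SourceDBSnapshotIdentifier=(
        "arn:aws:rds:us-east-1:111122223333:snapshot:rds:sqlserver-2024-06-01"
    ),
    TargetDBSnapshotIdentifier="sqlserver-dr-copy-2024-06-01",
    SourceRegion="us-east-1",  # boto3 uses this to presign the cross-Region call
)
```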

15
Q

QUESTION 686
A company is designing a solution to capture customer activity in different web applications to process analytics and make predictions. Customer activity in the web applications is
unpredictable and can increase suddenly. The company requires a solution that integrates with
other web applications. The solution must include an authorization step for security purposes.
Which solution will meet these requirements?
A. Configure a Gateway Load Balancer (GWLB) in front of an Amazon Elastic Container Service
(Amazon ECS) container instance that stores the information that the company receives in an
Amazon Elastic File System (Amazon EFS) file system. Authorization is resolved at the GWLB.
B. Configure an Amazon API Gateway endpoint in front of an Amazon Kinesis data stream that
stores the information that the company receives in an Amazon S3 bucket. Use an AWS Lambda
function to resolve authorization.
C. Configure an Amazon API Gateway endpoint in front of an Amazon Kinesis Data Firehose that
stores the information that the company receives in an Amazon S3 bucket. Use an API Gateway
Lambda authorizer to resolve authorization.
D. Configure a Gateway Load Balancer (GWLB) in front of an Amazon Elastic Container Service
(Amazon ECS) container instance that stores the information that the company receives on an
Amazon Elastic File System (Amazon EFS) file system. Use an AWS Lambda function to resolve
authorization.

A

C. Configure an Amazon API Gateway endpoint in front of an Amazon Kinesis Data Firehose that
stores the information that the company receives in an Amazon S3 bucket. Use an API Gateway
Lambda authorizer to resolve authorization.

Explanation:
https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-use-lambda-authorizer.html
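
For illustration, a skeleton of a TOKEN-type Lambda authorizer; the token check is placeholder
logic, not a real validation scheme:

```python
def handler(event, context):
    # A real authorizer would validate a JWT or look up the token.
    token = event.get("authorizationToken", "")
    effect = "Allow" if token == "allow" else "Deny"  # placeholder check
    return {
        "principalId": "caller",
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": effect,
                "Resource": event["methodArn"],
            }],
        },
    }
```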

16
Q

QUESTION 685
A company has five organizational units (OUs) as part of its organization in AWS Organizations.
Each OU correlates to the five businesses that the company owns. The company’s research and
development (R&D) business is separating from the company and will need its own organization.
A solutions architect creates a separate new management account for this purpose.
What should the solutions architect do next in the new management account?
A. Have the R&D AWS account be part of both organizations during the transition.
B. Invite the R&D AWS account to be part of the new organization after the R&D AWS account has
left the prior organization.
C. Create a new R&D AWS account in the new organization. Migrate resources from the prior R&D
AWS account to the new R&D AWS account.
D. Have the R&D AWS account join the new organization. Make the new management account a
member of the prior organization.

A

B. Invite the R&D AWS account to be part of the new organization after the R&D AWS account has
left the prior organization.

Explanation:
https://aws.amazon.com/blogs/mt/migrating-accounts-between-aws-organizations-with-consolidated-billing-to-all-features/

17
Q

QUESTION 684
A solutions architect is designing a disaster recovery (DR) strategy to provide Amazon EC2
capacity in a failover AWS Region. Business requirements state that the DR strategy must meet
capacity in the failover Region.
Which solution will meet these requirements?
A. Purchase On-Demand Instances in the failover Region.
B. Purchase an EC2 Savings Plan in the failover Region.
C. Purchase regional Reserved Instances in the failover Region.
D. Purchase a Capacity Reservation in the failover Region.

A

D. Purchase a Capacity Reservation in the failover Region.

Explanation:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/reserved-instances-scope.html
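
A minimal boto3 sketch; instance type, AZ, and count are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")  # the failover Region

# Reserved capacity is held for this account whether or not instances run.
ec2.create_capacity_reservation(
    InstanceType="m5.xlarge",
    InstancePlatform="Linux/UNIX",
    AvailabilityZone="us-west-2a",
    InstanceCount=10,
)
```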

18
Q

QUESTION 683
A company is deploying an application that processes large quantities of data in parallel. The
company plans to use Amazon EC2 instances for the workload. The network architecture must be
configurable to prevent groups of nodes from sharing the same underlying hardware.
Which networking solution meets these requirements?
A. Run the EC2 instances in a spread placement group.
B. Group the EC2 instances in separate accounts.
C. Configure the EC2 instances with dedicated tenancy.
D. Configure the EC2 instances with shared tenancy.

A

C. Configure the EC2 instances with dedicated tenancy.

Explanation:
Configuring the EC2 instances with dedicated tenancy ensures that each instance will run on
isolated, single-tenant hardware. This meets the requirement to prevent groups of nodes from
sharing underlying hardware.
A spread placement group does place instances on distinct underlying hardware, but it is
limited to seven running instances per Availability Zone, which makes it impractical for
larger groups of nodes.
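
A boto3 sketch of launching a node group with dedicated tenancy (AMI and sizes are
placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Each group of nodes launches onto single-tenant hardware.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # placeholder AMI
    InstanceType="c5.4xlarge",
    MinCount=4,
    MaxCount=4,
    Placement={"Tenancy": "dedicated"},
)
```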

19
Q

QUESTION 682
A company has 5 PB of archived data on physical tapes. The company needs to preserve the
data on the tapes for another 10 years for compliance purposes. The company wants to migrate
to AWS in the next 6 months. The data center that stores the tapes has a 1 Gbps uplink internet
connectivity.
Which solution will meet these requirements MOST cost-effectively?
A. Read the data from the tapes on premises. Stage the data in a local NFS storage. Use AWS
DataSync to migrate the data to Amazon S3 Glacier Flexible Retrieval.
B. Use an on-premises backup application to read the data from the tapes and to write directly to
Amazon S3 Glacier Deep Archive.
C. Order multiple AWS Snowball devices that have Tape Gateway. Copy the physical tapes to virtual
tapes in Snowball. Ship the Snowball devices to AWS. Create a lifecycle policy to move the tapes
to Amazon S3 Glacier Deep Archive.
D. Configure an on-premises Tape Gateway. Create virtual tapes in the AWS Cloud. Use backup
software to copy the physical tape to the virtual tape.

A

C. Order multiple AWS Snowball devices that have Tape Gateway. Copy the physical tapes to virtual
tapes in Snowball. Ship the Snowball devices to AWS. Create a lifecycle policy to move the tapes
to Amazon S3 Glacier Deep Archive.

20
Q

QUESTION 681
An ecommerce company uses Amazon Route 53 as its DNS provider. The company hosts its
website on premises and in the AWS Cloud. The company’s on-premises data center is near the
us-west-1 Region. The company uses the eu-central-1 Region to host the website. The company
wants to minimize load time for the website as much as possible.
Which solution will meet these requirements?
A. Set up a geolocation routing policy. Send the traffic that is near us-west-1 to the on-premises data
center. Send the traffic that is near eu-central-1 to eu-central-1.
B. Set up a simple routing policy that routes all traffic that is near eu-central-1 to eu-central-1 and
routes all traffic that is near the on-premises datacenter to the on-premises data center.
C. Set up a latency routing policy. Associate the policy with us-west-1.
D. Set up a weighted routing policy. Split the traffic evenly between eu-central-1 and the on-
premises data center.

A

A. Set up a geolocation routing policy. Send the traffic that is near us-west-1 to the on-premises data
center. Send the traffic that is near eu-central-1 to eu-central-1.

Explanation:
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy-geo.html
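
A hedged boto3 sketch of the two geolocation records; zone ID, domain, and IPs are
placeholders (a production setup would also add a default record for locations that match
neither):

```python
import boto3

route53 = boto3.client("route53")

def geo_record(set_id, continent, target_ip):
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "www.example.com",
            "Type": "A",
            "SetIdentifier": set_id,
            "GeoLocation": {"ContinentCode": continent},
            "TTL": 60,
            "ResourceRecords": [{"Value": target_ip}],
        },
    }

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABC",  # placeholder zone and IPs
    ChangeBatch={"Changes": [
        geo_record("north-america", "NA", "198.51.100.10"),  # on-premises DC
        geo_record("europe", "EU", "203.0.113.10"),          # eu-central-1
    ]},
)
```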

21
Q

QUESTION 680
A company runs a stateful production application on Amazon EC2 instances. The application
requires at least two EC2 instances to always be running.
A solutions architect needs to design a highly available and fault-tolerant architecture for the
application. The solutions architect creates an Auto Scaling group of EC2 instances.
Which set of additional steps should the solutions architect take to meet these requirements?
A. Set the Auto Scaling group’s minimum capacity to two. Deploy one On-Demand Instance in one
Availability Zone and one On-Demand Instance in a second Availability Zone.
B. Set the Auto Scaling group’s minimum capacity to four. Deploy two On-Demand Instances in one
Availability Zone and two On-Demand Instances in a second Availability Zone.
C. Set the Auto Scaling group’s minimum capacity to two. Deploy four Spot Instances in one
Availability Zone.
D. Set the Auto Scaling group’s minimum capacity to four. Deploy two On-Demand Instances in one
Availability Zone and two Spot Instances in a second Availability Zone.

A

B. Set the Auto Scaling group’s minimum capacity to four. Deploy two On-Demand Instances in one
Availability Zone and two On-Demand Instances in a second Availability Zone.

Explanation:
By setting the Auto Scaling group’s minimum capacity to four, the architect ensures that there are
always at least two running instances. Deploying two On-Demand Instances in each of two Availability Zones ensures that the application is highly available and fault-tolerant. If one
Availability Zone becomes unavailable, the application can still run in the other Availability Zone.

22
Q

QUESTION 679
A company uses locally attached storage to run a latency-sensitive application on premises. The
company is using a lift and shift method to move the application to the AWS Cloud. The company
does not want to change the application architecture.
Which solution will meet these requirements MOST cost-effectively?
A. Configure an Auto Scaling group with an Amazon EC2 instance. Use an Amazon FSx for Lustre
file system to run the application.
B. Host the application on an Amazon EC2 instance. Use an Amazon Elastic Block Store (Amazon
EBS) GP2 volume to run the application.
C. Configure an Auto Scaling group with an Amazon EC2 instance. Use an Amazon FSx for
OpenZFS file system to run the application.
D. Host the application on an Amazon EC2 instance. Use an Amazon Elastic Block Store (Amazon
EBS) GP3 volume to run the application.

A

D. Host the application on an Amazon EC2 instance. Use an Amazon Elastic Block Store (Amazon
EBS) GP3 volume to run the application.
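
A boto3 sketch showing the gp3 tunables; size, IOPS, and throughput values are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# gp3 decouples IOPS and throughput from volume size, unlike gp2.
ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=500,            # GiB
    VolumeType="gp3",
    Iops=6000,           # provision beyond the 3,000 baseline if needed
    Throughput=500,      # MiB/s, up to 1,000
)
```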

23
Q

QUESTION 678
A company runs an application that uses Amazon RDS for PostgreSQL. The application receives
traffic only on weekdays during business hours. The company wants to optimize costs and
reduce operational overhead based on this usage.
Which solution will meet these requirements?
A. Use the Instance Scheduler on AWS to configure start and stop schedules.
B. Turn off automatic backups. Create weekly manual snapshots of the database.
C. Create a custom AWS Lambda function to start and stop the database based on minimum CPU
utilization.
D. Purchase All Upfront reserved DB instances.

A

A. Use the Instance Scheduler on AWS to configure start and stop schedules.

Explanation:
The Instance Scheduler on AWS solution automates the starting and stopping of Amazon Elastic
Compute Cloud (Amazon EC2) and Amazon Relational Database Service (Amazon RDS)
instances.
This solution helps reduce operational costs by stopping resources that are not in use and
starting them when they are needed. The cost savings can be significant if you leave all of your
instances running at full utilization continuously.
https://aws.amazon.com/solutions/implementations/instance-scheduler-on-aws/
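
The deployed solution acts on a schedule tag; a sketch assuming a schedule named
business-hours already exists in the scheduler's configuration and the default Schedule tag
key is used:

```python
import boto3

rds = boto3.client("rds")

# Instance Scheduler starts and stops resources that carry its schedule tag.
rds.add_tags_to_resource(
    ResourceName="arn:aws:rds:us-east-1:111122223333:db:app-postgres",  # placeholder
    Tags=[{"Key": "Schedule", "Value": "business-hours"}],
)
```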

24
Q

QUESTION 677
A company deployed a serverless application that uses Amazon DynamoDB as a database layer.
The application has experienced a large increase in users. The company wants to improve
database response time from milliseconds to microseconds and to cache requests to the
database.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use DynamoDB Accelerator (DAX).
B. Migrate the database to Amazon Redshift.
C. Migrate the database to Amazon RDS.
D. Use Amazon ElastiCache for Redis.

A

A. Use DynamoDB Accelerator (DAX).

Explanation:
Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for
Amazon DynamoDB that delivers up to a 10 times performance improvement - from milliseconds
to microseconds - even at millions of requests per second.
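
A sketch using the DAX Python client, following the developer guide pattern; the cluster
endpoint and table name are placeholders:

```python
from amazondax import AmazonDaxClient  # pip install amazon-dax-client

# The DAX client mirrors the DynamoDB resource API, so swapping it in
# requires almost no application changes.
dax = AmazonDaxClient.resource(
    endpoint_url="daxs://my-dax.abc123.dax-clusters.us-east-1.amazonaws.com"
)
table = dax.Table("Orders")

# Reads are served from the in-memory cache in microseconds on a hit.
item = table.get_item(Key={"order_id": "12345"})
```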

25
Q

QUESTION 676
A company uses an Amazon CloudFront distribution to serve content pages for its website. The
company needs to ensure that clients use a TLS certificate when accessing the company’s
website. The company wants to automate the creation and renewal of the TLS certificates.
Which solution will meet these requirements with the MOST operational efficiency?
A. Use a CloudFront security policy to create a certificate.
B. Use a CloudFront origin access control (OAC) to create a certificate.
C. Use AWS Certificate Manager (ACM) to create a certificate. Use DNS validation for the domain.
D. Use AWS Certificate Manager (ACM) to create a certificate. Use email validation for the domain.

A

C. Use AWS Certificate Manager (ACM) to create a certificate. Use DNS validation for the domain.

Explanation:
AWS Certificate Manager (ACM) provides free public TLS/SSL certificates and handles certificate
renewals automatically.
Using DNS validation with ACM is operationally efficient since it automatically makes changes to
Route 53 rather than requiring manual validation steps.
ACM integrates natively with CloudFront distributions for delivering HTTPS content.
CloudFront security policies and origin access controls do not issue TLS certificates.
Email validation requires manual steps to approve the domain validation emails for each renewal.
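
A minimal boto3 sketch; note that a certificate used with CloudFront must be requested in
us-east-1 (domain names are placeholders):

```python
import boto3

acm = boto3.client("acm", region_name="us-east-1")

response = acm.request_certificate(
    DomainName="www.example.com",
    ValidationMethod="DNS",
    SubjectAlternativeNames=["example.com"],
)
# Once the validation CNAME exists in Route 53, ACM renews the certificate
# automatically; describe_certificate returns the record to create.
print(response["CertificateArn"])
```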

26
Q

QUESTION 675
A company is building a RESTful serverless web application on AWS by using Amazon API
Gateway and AWS Lambda. The users of this web application will be geographically distributed,
and the company wants to reduce the latency of API requests to these users.
Which type of endpoint should a solutions architect use to meet these requirements?
A. Private endpoint
B. Regional endpoint
C. Interface VPC endpoint
D. Edge-optimized endpoint

A

D. Edge-optimized endpoint

Explanation:
An edge-optimized API endpoint typically routes requests to the nearest CloudFront Point of
Presence (POP), which could help in cases where your clients are geographically distributed.
This is the default endpoint type for API Gateway REST APIs.
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-endpoint-types.html
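
A one-call boto3 sketch with a placeholder API name:

```python
import boto3

apigateway = boto3.client("apigateway")

# EDGE routes client requests through the nearest CloudFront POP.
apigateway.create_rest_api(
    name="global-serverless-api",
    endpointConfiguration={"types": ["EDGE"]},
)
```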

27
Q

QUESTION 674
A company deploys its applications on Amazon Elastic Kubernetes Service (Amazon EKS)
behind an Application Load Balancer in an AWS Region. The application needs to store data in a
PostgreSQL database engine. The company wants the data in the database to be highly
available. The company also needs increased capacity for read workloads.
Which solution will meet these requirements with the MOST operational efficiency?
A. Create an Amazon DynamoDB database table configured with global tables.
B. Create an Amazon RDS database with Multi-AZ deployments.
C. Create an Amazon RDS database with Multi-AZ DB cluster deployment.
D. Create an Amazon RDS database configured with cross-Region read replicas.

A

C. Create an Amazon RDS database with Multi-AZ DB cluster deployment.

Explanation:
DB cluster deployment can scale read workloads by adding read replicas. This provides
increased capacity for read workloads without impacting the write workload.

28
Q

QUESTION 673
A financial services company launched a new application that uses an Amazon RDS for MySQL
database. The company uses the application to track stock market trends. The company needs to
operate the application for only 2 hours at the end of each week. The company needs to optimize
the cost of running the database.
Which solution will meet these requirements MOST cost-effectively?
A. Migrate the existing RDS for MySQL database to an Aurora Serverless v2 MySQL database
cluster.
B. Migrate the existing RDS for MySQL database to an Aurora MySQL database cluster.
C. Migrate the existing RDS for MySQL database to an Amazon EC2 instance that runs MySQL.
Purchase an instance reservation for the EC2 instance.
D. Migrate the existing RDS for MySQL database to an Amazon Elastic Container Service (Amazon
ECS) cluster that uses MySQL container images to run tasks.

A

A. Migrate the existing RDS for MySQL database to an Aurora Serverless v2 MySQL database
cluster.

Explanation:
Aurora Serverless v2 scales compute capacity automatically based on actual usage, down to a
small minimum capacity when the database is idle. This minimizes costs for intermittent usage.
Because the application runs for only 2 hours per week, it is ideal for a serverless
architecture like Aurora Serverless.
Aurora Serverless v2 bills per ACU-second for the capacity actually consumed, so a database
that is busy for 2 hours a week costs far less than a continuously provisioned instance.
Aurora Serverless provides higher availability than self-managed MySQL on EC2 or ECS.
Reserved EC2 instances and ECS still incur charges when the database is not in use, unlike
the fine-grained scaling of serverless.
Standard Aurora clusters run provisioned instances continuously, unlike the auto scaling
serverless architecture.

29
Q

QUESTION 672
A company wants to use an event-driven programming model with AWS Lambda. The company
wants to reduce startup latency for Lambda functions that run on Java 11. The company does not
have strict latency requirements for the applications. The company wants to reduce cold starts
and outlier latencies when a function scales up.
Which solution will meet these requirements MOST cost-effectively?
A. Configure Lambda provisioned concurrency.
B. Increase the timeout of the Lambda functions.
C. Increase the memory of the Lambda functions.
D. Configure Lambda SnapStart.

A

D. Configure Lambda SnapStart.

Explanation:
Lambda SnapStart for Java can improve startup performance for latency-sensitive applications by
up to 10x at no extra cost, typically with no changes to your function code.
https://docs.aws.amazon.com/lambda/latest/dg/snapstart.html
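
A boto3 sketch of enabling SnapStart; the function name is a placeholder:

```python
import boto3

lam = boto3.client("lambda")

# Enable SnapStart; it takes effect on published versions only.
lam.update_function_configuration(
    FunctionName="java11-order-service",   # placeholder function name
    SnapStart={"ApplyOn": "PublishedVersions"},
)
lam.get_waiter("function_updated_v2").wait(FunctionName="java11-order-service")

# Publishing a version creates the snapshot that later invocations resume from.
lam.publish_version(FunctionName="java11-order-service")
```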

30
Q

QUESTION 671
A company runs an application on AWS. The application receives inconsistent amounts of usage.
The application uses AWS Direct Connect to connect to an on-premises MySQL-compatible
database. The on-premises database consistently uses a minimum of 2 GiB of memory.
The company wants to migrate the on-premises database to a managed AWS service. The
company wants to use auto scaling capabilities to manage unexpected workload increases.
Which solution will meet these requirements with the LEAST administrative overhead?
A. Provision an Amazon DynamoDB database with default read and write capacity settings.
B. Provision an Amazon Aurora database with a minimum capacity of 1 Aurora capacity unit (ACU).
C. Provision an Amazon Aurora Serverless v2 database with a minimum capacity of 1 Aurora
capacity unit (ACU).
D. Provision an Amazon RDS for MySQL database with 2 GiB of memory.

A

C. Provision an Amazon Aurora Serverless v2 database with a minimum capacity of 1 Aurora
capacity unit (ACU).

Explanation:
Aurora Serverless v2 provides auto scaling, so the database can absorb inconsistent workloads
and spikes automatically without administrator intervention.
It scales down to its configured minimum capacity during quiet periods to minimize costs.
The minimum of 1 ACU (approximately 2 GiB of memory) is sufficient to replace the on-premises
2 GiB database based on the information given.
Serverless capacity management reduces administrative overhead.
DynamoDB is not MySQL-compatible, so the application would have to be rewritten.
RDS and provisioned Aurora require manually resizing instances to scale, increasing
administrative overhead.

31
Q

QUESTION 670
A company is creating a REST API. The company has strict requirements for the use of TLS. The
company requires TLSv1.3 on the API endpoints. The company also requires a specific public
third-party certificate authority (CA) to sign the TLS certificate.
Which solution will meet these requirements?
A. Use a local machine to create a certificate that is signed by the third-party CA. Import the
certificate into AWS Certificate Manager (ACM). Create an HTTP API in Amazon API Gateway with a
custom domain. Configure the custom domain to use the certificate.
B. Create a certificate in AWS Certificate Manager (ACM) that is signed by the third-party CA.
Create an HTTP API in Amazon API Gateway with a custom domain. Configure the custom
domain to use the certificate.
C. Use AWS Certificate Manager (ACM) to create a certificate that is signed by the third-party CA.
Import the certificate into AWS Certificate Manager (ACM). Create an AWS Lambda function with
a Lambda function URL. Configure the Lambda function URL to use the certificate.
D. Create a certificate in AWS Certificate Manager (ACM) that is signed by the third-party CA.
Create an AWS Lambda function with a Lambda function URL. Configure the Lambda function
URL to use the certificate.

A

A. Use a local machine to create a certificate that is signed by the third-party CA. Import the
certificate into AWS Certificate Manager (ACM). Create an HTTP API in Amazon API Gateway with a
custom domain. Configure the custom domain to use the certificate.

32
Q

QUESTION 669
A company has a large workload that runs every Friday evening. The workload runs on Amazon
EC2 instances that are in two Availability Zones in the us-east-1 Region. Normally, the company must run no more than two instances at all times. However, the company wants to scale up to six
instances each Friday to handle a regularly repeating increased workload.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create a reminder in Amazon EventBridge to scale the instances.
B. Create an Auto Scaling group that has a scheduled action.
C. Create an Auto Scaling group that uses manual scaling.
D. Create an Auto Scaling group that uses automatic scaling.

A

B. Create an Auto Scaling group that has a scheduled action.

Explanation:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-scheduled-scaling.html
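
A boto3 sketch of the two scheduled actions; group name and times (UTC) are placeholders:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Scale out every Friday evening, back in on Saturday morning.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="weekly-batch-asg",
    ScheduledActionName="friday-scale-out",
    Recurrence="0 18 * * FRI",
    MinSize=6, MaxSize=6, DesiredCapacity=6,
)
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="weekly-batch-asg",
    ScheduledActionName="saturday-scale-in",
    Recurrence="0 6 * * SAT",
    MinSize=2, MaxSize=2, DesiredCapacity=2,
)
```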

33
Q

QUESTION 668
An Amazon EventBridge rule targets a third-party API. The third-party API has not received any
incoming traffic. A solutions architect needs to determine whether the rule conditions are being
met and if the rule’s target is being invoked.
Which solution will meet these requirements?
A. Check for metrics in Amazon CloudWatch in the namespace for AWS/Events.
B. Review events in the Amazon Simple Queue Service (Amazon SQS) dead-letter queue.
C. Check for the events in Amazon CloudWatch Logs.
D. Check the trails in AWS CloudTrail for the EventBridge events.

A

A. Check for metrics in Amazon CloudWatch in the namespace for AWS/Events.
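
A boto3 sketch of pulling the relevant AWS/Events metrics for the rule; the rule name is a
placeholder:

```python
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")

# TriggeredRules shows whether the rule matched; Invocations shows whether
# the target was invoked. Zero Invocations with nonzero TriggeredRules
# points at the target configuration.
for metric in ("TriggeredRules", "Invocations", "FailedInvocations"):
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/Events",
        MetricName=metric,
        Dimensions=[{"Name": "RuleName", "Value": "third-party-api-rule"}],
        StartTime=datetime.now(timezone.utc) - timedelta(hours=24),
        EndTime=datetime.now(timezone.utc),
        Period=3600,
        Statistics=["Sum"],
    )
    print(metric, sum(p["Sum"] for p in stats["Datapoints"]))
```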

34
Q

QUESTION 667
A solutions architect is designing the storage architecture for a new web application used for
storing and viewing engineering drawings. All application components will be deployed on the
AWS infrastructure.
The application design must support caching to minimize the amount of time that users wait for
the engineering drawings to load. The application must be able to store petabytes of data.
Which combination of storage and caching should the solutions architect use?
A. Amazon S3 with Amazon CloudFront
B. Amazon S3 Glacier with Amazon ElastiCache
C. Amazon Elastic Block Store (Amazon EBS) volumes with Amazon CloudFront
D. AWS Storage Gateway with Amazon ElastiCache

A

A. Amazon S3 with Amazon CloudFront

35
Q

QUESTION 666
A solutions architect is designing a workload that will store hourly energy consumption by
business tenants in a building. The sensors will feed a database through HTTP requests that will
add up usage for each tenant. The solutions architect must use managed services when possible.
The workload will receive more features in the future as the solutions architect adds independent
components.

Which solution will meet these requirements with the LEAST operational overhead?
A. Use Amazon API Gateway with AWS Lambda functions to receive the data from the sensors,
process the data, and store the data in an Amazon DynamoDB table.
B. Use an Elastic Load Balancer that is supported by an Auto Scaling group of Amazon EC2
instances to receive and process the data from the sensors. Use an Amazon S3 bucket to store
the processed data.
C. Use Amazon API Gateway with AWS Lambda functions to receive the data from the sensors,
process the data, and store the data in a Microsoft SQL Server Express database on an Amazon
EC2 instance.
D. Use an Elastic Load Balancer that is supported by an Auto Scaling group of Amazon EC2
instances to receive and process the data from the sensors. Use an Amazon Elastic File System
(Amazon EFS) shared file system to store the processed data.

A

A. Use Amazon API Gateway with AWS Lambda functions to receive the data from the sensors,
process the data, and store the data in an Amazon DynamoDB table.
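
For illustration, a sketch of the Lambda handler behind API Gateway; the table name and
payload fields are placeholder assumptions:

```python
import json
from decimal import Decimal

import boto3

table = boto3.resource("dynamodb").Table("TenantEnergyUsage")  # placeholder table

def handler(event, context):
    # API Gateway proxy integration delivers the sensor's HTTP body here.
    reading = json.loads(event["body"], parse_float=Decimal)  # DynamoDB needs Decimal
    # ADD atomically accumulates usage per tenant per hour.
    table.update_item(
        Key={"tenant_id": reading["tenant_id"], "hour": reading["hour"]},
        UpdateExpression="ADD kwh :v",
        ExpressionAttributeValues={":v": reading["kwh"]},
    )
    return {"statusCode": 200, "body": json.dumps({"status": "recorded"})}
```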

36
Q

QUESTION 665
A company runs multiple Amazon EC2 Linux instances in a VPC across two Availability Zones.
The instances host applications that use a hierarchical directory structure. The applications need
to read and write rapidly and concurrently to shared storage.
What should a solutions architect do to meet these requirements?
A. Create an Amazon S3 bucket. Allow access from all the EC2 instances in the VPC.
B. Create an Amazon Elastic File System (Amazon EFS) file system. Mount the EFS file system
from each EC2 instance.
C. Create a file system on a Provisioned IOPS SSD (io2) Amazon Elastic Block Store (Amazon
EBS) volume. Attach the EBS volume to all the EC2 instances.
D. Create file systems on Amazon Elastic Block Store (Amazon EBS) volumes that are attached to
each EC2 instance. Synchronize the EBS volumes across the different EC2 instances.

A

B. Create an Amazon Elastic File System (Amazon EFS) file system. Mount the EFS file system
from each EC2 instance.

Explanation:
How is Amazon EFS different than Amazon S3?
Amazon EFS provides shared access to data using a traditional file sharing permissions model
and hierarchical directory structure via the NFSv4 protocol. Applications that access data using a
standard file system interface provided through the operating system can use Amazon EFS to
take advantage of the scalability and reliability of file storage in the cloud without writing any new
code or adjusting applications.
Amazon S3 is an object storage platform that uses a simple API for storing and accessing data.
Applications that do not require a file system structure and are designed to work with object
storage can use Amazon S3 as a massively scalable, durable, low-cost object storage solution.

37
Q

QUESTION 664
A company has an on-premises MySQL database that handles transactional data. The company
is migrating the database to the AWS Cloud. The migrated database must maintain compatibility
with the company’s applications that use the database. The migrated database also must scale
automatically during periods of increased demand.
Which migration solution will meet these requirements?

A. Use native MySQL tools to migrate the database to Amazon RDS for MySQL. Configure elastic
storage scaling.
B. Migrate the database to Amazon Redshift by using the mysqldump utility. Turn on Auto Scaling
for the Amazon Redshift cluster.
C. Use AWS Database Migration Service (AWS DMS) to migrate the database to Amazon Aurora.
Turn on Aurora Auto Scaling.
D. Use AWS Database Migration Service (AWS DMS) to migrate the database to Amazon
DynamoDB. Configure an Auto Scaling policy.

A

C. Use AWS Database Migration Service (AWS DMS) to migrate the database to Amazon Aurora.
Turn on Aurora Auto Scaling.

Explanation:
DMS provides an easy migration path from MySQL to Aurora while minimizing downtime.
Aurora is a MySQL-compatible relational database service that will maintain compatibility with the
company’s applications.
Aurora Auto Scaling allows the database to automatically scale up and down based on demand
to handle increased workloads.
RDS MySQL (Option A) does not scale as well as the Aurora architecture.
Redshift (Option B) is for analytics, not transactional data, and may not be compatible.
DynamoDB (Option D) is a NoSQL datastore and lacks MySQL compatibility.

38
Q

QUESTION 663
A company is building an ecommerce application and needs to store sensitive customer
information. The company needs to give customers the ability to complete purchase transactions
on the website. The company also needs to ensure that sensitive customer data is protected,
even from database administrators.
Which solution meets these requirements?
A. Store sensitive data in an Amazon Elastic Block Store (Amazon EBS) volume. Use EBS
encryption to encrypt the data. Use an IAM instance role to restrict access.
B. Store sensitive data in Amazon RDS for MySQL. Use AWS Key Management Service (AWS
KMS) client-side encryption to encrypt the data.
C. Store sensitive data in Amazon S3. Use AWS Key Management Service (AWS KMS) server-side
encryption to encrypt the data. Use S3 bucket policies to restrict access.
D. Store sensitive data in Amazon FSx for Windows Server. Mount the file share on application
servers. Use Windows file permissions to restrict access.

A

B. Store sensitive data in Amazon RDS for MySQL. Use AWS Key Management Service (AWS
KMS) client-side encryption to encrypt the data.

Explanation:
RDS MySQL provides a fully managed database service well suited for an ecommerce
application.
AWS KMS client-side encryption allows encrypting sensitive data before it hits the database. The
data remains encrypted at rest.
This protects sensitive customer data from database admins and privileged users.
EBS encryption (Option A) protects data at rest but not in use. IAM roles don’t prevent admin
access.
S3 (Option C) encrypts data at rest on the server side. Bucket policies don’t restrict admin
access.
FSx file permissions (Option D) don’t prevent admin access to unencrypted data.
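
For illustration, a sketch of client-side field encryption with the KMS Encrypt API; the key
alias is a placeholder, and values over 4 KB would need envelope encryption (for example, the
AWS Encryption SDK):

```python
import base64

import boto3

kms = boto3.client("kms")
KEY_ID = "alias/customer-data"  # placeholder KMS key alias

def encrypt_field(plaintext):
    # Encrypt before the value ever reaches the database, so DBAs
    # only ever see ciphertext. KMS Encrypt handles up to 4 KB.
    resp = kms.encrypt(KeyId=KEY_ID, Plaintext=plaintext.encode())
    return base64.b64encode(resp["CiphertextBlob"]).decode()

def decrypt_field(ciphertext_b64):
    resp = kms.decrypt(CiphertextBlob=base64.b64decode(ciphertext_b64))
    return resp["Plaintext"].decode()
```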

39
Q

QUESTION 662
A company runs its applications on both Amazon Elastic Kubernetes Service (Amazon EKS)
clusters and on-premises Kubernetes clusters. The company wants to view all clusters and
workloads from a central location.
Which solution will meet these requirements with the LEAST operational overhead?

A. Use Amazon CloudWatch Container Insights to collect and group the cluster information.
B. Use Amazon EKS Connector to register and connect all Kubernetes clusters.
C. Use AWS Systems Manager to collect and view the cluster information.
D. Use Amazon EKS Anywhere as the primary cluster to view the other clusters with native
Kubernetes commands.

A

B. Use Amazon EKS Connector to register and connect all Kubernetes clusters.

Explanation:
You can use Amazon EKS Connector to register and connect any conformant Kubernetes cluster
to AWS and visualize it in the Amazon EKS console. After a cluster is connected, you can see the
status, configuration, and workloads for that cluster in the Amazon EKS console.
https://docs.aws.amazon.com/eks/latest/userguide/eks-connector.html

40
Q

QUESTION 661
A solutions architect needs to ensure that API calls to Amazon DynamoDB from Amazon EC2
instances in a VPC do not travel across the internet.
Which combination of steps should the solutions architect take to meet this requirement?
(Choose two.)
A. Create a route table entry for the endpoint.
B. Create a gateway endpoint for DynamoDB.
C. Create an interface endpoint for Amazon EC2.
D. Create an elastic network interface for the endpoint in each of the subnets of the VPC.
E. Create a security group entry in the endpoint’s security group to provide access.

A

A. Create a route table entry for the endpoint.
B. Create a gateway endpoint for DynamoDB.

Explanation:
https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-ddb.html
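
A boto3 sketch covering both steps in one call, since creating the gateway endpoint with
route table IDs adds the route entries (IDs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# The gateway endpoint adds a managed prefix-list route to the route
# tables, keeping DynamoDB traffic on the AWS network.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    RouteTableIds=["rtb-0abc", "rtb-0def"],
)
```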

41
Q

QUESTION 660
A company’s website handles millions of requests each day, and the number of requests
continues to increase. A solutions architect needs to improve the response time of the web
application. The solutions architect determines that the application needs to decrease latency
when retrieving product details from the Amazon DynamoDB table.
Which solution will meet these requirements with the LEAST amount of operational overhead?
A. Set up a DynamoDB Accelerator (DAX) cluster. Route all read requests through DAX.
B. Set up Amazon ElastiCache for Redis between the DynamoDB table and the web application.
Route all read requests through Redis.
C. Set up Amazon ElastiCache for Memcached between the DynamoDB table and the web
application. Route all read requests through Memcached.
D. Set up Amazon DynamoDB Streams on the table, and have AWS Lambda read from the table
and populate Amazon ElastiCache. Route all read requests through ElastiCache.

A

A. Set up a DynamoDB Accelerator (DAX) cluster. Route all read requests through DAX.

Explanation:
DAX provides a DynamoDB-compatible caching layer to reduce read latency. It is purpose-built
for accelerating DynamoDB workloads.
Using DAX requires minimal application changes - only read requests are routed through it.
DAX handles caching logic automatically without needing complex integration code.
ElastiCache Redis/Memcached (Options B/C) require more integration work to sync DynamoDB
data.
Using Lambda and Streams to populate ElastiCache (Option D) is a complex event-driven
approach requiring ongoing maintenance.
DAX plugs in seamlessly to accelerate DynamoDB with very little operational overhead.

42
Q

QUESTION 659
A company’s solutions architect is designing an AWS multi-account solution that uses AWS
Organizations. The solutions architect has organized the company’s accounts into organizational
units (OUs).
The solutions architect needs a solution that will identify any changes to the OU hierarchy. The
solution also needs to notify the company’s operations team of any changes.
Which solution will meet these requirements with the LEAST operational overhead?
A. Provision the AWS accounts by using AWS Control Tower. Use account drift notifications to
identify the changes to the OU hierarchy.
B. Provision the AWS accounts by using AWS Control Tower. Use AWS Config aggregated rules to
identify the changes to the OU hierarchy.
C. Use AWS Service Catalog to create accounts in Organizations. Use an AWS CloudTrail
organization trail to identify the changes to the OU hierarchy.
D. Use AWS CloudFormation templates to create accounts in Organizations. Use the drift detection
operation on a stack to identify the changes to the OU hierarchy.

A

A. Provision the AWS accounts by using AWS Control Tower. Use account drift notifications to
identify the changes to the OU hierarchy.

Explanation:
The key advantages of Control Tower here:
A fully managed service that simplifies multi-account setup.
Built-in account drift notifications detect OU changes automatically.
More scalable and less complex than Config rules or CloudTrail.
Better security and compliance guardrails than custom options.
Lower operational overhead compared to the other solutions.

43
Q

QUESTION 658
A company hosts multiple applications on AWS for different product lines. The applications use
different compute resources, including Amazon EC2 instances and Application Load Balancers.
The applications run in different AWS accounts under the same organization in AWS Organizations across multiple AWS Regions. Teams for each product line have tagged each
compute resource in the individual accounts.
The company wants more details about the cost for each product line from the consolidated
billing feature in Organizations.
Which combination of steps will meet these requirements? (Choose two.)
A. Select a specific AWS generated tag in the AWS Billing console.
B. Select a specific user-defined tag in the AWS Billing console.
C. Select a specific user-defined tag in the AWS Resource Groups console.
D. Activate the selected tag from each AWS account.
E. Activate the selected tag from the Organizations management account.

A

B. Select a specific user-defined tag in the AWS Billing console.
E. Activate the selected tag from the Organizations management account.

Explanation:
User-defined tags were created by each product team to identify resources. Selecting the
relevant tag in the Billing console will group costs.
The tag must be activated from the Organizations management account to consolidate billing
across all accounts.
AWS generated tags are predefined by AWS and won’t align to product lines.
Resource Groups (Option C) helps manage resources but not billing.
Activating the tag from each account (Option D) is not needed since Organizations centralizes
billing.
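
A sketch of activating the tag through the Cost Explorer API, run from the management
account; the tag key is a placeholder:

```python
import boto3

# Must run with credentials from the Organizations management account.
ce = boto3.client("ce")

ce.update_cost_allocation_tags_status(
    CostAllocationTagsStatus=[
        {"TagKey": "product-line", "Status": "Active"}
    ]
)
```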

44
Q

QUESTION 657
A company has two VPCs that are located in the us-west-2 Region within the same AWS
account. The company needs to allow network traffic between these VPCs. Approximately 500
GB of data transfer will occur between the VPCs each month.
What is the MOST cost-effective solution to connect these VPCs?
A. Implement AWS Transit Gateway to connect the VPCs. Update the route tables of each VPC to
use the transit gateway for inter-VPC communication.
B. Implement an AWS Site-to-Site VPN tunnel between the VPCs. Update the route tables of each
VPC to use the VPN tunnel for inter-VPC communication.
C. Set up a VPC peering connection between the VPCs. Update the route tables of each VPC to use
the VPC peering connection for inter-VPC communication.
D. Set up a 1 GB AWS Direct Connect connection between the VPCs. Update the route tables of
each VPC to use the Direct Connect connection for inter-VPC communication.

A

C. Set up a VPC peering connection between the VPCs. Update the route tables of each VPC to use
the VPC peering connection for inter-VPC communication.

Explanation:
VPC peering provides private connectivity between VPCs without using public IP space.
VPC peering itself has no hourly charge; intra-Region data transfer across a peering connection
is free within the same Availability Zone and billed at low standard rates across AZs.
500 GB/month of inter-VPC data transfer is therefore inexpensive over peering.
Transit Gateway (Option A) incurs hourly charges plus data transfer fees. More costly than
peering.
Site-to-Site VPN (Option B) incurs hourly charges and data transfer fees. More expensive than
peering.
Direct Connect (Option D) has high hourly charges and would be overkill for this use case.
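
A minimal boto3 sketch of the peering setup, assuming hypothetical VPC, route table, and CIDR values; because both VPCs are in the same account and Region, the connection can be created and accepted in one script.

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Hypothetical VPC IDs; the two CIDR blocks must not overlap.
pcx_id = ec2.create_vpc_peering_connection(
    VpcId="vpc-0aaaaaaaaaaaaaaaa", PeerVpcId="vpc-0bbbbbbbbbbbbbbbb"
)["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# Even same-account peering requires explicit acceptance.
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Each VPC's route table needs a route to the other VPC's CIDR.
ec2.create_route(RouteTableId="rtb-0aaaaaaaaaaaaaaaa",
                 DestinationCidrBlock="10.1.0.0/16",
                 VpcPeeringConnectionId=pcx_id)
ec2.create_route(RouteTableId="rtb-0bbbbbbbbbbbbbbbb",
                 DestinationCidrBlock="10.0.0.0/16",
                 VpcPeeringConnectionId=pcx_id)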

45
Q

QUESTION 656
A solutions architect manages an analytics application. The application stores large amounts of
semistructured data in an Amazon S3 bucket. The solutions architect wants to use parallel data
processing to process the data more quickly. The solutions architect also wants to use information that is stored in an Amazon Redshift database to enrich the data.
Which solution will meet these requirements?
A. Use Amazon Athena to process the S3 data. Use AWS Glue with the Amazon Redshift data to
enrich the S3 data.
B. Use Amazon EMR to process the S3 data. Use Amazon EMR with the Amazon Redshift data to
enrich the S3 data.
C. Use Amazon EMR to process the S3 data. Use Amazon Kinesis Data Streams to move the S3
data into Amazon Redshift so that the data can be enriched.
D. Use AWS Glue to process the S3 data. Use AWS Lake Formation with the Amazon Redshift data
to enrich the S3 data.

A

B. Use Amazon EMR to process the S3 data. Use Amazon EMR with the Amazon Redshift data to
enrich the S3 data.

Explanation:
Use Amazon EMR to process the semi-structured data in Amazon S3. EMR provides a managed
Hadoop framework optimized for processing large datasets in S3.
EMR supports parallel data processing across multiple nodes to speed up the processing.
EMR can integrate directly with Amazon Redshift using the EMR-Redshift integration. This allows
querying the Redshift data from EMR and joining it with the S3 data.
This enables enriching the semi-structured S3 data with the information stored in Redshift.

46
Q

QUESTION 655
A solutions architect is using an AWS CloudFormation template to deploy a three-tier web
application. The web application consists of a web tier and an application tier that stores and
retrieves user data in Amazon DynamoDB tables. The web and application tiers are hosted on
Amazon EC2 instances, and the database tier is not publicly accessible. The application EC2
instances need to access the DynamoDB tables without exposing API credentials in the template.
What should the solutions architect do to meet these requirements?
A. Create an IAM role to read the DynamoDB tables. Associate the role with the application
instances by referencing an instance profile.
B. Create an IAM role that has the required permissions to read and write from the DynamoDB
tables. Add the role to the EC2 instance profile, and associate the instance profile with the
application instances.
C. Use the parameter section in the AWS CloudFormation template to have the user input access
and secret keys from an already-created IAM user that has the required permissions to read and
write from the DynamoDB tables.
D. Create an IAM user in the AWS CloudFormation template that has the required permissions to
read and write from the DynamoDB tables. Use the GetAtt function to retrieve the access and
secret keys, and pass them to the application instances through the user data.

A

B. Create an IAM role that has the required permissions to read and write from the DynamoDB
tables. Add the role to the EC2 instance profile, and associate the instance profile with the
application instances.
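
A boto3 sketch of option B with hypothetical names: the role carries the DynamoDB permissions, and the instance profile is the wrapper that attaches the role to the EC2 instances, so no credentials ever appear in the template.

import boto3, json

iam = boto3.client("iam")

# Trust policy that lets EC2 assume the role.
trust = {"Version": "2012-10-17",
         "Statement": [{"Effect": "Allow",
                        "Principal": {"Service": "ec2.amazonaws.com"},
                        "Action": "sts:AssumeRole"}]}
iam.create_role(RoleName="AppDynamoDBRole",
                AssumeRolePolicyDocument=json.dumps(trust))

# Hypothetical inline policy scoped to the application's table.
policy = {"Version": "2012-10-17",
          "Statement": [{"Effect": "Allow",
                         "Action": ["dynamodb:GetItem", "dynamodb:PutItem",
                                    "dynamodb:Query", "dynamodb:UpdateItem"],
                         "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/UserData"}]}
iam.put_role_policy(RoleName="AppDynamoDBRole", PolicyName="UserDataAccess",
                    PolicyDocument=json.dumps(policy))

# The instance profile is what the EC2 instances actually reference.
iam.create_instance_profile(InstanceProfileName="AppDynamoDBProfile")
iam.add_role_to_instance_profile(InstanceProfileName="AppDynamoDBProfile",
                                 RoleName="AppDynamoDBRole")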

47
Q

QUESTION 654
A company runs an application in a VPC with public and private subnets. The VPC extends
across multiple Availability Zones. The application runs on Amazon EC2 instances in private
subnets. The application uses an Amazon Simple Queue Service (Amazon SQS) queue.
A solutions architect needs to design a secure solution to establish a connection between the
EC2 instances and the SQS queue.
Which solution will meet these requirements?
A. Implement an interface VPC endpoint for Amazon SQS. Configure the endpoint to use the private
subnets. Add to the endpoint a security group that has an inbound access rule that allows traffic
from the EC2 instances that are in the private subnets.
B. Implement an interface VPC endpoint for Amazon SQS. Configure the endpoint to use the public
subnets. Attach to the interface endpoint a VPC endpoint policy that allows access from the EC2
instances that are in the private subnets.
C. Implement an interface VPC endpoint for Amazon SQS. Configure the endpoint to use the public
subnets. Attach an Amazon SQS access policy to the interface VPC endpoint that allows
requests from only a specified VPC endpoint.
D. Implement a gateway endpoint for Amazon SQS. Add a NAT gateway to the private subnets.
Attach an IAM role to the EC2 instances that allows access to the SQS queue.

A

A. Implement an interface VPC endpoint for Amazon SQS. Configure the endpoint to use the private
subnets. Add to the endpoint a security group that has an inbound access rule that allows traffic
from the EC2 instances that are in the private subnets.

Explanation:
An interface VPC endpoint is a private way to connect to AWS services without having to expose
your VPC to the public internet. This is the most secure way to connect to Amazon SQS from the
private subnets.
Configuring the endpoint to use the private subnets ensures that the traffic between the EC2
instances and the SQS queue is only within the VPC. This helps to protect the traffic from being
intercepted by a malicious actor.
Adding a security group to the endpoint that has an inbound access rule that allows traffic from
the EC2 instances that are in the private subnets further restricts the traffic to only the authorized
sources. This helps to prevent unauthorized access to the SQS queue.
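
A boto3 sketch of option A with hypothetical IDs: the endpoint lives in the private subnets, and its security group only admits HTTPS from the application instances' security group.

import boto3

ec2 = boto3.client("ec2")

ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0abc0abc0abc0abc0",
    ServiceName="com.amazonaws.us-east-1.sqs",  # Region-specific service name
    SubnetIds=["subnet-0priv1", "subnet-0priv2"],  # hypothetical private subnets
    SecurityGroupIds=["sg-0endpoint"],
    PrivateDnsEnabled=True,  # the standard SQS hostname resolves to the endpoint
)

# Endpoint security group: allow HTTPS only from the instances' security group.
ec2.authorize_security_group_ingress(
    GroupId="sg-0endpoint",
    IpPermissions=[{"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
                    "UserIdGroupPairs": [{"GroupId": "sg-0appinstances"}]}],
)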

48
Q

QUESTION 653
A company’s SAP application has a backend SQL Server database in an on-premises
environment. The company wants to migrate its on-premises application and database server to
AWS. The company needs an instance type that meets the high demands of its SAP database.
On-premises performance data shows that both the SAP application and the database have high
memory utilization.
Which solution will meet these requirements?
A. Use the compute optimized instance family for the application. Use the memory optimized
instance family for the database.
B. Use the storage optimized instance family for both the application and the database.
C. Use the memory optimized instance family for both the application and the database.
D. Use the high performance computing (HPC) optimized instance family for the application. Use the
memory optimized instance family for the database.

A

C. Use the memory optimized instance family for both the application and the database.

Explanation:
Since both the application and the database have high memory needs, a memory optimized
family such as R5 meets those requirements well.
Using the same instance family simplifies management and operations, rather than mixing
instance types.
Compute optimized instances may not provide enough memory for the SAP application's needs.
Storage optimized instances target high local storage I/O, not the high memory the database needs.
HPC instances are overprovisioned for the SAP application.

49
Q

QUESTION 652
A solutions architect needs to review a company’s Amazon S3 buckets to discover personally
identifiable information (PII). The company stores the PII data in the us-east-1 Region and us-
west-2 Region.
Which solution will meet these requirements with the LEAST operational overhead?

A. Configure Amazon Macie in each Region. Create a job to analyze the data that is in Amazon S3.
B. Configure AWS Security Hub for all Regions. Create an AWS Config rule to analyze the data that
is in Amazon S3.
C. Configure Amazon Inspector to analyze the data that is in Amazon S3.
D. Configure Amazon GuardDuty to analyze the data that is in Amazon S3.

A

A. Configure Amazon Macie in each Region. Create a job to analyze the data that is in Amazon S3.

Explanation:
Amazon Macie is designed specifically for discovering and classifying sensitive data like PII in S3.
This makes it the optimal service to use.
Macie can be enabled directly in the required Regions rather than enabling it across all Regions
which is unnecessary. This minimizes overhead.
Macie classification jobs can run on a schedule against the specified buckets, which keeps
ongoing discovery automated.
Security Hub is for security monitoring across AWS accounts, not specific for PII discovery. More
overhead than needed.
Inspector and GuardDuty are not built for PII discovery in S3 buckets. They provide broader
security capabilities.

50
Q

QUESTION 651
A company needs to optimize the cost of its Amazon EC2 instances. The company also needs to
change the type and family of its EC2 instances every 2-3 months.
What should the company do to meet these requirements?
A. Purchase Partial Upfront Reserved Instances for a 3-year term.
B. Purchase a No Upfront Compute Savings Plan for a 1-year term.
C. Purchase All Upfront Reserved Instances for a 1-year term.
D. Purchase an All Upfront EC2 Instance Savings Plan for a 1-year term.

A

B. Purchase a No Upfront Compute Savings Plan for a 1-year term.

Explanation:
A Compute Savings Plan applies automatically across instance family, size, Region, OS, and
tenancy, so the company can change instance type and family every 2-3 months and keep the
discount. EC2 Instance Savings Plans only allow changes WITHIN a family in a Region, which
does not meet the requirement.
https://aws.amazon.com/savingsplans/compute-pricing/

51
Q

QUESTION 650
A company has a financial application that produces reports. The reports average 50 KB in size
and are stored in Amazon S3. The reports are frequently accessed during the first week after
production and must be stored for several years. The reports must be retrievable within 6 hours.
Which solution meets these requirements MOST cost-effectively?
A. Use S3 Standard. Use an S3 Lifecycle rule to transition the reports to S3 Glacier after 7 days.
B. Use S3 Standard. Use an S3 Lifecycle rule to transition the reports to S3 Standard-Infrequent
Access (S3 Standard-IA) after 7 days.
C. Use S3 Intelligent-Tiering. Configure S3 Intelligent-Tiering to transition the reports to S3
Standard-Infrequent Access (S3 Standard-IA) and S3 Glacier.
D. Use S3 Standard. Use an S3 Lifecycle rule to transition the reports to S3 Glacier Deep Archive
after 7 days.

A

A. Use S3 Standard. Use an S3 Lifecycle rule to transition the reports to S3 Glacier after 7 days.

Explanation:
Amazon S3 Glacier:
Expedited Retrieval: Provides access to data within 1-5 minutes.
Standard Retrieval: Provides access to data within 3-5 hours.
Bulk Retrieval: Provides access to data within 5-12 hours.
Amazon S3 Glacier Deep Archive:
Standard Retrieval: Provides access to data within 12 hours.
Bulk Retrieval: Provides access to data within 48 hours.
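
Objects in S3 Glacier must be restored before they can be read. A sketch with a hypothetical bucket and key, using the Standard tier, whose 3-5 hour retrieval meets the 6-hour requirement:

import boto3

s3 = boto3.client("s3")

s3.restore_object(
    Bucket="financial-reports",        # hypothetical bucket
    Key="2024/weekly-report-031.pdf",  # hypothetical key
    RestoreRequest={
        "Days": 2,  # how long the temporary copy remains available
        "GlacierJobParameters": {"Tier": "Standard"},  # 3-5 hour retrieval
    },
)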

52
Q

QUESTION 649
A company is using AWS Key Management Service (AWS KMS) keys to encrypt AWS Lambda
environment variables. A solutions architect needs to ensure that the required permissions are in
place to decrypt and use the environment variables.
Which steps must the solutions architect take to implement the correct permissions? (Choose
two.)
A. Add AWS KMS permissions in the Lambda resource policy.
B. Add AWS KMS permissions in the Lambda execution role.
C. Add AWS KMS permissions in the Lambda function policy.
D. Allow the Lambda execution role in the AWS KMS key policy.
E. Allow the Lambda resource policy in the AWS KMS key policy.

A

B. Add AWS KMS permissions in the Lambda execution role.
D. Allow the Lambda execution role in the AWS KMS key policy.

Explanation:
To decrypt environment variables encrypted with AWS KMS, Lambda needs to be granted
permissions to call KMS APIs. This is done in two places:
The Lambda execution role needs kms:Decrypt and kms:GenerateDataKey permissions added.
The execution role governs what AWS services the function code can access.
The KMS key policy needs to allow the Lambda execution role to have kms:Decrypt and
kms:GenerateDataKey permissions for that specific key. This allows the execution role to use that particular key.
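
A sketch of the two policy statements with hypothetical ARNs: the first would be attached to the Lambda execution role (option B), and the second is a statement inside the KMS key policy that allows that role to use the key (option D).

import json

# Identity policy attached to the Lambda execution role.
execution_role_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["kms:Decrypt", "kms:GenerateDataKey"],
        "Resource": "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
    }],
}

# Statement added to the KMS key policy for the same role.
key_policy_statement = {
    "Sid": "AllowLambdaExecutionRole",
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::111122223333:role/lambda-exec-role"},
    "Action": ["kms:Decrypt", "kms:GenerateDataKey"],
    "Resource": "*",  # inside a key policy, "*" refers to this key
}

print(json.dumps(execution_role_policy, indent=2))
print(json.dumps(key_policy_statement, indent=2))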

53
Q

QUESTION 648
A company has created a multi-tier application for its ecommerce website. The website uses an
Application Load Balancer that resides in the public subnets, a web tier in the public subnets, and
a MySQL cluster hosted on Amazon EC2 instances in the private subnets. The MySQL database
needs to retrieve product catalog and pricing information that is hosted on the internet by a third-
party provider. A solutions architect must devise a strategy that maximizes security without
increasing operational overhead.
What should the solutions architect do to meet these requirements?
A. Deploy a NAT instance in the VPC. Route all the internet-based traffic through the NAT instance.
B. Deploy a NAT gateway in the public subnets. Modify the private subnet route table to direct all
internet-bound traffic to the NAT gateway.
C. Configure an internet gateway and attach it to the VPC. Modify the private subnet route table to
direct internet-bound traffic to the internet gateway.
D. Configure a virtual private gateway and attach it to the VPC. Modify the private subnet route table
to direct internet-bound traffic to the virtual private gateway.

A

B. Deploy a NAT gateway in the public subnets. Modify the private subnet route table to direct all
internet-bound traffic to the NAT gateway.

54
Q

QUESTION 647
A company has separate AWS accounts for its finance, data analytics, and development
departments. Because of costs and security concerns, the company wants to control which
services each AWS account can use.
Which solution will meet these requirements with the LEAST operational overhead?

A. Use AWS Systems Manager templates to control which AWS services each department can use.
B. Create organization units (OUs) for each department in AWS Organizations. Attach service
control policies (SCPs) to the OUs.
C. Use AWS CloudFormation to automatically provision only the AWS services that each department
can use.
D. Set up a list of products in AWS Service Catalog in the AWS accounts to manage and control the
usage of specific AWS services.

A

B. Create organization units (OUs) for each department in AWS Organizations. Attach service
control policies (SCPs) to the OUs.
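
A sketch of an allow-list SCP created and attached from the management account with boto3; the service list and OU ID are hypothetical.

import boto3, json

org = boto3.client("organizations")

# Allow-list SCP: any service not listed is implicitly denied in the OU.
scp = {"Version": "2012-10-17",
       "Statement": [{"Effect": "Allow",
                      "Action": ["ec2:*", "s3:*", "rds:*", "cloudwatch:*"],
                      "Resource": "*"}]}

policy = org.create_policy(
    Name="FinanceAllowedServices",
    Description="Services the finance department may use",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)

org.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"],
                  TargetId="ou-ab12-11111111")  # hypothetical OU ID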

55
Q

QUESTION 646
A company has data collection sensors at different locations. The data collection sensors stream
a high volume of data to the company. The company wants to design a platform on AWS to
ingest and process high-volume streaming data. The solution must be scalable and support data
collection in near real time. The company must store the data in Amazon S3 for future reporting.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use Amazon Kinesis Data Firehose to deliver streaming data to Amazon S3.
B. Use AWS Glue to deliver streaming data to Amazon S3.
C. Use AWS Lambda to deliver streaming data and store the data to Amazon S3.
D. Use AWS Database Migration Service (AWS DMS) to deliver streaming data to Amazon S3.

A

A. Use Amazon Kinesis Data Firehose to deliver streaming data to Amazon S3.

Explanation:
Amazon Kinesis Data Firehose: Capture, transform, and load data streams into AWS data stores
(S3) in near real-time.
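
A sketch of the delivery stream in boto3; the stream name, IAM role, bucket, and buffering values are hypothetical. Firehose buffers incoming records and flushes them to S3 when either threshold is reached.

import boto3

firehose = boto3.client("firehose")

firehose.create_delivery_stream(
    DeliveryStreamName="sensor-ingest",
    DeliveryStreamType="DirectPut",  # producers put records directly
    ExtendedS3DestinationConfiguration={
        "RoleARN": "arn:aws:iam::111122223333:role/firehose-s3-role",
        "BucketARN": "arn:aws:s3:::sensor-data-lake",
        "BufferingHints": {"SizeInMBs": 5, "IntervalInSeconds": 60},
    },
)

# A sensor-side producer then sends records like this:
firehose.put_record(
    DeliveryStreamName="sensor-ingest",
    Record={"Data": b'{"sensor_id": "s-17", "reading": 21.4}\n'},
)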

56
Q

QUESTION 645
A recent analysis of a company’s IT expenses highlights the need to reduce backup costs. The
company’s chief information officer wants to simplify the on-premises backup infrastructure and
reduce costs by eliminating the use of physical backup tapes. The company must preserve the
existing investment in the on-premises backup applications and workflows.
What should a solutions architect recommend?
A. Set up AWS Storage Gateway to connect with the backup applications using the NFS interface.
B. Set up an Amazon EFS file system that connects with the backup applications using the NFS
interface.
C. Set up an Amazon EFS file system that connects with the backup applications using the iSCSI
interface.
D. Set up AWS Storage Gateway to connect with the backup applications using the iSCSI-virtual
tape library (VTL) interface.

A

D. Set up AWS Storage Gateway to connect with the backup applications using the iSCSI-virtual
tape library (VTL) interface.

Explanation:
https://aws.amazon.com/storagegateway/vtl/?nc1=h_ls

57
Q

QUESTION 644
A retail company uses a regional Amazon API Gateway API for its public REST APIs. The API
Gateway endpoint is a custom domain name that points to an Amazon Route 53 alias record. A
solutions architect needs to create a solution that has minimal effects on customers and minimal
data loss to release the new version of APIs.
Which solution will meet these requirements?
A. Create a canary release deployment stage for API Gateway. Deploy the latest API version. Point
an appropriate percentage of traffic to the canary stage. After API verification, promote the canary
stage to the production stage.
B. Create a new API Gateway endpoint with a new version of the API in OpenAPI YAML file format.
Use the import-to-update operation in merge mode into the API in API Gateway. Deploy the new
version of the API to the production stage.
C. Create a new API Gateway endpoint with a new version of the API in OpenAPI JSON file format.
Use the import-to-update operation in overwrite mode into the API in API Gateway. Deploy the
new version of the API to the production stage.
D. Create a new API Gateway endpoint with new versions of the API definitions. Create a custom
domain name for the new API Gateway API. Point the Route 53 alias record to the new API
Gateway API custom domain name.

A

A. Create a canary release deployment stage for API Gateway. Deploy the latest API version. Point
an appropriate percentage of traffic to the canary stage. After API verification, promote the canary
stage to the production stage.

Explanation:
In a canary release deployment, total API traffic is separated at random into a production release
and a canary release with a pre-configured ratio. Typically, the canary release receives a small
percentage of API traffic and the production release takes up the rest. The updated API features
are only visible to API traffic through the canary. You can adjust the canary traffic percentage to
optimize test coverage or performance.
https://docs.aws.amazon.com/apigateway/latest/developerguide/canary-release.html
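
A boto3 sketch of the rollout with a hypothetical API ID: the new deployment initially receives 10 percent of stage traffic, and promotion points the whole stage at the canary deployment and removes the canary settings.

import boto3

apigw = boto3.client("apigateway")

# Deploy the new API version as a canary on the existing prod stage.
deployment = apigw.create_deployment(
    restApiId="a1b2c3",
    stageName="prod",
    canarySettings={"percentTraffic": 10.0},
)

# After verification, promote the canary to serve all traffic.
apigw.update_stage(
    restApiId="a1b2c3",
    stageName="prod",
    patchOperations=[
        {"op": "replace", "path": "/deploymentId", "value": deployment["id"]},
        {"op": "remove", "path": "/canarySettings"},
    ],
)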

58
Q

QUESTION 643
A company runs Amazon EC2 instances in multiple AWS accounts that are individually billed. The
company recently purchased a Savings Plan. Because of changes in the company’s business
requirements, the company has decommissioned a large number of EC2 instances. The
company wants to use its Savings Plan discounts on its other AWS accounts.
Which combination of steps will meet these requirements? (Choose two.)
A. From the AWS Account Management Console of the management account, turn on discount
sharing from the billing preferences section.
B. From the AWS Account Management Console of the account that purchased the existing Savings
Plan, turn on discount sharing from the billing preferences section. Include all accounts.
C. From the AWS Organizations management account, use AWS Resource Access Manager (AWS
RAM) to share the Savings Plan with other accounts.
D. Create an organization in AWS Organizations in a new payer account. Invite the other AWS
accounts to join the organization from the management account.
E. Create an organization in AWS Organizations in the existing AWS account with the existing EC2
instances and Savings Plan. Invite the other AWS accounts to join the organization from the
management account.

A

A. From the AWS Account Management Console of the management account, turn on discount
sharing from the billing preferences section.
E. Create an organization in AWS Organizations in the existing AWS account with the existing EC2
instances and Savings Plan. Invite the other AWS accounts to join the organization from the
management account.

Explanation:
https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/ri-turn-off.html

59
Q

QUESTION 642
A media company uses an Amazon CloudFront distribution to deliver content over the internet.
The company wants only premium customers to have access to the media streams and file
content. The company stores all content in an Amazon S3 bucket. The company also delivers
content on demand to customers for a specific purpose, such as movie rentals or music
downloads.
Which solution will meet these requirements?
A. Generate and provide S3 signed cookies to premium customers.
B. Generate and provide CloudFront signed URLs to premium customers.
C. Use origin access control (OAC) to limit the access of non-premium customers.
D. Generate and activate field-level encryption to block non-premium customers.

A

B. Generate and provide CloudFront signed URLs to premium customers.
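
A sketch of generating a CloudFront signed URL with botocore's CloudFrontSigner; the key pair ID, private key file, and distribution domain are hypothetical, and the cryptography package is assumed to be installed.

from datetime import datetime, timedelta

from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def rsa_signer(message):
    # Private key matching the public key registered with CloudFront.
    with open("cloudfront_private_key.pem", "rb") as f:
        key = serialization.load_pem_private_key(f.read(), password=None)
    return key.sign(message, padding.PKCS1v15(), hashes.SHA1())

signer = CloudFrontSigner("K2ABCDEXAMPLE", rsa_signer)  # hypothetical key ID

# URL for a movie rental that expires in 24 hours.
url = signer.generate_presigned_url(
    "https://d111111abcdef8.cloudfront.net/movies/title.mp4",
    date_less_than=datetime.utcnow() + timedelta(hours=24),
)
print(url)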

60
Q

QUESTION 641
A company wants to build a web application on AWS. Client access requests to the website are
not predictable and can be idle for a long time. Only customers who have paid a subscription fee
can have the ability to sign in and use the web application.
Which combination of steps will meet these requirements MOST cost-effectively? (Choose three.)
A. Create an AWS Lambda function to retrieve user information from Amazon DynamoDB. Create
an Amazon API Gateway endpoint to accept RESTful APIs. Send the API calls to the Lambda
function.
B. Create an Amazon Elastic Container Service (Amazon ECS) service behind an Application Load
Balancer to retrieve user information from Amazon RDS. Create an Amazon API Gateway
endpoint to accept RESTful APIs. Send the API calls to the Lambda function.
C. Create an Amazon Cognito user pool to authenticate users.
D. Create an Amazon Cognito identity pool to authenticate users.
E. Use AWS Amplify to serve the frontend web content with HTML, CSS, and JS. Use an integrated
Amazon CloudFront configuration.
F. Use Amazon S3 static web hosting with PHP, CSS, and JS. Use Amazon CloudFront to serve the
frontend web content.

A

A. Create an AWS Lambda function to retrieve user information from Amazon DynamoDB. Create
an Amazon API Gateway endpoint to accept RESTful APIs. Send the API calls to the Lambda
function.
C. Create an Amazon Cognito user pool to authenticate users.
E. Use AWS Amplify to serve the frontend web content with HTML, CSS, and JS. Use an integrated
Amazon CloudFront configuration.

61
Q

QUESTION 640
A company has an on-premises server that uses an Oracle database to process and store
customer information. The company wants to use an AWS database service to achieve higher
availability and to improve application performance. The company also wants to offload reporting
from its primary database system.
Which solution will meet these requirements in the MOST operationally efficient way?
A. Use AWS Database Migration Service (AWS DMS) to create an Amazon RDS DB instance in
multiple AWS Regions. Point the reporting functions toward a separate DB instance from the
primary DB instance.
B. Use Amazon RDS in a Single-AZ deployment to create an Oracle database. Create a read replica
in the same zone as the primary DB instance. Direct the reporting functions to the read replica.
C. Use Amazon RDS deployed in a Multi-AZ cluster deployment to create an Oracle database.
Direct the reporting functions to use the reader instance in the cluster deployment.
D. Use Amazon RDS deployed in a Multi-AZ instance deployment to create an Amazon Aurora
database. Direct the reporting functions to the reader instances.

A

D. Use Amazon RDS deployed in a Multi-AZ instance deployment to create an Amazon Aurora
database. Direct the reporting functions to the reader instances.

62
Q

QUESTION 639
A company wants to use the AWS Cloud to improve its on-premises disaster recovery (DR)
configuration. The company’s core production business application uses Microsoft SQL Server
Standard, which runs on a virtual machine (VM). The application has a recovery point objective
(RPO) of 30 seconds or fewer and a recovery time objective (RTO) of 60 minutes. The DR
solution needs to minimize costs wherever possible.
Which solution will meet these requirements?
A. Configure a multi-site active/active setup between the on-premises server and AWS by using
Microsoft SQL Server Enterprise with Always On availability groups.
B. Configure a warm standby Amazon RDS for SQL Server database on AWS. Configure AWS
Database Migration Service (AWS DMS) to use change data capture (CDC).
C. Use AWS Elastic Disaster Recovery configured to replicate disk changes to AWS as a pilot light.
D. Use third-party backup software to capture backups every night. Store a secondary set of
backups in Amazon S3.

A

B. Configure a warm standby Amazon RDS for SQL Server database on AWS. Configure AWS
Database Migration Service (AWS DMS) to use change data capture (CDC).

63
Q

QUESTION 638
A global video streaming company uses Amazon CloudFront as a content distribution network
(CDN). The company wants to roll out content in a phased manner across multiple countries. The
company needs to ensure that viewers who are outside the countries to which the company rolls
out content are not able to view the content.
Which solution will meet these requirements?
A. Add geographic restrictions to the content in CloudFront by using an allow list. Set up a custom
error message.
B. Set up a new URL for restricted content. Authorize access by using a signed URL and cookies.
Set up a custom error message.
C. Encrypt the data for the content that the company distributes. Set up a custom error message.
D. Create a new URL for restricted content. Set up a time-restricted access policy for signed URLs.

A

A. Add geographic restrictions to the content in CloudFront by using an allow list. Set up a custom
error message.

Explanation:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/georestrictions.html

64
Q

QUESTION 637
A company runs a three-tier web application in the AWS Cloud that operates across three
Availability Zones. The application architecture has an Application Load Balancer, an Amazon
EC2 web server that hosts user session states, and a MySQL database that runs on an EC2
instance. The company expects sudden increases in application traffic. The company wants to be
able to scale to meet future application capacity demands and to ensure high availability across
all three Availability Zones.
Which solution will meet these requirements?
A. Migrate the MySQL database to Amazon RDS for MySQL with a Multi-AZ DB cluster deployment.
Use Amazon ElastiCache for Redis with high availability to store session data and to cache
reads. Migrate the web server to an Auto Scaling group that is in three Availability Zones.
B. Migrate the MySQL database to Amazon RDS for MySQL with a Multi-AZ DB cluster deployment.
Use Amazon ElastiCache for Memcached with high availability to store session data and to cache
reads. Migrate the web server to an Auto Scaling group that is in three Availability Zones.
C. Migrate the MySQL database to Amazon DynamoDB. Use DynamoDB Accelerator (DAX) to cache reads. Store the session data in DynamoDB. Migrate the web server to an Auto Scaling
group that is in three Availability Zones.
D. Migrate the MySQL database to Amazon RDS for MySQL in a single Availability Zone. Use
Amazon ElastiCache for Redis with high availability to store session data and to cache reads.
Migrate the web server to an Auto Scaling group that is in three Availability Zones.

A

A. Migrate the MySQL database to Amazon RDS for MySQL with a Multi-AZ DB cluster deployment.
Use Amazon ElastiCache for Redis with high availability to store session data and to cache
reads. Migrate the web server to an Auto Scaling group that is in three Availability Zones.

Explanation:
Memcached is best suited for caching data, while Redis is better for storing data that needs to be
persisted. If you need to store data that needs to be accessed frequently, such as user profiles,
session data, and application settings, then Redis is the better choice.

65
Q

QUESTION 636
A company wants to provide data scientists with near real-time read-only access to the
company’s production Amazon RDS for PostgreSQL database. The database is currently
configured as a Single-AZ database. The data scientists use complex queries that will not affect
the production database. The company needs a solution that is highly available.
Which solution will meet these requirements MOST cost-effectively?
A. Scale the existing production database in a maintenance window to provide enough power for the
data scientists.
B. Change the setup from a Single-AZ to a Multi-AZ instance deployment with a larger secondary
standby instance. Provide the data scientists access to the secondary instance.
C. Change the setup from a Single-AZ to a Multi-AZ instance deployment. Provide two additional
read replicas for the data scientists.
D. Change the setup from a Single-AZ to a Multi-AZ cluster deployment with two readable standby
instances. Provide read endpoints to the data scientists.

A

D. Change the setup from a Single-AZ to a Multi-AZ cluster deployment with two readable standby
instances. Provide read endpoints to the data scientists.

Explanation:
Multi-AZ instance: the standby instance doesn’t serve any read or write traffic.
Multi-AZ DB cluster: consists of a primary instance running in one AZ that serves read-write
traffic and two standby instances running in two different AZs that serve read traffic.
https://aws.amazon.com/blogs/database/choose-the-right-amazon-rds-deployment-option-single-
az-instance-multi-az-instance-or-multi-az-database-cluster/

66
Q

QUESTION 635
A company is building an Amazon Elastic Kubernetes Service (Amazon EKS) cluster for its
workloads. All secrets that are stored in Amazon EKS must be encrypted in the Kubernetes etcd
key-value store.
Which solution will meet these requirements?
A. Create a new AWS Key Management Service (AWS KMS) key. Use AWS Secrets Manager to
manage, rotate, and store all secrets in Amazon EKS.
B. Create a new AWS Key Management Service (AWS KMS) key. Enable Amazon EKS KMS secrets encryption on the Amazon EKS cluster.
C. Create the Amazon EKS cluster with default options. Use the Amazon Elastic Block Store
(Amazon EBS) Container Storage Interface (CSI) driver as an add-on.
D. Create a new AWS Key Management Service (AWS KMS) key with the alias/aws/ebs alias.
Enable default Amazon Elastic Block Store (Amazon EBS) volume encryption for the account.

A

B. Create a new AWS Key Management Service (AWS KMS) key. Enable Amazon EKS KMS secrets encryption on the Amazon EKS cluster.

Explanation:
https://docs.aws.amazon.com/eks/latest/userguide/enable-kms.html
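
A boto3 sketch of enabling envelope encryption of Kubernetes secrets on an existing cluster; the cluster name is hypothetical, and the same configuration can be supplied at cluster creation instead.

import boto3

kms = boto3.client("kms")
eks = boto3.client("eks")

key = kms.create_key(Description="EKS secrets encryption")

eks.associate_encryption_config(
    clusterName="payments-cluster",  # hypothetical cluster
    encryptionConfig=[{
        "resources": ["secrets"],  # encrypt Kubernetes secrets stored in etcd
        "provider": {"keyArn": key["KeyMetadata"]["Arn"]},
    }],
)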

67
Q

QUESTION 634
A company wants to build a logging solution for its multiple AWS accounts. The company
currently stores the logs from all accounts in a centralized account. The company has created an
Amazon S3 bucket in the centralized account to store the VPC flow logs and AWS CloudTrail
logs. All logs must be highly available for 30 days for frequent analysis, retained for an additional
60 days for backup purposes, and deleted 90 days after creation.
Which solution will meet these requirements MOST cost-effectively?
A. Transition objects to the S3 Standard storage class 30 days after creation. Write an expiration
action that directs Amazon S3 to delete objects after 90 days.
B. Transition objects to the S3 Standard-Infrequent Access (S3 Standard-IA) storage class 30 days
after creation. Move all objects to the S3 Glacier Flexible Retrieval storage class after 90 days.
Write an expiration action that directs Amazon S3 to delete objects after 90 days.
C. Transition objects to the S3 Glacier Flexible Retrieval storage class 30 days after creation. Write
an expiration action that directs Amazon S3 to delete objects after 90 days.
D. Transition objects to the S3 One Zone-Infrequent Access (S3 One Zone-IA) storage class 30
days after creation. Move all objects to the S3 Glacier Flexible Retrieval storage class after 90
days. Write an expiration action that directs Amazon S3 to delete objects after 90 days.

A

C. Transition objects to the S3 Glacier Flexible Retrieval storage class 30 days after creation. Write
an expiration action that directs Amazon S3 to delete objects after 90 days.
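
A sketch of the rule in boto3 with a hypothetical bucket name: S3 Standard for the first 30 days, S3 Glacier Flexible Retrieval for the next 60, deletion at 90 days.

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="central-logs",  # hypothetical centralized log bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "log-retention",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to every object
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 90},
        }],
    },
)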

68
Q

QUESTION 633
A company stores data in Amazon S3. According to regulations, the data must not contain
personally identifiable information (PII). The company recently discovered that S3 buckets have some objects that contain PII. The company needs to automatically detect PII in S3 buckets and
to notify the company’s security team.
Which solution will meet these requirements?
A. Use Amazon Macie. Create an Amazon EventBridge rule to filter the SensitiveData event type
from Macie findings and to send an Amazon Simple Notification Service (Amazon SNS)
notification to the security team.
B. Use Amazon GuardDuty. Create an Amazon EventBridge rule to filter the CRITICAL event type
from GuardDuty findings and to send an Amazon Simple Notification Service (Amazon SNS)
notification to the security team.
C. Use Amazon Macie. Create an Amazon EventBridge rule to filter the
SensitiveData:S3Object/Personal event type from Macie findings and to send an Amazon Simple
Queue Service (Amazon SQS) notification to the security team.
D. Use Amazon GuardDuty. Create an Amazon EventBridge rule to filter the CRITICAL event type
from GuardDuty findings and to send an Amazon Simple Queue Service (Amazon SQS)
notification to the security team.

A

A. Use Amazon Macie. Create an Amazon EventBridge rule to filter the SensitiveData event type
from Macie findings and to send an Amazon Simple Notification Service (Amazon SNS)
notification to the security team.
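
A sketch of the EventBridge rule and SNS target in boto3; the rule name, topic ARN, and pattern fields are illustrative of the Macie finding event shape, and the topic's resource policy must also allow EventBridge to publish.

import boto3, json

events = boto3.client("events")

events.put_rule(
    Name="macie-pii-findings",
    EventPattern=json.dumps({
        "source": ["aws.macie"],
        "detail-type": ["Macie Finding"],
        # Matches all SensitiveData types, e.g. SensitiveData:S3Object/Personal.
        "detail": {"type": [{"prefix": "SensitiveData"}]},
    }),
)

events.put_targets(
    Rule="macie-pii-findings",
    Targets=[{"Id": "security-team-sns",
              "Arn": "arn:aws:sns:us-east-1:111122223333:security-team"}],
)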

69
Q

QUESTION 632
A company has a workload in an AWS Region. Customers connect to and access the workload
by using an Amazon API Gateway REST API. The company uses Amazon Route 53 as its DNS
provider. The company wants to provide individual and secure URLs for all customers.
Which combination of steps will meet these requirements with the MOST operational efficiency?
(Choose three.)
A. Register the required domain in a registrar. Create a wildcard custom domain name in a Route 53
hosted zone and record in the zone that points to the API Gateway endpoint.
B. Request a wildcard certificate that matches the domains in AWS Certificate Manager (ACM) in a
different Region.
C. Create hosted zones for each customer as required in Route 53. Create zone records that point
to the API Gateway endpoint.
D. Request a wildcard certificate that matches the custom domain name in AWS Certificate Manager
(ACM) in the same Region.
E. Create multiple API endpoints for each customer in API Gateway.
F. Create a custom domain name in API Gateway for the REST API. Import the certificate from AWS
Certificate Manager (ACM).

A

A. Register the required domain in a registrar. Create a wildcard custom domain name in a Route 53
hosted zone and record in the zone that points to the API Gateway endpoint.
D. Request a wildcard certificate that matches the custom domain name in AWS Certificate Manager
(ACM) in the same Region.
F. Create a custom domain name in API Gateway for the REST API. Import the certificate from AWS
Certificate Manager (ACM).

70
Q

QUESTION 631
A company needs to integrate with a third-party data feed. The data feed sends a webhook to
notify an external service when new data is ready for consumption. A developer wrote an AWS
Lambda function to retrieve data when the company receives a webhook callback. The developer
must make the Lambda function available for the third party to call.
Which solution will meet these requirements with the MOST operational efficiency?
A. Create a function URL for the Lambda function. Provide the Lambda function URL to the third
party for the webhook.
B. Deploy an Application Load Balancer (ALB) in front of the Lambda function. Provide the ALB URL
to the third party for the webhook.
C. Create an Amazon Simple Notification Service (Amazon SNS) topic. Attach the topic to the
Lambda function. Provide the public hostname of the SNS topic to the third party for the webhook.
D. Create an Amazon Simple Queue Service (Amazon SQS) queue. Attach the queue to the
Lambda function. Provide the public hostname of the SQS queue to the third party for the
webhook.

A

A. Create a function URL for the Lambda function. Provide the Lambda function URL to the third
party for the webhook.

Explanation:
https://docs.aws.amazon.com/lambda/latest/dg/lambda-urls.html
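
A boto3 sketch of creating the function URL with a hypothetical function name. AuthType NONE makes the URL publicly callable, so the handler should still validate the webhook payload (for example, with a shared secret header).

import boto3

lam = boto3.client("lambda")

resp = lam.create_function_url_config(
    FunctionName="webhook-handler",  # hypothetical function
    AuthType="NONE",                 # third party calls without SigV4 signing
)

# Public invoke permission is required when AuthType is NONE.
lam.add_permission(
    FunctionName="webhook-handler",
    StatementId="AllowPublicFunctionUrl",
    Action="lambda:InvokeFunctionUrl",
    Principal="*",
    FunctionUrlAuthType="NONE",
)

print("URL to give the third party:", resp["FunctionUrl"])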

71
Q

QUESTION 630
A company has an online gaming application that has TCP and UDP multiplayer gaming
capabilities. The company uses Amazon Route 53 to point the application traffic to multiple
Network Load Balancers (NLBs) in different AWS Regions. The company needs to improve
application performance and decrease latency for the online game in preparation for user growth.
Which solution will meet these requirements?
A. Add an Amazon CloudFront distribution in front of the NLBs. Increase the Cache-Control max-age
parameter.
B. Replace the NLBs with Application Load Balancers (ALBs). Configure Route 53 to use latency-
based routing.
C. Add AWS Global Accelerator in front of the NLBs. Configure a Global Accelerator endpoint to use
the correct listener ports.
D. Add an Amazon API Gateway endpoint behind the NLBs. Enable API caching. Override method caching for the different stages.

A

C. Add AWS Global Accelerator in front of the NLBs. Configure a Global Accelerator endpoint to use
the correct listener ports.

72
Q

QUESTION 629
A company is migrating its workloads to AWS. The company has transactional and sensitive data
in its databases. The company wants to use AWS Cloud solutions to increase security and
reduce operational overhead for the databases.
Which solution will meet these requirements?
A. Migrate the databases to Amazon EC2. Use an AWS Key Management Service (AWS KMS)
AWS managed key for encryption.
B. Migrate the databases to Amazon RDS. Configure encryption at rest.
C. Migrate the data to Amazon S3. Use Amazon Macie for data security and protection.
D. Migrate the database to Amazon RDS. Use Amazon CloudWatch Logs for data security and
protection.

A

B. Migrate the databases to Amazon RDS. Configure encryption at rest.

73
Q

QUESTION 628
A data analytics company wants to migrate its batch processing system to AWS. The company
receives thousands of small data files periodically during the day through FTP. An on-premises
batch job processes the data files overnight. However, the batch job takes hours to finish running.
The company wants the AWS solution to process incoming data files as soon as possible with
minimal changes to the FTP clients that send the files. The solution must delete the incoming
data files after the files have been processed successfully. Processing for each file needs to take
3-8 minutes.
Which solution will meet these requirements in the MOST operationally efficient way?
A. Use an Amazon EC2 instance that runs an FTP server to store incoming files as objects in
Amazon S3 Glacier Flexible Retrieval. Configure a job queue in AWS Batch. Use Amazon
EventBridge rules to invoke the job to process the objects nightly from S3 Glacier Flexible
Retrieval. Delete the objects after the job has processed the objects.
B. Use an Amazon EC2 instance that runs an FTP server to store incoming files on an Amazon
Elastic Block Store (Amazon EBS) volume. Configure a job queue in AWS Batch. Use Amazon
EventBridge rules to invoke the job to process the files nightly from the EBS volume. Delete the
files after the job has processed the files.
C. Use AWS Transfer Family to create an FTP server to store incoming files on an Amazon Elastic
Block Store (Amazon EBS) volume. Configure a job queue in AWS Batch. Use an Amazon S3
event notification when each file arrives to invoke the job in AWS Batch. Delete the files after the
job has processed the files.
D. Use AWS Transfer Family to create an FTP server to store incoming files in Amazon S3
Standard. Create an AWS Lambda function to process the files and to delete the files after they
are processed. Use an S3 event notification to invoke the Lambda function when the files arrive.

A

D. Use AWS Transfer Family to create an FTP server to store incoming files in Amazon S3
Standard. Create an AWS Lambda function to process the files and to delete the files after they
are processed. Use an S3 event notification to invoke the Lambda function when the files arrive.
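
A sketch of the Lambda handler for option D: it reads the object named in the S3 event notification, runs a placeholder processing step, and deletes the file only after processing succeeds. (Processing of 3-8 minutes fits within Lambda's 15-minute limit.)

import urllib.parse

import boto3

s3 = boto3.client("s3")

def process_file(body: bytes) -> None:
    ...  # placeholder for the 3-8 minute processing logic

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        # Object keys arrive URL-encoded in S3 event notifications.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        process_file(body)

        # Delete only after successful processing.
        s3.delete_object(Bucket=bucket, Key=key)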

74
Q

QUESTION 627
A company has a regional subscription-based streaming service that runs in a single AWS
Region. The architecture consists of web servers and application servers on Amazon EC2
instances. The EC2 instances are in Auto Scaling groups behind Elastic Load Balancers. The
architecture includes an Amazon Aurora global database cluster that extends across multiple
Availability Zones.
The company wants to expand globally and to ensure that its application has minimal downtime.
Which solution will provide the MOST fault tolerance?
A. Extend the Auto Scaling groups for the web tier and the application tier to deploy instances in
Availability Zones in a second Region. Use an Aurora global database to deploy the database in
the primary Region and the second Region. Use Amazon Route 53 health checks with a failover
routing policy to the second Region.
B. Deploy the web tier and the application tier to a second Region. Add an Aurora PostgreSQL
cross-Region Aurora Replica in the second Region. Use Amazon Route 53 health checks with a
failover routing policy to the second Region. Promote the secondary to primary as needed.
C. Deploy the web tier and the application tier to a second Region. Create an Aurora PostgreSQL
database in the second Region. Use AWS Database Migration Service (AWS DMS) to replicate
the primary database to the second Region. Use Amazon Route 53 health checks with a failover
routing policy to the second Region.
D. Deploy the web tier and the application tier to a second Region. Use an Amazon Aurora global
database to deploy the database in the primary Region and the second Region. Use Amazon
Route 53 health checks with a failover routing policy to the second Region. Promote the
secondary to primary as needed.

A

D. Deploy the web tier and the application tier to a second Region. Use an Amazon Aurora global
database to deploy the database in the primary Region and the second Region. Use Amazon
Route 53 health checks with a failover routing policy to the second Region. Promote the
secondary to primary as needed.

Explanation:
An Aurora global database designates one primary Region for writes and one or more read-only
secondary Regions. Storage-level replication typically keeps secondaries less than a second
behind, and a secondary Region can be promoted to primary in minutes, providing the most fault
tolerance for global expansion with minimal downtime.

75
Q

QUESTION 626
A solutions architect is reviewing the resilience of an application. The solutions architect notices
that a database administrator recently failed over the application’s Amazon Aurora PostgreSQL
database writer instance as part of a scaling exercise. The failover resulted in 3 minutes of
downtime for the application.
Which solution will reduce the downtime for scaling exercises with the LEAST operational
overhead?

A. Create more Aurora PostgreSQL read replicas in the cluster to handle the load during failover.
B. Set up a secondary Aurora PostgreSQL cluster in the same AWS Region. During failover, update
the application to use the secondary cluster’s writer endpoint.
C. Create an Amazon ElastiCache for Memcached cluster to handle the load during failover.
D. Set up an Amazon RDS proxy for the database. Update the application to use the proxy endpoint.

A

D. Set up an Amazon RDS proxy for the database. Update the application to use the proxy endpoint.

Explanation:
Amazon RDS Proxy maintains a connection pool and automatically routes requests to the healthy
writer instance during a failover, minimizing downtime without application changes.

76
Q

QUESTION 625
A company wants to add its existing AWS usage cost to its operation cost dashboard. A solutions
architect needs to recommend a solution that will give the company access to its usage cost
programmatically. The company must be able to access cost data for the current year and
forecast costs for the next 12 months.
Which solution will meet these requirements with the LEAST operational overhead?
A. Access usage cost-related data by using the AWS Cost Explorer API with pagination.
B. Access usage cost-related data by using downloadable AWS Cost Explorer report .csv files.
C. Configure AWS Budgets actions to send usage cost data to the company through FTP.
D. Create AWS Budgets reports for usage cost data. Send the data to the company through SMTP.

A

A. Access usage cost-related data by using the AWS Cost Explorer API with pagination.

Explanation:
You can view your costs and usage using the Cost Explorer user interface free of charge. You
can also access your data programmatically using the Cost Explorer API. Each paginated API
request incurs a charge of $0.01. You can’t disable Cost Explorer after you enable it.
https://docs.aws.amazon.com/cost-management/latest/userguide/ce-what-is.html
https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-cost-
explorer/interfaces/costexplorerpaginationconfiguration.html
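
A sketch of pulling the current year's monthly costs plus a 12-month forecast; the dates are illustrative, and NextPageToken drives pagination.

import boto3

ce = boto3.client("ce")

# Month-by-month actual costs for the current year, following pagination.
results, token = [], None
while True:
    kwargs = {
        "TimePeriod": {"Start": "2024-01-01", "End": "2024-12-31"},
        "Granularity": "MONTHLY",
        "Metrics": ["UnblendedCost"],
    }
    if token:
        kwargs["NextPageToken"] = token
    page = ce.get_cost_and_usage(**kwargs)
    results.extend(page["ResultsByTime"])
    token = page.get("NextPageToken")
    if not token:
        break

# Forecast for the next 12 months.
forecast = ce.get_cost_forecast(
    TimePeriod={"Start": "2025-01-01", "End": "2025-12-31"},
    Metric="UNBLENDED_COST",
    Granularity="MONTHLY",
)
print(forecast["Total"]["Amount"])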

77
Q

QUESTION 624
A company wants to analyze and troubleshoot Access Denied errors and Unauthorized errors
that are related to IAM permissions. The company has AWS CloudTrail turned on.
Which solution will meet these requirements with the LEAST effort?
A. Use AWS Glue and write custom scripts to query CloudTrail logs for the errors.
B. Use AWS Batch and write custom scripts to query CloudTrail logs for the errors.
C. Search CloudTrail logs with Amazon Athena queries to identify the errors.
D. Search CloudTrail logs with Amazon QuickSight. Create a dashboard to identify the errors.

A

C. Search CloudTrail logs with Amazon Athena queries to identify the errors.

Explanation:
“Using Athena with CloudTrail logs is a powerful way to enhance your analysis of AWS service
activity.”
https://docs.aws.amazon.com/athena/latest/ug/cloudtrail-logs.html
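
A sketch of running such a query through the Athena API; cloudtrail_logs is a hypothetical table defined over the CloudTrail S3 prefix, and the results bucket is hypothetical as well.

import boto3

athena = boto3.client("athena")

query = """
SELECT eventtime, useridentity.arn AS principal, eventsource, eventname,
       errorcode, errormessage
FROM cloudtrail_logs
WHERE errorcode IN ('AccessDenied', 'UnauthorizedOperation')
ORDER BY eventtime DESC
LIMIT 100
"""

athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "default"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)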

78
Q

QUESTION 623
A company runs a microservice-based serverless web application. The application must be able
to retrieve data from multiple Amazon DynamoDB tables. A solutions architect needs to give the
application the ability to retrieve the data with no impact on the baseline performance of the
application.
Which solution will meet these requirements in the MOST operationally efficient way?
A. AWS AppSync pipeline resolvers
B. Amazon CloudFront with Lambda@Edge functions
C. Edge-optimized Amazon API Gateway with AWS Lambda functions
D. Amazon Athena Federated Query with a DynamoDB connector

A

D. Amazon Athena Federated Query with a DynamoDB connector

Explanation:
The Amazon Athena DynamoDB connector enables Amazon Athena to communicate with
DynamoDB so that you can query your tables with SQL. Write operations like INSERT INTO are
not supported.

79
Q

QUESTION 622
A company runs container applications by using Amazon Elastic Kubernetes Service (Amazon
EKS). The company’s workload is not consistent throughout the day. The company wants
Amazon EKS to scale in and out according to the workload.
Which combination of steps will meet these requirements with the LEAST operational overhead?
(Choose two.)
A. Use an AWS Lambda function to resize the EKS cluster.
B. Use the Kubernetes Metrics Server to activate horizontal pod autoscaling.
C. Use the Kubernetes Cluster Autoscaler to manage the number of nodes in the cluster.
D. Use Amazon API Gateway and connect it to Amazon EKS.
E. Use AWS App Mesh to observe network activity.

A

B. Use the Kubernetes Metrics Server to activate horizontal pod autoscaling.
C. Use the Kubernetes Cluster Autoscaler to manage the number of nodes in the cluster.

Explanation:
By combining the Kubernetes Cluster Autoscaler (option C) to manage the number of nodes in
the cluster and enabling horizontal pod autoscaling (option B) with the Kubernetes Metrics
Server, you can achieve automatic scaling of your EKS cluster and container applications based
on workload demand. This approach minimizes operational overhead as it leverages built-in
Kubernetes functionality and automation mechanisms.

80
Q

QUESTION 621
A retail company has several businesses. The IT team for each business manages its own AWS
account. Each team account is part of an organization in AWS Organizations. Each team
monitors its product inventory levels in an Amazon DynamoDB table in the team’s own AWS
account.
The company is deploying a central inventory reporting application into a shared AWS account.
The application must be able to read items from all the teams’ DynamoDB tables.
Which authentication option will meet these requirements MOST securely?
A. Integrate DynamoDB with AWS Secrets Manager in the inventory application account. Configure
the application to use the correct secret from Secrets Manager to authenticate and read the
DynamoDB table. Schedule secret rotation for every 30 days.
B. In every business account, create an IAM user that has programmatic access. Configure the
application to use the correct IAM user access key ID and secret access key to authenticate and
read the DynamoDB table. Manually rotate IAM access keys every 30 days.
C. In every business account, create an IAM role named BU_ROLE with a policy that gives the role
access to the DynamoDB table and a trust policy to trust a specific role in the inventory
application account. In the inventory account, create a role named APP_ROLE that allows access
to the STS AssumeRole API operation. Configure the application to use APP_ROLE and assume
the cross-account role BU_ROLE to read the DynamoDB table.
D. Integrate DynamoDB with AWS Certificate Manager (ACM). Generate identity certificates to
authenticate DynamoDB. Configure the application to use the correct certificate to authenticate
and read the DynamoDB table.

A

C. In every business account, create an IAM role named BU_ROLE with a policy that gives the role
access to the DynamoDB table and a trust policy to trust a specific role in the inventory
application account. In the inventory account, create a role named APP_ROLE that allows access
to the STS AssumeRole API operation. Configure the application to use APP_ROLE and assume
the cross-account role BU_ROLE to read the DynamoDB table.

Explanation:
IAM Roles: IAM roles provide a secure way to grant permissions to entities within AWS. By
creating an IAM role in each business account named BU_ROLE with the necessary permissions
to access the DynamoDB table, the access can be controlled at the IAM role level.
Cross-Account Access: By configuring a trust policy in the BU_ROLE that trusts a specific role in
the inventory application account (APP_ROLE), you establish a trusted relationship between the
two accounts.
Least Privilege: By creating a specific IAM role (BU_ROLE) in each business account and
granting it access only to the required DynamoDB table, you can ensure that each team’s table is
accessed with the least privilege principle.
Security Token Service (STS): The use of STS AssumeRole API operation in the inventory
application account allows the application to assume the cross-account role (BU_ROLE) in each
business account.
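
A sketch of the application-side flow with hypothetical account IDs and table names: the application runs as APP_ROLE, assumes BU_ROLE in a business account, and reads that account's table with the temporary credentials.

import boto3

sts = boto3.client("sts")

# Assume the cross-account role defined in one business account.
creds = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/BU_ROLE",
    RoleSessionName="inventory-report",
)["Credentials"]

# The temporary credentials carry only the permissions BU_ROLE grants.
dynamodb = boto3.client(
    "dynamodb",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)

items = dynamodb.scan(TableName="product-inventory")["Items"]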

81
Q

QUESTION 620
A company is designing a new web application that will run on Amazon EC2 Instances. The
application will use Amazon DynamoDB for backend data storage. The application traffic will be
unpredictable. The company expects that the application read and write throughput to the
database will be moderate to high. The company needs to scale in response to application traffic.
Which DynamoDB table configuration will meet these requirements MOST cost-effectively?
A. Configure DynamoDB with provisioned read and write by using the DynamoDB Standard table
class. Set DynamoDB auto scaling to a maximum defined capacity.
B. Configure DynamoDB in on-demand mode by using the DynamoDB Standard table class.
C. Configure DynamoDB with provisioned read and write by using the DynamoDB Standard
Infrequent Access (DynamoDB Standard-IA) table class. Set DynamoDB auto scaling to a
maximum defined capacity.
D. Configure DynamoDB in on-demand mode by using the DynamoDB Standard Infrequent Access
(DynamoDB Standard-IA) table class.

A

B. Configure DynamoDB in on-demand mode by using the DynamoDB Standard table class.

Explanation:
On-demand mode is the most cost-effective fit for unpredictable traffic: DynamoDB instantly
scales read and write throughput with the workload and charges per request, with no capacity
planning. The Standard table class suits moderate-to-high throughput, while Standard-IA targets
rarely accessed data and would cost more per request here.
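
A sketch of creating the table in on-demand mode with a hypothetical name and key; PAY_PER_REQUEST removes capacity planning entirely.

import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="app-data",  # hypothetical
    AttributeDefinitions=[{"AttributeName": "pk", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",  # on-demand: scales with traffic, billed per request
)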

82
Q

QUESTION 619
A consulting company provides professional services to customers worldwide. The company
provides solutions and tools for customers to expedite gathering and analyzing data on AWS. The
company needs to centrally manage and deploy a common set of solutions and tools for
customers to use for self-service purposes.
Which solution will meet these requirements?
A. Create AWS CloudFormation templates for the customers.
B. Create AWS Service Catalog products for the customers.
C. Create AWS Systems Manager templates for the customers.
D. Create AWS Config items for the customers.

A

B. Create AWS Service Catalog products for the customers.

Explanation:
AWS Service Catalog allows you to create and manage catalogs of IT services that can be
deployed within your organization. With Service Catalog, you can define a standardized set of
products (solutions and tools in this case) that customers can self-service provision. By creating
Service Catalog products, you can control and enforce the deployment of approved and validated
solutions and tools.

83
Q

QUESTION 618
An application uses an Amazon RDS MySQL DB instance. The RDS database is becoming low
on disk space. A solutions architect wants to increase the disk space without downtime.
Which solution meets these requirements with the LEAST amount of effort?
A. Enable storage autoscaling in RDS
B. Increase the RDS database instance size
C. Change the RDS database instance storage type to Provisioned IOPS
D. Back up the RDS database, increase the storage capacity, restore the database, and stop the
previous instance

A

A. Enable storage autoscaling in RDS

Explanation:
Enabling storage autoscaling allows RDS to automatically adjust the storage capacity based on
the application’s needs. When the storage usage exceeds a predefined threshold, RDS will
automatically increase the allocated storage without requiring manual intervention or causing
downtime. This ensures that the RDS database has sufficient disk space to handle the increasing
storage requirements.
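
A sketch of enabling storage autoscaling on an existing instance: setting MaxAllocatedStorage above the current allocation turns the feature on. The identifier and ceiling are hypothetical.

import boto3

rds = boto3.client("rds")

rds.modify_db_instance(
    DBInstanceIdentifier="app-mysql",  # hypothetical instance
    MaxAllocatedStorage=1000,          # autoscaling ceiling in GiB
    ApplyImmediately=True,
)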

84
Q

QUESTION 617
A company wants to send all AWS Systems Manager Session Manager logs to an Amazon S3
bucket for archival purposes.
Which solution will meet this requirement with the MOST operational efficiency?
A. Enable S3 logging in the Systems Manager console. Choose an S3 bucket to send the session
data to.
B. Install the Amazon CloudWatch agent. Push all logs to a CloudWatch log group. Export the logs
to an S3 bucket from the group for archival purposes.
C. Create a Systems Manager document to upload all server logs to a central S3 bucket. Use
Amazon EventBridge to run the Systems Manager document against all servers that are in the
account daily.
D. Install an Amazon CloudWatch agent. Push all logs to a CloudWatch log group. Create a
CloudWatch logs subscription that pushes any incoming log events to an Amazon Kinesis Data
Firehose delivery stream. Set Amazon S3 as the destination.

A

A. Enable S3 logging in the Systems Manager console. Choose an S3 bucket to send the session
data to.

Explanation:
https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-logging.html

85
Q

QUESTION 616
A company provides an API interface to customers so the customers can retrieve their financial
information. The company expects a larger number of requests during peak usage times of the
year.
The company requires the API to respond consistently with low latency to ensure customer
satisfaction. The company needs to provide a compute host for the API.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use an Application Load Balancer and Amazon Elastic Container Service (Amazon ECS).
B. Use Amazon API Gateway and AWS Lambda functions with provisioned concurrency.
C. Use an Application Load Balancer and an Amazon Elastic Kubernetes Service (Amazon EKS)
cluster.
D. Use Amazon API Gateway and AWS Lambda functions with reserved concurrency.

A

B. Use Amazon API Gateway and AWS Lambda functions with provisioned concurrency.

Explanation:
In the context of the given scenario, where the company wants low latency and consistent
performance for their API during peak usage times, it would be more suitable to use provisioned
concurrency. By allocating a specific number of concurrent executions, the company can ensure
that there are enough function instances available to handle the expected load and minimize the
impact of cold starts. This will result in lower latency and improved performance for the API.
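
Provisioned concurrency is configured on a published version or alias, not on $LATEST. A
minimal sketch (function name, alias, and capacity are assumptions):

import boto3

lam = boto3.client("lambda")

# Keep 100 execution environments initialized so peak-time API requests
# avoid cold starts entirely.
lam.put_provisioned_concurrency_config(
    FunctionName="financial-api",          # assumed function name
    Qualifier="prod",                      # alias or version (required)
    ProvisionedConcurrentExecutions=100,   # assumed peak capacity
)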

86
Q

QUESTION 615
A company is migrating an on-premises application to AWS. The company wants to use Amazon
Redshift as a solution.

Which use cases are suitable for Amazon Redshift in this scenario? (Choose three.)
A. Supporting data APIs to access data with traditional, containerized, and event-driven applications
B. Supporting client-side and server-side encryption
C. Building analytics workloads during specified hours and when the application is not active
D. Caching data to reduce the pressure on the backend database
E. Scaling globally to support petabytes of data and tens of millions of requests per minute
F. Creating a secondary replica of the cluster by using the AWS Management Console

A

B. Supporting client-side and server-side encryption
C. Building analytics workloads during specified hours and when the application is not active
E. Scaling globally to support petabytes of data and tens of millions of requests per minute

Explanation:
B. Supporting client-side and server-side encryption: Amazon Redshift supports both client-side
and server-side encryption for improved data security.
C. Building analytics workloads during specified hours and when the application is not active:
Amazon Redshift is optimized for running complex analytic queries against very large datasets,
making it a good choice for this use case.
E. Scaling globally to support petabytes of data and tens of millions of requests per minute:
Amazon Redshift is designed to handle petabytes of data, and to deliver fast query and I/O
performance for virtually any size dataset.
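
For use case C specifically, one way to confine the analytics workload to specified hours is to
pause the cluster outside them. A sketch under that assumption (the cluster identifier is a
placeholder; in practice these calls would be driven by the Redshift scheduler or an
EventBridge schedule):

import boto3

redshift = boto3.client("redshift")

# Stop paying for compute when the analytics window closes...
redshift.pause_cluster(ClusterIdentifier="analytics-cluster")

# ...and bring the cluster back when the window opens.
redshift.resume_cluster(ClusterIdentifier="analytics-cluster")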

87
Q

QUESTION 614
A company is running a microservices application on Amazon EC2 instances. The company
wants to migrate the application to an Amazon Elastic Kubernetes Service (Amazon EKS) cluster
for scalability. The company must configure the Amazon EKS control plane with endpoint private
access set to true and endpoint public access set to false to maintain security compliance. The
company must also put the data plane in private subnets. However, the company has received
error notifications because the node cannot join the cluster.
Which solution will allow the node to join the cluster?
A. Grant the required permission in AWS Identity and Access Management (IAM) to the
AmazonEKSNodeRole IAM role.
B. Create interface VPC endpoints to allow nodes to access the control plane.
C. Recreate nodes in the public subnet. Restrict security groups for EC2 nodes.
D. Allow outbound traffic in the security group of the nodes.

A

B. Create interface VPC endpoints to allow nodes to access the control plane.

Explanation:
By creating interface VPC endpoints, you can enable the necessary communication between the
Amazon EKS control plane and the nodes in private subnets. This solution ensures that the
control plane maintains endpoint private access (set to true) and endpoint public access (set to
false) for security compliance.
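
For a fully private cluster, the nodes also need endpoints for the services they call during
bootstrap, such as Amazon ECR, EC2, STS, and CloudWatch Logs, plus an S3 gateway endpoint
for ECR image layers. A sketch with placeholder IDs, assuming us-east-1 as the Region:

import boto3

ec2 = boto3.client("ec2")

VPC_ID = "vpc-0123456789abcdef0"   # placeholder

# Interface endpoints for the AWS APIs that private nodes must reach.
for service in ["ecr.api", "ecr.dkr", "ec2", "sts", "logs"]:
    ec2.create_vpc_endpoint(
        VpcEndpointType="Interface",
        VpcId=VPC_ID,
        ServiceName=f"com.amazonaws.us-east-1.{service}",
        SubnetIds=["subnet-aaa", "subnet-bbb"],        # placeholders
        SecurityGroupIds=["sg-0123456789abcdef0"],     # placeholder
        PrivateDnsEnabled=True,
    )

# Gateway endpoint for S3, where ECR stores image layers.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId=VPC_ID,
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],           # placeholder
)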

88
Q

QUESTION 613
A social media company wants to allow its users to upload images in an application that is hosted
in the AWS Cloud. The company needs a solution that automatically resizes the images so that
the images can be displayed on multiple device types. The application experiences unpredictable
traffic patterns throughout the day. The company is seeking a highly available solution that
maximizes scalability.
What should a solutions architect do to meet these requirements?

A. Create a static website hosted in Amazon S3 that invokes AWS Lambda functions to resize the
images and store the images in an Amazon S3 bucket.
B. Create a static website hosted in Amazon CloudFront that invokes AWS Step Functions to resize
the images and store the images in an Amazon RDS database.
C. Create a dynamic website hosted on a web server that runs on an Amazon EC2 instance.
Configure a process that runs on the EC2 instance to resize the images and store the images in
an Amazon S3 bucket.
D. Create a dynamic website hosted on an automatically scaling Amazon Elastic Container Service
(Amazon ECS) cluster that creates a resize job in Amazon Simple Queue Service (Amazon
SQS). Set up an image-resizing program that runs on an Amazon EC2 instance to process the
resize jobs.

A

A. Create a static website hosted in Amazon S3 that invokes AWS Lambda functions to resize the
images and store the images in an Amazon S3 bucket.

Explanation:
By using Amazon S3 and AWS Lambda together, you can create a serverless architecture that
provides highly scalable and available image resizing capabilities. Here’s how the solution would
work:
Set up an Amazon S3 bucket to store the original images uploaded by users.
Configure an event trigger on the S3 bucket to invoke an AWS Lambda function whenever a new
image is uploaded.
The Lambda function can be designed to retrieve the uploaded image, perform the necessary
resizing operations based on device requirements, and store the resized images back in the S3
bucket or a different bucket designated for resized images.
Configure the Amazon S3 bucket to make the resized images publicly accessible for serving to
users.
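
A simplified sketch of such a Lambda handler (it assumes a Pillow layer is attached, a
destination bucket named resized-images-bucket exists, and object keys contain no characters
that need URL-decoding):

import os

import boto3
from PIL import Image   # assumes a Pillow Lambda layer

s3 = boto3.client("s3")
SIZES = {"small": 320, "large": 1024}   # assumed target widths in pixels

def handler(event, context):
    # Invoked by s3:ObjectCreated notifications on the upload bucket.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        src = f"/tmp/{os.path.basename(key)}"
        s3.download_file(bucket, key, src)
        for label, width in SIZES.items():
            img = Image.open(src)
            img.thumbnail((width, width))   # resize, preserving aspect ratio
            out = f"/tmp/{label}-{os.path.basename(key)}"
            img.save(out)
            # Write to a separate bucket so the resize doesn't re-trigger us.
            s3.upload_file(out, "resized-images-bucket", f"{label}/{key}")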

89
Q

QUESTION 612
A company uses AWS Organizations with resources tagged by account. The company also uses
AWS Backup to back up its AWS infrastructure resources. The company needs to back up all
AWS resources.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use AWS Config to identify all untagged resources. Tag the identified resources
programmatically. Use tags in the backup plan.
B. Use AWS Config to identify all resources that are not running. Add those resources to the backup
vault.
C. Require all AWS account owners to review their resources to identify the resources that need to
be backed up.
D. Use Amazon Inspector to identify all noncompliant resources.

A

A. Use AWS Config to identify all untagged resources. Tag the identified resources
programmatically. Use tags in the backup plan.

Explanation:
This solution allows you to leverage AWS Config to identify any untagged resources within your
AWS Organizations accounts. Once identified, you can programmatically apply the necessary
tags to indicate the backup requirements for each resource. By using tags in the backup plan
configuration, you can ensure that only the tagged resources are included in the backup process,
reducing operational overhead and ensuring all necessary resources are backed up.
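
A hedged sketch of the identify-and-tag step (the Config advanced-query expression, the
result shape, and the "backup" tag key are assumptions; pagination and the 20-ARN batch limit
of tag_resources are omitted for brevity):

import json

import boto3

config = boto3.client("config")
tagging = boto3.client("resourcegroupstaggingapi")

# Query Config's inventory, then find resources missing the backup tag.
results = config.select_resource_config(
    Expression="SELECT arn, tags WHERE resourceType = 'AWS::EC2::Instance'"
)
untagged = [
    json.loads(r)["arn"]
    for r in results["Results"]
    if not any(t.get("key") == "backup" for t in json.loads(r).get("tags", []))
]

# Tag them programmatically so the tag-based backup plan picks them up.
if untagged:
    tagging.tag_resources(ResourceARNList=untagged, Tags={"backup": "true"})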

90
Q

QUESTION 611
A company is developing software that uses a PostgreSQL database schema. The company
needs to configure multiple development environments and databases for the company’s
developers. On average, each development environment is used for half of the 8-hour workday.
Which solution will meet these requirements MOST cost-effectively?
A. Configure each development environment with its own Amazon Aurora PostgreSQL database
B. Configure each development environment with its own Amazon RDS for PostgreSQL Single-AZ
DB instances
C. Configure each development environment with its own Amazon Aurora On-Demand PostgreSQL-
Compatible database
D. Configure each development environment with its own Amazon S3 bucket by using Amazon S3
Object Select

A

C. Configure each development environment with its own Amazon Aurora On-Demand PostgreSQL-
Compatible database

Explanation:
Aurora Serverless is Aurora's on-demand configuration. You create a database, specify the
desired capacity range, and connect your applications. You pay per second for database capacity
only while the database is active, which suits development environments that are used for only
about half of an 8-hour workday. You can also migrate between standard and serverless
configurations with a few steps in the Amazon Relational Database Service (Amazon RDS) console.
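
A sketch of one such per-developer environment using Aurora Serverless v2 (identifiers, engine
version, and capacity bounds are assumptions):

import boto3

rds = boto3.client("rds")

# Cluster whose capacity floats between 0.5 and 4 ACUs and is billed per
# second only while the database is in use.
rds.create_db_cluster(
    DBClusterIdentifier="dev-env-1",
    Engine="aurora-postgresql",
    EngineVersion="15.4",              # assumed supported version
    MasterUsername="devadmin",
    ManageMasterUserPassword=True,     # RDS stores it in Secrets Manager
    ServerlessV2ScalingConfiguration={"MinCapacity": 0.5, "MaxCapacity": 4},
)

# Serverless v2 clusters still need an instance, created with the special
# db.serverless instance class.
rds.create_db_instance(
    DBInstanceIdentifier="dev-env-1-writer",
    DBClusterIdentifier="dev-env-1",
    DBInstanceClass="db.serverless",
    Engine="aurora-postgresql",
)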

91
Q

QUESTION 610
A global marketing company has applications that run in the ap-southeast-2 Region and the eu-
west-1 Region. Applications that run in a VPC in eu-west-1 need to communicate securely with
databases that run in a VPC in ap-southeast-2.
Which network design will meet these requirements?
A. Create a VPC peering connection between the eu-west-1 VPC and the ap-southeast-2 VPC.
Create an inbound rule in the eu-west-1 application security group that allows traffic from the
database server IP addresses in the ap-southeast-2 security group.
B. Configure a VPC peering connection between the ap-southeast-2 VPC and the eu-west-1 VPC.
Update the subnet route tables. Create an inbound rule in the ap-southeast-2 database security
group that references the security group ID of the application servers in eu-west-1.
C. Configure a VPC peering connection between the ap-southeast-2 VPC and the eu-west-1
VPC. Update the subnet route tables. Create an inbound rule in the ap-southeast-2 database
security group that allows traffic from the eu-west-1 application server IP addresses.
D. Create a transit gateway with a peering attachment between the eu-west-1 VPC and the ap-
southeast-2 VPC. After the transit gateways are properly peered and routing is configured, create
an inbound rule in the database security group that references the security group ID of the
application servers in eu-west-1.

A

C. Configure a VPC peering connection between the ap-southeast-2 VPC and the eu-west-1
VPC. Update the subnet route tables. Create an inbound rule in the ap-southeast-2 database
security group that allows traffic from the eu-west-1 application server IP addresses.

Explanation:
You cannot reference the security group of a peer VPC that’s in a different Region. Instead, use
the CIDR block of the peer VPC.
https://docs.aws.amazon.com/vpc/latest/peering/vpc-peering-security-groups.html
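
A sketch of the chosen design's moving parts from the ap-southeast-2 side, with a CIDR-based
rule in place of a cross-Region security group reference (all IDs, CIDRs, and the PostgreSQL
port are placeholders):

import boto3

ec2 = boto3.client("ec2", region_name="ap-southeast-2")

# Request the cross-Region peering connection to the eu-west-1 VPC.
peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-db-sydney",
    PeerVpcId="vpc-app-ireland",
    PeerRegion="eu-west-1",
)
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# After the eu-west-1 side accepts, route traffic for its CIDR over the
# peering connection and allow the application servers by CIDR.
ec2.create_route(
    RouteTableId="rtb-db-private",
    DestinationCidrBlock="10.1.0.0/16",    # assumed eu-west-1 VPC CIDR
    VpcPeeringConnectionId=pcx_id,
)
ec2.authorize_security_group_ingress(
    GroupId="sg-database",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 5432,                  # assumed PostgreSQL port
        "ToPort": 5432,
        "IpRanges": [{"CidrIp": "10.1.0.0/16"}],
    }],
)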

92
Q

QUESTION 609
A company operates a two-tier application for image processing. The application uses two
Availability Zones, each with one public subnet and one private subnet. An Application Load
Balancer (ALB) for the web tier uses the public subnets. Amazon EC2 instances for the
application tier use the private subnets.
Users report that the application is running more slowly than expected. A security audit of the web server log files shows that the application is receiving millions of illegitimate requests from a small
number of IP addresses. A solutions architect needs to resolve the immediate performance
problem while the company investigates a more permanent solution.
What should the solutions architect recommend to meet this requirement?
A. Modify the inbound security group for the web tier. Add a deny rule for the IP addresses that are
consuming resources.
B. Modify the network ACL for the web tier subnets. Add an inbound deny rule for the IP addresses
that are consuming resources.
C. Modify the inbound security group for the application tier. Add a deny rule for the IP addresses
that are consuming resources.
D. Modify the network ACL for the application tier subnets. Add an inbound deny rule for the IP
addresses that are consuming resources.

A

B. Modify the network ACL for the web tier subnets. Add an inbound deny rule for the IP addresses
that are consuming resources.

Explanation:
The security audit reveals that the application is receiving millions of illegitimate requests
from a small number of IP addresses. Security groups support only allow rules, so options A and
C are not possible; network ACLs (Access Control Lists) do support explicit deny rules, which
makes modifying the network ACL for the web tier subnets the right immediate fix.
By adding an inbound deny rule that targets the IP addresses that are consuming resources, the
network ACL blocks the illegitimate traffic at the subnet level before it reaches the web
servers. This relieves the excessive load on the web tier and improves the application's
performance while the company investigates a permanent solution.
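
A minimal sketch of such a deny entry (the ACL ID and offending CIDR are placeholders):

import boto3

ec2 = boto3.client("ec2")

# Network ACL rules are evaluated in ascending rule-number order, so the
# deny entry gets a low number ahead of the subnet's allow rules.
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",
    RuleNumber=50,
    Protocol="-1",                  # all protocols
    RuleAction="deny",
    Egress=False,                   # inbound rule
    CidrBlock="203.0.113.7/32",     # one of the offending IP addresses
)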

93
Q

QUESTION 608
A company has migrated multiple Microsoft Windows Server workloads to Amazon EC2
instances that run in the us-west-1 Region. The company manually backs up the workloads to
create an image as needed.
In the event of a natural disaster in the us-west-1 Region, the company wants to recover
workloads quickly in the us-west-2 Region. The company wants no more than 24 hours of data
loss on the EC2 instances. The company also wants to automate any backups of the EC2
instances.
Which solutions will meet these requirements with the LEAST administrative effort? (Choose
two.)
A. Create an Amazon EC2-backed Amazon Machine Image (AMI) lifecycle policy to create a backup
based on tags. Schedule the backup to run twice daily. Copy the image on demand.
B. Create an Amazon EC2-backed Amazon Machine Image (AMI) lifecycle policy to create a backup
based on tags. Schedule the backup to run twice daily. Configure the copy to the us-west-2
Region.
C. Create backup vaults in us-west-1 and in us-west-2 by using AWS Backup. Create a backup plan
for the EC2 instances based on tag values. Create an AWS Lambda function to run as a
scheduled job to copy the backup data to us-west-2.
D. Create a backup vault by using AWS Backup. Use AWS Backup to create a backup plan for the
EC2 instances based on tag values. Define the destination for the copy as us-west-2. Specify the
backup schedule to run twice daily.
E. Create a backup vault by using AWS Backup. Use AWS Backup to create a backup plan for the
EC2 instances based on tag values. Specify the backup schedule to run twice daily. Copy on
demand to us-west-2.

A

B. Create an Amazon EC2-backed Amazon Machine Image (AMI) lifecycle policy to create a backup
based on tags. Schedule the backup to run twice daily. Configure the copy to the us-west-2
Region.
D. Create a backup vault by using AWS Backup. Use AWS Backup to create a backup plan for the
EC2 instances based on tag values. Define the destination for the copy as us-west-2. Specify the
backup schedule to run twice daily.

Explanation:
Both solutions are automated and require no manual intervention to create or copy backups.
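
A sketch of the AWS Backup plan in option D (vault names, ARNs, the account ID, the schedule,
and the tag key are assumptions):

import boto3

backup = boto3.client("backup")

# Twice-daily backups in us-west-1 with an automatic copy to us-west-2.
plan = backup.create_backup_plan(BackupPlan={
    "BackupPlanName": "ec2-dr-plan",
    "Rules": [{
        "RuleName": "twice-daily",
        "TargetBackupVaultName": "primary-vault",
        "ScheduleExpression": "cron(0 0,12 * * ? *)",   # 00:00 and 12:00 UTC
        "CopyActions": [{
            "DestinationBackupVaultArn":
                "arn:aws:backup:us-west-2:111122223333:backup-vault:dr-vault",
        }],
    }],
})

# Select the EC2 instances to protect by tag value.
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "tagged-ec2",
        "IamRoleArn": "arn:aws:iam::111122223333:role/aws-backup-role",
        "ListOfTags": [{
            "ConditionType": "STRINGEQUALS",
            "ConditionKey": "backup",
            "ConditionValue": "true",
        }],
    },
)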

94
Q

QUESTION 607
A company has a web application for travel ticketing. The application is based on a database that
runs in a single data center in North America. The company wants to expand the application to
serve a global user base. The company needs to deploy the application to multiple AWS Regions.
Average latency must be less than 1 second on updates to the reservation database.
The company wants to have separate deployments of its web platform across multiple Regions.
However, the company must maintain a single primary reservation database that is globally
consistent.
Which solution should a solutions architect recommend to meet these requirements?
A. Convert the application to use Amazon DynamoDB. Use a global table for the central reservation
table. Use the correct Regional endpoint in each Regional deployment.
B. Migrate the database to an Amazon Aurora MySQL database. Deploy Aurora Read Replicas in
each Region. Use the correct Regional endpoint in each Regional deployment for access to the
database.
C. Migrate the database to an Amazon RDS for MySQL database. Deploy MySQL read replicas in each Region. Use the correct Regional endpoint in each Regional deployment for access to the
database.
D. Migrate the application to an Amazon Aurora Serverless database. Deploy instances of the
database to each Region. Use the correct Regional endpoint in each Regional deployment to
access the database. Use AWS Lambda functions to process event streams in each Region to
synchronize the databases.

A

A. Convert the application to use Amazon DynamoDB. Use a global table for the central reservation
table. Use the correct Regional endpoint in each Regional deployment.

Explanation:
Using DynamoDB’s global tables feature, you can achieve a globally consistent reservation
database with low latency on updates, making it suitable for serving a global user base. The
automatic replication provided by DynamoDB eliminates the need for manual synchronization
between Regions.
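
A sketch of turning an existing Regional table into a global table by adding replicas (the
table name and Regions are assumptions; global tables version 2019.11.21 allows one replica
change per UpdateTable call):

import boto3

ddb = boto3.client("dynamodb", region_name="us-east-1")

for region in ["eu-west-1", "ap-southeast-2"]:
    # Each Create adds a fully replicated copy of the table in that Region.
    ddb.update_table(
        TableName="reservations",                        # assumed table name
        ReplicaUpdates=[{"Create": {"RegionName": region}}],
    )
    # Wait until the table returns to ACTIVE before adding the next replica.
    ddb.get_waiter("table_exists").wait(TableName="reservations")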

95
Q

QUESTION 606
A social media company is building a feature for its website. The feature will give users the ability
to upload photos. The company expects significant increases in demand during large events and
must ensure that the website can handle the upload traffic from users.
Which solution meets these requirements with the MOST scalability?
A. Upload files from the user’s browser to the application servers. Transfer the files to an Amazon S3
bucket.
B. Provision an AWS Storage Gateway file gateway. Upload files directly from the user’s browser to
the file gateway.
C. Generate Amazon S3 presigned URLs in the application. Upload files directly from the user’s
browser into an S3 bucket.
D. Provision an Amazon Elastic File System (Amazon EFS) file system. Upload files directly from the
user’s browser to the file system.

A

C. Generate Amazon S3 presigned URLs in the application. Upload files directly from the user’s
browser into an S3 bucket.

Explanation:
This approach lets users upload files directly to S3 without passing the data through the
application servers, which reduces load on the application and improves scalability. The user's
browser handles the upload itself, and S3 absorbs the traffic spikes during large events.
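
A minimal sketch of generating such a URL server-side (the bucket, key, and expiry are
assumptions):

import boto3

s3 = boto3.client("s3")

# The application returns this URL to the browser, which then uploads the
# photo straight to S3 with an HTTP PUT; app servers never touch the bytes.
url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "photo-uploads", "Key": "user123/event.jpg"},
    ExpiresIn=300,   # URL valid for 5 minutes
)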

96
Q

QUESTION 605
A company has Amazon EC2 instances that run nightly batch jobs to process data. The EC2
instances run in an Auto Scaling group that uses On-Demand billing. If a job fails on one
instance, another instance will reprocess the job. The batch jobs run between 12:00 AM and
06:00 AM local time every day.
Which solution will provide EC2 instances to meet these requirements MOST cost-effectively?
A. Purchase a 1-year Savings Plan for Amazon EC2 that covers the instance family of the Auto
Scaling group that the batch job uses.
B. Purchase a 1-year Reserved Instance for the specific instance type and operating system of the
instances in the Auto Scaling group that the batch job uses.
C. Create a new launch template for the Auto Scaling group. Set the instances to Spot Instances.
Set a policy to scale out based on CPU usage.
D. Create a new launch template for the Auto Scaling group. Increase the instance size. Set a policy
to scale out based on CPU usage.

A

C. Create a new launch template for the Auto Scaling group. Set the instances to Spot Instances.
Set a policy to scale out based on CPU usage.

Explanation:
Purchasing a 1-year Savings Plan (option A) or a 1-year Reserved Instance (option B) may
provide cost savings, but they are more suitable for long-running, steady-state workloads. Since
your batch jobs run for a specific period each day, using Spot Instances with the ability to scale
out based on CPU usage is a more cost-effective choice.
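
A sketch of an all-Spot Auto Scaling group built from an existing launch template (names,
subnets, and sizes are placeholders):

import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="nightly-batch-asg",
    MinSize=0,
    MaxSize=20,
    VPCZoneIdentifier="subnet-aaa,subnet-bbb",
    MixedInstancesPolicy={
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                "LaunchTemplateName": "batch-spot-template",   # placeholder
                "Version": "$Latest",
            },
        },
        "InstancesDistribution": {
            # 0% On-Demand above the base capacity means 100% Spot.
            "OnDemandPercentageAboveBaseCapacity": 0,
            "SpotAllocationStrategy": "capacity-optimized",
        },
    },
)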

97
Q

QUESTION 604
A company needs to connect several VPCs in the us-east-1 Region that span hundreds of AWS
accounts. The company’s networking team has its own AWS account to manage the cloud
network.
What is the MOST operationally efficient solution to connect the VPCs?
A. Set up VPC peering connections between each VPC. Update each associated subnet’s route
table
B. Configure a NAT gateway and an internet gateway in each VPC to connect each VPC through the
internet
C. Create an AWS Transit Gateway in the networking team’s AWS account. Configure static routes
from each VPC.
D. Deploy VPN gateways in each VPC. Create a transit VPC in the networking team’s AWS account
to connect to each VPC.

A

C. Create an AWS Transit Gateway in the networking team’s AWS account. Configure static routes
from each VPC.

Explanation:
AWS Transit Gateway is a highly scalable and centralized hub for connecting multiple VPCs, on-
premises networks, and remote networks. It simplifies network connectivity by providing a single
entry point and reducing the number of connections required. In this scenario, deploying an AWS
Transit Gateway in the networking team’s AWS account allows for efficient management and
control over the network connectivity across multiple VPCs.
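
A sketch of the core calls in the networking account (IDs and the summary CIDR are
placeholders; sharing the gateway with the hundreds of other accounts would be done through
AWS RAM):

import boto3

ec2 = boto3.client("ec2")

# Create the central hub once.
tgw = ec2.create_transit_gateway(Description="central-network-hub")
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# Attach a VPC and point its route table at the gateway; repeat per VPC.
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw_id,
    VpcId="vpc-0123456789abcdef0",
    SubnetIds=["subnet-aaa", "subnet-bbb"],
)
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",
    DestinationCidrBlock="10.0.0.0/8",   # assumed summary route to other VPCs
    TransitGatewayId=tgw_id,
)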

98
Q

QUESTION 603
A company runs an infrastructure monitoring service. The company is building a new feature that
will enable the service to monitor data in customer AWS accounts. The new feature will call AWS
APIs in customer accounts to describe Amazon EC2 instances and read Amazon CloudWatch
metrics.
What should the company do to obtain access to customer accounts in the MOST secure way?
A. Ensure that the customers create an IAM role in their account with read-only EC2 and
CloudWatch permissions and a trust policy to the company’s account.
B. Create a serverless API that implements a token vending machine to provide temporary AWS
credentials for a role with read-only EC2 and CloudWatch permissions.
C. Ensure that the customers create an IAM user in their account with read-only EC2 and
CloudWatch permissions. Encrypt and store customer access and secret keys in a secrets
management system.
D. Ensure that the customers create an Amazon Cognito user in their account to use an IAM role
with read-only EC2 and CloudWatch permissions. Encrypt and store the Amazon Cognito user
and password in a secrets management system.

A

A. Ensure that the customers create an IAM role in their account with read-only EC2 and
CloudWatch permissions and a trust policy to the company’s account.

Explanation:
By having customers create an IAM role with the necessary permissions in their own accounts,
the company can use AWS Identity and Access Management (IAM) to establish cross-account
access. The trust policy allows the company’s AWS account to assume the customer’s IAM role
temporarily, granting access to the specified resources (EC2 instances and CloudWatch metrics)
within the customer’s account. This approach follows the principle of least privilege, as the
company only requests the necessary permissions and does not require long-term access keys
or user credentials from the customers.
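
A sketch of both halves of the handshake (account IDs, the role name, and the external ID are
placeholders):

import boto3

# Trust policy the customer attaches to their read-only role; the
# ExternalId condition guards against the confused-deputy problem.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},   # company account
        "Action": "sts:AssumeRole",
        "Condition": {"StringEquals": {"sts:ExternalId": "customer-42"}},
    }],
}

# In the monitoring company's account: assume the customer's role to get
# temporary credentials, then call the read-only APIs with them.
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::999988887777:role/MonitoringReadOnly",
    RoleSessionName="metrics-poll",
    ExternalId="customer-42",
)["Credentials"]

ec2 = boto3.client(
    "ec2",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
instances = ec2.describe_instances()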

99
Q

QUESTION 602
A company runs a website that uses a content management system (CMS) on Amazon EC2. The
CMS runs on a single EC2 instance and uses an Amazon Aurora MySQL Multi-AZ DB instance
for the data tier. Website images are stored on an Amazon Elastic Block Store (Amazon EBS)
volume that is mounted inside the EC2 instance.
Which combination of actions should a solutions architect take to improve the performance and
resilience of the website? (Choose two.)
A. Move the website images into an Amazon S3 bucket that is mounted on every EC2 instance
B. Share the website images by using an NFS share from the primary EC2 instance. Mount this
share on the other EC2 instances.
C. Move the website images onto an Amazon Elastic File System (Amazon EFS) file system that is
mounted on every EC2 instance.
D. Create an Amazon Machine Image (AMI) from the existing EC2 instance. Use the AMI to
provision new instances behind an Application Load Balancer as part of an Auto Scaling group.
Configure the Auto Scaling group to maintain a minimum of two instances. Configure an
accelerator in AWS Global Accelerator for the website
E. Create an Amazon Machine Image (AMI) from the existing EC2 instance. Use the AMI to
provision new instances behind an Application Load Balancer as part of an Auto Scaling group.
Configure the Auto Scaling group to maintain a minimum of two instances. Configure an Amazon
CloudFront distribution for the website.

A

C. Move the website images onto an Amazon Elastic File System (Amazon EFS) file system that is
mounted on every EC2 instance.
E. Create an Amazon Machine Image (AMI) from the existing EC2 instance. Use the AMI to
provision new instances behind an Application Load Balancer as part of an Auto Scaling group.
Configure the Auto Scaling group to maintain a minimum of two instances. Configure an Amazon
CloudFront distribution for the website.

Explanation:
By combining the use of Amazon EFS for shared file storage and Amazon CloudFront for content
delivery, you can achieve improved performance and resilience for the website.
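
A sketch of the EFS half (subnet and security group IDs are placeholders; the security group
must allow NFS, TCP 2049, from the instances):

import boto3

efs = boto3.client("efs")

fs = efs.create_file_system(Encrypted=True, CreationToken="cms-images")

# One mount target per Availability Zone so every instance in the Auto
# Scaling group can mount the same file system.
for subnet in ["subnet-az-a", "subnet-az-b"]:
    efs.create_mount_target(
        FileSystemId=fs["FileSystemId"],
        SubnetId=subnet,
        SecurityGroups=["sg-efs-nfs"],
    )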

100
Q

QUESTION 601
A company wants to ingest customer payment data into the company’s data lake in Amazon S3.
The company receives payment data every minute on average. The company wants to analyze
the payment data in real time. Then the company wants to ingest the data into the data lake.
Which solution will meet these requirements with the MOST operational efficiency?
A. Use Amazon Kinesis Data Streams to ingest data. Use AWS Lambda to analyze the data in real
time.
B. Use AWS Glue to ingest data. Use Amazon Kinesis Data Analytics to analyze the data in real
time.
C. Use Amazon Kinesis Data Firehose to ingest data. Use Amazon Kinesis Data Analytics to
analyze the data in real time.
D. Use Amazon API Gateway to ingest data. Use AWS Lambda to analyze the data in real time.

A

C. Use Amazon Kinesis Data Firehose to ingest data. Use Amazon Kinesis Data Analytics to
analyze the data in real time.

Explanation:
By leveraging the combination of Amazon Kinesis Data Firehose and Amazon Kinesis Data
Analytics, you can efficiently ingest and analyze the payment data in real time without the need
for manual processing or additional infrastructure management. This solution provides a
streamlined and scalable approach to handle continuous data ingestion and analysis
requirements.
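
A sketch of the ingestion half (ARNs and buffering values are placeholders; a Kinesis Data
Analytics application would be attached to the same stream for the real-time analysis step):

import boto3

firehose = boto3.client("firehose")

# Direct-PUT delivery stream that buffers payment records and lands them
# in the S3 data lake without any servers to manage.
firehose.create_delivery_stream(
    DeliveryStreamName="payments-to-datalake",
    DeliveryStreamType="DirectPut",
    ExtendedS3DestinationConfiguration={
        "RoleARN": "arn:aws:iam::111122223333:role/firehose-s3-role",
        "BucketARN": "arn:aws:s3:::payments-data-lake",
        "BufferingHints": {"IntervalInSeconds": 60, "SizeInMBs": 5},
    },
)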