SAA L2P 451-500 v24.021 Flashcards

1
Q

QUESTION 500
A company uses a payment processing system that requires messages for a particular payment
ID to be received in the same order that they were sent. Otherwise, the payments might be
processed incorrectly.
Which actions should a solutions architect take to meet this requirement? (Choose two.)
A. Write the messages to an Amazon DynamoDB table with the payment ID as the partition key
B. Write the messages to an Amazon Kinesis data stream with the payment ID as the partition key.
C. Write the messages to an Amazon ElastiCache for Memcached cluster with the payment ID as
the key.
D. Write the messages to an Amazon Simple Queue Service (Amazon SQS) queue. Set the message attribute to use the payment ID.
E. Write the messages to an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Set the
message group to use the payment ID.

A

B. Write the messages to an Amazon Kinesis data stream with the payment ID as the partition key.
E. Write the messages to an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Set the
message group to use the payment ID.

1) SQS FIFO queues deliver messages in the exact order they were sent. Using the payment ID as the message group ID ensures that all messages for a given payment ID are received sequentially. 2) Kinesis data streams also enforce ordering per partition key, so using the payment ID as the partition key guarantees strict ordering of messages for each payment ID. The other options do not guarantee message ordering: DynamoDB and ElastiCache are not message queues, and standard SQS queues deliver messages in only approximate order. See https://docs.aws.amazon.com/streams/latest/dev/key-concepts.html
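
As a rough illustration, a producer might set the message group ID to the payment ID when
sending to a FIFO queue. A minimal boto3 sketch; the queue URL and IDs are hypothetical:

    import boto3

    sqs = boto3.client("sqs")

    # All messages that share a MessageGroupId are delivered in order within that group.
    sqs.send_message(
        QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/payments.fifo",  # hypothetical
        MessageBody='{"paymentId": "pmt-42", "amount": 100}',
        MessageGroupId="pmt-42",              # payment ID: ordering is preserved per group
        MessageDeduplicationId="pmt-42-0001", # or enable content-based deduplication on the queue
    )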

2
Q

QUESTION 499
A company has a web application with sporadic usage patterns. There is heavy usage at the
beginning of each month, moderate usage at the start of each week, and unpredictable usage
during the week. The application consists of a web server and a MySQL database server running
inside the data center. The company would like to move the application to the AWS Cloud and
needs to select a cost-effective database platform that will not require database modifications.
Which solution will meet these requirements?
A. Amazon DynamoDB
B. Amazon RDS for MySQL
C. MySQL-compatible Amazon Aurora Serverless
D. MySQL deployed on Amazon EC2 in an Auto Scaling group

A

C. MySQL-compatible Amazon Aurora Serverless

Explanation:
Amazon Aurora Serverless is an on-demand, auto-scaling configuration for Amazon Aurora
(MySQL-compatible edition): the database automatically starts up, shuts down, and scales capacity
up or down based on the application’s needs. It is a simple, cost-effective option for infrequent,
intermittent, or unpredictable workloads, and because it is MySQL compatible, it requires no
database modifications.
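
A minimal sketch of provisioning a MySQL-compatible Aurora Serverless (v1-style) cluster with
boto3; identifiers, credentials, and capacity values are placeholders:

    import boto3

    rds = boto3.client("rds")

    # EngineMode "serverless" lets the cluster scale capacity (and pause) automatically.
    rds.create_db_cluster(
        DBClusterIdentifier="web-app-db",  # hypothetical
        Engine="aurora-mysql",
        EngineMode="serverless",
        MasterUsername="admin",
        MasterUserPassword="change-me",    # use AWS Secrets Manager in practice
        ScalingConfiguration={
            "MinCapacity": 1,
            "MaxCapacity": 16,
            "AutoPause": True,             # pause during idle periods
            "SecondsUntilAutoPause": 300,
        },
    )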

3
Q

QUESTION 498
A company needs to create an Amazon Elastic Kubernetes Service (Amazon EKS) cluster to host
a digital media streaming application. The EKS cluster will use a managed node group that is
backed by Amazon Elastic Block Store (Amazon EBS) volumes for storage. The company must
encrypt all data at rest by using a customer managed key that is stored in AWS Key Management Service (AWS KMS).
Which combination of actions will meet this requirement with the LEAST operational overhead?
(Choose two.)
A. Use a Kubernetes plugin that uses the customer managed key to perform data encryption.
B. After creation of the EKS cluster, locate the EBS volumes. Enable encryption by using the
customer managed key.
C. Enable EBS encryption by default in the AWS Region where the EKS cluster will be created.
Select the customer managed key as the default key.
D. Create the EKS cluster. Create an IAM role that has a policy that grants permission to the
customer managed key. Associate the role with the EKS cluster.
E. Store the customer managed key as a Kubernetes secret in the EKS cluster. Use the customer
managed key to encrypt the EBS volumes.

A

C. Enable EBS encryption by default in the AWS Region where the EKS cluster will be created.
Select the customer managed key as the default key.
D. Create the EKS cluster. Create an IAM role that has a policy that grants permission to the
customer managed key. Associate the role with the EKS cluster.

C - Enable EBS encryption by default in a region - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html

D - Grants key access only to the EKS cluster without changing broader IAM permissions.

Other Options:
A. Using a custom plugin requires installing, managing, and troubleshooting the plugin, which adds significant operational overhead.

E. Managing Kubernetes secrets for key access requires additional operations within the EKS cluster, adding operational complexity.
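
A minimal boto3 sketch of option C; the KMS key ARN is a placeholder, and both calls apply per
Region:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Encrypt every new EBS volume in this Region by default...
    ec2.enable_ebs_encryption_by_default()

    # ...and use the customer managed key instead of the AWS managed default key.
    ec2.modify_ebs_default_kms_key_id(
        KmsKeyId="arn:aws:kms:us-east-1:123456789012:key/11112222-3333-4444-5555-666677778888"
    )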

4
Q

QUESTION 496
A meteorological startup company has a custom web application to sell weather data to its users online. The company uses Amazon DynamoDB to store its data and wants to build a new service
that sends an alert to the managers of four internal teams every time a new weather event is
recorded. The company does not want the new service to affect the performance of the current
application.
What should a solutions architect do to meet these requirements with the LEAST amount of
operational overhead?
A. Use DynamoDB transactions to write new event data to the table. Configure the transactions to
notify internal teams.
B. Have the current application publish a message to four Amazon Simple Notification Service
(Amazon SNS) topics. Have each team subscribe to one topic.
C. Enable Amazon DynamoDB Streams on the table. Use triggers to write to a single Amazon
Simple Notification Service (Amazon SNS) topic to which the teams can subscribe.
D. Add a custom attribute to each record to flag new items. Write a cron job that scans the table
every minute for items that are new and notifies an Amazon Simple Queue Service (Amazon
SQS) queue to which the teams can subscribe.

A

C. Enable Amazon DynamoDB Streams on the table. Use triggers to write to a single Amazon
Simple Notification Service (Amazon SNS) topic to which the teams can subscribe.

Explanation:
The best solution to meet these requirements with the least amount of operational overhead is to
enable Amazon DynamoDB Streams on the table and use triggers to write to a single Amazon
Simple Notification Service (Amazon SNS) topic to which the teams can subscribe. This solution
requires minimal configuration and infrastructure setup, and Amazon DynamoDB Streams
provide a low-latency way to capture changes to the DynamoDB table. The triggers automatically
capture the changes and publish them to the SNS topic, which notifies the internal teams.
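
A minimal sketch of the Lambda trigger that fans stream records out to a single topic; the
topic ARN is hypothetical:

    import json
    import boto3

    sns = boto3.client("sns")
    TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:weather-events"  # hypothetical

    def handler(event, context):
        # Invoked by the DynamoDB stream; publish each newly inserted weather event.
        for record in event["Records"]:
            if record["eventName"] == "INSERT":
                new_item = record["dynamodb"]["NewImage"]
                sns.publish(TopicArn=TOPIC_ARN, Message=json.dumps(new_item))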

5
Q

QUESTION 497
A company is developing a real-time multiplayer game that uses UDP for communications
between the client and servers in an Auto Scaling group. Spikes in demand are anticipated during
the day, so the game server platform must adapt accordingly. Developers want to store gamer
scores and other non-relational data in a database solution that will scale without intervention.
Which solution should a solutions architect recommend?
A. Use Amazon Route 53 for traffic distribution and Amazon Aurora Serverless for data storage.
B. Use a Network Load Balancer for traffic distribution and Amazon DynamoDB on-demand for data
storage.
C. Use a Network Load Balancer for traffic distribution and Amazon Aurora Global Database for data
storage.
D. Use an Application Load Balancer for traffic distribution and Amazon DynamoDB global tables for
data storage.

A

B. Use a Network Load Balancer for traffic distribution and Amazon DynamoDB on-demand for data
storage.

Explanation:
A Network Load Balancer can handle UDP traffic, and Amazon DynamoDB on-demand can
provide automatic scaling without intervention.
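
For the data tier, on-demand capacity is a table-level setting. A sketch with hypothetical
names:

    import boto3

    dynamodb = boto3.client("dynamodb")

    # PAY_PER_REQUEST (on-demand) scales reads and writes without capacity planning.
    dynamodb.create_table(
        TableName="GamerScores",  # hypothetical
        AttributeDefinitions=[{"AttributeName": "gamerId", "AttributeType": "S"}],
        KeySchema=[{"AttributeName": "gamerId", "KeyType": "HASH"}],
        BillingMode="PAY_PER_REQUEST",
    )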

6
Q

QUESTION 495
A company has a large dataset for its online advertising business stored in an Amazon RDS for
MySQL DB instance in a single Availability Zone. The company wants business reporting queries
to run without impacting the write operations to the production DB instance.
Which solution meets these requirements?
A. Deploy RDS read replicas to process the business reporting queries.
B. Scale out the DB instance horizontally by placing it behind an Elastic Load Balancer.
C. Scale up the DB instance to a larger instance type to handle write operations and queries.
D. Deploy the DB instance in multiple Availability Zones to process the business reporting queries.

A

A. Deploy RDS read replicas to process the business reporting queries.
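
A minimal boto3 sketch; instance identifiers are placeholders. Reporting queries then point at
the replica's endpoint while writes continue against the source instance:

    import boto3

    rds = boto3.client("rds")

    rds.create_db_instance_read_replica(
        DBInstanceIdentifier="ads-db-reporting",  # hypothetical replica name
        SourceDBInstanceIdentifier="ads-db",      # hypothetical production instance
    )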

7
Q

QUESTION 494
A developer has an application that uses an AWS Lambda function to upload files to Amazon S3
and needs the required permissions to perform the task. The developer already has an IAM user
with valid IAM credentials required for Amazon S3.
What should a solutions architect do to grant the permissions?
A. Add required IAM permissions in the resource policy of the Lambda function.
B. Create a signed request using the existing IAM credentials in the Lambda function
C. Create a new IAM user and use the existing IAM credentials in the Lambda function.
D. Create an IAM execution role with the required permissions and attach the IAM role to the
Lambda function.

A

D. Create an IAM execution role with the required permissions and attach the IAM role to the
Lambda function.

Explanation:
To grant the necessary permissions to an AWS Lambda function to upload files to Amazon S3, a
solutions architect should create an IAM execution role with the required permissions and attach
the IAM role to the Lambda function. This approach follows the principle of least privilege and
ensures that the Lambda function can only access the resources it needs to perform its specific
task.
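
A minimal sketch of creating such an execution role with boto3; the role name, policy name,
and bucket ARN are hypothetical:

    import json
    import boto3

    iam = boto3.client("iam")

    # Trust policy: only the Lambda service may assume this role.
    iam.create_role(
        RoleName="upload-fn-role",  # hypothetical
        AssumeRolePolicyDocument=json.dumps({
            "Version": "2012-10-17",
            "Statement": [{"Effect": "Allow",
                           "Principal": {"Service": "lambda.amazonaws.com"},
                           "Action": "sts:AssumeRole"}],
        }),
    )

    # Least privilege: allow uploads to a single bucket only.
    iam.put_role_policy(
        RoleName="upload-fn-role",
        PolicyName="s3-put-only",
        PolicyDocument=json.dumps({
            "Version": "2012-10-17",
            "Statement": [{"Effect": "Allow",
                           "Action": "s3:PutObject",
                           "Resource": "arn:aws:s3:::my-upload-bucket/*"}],  # hypothetical
        }),
    )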

8
Q

QUESTION 493
A company has deployed a serverless application that invokes an AWS Lambda function when
new documents are uploaded to an Amazon S3 bucket. The application uses the Lambda
function to process the documents. After a recent marketing campaign, the company noticed that
the application did not process many of the documents.
What should a solutions architect do to improve the architecture of this application?
A. Set the Lambda function’s runtime timeout value to 15 minutes.
B. Configure an S3 bucket replication policy. Stage the documents in the S3 bucket for later
processing.
C. Deploy an additional Lambda function. Load balance the processing of the documents across the
two Lambda functions.
D. Create an Amazon Simple Queue Service (Amazon SQS) queue. Send the requests to the
queue. Configure the queue as an event source for Lambda.

A

D. Create an Amazon Simple Queue Service (Amazon SQS) queue. Send the requests to the
queue. Configure the queue as an event source for Lambda.

Explanation:
To improve the architecture of this application, the best solution is to use Amazon Simple
Queue Service (Amazon SQS) to buffer the requests and decouple the S3 bucket from the
Lambda function. This ensures that documents are not lost and can be processed later if the
Lambda function is unavailable. With Amazon SQS as an event source, the architecture is
decoupled and the Lambda function can process the documents in a scalable and fault-tolerant
manner.
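
A minimal sketch of wiring the queue to the function; the ARN and function name are
hypothetical. Lambda then polls the queue, and failed batches return to the queue for retry
instead of being lost:

    import boto3

    lambda_client = boto3.client("lambda")

    lambda_client.create_event_source_mapping(
        EventSourceArn="arn:aws:sqs:us-east-1:123456789012:documents-queue",  # hypothetical
        FunctionName="process-documents",                                     # hypothetical
        BatchSize=10,  # up to 10 messages per invocation for standard queues
    )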

9
Q

QUESTION 492
A solutions architect is designing the architecture for a software demonstration environment. The
environment will run on Amazon EC2 instances in an Auto Scaling group behind an Application
Load Balancer (ALB). The system will experience significant increases in traffic during working
hours but is not required to operate on weekends.
Which combination of actions should the solutions architect take to ensure that the system can
scale to meet demand? (Choose two.)
A. Use AWS Auto Scaling to adjust the ALB capacity based on request rate.
B. Use AWS Auto Scaling to scale the capacity of the VPC internet gateway.
C. Launch the EC2 instances in multiple AWS Regions to distribute the load across Regions.
D. Use a target tracking scaling policy to scale the Auto Scaling group based on instance CPU
utilization.
E. Use scheduled scaling to change the Auto Scaling group minimum, maximum, and desired
capacity to zero for weekends. Revert to the default values at the start of the week.

A

D. Use a target tracking scaling policy to scale the Auto Scaling group based on instance CPU
utilization.
E. Use scheduled scaling to change the Auto Scaling group minimum, maximum, and desired
capacity to zero for weekends. Revert to the default values at the start of the week.

Answer D scales the Auto Scaling group based on instance CPU utilization, so the number of instances can be adjusted to handle increased traffic during working hours and reduced capacity during low-traffic periods. Answer E uses scheduled scaling to set the group's minimum, maximum, and desired capacity to zero for weekends, saving costs, and reverts to the default values at the start of the week. Answer A (adjusting ALB capacity based on request rate) is sometimes proposed instead of D, but CPU utilization is a more granular metric than request rate, and scaling only the ALB may not be enough to handle sudden increases in traffic, so a target tracking policy on the Auto Scaling group (D) provides better performance and cost optimization. Either D or A could be combined with E; D offers the more granular control over the group's scaling.
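
A minimal boto3 sketch of the weekend schedule in option E; the group name, cron expressions,
and weekday capacity values are placeholders (times are UTC):

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Scale to zero at 00:00 UTC on Saturday...
    autoscaling.put_scheduled_update_group_action(
        AutoScalingGroupName="demo-asg",  # hypothetical
        ScheduledActionName="weekend-off",
        Recurrence="0 0 * * SAT",
        MinSize=0, MaxSize=0, DesiredCapacity=0,
    )

    # ...and restore the weekday defaults at 00:00 UTC on Monday.
    autoscaling.put_scheduled_update_group_action(
        AutoScalingGroupName="demo-asg",
        ScheduledActionName="week-on",
        Recurrence="0 0 * * MON",
        MinSize=2, MaxSize=10, DesiredCapacity=2,  # assumed defaults
    )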

10
Q

QUESTION 491
A company has launched an Amazon RDS for MySQL DB instance. Most of the connections to
the database come from serverless applications. Application traffic to the database changes
significantly at random intervals. At times of high demand, users report that their applications
experience database connection rejection errors.

Which solution will resolve this issue with the LEAST operational overhead?
A. Create a proxy in RDS Proxy. Configure the users’ applications to use the DB instance through
RDS Proxy.
B. Deploy Amazon ElastiCache for Memcached between the users’ applications and the DB instance.
C. Migrate the DB instance to a different instance class that has higher I/O capacity. Configure the
users’ applications to use the new DB instance.
D. Configure Multi-AZ for the DB instance. Configure the users’ application to switch between the DB
instances.

A

A. Create a proxy in RDS Proxy. Configure the users’ applications to use the DB instance through
RDS Proxy.

Explanation:
Many applications, including those built on modern serverless architectures, can have a large
number of open connections to the database server and may open and close database
connections at a high rate, exhausting database memory and compute resources. Amazon RDS
Proxy allows applications to pool and share connections established with the database, improving
database efficiency and application scalability.
https://aws.amazon.com/pt/rds/proxy/
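
A minimal sketch of creating the proxy with boto3; every ARN and subnet ID is a placeholder. A
separate register_db_proxy_targets call would still be needed to point the proxy at the DB
instance, and applications then connect to the proxy endpoint instead of the instance endpoint:

    import boto3

    rds = boto3.client("rds")

    rds.create_db_proxy(
        DBProxyName="mysql-proxy",  # hypothetical
        EngineFamily="MYSQL",
        Auth=[{
            "AuthScheme": "SECRETS",
            "SecretArn": "arn:aws:secretsmanager:us-east-1:123456789012:secret:db-creds",  # hypothetical
            "IAMAuth": "DISABLED",
        }],
        RoleArn="arn:aws:iam::123456789012:role/rds-proxy-role",  # hypothetical
        VpcSubnetIds=["subnet-aaaa1111", "subnet-bbbb2222"],      # hypothetical
    )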

11
Q

QUESTION 490
A company recently deployed a new auditing system to centralize information about operating
system versions, patching, and installed software for Amazon EC2 instances. A solutions architect
must ensure all instances provisioned through EC2 Auto Scaling groups successfully send
reports to the auditing system as soon as they are launched and terminated.
Which solution achieves these goals MOST efficiently?
A. Use a scheduled AWS Lambda function and run a script remotely on all EC2 instances to send
data to the audit system.
B. Use EC2 Auto Scaling lifecycle hooks to run a custom script to send data to the audit system
when instances are launched and terminated.
C. Use an EC2 Auto Scaling launch configuration to run a custom script through user data to send
data to the audit system when instances are launched and terminated.
D. Run a custom script on the instance operating system to send data to the audit system. Configure
the script to be invoked by the EC2 Auto Scaling group when the instance starts and is
terminated.

A

B. Use EC2 Auto Scaling lifecycle hooks to run a custom script to send data to the audit system
when instances are launched and terminated.

Explanation:
Amazon EC2 Auto Scaling offers the ability to add lifecycle hooks to your Auto Scaling groups.
These hooks let you create solutions that are aware of events in the Auto Scaling instance
lifecycle, and then perform a custom action on instances when the corresponding lifecycle event
occurs.
https://docs.aws.amazon.com/autoscaling/ec2/userguide/lifecycle-hooks.html
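
A minimal sketch of registering the launch hook with boto3 (names and ARNs are hypothetical); a
second hook with LifecycleTransition "autoscaling:EC2_INSTANCE_TERMINATING" would cover
termination:

    import boto3

    autoscaling = boto3.client("autoscaling")

    autoscaling.put_lifecycle_hook(
        LifecycleHookName="report-on-launch",  # hypothetical
        AutoScalingGroupName="audited-asg",    # hypothetical
        LifecycleTransition="autoscaling:EC2_INSTANCE_LAUNCHING",
        NotificationTargetARN="arn:aws:sqs:us-east-1:123456789012:audit-hooks",  # hypothetical
        RoleARN="arn:aws:iam::123456789012:role/asg-hook-role",                  # hypothetical
        HeartbeatTimeout=300,      # seconds allowed for the custom script to run
        DefaultResult="CONTINUE",  # proceed even if the script never responds
    )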

12
Q

QUESTION 489
A company has an application that collects data from IoT sensors on automobiles. The data is streamed and stored in Amazon S3 through Amazon Kinesis Data Firehose. The data produces
trillions of S3 objects each year. Each morning, the company uses the data from the previous 30
days to retrain a suite of machine learning (ML) models.
Four times each year, the company uses the data from the previous 12 months to perform
analysis and train other ML models. The data must be available with minimal delay for up to 1
year. After 1 year, the data must be retained for archival purposes.
Which storage solution meets these requirements MOST cost-effectively?
A. Use the S3 Intelligent-Tiering storage class. Create an S3 Lifecycle policy to transition objects to
S3 Glacier Deep Archive after 1 year.
B. Use the S3 Intelligent-Tiering storage class. Configure S3 Intelligent-Tiering to automatically
move objects to S3 Glacier Deep Archive after 1 year.
C. Use the S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Create an S3 Lifecycle
policy to transition objects to S3 Glacier Deep Archive after 1 year.
D. Use the S3 Standard storage class. Create an S3 Lifecycle policy to transition objects to S3
Standard-Infrequent Access (S3 Standard-IA) after 30 days, and then to S3 Glacier Deep Archive
after 1 year.

A

D. Use the S3 Standard storage class. Create an S3 Lifecycle policy to transition objects to S3
Standard-Infrequent Access (S3 Standard-IA) after 30 days, and then to S3 Glacier Deep Archive
after 1 year.
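
A minimal boto3 sketch of the lifecycle rule in option D; the bucket name is a placeholder:

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_lifecycle_configuration(
        Bucket="iot-sensor-data",  # hypothetical
        LifecycleConfiguration={"Rules": [{
            "ID": "tier-then-archive",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to all objects
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},    # after the 30-day retraining window
                {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},  # archive after 1 year
            ],
        }]},
    )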

13
Q

QUESTION 488
A solutions architect is designing a two-tiered architecture that includes a public subnet and a
database subnet. The web servers in the public subnet must be open to the internet on port 443.
The Amazon RDS for MySQL DB instance in the database subnet must be accessible only to the
web servers on port 3306.
Which combination of steps should the solutions architect take to meet these requirements?
(Choose two.)
A. Create a network ACL for the public subnet. Add a rule to deny outbound traffic to 0.0.0.0/0 on
port.
B. Create a security group for the DB instance. Add a rule to allow traffic from the public subnet
CIDR block on port 3306.
C. Create a security group for the web servers in the public subnet. Add a rule to allow traffic from
0.0.0.0/0 on port 443.
D. Create a security group for the DB instance. Add a rule to allow traffic from the web servers’
security group on port 3306.
E. Create a security group for the DB instance. Add a rule to deny all traffic except traffic from the
web servers’ security group on port 3306.

A

C. Create a security group for the web servers in the public subnet. Add a rule to allow traffic from
0.0.0.0/0 on port 443.
D. Create a security group for the DB instance. Add a rule to allow traffic from the web servers’
security group on port 3306.
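
A minimal boto3 sketch of both rules; the security group IDs are placeholders. Referencing the
web servers' security group instead of a CIDR keeps the DB closed to everything else:

    import boto3

    ec2 = boto3.client("ec2")

    # Web tier: HTTPS from anywhere.
    ec2.authorize_security_group_ingress(
        GroupId="sg-web11111111",  # hypothetical web server security group
        IpPermissions=[{"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
                        "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}],
    )

    # DB tier: MySQL only from the web servers' security group.
    ec2.authorize_security_group_ingress(
        GroupId="sg-db22222222",  # hypothetical DB security group
        IpPermissions=[{"IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
                        "UserIdGroupPairs": [{"GroupId": "sg-web11111111"}]}],
    )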

14
Q

QUESTION 487
A company wants to use Amazon S3 for the secondary copy of its on-premises dataset. The
company would rarely need to access this copy. The storage solution’s cost should be minimal.
Which storage solution meets these requirements?
A. S3 Standard
B. S3 Intelligent-Tiering
C. S3 Standard-Infrequent Access (S3 Standard-IA)
D. S3 One Zone-Infrequent Access (S3 One Zone-IA)

A

D. S3 One Zone-Infrequent Access (S3 One Zone-IA)

15
Q

QUESTION 486
A solutions architect is creating a new VPC design. There are two public subnets for the load
balancer, two private subnets for web servers, and two private subnets for MySQL. The web
servers use only HTTPS. The solutions architect has already created a security group for the load
balancer allowing port 443 from 0.0.0.0/0. Company policy requires that each resource has the
least access required to still be able to perform its tasks.
Which additional configuration strategy should the solutions architect use to meet these
requirements?
A. Create a security group for the web servers and allow port 443 from 0.0.0.0/0. Create a security group for the MySQL servers and allow port 3306 from the web servers security group.
B. Create a network ACL for the web servers and allow port 443 from 0.0.0.0/0. Create a network
ACL for the MySQL servers and allow port 3306 from the web servers security group.
C. Create a security group for the web servers and allow port 443 from the load balancer. Create a
security group for the MySQL servers and allow port 3306 from the web servers security group.
D. Create a network ACL for the web servers and allow port 443 from the load balancer. Create a
network ACL for the MySQL servers and allow port 3306 from the web servers security group.

A

C. Create a security group for the web servers and allow port 443 from the load balancer. Create a
security group for the MySQL servers and allow port 3306 from the web servers security group.

Explanation:
The load balancer is public facing, accepting all traffic coming toward the VPC (0.0.0.0/0). The web
servers need to trust only traffic originating from the load balancer. The DB will only trust traffic
originating from the web servers on port 3306 for MySQL.

16
Q

QUESTION 485
A hospital is designing a new application that gathers symptoms from patients. The hospital has
decided to use Amazon Simple Queue Service (Amazon SQS) and Amazon Simple Notification
Service (Amazon SNS) in the architecture. A solutions architect is reviewing the infrastructure
design. Data must be encrypted at rest and in transit. Only authorized personnel of the hospital
should be able to access the data.
Which combination of steps should the solutions architect take to meet these requirements?
(Choose two.)
A. Turn on server-side encryption on the SQS components. Update the default key policy to restrict key usage to a set of authorized principals.

B. Turn on server-side encryption on the SNS components by using an AWS Key Management Service (AWS KMS) customer managed key. Apply a key policy to restrict key usage to a set of authorized principals.

C. Turn on encryption on the SNS components. Update the default key policy to restrict key usage to a set of authorized principals. Set a condition in the topic policy to allow only encrypted connections over TLS.

D. Turn on server-side encryption on the SQS components by using an AWS Key Management Service (AWS KMS) customer managed key. Apply a key policy to restrict key usage to a set of authorized principals. Set a condition in the queue policy to allow only encrypted connections over TLS.

E. Turn on server-side encryption on the SQS components by using an AWS Key Management Service (AWS KMS) customer managed key. Apply an IAM policy to restrict key usage to a set of authorized principals. Set a condition in the queue policy to allow only encrypted connections over TLS.

A

B. Turn on server-side encryption on the SNS components by using an AWS Key Management
Service (AWS KMS) customer managed key. Apply a key policy to restrict key usage to a set of
authorized principals.

D. Turn on server-side encryption on the SQS components by using an AWS Key Management
Service (AWS KMS) customer managed key. Apply a key policy to restrict key usage to a set of
authorized principals. Set a condition in the queue policy to allow only encrypted connections
over TLS.

Explanation:
For a customer managed KMS key, you must configure the key policy to add permissions for
each queue producer and consumer.
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-key-management.html

https://docs.aws.amazon.com/sns/latest/dg/sns-server-side-encryption.html
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-server-side-encryption.html

Encryption at rest calls for server-side encryption with a KMS customer managed key, and a customer managed key always needs a key policy; an IAM policy alone is not sufficient, which rules out E. A and C rely on the default key policy rather than a customer managed key. That leaves B for the SNS components and D for the SQS components, with D additionally enforcing encryption in transit through a TLS-only condition in the queue policy.
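
A minimal sketch of the SQS side (option D) with boto3; the key alias and queue name are
hypothetical. The Deny statement rejects any request that does not arrive over TLS:

    import json
    import boto3

    sqs = boto3.client("sqs")

    tls_only_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Principal": "*",
            "Action": "sqs:*",
            "Resource": "*",
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},  # block non-TLS access
        }],
    }

    sqs.create_queue(
        QueueName="patient-symptoms",  # hypothetical
        Attributes={
            "KmsMasterKeyId": "alias/hospital-cmk",  # hypothetical customer managed key
            "Policy": json.dumps(tls_only_policy),
        },
    )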

17
Q

QUESTION 484
An ecommerce company is running a multi-tier application on AWS. The front-end and backend
tiers both run on Amazon EC2, and the database runs on Amazon RDS for MySQL. The backend
tier communicates with the RDS instance. There are frequent calls to return identical datasets
from the database that are causing performance slowdowns.
Which action should be taken to improve the performance of the backend?
A. Implement Amazon SNS to store the database calls.
B. Implement Amazon ElastiCache to cache the large database.
C. Implement an RDS for MySQL read replica to cache database calls.
D. Implement Amazon Kinesis Data Firehose to stream the calls to the database.

A

B. Implement Amazon ElastiCache to cache the large database.

Explanation:
The key term is "identical datasets from the database": caching can solve this issue by serving
frequently used datasets from the cache instead of the DB.

18
Q

QUESTION 483
A company runs an application that receives data from thousands of geographically dispersed
remote devices that use UDP. The application processes the data immediately and sends a
message back to the device if necessary. No data is stored.
The company needs a solution that minimizes latency for the data transmission from the devices.
The solution also must provide rapid failover to another AWS Region.
Which solution will meet these requirements?
A. Configure an Amazon Route 53 failover routing policy. Create a Network Load Balancer (NLB) in
each of the two Regions. Configure the NLB to invoke an AWS Lambda function to process the
data.
B. Use AWS Global Accelerator. Create a Network Load Balancer (NLB) in each of the two Regions
as an endpoint. Create an Amazon Elastic Container Service (Amazon ECS) cluster with the
Fargate launch type. Create an ECS service on the cluster. Set the ECS service as the target for
the NLB. Process the data in Amazon ECS.
C. Use AWS Global Accelerator. Create an Application Load Balancer (ALB) in each of the two
Regions as an endpoint. Create an Amazon Elastic Container Service (Amazon ECS) cluster with
the Fargate launch type. Create an ECS service on the cluster. Set the ECS service as the target
for the ALB. Process the data in Amazon ECS.
D. Configure an Amazon Route 53 failover routing policy. Create an Application Load Balancer
(ALB) in each of the two Regions. Create an Amazon Elastic Container Service (Amazon ECS)
cluster with the Fargate launch type. Create an ECS service on the cluster. Set the ECS service
as the target for the ALB. Process the data in Amazon ECS.

A

B. Use AWS Global Accelerator. Create a Network Load Balancer (NLB) in each of the two Regions
as an endpoint. Create an Amazon Elastic Container Service (Amazon ECS) cluster with the
Fargate launch type. Create an ECS service on the cluster. Set the ECS service as the target for
the NLB. Process the data in Amazon ECS.

Explanation:
Geographically dispersed devices (using UDP) call for Global Accelerator, which provides multiple
entry points worldwide into the AWS network for better transfer rates and rapid failover.
UDP requires an NLB (Network Load Balancer); ALBs handle only HTTP/HTTPS.
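
A minimal boto3 sketch of the accelerator setup; Regions, ARNs, and the port are placeholders:

    import boto3

    # The Global Accelerator API is served from us-west-2 regardless of endpoint Regions.
    ga = boto3.client("globalaccelerator", region_name="us-west-2")

    acc = ga.create_accelerator(Name="device-ingest", Enabled=True)  # hypothetical name
    listener = ga.create_listener(
        AcceleratorArn=acc["Accelerator"]["AcceleratorArn"],
        Protocol="UDP",
        PortRanges=[{"FromPort": 5000, "ToPort": 5000}],  # hypothetical app port
    )

    # One endpoint group per Region, each pointing at that Region's NLB (hypothetical ARNs).
    endpoints = [("us-east-1", "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/a/1"),
                 ("eu-west-1", "arn:aws:elasticloadbalancing:eu-west-1:123456789012:loadbalancer/net/b/2")]
    for region, nlb_arn in endpoints:
        ga.create_endpoint_group(
            ListenerArn=listener["Listener"]["ListenerArn"],
            EndpointGroupRegion=region,
            EndpointConfigurations=[{"EndpointId": nlb_arn, "Weight": 128}],
        )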

19
Q

QUESTION 482
A company wants to migrate its 1 PB on-premises image repository to AWS. The images will be
used by a serverless web application. Images stored in the repository are rarely accessed, but
they must be immediately available. Additionally, the images must be encrypted at rest and
protected from accidental deletion.
Which solution meets these requirements?
A. Implement client-side encryption and store the images in an Amazon S3 Glacier vault. Set a vault
lock to prevent accidental deletion.
B. Store the images in an Amazon S3 bucket in the S3 Standard-Infrequent Access (S3 Standard-
IA) storage class. Enable versioning, default encryption, and MFA Delete on the S3 bucket.
C. Store the images in an Amazon FSx for Windows File Server file share. Configure the Amazon FSx file share to use an AWS Key Management Service (AWS KMS) customer master key (CMK)
to encrypt the images in the file share. Use NTFS permission sets on the images to prevent
accidental deletion.
D. Store the images in an Amazon Elastic File System (Amazon EFS) file share in the Infrequent
Access storage class. Configure the EFS file share to use an AWS Key Management Service
(AWS KMS) customer master key (CMK) to encrypt the images in the file share. Use NFS
permission sets on the images to prevent accidental deletion.

A

B. Store the images in an Amazon S3 bucket in the S3 Standard-Infrequent Access (S3 Standard-
IA) storage class. Enable versioning, default encryption, and MFA Delete on the S3 bucket.

20
Q

QUESTION 481
A company runs an application on Amazon EC2 Linux instances across multiple Availability
Zones. The application needs a storage layer that is highly available and Portable Operating
System Interface (POSIX)-compliant. The storage layer must provide maximum data durability
and must be shareable across the EC2 instances. The data in the storage layer will be accessed
frequently for the first 30 days and will be accessed infrequently after that time.
Which solution will meet these requirements MOST cost-effectively?
A. Use the Amazon S3 Standard storage class. Create an S3 Lifecycle policy to move infrequently
accessed data to S3 Glacier.
B. Use the Amazon S3 Standard storage class. Create an S3 Lifecycle policy to move infrequently
accessed data to S3 Standard-Infrequent Access (S3 Standard-IA).
C. Use the Amazon Elastic File System (Amazon EFS) Standard storage class. Create a Lifecycle
management policy to move infrequently accessed data to EFS Standard-Infrequent Access (EFS
Standard-IA).
D. Use the Amazon Elastic File System (Amazon EFS) One Zone storage class. Create a Lifecycle
management policy to move infrequently accessed data to EFS One Zone-Infrequent Access
(EFS One Zone-IA).

A

C. Use the Amazon Elastic File System (Amazon EFS) Standard storage class. Create a Lifecycle
management policy to move infrequently accessed data to EFS Standard-Infrequent Access (EFS
Standard-IA).

Explanation:
https://aws.amazon.com/efs/features/infrequent-access/
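
A minimal boto3 sketch of the EFS lifecycle policy; the file system ID is a placeholder:

    import boto3

    efs = boto3.client("efs")

    # Files not accessed for 30 days move to EFS Standard-IA automatically.
    efs.put_lifecycle_configuration(
        FileSystemId="fs-0123456789abcdef0",  # hypothetical
        LifecyclePolicies=[{"TransitionToIA": "AFTER_30_DAYS"}],
    )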

21
Q

QUESTION 480
A company is using Amazon Route 53 latency-based routing to route requests to its UDP-based
application for users around the world. The application is hosted on redundant servers in the
company’s on-premises data centers in the United States, Asia, and Europe. The company’s
compliance requirements state that the application must be hosted on premises. The company
wants to improve the performance and availability of the application.
What should a solutions architect do to meet these requirements?
A. Configure three Network Load Balancers (NLBs) in the three AWS Regions to address the on-
premises endpoints. Create an accelerator by using AWS Global Accelerator, and register the
NLBs as its endpoints. Provide access to the application by using a CNAME that points to the
accelerator DNS.
B. Configure three Application Load Balancers (ALBs) in the three AWS Regions to address the on-
premises endpoints. Create an accelerator by using AWS Global Accelerator and register the
ALBs as its endpoints. Provide access to the application by using a CNAME that points to the
accelerator DNS.
C. Configure three Network Load Balancers (NLBs) in the three AWS Regions to address the on-
premises endpoints. In Route 53, create a latency-based record that points to the three NLBs,
and use it as an origin for an Amazon CloudFront distribution. Provide access to the application by
using a CNAME that points to the CloudFront DNS.
D. Configure three Application Load Balancers (ALBs) in the three AWS Regions to address the on-
premises endpoints. In Route 53, create a latency-based record that points to the three ALBs and use it as an origin for an Amazon CloudFront distribution. Provide access to the application by
using a CNAME that points to the CloudFront DNS.

A

A. Configure three Network Load Balancers (NLBs) in the three AWS Regions to address the on-
premises endpoints. Create an accelerator by using AWS Global Accelerator, and register the
NLBs as its endpoints. Provide access to the application by using a CNAME that points to the
accelerator DNS.

Explanation:
Q: How is AWS Global Accelerator different from Amazon CloudFront?
A: AWS Global Accelerator and Amazon CloudFront are separate services that use the AWS
global network and its edge locations around the world. CloudFront improves performance for
both cacheable content (such as images and videos) and dynamic content (such as API
acceleration and dynamic site delivery). Global Accelerator improves performance for a wide
range of applications over TCP or UDP by proxying packets at the edge to applications running in
one or more AWS Regions. Global Accelerator is a good fit for non-HTTP use cases, such as
gaming (UDP), IoT (MQTT), or Voice over IP, as well as for HTTP use cases that specifically
require static IP addresses or deterministic, fast regional failover.

22
Q

QUESTION 479
An ecommerce company is building a distributed application that involves several serverless
functions and AWS services to complete order-processing tasks. These tasks require manual
approvals as part of the workflow. A solutions architect needs to design an architecture for the
order-processing application. The solution must be able to combine multiple AWS Lambda
functions into responsive serverless applications. The solution also must orchestrate data and
services that run on Amazon EC2 instances, containers, or on-premises servers.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use AWS Step Functions to build the application.
B. Integrate all the application components in an AWS Glue job
C. Use Amazon Simple Queue Service (Amazon SQS) to build the application
D. Use AWS Lambda functions and Amazon EventBridge (Amazon CloudWatch Events) events to
build the application

A

A. Use AWS Step Functions to build the application.

Explanation:
AWS Step Functions is a fully managed service that makes it easy to build applications by
coordinating the components of distributed applications and microservices using visual workflows.
With Step Functions, you can combine multiple AWS Lambda functions into responsive
serverless applications and orchestrate data and services that run on Amazon EC2 instances,
containers, or on-premises servers. Step Functions also allows for manual approvals as part of
the workflow. This solution meets all the requirements with the least operational overhead.
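
A minimal sketch of how a manual-approval step can look, using the task-token integration
pattern; all ARNs, state names, and fields are hypothetical:

    import json
    import boto3

    sfn = boto3.client("stepfunctions")

    definition = {
        "StartAt": "RequestApproval",
        "States": {
            # Pauses until SendTaskSuccess/SendTaskFailure is called with the token.
            "RequestApproval": {
                "Type": "Task",
                "Resource": "arn:aws:states:::sqs:sendMessage.waitForTaskToken",
                "Parameters": {
                    "QueueUrl": "https://sqs.us-east-1.amazonaws.com/123456789012/approvals",
                    "MessageBody": {"token.$": "$$.Task.Token", "order.$": "$.orderId"},
                },
                "Next": "FulfillOrder",
            },
            "FulfillOrder": {
                "Type": "Task",
                "Resource": "arn:aws:lambda:us-east-1:123456789012:function:fulfill",
                "End": True,
            },
        },
    }

    sfn.create_state_machine(
        name="order-processing",
        definition=json.dumps(definition),
        roleArn="arn:aws:iam::123456789012:role/sfn-role",
    )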

23
Q

QUESTION 478
A company wants to restrict access to the content of one of its main web applications and to protect the content by using authorization techniques available on AWS. The company wants to implement a serverless architecture and an authentication solution for fewer than 100 users. The solution needs to integrate with the main web application and serve web content globally. The solution must also scale as the company’s user base grows while providing the lowest login latency possible.

Which solution will meet these requirements MOST cost-effectively?

A. Use Amazon Cognito for authentication. Use Lambda@Edge for authorization. Use Amazon CloudFront to serve the web application globally.

B. Use AWS Directory Service for Microsoft Active Directory for authentication. Use AWS Lambda for authorization. Use an Application Load Balancer to serve the web application globally.

C. Use Amazon Cognito for authentication. Use AWS Lambda for authorization. Use Amazon S3 Transfer Acceleration to serve the web application globally.

D. Use AWS Directory Service for Microsoft Active Directory for authentication. Use Lambda@Edge for authorization. Use AWS Elastic Beanstalk to serve the web application globally.

A

A. Use Amazon Cognito for authentication. Use Lambda@Edge for authorization. Use Amazon
CloudFront to serve the web application globally.

CloudFront = global delivery
Lambda@Edge = authorization / low latency
Cognito = authentication for web apps

Amazon Cognito is a serverless authentication service that can be used to easily add user sign-up and authentication to web and mobile apps. It is a good choice for this scenario because it is scalable and can handle a small number of users without any additional costs. Lambda@Edge is a serverless compute service that can be used to run code at the edge of the AWS network. It is a good choice for this scenario because it can be used to perform authorization checks at the edge, which can improve the login latency. Amazon CloudFront is a content delivery network (CDN) that can be used to serve web content globally. It is a good choice for this scenario because it can cache web content closer to users, which can improve the performance of the web application.

24
Q

QUESTION 477
A solutions architect must migrate a Windows Internet Information Services (IIS) web application
to AWS. The application currently relies on a file share hosted in the user’s on-premises network-
attached storage (NAS). The solutions architect has proposed migrating the IIS web servers to
Amazon EC2 instances in multiple Availability Zones that are connected to the storage solution,
and configuring an Elastic Load Balancer attached to the instances.
Which replacement to the on-premises file share is MOST resilient and durable?
A. Migrate the file share to Amazon RDS
B. Migrate the file share to AWS Storage Gateway
C. Migrate the file share to Amazon FSx for Windows File Server
D. Migrate the file share to Amazon Elastic File System (Amazon EFS)

A

C. Migrate the file share to Amazon FSx for Windows File Server

Explanation:
Amazon FSx makes it easy and cost effective to launch, run, and scale feature-rich, high-
performance file systems in the cloud.

25
Q

QUESTION 476
An ecommerce company needs to run a scheduled daily job to aggregate and filter sales records
for analytics. The company stores the sales records in an Amazon S3 bucket. Each object can be
up to 10 GB in size. Based on the number of sales events, the job can take up to an hour to
complete. The CPU and memory usage of the job are constant and are known in advance. A solutions architect needs to minimize the amount of operational effort that is needed for the job
to run.
Which solution meets these requirements?
A. Create an AWS Lambda function that has an Amazon EventBridge notification. Schedule the
EventBridge event to run once a day.
B. Create an AWS Lambda function. Create an Amazon API Gateway HTTP API, and integrate the
API with the function. Create an Amazon EventBridge scheduled event that calls the API and
invokes the function.
C. Create an Amazon Elastic Container Service (Amazon ECS) cluster with an AWS Fargate launch
type. Create an Amazon EventBridge scheduled event that launches an ECS task on the cluster
to run the job.
D. Create an Amazon Elastic Container Service (Amazon ECS) cluster with an Amazon EC2 launch
type and an Auto Scaling group with at least one EC2 instance. Create an Amazon EventBridge
scheduled event that launches an ECS task on the cluster to run the job.

A

C. Create an Amazon Elastic Container Service (Amazon ECS) cluster with an AWS Fargate launch
type. Create an Amazon EventBridge scheduled event that launches an ECS task on the cluster
to run the job.
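
A minimal boto3 sketch of the schedule-to-Fargate wiring; the cron expression, ARNs, and subnet
IDs are placeholders:

    import boto3

    events = boto3.client("events")

    events.put_rule(Name="daily-sales-job", ScheduleExpression="cron(0 3 * * ? *)")  # 03:00 UTC daily

    events.put_targets(
        Rule="daily-sales-job",
        Targets=[{
            "Id": "sales-aggregation-task",
            "Arn": "arn:aws:ecs:us-east-1:123456789012:cluster/jobs",     # hypothetical cluster
            "RoleArn": "arn:aws:iam::123456789012:role/events-ecs-role",  # hypothetical
            "EcsParameters": {
                "TaskDefinitionArn": "arn:aws:ecs:us-east-1:123456789012:task-definition/sales-job:1",
                "LaunchType": "FARGATE",
                "NetworkConfiguration": {"awsvpcConfiguration": {"Subnets": ["subnet-aaaa1111"]}},
            },
        }],
    )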

26
Q

QUESTION 475
A company hosts a three-tier ecommerce application on a fleet of Amazon EC2 instances. The
instances run in an Auto Scaling group behind an Application Load Balancer (ALB). All
ecommerce data is stored in an Amazon RDS for MariaDB Multi-AZ DB instance. The company
wants to optimize customer session management during transactions. The application must store
session data durably.
Which solutions will meet these requirements? (Choose two.)
A. Turn on the sticky sessions feature (session affinity) on the ALB
B. Use an Amazon DynamoDB table to store customer session information
C. Deploy an Amazon Cognito user pool to manage user session information
D. Deploy an Amazon ElastiCache for Redis cluster to store customer session information
E. Use AWS Systems Manager Application Manager in the application to manage user session
information

A

A. Turn on the sticky sessions feature (session affinity) on the ALB
D. Deploy an Amazon ElastiCache for Redis cluster to store customer session information

Explanation:
https://aws.amazon.com/caching/session-management/

27
Q

QUESTION 474
A company is migrating its on-premises workload to the AWS Cloud. The company already uses
several Amazon EC2 instances and Amazon RDS DB instances. The company wants a solution
that automatically starts and stops the EC2 instances and DB instances outside of business
hours. The solution must minimize cost and infrastructure maintenance.
Which solution will meet these requirement?
A. Scale the EC2 instances by using elastic resize. Scale the DB instances to zero outside of
business hours.
B. Explore AWS Marketplace for partner solutions that will automatically start and stop the EC2
instances and DB instances on a schedule.
C. Launch another EC2 instance. Configure a crontab schedule to run shell scripts that will start and
stop the existing EC2 instances and DB instances on a schedule.
D. Create an AWS Lambda function that will start and stop the EC2 instances and DB instances.
Configure Amazon EventBridge to invoke the Lambda function on a schedule.

A

D. Create an AWS Lambda function that will start and stop the EC2 instances and DB instances.
Configure Amazon EventBridge to invoke the Lambda function on a schedule.

Explanation:
The most efficient solution for automatically starting and stopping EC2 instances and DB
instances on a schedule while minimizing cost and infrastructure maintenance is to create an
AWS Lambda function and configure Amazon EventBridge to invoke the function on a schedule.
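
A minimal sketch of the Lambda handler; instance identifiers are placeholders. Two EventBridge
schedules would invoke it, one passing an "action" of "stop" in the evening and one passing
"start" in the morning:

    import boto3

    ec2 = boto3.client("ec2")
    rds = boto3.client("rds")

    EC2_IDS = ["i-0123456789abcdef0"]  # hypothetical
    DB_ID = "app-db"                   # hypothetical

    def handler(event, context):
        if event.get("action") == "stop":
            ec2.stop_instances(InstanceIds=EC2_IDS)
            rds.stop_db_instance(DBInstanceIdentifier=DB_ID)
        else:
            ec2.start_instances(InstanceIds=EC2_IDS)
            rds.start_db_instance(DBInstanceIdentifier=DB_ID)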

28
Q

QUESTION 473
A payment processing company records all voice communication with its customers and stores
the audio files in an Amazon S3 bucket. The company needs to capture the text from the audio
files. The company must remove from the text any personally identifiable information (PII) that
belongs to customers.
What should a solutions architect do to meet these requirements?
A. Process the audio files by using Amazon Kinesis Video Streams. Use an AWS Lambda function
to scan for known PII patterns.
B. When an audio file is uploaded to the S3 bucket, invoke an AWS Lambda function to start an
Amazon Textract task to analyze the call recordings.
C. Configure an Amazon Transcribe transcription job with PII redaction turned on. When an audio file
is uploaded to the S3 bucket, invoke an AWS Lambda function to start the transcription job. Store
the output in a separate S3 bucket.
D. Create an Amazon Connect contact flow that ingests the audio files with transcription turned on.
Embed an AWS Lambda function to scan for known PII patterns. Use Amazon EventBridge
(Amazon CloudWatch Events) to start the contact flow when an audio file is uploaded to the S3
bucket.

A

C. Configure an Amazon Transcribe transcription job with PII redaction turned on. When an audio file
is uploaded to the S3 bucket, invoke an AWS Lambda function to start the transcription job. Store
the output in a separate S3 bucket.
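
A minimal boto3 sketch of the transcription job the Lambda function would start; the job name,
media URI, and buckets are placeholders:

    import boto3

    transcribe = boto3.client("transcribe")

    transcribe.start_transcription_job(
        TranscriptionJobName="call-0001",                              # hypothetical
        Media={"MediaFileUri": "s3://call-recordings/call-0001.wav"},  # hypothetical
        LanguageCode="en-US",
        ContentRedaction={
            "RedactionType": "PII",         # replace detected PII in the transcript
            "RedactionOutput": "redacted",  # store only the redacted transcript
        },
        OutputBucketName="redacted-transcripts",  # separate bucket, hypothetical
    )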

29
Q

QUESTION 472
A financial company hosts a web application on AWS. The application uses an Amazon API
Gateway Regional API endpoint to give users the ability to retrieve current stock prices. The
company’s security team has noticed an increase in the number of API requests. The security
team is concerned that HTTP flood attacks might take the application offline.
A solutions architect must design a solution to protect the application from this type of attack.
Which solution meets these requirements with the LEAST operational overhead?
A. Create an Amazon CloudFront distribution in front of the API Gateway Regional API endpoint with
a maximum TTL of 24 hours.
B. Create a Regional AWS WAF web ACL with a rate-based rule. Associate the web ACL with the
API Gateway stage.
C. Use Amazon CloudWatch metrics to monitor the Count metric and alert the security team when
the predefined rate is reached.
D. Create an Amazon CloudFront distribution with Lambda@Edge in front of the API Gateway
Regional API endpoint. Create an AWS Lambda function to block requests from IP addresses
that exceed the predefined rate.

A

B. Create a Regional AWS WAF web ACL with a rate-based rule. Associate the web ACL with the
API Gateway stage.

Explanation:
A rate-based rule in AWS WAF allows the security team to configure thresholds that trigger rate-
based rules, which enable AWS WAF to track the rate of requests for a specified time period and
then block them automatically when the threshold is exceeded. This provides the ability to
prevent HTTP flood attacks with minimal operational overhead.
https://docs.aws.amazon.com/waf/latest/developerguide/web-acl.html
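
A minimal boto3 sketch of the rate-based web ACL; the names and request limit are placeholders.
A follow-up wafv2.associate_web_acl call attaches the ACL to the API Gateway stage ARN:

    import boto3

    wafv2 = boto3.client("wafv2")

    wafv2.create_web_acl(
        Name="api-flood-protection",  # hypothetical
        Scope="REGIONAL",             # matches a Regional API endpoint
        DefaultAction={"Allow": {}},
        Rules=[{
            "Name": "rate-limit-per-ip",
            "Priority": 0,
            "Statement": {"RateBasedStatement": {"Limit": 2000,  # requests per 5 minutes per IP
                                                 "AggregateKeyType": "IP"}},
            "Action": {"Block": {}},
            "VisibilityConfig": {"SampledRequestsEnabled": True,
                                 "CloudWatchMetricsEnabled": True,
                                 "MetricName": "RateLimitPerIp"},
        }],
        VisibilityConfig={"SampledRequestsEnabled": True,
                          "CloudWatchMetricsEnabled": True,
                          "MetricName": "ApiFloodProtection"},
    )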

30
Q

QUESTION 471
An image-hosting company stores its objects in Amazon S3 buckets. The company wants to
avoid accidental exposure of the objects in the S3 buckets to the public. All S3 objects in the
entire AWS account need to remain private.
Which solution will meet these requirements?
A. Use Amazon GuardDuty to monitor S3 bucket policies. Create an automatic remediation action
rule that uses an AWS Lambda function to remediate any change that makes the objects public.
B. Use AWS Trusted Advisor to find publicly accessible S3 buckets. Configure email notifications in
Trusted Advisor when a change is detected. Manually change the S3 bucket policy if it allows
public access.
C. Use AWS Resource Access Manager to find publicly accessible S3 buckets. Use Amazon Simple
Notification Service (Amazon SNS) to invoke an AWS Lambda function when a change is
detected. Deploy a Lambda function that programmatically remediates the change.
D. Use the S3 Block Public Access feature on the account level. Use AWS Organizations to create a
service control policy (SCP) that prevents IAM users from changing the setting. Apply the SCP to
the account.

A

D. Use the S3 Block Public Access feature on the account level. Use AWS Organizations to create a
service control policy (SCP) that prevents IAM users from changing the setting. Apply the SCP to
the account.

Explanation:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-control-block-public-access.html
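
A minimal boto3 sketch of the account-level setting; the account ID is a placeholder. The SCP
that prevents changes would be attached separately through AWS Organizations:

    import boto3

    s3control = boto3.client("s3control")

    s3control.put_public_access_block(
        AccountId="123456789012",  # hypothetical
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )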

31
Q

QUESTION 470
A company is moving its data management application to AWS. The company wants to transition
to an event-driven architecture. The architecture needs to be more distributed and to use
serverless concepts while performing the different aspects of the workflow. The company also
wants to minimize operational overhead.
Which solution will meet these requirements?
A. Build out the workflow in AWS Glue. Use AWS Glue to invoke AWS Lambda functions to process
the workflow steps.
B. Build out the workflow in AWS Step Functions. Deploy the application on Amazon EC2 Instances.
Use Step Functions to invoke the workflow steps on the EC2 instances.
C. Build out the workflow in Amazon EventBridge. Use EventBridge to invoke AWS Lambda
functions on a schedule to process the workflow steps.
D. Build out the workflow in AWS Step Functions. Use Step Functions to create a state machine.
Use the state machine to invoke AWS Lambda functions to process the workflow steps.

A

D. Build out the workflow in AWS Step Functions. Use Step Functions to create a state machine.
Use the state machine to invoke AWS Lambda functions to process the workflow steps.

Explanation:
Step 3: Create a State Machine
Use the Step Functions console to create a state machine that invokes the Lambda function that
you created earlier in Step 1.
https://docs.aws.amazon.com/step-functions/latest/dg/tutorial-creating-lambda-state-machine.html
In Step Functions, a workflow is called a state machine, which is a series of event-driven steps.
Each step in a workflow is called a state.

32
Q

QUESTION 469
A company is deploying a new application on Amazon EC2 instances. The application writes data
to Amazon Elastic Block Store (Amazon EBS) volumes. The company needs to ensure that all
data that is written to the EBS volumes is encrypted at rest.
Which solution will meet this requirement?
A. Create an IAM role that specifies EBS encryption. Attach the role to the EC2 instances.
B. Create the EBS volumes as encrypted volumes. Attach the EBS volumes to the EC2 instances.
C. Create an EC2 instance tag that has a key of Encrypt and a value of True. Tag all instances that
require encryption at the EBS level.
D. Create an AWS Key Management Service (AWS KMS) key policy that enforces EBS encryption
in the account. Ensure that the key policy is active.

A

B. Create the EBS volumes as encrypted volumes. Attach the EBS volumes to the EC2 instances.

Explanation:
When you create an EBS volume, you can specify whether to encrypt the volume. If you choose
to encrypt the volume, all data written to the volume is automatically encrypted at rest using
AWS-managed keys. You can also use customer-managed keys (CMKs) stored in AWS KMS to
encrypt and protect your EBS volumes. You can create encrypted EBS volumes and attach them
to EC2 instances to ensure that all data written to the volumes is encrypted at rest.
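
A minimal boto3 sketch of option B; the Availability Zone, size, and key alias are placeholders.
Omitting KmsKeyId would fall back to the AWS managed key:

    import boto3

    ec2 = boto3.client("ec2")

    ec2.create_volume(
        AvailabilityZone="us-east-1a",
        Size=100,                   # GiB, hypothetical
        VolumeType="gp3",
        Encrypted=True,             # data at rest (and snapshots) encrypted
        KmsKeyId="alias/app-data",  # hypothetical customer managed key
    )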

33
Q

QUESTION 468
A company is running a multi-tier ecommerce web application in the AWS Cloud. The application
runs on Amazon EC2 instances with an Amazon RDS for MySQL Multi-AZ DB instance. Amazon
RDS is configured with the latest generation DB instance with 2,000 GB of storage in a General
Purpose SSD (gp3) Amazon Elastic Block Store (Amazon EBS) volume. The database
performance affects the application during periods of high demand.
A database administrator analyzes the logs in Amazon CloudWatch Logs and discovers that the
application performance always degrades when the number of read and write IOPS is higher than
20,000.
What should a solutions architect do to improve the application performance?
A. Replace the volume with a magnetic volume.
B. Increase the number of IOPS on the gp3 volume.
C. Replace the volume with a Provisioned IOPS SSD (io2) volume.
D. Replace the 2,000 GB gp3 volume with two 1,000 GB gp3 volumes.

A

D. Replace the 2,000 GB gp3 volume with two 1,000 GB gp3 volumes.

Explanation:
To improve the application performance, you can replace the 2,000 GB gp3 volume with two
1,000 GB gp3 volumes. This will increase the number of IOPS available to the database and
improve performance.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html

34
Q

QUESTION 467
A company is deploying a two-tier web application in a VPC. The web tier is using an Amazon
EC2 Auto Scaling group with public subnets that span multiple Availability Zones. The database
tier consists of an Amazon RDS for MySQL DB instance in separate private subnets. The web tier
requires access to the database to retrieve product information.
The web application is not working as intended. The web application reports that it cannot
connect to the database. The database is confirmed to be up and running. All configurations for
the network ACLs, security groups, and route tables are still in their default states.
What should a solutions architect recommend to fix the application?
A. Add an explicit rule to the private subnet’s network ACL to allow traffic from the web tier’s EC2
instances.
B. Add a route in the VPC route table to allow traffic between the web tier’s EC2 instances and the
database tier.
C. Deploy the web tier’s EC2 instances and the database tier’s RDS instance into two separate
VPCs and configure VPC peering.
D. Add an inbound rule to the security group of the database tier’s RDS instance to allow traffic from
the web tier’s security group.

A

D. Add an inbound rule to the security group of the database tier’s RDS instance to allow traffic from
the web tier’s security group.

Explanation:
By default, all inbound traffic to an RDS instance is blocked. Therefore, an inbound rule needs to
be added to the security group of the RDS instance to allow traffic from the security group of the
web tier’s EC2 instances.

35
Q

QUESTION 466
An ecommerce company is experiencing an increase in user traffic. The company’s store is
deployed on Amazon EC2 instances as a two-tier web application consisting of a web tier and a
separate database tier. As traffic increases, the company notices that the architecture is causing
significant delays in sending timely marketing and order confirmation email to users. The
company wants to reduce the time it spends resolving complex email delivery issues and
minimize operational overhead.
What should a solutions architect do to meet these requirements?
A. Create a separate application tier using EC2 instances dedicated to email processing.
B. Configure the web instance to send email through Amazon Simple Email Service (Amazon SES).
C. Configure the web instance to send email through Amazon Simple Notification Service (Amazon
SNS).
D. Create a separate application tier using EC2 instances dedicated to email processing. Place the
instances in an Auto Scaling group.

A

B. Configure the web instance to send email through Amazon Simple Email Service (Amazon SES).

Explanation:
Amazon SES is a cost-effective and scalable email service that enables businesses to send and
receive email using their own email addresses and domains. Configuring the web instance to
send email through Amazon SES is a simple and effective solution that can reduce the time spent
resolving complex email delivery issues and minimize operational overhead.
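
A minimal boto3 sketch of the SES call from the web tier; the addresses are placeholders, and
the sender identity must already be verified in SES:

    import boto3

    ses = boto3.client("ses")

    ses.send_email(
        Source="orders@example.com",  # verified identity, hypothetical
        Destination={"ToAddresses": ["customer@example.com"]},
        Message={
            "Subject": {"Data": "Order confirmation"},
            "Body": {"Text": {"Data": "Thanks for your order!"}},
        },
    )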

36
Q

QUESTION 465
A company is planning to migrate a commercial off-the-shelf application from its on-premises data
center to AWS. The software has a software licensing model using sockets and cores with
predictable capacity and uptime requirements. The company wants to use its existing licenses,
which were purchased earlier this year.

Which Amazon EC2 pricing option is the MOST cost-effective?
A. Dedicated Reserved Hosts
B. Dedicated On-Demand Hosts
C. Dedicated Reserved Instances
D. Dedicated On-Demand Instances

A

A. Dedicated Reserved Hosts

Explanation:
Dedicated Hosts provide visibility into the physical server's sockets and cores, which is what socket- and core-based licenses require; Dedicated Instances do not offer that visibility. Because capacity and uptime are predictable, a Dedicated Host Reservation provides a billing discount compared to running On-Demand Dedicated Hosts. Reservations are available in three payment options.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/dedicated-hosts-overview.html
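
A hedged boto3 sketch of the flow, with a hypothetical instance type and offering ID (a real offering ID would come from describe_host_reservation_offerings):

import boto3

ec2 = boto3.client("ec2")

# Allocate a Dedicated Host: a physical server with visible sockets
# and cores, which socket/core-based licenses require.
host_id = ec2.allocate_hosts(
    AvailabilityZone="us-east-1a",
    InstanceType="m5.large",  # hypothetical instance family/size
    Quantity=1,
)["HostIds"][0]

# Attach a reservation to the host for the billing discount.
ec2.purchase_host_reservation(
    HostIdSet=[host_id],
    OfferingId="hro-0123456789abcdef0",  # hypothetical offering ID
)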

37
Q

QUESTION 464
A company wants to deploy a new public web application on AWS. The application includes a
web server tier that uses Amazon EC2 instances. The application also includes a database tier
that uses an Amazon RDS for MySQL DB instance.
The application must be secure and accessible for global customers that have dynamic IP
addresses.
How should a solutions architect configure the security groups to meet these requirements?
A. Configure the security group for the web servers to allow inbound traffic on port 443 from
0.0.0.0/0. Configure the security group for the DB instance to allow inbound traffic on port 3306
from the security group of the web servers.
B. Configure the security group for the web servers to allow inbound traffic on port 443 from the IP
addresses of the customers. Configure the security group for the DB instance to allow inbound
traffic on port 3306 from the security group of the web servers.
C. Configure the security group for the web servers to allow inbound traffic on port 443 from the IP
addresses of the customers. Configure the security group for the DB instance to allow inbound
traffic on port 3306 from the IP addresses of the customers.
D. Configure the security group for the web servers to allow inbound traffic on port 443 from
0.0.0.0/0. Configure the security group for the DB instance to allow inbound traffic on port 3306
from 0.0.0.0/0.

A

A. Configure the security group for the web servers to allow inbound traffic on port 443 from
0.0.0.0/0. Configure the security group for the DB instance to allow inbound traffic on port 3306
from the security group of the web servers.
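
Explanation:
Global customers with dynamic IP addresses cannot be enumerated, so the web tier must accept HTTPS (port 443) from 0.0.0.0/0. The database, however, should never be reachable from the internet; allowing port 3306 only from the web tier's security group limits database access to the web servers.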

38
Q

QUESTION 463
A solutions architect wants all new users to have specific complexity requirements and mandatory
rotation periods for IAM user passwords.
What should the solutions architect do to accomplish this?
A. Set an overall password policy for the entire AWS account.
B. Set a password policy for each IAM user in the AWS account.
C. Use third-party vendor software to set password requirements.
D. Attach an Amazon CloudWatch rule to the Create_newuser event to set the password with the
appropriate requirements.

A

A. Set an overall password policy for the entire AWS account.

Explanation:
To accomplish this, the solutions architect should set an overall password policy for the entire
AWS account. This policy will apply to all IAM users in the account, including new users.
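
A minimal boto3 sketch; the specific complexity values are illustrative, not taken from the question:

import boto3

iam = boto3.client("iam")

# One account-wide policy covers every current and future IAM user.
iam.update_account_password_policy(
    MinimumPasswordLength=14,
    RequireSymbols=True,
    RequireNumbers=True,
    RequireUppercaseCharacters=True,
    RequireLowercaseCharacters=True,
    MaxPasswordAge=90,  # mandatory rotation period in days
    PasswordReusePrevention=24,
)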

39
Q

QUESTION 462
A rapidly growing global ecommerce company is hosting its web application on AWS. The web
application includes static content and dynamic content. The website stores online transaction
processing (OLTP) data in an Amazon RDS database. The website’s users are experiencing slow
page loads.
Which combination of actions should a solutions architect take to resolve this issue? (Choose
two.)
A. Configure an Amazon Redshift cluster.
B. Set up an Amazon CloudFront distribution
C. Host the dynamic web content in Amazon S3
D. Create a read replica for the RDS DB instance.
E. Configure a Multi-AZ deployment for the RDS DB instance

A

B. Set up an Amazon CloudFront distribution
D. Create a read replica for the RDS DB instance.

Explanation:
To resolve the issue of slow page loads for a rapidly growing e-commerce website hosted on
AWS, a solutions architect can take the following two actions:
1. Set up an Amazon CloudFront distribution
2. Create a read replica for the RDS DB instance
Configuring an Amazon Redshift cluster is not relevant to this issue since Redshift is a data
warehousing service and is typically used for the analytical processing of large amounts of data.
Hosting the dynamic web content in Amazon S3 may not necessarily improve performance since
S3 is an object storage service, not a web application server. While S3 can be used to host static web content, it may not be suitable for hosting dynamic web content since S3 doesn’t support
server-side scripting or processing.
Configuring a Multi-AZ deployment for the RDS DB instance will improve high availability but may
not necessarily improve performance.
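
A minimal boto3 sketch of creating the read replica, with hypothetical instance identifiers:

import boto3

rds = boto3.client("rds")

# Offload read-heavy OLTP queries from the primary to a replica.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="store-db-replica-1",   # hypothetical replica name
    SourceDBInstanceIdentifier="store-db",       # hypothetical primary
)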

40
Q

QUESTION 461
A solutions architect is designing a company’s disaster recovery (DR) architecture. The company
has a MySQL database that runs on an Amazon EC2 instance in a private subnet with scheduled
backups. The DR design needs to include multiple AWS Regions.
Which solution will meet these requirements with the LEAST operational overhead?
A. Migrate the MySQL database to multiple EC2 instances. Configure a standby EC2 instance in the
DR Region. Turn on replication.
B. Migrate the MySQL database to Amazon RDS. Use a Multi-AZ deployment. Turn on read
replication for the primary DB instance in the different Availability Zones.
C. Migrate the MySQL database to an Amazon Aurora global database. Host the primary DB cluster
in the primary Region. Host the secondary DB cluster in the DR Region.
D. Store the scheduled backup of the MySQL database in an Amazon S3 bucket that is configured for
S3 Cross-Region Replication (CRR). Use the data backup to restore the database in the DR
Region.

A

C. Migrate the MySQL database to an Amazon Aurora global database. Host the primary DB cluster
in the primary Region. Host the secondary DB cluster in the DR Region.

Explanation:
An Aurora global database is fully managed: the primary cluster replicates to the secondary Region with typical latency under one second, and the secondary cluster can be promoted quickly during a disaster. This meets the multi-Region DR requirement with far less operational overhead than self-managed EC2 replication (A), a single-Region Multi-AZ deployment (B), or restoring from replicated backups (D).
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Replication.html
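
A minimal boto3 sketch, assuming an existing Aurora MySQL cluster; all identifiers, Regions, and the account number are hypothetical:

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Promote the existing cluster into a global database.
rds.create_global_cluster(
    GlobalClusterIdentifier="app-global",
    SourceDBClusterIdentifier="arn:aws:rds:us-east-1:111122223333:cluster:app-db",
)

# Add a secondary, read-only cluster in the DR Region.
rds_dr = boto3.client("rds", region_name="us-west-2")
rds_dr.create_db_cluster(
    DBClusterIdentifier="app-db-dr",
    Engine="aurora-mysql",
    GlobalClusterIdentifier="app-global",
)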

41
Q

QUESTION 460
A company wants to configure its Amazon CloudFront distribution to use SSL/TLS certificates.
The company does not want to use the default domain name for the distribution. Instead, the
company wants to use a different domain name for the distribution.
Which solution will deploy the certificate without incurring any additional costs?
A. Request an Amazon issued private certificate from AWS Certificate Manager (ACM) in the us-
east-1 Region.
B. Request an Amazon issued private certificate from AWS Certificate Manager (ACM) in the us-
west-1 Region.
C. Request an Amazon issued public certificate from AWS Certificate Manager (ACM) in the us-east-
1 Region.
D. Request an Amazon issued public certificate from AWS Certificate Manager (ACM) in the us-west-1 Region.

A

C. Request an Amazon issued public certificate from AWS Certificate Manager (ACM) in the us-east-
1 Region.
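
Explanation:
Amazon-issued public certificates from ACM are free, and CloudFront only accepts ACM certificates from the us-east-1 (N. Virginia) Region, regardless of where viewers are. A private certificate would require a paid AWS Private CA.

A minimal boto3 sketch with a hypothetical domain name:

import boto3

# CloudFront requires the certificate to live in us-east-1.
acm = boto3.client("acm", region_name="us-east-1")

acm.request_certificate(
    DomainName="www.example.com",  # hypothetical alternate domain name
    ValidationMethod="DNS",
)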

42
Q

QUESTION 459
A company has a regional subscription-based streaming service that runs in a single AWS
Region. The architecture consists of web servers and application servers on Amazon EC2
instances. The EC2 instances are in Auto Scaling groups behind Elastic Load Balancers. The
architecture includes an Amazon Aurora database cluster that extends across multiple Availability
Zones.
The company wants to expand globally and to ensure that its application has minimal downtime.
Which solution will meet these requirements?
A. Extend the Auto Scaling groups for the web tier and the application tier to deploy instances in
Availability Zones in a second Region. Use an Aurora global database to deploy the database in
the primary Region and the second Region. Use Amazon Route 53 health checks with a failover
routing policy to the second Region.
B. Deploy the web tier and the application tier to a second Region. Add an Aurora PostgreSQL
cross-Region Aurora Replica in the second Region. Use Amazon Route 53 health checks with a
failover routing policy to the second Region. Promote the secondary to primary as needed.
C. Deploy the web tier and the application tier to a second Region. Create an Aurora PostgreSQL
database in the second Region. Use AWS Database Migration Service (AWS DMS) to replicate
the primary database to the second Region. Use Amazon Route 53 health checks with a failover
routing policy to the second Region.
D. Deploy the web tier and the application tier to a second Region. Use an Amazon Aurora global
database to deploy the database in the primary Region and the second Region. Use Amazon
Route 53 health checks with a failover routing policy to the second Region. Promote the
secondary to primary as needed.

A

A. Extend the Auto Scaling groups for the web tier and the application tier to deploy instances in
Availability Zones in a second Region. Use an Aurora global database to deploy the database in
the primary Region and the second Region. Use Amazon Route 53 health checks with a failover
routing policy to the second Region.

43
Q

QUESTION 458
A company hosts a multiplayer gaming application on AWS. The company wants the application
to read data with sub-millisecond latency and run one-time queries on historical data.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use Amazon RDS for data that is frequently accessed. Run a periodic custom script to export the
data to an Amazon S3 bucket.
B. Store the data directly in an Amazon S3 bucket. Implement an S3 Lifecycle policy to move older
data to S3 Glacier Deep Archive for long-term storage. Run one-time queries on the data in
Amazon S3 by using Amazon Athena
C. Use Amazon DynamoDB with DynamoDB Accelerator (DAX) for data that is frequently accessed.
Export the data to an Amazon S3 bucket by using DynamoDB table export. Run one-time queries
on the data in Amazon S3 by using Amazon Athena.
D. Use Amazon DynamoDB for data that is frequently accessed. Turn on streaming to Amazon
Kinesis Data Streams. Use Amazon Kinesis Data Firehose to read the data from Kinesis Data
Streams. Store the records in an Amazon S3 bucket.

A

C. Use Amazon DynamoDB with DynamoDB Accelerator (DAX) for data that is frequently accessed.
Export the data to an Amazon S3 bucket by using DynamoDB table export. Run one-time queries
on the data in Amazon S3 by using Amazon Athena.

Explanation:
DynamoDB provides consistent single-digit millisecond response times at any scale, and DynamoDB Accelerator (DAX) adds an in-memory cache that serves reads in microseconds, meeting the sub-millisecond read requirement. DynamoDB table export to Amazon S3 plus Amazon Athena handles the one-time queries on historical data with no infrastructure to manage.
https://aws.amazon.com/dynamodb/dax/?nc1=h_ls
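
A hedged boto3 sketch, assuming point-in-time recovery is enabled on the table and an Athena table has been defined over the export; all names and ARNs are hypothetical:

import boto3

dynamodb = boto3.client("dynamodb")

# Fully managed export to S3 with no impact on table capacity.
dynamodb.export_table_to_point_in_time(
    TableArn="arn:aws:dynamodb:us-east-1:111122223333:table/game-data",
    S3Bucket="game-data-exports",
    ExportFormat="DYNAMODB_JSON",
)

# One-time SQL over the exported data, without any servers to manage.
athena = boto3.client("athena")
athena.start_query_execution(
    QueryString="SELECT * FROM game_data WHERE match_date = '2024-01-01'",
    QueryExecutionContext={"Database": "game_exports"},
    ResultConfiguration={"OutputLocation": "s3://game-data-exports/athena/"},
)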

44
Q

QUESTION 457
A company runs an application on Amazon EC2 instances. The company needs to implement a
disaster recovery (DR) solution for the application. The DR solution needs to have a recovery
time objective (RTO) of less than 4 hours. The DR solution also needs to use the fewest possible
AWS resources during normal operations.
Which solution will meet these requirements in the MOST operationally efficient way?
A. Create Amazon Machine Images (AMIs) to back up the EC2 instances. Copy the AMIs to a
secondary AWS Region. Automate infrastructure deployment in the secondary Region by using
AWS Lambda and custom scripts.
B. Create Amazon Machine Images (AMIs) to back up the EC2 instances. Copy the AMIs to a
secondary AWS Region. Automate infrastructure deployment in the secondary Region by using
AWS CloudFormation.
C. Launch EC2 instances in a secondary AWS Region. Keep the EC2 instances in the secondary
Region active at all times.
D. Launch EC2 instances in a secondary Availability Zone. Keep the EC2 instances in the secondary
Availability Zone active at all times.

A

B. Create Amazon Machine Images (AMIs) to back up the EC2 instances. Copy the AMIs to a
secondary AWS Region. Automate infrastructure deployment in the secondary Region by using
AWS CloudFormation.

Explanation:
Option B would be the most operationally efficient solution for implementing a DR solution for the
application, meeting the requirement of an RTO of less than 4 hours and using the fewest
possible AWS resources during normal operations.
By creating Amazon Machine Images (AMIs) to back up the EC2 instances and copying them to a
secondary AWS Region, the company can ensure that they have a reliable backup in the event of
a disaster. By using AWS CloudFormation to automate infrastructure deployment in the
secondary Region, the company can minimize the amount of time and effort required to set up
the DR solution.
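
A hedged boto3 sketch of both halves, with hypothetical Regions, AMI ID, and template URL:

import boto3

# Copy the backup AMI into the DR Region; nothing runs there until a
# disaster is declared, keeping steady-state cost near zero.
ec2_dr = boto3.client("ec2", region_name="us-west-2")  # hypothetical DR Region
ec2_dr.copy_image(
    Name="app-server-dr",
    SourceImageId="ami-0123456789abcdef0",  # hypothetical AMI ID
    SourceRegion="us-east-1",
)

# On failover, a single CloudFormation call rebuilds the stack well
# inside the 4-hour RTO. The template is assumed to exist already.
cfn = boto3.client("cloudformation", region_name="us-west-2")
cfn.create_stack(
    StackName="app-dr",
    TemplateURL="https://s3.amazonaws.com/dr-templates/app.yaml",
)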

45
Q

QUESTION 456
A company has an on-premises volume backup solution that has reached its end of life. The
company wants to use AWS as part of a new backup solution and wants to maintain local access
to all the data while it is backed up on AWS. The company wants to ensure that the data backed
up on AWS is automatically and securely transferred.
Which solution meets these requirements?
A. Use AWS Snowball to migrate data out of the on-premises solution to Amazon S3. Configure on-
premises systems to mount the Snowball S3 endpoint to provide local access to the data.
B. Use AWS Snowball Edge to migrate data out of the on-premises solution to Amazon S3. Use the
Snowball Edge file interface to provide on-premises systems with local access to the data.
C. Use AWS Storage Gateway and configure a cached volume gateway. Run the Storage Gateway
software application on premises and configure a percentage of data to cache locally. Mount the
gateway storage volumes to provide local access to the data.
D. Use AWS Storage Gateway and configure a stored volume gateway. Run the Storage Gateway software
application on premises and map the gateway storage volumes to on-premises storage. Mount
the gateway storage volumes to provide local access to the data.

A

D. Use AWS Storage Gateway and configure a stored volume gateway. Run the Storage Gateway software
application on premises and map the gateway storage volumes to on-premises storage. Mount
the gateway storage volumes to provide local access to the data.

Explanation:
A stored volume gateway keeps the complete dataset on premises for low-latency local access and asynchronously backs the volumes up to Amazon S3 as EBS snapshots over an encrypted connection, which satisfies both requirements. A cached volume gateway (option C) keeps only frequently used data locally, so it does not maintain local access to all the data.
https://docs.aws.amazon.com/storagegateway/latest/vgw/WhatIsStorageGateway.html

46
Q

QUESTION 455
A company recently created a disaster recovery site in a different AWS Region. The company
needs to transfer large amounts of data back and forth between NFS file systems in the two
Regions on a periodic basis.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use AWS DataSync.
B. Use AWS Snowball devices
C. Set up an SFTP server on Amazon EC2
D. Use AWS Database Migration Service (AWS DMS)

A

A. Use AWS DataSync.

Explanation:
AWS DataSync is a fully managed data transfer service that simplifies moving large amounts of
data between on-premises storage systems and AWS services. It can also transfer data between
different AWS services, including different AWS Regions. DataSync provides a simple, scalable,
and automated solution to transfer data, and it minimizes the operational overhead because it is
fully managed by AWS.
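
A hedged boto3 sketch, assuming the source and destination NFS locations (and the DataSync agent) already exist; the ARNs are hypothetical:

import boto3

datasync = boto3.client("datasync", region_name="us-east-1")

# A task pairs a source and a destination location.
task = datasync.create_task(
    SourceLocationArn="arn:aws:datasync:us-east-1:111122223333:location/loc-src",
    DestinationLocationArn="arn:aws:datasync:us-west-2:111122223333:location/loc-dst",
    Name="nfs-dr-sync",
)

# Each periodic transfer is a single API call (or a schedule on the task).
datasync.start_task_execution(TaskArn=task["TaskArn"])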

47
Q

QUESTION 454
A company’s application runs on AWS. The application stores large documents in an Amazon S3
bucket that uses the S3 Standard-Infrequent Access (S3 Standard-IA) storage class. The
company will continue paying to store the data but wants to save on its total S3 costs. The
company wants authorized external users to have the ability to access the documents in
milliseconds.
Which solution will meet these requirements MOST cost-effectively?
A. Configure the S3 bucket to be a Requester Pays bucket.
B. Change the storage tier to S3 Standard for all existing and future objects.
C. Turn on S3 Transfer Acceleration for the S3 bucket.
D. Use Amazon CloudFront to handle all the requests to the S3 bucket.

A

D. Use Amazon CloudFront to handle all the requests to the S3 bucket.
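
Explanation:
CloudFront caches the documents at edge locations, so repeated requests are served from the cache instead of generating S3 GET request and data transfer charges, lowering total S3 costs without changing the storage class. Cached responses are delivered in milliseconds, and signed URLs or origin access control can restrict access to authorized external users. Requester Pays (option A) shifts costs rather than reducing them, moving back to S3 Standard (option B) raises storage costs, and Transfer Acceleration (option C) adds cost rather than saving it.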

48
Q

QUESTION 453
A company’s web application consists of an Amazon API Gateway API in front of an AWS
Lambda function and an Amazon DynamoDB database. The Lambda function handles the
business logic, and the DynamoDB table hosts the data. The application uses Amazon Cognito
user pools to identify the individual users of the application. A solutions architect needs to update
the application so that only users who have a subscription can access premium content.
Which solution will meet this requirement with the LEAST operational overhead?
A. Enable API caching and throttling on the API Gateway API.
B. Set up AWS WAF on the API Gateway API. Create a rule to filter users who have a subscription.
C. Apply fine-grained IAM permissions to the premium content in the DynamoDB table.
D. Implement API usage plans and API keys to limit the access of users who do not have a
subscription.

A

D. Implement API usage plans and API keys to limit the access of users who do not have a
subscription.

Explanation:
To meet the requirement with the least operational overhead, you can implement API usage
plans and API keys to limit the access of users who do not have a subscription. This way, you
can control access to your API Gateway APIs by requiring clients to submit valid API keys with
requests. You can associate usage plans with API keys to configure throttling and quota limits on
individual client accounts.
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-usage-plans.html
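
A minimal boto3 sketch, with a hypothetical API ID, stage, and limits:

import boto3

apigw = boto3.client("apigateway")

# A usage plan gates access to specific API stages; subscribers get an
# API key attached to the plan.
plan = apigw.create_usage_plan(
    name="premium",
    apiStages=[{"apiId": "a1b2c3d4e5", "stage": "prod"}],
    throttle={"rateLimit": 100.0, "burstLimit": 200},
)

key = apigw.create_api_key(name="subscriber-1234", enabled=True)

apigw.create_usage_plan_key(
    usagePlanId=plan["id"],
    keyId=key["id"],
    keyType="API_KEY",
)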

49
Q

QUESTION 452
A company is hosting a three-tier ecommerce application in the AWS Cloud. The company hosts
the website on Amazon S3 and integrates the website with an API that handles sales requests.
The company hosts the API on three Amazon EC2 instances behind an Application Load
Balancer (ALB). The API consists of static and dynamic front-end content along with backend
workers that process sales requests asynchronously.
The company is expecting a significant and sudden increase in the number of sales requests
during events for the launch of new products.
What should a solutions architect recommend to ensure that all the requests are processed
successfully?
A. Add an Amazon CloudFront distribution for the dynamic content. Increase the number of EC2
instances to handle the increase in traffic.
B. Add an Amazon CloudFront distribution for the static content. Place the EC2 instances in an Auto
Scaling group to launch new instances based on network traffic.
C. Add an Amazon CloudFront distribution for the dynamic content. Add an Amazon ElastiCache
instance in front of the ALB to reduce traffic for the API to handle.
D. Add an Amazon CloudFront distribution for the static content. Add an Amazon Simple Queue
Service (Amazon SQS) queue to receive requests from the website for later processing by the
EC2 instances.

A

D. Add an Amazon CloudFront distribution for the static content. Add an Amazon Simple Queue
Service (Amazon SQS) queue to receive requests from the website for later processing by the
EC2 instances.

Explanation:
Static content such as images and style sheets is identical for all users and is best cached at the edge of a content delivery network (CDN), which is what the CloudFront distribution provides. The sales requests are processed asynchronously by backend workers, so placing them in an SQS queue buffers sudden launch-day spikes and ensures every request is retained until a worker processes it.
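
A minimal boto3 sketch of both sides of the queue, with a hypothetical queue URL and payload:

import json

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/111122223333/sales-requests"

# Front end: enqueue the sales request instead of processing it inline,
# so a launch-day spike is buffered rather than dropped.
sqs.send_message(QueueUrl=queue_url,
                 MessageBody=json.dumps({"orderId": "1234", "sku": "ABC"}))

# Backend worker: long-poll, process, then delete.
resp = sqs.receive_message(QueueUrl=queue_url,
                           MaxNumberOfMessages=10,
                           WaitTimeSeconds=20)
for msg in resp.get("Messages", []):
    order = json.loads(msg["Body"])  # hand off to order processing
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])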

50
Q

QUESTION 451
A company is building a mobile app on AWS. The company wants to expand its reach to millions
of users. The company needs to build a platform so that authorized users can watch the
company’s content on their mobile devices.
What should a solutions architect recommend to meet these requirements?
A. Publish content to a public Amazon S3 bucket. Use AWS Key Management Service (AWS KMS)
keys to stream content.
B. Set up IPsec VPN between the mobile app and the AWS environment to stream content.
C. Use Amazon CloudFront. Provide signed URLs to stream content.
D. Set up AWS Client VPN between the mobile app and the AWS environment to stream content.

A

C. Use Amazon CloudFront. Provide signed URLs to stream content.

Explanation:
Amazon CloudFront is a content delivery network (CDN) that securely delivers data, videos,
applications, and APIs to customers globally with low latency and high transfer speeds.
CloudFront supports signed URLs that provide authorized access to your content. This feature
allows the company to control who can access their content and for how long, providing a secure
and scalable solution for millions of users.
https://www.amazonaws.cn/en/cloudfront/
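
A hedged sketch of generating a signed URL with botocore and the cryptography package; the key pair ID, key file, and distribution domain are hypothetical:

from datetime import datetime, timedelta

from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

# The key pair ID and private key belong to a CloudFront trusted key
# group configured on the distribution.
def rsa_signer(message):
    with open("cloudfront_private_key.pem", "rb") as f:
        key = serialization.load_pem_private_key(f.read(), password=None)
    return key.sign(message, padding.PKCS1v15(), hashes.SHA1())

signer = CloudFrontSigner("K2JCJMDEHXQW5F", rsa_signer)

# URL valid for one hour; after that the viewer must re-authorize.
url = signer.generate_presigned_url(
    "https://d111111abcdef8.cloudfront.net/videos/episode1.mp4",
    date_less_than=datetime.utcnow() + timedelta(hours=1),
)
print(url)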