SAA L2P 451-500 v24.021 Flashcards
QUESTION 500
A company uses a payment processing system that requires messages for a particular payment
ID to be received in the same order that they were sent. Otherwise, the payments might be
processed incorrectly.
Which actions should a solutions architect take to meet this requirement? (Choose two.)
A. Write the messages to an Amazon DynamoDB table with the payment ID as the partition key.
B. Write the messages to an Amazon Kinesis data stream with the payment ID as the partition key.
C. Write the messages to an Amazon ElastiCache for Memcached cluster with the payment ID as
the key.
D. Write the messages to an Amazon Simple Queue Service (Amazon SQS) queue. Set the message attribute to use the payment ID.
E. Write the messages to an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Set the
message group to use the payment ID.
B. Write the messages to an Amazon Kinesis data stream with the payment ID as the partition key.
E. Write the messages to an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Set the
message group to use the payment ID.
Explanation:
1) SQS FIFO queues deliver messages in the exact order they are sent. Using the payment ID as the message group ID ensures that all messages for a given payment ID are received sequentially.
2) Kinesis data streams also enforce ordering by partition key: records that share a partition key go to the same shard and are strictly ordered within it, so using the payment ID as the partition key guarantees per-payment ordering (https://docs.aws.amazon.com/streams/latest/dev/key-concepts.html).
The other options do not guarantee message ordering: DynamoDB and ElastiCache are not message queues, and standard SQS queues deliver messages in approximate order only.
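To make answer E concrete, here is a minimal sketch in Python with boto3; the queue URL is hypothetical, and FIFO queue names must end in ".fifo":

```python
import uuid

import boto3

sqs = boto3.client("sqs")

# Hypothetical FIFO queue for payment messages.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/payments.fifo"

def send_payment_message(payment_id: str, body: str) -> None:
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=body,
        # All messages that share a MessageGroupId are delivered strictly
        # in the order they were sent.
        MessageGroupId=payment_id,
        # Required unless content-based deduplication is enabled on the queue.
        MessageDeduplicationId=str(uuid.uuid4()),
    )
```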
QUESTION 499
A company has a web application with sporadic usage patterns. There is heavy usage at the
beginning of each month, moderate usage at the start of each week, and unpredictable usage
during the week. The application consists of a web server and a MySQL database server running
during the week. The application consists of a web server and a MySQL database server running
inside the data center. The company would like to move the application to the AWS Cloud and
needs to select a cost-effective database platform that will not require database modifications.
Which solution will meet these requirements?
A. Amazon DynamoDB
B. Amazon RDS for MySQL
C. MySQL-compatible Amazon Aurora Serverless
D. MySQL deployed on Amazon EC2 in an Auto Scaling group
C. MySQL-compatible Amazon Aurora Serverless
Explanation:
Amazon Aurora Serverless is an on-demand, auto-scaling configuration for Amazon Aurora
(MySQL-compatible edition): the database automatically starts up, shuts down, and scales
capacity up or down based on the application’s needs. It is a simple, cost-effective option for
infrequent, intermittent, or unpredictable workloads, and because it is MySQL-compatible it
requires no database modifications.
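As a rough sketch of option C (assuming Aurora Serverless v1; identifiers and credentials are placeholders), the cluster is created with the serverless engine mode and scaling bounds:

```python
import boto3

rds = boto3.client("rds")

# Capacity scales between the bounds and the cluster pauses when idle,
# which suits the sporadic monthly/weekly usage pattern.
rds.create_db_cluster(
    DBClusterIdentifier="webapp-aurora-serverless",  # hypothetical
    Engine="aurora-mysql",
    EngineMode="serverless",
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",
    ScalingConfiguration={
        "MinCapacity": 1,
        "MaxCapacity": 16,
        "AutoPause": True,            # pause during idle periods
        "SecondsUntilAutoPause": 300,
    },
)
```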
QUESTION 498
A company needs to create an Amazon Elastic Kubernetes Service (Amazon EKS) cluster to host
a digital media streaming application. The EKS cluster will use a managed node group that is
backed by Amazon Elastic Block Store (Amazon EBS) volumes for storage. The company must
encrypt all data at rest by using a customer managed key that is stored in AWS Key Management Service (AWS KMS).
Which combination of actions will meet this requirement with the LEAST operational overhead?
(Choose two.)
A. Use a Kubernetes plugin that uses the customer managed key to perform data encryption.
B. After creation of the EKS cluster, locate the EBS volumes. Enable encryption by using the
customer managed key.
C. Enable EBS encryption by default in the AWS Region where the EKS cluster will be created.
Select the customer managed key as the default key.
D. Create the EKS cluster. Create an IAM role that has a policy that grants permission to the
customer managed key. Associate the role with the EKS cluster.
E. Store the customer managed key as a Kubernetes secret in the EKS cluster. Use the customer
managed key to encrypt the EBS volumes.
C. Enable EBS encryption by default in the AWS Region where the EKS cluster will be created.
Select the customer managed key as the default key.
D. Create the EKS cluster. Create an IAM role that has a policy that grants permission to the
customer managed key. Associate the role with the EKS cluster.
Explanation:
C - Enabling EBS encryption by default in the Region means every new volume, including the managed node group's volumes, is created encrypted with the chosen key and requires no per-volume work: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html
D - An IAM role with a policy granting access to the customer managed key gives the EKS cluster exactly the key permissions it needs without changing broader IAM permissions.
Other options:
A. A custom Kubernetes plugin must be installed, managed, and troubleshot, which adds significant operational overhead.
E. Managing Kubernetes secrets for key access requires extra operations inside the EKS cluster, adding operational complexity.
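A minimal sketch of action C with boto3; the key ARN is hypothetical. Once these calls run, every new EBS volume in the Region, including the managed node group's volumes, is encrypted with the customer managed key:

```python
import boto3

# Run in the Region where the EKS cluster will be created.
ec2 = boto3.client("ec2")

# Encrypt all newly created EBS volumes by default...
ec2.enable_ebs_encryption_by_default()

# ...and make the customer managed KMS key the Region's default key.
ec2.modify_ebs_default_kms_key_id(
    KmsKeyId="arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab"
)
```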
QUESTION 497
A company is developing a real-time multiplayer game that uses UDP for communications
between the clients and the game servers, which run in an Auto Scaling group. Spikes in demand
are anticipated during the day, so the game server platform must adapt accordingly. Developers
want to store gamer scores and other non-relational data in a database solution that will scale
without intervention.
Which solution should a solutions architect recommend?
A. Use Amazon Route 53 for traffic distribution and Amazon Aurora Serverless for data storage.
B. Use a Network Load Balancer for traffic distribution and Amazon DynamoDB on-demand for data
storage.
C. Use a Network Load Balancer for traffic distribution and Amazon Aurora Global Database for data
storage.
D. Use an Application Load Balancer for traffic distribution and Amazon DynamoDB global tables for
data storage.
B. Use a Network Load Balancer for traffic distribution and Amazon DynamoDB on-demand for data
storage.
Explanation:
A Network Load Balancer can handle UDP traffic (an Application Load Balancer cannot), and
Amazon DynamoDB on-demand provides automatic scaling without intervention.
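For the storage half of answer B, a minimal boto3 sketch creating the table in on-demand capacity mode; the table name and key schema are hypothetical:

```python
import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="GamerScores",
    AttributeDefinitions=[
        {"AttributeName": "GamerId", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "GamerId", "KeyType": "HASH"},
    ],
    # On-demand mode: no provisioned capacity to manage; the table
    # scales with traffic without intervention.
    BillingMode="PAY_PER_REQUEST",
)
```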
QUESTION 496
A meteorological startup company has a custom web application to sell weather data to its users
online. The company uses Amazon DynamoDB to store its data and wants to build a new service
that sends an alert to the managers of four internal teams every time a new weather event is
recorded. The company does not want the new service to affect the performance of the current
application.
What should a solutions architect do to meet these requirements with the LEAST amount of
operational overhead?
A. Use DynamoDB transactions to write new event data to the table. Configure the transactions to
notify internal teams.
B. Have the current application publish a message to four Amazon Simple Notification Service
(Amazon SNS) topics. Have each team subscribe to one topic.
C. Enable Amazon DynamoDB Streams on the table. Use triggers to write to a single Amazon
Simple Notification Service (Amazon SNS) topic to which the teams can subscribe.
D. Add a custom attribute to each record to flag new items. Write a cron job that scans the table
every minute for items that are new and notifies an Amazon Simple Queue Service (Amazon
SQS) queue to which the teams can subscribe.
C. Enable Amazon DynamoDB Streams on the table. Use triggers to write to a single Amazon
Simple Notification Service (Amazon SNS) topic to which the teams can subscribe.
Explanation:
The solution with the least operational overhead is to enable Amazon DynamoDB Streams on the
table and use a trigger to write to a single Amazon SNS topic to which the teams can subscribe.
This requires minimal configuration and infrastructure, DynamoDB Streams capture changes with
low latency and without affecting the performance of the current application, and the trigger
automatically publishes the changes to the SNS topic that notifies the internal teams.
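A minimal sketch of the trigger in answer C: a Lambda function subscribed to the table's stream that publishes each newly inserted event to a single SNS topic. The topic ARN environment variable is an assumption:

```python
import json
import os

import boto3

sns = boto3.client("sns")
TOPIC_ARN = os.environ["WEATHER_EVENTS_TOPIC_ARN"]  # hypothetical

def handler(event, context):
    """DynamoDB Streams trigger: alert the teams about new weather events."""
    for record in event["Records"]:
        if record["eventName"] == "INSERT":
            sns.publish(
                TopicArn=TOPIC_ARN,
                Subject="New weather event recorded",
                Message=json.dumps(record["dynamodb"]["NewImage"]),
            )
```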
QUESTION 495
A company has a large dataset for its online advertising business stored in an Amazon RDS for
MySQL DB instance in a single Availability Zone. The company wants business reporting queries
to run without impacting the write operations to the production DB instance.
Which solution meets these requirements?
A. Deploy RDS read replicas to process the business reporting queries.
B. Scale out the DB instance horizontally by placing it behind an Elastic Load Balancer.
C. Scale up the DB instance to a larger instance type to handle write operations and queries.
D. Deploy the DB instance in multiple Availability Zones to process the business reporting queries.
A. Deploy RDS read replicas to process the business reporting queries.
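A read replica serves the reporting queries from its own endpoint while the primary continues to handle production writes. A one-call boto3 sketch with hypothetical identifiers:

```python
import boto3

rds = boto3.client("rds")

# The reporting tools then connect to the replica's endpoint instead of
# the production instance, so writes are unaffected.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="ads-reporting-replica",
    SourceDBInstanceIdentifier="ads-production-mysql",
)
```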
QUESTION 494
A developer has an application that uses an AWS Lambda function to upload files to Amazon S3
and needs the required permissions to perform the task. The developer already has an IAM user
with valid IAM credentials required for Amazon S3.
What should a solutions architect do to grant the permissions?
A. Add required IAM permissions in the resource policy of the Lambda function.
B. Create a signed request using the existing IAM credentials in the Lambda function.
C. Create a new IAM user and use the existing IAM credentials in the Lambda function.
D. Create an IAM execution role with the required permissions and attach the IAM role to the
Lambda function.
D. Create an IAM execution role with the required permissions and attach the IAM role to the
Lambda function.
Explanation:
To grant the necessary permissions to an AWS Lambda function to upload files to Amazon S3, a
solutions architect should create an IAM execution role with the required permissions and attach
the IAM role to the Lambda function. This approach follows the principle of least privilege and
ensures that the Lambda function can only access the resources it needs to perform its specific
task.
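A sketch of answer D with boto3: a role the Lambda service can assume, with a least-privilege inline policy for the upload. The role, policy, and bucket names are hypothetical.

```python
import json

import boto3

iam = boto3.client("iam")

# Trust policy: only the Lambda service may assume this execution role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="lambda-s3-upload-role",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Least privilege: the function may only put objects into one bucket.
iam.put_role_policy(
    RoleName="lambda-s3-upload-role",
    PolicyName="s3-upload",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::example-bucket/*",
        }],
    }),
)
```

The role is then attached to the Lambda function as its execution role.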
QUESTION 493
A company has deployed a serverless application that invokes an AWS Lambda function when
new documents are uploaded to an Amazon S3 bucket. The application uses the Lambda
function to process the documents. After a recent marketing campaign, the company noticed that
the application did not process many of the documents.
What should a solutions architect do to improve the architecture of this application?
A. Set the Lambda function’s runtime timeout value to 15 minutes.
B. Configure an S3 bucket replication policy. Stage the documents in the S3 bucket for later
processing.
C. Deploy an additional Lambda function. Load balance the processing of the documents across the
two Lambda functions.
D. Create an Amazon Simple Queue Service (Amazon SQS) queue. Send the requests to the
queue. Configure the queue as an event source for Lambda.
D. Create an Amazon Simple Queue Service (Amazon SQS) queue. Send the requests to the
queue. Configure the queue as an event source for Lambda.
Explanation:
The best improvement is to use Amazon Simple Queue Service (Amazon SQS) to buffer the
upload events and decouple the S3 bucket from the Lambda function. Documents are not lost
when the function is throttled or unavailable; the messages stay in the queue and are processed
later. With the queue configured as an event source, the Lambda function processes the
documents in a scalable and fault-tolerant manner.
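Once the queue exists, answer D's wiring is a single call; the ARN and function name here are hypothetical:

```python
import boto3

lambda_client = boto3.client("lambda")

# Lambda polls the queue and invokes the function with batches of
# messages; failed batches remain in the queue and are retried rather
# than being lost.
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:us-east-1:123456789012:documents-queue",
    FunctionName="process-documents",
    BatchSize=10,
)
```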
QUESTION 492
A solutions architect is designing the architecture for a software demonstration environment. The
environment will run on Amazon EC2 instances in an Auto Scaling group behind an Application
Load Balancer (ALB). The system will experience significant increases in traffic during working
hours but is not required to operate on weekends.
Which combination of actions should the solutions architect take to ensure that the system can
scale to meet demand? (Choose two.)
A. Use AWS Auto Scaling to adjust the ALB capacity based on request rate.
B. Use AWS Auto Scaling to scale the capacity of the VPC internet gateway.
C. Launch the EC2 instances in multiple AWS Regions to distribute the load across Regions.
D. Use a target tracking scaling policy to scale the Auto Scaling group based on instance CPU
utilization.
E. Use scheduled scaling to change the Auto Scaling group minimum, maximum, and desired
capacity to zero for weekends. Revert to the default values at the start of the week.
D. Use a target tracking scaling policy to scale the Auto Scaling group based on instance CPU
utilization.
E. Use scheduled scaling to change the Auto Scaling group minimum, maximum, and desired
capacity to zero for weekends. Revert to the default values at the start of the week.
Explanation:
Answer D uses a target tracking policy to scale the Auto Scaling group on instance CPU utilization, so capacity is added to handle increased traffic during working hours and removed during quiet periods. Answer E uses scheduled scaling to set the group's minimum, maximum, and desired capacity to zero for the weekend and restore the default values at the start of the week, eliminating weekend costs for a system that is not required then. Answer A is not valid: an ALB scales its own capacity automatically and is not something AWS Auto Scaling adjusts. Answer B is not valid because an internet gateway has no capacity to scale, and answer C adds cross-Region complexity that the requirement does not call for.
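A sketch of the scheduled actions in answer E, assuming a hypothetical group name and weekday defaults of 2 to 10 instances:

```python
import boto3

autoscaling = boto3.client("autoscaling")
ASG_NAME = "demo-environment-asg"  # hypothetical

# Scale to zero on Friday evening (cron is in UTC; 5 = Friday)...
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName=ASG_NAME,
    ScheduledActionName="weekend-shutdown",
    Recurrence="0 22 * * 5",
    MinSize=0,
    MaxSize=0,
    DesiredCapacity=0,
)

# ...and restore the defaults on Monday morning (1 = Monday).
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName=ASG_NAME,
    ScheduledActionName="weekday-startup",
    Recurrence="0 6 * * 1",
    MinSize=2,
    MaxSize=10,
    DesiredCapacity=2,
)
```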
QUESTION 491
A company has launched an Amazon RDS for MySQL DB instance. Most of the connections to
the database come from serverless applications. Application traffic to the database changes
significantly at random intervals. At times of high demand, users report that their applications
experience database connection rejection errors.
Which solution will resolve this issue with the LEAST operational overhead?
A. Create a proxy in RDS Proxy. Configure the users’ applications to use the DB instance through
RDS Proxy.
B. Deploy Amazon ElastiCache for Memcached between the users’ applications and the DB instance.
C. Migrate the DB instance to a different instance class that has higher I/O capacity. Configure the
users’ applications to use the new DB instance.
D. Configure Multi-AZ for the DB instance. Configure the users’ applications to switch between the
DB instances.
A. Create a proxy in RDS Proxy. Configure the users’ applications to use the DB instance through
RDS Proxy.
Explanation:
Many applications, including those built on modern serverless architectures, can have a large
number of open connections to the database server and may open and close database
connections at a high rate, exhausting database memory and compute resources. Amazon RDS
Proxy allows applications to pool and share connections established with the database, improving
database efficiency and application scalability.
https://aws.amazon.com/pt/rds/proxy/
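A rough sketch of answer A with boto3. All ARNs, subnet IDs, and identifiers are placeholders, and the proxy needs an IAM role that can read the database credentials from Secrets Manager:

```python
import boto3

rds = boto3.client("rds")

# The proxy pools and shares connections, so bursts of serverless
# clients no longer exhaust the DB instance's connection limit.
rds.create_db_proxy(
    DBProxyName="mysql-proxy",
    EngineFamily="MYSQL",
    Auth=[{
        "AuthScheme": "SECRETS",
        "SecretArn": "arn:aws:secretsmanager:us-east-1:123456789012:secret:db-creds",
        "IAMAuth": "DISABLED",
    }],
    RoleArn="arn:aws:iam::123456789012:role/rds-proxy-role",
    VpcSubnetIds=["subnet-0a1b2c3d", "subnet-4e5f6a7b"],
)

# Point the proxy's default target group at the existing DB instance;
# applications then connect to the proxy endpoint instead.
rds.register_db_proxy_targets(
    DBProxyName="mysql-proxy",
    DBInstanceIdentifiers=["production-mysql"],
)
```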
QUESTION 490
A company recently deployed a new auditing system to centralize information about operating
system versions, patching, and installed software for Amazon EC2 instances. A solutions architect
must ensure all instances provisioned through EC2 Auto Scaling groups successfully send
reports to the auditing system as soon as they are launched and terminated.
Which solution achieves these goals MOST efficiently?
A. Use a scheduled AWS Lambda function and run a script remotely on all EC2 instances to send
data to the audit system.
B. Use EC2 Auto Scaling lifecycle hooks to run a custom script to send data to the audit system
when instances are launched and terminated.
C. Use an EC2 Auto Scaling launch configuration to run a custom script through user data to send
data to the audit system when instances are launched and terminated.
D. Run a custom script on the instance operating system to send data to the audit system. Configure
the script to be invoked by the EC2 Auto Scaling group when the instance starts and is
terminated.
B. Use EC2 Auto Scaling lifecycle hooks to run a custom script to send data to the audit system
when instances are launched and terminated.
Explanation:
Amazon EC2 Auto Scaling offers the ability to add lifecycle hooks to your Auto Scaling groups.
These hooks let you create solutions that are aware of events in the Auto Scaling instance
lifecycle, and then perform a custom action on instances when the corresponding lifecycle event
occurs.
https://docs.aws.amazon.com/autoscaling/ec2/userguide/lifecycle-hooks.html
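A sketch of answer B with hypothetical names: one hook per lifecycle transition. Each hook pauses the transition so the custom script (delivered, for example, through EventBridge or SNS) can report to the audit system before the instance finishes launching or terminating:

```python
import boto3

autoscaling = boto3.client("autoscaling")
ASG_NAME = "audited-fleet-asg"  # hypothetical

for hook_name, transition in [
    ("report-on-launch", "autoscaling:EC2_INSTANCE_LAUNCHING"),
    ("report-on-terminate", "autoscaling:EC2_INSTANCE_TERMINATING"),
]:
    autoscaling.put_lifecycle_hook(
        LifecycleHookName=hook_name,
        AutoScalingGroupName=ASG_NAME,
        LifecycleTransition=transition,
        HeartbeatTimeout=300,        # seconds allowed for the custom action
        DefaultResult="CONTINUE",    # proceed if the script never responds
    )
```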
QUESTION 489
A company has an application that collects data from IoT sensors on automobiles. The data is streamed and stored in Amazon S3 through Amazon Kinesis Data Firehose. The data produces
trillions of S3 objects each year. Each morning, the company uses the data from the previous 30
days to retrain a suite of machine learning (ML) models.
Four times each year, the company uses the data from the previous 12 months to perform
analysis and train other ML models. The data must be available with minimal delay for up to 1
year. After 1 year, the data must be retained for archival purposes.
Which storage solution meets these requirements MOST cost-effectively?
A. Use the S3 Intelligent-Tiering storage class. Create an S3 Lifecycle policy to transition objects to
S3 Glacier Deep Archive after 1 year.
B. Use the S3 Intelligent-Tiering storage class. Configure S3 Intelligent-Tiering to automatically
move objects to S3 Glacier Deep Archive after 1 year.
C. Use the S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Create an S3 Lifecycle
policy to transition objects to S3 Glacier Deep Archive after 1 year.
D. Use the S3 Standard storage class. Create an S3 Lifecycle policy to transition objects to S3
Standard-Infrequent Access (S3 Standard-IA) after 30 days, and then to S3 Glacier Deep Archive
after 1 year.
D. Use the S3 Standard storage class. Create an S3 Lifecycle policy to transition objects to S3
Standard-Infrequent Access (S3 Standard-IA) after 30 days, and then to S3 Glacier Deep Archive
after 1 year.
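A sketch of the lifecycle rule in answer D; the bucket name is hypothetical. Objects stay in S3 Standard for the hot 30-day retraining window, move to Standard-IA (still immediately available) for the rest of the year, and then move to Glacier Deep Archive:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="sensor-data-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "tier-then-archive",
            "Status": "Enabled",
            "Filter": {},  # apply to every object in the bucket
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},
            ],
        }],
    },
)
```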
QUESTION 488
A solutions architect is designing a two-tiered architecture that includes a public subnet and a
database subnet. The web servers in the public subnet must be open to the internet on port 443.
The Amazon RDS for MySQL DB instance in the database subnet must be accessible only to the
web servers on port 3306.
Which combination of steps should the solutions architect take to meet these requirements?
(Choose two.)
A. Create a network ACL for the public subnet. Add a rule to deny outbound traffic to 0.0.0.0/0 on
port.
B. Create a security group for the DB instance. Add a rule to allow traffic from the public subnet
CIDR block on port 3306.
C. Create a security group for the web servers in the public subnet. Add a rule to allow traffic from
0.0.0.0/0 on port 443.
D. Create a security group for the DB instance. Add a rule to allow traffic from the web servers’
security group on port 3306.
E. Create a security group for the DB instance. Add a rule to deny all traffic except traffic from the
web servers’ security group on port 3306.
C. Create a security group for the web servers in the public subnet. Add a rule to allow traffic from
0.0.0.0/0 on port 443.
D. Create a security group for the DB instance. Add a rule to allow traffic from the web servers’
security group on port 3306.
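A sketch of answers C and D with boto3; the security group IDs are placeholders. Note that the DB rule references the web servers' security group rather than a CIDR block:

```python
import boto3

ec2 = boto3.client("ec2")

# Web tier: HTTPS open to the internet.
ec2.authorize_security_group_ingress(
    GroupId="sg-web0000000000000",
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# DB tier: MySQL allowed only from the web servers' security group.
ec2.authorize_security_group_ingress(
    GroupId="sg-db00000000000000",
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": "sg-web0000000000000"}],
    }],
)
```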
QUESTION 487
A company wants to use Amazon S3 for the secondary copy of its on-premises dataset. The
company would rarely need to access this copy. The storage solution’s cost should be minimal.
Which storage solution meets these requirements?
A. S3 Standard
B. S3 Intelligent-Tiering
C. S3 Standard-Infrequent Access (S3 Standard-IA)
D. S3 One Zone-Infrequent Access (S3 One Zone-IA)
D. S3 One Zone-Infrequent Access (S3 One Zone-IA)
QUESTION 486
A solutions architect is creating a new VPC design. There are two public subnets for the load
balancer, two private subnets for web servers, and two private subnets for MySQL. The web
servers use only HTTPS. The solutions architect has already created a security group for the load
balancer allowing port 443 from 0.0.0.0/0. Company policy requires that each resource has the
least access required to still be able to perform its tasks.
Which additional configuration strategy should the solutions architect use to meet these
requirements?
A. Create a security group for the web servers and allow port 443 from 0.0.0.0/0. Create a security group for the MySQL servers and allow port 3306 from the web servers security group.
B. Create a network ACL for the web servers and allow port 443 from 0.0.0.0/0. Create a network
ACL for the MySQL servers and allow port 3306 from the web servers security group.
C. Create a security group for the web servers and allow port 443 from the load balancer. Create a
security group for the MySQL servers and allow port 3306 from the web servers security group.
D. Create a network ACL for the web servers and allow port 443 from the load balancer. Create a
network ACL for the MySQL servers and allow port 3306 from the web servers security group.
C. Create a security group for the web servers and allow port 443 from the load balancer. Create a
security group for the MySQL servers and allow port 3306 from the web servers security group.
Explanation:
The load balancer is public facing and accepts all HTTPS traffic destined for the VPC (0.0.0.0/0
on port 443). The web servers should trust only traffic originating from the load balancer’s security
group, and the MySQL servers should trust only traffic originating from the web servers’ security
group on port 3306.
QUESTION 485
A hospital is designing a new application that gathers symptoms from patients. The hospital has
decided to use Amazon Simple Queue Service (Amazon SQS) and Amazon Simple Notification
Service (Amazon SNS) in the architecture. A solutions architect is reviewing the infrastructure
design. Data must be encrypted at rest and in transit. Only authorized personnel of the hospital
should be able to access the data.
Which combination of steps should the solutions architect take to meet these requirements?
(Choose two.)
A. Turn on server-side encryption on the SQS components. Update the default key policy to restrict key usage to a set of authorized principals.
B. Turn on server-side encryption on the SNS components by using an AWS Key Management Service (AWS KMS) customer managed key. Apply a key policy to restrict key usage to a set of authorized principals.
C. Turn on encryption on the SNS components. Update the default key policy to restrict key usage to a set of authorized principals. Set a condition in the topic policy to allow only encrypted connections over TLS.
D. Turn on server-side encryption on the SQS components by using an AWS Key Management Service (AWS KMS) customer managed key. Apply a key policy to restrict key usage to a set of authorized principals. Set a condition in the queue policy to allow only encrypted connections over TLS.
E. Turn on server-side encryption on the SQS components by using an AWS Key Management Service (AWS KMS) customer managed key. Apply an IAM policy to restrict key usage to a set of authorized principals. Set a condition in the queue policy to allow only encrypted connections over TLS.
B. Turn on server-side encryption on the SNS components by using an AWS Key Management
Service (AWS KMS) customer managed key. Apply a key policy to restrict key usage to a set of
authorized principals.
D. Turn on server-side encryption on the SQS components by using an AWS Key Management
Service (AWS KMS) customer managed key. Apply a key policy to restrict key usage to a set of
authorized principals. Set a condition in the queue policy to allow only encrypted connections
over TLS.
Explanation:
For a customer managed KMS key, you must configure the key policy to add permissions for
each queue producer and consumer.
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-key-management.html
https://docs.aws.amazon.com/sns/latest/dg/sns-server-side-encryption.html
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-server-side-encryption.html
Encryption at rest requires server-side encryption on the components, and encryption in transit requires TLS-only connections. A KMS customer managed key is controlled through its key policy; an IAM policy alone is not sufficient, which rules out E. A and C rely on the default key policy instead of a customer managed key, leaving B for the SNS components and D for the SQS components.
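To make answer D concrete, a sketch of the SQS side: server-side encryption with a customer managed key plus a queue policy that denies any connection that is not encrypted with TLS. The queue URL, queue ARN, and key ARN are hypothetical.

```python
import json

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/symptoms-queue"

# Deny all requests that do not arrive over TLS.
queue_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnencryptedConnections",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "sqs:*",
        "Resource": "arn:aws:sqs:us-east-1:123456789012:symptoms-queue",
        "Condition": {"Bool": {"aws:SecureTransport": "false"}},
    }],
}

sqs.set_queue_attributes(
    QueueUrl=QUEUE_URL,
    Attributes={
        "Policy": json.dumps(queue_policy),
        # Encryption at rest with the customer managed KMS key.
        "KmsMasterKeyId": "arn:aws:kms:us-east-1:123456789012:key/1234abcd",
    },
)
```

The key policy restricting usage to authorized principals is applied on the KMS key itself.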
QUESTION 484
An ecommerce company is running a multi-tier application on AWS. The front-end and backend
tiers both run on Amazon EC2, and the database runs on Amazon RDS for MySQL. The backend
tier communicates with the RDS instance. There are frequent calls to return identical datasets
from the database that are causing performance slowdowns.
Which action should be taken to improve the performance of the backend?
A. Implement Amazon SNS to store the database calls.
B. Implement Amazon ElastiCache to cache the large database.
C. Implement an RDS for MySQL read replica to cache database calls.
D. Implement Amazon Kinesis Data Firehose to stream the calls to the database.
B. Implement Amazon ElastiCache to cache the large database.
Explanation:
The key phrase is "identical datasets from the database": caching solves the issue by serving
frequently requested datasets from memory instead of querying the database each time.
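A sketch of the cache-aside pattern this implies, using ElastiCache for Redis through the redis-py client; the endpoint is hypothetical and query_database stands in for the real MySQL call:

```python
import json

import redis

# Hypothetical ElastiCache endpoint.
cache = redis.Redis(host="my-cache.abc123.use1.cache.amazonaws.com", port=6379)

def query_database(dataset_id: str) -> dict:
    # Placeholder for the real RDS for MySQL query.
    return {"dataset_id": dataset_id, "rows": []}

def get_dataset(dataset_id: str) -> dict:
    """Serve identical requests from the cache instead of the database."""
    cached = cache.get(dataset_id)
    if cached is not None:
        return json.loads(cached)              # cache hit: no DB call
    result = query_database(dataset_id)        # cache miss: query MySQL once
    cache.setex(dataset_id, 300, json.dumps(result))  # cache for 5 minutes
    return result
```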
QUESTION 483
A company runs an application that receives data from thousands of geographically dispersed
remote devices that use UDP. The application processes the data immediately and sends a
message back to the device if necessary. No data is stored.
The company needs a solution that minimizes latency for the data transmission from the devices.
The solution also must provide rapid failover to another AWS Region.
Which solution will meet these requirements?
A. Configure an Amazon Route 53 failover routing policy. Create a Network Load Balancer (NLB) in
each of the two Regions. Configure the NLB to invoke an AWS Lambda function to process the
data.
B. Use AWS Global Accelerator. Create a Network Load Balancer (NLB) in each of the two Regions
as an endpoint. Create an Amazon Elastic Container Service (Amazon ECS) cluster with the
Fargate launch type. Create an ECS service on the cluster. Set the ECS service as the target for
the NLB. Process the data in Amazon ECS.
C. Use AWS Global Accelerator. Create an Application Load Balancer (ALB) in each of the two
Regions as an endpoint. Create an Amazon Elastic Container Service (Amazon ECS) cluster with
the Fargate launch type. Create an ECS service on the cluster. Set the ECS service as the target
for the ALB. Process the data in Amazon ECS.
D. Configure an Amazon Route 53 failover routing policy. Create an Application Load Balancer
(ALB) in each of the two Regions. Create an Amazon Elastic Container Service (Amazon ECS)
cluster with the Fargate launch type. Create an ECS service on the cluster. Set the ECS service
as the target for the ALB. Process the data in Amazon ECS.
B. Use AWS Global Accelerator. Create a Network Load Balancer (NLB) in each of the two Regions
as an endpoint. Create an Amazon Elastic Container Service (Amazon ECS) cluster with the
Fargate launch type. Create an ECS service on the cluster. Set the ECS service as the target for
the NLB. Process the data in Amazon ECS.
Explanation:
Geographically dispersed devices call for AWS Global Accelerator, which provides multiple entry
points to the AWS global network worldwide for lower latency and rapid failover between Regions.
UDP requires a Network Load Balancer; an Application Load Balancer supports only HTTP and HTTPS.
QUESTION 482
A company wants to migrate its 1 PB on-premises image repository to AWS. The images will be
used by a serverless web application. Images stored in the repository are rarely accessed, but
they must be immediately available. Additionally, the images must be encrypted at rest and
protected from accidental deletion.
Which solution meets these requirements?
A. Implement client-side encryption and store the images in an Amazon S3 Glacier vault. Set a vault
lock to prevent accidental deletion.
B. Store the images in an Amazon S3 bucket in the S3 Standard-Infrequent Access (S3 Standard-
IA) storage class. Enable versioning, default encryption, and MFA Delete on the S3 bucket.
C. Store the images in an Amazon FSx for Windows File Server file share. Configure the Amazon FSx file share to use an AWS Key Management Service (AWS KMS) customer master key (CMK)
to encrypt the images in the file share. Use NTFS permission sets on the images to prevent
accidental deletion.
D. Store the images in an Amazon Elastic File System (Amazon EFS) file share in the Infrequent
Access storage class. Configure the EFS file share to use an AWS Key Management Service
(AWS KMS) customer master key (CMK) to encrypt the images in the file share. Use NFS
permission sets on the images to prevent accidental deletion.
B. Store the images in an Amazon S3 bucket in the S3 Standard-Infrequent Access (S3 Standard-
IA) storage class. Enable versioning, default encryption, and MFA Delete on the S3 bucket.
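A sketch of the bucket settings in answer B; the bucket name and MFA device are placeholders. MFA Delete can only be enabled by the root user, supplying the device serial and a current code:

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "image-repository"  # hypothetical

# Encrypt every new object at rest by default.
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"},
        }],
    },
)

# Versioning plus MFA Delete protects against accidental deletion.
s3.put_bucket_versioning(
    Bucket=BUCKET,
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
    MFA="arn:aws:iam::123456789012:mfa/root-device 123456",  # serial + code
)
```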
QUESTION 481
A company runs an application on Amazon EC2 Linux instances across multiple Availability
Zones. The application needs a storage layer that is highly available and Portable Operating
System Interface (POSIX)-compliant. The storage layer must provide maximum data durability
and must be shareable across the EC2 instances. The data in the storage layer will be accessed
frequently for the first 30 days and will be accessed infrequently after that time.
Which solution will meet these requirements MOST cost-effectively?
A. Use the Amazon S3 Standard storage class. Create an S3 Lifecycle policy to move infrequently
accessed data to S3 Glacier.
B. Use the Amazon S3 Standard storage class. Create an S3 Lifecycle policy to move infrequently
accessed data to S3 Standard-Infrequent Access (S3 Standard-IA).
C. Use the Amazon Elastic File System (Amazon EFS) Standard storage class. Create a Lifecycle
management policy to move infrequently accessed data to EFS Standard-Infrequent Access (EFS
Standard-IA).
D. Use the Amazon Elastic File System (Amazon EFS) One Zone storage class. Create a Lifecycle
management policy to move infrequently accessed data to EFS One Zone-Infrequent Access
(EFS One Zone-IA).
C. Use the Amazon Elastic File System (Amazon EFS) Standard storage class. Create a Lifecycle
management policy to move infrequently accessed data to EFS Standard-Infrequent Access (EFS
Standard-IA).
Explanation:
https://aws.amazon.com/efs/features/infrequent-access/
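A sketch of the lifecycle management policy in answer C; the file system ID is hypothetical:

```python
import boto3

efs = boto3.client("efs")

# Files that have not been accessed for 30 days move automatically to
# EFS Standard-IA; EFS Standard remains multi-AZ, POSIX-compliant, and
# shareable across the EC2 instances.
efs.put_lifecycle_configuration(
    FileSystemId="fs-0123456789abcdef0",
    LifecyclePolicies=[
        {"TransitionToIA": "AFTER_30_DAYS"},
    ],
)
```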