SAA L2P 501-600 v24.021 Flashcards
QUESTION 600
A company has multiple Windows file servers on premises. The company wants to migrate and
consolidate its files into an Amazon FSx for Windows File Server file system. File permissions
must be preserved to ensure that access rights do not change.
Which solutions will meet these requirements? (Choose two.)
A. Deploy AWS DataSync agents on premises. Schedule DataSync tasks to transfer the data to the
FSx for Windows File Server file system.
B. Copy the shares on each file server into Amazon S3 buckets by using the AWS CLI. Schedule
AWS DataSync tasks to transfer the data to the FSx for Windows File Server file system.
C. Remove the drives from each file server. Ship the drives to AWS for import into Amazon S3.
Schedule AWS DataSync tasks to transfer the data to the FSx for Windows File Server file
system.
D. Order an AWS Snowcone device. Connect the device to the on-premises network. Launch AWS
DataSync agents on the device. Schedule DataSync tasks to transfer the data to the FSx for
Windows File Server file system.
E. Order an AWS Snowball Edge Storage Optimized device. Connect the device to the on-premises
network. Copy data to the device by using the AWS CLI. Ship the device back to AWS for import
into Amazon S3. Schedule AWS DataSync tasks to transfer the data to the FSx for Windows File
Server file system.
A. Deploy AWS DataSync agents on premises. Schedule DataSync tasks to transfer the data to the
FSx for Windows File Server file system.
D. Order an AWS Snowcone device. Connect the device to the on-premises network. Launch AWS
DataSync agents on the device. Schedule DataSync tasks to transfer the data to the FSx for
Windows File Server file system.
Explanation:
A - This option involves deploying DataSync agents on your on-premises file servers and using
DataSync to transfer the data directly to the FSx for Windows File Server. DataSync ensures that
file permissions are preserved during the migration process.
D - This option involves using an AWS Snowcone device, a portable data transfer device. You
would connect the Snowcone device to your on-premises network, launch DataSync agents on
the device, and schedule DataSync tasks to transfer the data to FSx for Windows File Server.
DataSync handles the migration process while preserving file permissions.
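For reference, a minimal boto3 sketch of a scheduled DataSync task (assuming the agent is already activated and both the SMB source location and the FSx for Windows File Server destination location exist; all ARNs and names are placeholders):

import boto3

datasync = boto3.client("datasync")

# SecurityDescriptorCopyFlags preserves NTFS ownership, DACLs, and SACLs,
# which is how DataSync keeps Windows file permissions intact.
datasync.create_task(
    SourceLocationArn="arn:aws:datasync:us-east-1:111122223333:location/loc-smb-source",
    DestinationLocationArn="arn:aws:datasync:us-east-1:111122223333:location/loc-fsx-dest",
    Name="fileserver-to-fsx",
    Schedule={"ScheduleExpression": "cron(0 2 * * ? *)"},  # nightly at 02:00 UTC
    Options={"SecurityDescriptorCopyFlags": "OWNER_DACL_SACL"},
)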
QUESTION 599
A company needs to minimize the cost of its 1 Gbps AWS Direct Connect connection. The
company’s average connection utilization is less than 10%. A solutions architect must
recommend a solution that will reduce the cost without compromising security.
Which solution will meet these requirements?
A. Set up a new 1 Gbps Direct Connect connection. Share the connection with another AWS
account.
B. Set up a new 200 Mbps Direct Connect connection in the AWS Management Console.
C. Contact an AWS Direct Connect Partner to order a 1 Gbps connection. Share the connection with
another AWS account.
D. Contact an AWS Direct Connect Partner to order a 200 Mbps hosted connection for an existing
AWS account.
D. Contact an AWS Direct Connect Partner to order a 200 Mbps hosted connection for an existing
AWS account.
Explanation:
For Dedicated Connections, 1 Gbps, 10 Gbps, and 100 Gbps ports are available. For Hosted
Connections, connection speeds of 50 Mbps, 100 Mbps, 200 Mbps, 300 Mbps, 400 Mbps, 500
Mbps, 1 Gbps, 2 Gbps, 5 Gbps and 10 Gbps may be ordered from approved AWS Direct
Connect Partners.
QUESTION 598
A company uses Amazon S3 to store high-resolution pictures in an S3 bucket. To minimize
application changes, the company stores the pictures as the latest version of an S3 object. The
company needs to retain only the two most recent versions of the pictures.
The company wants to reduce costs. The company has identified the S3 bucket as a large
expense.
Which solution will reduce the S3 costs with the LEAST operational overhead?
A. Use S3 Lifecycle to delete expired object versions and retain the two most recent versions.
B. Use an AWS Lambda function to check for older versions and delete all but the two most recent
versions.
C. Use S3 Batch Operations to delete noncurrent object versions and retain only the two most recent
versions.
D. Deactivate versioning on the S3 bucket and retain the two most recent versions.
A. Use S3 Lifecycle to delete expired object versions and retain the two most recent versions.
Explanation:
S3 Lifecycle policies allow you to define rules that automatically transition or expire objects based
on their age or other criteria. By configuring an S3 Lifecycle policy to delete expired object
versions and retain only the two most recent versions, you can effectively manage the storage
costs while maintaining the desired retention policy. This solution is highly automated and
requires minimal operational overhead as the lifecycle management is handled by S3 itself.
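A minimal boto3 sketch of such a lifecycle rule (bucket name is a placeholder). NewerNoncurrentVersions=1 keeps one noncurrent version in addition to the current version, for two versions total:

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-pictures-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "retain-two-versions",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to all objects
                "NoncurrentVersionExpiration": {
                    "NoncurrentDays": 1,          # expire 1 day after becoming noncurrent
                    "NewerNoncurrentVersions": 1, # but always keep the newest noncurrent version
                },
            }
        ]
    },
)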
QUESTION 597
A company has a service that reads and writes large amounts of data from an Amazon S3 bucket
in the same AWS Region. The service is deployed on Amazon EC2 instances within the private
subnet of a VPC. The service communicates with Amazon S3 over a NAT gateway in the public
subnet. However, the company wants a solution that will reduce the data output costs.
Which solution will meet these requirements MOST cost-effectively?
A. Provision a dedicated EC2 NAT instance in the public subnet. Configure the route table for the
private subnet to use the elastic network interface of this instance as the destination for all S3
traffic.
B. Provision a dedicated EC2 NAT instance in the private subnet. Configure the route table for the
public subnet to use the elastic network interface of this instance as the destination for all S3
traffic.
C. Provision a VPC gateway endpoint. Configure the route table for the private subnet to use the
gateway endpoint as the route for all S3 traffic.
D. Provision a second NAT gateway. Configure the route table for the private subnet to use this NAT
gateway as the destination for all S3 traffic.
C. Provision a VPC gateway endpoint. Configure the route table for the private subnet to use the
gateway endpoint as the route for all S3 traffic.
Explanation:
A VPC gateway endpoint allows you to privately access Amazon S3 from within your VPC without
using a NAT gateway or NAT instance. By provisioning a VPC gateway endpoint for S3, the
service in the private subnet can directly communicate with S3 without incurring data transfer
costs for traffic going through a NAT gateway.
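A minimal boto3 sketch of creating the gateway endpoint (VPC, route table, and Region are placeholders):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Associating the endpoint with the private subnet's route table adds a
# route that sends S3-bound traffic over the AWS network instead of the NAT gateway.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
)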
QUESTION 596
A company uses on-premises servers to host its applications. The company is running out of
storage capacity. The applications use both block storage and NFS storage. The company needs
a high-performing solution that supports local caching without re-architecting its existing
applications.
Which combination of actions should a solutions architect take to meet these requirements?
(Choose two.)
A. Mount Amazon S3 as a file system to the on-premises servers.
B. Deploy an AWS Storage Gateway file gateway to replace NFS storage.
C. Deploy AWS Snowball Edge to provision NFS mounts to on-premises servers.
D. Deploy an AWS Storage Gateway volume gateway to replace the block storage.
E. Deploy Amazon Elastic File System (Amazon EFS) volumes and mount them to on-premises
servers.
B. Deploy an AWS Storage Gateway file gateway to replace NFS storage.
D. Deploy an AWS Storage Gateway volume gateway to replace the block storage.
Explanation:
By combining the deployment of an AWS Storage Gateway file gateway and an AWS Storage
Gateway volume gateway, the company can address both its block storage and NFS storage
needs, while leveraging local caching capabilities for improved performance.
QUESTION 595
A company is conducting an internal audit. The company wants to ensure that the data in an
Amazon S3 bucket that is associated with the company’s AWS Lake Formation data lake does
not contain sensitive customer or employee data. The company wants to discover personally
identifiable information (PII) or financial information, including passport numbers and credit card
numbers.
Which solution will meet these requirements?
A. Configure AWS Audit Manager on the account. Select the Payment Card Industry Data Security
Standards (PCI DSS) for auditing.
B. Configure Amazon S3 Inventory on the S3 bucket. Configure Amazon Athena to query the
inventory.
C. Configure Amazon Macie to run a data discovery job that uses managed identifiers for the
required data types.
D. Use Amazon S3 Select to run a report across the S3 bucket.
C. Configure Amazon Macie to run a data discovery job that uses managed identifiers for the
required data types.
Explanation:
Amazon Macie is a service that helps discover, classify, and protect sensitive data stored in
AWS. It uses machine learning algorithms and managed identifiers to detect various types of
sensitive information, including personally identifiable information (PII) and financial information.
By configuring Amazon Macie to run a data discovery job with the appropriate managed
identifiers for the required data types (such as passport numbers and credit card numbers), the
company can identify and classify any sensitive data present in the S3 bucket.
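A minimal boto3 sketch of such a discovery job (account ID and bucket name are placeholders; the managed data identifier IDs shown are examples for credit card numbers and US passport numbers):

import boto3

macie = boto3.client("macie2")

macie.create_classification_job(
    jobType="ONE_TIME",
    name="lake-formation-pii-audit",
    s3JobDefinition={
        "bucketDefinitions": [
            {"accountId": "111122223333", "buckets": ["example-data-lake-bucket"]}
        ]
    },
    managedDataIdentifierSelector="INCLUDE",
    managedDataIdentifierIds=["CREDIT_CARD_NUMBER", "USA_PASSPORT_NUMBER"],
)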
QUESTION 594
A company uses Amazon EC2 instances to host its internal systems. As part of a deployment
operation, an administrator tries to use the AWS CLI to terminate an EC2 instance. However, the
administrator receives a 403 (Access Denied) error message.
What is the cause of the unsuccessful request?
A. The EC2 instance has a resource-based policy with a Deny statement.
B. The principal has not been specified in the policy statement.
C. The “Action” field does not grant the actions that are required to terminate the EC2 instance.
D. The request to terminate the EC2 instance does not originate from the CIDR blocks 192.0.2.0/24
or 203.0.113.0/24.
D. The request to terminate the EC2 instance does not originate from the CIDR blocks 192.0.2.0/24
or 203.0.113.0/24.
QUESTION 593
A company wants to use artificial intelligence (AI) to determine the quality of its customer service
calls. The company currently manages calls in four different languages, including English. The
company will offer new languages in the future. The company does not have the resources to
regularly maintain machine learning (ML) models.
The company needs to create written sentiment analysis reports from the customer service call
recordings. The customer service call recording text must be translated into English.
Which combination of steps will meet these requirements? (Choose three.)
A. Use Amazon Comprehend to translate the audio recordings into English.
B. Use Amazon Lex to create the written sentiment analysis reports.
C. Use Amazon Polly to convert the audio recordings into text.
D. Use Amazon Transcribe to convert the audio recordings in any language into text.
E. Use Amazon Translate to translate text in any language to English.
F. Use Amazon Comprehend to create the sentiment analysis reports.
D. Use Amazon Transcribe to convert the audio recordings in any language into text.
E. Use Amazon Translate to translate text in any language to English.
F. Use Amazon Comprehend to create the sentiment analysis reports.
Explanation:
Amazon Transcribe will convert the audio recordings into text, Amazon Translate will translate the
text into English, and Amazon Comprehend will perform sentiment analysis on the translated text
to generate sentiment analysis reports.
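A minimal boto3 sketch of the three-step pipeline (bucket names and the job name are placeholders; the transcription step is asynchronous, so the transcript fetch is elided):

import boto3

transcribe = boto3.client("transcribe")
translate = boto3.client("translate")
comprehend = boto3.client("comprehend")

# 1) Transcribe the recording; IdentifyLanguage avoids maintaining per-language config.
transcribe.start_transcription_job(
    TranscriptionJobName="call-0001",
    Media={"MediaFileUri": "s3://example-bucket/calls/call-0001.wav"},
    IdentifyLanguage=True,
    OutputBucketName="example-transcripts-bucket",
)
# ... poll get_transcription_job until COMPLETED, then load the transcript from S3 ...
transcript_text = "..."  # placeholder for the fetched transcript

# 2) Translate the transcript into English.
english = translate.translate_text(
    Text=transcript_text, SourceLanguageCode="auto", TargetLanguageCode="en"
)["TranslatedText"]

# 3) Run sentiment analysis on the English text for the written report.
sentiment = comprehend.detect_sentiment(Text=english, LanguageCode="en")
print(sentiment["Sentiment"], sentiment["SentimentScore"])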
QUESTION 592
A company has multiple AWS accounts for development work. Some staff consistently use
oversized Amazon EC2 instances, which causes the company to exceed the yearly budget for the
development accounts. The company wants to centrally restrict the creation of AWS resources in
these accounts.
Which solution will meet these requirements with the LEAST development effort?
A. Develop AWS Systems Manager templates that use an approved EC2 creation process. Use the
approved Systems Manager templates to provision EC2 instances.
B. Use AWS Organizations to organize the accounts into organizational units (OUs). Define and
attach a service control policy (SCP) to control the usage of EC2 instance types.
C. Configure an Amazon EventBridge rule that invokes an AWS Lambda function when an EC2
instance is created. Stop disallowed EC2 instance types.
D. Set up AWS Service Catalog products for the staff to create the allowed EC2 instance types.
Ensure that staff can deploy EC2 instances only by using the Service Catalog products.
B. Use AWS Organizations to organize the accounts into organizational units (OUs). Define and
attach a service control policy (SCP) to control the usage of EC2 instance types.
Explanation:
AWS Organizations: AWS Organizations is a service that helps you centrally manage multiple
AWS accounts. It enables you to group accounts into organizational units (OUs) and apply
policies across those accounts.
Service Control Policies (SCPs): SCPs in AWS Organizations allow you to define fine-grained
permissions and restrictions at the account or OU level. By attaching an SCP to the development
accounts, you can control the creation and usage of EC2 instance types.
Least Development Effort: Option B requires minimal development effort as it leverages the built-
in features of AWS Organizations and SCPs. You can define the SCP to restrict the use of
oversized EC2 instance types and apply it to the appropriate OUs or accounts.
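A minimal boto3 sketch of creating and attaching such an SCP (the allowed instance types and the OU ID are placeholders):

import json
import boto3

org = boto3.client("organizations")

# Deny launching any EC2 instance type other than the small types listed.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "ec2:RunInstances",
        "Resource": "arn:aws:ec2:*:*:instance/*",
        "Condition": {
            "StringNotEquals": {"ec2:InstanceType": ["t3.micro", "t3.small"]}
        },
    }],
}

policy = org.create_policy(
    Content=json.dumps(scp),
    Description="Restrict EC2 instance types in development accounts",
    Name="RestrictEC2InstanceTypes",
    Type="SERVICE_CONTROL_POLICY",
)
org.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"],
                  TargetId="ou-example-devaccounts")  # placeholder OU ID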
QUESTION 591
A solutions architect is designing an asynchronous application to process credit card data
validation requests for a bank. The application must be secure and be able to process each
request at least once.
Which solution will meet these requirements MOST cost-effectively?
A. Use AWS Lambda event source mapping. Set Amazon Simple Queue Service (Amazon SQS)
standard queues as the event source. Use AWS Key Management Service (SSE-KMS) for
encryption. Add the kms:Decrypt permission for the Lambda execution role.
B. Use AWS Lambda event source mapping. Use Amazon Simple Queue Service (Amazon SQS)
FIFO queues as the event source. Use SQS managed encryption keys (SSE-SQS) for encryption.
Add the encryption key invocation permission for the Lambda function.
C. Use the AWS Lambda event source mapping. Set Amazon Simple Queue Service (Amazon SQS)
FIFO queues as the event source. Use AWS KMS keys (SSE-KMS). Add the kms:Decrypt
permission for the Lambda execution role.
D. Use the AWS Lambda event source mapping. Set Amazon Simple Queue Service (Amazon SQS)
standard queues as the event source. Use AWS KMS keys (SSE-KMS) for encryption. Add the
encryption key invocation permission for the Lambda function.
A. Use AWS Lambda event source mapping. Set Amazon Simple Queue Service (Amazon SQS)
standard queues as the event source. Use AWS Key Management Service (SSE-KMS) for
encryption. Add the kms:Decrypt permission for the Lambda execution role.
Explanation:
https://docs.aws.amazon.com/zh_tw/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-least-privilege-policy.html
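A minimal boto3 sketch of the two pieces this answer requires (role name, function name, queue ARN, and KMS key ARN are placeholders):

import json
import boto3

iam = boto3.client("iam")
lambda_client = boto3.client("lambda")

# Grant the Lambda execution role permission to decrypt messages from the
# SSE-KMS encrypted queue.
iam.put_role_policy(
    RoleName="card-validator-role",
    PolicyName="AllowKmsDecryptForSqs",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "kms:Decrypt",
            "Resource": "arn:aws:kms:us-east-1:111122223333:key/example-key-id",
        }],
    }),
)

# Map the standard queue to the function; standard queues give at-least-once delivery.
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:us-east-1:111122223333:validation-requests",
    FunctionName="validate-card-data",
    BatchSize=10,
)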
QUESTION 590
A gaming company uses Amazon DynamoDB to store user information such as geographic
location, player data, and leaderboards. The company needs to configure continuous backups to
an Amazon S3 bucket with a minimal amount of coding. The backups must not affect availability of the application and must not affect the read capacity units (RCUs) that are defined for the
table.
Which solution meets these requirements?
A. Use an Amazon EMR cluster. Create an Apache Hive job to back up the data to Amazon S3.
B. Export the data directly from DynamoDB to Amazon S3 with continuous backups. Turn on point-
in-time recovery for the table.
C. Configure Amazon DynamoDB Streams. Create an AWS Lambda function to consume the
stream and export the data to an Amazon S3 bucket.
D. Create an AWS Lambda function to export the data from the database tables to Amazon S3 on a
regular basis. Turn on point-in-time recovery for the table.
B. Export the data directly from DynamoDB to Amazon S3 with continuous backups. Turn on point-
in-time recovery for the table.
Explanation:
Continuous backups is a native feature of DynamoDB, it works at any scale without having to
manage servers or clusters and allows you to export data across AWS Regions and accounts to
any point-in-time in the last 35 days at a per-second granularity. Plus, it doesn’t affect the read
capacity or the availability of your production tables.
https://aws.amazon.com/blogs/aws/new-export-amazon-dynamodb-table-data-to-data-lake-amazon-s3/
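A minimal boto3 sketch of enabling point-in-time recovery and exporting to S3 (table and bucket names are placeholders):

import boto3

dynamodb = boto3.client("dynamodb")

# Point-in-time recovery must be on before exporting to a point in time.
dynamodb.update_continuous_backups(
    TableName="player-data",
    PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
)

# The export reads from the PITR backup, so it consumes no RCUs on the live table.
dynamodb.export_table_to_point_in_time(
    TableArn="arn:aws:dynamodb:us-east-1:111122223333:table/player-data",
    S3Bucket="example-export-bucket",
    ExportFormat="DYNAMODB_JSON",
)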
QUESTION 589
An ecommerce company runs an application in the AWS Cloud that is integrated with an on-
premises warehouse solution. The company uses Amazon Simple Notification Service (Amazon
SNS) to send order messages to an on-premises HTTPS endpoint so the warehouse application
can process the orders. The local data center team has detected that some of the order
messages were not received.
A solutions architect needs to retain messages that are not delivered and analyze the messages
for up to 14 days.
Which solution will meet these requirements with the LEAST development effort?
A. Configure an Amazon SNS dead letter queue that has an Amazon Kinesis Data Stream target
with a retention period of 14 days.
B. Add an Amazon Simple Queue Service (Amazon SQS) queue with a retention period of 14 days
between the application and Amazon SNS.
C. Configure an Amazon SNS dead letter queue that has an Amazon Simple Queue Service
(Amazon SQS) target with a retention period of 14 days.
D. Configure an Amazon SNS dead letter queue that has an Amazon DynamoDB target with a TTL
attribute set for a retention period of 14 days.
C. Configure an Amazon SNS dead letter queue that has an Amazon Simple Queue Service
(Amazon SQS) target with a retention period of 14 days.
Explanation:
The message retention period in Amazon SQS can be set between 1 minute and 14 days (the
default is 4 days). Therefore, you can configure your SQS DLQ to retain undelivered SNS
messages for 14 days. This will enable you to analyze undelivered messages with the least
development effort.
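A minimal boto3 sketch of wiring up the SNS dead-letter queue (queue name and subscription ARN are placeholders; the SQS queue policy that allows the SNS topic to send messages is not shown):

import json
import boto3

sqs = boto3.client("sqs")
sns = boto3.client("sns")

# DLQ with the maximum retention of 14 days (1209600 seconds).
queue_url = sqs.create_queue(
    QueueName="sns-order-dlq",
    Attributes={"MessageRetentionPeriod": "1209600"},
)["QueueUrl"]
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Attach the DLQ to the HTTPS subscription so undelivered messages are retained.
sns.set_subscription_attributes(
    SubscriptionArn="arn:aws:sns:us-east-1:111122223333:orders:example-subscription-id",
    AttributeName="RedrivePolicy",
    AttributeValue=json.dumps({"deadLetterTargetArn": queue_arn}),
)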
QUESTION 588
A 4-year-old media company is using the AWS Organizations all features feature set to organize
its AWS accounts. According to the company’s finance team, the billing information on the
member accounts must not be accessible to anyone, including the root user of the member accounts.
Which solution will meet these requirements?
A. Add all finance team users to an IAM group. Attach an AWS managed policy named Billing to the
group.
B. Attach an identity-based policy to deny access to the billing information to all users, including the
root user.
C. Create a service control policy (SCP) to deny access to the billing information. Attach the SCP to
the root organizational unit (OU).
D. Convert from the Organizations all features feature set to the Organizations consolidated billing
feature set.
C. Create a service control policy (SCP) to deny access to the billing information. Attach the SCP to
the root organizational unit (OU).
Explanation:
Service control policy are a type of organization policy that you can use to manage permissions in
your organization. SCPs offer central control over the maximum available permissions for all
accounts in your organization. SCPs help you to ensure your accounts stay within your
organization’s access control guidelines. SCPs are available only in an organization that has all
features enabled.
QUESTION 587
A company seeks a storage solution for its application. The solution must be highly available and
scalable. The solution also must function as a file system, be mountable by multiple Linux
instances in AWS and on premises through native protocols, and have no minimum size
requirements. The company has set up a Site-to-Site VPN for access from its on-premises
network to its VPC.
Which storage solution meets these requirements?
A. Amazon FSx Multi-AZ deployments
B. Amazon Elastic Block Store (Amazon EBS) Multi-Attach volumes
C. Amazon Elastic File System (Amazon EFS) with multiple mount targets
D. Amazon Elastic File System (Amazon EFS) with a single mount target and multiple access points
C. Amazon Elastic File System (Amazon EFS) with multiple mount targets
Explanation:
Amazon EFS is a fully managed file system service that provides scalable, shared storage for
Amazon EC2 instances. It supports the Network File System version 4 (NFSv4) protocol, which is
a native protocol for Linux-based systems. EFS is designed to be highly available, durable, and
scalable.
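A minimal boto3 sketch of creating the file system with a mount target in each Availability Zone (subnet and security group IDs are placeholders):

import boto3

efs = boto3.client("efs")

fs = efs.create_file_system(
    CreationToken="shared-linux-fs",
    PerformanceMode="generalPurpose",
    Encrypted=True,
)

# One mount target per AZ; on-premises clients reach them over the Site-to-Site VPN.
for subnet_id in ["subnet-az1", "subnet-az2", "subnet-az3"]:
    efs.create_mount_target(
        FileSystemId=fs["FileSystemId"],
        SubnetId=subnet_id,
        SecurityGroups=["sg-0123456789abcdef0"],
    )
# Linux instances then mount with NFSv4, e.g.:
# sudo mount -t nfs4 fs-id.efs.us-east-1.amazonaws.com:/ /mnt/efs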
QUESTION 586
A company is building a three-tier application on AWS. The presentation tier will serve a static
website. The logic tier is a containerized application. This application will store data in a relational
database. The company wants to simplify deployment and to reduce operational costs.
Which solution will meet these requirements?
A. Use Amazon S3 to host static content. Use Amazon Elastic Container Service (Amazon ECS)
with AWS Fargate for compute power. Use a managed Amazon RDS cluster for the database.
B. Use Amazon CloudFront to host static content. Use Amazon Elastic Container Service (Amazon
ECS) with Amazon EC2 for compute power. Use a managed Amazon RDS cluster for the
database.
C. Use Amazon S3 to host static content. Use Amazon Elastic Kubernetes Service (Amazon EKS)
with AWS Fargate for compute power. Use a managed Amazon RDS cluster for the database.
D. Use Amazon EC2 Reserved Instances to host static content. Use Amazon Elastic Kubernetes
Service (Amazon EKS) with Amazon EC2 for compute power. Use a managed Amazon RDS
cluster for the database.
A. Use Amazon S3 to host static content. Use Amazon Elastic Container Service (Amazon ECS)
with AWS Fargate for compute power. Use a managed Amazon RDS cluster for the database.
Explanation:
Amazon S3 is a highly scalable and cost-effective storage service that can be used to host static
website content. It provides durability, high availability, and low latency access to the static files.
Amazon ECS with AWS Fargate eliminates the need to manage the underlying infrastructure. It
allows you to run containerized applications without provisioning or managing EC2 instances.
This reduces operational overhead and provides scalability.
By using a managed Amazon RDS cluster for the database, you can offload the management
tasks such as backups, patching, and monitoring to AWS. This reduces the operational burden
and ensures high availability and durability of the database.
QUESTION 585
A company is looking for a solution that can store video archives in AWS from old news footage.
The company needs to minimize costs and will rarely need to restore these files. When the files
are needed, they must be available in a maximum of five minutes.
What is the MOST cost-effective solution?
A. Store the video archives in Amazon S3 Glacier and use Expedited retrievals.
B. Store the video archives in Amazon S3 Glacier and use Standard retrievals.
C. Store the video archives in Amazon S3 Standard-Infrequent Access (S3 Standard-IA).
D. Store the video archives in Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA).
A. Store the video archives in Amazon S3 Glacier and use Expedited retrievals.
Explanation:
By choosing Expedited retrievals in Amazon S3 Glacier, you can reduce the retrieval time to
minutes, making it suitable for scenarios where quick access is required. Expedited retrievals
come with a higher cost per retrieval compared to standard retrievals but provide faster access to
your archived data.
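A minimal boto3 sketch of an Expedited restore request (bucket and key are placeholders); Expedited retrievals typically complete within 1-5 minutes, meeting the five-minute requirement:

import boto3

s3 = boto3.client("s3")

# A temporary copy of the archived object stays available for 'Days' days.
s3.restore_object(
    Bucket="example-news-archive",
    Key="footage/1998/broadcast.mov",
    RestoreRequest={"Days": 1, "GlacierJobParameters": {"Tier": "Expedited"}},
)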
QUESTION 584
A company wants to move from many standalone AWS accounts to a consolidated, multi-account
architecture. The company plans to create many new AWS accounts for different business units.
The company needs to authenticate access to these AWS accounts by using a centralized
corporate directory service.
Which combination of actions should a solutions architect recommend to meet these
requirements? (Choose two.)
A. Create a new organization in AWS Organizations with all features turned on. Create the new
AWS accounts in the organization.
B. Set up an Amazon Cognito identity pool. Configure AWS IAM Identity Center (AWS Single Sign-
On) to accept Amazon Cognito authentication.
C. Configure a service control policy (SCP) to manage the AWS accounts. Add AWS IAM Identity
Center (AWS Single Sign-On) to AWS Directory Service.
D. Create a new organization in AWS Organizations. Configure the organization’s authentication
mechanism to use AWS Directory Service directly.
E. Set up AWS IAM Identity Center (AWS Single Sign-On) in the organization. Configure IAM
Identity Center, and integrate it with the company’s corporate directory service.
A. Create a new organization in AWS Organizations with all features turned on. Create the new
AWS accounts in the organization.
E. Set up AWS IAM Identity Center (AWS Single Sign-On) in the organization. Configure IAM
Identity Center, and integrate it with the company’s corporate directory service.
Explanation:
A. By creating a new organization in AWS Organizations, you can establish a consolidated multi-
account architecture. This allows you to create and manage multiple AWS accounts for different
business units under a single organization.
E. Setting up AWS IAM Identity Center (AWS Single Sign-On) within the organization enables
you to integrate it with the company’s corporate directory service. This integration allows for
centralized authentication, where users can sign in using their corporate credentials and access
the AWS accounts within the organization.
Together, these actions create a centralized, multi-account architecture that leverages AWS
Organizations for account management and AWS IAM Identity Center (AWS Single Sign-On) for
authentication and access control.
QUESTION 583
A company containerized a Windows job that runs on .NET 6 Framework under a Windows
container. The company wants to run this job in the AWS Cloud. The job runs every 10 minutes.
The job’s runtime varies between 1 minute and 3 minutes.
Which solution will meet these requirements MOST cost-effectively?
A. Create an AWS Lambda function based on the container image of the job. Configure Amazon
EventBridge to invoke the function every 10 minutes.
B. Use AWS Batch to create a job that uses AWS Fargate resources. Configure the job scheduling
to run every 10 minutes.
C. Use Amazon Elastic Container Service (Amazon ECS) on AWS Fargate to run the job. Create a
scheduled task based on the container image of the job to run every 10 minutes.
D. Use Amazon Elastic Container Service (Amazon ECS) on AWS Fargate to run the job. Create a
standalone task based on the container image of the job. Use Windows task scheduler to run the
job every 10 minutes.
C. Use Amazon Elastic Container Service (Amazon ECS) on AWS Fargate to run the job. Create a
scheduled task based on the container image of the job to run every 10 minutes.
Explanation:
By leveraging AWS Fargate and ECS, you can achieve cost-effective scaling and resource
allocation for your containerized Windows job running on .NET 6 Framework in the AWS Cloud.
The serverless nature of Fargate ensures that you only pay for the actual resources consumed by
your containers, allowing for efficient cost management.
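A minimal boto3 sketch of the scheduled ECS task via an EventBridge rule (all ARNs, IDs, and names are placeholders; the role must allow EventBridge to run ECS tasks):

import boto3

events = boto3.client("events")

events.put_rule(Name="run-dotnet-job", ScheduleExpression="rate(10 minutes)")

events.put_targets(
    Rule="run-dotnet-job",
    Targets=[{
        "Id": "dotnet-job",
        "Arn": "arn:aws:ecs:us-east-1:111122223333:cluster/jobs",
        "RoleArn": "arn:aws:iam::111122223333:role/ecsEventsRole",
        "EcsParameters": {
            "TaskDefinitionArn": "arn:aws:ecs:us-east-1:111122223333:task-definition/dotnet-job:1",
            "LaunchType": "FARGATE",
            "TaskCount": 1,
            "PlatformVersion": "LATEST",
            "NetworkConfiguration": {
                "awsvpcConfiguration": {"Subnets": ["subnet-az1"], "AssignPublicIp": "DISABLED"}
            },
        },
    }],
)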
QUESTION 582
A company wants to migrate 100 GB of historical data from an on-premises location to an
Amazon S3 bucket. The company has a 100 megabits per second (Mbps) internet connection on
premises. The company needs to encrypt the data in transit to the S3 bucket. The company will
store new data directly in Amazon S3.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use the s3 sync command in the AWS CLI to move the data directly to an S3 bucket
B. Use AWS DataSync to migrate the data from the on-premises location to an S3 bucket
C. Use AWS Snowball to move the data to an S3 bucket
D. Set up an IPsec VPN from the on-premises location to AWS. Use the s3 cp command in the AWS
CLI to move the data directly to an S3 bucket
B. Use AWS DataSync to migrate the data from the on-premises location to an S3 bucket
Explanation:
AWS DataSync is a fully managed data transfer service that simplifies and automates the
process of moving data between on-premises storage and Amazon S3. It provides secure and
efficient data transfer with built-in encryption, ensuring that the data is encrypted in transit.
By using AWS DataSync, the company can easily migrate the 100 GB of historical data from their
on-premises location to an S3 bucket. DataSync will handle the encryption of data in transit and
ensure secure transfer.
QUESTION 581
A company hosts a three-tier web application in the AWS Cloud. A Multi-AZ Amazon RDS for
MySQL server forms the database layer, and Amazon ElastiCache forms the cache layer. The
company wants a caching strategy that adds or updates data in the cache when a customer adds
an item to the database. The data in the cache must always match the data in the database.
Which solution will meet these requirements?
A. Implement the lazy loading caching strategy
B. Implement the write-through caching strategy
C. Implement the adding TTL caching strategy
D. Implement the AWS AppConfig caching strategy
B. Implement the write-through caching strategy
Explanation:
In the write-through caching strategy, when a customer adds or updates an item in the database, the application first writes the data to the database and then updates the cache with the same
data. This ensures that the cache is always synchronized with the database, as every write
operation triggers an update to the cache.
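A minimal write-through sketch in Python (assuming the redis and pymysql client libraries; all hostnames, credentials, and the table schema are placeholders):

import json
import redis    # assumed client for ElastiCache for Redis
import pymysql  # assumed driver for the RDS for MySQL database

cache = redis.Redis(host="example-cache.abc123.use1.cache.amazonaws.com", port=6379)
db = pymysql.connect(host="example-db.us-east-1.rds.amazonaws.com",
                     user="app", password="...", database="shop")

def add_item(item_id: str, item: dict) -> None:
    # Write to the database first ...
    with db.cursor() as cur:
        cur.execute("INSERT INTO items (id, body) VALUES (%s, %s)",
                    (item_id, json.dumps(item)))
    db.commit()
    # ... then write the same data to the cache, so the cache
    # never diverges from the database.
    cache.set(f"item:{item_id}", json.dumps(item))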
QUESTION 580
A business application is hosted on Amazon EC2 and uses Amazon S3 for encrypted object
storage. The chief information security officer has directed that no application traffic between the
two services should traverse the public internet.
Which capability should the solutions architect use to meet the compliance requirements?
A. AWS Key Management Service (AWS KMS)
B. VPC endpoint
C. Private subnet
D. Virtual private gateway
B. VPC endpoint
Explanation:
A VPC endpoint enables you to privately access AWS services without requiring internet
gateways, NAT gateways, VPN connections, or AWS Direct Connect connections. It allows you to
connect your VPC directly to supported AWS services, such as Amazon S3, over a private
connection within the AWS network.
By creating a VPC endpoint for Amazon S3, the traffic between your EC2 instances and S3 will
stay within the AWS network and won’t traverse the public internet. This provides a more secure
and compliant solution, as the data transfer remains within the private network boundaries.
QUESTION 579
A company is making a prototype of the infrastructure for its new website by manually
provisioning the necessary infrastructure. This infrastructure includes an Auto Scaling group, an
Application Load Balancer and an Amazon RDS database. After the configuration has been
thoroughly validated, the company wants the capability to immediately deploy the infrastructure
for development and production use in two Availability Zones in an automated fashion.
What should a solutions architect recommend to meet these requirements?
A. Use AWS Systems Manager to replicate and provision the prototype infrastructure in two
Availability Zones
B. Define the infrastructure as a template by using the prototype infrastructure as a guide. Deploy
the infrastructure with AWS CloudFormation.
C. Use AWS Config to record the inventory of resources that are used in the prototype infrastructure.
Use AWS Config to deploy the prototype infrastructure into two Availability Zones.
D. Use AWS Elastic Beanstalk and configure it to use an automated reference to the prototype
infrastructure to automatically deploy new environments in two Availability Zones.
B. Define the infrastructure as a template by using the prototype infrastructure as a guide. Deploy
the infrastructure with AWS CloudFormation.
Explanation:
AWS CloudFormation is a service that allows you to define and provision infrastructure as code.
This means that you can create a template that describes the resources you want to create, and
then use CloudFormation to deploy those resources in an automated fashion.
In this case, the solutions architect should define the infrastructure as a template by using the
prototype infrastructure as a guide. The template should include resources for an Auto Scaling
group, an Application Load Balancer, and an Amazon RDS database. Once the template is
created, the solutions architect can use CloudFormation to deploy the infrastructure in two
Availability Zones.
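A minimal boto3 sketch of deploying the same template for both environments (the template URL and parameter name are placeholders; the template itself would define the Auto Scaling group, ALB, and RDS database across two Availability Zones):

import boto3

cfn = boto3.client("cloudformation")

for env in ["development", "production"]:
    cfn.create_stack(
        StackName=f"website-{env}",
        TemplateURL="https://example-bucket.s3.amazonaws.com/website.yaml",
        Parameters=[{"ParameterKey": "EnvironmentName", "ParameterValue": env}],
        Capabilities=["CAPABILITY_IAM"],
    )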
QUESTION 578
A law firm needs to share information with the public. The information includes hundreds of files
that must be publicly readable. Modifications or deletions of the files by anyone before a
designated future date are prohibited.
Which solution will meet these requirements in the MOST secure way?
A. Upload all files to an Amazon S3 bucket that is configured for static website hosting. Grant read-
only IAM permissions to any AWS principals that access the S3 bucket until the designated date.
B. Create a new Amazon S3 bucket with S3 Versioning enabled. Use S3 Object Lock with a
retention period in accordance with the designated date. Configure the S3 bucket for static
website hosting. Set an S3 bucket policy to allow read-only access to the objects.
C. Create a new Amazon S3 bucket with S3 Versioning enabled. Configure an event trigger to run
an AWS Lambda function in case of object modification or deletion. Configure the Lambda
function to replace the objects with the original versions from a private S3 bucket.
D. Upload all files to an Amazon S3 bucket that is configured for static website hosting. Select the
folder that contains the files. Use S3 Object Lock with a retention period in accordance with the
designated date. Grant read-only IAM permissions to any AWS principals that access the S3
bucket.
B. Create a new Amazon S3 bucket with S3 Versioning enabled. Use S3 Object Lock with a
retention period in accordance with the designated date. Configure the S3 bucket for static
website hosting. Set an S3 bucket policy to allow read-only access to the objects.
Explanation:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock.html
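A minimal boto3 sketch of the Object Lock setup (bucket name and retention period are placeholders):

import boto3

s3 = boto3.client("s3")

# Object Lock must be enabled at bucket creation; this also enables S3 Versioning.
s3.create_bucket(Bucket="example-public-legal-files",
                 ObjectLockEnabledForBucket=True)

# Compliance mode prevents modification or deletion by any user, including
# the root user, until the retention period expires.
s3.put_object_lock_configuration(
    Bucket="example-public-legal-files",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 365}},
    },
)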
QUESTION 577
A group requires permissions to list an Amazon S3 bucket and delete objects from that bucket.
An administrator has created the following IAM policy to provide access to the bucket and applied
that policy to the group. The group is not able to delete objects in the bucket. The company
follows least-privilege access rules.
Which statement should a solutions architect add to the policy to correct bucket access?
QUESTION 576
A company is expecting rapid growth in the near future. A solutions architect needs to configure
existing users and grant permissions to new users on AWS. The solutions architect has decided
to create IAM groups. The solutions architect will add the new users to IAM groups based on
department.
Which additional action is the MOST secure way to grant permissions to the new users?
A. Apply service control policies (SCPs) to manage access permissions
B. Create IAM roles that have least privilege permission. Attach the roles to the IAM groups
C. Create an IAM policy that grants least privilege permission. Attach the policy to the IAM groups
D. Create IAM roles. Associate the roles with a permissions boundary that defines the maximum
permissions
C. Create an IAM policy that grants least privilege permission. Attach the policy to the IAM groups
Explanation:
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_groups_manage_attach-policy.html
Attaching a policy to an IAM user group.
You can attach an IAM policy to an IAM user group, but you cannot attach an IAM role to an IAM group. Roles are assumed by AWS resources such as EC2 instances and Lambda functions; identity-based policies are what grant groups or users access to resources such as S3 buckets.
Option A is wrong because SCPs are used with AWS Organizations organizational units (OUs). SCPs do not replace IAM policies and do not grant permissions on their own; you still need the appropriate IAM policy permissions to perform an action.
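A minimal boto3 sketch of attaching a least-privilege policy to a department group (group name and policy ARN are placeholders):

import boto3

iam = boto3.client("iam")

iam.attach_group_policy(
    GroupName="data-engineering",
    PolicyArn="arn:aws:iam::111122223333:policy/DataEngineeringLeastPrivilege",
)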
QUESTION 575
A company is designing a containerized application that will use Amazon Elastic Container
Service (Amazon ECS). The application needs to access a shared file system that is highly
durable and can recover data to another AWS Region with a recovery point objective (RPO) of 8
hours. The file system needs to provide a mount target in each Availability Zone within a Region.
A solutions architect wants to use AWS Backup to manage the replication to another Region.
Which solution will meet these requirements?
A. Amazon FSx for Windows File Server with a Multi-AZ deployment
B. Amazon FSx for NetApp ONTAP with a Multi-AZ deployment
C. Amazon Elastic File System (Amazon EFS) with the Standard storage class
D. Amazon FSx for OpenZFS
C. Amazon Elastic File System (Amazon EFS) with the Standard storage class
Explanation:
https://aws.amazon.com/efs/faq/
Q: What is Amazon EFS Replication?
EFS Replication can replicate your file system data to another Region or within the same Region
without requiring additional infrastructure or a custom process. Amazon EFS Replication
automatically and transparently replicates your data to a second file system in a Region or AZ of
your choice. You can use the Amazon EFS console, AWS CLI, and APIs to activate replication on
an existing file system. EFS Replication is continual and provides a recovery point objective
(RPO) and a recovery time objective (RTO) of minutes, helping you meet your compliance and
business continuity goals.
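A minimal boto3 sketch of enabling replication on an existing file system (IDs and Regions are placeholders); AWS Backup can alternatively manage cross-Region backup copies on a schedule that meets the 8-hour RPO:

import boto3

efs = boto3.client("efs")

efs.create_replication_configuration(
    SourceFileSystemId="fs-0123456789abcdef0",
    Destinations=[{"Region": "us-west-2"}],
)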
QUESTION 574
A company has multiple VPCs across AWS Regions to support and run workloads that are
isolated from workloads in other Regions. Because of a recent application launch requirement,
the company’s VPCs must communicate with all other VPCs across all Regions.
Which solution will meet these requirements with the LEAST amount of administrative effort?
A. Use VPC peering to manage VPC communication in a single Region. Use VPC peering across
Regions to manage VPC communications.
B. Use AWS Direct Connect gateways across all Regions to connect VPCs across regions and
manage VPC communications.
C. Use AWS Transit Gateway to manage VPC communication in a single Region and Transit
Gateway peering across Regions to manage VPC communications.
D. Use AWS PrivateLink across all Regions to connect VPCs across Regions and manage VPC
communications
C. Use AWS Transit Gateway to manage VPC communication in a single Region and Transit
Gateway peering across Regions to manage VPC communications.
Explanation:
AWS Transit Gateway: Transit Gateway is a highly scalable service that simplifies network
connectivity between VPCs and on-premises networks. By using a Transit Gateway in a single
Region, you can centralize VPC communication management and reduce administrative effort.
Transit Gateway Peering: Transit Gateway supports peering connections across AWS Regions,
allowing you to establish connectivity between VPCs in different Regions without the need for
complex VPC peering configurations. This simplifies the management of VPC communications
across Regions.
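A minimal boto3 sketch of peering two Regions' transit gateways (all IDs and the account ID are placeholders):

import boto3

use1 = boto3.client("ec2", region_name="us-east-1")
euw1 = boto3.client("ec2", region_name="eu-west-1")

attachment = use1.create_transit_gateway_peering_attachment(
    TransitGatewayId="tgw-0123456789abcdef0",
    PeerTransitGatewayId="tgw-0fedcba9876543210",
    PeerAccountId="111122223333",
    PeerRegion="eu-west-1",
)

# The peer Region must accept the attachment before traffic can flow.
euw1.accept_transit_gateway_peering_attachment(
    TransitGatewayAttachmentId=attachment[
        "TransitGatewayPeeringAttachment"]["TransitGatewayAttachmentId"]
)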
QUESTION 573
A company hosts a website on Amazon EC2 instances behind an Application Load Balancer
(ALB). The website serves static content. Website traffic is increasing, and the company is
concerned about a potential increase in cost.
What should a solutions architect do to reduce the cost of the website?
A. Create an Amazon CloudFront distribution to cache static files at edge locations
B. Create an Amazon ElastiCache cluster. Connect the ALB to the ElastiCache cluster to serve
cached files
C. Create an AWS WAF web ACL and associate it with the ALB. Add a rule to the web ACL to cache
static files
D. Create a second ALB in an alternative AWS Region. Route user traffic to the closest Region to
minimize data transfer costs
A. Create an Amazon CloudFront distribution to cache static files at edge locations
Explanation:
Amazon CloudFront: CloudFront is a content delivery network (CDN) service that caches content
at edge locations worldwide. By creating a CloudFront distribution, static content from the website
can be cached at edge locations, reducing the load on the EC2 instances and improving the
overall performance.
Caching Static Files: Since the website serves static content, caching these files at CloudFront
edge locations can significantly reduce the number of requests forwarded to the EC2 instances.
This helps to lower the overall cost by offloading traffic from the instances and reducing the data
transfer costs.
QUESTION 572
A company has a mobile chat application with a data store based in Amazon DynamoDB. Users
would like new messages to be read with as little latency as possible. A solutions architect needs
to design an optimal solution that requires minimal application changes.
Which method should the solutions architect select?
A. Configure Amazon DynamoDB Accelerator (DAX) for the new messages table. Update the code
to use the DAX endpoint.
B. Add DynamoDB read replicas to handle the increased read load. Update the application to point
to the read endpoint for the read replicas.
C. Double the number of read capacity units for the new messages table in DynamoDB. Continue to
use the existing DynamoDB endpoint.
D. Add an Amazon ElastiCache for Redis cache to the application stack. Update the application to
point to the Redis cache endpoint instead of DynamoDB.
A. Configure Amazon DynamoDB Accelerator (DAX) for the new messages table. Update the code
to use the DAX endpoint.
QUESTION 571
A company is creating an application that runs on containers in a VPC. The application stores
and accesses data in an Amazon S3 bucket. During the development phase, the application will
store and access 1 TB of data in Amazon S3 each day. The company wants to minimize costs
and wants to prevent traffic from traversing the internet whenever possible.
Which solution will meet these requirements?
A. Enable S3 Intelligent-Tiering for the S3 bucket
B. Enable S3 Transfer Acceleration for the S3 bucket
C. Create a gateway VPC endpoint for Amazon S3. Associate this endpoint with all route tables in
the VPC
D. Create an interface endpoint for Amazon S3 in the VPC. Associate this endpoint with all route
tables in the VPC
C. Create a gateway VPC endpoint for Amazon S3. Associate this endpoint with all route tables in
the VPC
Explanation:
Prevent traffic from traversing the internet = Gateway VPC endpoint for S3.
QUESTION 570
A company has applications hosted on Amazon EC2 instances with IPv6 addresses. The
applications must initiate communications with other external applications using the internet. However, the company’s security policy states that any external service cannot initiate a
connection to the EC2 instances.
What should a solutions architect recommend to resolve this issue?
A. Create a NAT gateway and make it the destination of the subnet’s route table
B. Create an internet gateway and make it the destination of the subnet’s route table
C. Create a virtual private gateway and make it the destination of the subnet’s route table
D. Create an egress-only internet gateway and make it the destination of the subnet’s route table
D. Create an egress-only internet gateway and make it the destination of the subnet’s route table
Explanation:
An egress-only internet gateway (EIGW) handles IPv6 traffic: it provides outbound IPv6 internet
access while blocking inbound IPv6 connections that are initiated from the internet. It satisfies the
requirement of preventing external services from initiating connections to the EC2 instances while
allowing the instances to initiate outbound communications.
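A minimal boto3 sketch of creating the gateway and the IPv6 default route (VPC and route table IDs are placeholders):

import boto3

ec2 = boto3.client("ec2")

eigw = ec2.create_egress_only_internet_gateway(VpcId="vpc-0123456789abcdef0")

# Route all outbound IPv6 traffic through the egress-only internet gateway.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",
    DestinationIpv6CidrBlock="::/0",
    EgressOnlyInternetGatewayId=eigw[
        "EgressOnlyInternetGateway"]["EgressOnlyInternetGatewayId"],
)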
QUESTION 569
A company stores raw collected data in an Amazon S3 bucket. The data is used for several types
of analytics on behalf of the company’s customers. The type of analytics requested determines
the access pattern on the S3 objects.
The company cannot predict or control the access pattern. The company wants to reduce its S3
costs.
Which solution will meet these requirements?
A. Use S3 replication to transition infrequently accessed objects to S3 Standard-Infrequent Access
(S3 Standard-IA)
B. Use S3 Lifecycle rules to transition objects from S3 Standard to Standard-Infrequent Access (S3
Standard-IA)
C. Use S3 Lifecycle rules to transition objects from S3 Standard to S3 Intelligent-Tiering
D. Use S3 Inventory to identify and transition objects that have not been accessed from S3 Standard
to S3 Intelligent-Tiering
C. Use S3 Lifecycle rules to transition objects from S3 Standard to S3 Intelligent-Tiering
QUESTION 568
A company is developing a microservices application that will provide a search catalog for
customers. The company must use REST APIs to present the frontend of the application to users.
The REST APIs must access the backend services that the company hosts in containers in
private VPC subnets.
Which solution will meet these requirements?
A. Design a WebSocket API by using Amazon API Gateway. Host the application in Amazon Elastic
Container Service (Amazon ECS) in a private subnet. Create a private VPC link for API Gateway
to access Amazon ECS.
B. Design a REST API by using Amazon API Gateway. Host the application in Amazon Elastic
Container Service (Amazon ECS) in a private subnet. Create a private VPC link for API Gateway
to access Amazon ECS.
C. Design a WebSocket API by using Amazon API Gateway. Host the application in Amazon Elastic
Container Service (Amazon ECS) in a private subnet. Create a security group for API Gateway to
access Amazon ECS.
D. Design a REST API by using Amazon API Gateway. Host the application in Amazon Elastic
Container Service (Amazon ECS) in a private subnet. Create a security group for API Gateway to
access Amazon ECS.
B. Design a REST API by using Amazon API Gateway. Host the application in Amazon Elastic
Container Service (Amazon ECS) in a private subnet. Create a private VPC link for API Gateway
to access Amazon ECS.
Explanation:
https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-private-integration.html
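A minimal boto3 sketch of the REST API VPC link (the Network Load Balancer ARN is a placeholder; a REST API VPC link targets an NLB that fronts the ECS services in the private subnets):

import boto3

apigw = boto3.client("apigateway")

vpc_link = apigw.create_vpc_link(
    name="catalog-backend-link",
    targetArns=["arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/catalog/abc"],
)
# API methods then use an HTTP_PROXY integration with connectionType
# VPC_LINK and connectionId set to vpc_link["id"].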
QUESTION 567
A company uses AWS Organizations. A member account has purchased a Compute Savings
Plan. Because of changes in the workloads inside the member account, the account no longer
receives the full benefit of the Compute Savings Plan commitment. The company uses less than
50% of its purchased compute power.
Which solution will solve this problem?
A. Turn on discount sharing from the Billing Preferences section of the account console in the
member account that purchased the Compute Savings Plan.
B. Turn on discount sharing from the Billing Preferences section of the account console in the
company’s Organizations management account.
C. Migrate additional compute workloads from another AWS account to the account that has the
Compute Savings Plan.
D. Sell the excess Savings Plan commitment in the Reserved Instance Marketplace.
B. Turn on discount sharing from the Billing Preferences section of the account console in the
company’s Organizations management account.
Explanation:
https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/ri-turn-off.html
Sign in to the AWS Management Console and open the AWS Billing console at
https://console.aws.amazon.com/billing/
Ensure you’re logged in to the management account of your AWS Organizations.
QUESTION 566
A company designed a stateless two-tier application that uses Amazon EC2 in a single
Availability Zone and an Amazon RDS Multi-AZ DB instance. New company management wants
to ensure the application is highly available.
What should a solutions architect do to meet this requirement?
A. Configure the application to use Multi-AZ EC2 Auto Scaling and create an Application Load
Balancer
B. Configure the application to take snapshots of the EC2 instances and send them to a different
AWS Region
C. Configure the application to use Amazon Route 53 latency-based routing to feed requests to the
application
D. Configure Amazon Route 53 rules to handle incoming requests and create a Multi-AZ Application
Load Balancer
A. Configure the application to use Multi-AZ EC2 Auto Scaling and create an Application Load
Balancer
Explanation:
By combining Multi-AZ EC2 Auto Scaling and an Application Load Balancer, you achieve high
availability for the EC2 instances hosting your stateless two-tier application.
QUESTION 565
A company is developing an application to support customer demands. The company wants to
deploy the application on multiple Amazon EC2 Nitro-based instances within the same Availability
Zone. The company also wants to give the application the ability to write to multiple block storage
volumes in multiple EC2 Nitro-based instances simultaneously to achieve higher application
availability.
Which solution will meet these requirements?
A. Use General Purpose SSD (gp3) EBS volumes with Amazon Elastic Block Store (Amazon EBS)
Multi-Attach
B. Use Throughput Optimized HDD (st1) EBS volumes with Amazon Elastic Block Store (Amazon
EBS) Multi-Attach
C. Use Provisioned IOPS SSD (io2) EBS volumes with Amazon Elastic Block Store (Amazon EBS)
Multi-Attach
D. Use General Purpose SSD (gp2) EBS volumes with Amazon Elastic Block Store (Amazon EBS)
Multi-Attach
C. Use Provisioned IOPS SSD (io2) EBS volumes with Amazon Elastic Block Store (Amazon EBS)
Multi-Attach
Explanation:
Multi-Attach is supported exclusively on Provisioned IOPS SSD (io1 and io2) volumes.
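A minimal boto3 sketch of creating and attaching a Multi-Attach io2 volume (instance IDs are placeholders; all attached Nitro instances must be in the volume's Availability Zone):

import boto3

ec2 = boto3.client("ec2")

volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    VolumeType="io2",
    Size=100,
    Iops=3000,
    MultiAttachEnabled=True,
)

# In practice, wait for the volume to become available before attaching.
for instance_id in ["i-instance1", "i-instance2"]:
    ec2.attach_volume(VolumeId=volume["VolumeId"],
                      InstanceId=instance_id, Device="/dev/sdf")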
QUESTION 564
A company hosts an online shopping application that stores all orders in an Amazon RDS for
PostgreSQL Single-AZ DB instance. Management wants to eliminate single points of failure and
has asked a solutions architect to recommend an approach to minimize database downtime
without requiring any changes to the application code.
Which solution meets these requirements?
A. Convert the existing database instance to a Multi-AZ deployment by modifying the database
instance and specifying the Multi-AZ option.
B. Create a new RDS Multi-AZ deployment. Take a snapshot of the current RDS instance and
restore the new Multi-AZ deployment with the snapshot.
C. Create a read-only replica of the PostgreSQL database in another Availability Zone. Use Amazon
Route 53 weighted record sets to distribute requests across the databases.
D. Place the RDS for PostgreSQL database in an Amazon EC2 Auto Scaling group with a minimum
group size of two. Use Amazon Route 53 weighted record sets to distribute requests across
instances.
A. Convert the existing database instance to a Multi-AZ deployment by modifying the database
instance and specifying the Multi-AZ option.
Explanation:
Compared to other solutions that involve creating new instances, restoring snapshots, or setting
up replication manually, converting to a Multi-AZ deployment is a simpler and more streamlined
approach with lower overhead.
Overall, option A offers a cost-effective and efficient way to minimize database downtime without
requiring significant changes or additional complexities.
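A minimal boto3 sketch of the in-place conversion (the DB instance identifier is a placeholder); the database endpoint does not change, so no application code changes are needed:

import boto3

rds = boto3.client("rds")

rds.modify_db_instance(
    DBInstanceIdentifier="orders-db",
    MultiAZ=True,
    ApplyImmediately=True,
)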
QUESTION 563
An IoT company is releasing a mattress that has sensors to collect data about a user’s sleep. The sensors will send data to an Amazon S3 bucket. The sensors collect approximately 2 MB of data
every night for each mattress. The company must process and summarize the data for each
mattress. The results need to be available as soon as possible. Data processing will require 1 GB
of memory and will finish within 30 seconds.
Which solution will meet these requirements MOST cost-effectively?
A. Use AWS Glue with a Scala job
B. Use Amazon EMR with an Apache Spark script
C. Use AWS Lambda with a Python script
D. Use AWS Glue with a PySpark job
C. Use AWS Lambda with a Python script
Explanation:
AWS Lambda charges you based on the number of invocations and the execution time of your
function. Since the data processing job is relatively small (2 MB of data), Lambda is a cost-
effective choice. You only pay for the actual usage without the need to provision and maintain
infrastructure.
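A minimal Lambda handler sketch, triggered by the S3 ObjectCreated event for each nightly upload (the summarization logic is a placeholder stub; the 1 GB of memory is set in the function's configuration, not in code):

import boto3

s3 = boto3.client("s3")

def summarize_sleep_data(raw: bytes) -> bytes:
    # Placeholder for the company's processing logic.
    return b'{"bytes_processed": %d}' % len(raw)

def handler(event, context):
    record = event["Records"][0]["s3"]
    obj = s3.get_object(Bucket=record["bucket"]["name"], Key=record["object"]["key"])
    data = obj["Body"].read()  # ~2 MB per mattress per night
    s3.put_object(Bucket=record["bucket"]["name"],
                  Key=f"summaries/{record['object']['key']}.json",
                  Body=summarize_sleep_data(data))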
QUESTION 562
A company has an application that processes customer orders. The company hosts the
application on an Amazon EC2 instance that saves the orders to an Amazon Aurora database.
Occasionally, when traffic is high, the workload does not process orders fast enough.
What should a solutions architect do to write the orders reliably to the database as quickly as
possible?
A. Increase the instance size of the EC2 instance when traffic is high. Write orders to Amazon
Simple Notification Service (Amazon SNS). Subscribe the database endpoint to the SNS topic.
B. Write orders to an Amazon Simple Queue Service (Amazon SQS) queue. Use EC2 instances in
an Auto Scaling group behind an Application Load Balancer to read from the SQS queue and
process orders into the database.
C. Write orders to Amazon Simple Notification Service (Amazon SNS). Subscribe the database
endpoint to the SNS topic. Use EC2 instances in an Auto Scaling group behind an Application
Load Balancer to read from the SNS topic.
D. Write orders to an Amazon Simple Queue Service (Amazon SQS) queue when the EC2 instance
reaches CPU threshold limits. Use scheduled scaling of EC2 instances in an Auto Scaling group
behind an Application Load Balancer to read from the SQS queue and process orders into the
database.
B. Write orders to an Amazon Simple Queue Service (Amazon SQS) queue. Use EC2 instances in
an Auto Scaling group behind an Application Load Balancer to read from the SQS queue and
process orders into the database.
Explanation:
By decoupling the write operation from the processing operation using SQS, you ensure that the
orders are reliably stored in the queue, regardless of the processing capacity of the EC2
instances. This allows the processing to be performed at a scalable rate based on the available
EC2 instances, improving the overall reliability and speed of order processing.
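A minimal boto3 sketch of the decoupled producer and consumer (queue URL is a placeholder; the Aurora insert is elided):

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/111122223333/orders"

# Web tier: enqueue the order immediately so it is stored durably even at peak traffic.
sqs.send_message(QueueUrl=queue_url, MessageBody='{"order_id": "123"}')

# Worker tier (Auto Scaling group): poll, write to Aurora, then delete.
while True:
    messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10,
                                   WaitTimeSeconds=20)  # long polling
    for msg in messages.get("Messages", []):
        # ... insert the order into the Aurora database here ...
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])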
QUESTION 561
A company is developing a mobile gaming app in a single AWS Region. The app runs on multiple
Amazon EC2 instances in an Auto Scaling group. The company stores the app data in Amazon
DynamoDB. The app communicates by using TCP traffic and UDP traffic between the users and the servers. The application will be used globally. The company wants to ensure the lowest
possible latency for all users.
Which solution will meet these requirements?
A. Use AWS Global Accelerator to create an accelerator. Create an Application Load Balancer
(ALB) behind an accelerator endpoint that uses Global Accelerator integration and listening on
the TCP and UDP ports. Update the Auto Scaling group to register instances on the ALB.
B. Use AWS Global Accelerator to create an accelerator. Create a Network Load Balancer (NLB)
behind an accelerator endpoint that uses Global Accelerator integration and listening on the TCP
and UDP ports. Update the Auto Scaling group to register instances on the NLB.
C. Create an Amazon CloudFront content delivery network (CDN) endpoint. Create a Network Load
Balancer (NLB) behind the endpoint and listening on the TCP and UDP ports. Update the Auto
Scaling group to register instances on the NLB. Update CloudFront to use the NLB as the origin.
D. Create an Amazon CloudFront content delivery network (CDN) endpoint. Create an Application
Load Balancer (ALB) behind the endpoint and listening on the TCP and UDP ports. Update the
Auto Scaling group to register instances on the ALB. Update CloudFront to use the ALB as the
origin.
B. Use AWS Global Accelerator to create an accelerator. Create a Network Load Balancer (NLB)
behind an accelerator endpoint that uses Global Accelerator integration and listening on the TCP
and UDP ports. Update the Auto Scaling group to register instances on the NLB.
Explanation:
AWS Global Accelerator is a better fit than CloudFront for the mobile gaming app: it supports both
TCP and UDP traffic and routes users over the AWS global network to the nearest edge location,
minimizing latency for players worldwide. CloudFront is designed for HTTP/HTTPS content
delivery, and an ALB does not support UDP, so a Network Load Balancer behind a Global
Accelerator endpoint is the correct combination.
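A minimal boto3 sketch of the accelerator setup (names, ports, and the NLB ARN are placeholders; the Global Accelerator API is served from us-west-2, and TCP and UDP each need their own listener, only the UDP one is shown):

import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(Name="game-accelerator", Enabled=True)

listener = ga.create_listener(
    AcceleratorArn=accelerator["Accelerator"]["AcceleratorArn"],
    Protocol="UDP",
    PortRanges=[{"FromPort": 3000, "ToPort": 3001}],
)

# Register the NLB (which the Auto Scaling group targets) as the endpoint.
ga.create_endpoint_group(
    ListenerArn=listener["Listener"]["ListenerArn"],
    EndpointGroupRegion="us-east-1",
    EndpointConfigurations=[{
        "EndpointId": "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/game/abc"
    }],
)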