sthithapragnakk -- SAA Exam Dumps Jan 24--old as of 1 Apr 24 Flashcards

1
Q

Question #: 652
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company has a large data workload that runs for 6 hours each day. The company cannot lose any data while the process is running. A solutions architect is designing an Amazon EMR cluster configuration to support this critical data workload.

Which solution will meet these requirements MOST cost-effectively?

A. Configure a long-running cluster that runs the primary node and core nodes on On-Demand Instances and the task nodes on Spot Instances.
B. Configure a transient cluster that runs the primary node and core nodes on On-Demand Instances and the task nodes on Spot Instances.
C. Configure a transient cluster that runs the primary node on an On-Demand Instance and the core nodes and task nodes on Spot Instances.
D. Configure a long-running cluster that runs the primary node on an On-Demand Instance, the core nodes on Spot Instances, and the task nodes on Spot Instances.

A

B. Configure a transient cluster that runs the primary node and core nodes on On-Demand Instances and the task nodes on Spot Instances.

So B and C look similar, but what is the difference? B runs the primary node and core nodes on On-Demand Instances, whereas C keeps only the primary node on On-Demand and puts both the core nodes and the task nodes on Spot Instances. Even for a transient cluster that is a problem: if both the core nodes and the task nodes are on Spot, you have no guaranteed instances left to process and store your data. Even though the primary node is On-Demand, you still need the core nodes to stay available, because those are the worker nodes that hold the data; if they are all Spot, you cannot promise that no data is lost. With option B you have both the primary node and the core nodes On-Demand, so you always have capacity to run the job; even if the Spot task nodes get reclaimed, that is fine, because the core nodes are still there. And because the workload runs for only 6 hours each day, a transient cluster that terminates after the job is cheaper than a long-running cluster, which is why B also beats A on cost. So for those reasons, we pick option B.
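To make option B concrete, here is a minimal boto3 sketch of such a transient cluster; the cluster name, instance types, counts, and EMR release label are assumptions, not from the question. Setting KeepJobFlowAliveWhenNoSteps to False is what makes the cluster transient, so it terminates when the daily job's steps finish.

import boto3

emr = boto3.client("emr")

# Hypothetical transient cluster: primary + core on On-Demand, task nodes on Spot.
emr.run_job_flow(
    Name="daily-batch",                        # assumed name
    ReleaseLabel="emr-6.15.0",                 # assumed EMR release
    Applications=[{"Name": "Spark"}],
    ServiceRole="EMR_DefaultRole",
    JobFlowRole="EMR_EC2_DefaultRole",
    Instances={
        "KeepJobFlowAliveWhenNoSteps": False,  # transient: terminate when the steps finish
        "InstanceGroups": [
            {"Name": "primary", "InstanceRole": "MASTER",
             "Market": "ON_DEMAND", "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"Name": "core", "InstanceRole": "CORE",
             "Market": "ON_DEMAND", "InstanceType": "m5.xlarge", "InstanceCount": 3},
            {"Name": "task", "InstanceRole": "TASK",
             "Market": "SPOT", "InstanceType": "m5.xlarge", "InstanceCount": 6},
        ],
    },
    Steps=[],  # the 6-hour workload's steps would be added here
)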

2
Q

Question #: 653
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company maintains an Amazon RDS database that maps users to cost centers. The company has accounts in an organization in AWS Organizations. The company needs a solution that will tag all resources that are created in a specific AWS account in the organization. The solution must tag each resource with the cost center ID of the user who created the resource.

Which solution will meet these requirements?

A. Move the specific AWS account to a new organizational unit (OU) in Organizations from the management account. Create a service control policy (SCP) that requires all existing resources to have the correct cost center tag before the resources are created. Apply the SCP to the new OU.
B. Create an AWS Lambda function to tag the resources after the Lambda function looks up the appropriate cost center from the RDS database. Configure an Amazon EventBridge rule that reacts to AWS CloudTrail events to invoke the Lambda function.
C. Create an AWS CloudFormation stack to deploy an AWS Lambda function. Configure the Lambda function to look up the appropriate cost center from the RDS database and to tag resources. Create an Amazon EventBridge scheduled rule to invoke the CloudFormation stack.
D. Create an AWS Lambda function to tag the resources with a default value. Configure an Amazon EventBridge rule that reacts to AWS CloudTrail events to invoke the Lambda function when a resource is missing the cost center tag.

A

B. Create an AWS Lambda function to tag the resources after the Lambda function looks up the appropriate cost center from the RDS database. Configure an Amazon EventBridge rule that reacts to AWS CloudTrail events to invoke the Lambda function.

Option A: let's cross it out, because it is not the right answer. It suggests using a service control policy (SCP) to enforce tagging before resource creation, but SCPs don't perform tagging operations, and they certainly cannot look up a cost center ID in an RDS database. So we won't use that. Then we have option C. It uses CloudFormation plus a scheduled EventBridge rule, which introduces unnecessary complexity and does not ensure immediate tagging upon resource creation, because you are only running it on a schedule. Then we have option D. It proposes tagging resources with a default value and then reacting to events to correct the tag later. That introduces a potential delay and does not guarantee that resources end up with the correct cost center ID. So for those reasons we go with option B. What does option B say? A Lambda function tags the resources after looking up the appropriate cost center from the RDS database. This ensures each resource is tagged with the correct cost center ID; we are not falling back to a default value, we are looking it up. And we use EventBridge in conjunction with AWS CloudTrail events, not a schedule: the CloudTrail events invoke the Lambda function when resources are created, so the tagging process starts automatically whenever a relevant event occurs instead of waiting for a scheduled run like option C. So option B is the correct one in this case.
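A rough sketch of the Lambda handler option B describes, assuming the EventBridge rule forwards CloudTrail "AWS API Call via CloudTrail" events; the lookup_cost_center and extract_resource_arns helpers are hypothetical, since the RDS lookup and the per-service event parsing are not spelled out in the question.

import boto3

tagging = boto3.client("resourcegroupstaggingapi")

def lookup_cost_center(user_arn):
    # Hypothetical helper: query the user -> cost-center mapping table in RDS
    # (e.g. via a MySQL/PostgreSQL driver) and return the cost center ID.
    ...

def extract_resource_arns(detail):
    # Hypothetical helper: pull the ARNs of the created resources out of the
    # CloudTrail event detail; the exact fields differ per service API call.
    ...

def handler(event, context):
    detail = event["detail"]                     # CloudTrail event delivered by EventBridge
    user_arn = detail["userIdentity"]["arn"]
    cost_center = lookup_cost_center(user_arn)
    arns = extract_resource_arns(detail)
    if arns and cost_center:
        # Apply the cost center tag to everything the API call created.
        tagging.tag_resources(ResourceARNList=arns,
                              Tags={"CostCenter": cost_center})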

3
Q

Question #: 654
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company recently migrated its web application to the AWS Cloud. The company uses an Amazon EC2 instance to run multiple processes to host the application. The processes include an Apache web server that serves static content. The Apache web server makes requests to a PHP application that uses a local Redis server for user sessions.

The company wants to redesign the architecture to be highly available and to use AWS managed solutions.

Which solution will meet these requirements?

A. Use AWS Elastic Beanstalk to host the static content and the PHP application. Configure Elastic Beanstalk to deploy its EC2 instance into a public subnet. Assign a public IP address.
B. Use AWS Lambda to host the static content and the PHP application. Use an Amazon API Gateway REST API to proxy requests to the Lambda function. Set the API Gateway CORS configuration to respond to the domain name. Configure Amazon ElastiCache for Redis to handle session information.
C. Keep the backend code on the EC2 instance. Create an Amazon ElastiCache for Redis cluster that has Multi-AZ enabled. Configure the ElastiCache for Redis cluster in cluster mode. Copy the frontend resources to Amazon S3. Configure the backend code to reference the EC2 instance.
D. Configure an Amazon CloudFront distribution with an Amazon S3 endpoint to an S3 bucket that is configured to host the static content. Configure an Application Load Balancer that targets an Amazon Elastic Container Service (Amazon ECS) service that runs AWS Fargate tasks for the PHP application. Configure the PHP application to use an Amazon ElastiCache for Redis cluster that runs in multiple Availability Zones.

A

D. Configure an Amazon CloudFront distribution with an Amazon S3 endpoint to an S3 bucket that is configured to host the static content. Configure an Application Load Balancer that targets an Amazon Elastic Container Service (Amazon ECS) service that runs AWS Fargate tasks for the PHP application. Configure the PHP application to use an Amazon ElastiCache for Redis cluster that runs in multiple Availability Zones.

Whenever you see static content, go straight to S3 and CloudFront; those two are an excellent combination for handling static content. With that hint you can already see option D is the right answer, but let's see why the other options are not. Option A talks about Elastic Beanstalk. It is a managed service that makes it easy to deploy and run applications, but it is not the best fit for serving static content directly, and a single EC2 instance with a public IP address in a public subnet is not a highly available architecture. It also does not use any managed solution for the Redis session store, so we don't pick it. Option B: Lambda can host serverless functions, but it is not a good fit for running this PHP application and its static content as-is, and fronting it with an API Gateway REST API adds complexity without solving the real problem. Configuring ElastiCache for Redis for session information is good practice, but the rest of the option doesn't hold up, so we don't pick it either. Option C keeps the backend code on the EC2 instance. The company wants managed, highly available solutions, and a single self-managed instance is neither. Using ElastiCache for Redis with Multi-AZ is a good choice for session management, and copying the frontend resources to S3 is a step in the right direction, but the single-instance backend remains a single point of failure.

So we go with option D. It uses CloudFront with an S3 origin for global, low-latency delivery of the static content; it runs the PHP application on Amazon ECS with Fargate behind an Application Load Balancer, which is scalable and highly available; and it uses an ElastiCache for Redis cluster that runs in multiple Availability Zones for session management. Overall, option D is a well-architected solution that uses AWS managed services to achieve high availability, scalability, and separation of concerns, and it aligns with best practices for hosting web applications on AWS.
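For the session-store piece of option D, a hedged boto3 sketch of a Multi-AZ ElastiCache for Redis replication group; the IDs, node type, subnet group, and security group are assumptions. The PHP application would then point its session handler at the replication group's primary endpoint.

import boto3

elasticache = boto3.client("elasticache")

# Hypothetical Multi-AZ Redis replication group for PHP session storage.
elasticache.create_replication_group(
    ReplicationGroupId="php-sessions",                # assumed ID
    ReplicationGroupDescription="Session store for the PHP app",
    Engine="redis",
    CacheNodeType="cache.t4g.small",                  # assumed node size
    NumCacheClusters=2,                               # primary + one replica in another AZ
    AutomaticFailoverEnabled=True,
    MultiAZEnabled=True,
    CacheSubnetGroupName="app-private-subnets",       # assumed subnet group
    SecurityGroupIds=["sg-0123456789abcdef0"],        # assumed security group
)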

4
Q

Question #: 655
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company runs a web application on Amazon EC2 instances in an Auto Scaling group that has a target group. The company designed the application to work with session affinity (sticky sessions) for a better user experience.

The application must be available publicly over the internet as an endpoint. A WAF must be applied to the endpoint for additional security. Session affinity (sticky sessions) must be configured on the endpoint.

Which combination of steps will meet these requirements? (Choose two.)

A. Create a public Network Load Balancer. Specify the application target group.
B. Create a Gateway Load Balancer. Specify the application target group.
C. Create a public Application Load Balancer. Specify the application target group.
D. Create a second target group. Add Elastic IP addresses to the EC2 instances.
E. Create a web ACL in AWS WAF. Associate the web ACL with the endpoint

A

C. Create a public Application Load Balancer. Specify the application target group.
E. Create a web ACL in AWS WAF. Associate the web ACL with the endpoint

Option A: create a public Network Load Balancer and specify the application target group. The NLB is one of the load balancer types; the others are the Application Load Balancer, the Classic Load Balancer, and the Gateway Load Balancer. Here is a tip worth memorizing: Network Load Balancer means TCP/UDP, Application Load Balancer means HTTP/HTTPS. NLBs are built for TCP/UDP traffic and do not give you the cookie-based session affinity (sticky sessions) or WAF association this scenario needs, so an ALB is far more suitable. Option B: a Gateway Load Balancer is designed for inserting and scaling third-party network appliances at the network layer; it is not used as a public HTTP/HTTPS endpoint and does not support session affinity. Option D talks about creating a second target group and adding Elastic IP addresses to the EC2 instances; that has nothing to do with session affinity or a web application firewall, since stickiness is managed by the load balancer and WAF is a separate service for application security.

That leaves options C and E. An ALB is designed for HTTP/HTTPS traffic and supports session affinity, so creating a public ALB exposes the web application to the internet with the necessary routing and load-balancing capabilities. The WAF requirement is handled by option E: create a web ACL in AWS WAF. If you did the Cloud Practitioner exam you know that whenever you hear about SQL injection or cross-site scripting attacks, AWS WAF is the service that protects against them. By creating a web ACL you define rules to filter and control the web traffic, and associating the web ACL with the endpoint ensures the application is protected by those security policies. So in summary, C and E are the appropriate steps for exposing the web application with session affinity and applying WAF for additional security.
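A small boto3 sketch of the two steps, with placeholder ARNs: enabling ALB target-group stickiness (option C's endpoint) and associating a WAF web ACL with that ALB (option E).

import boto3

elbv2 = boto3.client("elbv2")
wafv2 = boto3.client("wafv2")

alb_arn = "arn:aws:elasticloadbalancing:...:loadbalancer/app/web/..."   # placeholder ARN
tg_arn = "arn:aws:elasticloadbalancing:...:targetgroup/web/..."         # placeholder ARN
web_acl_arn = "arn:aws:wafv2:...:regional/webacl/web-acl/..."           # placeholder ARN

# Step 1 (option C): turn on sticky sessions on the ALB's target group.
elbv2.modify_target_group_attributes(
    TargetGroupArn=tg_arn,
    Attributes=[
        {"Key": "stickiness.enabled", "Value": "true"},
        {"Key": "stickiness.type", "Value": "lb_cookie"},
        {"Key": "stickiness.lb_cookie.duration_seconds", "Value": "86400"},
    ],
)

# Step 2 (option E): associate the AWS WAF web ACL with the ALB.
wafv2.associate_web_acl(WebACLArn=web_acl_arn, ResourceArn=alb_arn)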

5
Q

Question #: 656
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company runs a website that stores images of historical events. Website users need the ability to search and view images based on the year that the event in the image occurred. On average, users request each image only once or twice a year. The company wants a highly available solution to store and deliver the images to users.

Which solution will meet these requirements MOST cost-effectively?

A. Store images in Amazon Elastic Block Store (Amazon EBS). Use a web server that runs on Amazon EC2.
B. Store images in Amazon Elastic File System (Amazon EFS). Use a web server that runs on Amazon EC2.
C. Store images in Amazon S3 Standard. Use S3 Standard to directly deliver images by using a static website.
D. Store images in Amazon S3 Standard-Infrequent Access (S3 Standard-IA). Use S3 Standard-IA to directly deliver images by using a static website.

A

D. Store images in Amazon S3 Standard-Infrequent Access (S3 Standard-IA). Use S3 Standard-IA to directly deliver images by using a static website.

Whenever they ask for storage for images, audio, or video files, go with S3; there is no more cost-effective way to store and serve that kind of content. So you can immediately cross out A and B, which leaves options C and D, both of which use S3. Which one do we choose? Option C uses S3 Standard and serves the images directly from a static website. That works, but if we are after the MOST cost-effective option it is too much, because users request each image only once or twice a year, and S3 Standard is priced for frequent access, so it costs more per GB stored than Standard-IA. The more cost-effective choice is option D, S3 Standard-Infrequent Access: as the name suggests, you use this class when files are infrequently accessed, which is exactly our case, and it still gives you highly available storage and direct static-website delivery. So while all the options could technically serve images, C and D are the right pattern, and D is the most cost-effective. This question really combines three or four concepts, so if you don't know these services and storage classes it becomes hard to answer.
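As a tiny illustration of option D, a boto3 upload that lands an image directly in the Standard-IA storage class; the bucket and key names are made up.

import boto3

s3 = boto3.client("s3")

# Upload an image straight into S3 Standard-IA (assumed bucket/key).
s3.upload_file(
    Filename="1969-moon-landing.jpg",
    Bucket="historical-images-example",
    Key="1969/moon-landing.jpg",
    ExtraArgs={"StorageClass": "STANDARD_IA"},
)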

6
Q

Question #: 657
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company has multiple AWS accounts in an organization in AWS Organizations that different business units use. The company has multiple offices around the world. The company needs to update security group rules to allow new office CIDR ranges or to remove old CIDR ranges across the organization. The company wants to centralize the management of security group rules to minimize the administrative overhead that updating CIDR ranges requires.

Which solution will meet these requirements MOST cost-effectively?

A. Create VPC security groups in the organization’s management account. Update the security groups when a CIDR range update is necessary.
B. Create a VPC customer managed prefix list that contains the list of CIDRs. Use AWS Resource Access Manager (AWS RAM) to share the prefix list across the organization. Use the prefix list in the security groups across the organization.
C. Create an AWS managed prefix list. Use an AWS Security Hub policy to enforce the security group update across the organization. Use an AWS Lambda function to update the prefix list automatically when the CIDR ranges change.
D. Create security groups in a central administrative AWS account. Create an AWS Firewall Manager common security group policy for the whole organization. Select the previously created security groups as primary groups in the policy.

A

B. Create a VPC customer managed prefix list that contains the list of CIDRs. Use AWS Resource Access Manager (AWS RAM) to share the prefix list across the organization. Use the prefix list in the security groups across the organization.

Option A is not the correct answer for our case. It talks about creating VPC security groups in the organization's management account, but security groups are regional and account-specific, so this leads to administrative overhead and is neither scalable nor centralized; how many security groups would you end up creating and maintaining by hand? Option C talks about an AWS managed prefix list with Security Hub policies and a Lambda function. AWS managed prefix lists are created and maintained by AWS, not by you, and bolting on Security Hub and Lambda introduces unnecessary complexity and cost, so we won't go with that either. Option D, similar to A, creates central security groups and then uses AWS Firewall Manager with a common security group policy. Firewall Manager is generally used to manage AWS WAF, AWS Shield Advanced, and security group policies at scale, but it requires extra setup and adds cost, which is overkill for the specific use case described. When they ask you to minimize administrative overhead cost-effectively, use the built-in feature that fits rather than introducing additional services. For that reason option B is the most suitable. It creates a VPC customer managed prefix list, which lets you define a named, reusable list of CIDR ranges. It then uses AWS Resource Access Manager (AWS RAM) to share the prefix list across the organization; RAM enables resource sharing across AWS accounts, including prefix lists, so sharing the customer managed prefix list centralizes the management of the CIDR ranges. Finally, the shared prefix list is referenced in security group rules across the organization, so every account uses the same centralized set of CIDRs, and when an office CIDR is added or removed you update the prefix list once. This minimizes administrative overhead, allows centralized control, and provides a scalable, cost-effective way to manage security group rules globally.
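A hedged boto3 sketch of option B; the CIDRs, names, and organization ARN are examples, not real values.

import boto3

ec2 = boto3.client("ec2")
ram = boto3.client("ram")

# Customer managed prefix list holding the office CIDR ranges.
pl = ec2.create_managed_prefix_list(
    PrefixListName="office-cidrs",
    AddressFamily="IPv4",
    MaxEntries=20,
    Entries=[
        {"Cidr": "203.0.113.0/24", "Description": "London office"},
        {"Cidr": "198.51.100.0/24", "Description": "Tokyo office"},
    ],
)
pl_arn = pl["PrefixList"]["PrefixListArn"]

# Share the prefix list with the whole organization through AWS RAM
# (the organization ARN is a placeholder).
ram.create_resource_share(
    name="office-cidrs-share",
    resourceArns=[pl_arn],
    principals=["arn:aws:organizations::111122223333:organization/o-exampleorgid"],
)

# In each account, security group rules then reference the prefix list ID
# instead of raw CIDRs, e.g.:
# ec2.authorize_security_group_ingress(
#     GroupId="sg-0123456789abcdef0",
#     IpPermissions=[{"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
#                     "PrefixListIds": [{"PrefixListId": pl["PrefixList"]["PrefixListId"]}]}],
# )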

7
Q

Question #: 658
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company uses an on-premises network-attached storage (NAS) system to provide file shares to its high performance computing (HPC) workloads. The company wants to migrate its latency-sensitive HPC workloads and its storage to the AWS Cloud. The company must be able to provide NFS and SMB multi-protocol access from the file system.

Which solution will meet these requirements with the LEAST latency? (Choose two.)

A. Deploy compute optimized EC2 instances into a cluster placement group.
B. Deploy compute optimized EC2 instances into a partition placement group.
C. Attach the EC2 instances to an Amazon FSx for Lustre file system.
D. Attach the EC2 instances to an Amazon FSx for OpenZFS file system.
E. Attach the EC2 instances to an Amazon FSx for NetApp ONTAP file system.

A

A. Deploy compute optimized EC2 instances into a cluster placement group.
E. Attach the EC2 instances to an Amazon FSx for NetApp ONTAP file system.

This question splits into two decisions: A versus B (the placement group type) and C, D, E (the file system). Between A and B, I go with A. Why? A uses a cluster placement group, which packs instances close together inside a single Availability Zone to provide low-latency, high-throughput networking; that is exactly what tightly coupled HPC workloads need, so cluster placement groups and HPC go hand in hand. A partition placement group spreads instances across logical partitions for fault isolation; it can still give decent networking, but cluster placement groups are preferred when the goal is the lowest latency. Then among C, D, and E, we pick based on the NFS and SMB multi-protocol requirement. Of the three, only Amazon FSx for NetApp ONTAP supports both NFS and SMB. FSx for OpenZFS supports NFS only, and FSx for Lustre uses its own Lustre client rather than NFS or SMB. So we go with the FSx for NetApp ONTAP file system, and A plus E satisfies the whole question.
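A boto3 sketch of the two chosen pieces; the group name, storage size, throughput, and subnet IDs are assumptions.

import boto3

ec2 = boto3.client("ec2")
fsx = boto3.client("fsx")

# Option A: cluster placement group for the latency-sensitive HPC instances.
ec2.create_placement_group(GroupName="hpc-cluster", Strategy="cluster")
# Compute optimized instances would then be launched with
# Placement={"GroupName": "hpc-cluster"} in run_instances().

# Option E: FSx for NetApp ONTAP file system (serves both NFS and SMB).
fsx.create_file_system(
    FileSystemType="ONTAP",
    StorageCapacity=2048,                               # GiB, assumed size
    SubnetIds=["subnet-aaaa1111", "subnet-bbbb2222"],   # placeholder subnets
    OntapConfiguration={
        "DeploymentType": "MULTI_AZ_1",
        "ThroughputCapacity": 512,                      # MB/s, assumed
        "PreferredSubnetId": "subnet-aaaa1111",
    },
)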

8
Q

Question #: 483
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C02 Questions]
A company is relocating its data center and wants to securely transfer 50 TB of data to AWS within 2 weeks. The existing data center has a Site-to-Site VPN connection to AWS that is 90% utilized.
Which AWS service should a solutions architect use to meet these requirements?

A. AWS DataSync with a VPC endpoint
B. AWS Direct Connect
C. AWS Snowball Edge Storage Optimized
D. AWS Storage Gateway

A

C. AWS Snowball Edge Storage Optimized

Since the VPN is already 90% utilized, you cannot push 50 TB through the remaining 10% of the bandwidth within two weeks. Whenever the question asks you to move a large amount of data within a week or two over a constrained network, it is usually pointing you toward the Snowball family. Looking at the options: AWS DataSync with a VPC endpoint still sends the data over the existing network path, and we don't have the bandwidth available, so cross that out. AWS Direct Connect is a dedicated connection between on premises and AWS, but this is a one-time migration; provisioning a new Direct Connect link typically takes weeks or longer, which makes it infeasible within the deadline and overkill afterwards. That leaves C and D, and D is not right either: Storage Gateway enables hybrid cloud storage, letting on-premises applications access cloud storage over the network; it is not meant for a one-time bulk migration, and it would again compete for the saturated link. So we are left with Snowball Edge Storage Optimized, a physical device you can use to transfer large amounts of data to and from AWS. The Storage Optimized variant is designed for bulk data transfer: AWS ships the device to your data center, you load the data onto it, then ship it back, and AWS imports the data into your S3 bucket. So here is the hint: when they say the network bandwidth is already used up and the deadline is tight, they are hinting at a physical transfer device.
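A quick back-of-the-envelope check of why the VPN path cannot work; the 1.25 Gbps link speed is an assumption (the question only says the VPN is 90% utilized), but the conclusion holds for any realistic Site-to-Site VPN bandwidth.

# Rough feasibility check for pushing 50 TB through the spare 10% of a VPN.
data_tb = 50
data_bits = data_tb * 1e12 * 8            # 50 TB in bits (decimal TB)

assumed_link_bps = 1.25e9                 # ~1.25 Gbps, a typical S2S VPN ceiling (assumption)
spare_bps = assumed_link_bps * 0.10       # only 10% of the link is free

seconds = data_bits / spare_bps
print(f"{seconds / 86400:.0f} days")      # ~37 days -- well past the 2-week deadline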

9
Q

Question #: 660
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company hosts an application on Amazon EC2 On-Demand Instances in an Auto Scaling group. Application peak hours occur at the same time each day. Application users report slow application performance at the start of peak hours. The application performs normally 2-3 hours after peak hours begin. The company wants to ensure that the application works properly at the start of peak hours.

Which solution will meet these requirements?

A. Configure an Application Load Balancer to distribute traffic properly to the instances.
B. Configure a dynamic scaling policy for the Auto Scaling group to launch new instances based on memory utilization.
C. Configure a dynamic scaling policy for the Auto Scaling group to launch new instances based on CPU utilization.
D. Configure a scheduled scaling policy for the Auto Scaling group to launch new instances before peak hours.

A

D. Configure a scheduled scaling policy for the Auto Scaling group to launch new instances before peak hours.

As you can see, the question tells us multiple times that the company knows exactly when the peak happens, how long the slowdown lasts after the peak begins, and so on. Whenever they tell you the company knows the usage pattern that well, they are hinting at something scheduled. You pick scheduled scaling when you know the pattern: when the peak happens, how big it is, and so on. If the question said the company does not know when the peak happens, it would be hinting at dynamic scaling, because you cannot schedule around something you cannot predict. So you can go straight to answer D, but let's look at why the other options are wrong. Option A configures an Application Load Balancer. That helps distribute traffic, but it does not address slow performance at the start of peak hours: an ALB spreads traffic across existing instances, it does not solve insufficient capacity during the peak. Option B uses a dynamic scaling policy based on memory utilization. Scaling on memory is reactive, so capacity is only added after the spike has already started, which is exactly the 2-3 hour lag users are complaining about (and memory metrics are not even published to CloudWatch without the agent). Option C is the same idea with CPU utilization, and the same problem applies: the scale-out happens after the load arrives, not before. That leaves option D. We already know when the peak occurs, so instead of reacting to utilization metrics we schedule the scaling. This launches new instances before the start of peak hours, so the capacity is already in place when the demand arrives, improving performance right at the start of the peak. Scheduled scaling only works when you know when the peak will happen; if you don't, dynamic scaling would be the appropriate choice.
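A boto3 sketch of option D; the Auto Scaling group name, times, time zone, and sizes are assumptions about what a "before peak hours" schedule could look like.

import boto3

autoscaling = boto3.client("autoscaling")

# Scale out shortly before the known daily peak...
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-app-asg",
    ScheduledActionName="pre-peak-scale-out",
    Recurrence="45 8 * * *",        # every day at 08:45, just before an assumed 09:00 peak
    TimeZone="America/New_York",
    MinSize=4,
    MaxSize=12,
    DesiredCapacity=8,
)

# ...and scale back in after the peak window ends.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-app-asg",
    ScheduledActionName="post-peak-scale-in",
    Recurrence="0 13 * * *",
    TimeZone="America/New_York",
    MinSize=2,
    MaxSize=12,
    DesiredCapacity=2,
)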

10
Q

Question #: 661
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company runs applications on AWS that connect to the company’s Amazon RDS database. The applications scale on weekends and at peak times of the year. The company wants to scale the database more effectively for its applications that connect to the database.

Which solution will meet these requirements with the LEAST operational overhead?

A. Use Amazon DynamoDB with connection pooling with a target group configuration for the database. Change the applications to use the DynamoDB endpoint.
B. Use Amazon RDS Proxy with a target group for the database. Change the applications to use the RDS Proxy endpoint.
C. Use a custom proxy that runs on Amazon EC2 as an intermediary to the database. Change the applications to use the custom proxy endpoint.
D. Use an AWS Lambda function to provide connection pooling with a target group configuration for the database. Change the applications to use the Lambda function.

A

B. Use Amazon RDS Proxy with a target group for the database. Change the applications to use the RDS Proxy endpoint.

As I always say, least operational overhead means use a feature that is part of the service mentioned in the question; don't build your own solution. With that in mind, options A, C, and D all build something new, whereas option B uses a feature built for RDS, so it is practically a giveaway. But as usual, let's go through why the other options are wrong. Option A is not suitable because DynamoDB is a NoSQL database service; it is not a drop-in replacement for Amazon RDS and it does not provide connection pooling for an RDS database, so that one is gone. Option C: managing a custom proxy on EC2 introduces additional operational complexity and maintenance overhead, so even though it could work, it is not the least-overhead solution. Option D: while Lambda can be used for many things, it is not suitable for connection pooling and managing persistent database connections because of its stateless, short-lived nature. That leaves option B, which uses Amazon RDS Proxy, a managed database proxy that provides connection pooling, failover, and security features for database applications. It allows applications to scale more effectively by managing database connections on their behalf, it integrates natively with RDS, and it reduces operational overhead; the applications only need to switch to the RDS Proxy endpoint.
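A hedged boto3 sketch of setting up RDS Proxy as in option B; the engine family, names, ARNs, and subnets are assumptions.

import boto3

rds = boto3.client("rds")

# Create the proxy (engine family, names, ARNs, and subnets are assumed).
rds.create_db_proxy(
    DBProxyName="app-db-proxy",
    EngineFamily="MYSQL",                          # assumed engine family
    Auth=[{"AuthScheme": "SECRETS",
           "SecretArn": "arn:aws:secretsmanager:...:secret:app-db-creds",  # placeholder
           "IAMAuth": "DISABLED"}],
    RoleArn="arn:aws:iam::111122223333:role/rds-proxy-secrets-role",       # placeholder
    VpcSubnetIds=["subnet-aaaa1111", "subnet-bbbb2222"],
)

# Point the proxy's default target group at the existing RDS instance.
rds.register_db_proxy_targets(
    DBProxyName="app-db-proxy",
    DBInstanceIdentifiers=["app-db-instance"],     # assumed instance identifier
)

# The applications then swap their connection string to the proxy endpoint,
# which can be read from describe_db_proxies()["DBProxies"][0]["Endpoint"].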

11
Q

Question #: 662
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company uses AWS Cost Explorer to monitor its AWS costs. The company notices that Amazon Elastic Block Store (Amazon EBS) storage and snapshot costs increase every month. However, the company does not purchase additional EBS storage every month. The company wants to optimize monthly costs for its current storage usage.

Which solution will meet these requirements with the LEAST operational overhead?

A. Use logs in Amazon CloudWatch Logs to monitor the storage utilization of Amazon EBS. Use Amazon EBS Elastic Volumes to reduce the size of the EBS volumes.
B. Use a custom script to monitor space usage. Use Amazon EBS Elastic Volumes to reduce the size of the EBS volumes.
C. Delete all expired and unused snapshots to reduce snapshot costs.
D. Delete all nonessential snapshots. Use Amazon Data Lifecycle Manager to create and manage the snapshots according to the company’s snapshot policy requirements.

A

D. Delete all nonessential snapshots. Use Amazon Data Lifecycle Manager to create and manage the snapshots according to the company’s snapshot policy requirements.

Again, this is a least operational overhead question, so the previous logic applies here as well. Now let's look at the options. Option A is not right: while CloudWatch Logs can provide insight into utilization, using Elastic Volumes to reduce the size of EBS volumes is not actually possible (volumes can be grown but not shrunk), and the whole approach involves manual intervention, so it is operationally heavy. Option B is similar to A but uses a custom script, which introduces even more operational overhead, and again you cannot simply resize volumes downward. Option C: deleting expired and unused snapshots is good practice, but it is a one-time manual cleanup; it does nothing to stop the snapshot costs from growing again every month. That leaves option D.

What about option D? It addresses the non-essential snapshots and then uses Amazon Data Lifecycle Manager to automate snapshot creation, retention, and deletion according to the company's snapshot policy. It is the most streamlined and automated approach with the least operational overhead, so option D provides an efficient, ongoing solution to manage snapshots and optimize costs for the current storage usage. Think of it like S3 Lifecycle rules, but Data Lifecycle Manager is for EBS snapshots (and EBS-backed AMIs).
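A boto3 sketch of the Data Lifecycle Manager piece of option D; the role ARN, target tag, and schedule are assumptions standing in for the company's snapshot policy.

import boto3

dlm = boto3.client("dlm")

# Hypothetical snapshot policy: daily snapshots of volumes tagged Backup=true,
# keeping the last 7.
dlm.create_lifecycle_policy(
    ExecutionRoleArn="arn:aws:iam::111122223333:role/AWSDataLifecycleManagerDefaultRole",
    Description="Daily EBS snapshots, 7-day retention",
    State="ENABLED",
    PolicyDetails={
        "PolicyType": "EBS_SNAPSHOT_MANAGEMENT",
        "ResourceTypes": ["VOLUME"],
        "TargetTags": [{"Key": "Backup", "Value": "true"}],
        "Schedules": [{
            "Name": "daily",
            "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["03:00"]},
            "RetainRule": {"Count": 7},
        }],
    },
)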

12
Q

Question #: 663
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company is developing a new application on AWS. The application consists of an Amazon Elastic Container Service (Amazon ECS) cluster, an Amazon S3 bucket that contains assets for the application, and an Amazon RDS for MySQL database that contains the dataset for the application. The dataset contains sensitive information. The company wants to ensure that only the ECS cluster can access the data in the RDS for MySQL database and the data in the S3 bucket.

Which solution will meet these requirements?

A. Create a new AWS Key Management Service (AWS KMS) customer managed key to encrypt both the S3 bucket and the RDS for MySQL database. Ensure that the KMS key policy includes encrypt and decrypt permissions for the ECS task execution role.
B. Create an AWS Key Management Service (AWS KMS) AWS managed key to encrypt both the S3 bucket and the RDS for MySQL database. Ensure that the S3 bucket policy specifies the ECS task execution role as a user.
C. Create an S3 bucket policy that restricts bucket access to the ECS task execution role. Create a VPC endpoint for Amazon RDS for MySQL. Update the RDS for MySQL security group to allow access from only the subnets that the ECS cluster will generate tasks in.
D. Create a VPC endpoint for Amazon RDS for MySQL. Update the RDS for MySQL security group to allow access from only the subnets that the ECS cluster will generate tasks in. Create a VPC endpoint for Amazon S3. Update the S3 bucket policy to allow access from only the S3 VPC endpoint.

A

D. Create a VPC endpoint for Amazon RDS for MySQL. Update the RDS for MySQL security group to allow access from only the subnets that the ECS cluster will generate tasks in. Create a VPC endpoint for Amazon S3. Update the S3 bucket policy to allow access from only the S3 VPC endpoint.

Consider what the requirements actually are. The company wants only the ECS cluster to be able to access the data in the RDS for MySQL database and the data in the S3 bucket. Whenever they say only one component should reach another, think network-level restrictions: the RDS security group can be updated to allow traffic only from the subnets (or security group) that the ECS tasks run in. For S3, requests normally travel to the public S3 endpoint; since the data is sensitive, we want to keep that traffic private inside the VPC, and as we have discussed before, the way to reach S3 privately is a VPC endpoint, with the bucket policy locked down to that endpoint. That already points to option D, the only option built entirely around VPC endpoints plus security group restrictions, but let's go through all of them.

Option A talks about a KMS customer managed key to encrypt both the S3 bucket and the RDS database, with encrypt and decrypt permissions for the ECS task execution role. Encryption protects data at rest; it is not the direct way to restrict which clients can reach the data sources, so forget it. Option B is the same idea with an AWS managed key, so we ignore it for the same reason. Option C creates an S3 bucket policy that restricts bucket access to the ECS task execution role, which sounds reasonable, and it restricts the RDS security group to the ECS subnets, which does handle the database side; but the bucket is still reachable from anywhere that role's credentials can be used, and the traffic is not forced onto a private path. Option D does it properly: create a VPC endpoint for RDS for MySQL and update the RDS security group to allow access only from the subnets the ECS cluster launches tasks in, then create a VPC endpoint for S3 and update the bucket policy to allow access only from that S3 VPC endpoint. You might think C is almost the same, but it is not: instead of restricting the bucket to the task execution role, you restrict it to the VPC endpoint, which ensures only traffic originating inside the VPC where the ECS tasks run can reach the bucket.
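A boto3 sketch of the S3 half of option D: a gateway VPC endpoint plus a bucket policy that denies any request not arriving through that endpoint. The VPC, route table, and bucket names are placeholders; the RDS half would be handled by restricting the database's security group to the ECS subnets or the ECS tasks' security group.

import boto3, json

ec2 = boto3.client("ec2")
s3 = boto3.client("s3")

# Gateway endpoint so traffic from the VPC reaches S3 privately.
resp = ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",     # assumed Region
    VpcEndpointType="Gateway",
    RouteTableIds=["rtb-0123456789abcdef0"],
)
vpce_id = resp["VpcEndpoint"]["VpcEndpointId"]

# Bucket policy that only allows requests arriving through that endpoint.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowOnlyFromVpcEndpoint",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": ["arn:aws:s3:::app-assets-example", "arn:aws:s3:::app-assets-example/*"],
        "Condition": {"StringNotEquals": {"aws:sourceVpce": vpce_id}},
    }],
}
s3.put_bucket_policy(Bucket="app-assets-example", Policy=json.dumps(policy))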

13
Q

Question #: 664
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company has a web application that runs on premises. The application experiences latency issues during peak hours. The latency issues occur twice each month. At the start of a latency issue, the application’s CPU utilization immediately increases to 10 times its normal amount.

The company wants to migrate the application to AWS to improve latency. The company also wants to scale the application automatically when application demand increases. The company will use AWS Elastic Beanstalk for application deployment.

Which solution will meet these requirements?

A. Configure an Elastic Beanstalk environment to use burstable performance instances in unlimited mode. Configure the environment to scale based on requests.
B. Configure an Elastic Beanstalk environment to use compute optimized instances. Configure the environment to scale based on requests.
C. Configure an Elastic Beanstalk environment to use compute optimized instances. Configure the environment to scale on a schedule.
D. Configure an Elastic Beanstalk environment to use burstable performance instances in unlimited mode. Configure the environment to scale on predictive metrics.

A

D. Configure an Elastic Beanstalk environment to use burstable performance instances in unlimited mode. Configure the environment to scale on predictive metrics.

They already gave you half the answer: the company will use Elastic Beanstalk. The question is which instance type and which scaling configuration of the Beanstalk environment to use, and if you are not aware of these, you won't be able to answer it, so let's take a moment. A and D are similar (burstable performance instances in unlimited mode) and B and C are similar (compute optimized instances). Option A is not the right one: burstable performance instances in unlimited mode can absorb bursty workloads, but scaling based on requests is reactive and may not handle the requirement of having capacity ready when CPU utilization suddenly jumps to 10 times its normal amount. Options B and C both use compute optimized instances, which give better baseline performance, but B scales on requests, which has the same reactive problem as A, and C scales on a schedule, which is not dynamic at all and may not be responsive enough for spikes that happen only twice a month. That leaves option D. It uses burstable performance instances in unlimited mode, so the instances can sustain CPU above their baseline while additional capacity comes online, and it configures the environment to scale on predictive metrics, which lets the environment anticipate demand and scale out proactively rather than on a fixed schedule or purely on incoming requests. This aligns with the requirement to scale automatically when CPU utilization increases 10 times during the latency events, so option D is the most suitable solution for improving latency.
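Elastic Beanstalk runs the environment's instances in an Auto Scaling group under the hood, so one way to picture "scale on predictive metrics" is a predictive scaling policy attached to that group. This is only a sketch under that assumption; the group name, target value, and mode are placeholders rather than a Beanstalk-specific setting.

import boto3

autoscaling = boto3.client("autoscaling")

# Predictive scaling policy on the environment's Auto Scaling group.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="awseb-e-example-AWSEBAutoScalingGroup",   # placeholder group name
    PolicyName="predictive-cpu",
    PolicyType="PredictiveScaling",
    PredictiveScalingConfiguration={
        "MetricSpecifications": [{
            "TargetValue": 50.0,
            "PredefinedMetricPairSpecification": {"PredefinedMetricType": "ASGCPUUtilization"},
        }],
        "Mode": "ForecastAndScale",   # forecast demand and launch capacity ahead of it
    },
)

# The burstable instances themselves would be launched with
# CreditSpecification={"CpuCredits": "unlimited"} so they can burst past their
# CPU baseline while the forecast capacity comes online.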

14
Q

Question #: 665
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company has customers located across the world. The company wants to use automation to secure its systems and network infrastructure. The company’s security team must be able to track and audit all incremental changes to the infrastructure.

Which solution will meet these requirements?

A. Use AWS Organizations to set up the infrastructure. Use AWS Config to track changes.
B. Use AWS CloudFormation to set up the infrastructure. Use AWS Config to track changes.
C. Use AWS Organizations to set up the infrastructure. Use AWS Service Catalog to track changes.
D. Use AWS CloudFormation to set up the infrastructure. Use AWS Service Catalog to track changes.

A

B. Use AWS CloudFormation to set up the infrastructure. Use AWS Config to track changes.

Whenever you want to track and audit incremental changes to infrastructure, meaning the configuration changes of your resources over time, the service for that is AWS Config; it records configuration history and lets you audit every change. Now let's go through the options. Two of them use AWS Organizations and two use AWS CloudFormation, paired with either AWS Config or AWS Service Catalog, so let's sort out what each service does. Organizations is focused on managing multiple AWS accounts; it does not set up or automate infrastructure, so options A and C miss the automation part of the requirement. CloudFormation is what gives you infrastructure as code, so you can provision and update the infrastructure through automation; both B and D have CloudFormation, so the deciding factor is Config versus Service Catalog. Service Catalog is designed for creating and managing catalogs of approved IT products; it helps with governance, but it does not record and audit every incremental configuration change the way AWS Config does. So we cross out D, and option B is the correct answer.
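As a sketch of the "track changes" half of option B (CloudFormation handles the "set up the infrastructure" half), here is a minimal AWS Config recorder setup in boto3; the role ARN and bucket name are placeholders.

import boto3

config = boto3.client("config")

# Record configuration changes for all supported resource types.
config.put_configuration_recorder(
    ConfigurationRecorder={
        "name": "default",
        "roleARN": "arn:aws:iam::111122223333:role/aws-config-role",   # placeholder
        "recordingGroup": {"allSupported": True, "includeGlobalResourceTypes": True},
    }
)
config.put_delivery_channel(
    DeliveryChannel={"name": "default", "s3BucketName": "config-history-example"}
)
config.start_configuration_recorder(ConfigurationRecorderName="default")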

15
Q

Question #: 667
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company is moving its data and applications to AWS during a multiyear migration project. The company wants to securely access data on Amazon S3 from the company’s AWS Region and from the company’s on-premises location. The data must not traverse the internet. The company has established an AWS Direct Connect connection between its Region and its on-premises location.

Which solution will meet these requirements?

A. Create gateway endpoints for Amazon S3. Use the gateway endpoints to securely access the data from the Region and the on-premises location.
B. Create a gateway in AWS Transit Gateway to access Amazon S3 securely from the Region and the on-premises location.
C. Create interface endpoints for Amazon S3. Use the interface endpoints to securely access the data from the Region and the on-premises location.
D. Use an AWS Key Management Service (AWS KMS) key to access the data securely from the Region and the on-premises location.

A

C. Create interface endpoints for Amazon S3. Use the interface endpoints to securely access the data from the Region and the on-premises location.

Amazon S3 supports both gateway endpoints and interface endpoints. With a gateway endpoint, you can access Amazon S3 from your VPC, without requiring an internet gateway or NAT device for your VPC, and with no additional cost. However, gateway endpoints do not allow access from on-premises networks, from peered VPCs in other AWS Regions, or through a transit gateway. For those scenarios, you must use an interface endpoint, which is available for an additional cost. For more information, see Types of VPC endpoints for Amazon S3 in the Amazon S3 User Guide. https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-s3.html
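A hedged boto3 sketch of creating the interface endpoint for S3 that option C relies on; the Region, VPC, subnet, and security group IDs are placeholders. On-premises clients coming in over Direct Connect would then address S3 through the endpoint-specific DNS names instead of the public S3 endpoint.

import boto3

ec2 = boto3.client("ec2")

resp = ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",   # assumed Region
    VpcEndpointType="Interface",
    SubnetIds=["subnet-aaaa1111", "subnet-bbbb2222"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    PrivateDnsEnabled=False,   # S3 interface endpoints are reached via endpoint-specific DNS names
)
# Endpoint-specific DNS name that on-premises clients would use.
print(resp["VpcEndpoint"]["DnsEntries"][0]["DnsName"])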

16
Q

Question #: 668
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company created a new organization in AWS Organizations. The organization has multiple accounts for the company’s development teams. The development team members use AWS IAM Identity Center (AWS Single Sign-On) to access the accounts. For each of the company’s applications, the development teams must use a predefined application name to tag resources that are created.

A solutions architect needs to design a solution that gives the development team the ability to create resources only if the application name tag has an approved value.

Which solution will meet these requirements?

A. Create an IAM group that has a conditional Allow policy that requires the application name tag to be specified for resources to be created.
B. Create a cross-account role that has a Deny policy for any resource that has the application name tag.
C. Create a resource group in AWS Resource Groups to validate that the tags are applied to all resources in all accounts.
D. Create a tag policy in Organizations that has a list of allowed application names.

A

D. Create a tag policy in Organizations that has a list of allowed application names.

They are talking about enforcing tag values, and since we are working across an organization, it should be done at the organization level. Let's see which option does that. Option A is not the right one: IAM policies can include conditions, and a conditional Allow can require that a tag be present, but an IAM group policy is per account and is not a clean way to enforce a specific list of approved values across every account. Option B talks about a cross-account role with a Deny policy for any resource that has the application name tag, which is backwards (it would block correctly tagged resources), and broad Deny policies are generally not recommended unless absolutely necessary. Option C uses AWS Resource Groups, which help you organize and find resources based on tags but do not enforce or control which tag values can be applied. That brings us to tag policies, which you create in AWS Organizations with the list of allowed application names. This is the robust way to standardize tagging: a tag policy defines the approved values for the tag key, and with enforcement enabled, noncompliant tagging operations on supported resources are rejected. Therefore D, creating a tag policy in Organizations, is the most appropriate solution for enforcing the required tag values.
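A hedged boto3 sketch of option D; the application names, enforced resource types, and target OU ID are assumptions, and the policy body follows the Organizations tag-policy format with @@assign operators.

import boto3, json

org = boto3.client("organizations")

# Tag policy allowing only approved application names.
tag_policy = {
    "tags": {
        "appname": {
            "tag_key": {"@@assign": "AppName"},
            "tag_value": {"@@assign": ["inventory-app", "billing-app"]},
            "enforced_for": {"@@assign": ["ec2:instance", "s3:bucket"]},
        }
    }
}

policy = org.create_policy(
    Name="approved-app-names",
    Description="Only approved AppName tag values",
    Type="TAG_POLICY",
    Content=json.dumps(tag_policy),
)
org.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"],
                  TargetId="ou-example-12345678")   # placeholder OU or account ID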

17
Q

Question #: 669
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company runs its databases on Amazon RDS for PostgreSQL. The company wants a secure solution to manage the master user password by rotating the password every 30 days.

Which solution will meet these requirements with the LEAST operational overhead?

A. Use Amazon EventBridge to schedule a custom AWS Lambda function to rotate the password every 30 days.
B. Use the modify-db-instance command in the AWS CLI to change the password.
C. Integrate AWS Secrets Manager with Amazon RDS for PostgreSQL to automate password rotation.
D. Integrate AWS Systems Manager Parameter Store with Amazon RDS for PostgreSQL to automate password rotation.

A

C. Integrate AWS Secrets Manager with Amazon RDS for PostgreSQL to automate password rotation.

Whenever you see a question about storing and rotating database credentials, API keys, or other secrets, one service should come to mind: AWS Secrets Manager. That is option C; you don't even have to look at the rest. Secrets Manager integrates natively with Amazon RDS and can rotate the master user password automatically on a schedule such as every 30 days, with no custom Lambda scheduling or CLI scripting to maintain.

password rotation = AWS Secrets Manager

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/rds-secrets-manager.html#rds-secrets-manager-overview
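A sketch of what option C can look like with the RDS integration the link above describes; the instance identifier is assumed, and whether the rotation window on an RDS-managed secret is adjusted exactly this way is an assumption worth verifying against the docs.

import boto3

rds = boto3.client("rds")
secrets = boto3.client("secretsmanager")

# Hand the master user password over to Secrets Manager (instance ID is assumed).
rds.modify_db_instance(
    DBInstanceIdentifier="app-postgres",
    ManageMasterUserPassword=True,
    ApplyImmediately=True,
)

# The generated secret's ARN is exposed on the instance description; a 30-day
# schedule is then expressed through the secret's rotation rules (assumption:
# the schedule on an RDS-managed secret can be adjusted this way).
desc = rds.describe_db_instances(DBInstanceIdentifier="app-postgres")
secret_arn = desc["DBInstances"][0]["MasterUserSecret"]["SecretArn"]
secrets.rotate_secret(SecretId=secret_arn,
                      RotationRules={"AutomaticallyAfterDays": 30})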

18
Q

Question #: 670
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company performs tests on an application that uses an Amazon DynamoDB table. The tests run for 4 hours once a week. The company knows how many read and write operations the application performs to the table each second during the tests. The company does not currently use DynamoDB for any other use case. A solutions architect needs to optimize the costs for the table.

Which solution will meet these requirements?

A. Choose on-demand mode. Update the read and write capacity units appropriately.
B. Choose provisioned mode. Update the read and write capacity units appropriately.
C. Purchase DynamoDB reserved capacity for a 1-year term.
D. Purchase DynamoDB reserved capacity for a 3-year term.

A

B. Choose provisioned mode. Update the read and write capacity units appropriately.

With provisioned capacity mode, you specify the number of reads and writes per second that you expect your application to require, and you are billed based on that. Furthermore if you can forecast your capacity requirements you can also reserve a portion of DynamoDB provisioned capacity and optimize your costs even further. https://docs.aws.amazon.com/wellarchitected/latest/serverless-applications-lens/capacity.html

"The company knows how many read and write operations the application performs to the table each second during the tests," so they can provision exactly that capacity as the maximum they need.

Also note that option A says "update the read and write capacity units appropriately," but in on-demand mode you do not set capacity units at all; they are managed automatically, which is another clue that A is wrong.

A solutions architect needs to optimize the cost for the table. When the question literally tells you how many read and write operations the workload performs, and you know the pattern, you do not go for on-demand. On-demand mode is what you use when you cannot predict the traffic: when it peaks, how many reads and writes it needs, and so on. Here we clearly do know, so on-demand is out. And since the tests run for only 4 hours once a week, do you want to purchase reserved capacity for one or three years? Of course not; reserved capacity only pays off for steady, continuous usage. That leaves the other capacity mode DynamoDB offers: provisioned mode. In provisioned mode you set the read and write capacity units yourself based on the known workload, and the question clearly says the company knows those numbers. So the company can provision exactly the capacity needed for the test window (and dial it down afterwards), optimizing costs by not paying for unused capacity the rest of the time.
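A minimal boto3 sketch of option B; the table name and capacity numbers are assumptions standing in for the figures the company measured.

import boto3

dynamodb = boto3.client("dynamodb")

# Switch the table to provisioned mode sized for the known test workload.
dynamodb.update_table(
    TableName="test-table",
    BillingMode="PROVISIONED",
    ProvisionedThroughput={
        "ReadCapacityUnits": 500,    # assumed reads/second during the 4-hour test
        "WriteCapacityUnits": 200,   # assumed writes/second during the 4-hour test
    },
)
# The same call can lower the units again after the weekly test window.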

19
Q

Question #: 671
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company runs its applications on Amazon EC2 instances. The company performs periodic financial assessments of its AWS costs. The company recently identified unusual spending.

The company needs a solution to prevent unusual spending. The solution must monitor costs and notify responsible stakeholders in the event of unusual spending.

Which solution will meet these requirements?

A. Use an AWS Budgets template to create a zero spend budget.
B. Create an AWS Cost Anomaly Detection monitor in the AWS Billing and Cost Management console.
C. Create AWS Pricing Calculator estimates for the current running workload pricing details.
D. Use Amazon CloudWatch to monitor costs and to identify unusual spending.

A

B. Create an AWS Cost Anomaly Detection monitor in the AWS Billing and Cost Management console.

AWS Cost Anomaly Detection is designed to automatically detect unusual spending patterns based on machine learning algorithms. It can identify anomalies and send notifications when it detects unexpected changes in spending. This aligns well with the requirement to prevent unusual spending and notify stakeholders.

https://aws.amazon.com/aws-cost-management/aws-cost-anomaly-detection/

Clearly, there is a purpose-built tool for this. AWS Pricing Calculator (option C) can be eliminated: it is used before moving to the cloud, or before services are deployed, to estimate what they will cost. AWS Budgets (option A) notifies you when a threshold you set is breached, for example when account spending exceeds $100, but a zero spend budget does not flag unusual spending patterns; the goal here is not to cap spending on particular items but to be alerted whenever spending is anomalous. CloudWatch (option D) can monitor many metrics, but it is not specifically designed for cost anomaly detection. For that purpose there is a dedicated feature, option B: AWS Cost Anomaly Detection, which uses machine learning to identify unexpected spending patterns and automatically sends notifications when it detects them. Think of it like a credit card fraud alert: if you swipe your card in a location far from where you usually are, the bank calls or texts you to ask whether the transaction was really yours. AWS provides the same kind of anomaly detection for spend through an anomaly detection monitor.
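A minimal boto3 sketch of setting this up, assuming a service-level monitor and an email subscriber; the names, threshold, and email address are placeholders.

```python
import boto3

ce = boto3.client("ce")  # the Cost Explorer API hosts Cost Anomaly Detection

# Create a monitor that watches spend per AWS service for anomalies.
monitor = ce.create_anomaly_monitor(
    AnomalyMonitor={
        "MonitorName": "service-spend-monitor",  # placeholder name
        "MonitorType": "DIMENSIONAL",
        "MonitorDimension": "SERVICE",
    }
)

# Notify stakeholders by email once a day when anomalies exceed $100 of impact.
ce.create_anomaly_subscription(
    AnomalySubscription={
        "SubscriptionName": "finance-alerts",  # placeholder name
        "MonitorArnList": [monitor["MonitorArn"]],
        "Subscribers": [{"Address": "finops@example.com", "Type": "EMAIL"}],
        "Frequency": "DAILY",
        "Threshold": 100.0,
    }
)
```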

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
20
Q

688 Question #: 672
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A marketing company receives a large amount of new clickstream data in Amazon S3 from a marketing campaign. The company needs to analyze the clickstream data in Amazon S3 quickly. Then the company needs to determine whether to process the data further in the data pipeline.

Which solution will meet these requirements with the LEAST operational overhead?

A. Create external tables in a Spark catalog. Configure jobs in AWS Glue to query the data.
B. Configure an AWS Glue crawler to crawl the data. Configure Amazon Athena to query the data.
C. Create external tables in a Hive metastore. Configure Spark jobs in Amazon EMR to query the data.
D. Configure an AWS Glue crawler to crawl the data. Configure Amazon Kinesis Data Analytics to use SQL to query the data.

A

B. Configure an AWS Glue crawler to crawl the data. Configure Amazon Athena to query the data.

Option B - leverages serverless services that minimise management tasks and allows the company to focus on querying and analysing the data with the LEAST operational overhead. AWS Glue with Athena (Option B): AWS Glue is a fully managed extract, transform, and load (ETL) service, and Athena is a serverless query service that allows you to analyze data directly in Amazon S3 using SQL queries. By configuring an AWS Glue crawler to crawl the data, you can create a schema for the data, and then use Athena to query the data directly without the need to load it into a separate database. This minimizes operational overhead.

All the company has to do is analyze the clickstream data in S3 quickly, and analyzing data usually means running SQL queries. You might think of S3 Select, but as discussed in an earlier question, S3 Select works on a single object, not on a whole set of files. The next quick option is Athena, which is serverless and therefore the least operational overhead, and it appears in option B. Checking the other options: option A uses a Spark catalog with AWS Glue jobs; querying with Spark through Glue adds operational overhead because Spark jobs typically require more configuration and maintenance. Option C uses a Hive metastore with Spark jobs on Amazon EMR; like option A, running Spark, this time on EMR, introduces additional complexity compared to a serverless solution. Option D starts the same way as option B but uses Amazon Kinesis Data Analytics, which is intended for real-time analytics on streaming data and would be over-engineered for clickstream data already stored in S3. So option B is correct: a Glue crawler (not a Glue job) automatically discovers and catalogs the metadata about the clickstream data in S3, and Athena, as a serverless query service, allows quick ad hoc SQL queries on that data without setting up or managing any infrastructure. Therefore, configuring an AWS Glue crawler to crawl the data and using Amazon Athena for queries is the most suitable solution for quickly analyzing the data with minimal operational overhead.
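A minimal boto3 sketch of the flow, assuming the clickstream objects live under a hypothetical s3://clickstream-bucket/raw/ prefix and that a Glue service role already exists.

```python
import boto3

glue = boto3.client("glue")
athena = boto3.client("athena")

# 1. Crawl the raw clickstream data so Glue catalogs its schema.
glue.create_crawler(
    Name="clickstream-crawler",  # hypothetical name
    Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",  # assumed to exist
    DatabaseName="clickstream_db",
    Targets={"S3Targets": [{"Path": "s3://clickstream-bucket/raw/"}]},
)
glue.start_crawler(Name="clickstream-crawler")

# 2. Once the table exists, analysts run ad hoc SQL with Athena.
athena.start_query_execution(
    QueryString="SELECT page, COUNT(*) AS hits FROM raw GROUP BY page ORDER BY hits DESC LIMIT 10",
    QueryExecutionContext={"Database": "clickstream_db"},
    ResultConfiguration={"OutputLocation": "s3://clickstream-bucket/athena-results/"},
)
```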

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
21
Q

689 Question #: 673
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company runs an SMB file server in its data center. The file server stores large files that the company frequently accesses for up to 7 days after the file creation date. After 7 days, the company needs to be able to access the files with a maximum retrieval time of 24 hours.

Which solution will meet these requirements?

A. Use AWS DataSync to copy data that is older than 7 days from the SMB file server to AWS.
B. Create an Amazon S3 File Gateway to increase the company’s storage space. Create an S3 Lifecycle policy to transition the data to S3 Glacier Deep Archive after 7 days.
C. Create an Amazon FSx File Gateway to increase the company’s storage space. Create an Amazon S3 Lifecycle policy to transition the data after 7 days.
D. Configure access to Amazon S3 for each user. Create an S3 Lifecycle policy to transition the data to S3 Glacier Flexible Retrieval after 7 days.

A

B. Create an Amazon S3 File Gateway to increase the company’s storage space. Create an S3 Lifecycle policy to transition the data to S3 Glacier Deep Archive after 7 days.

S3 File Gateway connects the SMB file share to S3. A lifecycle policy moves objects to S3 Glacier Deep Archive, which supports standard retrievals within 12 hours. https://aws.amazon.com/blogs/aws/new-amazon-s3-storage-class-glacier-deep-archive/

Amazon S3 File Gateway supports SMB and NFS; Amazon FSx File Gateway supports SMB for Windows workloads.

S3 file gateway supports SMB and S3 Glacier Deep Archive can retrieve data within 12 hours. https://aws.amazon.com/storagegateway/file/s3/ https://docs.aws.amazon.com/prescriptive-guidance/latest/backup-recovery/amazon-s3-glacier.html

Option A uses DataSync, which, as we have seen, is a data transfer service for copying data from on premises to the cloud; copying only the data that is older than seven days does not preserve an SMB file share or address the retrieval requirement, so it is not the right fit. Option C uses an Amazon FSx File Gateway with a lifecycle policy after seven days; FSx File Gateway fronts Amazon FSx for Windows File Server rather than S3, so an S3 lifecycle policy does not apply, and the option never says which storage class the data would move to or how the 24-hour retrieval requirement would be met. Option D gives each user direct access to S3 with a lifecycle transition to S3 Glacier Flexible Retrieval; the files live on an SMB file server, and reconfiguring every user to access S3 directly is not an efficient way to keep the existing SMB workflow. That leaves option B: an Amazon S3 File Gateway extends the company's storage while preserving SMB access from on premises, and a lifecycle policy transitions the data to S3 Glacier Deep Archive after seven days. Deep Archive standard retrievals complete within 12 hours, which satisfies the 24-hour maximum, and Deep Archive is cheaper than Flexible Retrieval, so option B is the most suitable answer.
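The lifecycle half of option B might look like the following boto3 sketch, assuming the S3 File Gateway writes into a hypothetical bucket named smb-archive-bucket.

```python
import boto3

s3 = boto3.client("s3")

# Transition every object to S3 Glacier Deep Archive 7 days after creation.
# Deep Archive standard retrievals complete within 12 hours, inside the
# company's 24-hour retrieval window.
s3.put_bucket_lifecycle_configuration(
    Bucket="smb-archive-bucket",  # hypothetical bucket backing the File Gateway
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-after-7-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to all objects
                "Transitions": [{"Days": 7, "StorageClass": "DEEP_ARCHIVE"}],
            }
        ]
    },
)
```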

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
22
Q

690 Question #: 674
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company runs a web application on Amazon EC2 instances in an Auto Scaling group. The application uses a database that runs on an Amazon RDS for PostgreSQL DB instance. The application performs slowly when traffic increases. The database experiences a heavy read load during periods of high traffic.

Which actions should a solutions architect take to resolve these performance issues? (Choose two.)

A. Turn on auto scaling for the DB instance.
B. Create a read replica for the DB instance. Configure the application to send read traffic to the read replica.
C. Convert the DB instance to a Multi-AZ DB instance deployment. Configure the application to send read traffic to the standby DB instance.
D. Create an Amazon ElastiCache cluster. Configure the application to cache query results in the ElastiCache cluster.
E. Configure the Auto Scaling group subnets to ensure that the EC2 instances are provisioned in the same Availability Zone as the DB instance.

A

B. Create a read replica for the DB instance. Configure the application to send read traffic to the read replica.
D. Create an Amazon ElastiCache cluster. Configure the application to cache query results in the ElastiCache cluster.

B: Read replicas distribute load and help improve performance. D: Caching of any kind will help with performance. Remember: "The database experiences a heavy read load during periods of high traffic."

By creating a read replica, you offload read traffic from the primary DB instance to the replica, distributing the load and improving overall performance during periods of heavy read traffic. Amazon ElastiCache can be used to cache frequently accessed data, reducing the load on the database. This is particularly effective for read-heavy workloads, as it allows the application to retrieve data from the cache rather than making repeated database queries.


https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/creating-elasticache-cluster-with-RDS-settings.html

Option A, turning on auto scaling for the DB instance, does not apply here: that kind of auto scaling is for EC2 instances, and it does not address a database read bottleneck. Option C, converting to a Multi-AZ deployment, is mainly for availability and fault tolerance; in a standard Multi-AZ DB instance deployment the standby does not serve read traffic, so it will not significantly improve read performance. Option E, placing the Auto Scaling group's EC2 instances in the same Availability Zone as the DB instance, is about network locality and does nothing to relieve a heavy read load. Whenever a question highlights a heavy read load, look for read replicas: they offload read traffic from the primary database, which is exactly what option B describes. Creating a read replica and sending read traffic to it distributes the read load and is the common approach to horizontally scaling a read-heavy database workload. The other correct choice is option D, Amazon ElastiCache, a managed caching service that improves performance by caching frequently accessed data. Caching query results means repeated reads of the same data hit the cache instead of the PostgreSQL database after the first request, further reducing load on the primary.
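A short boto3 sketch of the read-replica half of the answer; the instance identifiers and sizing are hypothetical, and the application would then point its read traffic at the replica's endpoint.

```python
import boto3

rds = boto3.client("rds")

# Create a read replica of the primary PostgreSQL instance (option B).
# Read traffic is then sent to the replica's endpoint instead of the primary.
response = rds.create_db_instance_read_replica(
    DBInstanceIdentifier="app-db-read-replica",   # hypothetical replica name
    SourceDBInstanceIdentifier="app-db-primary",  # hypothetical primary name
    DBInstanceClass="db.r6g.large",               # assumed sizing
)
print(response["DBInstance"]["DBInstanceStatus"])
```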

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
23
Q

Question #: 675
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company uses Amazon EC2 instances and Amazon Elastic Block Store (Amazon EBS) volumes to run an application. The company creates one snapshot of each EBS volume every day to meet compliance requirements. The company wants to implement an architecture that prevents the accidental deletion of EBS volume snapshots. The solution must not change the administrative rights of the storage administrator user.

Which solution will meet these requirements with the LEAST administrative effort?

A. Create an IAM role that has permission to delete snapshots. Attach the role to a new EC2 instance. Use the AWS CLI from the new EC2 instance to delete snapshots.
B. Create an IAM policy that denies snapshot deletion. Attach the policy to the storage administrator user.
C. Add tags to the snapshots. Create retention rules in Recycle Bin for EBS snapshots that have the tags.
D. Lock the EBS snapshots to prevent deletion.

A

D. Lock the EBS snapshots to prevent deletion.

The “lock” feature in AWS allows you to prevent accidental deletion of resources, including EBS snapshots. This can be set at the snapshot level, providing a straightforward and effective way to meet the requirements without changing the administrative rights of the storage administrator user. Exactly what a locked EBS snapshot is used for https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-snapshot-lock.html

Whenever the question asks for the least administrative effort, look for an existing feature you can simply enable rather than building a new solution. Scanning the options, A and B create something new, while C and D use built-in snapshot features. Option A creates a new IAM role with permission to delete snapshots, attaches it to a new EC2 instance, and runs deletions through the AWS CLI on that instance. It can work, but it introduces additional components and complexity; it is too much administrative effort, so it is not the right answer. Option B attaches an IAM policy that explicitly denies snapshot deletion to the storage administrator user. While that would prevent accidental deletion, it modifies the administrative rights of the storage administrator, which the question explicitly says must not change. Option C tags the snapshots and creates Recycle Bin retention rules based on those tags; it works, but it adds tagging and another feature to manage, which is more than this simple requirement needs. Option D simply locks the EBS snapshots: EBS provides a built-in snapshot lock feature that prevents locked snapshots from being deleted, without creating additional IAM roles, policies, or tags. It directly addresses the requirement of preventing accidental deletion with minimal administrative effort.
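If you want to see what "just lock the snapshot" looks like in practice, the boto3 call below is a sketch assuming the EBS snapshot lock API (ec2.lock_snapshot, introduced in late 2023); the snapshot ID is hypothetical, and governance mode is assumed so privileged users can still manage the lock.

```python
import boto3

ec2 = boto3.client("ec2")

# Lock an existing snapshot for 30 days so it cannot be deleted (option D).
# Governance mode is assumed here; compliance mode would make the lock
# unremovable until it expires.
ec2.lock_snapshot(
    SnapshotId="snap-0123456789abcdef0",  # hypothetical snapshot ID
    LockMode="governance",
    LockDuration=30,  # days
)
```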

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
24
Q

692 Question #: 676
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company’s application uses Network Load Balancers, Auto Scaling groups, Amazon EC2 instances, and databases that are deployed in an Amazon VPC. The company wants to capture information about traffic to and from the network interfaces in near real time in its Amazon VPC. The company wants to send the information to Amazon OpenSearch Service for analysis.

Which solution will meet these requirements?

A. Create a log group in Amazon CloudWatch Logs. Configure VPC Flow Logs to send the log data to the log group. Use Amazon Kinesis Data Streams to stream the logs from the log group to OpenSearch Service.
B. Create a log group in Amazon CloudWatch Logs. Configure VPC Flow Logs to send the log data to the log group. Use Amazon Kinesis Data Firehose to stream the logs from the log group to OpenSearch Service.
C. Create a trail in AWS CloudTrail. Configure VPC Flow Logs to send the log data to the trail. Use Amazon Kinesis Data Streams to stream the logs from the trail to OpenSearch Service.
D. Create a trail in AWS CloudTrail. Configure VPC Flow Logs to send the log data to the trail. Use Amazon Kinesis Data Firehose to stream the logs from the trail to OpenSearch Service.

A

B. Create a log group in Amazon CloudWatch Logs. Configure VPC Flow Logs to send the log data to the log group. Use Amazon Kinesis Data Firehose to stream the logs from the log group to OpenSearch Service.

CloudTrail is for logging administrative actions, we need CloudWatch. We want the data in another AWS service (OpenSearch), not Kinesis, thus we need Firehose, not Streams. VPC Flow Logs capture information about the IP traffic going to and from network interfaces in a VPC. By configuring VPC Flow Logs to send the log data to a log group in Amazon CloudWatch Logs, you can then use Amazon Kinesis Data Firehose to stream the logs from the log group to Amazon OpenSearch Service for analysis. This approach provides near real-time streaming of logs to the analytics service.

Remember the two key requirements: near real time, and capturing information about traffic to and from the network interfaces. As covered earlier, whenever you need information about traffic to and from network interfaces within a VPC, VPC Flow Logs should come to mind, because that is exactly what they provide; every option here uses them, so that alone does not decide the answer. Option A sets up VPC Flow Logs delivering to a CloudWatch Logs log group and then uses Amazon Kinesis Data Streams to move the logs to OpenSearch Service. It is feasible, but Kinesis Data Streams cannot deliver directly to OpenSearch Service, so you would have to build and manage your own consumer, which introduces unnecessary complexity. Options C and D configure VPC Flow Logs to send data to a CloudTrail trail; CloudTrail logs API activity in the account and is not a destination for flow logs, so neither option captures the detailed network traffic information required. That leaves option B, which looks like option A but uses Kinesis Data Firehose instead of Data Streams. Firehose is a managed delivery service that can stream data directly to destinations such as OpenSearch Service in near real time, which makes it the better solution for this scenario.
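A boto3 sketch of the plumbing in option B; the VPC ID, log group, IAM role ARNs, and Firehose stream are all hypothetical, and the Firehose delivery stream is assumed to already be configured with OpenSearch Service as its destination.

```python
import boto3

ec2 = boto3.client("ec2")
logs = boto3.client("logs")

# 1. Capture traffic to and from network interfaces in the VPC into a
#    CloudWatch Logs log group.
ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],  # hypothetical VPC
    ResourceType="VPC",
    TrafficType="ALL",
    LogDestinationType="cloud-watch-logs",
    LogGroupName="/vpc/flow-logs",
    DeliverLogsPermissionArn="arn:aws:iam::123456789012:role/FlowLogsRole",
    MaxAggregationInterval=60,  # 1-minute aggregation for near real time
)

# 2. Stream the log group to a Kinesis Data Firehose delivery stream that is
#    assumed to deliver to the OpenSearch Service domain.
logs.put_subscription_filter(
    logGroupName="/vpc/flow-logs",
    filterName="to-opensearch",
    filterPattern="",  # forward everything
    destinationArn="arn:aws:firehose:us-east-1:123456789012:deliverystream/vpc-logs",
    roleArn="arn:aws:iam::123456789012:role/CWLtoFirehoseRole",
)
```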

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
25
Q

693 Question #: 677
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company is developing an application that will run on a production Amazon Elastic Kubernetes Service (Amazon EKS) cluster. The EKS cluster has managed node groups that are provisioned with On-Demand Instances.

The company needs a dedicated EKS cluster for development work. The company will use the development cluster infrequently to test the resiliency of the application. The EKS cluster must manage all the nodes.

Which solution will meet these requirements MOST cost-effectively?

A. Create a managed node group that contains only Spot Instances.
B. Create two managed node groups. Provision one node group with On-Demand Instances. Provision the second node group with Spot Instances.
C. Create an Auto Scaling group that has a launch configuration that uses Spot Instances. Configure the user data to add the nodes to the EKS cluster.
D. Create a managed node group that contains only On-Demand Instances.

A

B. Create two managed node groups. Provision one node group with On-Demand Instances. Provision the second node group with Spot Instances.

The keywords are "infrequent" and "resiliency". This solution allows you to have a mix of On-Demand Instances and Spot Instances within the same EKS cluster. You can use the On-Demand Instances for the development work where you need dedicated resources and then leverage Spot Instances for testing the resiliency of the application. Spot Instances are generally more cost-effective but can be terminated with short notice, so using a combination of On-Demand and Spot Instances provides a balance between cost savings and stability. Option A (create a managed node group that contains only Spot Instances) might be cheaper, but it could leave the cluster without any capacity when Spot capacity is reclaimed, so it is not a good fit for a dedicated development cluster.

The company needs a dedicated EKS cluster for development, it will use the cluster infrequently to test the resiliency of the application, the EKS cluster must manage all the nodes, and the solution must be the most cost-effective. Option A creates a managed node group that contains only Spot Instances; that is cheaper, but Spot capacity can be interrupted or unavailable, and if every node is Spot there may be times when the development cluster has no nodes at all to work with. Option C uses an Auto Scaling group with a launch configuration and user data to join nodes to the cluster; those nodes are self-managed rather than managed by EKS, which violates the requirement that the cluster manage all the nodes. Option D uses a managed node group of only On-Demand Instances, which is the most expensive choice. That leaves option B: two managed node groups, one with On-Demand Instances and one with Spot Instances. If Spot capacity disappears, the cluster still has the On-Demand nodes, so development work is never completely blocked, while the Spot node group keeps costs down for the infrequent resiliency testing. On-Demand Instances provide stability and predictable performance, Spot Instances offer cost savings with the trade-off of potential termination at short notice, and together they make option B the most cost-effective solution that aligns with the requirements.
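A boto3 sketch of option B, creating one On-Demand and one Spot managed node group on a hypothetical development cluster; subnets, role ARN, instance types, and sizes are placeholders.

```python
import boto3

eks = boto3.client("eks")

common = {
    "clusterName": "dev-cluster",                              # hypothetical
    "subnets": ["subnet-aaa", "subnet-bbb"],                   # placeholders
    "nodeRole": "arn:aws:iam::123456789012:role/EKSNodeRole",  # assumed to exist
    "instanceTypes": ["m5.large"],
}

# A small On-Demand node group keeps the cluster usable even when Spot
# capacity is interrupted.
eks.create_nodegroup(
    nodegroupName="dev-on-demand",
    capacityType="ON_DEMAND",
    scalingConfig={"minSize": 1, "maxSize": 2, "desiredSize": 1},
    **common,
)

# A Spot node group provides cheap capacity for the infrequent resiliency tests.
eks.create_nodegroup(
    nodegroupName="dev-spot",
    capacityType="SPOT",
    scalingConfig={"minSize": 0, "maxSize": 4, "desiredSize": 2},
    **common,
)
```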

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
26
Q

694 Question #: 678
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company stores sensitive data in Amazon S3. A solutions architect needs to create an encryption solution. The company needs to fully control the ability of users to create, rotate, and disable encryption keys with minimal effort for any data that must be encrypted.

Which solution will meet these requirements?

A. Use default server-side encryption with Amazon S3 managed encryption keys (SSE-S3) to store the sensitive data.
B. Create a customer managed key by using AWS Key Management Service (AWS KMS). Use the new key to encrypt the S3 objects by using server-side encryption with AWS KMS keys (SSE-KMS).
C. Create an AWS managed key by using AWS Key Management Service (AWS KMS). Use the new key to encrypt the S3 objects by using server-side encryption with AWS KMS keys (SSE-KMS).
D. Download S3 objects to an Amazon EC2 instance. Encrypt the objects by using customer managed keys. Upload the encrypted objects back into Amazon S3.

A

B. Create a customer managed key by using AWS Key Management Service (AWS KMS). Use the new key to encrypt the S3 objects by using server-side encryption with AWS KMS keys (SSE-KMS).

This option allows you to create a customer managed key using AWS KMS. With a customer managed key, you have full control over key lifecycle management, including the ability to create, rotate, and disable keys with minimal effort. SSE-KMS also integrates with AWS Identity and Access Management (IAM) for fine-grained access control.

As soon as you see encryption with control over keys, think of AWS KMS. Other encryption options exist, such as SSE-S3, but the requirement here is to create, rotate, and disable keys. With SSE-S3 the keys are created and managed entirely by Amazon S3, so while it does encrypt the data, it does not give you control over key management. Option C uses AWS KMS but with an AWS managed key; AWS managed keys are created, rotated, and managed by KMS itself, so they do not provide the same level of control over key creation, rotation, and disabling as a customer managed key. Option D downloads objects to an EC2 instance, encrypts them manually, and re-uploads them, which is less efficient and more error prone than leveraging KMS server-side encryption. That is why option B is correct: a customer managed KMS key gives you full control over the key lifecycle, including the ability to create, rotate, and disable keys as needed, and using SSE-KMS ensures the S3 objects are encrypted with that key. This provides a secure, managed approach to encrypting sensitive data in Amazon S3.
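A boto3 sketch of option B, assuming a hypothetical bucket name; the customer managed key is created, rotation is enabled, and the bucket's default encryption is set to SSE-KMS with that key.

```python
import boto3

kms = boto3.client("kms")
s3 = boto3.client("s3")

# Create a customer managed key and turn on automatic key rotation.
key = kms.create_key(Description="Key for sensitive S3 data")
key_id = key["KeyMetadata"]["KeyId"]
kms.enable_key_rotation(KeyId=key_id)

# Make SSE-KMS with this key the bucket's default encryption, so every new
# object is encrypted with the customer managed key.
s3.put_bucket_encryption(
    Bucket="sensitive-data-bucket",  # hypothetical bucket
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": key_id,
                }
            }
        ]
    },
)
```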

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
27
Q

695 Question #: 679
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company wants to back up its on-premises virtual machines (VMs) to AWS. The company’s backup solution exports on-premises backups to an Amazon S3 bucket as objects. The S3 backups must be retained for 30 days and must be automatically deleted after 30 days.

Which combination of steps will meet these requirements? (Choose three.)

A. Create an S3 bucket that has S3 Object Lock enabled.
B. Create an S3 bucket that has object versioning enabled.
C. Configure a default retention period of 30 days for the objects.
D. Configure an S3 Lifecycle policy to protect the objects for 30 days.
E. Configure an S3 Lifecycle policy to expire the objects after 30 days.
F. Configure the backup solution to tag the objects with a 30-day retention period

A

A. Create an S3 bucket that has S3 Object Lock enabled.
C. Configure a default retention period of 30 days for the objects.
E. Configure an S3 Lifecycle policy to expire the objects after 30 days.

In theory, E alone would be enough because the objects are "retained for 30 days" without any configuration as long as no one deletes them. But let's assume they want us to prevent deletion. A: Yes, required to prevent deletion. Object Lock requires versioning, so a bucket created with S3 Object Lock enabled also has object versioning enabled; otherwise the bucket could not be created. C: Yes, the "default retention period" specifies how long Object Lock applies to new objects by default; we need this to protect the objects from deletion.

https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock.html

A. Create an S3 bucket that has S3 Object Lock enabled. Enable the S3 Object Lock feature on S3. C. Configure a default retention period of 30 days for the objects. To lock the objects for 30 days. E. Configure an S3 Lifecycle policy to expire the objects after 30 days. -> to delete the objects after 30 days.

The requirements are that the backups must be retained for 30 days and automatically deleted after 30 days. Eliminating the wrong answers first: option B, object versioning, is not needed to meet these specific requirements; versioning is about keeping multiple versions of an object and does not by itself implement retention or deletion policies. Option D, a lifecycle policy to "protect" the objects for 30 days, does not match how lifecycle policies work; they transition or expire objects, and the goal here is to retain and then delete, not merely protect. Option F, tagging objects with a 30-day retention period, can be useful for organizational purposes but does not by itself enforce retention or deletion. The three correct steps are: option A, create the bucket with S3 Object Lock enabled, so objects cannot be deleted or modified during a specified retention period, which satisfies the 30-day retention requirement; option C, configure a default retention period of 30 days, so every new object in the bucket is locked for 30 days; and option E, configure a lifecycle policy that expires the objects after 30 days, which deletes the data automatically once the retention period has passed.
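The three chosen steps map to three API calls; the sketch below uses boto3 with a hypothetical bucket name and assumes governance-mode retention.

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "vm-backup-bucket"  # hypothetical bucket name

# A. Object Lock can only be enabled at bucket creation (this also turns on
#    versioning, which Object Lock requires).
s3.create_bucket(Bucket=BUCKET, ObjectLockEnabledForBucket=True)

# C. Lock every new object for 30 days by default.
s3.put_object_lock_configuration(
    Bucket=BUCKET,
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "GOVERNANCE", "Days": 30}},
    },
)

# E. Expire (delete) objects automatically once they are 30 days old.
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-after-30-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "Expiration": {"Days": 30},
            }
        ]
    },
)
```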

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
28
Q

696 Question #: 680
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A solutions architect needs to copy files from an Amazon S3 bucket to an Amazon Elastic File System (Amazon EFS) file system and another S3 bucket. The files must be copied continuously. New files are added to the original S3 bucket consistently. The copied files should be overwritten only if the source file changes.

Which solution will meet these requirements with the LEAST operational overhead?

A. Create an AWS DataSync location for both the destination S3 bucket and the EFS file system. Create a task for the destination S3 bucket and the EFS file system. Set the transfer mode to transfer only data that has changed.
B. Create an AWS Lambda function. Mount the file system to the function. Set up an S3 event notification to invoke the function when files are created and changed in Amazon S3. Configure the function to copy files to the file system and the destination S3 bucket.
C. Create an AWS DataSync location for both the destination S3 bucket and the EFS file system. Create a task for the destination S3 bucket and the EFS file system. Set the transfer mode to transfer all data.
D. Launch an Amazon EC2 instance in the same VPC as the file system. Mount the file system. Create a script to routinely synchronize all objects that changed in the origin S3 bucket to the destination S3 bucket and the mounted file system.

A

A. Create an AWS DataSync location for both the destination S3 bucket and the EFS file system. Create a task for the destination S3 bucket and the EFS file system. Set the transfer mode to transfer only data that has changed.

A fulfils the requirement of “copied files should be overwritten only if the source file changes” so A is correct.

Transfer only data that has changed – DataSync copies only the data and metadata that differs between the source and destination location. Transfer all data – DataSync copies everything in the source to the destination without comparing differences between the locations. https://docs.aws.amazon.com/datasync/latest/userguide/configure-metadata.html
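A boto3 sketch of option A's task configuration; the location ARNs are hypothetical and are assumed to have been created already for the source bucket, the destination bucket, and the EFS file system (a DataSync task has exactly one source and one destination, so each destination gets its own task).

```python
import boto3

datasync = boto3.client("datasync")

# TransferMode=CHANGED copies only data that differs between source and
# destination, so copied files are overwritten only when the source changes.
for name, destination_arn in [
    ("to-destination-s3", "arn:aws:datasync:us-east-1:123456789012:location/loc-dst-s3"),
    ("to-efs", "arn:aws:datasync:us-east-1:123456789012:location/loc-dst-efs"),
]:
    datasync.create_task(
        Name=name,
        SourceLocationArn="arn:aws:datasync:us-east-1:123456789012:location/loc-src-s3",
        DestinationLocationArn=destination_arn,
        Options={"TransferMode": "CHANGED", "OverwriteMode": "ALWAYS"},
    )
```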

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
29
Q

602# A company has five organizational units (OUs) as part of its organization in AWS Organizations. Each OU correlates with the five businesses the company owns. The company’s research and development (R&D) business is being separated from the company and will need its own organization. A solutions architect creates a new, separate management account for this purpose. What should the solutions architect do next in the new management account?

A. Make the AWS R&D account part of both organizations during the transition.
B. Invite the AWS R&D account to be part of the new organization after the R&D AWS account has left the old organization.
C. Create a new AWS R&D account in the new organization. Migrate resources from the old AWS R&D account to the new AWS R&D account.
D. Have the AWS R&D account join the new organization. Make the new management account a member of the old organization.

A

B. Invite the AWS R&D account to be part of the new organization after the R&D AWS account has left the old organization.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
30
Q

603# A company is designing a solution to capture customer activity across different web applications to process analytics and make predictions. Customer activity in web applications is unpredictable and can increase suddenly. The company needs a solution that integrates with other web applications. The solution must include an authorization step for security reasons. What solution will meet these requirements?
A. Configure a Gateway Load Balancer (GWLB) in front of an Amazon Elastic Container Service (Amazon ECS) container instance that stores the information the business receives on an Amazon Elastic File System (Amazon EFS) file system. Authorization is resolved in the GWLB.
B. Configure an Amazon API Gateway endpoint in front of an Amazon Kinesis data stream that stores the information the business receives in an Amazon S3 bucket. Use an AWS Lambda function to resolve authorization.
C. Configure an Amazon API Gateway endpoint in front of an Amazon Kinesis Data Firehose that stores the information the business receives in an Amazon S3 bucket. Use an API Gateway Lambda authorizer to resolve authorization.
D. Configure a Gateway Load Balancer (GWLB) in front of an Amazon Elastic Container Service (Amazon ECS) container instance that stores the information the business receives on an Amazon Elastic File System (Amazon EFS) file system. Use an AWS Lambda function to resolve authorization.

A

C. Configure an Amazon API Gateway endpoint in front of an Amazon Kinesis Data Firehose that stores the information the business receives in an Amazon S3 bucket. Use an API Gateway Lambda authorizer to resolve authorization.

Amazon API Gateway: Acts as a managed service to create, publish, and secure APIs at scale. Allows the creation of API endpoints that can be integrated with other web applications. Amazon Kinesis Data Firehose: Used to capture and upload streaming data to other AWS services. In this case, you can store the information in an Amazon S3 bucket. API Gateway Lambda Authorizer: Provides a way to control access to your APIs using Lambda functions. Allows you to implement custom authorization logic. This solution offers scalability, the ability to handle unpredictable surges in activity, and integration capabilities. Using a Lambda API Gateway authorizer ensures that the authorization step is performed securely.

The other options have some limitations or are less aligned with the specified requirements:
A. GWLB with Amazon ECS: this option puts a Gateway Load Balancer in front of ECS with EFS storage, which introduces unnecessary complexity for the given requirements.
B. Amazon API Gateway with an Amazon Kinesis data stream: this option lacks an API Gateway Lambda authorizer to resolve authorization, and a Kinesis data stream does not store data in Amazon S3 by itself, so it is not as easy to integrate with other web applications.
D. GWLB with Amazon ECS and a Lambda function for authorization: similar to option A, this introduces a load balancer and ECS, which is more complex than necessary for the given requirements. In summary, option C offers a streamlined solution with the necessary scalability, integration capabilities, and authorization control.
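For the authorization piece of option C, a Lambda authorizer is just a function that returns an IAM policy; the sketch below is a minimal token-based authorizer, with the shared-secret check standing in for whatever real validation (for example, verifying a JWT) the company would use.

```python
# Minimal API Gateway Lambda (token) authorizer sketch.
def handler(event, context):
    token = event.get("authorizationToken", "")
    # Placeholder check; replace with real token validation.
    effect = "Allow" if token == "expected-shared-secret" else "Deny"
    return {
        "principalId": "clickstream-client",
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Action": "execute-api:Invoke",
                    "Effect": effect,
                    "Resource": event["methodArn"],
                }
            ],
        },
    }
```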

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
31
Q

604# An e-commerce company wants a disaster recovery solution for its Amazon RDS DB instances running Microsoft SQL Server Enterprise Edition. The company’s current recovery point objective (RPO) and recovery time objective (RTO) are 24 hours. Which solution will meet these requirements in the MOST cost-effective way?

A. Create a cross-region read replica and promote the read replica to the primary instance.
B. Use AWS Database Migration Service (AWS DMS) to create RDS replication between regions.
C. Use 24-hour cross-region replication to copy native backups to an Amazon S3 bucket.
D. Copy automatic snapshots to another region every 24 hours.

A

D. Copy automatic snapshots to another region every 24 hours

Amazon RDS creates and saves automated backups of your DB instance or Multi-AZ DB cluster during your DB instance backup window. RDS creates a storage volume snapshot of your database instance, backing up the entire database instance and not just individual databases. RDS saves automated backups of your DB instance according to the backup retention period that you specify. If necessary, you can recover your DB instance at any time during the backup retention period. Copying these automated snapshots to another Region every 24 hours satisfies the 24-hour RPO and RTO at far lower cost than running a cross-Region read replica (option A) or continuous DMS replication (option B).
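A boto3 sketch of option D, assuming the copy runs in the destination Region (for example from a daily scheduled Lambda); the identifiers and Regions are placeholders.

```python
import boto3

# Run in the disaster-recovery Region; the source snapshot ARN comes from the
# primary Region's automated backups.
rds_dr = boto3.client("rds", region_name="us-west-2")

rds_dr.copy_db_snapshot(
    SourceDBSnapshotIdentifier=(
        "arn:aws:rds:us-east-1:123456789012:snapshot:rds:payments-db-2024-01-01-00-05"
    ),  # hypothetical automated snapshot ARN
    TargetDBSnapshotIdentifier="payments-db-dr-copy-2024-01-01",
    SourceRegion="us-east-1",  # lets boto3 handle the cross-Region copy request
)
```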

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
32
Q

605# A company runs a web application on Amazon EC2 instances in an auto-scaling group behind an application load balancer that has sticky sessions enabled. The web server currently hosts the user’s session state. The company wants to ensure high availability and prevent the loss of user session state in the event of a web server outage. What solution will meet these requirements?

A. Use an Amazon ElastiCache for Memcached instance to store session data. Update the application to use ElastiCache for Memcached to store session state.
B. Use Amazon ElastiCache for Redis to store session state. Update the application to use ElastiCache for Redis to store session state.
C. Use an AWS Storage Gateway cached volume to store session data. Update the application to use the AWS Storage Gateway cached volume to store session state.
D. Use Amazon RDS to store session state. Update your application to use Amazon RDS to store session state.

A

B. Use Amazon ElastiCache for Redis to store session state. Update the application to use ElastiCache for Redis to store session state.

In summary, option B (Amazon ElastiCache for Redis) is a common and effective solution for maintaining user session state in a web application, providing high availability and preventing loss of session state during web server outages.

Amazon ElastiCache for Redis: Redis is an in-memory data store that can be used to store session data. It offers high availability and persistence options, making it suitable for maintaining session state. Sticky sessions and Auto Scaling group: using ElastiCache for Redis enables centralized storage of session state, ensuring that sessions can still be maintained even if an EC2 instance becomes unavailable or is replaced due to automatic scaling.
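A small sketch of what "store session state in Redis" means for the application code, using the redis-py client; the endpoint, key layout, and TTL are hypothetical.

```python
import json
import redis

# ElastiCache for Redis primary endpoint (hypothetical).
r = redis.Redis(host="my-sessions.abc123.ng.0001.use1.cache.amazonaws.com", port=6379)

def save_session(session_id: str, state: dict, ttl_seconds: int = 3600) -> None:
    # Any web server behind the ALB can write the session; it survives the
    # loss of the EC2 instance that created it.
    r.setex(f"session:{session_id}", ttl_seconds, json.dumps(state))

def load_session(session_id: str) -> dict | None:
    raw = r.get(f"session:{session_id}")
    return json.loads(raw) if raw else None
```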

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
33
Q

606# A company migrated a MySQL database from the company’s on-premises data center to an Amazon RDS for MySQL DB instance. The company sized the RDS database instance to meet the company’s average daily workload. Once a month, the database runs slowly when the company runs queries for a report. The company wants the ability to run reports and maintain performance of daily workloads. What solution will meet these requirements?

A. Create a read replica of the database. Direct queries to the read replica.
B. Create a backup of the database. Restore the backup to another database instance. Direct queries to the new database.
C. Export the data to Amazon S3. Use Amazon Athena to query the S3 bucket.
D. Resize the database instance to accommodate the additional workload.

A

A. Create a read replica of the database. Direct queries to the read replica.

This is the most cost-effective solution because it does not require migrating the data to another service or permanently resizing the instance. A read replica is a copy of the database that is kept in sync with the primary. Report queries can be routed to the read replica, so they do not impact the performance of the daily workload on the primary instance.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
34
Q

607# A company runs a container application using Amazon Elastic Kubernetes Service (Amazon EKS). The application includes microservices that manage customers and place orders. The business needs to direct incoming requests to the appropriate microservices. Which solution will meet this requirement in the MOST cost-effective way?

A. Use the AWS Load Balancer Controller to provision a Network Load Balancer.
B. Use the AWS Load Balancer Controller to provision an application load balancer.
C. Use an AWS Lambda function to connect requests to Amazon EKS.
D. Use Amazon API Gateway to connect requests to Amazon EKS.

A

D. Use Amazon API Gateway to connect requests to Amazon EKS.

You are charged for each hour or partial hour that an Application Load Balancer is running, and for the number of load balancer capacity units (LCUs) used per hour. With Amazon API Gateway, you only pay when your APIs are in use. https://aws.amazon.com/blogs/containers/integrate-amazon-api-gateway-with-amazon-eks/

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
35
Q

608# A company uses AWS and sells access to copyrighted images. The company’s global customer base needs to be able to access these images quickly. The company must deny access to users from specific countries. The company wants to minimize costs as much as possible. What solution will meet these requirements?

A. Use Amazon S3 to store images. Enable multi-factor authentication (MFA) and public access to the bucket. Provide clients with a link to the S3 bucket.
B. Use Amazon S3 to store images. Create an IAM user for each customer. Add users to a group that has permission to access the S3 bucket.
C. Use the Amazon EC2 instances that are behind the application load balancers (ALBs) to store the images. Deploy instances only to countries where your company serves. Provide customers with links to ALBs for their country-specific instances.
D. Use Amazon S3 to store images. Use Amazon CloudFront to distribute geo-restricted images. Provide a signed URL for each client to access data in CloudFront.

A

D. Use Amazon S3 to store images. Use Amazon CloudFront to distribute geo-restricted images. Provide a signed URL for each client to access data in CloudFront.

In summary, option D (use Amazon S3 to store the images, use Amazon CloudFront to distribute the geo-restricted images, and provide a signed URL for each client to access the data in CloudFront) is the most appropriate solution to meet the specified requirements.

Amazon S3 for storage: Amazon S3 is used to store the images. It provides scalable, durable, low-latency storage for images.

Amazon CloudFront for content delivery: CloudFront is used as a content delivery network (CDN) to distribute images globally. This reduces latency and ensures fast access for customers around the world. Geo restrictions in CloudFront: CloudFront supports geo restrictions, allowing the company to deny access to users from specific countries. This satisfies the requirement of controlling access based on the user's location.

Signed URLs for secure access: Signed URLs are provided to clients for secure access to images. This ensures that only authorized customers can access the content.

Cost Minimization: CloudFront is a cost-effective solution for content delivery, and can significantly reduce data transfer costs by serving content from edge locations close to end users.
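Generating the per-customer signed URLs from option D can be sketched with botocore's CloudFrontSigner; the key pair ID, private key file, and distribution domain are placeholders, and geo restriction itself is configured on the distribution rather than in this code.

```python
import datetime

from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

KEY_PAIR_ID = "KXXXXXXXXXXXXX"  # placeholder CloudFront public key ID

def rsa_signer(message: bytes) -> bytes:
    # Sign with the private key that matches the CloudFront public key.
    with open("cloudfront_private_key.pem", "rb") as f:
        private_key = serialization.load_pem_private_key(f.read(), password=None)
    return private_key.sign(message, padding.PKCS1v15(), hashes.SHA1())

signer = CloudFrontSigner(KEY_PAIR_ID, rsa_signer)

# URL valid for 1 hour; after that the customer must request a new one.
signed_url = signer.generate_presigned_url(
    "https://d111111abcdef8.cloudfront.net/images/photo.jpg",  # placeholder
    date_less_than=datetime.datetime.utcnow() + datetime.timedelta(hours=1),
)
print(signed_url)
```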

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
36
Q

609# A solutions architect is designing a highly available solution based on Amazon ElastiCache for Redis. The solutions architect must ensure that failures do not result in performance degradation or data loss, locally and within an AWS Region. The solution must provide high availability at the node level and at the Region level. What solution will meet these requirements?

A. Use Multi-AZ Redis replication groups with shards containing multiple nodes.
B. Use Redis shards containing multiple nodes with Redis append-only files (AOF) enabled.
C. Use a Multi-AZ Redis cluster with more than one read replica in the replication group.
D. Use Redis shards containing multiple nodes with auto-scaling enabled.

A

A. Use Multi-AZ Redis replication groups with shards containing multiple nodes.

In summary, option A (Use Multi-AZ Redis Replication Groups with shards containing multiple nodes) is the most appropriate option to achieve high availability at both the node level and the AWS Region level in Amazon ElastiCache for Redis.

Multi-AZ Redis Replication Groups: Amazon ElastiCache provides Multi-AZ support for Redis, allowing the creation of replication groups that span multiple availability zones (AZs) within a region. This guarantees high availability at a regional level.

Shards with multiple nodes: shards within the replication group can contain multiple nodes (a primary plus replicas), providing scalability and redundancy at the node level. This contributes to high availability and performance.
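A boto3 sketch of option A: a Redis replication group with two shards, replicas in each shard, Multi-AZ, and automatic failover enabled; the IDs and node type are placeholders.

```python
import boto3

elasticache = boto3.client("elasticache")

elasticache.create_replication_group(
    ReplicationGroupId="redis-ha",  # placeholder ID
    ReplicationGroupDescription="Multi-AZ Redis with sharded node groups",
    Engine="redis",
    CacheNodeType="cache.r6g.large",  # assumed sizing
    NumNodeGroups=2,          # shards
    ReplicasPerNodeGroup=2,   # replicas per shard, spread across AZs
    MultiAZEnabled=True,
    AutomaticFailoverEnabled=True,
)
```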

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
37
Q

610# A company plans to migrate to AWS and use Amazon EC2 on-demand instances for its application. During the migration testing phase, a technical team observes that the application takes a long time to start and load memory to be fully productive. Which solution will reduce the app launch time during the next testing phase?

A. Start two or more EC2 instances on demand. Enable auto-scaling features and make EC2 on-demand instances available during the next testing phase.
B. Start EC2 Spot Instances to support the application and scale the application to be available during the next testing phase.
C. Start the EC2 on-demand instances with hibernation enabled. Configure EC2 Auto Scaling warm pools during the next testing phase.
D. Start EC2 on-demand instances with capacity reservations. Start additional EC2 instances during the next testing phase.

A

C. Start the EC2 on-demand instances with hibernation enabled. Configure EC2 Auto Scaling warm pools during the next testing phase.

In summary, option C (start EC2 on-demand instances with hibernation enabled and configure EC2 Auto Scaling warm pools during the next testing phase) addresses the launch-time problem by using hibernation together with warm pools, so instances resume with their memory already loaded and respond faster.

EC2 On-Demand Instances with Hibernation: Hibernation allows EC2 instances to persist their in-memory state to Amazon EBS. When an instance is hibernated, it can quickly resume with its previous memory state intact. This is particularly useful for reducing startup time and loading memory quickly.

EC2 Auto Scaling Warm Pools: warm pools keep a set of pre-initialized instances (stopped, hibernated, or running) alongside the Auto Scaling group. Because these instances have already booted and loaded the application, they can join the group and respond quickly when demand increases, which reduces the time it takes for an instance to become fully productive.
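A boto3 sketch combining the two halves of option C; the AMI, instance type, and Auto Scaling group name are placeholders, and hibernation assumes an AMI/instance type that supports it plus an encrypted root volume large enough to hold RAM contents.

```python
import boto3

ec2 = boto3.client("ec2")
autoscaling = boto3.client("autoscaling")

# Launch an instance with hibernation enabled (requires an encrypted root volume).
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    HibernationOptions={"Configured": True},
    BlockDeviceMappings=[
        {"DeviceName": "/dev/xvda", "Ebs": {"VolumeSize": 50, "Encrypted": True}}
    ],
)

# Keep a pool of hibernated, pre-initialized instances for the Auto Scaling
# group so scale-out events resume instances instead of cold-starting them.
autoscaling.put_warm_pool(
    AutoScalingGroupName="app-asg",  # placeholder ASG name
    PoolState="Hibernated",
    MinSize=2,
)
```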

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
38
Q

611# An enterprise’s applications run on Amazon EC2 instances in auto-scaling groups. The company notices that its apps experience traffic spikes on random days of the week. The company wants to maintain application performance during traffic surges. Which solution will meet these requirements in the MOST cost-effective way?

A. Use manual scaling to change the size of the auto-scaling group.
B. Use predictive scaling to change the size of the Auto Scaling group.
C. Use dynamic scaling to change the size of the Auto Scaling group.
D. Use scheduled scaling to change the size of the auto-scaling group.

A

C. Use dynamic scaling to change the size of the Auto Scaling group.

Dynamic scaling: dynamic scaling adjusts the size of the Auto Scaling group in response to changing demand. It allows the group to automatically increase or decrease the number of instances based on defined policies, which is well suited to unpredictable traffic surges because the group scales out and in as needed, without manual intervention or a fixed schedule.
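Dynamic scaling is typically expressed as a target tracking policy; the boto3 sketch below keeps average CPU near 50% for a hypothetical Auto Scaling group.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Target tracking dynamic scaling: add or remove instances automatically to
# hold the group's average CPU utilization around 50%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",  # hypothetical group name
    PolicyName="cpu-target-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```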

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
39
Q

612# An e-commerce application uses a PostgreSQL database running on an Amazon EC2 instance. During a monthly sales event, database usage increases and causes database connection issues for the application. Traffic is unpredictable for subsequent monthly sales events, which affects sales forecasting. The business needs to maintain performance when there is an unpredictable increase in traffic. Which solution solves this problem in the MOST cost-effective way?

A. Migrate the PostgreSQL database to Amazon Aurora Serverless v2.
B. Enable PostgreSQL database auto-scaling on the EC2 instance to accommodate increased usage.
C. Migrate the PostgreSQL database to Amazon RDS for PostgreSQL with a larger instance type.
D. Migrate the PostgreSQL database to Amazon Redshift to accommodate increased usage.

A

A. Migrate the PostgreSQL database to Amazon Aurora Serverless v2.

Amazon Aurora Serverless v2: Aurora Serverless v2 is designed for variable and unpredictable workloads. Automatically adjusts database capacity based on actual usage, allowing you to scale down during low demand periods and scale up during peak periods. This ensures that the application can handle increased traffic without manual intervention.
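A boto3 sketch of the Aurora Serverless v2 target in option A; identifiers, credential handling, and the capacity range are placeholders.

```python
import boto3

rds = boto3.client("rds")

# Aurora PostgreSQL cluster that scales between 0.5 and 16 ACUs with demand.
rds.create_db_cluster(
    DBClusterIdentifier="ecommerce-aurora",  # placeholder
    Engine="aurora-postgresql",
    MasterUsername="appadmin",
    ManageMasterUserPassword=True,  # let RDS manage the master password secret
    ServerlessV2ScalingConfiguration={"MinCapacity": 0.5, "MaxCapacity": 16},
)

# Serverless v2 capacity is attached via a db.serverless instance in the cluster.
rds.create_db_instance(
    DBInstanceIdentifier="ecommerce-aurora-writer",
    DBClusterIdentifier="ecommerce-aurora",
    Engine="aurora-postgresql",
    DBInstanceClass="db.serverless",
)
```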

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
40
Q

613# A company hosts an internal serverless application on AWS using Amazon API Gateway and AWS Lambda. Company employees report issues with high latency when they start using the app every day. The company wants to reduce latency. What solution will meet these requirements?

A. Increase the API gateway throttling limit.
B. Configure scheduled scaling to increase Lambda-provisioned concurrency before employees start using the application each day.
C. Create an Amazon CloudWatch alarm to start a Lambda function as a target for the alarm at the beginning of each day.
D. Increase the memory of the Lambda function.

A

B. Configure scheduled scaling to increase Lambda-provisioned concurrency before employees start using the application each day.

Scheduled scaling for provisioned concurrency: Provisioned concurrency ensures that a specified number of function instances are available and hot to handle requests. By configuring scheduled scaling to increase provisioned concurrency ahead of anticipated maximum usage each day, you ensure that there are enough warm instances to handle incoming requests, reducing cold starts and latency.
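Scheduled scaling of provisioned concurrency goes through Application Auto Scaling; the sketch below warms a hypothetical function alias before the workday and releases the capacity in the evening. The function name, alias, schedule, and capacity numbers are placeholders.

```python
import boto3

aas = boto3.client("application-autoscaling")

RESOURCE_ID = "function:internal-app:prod"  # hypothetical function:alias

aas.register_scalable_target(
    ServiceNamespace="lambda",
    ResourceId=RESOURCE_ID,
    ScalableDimension="lambda:function:ProvisionedConcurrency",
    MinCapacity=0,
    MaxCapacity=100,
)

# Warm up before employees start at 08:00 UTC on weekdays...
aas.put_scheduled_action(
    ServiceNamespace="lambda",
    ScheduledActionName="warm-up-morning",
    ResourceId=RESOURCE_ID,
    ScalableDimension="lambda:function:ProvisionedConcurrency",
    Schedule="cron(0 8 ? * MON-FRI *)",
    ScalableTargetAction={"MinCapacity": 50, "MaxCapacity": 50},
)

# ...and release the provisioned concurrency in the evening to save cost.
aas.put_scheduled_action(
    ServiceNamespace="lambda",
    ScheduledActionName="scale-down-evening",
    ResourceId=RESOURCE_ID,
    ScalableDimension="lambda:function:ProvisionedConcurrency",
    Schedule="cron(0 18 ? * MON-FRI *)",
    ScalableTargetAction={"MinCapacity": 0, "MaxCapacity": 0},
)
```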

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
41
Q

614# A research company uses on-premises devices to generate data for analysis. The company wants to use the AWS Cloud to analyze the data. The devices generate .csv files and support writing the data to an SMB file share. Business analysts must be able to use SQL commands to query the data. The analysts will run queries periodically throughout the day. What combination of steps will meet these requirements in the MOST cost-effective way? (Choose three.)

A. Deploy an on-premises AWS Storage Gateway in Amazon S3 File Gateway mode.
B. Deploy an on-premises AWS Storage Gateway in Amazon FSx File Gateway mode.
C. Configure an AWS Glue crawler to create a table based on the data that is in Amazon S3.
D. Set up an Amazon EMR cluster with EMR File System (EMRFS) to query data that is in Amazon S3. Provide access to analysts.
E. Set up an Amazon Redshift cluster to query data in Amazon S3. Provide access to analysts.
F. Configure Amazon Athena to query data that is in Amazon S3. Provide access to analysts.

A

MORE FOLLOW UP NEEDED!

A. Deploy an on-premises AWS Storage Gateway in Amazon S3 File Gateway mode.
C. Configure an AWS Glue crawler to create a table based on the data that is in Amazon S3.
F. Configure Amazon Athena to query data that is in Amazon S3. Provide access to analysts.

Deploy an on-premises AWS Storage Gateway in Amazon S3 File Gateway mode (Option A): This allows on-premises devices to write data to an SMB file share, and the data is stored in Amazon S3. This option provides a scalable and cost-effective way to ingest data into the cloud. Configure an AWS Glue crawler to create a table based on data in Amazon S3 (Option C): AWS Glue can automatically discover the schema of the data in Amazon S3 and create a table in the AWS Glue data catalog. This makes it easier for analysts to query data using SQL commands.

Set up Amazon Athena to query data in Amazon S3 (Option F): Amazon Athena is a serverless query service that allows analysts to run SQL queries directly on data stored in Amazon S3. It is cost-effective because you pay per query, and there is no infrastructure to provision or manage.
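For example (the database, table, and results-bucket names are hypothetical), an analyst's periodic query against the table the Glue crawler created could be run through Athena like this:

```python
import boto3

athena = boto3.client("athena")

# Run a SQL query against the Glue catalog table that points at the S3 data.
response = athena.start_query_execution(
    QueryString="SELECT device_id, AVG(reading) AS avg_reading "
                "FROM research_db.sensor_readings GROUP BY device_id",
    QueryExecutionContext={"Database": "research_db"},
    ResultConfiguration={"OutputLocation": "s3://research-athena-results/"},
)
print(response["QueryExecutionId"])
```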

42
Q

615# A company wants to use Amazon Elastic Container Service (Amazon ECS) clusters and Amazon RDS DB instances to build and run a payment processing application. The company will run the application in its local data center for compliance purposes. A solutions architect wants to use AWS Outposts as part of the solution. The solutions architect is working with the company’s operational team to build the application. What activities are the responsibility of the company’s operational team? (Choose three.)

A. Provide resilient power and network connectivity to the Outposts racks
B. Management of the virtualization hypervisor, storage systems, and AWS services running on the Outposts
C. Physical security and access controls of the Outposts data center environment
D. Availability of Outposts infrastructure, including power supplies, servers, and networking equipment within Outposts racks
E. Physical maintenance of Outposts components
F. Provide additional capacity for Amazon ECS clusters to mitigate server failures and maintenance events

A

A. Provide resilient power and network connectivity to the Outposts racks
C. Physical security and access controls of the Outposts data center environment
F. Provide additional capacity for Amazon ECS clusters to mitigate server failures and maintenance events

From https://docs.aws.amazon.com/whitepapers/latest/aws-outposts-high-availability-design/aws-outposts-high-availability-design.html With Outposts, you are responsible for providing resilient power and network connectivity to the Outpost racks to meet your availability requirements for workloads running on Outposts. You are responsible for the physical security and access controls of the data center environment. You must provide sufficient power, space, and cooling to keep the Outpost operational and network connections to connect the Outpost back to the Region. Since Outpost capacity is finite and determined by the size and number of racks AWS installs at your site, you must decide how much EC2, EBS, and S3 on Outposts capacity you need to run your initial workloads, accommodate future growth, and to provide extra capacity to mitigate server failures and maintenance events.

43
Q

616# A company is planning to migrate a TCP-based application to the company’s VPC. The application is publicly accessible on a non-standard TCP port through a hardware device in the company’s data center. This public endpoint can process up to 3 million requests per second with low latency. The company requires the same level of performance for the new public endpoint on AWS. What should a solutions architect recommend to meet this requirement?

A. Implement a network load balancer (NLB). Configure the NLB to be publicly accessible over the TCP port that the application requires.
B. Implement an application load balancer (ALB). Configure the ALB to be publicly accessible over the TCP port that the application requires.
C. Deploy an Amazon CloudFront distribution that listens on the TCP port that the application requires. Use an application load balancer as the origin.
D. Deploy an Amazon API Gateway API that is configured with the TCP port required by the application. Configure AWS Lambda functions with provisioned concurrency to process requests.

A

A. Implement a network load balancer (NLB). Configure the NLB to be publicly accessible over the TCP port that the application requires.

Network Load Balancer (NLB): NLB is designed to handle TCP traffic with extremely low latency. It is a Layer 4 (TCP/UDP) load balancer that provides high performance and scales horizontally. NLB is suitable for scenarios where low latency and high throughput are critical, making it a good choice for TCP-based applications with strict performance requirements.

Publicly Accessible: NLB can be configured to be publicly accessible, allowing it to accept incoming requests from the Internet.

TCP Port Configuration: NLB allows you to configure it to listen on the specific non-standard TCP port required by the application.

Options B, C, and D are less suitable for these requirements. Application Load Balancer (ALB) (Option B): an ALB is designed for HTTP/HTTPS traffic and operates at the application layer (Layer 7); it adds processing overhead and cannot serve a plain TCP application that does not use HTTP. Amazon CloudFront (Option C): CloudFront is a content delivery network built primarily for HTTP/HTTPS content delivery, so it is not suited to arbitrary TCP traffic. Amazon API Gateway (Option D): API Gateway is designed for RESTful (HTTP) APIs, is not built for arbitrary TCP traffic, and would not sustain the required low latency at 3 million requests per second.

Therefore, NLB is the recommended option to maintain high throughput and low latency for a TCP-based application on a non-standard port.

NLB is able to handle up to tens of millions of requests per second, while providing high performance and low latency. https://aws.amazon.com/blogs/aws/new-network-load-balancer-effortless-scaling-to-millions-of-requests-per-second/

https://aws.amazon.com/elasticloadbalancing/network-load-balancer Network Load Balancer operates at the connection level (Layer 4), routing connections to targets (Amazon EC2 instances, microservices, and containers) within Amazon VPC, based on IP protocol data. Ideal for load balancing of both TCP and UDP traffic, Network Load Balancer is capable of handling millions of requests per second while maintaining ultra-low latencies. Network Load Balancer is optimized to handle sudden and volatile traffic patterns while using a single static IP address per Availability Zone. It is integrated with other popular AWS services such as Auto Scaling, Amazon EC2 Container Service (ECS), Amazon CloudFormation, and AWS Certificate Manager (ACM).
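A condensed sketch of that setup with boto3 (the subnet, VPC, name, and port values are placeholders; the question only says the port is a non-standard TCP port):

```python
import boto3

elbv2 = boto3.client("elbv2")

nlb = elbv2.create_load_balancer(
    Name="tcp-app-nlb",
    Type="network",
    Scheme="internet-facing",
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],
)

tg = elbv2.create_target_group(
    Name="tcp-app-targets",
    Protocol="TCP",
    Port=9443,                        # the application's non-standard TCP port
    VpcId="vpc-0123456789abcdef0",
    TargetType="instance",
)

# Listener on the same TCP port, forwarding straight to the target group.
elbv2.create_listener(
    LoadBalancerArn=nlb["LoadBalancers"][0]["LoadBalancerArn"],
    Protocol="TCP",
    Port=9443,
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": tg["TargetGroups"][0]["TargetGroupArn"],
    }],
)
```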

44
Q

617# A company runs its critical database on an Amazon RDS for PostgreSQL DB instance. The company wants to migrate to Amazon Aurora PostgreSQL with minimal downtime and data loss. Which solution will meet these requirements with the LEAST operational overhead?

A. Create a DB snapshot of the RDS instance for the PostgreSQL database to populate a new Aurora PostgreSQL DB cluster.
B. Create an Aurora read replica of the RDS for PostgreSQL DB instance. Promote the Aurora read replica to a new Aurora PostgreSQL DB cluster.
C. Use data import from Amazon S3 to migrate the database to an Aurora PostgreSQL database cluster.
D. Use the pg_dump utility to backup the RDS for PostgreSQL database. Restore the backup to a new Aurora PostgreSQL database cluster.

A

B. Create an Aurora read replica of the RDS for PostgreSQL DB instance. Promote the Aurora read replica to a new Aurora PostgreSQL DB cluster.

Aurora read replica: Create an Aurora read replica of the existing RDS for PostgreSQL DB instance. The replica is continually updated with changes from the source database.

Promotion: Promote the Aurora read replica to become the primary instance of the new Aurora PostgreSQL DB cluster. Cutover involves minimal downtime because replication keeps the replica in sync with the source RDS for PostgreSQL DB instance until you switch the application over.

Advantages of Option B:
Low Downtime: Read replica can be promoted with minimal downtime, allowing for a smooth transition.

Continuous Replication: Read replication ensures continuous replication of changes from the source database to the Aurora PostgreSQL database cluster.

Operational overhead: This approach minimizes operational overhead compared to the other options because it takes advantage of Aurora's managed replication for a seamless migration, rather than manual dumps or a separate AWS DMS setup.
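A hedged sketch of the replica-then-promote flow with boto3 (identifiers and the source ARN are placeholders; in practice you would wait for replica lag to reach zero before promoting):

```python
import boto3

rds = boto3.client("rds")

# Create an Aurora PostgreSQL cluster that replicates from the RDS instance.
rds.create_db_cluster(
    DBClusterIdentifier="aurora-pg-replica",
    Engine="aurora-postgresql",
    ReplicationSourceIdentifier="arn:aws:rds:us-east-1:111122223333:db:prod-postgres",
)

# Add an instance to the replica cluster so it can serve traffic after cutover.
rds.create_db_instance(
    DBInstanceIdentifier="aurora-pg-replica-1",
    DBClusterIdentifier="aurora-pg-replica",
    Engine="aurora-postgresql",
    DBInstanceClass="db.r6g.large",
)

# Once replication lag is zero and the application is ready, promote the cluster.
rds.promote_read_replica_db_cluster(DBClusterIdentifier="aurora-pg-replica")
```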

45
Q

618# An enterprise’s infrastructure consists of hundreds of Amazon EC2 instances using Amazon Elastic Block Store (Amazon EBS) storage. A solutions architect must ensure that each EC2 instance can be recovered after a disaster. What should the solutions architect do to meet this requirement with the LEAST amount of effort?

A. Take a snapshot of the EBS storage that is attached to each EC2 instance. Create an AWS CloudFormation template to launch new EC2 instances from EBS storage.
B. Take a snapshot of the EBS storage that is attached to each EC2 instance. Use AWS Elastic Beanstalk to establish the environment based on the EC2 template and attach the EBS storage.
C. Use AWS Backup to configure a backup plan for the entire EC2 instance group. Use the AWS Backup API or AWS CLI to speed up the process of restoring multiple EC2 instances.
D. Create an AWS Lambda function to take a snapshot of the EBS storage that is connected to each EC2 instance and copy the Amazon Machine Images (AMIs). Create another Lambda function to perform the restores with the copied AMIs and attach the EBS storage.

A

C. Use AWS Backup to configure a backup plan for the entire EC2 instance group. Use the AWS Backup API or AWS CLI to speed up the process of restoring multiple EC2 instances.

AWS Backup: AWS Backup is a fully managed backup service that centralizes and automates data backup across all AWS services. Supports backup of Amazon EBS volumes and enables efficient backup management.

Backup plan: Create a backup plan in AWS Backup that includes the entire EC2 instance group. This ensures a centralized and consistent backup strategy for all instances.

API or CLI: AWS Backup provides an API and CLI that can be used to automate and speed up the process of restoring multiple EC2 instances. This allows for a simplified disaster recovery process.

Advantages of Option C:
Centralized Management: AWS Backup provides a centralized management interface for backup plans, making it easy to manage and track backups of a large number of resources.

Automation: Using the AWS Backup API or CLI allows automation of backup and restore processes, reducing manual effort.

Consistent backups: AWS Backup ensures consistent and reliable backups of EBS volumes associated with EC2 instances.
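A minimal sketch of such a backup plan with boto3 (the vault name, IAM role ARN, tag key, and schedule are assumptions; restores would later be driven per recovery point with `start_restore_job`):

```python
import boto3

backup = boto3.client("backup")

plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "ec2-fleet-dr",
        "Rules": [{
            "RuleName": "daily-03utc",
            "TargetBackupVaultName": "Default",
            "ScheduleExpression": "cron(0 3 * * ? *)",
            "Lifecycle": {"DeleteAfterDays": 35},
        }],
    }
)

# Select every EC2 instance that carries the (assumed) tag backup=true.
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "tagged-ec2-instances",
        "IamRoleArn": "arn:aws:iam::111122223333:role/service-role/AWSBackupDefaultServiceRole",
        "ListOfTags": [{
            "ConditionType": "STRINGEQUALS",
            "ConditionKey": "backup",
            "ConditionValue": "true",
        }],
    },
)
```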

46
Q

619# A company recently migrated to the AWS cloud. The company wants a serverless solution for large-scale on-demand parallel processing of a semi-structured data set. The data consists of logs, media files, sales transactions, and IoT sensor data that is stored in Amazon S3. The company wants the solution to process thousands of items in the data set in parallel. Which solution will meet these requirements with the MOST operational efficiency?

A. Use the AWS Step Functions Map state in Inline mode to process the data in parallel.
B. Use the AWS Step Functions Map state in Distributed mode to process the data in parallel.
C. Use AWS Glue to process data in parallel.
D. Use multiple AWS Lambda functions to process data in parallel.

A

B. Use the AWS Step Functions Map state in Distributed mode to process the data in parallel.

AWS Step Functions allow you to orchestrate and scale distributed processing using map state. Map state can process elements in a large data set in parallel by distributing work across multiple resources.

Using map state in distributed mode will automatically take care of parallel processing and scaling. Step Functions will add more workers to process the data as needed.

Step Functions is serverless, so there are no servers to manage. It will automatically scale based on demand.

https://docs.aws.amazon.com/step-functions/latest/dg/use-dist-map-orchestrate-large-scale-parallel-workloads.html
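For illustration, a Distributed Map state in the state machine definition (expressed here as a Python dict; the bucket, prefix, and Lambda function name are hypothetical) might look roughly like this:

```python
# Distributed Map state: lists objects from S3 and processes each item in a
# child workflow, fanning out up to the stated concurrency.
distributed_map_state = {
    "Type": "Map",
    "MaxConcurrency": 1000,
    "ItemReader": {
        "Resource": "arn:aws:states:::s3:listObjectsV2",
        "Parameters": {"Bucket": "semi-structured-data", "Prefix": "incoming/"},
    },
    "ItemProcessor": {
        "ProcessorConfig": {"Mode": "DISTRIBUTED", "ExecutionType": "STANDARD"},
        "StartAt": "ProcessItem",
        "States": {
            "ProcessItem": {
                "Type": "Task",
                "Resource": "arn:aws:states:::lambda:invoke",
                "Parameters": {"FunctionName": "process-item", "Payload.$": "$"},
                "End": True,
            }
        },
    },
    "End": True,
}
```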

47
Q

620# A company will migrate 10PB of data to Amazon S3 in 6 weeks. The current data center has a 500 Mbps uplink to the Internet. Other local applications share the uplink. The company can use 80% of the Internet bandwidth for this one-time migration task. What solution will meet these requirements?

A. Configure AWS DataSync to migrate data to Amazon S3 and verify it automatically.
B. Use rsync to transfer data directly to Amazon S3.
C. Use the AWS CLI and multiple copy processes to send data directly to Amazon S3.
D. Order multiple AWS Snowball devices. Copy data to devices. Send the devices to AWS to copy the data to Amazon S3.

A

D. Order multiple AWS Snowball devices. Copy data to devices. Send the devices to AWS to copy the data to Amazon S3.

A 1 Gbps link moves roughly 7 TB in 24 hours once protocol overhead is accounted for, so the 400 Mbps the company can use (80% of the 500 Mbps uplink) moves only about 2.8 TB per day. At that rate, 10 PB would take roughly 510 weeks to transmit, far beyond the 6-week window, so an online transfer (options A, B, and C) is not feasible and Snowball devices are the only workable choice.
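The back-of-the-envelope math (idealized, ignoring protocol overhead, which only makes the online option look better than it really is):

```python
# Feasibility check for transferring 10 PB over 80% of a 500 Mbps uplink.
data_tb = 10 * 1000                                   # 10 PB expressed in TB
usable_mbps = 500 * 0.80                              # 400 Mbps usable
tb_per_day = usable_mbps / 8 / 1_000_000 * 86_400     # Mbps -> MB/s -> TB/day
weeks = data_tb / tb_per_day / 7
print(f"{tb_per_day:.1f} TB/day -> about {weeks:.0f} weeks")
# ~4.3 TB/day even in the best case -> roughly 330+ weeks, nowhere near 6 weeks.
```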

48
Q

621# A company has several on-premises Internet Small Computer Systems Interface (iSCSI) network storage servers. The company wants to reduce the number of these servers by moving to the AWS cloud. A solutions architect must provide low-latency access to frequently used data and reduce the dependency on on-premises servers with minimal infrastructure changes. What solution will meet these requirements?

A. Deploy an Amazon S3 file gateway.
B. Deploy Amazon Elastic Block Store (Amazon EBS) storage with backups to Amazon S3.
C. Deploy an AWS Storage Gateway volume gateway that is configured with stored volumes.
D. Deploy an AWS Storage Gateway volume gateway that is configured with cached volumes.

A

D. Deploy an AWS Storage Gateway volume gateway that is configured with cached volumes.

AWS Storage Gateway: AWS Storage Gateway is a hybrid cloud storage service that provides seamless, secure integration between on-premises IT environments and AWS storage services. Supports different gateway configurations, including volume gateways.

Cached volumes provide low-latency access to frequently used data because recently accessed data is cached locally on premises, while the entire data set is stored in Amazon S3, ensuring durability and accessibility.

Minimal Changes to Infrastructure:
Using a cached volume gateway minimizes the need for significant changes to existing infrastructure. It allows the company to keep frequently accessed data on-premises while taking advantage of the scalability and durability of Amazon S3.

Incorrect Option C (AWS Storage Gateway volume gateway with stored volumes): Stored volumes keep the entire data set on premises and only back it up asynchronously to Amazon S3, so they do not reduce the dependency on on-premises storage, which is one of the stated requirements.

Therefore, option D, which uses an AWS Storage Gateway volume gateway with cached volumes, is the most appropriate option for the given requirements.

49
Q

622# A solutions architect is designing an application that will allow business users to upload objects to Amazon S3. The solution must maximize the durability of the object. Objects must also be available at any time and for any period of time. Users will access objects frequently within the first 30 days after objects are uploaded, but users are much less likely to access objects older than 30 days. Which solution meets these requirements in the MOST cost-effective way?

A. Store all objects in S3 Standard with an S3 lifecycle rule to transition the objects to S3 Glacier after 30 days.
B. Store all objects in S3 Standard with an S3 lifecycle rule to transition the objects to S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days.
C. Store all objects in S3 Standard with an S3 lifecycle rule to transition objects to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 30 days.
D. Store all objects in S3 Intelligent-Tiering with an S3 lifecycle rule to transition the objects to S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days.

A

B. Store all objects in S3 Standard with an S3 lifecycle rule to transition the objects to S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days.

Before you transition objects to S3 Standard-IA or S3 One Zone-IA, you must store them for at least 30 days in Amazon S3. For example, you cannot create a Lifecycle rule to transition objects to the S3 Standard-IA storage class one day after you create them. Amazon S3 doesn’t support this transition within the first 30 days because newer objects are often accessed more frequently or deleted sooner than is suitable for S3 Standard-IA or S3 One Zone-IA storage. Similarly, if you are transitioning noncurrent objects (in versioned buckets), you can transition only objects that are at least 30 days noncurrent to S3 Standard-IA or S3 One Zone-IA storage. https://docs.aws.amazon.com/AmazonS3/latest/userguide/lifecycle-transition-general-considerations.html
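As a small sketch (the bucket name is hypothetical), the lifecycle rule from option B could be applied with boto3 like this:

```python
import boto3

s3 = boto3.client("s3")

# Transition every object to S3 Standard-IA once it is 30 days old.
s3.put_bucket_lifecycle_configuration(
    Bucket="business-uploads",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "standard-to-ia-after-30-days",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},          # applies to all objects
            "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
        }]
    },
)
```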

50
Q

623# A company has migrated a two-tier application from its on-premises data center to the AWS cloud. The data tier is a multi-AZ implementation of Amazon RDS for Oracle with 12 TB of general-purpose Amazon Elastic Block Store (Amazon EBS) SSD storage. The application is designed to process and store documents in the database as large binary objects (blobs) with an average document size of 6 MB. Database size has grown over time, reducing performance and increasing the cost of storage. The company must improve database performance and needs a solution that is highly available and resilient. Which solution will meet these requirements in the MOST cost-effective way?

A. Reduce the size of the RDS database instance. Increase storage capacity to 24 TiB. Change the storage type to Magnetic.
B. Increase the size of the RDS database instance. Increase the storage capacity to 24 TiB. Change the storage type to Provisioned IOPS.
C. Create an Amazon S3 bucket. Update the app to store documents in S3 bucket. Store the object metadata in the existing database.
D. Create an Amazon DynamoDB table. Update the application to use DynamoDB. Use AWS Database Migration Service (AWS DMS) to migrate data from Oracle database to DynamoDB.

A

C. Create an Amazon S3 bucket. Update the app to store documents in S3 bucket. Store the object metadata in the existing database.

Storing the blobs in the database is more expensive than storing them in Amazon S3 with references kept in the database. DynamoDB limits each item to 400 KB, far smaller than the 6 MB average document, so D is wrong.

Considerations: Storing large objects (blobs) in Amazon S3 is a scalable and cost-effective solution. Storing metadata in the existing database allows you to maintain the necessary information for each document. The load on the RDS instance has been reduced as large objects are stored in S3.

Conclusion: This option is recommended as it leverages the strengths of Amazon S3 and RDS, providing scalability, cost-effectiveness, and maintaining metadata. Option C stands out as the most suitable to address the requirements while taking into account factors such as performance, scalability and cost-effectiveness.

51
Q

624# A company has an application that serves customers in more than 20,000 retail stores around the world. The application consists of backend web services that are exposed over HTTPS on port 443. The application is hosted on Amazon EC2 instances behind an Application Load Balancer (ALB). Point-of-sale devices communicate with the web application over the public Internet. The company allows each retail location to register the public IP address that its local ISP has assigned to that location. The company's security team recommends increasing application endpoint security by restricting access to only the IP addresses registered by the retail locations. What should a solutions architect do to meet these requirements?

A. Associate an AWS WAF web ACL with the ALB. Use IP rule sets in the ALB to filter traffic. Update the IP addresses in the rule to include the registered IP addresses.
B. Deploy AWS Firewall Manager to manage the ALB. Configure firewall rules to restrict traffic to the ALB. Modify the firewall rules to include the registered IP addresses.
C. Store the IP addresses in an Amazon DynamoDB table. Configure an AWS Lambda authorization function in the ALB to validate that incoming requests are from the registered IP addresses.
D. Configure the network ACL on the subnet that contains the ALB public interface. Update the inbound rules in the network ACL with entries for each of the registered IP addresses.

A

A. Associate an AWS WAF web ACL with the ALB. Use IP rule sets in the ALB to filter traffic. Update the IP addresses in the rule to include the registered IP addresses.

AWS Web Application Firewall (WAF) is designed to protect web applications from common web exploits. By associating a WAF web ACL with the ALB, you can configure IP rule sets to filter incoming traffic based on source IP addresses.
Updating the IP addresses in the rule to include registered IP addresses allows you to control and restrict access to only authorized locations. Conclusion: This option provides a secure and scalable solution to restrict web application access based on registered IP addresses.
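A condensed sketch of that configuration with the WAFv2 API via boto3 (the IP ranges, names, and ALB ARN are placeholders; the real IP set would be updated as stores register new addresses):

```python
import boto3

wafv2 = boto3.client("wafv2")

ip_set = wafv2.create_ip_set(
    Name="registered-store-ips",
    Scope="REGIONAL",                      # REGIONAL scope is what ALBs use
    IPAddressVersion="IPV4",
    Addresses=["198.51.100.10/32", "203.0.113.0/24"],
)

acl = wafv2.create_web_acl(
    Name="pos-allow-list",
    Scope="REGIONAL",
    DefaultAction={"Block": {}},           # block anything not explicitly allowed
    Rules=[{
        "Name": "allow-registered-ips",
        "Priority": 0,
        "Statement": {"IPSetReferenceStatement": {"ARN": ip_set["Summary"]["ARN"]}},
        "Action": {"Allow": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "AllowRegisteredIps",
        },
    }],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "PosAllowList",
    },
)

wafv2.associate_web_acl(
    WebACLArn=acl["Summary"]["ARN"],
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:"
                "loadbalancer/app/pos-alb/0123456789abcdef",
)
```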

52
Q

625# A company is building a data analytics platform on AWS using AWS Lake Formation. The platform will ingest data from different sources, such as Amazon S3 and Amazon RDS. The company needs a secure solution to prevent access to parts of the data that contain sensitive information. Which solution will meet these requirements with the LEAST operational overhead?

A. Create an IAM role that includes permissions to access Lake Formation tables.
B. Create data filters to implement row-level security and cell-level security.
C. Create an AWS Lambda function that removes sensitive information before Lake Formation ingests the data.
D. Create an AWS Lambda function that periodically queries and deletes sensitive information from Lake Formation tables.

A

B. Create data filters to implement row-level security and cell-level security.

Data filters in AWS Lake Formation are designed to implement row-level and cell-level security. This option aligns with the requirement to control access at the data level and is an appropriate approach for this scenario.

53
Q

626# A company deploys Amazon EC2 instances running in a VPC. EC2 instances upload source data to Amazon S3 buckets so that the data can be processed in the future. In accordance with compliance laws, data must not be transmitted over the public Internet. Servers in the company’s on-premises data center will consume the output of an application running on the EC2 instances. What solution will meet these requirements?

A. Deploy an interface VPC endpoint for Amazon EC2. Create an AWS Site-to-Site VPN connection between the on-premises network and the VPC.
B. Deploy a gateway VPC endpoint for Amazon S3. Set up an AWS Direct Connect connection between the on-premises network and the VPC.
C. Configure an AWS Transit Gateway connection from the VPC to the S3 buckets. Create an AWS Site-to-Site VPN connection between the on-premises network and the VPC.
D. Configure EC2 proxy instances that have routes to NAT gateways. Configure the EC2 proxy instances to fetch data from S3 and to feed the application instances.

A

B. Deploy a gateway VPC endpoint for Amazon S3. Set up an AWS Direct Connect connection between your on-premises network and the VPC.

Deploy a gateway VPC endpoint for Amazon S3: This allows EC2 instances in the VPC to access Amazon S3 directly without traversing the public Internet. Ensures that data is transmitted securely over the AWS network. Set up an AWS Direct Connect connection: Direct Connect provides a dedicated network connection between the on-premises network and the VPC, ensuring a private, trusted link.
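For example (the VPC, route table, and Region values are placeholders), the gateway endpoint from option B could be created like this:

```python
import boto3

ec2 = boto3.client("ec2")

# Gateway endpoint for S3: adds a route so S3 traffic stays on the AWS network.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0abc12345def67890",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],   # route tables of the private subnets
)
```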

54
Q

627# A company has an application with a REST-based interface that allows it to receive data in near real time from an external provider. Once a request is received, the application processes and stores the data for later analysis. The application runs on Amazon EC2 instances. The third-party provider has received many HTTP 503 Service Unavailable errors when sending data to the application. When the data volume increases, the compute capacity reaches its maximum limit and the application cannot process all requests. What design should a solutions architect recommend to provide a more scalable solution?

A. Use Amazon Kinesis Data Streams to ingest the data. Process data using AWS Lambda functions.
B. Use Amazon API Gateway on top of the existing application. Create a usage plan with a quota limit for the third-party provider.
C. Use Amazon Simple Notification Service (Amazon SNS) to ingest the data. Put the EC2 instances in an auto-scaling group behind an application load balancer.
D. Repackage the application as a container. Deploy the application using Amazon Elastic Container Service (Amazon ECS) with the EC2 launch type and an Auto Scaling group.

A

A. Use Amazon Kinesis Data Streams to ingest the data. Process data using AWS Lambda functions.

Amazon Kinesis Data Streams can handle large volumes of streaming data, providing a scalable and resilient solution. AWS Lambda functions can be triggered by Kinesis Data Streams, allowing the application to process data in near real time. Lambda automatically scales based on the rate of incoming events, ensuring the system can handle spikes in data volume.

Amazon Kinesis Data Streams, for ingesting data and processing it with AWS Lambda functions, is the recommended design for handling near real-time streaming data at scale. Provides the scalability and resilience needed to process large volumes of data.

The keyword is “real time”. Kinesis data streams are meant for real time data processing.

https://aws.amazon.com/about-aws/whats-new/2021/11/amazon-kinesis-data-streams-on-demand/

55
Q

628# A company has an application running on Amazon EC2 instances in a private subnet. The application needs to process sensitive information from an Amazon S3 bucket. The application must not use the Internet to connect to the S3 bucket. What solution will meet these requirements?

A. Configure an Internet gateway. Update the S3 bucket policy to allow access from the Internet gateway. Update the app to use the new Internet gateway.
B. Set up a VPN connection. Update the S3 bucket policy to allow access from the VPN connection. Update the app to use the new VPN connection.
C. Configure a NAT gateway. Update the S3 bucket policy to allow access from the NAT gateway. Update the app to use the new NAT gateway.
D. Configure a VPC endpoint. Update the S3 bucket policy to allow access from the VPC endpoint. Update the application to use the new VPC endpoint.

A

D. Configure a VPC endpoint. Update the S3 bucket policy to allow access from the VPC endpoint. Update the application to use the new VPC endpoint.

A VPC endpoint allows your EC2 instances to connect to services like Amazon S3 directly within the AWS network without traversing the Internet. An Internet gateway or NAT gateway is not required for this solution, ensuring that the application does not use the Internet to connect to the S3 bucket. Improves security by keeping traffic within the AWS network and avoiding exposure to the public Internet.
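To enforce this at the bucket as well, the bucket policy can deny any request that does not arrive through the endpoint. A sketch (the bucket name and endpoint ID are hypothetical):

```python
import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyAccessUnlessFromVpcEndpoint",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::sensitive-source-data",
            "arn:aws:s3:::sensitive-source-data/*",
        ],
        # Rejects requests that do not come through the named VPC endpoint.
        "Condition": {"StringNotEquals": {"aws:sourceVpce": "vpce-0fe1c2d3a4b5c6d7e"}},
    }],
}

boto3.client("s3").put_bucket_policy(
    Bucket="sensitive-source-data", Policy=json.dumps(policy)
)
```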

56
Q

629# A company uses Amazon Elastic Kubernetes Service (Amazon EKS) to run a container application. The EKS cluster stores sensitive information in the Kubernetes secrets object. The company wants to make sure that the information is encrypted. Which solution will meet these requirements with the LEAST operational overhead?

A. Use the container application to encrypt information using AWS Key Management Service (AWS KMS).
B. Enable secret encryption on the EKS cluster using the AWS Key Management Service (AWS KMS).
C. Implement an AWS Lambda function to encrypt information using the AWS Key Management Service (AWS KMS).
D. Use the AWS Systems Manager parameter store to encrypt information using the AWS Key Management Service (AWS KMS).

A

B. Enable secret encryption on the EKS cluster using the AWS Key Management Service (AWS KMS).

Amazon EKS offers the option to encrypt Kubernetes secrets at rest using AWS Key Management Service (AWS KMS). This is a native, managed capability of the EKS service, which keeps operational overhead low. Once envelope encryption is enabled on the cluster with a KMS key, Kubernetes secrets are automatically encrypted with that key, ensuring that sensitive information stored in the secrets is protected.
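A brief sketch of enabling it on an existing cluster with boto3 (the cluster name and KMS key ARN are placeholders):

```python
import boto3

eks = boto3.client("eks")

# Turn on envelope encryption of Kubernetes secrets with a customer managed KMS key.
eks.associate_encryption_config(
    clusterName="payments-cluster",
    encryptionConfig=[{
        "resources": ["secrets"],
        "provider": {
            "keyArn": "arn:aws:kms:us-east-1:111122223333:key/"
                      "11112222-3333-4444-5555-666677778888"
        },
    }],
)
```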

57
Q

630# A company is designing a new multi-tier web application that consists of the following components:
* Web and application servers running on Amazon EC2 instances as part of Auto Scaling groups.
* An Amazon RDS DB instance for data storage.

A solutions architect needs to limit access to application servers so that only web servers can access them. What solution will meet these requirements?

A. Deploy AWS PrivateLink in front of the application servers. Configure the network ACL to allow only web servers to access application servers.
B. Deploy a VPC endpoint in front of the application servers. Configure the security group to allow only web servers to access application servers.
C. Deploy a network load balancer with a target group that contains the auto-scaling group of application servers. Configure the network ACL to allow only web servers to access application servers.
D. Deploy an application load balancer with a target group that contains the auto-scaling group of application servers. Configure the security group to allow only web servers to access application servers.

A

D. Deploy an application load balancer with a target group that contains the auto-scaling group of application servers. Configure the security group to allow only web servers to access application servers.

An Application Load Balancer (ALB) can be used to distribute incoming traffic across multiple Amazon EC2 instances, and the ALB can be configured with a target group that contains the Auto Scaling group of application servers. Security groups control inbound and outbound traffic to instances. By configuring the security group associated with the application servers to allow inbound traffic only from the web servers' security group, you limit access to the web servers alone.
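A small sketch of the key security-group rule (the group IDs and port are placeholders): the application tier accepts traffic only when the source carries the web tier's security group.

```python
import boto3

ec2 = boto3.client("ec2")

# Allow the app-tier security group to accept traffic only from the web-tier SG.
ec2.authorize_security_group_ingress(
    GroupId="sg-0aaa1111appservers",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 8080,
        "ToPort": 8080,
        "UserIdGroupPairs": [{"GroupId": "sg-0bbb2222webservers"}],
    }],
)
```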

58
Q

631# A company runs a critical, customer-facing application on Amazon Elastic Kubernetes Service (Amazon EKS). The application has a microservices architecture. The company needs to implement a solution that collects, aggregates, and summarizes application metrics and logs in a centralized location. Which solution meets these requirements?

A. Run the Amazon CloudWatch agent on the existing EKS cluster. View metrics and logs in the CloudWatch console.
B. Run AWS App Mesh on the existing EKS cluster. View metrics and logs in the App Mesh console.
C. Configure AWS CloudTrail to capture data events. Query CloudTrail using the Amazon OpenSearch service.
D. Configure Amazon CloudWatch Container Insights on the existing EKS cluster. View metrics and logs in the CloudWatch console.

A

D. Configure Amazon CloudWatch Container Insights on the existing EKS cluster. View metrics and logs in the CloudWatch console.

Amazon CloudWatch Container Insights provides a comprehensive solution for monitoring and analyzing containerized applications, including those running on Amazon Elastic Kubernetes Service (Amazon EKS). Collects performance metrics, logs, and events from EKS clusters and containerized applications, allowing you to gain insight into their performance and health. CloudWatch Container Insights integrates with CloudWatch Logs, allowing you to view logs and metrics in the CloudWatch console for analysis. Provides a centralized location to collect, aggregate, and summarize metrics and logs for your customer-facing application’s microservices architecture.

59
Q

632# A company has deployed its new product on AWS. The product runs in an auto-scaling group behind a network load balancer. The company stores product objects in an Amazon S3 bucket. The company recently experienced malicious attacks against its systems. The company needs a solution that continuously monitors malicious activity in the AWS account, workloads, and S3 bucket access patterns. The solution should also report suspicious activity and display the information in a dashboard. What solution will meet these requirements?

A. Configure Amazon Macie to monitor and report findings to AWS Config.
B. Configure Amazon Inspector to monitor and report findings to AWS CloudTrail.
C. Configure Amazon GuardDuty to monitor and report findings to AWS Security Hub.
D. Configure AWS Config to monitor and report findings to Amazon EventBridge.

A

C. Configure Amazon GuardDuty to monitor and report findings to AWS Security Hub.

Amazon GuardDuty is a threat detection service that continuously monitors malicious activity and unauthorized behavior in AWS accounts. Analyzes VPC flow logs, AWS CloudTrail event logs, and DNS logs for potential threats. GuardDuty findings can be sent to AWS Security Hub, which acts as a central hub for monitoring security alerts and compliance status across all AWS accounts. AWS Security Hub consolidates and prioritizes findings from multiple AWS services, including GuardDuty, and provides a unified view of security alerts. Security Hub can integrate with third-party security tools and allows the creation of custom actions to remediate security findings. This solution provides continuous monitoring, detection, and reporting of malicious activities in your AWS account, including S3 bucket access patterns.

60
Q

633# A company wants to migrate an on-premises data center to AWS. The data center houses a storage server that stores data on an NFS-based file system. The storage server contains 200 GB of data. The company needs to migrate the data without interruption to existing services. Various resources on AWS must be able to access the data by using the NFS protocol. What combination of steps will MOST cost-effectively meet these requirements? (Choose two.)

A. Create an Amazon FSx for Lustre file system.
B. Create an Amazon Elastic File System (Amazon EFS) file system.
C. Create an Amazon S3 bucket to receive the data.
D. Manually use an operating system copy command to send the data to the AWS destination.
E. Install an AWS DataSync agent in the on-premises data center. Use a DataSync task to transfer the data between the on-premises location and AWS.

A

B. Create an Amazon Elastic File System (Amazon EFS) file system.
E. Install an AWS DataSync agent in the on-premises data center. Use a data synchronization task between on-premises and AWS.

Option B: Amazon EFS provides a fully managed, scalable NFS file system that can be mounted by multiple Amazon EC2 instances simultaneously. You can create an Amazon EFS file system and then mount it to the necessary AWS resources.
Option E: AWS DataSync is a data transfer service that simplifies and accelerates data migration between on-premises storage systems and AWS. By installing a DataSync agent in your on-premises data center, you can use DataSync tasks to efficiently transfer data to Amazon EFS. This approach helps minimize downtime and ensure a smooth migration.

61
Q

634# A company wants to use Amazon FSx for Windows File Server for its Amazon EC2 instances that have an SMB file share mounted as a volume in the us-east-1 region. The company has a recovery point objective (RPO) of 5 minutes for planned system maintenance or unplanned service interruptions. The company needs to replicate the file system in the us-west-2 region. Replicated data must not be deleted by any user for 5 years. What solution will meet these requirements?

A. Create an Amazon FSx for Windows File Server file system in us-east-1 that has a Single-AZ 2 deployment type. Use AWS Backup to create a daily backup plan that includes a backup rule that copies the backup to us-west-2. Configure AWS Backup Vault Lock in compliance mode for a target vault in us-west-2. Set a minimum duration of 5 years.
B. Create an Amazon FSx for Windows File Server file system in us-east-1 that has a Multi-AZ deployment type. Use AWS Backup to create a daily backup plan that includes a backup rule that copies the backup to us-west-2. Configure AWS Backup Vault Lock in governance mode for a target vault in us-west-2. Set a minimum duration of 5 years.
C. Create an Amazon FSx for Windows File Server file system in us-east-1 that has a Multi-AZ deployment type. Use AWS Backup to create a daily backup plan that includes a backup rule that copies the backup to us-west-2. Configure AWS Backup Vault Lock in compliance mode for a target vault in us-west-2. Set a minimum duration of 5 years.
D. Create an Amazon FSx for Windows File Server file system in us-east-1 that has a Single-AZ 2 deployment type. Use AWS Backup to create a daily backup plan that includes a backup rule that copies the backup to us-west-2. Configure AWS Backup Vault Lock in governance mode for a target vault in us-west-2. Set a minimum duration of 5 years.

A

C. Create an Amazon FSx for Windows File Server file system in us-east-1 that has a Multi-AZ deployment type. Use AWS Backup to create a daily backup plan that includes a backup rule that copies the backup to us-west-2. Configure AWS Backup Vault Lock in compliance mode for a target vault in us-west-2. Set a minimum duration of 5 years.

A Multi-AZ deployment type in us-east-1 provides high availability within the region. AWS Backup can be used to automate the backup process and create a backup copy on us-west-2. Using AWS Backup Vault Lock in compliance mode ensures that data is retained for the specified duration (5 years) and cannot be deleted by any user. In summary, Option C with Multi-AZ deployment and compliance mode for Vault Lock is considered the most robust solution to ensure high availability and long-term data retention with strict controls.
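For reference, a hedged sketch of locking the destination vault with boto3 (the vault name is a placeholder; 5 years is roughly 1825 days, and my understanding is that supplying ChangeableForDays is what makes the lock a compliance-mode lock that becomes immutable after the grace period):

```python
import boto3

backup = boto3.client("backup", region_name="us-west-2")

backup.put_backup_vault_lock_configuration(
    BackupVaultName="fsx-dr-vault",
    MinRetentionDays=1825,      # recovery points cannot be deleted before 5 years
    ChangeableForDays=3,        # after this grace period the lock cannot be removed
)
```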

62
Q

635# A solutions architect is designing a security solution for a company that wants to provide developers with individual AWS accounts across AWS organizations, while maintaining standard security controls. Because individual developers will have AWS account root-level access to their own accounts, the solutions architect wants to ensure that the mandatory AWS CloudTrail settings that apply to new developer accounts are not changed. What action meets these requirements?

A. Create an IAM policy that prohibits changes to CloudTrail and attach it to the root user.
B. Create a new trail in CloudTrail from developer accounts with the organization trails option enabled.
C. Create a service control policy (SCP) that prohibits changes to CloudTrail and attach it to the developer accounts.
D. Create a service-linked role for CloudTrail with a policy condition that allows changes only from an Amazon Resource Name (ARN) in the management account.

A

C. Create a service control policy (SCP) that prohibits changes to CloudTrail and attach it to the developer accounts.

Service control policies (SCPs) are applied at the root level of an AWS organization to set fine-grained permissions for all accounts in the organization. By creating an SCP that explicitly prohibits changes to CloudTrail, you can enforce this policy across all developer accounts. This approach ensures that even if individual developers have root access to their AWS accounts, they will not be able to modify CloudTrail settings due to SCP restrictions.
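For illustration, the SCP could deny the CloudTrail write actions outright (shown here as a Python dict that would be passed to Organizations as JSON; the action list is a reasonable assumption rather than an exhaustive one):

```python
import json

# SCP denying changes to CloudTrail in the developer accounts it is attached to.
cloudtrail_guardrail = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyCloudTrailChanges",
        "Effect": "Deny",
        "Action": [
            "cloudtrail:StopLogging",
            "cloudtrail:DeleteTrail",
            "cloudtrail:UpdateTrail",
            "cloudtrail:PutEventSelectors",
        ],
        "Resource": "*",
    }],
}

print(json.dumps(cloudtrail_guardrail, indent=2))  # body for organizations:CreatePolicy
```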

63
Q

636# A company is planning to deploy a business-critical application to the AWS cloud. The application requires durable storage with consistent, low-latency performance. What type of storage should a solutions architect recommend to meet these requirements?

A. Instance store volume
B. Amazon ElastiCache for the Memcached cluster
C. SSD IOPS provisioned Amazon Elastic Block Store (Amazon EBS) volume
D. Amazon Elastic Block Store (Amazon EBS) optimized hard drive volume

A

C. SSD IOPS provisioned Amazon Elastic Block Store (Amazon EBS) volume

Provisioned IOPS SSD volumes are designed for applications that require predictable and consistent I/O performance. You can provision a specific number of IOPS when creating the volume to ensure consistent low-latency performance. These volumes provide durability and are suitable for business-critical applications.

64
Q

637# An online photo sharing company stores its photos in an Amazon S3 bucket that exists in the us-west-1 region. The company needs to store a copy of all new photos in the us-east-1 region. Which solution will meet this requirement with the LEAST operational effort?

A. Create a second S3 bucket on us-east-1. Use S3 cross-region replication to copy photos from the existing S3 bucket to the second S3 bucket.
B. Create a cross-origin resource sharing (CORS) configuration on the existing S3 bucket. Specify us-east-1 in the AllowedOrigin element of the CORS rule.
C. Create a second S3 bucket on us-east-1 in multiple availability zones. Create an S3 lifecycle rule to save photos to the second S3 bucket.
D. Create a second S3 bucket on us-east-1. Configure S3 event notifications on object creation and update events to invoke an AWS Lambda function to copy photos from the existing S3 bucket to the second S3 bucket.

A

A. Create a second S3 bucket on us-east-1. Use S3 cross-region replication to copy photos from the existing S3 bucket to the second S3 bucket.

This is a simple and fully managed solution.

To automatically replicate new objects as they are written to the bucket, use live replication, such as Cross-Region Replication (CRR).
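A compact sketch with boto3 (the bucket names and IAM role ARN are placeholders; versioning must already be enabled on both buckets for replication to work):

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_replication(
    Bucket="photos-us-west-1",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111122223333:role/s3-crr-replication-role",
        "Rules": [{
            "ID": "replicate-new-photos",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {"Prefix": ""},                     # replicate every new object
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {"Bucket": "arn:aws:s3:::photos-us-east-1"},
        }],
    },
)
```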

65
Q

638# A company is creating a new web application for its subscribers. The application will consist of a single static page and a persistent database layer. The app will have millions of users for 4 hours in the morning, but the app will only have a few thousand users for the rest of the day. The company’s data architects have requested the ability to quickly evolve their schema. Which solutions will meet these requirements and provide the MOST scalability? (Choose two.)

A. Implement Amazon DynamoDB as the database solution. Provision on-demand capacity.
B. Deploy Amazon Aurora as the database solution. Choose Serverless Database Engine mode.
C. Implement Amazon DynamoDB as a database solution. Make sure DynamoDB auto-scaling is enabled.
D. Deploy the static content to an Amazon S3 bucket. Provision an Amazon CloudFront distribution with the S3 bucket as the origin.
E. Deploy web servers for static content to a fleet of Amazon EC2 instances in Auto Scaling groups. Configure instances to periodically update the contents of an Amazon Elastic File System (Amazon EFS) volume.

A

C. Implement Amazon DynamoDB as a database solution. Make sure DynamoDB auto-scaling is enabled.
D. Deploy the static content to an Amazon S3 bucket. Provision an Amazon CloudFront distribution with the S3 bucket as the origin.

C. DynamoDB auto-scaling dynamically adjusts provisioned capacity based on actual traffic. This is a good option to handle different workloads and ensure optimal performance.
D. This is a valid approach to serving static content with low latency globally using Amazon CloudFront. It helps in scalability and improves performance by distributing content to edge locations.

Based on scalability and ease of management, the recommended options are C (DynamoDB with auto-scaling) and D (S3 with CloudFront). These options take advantage of fully managed services and provide scalability without the need for manual intervention.

With provisioned capacity you can also use auto scaling to automatically adjust your table’s capacity based on the specified utilization rate to ensure application performance, and also to potentially reduce costs. To configure auto scaling in DynamoDB, set the minimum and maximum levels of read and write capacity in addition to the target utilization percentage.

It is important to note that DynamoDB auto scaling modifies provisioned throughput settings only when the actual workload stays elevated or depressed for a sustained period of several minutes.

This means that provisioned capacity (with auto scaling) is probably best for you if you have relatively predictable application traffic, run applications whose traffic is consistent, and see demand ramp up or down gradually.

Whereas on-demand capacity mode is probably best when you have new tables with unknown workloads, unpredictable application traffic and also if you only want to pay exactly for what you use. The on-demand pricing model is ideal for bursty, new, or unpredictable workloads whose traffic can spike in seconds or minutes, and when under-provisioned capacity would impact the user experience.
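As a sketch of the DynamoDB auto scaling setup described above (the table name, capacity bounds, and 70% target are illustrative; a matching pair of calls would be made for write capacity):

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/UserProfiles",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    MinCapacity=5,
    MaxCapacity=40000,
)

autoscaling.put_scaling_policy(
    PolicyName="reads-target-tracking",
    ServiceNamespace="dynamodb",
    ResourceId="table/UserProfiles",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,          # keep consumed/provisioned reads near 70%
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
        },
    },
)
```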

66
Q

639# A company uses Amazon API Gateway to manage its REST APIs that are accessed by third-party service providers. The enterprise must protect REST APIs from SQL injection and cross-site scripting attacks. What is the most operationally efficient solution that meets these requirements?

A. Configure AWS Shield.
B. Configure AWS WAF.
C. Configure the API gateway with an Amazon CloudFront distribution. Configure AWS Shield on CloudFront.
D. Configure the API gateway with an Amazon CloudFront distribution. Configure AWS WAF on CloudFront.

A

B. Configure AWS WAF.

AWS WAF can be associated directly with Amazon API Gateway REST API stages, and its managed rule groups include protections against SQL injection and cross-site scripting. Attaching a web ACL to the API is therefore the most operationally efficient option; adding a CloudFront distribution (options C and D) is unnecessary, and AWS Shield addresses DDoS attacks rather than injection attacks.

67
Q

640# A company wants to provide users with access to AWS resources. The company has 1,500 users and manages their access to local resources through Active Directory user groups on the corporate network. However, the company does not want users to have to maintain another identity to access resources. A solutions architect must manage user access to AWS resources while preserving access to local resources. What should the solutions architect do to meet these requirements?

A. Create an IAM user for each user in the company. Attach the appropriate policies to each user.
B. Use Amazon Cognito with a group of Active Directory users. Create roles with the appropriate policies attached.
C. Define cross-account roles with appropriate policies attached. Assign roles to Active Directory groups.
D. Configure Security Assertion Markup Language (SAML) 2.0-based federation. Create roles with the appropriate policies attached. Assign the roles to Active Directory groups.

A

D. Configure Security Assertion Markup Language (SAML) 2.0-based federation. Create roles with the appropriate policies attached. Assign the roles to Active Directory groups.

SAML enables single sign-on (SSO) between the company’s Active Directory and AWS. Users can use their existing corporate credentials to access AWS resources without having to manage a separate set of credentials for AWS.

Roles and policies: With SAML-based federation, roles are created in AWS that define the permissions that users will have. Policies are attached to these roles to specify what actions users can take. Assignment to Active Directory Groups: Roles in AWS can be assigned to Active Directory groups. This allows you to centrally manage permissions across Active Directory groups, and users inherit these permissions when they assume the associated roles in AWS. In summary, SAML-based federation provides a standardized way to enable single sign-on between AWS and the enterprise Active Directory, ensuring a seamless experience for users while maintaining centralized access control across Active Directory groups.

68
Q

641# A company is hosting a website behind multiple application load balancers. The company has different distribution rights for its content around the world. A solutions architect must ensure that users receive the correct content without violating distribution rights. What configuration should the solutions architect choose to meet these requirements?

A. Configure Amazon CloudFront with AWS WAF.
B. Configure application load balancers with AWS WAF
C. Configure Amazon Route 53 with a geolocation policy
D. Configure Amazon Route 53 with a geoproximity routing policy

A

C. Configure Amazon Route 53 with a geolocation policy

With Amazon Route 53, you can create a geolocation routing policy that routes traffic based on the user’s geographic location. This allows you to serve different content or direct users to different application load balancers based on their geographic location. By setting geolocation policies on Amazon Route 53, you can achieve the desired content distribution while complying with distribution rights. In summary, for the specific requirement of serving different content based on the geographic location of users, the most appropriate option is to use geolocation routing policies with Amazon Route 53.

69
Q

642# A company stores its data on premises. The amount of data is growing beyond the company’s available capacity. The company wants to migrate its data from on-premises to an Amazon S3 bucket. The company needs a solution that automatically validates the integrity of data after transfer. What solution will meet these requirements?

A. Order an AWS Snowball Edge device. Configure the Snowball Edge device to perform online data transfer to an S3 bucket.
B. Deploy an AWS DataSync agent on-premises. Configure the DataSync agent to perform online data transfer to an S3 bucket.
C. Create an on-premises Amazon S3 file gateway. Configure the S3 File Gateway to perform online data transfer to an S3 bucket.
D. Configure an accelerator in Amazon S3 Transfer Acceleration on-premises. Configure the accelerator to perform online data transfer to an S3 bucket.

A

B. Deploy an AWS DataSync agent on-premises. Configure the DataSync agent to perform online data transfer to an S3 bucket.

AWS DataSync is a service designed for online data transfer to and from AWS. Deploying a DataSync agent on-premises enables efficient and secure transfers over the network. DataSync automatically verifies data integrity, ensuring that Amazon S3 data matches the source

70
Q

643# A company wants to migrate two DNS servers to AWS. The servers host a total of approximately 200 zones and receive 1 million requests each day on average. The company wants to maximize availability while minimizing operational overhead related to managing the two servers. What should a solutions architect recommend to meet these requirements?

A. Create 200 new hosted zones in the Amazon Route 53 console. Import the zone files.
B. Launch a single large Amazon EC2 instance. Import the zone files. Configure Amazon CloudWatch alarms and notifications to alert the company of any downtime.
C. Migrate the servers to AWS using the AWS Server Migration Service (AWS SMS). Configure Amazon CloudWatch alarms and notifications to alert the company of any downtime.
D. Start an Amazon EC2 instance in an auto-scaling group in two availability zones. Import zone files. Set the desired capacity to 1 and the maximum capacity to 3 for the Auto Scaling group. Configure scaling alarms to scale based on CPU utilization.

A

A. Create 200 new hosted zones in the Amazon Route 53 console. Import the zone files.

https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/migrate-dns-domain-in-use.html

71
Q

644# A global company runs its applications in multiple AWS accounts in AWS Organizations. The company's applications use multipart uploads to upload data to multiple Amazon S3 buckets across AWS Regions. The company wants to report on incomplete multipart uploads for cost compliance purposes. Which solution will meet these requirements with the LEAST operational overhead?

A. Configure AWS Config with a rule to report the incomplete multipart upload object count.
B. Create a service control policy (SCP) to report the incomplete multipart upload object count.
C. Configure S3 Storage Lens to report the incomplete multipart upload object count.
D. Create an S3 Multi-Region Access Point to report the incomplete multipart upload object count.

A

C. Configure S3 Storage Lens to report the incomplete multipart upload object count.

S3 Storage Lens is a fully managed analytics solution that provides organization-wide visibility into object storage usage, activity trends, and helps identify cost-saving opportunities. It is designed to minimize operational overhead and provides comprehensive information about your S3 usage.

Incomplete upload reporting: S3 Storage Lens exposes metrics for incomplete multipart uploads without any complex configuration. It provides a holistic view of your storage usage, including incomplete multipart upload bytes and object counts, making it suitable for compliance and cost monitoring purposes.

S3 storage lens is specifically designed to obtain information about S3 usage.

72
Q

645# A company has a production database on Amazon RDS for MySQL. The company wants to upgrade the database version for security compliance reasons. Because the database contains critical data, the company wants a quick solution to upgrade and test functionality without losing any data. Which solution will meet these requirements with the LEAST operational overhead?

A. Create a manual RDS snapshot. Upgrade to the new version of Amazon RDS for MySQL.
B. Use native backup and restore. Restore the data to the new, upgraded version of Amazon RDS for MySQL.
C. Use the AWS Database Migration Service (AWS DMS) to replicate the data to the new updated version of Amazon RDS for MySQL.
D. Use Amazon RDS blue/green deployments to deploy and test production changes.

A

D. Use Amazon RDS blue/green deployments to deploy and test production changes.

You can make changes to RDS database instances in the green environment without affecting production workloads. For example, you can upgrade the major or minor version of the database engine, update the underlying file system configuration, or change database parameters in the staging environment. You can thoroughly test the changes in the green environment. https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/blue-green-deployments-overview.html

73
Q

646# A solutions architect is creating a data processing job that runs once a day and can take up to 2 hours to complete. If the job is interrupted, it has to be restarted from the beginning. How should the solutions architect address this problem in the MOST cost-effective way?

A. Create a script that runs locally on an Amazon EC2 Reserved Instance that is triggered by a cron job.
B. Create an AWS Lambda function triggered by an Amazon EventBridge scheduled event.
C. Use an Amazon Elastic Container Service (Amazon ECS) Fargate task triggered by an Amazon EventBridge scheduled event.
D. Use an Amazon Elastic Container Service (Amazon ECS) task running on Amazon EC2 triggered by an Amazon EventBridge scheduled event.

A

C. Use an Amazon Elastic Container Service (Amazon ECS) Fargate task triggered by an Amazon EventBridge scheduled event.

Fargate is serverless and well suited for this long-running task: it automatically scales and manages the underlying infrastructure, and you pay only while the task runs. AWS Lambda (option B) cannot run a 2-hour job because of its 15-minute execution limit, and the EC2 instances in options A and D would cost more because they run beyond the daily job.
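A minimal sketch of the scheduled-Fargate pattern with boto3; the cluster ARN, task definition, IAM role, and subnet below are placeholders.

```python
import boto3

events = boto3.client("events")

# Run once a day; EventBridge starts the Fargate task, so no servers sit idle between runs.
events.put_rule(Name="daily-processing", ScheduleExpression="rate(1 day)")

events.put_targets(
    Rule="daily-processing",
    Targets=[{
        "Id": "fargate-job",
        "Arn": "arn:aws:ecs:us-east-1:111122223333:cluster/processing",   # placeholder cluster ARN
        "RoleArn": "arn:aws:iam::111122223333:role/events-ecs-role",      # role that allows ecs:RunTask
        "EcsParameters": {
            "TaskDefinitionArn": "arn:aws:ecs:us-east-1:111122223333:task-definition/processor:1",
            "LaunchType": "FARGATE",
            "NetworkConfiguration": {
                "awsvpcConfiguration": {"Subnets": ["subnet-0123456789abcdef0"]}
            },
        },
    }],
)
```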

74
Q

647# A social media company wants to store its database of user profiles, relationships, and interactions in the AWS Cloud. The company needs an application to monitor any changes to the database. The application needs to analyze the relationships between data entities and provide recommendations to users. Which solution will meet these requirements with the LEAST operational overhead?

A. Use Amazon Neptune to store the information. Use Amazon Kinesis Data Streams to process changes to the database.
B. Use Amazon Neptune to store the information. Use Neptune Streams to process changes to the database.
C. Use the Amazon Quantum Ledger database (Amazon QLDB) to store the information. Use Amazon Kinesis Data Streams to process changes to the database.
D. Use the Amazon Quantum Ledger database (Amazon QLDB) to store the information. Use Neptune Streams to process changes to the database.

A

B. Use Amazon Neptune to store the information. Use Neptune Streams to process changes to the database.

Amazon Neptune is a fully managed graph database, and Neptune Streams allows you to capture changes to the database. This option provides a fully managed solution for storing and monitoring database changes, minimizing operational overhead. Both storage and change monitoring are handled by Amazon Neptune.

75
Q

648# A company is creating a new application that will store a large amount of data. Data will be analyzed hourly and modified by multiple Amazon EC2 Linux instances that are deployed across multiple availability zones. The amount of storage space needed will continue to grow over the next 6 months. Which storage solution should a solutions architect recommend to meet these requirements?

A. Store data in Amazon S3 Glacier. Update the S3 Glacier vault policy to allow access to the application instances.
B. Store the data on an Amazon Elastic Block Store (Amazon EBS) volume. Mount the EBS volume on the application instances.
C. Store the data in an Amazon Elastic File System (Amazon EFS) file system. Mount the file system on the application instances.
D. Store the data on an Amazon Elastic Block Store (Amazon EBS) provisioned IOPS volume shared between the application instances.

A

C. Store the data in an Amazon Elastic File System (Amazon EFS) file system. Mount the file system on the application instances.

Amazon EFS is a scalable, elastic file storage service that can be mounted on multiple EC2 instances simultaneously. It is suitable for applications that require shared access to data across multiple instances. This option is a good option for the scenario described.
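A brief sketch of provisioning the shared file system with boto3; the subnet and security group IDs are placeholders, and each instance then mounts the same file system over NFS.

```python
import boto3

efs = boto3.client("efs")

# One elastic, multi-AZ file system shared by all application instances.
fs = efs.create_file_system(CreationToken="inventory-data", Encrypted=True)

# One mount target per Availability Zone subnet so instances in that AZ mount it locally.
for subnet_id in ["subnet-aaa111", "subnet-bbb222"]:          # placeholder subnet IDs
    efs.create_mount_target(
        FileSystemId=fs["FileSystemId"],
        SubnetId=subnet_id,
        SecurityGroups=["sg-0123456789abcdef0"],              # must allow NFS (TCP 2049)
    )
```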

76
Q

649# A company manages an application that stores data in an Amazon RDS for PostgreSQL Multi-AZ DB instance. Increases in traffic are causing performance issues. The company determines that database queries are the main reason for slow performance. What should a solutions architect do to improve application performance?

A. Serve read traffic from the Multi-AZ standby replica.
B. Configure the database instance to use transfer acceleration.
C. Create a read replica from the source database instance. Serve read traffic from the read replica.
D. Use Amazon Kinesis Data Firehose between the application and Amazon RDS to increase the concurrency of database requests.

A

C. Create a read replica from the source database instance. Serve read traffic from the read replica.
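For context: the standby of an RDS Multi-AZ DB instance cannot serve reads, while a read replica can offload the slow read queries. A hedged boto3 sketch (identifiers are placeholders):

```python
import boto3

rds = boto3.client("rds")

# The replica offloads read-only queries; the application points read traffic at its endpoint.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="app-db-replica-1",        # placeholder replica name
    SourceDBInstanceIdentifier="app-db",            # placeholder source instance
    DBInstanceClass="db.r6g.large",                 # sized for the read workload
)
```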

77
Q

650# A company collects 10 GB of telemetry data daily from multiple machines. The company stores the data in an Amazon S3 bucket in a source data account. The company has hired several consulting agencies to use this data for analysis. Each agency needs read access to the data for its analysts. The company must share the data from the source data account by choosing a solution that maximizes security and operational efficiency. Which solution will meet these requirements?

A. Configure global S3 tables to replicate data for each agency.
B. Make the S3 bucket public for a limited time. Inform only the agencies.
C. Configure cross-account access for S3 bucket to accounts owned by agencies.
D. Configure an IAM user for each analyst in the source data account. Grant each user access to the S3 bucket.

A

C. Configure cross-account access for S3 bucket to accounts owned by agencies.

This is a suitable option. You can configure cross-account access by creating AWS Identity and Access Management (IAM) roles in the source data account and allowing consulting agency AWS accounts to assume these roles. This way you can grant temporary and secure access to the S3 bucket.
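One way to grant the cross-account read access is a bucket policy in the source data account; the agency account ID and bucket name below are placeholders.

```python
import json
import boto3

s3 = boto3.client("s3")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AgencyReadOnly",
        "Effect": "Allow",
        "Principal": {"AWS": ["arn:aws:iam::222233334444:root"]},   # placeholder agency account
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::telemetry-source-bucket",
            "arn:aws:s3:::telemetry-source-bucket/*",
        ],
    }],
}

# Grant read-only access to the agency account; the agency then delegates to its own analysts.
s3.put_bucket_policy(Bucket="telemetry-source-bucket", Policy=json.dumps(policy))
```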

78
Q

651# A company uses Amazon FSx for NetApp ONTAP in its primary AWS Region for CIFS and NFS file shares. Applications running on Amazon EC2 instances access the file shares. The company needs a storage disaster recovery (DR) solution in a secondary Region. Data that is replicated to the secondary Region needs to be accessed with the same protocols as in the primary Region. Which solution will meet these requirements with the LEAST operational overhead?

A. Create an AWS Lambda function to copy the data to an Amazon S3 bucket. Replicate the S3 bucket to the secondary region.
B. Create backups of the FSx for ONTAP volumes by using AWS Backup. Copy the volumes to the secondary region. Create a new FSx for ONTAP instance from the backup.
C. Create an FSx for ONTAP instance in the secondary region. Use NetApp SnapMirror to replicate data from the primary region to the secondary region.
D. Create an Amazon Elastic File System (Amazon EFS) volume. Migrate the current data to the volume. Replicate the volume to the secondary region.

A

C. Create an FSx for ONTAP instance in the secondary region. Use NetApp SnapMirror to replicate data from the primary region to the secondary region.

NetApp SnapMirror is a data replication feature designed for ONTAP systems, enabling efficient data replication between primary and secondary systems. It meets the requirement of replicating data that can be accessed over the same protocols (CIFS and NFS) and involves less operational overhead than the other options.

79
Q

652# A development team is building an event-driven application that uses AWS Lambda functions. Events will be raised when files are added to an Amazon S3 bucket. The development team currently has Amazon Simple Notification Service (Amazon SNS) configured as the Amazon S3 event target. What should a solutions architect do to process Amazon S3 events in a scalable way?

A. Create an SNS subscription that processes the event in Amazon Elastic Container Service (Amazon ECS) before the event runs in Lambda.
B. Create an SNS subscription that processes the event in Amazon Elastic Kubernetes Service (Amazon EKS) before the event runs in Lambda
C. Create an SNS subscription that sends the event to Amazon Simple Queue Service (Amazon SQS). Configure the SQS queue to trigger a Lambda function.
D. Create an SNS subscription that sends the event to the AWS Server Migration Service (AWS SMS). Configure the Lambda function to poll from the SMS event.

A

C. Create an SNS subscription that sends the event to Amazon Simple Queue Service (Amazon SQS). Configure the SQS queue to trigger a Lambda function.

It follows the pattern of using an SQS queue as an intermediary for event handling, providing scalable and decoupled event processing. SQS can handle bursts of events, and the Lambda function can be triggered from the SQS queue. This solution is the recommended scalable approach to handle Amazon S3 events in a decoupled manner.

The primary advantage of putting SQS between SNS and Lambda is reprocessing. If the Lambda function fails to process an event (for example, because of a timeout or insufficient memory), the message stays in the queue; you can raise the function's timeout (up to 15 minutes) or memory (up to 10,240 MB), let polling resume, and the older events are reprocessed.

That is not possible going directly from SNS to Lambda: if the Lambda invocation fails, the event is effectively lost. Even if you configure a dead-letter queue, you still have to make separate provisions for reading and reprocessing those messages.
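A hedged sketch of wiring the fan-out: subscribe the queue to the topic, then let the queue trigger the Lambda function. ARNs and names are placeholders, and the queue's access policy must also allow the SNS topic to send messages to it.

```python
import boto3

sns = boto3.client("sns")
lambda_client = boto3.client("lambda")

# S3 -> SNS -> SQS: the queue absorbs bursts of S3 events.
sns.subscribe(
    TopicArn="arn:aws:sns:us-east-1:111122223333:s3-events",            # placeholder topic
    Protocol="sqs",
    Endpoint="arn:aws:sqs:us-east-1:111122223333:s3-events-queue",      # placeholder queue
)

# SQS -> Lambda: Lambda polls the queue and scales with the backlog.
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:us-east-1:111122223333:s3-events-queue",
    FunctionName="process-s3-object",                                   # placeholder function
    BatchSize=10,
)
```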

80
Q

653# A solutions architect is designing a new service behind Amazon API Gateway. Request patterns for the service will be unpredictable and may suddenly change from 0 requests to over 500 per second. The total size of data that must be persisted in a backend database is currently less than 1 GB with unpredictable future growth. Data can be queried using simple key value requests. What combination of AWS services would meet these requirements? (Choose two.)

A. AWS Fargate
B. AWS Lambda
C. Amazon DynamoDB
D. Amazon EC2 Auto Scaling
E. Amazon Aurora with MySQL support

A

B. AWS Lambda
C. Amazon DynamoDB

B. AWS Lambda is a serverless compute service that automatically scales in response to incoming request traffic. It is suitable for handling unpredictable request patterns, and you pay only for the compute time consumed. Lambda integrates with other AWS services, including API Gateway, to handle the backend logic of your service.

C. DynamoDB is a fully managed NoSQL database that provides fast, predictable performance with seamless scalability. It is well suited for simple key-value queries and can handle different workloads. DynamoDB automatically scales based on demand, making it suitable for unpredictable request patterns. It also supports automatic scaling of read and write capacity.
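A small sketch of the key-value side: an on-demand DynamoDB table and the lookup a Lambda handler behind API Gateway might perform. Table, attribute, and path-parameter names are placeholders.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# On-demand capacity absorbs spikes from 0 to hundreds of requests per second.
dynamodb.create_table(
    TableName="service-items",                       # placeholder table name
    AttributeDefinitions=[{"AttributeName": "item_id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "item_id", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)

def handler(event, context):
    # Simple key-value read; API Gateway passes the key as a path parameter.
    key = event["pathParameters"]["item_id"]
    item = dynamodb.get_item(TableName="service-items", Key={"item_id": {"S": key}})
    return {"statusCode": 200, "body": str(item.get("Item", {}))}
```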

81
Q

654# A company collects and shares research data with the company’s employees around the world. The company wants to collect and store the data in an Amazon S3 bucket and process it in the AWS cloud. The company will share the data with the company’s employees. The business needs a secure AWS cloud solution that minimizes operational overhead. What solution will meet these requirements?

A. Use an AWS Lambda function to create S3 presigned URLs. Instruct employees to use the URLs.
B. Create an IAM user for each employee. Create an IAM policy for each employee to allow S3 access. Instruct employees to use the AWS Management Console.
C. Create an S3 File Gateway. Create a share for uploading and a share for downloading. Instruct employees to mount the shares on their local computers to use S3 File Gateway.
D. Configure AWS Transfer Family SFTP endpoints. Select the custom identity provider option. Use AWS Secrets Manager to manage user credentials. Instruct employees to use Transfer Family.

A

A. Use an AWS Lambda function to create S3 presigned URLs. Instruct employees to use the URLs.

A. By using an AWS Lambda function, you can generate S3 presigned URLs on the fly. These URLs grant temporary access to specific S3 objects. Employees can use them to securely upload or download data without needing direct access to AWS credentials. This approach minimizes operational overhead: you only need to manage the Lambda function, and there is no complex user management. Lambda is serverless, so there is no underlying infrastructure to maintain, and the function can be triggered by specific events (for example, an S3 upload) to generate presigned URLs automatically.

This solution simplifies the process of securely sharing data without the need for extensive user management or additional infrastructure management.
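A minimal sketch of the Lambda side, assuming a hypothetical bucket name; the URL it returns grants time-limited access without distributing credentials (use "put_object" instead of "get_object" for uploads).

```python
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # Generate a short-lived download link for a requested object.
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "research-data-bucket", "Key": event["key"]},  # placeholder bucket/key
        ExpiresIn=900,  # 15 minutes
    )
    return {"statusCode": 200, "body": url}
```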

82
Q

655# A company is building a new furniture inventory application. The company has deployed the application on a fleet of Amazon EC2 instances across multiple Availability Zones. The EC2 instances run behind an Application Load Balancer (ALB) in the company's VPC. A solutions architect has observed that incoming traffic appears to favor one EC2 instance, resulting in latency for some requests. What should the solutions architect do to solve this problem?

A. Disable session affinity (sticky sessions) on the ALB
B. Replace the ALB with a network load balancer
C. Increase the number of EC2 instances in each Availability Zone
D. Adjust the frequency of health checks in the ALB target group

A

A. Disable session affinity (sticky sessions) on the ALB

Session affinity (sticky sessions): When session affinity is enabled, the load balancer routes requests from a particular client to the same backend EC2 instance. While this can be beneficial in certain scenarios, it can lead to uneven traffic distribution and higher latency if one instance receives more requests than others. Disabling sticky sessions: With session affinity disabled, the ALB distributes incoming requests more evenly across all healthy instances, helping to balance load and reduce latency.
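Stickiness is a target-group attribute, so disabling it is a one-call change; the target group ARN below is a placeholder.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Turn off session affinity so the ALB spreads requests across all healthy targets.
elbv2.modify_target_group_attributes(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/app/0123456789abcdef",
    Attributes=[{"Key": "stickiness.enabled", "Value": "false"}],
)
```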

83
Q

656# A company has an application workflow that uses an AWS Lambda function to download and decrypt files from Amazon S3. These files are encrypted using AWS Key Management Service (AWS KMS) keys. A solutions architect needs to design a solution that ensures the required permissions are set correctly. What combination of actions accomplishes this? (Choose two.)

A. Attach the kms:decrypt permission to the Lambda function’s resource policy.
B. Grant decryption permission for the Lambda IAM role in the KMS key’s policy
C. Grant the decryption permission for the Lambda resource policy in the KMS key policy.
D. Create a new IAM policy with the kms:decrypt permission and attach the policy to the Lambda function.
E. Create a new IAM role with the kms:decrypt permission and attach the execution role to the Lambda function.

A

B. Grant decryption permission for the Lambda IAM role in the KMS key’s policy
* This action ensures that the IAM role associated with the Lambda function has the necessary permission to decrypt files using the specified KMS key.

E. Create a new IAM role with the kms:decrypt permission and attach the execution role to the Lambda function.
* If the existing IAM role lacks the required kms:decrypt permission, you may need to create a new IAM role with this permission and attach it to the Lambda function.
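Roughly, the two sides of the permission look like this (account ID, key ARN, and role name are placeholders): an identity policy attached to the Lambda execution role, and a matching statement in the KMS key policy.

```python
import json
import boto3

iam = boto3.client("iam")

# Identity side: allow the Lambda execution role to call kms:Decrypt on the key.
decrypt_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "kms:Decrypt",
        "Resource": "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
    }],
}
policy = iam.create_policy(PolicyName="lambda-kms-decrypt", PolicyDocument=json.dumps(decrypt_policy))
iam.attach_role_policy(RoleName="file-decrypt-lambda-role", PolicyArn=policy["Policy"]["Arn"])

# Resource side: the KMS key policy must also allow that role as a principal, with a statement like:
key_policy_statement = {
    "Sid": "AllowLambdaDecrypt",
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::111122223333:role/file-decrypt-lambda-role"},
    "Action": "kms:Decrypt",
    "Resource": "*",
}
```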

84
Q

657# A company wants to monitor its AWS costs for its financial review. The cloud operations team is configuring the AWS Organizations management account to query AWS Cost and Usage Reports for all member accounts. The team must run this query once a month and provide a detailed analysis of the bill. Which solution is the MOST scalable and cost-effective way to meet these requirements?

A. Enable cost and usage reporting in the management account. Deliver reports to Amazon Kinesis. Use Amazon EMR for analysis.
B. Enable cost and usage reporting in the management account. Deliver reports to Amazon S3. Use Amazon Athena for analysis.
C. Enable cost and usage reporting for member accounts. Deliver reports to Amazon S3. Use Amazon Redshift for analysis.
D. Enable cost and usage reporting for member accounts. Deliver the reports to Amazon Kinesis. Use Amazon QuickSight analytics.

A

B. Enable cost and usage reporting in the management account. Deliver reports to Amazon S3. Use Amazon Athena for analysis.

Directly stores reports in S3 and leverages Athena for SQL-based analysis.
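Once the Cost and Usage Report is delivered to S3 (it can be configured with Athena integration), the monthly analysis is a SQL query. A hedged sketch; the database, table, output bucket, and column names follow the standard CUR/Athena integration and may differ for a given report.

```python
import boto3

athena = boto3.client("athena")

# Monthly roll-up of unblended cost per member account from the CUR table.
athena.start_query_execution(
    QueryString="""
        SELECT line_item_usage_account_id, SUM(line_item_unblended_cost) AS cost
        FROM cur_database.cur_table
        WHERE year = '2024' AND month = '1'
        GROUP BY line_item_usage_account_id
        ORDER BY cost DESC
    """,
    QueryExecutionContext={"Database": "cur_database"},                      # placeholder database
    ResultConfiguration={"OutputLocation": "s3://athena-results-bucket/"},   # placeholder bucket
)
```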

85
Q

658# A company wants to run a gaming application on Amazon EC2 instances that are part of an Auto Scaling group in the AWS Cloud. The application will transmit data by using UDP packets. The company wants the application to scale out and in as traffic rises and falls. What should a solutions architect do to meet these requirements?

A. Connect a network load balancer to the Auto Scaling group.
B. Connect an application load balancer to the Auto Scaling group.
C. Deploy an Amazon Route 53 record set with a weighted policy to route traffic appropriately.
D. Deploy a NAT instance that is configured with port forwarding to the EC2 instances in the Auto Scaling group.

A

A. Connect a network load balancer to the Auto Scaling group.

Network Load Balancers (NLBs) are designed to handle TCP, UDP, and TLS traffic.
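A rough sketch of the UDP setup; names, the port, subnet, and VPC IDs are placeholders.

```python
import boto3

elbv2 = boto3.client("elbv2")
autoscaling = boto3.client("autoscaling")

nlb = elbv2.create_load_balancer(
    Name="game-nlb",
    Type="network",
    Subnets=["subnet-aaa111", "subnet-bbb222"],                  # placeholder subnets
)

# UDP listeners and target groups are only supported on Network Load Balancers.
tg = elbv2.create_target_group(
    Name="game-udp-targets",
    Protocol="UDP",
    Port=7777,                                                   # placeholder game port
    VpcId="vpc-0123456789abcdef0",
    TargetType="instance",
)

elbv2.create_listener(
    LoadBalancerArn=nlb["LoadBalancers"][0]["LoadBalancerArn"],
    Protocol="UDP",
    Port=7777,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroups"][0]["TargetGroupArn"]}],
)

# Attach the target group to the Auto Scaling group so instances register automatically.
autoscaling.attach_load_balancer_target_groups(
    AutoScalingGroupName="game-asg",                             # placeholder group name
    TargetGroupARNs=[tg["TargetGroups"][0]["TargetGroupArn"]],
)
```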

86
Q

659# A company has several websites on AWS for its different brands. Each website generates tens of gigabytes of web traffic logs every day. A solutions architect needs to design a scalable solution to give company developers the ability to analyze traffic patterns across all company websites. This analysis by the developers will be done on demand once a week over the course of several months. The solution must support standard SQL queries. Which solution will meet these requirements in the MOST cost-effective way?

A. Store logs in Amazon S3. Use Amazon Athena for analytics.
B. Store the logs in Amazon RDS. Use a database client for analysis.
C. Store the logs in Amazon OpenSearch Service. Use OpenSearch Service for analysis.
D. Store the logs in an Amazon EMR cluster. Use a supported open source framework for SQL-based analysis.

A

A. Store logs in Amazon S3. Use Amazon Athena for analytics.

Amazon S3 is a highly scalable object storage service, and Amazon Athena allows you to run SQL queries directly on data stored in S3. This option is cost-effective as you only pay for the queries you run. It is suitable for on-demand analysis with standard SQL queries.

Given the requirement for cost-effectiveness, scalability, and on-demand analysis with standard SQL queries, option A (Amazon S3 with Amazon Athena) is probably the most suitable option. It enables efficient storage, scalable queries, and cost-effective on-demand analysis for large amounts of web traffic logs.

87
Q

660# An international company has a subdomain for each country in which the company operates. The subdomains are formatted as example.com, country1.example.com, and country2.example.com. Enterprise workloads are behind an application load balancer. The company wants to encrypt website data that is in transit. What combination of steps will meet these requirements? (Choose two.)

A. Use the AWS Certificate Manager (ACM) console to request a public certificate for the apex domain example.com and a wildcard certificate for *.example.com.
B. Use the AWS Certificate Manager (ACM) console to request a private certificate for the apex domain example.com and a wildcard certificate for *.example.com.
C. Use the AWS Certificate Manager (ACM) console to request a public and private certificate for the apex domain example.com.
D. Validate domain ownership by email address. Switch to DNS validation by adding the necessary DNS records to the DNS provider.
E. Validate domain ownership by adding the necessary DNS records to the DNS provider.

A

A. Use the AWS Certificate Manager (ACM) console to request a public certificate for the apex domain example.com and a wildcard certificate for *.example.com.
E. Validate domain ownership by adding the necessary DNS records to the DNS provider.

A. This option is valid to protect both the apex domain and its subdomains with a single wildcard certificate.

E. This is part of the domain validation process. DNS validation is commonly used for issuing SSL/TLS certificates.

https://docs.aws.amazon.com/acm/latest/userguide/domain-ownership-validation.html
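A short sketch of the certificate request; ACM then returns CNAME records that must be added at the DNS provider to complete validation. The domain names are the ones from the question.

```python
import boto3

acm = boto3.client("acm")

# One request covers the apex domain and all first-level subdomains.
response = acm.request_certificate(
    DomainName="example.com",
    SubjectAlternativeNames=["*.example.com"],
    ValidationMethod="DNS",
)
print(response["CertificateArn"])
```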

88
Q

661# A company is required to use cryptographic keys in its on-premises key manager. The key manager is outside of the AWS Cloud because of regulatory and compliance requirements. The company wants to manage encryption and decryption by using cryptographic keys that are kept outside the AWS Cloud and that support a variety of external key managers from different vendors. Which solution will meet these requirements with the LEAST operational overhead?

A. Use an AWS CloudHSM key store backed by a CloudHSM cluster.
B. Use an AWS Key Management Service (AWS KMS) external key store backed by an external key manager.
C. Use the default AWS Key Management Service (AWS KMS) key store.
D. Use a custom key store backed by an AWS CloudHSM cluster.

A

B. Use an AWS Key Management Service (AWS KMS) external key store backed by an external key manager.

With an AWS KMS external key store (XKS), AWS KMS can use an external key manager for key storage and cryptographic operations. This allows the company to use its own key management infrastructure outside of the AWS Cloud. This option provides flexibility, allows integration with multiple external key managers, and minimizes operational overhead on the AWS side.

https://docs.aws.amazon.com/kms/latest/developerguide/keystore-external.html

89
Q

662# A solutions architect needs to host a high-performance computing (HPC) workload in the AWS cloud. The workload will run on hundreds of Amazon EC2 instances and will require parallel access to a shared file system to enable distributed processing of large data sets. Data sets will be accessed through multiple instances simultaneously. The workload requires access latency within 1 ms. Once processing is complete, engineers will need access to the data set for manual post-processing. What solution will meet these requirements?

A. Use Amazon Elastic File System (Amazon EFS) as a shared file system. Access the data set from Amazon EFS.
B. Mount an Amazon S3 bucket to serve as a shared file system. Perform post-processing directly from the S3 bucket.
C. Use Amazon FSx for Lustre as a shared file system. Link the file system to an Amazon S3 bucket for further processing.
D. Configure AWS Resource Access Manager to share an Amazon S3 bucket so that it can be mounted on all instances for processing and post-processing.

A

C. Use Amazon FSx for Lustre as a shared file system. Link the file system to an Amazon S3 bucket for further processing.

Amazon FSx for Lustre is well suited for HPC workloads, providing parallel access with sub-millisecond latency. Linking the file system to an S3 bucket allows seamless integration for post-processing, leveraging the strengths of both services.

90
Q

663# A gaming company is building an application with voice over IP capabilities. The application will serve traffic to users around the world. The application must be highly available with automated failover across all AWS regions. The company wants to minimize user latency without relying on IP address caching on user devices. What should a solutions architect do to meet these requirements?

A. Use AWS Global Accelerator with health checks.
B. Use Amazon Route 53 with a geolocation routing policy.
C. Create an Amazon CloudFront distribution that includes multiple origins.
D. Create an application load balancer that uses path-based routing.

A

A. Use AWS Global Accelerator with health checks.

A. AWS Global Accelerator provides static IP addresses that act as a fixed entry point to your applications. It supports health checks, which helps route traffic only to healthy endpoints across Regions. This provides high availability, automated failover, and low-latency access without relying on DNS caching on user devices.

B. Amazon Route 53 can route traffic based on users' geographic location. While it supports failover, DNS-based failover depends on clients honoring TTLs, so it may not be as fast or as reliable as AWS Global Accelerator.

C. Amazon CloudFront is a content delivery network that can distribute content globally. While it can handle multiple origins, it does not provide the same level of control over failover and latency for voice-over-IP traffic as AWS Global Accelerator.

91
Q

664# A weather forecasting company needs to process hundreds of gigabytes of data with sub-millisecond latency. The company has a high-performance computing (HPC) environment in its data center and wants to expand its forecasting capabilities. A solutions architect must identify a highly available cloud storage solution that can deliver large amounts of sustained throughput. Files stored in the solution must be accessible to thousands of compute instances that simultaneously access and process the entire data set. What should the solutions architect do to meet these requirements?

A. Use Amazon FSx for Lustre scratch file systems.
B. Use Amazon FSx for Lustre persistent file systems.
C. Use Amazon Elastic File System (Amazon EFS) with Bursting Throughput mode.
D. Use Amazon Elastic File System (Amazon EFS) with Provisioned Throughput mode.

A

B. Use Amazon FSx for Lustre persistent file systems.

Amazon FSx for Lustre persistent file systems are durable and retain data. They provide high performance and are designed for HPC and other performance-intensive workloads.

A. Amazon FSx for Lustre scratch file systems also provide high performance for HPC workloads, but they are temporary: data is not replicated and does not persist if a file server fails or the file system is deleted.

For large-scale HPC workloads with sub-millisecond latency requirements and a need for sustained throughput, Amazon FSx for Lustre persistent file systems (option B) are the best fit. They are designed to deliver high performance for compute-intensive workloads, which matches the weather forecasting company's requirements.

92
Q

665# An e-commerce company runs a PostgreSQL database on premises. The database stores data using high-IOPS Amazon Elastic Block Store (Amazon EBS) block storage. Maximum daily I/O transactions per second do not exceed 15,000 IOPS. The company wants to migrate the database to Amazon RDS for PostgreSQL and provision disk IOPS performance regardless of disk storage capacity. Which solution will meet these requirements in the most cost-effective way?

A. Configure General Purpose SSD (gp2) EBS volume storage type and provision 15,000 IOPS.
B. Configure the Provisioned IOPS SSD (io1) EBS volume storage type and provision 15,000 IOPS.
C. Configure the General Purpose SSD (gp3) EBS volume storage type and provision 15,000 IOPS.
D. Configure the EBS magnetic volume type to achieve maximum IOPS.

A

C. Configure the General Purpose SSD (gp3) EBS volume storage type and provision 15,000 IOPS.

gp3 lets you provision IOPS independently of storage size and is more cost-effective than io1 at this level. Because the workload needs at most 15,000 IOPS, which is within the gp3 maximum of 16,000 IOPS, gp3 provides the necessary performance at the lowest cost.
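A hedged sketch of the provisioning call; with gp3, IOPS is set independently of the allocated storage. The instance identifier, class, and storage size are placeholders.

```python
import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="ecom-postgres",        # placeholder identifier
    Engine="postgres",
    DBInstanceClass="db.r6g.xlarge",
    MasterUsername="admin_user",
    ManageMasterUserPassword=True,               # let RDS manage the password in Secrets Manager
    AllocatedStorage=400,                        # GiB; raising gp3 IOPS above baseline generally needs a larger allocation
    StorageType="gp3",
    Iops=15000,                                  # provisioned independently of storage size
)
```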

93
Q

666# A company wants to migrate its on-premises database from Microsoft SQL Server Enterprise edition to AWS. The company's online application uses the database to process transactions. The data analytics team uses the same production database to run reports for analytical processing. The company wants to reduce operational overhead by moving to managed services wherever possible. Which solution will meet these requirements with the LEAST operational overhead?

A. Migrate to Amazon RDS for Microsoft SQL Server. Use read replicas for reporting purposes.
B. Migrate to Microsoft SQL Server on Amazon EC2. Use Always On read replicas for reporting purposes.
C. Migrate to Amazon DynamoDB. Use DynamoDB on-demand replicas for reporting purposes.
D. Migrate to Amazon Aurora MySQL. Use Aurora read replicas for reporting purposes.

A

A. Migrate to Amazon RDS for Microsoft SQL Server. Use read replicas for reporting purposes.

94
Q

667# A company uses AWS CloudFormation to deploy an application that uses an Amazon API Gateway REST API with AWS Lambda function integration. The application uses Amazon DynamoDB for data persistence. The application has three stages: development, testing, and production. Each stage uses its own DynamoDB table. The company has encountered unexpected problems when promoting changes to the production stage. The changes were successful in the development and testing stages. A developer needs to route 20% of traffic to the new production API with the next production release and direct the remaining 80% of traffic to the existing production stage. The solution must minimize the number of errors that any individual customer experiences. What approach should the developer take to meet these requirements?

A. Update 20% of planned changes to production. Deploy the new production stage. Monitor results. Repeat this process five times to test all planned changes.
B. Update the Amazon Route 53 DNS record entry for the production API to use a weighted routing policy. Set the weight to a value of 80. Add a second record for the production domain name. Change the second routing policy to a weighted routing policy. Set the weight of the second policy to a value of 20. Change the alias of the second policy to use the test stage API.
C. Deploy an application load balancer (ALB) in front of the REST API. Change the Amazon Route 53 production API registration to point traffic to the ALB. Record the production and test stages as ALB targets with weights of 80% and 20%, respectively.
D. Configure canary settings for the production stage API. Change the percentage of traffic directed to the canary deployment to 20%. Make planned updates to the production stage. Implement the changes

A

D. Configure canary settings for the production stage API. Change the percentage of traffic directed to the canary deployment to 20%. Make planned updates to the production stage. Implement the changes

Canary release is a software development strategy in which a new version of an API (as well as other software) is deployed for testing purposes, and the base version remains deployed as a production release for normal operations on the same stage. For purposes of discussion, we refer to the base version as a production release in this documentation. Although this is reasonable, you are free to apply canary release on any non-production version for testing.

In a canary release deployment, total API traffic is separated at random into a production release and a canary release with a pre-configured ratio. Typically, the canary release receives a small percentage of API traffic and the production release takes up the rest. The updated API features are only visible to API traffic through the canary. You can adjust the canary traffic percentage to optimize test coverage or performance.

https://docs.aws.amazon.com/apigateway/latest/developerguide/canary-release.html
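A hedged sketch of enabling the canary when deploying to the production stage; the REST API ID is a placeholder.

```python
import boto3

apigateway = boto3.client("apigateway")

# Deploy the new version as a canary that receives 20% of production traffic.
apigateway.create_deployment(
    restApiId="a1b2c3d4e5",            # placeholder REST API ID
    stageName="production",
    canarySettings={"percentTraffic": 20.0},
)

# After monitoring the canary, promote it to the full stage (or remove it to roll back).
```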

95
Q

668 # A company has a large data workload that lasts 6 hours a day. The company cannot lose any data while the process is running. A solutions architect is designing an Amazon EMR cluster configuration to support this critical data workload. Which solution will meet these requirements in the MOST cost-effective way?

A. Configure a long-running cluster that runs the primary node and core nodes on on-demand instances and task nodes on spot instances.
B. Configure a transient cluster that runs the primary node and core nodes on on-demand instances and task nodes on spot instances.
C. Configure a transient cluster that runs the primary node on an on-demand instance and the core nodes and task nodes on spot instances.
D. Configure a long-running cluster that runs the primary node on an on-demand instance, the core nodes on spot instances, and the task nodes on spot instances.

A

B. Configure a transient cluster that runs the primary node and core nodes on on-demand instances and task nodes on spot instances

In this option, the cluster is configured as transient, indicating that it is intended for a short-lived daily workload. The primary node and core nodes run on on-demand instances for stability and reliability, so the data being processed is not lost. Task nodes run on spot instances, which are cheaper but can be interrupted; losing them only reduces capacity, because the core nodes hold the HDFS data.

From the documentation: When you configure termination after step execution, the cluster starts, runs bootstrap actions, and then runs the steps that you specify. As soon as the last step completes, Amazon EMR terminates the cluster’s Amazon EC2 instances. Clusters that you launch with the Amazon EMR API have step execution enabled by default. Termination after step execution is effective for clusters that perform a periodic processing task, such as a daily data processing run. Step execution also helps you ensure that you are billed only for the time required to process your data.

https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-plan-longrunning-transient.html
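A trimmed run_job_flow sketch of that layout (release label, instance types, step, roles, and buckets are placeholders); with no steps left and KeepJobFlowAliveWhenNoSteps disabled, the cluster terminates itself when the job finishes.

```python
import boto3

emr = boto3.client("emr")

emr.run_job_flow(
    Name="daily-critical-workload",
    ReleaseLabel="emr-6.15.0",                             # placeholder release
    LogUri="s3://emr-logs-bucket/",                        # placeholder log bucket
    ServiceRole="EMR_DefaultRole",
    JobFlowRole="EMR_EC2_DefaultRole",
    Instances={
        "KeepJobFlowAliveWhenNoSteps": False,              # transient: terminate after the last step
        "InstanceGroups": [
            {"InstanceRole": "MASTER", "Market": "ON_DEMAND", "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"InstanceRole": "CORE", "Market": "ON_DEMAND", "InstanceType": "m5.xlarge", "InstanceCount": 2},
            {"InstanceRole": "TASK", "Market": "SPOT", "InstanceType": "m5.xlarge", "InstanceCount": 4},
        ],
    },
    Steps=[{
        "Name": "daily-processing",
        "ActionOnFailure": "TERMINATE_CLUSTER",
        "HadoopJarStep": {"Jar": "command-runner.jar", "Args": ["spark-submit", "s3://jobs-bucket/job.py"]},
    }],
)
```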

96
Q

669 # A company maintains an Amazon RDS database that maps users to cost centers. The company has accounts in an organization in AWS Organizations. The company needs a solution that tags all resources that are created in a specific AWS account in the organization. The solution must tag each resource with the cost center ID of the user who created the resource. What solution will meet these requirements?

A. Move the specific AWS account to a new organizational unit (OU) in Organizations from the management account. Create a service control policy (SCP) that requires all existing resources to have the correct cost center tag before the resources are created. Apply the SCP to the new OU.
B. Create an AWS Lambda function to tag the resources after the Lambda function finds the appropriate cost center from the RDS database. Configure an Amazon EventBridge rule that reacts to AWS CloudTrail events to invoke the Lambda function.
C. Create an AWS CloudFormation stack to deploy an AWS Lambda function. Configure the Lambda function to look up the appropriate cost center in the RDS database and to tag the resources. Create an Amazon EventBridge scheduled rule to invoke the CloudFormation stack.
D. Create an AWS Lambda function to tag resources with a default value. Configure an Amazon EventBridge rule that reacts to AWS CloudTrail events to invoke the Lambda function when a resource is missing the cost center tag.

A

B. Create an AWS Lambda function to tag the resources after the Lambda function looks up the appropriate cost center from the RDS database. Configure an Amazon EventBridge rule that reacts to AWS CloudTrail events to invoke the Lambda function.

In option B, an AWS Lambda function is used to tag the resources. This Lambda function is configured to look up the appropriate cost center from the RDS database. This ensures that each resource is tagged with the correct cost center ID. Using Amazon EventBridge in conjunction with AWS CloudTrail events allows you to trigger the Lambda function when resources are created. This ensures that the tagging process is automatically started whenever a relevant event occurs.
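A rough sketch of the two pieces, assuming hypothetical names: an EventBridge rule matching CloudTrail "AWS API Call" events, and a Lambda handler that looks up the cost center (the lookup and ARN extraction are stubbed) and applies the tag with the Resource Groups Tagging API.

```python
import json
import boto3

events = boto3.client("events")
tagging = boto3.client("resourcegroupstaggingapi")

# Match resource-creating API calls recorded by CloudTrail (pattern kept broad here).
events.put_rule(
    Name="tag-new-resources",
    EventPattern=json.dumps({"detail-type": ["AWS API Call via CloudTrail"]}),
)
# (events.put_targets wiring this rule to the Lambda function is omitted for brevity.)

def lookup_cost_center(user_arn):
    # Hypothetical lookup against the RDS table that maps users to cost centers.
    return "CC-0000"

def extract_resource_arns(event):
    # Hypothetical parser; real CloudTrail events expose created-resource details per service.
    return event["detail"].get("resources", [])

def handler(event, context):
    user = event["detail"]["userIdentity"]["arn"]
    cost_center = lookup_cost_center(user)
    arns = extract_resource_arns(event)
    if arns:
        tagging.tag_resources(ResourceARNList=arns, Tags={"CostCenter": cost_center})
```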

97
Q

670 # A company recently migrated its web application to the AWS cloud. The company uses an Amazon EC2 instance to run multiple processes to host the application. The processes include an Apache web server that serves static content. The Apache web server makes requests to a PHP application that uses a local Redis server for user sessions. The company wants to redesign the architecture to be highly available and use solutions managed by AWS. What solution will meet these requirements?

A. Use AWS Elastic Beanstalk to host the static content and the PHP application. Configure Elastic Beanstalk to deploy your EC2 instance on a public subnet. Assign a public IP address.
B. Use AWS Lambda to host the static content and the PHP application. Use an Amazon API Gateway REST API to proxy requests to the Lambda function. Set the API Gateway CORS configuration to respond to the domain name. Configure Amazon ElastiCache for Redis to handle session information.
C. Keep the backend code on the EC2 instance. Create an Amazon ElastiCache for Redis cluster that has Multi-AZ enabled. Configure the ElastiCache for Redis cluster in cluster mode. Copy the frontend resources to Amazon S3. Configure the backend code to reference the EC2 instance.
D. Configure an Amazon CloudFront distribution with an Amazon S3 endpoint to an S3 bucket that is configured to host the static content. Configure an application load balancer that targets an Amazon Elastic Container Service (Amazon ECS) service that runs AWS Fargate tasks for the PHP application. Configure the PHP application to use an Amazon ElastiCache for Redis cluster that runs in multiple availability zones.

A

D. Configure an Amazon CloudFront distribution with an Amazon S3 endpoint to an S3 bucket that is configured to host the static content. Configure an application load balancer that targets an Amazon Elastic Container Service (Amazon ECS) service that runs AWS Fargate tasks for the PHP application. Configure the PHP application to use an Amazon ElastiCache for Redis cluster that runs in multiple availability zones.

This option leverages Amazon CloudFront for global content delivery, providing low-latency access to static resources. Separating static content (hosted on S3) and dynamic content (running on ECS with Fargate) is a good practice for scalability and manageability. Using an application load balancer with AWS Fargate tasks for your PHP application provides a highly available and scalable environment. The Amazon ElastiCache for Redis cluster with Multi-AZ is used for session management, ensuring high availability. Option D is a well-designed solution that leverages multiple AWS managed services to achieve high availability, scalability, and separation of concerns. It aligns with best practices for hosting web applications on AWS.

98
Q

671 # A company runs a web application on Amazon EC2 instances in an Auto Scaling group that has a target group. The company designed the app to work with session affinity (sticky sessions) for a better user experience. The application must be publicly available via the Internet as an endpoint. A WAF should be applied to the endpoint for added security. Session affinity (sticky sessions) must be configured on the endpoint. What combination of steps will meet these requirements? (Choose two.)
A. Create a public network load balancer. Specify the target group for the application.
B. Create a gateway load balancer. Specify the target group for the application.
C. Create a public application load balancer. Specify the application target group.
D. Create a second target group. Add elastic IP addresses to EC2 instances.
E. Create a web ACL in AWS WAF. Associate the web ACL with the endpoint

A

C. Create a public application load balancer. Specify the application target group.
E. Create a web ACL in AWS WAF. Associate the web ACL with the endpoint

An application load balancer (ALB) is designed for HTTP/HTTPS traffic and supports session affinity (sticky sessions). By creating a public ALB, you can expose your web application to the Internet with the necessary routing and load balancing capabilities.

AWS WAF (Web Application Firewall) provides protection against common web exploits and attacks. By creating a web ACL, you can define rules to filter and control web traffic.

Associating the web ACL with the endpoint ensures that the web application is protected by the specified security policies.
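A short sketch of the association step plus enabling stickiness on the application's target group; the ARNs are placeholders.

```python
import boto3

wafv2 = boto3.client("wafv2")
elbv2 = boto3.client("elbv2")

# Attach a regional web ACL to the public ALB.
wafv2.associate_web_acl(
    WebACLArn="arn:aws:wafv2:us-east-1:111122223333:regional/webacl/app-acl/abcd1234",
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/public-alb/0123456789abcdef",
)

# Enable session affinity on the application's target group.
elbv2.modify_target_group_attributes(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/app/0123456789abcdef",
    Attributes=[
        {"Key": "stickiness.enabled", "Value": "true"},
        {"Key": "stickiness.type", "Value": "lb_cookie"},
    ],
)
```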

99
Q

672 # A company has a website that stores images of historical events. Website users need the ability to search and view images based on the year the event in the image occurred. On average, users request each image only once or twice a year. The company wants a highly available solution for storing and delivering images to users. Which solution will meet these requirements in the MOST cost-effective way?

A. Store images in Amazon Elastic Block Store (Amazon EBS). Use a web server running on Amazon EC2.
B. Store images in Amazon Elastic File System (Amazon EFS). Use a web server running on Amazon EC2.
C. Store images in Amazon S3 Standard. Use S3 Standard to deliver images directly using a static website.
D. Store images in Amazon S3 Standard-Infrequent Access (S3 Standard-IA). Use S3 Standard-IA to deliver images directly using a static website.

A

D. Store images in Amazon S3 Standard-Infrequent Access (S3 Standard-IA). Use S3 Standard-IA to deliver images directly using a static website.

S3 Standard-IA is designed for data that is accessed infrequently (each image is requested only once or twice a year) but still needs millisecond access and high availability, so it delivers the images at a lower storage cost than S3 Standard.

100
Q

673 # A company has multiple AWS accounts in an organization in AWS Organizations that different business units use. The company has several offices around the world. The company needs to update security group rules to allow new office CIDR ranges or to remove old CIDR ranges across the organization. The company wants to centralize the management of security group rules to minimize the administrative overhead that updating CIDR ranges requires. Which solution will meet these requirements in the MOST cost-effective way?

A. Create VPC security groups in the organization’s management account. Update security groups when a CIDR range update is necessary.
B. Create a VPC customer-managed prefix list that contains the list of CIDRs. Use AWS Resource Access Manager (AWS RAM) to share the prefix list across your organization. Use the prefix list in the security groups throughout your organization.
C. Create an AWS-managed prefix list. Use an AWS Security Hub policy to enforce security group updates across your organization. Use an AWS Lambda function to update the prefix list automatically when CIDR ranges change.
D. Create security groups in a central AWS administrative account. Create a common AWS Firewall Manager security group policy for your entire organization. Select the previously created security groups as primary groups in the policy.

A

B. Create a VPC customer-managed prefix list that contains the list of CIDRs. Use AWS Resource Access Manager (AWS RAM) to share the prefix list across your organization. Use the prefix list in the security groups throughout your organization.

A VPC customer-managed prefix list allows you to define a set of CIDR ranges that can be shared across AWS accounts and Regions, which gives you a centralized place to manage those ranges. AWS Resource Access Manager (AWS RAM) can share resources, including prefix lists, between AWS accounts; sharing the customer-managed prefix list centralizes CIDR management for the whole organization. Security group rules in every account can then reference the shared prefix list, so they all use the same centralized set of CIDR ranges. This approach minimizes administrative overhead, enables centralized control, and provides a scalable way to manage security group rules globally.

https://docs.aws.amazon.com/vpc/latest/userguide/managed-prefix-lists.htm
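A hedged sketch of the flow (CIDRs, IDs, names, and the organization ARN are placeholders): create the customer-managed prefix list, share it through AWS RAM, and reference it in a security group rule.

```python
import boto3

ec2 = boto3.client("ec2")
ram = boto3.client("ram")

# Central list of office CIDR ranges, updated in one place.
pl = ec2.create_managed_prefix_list(
    PrefixListName="office-cidrs",
    AddressFamily="IPv4",
    MaxEntries=20,
    Entries=[{"Cidr": "203.0.113.0/24", "Description": "HQ"}],     # placeholder CIDR
)
pl_id = pl["PrefixList"]["PrefixListId"]

# Share the prefix list with the whole organization via AWS RAM.
ram.create_resource_share(
    name="office-cidrs-share",
    resourceArns=[pl["PrefixList"]["PrefixListArn"]],
    principals=["arn:aws:organizations::111122223333:organization/o-exampleorgid"],  # placeholder org ARN
)

# Member accounts reference the shared prefix list in their security group rules.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
                    "PrefixListIds": [{"PrefixListId": pl_id, "Description": "office access"}]}],
)
```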

101
Q

674 # A company uses an on-premises network attached storage (NAS) system to provide file shares to its high-performance computing (HPC) workloads. The company wants to migrate its latency-sensitive HPC workloads and their storage to the AWS Cloud. The company must be able to provide NFS and SMB multi-protocol access from the file system. Which solutions will meet these requirements with the lowest latency? (Choose two.)

A. Deploy compute-optimized EC2 instances in a cluster placement group.
B. Deploy compute-optimized EC2 instances in a partition placement group.
C. Attach the EC2 instances to an Amazon FSx for Lustre file system.
D. Connect the EC2 instances to an Amazon FSx for OpenZFS file system.
E. Connect the EC2 instances to an Amazon FSx for NetApp ONTAP file system.

A

A. Deploy compute-optimized EC2 instances in a cluster placement group.
E. Connect the EC2 instances to an Amazon FSx for NetApp ONTAP file system.

https://aws.amazon.com/fsx/when-to-choose-fsx/

Cluster placement groups allow you to group instances within a single availability zone to provide low-latency network performance. This is suitable for tightly coupled HPC workloads.

Amazon FSx for NetApp ONTAP supports both NFS and SMB protocols, making it suitable for multi-protocol access.

102
Q

675 # A company is relocating its data center and wants to securely transfer 50TB of data to AWS within 2 weeks. The existing data center has a site-to-site VPN connection to AWS that is 90% utilized. Which AWS service should a solutions architect use to meet these requirements?

A. AWS DataSync with a VPC endpoint
B. AWS Direct Connect
C. AWS Snowball Edge Storage Optimized
D. AWS Storage Gateway

A

C. AWS Snowball Edge Storage Optimized

The site-to-site VPN is already 90% utilized, so there is not enough spare bandwidth to move 50 TB over the network within 2 weeks; an offline transfer with a Snowball Edge Storage Optimized device meets the deadline securely.

103
Q

676 # A company hosts an application on Amazon EC2 On-Demand Instances in an Auto Scaling group. The application's peak times occur at the same time every day. Application users report slow performance at the start of peak times. The application performs normally 2-3 hours after peak times begin. The company wants to make sure the application works properly at the start of peak times. What solution will meet these requirements?

A. Configure an application load balancer to correctly distribute traffic to the instances.
B. Configure a dynamic scaling policy so that the Auto Scaling group launches new instances based on memory utilization.
C. Configure a dynamic scaling policy for the Auto Scaling group to start new instances based on CPU utilization.
D. Configure a scheduled scaling policy so that the auto-scaling group starts new instances before peak times.

A

D. Configure a scheduled scaling policy so that the auto-scaling group starts new instances before peak times.
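Scheduled scaling adds capacity before the known daily peak, so instances are already running when traffic arrives instead of launching reactively. A minimal sketch (group name, times, and sizes are placeholders):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Scale out shortly before the known daily peak, then scale back in afterwards.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="app-asg",                 # placeholder group name
    ScheduledActionName="pre-peak-scale-out",
    Recurrence="45 8 * * *",                        # cron (UTC): 08:45 every day
    MinSize=4,
    MaxSize=10,
    DesiredCapacity=8,
)

autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="app-asg",
    ScheduledActionName="post-peak-scale-in",
    Recurrence="0 12 * * *",
    MinSize=2,
    MaxSize=10,
    DesiredCapacity=2,
)
```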

104
Q

677 # A company runs applications on AWS that connect to the company's Amazon RDS database. The applications scale on weekends and during peak times of the year. The company wants to scale the database connections more effectively for its applications. Which solution will meet these requirements with the LEAST operational overhead?

A. Use Amazon DynamoDB with connection pooling with a target pool configuration for the database. Switch applications to use the DynamoDB endpoint.
B. Use Amazon RDS Proxy with a target group for the database. Change the applications to use the RDS proxy endpoint.
C. Use a custom proxy running on Amazon EC2 as the database broker. Change applications to use the custom proxy endpoint.
D. Use an AWS Lambda function to provide a connection pool with a target pool configuration for the database. Change applications to use the Lambda function.

A

B. Use Amazon RDS Proxy with a target group for the database. Change the applications to use the RDS proxy endpoint.

Amazon RDS Proxy is a managed database proxy that provides connection pooling, failover, and security features for database applications. It allows applications to scale more effectively and efficiently by managing database connections on their behalf. It integrates well with RDS and reduces operating expenses.
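A hedged sketch of creating the proxy and registering the database (role, secret ARN, subnets, and names are placeholders); applications then connect to the proxy endpoint instead of the DB endpoint.

```python
import boto3

rds = boto3.client("rds")

rds.create_db_proxy(
    DBProxyName="app-db-proxy",
    EngineFamily="MYSQL",                                            # or POSTGRESQL
    Auth=[{"AuthScheme": "SECRETS",
           "SecretArn": "arn:aws:secretsmanager:us-east-1:111122223333:secret:db-creds"}],  # placeholder
    RoleArn="arn:aws:iam::111122223333:role/rds-proxy-role",         # allows the proxy to read the secret
    VpcSubnetIds=["subnet-aaa111", "subnet-bbb222"],
)

# Point the proxy's default target group at the existing RDS instance.
rds.register_db_proxy_targets(
    DBProxyName="app-db-proxy",
    DBInstanceIdentifiers=["app-db"],                                # placeholder DB identifier
)
```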

105
Q

678 # A company uses AWS Cost Explorer to monitor its AWS costs. The company notices that Amazon Elastic Block Store (Amazon EBS) storage and snapshot costs increase every month. However, the company does not purchase additional EBS storage every month. The company wants to optimize monthly costs for its current storage usage. Which solution will meet these requirements with the LEAST operational overhead?

A. Use Amazon CloudWatch Logs to monitor Amazon EBS storage utilization. Use Amazon EBS Elastic Volumes to reduce the size of EBS volumes.
B. Use a custom script to monitor space usage. Use Amazon EBS Elastic Volumes to reduce the size of EBS volumes.
C. Delete all expired and unused snapshots to reduce snapshot costs.
D. Delete all non-essential snapshots. Use Amazon Data Lifecycle Manager to create and manage snapshots according to your company’s snapshot policy requirements.

A

D. Delete all non-essential snapshots. Use Amazon Data Lifecycle Manager to create and manage snapshots according to your company’s snapshot policy requirements.

Delete all nonessential snapshots: This reduces costs by eliminating unnecessary snapshot storage. Use Amazon Data Lifecycle Manager (DLM): DLM can automate the creation and deletion of snapshots based on defined policies. This reduces operational overhead by automating snapshot management according to the company’s snapshot policy requirements.

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/snapshot-lifecycle.html

106
Q

679 # A company is developing a new application on AWS. The application consists of an Amazon Elastic Container Service (Amazon ECS) cluster, an Amazon S3 bucket that contains assets for the application, and an Amazon RDS for MySQL database that contains the application’s data set. The data set contains sensitive information. The company wants to ensure that only the ECS cluster can access data in the RDS for MySQL database and data in the S3 bucket. What solution will meet these requirements?

A. Create a new AWS Key Management Service (AWS KMS) customer-managed key to encrypt both the S3 bucket and the RDS database for MySQL. Ensure that the KMS key policy includes encryption and decryption permissions for the ECS task execution role.
B. Create an AWS Key Management Service (AWS KMS) AWS Managed Key to encrypt both the S3 bucket and the RDS database for MySQL. Ensure that the S3 bucket policy specifies the ECS task execution role as user.
C. Create an S3 bucket policy that restricts bucket access to the ECS task execution role. Create a VPC endpoint for Amazon RDS for MySQL. Update the RDS security group for MySQL to allow access only from the subnets on which the ECS cluster will generate tasks.
D. Create a VPC endpoint for Amazon RDS for MySQL. Update the RDS security group for MySQL to allow access only from the subnets on which the ECS cluster will generate tasks. Create a VPC endpoint for Amazon S3. Update the S3 bucket policy to allow access only from the S3 VPC endpoint.

A

D. Create a VPC endpoint for Amazon RDS for MySQL. Update the RDS security group for MySQL to allow access only from the subnets on which the ECS cluster will generate tasks. Create a VPC endpoint for Amazon S3. Update the S3 bucket policy to allow access only from the S3 VPC endpoint.

This approach controls access at the network level by ensuring that the RDS database and S3 bucket are accessible only through the specified VPC endpoints, limiting access to resources within the ECS cluster VPC.

107
Q

680 # A company has a web application that runs on premises. The app experiences latency issues during peak hours. Latency issues occur twice a month. At the beginning of a latency issue, the application’s CPU utilization immediately increases to 10 times its normal amount. The company wants to migrate the application to AWS to improve latency. The company also wants to scale the app automatically when demand for the app increases. The company will use AWS Elastic Beanstalk for application deployment. What solution will meet these requirements?

A. Configure an Elastic Beanstalk environment to use burstable performance instances in unlimited mode. Configure the environment to scale based on requests.
B. Configure an Elastic Beanstalk environment to use compute-optimized instances. Configure the environment to scale based on requests.
C. Configure an Elastic Beanstalk environment to use compute-optimized instances. Configure the environment to scale on a schedule.
D. Configure an Elastic Beanstalk environment to use burstable performance instances in unlimited mode. Configure the environment to scale on predictive metrics.

A

D. Configure an Elastic Beanstalk environment to use burstable performance instances in unlimited mode. Configure the environment to scale on predictive metrics.

Predictive scaling works by analyzing historical load data to detect daily or weekly patterns in traffic flows. It uses this information to forecast future capacity needs, so Amazon EC2 Auto Scaling can proactively increase the capacity of the Auto Scaling group to match the anticipated load.

Predictive scaling is well suited for situations where you have:

Cyclical traffic, such as high use of resources during regular business hours and low use of resources during evenings and weekends
Recurring on-and-off workload patterns, such as batch processing, testing, or periodic data analysis
Applications that take a long time to initialize, causing a noticeable latency impact on application performance during scale-out events

In general, if you have regular patterns of traffic increases and applications that take a long time to initialize, you should consider using predictive scaling. Predictive scaling can help you scale faster by launching capacity in advance of forecasted load, compared to using only dynamic scaling, which is reactive in nature. Predictive scaling can also potentially save you money on your EC2 bill by helping you avoid the need to over-provision capacity.

Burstable performance instances in unlimited mode can absorb the sudden 10x CPU spikes, and scaling on predictive metrics lets the environment add capacity before the recurring peaks. Therefore, option D is the most suitable solution to improve latency and automatically scale the application during peak times.
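A hedged sketch of attaching a predictive scaling policy to the environment's Auto Scaling group (the group name and target value are placeholders).

```python
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="beanstalk-app-asg",            # placeholder; the ASG created by Elastic Beanstalk
    PolicyName="predictive-cpu",
    PolicyType="PredictiveScaling",
    PredictiveScalingConfiguration={
        "MetricSpecifications": [{
            "TargetValue": 40.0,                          # keep average CPU near 40%
            "PredefinedMetricPairSpecification": {"PredefinedMetricType": "ASGCPUUtilization"},
        }],
        "Mode": "ForecastAndScale",                       # forecast and proactively launch capacity
    },
)
```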

108
Q

681 # A company has customers located all over the world. The company wants to use automation to protect its systems and network infrastructure. The company’s security team must be able to track and audit all incremental changes to the infrastructure. What solution will meet these requirements?

A. Use AWS Organizations to configure infrastructure. Use AWS Config to track changes.
B. Use AWS CloudFormation to configure the infrastructure. Use AWS Config to track changes.
C. Use AWS Organizations to configure the infrastructure. Use the AWS Service Catalog to track changes.
D. Use AWS CloudFormation to configure the infrastructure. Use the AWS Service Catalog to track changes.

A

B. Use AWS CloudFormation to configure the infrastructure. Use AWS Config to track changes

AWS CloudFormation is an infrastructure as code (IaC) service that allows you to define and provision AWS infrastructure. Using CloudFormation ensures automation in infrastructure configuration, and AWS Config can be used to track changes and maintain an inventory of resources.

109
Q

682 # A startup is hosting a website for its customers on an Amazon EC2 instance. The website consists of a stateless Python application and a MySQL database. The website only serves a small amount of traffic. The company is concerned about instance reliability and needs to migrate to a highly available architecture. The company cannot modify the application code. What combination of actions should a solutions architect take to achieve high availability for the website? (Choose two.)

A. Provision an Internet gateway in each availability zone in use.
B. Migrate the database to an Amazon RDS for MySQL Multi-AZ DB instance.
C. Migrate the database to Amazon DynamoDB and enable DynamoDB auto-scaling.
D. Use AWS DataSync to synchronize database data across multiple EC2 instances.
E. Create an application load balancer to distribute traffic to an auto-scaling group of EC2 instances that are spread across two availability zones.

A

B. Migrate the database to an Amazon RDS for MySQL Multi-AZ DB instance.
E. Create an application load balancer to distribute traffic to an auto-scaling group of EC2 instances that are spread across two availability zones.

Amazon RDS Multi-AZ (Availability Zone) deployments provide high availability for database instances. RDS automatically replicates the database to a standby instance in a different Availability Zone, ensuring failover in the event of a primary AZ failure.

Option E ensures that traffic is distributed across multiple EC2 instances for the website. Combining the Application Load Balancer with an Auto Scaling group that spans two Availability Zones keeps the web tier available even if one zone fails.

Therefore, options B and E together provide a solution to achieve high availability for the website without modifying the application code.
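
For example, the existing database could be converted to Multi-AZ with a single call; the instance identifier below is a placeholder.

import boto3

rds = boto3.client("rds")

# Convert the existing DB instance to a Multi-AZ deployment; RDS provisions
# a synchronous standby in another Availability Zone and fails over to it
# automatically if the primary becomes unavailable.
rds.modify_db_instance(
    DBInstanceIdentifier="website-mysql",   # placeholder identifier
    MultiAZ=True,
    ApplyImmediately=True,
)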

110
Q

683 # A company is moving its data and applications to AWS during a multi-year migration project. The company wants to securely access Amazon S3 data from the company’s AWS Region and from the company’s on-premises location. The data must not traverse the Internet. The company has established an AWS Direct Connect connection between the Region and the on-premises location. What solution will meet these requirements?

A. Create gateway endpoints for Amazon S3. Use gateway endpoints to securely access the data from the Region and the on-premises location.
B. Create a gateway on AWS Transit Gateway to access Amazon S3 securely from your region and on-premises location.
C. Create interface endpoints for Amazon S3. Use interface endpoints to securely access local location and region data.
D. Use an AWS Key Management Service (AWS KMS) key to securely access data from your region and on-premises location.

A

C. Create interface endpoints for Amazon S3. Use the interface endpoints to securely access the data from the Region and the on-premises location.

A gateway endpoint for Amazon S3 keeps traffic from the VPC off the public Internet, but gateway endpoints cannot be reached from on-premises networks over AWS Direct Connect. Interface endpoints (AWS PrivateLink) provide private IP addresses inside the VPC that are routable from the on-premises location through Direct Connect, so both the Region and the on-premises location can access S3 without the data traversing the Internet.
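
A rough sketch of creating both kinds of S3 endpoints with boto3 (the VPC, subnet, security group, and route table IDs are placeholders); the interface endpoint is the one reachable from on premises over Direct Connect.

import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")

# Interface endpoint (AWS PrivateLink): provides private IPs in the VPC that
# on-premises hosts can reach over Direct Connect, keeping S3 traffic private.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",                 # placeholder
    ServiceName="com.amazonaws.eu-central-1.s3",
    SubnetIds=["subnet-0123456789abcdef0"],        # placeholder
    SecurityGroupIds=["sg-0123456789abcdef0"],     # placeholder
)

# Gateway endpoint: free and useful for traffic that originates inside the
# VPC, but it is not reachable from the on-premises network.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.eu-central-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],       # placeholder
)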

111
Q

684 # A company created a new organization in AWS Organizations. The organization has multiple accounts for the company’s development teams. Development team members use AWS IAM Identity Center (AWS Single Sign-On) to access accounts. For each of the company’s applications, development teams must use a predefined application name to label the resources that are created. A solutions architect needs to design a solution that gives the development team the ability to create resources only if the application name tag has an approved value. What solution will meet these requirements?

A. Create an IAM group that has a conditional permission policy that requires the application name tag to be specified to create resources.
B. Create a cross-account role that has a deny policy for any resources that have the application name tag.
C. Create a resource group in AWS Resource Groups to validate that the tags are applied to all resources in all accounts.
D. Create a tag policy in Organizations that has a list of allowed application names.

A

D. Create a tag policy in Organizations that has a list of allowed application names.

AWS Organizations allows you to create tag policies that define which tags should be applied to resources and which values are allowed. This is an effective way to ensure that only approved application names are used as tag values. Therefore, option D, creating a tag policy in Organizations with a list of allowed application names, is the most appropriate solution to enforce the required tag values.

Wrong: A. Create an IAM group that has a conditional permission policy that requires the application name tag to be specified to create resources. While IAM policies may include conditions, they focus more on actions and resources, and may not be best suited to enforce specific tag values.
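
A sketch of what such a tag policy could look like with boto3; the tag key, allowed values, and target OU ID are illustrative only.

import boto3
import json

org = boto3.client("organizations")

# Tag policy that only allows approved values for the "AppName" tag key.
tag_policy = {
    "tags": {
        "AppName": {
            "tag_key": {"@@assign": "AppName"},
            "tag_value": {"@@assign": ["inventory-app", "billing-app"]},  # example approved names
        }
    }
}

response = org.create_policy(
    Name="approved-application-names",
    Description="Allow only approved application name tag values",
    Type="TAG_POLICY",
    Content=json.dumps(tag_policy),
)

# Attach the policy to an OU (or account) so it applies to the development accounts.
org.attach_policy(
    PolicyId=response["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-examplerootid111-exampleouid111",   # placeholder OU ID
)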

112
Q

685 # A company runs its databases on Amazon RDS for PostgreSQL. The company wants a secure solution to manage the master user password by rotating the password every 30 days. Which solution will meet these requirements with the LEAST operational overhead?

A. Use Amazon EventBridge to schedule a custom AWS Lambda function to rotate the password every 30 days.
B. Use the modify-db-instance command in the AWS CLI to change the password.
C. Integrate AWS Secrets Manager with Amazon RDS for PostgreSQL to automate password rotation.
D. Integrate AWS Systems Manager Parameter Store with Amazon RDS for PostgreSQL to automate password rotation.

A

C. Integrate AWS Secrets Manager with Amazon RDS for PostgreSQL to automate password rotation.

AWS Secrets Manager provides a managed solution for rotating database credentials, including built-in support for Amazon RDS. It enables automatic master user password rotation for RDS for PostgreSQL with minimal operational overhead.
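
Assuming the master credentials are already stored as a secret, rotation every 30 days can be enabled roughly like this; the secret ARN and rotation Lambda ARN are placeholders (for RDS, Secrets Manager can also provision the rotation function from a template).

import boto3

secrets = boto3.client("secretsmanager")

# Turn on automatic rotation of the RDS master user secret every 30 days.
secrets.rotate_secret(
    SecretId="arn:aws:secretsmanager:eu-central-1:123456789012:secret:rds-master-abc123",          # placeholder
    RotationLambdaARN="arn:aws:lambda:eu-central-1:123456789012:function:SecretsManagerRDSRotation",  # placeholder
    RotationRules={"AutomaticallyAfterDays": 30},
)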

113
Q

686 # A company tests an application that uses an Amazon DynamoDB table. Testing is done for 4 hours once a week. The company knows how many read and write operations the application performs on the table each second during testing. The company does not currently use DynamoDB for any other use cases. A solutions architect needs to optimize the table’s costs. What solution will meet these requirements?

A. Choose on-demand mode. Update read and write capacity units appropriately.
B. Choose provisioned mode. Update read and write capacity units appropriately.
C. Purchase DynamoDB reserved capacity for a period of 1 year.
D. Purchase DynamoDB reserved capacity for a period of 3 years.

A

B. Choose provisioned mode. Update read and write capacity units appropriately.

In provisioned capacity mode, you manually provision read and write capacity units based on your known workload. Because the company knows the read and write operations during testing, it can provide the exact capacity needed for those specific periods, optimizing costs by not paying for unused capacity at other times.

On-demand mode in DynamoDB automatically adjusts read and write capacity based on actual usage. However, since the workload is known and occurs during specific periods, provisioned mode would probably be more cost-effective.
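
A minimal sketch of switching the table to provisioned mode and sizing it for the known test load; the table name and capacity numbers are placeholders.

import boto3

dynamodb = boto3.client("dynamodb")

# Use provisioned mode and size the table for the known read/write rates
# observed during the weekly 4-hour test window.
dynamodb.update_table(
    TableName="test-results",                    # placeholder table name
    BillingMode="PROVISIONED",
    ProvisionedThroughput={
        "ReadCapacityUnits": 100,                # example values
        "WriteCapacityUnits": 50,
    },
)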

114
Q

687 # A company runs its applications on Amazon EC2 instances. The company conducts periodic financial evaluations of its AWS costs. The company recently identified an unusual expense. The company needs a solution to avoid unusual expenses. The solution should monitor costs and notify responsible stakeholders in case of unusual expenses. What solution will meet these requirements?

A. Use an AWS budget template to create a zero-spend budget.
B. Create an AWS Cost Anomaly Detection Monitor in the AWS Billing and Cost Management Console.
C. Create AWS Pricing Calculator estimates for current running workload pricing details.
D. Use Amazon CloudWatch to monitor costs and identify unusual expenses.

A

B. Create an AWS Cost Anomaly Detection Monitor in the AWS Billing and Cost Management Console.

AWS Cost Anomaly Detection uses machine learning to identify unexpected spending patterns and anomalies in your costs. It can automatically detect unusual expenses and send notifications, making it suitable for the described scenario.
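
A sketch of setting up a cost anomaly monitor and an email subscription with boto3; the monitor scope, threshold, and address are illustrative.

import boto3

ce = boto3.client("ce")

# Monitor AWS service spend for anomalies.
monitor = ce.create_anomaly_monitor(
    AnomalyMonitor={
        "MonitorName": "service-spend-monitor",
        "MonitorType": "DIMENSIONAL",
        "MonitorDimension": "SERVICE",
    }
)

# Notify stakeholders by email when the anomaly impact exceeds 100 USD.
ce.create_anomaly_subscription(
    AnomalySubscription={
        "SubscriptionName": "cost-anomaly-alerts",
        "MonitorArnList": [monitor["MonitorArn"]],
        "Subscribers": [{"Type": "EMAIL", "Address": "finops@example.com"}],  # placeholder address
        "Threshold": 100.0,
        "Frequency": "IMMEDIATE",
    }
)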

115
Q

688 # A marketing company receives a large amount of new clickstream data in Amazon S3 from a marketing campaign. The business needs to analyze the clickstream data in Amazon S3 quickly. Next, the business needs to determine whether to process the data further in the data pipeline. Which solution will meet these requirements with the LEAST operational overhead?

A. Create external tables in a Spark catalog. Set up jobs in AWS Glue to query the data.
B. Configure an AWS Glue crawler to crawl the data. Configure Amazon Athena to query data.
C. Create external tables in a Hive metastore. Configure Spark jobs in Amazon EMR to query data.
D. Configure an AWS Glue crawler to crawl the data. Configure Amazon Kinesis Data Analytics to use SQL to query data.

A

B. Configure an AWS Glue crawler to crawl the data. Configure Amazon Athena to query data.

AWS Glue Crawler can automatically discover and catalog metadata about clickstream data in S3. Amazon Athena, as a serverless query service, allows you to perform fast ad hoc SQL queries on data without needing to configure and manage the infrastructure.

AWS Glue is a fully managed extract, transform, and load (ETL) service, and Athena is a serverless query service that allows you to analyze data directly in Amazon S3 using SQL queries. By configuring an AWS Glue crawler to crawl the data, you can create a schema for the data, and then use Athena to query the data in place, directly in S3, without the need to load it into a separate database. This minimizes operational overhead.
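
A rough sketch of the two pieces with boto3; the bucket, database, role, table, and output location names are placeholders.

import boto3

glue = boto3.client("glue")
athena = boto3.client("athena")

# Crawl the clickstream data in S3 and populate the Glue Data Catalog.
glue.create_crawler(
    Name="clickstream-crawler",
    Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",     # placeholder
    DatabaseName="clickstream_db",
    Targets={"S3Targets": [{"Path": "s3://clickstream-bucket/raw/"}]},  # placeholder
)
glue.start_crawler(Name="clickstream-crawler")

# Once the crawler has created the table, query it ad hoc with Athena.
athena.start_query_execution(
    QueryString="SELECT page, COUNT(*) AS hits FROM clicks GROUP BY page ORDER BY hits DESC LIMIT 10",
    QueryExecutionContext={"Database": "clickstream_db"},
    ResultConfiguration={"OutputLocation": "s3://athena-results-bucket/"},  # placeholder
)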

116
Q

689 # A company runs an SMB file server in its data center. The file server stores large files that are frequently accessed by the company for up to 7 days after the file creation date. After 7 days, the company should be able to access the files with a maximum recovery time of 24 hours. What solution will meet these requirements?

A. Use AWS DataSync to copy data older than 7 days from the SMB file server to AWS.
B. Create an Amazon S3 file gateway to increase the company’s storage space. Create an S3 lifecycle policy to transition the data to S3 Glacier Deep Archive after 7 days.
C. Create an Amazon FSx file gateway to increase your company’s storage space. Create an Amazon S3 lifecycle policy to transition data after 7 days.
D. Configure access to Amazon S3 for each user. Create an S3 lifecycle policy to transition data to S3 Glacier Flexible Retrieval after 7 days.

A

B. Create an Amazon S3 file gateway to increase the company’s storage space. Create an S3 lifecycle policy to transition the data to S3 Glacier Deep Archive after 7 days.

With an S3 file gateway, you can present an S3 bucket as a file share. Using an S3 lifecycle policy to transition data to Glacier Deep Archive after 7 days allows for cost savings, and recovery time is within the specified 24 hours.
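
The lifecycle rule behind option B could be applied roughly like this; the bucket name is a placeholder.

import boto3

s3 = boto3.client("s3")

# Move file-gateway-backed objects to S3 Glacier Deep Archive 7 days after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="smb-file-share-bucket",                      # placeholder bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-after-7-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},                # apply to every object
                "Transitions": [
                    {"Days": 7, "StorageClass": "DEEP_ARCHIVE"}
                ],
            }
        ]
    },
)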

117
Q

690 # A company runs a web application on Amazon EC2 instances in an Auto Scaling group. The application uses a database running on an Amazon RDS for PostgreSQL DB instance. The app runs slowly when traffic increases. The database experiences heavy read load during high traffic periods. What actions should a solutions architect take to resolve these performance issues? (Choose two.)

A. Enable auto-scaling for the database instance.
B. Create a read replica for the database instance. Configure the application to send read traffic to the read replica.
C. Convert the database instance to a Multi-AZ DB instance deployment. Configure the application to send read traffic to the standby database instance.
D. Create an Amazon ElastiCache cluster. Configure the application to cache query results in the ElastiCache cluster.
E. Configure Auto Scaling group subnets to ensure that EC2 instances are provisioned in the same availability zone as the database instance.

A

B. Create a read replica for the database instance. Configure the application to send read traffic to the read replica.
D. Create an Amazon ElastiCache cluster. Configure the application to cache query results in the ElastiCache cluster.

By creating a read replica, you offload read traffic from the primary database instance to the replica, distributing the read load and improving overall performance. This is a common approach to scale out read-heavy database workloads.

Amazon ElastiCache is a managed caching service that can help improve application performance by caching frequently accessed data. Cached query results in ElastiCache can reduce the load on the PostgreSQL database, especially for repeated read queries.
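
For example, the read replica can be created with a single call, and the application then points read-only queries at the replica endpoint; the identifiers below are placeholders.

import boto3

rds = boto3.client("rds")

# Create a read replica of the PostgreSQL instance to absorb read traffic.
replica = rds.create_db_instance_read_replica(
    DBInstanceIdentifier="webapp-postgres-replica-1",   # placeholder replica name
    SourceDBInstanceIdentifier="webapp-postgres",       # placeholder primary instance
    DBInstanceClass="db.r6g.large",                     # example instance class
)
print(replica["DBInstance"]["DBInstanceIdentifier"])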

118
Q

691 # A company uses Amazon EC2 instances and Amazon Elastic Block Store (Amazon EBS) volumes to run an application. The company creates a snapshot of each EBS volume every day to meet compliance requirements. The company wants to implement an architecture that prevents accidental deletion of EBS volume snapshots. The solution should not change the administrative rights of the storage administrator user. Which solution will meet these requirements with the LEAST administrative effort?

A. Create an IAM role that has permission to delete snapshots. Attach the role to a new EC2 instance. Use the AWS CLI from the new EC2 instance to delete snapshots.
B. Create an IAM policy that denies the deletion of snapshots. Attach the policy to the storage administrator user.
C. Add tags to the snapshots. Create Recycle Bin retention rules for EBS snapshots that have the tags.
D. Lock EBS snapshots to prevent deletion.

A

D. Lock EBS snapshots to prevent deletion.

Amazon EBS provides a built-in feature to lock snapshots, preventing them from being deleted. This is a straightforward and effective solution that does not involve creating additional IAM roles, policies, or tags. It directly addresses the requirement to prevent accidental deletion with minimal administrative effort.
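
Assuming a recent boto3 version that includes the EBS snapshot lock API, locking a snapshot could look roughly like this; the snapshot ID and duration are placeholders.

import boto3

ec2 = boto3.client("ec2")

# Lock the snapshot in governance mode for 30 days; it cannot be deleted
# while the lock is in effect, and the storage administrator's existing
# permissions do not need to change.
ec2.lock_snapshot(
    SnapshotId="snap-0123456789abcdef0",   # placeholder
    LockMode="governance",
    LockDuration=30,                        # days
)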

119
Q

692 # An enterprise application uses network load balancers, auto-scaling groups, Amazon EC2 instances, and databases that are deployed in an Amazon VPC. The company wants to capture information about traffic to and from network interfaces in near real time in its Amazon VPC. The company wants to send the information to Amazon OpenSearch Service for analysis. What solution will meet these requirements?

A. Create a log group in Amazon CloudWatch Logs. Configure VPC flow logs to send log data to the log group. Use Amazon Kinesis Data Streams to stream log group logs to the OpenSearch service.
B. Create a log group in Amazon CloudWatch Logs. Configure VPC flow logs to send log data to the log group. Use Amazon Kinesis Data Firehose to stream log group logs to the OpenSearch service.
C. Create a trail in AWS CloudTrail. Configure VPC flow logs to send log data to the trail. Use Amazon Kinesis Data Streams to stream the trail records to the OpenSearch Service.
D. Create a trail in AWS CloudTrail. Configure VPC flow logs to send log data to the trail. Use Amazon Kinesis Data Firehose to stream the trail logs to the OpenSearch Service.

A

B. Create a log group in Amazon CloudWatch Logs. Configure VPC flow logs to send log data to the log group. Use Amazon Kinesis Data Firehose to stream log group logs to the OpenSearch service.

Other answers:
A. Create a log group in Amazon CloudWatch Logs. Configure VPC flow logs to send log data to the log group. Use Amazon Kinesis Data Streams to stream log group logs to the OpenSearch service. This option involves configuring VPC flow logs to capture network traffic information and send the logs to an Amazon CloudWatch log group. Next, it suggests using Amazon Kinesis Data Streams to stream the log group logs to the Amazon OpenSearch service. While this is technically feasible, using Kinesis Data Streams could introduce unnecessary complexity for this use case.

C. Create a trail in AWS CloudTrail. Configure VPC flow logs to send log data to the trail. Use Amazon Kinesis Data Streams to stream the trail records to the OpenSearch Service. This option involves using AWS CloudTrail to capture the VPC flow logs and then using Amazon Kinesis Data Streams to stream the logs to the Amazon OpenSearch Service. However, CloudTrail is typically used to log API activity and does not provide the detailed network traffic information captured by VPC flow logs.

D. Create a trail in AWS CloudTrail. Configure VPC flow logs to send log data to the trail. Use Amazon Kinesis Data Firehose to stream the trail logs to the OpenSearch Service. Like option C, this option involves using AWS CloudTrail to capture VPC flow logs, but suggests using Amazon Kinesis Data Firehose instead of Kinesis Data Streams. Again, CloudTrail is not the right tool for capturing detailed information about network traffic.
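
A sketch of wiring option B together with boto3: flow logs go to a CloudWatch Logs log group, and a subscription filter streams the group to a Kinesis Data Firehose delivery stream whose destination is the OpenSearch Service domain. All names and ARNs are placeholders, and the Firehose delivery stream is assumed to exist already.

import boto3

ec2 = boto3.client("ec2")
logs = boto3.client("logs")

# 1) Publish VPC flow logs to a CloudWatch Logs log group.
ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],          # placeholder VPC
    ResourceType="VPC",
    TrafficType="ALL",
    LogDestinationType="cloud-watch-logs",
    LogGroupName="/vpc/flow-logs",
    DeliverLogsPermissionArn="arn:aws:iam::123456789012:role/FlowLogsRole",  # placeholder
)

# 2) Stream the log group to the existing Firehose delivery stream
#    (which delivers to the OpenSearch Service domain).
logs.put_subscription_filter(
    logGroupName="/vpc/flow-logs",
    filterName="to-opensearch",
    filterPattern="",                                # forward every event
    destinationArn="arn:aws:firehose:eu-central-1:123456789012:deliverystream/vpc-flow-to-opensearch",  # placeholder
    roleArn="arn:aws:iam::123456789012:role/CWLtoFirehoseRole",  # placeholder
)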

120
Q

693 # A company is developing an application that will run on an Amazon Elastic Kubernetes Service (Amazon EKS) production cluster. The EKS cluster has managed node groups that are provisioned with On-Demand Instances. The company needs a dedicated EKS cluster for development work. The company will use the development cluster infrequently to test the resilience of the application. The EKS cluster must manage all nodes. Which solution will meet these requirements in the MOST cost-effective way?

A. Create a managed node group that contains only spot instances.
B. Create two managed node groups. Provision a group of nodes with on-demand instances. Provision the second node group with spot instances.
C. Create an auto-scaling group that has a launch configuration that uses Spot Instances. Configure the user data to add the nodes to the EKS cluster.
D. Create a managed node group that contains only on-demand instances.

A

B. Create two managed node groups. Provision a group of nodes with on-demand instances. Provision the second node group with spot instances.

This option gives the company a dedicated EKS cluster for development work. By creating two managed node groups, one with On-Demand Instances and the other with Spot Instances, the company can manage costs effectively. On-Demand Instances provide stability and reliability, which is suitable for development work that requires consistent and predictable capacity.

Spot Instances offer cost savings but come with the trade-off of potential short-notice interruption. For infrequent testing and resilience experiments, Spot Instances can be used to optimize costs.

121
Q

694 # A company stores sensitive data in Amazon S3. A solutions architect needs to create an encryption solution. The enterprise needs to fully control users’ ability to create, rotate, and deactivate encryption keys with minimal effort for any data that needs to be encrypted. What solution will meet these requirements?

A. Use default server-side encryption with Amazon S3 Managed Encryption Keys (SSE-S3) to store sensitive data.
B. Create a customer-managed key using AWS Key Management Service (AWS KMS). Use the new key to encrypt S3 objects by using server-side encryption with AWS KMS keys (SSE-KMS).
C. Create an AWS managed key using the AWS Key Management Service (AWS KMS). Use the new key to encrypt S3 objects using server-side encryption with AWS KMS keys (SSE-KMS).
D. Download S3 objects to an Amazon EC2 instance. Encrypt the objects by using customer-managed keys. Upload the encrypted objects back to Amazon S3.

A

B. Create a customer-managed key using AWS Key Management Service (AWS KMS). Use the new key to encrypt S3 objects by using server-side encryption with AWS KMS keys (SSE-KMS).

AWS KMS allows you to create and manage customer managed keys (CMKs), giving you full control over the key lifecycle. This includes the ability to create, rotate, and deactivate keys as needed. Using server-side encryption with AWS KMS keys (SSE-KMS) ensures that S3 objects are encrypted with the specified customer-managed key. This provides a secure, managed approach to encrypt sensitive data in Amazon S3.
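
A minimal sketch: create the customer managed key, enable rotation, and set it as the bucket's default encryption; the bucket name is a placeholder.

import boto3

kms = boto3.client("kms")
s3 = boto3.client("s3")

# Customer managed key: the company controls its policy, rotation, and state.
key = kms.create_key(Description="Key for sensitive S3 data")
key_id = key["KeyMetadata"]["KeyId"]
kms.enable_key_rotation(KeyId=key_id)

# Default bucket encryption with SSE-KMS using the new key.
s3.put_bucket_encryption(
    Bucket="sensitive-data-bucket",                   # placeholder
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": key_id,
                },
                "BucketKeyEnabled": True,
            }
        ]
    },
)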

122
Q

695 # A company wants to back up its on-premises virtual machines (VMs) to AWS. The company’s backup solution exports local backups to an Amazon S3 bucket as objects. The S3 backups must be retained for 30 days and must be automatically deleted after 30 days. What combination of steps will meet these requirements? (Choose three.)

A. Create an S3 bucket that has S3 object locking enabled.
B. Create an S3 bucket that has object versioning enabled.
C. Set a default retention period of 30 days for the objects.
D. Configure an S3 lifecycle policy to protect the objects for 30 days.
E. Configure an S3 lifecycle policy to expire the objects after 30 days.
F. Configure the backup solution to tag the objects with a 30-day retention period.

A

A. Create an S3 bucket that has S3 object locking enabled.
C. Set a default retention period of 30 days for the objects.
E. Configure an S3 lifecycle policy to expire the objects after 30 days.

S3 object locking ensures that objects in the bucket cannot be deleted or modified for a specified retention period. This helps meet the requirement to retain backups for 30 days.

Set a default retention period on the S3 bucket, specifying that objects within the bucket are locked for a duration of 30 days. This enforces the retention policy on the objects.

Use an S3 lifecycle policy to automatically expire (delete) objects in the S3 bucket after the specified 30-day retention period. This ensures that backups are automatically deleted after the retention period.
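
Roughly, the three steps map to calls like the ones below (the bucket name is a placeholder); note that Object Lock must be enabled when the bucket is created.

import boto3

s3 = boto3.client("s3")

# A: bucket with S3 Object Lock enabled (must be set at creation time).
s3.create_bucket(
    Bucket="vm-backup-bucket",                      # placeholder
    CreateBucketConfiguration={"LocationConstraint": "eu-central-1"},
    ObjectLockEnabledForBucket=True,
)

# C: default retention of 30 days for every new object version.
s3.put_object_lock_configuration(
    Bucket="vm-backup-bucket",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "GOVERNANCE", "Days": 30}},
    },
)

# E: lifecycle rule that expires (deletes) objects after 30 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="vm-backup-bucket",
    LifecycleConfiguration={
        "Rules": [
            {"ID": "expire-after-30-days", "Status": "Enabled",
             "Filter": {"Prefix": ""}, "Expiration": {"Days": 30}}
        ]
    },
)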

123
Q

696 # A solutions architect needs to copy files from an Amazon S3 bucket to an Amazon Elastic File System (Amazon EFS) file system and to another S3 bucket. The files must be copied continuously. New files are added to the original S3 bucket consistently. Copied files should be overwritten only if the source file changes. Which solution will meet these requirements with the LEAST operational overhead?

A. Create an AWS DataSync location for both the destination S3 bucket and the EFS file system. Create a task for the target S3 bucket and EFS file system. Set the transfer mode to transfer only the data that has changed.
B. Create an AWS Lambda function. Mount the file system in the function. Configure an S3 event notification to invoke the function when files are created and changed in Amazon S3. Configure the function to copy files to the file system and the destination S3 bucket.
C. Create an AWS DataSync location for both the destination S3 bucket and the EFS file system. Create a task for the target S3 bucket and EFS file system. Set the transfer mode to transfer all data.
D. Start an Amazon EC2 instance in the same VPC as the file system. Mount the file system. Create a script to routinely synchronize all objects that changed in the source S3 bucket with the destination S3 bucket and the mounted file system.

A

A. Create an AWS DataSync location for both the destination S3 bucket and the EFS file system. Create a task for the target S3 bucket and EFS file system. Set the transfer mode to transfer only the data that has changed.

AWS DataSync is a managed service that automates, accelerates, and simplifies data transfers between storage systems and AWS storage services, including transfers from Amazon S3 to Amazon EFS and to other S3 buckets. By setting the transfer mode to transfer only data that has changed, the solution copies (and overwrites) files only when the source changes, reducing operational overhead.
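
A sketch of one of the DataSync pieces: an S3-to-EFS task that transfers only changed data; all ARNs are placeholders, and a second task would cover the S3-to-S3 copy.

import boto3

datasync = boto3.client("datasync")

source = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::source-bucket",                     # placeholder
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::123456789012:role/DataSyncS3Role"},  # placeholder
)

destination = datasync.create_location_efs(
    EfsFilesystemArn="arn:aws:elasticfilesystem:eu-central-1:123456789012:file-system/fs-0123456789abcdef0",  # placeholder
    Ec2Config={
        "SubnetArn": "arn:aws:ec2:eu-central-1:123456789012:subnet/subnet-0123456789abcdef0",                 # placeholder
        "SecurityGroupArns": ["arn:aws:ec2:eu-central-1:123456789012:security-group/sg-0123456789abcdef0"],   # placeholder
    },
)

# TransferMode=CHANGED copies only files whose content or metadata differ,
# so existing destination files are overwritten only when the source changes.
datasync.create_task(
    SourceLocationArn=source["LocationArn"],
    DestinationLocationArn=destination["LocationArn"],
    Name="s3-to-efs-sync",
    Options={"TransferMode": "CHANGED", "OverwriteMode": "ALWAYS"},
)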

124
Q

697 # A company uses Amazon EC2 instances and stores data on Amazon Elastic Block Store (Amazon EBS) volumes. The company must ensure that all data is encrypted at rest by using AWS Key Management Service (AWS KMS). The company must be able to control the rotation of the encryption keys. Which solution will meet these requirements with the LEAST operational overhead?

A. Create a customer-managed key. Use the key to encrypt EBS volumes.
B. Use an AWS managed key to encrypt EBS volumes. Use the key to set automatic key rotation.
C. Create an external KMS key with imported key material. Use the key to encrypt EBS volumes.
D. Use an AWS-owned key to encrypt EBS volumes.

A

A. Create a customer-managed key. Use the key to encrypt EBS volumes.

By creating a customer-managed key in AWS Key Management Service (AWS KMS), the company gains control over key rotation and key policies. This enables encryption of EBS volumes with a key that the company can rotate as needed, providing flexibility and control with minimal operational overhead.
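
As a sketch, the key can be created with automatic rotation enabled and set as the account's default EBS encryption key; the description is illustrative.

import boto3

kms = boto3.client("kms")
ec2 = boto3.client("ec2")

# Customer managed key with automatic annual rotation enabled.
key = kms.create_key(Description="Customer managed key for EBS volumes")
kms.enable_key_rotation(KeyId=key["KeyMetadata"]["KeyId"])

# Encrypt all new EBS volumes in this Region by default with the new key.
ec2.enable_ebs_encryption_by_default()
ec2.modify_ebs_default_kms_key_id(KmsKeyId=key["KeyMetadata"]["Arn"])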

125
Q

698 # An enterprise needs a solution to enforce encryption of data at rest on Amazon EC2 instances. The solution should automatically identify non-compliant resources and enforce compliance policies based on the findings. Which solution will meet these requirements with the LEAST administrative overhead?

A. Use an IAM policy that allows users to create only encrypted Amazon Elastic Block Store (Amazon EBS) volumes. Use AWS Config and AWS Systems Manager to automate the detection and remediation of unencrypted EBS volumes.
B. Use AWS Key Management Service (AWS KMS) to manage access to encrypted Amazon Elastic Block Store (Amazon EBS) volumes. Use AWS Lambda and Amazon EventBridge to automate the detection and remediation of unencrypted EBS volumes.
C. Use Amazon Macie to discover unencrypted Amazon Elastic Block Store (Amazon EBS) volumes. Use AWS Systems Manager automation rules to automatically encrypt existing and new EBS volumes.
D. Use Amazon Inspector to detect unencrypted Amazon Elastic Block Store (Amazon EBS) volumes. Use AWS Systems Manager automation rules to automatically encrypt existing and new EBS volumes.

A

A. Use an IAM policy that allows users to create only encrypted Amazon Elastic Block Store (Amazon EBS) volumes. Use AWS Config and AWS Systems Manager to automate the detection and remediation of unencrypted EBS volumes.

IAM policies can help control the creation of encrypted EBS volumes, and combining them with AWS Config and Systems Manager provides a comprehensive solution for detection and remediation.

126
Q

699 # A company wants to migrate its on-premises web applications to AWS. The company is located close to the eu-central-1 Region. Because of regulations, the company cannot launch some of its applications in eu-central-1. The company wants to achieve single-digit millisecond latency. What solution will meet these requirements?

A. Deploy the applications to eu-central-1. Extend the enterprise VPC from eu-central-1 to an edge location on Amazon CloudFront.
B. Deploy the applications in AWS local zones by extending the company’s VPC from eu-central-1 to the chosen local zone.
C. Deploy the applications to eu-central-1. Extend the eu-central-1 enterprise VPC to regional edge caches on Amazon CloudFront.
D. Deploy applications to AWS wavelength zones by extending the eu-central-1 enterprise VPC to the chosen wavelength zone.

A

B. Deploy the applications in AWS local zones by extending the company’s VPC from eu-central-1 to the chosen local zone.

AWS Local Zones provide low-latency access to AWS services in specific geographic locations. Deploying applications to local zones can deliver the desired low-latency experience.

127
Q

700 # A company is migrating its on-premises multi-tier application to AWS. The application consists of a single-node MySQL database and a multi-node web tier. The company must minimize changes to the application during the migration. The company wants to improve the resilience of the application after the migration. What combination of steps will meet these requirements? (Choose two.)

A. Migrate the web tier to Amazon EC2 instances in an auto-scaling group behind an application load balancer.
B. Migrate the database to Amazon EC2 instances in an auto-scaling group behind a network load balancer.
C. Migrate the database to an Amazon RDS Multi-AZ deployment.
D. Migrate the web tier to an AWS Lambda function.
E. Migrate the database to an Amazon DynamoDB table.

A

A. Migrate the web tier to Amazon EC2 instances in an auto-scaling group behind an application load balancer.
C. Migrate the database to an Amazon RDS Multi-AZ deployment.

128
Q

701 # A company’s e-commerce website has unpredictable traffic and uses AWS Lambda functions to directly access a private Amazon RDS for PostgreSQL DB instance. The company wants to maintain predictable database performance and ensure that Lambda invocations do not overload the database with too many connections. What should a solutions architect do to meet these requirements?

A. Point the client driver to a custom RDS endpoint. Deploy Lambda functions within a VPC.
B. Point the client driver to an RDS Proxy endpoint. Deploy Lambda functions within a VPC.
C. Point the client driver to a custom RDS endpoint. Deploy Lambda functions outside of a VPC.
D. Point the client driver to an RDS Proxy endpoint. Deploy Lambda functions outside of a VPC.

A

B. Point the client driver to an RDS Proxy endpoint. Deploy Lambda functions within a VPC.

Amazon RDS Proxy pools and shares database connections, so bursts of concurrent Lambda invocations do not overwhelm the PostgreSQL DB instance with too many connections, and the Lambda functions must run inside the VPC to reach the private RDS Proxy endpoint.
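
A rough sketch of creating the proxy and registering the PostgreSQL instance behind it; all ARNs, subnet IDs, and identifiers are placeholders. The Lambda functions then use the proxy endpoint as their database host.

import boto3

rds = boto3.client("rds")

# RDS Proxy pools connections so spiky Lambda concurrency does not exhaust
# the PostgreSQL connection limit.
rds.create_db_proxy(
    DBProxyName="app-postgres-proxy",
    EngineFamily="POSTGRESQL",
    Auth=[{
        "AuthScheme": "SECRETS",
        "SecretArn": "arn:aws:secretsmanager:eu-central-1:123456789012:secret:db-credentials-abc123",  # placeholder
        "IAMAuth": "DISABLED",
    }],
    RoleArn="arn:aws:iam::123456789012:role/RDSProxySecretsRole",            # placeholder
    VpcSubnetIds=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],   # placeholders
)

# Register the existing DB instance as the proxy target.
rds.register_db_proxy_targets(
    DBProxyName="app-postgres-proxy",
    DBInstanceIdentifiers=["app-postgres"],        # placeholder
)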

129
Q

702 # A company is creating an application. The company stores application testing data in multiple on-premises locations. The company needs to connect the on-premises locations to VPCs in an AWS Region in the AWS Cloud. The number of accounts and VPCs will increase over the next year. The network architecture must simplify the management of new connections and must provide the ability to scale. Which solution will meet these requirements with the LEAST administrative overhead?

A. Create a peering connection between the VPCs. Create a VPN connection between VPCs and on-premises locations.
B. Start an Amazon EC2 instance. On your instance, include VPN software that uses a VPN connection to connect all VPCs and on-premises locations.
C. Create a transit gateway. Create VPC attachments for VPC connections. Create VPN attachments for on-premises connections.
D. Create an AWS Direct Connect connection between on-premises locations and a central VPC. Connect the core VPC to other VPCs by using peering connections.

A

C. Create a transit gateway. Create VPC attachments for the VPC connections. Create VPN attachments for the on-premises connections.

AWS Transit Gateway acts as a central hub that scales to thousands of attachments, so new VPCs, accounts, and on-premises connections are added as attachments instead of a growing mesh of peering and VPN connections.
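
A minimal sketch of the hub-and-spoke setup; the IDs are placeholders, and each new VPC or on-premises site becomes another attachment rather than another point-to-point connection.

import boto3

ec2 = boto3.client("ec2")

# Central hub for all VPC and VPN connectivity.
tgw = ec2.create_transit_gateway(Description="Shared connectivity hub")
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# Attach a VPC (repeat for each new VPC/account as the organization grows).
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw_id,
    VpcId="vpc-0123456789abcdef0",                   # placeholder
    SubnetIds=["subnet-0123456789abcdef0"],          # placeholder
)

# Terminate a site-to-site VPN from an on-premises location on the transit gateway.
ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId="cgw-0123456789abcdef0",       # placeholder
    TransitGatewayId=tgw_id,
)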

130
Q

703 # A company using AWS needs a solution to predict the resources needed for manufacturing processes each month. The solution must use historical values that are currently stored in an Amazon S3 bucket. The company has no experience in machine learning (ML) and wants to use a managed service for training and predictions. What combination of steps will meet these requirements? (Choose two.)

A. Deploy an Amazon SageMaker model. Create a SageMaker endpoint for inference.
B. Use Amazon SageMaker to train a model by using the historical data in the S3 bucket.
C. Configure an AWS Lambda function with a function URL that uses Amazon SageMaker endpoints to create predictions based on the inputs.
D. Configure an AWS Lambda function with a function URL that uses an Amazon Forecast predictor to create a prediction based on the inputs.
E. Train an Amazon Forecast predictor by using the historical data in the S3 bucket.

A

A. Deploy an Amazon SageMaker model. Create a SageMaker endpoint for inference.
B. Use Amazon SageMaker to train a model by using the historical data in the S3 bucket.

Amazon SageMaker is a fully managed service that allows you to build, train, and deploy machine learning (ML) models. To predict the resources needed for manufacturing processes based on historical data, you can use SageMaker to train a model.

Once the model is trained, you can deploy it using SageMaker, creating an endpoint for inference. This endpoint can be used to make predictions based on new data.

131
Q

704 # A company manages AWS accounts in AWS Organizations. AWS IAM Identity Center (AWS Single Sign-On) and AWS Control Tower are configured for the accounts. The company wants to manage multiple user permissions across all of the accounts. The permissions will be used by multiple IAM users and must be split between the developer and administrator teams. Each team requires different permissions. The company wants a solution that includes new users who are hired on both teams. Which solution will meet these requirements with the LEAST operational overhead?

A. Create individual users in the IAM Identity Center for each account. Create separate developer and administrator groups in IAM Identity Center. Assign users to the appropriate groups. Create a custom IAM policy for each group to set detailed permissions.
B. Create individual users in the IAM Identity Center for each account. Create separate developer and administrator groups in IAM Identity Center. Assign users to the appropriate groups. Attach AWS managed IAM policies to each user as needed for fine-grained permissions.
C. Create individual users in the IAM Identity Center. Create new developer and administrator groups in the IAM Identity Center. Create new permission sets that include the appropriate IAM policies for each group. Assign the new groups to the appropriate accounts. Assign the new permission sets to the new groups. When new users are hired, add them to the appropriate group.
D. Create individual users in the IAM Identity Center. Create new permission sets that include the appropriate IAM policies for each user. Assign users to the appropriate accounts. Grant additional IAM permissions to users from specific accounts. When new users are hired, add them to the IAM Identity Center and assign them to accounts.

A

C. Create individual users in the IAM Identity Center. Create new developer and administrator groups in the IAM Identity Center. Create new permission sets that include the appropriate IAM policies for each group. Assign the new groups to the appropriate accounts. Assign the new permission sets to the new groups. When new users are hired, add them to the appropriate group.

AWS IAM Identity Center (AWS SSO) provides a centralized place to manage access to multiple AWS accounts. In this scenario, creating separate groups for developers and administrators in IAM Identity Center allows for easier management of permissions. By creating new permission sets that include the appropriate IAM policies for each group, you can assign these permission sets to the respective groups. This approach provides a simplified way to manage permissions for developer and administrator teams. When new users are hired, you can add them to the appropriate group, and they automatically inherit the permissions associated with that group. This reduces operational overhead when onboarding new users, ensuring they get the necessary permissions based on their roles.
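
A sketch of option C with boto3; the instance ARN, account ID, group ID, and managed policy choice are placeholders.

import boto3

sso = boto3.client("sso-admin")
instance_arn = "arn:aws:sso:::instance/ssoins-example1234567890"   # placeholder

# Permission set for the developer team.
ps = sso.create_permission_set(
    Name="DeveloperAccess",
    InstanceArn=instance_arn,
    SessionDuration="PT8H",
)
ps_arn = ps["PermissionSet"]["PermissionSetArn"]

# Attach an AWS managed policy that matches the team's needs (example policy).
sso.attach_managed_policy_to_permission_set(
    InstanceArn=instance_arn,
    PermissionSetArn=ps_arn,
    ManagedPolicyArn="arn:aws:iam::aws:policy/PowerUserAccess",
)

# Assign the Identity Center "Developers" group to a member account with this permission set.
sso.create_account_assignment(
    InstanceArn=instance_arn,
    TargetId="111122223333",                               # placeholder account ID
    TargetType="AWS_ACCOUNT",
    PermissionSetArn=ps_arn,
    PrincipalType="GROUP",
    PrincipalId="a1b2c3d4-5678-90ab-cdef-example11111",    # placeholder group ID
)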

132
Q

705 # A company wants to standardize its encryption strategy for its Amazon Elastic Block Store (Amazon EBS) volumes. The company also wants to minimize the cost and configuration effort required to operate volume encryption verification. What solution will meet these requirements?

A. Write API calls to describe the EBS volumes and to confirm that the EBS volumes are encrypted. Use Amazon EventBridge to schedule an AWS Lambda function to execute API calls.
B. Write API calls to describe the EBS volumes and to confirm that the EBS volumes are encrypted. Run the API calls in an AWS Fargate task.
C. Create an AWS Identity and Access Management (IAM) policy that requires the use of tags on EBS volumes. Use AWS Cost Explorer to display resources that are not tagged correctly. Encrypt untagged resources manually.
D. Create an AWS Config rule for Amazon EBS to evaluate whether a volume is encrypted and to flag the volume if it is not encrypted.

A

D. Create an AWS Config rule for Amazon EBS to evaluate whether a volume is encrypted and to flag the volume if it is not encrypted.

AWS Config allows you to create rules that automatically check whether your AWS resources meet your desired configurations. In this scenario, you want to standardize your Amazon Elastic Block Store (Amazon EBS) volume encryption strategy and minimize the configuration cost and effort to operate volume encryption verification.

By creating an AWS Config rule specifically for Amazon EBS to evaluate whether a volume is encrypted, you can automate the process of verifying and flagging non-compliant resources. This solution is cost-effective because AWS Config provides a managed service for configuration compliance.
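
The check itself can be expressed as a single managed AWS Config rule, for example:

import boto3

config = boto3.client("config")

# Managed rule that flags any attached EBS volume that is not encrypted.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "ebs-volumes-encrypted",
        "Description": "Flags EBS volumes that are not encrypted",
        "Source": {"Owner": "AWS", "SourceIdentifier": "ENCRYPTED_VOLUMES"},
        "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Volume"]},
    }
)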

133
Q

706 # A company regularly uploads GB-sized files to Amazon S3. After the company uploads the files, the company uses a fleet of Amazon EC2 spot instances to transcode the file format. The business needs to scale performance when the business uploads data from the on-premises data center to Amazon S3 and when the business downloads data from Amazon S3 to EC2 instances. What solutions will meet these requirements? (Choose two.)

A. Use the access point to the S3 bucket instead of accessing the S3 bucket directly.
B. Upload the files to multiple S3 buckets.
C. Use S3 multipart uploads.
D. Fetch multiple byte-ranges from an object in parallel.
E. Add a random prefix to each object when uploading files.

A

C. Use S3 multipart uploads.
D. Fetch multiple byte-ranges from an object in parallel.

S3 multipart uploads allow parallel uploading of parts of a large object, improving performance during the upload process. This is particularly beneficial for large files because it allows simultaneous uploads of different parts, improving overall upload performance. Multipart uploads are well suited for scaling performance during uploads.

Fetching multiple byte ranges in parallel is a strategy to improve download performance. By making simultaneous requests for different parts or ranges of an object, you can efficiently use available bandwidth and reduce the time required to download large files. This approach aligns with the goal of scaling performance during data downloads.
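
Both techniques are available directly from the SDK; a sketch follows, with the bucket, key, file name, and byte ranges as placeholders (in practice the ranged GETs would be issued from multiple threads or instances).

import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Upload: multipart transfer with several parallel 64 MiB parts.
upload_config = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,
    multipart_chunksize=64 * 1024 * 1024,
    max_concurrency=8,
)
s3.upload_file("video.mov", "media-ingest-bucket", "raw/video.mov", Config=upload_config)  # placeholders

# Download: fetch separate byte ranges of the same object.
first_half = s3.get_object(Bucket="media-ingest-bucket", Key="raw/video.mov",
                           Range="bytes=0-536870911")            # first 512 MiB
second_half = s3.get_object(Bucket="media-ingest-bucket", Key="raw/video.mov",
                            Range="bytes=536870912-")            # remainder of the object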

134
Q

707 # A solutions architect is designing a shared storage solution for a web application that is deployed across multiple Availability Zones. The web application runs on Amazon EC2 instances that are in an Auto Scaling group. The company plans to make frequent changes to the content. The solution must provide strong consistency so that new content is returned as soon as changes occur. Which solutions meet these requirements? (Choose two.)

A. Use AWS Storage Gateway Volume Gateway Internet Small Computer Systems Interface (iSCSI) block storage that is mounted on the individual EC2 instances.
B. Create an Amazon Elastic File System (Amazon EFS) file system. Mount the EFS file system on the individual EC2 instances.
C. Create an Amazon Elastic Block Store (Amazon EBS) shared volume. Mount the EBS volume on the individual EC2 instances.
D. Use AWS DataSync to perform continuous data synchronization between EC2 hosts in the Auto Scaling group.
E. Create an Amazon S3 bucket to store web content. Set the Cache-Control header metadata to no-cache. Use Amazon CloudFront to deliver content.

A

B. Create an Amazon Elastic File System (Amazon EFS) file system. Mount the EFS file system on the individual EC2 instances.
E. Create an Amazon S3 bucket to store web content. Set the metadata for the Cache-Control header to no-cache. Use Amazon CloudFront to deliver content.

Amazon EFS is a scalable, shared file storage service. It supports NFS and allows multiple EC2 instances to access the same file system at the same time. This option is suitable for achieving strong consistency and sharing content between instances, making it a good choice for web applications deployed across multiple availability zones.

Amazon S3 is a highly available and scalable storage option and provides strong read-after-write consistency. The freshness risk comes from caching in CloudFront rather than from S3 itself; setting the Cache-Control header to no-cache minimizes edge caching so that new content is returned as soon as it changes.

In summary, options B (Amazon EFS) and E (Amazon S3 with CloudFront) are more aligned with the goal of achieving strong consistency and sharing content between multiple instances in an Auto Scaling group. Among them, Amazon EFS is a dedicated file storage service designed for this purpose and is often a suitable choice for shared storage in distributed environments.

135
Q

708 # A company is deploying an application to three AWS regions using an application load balancer. Amazon Route 53 will be used to distribute traffic between these regions. Which Route 53 configuration should a solutions architect use to provide the highest performing experience?

A. Create an A record with a latency policy.
B. Create an A record with a geolocation policy.
C. Create a CNAME record with a failover policy.
D. Create a CNAME record with a geoproximity policy.

A

A. Create an A record with a latency policy.

Latency-based routing returns the record for the Region that provides the lowest network latency for each user, which gives the highest-performing experience across the three Regions.
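
A sketch of one latency-based record; the same change is repeated for each Region's ALB, and the hosted zone IDs, ALB DNS name, and domain are placeholders.

import boto3

route53 = boto3.client("route53")

# Latency-based alias record for the eu-central-1 ALB; create one record per Region
# with the same name and a different SetIdentifier/Region.
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",                 # placeholder hosted zone
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "A",
                "SetIdentifier": "eu-central-1",
                "Region": "eu-central-1",
                "AliasTarget": {
                    "HostedZoneId": "Z215JYRZR1TBD5",   # placeholder: the ALB's hosted zone ID
                    "DNSName": "my-alb-1234567890.eu-central-1.elb.amazonaws.com",  # placeholder
                    "EvaluateTargetHealth": True,
                },
            },
        }]
    },
)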

136
Q

709 # A company has a web application that includes an embedded NoSQL database. The application runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The instances run in an Amazon EC2 Auto Scaling group in a single Availability Zone. A recent increase in traffic requires the application to be highly available and the database to be eventually consistent. Which solution will meet these requirements with the LEAST operational overhead?

A. Replace the ALB with a network load balancer. Keep the NoSQL database integrated with your replication service on EC2 instances.
B. Replace the ALB with a network load balancer. Migrate the integrated NoSQL database to Amazon DynamoDB using the AWS Database Migration Service (AWS DMS).
C. Modify the Auto Scaling group to use EC2 instances in three availability zones. Keep the NoSQL database integrated with your replication service on EC2 instances.
D. Modify the Auto Scaling group to use EC2 instances across three availability zones. Migrate the embedded NoSQL database to Amazon DynamoDB by using the AWS Database Migration Service (AWS DMS).

A

D. Modify the Auto Scaling group to use EC2 instances across three availability zones. Migrate the embedded NoSQL database to Amazon DynamoDB by using the AWS Database Migration Service (AWS DMS).

Option D (modify the Auto Scaling group to use EC2 instances in three availability zones and migrate the embedded NoSQL database to Amazon DynamoDB) provides high availability and scalability and reduces operational overhead by leveraging a managed service like DynamoDB. It aligns well with the requirements of a highly available, eventually consistent system with the least operational overhead.

NOTE: C. Modify the Auto Scaling group to use EC2 instances in three availability zones. Keep the embedded NoSQL database with its replication service on the EC2 instances.
Explanation: Scaling across three availability zones improves availability, but keeping the embedded NoSQL database and its replication service on EC2 instances still leaves significant operational complexity.

137
Q

710 # A company is building a shopping application on AWS. The application offers a catalog that changes once a month and needs to scale with traffic volume. The company wants the lowest possible latency for the application. Each user’s shopping cart data must be highly available. The user’s session data must be available even if the user is offline and logs back in. What should a solutions architect do to ensure that shopping cart data is preserved at all times?

A. Configure an application load balancer to enable the sticky sessions (session affinity) feature to access the catalog in Amazon Aurora.
B. Configure Amazon ElastiCache for Redis to cache catalog data from Amazon DynamoDB and shopping cart data from the user’s session.
C. Configure the Amazon OpenSearch service to cache Amazon DynamoDB catalog data and shopping cart data from the user session.
D. Configure an Amazon EC2 instance with Amazon Elastic Block Store (Amazon EBS) storage for the catalog and shopping cart. Set up automated snapshots.

A

B. Configure Amazon ElastiCache for Redis to cache catalog data from Amazon DynamoDB and shopping cart data from the user’s session.

Amazon ElastiCache for Redis is a managed caching service that can be used to cache frequently accessed data. You can improve performance and help preserve shopping cart data by storing it in Redis, which is an in-memory data store.

138
Q

711 # A company is building a microservices-based application to be deployed to Amazon Elastic Kubernetes Service (Amazon EKS). The microservices will interact with each other. The company wants to ensure that the application is observable to identify performance issues in the future. What solution will meet these requirements?

A. Configure the application to use Amazon ElastiCache to reduce the number of requests that are sent to the microservices.
B. Configure Amazon CloudWatch Container Insights to collect metrics from EKS clusters. Configure AWS X-Ray to trace the requests between microservices.
C. Configure AWS CloudTrail to review API calls. Create an Amazon QuickSight dashboard to observe microservice interactions.
D. Use AWS Trusted Advisor to understand application performance.

A

B. Configure Amazon CloudWatch Container Insights to collect metrics from EKS clusters. Configure AWS X-Ray to trace the requests between microservices.

Amazon CloudWatch Container Insights provides monitoring and observability for containerized applications. It collects metrics from EKS clusters, providing insight into resource utilization and application performance. AWS X-Ray, on the other hand, traces requests as they flow through the different microservices, helping to identify bottlenecks and performance issues.

139
Q

712 # A company needs to provide customers with secure access to their data. The company processes customer data and stores the results in an Amazon S3 bucket. All data is subject to strict regulations and security requirements. Data must be encrypted at rest. Each customer should be able to access their data only from their AWS account. Company employees must not be able to access the data. What solution will meet these requirements?

A. Provide an AWS Certificate Manager (ACM) certificate for each client. Encrypt data on the client side. In the private certificate policy, deny access to the certificate for all principals except an IAM role that the customer provides.
B. Provision a separate AWS Key Management Service (AWS KMS) key for each client. Encrypt data server side. In the S3 bucket policy, deny decryption of data for all principals except a customer-provided IAM role.
C. Provision a separate AWS Key Management Service (AWS KMS) key for each customer. Encrypt data server side. In each KMS key policy, deny decryption of data for all principals except an IAM role that the customer provides.
D. Provide an AWS Certificate Manager (ACM) certificate for each client. Encrypt data on the client side. In the public certificate policy, deny access to the certificate for all principals except for an IAM role that the customer provides.

A

C. Provision a separate AWS Key Management Service (AWS KMS) key for each customer. Encrypt data server side. In each KMS key policy, deny decryption of data for all principals except an IAM role that the customer provides.

With separate KMS keys for each client and access control through KMS key policies, you can achieve the desired level of security. This allows you to explicitly deny decryption for unauthorized IAM roles.

NOTE: B. Provision a separate AWS Key Management Service (AWS KMS) key for each customer. Encrypt data server side. In the S3 bucket policy, deny decryption of data for all principals except a customer-provided IAM role. – This option is less effective because an S3 bucket policy does not control who can use the KMS key itself; access to decrypt must be governed in each KMS key policy, otherwise unauthorized principals could still be granted access elsewhere.

140
Q

713 # A solutions architect creates a VPC that includes two public subnets and two private subnets. A corporate security mandate requires the solutions architect to launch all Amazon EC2 instances on a private subnet. However, when the solutions architect starts an EC2 instance running a web server on ports 80 and 443 on a private subnet, no external Internet traffic can connect to the server. What should the solutions architect do to solve this problem?

A. Connect the EC2 instance to an auto-scaling group on a private subnet. Make sure the website’s DNS record resolves to the auto-scaling group ID.
B. Provision an Internet-facing application load balancer (ALB) in a public subnet. Add the EC2 instance to the target group that is associated with the ALB. Ensure that the DNS record for the website resolves to the ALB.
C. Start a NAT gateway on a private subnet. Update the route table for private subnets to add a default route to the NAT gateway. Attach a public elastic IP address to the NAT gateway.
D. Ensure that the security group that is connected to the EC2 instance allows HTTP traffic on port 80 and HTTPS traffic on port 443. Ensure that the website’s DNS record resolves to the public IP address of the EC2 instance.

A

B. Provision an Internet-facing application load balancer (ALB) in a public subnet. Add the EC2 instance to the target group that is associated with the ALB. Ensure that the DNS record for the website resolves to the ALB.

By placing an ALB on the public subnet and adding the EC2 instance to a target group associated with the ALB, external Internet traffic can reach the EC2 instance on the private subnet through the ALB. This configuration allows proper handling of web traffic.
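
A sketch of the ALB pieces with boto3; the subnet, security group, VPC, and instance IDs are placeholders.

import boto3

elbv2 = boto3.client("elbv2")

# Internet-facing ALB in the public subnets.
alb = elbv2.create_load_balancer(
    Name="web-alb",
    Subnets=["subnet-0aaa1111bbb22222c", "subnet-0ddd3333eee44444f"],   # placeholder public subnets
    SecurityGroups=["sg-0123456789abcdef0"],                            # placeholder
    Scheme="internet-facing",
    Type="application",
)

# Target group pointing at the web server in the private subnet.
tg = elbv2.create_target_group(
    Name="web-servers",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",                                      # placeholder
    TargetType="instance",
)
elbv2.register_targets(
    TargetGroupArn=tg["TargetGroups"][0]["TargetGroupArn"],
    Targets=[{"Id": "i-0123456789abcdef0"}],                            # placeholder instance
)

# Listener that forwards HTTP traffic to the target group (HTTPS would add a certificate).
elbv2.create_listener(
    LoadBalancerArn=alb["LoadBalancers"][0]["LoadBalancerArn"],
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroups"][0]["TargetGroupArn"]}],
)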

141
Q

714 # A company is deploying a new application on Amazon Elastic Kubernetes Service (Amazon EKS) with an AWS Fargate cluster. The application needs a storage solution for data persistence. The solution must be highly available and fault tolerant. The solution must also be shared between multiple application containers. Which solution will meet these requirements with the LEAST operational overhead?

A. Create Amazon Elastic Block Store (Amazon EBS) volumes in the same Availability Zones where EKS worker nodes are placed. Register the volumes to a StorageClass object on an EKS cluster. Use EBS Multi-Attach to share data between containers.
B. Create an Amazon Elastic File System (Amazon EFS) file system. Register the file system in a StorageClass object on an EKS cluster. Use the same file system for all containers.
C. Create an Amazon Elastic Block Store (Amazon EBS) volume. Register the volume to a StorageClass object in an EKS cluster. Use the same volume for all containers.
D. Create Amazon Elastic File System (Amazon EFS) file systems in the same availability zones where the EKS worker nodes are placed. Register the file systems to a StorageClass object in an EKS cluster. Create an AWS Lambda function to synchronize data between file systems.

A

B. Create an Amazon Elastic File System (Amazon EFS) file system. Register the file system in a StorageClass object on an EKS cluster. Use the same file system for all containers.

Amazon EFS is a fully managed file storage service that supports NFS, making it easy to share data between multiple containers in an EKS cluster. It is highly available and fault tolerant by design, and its use as a shared storage solution requires minimal operational overhead.

142
Q

715 # A company has an application that uses Docker containers in its on-premises data center. The application runs on a container host that stores persistent data on a volume on the host. Container instances use stored persistent data. The company wants to move the application to a fully managed service because the company does not want to manage any servers or storage infrastructure. What solution will meet these requirements?

A. Use Amazon Elastic Kubernetes Service (Amazon EKS) with self-managed nodes. Create an Amazon Elastic Block Store (Amazon EBS) volume attached to an Amazon EC2 instance. Use the EBS volume as a persistent volume mounted on the containers.
B. Use Amazon Elastic Container Service (Amazon ECS) with an AWS Fargate launch type. Create an Amazon Elastic File System (Amazon EFS) volume. Add the EFS volume as a persistent storage volume mounted to the containers.
C. Use Amazon Elastic Container Service (Amazon ECS) with an AWS Fargate launch type. Create an Amazon S3 bucket. Assign the S3 bucket as a persistent storage volume mounted to the containers.
D. Use Amazon Elastic Container Service (Amazon ECS) with an Amazon EC2 launch type. Create an Amazon Elastic File System (Amazon EFS) volume. Add the EFS volume as a persistent storage volume mounted to the containers.

A

B. Use Amazon Elastic Container Service (Amazon ECS) with an AWS Fargate launch type. Create an Amazon Elastic File System (Amazon EFS) volume. Add the EFS volume as a persistent storage volume mounted to the containers.

AWS Fargate is a fully managed serverless computing engine for containers, eliminating the need to manage servers. Amazon EFS is a fully managed, scalable file storage service that allows you to seamlessly share data between containers. This option meets the requirement of not managing servers or storage infrastructure.
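
A sketch of a Fargate task definition that mounts the shared EFS file system; the image, file system ID, and role ARN are placeholders.

import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="app-with-shared-data",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="512",
    memory="1024",
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",       # placeholder
    containerDefinitions=[{
        "name": "app",
        "image": "123456789012.dkr.ecr.eu-central-1.amazonaws.com/app:latest",    # placeholder
        "essential": True,
        "mountPoints": [{"sourceVolume": "shared-data", "containerPath": "/data"}],
    }],
    volumes=[{
        "name": "shared-data",
        "efsVolumeConfiguration": {
            "fileSystemId": "fs-0123456789abcdef0",    # placeholder EFS file system
            "rootDirectory": "/",
        },
    }],
)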

143
Q

716 # A gaming company wants to launch a new Internet-facing application in multiple AWS regions. The application will use TCP and UDP protocols for communication. The company needs to provide high availability and minimal latency for global users. What combination of actions should a solutions architect take to meet these requirements? (Choose two.)

A. Create internal network load balancers in front of the application in each region.
B. Create external application load balancers in front of the application in each region.
C. Create an AWS global accelerator to route traffic to load balancers in each region.
D. Configure Amazon Route 53 to use a geolocation routing policy to distribute traffic.
E. Configure Amazon CloudFront to manage traffic and route requests to the application in each region

A

A. Create internal network load balancers in front of the application in each region.
C. Create an AWS global accelerator to route traffic to load balancers in each region.

  • Network Load Balancers (NLBs) can handle both TCP and UDP traffic, making them suitable for distributing game traffic within a region. However, NLBs are specific to a single region and do not provide global routing.
  • AWS Global Accelerator supports TCP and UDP protocols, making it suitable for global routing. You can direct traffic to the best-performing endpoints in different regions, providing high availability and low latency.

Given the requirement for global high availability and support for TCP and UDP traffic, the combination works as follows: AWS Global Accelerator (option C) handles the global routing, directing users over the AWS network to the best-performing Region, while the Network Load Balancers (option A) terminate and distribute the TCP/UDP traffic to the application within each Region.
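
A sketch of the accelerator in front of per-Region NLBs; the listener port, Region, and NLB ARN are placeholders.

import boto3

# The Global Accelerator API is served from the us-west-2 endpoint.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(Name="game-accelerator", IpAddressType="IPV4", Enabled=True)

# UDP listener for game traffic (a TCP listener can be added the same way).
listener = ga.create_listener(
    AcceleratorArn=accelerator["Accelerator"]["AcceleratorArn"],
    Protocol="UDP",
    PortRanges=[{"FromPort": 7000, "ToPort": 7000}],              # example game port
)

# Endpoint group per Region pointing at that Region's Network Load Balancer.
ga.create_endpoint_group(
    ListenerArn=listener["Listener"]["ListenerArn"],
    EndpointGroupRegion="eu-central-1",
    EndpointConfigurations=[{
        "EndpointId": "arn:aws:elasticloadbalancing:eu-central-1:123456789012:loadbalancer/net/game-nlb/0123456789abcdef",  # placeholder NLB ARN
        "Weight": 128,
    }],
)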

144
Q

717 # A city has deployed a web application running on Amazon EC2 instances behind an Application Load Balancer (ALB). Users of the app have reported sporadic performance, which appears to be related to DDoS attacks originating from random IP addresses. The city needs a solution that requires minimal configuration changes and provides an audit trail for DDoS sources. Which solution meets these requirements?

A. Enable an AWS WAF web ACL on the ALB and configure rules to block traffic from unknown sources.
B. Subscribe to Amazon Inspector. Engage the AWS DDoS Response Team (DRT) to integrate mitigation controls into the service.
C. Subscribe to AWS Shield Advanced. Engage the AWS DDoS Response Team (DRT) to integrate mitigation controls into the service.
D. Create an Amazon CloudFront distribution for the application and set the ALB as the origin. Enable an AWS WAF web ACL on your distribution and configure rules to block traffic from unknown sources

A

C. Subscribe to AWS Shield Advanced. Engage the AWS DDoS Response Team (DRT) to integrate mitigation controls into the service.

AWS Shield Advanced is designed specifically for DDoS protection, and the involvement of the AWS DDoS Response Team (DRT) provides additional support to mitigate DDoS attacks. It requires a subscription to AWS Shield Advanced, which includes more advanced DDoS protection features and detailed attack diagnostics.

145
Q

718 # A company copies 200 TB of data from a recent ocean survey to AWS Snowball Edge Storage Optimized devices. The company has a high-performance computing (HPC) cluster that is hosted on AWS to search for oil and gas deposits. A solutions architect must provide the cluster with consistent sub-millisecond latency and high-performance access to data from Snowball Edge Storage Optimized devices. The company is shipping the devices back to AWS. What solution will meet these requirements?

A. Create an Amazon S3 bucket. Import the data into the S3 bucket. Configure an AWS Storage Gateway file gateway to use the S3 bucket. Access the file gateway from the HPC cluster instances.
B. Create an Amazon S3 bucket. Import the data into the S3 bucket. Set up an Amazon FSx file system for Luster and integrate it with the S3 bucket. Access the FSx for Luster file system from the HPC cluster instances.
C. Create an Amazon S3 bucket and an Amazon Elastic File System (Amazon EFS) file system. Import the data into the S3 bucket. Copy the data from the S3 bucket to the EFS file system. Access the EFS file system from the HPC cluster instances.
D. Create an Amazon FSx for Lustre file system. Import the data directly into the FSx for Lustre file system. Access the FSx for Lustre file system from the HPC cluster instances.

A

D. Create an Amazon FSx for Lustre file system. Import the data directly into the FSx for Lustre file system. Access the FSx for Lustre file system from the HPC cluster instances.

  • Amazon FSx for Lustre is a high-performance, fully managed file system optimized for HPC workloads. Importing the data directly into FSx for Lustre provides low-latency, high-throughput access. It is designed for high-performance, scalable access to data, which suits HPC scenarios, and it minimizes the need for additional data transfer steps once the Snowball Edge Storage Optimized devices are returned.
146
Q

719 # A company has NFS servers in an on-premises data center that need to periodically back up small amounts of data to Amazon S3. Which solution meets these requirements and is MOST cost effective?

A. Configure AWS Glue to copy data from on-premises servers to Amazon S3.
B. Set up an AWS DataSync agent on the on-premises servers, and synchronize the data to Amazon S3.
C. Set up an SFTP sync using AWS Transfer for SFTP to sync on-premises data to Amazon S3.
D. Set up an AWS Direct Connect connection between the on-premises data center and a VPC, and copy the data to Amazon S3.

A

B. Set up an AWS DataSync agent on the on-premises servers, and synchronize the data to Amazon S3.

  • AWS DataSync is a fully managed data transfer service that can efficiently and securely transfer data between on-premises storage and Amazon S3. A DataSync agent installed on the on-premises servers performs incremental, parallel transfers, optimizing the use of available bandwidth and minimizing costs.
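A hedged boto3 sketch of the DataSync setup; the hostname, ARNs, and bucket names are illustrative only:

```python
import boto3

datasync = boto3.client("datasync")

# Source: the on-premises NFS export, reached through the DataSync agent (placeholder agent ARN).
nfs_location = datasync.create_location_nfs(
    ServerHostname="nfs.corp.example.com",
    Subdirectory="/exports/backups",
    OnPremConfig={"AgentArns": ["arn:aws:datasync:us-east-1:111122223333:agent/agent-0123456789abcdef0"]},
)

# Destination: the S3 bucket, accessed through an IAM role DataSync can assume.
s3_location = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::corp-nfs-backups",
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::111122223333:role/DataSyncS3Role"},
)

# Each task run copies only changed files, which keeps transfers small and inexpensive.
datasync.create_task(
    SourceLocationArn=nfs_location["LocationArn"],
    DestinationLocationArn=s3_location["LocationArn"],
    Name="nightly-nfs-to-s3",
)
```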
147
Q

720 # An online gaming company must maintain ultra-low latency for its game servers. The game servers run on Amazon EC2 instances. The company needs a solution that can handle millions of UDP Internet traffic requests every second. Which solution will meet these requirements in the MOST cost-effective way?

A. Configure an application load balancer with the protocol and ports required for Internet traffic. Specify EC2 instances as targets.
B. Configure a gateway load balancer for Internet traffic. Specify EC2 instances as targets.
C. Configure a network load balancer with the required protocol and ports for the Internet traffic. Specify the EC2 instances as the targets.
D. Start an identical set of game servers on EC2 instances in separate AWS Regions. Route Internet traffic to both sets of EC2 instances.

A

C. Configure a network load balancer with the required protocol and ports for the Internet traffic. Specify the EC2 instances as the targets.

  • Network Load Balancers (NLB) are designed to provide ultra-low latency and high throughput performance. They operate at the connection or network layer (layer 4) and are well suited for UDP traffic. NLB is optimized to handle millions of requests per second with minimal latency compared to application load balancers (ALBs) or gateway load balancers.
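For illustration, a boto3 sketch that creates a UDP-capable NLB and registers a target group; the subnet, VPC, and port values are assumptions:

```python
import boto3

elbv2 = boto3.client("elbv2")

nlb = elbv2.create_load_balancer(
    Name="game-nlb",
    Type="network",
    Scheme="internet-facing",
    Subnets=["subnet-aaa111", "subnet-bbb222"],  # placeholder public subnets
)

target_group = elbv2.create_target_group(
    Name="game-servers",
    Protocol="UDP",
    Port=3000,
    VpcId="vpc-0123456789abcdef0",
    TargetType="instance",
)

elbv2.create_listener(
    LoadBalancerArn=nlb["LoadBalancers"][0]["LoadBalancerArn"],
    Protocol="UDP",
    Port=3000,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": target_group["TargetGroups"][0]["TargetGroupArn"]}],
)
```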
148
Q

721 # A company runs a three-tier application in a VPC. The database tier uses an Amazon RDS instance for the MySQL database. The company plans to migrate the RDS for MySQL DB instance to an Amazon Aurora PostgreSQL DB cluster. The company needs a solution that replicates the data changes that occur during migration to the new database. What combination of steps will meet these requirements? (Choose two.)

A. Use AWS Database Migration Service (AWS DMS) schema conversion to transform the database objects.
B. Use AWS Database Migration Service (AWS DMS) schema conversion to create an Aurora PostgreSQL read replica on the RDS instance for the MySQL database.
C. Configure an Aurora MySQL read replica for the RDS for MySQL DB instance.
D. Define an AWS Database Migration Service (AWS DMS) task with change data capture (CDC) to migrate the data.
E. Promote the Aurora PostgreSQL read replica to a standalone Aurora PostgreSQL database cluster when the replication lag is zero.

A

A. Use AWS Database Migration Service (AWS DMS) schema conversion to transform the database objects.
D. Define an AWS Database Migration Service (AWS DMS) task with change data capture (CDC) to migrate the data.

To migrate your RDS for MySQL DB instance to an Amazon Aurora PostgreSQL DB cluster and replicate data changes during the migration, you can use the following combination of steps:
A. **Use AWS Database Migration Service (AWS DMS) schema conversion to transform the database objects.** AWS DMS schema conversion converts the MySQL database schema and objects to PostgreSQL-compatible syntax.
D. **Define an AWS Database Migration Service (AWS DMS) task with change data capture (CDC) to migrate the data.** AWS DMS supports change data capture (CDC), which allows the migration service to capture changes that occur on the source database (RDS for MySQL) during the migration process. This ensures that any in-flight changes are replicated to the Aurora PostgreSQL DB cluster.
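A simplified boto3 sketch of a full-load-plus-CDC task; the endpoint and replication instance ARNs are placeholders for resources that would already exist:

```python
import json
import boto3

dms = boto3.client("dms")

dms.create_replication_task(
    ReplicationTaskIdentifier="mysql-to-aurora-postgres",
    SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:SRCMYSQL",
    TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:TGTAURORAPG",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:REPLINSTANCE",
    MigrationType="full-load-and-cdc",  # full load plus ongoing change data capture
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)
```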

149
Q

722 # A company hosts a database running on an Amazon RDS instance that is deployed across multiple Availability Zones. The company periodically runs a script against the database to report new entries added to the database. The script that runs against the database negatively affects the performance of a critical application. The company needs to improve application performance with minimal costs. Which solution will meet these requirements with the LEAST operational overhead?

A. Add functionality to the script to identify the instance that has the fewest active connections. Configure the script to read from that instance to report total new entries.
B. Create a read replica of the database. Configure the script to query only the read replica to report total new entries.
C. Instruct the development team to manually export the day’s new entries into the database at the end of each day.
D. Use Amazon ElastiCache to cache common queries that the script runs against the database.

A

B. Create a read replica of the database. Configure the script to query only the read replica to report total new entries.

  • Creating a read replica offloads the read-intensive reporting script from the primary database instance, which minimizes the impact on the critical application. Read replicas are kept up to date through asynchronous replication, providing near real-time data for reporting without affecting the performance of the primary instance.
150
Q

723 # A company is using an application load balancer (ALB) to present its application to the Internet. The company finds abnormal traffic access patterns throughout the application. A solutions architect needs to improve infrastructure visibility to help the business better understand these anomalies. What is the most operationally efficient solution that meets these requirements?

A. Create a table in Amazon Athena logs for AWS CloudTrail. Create a query for the relevant information.
B. Enable ALB access logging on Amazon S3. Create a table in Amazon Athena and query the logs.
C. Enable ALB access logging in Amazon S3. Open each file in a text editor and look at each line for the relevant information.
D. Use Amazon EMR on a dedicated Amazon EC2 instance to directly query the ALB and acquire traffic access log information.

A

B. Enable ALB access logging on Amazon S3. Create a table in Amazon Athena and query the logs.

  • Enabling ALB access logging to Amazon S3 allows you to capture detailed logs of incoming requests to ALB. By creating a table in Amazon Athena and querying these logs, you gain the ability to analyze and understand traffic patterns, identify anomalies, and perform queries efficiently. Athena provides an interactive, serverless query service that allows you to analyze data directly in Amazon S3 without needing to manage the infrastructure.
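A minimal sketch of querying the logs with Athena, assuming an alb_logs table has already been defined over the access-log prefix (the database, table, and result bucket names are hypothetical):

```python
import boto3

athena = boto3.client("athena")

# Top clients by request count, which helps surface the anomalous access patterns.
query = """
SELECT client_ip, COUNT(*) AS request_count
FROM alb_logs
GROUP BY client_ip
ORDER BY request_count DESC
LIMIT 20
"""

athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "alb_log_db"},                      # hypothetical database
    ResultConfiguration={"OutputLocation": "s3://athena-results-bucket/alb/"},  # placeholder bucket
)
```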
151
Q

724 # A company wants to use NAT gateways in its AWS environment. The company's Amazon EC2 instances in private subnets must be able to connect to the public Internet through the NAT gateways. Which solution will meet these requirements?

A. Create public NAT gateways on the same private subnets as the EC2 instances.
B. Create private NAT gateways on the same private subnets as the EC2 instances.
C. Create public NAT gateways in public subnets in the same VPCs as the EC2 instances.
D. Create private NAT gateways in public subnets in the same VPCs as the EC2 instances.

A

C. Create public NAT gateways in public subnets in the same VPCs as the EC2 instances.
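A public NAT gateway must live in a public subnet (one with a route to an internet gateway), and the private subnets reach it through their route tables. A minimal boto3 sketch, with placeholder subnet and route table IDs:

```python
import boto3

ec2 = boto3.client("ec2")

# Allocate an Elastic IP and create the public NAT gateway in a PUBLIC subnet.
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId="subnet-public-placeholder",
    AllocationId=eip["AllocationId"],
    ConnectivityType="public",
)
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat["NatGateway"]["NatGatewayId"]])

# The PRIVATE subnet's route table sends internet-bound traffic to the NAT gateway.
ec2.create_route(
    RouteTableId="rtb-private-placeholder",
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat["NatGateway"]["NatGatewayId"],
)
```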

152
Q

725 # A company has an organization in AWS Organizations. The company runs Amazon EC2 instances in four AWS accounts in the root organizational unit (OU). There are three non-production accounts and one production account. The company wants to prohibit users from launching EC2 instances of a certain size in the non-production accounts. The company has created a service control policy (SCP) to deny access to launch instances that use the prohibited types. Which solutions for implementing the SCP will meet these requirements? (Choose two.)

A. Attach the SCP to the organization's root OU.
B. Attach the SCP to the three non-production Organizations member accounts.
C. Attach the SCP to the organization's management account.
D. Create an OU for the production account. Attach the SCP to the OU. Move the production member account to the new OU.
E. Create an OU for the required accounts. Attach the SCP to the OU. Move the non-production member accounts to the new OU.

A

B. Attach the SCP to the three non-production Organizations member accounts.
- Attaching the SCP directly to non-production member accounts ensures that the policy applies specifically to those accounts. This way, the policy denies launching EC2 instances of the prohibited size on non-production accounts.
E. Create an OU for the required accounts. Attach the SCP to the OU. Move non-production member accounts to the new OU.
- By creating a separate OU for non-production accounts and attaching the SCP to that OU, you can isolate the application of the policy to only non-production accounts. Moving non-production member accounts to the new OU associates them with the SCP.

In summary, options B and E prohibit launching EC2 instances of the prohibited size in the non-production accounts while still allowing them in the production account. Option A would also restrict the production account because an SCP on the root OU applies to every account beneath it, option C has no effect because SCPs do not apply to the management account, and option D targets the production account rather than the non-production accounts.
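For illustration, an SCP like the following could be created and attached to the non-production OU; the instance types and OU ID are placeholder assumptions:

```python
import json
import boto3

org = boto3.client("organizations")

# Deny launching the prohibited instance sizes; the type list is illustrative only.
scp_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "ec2:RunInstances",
        "Resource": "arn:aws:ec2:*:*:instance/*",
        "Condition": {"StringEquals": {"ec2:InstanceType": ["p4d.24xlarge", "x2iedn.32xlarge"]}},
    }],
}

policy = org.create_policy(
    Content=json.dumps(scp_document),
    Description="Block prohibited EC2 instance sizes",
    Name="DenyLargeInstanceTypes",
    Type="SERVICE_CONTROL_POLICY",
)

# Attach to the non-production OU (option E); the OU ID is a placeholder.
org.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"], TargetId="ou-abcd-11111111")
```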

153
Q

726 # A company website hosted on Amazon EC2 instances processes classified data stored in Amazon S3. Due to security concerns, the company requires a private, secure connection between its EC2 resources and Amazon S3. Which solution meets these requirements?

A. Set up S3 bucket policies to allow access from a VPC endpoint.
B. Configure an IAM policy to grant read and write access to the S3 bucket.
C. Configure a NAT gateway to access resources outside the private subnet.
D. Configure an access key ID and secret access key to access the S3 bucket.

A

A. Set up S3 bucket policies to allow access from a VPC endpoint.

  • This option involves creating a VPC endpoint for Amazon S3 in your Amazon VPC. A VPC endpoint allows you to privately connect your VPC to S3 without going over the public Internet. By configuring S3 bucket policies to allow access from the VPC endpoint, you ensure that EC2 instances within your VPC can securely access S3 without requiring public Internet access. This is a more secure and recommended approach to handling sensitive data.
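A hedged boto3 sketch of the pattern: create a gateway endpoint for S3 and restrict the bucket to that endpoint (the VPC, route table, and bucket names are placeholders):

```python
import json
import boto3

ec2 = boto3.client("ec2")
s3 = boto3.client("s3")

# Gateway endpoint for S3 in the application VPC.
endpoint = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-private-placeholder"],
)
endpoint_id = endpoint["VpcEndpoint"]["VpcEndpointId"]

# Bucket policy that denies any access not coming through the VPC endpoint.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowOnlyViaVpcEndpoint",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": ["arn:aws:s3:::classified-data-bucket", "arn:aws:s3:::classified-data-bucket/*"],
        "Condition": {"StringNotEquals": {"aws:SourceVpce": endpoint_id}},
    }],
}
s3.put_bucket_policy(Bucket="classified-data-bucket", Policy=json.dumps(policy))
```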
154
Q

727 # A company is designing a web application on AWS. The application will use a VPN connection between the company’s existing data centers and the company’s VPCs. The company uses Amazon Route 53 as its DNS service. The application must use private DNS records to communicate with on-premises services from a VPC. Which solution will meet these requirements in the MOST secure way?

A. Create a Route 53 Resolver outbound endpoint. Create a Resolver rule. Associate the Resolver rule with the VPC.
B. Create a Route 53 Resolver inbound endpoint. Create a Resolver rule. Associate the Resolver rule with the VPC.
C. Create a Route 53 private hosted zone. Associate the private hosted zone with the VPC.
D. Create a Route 53 public hosted zone. Create a record for each service to allow service communication.

A

C. Create a Route 53 private hosted zone. Associate the private hosted zone with the VPC.

  • This option involves creating a Route 53 private hosted zone, which allows you to define custom DNS records for private communication within your VPC. Associating the private hosted zone with the VPC ensures that DNS records are used to resolve domain names within the specified VPC. This approach is secure because it allows you to control DNS records for private communication.
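A minimal boto3 sketch of creating the private hosted zone and a record for an on-premises service; the zone name, VPC ID, and IP address are assumptions:

```python
import uuid
import boto3

route53 = boto3.client("route53")

# Private hosted zone associated with the application VPC.
zone = route53.create_hosted_zone(
    Name="corp.internal",
    CallerReference=str(uuid.uuid4()),
    VPC={"VPCRegion": "us-east-1", "VPCId": "vpc-0123456789abcdef0"},
    HostedZoneConfig={"Comment": "Private records for on-premises services", "PrivateZone": True},
)

# An A record pointing at an on-premises service reachable over the VPN.
route53.change_resource_record_sets(
    HostedZoneId=zone["HostedZone"]["Id"],
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "erp.corp.internal",
            "Type": "A",
            "TTL": 300,
            "ResourceRecords": [{"Value": "10.10.20.30"}],
        },
    }]},
)
```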
155
Q

728 # A company is running a photo hosting service in the us-east-1 region. The service allows users from various countries to upload and view photos. Some photos are viewed a lot for months, and others are viewed for less than a week. The application allows uploads of up to 20 MB for each photo. The service uses photo metadata to determine which photos to show to each user. Which solution provides access to the right user in the MOST cost-effective way?

A. Save photos to Amazon DynamoDB. Turn on DynamoDB Accelerator (DAX) to cache frequently viewed items.
B. Store the photos in the Amazon S3 Intelligent-Tiering storage class. Store the photo metadata and its S3 location in DynamoDB.
C. Store photos in the standard Amazon S3 storage class. Configure an S3 lifecycle policy to move photos older than 30 days to the S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Use object tags to keep track of metadata.
D. Save photos to the Amazon S3 Glacier storage class. Configure an S3 lifecycle policy to move photos older than 30 days to the S3 Glacier Deep Archive storage class. Store the photo metadata and its S3 location in Amazon OpenSearch Service.

A

B. Store the photos in the Amazon S3 Intelligent-Tiering storage class. Store the photo metadata and its S3 location in DynamoDB.

  • Amazon S3 Intelligent-Tiering automatically moves objects between access tiers based on changing access patterns. It is designed for objects with unknown or changing access patterns, which suits a photo hosting service where some photos are viewed heavily for months and others only for a short period.
  • Storing the photo metadata and the S3 location in DynamoDB provides a fast and scalable way to query and retrieve information about photos. DynamoDB is well suited for handling metadata and providing fast lookups based on it.
156
Q

729 # A company runs a highly available web application on Amazon EC2 instances behind an application load balancer. The company uses Amazon CloudWatch metrics. As traffic to the web application increases, some EC2 instances become overloaded with many pending requests. CloudWatch metrics show that the number of requests processed and the time to receive responses for some EC2 instances are higher compared to other EC2 instances. The company does not want new requests to be forwarded to EC2 instances that are already overloaded. What solution will meet these requirements?

A. Use the round robin routing algorithm based on the RequestCountPerTarget and ActiveConnectionCount CloudWatch metrics.
B. Use the least outstanding requests algorithm based on the RequestCountPerTarget and ActiveConnectionCount CloudWatch metrics.
C. Use the round robin routing algorithm based on the RequestCount and TargetResponseTime CloudWatch metrics.
D. Use the least outstanding requests algorithm based on the RequestCount and TargetResponseTime CloudWatch metrics.

A

D. Use the least outstanding requests algorithm based on the RequestCount and TargetResponseTime CloudWatch metrics.

  • The “least outstanding requests” algorithm, also known as the least outstanding requests balancing algorithm, considers the number of outstanding requests and the target response time. Its goal is to distribute new requests to the instances that have fewer outstanding requests and optimal response time.
  • In this scenario, using RequestCount (to measure the number of requests) and TargetResponseTime (to evaluate the responsiveness of the instances) CloudWatch metrics together allow for a more informed decision about routing traffic to the instances that are less loaded.
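The routing algorithm is a target group attribute on the ALB. A one-call boto3 sketch, with a placeholder target group ARN:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Switch the ALB target group from round robin to least outstanding requests.
elbv2.modify_target_group_attributes(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web-app/abc123",
    Attributes=[{"Key": "load_balancing.algorithm.type", "Value": "least_outstanding_requests"}],
)
```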
157
Q

730 # A company uses Amazon EC2, AWS Fargate, and AWS Lambda to run multiple workloads in the company’s AWS account. The company wants to make full use of its Compute Savings Plans. The company wants to be notified when the coverage of the Compute Savings Plans decreases. Which solution will meet these requirements with the GREATEST operational efficiency?

A. Create a daily budget for savings plans using AWS Budgets. Configure the budget with a coverage threshold to send notifications to appropriate email recipients.
B. Create a Lambda function that runs a coverage report against the Savings Plans. Use Amazon Simple Email Service (Amazon SES) to email the report to the appropriate email recipients.
C. Create an AWS Budgets report for the savings plans budget. Set the frequency to daily.
D. Create a savings plan alert subscription. Enable all notification options. Enter an email address to receive notifications.

A

D. Create a savings plan alert subscription. Enable all notification options. Enter an email address to receive notifications.

  • Savings plan alert subscriptions allow you to set up notifications based on various thresholds, including coverage thresholds. By enabling all notification options, you can receive timely alerts through different channels when coverage decreases.
158
Q

731 # A company runs a real-time data ingestion solution on AWS. The solution consists of the latest version of Amazon Managed Streaming for Apache Kafka (Amazon MSK). The solution is deployed in a VPC on private subnets in three availability zones. A solutions architect needs to redesign the data ingestion solution to make it publicly available over the Internet. Data in transit must also be encrypted. Which solution will meet these requirements with the GREATEST operational efficiency?

A. Configure public subnets in the existing VPC. Deploy an MSK cluster in the public subnets. Update the MSK cluster security settings to enable mutual TLS authentication.
B. Create a new VPC that has public subnets. Deploy an MSK cluster on the public subnets. Update the MSK cluster security configuration to enable TLS mutual authentication.
C. Deploy an application load balancer (ALB) that uses private subnets. Configure an ALB security group inbound rule to allow incoming traffic from the VPC CIDR block for the HTTPS protocol.
D. Deploy a network load balancer (NLB) that uses private subnets. Configure an NLB listener for HTTPS communication over the Internet.

A

A. Configure public subnets in the existing VPC. Deploy an MSK cluster in the public subnets. Update the MSK cluster security settings to enable mutual TLS authentication.

  • This option takes advantage of the existing VPC, minimizing the need to create a new VPC. By deploying the MSK cluster on public subnets and enabling mutual TLS authentication, you can ensure that the MSK cluster is publicly accessible while protecting data in transit.
159
Q

732 # A company wants to migrate an on-premises legacy application to AWS. The application ingests customer order files from a local enterprise resource planning (ERP) system. The application then uploads the files to an SFTP server. The application uses a scheduled job that checks the order files every hour. The company already has an AWS account that has connectivity to the local network. The new application on AWS must support integration with the existing ERP system. The new application must be secure and resilient and must use the SFTP protocol to process orders from the ERP system immediately. What solution will meet these requirements?

A. Create an Internet-facing AWS Transfer Family SFTP server in two availability zones. Use Amazon S3 storage. Create an AWS Lambda function to process the order files. Use S3 event notifications to send s3:ObjectCreated:* events to the Lambda function.
B. Create an Internet-facing AWS Transfer Family SFTP server in an Availability Zone. Use Amazon Elastic File System (Amazon EFS) storage. Create an AWS Lambda function to process the order files. Use a workflow managed by the transfer family to invoke the Lambda function.
C. Create an internal AWS Transfer Family SFTP server in two availability zones. Use Amazon Elastic File System (Amazon EFS) storage. Create an AWS Step Functions state machine to process order files. Use Amazon EventBridge Scheduler to invoke the state machine and periodically check Amazon EFS order files.
D. Create an AWS Transfer Family SFTP internal server in two availability zones. Use Amazon S3 storage. Create an AWS Lambda function to process order files. Use a transfer family managed workflow to invoke the Lambda function.

A

D. Create an AWS Transfer Family SFTP internal server in two availability zones. Use Amazon S3 storage. Create an AWS Lambda function to process order files. Use a transfer family managed workflow to invoke the Lambda function.

  • Uses an internal SFTP server.
  • Amazon S3 provides durable, scalable storage.
  • AWS Lambda function processes order files efficiently.
  • Lambda-managed workflow allows for streamlined processing.

In summary, taking the clarified requirements into account, option D stands out as the most suitable choice: an internal SFTP server in two availability zones with Amazon S3 storage and an AWS Lambda function, invoked through a Transfer Family managed workflow, for efficient order file processing.
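A hedged boto3 sketch of such a server; the VPC details, workflow ID, and role ARN are placeholders for resources that would be created separately:

```python
import boto3

transfer = boto3.client("transfer")

# SFTP server backed by Amazon S3, reachable only from the corporate network over a VPC endpoint.
transfer.create_server(
    Protocols=["SFTP"],
    Domain="S3",
    IdentityProviderType="SERVICE_MANAGED",
    EndpointType="VPC",
    EndpointDetails={
        "VpcId": "vpc-0123456789abcdef0",
        "SubnetIds": ["subnet-az1-placeholder", "subnet-az2-placeholder"],
        "SecurityGroupIds": ["sg-0123456789abcdef0"],
    },
    # The managed workflow invokes the Lambda-based processing as soon as a file upload completes.
    WorkflowDetails={
        "OnUpload": [{
            "WorkflowId": "w-0123456789abcdef0",
            "ExecutionRole": "arn:aws:iam::111122223333:role/TransferWorkflowRole",
        }]
    },
)
```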

160
Q

733 # An enterprise’s applications use Apache Hadoop and Apache Spark to process data on premises. The existing infrastructure is not scalable and is complex to manage. A solutions architect must design a scalable solution that reduces operational complexity. The solution must keep data processing on-premises. What solution will meet these requirements?

A. Use AWS Site-to-Site VPN to access on-premises Hadoop Distributed File System (HDFS) data and application. Use an Amazon EMR cluster to process the data.
B. Use AWS DataSync to connect to the local Hadoop Distributed File System (HDFS) cluster. Create an Amazon EMR cluster to process the data.
C. Migrate the Apache Hadoop application and the Apache Spark application to Amazon EMR clusters on AWS Outposts. Use the EMR clusters to process the data.
D. Use an AWS Snowball device to migrate data to an Amazon S3 bucket. Create an Amazon EMR cluster to process the data.

A

C. Migrate the Apache Hadoop application and the Apache Spark application to Amazon EMR clusters on AWS Outposts. Use the EMR clusters to process the data.

  • Use AWS Outposts for a local extension of AWS infrastructure.
  • EMR clusters for scalable and managed data processing.
161
Q

734 # A company is migrating a large amount of data from on-premises storage to AWS. Windows, Mac, and Linux-based Amazon EC2 instances in the same AWS Region will access the data using the SMB and NFS storage protocols. The company will access some of the data on a routine basis. The company will access the remaining data infrequently. The company needs to design a solution to host the data. Which solution will meet these requirements with the LEAST operational overhead?

A. Create an Amazon Elastic File System (Amazon EFS) volume that uses EFS Intelligent-Tiering. Use AWS DataSync to migrate data to the EFS volume.
B. Create an Amazon FSx for NetApp ONTAP file system with a root volume that uses the auto tiering policy. Migrate the data to the FSx for ONTAP volume.
C. Create an Amazon S3 bucket that uses S3 Intelligent-Tiering. Migrate the data to the S3 bucket by using an AWS Storage Gateway Amazon S3 File Gateway.
D. Create an Amazon FSx file system for OpenZFS. Migrate the data to the new volume.

A

C. Create an Amazon S3 bucket that uses S3 Intelligent-Tiering. Migrate the data to the S3 bucket by using an AWS Storage Gateway Amazon S3 File Gateway.

  • S3 Intelligent-Tiering automatically moves objects between access tiers based on changing access patterns.
  • The Amazon S3 File Gateway exposes the bucket over both SMB and NFS, so Windows, Mac, and Linux clients can all access the data.

https://aws.amazon.com/s3/faqs/
- The total volume of data and number of objects you can store in Amazon S3 are unlimited.

162
Q

735 # A manufacturing company runs its reporting application on AWS. The application generates each report in about 20 minutes. The application is built as a monolith running on a single Amazon EC2 instance. The application requires frequent updates of its tightly coupled modules. The application becomes complex to maintain as the company adds new features. Every time the company patches a software module, the application experiences downtime. Report generation must be restarted from the beginning after any interruption. The company wants to redesign the application so that the application can be flexible, scalable and improve gradually. The company wants to minimize application downtime. What solution will meet these requirements?

A. Run the application in AWS Lambda as a single function with maximum provisioned concurrency.
B. Run the application on Amazon EC2 Spot Instances as microservices with a default Spot Fleet allocation strategy.
C. Run the application on Amazon Elastic Container Service (Amazon ECS) as microservices with service auto-scaling.
D. Run the application on AWS Elastic Beanstalk as a single application environment with an all-at-once deployment strategy.

A

C. Run the application on Amazon Elastic Container Service (Amazon ECS) as microservices with service auto-scaling.

  • ECS allows running microservices with automatic service scaling based on demand.
  • Offers flexibility and scalability.
  • Option C (Amazon ECS with microservices and service auto-scaling) appears to better align with the company’s requirements for flexibility, scalability, and minimal downtime.
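As a sketch, ECS service auto scaling is configured through Application Auto Scaling; the cluster and service names and the thresholds below are assumptions:

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

# Scale the reporting microservice between 2 and 20 tasks based on average CPU.
autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId="service/reporting-cluster/report-generator",
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=20,
)

autoscaling.put_scaling_policy(
    PolicyName="report-generator-cpu-target",
    ServiceNamespace="ecs",
    ResourceId="service/reporting-cluster/report-generator",
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ECSServiceAverageCPUUtilization"},
        "ScaleOutCooldown": 60,
        "ScaleInCooldown": 120,
    },
)
```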
163
Q

736 # A company wants to redesign a large-scale web application to a serverless microservices architecture. The application uses Amazon EC2 instances and is written in Python. The company selected a web application component to test as a microservice. The component supports hundreds of requests every second. The company wants to build and test the microservice on an AWS solution that supports Python. The solution should also scale automatically and require minimal infrastructure and operational support. What solution will meet these requirements?

A. Use an auto-scaling Spot fleet of EC2 instances running the latest Amazon Linux operating system.
B. Use an AWS Elastic Beanstalk web server environment that has high availability configured.
C. Use Amazon Elastic Kubernetes Service (Amazon EKS). Start auto-scaling groups of self-managed EC2 instances.
D. Use an AWS Lambda function that runs custom-developed code.

A

D. Use an AWS Lambda function that runs custom-developed code.

  • Serverless architecture, without the need to manage the infrastructure.
  • Automatic scaling based on demand.
  • Minimal operational support.
  • Option D (AWS Lambda): Given the company’s requirements for a serverless microservices architecture with minimal infrastructure and operational support, AWS Lambda is a strong contender. It aligns well with the principles of serverless computing, automatically scaling based on demand and eliminating the need to manage the underlying infrastructure. It is important to note that the final choice could also depend on the specific application requirements and development preferences.
164
Q

737 # A company has an AWS Direct Connect connection from its on-premises location to an AWS account. The AWS account has 30 different VPCs in the same AWS Region. VPCs use virtual private interfaces (VIFs). Each VPC has a CIDR block that does not overlap with other networks under the company’s control. The company wants to centrally manage the network architecture while allowing each VPC to communicate with all other VPCs and on-premises networks. Which solution will meet these requirements with the LEAST amount of operational overhead?
A. Create a transit gateway and associate the Direct Connect connection with a new transit VIF. Turn on the transit gateway’s route propagation feature.
B. Create a Direct Connect gateway. Recreate the private VIFs to use the new gateway. Associate each VPC by creating new virtual private gateways.
C. Create a transit VPC. Connect the Direct Connect connection to the transit VPC. Create a peering connection between all other VPCs in the Region. Update the route tables.
D. Create AWS Site-to-Site VPN connections from on-premises to each VPC. Make sure both VPN tunnels are enabled for each connection. Turn on the route propagation feature.

A

A. Create a transit gateway and associate the Direct Connect connection with a new transit VIF. Turn on the transit gateway’s route propagation feature.

  • Centralized management with a transit gateway.
  • Simplifies routing by using route propagation.
  • Option A (Transit Gateway): This option provides centralized management using a transit gateway, simplifies routing with route propagation, and avoids the need to recreate VIFs. It is a scalable and efficient solution for connecting multiple VPCs and local networks.
165
Q

738 # A company has applications running on Amazon EC2 instances. EC2 instances connect to Amazon RDS databases using an IAM role that has policies associated with it. The company wants to use AWS Systems Manager to patch EC2 instances without disrupting running applications. What solution will meet these requirements?

A. Create a new IAM role. Attach the AmazonSSMManagedInstanceCore policy to the new IAM role. Attach the new IAM role to the EC2 instances and the existing IAM role.
B. Create an IAM user. Attach the AmazonSSMManagedInstanceCore policy to the IAM user. Configure Systems Manager to use the IAM user to manage the EC2 instances.
C. Enable default host configuration management in Systems Manager to manage EC2 instances.
D. Delete the existing policies from the existing IAM role. Add the AmazonSSMManagedInstanceCore policy to the existing IAM role.

A

C. Enable default host configuration management in Systems Manager to manage EC2 instances.

  • This option, as clarified, seems to be a direct and efficient solution. It eliminates the need for manual changes to IAM roles and aligns with the requirement for no application disruption.
  • Default Host Management Configuration creates and applies a default IAM role to ensure that Systems Manager has permissions to manage all instances in the Region and perform automated patch scans using Patch Manager.

NOTE: Only one role can be assigned to an Amazon EC2 instance at a time, and all applications on the instance share the same role and permissions. (https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html#) Suggested: Instead create 2 managed policies and attach them to the same IAM Role. Attach that IAM Role to the EC2 instance.

166
Q

739 # A company runs container applications by using Amazon Elastic Kubernetes Service (Amazon EKS) and the Kubernetes Horizontal Pod Autoscaler. The workload is not consistent throughout the day. A solutions architect notices that the number of nodes does not automatically scale out when the existing nodes have reached maximum capacity in the cluster, which causes performance issues. Which solution will resolve this issue with the LEAST administrative overhead?

A. Scale out the nodes by tracking the memory usage.
B. Use the Kubernetes Cluster Autoscaler to manage the number of nodes in the cluster.
C. Use an AWS Lambda function to resize the EKS cluster automatically.
D. Use an Amazon EC2 Auto Scaling group to distribute the workload.

A

B. Use the Kubernetes Cluster Autoscaler to manage the number of nodes in the cluster.

  • This option is more aligned with Kubernetes best practices. The Kubernetes Cluster Autoscaler automatically scales the cluster size based on the resource requirements of the pods. It is designed to handle the dynamic nature of containerized workloads.
  • Option B, using Kubernetes Cluster Autoscaler, is probably the best option to solve the problem with the least administrative overhead. It aligns well with the Kubernetes ecosystem and provides the automation needed to scale the cluster based on the pod’s resource requirements.
167
Q

740 # A company maintains around 300 TB in Amazon S3 Standard storage month after month. The S3 objects are each typically around 50 GB in size and are frequently replaced with multipart uploads by the company's global application. The number and size of S3 objects remain constant, but the company’s S3 storage costs increase each month. How should a solutions architect reduce costs in this situation?

A. Switch from multipart uploads to Amazon S3 Transfer Acceleration.
B. Enable an S3 lifecycle policy that removes incomplete multipart uploads.
C. Configure S3 inventory to prevent objects from being archived too quickly.
D. Configure Amazon CloudFront to reduce the number of objects stored in Amazon S3.

A

B. Enable an S3 lifecycle policy that removes incomplete multipart uploads.

  • Incomplete multi-part uploads may consume additional storage. Enabling a lifecycle policy to remove incomplete multi-part uploads can help reduce storage costs by cleaning up unnecessary data.
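A minimal boto3 sketch of such a lifecycle rule; the bucket name and retention window are assumptions:

```python
import boto3

s3 = boto3.client("s3")

# Clean up parts of multipart uploads that were never completed.
s3.put_bucket_lifecycle_configuration(
    Bucket="large-object-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "abort-incomplete-multipart-uploads",
            "Status": "Enabled",
            "Filter": {},  # apply to the whole bucket
            "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
        }]
    },
)
```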
168
Q

741 # A company has implemented a multiplayer game for mobile devices. The game requires tracking the live location of players based on latitude and longitude. The game data store must support fast updates and location recovery. The game uses an Amazon RDS for PostgreSQL DB instance with read replicas to store location data. During periods of peak usage, the database cannot maintain the performance necessary to read and write updates. The user base of the game is increasing rapidly. What should a solutions architect do to improve data tier performance?

A. Take a snapshot of the existing database instance. Restore the snapshot with Multi-AZ enabled.
B. Migrate from Amazon RDS to the Amazon OpenSearch service with OpenSearch Dashboards.
C. Deploy Amazon DynamoDB Accelerator (DAX) against the existing DB instance. Modify the game to use DAX.
D. Deploy an Amazon ElastiCache for Redis cluster in front of the existing database instance. Modify the game to use Redis.

A

D. Deploy an Amazon ElastiCache for Redis cluster in front of the existing database instance. Modify the game to use Redis.

  • Amazon ElastiCache for Redis is an in-memory data store that can absorb the frequent location reads and writes in front of the RDS for PostgreSQL instance, relieving the database during peak usage. Redis also provides geospatial data types and commands that suit latitude and longitude tracking.
169
Q

742 # A company stores critical data in Amazon DynamoDB tables in the company’s AWS account. An IT administrator accidentally deleted a DynamoDB table. The deletion caused significant data loss and disrupted the company’s operations. The company wants to prevent these types of disruptions in the future. Which solution will meet this requirement with the LEAST operational overhead?

A. Set up a trail in AWS CloudTrail. Create an Amazon EventBridge rule to delete actions. Create an AWS Lambda function to automatically restore deleted DynamoDB tables.
B. Create a backup and restore plan for the DynamoDB tables. Recover DynamoDB tables manually.
C. Configure deletion protection on the DynamoDB tables.
D. Enable point-in-time recovery on DynamoDB tables.

A

C. Configure deletion protection on the DynamoDB tables.

  • Enabling deletion protection on a DynamoDB table prevents the table from being deleted accidentally. This is a simple and effective way to mitigate the risk of accidental deletions.
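Enabling it is a single API call; a boto3 sketch with a placeholder table name:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Once enabled, DeleteTable calls fail until deletion protection is turned off again.
dynamodb.update_table(
    TableName="critical-data",
    DeletionProtectionEnabled=True,
)
```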
170
Q

743 # A company has an on-premises data center that is running out of storage capacity. The company wants to migrate its storage infrastructure to AWS while minimizing bandwidth costs. The solution must allow immediate data recovery at no additional cost. How can these requirements be met?

A. Deploy an Amazon S3 Glacier vault and enable expedited retrieval. Enable provisioned retrieval capacity for the workload.
B. Deploy AWS Storage Gateway using cached volumes. Use Storage Gateway to store data in Amazon S3 while retaining copies of frequently accessed data subsets locally.
C. Deploy AWS Storage Gateway using stored volumes to store data locally. Use Storage Gateway to asynchronously backup point-in-time snapshots of your data to Amazon S3.
D. Deploy AWS Direct Connect to connect to the on-premises data center. Configure AWS Storage Gateway to store data locally. Use Storage Gateway to asynchronously backup point-in-time snapshots of your data to Amazon S3.

A

B. Deploy AWS Storage Gateway using cached volumes. Use Storage Gateway to store data in Amazon S3 while retaining copies of frequently accessed data subsets locally.

  • With cached volumes, the primary data is stored in Amazon S3 and only a cache of frequently accessed data is kept locally. This addresses the shortage of on-premises storage capacity while keeping low-latency access to the working set.
  • Because the volume data lives in Amazon S3, it remains immediately accessible without retrieval fees, and point-in-time snapshots can be taken to Amazon S3 for durability.
  • Stored volumes, by contrast, keep the complete data set on premises and only back it up asynchronously to S3, which would not relieve the capacity problem.
171
Q

744 # A company runs a three-tier web application in a VPC across multiple availability zones. Amazon EC2 instances run in an auto-scaling group for the application tier. The company needs to make an automated scaling plan that analyzes the historical trends of the daily and weekly workload of each resource. The configuration should scale resources appropriately based on the forecast and changes in utilization. What scaling strategy should a solutions architect recommend to meet these requirements?

A. Implement dynamic scaling with step scaling based on the average CPU utilization of EC2 instances.
B. Enable predictive scaling to forecast and scale. Configure dynamic scaling with target tracking
C. Create an automated scheduled scaling action based on web application traffic patterns.
D. Establish a simple scaling policy. Increase the cooldown period based on the startup time of the EC2 instances.

A

B. Enable predictive scaling to forecast and scale. Configure dynamic scaling with target tracking

  • Predictive scaling uses machine learning to forecast future resource utilization based on historical data. Target tracking helps maintain a specific utilization target.
  • Considering the requirement to analyze both daily and weekly historical workload trends and adapt to live and forecasted changes, option B (enable predictive scaling to forecast and scale, and configure dynamic scaling with target tracking) is the most suitable. Predictive scaling, with its machine learning capabilities, provides a proactive approach to scaling based on historical patterns.
  • Remember that the effectiveness of predictive scaling depends on the quality and stability of historical data. If the workload is highly dynamic or unpredictable, a combination of options may be necessary.
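A hedged boto3 sketch of pairing a predictive scaling policy with a target tracking policy on the Auto Scaling group; the group name and target values are assumptions:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Predictive scaling forecasts capacity from historical CPU patterns.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="app-tier-asg",
    PolicyName="app-tier-predictive",
    PolicyType="PredictiveScaling",
    PredictiveScalingConfiguration={
        "MetricSpecifications": [{
            "TargetValue": 60.0,
            "PredefinedMetricPairSpecification": {"PredefinedMetricType": "ASGCPUUtilization"},
        }],
        "Mode": "ForecastAndScale",
    },
)

# Target tracking handles live deviations from the forecast.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="app-tier-asg",
    PolicyName="app-tier-cpu-target",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
    },
)
```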
172
Q

745 # A package delivery company has an application that uses Amazon EC2 instances and an Amazon Aurora MySQL DB cluster. As the application becomes more popular, EC2 instance usage increases only slightly. Database cluster usage is increasing at a much faster rate. The company adds a read replica, which reduces database cluster usage for a short period of time. However, the burden continues to increase. The operations that cause the database cluster usage to increase are all repeated read statements that are related to the delivery details. The company needs to alleviate the effect of repeated reads on the database cluster. Which solution will meet these requirements in the MOST cost-effective way?

A. Deploy an Amazon ElastiCache for Redis cluster between the application and the database cluster.
B. Add an additional read replica to the database cluster.
C. Configure Aurora auto-scaling for Aurora read replicas.
D. Modify the database cluster to have multiple write instances.

A

A. Deploy an Amazon ElastiCache for Redis cluster between the application and the database cluster.

  • Amazon ElastiCache for Redis can serve as an in-memory caching solution, reducing the need for repeated reads from the Aurora MySQL DB cluster.
  • Considering the requirement to alleviate the effect of repeated reads on the database cluster, and the cost-effectiveness aspect, option A (deploy an Amazon ElastiCache for Redis cluster between the application and the database cluster) is the most cost-effective. Caching can significantly reduce the load on the database cluster by serving repeated read requests from memory.
173
Q

746 # A company has an application that uses an Amazon DynamoDB table for storage. A solutions architect discovers that many requests to the table do not return the most recent data. Company users have not reported any other issues with database performance. Latency is in an acceptable range. What design change should the solutions architect recommend?

A. Add read replicas to the table.
B. Use a global secondary index (GSI).
C. Request strongly consistent reads for the table.
D. Request eventually consistent reads for the table.

A

C. Request strongly consistent reads for the table.

  • Strongly consistent reads ensure that the most up-to-date data is returned, at the cost of higher latency and twice the read capacity consumption compared with eventually consistent reads.
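A boto3 sketch of requesting a strongly consistent read; the table name and key schema are hypothetical:

```python
import boto3

table = boto3.resource("dynamodb").Table("app-data")  # placeholder table name

# Default reads are eventually consistent; ConsistentRead=True returns the latest committed write.
response = table.get_item(
    Key={"pk": "user#1234"},  # hypothetical key schema
    ConsistentRead=True,
)
item = response.get("Item")
```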
174
Q

747 # A company has deployed its application on Amazon EC2 instances with an Amazon RDS database. The company used the principle of least privilege to configure database access credentials. The company’s security team wants to protect the application and database from SQL injection and other web-based attacks. Which solution will meet these requirements with the LEAST operational overhead?

A. Use security groups and network ACLs to protect the database and application servers.
B. Use AWS WAF to protect the application. Use RDS parameter groups to configure security settings.
C. Use the AWS network firewall to protect the application and database.
D. Use different database accounts in the application code for different functions. Avoid granting excessive privileges to database users.

A

B. Use AWS WAF to protect the application. Use RDS parameter groups to configure security settings.

AWS WAF is designed specifically for web application firewall protection. RDS parameter groups can be used to configure database-specific security settings.

Considering the specific requirement to protect against SQL injection and other web-based attacks, the most suitable option is **option B (use AWS WAF to protect the application and RDS parameter groups to configure security settings)**. AWS WAF is designed for web application firewall protection and lets you create rules to filter and monitor HTTP requests, helping to mitigate common web-based attacks. RDS parameter groups can be used to configure additional database-specific security settings. This combination protects both the application and the database.

175
Q

748 # An e-commerce company runs applications in AWS accounts that are part of an organization in AWS Organizations. Applications run on Amazon Aurora PostgreSQL databases in all accounts. The company needs to prevent malicious activities and must identify abnormal failed and incomplete login attempts to databases. Which solution will meet these requirements in the MOST operationally efficient manner?

A. Attach service control policies (SCPs) to the organization root to identify failed login attempts.
B. Enable the Amazon RDS protection feature in Amazon GuardDuty for the member accounts of the organization.
C. Publish the Aurora general logs to a log group in Amazon CloudWatch Logs. Export the log data to a central Amazon S3 bucket.
D. Publish all events from the Aurora PostgreSQL databases in AWS CloudTrail to a central Amazon S3 bucket.

A

B. Enable the Amazon RDS protection feature in Amazon GuardDuty for the member accounts of the organization.

  • RDS Protection in GuardDuty analyzes RDS login activity for potential access threats and generates findings when suspicious behavior is detected.
  • This option directly addresses the requirement of preventing malicious activity and identifying abnormal login attempts to Amazon Aurora databases, making it an effective option.

This is the most operationally efficient way to prevent malicious activity and identify abnormal login attempts to the Amazon Aurora databases: it provides automated threat detection purpose-built for RDS login activity without requiring any additional infrastructure.

176
Q

749 # A company has an AWS Direct Connect connection from its corporate data center to its VPC in the us-east-1 region. The company recently acquired a corporation that has multiple VPCs and a Direct Connect connection between its on-premises data center and the eu-west-2 region. CIDR blocks for enterprise and corporation VPCs do not overlap. The company requires connectivity between two regions and data centers. The company needs a solution that is scalable while reducing operating expenses. What should a solutions architect do to meet these requirements?

A. Establish VPC cross-region peering between the VPC in us-east-1 and the VPCs in eu-west-2.
B. Create virtual private interfaces from the Direct Connect connection on us-east-1 to the VPCs on eu-west-2.
C. Establish VPN appliances in a fully meshed VPN network hosted on Amazon EC2. Use AWS VPN CloudHub to send and receive data between the data centers and each VPC.
D. Connect the existing Direct Connect connection to a Direct Connect gateway. Route traffic from the virtual private gateways in the VPCs in each Region to the Direct Connect gateway.

A

D. Connect the existing Direct Connect connection to a Direct Connect gateway. Route traffic from the virtual private gateways in the VPCs in each Region to the Direct Connect gateway.

  • A Direct Connect gateway allows you to connect VPCs in different Regions to the same Direct Connect connection, which simplifies the network architecture.
177
Q

750 # A company is developing a mobile game that transmits score updates to a backend processor and then publishes the results to a leaderboard. A solutions architect needs to design a solution that can handle large traffic spikes, process mobile game updates in order of receipt, and store the processed updates in a highly available database. The company also wants to minimize the management overhead required to maintain the solution. What should the solutions architect do to meet these requirements?

A. Push score updates to Amazon Kinesis Data Streams. Process the updates in Kinesis Data Streams with AWS Lambda. Store the processed updates in Amazon DynamoDB.
B. Push score updates to Amazon Kinesis Data Streams. Process the updates with a fleet of Amazon EC2 instances configured for auto scaling. Store the processed updates in Amazon Redshift.
C. Push score updates to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe an AWS Lambda function to the SNS topic to process the updates. Store the processed updates in a SQL database running on Amazon EC2.
D. Send score updates to an Amazon Simple Queue Service (Amazon SQS) queue. Use a fleet of auto-scaling Amazon EC2 instances to process the updates from the SQS queue. Store the processed updates in an Amazon RDS Multi-AZ DB instance.

A

A. Push score updates to Amazon Kinesis Data Streams. Process the updates in Kinesis Data Streams with AWS Lambda. Store the processed updates in Amazon DynamoDB.

  • Amazon Kinesis Data Streams can absorb large traffic spikes and preserves record order within a shard, so AWS Lambda processes the score updates in the order they are received. Storing the processed updates in DynamoDB provides a highly available database.
178
Q

751 # A company has multiple AWS accounts with applications deployed in the us-west-2 Region. Application logs are stored within Amazon S3 buckets in each account. The company wants to build a centralized log analysis solution that uses a single S3 bucket. Logs must not leave us-west-2, and the company wants to incur minimal operational overhead. Which solution meets these requirements and is MOST cost-effective?

A. Create an S3 lifecycle policy that copies objects from one of the S3 application buckets to the centralized S3 bucket.
B. Use S3 Same-Region Replication to replicate logs from the S3 buckets to another S3 bucket on us-west-2. Use this S3 bucket for log analysis.
C. Write a script that uses the PutObject API operation every day to copy all the contents of the buckets to another S3 bucket on us-west-2. Use this S3 bucket for log analysis.
D. Write AWS Lambda functions in these accounts that are triggered every time logs are delivered to S3 buckets (event s3:ObjectCreated:*). Copy the logs to another S3 bucket on us-west-2. Use this S3 bucket for log analysis.

A

B. Use S3 Same-Region Replication to replicate logs from the S3 buckets to another S3 bucket on us-west-2. Use this S3 bucket for log analysis.

  • Built-in S3 function, minimal operating overhead.
  • Reduced latency and near real-time replication.
  • Amazon S3 SRR is an S3 feature that automatically replicates data between buckets within the same AWS Region. With SRR, you can set up replication at a bucket level, a shared prefix level, or an object level using S3 object tags. You can use SRR to make one or more copies of your data in the same AWS Region. SRR helps you address data sovereignty and compliance requirements by keeping a copy of your data in a separate AWS account in the same region as the original. (https://aws.amazon.com/s3/features/replication/#:~:text=Amazon%20S3%20SRR%20is%20an,in%20the%20same%20AWS%20Region.)

Same-Region Replication (SRR) is used to copy objects across Amazon S3 buckets in the same AWS Region. SRR can help you do the following:

Aggregate logs into a single bucket – If you store logs in multiple buckets or across multiple accounts, you can easily replicate logs into a single, in-Region bucket. Doing so allows for simpler processing of logs in a single location.
Configure live replication between production and test accounts – If you or your customers have production and test accounts that use the same data, you can replicate objects between those multiple accounts, while maintaining object metadata.
Abide by data sovereignty laws – You might be required to store multiple copies of your data in separate AWS accounts within a certain Region. Same-Region Replication can help you automatically replicate critical data when compliance regulations don’t allow the data to leave your country.
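A hedged boto3 sketch of a Same-Region Replication rule from an application bucket to the central bucket; it assumes versioning is enabled on both buckets and that the replication role already exists (all names are placeholders):

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_replication(
    Bucket="app-account-logs",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111122223333:role/S3ReplicationRole",
        "Rules": [{
            "ID": "replicate-logs-to-central-bucket",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},  # replicate every object in the bucket
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {"Bucket": "arn:aws:s3:::central-log-analysis-us-west-2"},
        }],
    },
)
```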

179
Q

752 # A company has an app that offers on-demand training videos to students around the world. The app also allows authorized content developers to upload videos. The data is stored in an Amazon S3 bucket in the us-east-2 region. The company has created an S3 bucket in the eu-west-2 region and an S3 bucket in the ap-southeast-1 region. The company wants to replicate the data in the new S3 buckets. The company needs to minimize latency for developers uploading videos and students streaming videos near eu-west-2 and ap-southeast-1. What combination of steps will meet these requirements with the LEAST changes to the application? (Choose two.)

A. Configure one-way replication from the us-east-2 S3 bucket to the eu-west-2 S3 bucket. Configure one-way replication from the us-east-2 S3 bucket to the ap-southeast-1 S3 bucket.
B. Configure one-way replication from the us-east-2 S3 bucket to the eu-west-2 S3 bucket. Configure one-way replication from the eu-west-2 S3 bucket to the ap-southeast-1 S3 bucket.
C. Configure two-way (bidirectional) replication among the S3 buckets located in all three Regions.
D. Create an S3 multi-region access point. Modify the application to use the Amazon Resource Name (ARN) of the multi-region access point for video streaming. Do not modify the application to upload videos.
E. Create an S3 multi-region access point. Modify the application to use the Amazon Resource Name (ARN) of the multi-region access point for video streaming and uploading.

A

C. Configure two-way (bidirectional) replication among the S3 buckets located in all three regions.
E. Create an S3 multi-region access point. Modify the application to use the Amazon Resource Name (ARN) of the multi-region access point for video streaming and uploading.

Option C: Bidirectional replication between the S3 buckets in all three Regions keeps the data synchronized everywhere, providing consistency and minimizing latency for local reads.

Option E: Creating an S3 Multi-Region Access Point and using it for both video streaming and uploads routes each request to the nearest low-latency Region. Considering the need to minimize latency for both uploads and access, options C and E together form the most complete solution: E routes requests to the closest bucket, and C keeps those buckets in sync, all with minimal changes to the application.

180
Q

753 # A company has a new mobile application. Anywhere in the world, users can watch local news on the topics of their choice. Users can also post photos and videos from within the app. Users access content often in the first few minutes after the content is published. New content quickly replaces older content, and then the older content disappears. The local nature of news means that users consume 90% of the content within the AWS Region where it is uploaded. Which solution will optimize the user experience by providing the LOWEST latency for content uploads?

A. Upload and store content to Amazon S3. Use Amazon CloudFront for uploads.
B. Upload and store content to Amazon S3. Use S3 transfer acceleration for uploads.
C. Upload content to Amazon EC2 instances in the region closest to the user. Copy the data to Amazon S3.
D. Upload and store content to Amazon S3 in the region closest to the user. Use multiple Amazon CloudFront distributions.

A

B. Upload and store content to Amazon S3. Use S3 transfer acceleration for uploads.

S3 Transfer Acceleration is designed to accelerate uploads to Amazon S3 by utilizing Amazon CloudFront’s globally distributed edge locations. This option can improve the speed of content uploads.

Considering the emphasis on minimizing latency for content uploads, Option B (using S3 transfer acceleration) appears to be the most appropriate. S3 transfer acceleration is explicitly designed to speed up uploads to Amazon S3, making it a good choice for optimizing the user experience during content uploads.
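A minimal boto3 sketch: enable acceleration on the bucket, then have clients upload through the accelerate endpoint (bucket and object names are placeholders):

```python
import boto3
from botocore.config import Config

s3 = boto3.client("s3")

# Enable Transfer Acceleration on the bucket.
s3.put_bucket_accelerate_configuration(
    Bucket="global-news-uploads",
    AccelerateConfiguration={"Status": "Enabled"},
)

# Clients then upload through the accelerate endpoint, which routes to the nearest edge location.
accelerated_s3 = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
accelerated_s3.upload_file("video.mp4", "global-news-uploads", "uploads/video.mp4")
```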

181
Q

754 # A company is building a new application that uses a serverless architecture. The architecture will consist of an Amazon API Gateway REST API and AWS Lambda functions to handle incoming requests. The company wants to add a service that can send messages received from the gateway’s REST API to multiple target Lambda functions for processing. The service must provide message filtering so that each target Lambda function receives only the messages it needs. Which solution will meet these requirements with the LEAST operational overhead?

A. Send REST API requests from the API gateway to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe to Amazon Simple Queue Service (Amazon SQS) queues in the SNS topic. Configure the target Lambda functions to poll the different SQS queues.
B. Send the requests from the API Gateway REST API to Amazon EventBridge. Configure EventBridge to invoke the target Lambda functions.
C. Send requests from the REST API Gateway to Amazon Managed Streaming for Apache Kafka (Amazon MSK). Configure Amazon MSK to publish messages to target Lambda functions.
D. Send requests from the REST API Gateway to multiple Amazon Simple Queue Service (Amazon SQS) queues. Configure the target Lambda functions to poll the different SQS queues.

A

B. Send the requests from the API Gateway REST API to Amazon EventBridge. Configure EventBridge to invoke the target Lambda functions.

Amazon EventBridge is a serverless event bus that simplifies event management. This option provides a scalable, serverless solution with minimal operational overhead. It allows direct invocation of Lambda functions, reducing the need for additional components.
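
A minimal sketch of the filtering idea with boto3, assuming a hypothetical custom event bus, rule name, and Lambda function ARN; each rule's event pattern delivers only the matching messages to its target Lambda function.

import json
import boto3

events = boto3.client("events")

# Rule that matches only "order" events on a custom bus (names are hypothetical).
events.put_rule(
    Name="orders-only",
    EventBusName="app-bus",
    EventPattern=json.dumps({
        "source": ["app.api"],
        "detail": {"type": ["order"]},
    }),
)

# Route the matching events to the order-processing Lambda function.
# (The function also needs a resource-based permission allowing
# events.amazonaws.com to invoke it.)
events.put_targets(
    Rule="orders-only",
    EventBusName="app-bus",
    Targets=[{
        "Id": "order-lambda",
        "Arn": "arn:aws:lambda:us-east-1:111122223333:function:process-orders",
    }],
)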

NOTE: Option A: Send requests to an SNS topic, subscribe SQS queues to the SNS topic, and configure Lambda functions to poll the SQS queues. This option introduces additional components (SNS and SQS) but provides flexibility through decoupling. It carries more operational overhead than EventBridge because the SNS topic and SQS queues must be managed.

Option D: Send requests to multiple SQS queues and configure Lambda functions to poll those queues. Similar to Option A, this introduces additional components (the SQS queues). While it offers decoupling, it has higher operational overhead because multiple SQS queues must be managed.

182
Q

755 # A company migrated millions of archive files to Amazon S3. A solutions architect needs to implement a solution that encrypts all file data using a customer-provided key. The solution must encrypt existing unencrypted objects and future objects. What solution will meet these requirements?

A. Create a list of unencrypted objects by filtering an Amazon S3 inventory report. Configure an S3 batch operations job to encrypt the objects from the list using server-side encryption with a customer-provided key (SSE-C). Configure the default S3 encryption feature to use server-side encryption with a client-provided key (SSE-C).
B. Use S3 Storage Lens metrics to identify unencrypted S3 buckets. Configure the S3 default encryption feature to use server-side encryption with AWS KMS (SSE-KMS) keys.
C. Create a list of unencrypted objects by filtering the AWS Usage Report for Amazon S3. Configure an AWS Batch job to encrypt the objects in the list using server-side encryption with AWS KMS (SSE-KMS) keys. Configure the S3 default encryption feature to use server-side encryption with AWS KMS (SSE-KMS) keys.
D. Create a list of unencrypted objects by filtering the AWS Usage Report for Amazon S3. Configure the default S3 encryption feature to use server-side encryption with a client-provided key (SSE-C).

A

A. Create a list of unencrypted objects by filtering an Amazon S3 inventory report. Configure an S3 batch operations job to encrypt the objects from the list using server-side encryption with a customer-provided key (SSE-C). Configure the default S3 encryption feature to use server-side encryption with a client-provided key (SSE-C).

  • Analysis: This option allows encryption of existing unencrypted objects and applies the default encryption for future objects. It is suitable for a customer-provided key (SSE-C).

https://aws.amazon.com/blogs/storage/encrypting-objects-with-amazon-s3-batch-operations/
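
Under the hood, an S3 Batch Operations copy job with SSE-C performs a per-object copy like the hedged boto3 sketch below; the bucket, key, and the generated 256-bit key are hypothetical placeholders.

import os
import boto3

s3 = boto3.client("s3")
customer_key = os.urandom(32)  # 256-bit customer-provided key (store it securely)

# Re-encrypt an existing object in place with SSE-C by copying it onto itself.
s3.copy_object(
    Bucket="archive-bucket",                      # hypothetical bucket
    Key="files/report-0001.dat",                  # hypothetical key
    CopySource={"Bucket": "archive-bucket", "Key": "files/report-0001.dat"},
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,  # boto3 handles the base64 encoding and MD5
)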

183
Q

756 # The DNS provider that hosts a company’s domain name records is experiencing outages that are causing downtime for a website running on AWS. The company needs to migrate to a more resilient managed DNS service and wants the service to run on AWS. What should a solutions architect do to quickly migrate DNS hosting service?

A. Create an Amazon Route 53 public hosted zone for the domain name. Import the zone file that contains the domain records hosted by the previous provider.
B. Create a private Amazon Route 53 hosted zone for the domain name. Import the zone file that contains the domain records hosted by the previous provider.
C. Create a simple AD directory in AWS. Enable zone transfer between the DNS provider and the AWS Directory Service for Microsoft Active Directory for domain records.
D. Create an Amazon Route 53 Resolver ingress endpoint in the VPC. Specify the IP addresses to which the provider’s DNS will forward DNS queries. Configure the provider’s DNS to forward DNS queries for the domain to the IP addresses that are specified on the ingress endpoint.

A

A. Create an Amazon Route 53 public hosted zone for the domain name. Import the zone file that contains the domain records hosted by the previous provider.

  • Analysis: This option involves creating a public zone hosted on Amazon Route 53 and importing existing records. It’s a quick and easy approach to migrating DNS hosting, and is suitable for a public website.
184
Q

757 # A company is building an application on AWS that connects to an Amazon RDS database. The company wants to manage application configuration and securely store and retrieve credentials from the database and other services. Which solution will meet these requirements with the LEAST administrative overhead?

A. Use AWS AppConfig to store and manage application configuration. Use AWS Secrets Manager to store and retrieve credentials.
B. Use AWS Lambda to store and manage application configuration. Use AWS Systems Manager Parameter Store to store and retrieve credentials.
C. Use an encrypted application configuration file. Store the file in Amazon S3 for application configuration. Create another S3 file to store and retrieve the credentials.
D. Use AWS AppConfig to store and manage application configuration. Use Amazon RDS to store and retrieve credentials.

A

A. Use AWS AppConfig to store and manage application configuration. Use AWS Secrets Manager to store and retrieve credentials.

  • Analysis: AWS AppConfig is designed to manage application configurations, and AWS Secrets Manager is designed to securely store and manage sensitive information, such as database credentials. This option provides a dedicated and secure solution for both aspects.
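
A small boto3 sketch of retrieving the database credentials at runtime; the secret name and JSON fields are hypothetical placeholders, and AppConfig data would be fetched separately through the AppConfig Data API.

import json
import boto3

secrets = boto3.client("secretsmanager")

# Fetch the RDS credentials stored as a JSON secret (name is hypothetical).
response = secrets.get_secret_value(SecretId="prod/app/rds-credentials")
creds = json.loads(response["SecretString"])

db_user = creds["username"]
db_password = creds["password"]
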
185
Q

758 # To meet security requirements, an enterprise needs to encrypt all of its application data in transit while communicating with an Amazon RDS MySQL DB instance. A recent security audit revealed that encryption at rest is enabled using AWS Key Management Service (AWS KMS), but data in transit is not enabled. What should a solutions architect do to satisfy security requirements?

A. Enable IAM database authentication on the database.
B. Provide self-signed certificates. Use the certificates on all connections to the RDS instance.
C. Take a snapshot of the RDS instance. Restore the snapshot to a new instance with encryption enabled.
D. Download the root certificates provided by AWS. Provide the certificates on all connections to the RDS instance.

A

D. Download the root certificates provided by AWS. Provide the certificates on all connections to the RDS instance.

This option involves using the root certificates provided by AWS for SSL/TLS encryption. Downloading and configuring these certificates on application connections will encrypt data in transit. Make sure your application and database settings are configured correctly to use SSL/TLS.

To encrypt data in transit with Amazon RDS MySQL, option D is best suited. It involves using AWS-provided root certificates for SSL/TLS encryption, providing a secure way to encrypt communication between the application and the RDS instance.

186
Q

759 # A company is designing a new web service that will run on Amazon EC2 instances behind an Elastic Load Balancing (ELB) load balancer. However, many web services clients can only reach authorized IP addresses in their firewalls. What should a solutions architect recommend to meet customer needs?

A. A network load balancer with an associated Elastic IP address.
B. An application load balancer with an associated Elastic IP address.
C. An A record in an Amazon Route 53 hosted zone that points to an elastic IP address.
D. An EC2 instance with a public IP address running as a proxy in front of the load balancer.

A

A. A network load balancer with an associated Elastic IP address.

Using a Network Load Balancer instead of a Classic Load Balancer has the following benefits: Support for static IP addresses for the load balancer. https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html

187
Q

760 # A company has established a new AWS account. The account was recently provisioned and no changes were made to the default settings. The company is concerned about the security of the root user of the AWS account. What should be done to protect the root user?

A. Create IAM users for daily administrative tasks. Disable the root user.
B. Create IAM users for daily administrative tasks. Enable multi-factor authentication on the root user.
C. Generate an access key for the root user. Use the access key for daily management tasks instead of the AWS Management Console.
D. Provide the root user credentials to the senior solutions architect. Have the solutions architect use the root user for daily administration tasks.

A

B. Create IAM users for daily administrative tasks. Enable multi-factor authentication on the root user.

The root user cannot be disabled and should not have access keys; AWS best practice is to enable MFA on the root user and use IAM users (or roles) for daily administrative tasks.

188
Q

761 # A company is implementing an application that processes streaming data in near real time. The company plans to use Amazon EC2 instances for the workload. The network architecture must be configurable to provide the lowest possible latency between nodes. What combination of network solutions will meet these requirements? (Choose two.)

A. Enable and configure enhanced networking on each EC2 instance.
B. Group EC2 instances into separate accounts.
C. Run the EC2 instances in a cluster placement group.
D. Connect multiple elastic network interfaces to each EC2 instance.
E. Use optimized Amazon Elastic Block Store (Amazon EBS) instance types.

A

A. Enable and configure enhanced networking on each EC2 instance.
C. Run the EC2 instances in a cluster placement group.

  • Enhanced networking provides higher performance by offloading some of the network processing to the underlying hardware, which helps reduce latency.
  • A cluster placement group is a logical grouping of instances within a single availability zone. It is designed to provide low latency communication between instances. This can be particularly beneficial for applications that require high network performance.
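
A hedged boto3 sketch of both pieces; the AMI ID, group name, and instance type are hypothetical placeholders, and the instance type must be one that supports enhanced networking (ENA).

import boto3

ec2 = boto3.client("ec2")

# A cluster placement group keeps the instances physically close together
# inside one Availability Zone for the lowest inter-node latency.
ec2.create_placement_group(GroupName="stream-cluster", Strategy="cluster")

# Launch the worker nodes into the placement group; current-generation
# instance types such as c5n ship with ENA enhanced networking enabled.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical AMI
    InstanceType="c5n.large",
    MinCount=3,
    MaxCount=3,
    Placement={"GroupName": "stream-cluster"},
)
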
189
Q

762 # A financial services company wants to close two data centers and migrate more than 100 TB of data to AWS. The data has an intricate directory structure with millions of small files stored in deep hierarchies of subfolders. Most data is unstructured, and the enterprise file storage consists of SMB-based storage types from multiple vendors. The company does not want to change its applications to access data after the migration. What should a solutions architect do to meet these requirements with the LEAST operational overhead?

A. Use AWS Direct Connect to migrate data to Amazon S3.
B. Use AWS DataSync to migrate data to Amazon FSx for Lustre.
C. Use AWS DataSync to migrate the data to Amazon FSx for Windows File Server.
D. Use AWS Direct Connect to migrate on-premises data file storage to an AWS Storage Gateway volume gateway.

A

C. Use AWS DataSync to migrate the data to Amazon FSx for Windows File Server.

AWS DataSync can be used to migrate data efficiently, and Amazon FSx for Windows File Server provides a highly available, fully managed Windows file system with support for SMB-based storage. This option allows the company to maintain compatibility of existing applications without changing the way applications access data after migration.

190
Q

763 # An organization in AWS Organizations is used by a company to manage AWS accounts that contain applications. The company establishes a dedicated monitoring member account in the organization. The company wants to query and view observability data across all accounts using Amazon CloudWatch. What solution will meet these requirements?

A. Enable CloudWatch cross-account observability for the monitoring account. Deploy an AWS CloudFormation template provided by the monitoring account in each AWS account to share the data with the monitoring account.
B. Configure service control policies (SCP) to provide access to CloudWatch in the monitoring account under the organization’s root organizational unit (OU).
C. Configure a new IAM user in the monitoring account. In each AWS account, configure an IAM policy to access and view CloudWatch data in the account. Attach the new IAM policy to the new IAM user.
D. Create a new IAM user in the monitoring account. Create cross-account IAM policies in each AWS account. Attach the IAM policies to the new IAM user.

A

A. Enable CloudWatch cross-account observability for the monitoring account. Deploy an AWS CloudFormation template provided by the monitoring account in each AWS account to share the data with the monitoring account.

  • This option involves enabling observability between CloudWatch accounts, allowing the monitoring account to access data from other accounts. Deploying an AWS CloudFormation template to each AWS account makes it easy to share observability data. This approach can work effectively to centralize monitoring across multiple accounts.
191
Q

764 # A company’s website is used to sell products to the public. The site runs on Amazon EC2 instances in an auto-scaling group behind an application load balancer (ALB). There is also an Amazon CloudFront distribution, and AWS WAF is being used to protect against SQL injection attacks. The ALB is the origin for the CloudFront distribution. A recent review of security logs revealed an external malicious IP address that must be blocked from accessing the website. What should a solutions architect do to secure the application?

A. Modify the network ACL on the CloudFront distribution to add a deny rule for the malicious IP address.
B. Modify the configuration of AWS WAF to add an IP match condition to block the malicious IP address.
C. Modify the network ACL for the EC2 instances in the target groups behind the ALB to deny the malicious IP address.
D. Modify the security groups for the EC2 instances in the target groups behind the ALB to deny the malicious IP address.

A

B. Modify the configuration of AWS WAF to add an IP match condition to block the malicious IP address.

  • AWS WAF (Web Application Firewall) is designed to protect web applications from various attacks, including SQL injection. Since AWS WAF is already being used in this scenario, modifying its configuration to add an IP match condition is a suitable approach.
  • Modifying AWS WAF to add an IP match condition allows you to specify rules to block or allow requests based on specific IP addresses. This way, you can block access to the website of the identified malicious IP address.

NOTE: Regarding option A: network ACLs operate at the subnet level inside a VPC and cannot be attached to a CloudFront distribution. IP-based filtering for traffic that flows through CloudFront is done with AWS WAF, which is where the block action is applied.
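
A hedged boto3 sketch of blocking the address with AWS WAF; the web ACL already exists in this scenario, so only the IP set and a blocking rule statement are shown. The names and address are hypothetical, and CLOUDFRONT-scoped WAF resources must be created in us-east-1.

import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

# IP set containing the malicious address (address is hypothetical).
ip_set = wafv2.create_ip_set(
    Name="blocked-ips",
    Scope="CLOUDFRONT",
    IPAddressVersion="IPV4",
    Addresses=["203.0.113.10/32"],
)

# Rule statement to add to the existing web ACL (via update_web_acl).
block_rule = {
    "Name": "block-malicious-ip",
    "Priority": 0,
    "Statement": {
        "IPSetReferenceStatement": {"ARN": ip_set["Summary"]["ARN"]}
    },
    "Action": {"Block": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "block-malicious-ip",
    },
}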

192
Q

765 # A company establishes an organization in AWS Organizations that contains 10 AWS accounts. A solutions architect must design a solution to provide account access to several thousand employees. The company has an existing identity provider (IdP). The company wants to use the existing IdP for authentication to AWS. What solution will meet these requirements?

A. Create IAM users for employees in the required AWS accounts. Connect IAM users to the existing IdP. Configure federated authentication for IAM users.
B. Configure the AWS account root users with email addresses and passwords of the users that are synchronized from the existing IdP.
C. Configure AWS IAM Identity Center (AWS Single Sign-On). Connect the IAM Identity Center to the existing IdP. Provision users and groups from the existing IdP.
D. Use AWS Resource Access Manager (AWS RAM) to share access to AWS accounts with existing IdP users.

A

C. Configure AWS IAM Identity Center (AWS Single Sign-On). Connect the IAM Identity Center to the existing IdP. Provision users and groups from the existing IdP.

Explanation:
1. AWS IAM Identity Center (AWS Single Sign-On): AWS IAM Identity Center is a fully managed service that allows users to access multiple AWS accounts and applications using their existing corporate credentials. It simplifies user access management across all AWS accounts.
2. Connect to the existing IdP: IAM Identity Center can be configured to connect to the existing identity provider (IdP), allowing users to sign in with their existing corporate credentials. This takes advantage of the existing authentication mechanism.
3. Provision users and groups: IAM Identity Center allows you to provision users and groups from the existing IdP. This eliminates the need to manually create IAM users in each AWS account, providing a more centralized and efficient approach.

193
Q

766 # A solutions architect is designing an AWS Identity and Access Management (IAM) authorization model for a company’s AWS account. The company has designated five specific employees to have full access to AWS services and resources in the AWS account. The solutions architect has created an IAM user for each of the five designated employees and created an IAM user group. What solution will meet these requirements?

A. Attach the AdministratorAccess resource-based policy to the IAM user group. Place each of the five designated employee IAM users in the IAM user group.
B. Attach the SystemAdministrator identity-based policy to the IAM user group. Place each of the five designated employee IAM users in the IAM user group.
C. Attach the AdministratorAccess identity-based policy to the IAM user group. Place each of the five designated employee IAM users in the IAM user group.
D. Attach the SystemAdministrator resource-based policy to the IAM user group. Place each of the five designated employee IAM users in the IAM user group.

A

C. Attach the AdministratorAccess identity-based policy to the IAM user group. Place each of the five designated employee IAM users in the IAM user group.

Explanation:
1. AdministratorAccess policy: AdministratorAccess is an AWS managed policy that grants full access to AWS services and resources. It is designed to provide unrestricted access to perform any action in the AWS account.
2. Identity-based policy: - Identity-based policies are attached directly to IAM users, groups, or roles. In this case, attaching the “AdministratorAccess” policy directly to the IAM user group ensures that all users within that group inherit the permissions.
3. IAM User Group: - Creating an IAM user group allows for easy permissions management. By placing each of the five designated employee IAM users in the IAM user group, you can efficiently manage and grant full access to the specified resources.

NOTE: - A. Attach the AdministratorAccess resource-based policy to the IAM user group: - Resource-based policies are used to define permissions on resources, such as S3 buckets or Lambda functions, not for IAM user groups. The “AdministratorAccess” policy is an identity-based policy.
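
A minimal boto3 sketch of the setup; the group and user names are hypothetical, while the AdministratorAccess managed policy ARN is fixed by AWS.

import boto3

iam = boto3.client("iam")

iam.create_group(GroupName="full-admins")

# AdministratorAccess is an AWS managed, identity-based policy.
iam.attach_group_policy(
    GroupName="full-admins",
    PolicyArn="arn:aws:iam::aws:policy/AdministratorAccess",
)

# Add the five designated employees (user names are hypothetical).
for user in ["alice", "bob", "carol", "dave", "erin"]:
    iam.add_user_to_group(GroupName="full-admins", UserName=user)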

194
Q

767 # A company has a multi-tier payment processing application that relies on virtual machines (VMs). Communication between tiers occurs asynchronously through a third-party middleware solution that guarantees exactly-once delivery. The company needs a solution that requires the least amount of infrastructure management. The solution must guarantee exactly-once delivery for in-app messaging. What combination of actions will meet these requirements? (Choose two.)

A. Use AWS Lambda for the compute layers of the architecture.
B. Use Amazon EC2 instances for the compute layers of the architecture.
C. Use Amazon Simple Notification Service (Amazon SNS) as a messaging component between compute layers.
D. Use Amazon Simple Queue Service (Amazon SQS) FIFO queues as the messaging component between the compute layers.
E. Use containers that are based on Amazon Elastic Kubernetes Service (Amazon EKS) for the compute layers in the architecture.

A

A. Use AWS Lambda for the compute layers of the architecture.
D. Use Amazon Simple Queue Service (Amazon SQS) FIFO queues as the messaging component between the compute layers.

  • AWS Lambda is a serverless computing service that requires minimal infrastructure management. It automatically scales based on the number of incoming requests, and you don’t have to provision or manage servers. Lambda may be suitable for stateless and event-based processing, making it a good choice for certain types of applications.
  • Amazon SQS FIFO (First-In-First-Out) queues provide ordered processing and exactly-once delivery of messages. Using SQS FIFO queues ensures that messages are processed in the order they are received and are delivered exactly once, which helps maintain the integrity of the payment processing application.
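
A minimal boto3 sketch of a FIFO queue with exactly-once semantics; the queue name and message fields are hypothetical. Content-based deduplication plus a message group ID gives ordered, deduplicated delivery.

import boto3

sqs = boto3.client("sqs")

# FIFO queue names must end in ".fifo"; deduplication removes accidental resends.
queue = sqs.create_queue(
    QueueName="payments.fifo",
    Attributes={
        "FifoQueue": "true",
        "ContentBasedDeduplication": "true",
    },
)

sqs.send_message(
    QueueUrl=queue["QueueUrl"],
    MessageBody='{"paymentId": "12345", "amount": 100}',
    MessageGroupId="customer-42",  # messages within a group are delivered in order
)
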
195
Q

768 # A company has a nightly batch processing routine that analyzes the report files that a local file system receives daily via SFTP. The company wants to move the solution to the AWS cloud. The solution must be highly available and resilient. The solution must also minimize operational effort. Which solution meets these requirements?

A. Deploy AWS Transfer for SFTP and an Amazon Elastic File System (Amazon EFS) file system for storage. Use an Amazon EC2 instance in an auto-scaling group with a scheduled scaling policy to run the batch operation.
B. Deploy an Amazon EC2 instance running Linux and an SFTP service. Use an Amazon Elastic Block Store (Amazon EBS) volume for storage. Use an auto-scaling group with the minimum number of instances and the desired number of instances set to 1.
C. Deploy an Amazon EC2 instance running Linux and an SFTP service. Use an Amazon Elastic File System (Amazon EFS) file system for storage. Use an auto-scaling group with the minimum number of instances and the desired number of instances set to 1.
D. Deploy AWS Transfer for SFTP and an Amazon S3 bucket for storage. Modify the application to extract batch files from Amazon S3 to an Amazon EC2 instance for processing. Use an EC2 instance in an auto-scaling group with a scheduled scaling policy to run the batch operation.

A

D. Deploy AWS Transfer for SFTP and an Amazon S3 bucket for storage. Modify the application to extract batch files from Amazon S3 to an Amazon EC2 instance for processing. Use an EC2 instance in an auto-scaling group with a scheduled scaling policy to run the batch operation.

AWS Transfer Family (SFTP) and Amazon S3 are both fully managed and highly available, so the SFTP endpoint and the storage require no server maintenance, and the scheduled scaling policy runs the batch EC2 instance only when the nightly job needs it.

196
Q

769 # A company has users around the world accessing its HTTP-based application deployed on Amazon EC2 instances in multiple AWS Regions. The company wants to improve the availability and performance of the application. The company also wants to protect the application against common web exploits that can affect availability, compromise security or consume excessive resources. Static IP addresses are required. What should a solutions architect recommend to achieve this?

A. Put EC2 instances behind network load balancers (NLBs) in each region. Deploy AWS WAF on NLBs. Create an accelerator using AWS Global Accelerator and register NLBs as endpoints.
B. Put the EC2 instances behind application load balancers (ALBs) in each region. Deploy AWS WAF in the ALBs. Create an accelerator using AWS Global Accelerator and register ALBs as endpoints.
C. Put EC2 instances behind network load balancers (NLBs) in each region. Deploy AWS WAF on NLBs. Create an Amazon CloudFront distribution with an origin that uses Amazon Route 53 latency-based routing to route requests to NLBs.
D. Put EC2 instances behind application load balancers (ALBs) in each region. Create an Amazon CloudFront distribution with an origin that uses Amazon Route 53 latency-based routing to route requests to ALBs. Deploy AWS WAF to your CloudFront distribution.

A

B. Put the EC2 instances behind application load balancers (ALBs) in each region. Deploy AWS WAF in the ALBs. Create an accelerator using AWS Global Accelerator and register ALBs as endpoints.

  • ALBs are designed to route HTTP/HTTPS traffic and provide advanced features including content-based routing, making them suitable for web applications.
  • AWS WAF on ALB provides protection against common web exploits.
  • AWS Global Accelerator is used to improve availability and performance by providing a static Anycast IP address and directing traffic to optimal AWS endpoints.
197
Q

770 # A company’s data platform uses an Amazon Aurora MySQL database. The database has multiple read replicas and multiple database instances in different availability zones. Users have recently reported database errors indicating there are too many connections. The company wants to reduce failover time by 20% when a read replica is promoted to primary writer. What solution will meet this requirement?

A. Switch from Aurora to Amazon RDS with Multi-AZ cluster deployment.
B. Use Amazon RDS Proxy in front of the Aurora database.
C. Switch to Amazon DynamoDB with DynamoDB Accelerator (DAX) for read connections.
D. Switch to Amazon Redshift with relocation capability.

A

B. Use Amazon RDS Proxy in front of the Aurora database.

To reduce failover time and improve connection handling for the Amazon Aurora MySQL database, the recommended solution is to place Amazon RDS Proxy in front of the database.

Explanation:
1. Amazon RDS Proxy: Amazon RDS Proxy is a highly available, fully managed database proxy for Amazon RDS that makes applications more scalable, more resilient to database failures, and more secure. It helps manage database connections, which is particularly beneficial in scenarios with too many connections.
2. Benefits of using Amazon RDS Proxy: Efficient connection pooling: RDS Proxy efficiently manages connections to the database, reducing the potential for connection-related issues. Reduced failover time: RDS Proxy can significantly reduce failover time when a read replica is promoted to the primary writer, because it maintains connections during failovers and minimizes the impact on applications.

NOTE: Discussion of other options: Option A (switch from Aurora to Amazon RDS with a Multi-AZ cluster deployment) does not address the specific need to reduce failover time; Aurora is known for its fast failovers, and Multi-AZ capability is already built into Aurora.

198
Q

771 # A company stores text files in Amazon S3. Text files include customer chat messages, date and time information, and customer personally identifiable information (PII). The company needs a solution to provide conversation samples to a third-party service provider for quality control. The external service provider needs to randomly choose sample conversations up to the most recent conversation. The company must not share the customer’s PII with the third-party service provider. The solution must scale as the number of customer conversations increases. Which solution will meet these requirements with the LEAST operational overhead?

A. Create an Object Lambda access point. Create an AWS Lambda function that redacts the PII when the function reads the file. Instruct the external service provider to access the Object Lambda access point.
B. Create a batch process on an Amazon EC2 instance that regularly reads all new files, redacts the files’ PII, and writes the redacted files to a different S3 bucket. Instruct the third-party service provider to access the repository that does not contain the PII.
C. Create a web application on an Amazon EC2 instance that lists the files, redacts the PII of the files, and allows the third-party service provider to download new versions of the files that have the redacted PII.
D. Create an Amazon DynamoDB table. Create an AWS Lambda function that reads only data from files that do not contain PII. Configure the Lambda function to store non-PII data in the DynamoDB table when a new file is written to Amazon S3. Grant the external service provider access to the DynamoDB table.

A

A. Create an Object Lambda access point. Create an AWS Lambda function that redacts the PII when the function reads the file. Instruct the external service provider to access the Object Lambda access point.

  • Object Lambda access points allow custom processing of S3 object data before it is returned to the requester. A Lambda function can be attached to the access point to dynamically redact PII from the text files as they are accessed, which ensures the third-party service provider only receives redacted content without PII.
  • AWS Lambda provides a scalable, low-overhead, serverless environment for processing.
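
A hedged sketch of the Lambda handler behind the Object Lambda access point; redact_pii is a hypothetical placeholder for whatever redaction logic (for example regular expressions or Amazon Comprehend) the company chooses.

import urllib.request
import boto3

s3 = boto3.client("s3")

def redact_pii(text: str) -> str:
    # Hypothetical placeholder: replace with real PII-redaction logic.
    return text

def handler(event, context):
    ctx = event["getObjectContext"]

    # Fetch the original object through the presigned URL S3 supplies.
    original = urllib.request.urlopen(ctx["inputS3Url"]).read().decode("utf-8")

    # Return the redacted content to the requester instead of the original.
    s3.write_get_object_response(
        RequestRoute=ctx["outputRoute"],
        RequestToken=ctx["outputToken"],
        Body=redact_pii(original).encode("utf-8"),
    )
    return {"statusCode": 200}
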
199
Q

772 # A company is running a legacy system on an Amazon EC2 instance. The application code cannot be modified and the system cannot run in more than one instance. A solutions architect must design a resilient solution that can improve system recovery time. What should the solutions architect recommend to meet these requirements?

A. Enable termination protection for the EC2 instance.
B. Configure the EC2 instance for Multi-AZ deployment.
C. Create an Amazon CloudWatch alarm to recover the EC2 instance in case of failure.
D. Start the EC2 instance with two Amazon Elastic Block Store (Amazon EBS) volumes that use RAID configurations for storage redundancy.

A

C. Create an Amazon CloudWatch alarm to recover the EC2 instance in case of failure.

This option uses Amazon CloudWatch to monitor the health of the EC2 instance. A CloudWatch alarm on the StatusCheckFailed_System metric can be configured with the built-in EC2 recover action, which automatically recovers the instance onto healthy hardware while preserving the instance ID, private IP addresses, Elastic IP addresses, and instance metadata. This option does not modify the application code and improves recovery time for a system that can only run on a single instance.
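
A minimal boto3 sketch of an alarm that uses the built-in EC2 recover action; the instance ID and Region are hypothetical, and no Lambda function is required.

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="legacy-app-auto-recover",
    Namespace="AWS/EC2",
    MetricName="StatusCheckFailed_System",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=2,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    # Built-in action that recovers the instance onto healthy hardware.
    AlarmActions=["arn:aws:automate:us-east-1:ec2:recover"],
)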

200
Q

773 # A company wants to deploy its containerized application workloads in a VPC across three availability zones. The business needs a solution that is highly available across all availability zones. The solution should require minimal changes to the application. Which solution will meet these requirements with the LEAST operational overhead?

A. Use Amazon Elastic Container Service (Amazon ECS). Configure Amazon ECS service auto-scaling to use target tracking scaling. Set the minimum capacity to 3. Set the task placement strategy type to spread with an availability zone attribute.
B. Use Amazon Elastic Kubernetes Service (Amazon EKS) self-managed nodes. Configure application auto-scaling to use target tracking scaling. Set the minimum capacity to 3.
C. Use Amazon EC2 Reserved Instances. Start three EC2 instances in a spread placement group. Configure an auto-scaling group to use target tracking scaling. Set the minimum capacity to 3.
D. Use an AWS Lambda function. Configure the Lambda function to connect to a VPC. Configure application auto-scaling to use Lambda as a scalable target. Set the minimum capacity to 3.

A

A. Use Amazon Elastic Container Service (Amazon ECS). Configure Amazon ECS service auto-scaling to use target tracking scaling. Set the minimum capacity to 3. Set the task placement strategy type to spread with an availability zone attribute.

This option uses Amazon ECS for container orchestration. Amazon ECS service auto scaling automatically adjusts the number of tasks running in a service. Setting the task placement strategy to spread with an availability zone attribute ensures that tasks are distributed evenly across Availability Zones. This solution is designed for high availability with minimal application changes.

201
Q

774 # A media company stores movies on Amazon S3. Each movie is stored in a single video file ranging from 1 GB to 10 GB in size. The company must be able to provide streaming content for a movie within 5 minutes of a user purchasing it. There is a greater demand for films less than 20 years old than for films more than 20 years old. The company wants to minimize the costs of the hosting service based on demand. What solution will meet these requirements?

A. Store all media in Amazon S3. Use S3 lifecycle policies to move media data to the infrequent access tier when demand for a movie decreases.
B. Store newer movie video files in S3 Standard. Store older movie video files in S3 Standard-Infrequent Access (S3 Standard-IA). When a user requests an older movie, recover the video file using standard retrieval.
C. Store newer movie video files in S3 Intelligent-Tiering. Store older movie video files in S3 Glacier Flexible Retrieval. When a user requests an older movie, recover the video file using expedited retrieval.
D. Store newer movie video files in S3 Standard. Store older movie video files in S3 Glacier Flexible Retrieval. When a user requests an older movie, recover the video file using bulk retrieval.

A

C. Store newer movie video files in S3 Intelligent-Tiering. Store older movie video files in S3 Glacier Flexible Retrieval. When a user requests an older movie, recover the video file using expedited retrieval.

This option uses S3 Intelligent-Tiering for newer movies, automatically optimizing costs based on access patterns. Older movies are stored in S3 Glacier Flexible Retrieval, and expedited retrieval is used when a user requests an older movie. Expedited retrieval from S3 Glacier Flexible Retrieval typically makes data available within 1-5 minutes, which satisfies the 5-minute requirement.
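
A small boto3 sketch of the retrieval step for an older title stored in S3 Glacier Flexible Retrieval; the bucket and key are hypothetical placeholders.

import boto3

s3 = boto3.client("s3")

# Expedited retrievals typically make the object available in 1-5 minutes;
# the restored copy then stays readable for the requested number of days.
s3.restore_object(
    Bucket="movie-archive",                # hypothetical bucket
    Key="titles/old-movie-1998.mp4",       # hypothetical key
    RestoreRequest={
        "Days": 1,
        "GlacierJobParameters": {"Tier": "Expedited"},
    },
)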

202
Q

775 # A solutions architect needs to design the architecture of an application that a vendor provides as a Docker container image. The container needs 50 GB of available storage for temporary files. The infrastructure must be serverless. Which solution meets these requirements with the LEAST operational overhead?

A. Create an AWS Lambda function that uses the Docker container image with a volume mounted on Amazon S3 that has more than 50 GB of space.
B. Create an AWS Lambda function that uses the Docker container image with an Amazon Elastic Block Store (Amazon EBS) volume that has more than 50 GB of space.
C. Create an Amazon Elastic Container Service (Amazon ECS) cluster that uses the AWS Fargate launch type. Create a task definition for the container image with an Amazon Elastic File System (Amazon EFS) volume. Create a service with that task definition.
D. Create an Amazon Elastic Container Service (Amazon ECS) cluster that uses the Amazon EC2 launch type with an Amazon Elastic Block Store (Amazon EBS) volume that has more than 50 GB of space. Create a task definition for the container image. Create a service with that task definition.

A

C. Create an Amazon Elastic Container Service (Amazon ECS) cluster that uses the AWS Fargate launch type. Create a task definition for the container image with an Amazon Elastic File System (Amazon EFS) volume. Create a service with that task definition.

The key here is the 50 GB requirement. Remember that Lambda container images can be at most 10 GB in size, and Lambda's ephemeral /tmp storage is also limited: the default 512 MB can be increased only up to 10 GB (https://blog.awsfundamentals.com/lambda-limitations). Neither Lambda-based option can provide 50 GB of temporary storage.

This option involves using Amazon ECS with Fargate, a serverless computing engine for containers. Using Amazon EFS enables persistent storage across multiple containers and instances. This approach meets the requirement of providing 50GB of storage and is serverless as it uses Fargate.

AWS Fargate is a serverless, pay-as-you-go compute engine that lets you focus on building applications without managing servers. AWS Fargate is compatible with both Amazon Elastic Container Service (Amazon ECS) and Amazon Elastic Kubernetes Service (Amazon EKS). Each Amazon ECS task hosted on AWS Fargate receives 20 GiB of ephemeral storage by default, which can be increased up to 200 GiB, and volumes can be mounted and shared among containers using the volumes, mountPoints, and volumesFrom parameters in the task definition. https://docs.aws.amazon.com/AmazonECS/latest/developerguide/fargate-task-storage.html
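
A hedged boto3 sketch of the task definition piece; the file system ID, image URI, and names are hypothetical placeholders. The EFS volume provides the 50 GB of scratch space, and Fargate ephemeral storage could alternatively be enlarged for purely temporary files.

import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="vendor-app",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="1024",
    memory="2048",
    # Alternative for purely temporary files: enlarge Fargate ephemeral storage.
    ephemeralStorage={"sizeInGiB": 60},
    volumes=[{
        "name": "scratch",
        "efsVolumeConfiguration": {"fileSystemId": "fs-0123456789abcdef0"},
    }],
    containerDefinitions=[{
        "name": "vendor-container",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/vendor-app:latest",
        "essential": True,
        "mountPoints": [{"sourceVolume": "scratch", "containerPath": "/scratch"}],
    }],
)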

203
Q

776 # A company needs to use its on-premises LDAP directory service to authenticate its users to the AWS Management Console. The directory service does not support Security Assertion Markup Language (SAML). Which solution meets these requirements?

A. Enable AWS IAM Identity Center (AWS Single Sign-On) between AWS and the on-premises LDAP.
B. Create an IAM policy that uses AWS credentials and integrate the policy into LDAP.
C. Configure a process that rotates IAM credentials each time LDAP credentials are updated.
D. Develop an on-premises custom identity broker application or process that uses AWS Security Token Service (AWS STS) to obtain short-lived credentials.

A

D. Develop an on-premises custom identity broker application or process that uses AWS Security Token Service (AWS STS) to obtain short-lived credentials.

This option involves creating a custom on-premises identity broker application or process that communicates with AWS Security Token Service (AWS STS) to obtain short-lived credentials. The custom broker acts as an intermediary between the on-premises LDAP directory and AWS: it authenticates users against LDAP and then calls AWS STS for temporary security credentials, so no SAML support is required from the directory. This is a common approach for scenarios where SAML is not an option.

Option A: Enable AWS IAM Identity Center (AWS Single Sign-On) between AWS and on-premises LDAP. IAM Identity Center is designed to simplify AWS access management for workforce users and supports integration with on-premises directories, but it relies primarily on SAML for federation. Since the on-premises LDAP directory does not support SAML, option A is not suitable for this scenario.
Option B: Create an IAM policy that uses AWS credentials and integrate the policy into LDAP. IAM policies are attached to AWS identities; they are not integrated directly into LDAP, so this option does not align with common practices for federated authentication.
Option C: Configure a process that rotates IAM credentials each time LDAP credentials are updated. Rotating IAM credentials whenever LDAP credentials change introduces complexity and operational overhead. IAM credentials are typically long-lived, and this approach does not provide the single sign-on experience that federated authentication solutions offer.

204
Q

777 # A company stores multiple Amazon Machine Images (AMIs) in an AWS account to launch its Amazon EC2 instances. AMIs contain critical data and configurations that are necessary for business operations. The company wants to implement a solution that recovers accidentally deleted AMIs quickly and efficiently. Which solution will meet these requirements with the LEAST operational overhead?

A. Create Amazon Elastic Block Store (Amazon EBS) snapshots of the AMIs. Store snapshots in a separate AWS account.
B. Copy all AMIs to another AWS account periodically.
C. Create a Recycle Bin retention rule.
D. Upload the AMIs to an Amazon S3 bucket that has cross-region replication.

A

C. Create a Recycle Bin retention rule.

Recycle Bin supports Amazon EBS snapshots and Amazon EC2 AMIs. With a retention rule in place, an accidentally deregistered AMI is retained for the retention period and can be restored quickly, with no copies to other accounts or Regions to manage.

205
Q

778 # A company has 150TB of archived image data stored on-premises that needs to be moved to the AWS cloud within the next month. The company’s current network connection allows uploads of up to 100 Mbps for this purpose only during the night. What is the MOST cost effective mechanism to move this data and meet the migration deadline?

A. Use AWS Snowmobile to send data to AWS.
B. Order multiple AWS Snowball devices to send data to AWS.
C. Enable Amazon S3 transfer acceleration and upload data securely.
D. Create an Amazon S3 VPC endpoint and establish a VPN to upload the data.

A

B. Order multiple AWS Snowball devices to send data to AWS.

At 100 Mbps, and only during nightly windows, uploading 150 TB over the network would take several months and miss the one-month deadline. AWS Snowmobile is intended for exabyte-scale migrations (typically more than 10 PB), so ordering multiple Snowball devices is the most cost-effective way to meet the deadline.

206
Q

779 # A company wants to migrate its three-tier application from on-premises to AWS. The web tier and application tier run on third-party virtual machines (VMs). The database tier is running on MySQL. The company needs to migrate the application by making as few architectural changes as possible. The company also needs a database solution that can restore data to a specific point in time. Which solution will meet these requirements with the LEAST operational overhead?

A. Migrate the web tier and application tier to Amazon EC2 instances in private subnets. Migrate the database tier to Amazon RDS for MySQL on private subnets.
B. Migrate the web tier to Amazon EC2 instances in public subnets. Migrate the application tier to EC2 instances in private subnets. Migrate the database tier to Amazon Aurora MySQL in private subnets.
C. Migrate the web tier to Amazon EC2 instances on public subnets. Migrate the application tier to EC2 instances in private subnets. Migrate the database tier to Amazon RDS for MySQL on private subnets.
D. Migrate the web tier and application tier to Amazon EC2 instances on public subnets. Migrate the database tier to Amazon Aurora MySQL on public subnets.

A

B. Migrate the web tier to Amazon EC2 instances in public subnets. Migrate the application tier to EC2 instances in private subnets. Migrate the database tier to Amazon Aurora MySQL in private subnets.

This option introduces Amazon Aurora MySQL for the database tier, a fully managed relational database service that is compatible with MySQL and supports point-in-time recovery. While it uses a managed service, it also changes the database technology, which can introduce some operational considerations.

Aurora provides automated backup and point-in-time recovery, simplifying backup management and data protection. Continuous incremental backups are taken automatically and stored in Amazon S3, and data retention periods can be specified to meet compliance requirements.

NOTE: Option A: Migrate the web tier and application tier to Amazon EC2 instances in private subnets, and migrate the database tier to Amazon RDS for MySQL in private subnets. RDS for MySQL provides point-in-time recovery capabilities, allowing the database to be restored to a specific point in time. This option also minimizes architectural changes and operational overhead while using a managed service for the database.

207
Q

780 # A development team is collaborating with another company to create an integrated product. The other company needs to access an Amazon Simple Queue Service (Amazon SQS) queue that is contained in the development team’s account. The other company wants to poll the queue without giving up its own account permissions to do so. How should a solutions architect provide access to the SQS queue?

A. Create an instance profile that provides the other company with access to the SQS queue.
B. Create an IAM policy that gives the other company access to the SQS queue.
C. Create an SQS access policy that provides the other company with access to the SQS queue.
D. Create an Amazon Simple Notification Service (Amazon SNS) access policy that provides the other company with access to the SQS queue.

A

C. Create an SQS access policy that provides the other company with access to the SQS queue.

SQS access policies are specifically designed to control access to SQS resources. You can create an SQS queue access policy that allows the other company's AWS account, or specific identities in that account, to access the SQS queue. This is a suitable option for sharing access to an SQS queue across accounts.

Summary: Option C (create an SQS access policy) is the standard approach for granting cross-account access to an SQS queue, because the queue's resource-based policy is what lets principals from the other company's account poll the queue without changing permissions in their own account. An IAM policy alone (Option B) is attached to identities in a single account and cannot, by itself, grant another account access to the queue.
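
A hedged boto3 sketch of the queue access policy; the account IDs and queue are hypothetical placeholders. It lets principals in the other company's account poll the queue without any changes to permissions in the queue owner's account.

import json
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/111122223333/integration-queue"

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowPartnerPolling",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::444455556666:root"},  # partner account
        "Action": ["sqs:ReceiveMessage", "sqs:DeleteMessage", "sqs:GetQueueAttributes"],
        "Resource": "arn:aws:sqs:us-east-1:111122223333:integration-queue",
    }],
}

sqs.set_queue_attributes(QueueUrl=queue_url, Attributes={"Policy": json.dumps(policy)})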

208
Q

781 # A company’s developers want a secure way to gain SSH access to the company’s Amazon EC2 instances running the latest version of Amazon Linux. Developers work remotely and in the corporate office. The company wants to use AWS services as part of the solution. EC2 instances are hosted in a private VPC subnet and access the Internet through a NAT gateway that is deployed on a public subnet. What should a solutions architect do to meet these requirements in the most cost-effective way?

A. Create a bastion host on the same subnet as the EC2 instances. Grant the ec2:CreateVpnConnection IAM permission to developers. Install EC2 Instance Connect so that developers can connect to EC2 instances.
B. Create an AWS Site-to-Site VPN connection between the corporate network and the VPC. Instruct developers to use the site-to-site VPN connection to access EC2 instances when the developers are on the corporate network. Instruct developers to set up another VPN connection to access when working remotely.
C. Create a bastion host in the VPC's public subnet. Configure the bastion host's security groups and SSH keys to only allow SSH connections and authentication from the developers' remote and corporate networks. Instruct developers to connect through the bastion host using SSH to reach the EC2 instances.
D. Attach the AmazonSSMManagedInstanceCore IAM policy to an IAM role that is associated with the EC2 instances. Instruct developers to use AWS Systems Manager Session Manager to access EC2 instances.

A

D. Attach the AmazonSSMManagedInstanceCore IAM policy to an IAM role that is associated with the EC2 instances. Instruct developers to use AWS Systems Manager Session Manager to access EC2 instances.

This option involves using AWS Systems Manager Session Manager, which provides a secure and auditable way to access EC2 instances. It eliminates the need for a bastion host and allows access directly through the AWS Management Console or the AWS CLI. This can be a cost-effective and efficient solution.

Summary: Option A (create a bastion host with EC2 Instance Connect), Option C (create a bastion host in the public subnet), and Option D (use AWS Systems Manager Session Manager) are all viable approaches to secure SSH access. Option D (Session Manager) is generally the most cost-effective and secure solution because it needs no separate bastion host, simplifies access, and provides audit trails.
Options A and C both involve bastion hosts but differ in implementation: Option A relies on EC2 Instance Connect, while Option C uses a traditional bastion host with restricted access. Conclusion: Option D (AWS Systems Manager Session Manager) is the most cost-effective and operationally efficient solution for secure SSH access to EC2 instances in a private subnet; it aligns with AWS best practices and simplifies management without a separate bastion host.

209
Q

782 # A pharmaceutical company is developing a new medicine. The volume of data that the company generates has grown exponentially in recent months. The company’s researchers regularly require that a subset of the entire data set be made available immediately with minimal delay. However, it is not necessary to access the entire data set daily. All data currently resides on local storage arrays, and the company wants to reduce ongoing capital expenditures. Which storage solution should a solutions architect recommend to meet these requirements?

A. Run AWS DataSync as a scheduled cron job to migrate data to an Amazon S3 bucket continuously.
B. Deploy an AWS Storage Gateway file gateway with an Amazon S3 bucket as target storage. Migrate data to the Storage Gateway appliance.
C. Deploy an AWS Storage Gateway volume gateway with cached volumes with an Amazon S3 bucket as target storage. Migrate data to the Storage Gateway appliance.
D. Configure an AWS site-to-site VPN connection from the on-premises environment to AWS. Migrate data to an Amazon Elastic File System (Amazon EFS) file system.

A

C. Deploy an AWS Storage Gateway volume gateway with cached volumes with an Amazon S3 bucket as target storage. Migrate data to the Storage Gateway appliance.

This option involves using Storage Gateway with cached volumes, storing frequently accessed data locally for low-latency access, and asynchronously backing up the entire data set to Amazon S3.

  • For the specific requirement of having a subset of the data set immediately available with minimal delay, Option C (a Storage Gateway volume gateway with cached volumes) is well aligned. It supports low-latency access to frequently accessed data stored on-premises while keeping the entire data set durably in Amazon S3.
210
Q

783 # A company has a business-critical application running on Amazon EC2 instances. The application stores data in an Amazon DynamoDB table. The company must be able to revert the table to any point within the last 24 hours. Which solution meets these requirements with the LEAST operational overhead?

A. Configure point-in-time recovery for the table.
B. Use AWS Backup for the table.
C. Use an AWS Lambda function to make an on-demand backup of the table every hour.
D. Turn on streams on the table to capture a log of all changes to the table in the last 24 hours. Store a copy of the stream in an Amazon S3 bucket.

A

A. Configure point-in-time recovery for the table.
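
A minimal boto3 sketch with hypothetical table names: one call enables point-in-time recovery (continuous backups), and a restore creates a new table at the chosen timestamp, which can be any second within the retention window of up to 35 days.

import boto3
from datetime import datetime, timedelta, timezone

dynamodb = boto3.client("dynamodb")

# Enable continuous backups / point-in-time recovery on the table.
dynamodb.update_continuous_backups(
    TableName="orders",
    PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
)

# Restore to a new table as it existed 6 hours ago.
dynamodb.restore_table_to_point_in_time(
    SourceTableName="orders",
    TargetTableName="orders-restored",
    RestoreDateTime=datetime.now(timezone.utc) - timedelta(hours=6),
)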

211
Q

784 # A company hosts an application that is used to upload files to an Amazon S3 bucket. Once uploaded, files are processed to extract metadata, which takes less than 5 seconds. The volume and frequency of uploads varies from a few files every hour to hundreds of simultaneous uploads. The company has asked a solutions architect to design a cost-effective architecture that meets these requirements. What should the solutions architect recommend?

A. Configure AWS CloudTrail trails to record S3 API calls. Use AWS AppSync to process the files.
B. Configure an object-created event notification within the S3 bucket to invoke an AWS Lambda function to process the files.
C. Configure Amazon Kinesis Data Streams to process and send data to Amazon S3. Invoke an AWS Lambda function to process the files.
D. Configure an Amazon Simple Notification Service (Amazon SNS) topic to process files uploaded to Amazon S3. Invoke an AWS Lambda function to process the files.

A

B. Configure an object-created event notification within the S3 bucket to invoke an AWS Lambda function to process the files.

  • This option leverages Amazon S3 event notifications to trigger an AWS Lambda function when an object (file) is created in the S3 bucket.
  • AWS Lambda provides a serverless computing service, enabling code execution without the need to provision or manage servers.
  • Lambda can be programmed to process the files, extract metadata and perform any other necessary tasks.
  • Lambda can automatically scale based on the number of incoming events, making it suitable for variable uploads, from a few files per hour to hundreds of simultaneous uploads.
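
A minimal boto3 sketch wiring the bucket to the function; the bucket name and Lambda ARN are hypothetical, and the function also needs a resource-based permission that allows s3.amazonaws.com to invoke it.

import boto3

s3 = boto3.client("s3")

s3.put_bucket_notification_configuration(
    Bucket="upload-bucket",  # hypothetical bucket
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [{
            "LambdaFunctionArn": "arn:aws:lambda:us-east-1:111122223333:function:extract-metadata",
            "Events": ["s3:ObjectCreated:*"],
        }]
    },
)
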
212
Q

785 # An enterprise application is deployed on Amazon EC2 instances and uses AWS Lambda functions for an event-driven architecture. The company uses non-production development environments in a different AWS account to test new features before the company deploys the features to production. Production instances show constant usage due to clients in different time zones. The company uses non-production instances only during business hours Monday through Friday. The company does not use non-production instances on weekends. The company wants to optimize costs for running its application on AWS. Which solution will meet these requirements in the MOST cost-effective way?

A. Use on-demand instances for production instances. Use dedicated hosts for non-production instances only on weekends.
B. Use reserved instances for production and non-production instances. Shut down non-production instances when they are not in use.
C. Use compute savings plans for production instances. Use on-demand instances for non-production instances. Shut down non-production instances when they are not in use.
D. Use dedicated hosts for production instances. Use EC2 instance savings plans for non-production instances.

A

C. Use compute savings plans for production instances. Use on-demand instances for non-production instances. Shut down non-production instances when they are not in use.

  • Compute Savings Plans provide significant cost savings for a commitment to a constant amount of compute usage (measured in $/hr) over a 1 or 3 year term. This is suitable for production instances that show constant usage.
  • Using on-demand instances for non-production environments allows flexibility without a long-term commitment, and shutting down non-production instances when they are not in use helps minimize costs.
  • This approach takes advantage of the cost-effectiveness of savings plans for predictable workloads and the flexibility of on-demand instances for sporadic use.
213
Q

786 # A company stores data in an on-premises Oracle relational database. The company needs the data to be available in Amazon Aurora PostgreSQL for analysis. The company uses an AWS site-to-site VPN connection to connect its on-premises network to AWS. The company must capture changes that occur to the source database during migration to Aurora PostgreSQL. What solution will meet these requirements?

A. Use the AWS Schema Conversion Tool (AWS SCT) to convert the Oracle schema to an Aurora PostgreSQL schema. Use the AWS Database Migration Service (AWS DMS) full load migration task to migrate the data.
B. Use AWS DataSync to migrate data to an Amazon S3 bucket. Import data from S3 to Aurora PostgreSQL using the Aurora PostgreSQL aws_s3 extension.
C. Use the AWS Schema Conversion Tool (AWS SCT) to convert the Oracle schema to an Aurora PostgreSQL schema. Use AWS Database Migration Service (AWS DMS) to migrate existing data and replicate ongoing changes.
D. Use an AWS Snowball device to migrate data to an Amazon S3 bucket. Import data from S3 to Aurora PostgreSQL using the Aurora PostgreSQL aws_s3 extension.

A

C. Use the AWS Schema Conversion Tool (AWS SCT) to convert the Oracle schema to an Aurora PostgreSQL schema. Use AWS Database Migration Service (AWS DMS) to migrate existing data and replicate ongoing changes.

  • AWS Schema Conversion Tool (AWS SCT): This tool helps convert the source database schema to a format compatible with the target database. In this case, it will help to convert the Oracle schema to an Aurora PostgreSQL schema. - AWS Database Migration Service (AWS DMS):
  • Full Load Migration: Can be used initially to migrate existing data from on-premises Oracle database to Aurora PostgreSQL.
  • Ongoing Change Replication: AWS DMS can be configured for continuous replication, capturing changes to the source database and applying them to the target Aurora PostgreSQL database. This ensures that changes made to the Oracle database during the migration process are also reflected in Aurora PostgreSQL.
214
Q

787 # A company built an application with Docker containers and needs to run the application in the AWS cloud. The company wants to use a managed service to host the application. The solution must scale appropriately according to the demand for individual container services. The solution should also not result in additional operational overhead or infrastructure to manage. What solutions will meet these requirements? (Choose two.)

A. Use Amazon Elastic Container Service (Amazon ECS) with AWS Fargate.
B. Use Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate.
C. Provision an Amazon API Gateway API. Connect the API to AWS Lambda to run the containers.
D. Use Amazon Elastic Container Service (Amazon ECS) with Amazon EC2 worker nodes.
E. Use Amazon Elastic Kubernetes Service (Amazon EKS) with Amazon EC2 worker nodes.

A

A. Use Amazon Elastic Container Service (Amazon ECS) with AWS Fargate.
B. Use Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate.

  • AWS Fargate is a serverless compute engine for containers, eliminating the need to manage the underlying EC2 instances.
  • Automatically scales to meet application demand without manual intervention.
  • Abstracts infrastructure management, providing a serverless experience for containerized applications.
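A minimal boto3 sketch of the ECS-on-Fargate variant (option A); the cluster, task definition, subnet, and security group are placeholders and are assumed to exist already.

```python
import boto3

ecs = boto3.client("ecs")

# Names and IDs are placeholders; the task definition is assumed to be registered.
ecs.create_service(
    cluster="app-cluster",
    serviceName="web-service",
    taskDefinition="web-task:1",
    desiredCount=2,
    launchType="FARGATE",            # no EC2 worker nodes to manage
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0abc1234"],
            "securityGroups": ["sg-0abc1234"],
            "assignPublicIp": "DISABLED",
        }
    },
)
```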
215
Q

788 # An e-commerce company is running a seasonal online sale. The company hosts its website on Amazon EC2 instances that span multiple availability zones. The company wants its website to handle traffic surges during the sale. Which solution will meet these requirements in the MOST cost-effective way?

A. Create an auto-scaling group that is large enough to handle the maximum traffic load. Stop half of your Amazon EC2 instances. Configure the auto-scaling group to use stopped instances to scale when traffic increases.
B. Create an Auto Scaling group for the website. Set the minimum auto-scaling group size so that it can handle large volumes of traffic without needing to scale.
C. Use Amazon CloudFront and Amazon ElastiCache to cache dynamic content with an auto-scaling group set as the origin. Configure the Auto Scaling group with the instances necessary to populate CloudFront and ElastiCache. Scales after the cache is completely full.
D. Configure an auto-scaling group to scale as traffic increases. Create a launch template to start new instances from a preconfigured Amazon Machine Image (AMI).

A

D. Configure an auto-scaling group to scale as traffic increases. Create a launch template to start new instances from a preconfigured Amazon Machine Image (AMI).

Provides elasticity, automatically scaling to handle increased traffic. The launch template allows for consistent instance configuration.

In summary, while each option has its merits, Option D, with its focus on dynamic scaling using an Auto Scaling group and a launch template, offers the best balance of cost-effectiveness and responsiveness to varying traffic patterns, and it aligns with best practices for scaling web applications on AWS. A minimal setup is sketched below.
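A minimal boto3 sketch of option D, assuming a launch template built from the preconfigured AMI already exists (names and subnet IDs are placeholders); a target tracking policy handles scale-out during traffic surges.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Launch template and subnets are placeholders.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="sale-web-asg",
    LaunchTemplate={"LaunchTemplateName": "web-ami-template", "Version": "$Latest"},
    MinSize=2,
    MaxSize=20,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-0aaa1111,subnet-0bbb2222",
)

# Scale out automatically on average CPU utilization.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="sale-web-asg",
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 60.0,
    },
)
```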

216
Q

789 # A solutions architect must provide an automated solution for an enterprise’s compliance policy that states that security groups cannot include a rule that allows SSH access from 0.0.0.0/0. The company must be notified if there is any violation of the policy. A solution is needed as soon as possible. What should the solutions architect do to meet these requirements with the least operational overhead?

A. Write an AWS Lambda script that monitors security groups for SSH that is open to 0.0.0.0/0 and creates a notification whenever it finds one.
B. Enable the restricted-ssh AWS Config managed rule and generate an Amazon Simple Notification Service (Amazon SNS) notification when a non-compliant rule is created.
C. Create an IAM role with permissions to globally open security groups and network ACLs. Create an Amazon Simple Notification Service (Amazon SNS) topic to generate a notification each time a user assumes the role.
D. Configure a service control policy (SCP) that prevents non-administrative users from creating or editing security groups. Create a notification in the ticket system when a user requests a rule that requires administrator permissions.

A

B. Enable the restricted-ssh AWS Config managed rule and generate an Amazon Simple Notification Service (Amazon SNS) notification when a non-compliant rule is created.

Takes advantage of the restricted-ssh AWS Config managed rule, minimizing manual scripting. AWS Config provides automated compliance checks and can publish compliance change notifications to Amazon SNS. A sketch of enabling the managed rule follows.
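A minimal boto3 sketch of enabling the managed rule; INCOMING_SSH_DISABLED is the documented source identifier for restricted-ssh. Wiring the compliance change events to an SNS topic (for example, through an EventBridge rule) is omitted here.

```python
import boto3

config = boto3.client("config")

# Enable the AWS-managed rule that flags security groups allowing SSH from 0.0.0.0/0.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "restricted-ssh",
        "Source": {"Owner": "AWS", "SourceIdentifier": "INCOMING_SSH_DISABLED"},
    }
)
```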

217
Q

790 # A company has deployed an application to an AWS account. The application consists of microservices running on AWS Lambda and Amazon Elastic Kubernetes Service (Amazon EKS). A separate team supports each microservice. The company has multiple AWS accounts and wants to give each team their own account for their microservices. A solutions architect needs to design a solution that provides service-to-service communication over HTTPS (port 443). The solution must also provide a service registry for service discovery. Which solution will meet these requirements with the LEAST administrative overhead?

A. Create an inspection VPC. Deploy an AWS Network Firewall firewall in the inspection VPC. Attach the inspection VPC to a new transit gateway. Route VPC-to-VPC traffic to the inspection VPC. Apply firewall rules to allow only HTTPS communication.
B. Create a VPC Lattice service network. Associate the microservices with the service network. Define HTTPS listeners for each service. Register microservices computing resources as targets. Identify VPCs that need to communicate with the services. Associate those VPCs with the service network.
C. Create a network load balancer (NLB) with an HTTPS listener and target groups for each microservice. Create an AWS PrivateLink endpoint service for each microservice. Create a VPC interface endpoint in each VPC that needs to consume that microservice.
D. Create peering connections between VPCs that contain microservices. Create a list of prefixes for each service that requires a connection to a client. Create route tables to route traffic to the appropriate VPC. Create security groups to allow only HTTPS communication.

A

B. Create a VPC Lattice service network. Associate the microservices with the service network. Define HTTPS listeners for each service. Register microservices computing resources as targets. Identify VPCs that need to communicate with the services. Associate those VPCs with the service network.

  • Uses a VPC Lattice service network to associate services and the VPCs that consume them.
  • HTTPS listeners and registered targets are defined for each service.

Given the need for the least administrative overhead, option B is the best fit: a VPC Lattice service network provides managed service-to-service connectivity over HTTPS and acts as a service registry for discovery. After some initial configuration, each microservice and each consumer VPC only needs to be associated with the service network, as sketched below.
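A rough boto3 sketch of the VPC Lattice setup, with placeholder names and IDs; listener and target group creation are omitted, and exact response fields may differ slightly.

```python
import boto3

lattice = boto3.client("vpc-lattice")

# Placeholder names and IDs; each team would register its own service.
network = lattice.create_service_network(name="microservices-net")
service = lattice.create_service(name="orders-service")

# Associate the service with the service network.
lattice.create_service_network_service_association(
    serviceNetworkIdentifier=network["id"],
    serviceIdentifier=service["id"],
)

# Associate a VPC that needs to call the services.
lattice.create_service_network_vpc_association(
    serviceNetworkIdentifier=network["id"],
    vpcIdentifier="vpc-0abc1234",
)
```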

218
Q

791 # A company has a mobile game that reads most of its metadata from an Amazon RDS DB instance. As the game increased in popularity, developers noticed slowdowns related to game metadata loading times. Performance metrics indicate that simply scaling the database will not help. A solutions architect should explore all options including capabilities for snapshots, replication, and sub-millisecond response times. What should the solutions architect recommend to solve these problems?

A. Migrate the database to Amazon Aurora with Aurora Replicas.
B. Migrate the database to Amazon DynamoDB with global tables.
C. Add an Amazon ElastiCache for Redis layer in front of the database.
D. Add an Amazon ElastiCache layer for Memcached in front of the database.

A

B. Migrate the database to Amazon DynamoDB with global tables.

  • DynamoDB is designed for low latency access and can provide sub-millisecond response times.
  • Global tables offer multi-region replication for high availability.
  • DynamoDB’s architecture and features are well suited for scenarios with strict performance expectations.

Other Considerations:
A. Migrate the database to Amazon Aurora with Aurora Replicas:
- Aurora is known for high performance, but sub-millisecond response times are not guaranteed.
- Aurora Replicas provide read scalability but do not meet the sub-millisecond requirement.
C. Add an Amazon ElastiCache for Redis layer in front of the database:
- ElastiCache for Redis is an in-memory caching solution.
- While it can improve read performance, it is an additional layer rather than a replacement, and it does not by itself provide the requested snapshot and replication capabilities.
D. Add an Amazon ElastiCache for Memcached layer in front of the database:
- Similar to Redis, ElastiCache for Memcached is a caching solution.
- Caching can improve read performance, but it does not address replication or snapshots.
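A minimal boto3 sketch of option B: create the metadata table and add a replica Region to turn it into a global table. The table name, key schema, and Regions are placeholders.

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Placeholder table name and key schema; streams are required for global tables.
dynamodb.create_table(
    TableName="GameMetadata",
    AttributeDefinitions=[{"AttributeName": "MetadataId", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "MetadataId", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
    StreamSpecification={"StreamEnabled": True, "StreamViewType": "NEW_AND_OLD_IMAGES"},
)
dynamodb.get_waiter("table_exists").wait(TableName="GameMetadata")

# Add a replica in a second Region (global tables, current version).
dynamodb.update_table(
    TableName="GameMetadata",
    ReplicaUpdates=[{"Create": {"RegionName": "eu-west-1"}}],
)
```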

219
Q

792 # A company uses AWS Organizations for its multi-account AWS setup. The enterprise security organizational unit (OU) needs to share approved Amazon Machine Images (AMIs) with the development OU. AMIs are created by using encrypted AWS Key Management Service (AWS KMS) snapshots. What solution will meet these requirements? (Choose two.)

A. Add the development team’s OU Amazon Resource Name (ARN) to the launch permissions list for the AMIs.
B. Add the organizations root Amazon Resource Name (ARN) to the launch permissions list for AMIs.
C. Update the key policy to allow the development team’s OU to use the AWS KMS keys that are used to decrypt snapshots.
D. Add the Amazon Resource Name (ARN) development team account to the list of launch permissions for AMIs.
E. Recreate the AWS KMS key. Add a key policy to allow the root of Amazon Resource Name (ARN) organizations to use the AWS KMS key.

A

A. Add the development team’s OU Amazon Resource Name (ARN) to the launch permissions list for the AMIs.
C. Update the key policy to allow the development team’s OU to use the AWS KMS keys that are used to decrypt snapshots.

  • Option A: Add the Amazon Resource Name (ARN) of the development team’s OU to the launch permissions list for the AMIs:
  • Explanation: This option controls who can launch the AMIs. By adding the development team’s OU to the launch permissions, you give that OU the ability to use the AMIs.
  • Fits the requirement: share the AMIs. The sketch after Option C shows both steps.

Option C:
- Update key policy to allow the development team OU to use AWS KMS keys that are used to decrypt snapshots:
- Explanation: This option addresses decryption permissions. If you want your development team’s OU to use AWS KMS keys to decrypt snapshots (required to launch AMIs), adjusting the key policy is the right approach.
- Fits the requirement: Share encrypted snapshots.
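A rough boto3 sketch of both steps, with placeholder ARNs and IDs; in practice the key policy statement would be merged into the existing key policy rather than replacing it.

```python
import boto3, json

ec2 = boto3.client("ec2")
kms = boto3.client("kms")

OU_ARN = "arn:aws:organizations::111111111111:ou/o-example/ou-dev"   # placeholder

# Step 1: share the AMI with the development OU via launch permissions.
ec2.modify_image_attribute(
    ImageId="ami-0abc1234",
    LaunchPermission={"Add": [{"OrganizationalUnitArn": OU_ARN}]},
)

# Step 2: allow principals in the OU to use the KMS key that encrypts the snapshots.
# NOTE: put_key_policy replaces the whole policy; merge this statement into the
# existing policy in a real setup so key administrators keep their access.
statement = {
    "Sid": "AllowDevOUToUseKey",
    "Effect": "Allow",
    "Principal": {"AWS": "*"},
    "Action": ["kms:Decrypt", "kms:DescribeKey", "kms:CreateGrant"],
    "Resource": "*",
    "Condition": {"ForAnyValue:StringLike": {"aws:PrincipalOrgPaths": ["o-example/*/ou-dev/*"]}},
}
kms.put_key_policy(
    KeyId="1234abcd-12ab-34cd-56ef-1234567890ab",
    PolicyName="default",
    Policy=json.dumps({"Version": "2012-10-17", "Statement": [statement]}),
)
```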

220
Q

793 # A data analysis company has 80 offices that are distributed worldwide. Each office hosts 1 PB of data and has between 1 and 2 Gbps of Internet bandwidth. The company needs to perform a one-time migration of a large amount of data from its offices to Amazon S3. The company must complete the migration within 4 weeks. Which solution will meet these requirements in the MOST cost-effective way?

A. Establish a new 10 Gbps AWS Direct Connect connection to each office. Transfer the data to Amazon S3.
B. Use multiple AWS Snowball Edge storage-optimized devices to store and transfer data to Amazon S3.
C. Use an AWS snowmobile to store and transfer the data to Amazon S3.
D. Configure an AWS Storage Gateway Volume Gateway to transfer data to Amazon S3.

A

B. Use multiple AWS Snowball Edge storage-optimized devices to store and transfer data to Amazon S3.

  • Considerations: This option can be cost-effective and efficient, especially when dealing with large data sets. It takes advantage of physical transport, reducing the impact on each office’s Internet bandwidth.

Other Options:
- Option C: Use an AWS Snowmobile:
- Explanation: AWS Snowmobile is a high-capacity data transfer service that involves a secure shipping container. It is designed for massive data migrations.
- Considerations: Snowmobile is designed for moving tens of petabytes or more from a single site. For 80 distributed offices holding 1 PB each, shipping a Snowmobile to every office would be impractical and far more expensive than Snowball Edge devices.
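A quick back-of-the-envelope check (decimal units, assuming a fully saturated 2 Gbps link) of why online transfer cannot meet the 4-week deadline, which is what makes Snowball Edge the practical choice:

```python
# Transferring 1 PB per office over a 2 Gbps Internet link.
petabyte_bits = 1_000_000_000_000_000 * 8   # 1 PB in bits (decimal units)
link_bps = 2_000_000_000                    # 2 Gbps, assuming 100% utilization

seconds = petabyte_bits / link_bps
print(f"{seconds / 86_400:.0f} days per office")   # roughly 46 days, well past the 4-week deadline
```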

221
Q

794 # A company has an Amazon Elastic File System (Amazon EFS) file system that contains a set of reference data. The company has applications on Amazon EC2 instances that need to read the data set. However, applications should not be able to change the data set. The company wants to use IAM access control to prevent applications from modifying or deleting the data set. What solution will meet these requirements?

A. Mount the EFS file system in read-only mode from within the EC2 instances.
B. Create a resource policy for the EFS file system that denies the elasticfilesystem:ClientWrite action to IAM roles that are attached to EC2 instances.
C. Create an identity policy for the EFS file system that denies the elasticfilesystem:ClientWrite action on the EFS file system.
D. Create an EFS access point for each application. Use Portable Operating System Interface (POSIX) file permissions to allow read-only access to files in the root directory.

A

C. Create an identity policy for the EFS file system that denies the elasticfilesystem:ClientWrite action on the EFS file system.

  • Option C: Create an identity policy that denies the elasticfilesystem:ClientWrite action on the EFS file system:
  • This option aligns with IAM access control, denying the write action through identity-based policies attached to the EC2 instance roles.
  • Option C is a valid way to control modifications through IAM.

Other Options:
- Option B: Create a resource policy for the EFS file system that denies the elasticfilesystem:ClientWrite action to IAM roles that are associated with EC2 instances:
- This option involves IAM roles and policies of resources, aligning with the IAM access control requirement.
- Option B is a valid option to use IAM access control to prevent modifications.
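A minimal boto3 sketch of the identity-policy approach: attach an inline deny of elasticfilesystem:ClientWrite to the EC2 instance role. The role name and file system ARN are placeholders.

```python
import boto3, json

iam = boto3.client("iam")

# Placeholder role name and file system ARN.
deny_write = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "elasticfilesystem:ClientWrite",
            "Resource": "arn:aws:elasticfilesystem:us-east-1:111111111111:file-system/fs-0abc1234",
        }
    ],
}

iam.put_role_policy(
    RoleName="app-instance-role",
    PolicyName="DenyEfsClientWrite",
    PolicyDocument=json.dumps(deny_write),
)
```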

222
Q

795 # A company has hired a third-party vendor to perform work on the company’s AWS account. The provider uses an automated tool that is hosted in an AWS account that the provider owns. The provider does not have IAM access to the company’s AWS account. The company must grant the provider access to the company’s AWS account. Which solution will MOST securely meet these requirements?

A. Create an IAM role in the company account to delegate access to the provider IAM role. Attach the appropriate IAM policies to the role for the permissions the provider requires.
B. Create an IAM user in the company account with a password that meets the password complexity requirements. Attach the appropriate IAM policies to the user for the permissions the provider requires.
C. Create an IAM group in the company account. Add the automated tool IAM user from the provider account to the group. Attach the appropriate IAM policies to the group for the permissions that the provider requires.
D. Create an IAM user in the company account that has a permissions boundary that allows the provider account. Attach the appropriate IAM policies to the user for the permissions the provider requires.

A

A. Create an IAM role in the company account to delegate access to the provider IAM role. Attach the appropriate IAM policies to the role for the permissions the provider requires.

  • Explanation: This option involves creating a cross-account IAM role to delegate access to the provider IAM role. The role will have policies attached for the required permissions.
  • Security: This is a secure approach as it follows the principle of least privilege and uses cross-account roles for access.

Other Options:
- Option B: Create an IAM user in the company account with a password that meets the password complexity requirements:
- Explanation: This option involves creating a local IAM user in the company’s account with the attached policies for the required permissions.
- Security: Using a local IAM user with long-lived credentials introduces security risks; it is generally recommended to use roles and temporary credentials instead.

  • Option C: Create an IAM group in the company account. Add the automated tool IAM user from the provider account to the group:
  • Explanation: This option involves adding the provider’s IAM user to an IAM group in the company account and attaching policies to the group for permissions.
  • Security: IAM groups can only contain users from the same account, so the provider’s IAM user cannot be added to a group in the company account; this approach does not work as described.
  • Option D: Create an IAM user in the company account that has a permissions boundary that allows the provider account:
  • Explanation: This option involves creating an IAM user with a permissions boundary, with policies attached to the user for the required permissions.
  • Security: A permissions boundary limits the maximum permissions, but handing long-lived IAM user credentials to a third party is less secure than using a cross-account role.
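A minimal boto3 sketch of the recommended option A: a role in the company account that trusts the provider account, with an external ID as an extra safeguard. The account ID, external ID, and attached policy are placeholders.

```python
import boto3, json

iam = boto3.client("iam")

VENDOR_ACCOUNT_ID = "222222222222"   # placeholder for the provider's AWS account

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{VENDOR_ACCOUNT_ID}:root"},
            "Action": "sts:AssumeRole",
            # An external ID helps guard against the confused-deputy problem.
            "Condition": {"StringEquals": {"sts:ExternalId": "agreed-external-id"}},
        }
    ],
}

iam.create_role(
    RoleName="VendorAutomationRole",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
iam.attach_role_policy(
    RoleName="VendorAutomationRole",
    PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",   # example managed policy
)
```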
223
Q

796 # A company wants to run its experimental workloads in the AWS cloud. The company has a budget for cloud spending. The company’s CFO is concerned about the responsibility of each department’s cloud spending. The CFO wants to be notified when the spending threshold reaches 60% of the budget. What solution will meet these requirements?

A. Use cost allocation tags on AWS resources to label owners. Create usage budgets in AWS Budgets. Add an alert threshold to receive notification when spending exceeds 60% of the budget.
B. Use AWS Cost Explorer forecasts to determine resource owners. Use AWS Cost Anomaly Detection to create alert threshold notifications when spending exceeds 60% of budget.
C. Use cost allocation tags on AWS resources to tag owners. Use the AWS Support API in AWS Trusted Advisor to create alert threshold notifications when spending exceeds 60% of budget.
D. Use AWS Cost Explorer forecasts to determine resource owners. Create usage budgets in AWS Budgets. Add an alert threshold to be notified when spending exceeds 60% of budget.

A

A. Use cost allocation tags on AWS resources to label owners. Create usage budgets in AWS Budgets. Add an alert threshold to receive notification when spending exceeds 60% of the budget.

  • Explanation: This option uses cost allocation tags to identify resource owners, creates budgets in AWS Budgets, and sets an alert threshold so that a notification is sent when spending exceeds 60% of the budget.
  • Pros: Cost allocation tags attribute spending to owners, and AWS Budgets is designed specifically for budgeting and cost tracking. A minimal budget definition is sketched below.
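A minimal boto3 sketch, with placeholder account ID, budget amount, and email address:

```python
import boto3

budgets = boto3.client("budgets")

budgets.create_budget(
    AccountId="111111111111",                      # placeholder account ID
    Budget={
        "BudgetName": "experimental-workloads",
        "BudgetLimit": {"Amount": "10000", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[
        {
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 60.0,                 # alert at 60% of the budget
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [{"SubscriptionType": "EMAIL", "Address": "cfo@example.com"}],
        }
    ],
)
```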
224
Q

797 # A company wants to deploy an internal web application on AWS. The web application should only be accessible from the company office. The company needs to download security patches for the web application from the Internet. The company has created a VPC and configured an AWS site-to-site VPN connection to the company office. A solutions architect must design a secure architecture for the web application. What solution will meet these requirements?

A. Deploy the web application to Amazon EC2 instances on public subnets behind a public application load balancer (ALB). Connect an Internet gateway to the VPC. Set the ALB security group input source to 0.0.0.0/0.
B. Deploy the web application on Amazon EC2 instances in private subnets behind an internal application load balancer (ALB). Deploy NAT gateways on public subnets. Attach an Internet gateway to the VPC. Set the inbound source of the ALB’s security group to the company’s office network CIDR block.
C. Deploy the web application to Amazon EC2 instances on public subnets behind an internal application load balancer (ALB). Implement NAT gateways on private subnets. Connect an Internet gateway to the VPC. Set the outbound destination of the ALB security group to the CIDR block of the company’s office network.
D. Deploy the web application to Amazon EC2 instances in private subnets behind a public application load balancer (ALB). Connect an Internet gateway to the VPC. Set the ALB security group output destination to 0.0.0.0/0.

A

B. Deploy the web application on Amazon EC2 instances in private subnets behind an internal application load balancer (ALB). Deploy NAT gateways on public subnets. Attach an Internet gateway to the VPC. Set the inbound source of the ALB’s security group to the company’s office network CIDR block.

  • Explanation: This option deploys the web application on private subnets behind an internal ALB, with NAT gateways on public subnets. Allows incoming traffic from the CIDR block of the company’s office network.
  • Pros: Restricts incoming traffic to the company’s office network.
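A minimal boto3 sketch of the security group rule that limits inbound HTTPS to the office network; the group ID and CIDR block are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder security group ID and office CIDR (documentation range).
ec2.authorize_security_group_ingress(
    GroupId="sg-0abc1234",
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "Company office network"}],
        }
    ],
)
```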
225
Q

798 # A company maintains its accounting records in a custom application that runs on Amazon EC2 instances. The company needs to migrate data to an AWS managed service for development and maintenance of application data. The solution should require minimal operational support and provide immutable, cryptographically verifiable records of data changes. Which solution will meet these requirements in the MOST cost-effective way?

A. Copy the application logs to an Amazon Redshift cluster.
B. Copy the application logs to an Amazon Neptune cluster.
C. Copy the application logs to an Amazon Timestream database.
D. Copy the records from the application into an Amazon Quantum Ledger database (Amazon QLDB) ledger.

A

D. Copy the records from the application into an Amazon Quantum Ledger database (Amazon QLDB) ledger.

  • Explanation: Amazon QLDB is designed for ledger-style applications, providing a transparent, immutable, and cryptographically verifiable record of transactions. It is suitable for use cases where an immutable and transparent record of all changes is needed.
  • Pros: Designed specifically for immutable records and cryptographic verification.
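A minimal boto3 sketch of creating the ledger; the name is a placeholder, and reading and writing documents (through the QLDB driver) is omitted.

```python
import boto3

qldb = boto3.client("qldb")

# Placeholder ledger name; STANDARD permissions mode enforces fine-grained IAM.
qldb.create_ledger(
    Name="accounting-records",
    PermissionsMode="STANDARD",
    DeletionProtection=True,
)
```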
226
Q

799 # A company’s marketing data is loaded from multiple sources into an Amazon S3 bucket. A series of data preparation jobs aggregate the data for reporting. Data preparation jobs must be run at regular intervals in parallel. Some jobs must be run in a specific order later. The company wants to eliminate the operational overhead of job error handling, retry logic, and state management. What solution will meet these requirements?

A. Use an AWS Lambda function to process the data as soon as the data is uploaded to the S3 bucket. Invoke other Lambda functions at regularly scheduled intervals.
B. Use Amazon Athena to process the data. Use Amazon EventBridge Scheduler to invoke Athena on a regular interval.
C. Use AWS Glue DataBrew to process the data. Use an AWS Step Functions state machine to run DataBrew data preparation jobs.
D. Use AWS Data Pipeline to process the data. Schedule the data pipeline to process the data once at midnight.

A

C. Use AWS Glue DataBrew to process the data. Use an AWS Step Functions state machine to run DataBrew data preparation jobs.

It provides fine-grained control over job ordering and integrates with Step Functions for workflow orchestration and management.

  • Explanation: AWS Glue DataBrew can be used for data preparation, and AWS Step Functions can orchestrate jobs that must run in parallel or in a specific order. Step Functions also handles error handling, retry logic, and state management. A sketch of such a state machine follows.
  • Pros: Fine-grained control over job ordering, built-in orchestration capabilities.
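A rough sketch of such a state machine: two DataBrew jobs run in parallel, then an aggregation job runs afterwards with retries. Job names and the role ARN are placeholders, and the .sync integration waits for each job run to finish.

```python
import boto3, json

sfn = boto3.client("stepfunctions")

# Placeholder DataBrew job names and execution role ARN.
definition = {
    "StartAt": "PrepareInParallel",
    "States": {
        "PrepareInParallel": {
            "Type": "Parallel",
            "Branches": [
                {"StartAt": "JobA", "States": {"JobA": {
                    "Type": "Task",
                    "Resource": "arn:aws:states:::databrew:startJobRun.sync",
                    "Parameters": {"Name": "prep-job-a"}, "End": True}}},
                {"StartAt": "JobB", "States": {"JobB": {
                    "Type": "Task",
                    "Resource": "arn:aws:states:::databrew:startJobRun.sync",
                    "Parameters": {"Name": "prep-job-b"}, "End": True}}},
            ],
            "Next": "AggregateJob",
        },
        "AggregateJob": {
            "Type": "Task",
            "Resource": "arn:aws:states:::databrew:startJobRun.sync",
            "Parameters": {"Name": "aggregate-job"},
            "Retry": [{"ErrorEquals": ["States.ALL"], "MaxAttempts": 3}],
            "End": True,
        },
    },
}

sfn.create_state_machine(
    name="marketing-data-prep",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::111111111111:role/StepFunctionsDataBrewRole",
)
```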
227
Q

800 # A solutions architect is designing a payment processing application that runs on AWS Lambda in private subnets across multiple availability zones. The app uses multiple Lambda functions and processes millions of transactions every day. The architecture should ensure that the application does not process duplicate payments. What solution will meet these requirements?

A. Use Lambda to retrieve all payments due. Post payments due to an Amazon S3 bucket. Configure the S3 bucket with an event notification to invoke another Lambda function to process payments due.
B. Use Lambda to retrieve all payments due. Post payments due to an Amazon Simple Queue Service (Amazon SQS) queue. Set up another Lambda function to poll the SQS queue and process payments due.
C. Use Lambda to retrieve all due payments. Publish the payments to an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Configure another Lambda function to poll the FIFO queue and process payments due.
D. Use Lambda to retrieve all payments due. Store payments due in an Amazon DynamoDB table. Configure streams on the DynamoDB table to invoke another Lambda function to process payments due.

A

C. Use Lambda to retrieve all due payments. Publish the payments to an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Configure another Lambda function to poll the FIFO queue and process payments due.

  • Explanation: Similar to Option B, but uses an SQS FIFO queue, which provides ordering and exactly-once processing.
  • Pros: Ensures message ordering and exactly-once processing.

Considering the requirement to ensure that the application does not process duplicate payments, Option C (Amazon SQS FIFO queue) is the most appropriate option. It takes advantage of the reliability, ordering, and exactly-once processing features of an SQS FIFO queue, which align with the need to process payments without duplicates. A sketch of the queue setup follows.

NOTE: Option B, with a standard SQS queue, has the potential for duplicate message delivery if not handled explicitly.
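A minimal boto3 sketch of the FIFO queue setup; the queue name, group ID, and payload are placeholders.

```python
import boto3

sqs = boto3.client("sqs")

# FIFO queue names must end in ".fifo"; content-based deduplication drops messages
# whose body matches one already accepted within the 5-minute deduplication window.
queue = sqs.create_queue(
    QueueName="payments-due.fifo",
    Attributes={"FifoQueue": "true", "ContentBasedDeduplication": "true"},
)

sqs.send_message(
    QueueUrl=queue["QueueUrl"],
    MessageBody='{"payment_id": "pmt-123", "amount": "42.00"}',
    MessageGroupId="customer-987",        # preserves ordering per group
    MessageDeduplicationId="pmt-123",     # explicit dedup key for this payment
)
```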

228
Q

801 # A company runs multiple workloads in its on-premises data center. The company’s data center cannot scale fast enough to meet the company’s growing business needs. The company wants to collect usage and configuration data about on-premises servers and workloads to plan a migration to AWS. What solution will meet these requirements?

A. Set the home AWS Region in AWS Migration Hub. Use AWS Systems Manager to collect data about on-premises servers.
B. Set the home AWS Region in AWS Migration Hub. Use AWS Application Discovery Service to collect data about on-premises servers.
C. Use the AWS Schema Conversion Tool (AWS SCT) to create the relevant templates. Use AWS Trusted Advisor to collect data about on-premises servers.
D. Use the AWS Schema Conversion Tool (AWS SCT) to create the relevant templates. Use the AWS Database Migration Service (AWS DMS) to collect data about on-premises servers.

A

B. Set the home AWS Region in AWS Migration Hub. Use AWS Application Discovery Service to collect data about on-premises servers.

AWS ADS is specifically designed to discover detailed information about servers, applications, and dependencies, providing a complete view of the on-premises environment.

229
Q

802 # A company has an organization in AWS Organizations that has all features enabled. The company requires that all API calls and logins to any existing or new AWS account be audited. The company needs a managed solution to avoid additional work and minimize costs. The company also needs to know when any AWS account does not meet the AWS Foundational Security Best Practices (FSBP) standard. Which solution will meet these requirements with the LEAST operational overhead?

A. Deploy an AWS control tower environment in the Organization Management account. Enable AWS Security Hub and AWS Control Tower Account Factory in your environment.
B. Deploy an AWS Control Tower environment in a dedicated Organization Member account. Enable AWS Security Hub and AWS Control Tower Account Factory in your environment.
C. Use AWS Managed Services (AMS) Accelerate to build a multi-account landing zone (MALZ). Submit an RFC to self-service provision Amazon GuardDuty in the MALZ.
D. Use AWS Managed Services (AMS) Accelerate to build a multi-account landing zone (MALZ). Submit an RFC to self-service provision AWS Security Hub in the MALZ.

A

A. Deploy an AWS control tower environment in the Organization Management account. Enable AWS Security Hub and AWS Control Tower Account Factory in your environment.

Explanation: Deploy AWS Control Tower in the organization’s management account and enable AWS Security Hub and AWS Control Tower Account Factory.
Pros: Centralized deployment in the management account provides an efficient way to manage and govern all accounts, simplifies operations, and avoids the overhead of deploying and managing Control Tower separately in each account.

230
Q

803 # A company has stored 10 TB of log files in Apache Parquet format in an Amazon S3 bucket. From time to time, the company needs to use SQL to analyze log files. Which solution will meet these requirements in the MOST cost-effective way?

A. Create an Amazon Aurora MySQL database. Migrate S3 bucket data to Aurora using AWS Database Migration Service (AWS DMS). Issue SQL statements to the Aurora database.
B. Create an Amazon Redshift cluster. Use Redshift Spectrum to execute SQL statements directly on data in your S3 bucket.
C. Create an AWS Glue crawler to store and retrieve table metadata from the S3 bucket. Use Amazon Athena to run SQL statements directly on data in your S3 bucket.
D. Create an Amazon EMR cluster. Use Apache Spark SQL to execute SQL statements directly on data in the S3 bucket.

A

C. Create an AWS Glue crawler to store and retrieve table metadata from the S3 bucket. Use Amazon Athena to run SQL statements directly on data in your S3 bucket.

  • AWS Glue Crawler: AWS Glue can discover and store metadata about log files using a crawler. The crawler automatically identifies the schema and structure of the data in the S3 bucket, making it easy to query.
  • Amazon Athena: Athena is a serverless query service that allows you to run SQL queries directly on data in Amazon S3. It supports querying data in various formats, including Apache Parquet. Since Athena is serverless, you only pay for the queries you run, making it a cost-effective solution.

Other Options:
- Option A (Amazon Aurora MySQL with AWS DMS): involves unnecessary data migration and may result in increased costs and complexity.
- Option B (Amazon Redshift Spectrum): introduces the overhead of managing a Redshift cluster, which is overkill for occasional SQL analysis.
- Option D (Amazon EMR with Apache Spark SQL): involves setting up and managing an EMR cluster, which is more complex and expensive than necessary for occasional log file queries.
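A minimal boto3 sketch of querying the Parquet data with Athena once the Glue crawler has populated the Data Catalog; the database, table, and result bucket names are placeholders.

```python
import boto3

athena = boto3.client("athena")

# Placeholder database/table (created by the Glue crawler) and results bucket.
run = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) AS hits FROM logs.access_parquet GROUP BY status",
    QueryExecutionContext={"Database": "logs"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print(run["QueryExecutionId"])
```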

231
Q

804 # An enterprise needs a solution to prevent AWS CloudFormation stacks from deploying AWS Identity and Access Management (IAM) resources that include an inline policy or “*” in the declaration. The solution should also prohibit the deployment of Amazon EC2 instances with public IP addresses. The company has AWS Control Tower enabled in its organization in AWS organizations. What solution will meet these requirements?

A. Use proactive controls in AWS Control Tower to block the deployment of EC2 instances with public IP addresses and inline policies with elevated or “*” access.
B. Use AWS Control Tower detective controls to block the deployment of EC2 instances with public IP addresses and inline policies with elevated or “*” access.
C. Use AWS Config to create rules for EC2 and IAM compliance. Configure rules to run an AWS Systems Manager Session Manager automation to delete a resource when it is not supported.
D. Use a service control policy (SCP) to block actions for the EC2 instances and IAM resources if the actions lead to noncompliance.

A

D. Use a service control policy (SCP) to block actions for the EC2 instances and IAM resources if the actions lead to noncompliance.

  • Service Control Policies (SCP): SCPs are used to set fine-grained permissions for entities in an AWS organization. They allow you to set controls over what actions are allowed or denied on your accounts. In this scenario, an SCP can be created to deny specific actions related to EC2 instances and IAM resources that have inline policies with elevated or “*” access.
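A partial sketch of such an SCP using boto3 and placeholder names: it denies public-IP EC2 launches and inline IAM policy actions. Detecting a literal “*” inside policy documents generally needs additional controls and is not covered here.

```python
import boto3, json

orgs = boto3.client("organizations")

scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyPublicIpOnLaunch",
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:network-interface/*",
            "Condition": {"Bool": {"ec2:AssociatePublicIpAddress": "true"}},
        },
        {
            "Sid": "DenyInlineIamPolicies",
            "Effect": "Deny",
            "Action": ["iam:PutRolePolicy", "iam:PutUserPolicy", "iam:PutGroupPolicy"],
            "Resource": "*",
        },
    ],
}

policy = orgs.create_policy(
    Name="deny-public-ip-and-inline-iam",
    Description="Blocks public-IP EC2 launches and inline IAM policies",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)
# Attach the SCP to the target OU or account (placeholder target ID).
orgs.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"], TargetId="ou-example-id")
```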
232
Q

805 # A company’s web application that is hosted on the AWS cloud has recently increased in popularity. The web application currently exists on a single Amazon EC2 instance on a single public subnet. The web application has not been able to meet the demand of increased web traffic. The business needs a solution that provides high availability and scalability to meet growing user demand without rewriting the web application. What combination of steps will meet these requirements? (Choose two.)

A. Replace the EC2 instance with a larger compute-optimized instance.
B. Configure Amazon EC2 auto-scaling with multiple availability zones on private subnets.
C. Configure a NAT gateway on a public subnet to handle web requests.
D. Replace the EC2 instance with a larger memory-optimized instance.
E. Configure an application load balancer in a public subnet to distribute web traffic.

A

B. Configure Amazon EC2 auto-scaling with multiple availability zones on private subnets.
E. Configure an application load balancer in a public subnet to distribute web traffic.

  • Amazon EC2 Auto Scaling (Option B): By configuring Auto Scaling with multiple availability zones, you ensure that your web application can automatically adjust the number of instances to handle different levels of demand. This improves availability and scalability.
  • Application Load Balancer (Option E): An application load balancer (ALB) on a public subnet can distribute incoming web traffic across multiple EC2 instances. ALB is designed for high availability and can efficiently handle traffic distribution, improving the overall performance of the web application.
233
Q

806 # A company has AWS Lambda functions that use environment variables. The company does not want its developers to see environment variables in plain text. What solution will meet these requirements?

A. Deploy code to Amazon EC2 instances instead of using Lambda functions.
B. Configure SSL encryption on Lambda functions to use AWS CloudHSM to store and encrypt environment variables.
C. Create a certificate in AWS Certificate Manager (ACM). Configure Lambda functions to use the certificate to encrypt environment variables.
D. Create an AWS Key Management Service (AWS KMS) key. Enable encryption helpers on the Lambda functions to use the KMS key to store and encrypt environment variables.

A

D. Create an AWS Key Management Service (AWS KMS) key. Enable encryption helpers on the Lambda functions to use the KMS key to store and encrypt environment variables.

  • AWS Key Management Service (KMS) provides a secure and scalable way to manage keys. You can create a customer managed key (CMK) in AWS KMS to encrypt and decrypt environment variables used in Lambda functions.
  • By enabling encryption helpers in Lambda functions, you can have Lambda automatically encrypt environment variables using the KMS key. This ensures that environment variables are stored and transmitted securely.
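A minimal boto3 sketch of pointing a function at a customer managed key; the function name, key ARN, and variable are placeholders.

```python
import boto3

lambda_client = boto3.client("lambda")

# Placeholder function name and KMS key ARN.
lambda_client.update_function_configuration(
    FunctionName="payment-processor",
    KMSKeyArn="arn:aws:kms:us-east-1:111111111111:key/1234abcd-12ab-34cd-56ef-1234567890ab",
    Environment={"Variables": {"DB_SECRET_NAME": "prod/db"}},
)
```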
234
Q

807 # An analytics company uses Amazon VPC to run its multi-tier services. The company wants to use RESTful APIs to offer a web analysis service to millions of users. Users must be verified by using an authentication service to access the APIs. Which solution will meet these requirements with the GREATEST operational efficiency?

A. Configure an Amazon Cognito user pool for user authentication. Implement Amazon API Gateway REST APIs with a Cognito authorizer.
B. Configure an Amazon Cognito identity pool for user authentication. Implement Amazon API Gateway HTTP APIs with a Cognito authorizer.
C. Configure an AWS Lambda function to handle user authentication. Deploy Amazon API Gateway REST APIs with a Lambda authorizer.
D. Configure an IAM user to be responsible for user authentication. Deploy Amazon API Gateway HTTP APIs with an IAM authorizer.

A

A. Configure an Amazon Cognito user pool for user authentication. Implement Amazon API Gateway REST APIs with a Cognito authorizer.

  • Amazon Cognito User Pools: Amazon Cognito provides a fully managed service for user identity and authentication that easily scales to millions of users. Setting up a Cognito user pool allows you to manage user authentication efficiently. It supports features such as multi-factor authentication and user management.
  • Amazon API Gateway REST APIs: REST APIs in Amazon API Gateway are well suited for building APIs that follow RESTful principles. They can be configured to use a Cognito user pool as an authorizer, providing a secure and scalable way to verify users before they can access the APIs.
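A minimal boto3 sketch of attaching a Cognito user pool authorizer to a REST API; the API ID and user pool ARN are placeholders, and the authorizer still has to be referenced on each API method.

```python
import boto3

apigw = boto3.client("apigateway")

# Placeholder REST API ID and Cognito user pool ARN.
apigw.create_authorizer(
    restApiId="a1b2c3d4e5",
    name="cognito-users",
    type="COGNITO_USER_POOLS",
    providerARNs=["arn:aws:cognito-idp:us-east-1:111111111111:userpool/us-east-1_EXAMPLE"],
    identitySource="method.request.header.Authorization",
)
```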
235
Q

808 # A company has a mobile application for customers. Application data is sensitive and must be encrypted at rest. The company uses AWS Key Management Service (AWS KMS). The company needs a solution that prevents accidental deletion of KMS keys. The solution should use Amazon Simple Notification Service (Amazon SNS) to send an email notification to administrators when a user attempts to delete a KMS key. Which solution will meet these requirements with the LEAST operational overhead?

A. Create an Amazon EventBridge rule that reacts when a user tries to delete a KMS key. Configure an AWS Config rule that overrides any deletion of a KMS key. Add the AWS Config rule as a target of the EventBridge rule. Create an SNS topic that notifies administrators.
B. Create an AWS Lambda function that has custom logic to prevent deletion of KMS keys. Create an Amazon CloudWatch alarm that is triggered when a user attempts to delete a KMS key. Create an Amazon EventBridge rule that invokes the Lambda function when the DeleteKey operation is performed. Create an SNS topic. Configure the EventBridge rule to publish an SNS message notifying administrators.
C. Create an Amazon EventBridge rule that reacts when the KMS DeleteKey operation is performed. Configure the rule to start an AWS Systems Manager Automation runbook. Configure the run book to cancel the deletion of the KMS key. Create an SNS topic. Configure the EventBridge rule to publish an SNS message notifying administrators.
D. Create an AWS CloudTrail trail. Configure the trail to deliver the logs to a new Amazon CloudWatch log group. Create a CloudWatch alarm based on the metric filter for the CloudWatch log group. Configure the alarm to use Amazon SNS to notify the administrators when the KMS DeleteKey operation is performed.

A

D. Create an AWS CloudTrail trail. Configure the trail to deliver the logs to a new Amazon CloudWatch log group. Create a CloudWatch alarm based on the metric filter for the CloudWatch log group. Configure the alarm to use Amazon SNS to notify the administrators when the KMS DeleteKey operation is performed.

  • AWS CloudTrail Trail: Create a CloudTrail trail to capture management events such as KMS key deletion requests.
  • Amazon CloudWatch Logs: Configure the trail to deliver logs to a CloudWatch log group.
  • CloudWatch Metric Filter: Create a metric filter on the log group to identify events related to KMS key deletion.
  • CloudWatch Alarm: Create a CloudWatch alarm based on the metric filter to notify administrators through Amazon SNS when a KMS key deletion is requested.

Explanation:
- Option D is recommended because it relies on AWS CloudTrail to capture events, which is common practice for auditing AWS API calls.
- Uses Amazon CloudWatch logs and metric filters to identify specific events (for example, KMS key deletion).
- CloudWatch alarms are used to trigger notifications via Amazon SNS when the defined event occurs.

While all options aim to prevent accidental deletion and notify administrators, Option D stands out as a more optimized and AWS-native solution, leveraging CloudTrail, CloudWatch, and SNS for monitoring and alerting.
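A minimal boto3 sketch of the metric filter and alarm, with placeholder log group, namespace, and SNS topic ARN. Note that a KMS key deletion request is recorded in CloudTrail as a ScheduleKeyDeletion event.

```python
import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

# Placeholder log group; match KMS key deletion requests in CloudTrail events.
logs.put_metric_filter(
    logGroupName="cloudtrail/management-events",
    filterName="kms-key-deletion",
    filterPattern='{ ($.eventSource = "kms.amazonaws.com") && ($.eventName = "ScheduleKeyDeletion") }',
    metricTransformations=[
        {"metricName": "KmsKeyDeletionAttempts", "metricNamespace": "Security", "metricValue": "1"}
    ],
)

cloudwatch.put_metric_alarm(
    AlarmName="kms-key-deletion-alarm",
    Namespace="Security",
    MetricName="KmsKeyDeletionAttempts",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111111111111:kms-admin-alerts"],
)
```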

236
Q

809 # A company wants to analyze and generate reports to track the usage of its mobile application. The app is popular and has a global user base. The company uses a custom reporting program to analyze application usage. The program generates several reports during the last week of each month. The program takes less than 10 minutes to produce each report. The company rarely uses the program to generate reports outside of the last week of each month. The company wants to generate reports in the shortest time possible when the reports are requested. Which solution will meet these requirements in the MOST cost-effective way?

A. Run the program using Amazon EC2 on-demand instances. Create an Amazon EventBridge rule to start EC2 instances when reporting is requested. Run EC2 instances continuously during the last week of each month.
B. Run the program in AWS Lambda. Create an Amazon EventBridge rule to run a Lambda function when reports are requested.
C. Run the program on Amazon Elastic Container Service (Amazon ECS). Schedule Amazon ECS to run when reports are requested.
D. Run the program using Amazon EC2 Spot Instances. Create an Amazon EventBridge rule to start EC2 instances when reporting is requested. Run EC2 instances continuously during the last week of each month.

A

B. Run the program in AWS Lambda. Create an Amazon EventBridge rule to run a Lambda function when reports are requested.

  • Advantages:
  • Serverless execution: AWS Lambda runs code without provisioning or managing servers and automatically scales based on demand.
  • Cost efficiency: You pay only for the compute time consumed during each function invocation.
  • Fast execution: Lambda functions start quickly and, with proper design, can complete tasks in a short period of time.
  • Event-driven: Integrated with Amazon EventBridge, Lambda can be triggered by events, such as report requests.
  • Considerations:
  • Lambda has an execution time limit (maximum of 15 minutes). Since each report takes less than 10 minutes to produce, the workload fits within this limit.

Explanation:
- AWS Lambda is well suited for short-lived and sporadic tasks, making it an ideal choice for this occasional reporting requirement.
- With EventBridge, you can trigger Lambda functions based on events, ensuring that the reporting process starts quickly when needed.
- This option is cost-effective as you only pay for the actual compute time used during reporting, without the need to keep instances running continuously.

237
Q

810 # A company is designing a tightly coupled high-performance computing (HPC) environment in the AWS cloud. The enterprise needs to include features that optimize the HPC environment for networking and storage. What combination of solutions will meet these requirements? (Choose two.)

A. Create an accelerator in AWS Global Accelerator. Configure custom routing for the accelerator.
B. Create an Amazon FSx for Lustre file system. Configure the file system with scratch storage.
C. Create an Amazon CloudFront distribution. Set the viewer protocol policy to be HTTP and HTTPS.
D. Launch Amazon EC2 instances. Attach an elastic fabric adapter (EFA) to the instances.
E. Create an AWS Elastic Beanstalk deployment to manage the environment.

A

B. Create an Amazon FSx for Lustre file system. Configure the file system with scratch storage.
D. Launch Amazon EC2 instances. Attach an elastic fabric adapter (EFA) to the instances.

Option B (Amazon FSx for Lustre file system):
- Advantages:
- High-performance file system: Amazon FSx for Lustre provides a high-performance file system optimized for HPC workloads.
- Scratch storage: Supports scratch storage, which is important for HPC environments that handle temporary data.

  • Considerations:
  • Scratch storage is ephemeral, so it is suitable for temporary data, and you may need additional storage solutions for persistent data.

Option D (Amazon EC2 instances with Elastic Fabric Adapter - EFA):
- Advantages:
- High-performance networking: Elastic Fabric Adapter (EFA) improves networking capabilities, providing lower-latency communication between instances in an HPC cluster.
- Tightly coupled communication: EFA enables tightly coupled communication between nodes in an HPC cluster, making it suitable for parallel computing workloads.

  • Considerations:
  • Ensure your HPC applications and software support EFA for optimal performance.
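A rough boto3 sketch of both pieces, with placeholder IDs: a scratch FSx for Lustre file system and EFA-enabled instances launched into a cluster placement group that is assumed to exist already.

```python
import boto3

fsx = boto3.client("fsx")
ec2 = boto3.client("ec2")

# Placeholder subnet, security group, and AMI IDs.
fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=1200,                                  # GiB
    SubnetIds=["subnet-0abc1234"],
    LustreConfiguration={"DeploymentType": "SCRATCH_2"},   # scratch storage for temporary HPC data
)

ec2.run_instances(
    ImageId="ami-0abc1234",
    InstanceType="c5n.18xlarge",                # an EFA-capable instance type
    MinCount=2,
    MaxCount=2,
    NetworkInterfaces=[{
        "DeviceIndex": 0,
        "SubnetId": "subnet-0abc1234",
        "Groups": ["sg-0abc1234"],
        "InterfaceType": "efa",                 # attach an Elastic Fabric Adapter
    }],
    Placement={"GroupName": "hpc-cluster-pg"},  # assumes a cluster placement group exists
)
```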
238
Q

811 # A company needs a solution to prevent photos with unwanted content from being uploaded to the company’s web application. The solution should not include training a machine learning (ML) model. What solution will meet these requirements?

A. Create and deploy a model using Amazon SageMaker Autopilot. Create a real-time endpoint that the web application invokes when new photos are uploaded.
B. Create an AWS Lambda function that uses Amazon Rekognition to detect unwanted content. Create a Lambda function URL that the web application invokes when new photos are uploaded.
C. Create an Amazon CloudFront function that uses Amazon Comprehend to detect unwanted content. Associate the function with the web application.
D. Create an AWS Lambda function that uses Amazon Rekognition Video to detect unwanted content. Create a Lambda function URL that the web app invokes when new photos are uploaded.

A

B. Create an AWS Lambda function that uses Amazon Rekognition to detect unwanted content. Create a Lambda function URL that the web application invokes when new photos are uploaded.

  • Advantages:
  • Pre-built ML model: Amazon Rekognition provides pre-trained models for image analysis, including content moderation to detect unwanted content.
  • Serverless Execution: AWS Lambda allows you to run code without managing servers, making it a scalable and cost-effective solution.
  • Considerations:
  • You need to handle the response of the Lambda function in the web application based on the content moderation results.

Explanation:
- Option B takes advantage of Amazon Rekognition’s capabilities to analyze images for unwanted content. By creating an AWS Lambda function that uses Rekognition, you can easily integrate this content moderation process into your web application workflow without needing to train a custom machine learning model.
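A minimal boto3 sketch of the moderation check inside the Lambda function; the bucket, key, and confidence threshold are placeholders.

```python
import boto3

rekognition = boto3.client("rekognition")

def should_reject_photo(bucket: str, key: str, min_confidence: float = 80.0) -> bool:
    """Return True if Rekognition finds any moderation label above the threshold."""
    result = rekognition.detect_moderation_labels(
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        MinConfidence=min_confidence,
    )
    return len(result["ModerationLabels"]) > 0

# Example call (placeholder bucket and key):
# if should_reject_photo("user-uploads", "photos/abc.jpg"):
#     ...reject the upload in the web application...
```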

239
Q

812 # A company uses AWS to run its e-commerce platform. The platform is critical to the company’s operations and has a high volume of traffic and transactions. The company sets up a multi-factor authentication (MFA) device to protect the root user credentials for its AWS account. The company wants to ensure that it will not lose access to the root user account if the MFA device is lost. What solution will meet these requirements?

A. Set up a backup administrator account that the company can use to log in if the company loses the MFA device.
B. Add multiple MFA devices for the root user account to handle the disaster scenario.
C. Create a new administrator account when the company cannot access the root account.
D. Attach the administrator policy to another IAM user when the enterprise cannot access the root account.

A

B. Add multiple MFA devices for the root user account to handle the disaster scenario.

  • Advantages:
  • Redundancy: Adding multiple MFA devices provides redundancy, reducing the risk of losing access if a device is lost.
  • Root User Security: The root user is a powerful account, and securing it with MFA is a recommended best practice.
  • Considerations:
  • Device Management: The company needs to manage multiple MFA devices securely.

Explanation:
- Option B is the most effective solution to address the company’s requirement to keep access to the root user account in the event an MFA device is lost. By setting up multiple MFA devices for the root user, the company establishes redundancy, and any of the configured devices can be used for authentication.

240
Q

813 # A social media company is creating a rewards program website for its users. The company awards points to users when they create and upload videos to the website. Users redeem their points for gifts or discounts from the company’s affiliate partners. A unique ID identifies users. Partners refer to this ID to verify the user’s eligibility to receive rewards. Partners want to receive notifications of user IDs through an HTTP endpoint when the company gives points to users. Hundreds of suppliers are interested in becoming affiliate partners every day. The company wants to design an architecture that gives the website the ability to quickly add partners in a scalable way. Which solution will meet these requirements with the LEAST implementation effort?

A. Create an Amazon Timestream database to maintain a list of affiliate partners. Implement an AWS Lambda function to read the list. Configure the Lambda function to send user IDs to each partner when the company gives points to users.
B. Create an Amazon Simple Notification Service (Amazon SNS) topic. Choose an endpoint protocol. Subscribe the partners to the topic. Publish user IDs to the topic when the company gives points to users.
C. Create an AWS Step Functions state machine. Create a task for each affiliate partner. Invoke state machine with user ID as input when company gives points to users.
D. Create a data stream in Amazon Kinesis Data Streams. Implement producer and consumer applications. Store a list of affiliate partners in the data stream. Send user ID when the company gives points to users.

A

B. Create an Amazon Simple Notification Service (Amazon SNS) topic. Choose an endpoint protocol. Subscribe the partners to the topic. Publish user IDs to the topic when the company gives points to users.

  • Advantages:
  • Scalability: Amazon SNS is designed for high throughput and can easily scale to accommodate hundreds of new affiliate partners every day.
  • Ease of integration: Partners can subscribe to the SNS topic, and the company can publish messages on the topic, simplifying the integration process.
  • Flexibility: Supports multiple endpoint protocols, including HTTP, which aligns with partners’ requirement to receive notifications over an HTTP endpoint.
  • Considerations:
  • Security: Ensure communication between the business and partners is secure, especially when using HTTP endpoints.

Explanation:
- Option B takes advantage of Amazon SNS, which is a fully managed publish/subscribe service. This solution provides an efficient way for the company to notify multiple partners about user IDs when points are given. Partners can subscribe to the SNS topic using their preferred endpoint protocols, including HTTP, making it a scalable and simple solution.
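A minimal boto3 sketch of the topic, a partner HTTPS subscription, and a publish call; the endpoint URL and payload are placeholders. HTTPS subscribers must confirm the subscription before they receive messages.

```python
import boto3

sns = boto3.client("sns")

topic = sns.create_topic(Name="rewards-points-awarded")

# Each new affiliate partner subscribes its HTTPS endpoint (placeholder URL).
sns.subscribe(
    TopicArn=topic["TopicArn"],
    Protocol="https",
    Endpoint="https://partner.example.com/rewards-webhook",
)

# Publish the user ID whenever points are awarded.
sns.publish(TopicArn=topic["TopicArn"], Message='{"user_id": "u-12345"}')
```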

241
Q

814 # An e-commerce company runs its application on AWS. The application uses an Amazon Aurora PostgreSQL cluster in Multi-AZ mode for the underlying database. During a recent promotional campaign, the application experienced heavy read and write load. Users experienced timeout issues when trying to access the app. A solutions architect needs to make the application architecture more scalable and highly available. Which solution will meet these requirements with the LEAST downtime?

A. Create an Amazon EventBridge rule that has the Aurora cluster as a source. Create an AWS Lambda function to log Aurora cluster state change events. Add the Lambda function as a target for the EventBridge rule. Add additional reader nodes for failover.
B. Modify the Aurora cluster and enable the Zero Downtime Reboot (ZDR) feature. Use database activity streams in the cluster to track the health of the cluster.
C. Add additional reader instances to the Aurora cluster. Create an Amazon RDS Proxy target group for the Aurora cluster.
D. Create an Amazon ElastiCache for Redis cache. Replicate data from the Aurora cluster to Redis using the AWS Database Migration Service (AWS DMS) with a write approach.

A

C. Add additional reader instances to the Aurora cluster. Create an Amazon RDS Proxy target group for the Aurora cluster.

  • Advantages:
  • Scalability: Adding additional reader instances to the Aurora cluster enables horizontal scaling of read capacity, addressing the heavy read load.
  • High availability: Aurora in Multi-AZ mode provides automatic failover for the primary instance, improving availability.
  • Amazon RDS Proxy: RDS Proxy helps manage database connections, improving application scalability and reducing downtime during failovers.
  • Considerations:
  • Cost: While scaling the Aurora cluster horizontally with additional reader instances may incur additional costs, it provides a scalable and highly available solution.

Explanation:
- Option C is a suitable option to improve scalability and availability. By adding additional reader instances, the application can distribute the reading load efficiently. Creating an Amazon RDS Proxy target group further improves the management of database connections, enabling better scalability and reducing downtime during failovers.
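A rough boto3 sketch of option C, with placeholder identifiers, secret, and role ARNs: add another Aurora reader and put an RDS Proxy in front of the cluster.

```python
import boto3

rds = boto3.client("rds")

# Add another Aurora reader to absorb the read load (placeholder identifiers).
rds.create_db_instance(
    DBInstanceIdentifier="app-db-reader-2",
    DBClusterIdentifier="app-db-cluster",
    DBInstanceClass="db.r6g.large",
    Engine="aurora-postgresql",
)

# RDS Proxy pools and reuses connections, smoothing failovers for the application.
rds.create_db_proxy(
    DBProxyName="app-db-proxy",
    EngineFamily="POSTGRESQL",
    Auth=[{"AuthScheme": "SECRETS",
           "SecretArn": "arn:aws:secretsmanager:us-east-1:111111111111:secret:db-creds"}],
    RoleArn="arn:aws:iam::111111111111:role/rds-proxy-role",
    VpcSubnetIds=["subnet-0aaa1111", "subnet-0bbb2222"],
)

rds.register_db_proxy_targets(
    DBProxyName="app-db-proxy",
    DBClusterIdentifiers=["app-db-cluster"],
)
```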

242
Q

Dup Number But New
814 # A company needs to extract ingredient names from recipe records that are stored as text files in an Amazon S3 bucket. A web application will use the ingredient names to query an Amazon DynamoDB table and determine a nutrition score. The application can handle logs and non-food errors. The company does not have any employees who have machine learning skills to develop this solution. Which solution will meet these requirements in the MOST cost-effective way?

A. Use S3 event notifications to invoke an AWS Lambda function when PutObject requests occur. Schedule the Lambda function to parse the object and extract ingredient names using Amazon Comprehend. Store the Amazon Comprehend output in the DynamoDB table.

B. Use an Amazon EventBridge rule to invoke an AWS Lambda function when PutObject requests occur. Schedule the Lambda function to parse the object using Amazon Forecast to extract ingredient names. Store the forecast output in the DynamoDB table.

C. Use S3 event notifications to invoke an AWS Lambda function when PutObject requests occur. Use Amazon Polly to create audio recordings of the recipe records. Save the audio files to the S3 bucket. Use Amazon Simple Notification Service (Amazon SNS) to send a URL as a message to employees. Instruct employees to listen to the audio files and calculate the nutrition score. Store the ingredient names in the DynamoDB table.

D. Use an Amazon EventBridge rule to invoke an AWS Lambda function when a PutObject request occurs. Schedule the Lambda function to parse the object and extract ingredient names using Amazon SageMaker. Store the inference output from the SageMaker endpoint in the DynamoDB table.

A

A. Use S3 event notifications to invoke an AWS Lambda function when PutObject requests occur. Schedule the Lambda function to parse the object and extract ingredient names using Amazon Comprehend. Store the Amazon Comprehend output in the DynamoDB table.

This option uses S3 event notifications to trigger a Lambda function when new recipe records are uploaded to the S3 bucket. The Lambda function parses the text using Amazon Comprehend to extract ingredient names. Amazon Comprehend is a natural language processing (NLP) service that can identify entities such as food ingredients. This solution is cost-effective as it only uses AWS Lambda and Amazon Comprehend, both of which offer a pay-as-you-go pricing model.

Taking into account cost-effectiveness and compliance with requirements, option A is the most appropriate solution. It leverages AWS Lambda and Amazon Comprehend, offering an efficient and accurate method to extract ingredient names from recipe records while minimizing costs.
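A minimal sketch of such a Lambda handler, with a placeholder DynamoDB table name; the application’s own filtering of non-food entities is left out.

```python
import boto3

s3 = boto3.client("s3")
comprehend = boto3.client("comprehend")
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("IngredientScores")     # placeholder table name

def handler(event, context):
    """Lambda handler sketch triggered by an S3 PutObject event notification."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        text = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")

        # Comprehend entity detection; truncate very long recipes for the synchronous API.
        entities = comprehend.detect_entities(Text=text[:5000], LanguageCode="en")
        names = [e["Text"] for e in entities["Entities"]]

        table.put_item(Item={"recipe_key": key, "ingredients": names})
```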

243
Q

815#A company needs to create an AWS Lambda function that will run in a VPC in the company’s primary AWS account. The Lambda function needs to access files that the company stores on an Amazon Elastic File System (Amazon EFS) file system. The EFS file system is located in a secondary AWS account. As the company adds files to the file system, the solution must scale to meet demand. Which solution will meet these requirements in the MOST cost-effective way?

A. Create a new EFS file system on the main account. Use AWS DataSync to copy the contents of the original EFS file system to the new EFS file system.

B. Create a VPC peering connection between the VPCs that are in the primary account and the secondary account.

C. Create a second Lambda function in the secondary account that has a mount configured for the file system. Use the parent account’s Lambda function to invoke the child account’s Lambda function.

D. Move the contents of the file system to a Lambda layer. Configure Lambda layer permissions to allow the company’s secondary account to use the Lambda layer.

A

B. Create a VPC peering connection between the VPCs that are in the primary account and the secondary account.

VPC peering allows communication between VPCs in different AWS accounts using private IP addresses. By creating a VPC peering connection between the VPCs in the primary and secondary accounts, the Lambda function in the primary account can directly access files stored in the EFS file system in the secondary account. This solution eliminates the need for data duplication and synchronization, making it cost-effective and efficient to access files across accounts.

Taking into account cost-effectiveness and compliance with the requirements, option B is the most suitable solution. It leverages VPC peering to allow the Lambda function in the primary account to access the EFS file system in the secondary account directly, eliminating the need for data duplication or complex cross-account invocations. This solution is efficient, scalable, and cost-effective for accessing files across AWS accounts.
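A minimal boto3 sketch, with placeholder VPC IDs, account ID, and CIDR: request the peering connection from the primary account, accept it from the secondary account, and add routes on both sides (only one side is shown).

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder VPC IDs and the secondary AWS account ID.
peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-primary0123",
    PeerVpcId="vpc-secondary0123",
    PeerOwnerId="222222222222",
)
peering_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# The secondary account accepts the request (run with its credentials):
# ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=peering_id)

# Each VPC needs a route to the peer CIDR via the peering connection.
ec2.create_route(
    RouteTableId="rtb-0abc1234",
    DestinationCidrBlock="10.1.0.0/16",   # secondary VPC CIDR (placeholder)
    VpcPeeringConnectionId=peering_id,
)
```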

244
Q

816#A financial company needs to handle highly confidential data. The company will store the data in an Amazon S3 bucket. The company needs to ensure that data is encrypted in transit and at rest. The company must manage encryption keys outside of the AWS cloud. What solution will meet these requirements?

A. Encrypt the data in the S3 bucket with server-side encryption (SSE) that uses a customer-managed key from the AWS Key Management Service (AWS KMS).

B. Encrypt the data in the S3 bucket with server-side encryption (SSE) that uses a key managed by AWS Key Management Service (AWS KMS).

C. Encrypt the data in the S3 bucket with the default server-side encryption (SSE).

D. Encrypt the data at the company’s data center before storing it in the S3 bucket.

A

D. Encrypt the data at the company’s data center before storing it in the S3 bucket.

In fact, option D would be the closest option to meeting the requirements specified in the documentation provided on client-side encryption. By encrypting data in the company’s data center before uploading it to S3, the company can ensure that the data is encrypted before it leaves its environment, thus achieving the goal of client-side encryption.

https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingClientSideEncryption.html
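A minimal client-side encryption sketch in Python, assuming the key is generated and held in the company's data center (here using the third-party cryptography library's Fernet primitive; the bucket and file names are placeholders). S3 never sees the plaintext or the key, and the upload itself is still protected in transit by the HTTPS endpoint the SDK uses by default.

    import boto3
    from cryptography.fernet import Fernet  # pip install cryptography

    # Key material is created and kept on premises, never uploaded to AWS
    key = Fernet.generate_key()
    fernet = Fernet(key)

    with open("transactions.csv", "rb") as f:       # placeholder local file
        ciphertext = fernet.encrypt(f.read())

    boto3.client("s3").put_object(
        Bucket="confidential-finance-data",         # placeholder bucket
        Key="2024/transactions.csv.enc",
        Body=ciphertext,
    )
    # Downloads are decrypted locally with fernet.decrypt(...) using the on-premises key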

Other Options:
A. Encrypt the data in the S3 bucket with server-side encryption (SSE) that uses a customer-managed key from the AWS Key Management Service (AWS KMS). Reassessment: This option focuses on server-side encryption (SSE) with a customer-managed key (CMK) stored in AWS KMS. Encrypts data at rest in the S3 bucket using company-managed keys. However, it does not directly address client-side encryption, where data is encrypted locally before transmission to S3. While this option ensures encryption at rest, it does not use client-side encryption as described in the provided documentation.

245
Q

817#A company wants to run its payment application on AWS. The application receives payment notifications from mobile devices. Payment notifications require basic validation before they are sent for further processing. The backend processing application runs for a long time and requires compute and memory to be adjusted. The company does not want to manage the infrastructure. Which solution will meet these requirements with the LEAST operational overhead?

A. Create an Amazon Simple Queue Service (Amazon SQS) queue. Integrate the queue with an Amazon EventBridge rule to receive payment notifications from mobile devices. Configure the rule to validate payment notifications and send the notifications to the backend application. Deploy the backend application to Amazon Elastic Kubernetes Service (Amazon EKS) Anywhere. Create a standalone cluster.

B. Create an Amazon API Gateway API. Integrate the API with an AWS Step Functions state machine to receive payment notifications from mobile devices. Invoke the state machine to validate payment notifications and send the notifications to the backend application. Deploy the backend application to Amazon Elastic Kubernetes Service (Amazon EKS). Set up an EKS cluster with self-managed nodes.

C. Create an Amazon Simple Queue Service (Amazon SQS) queue. Integrate the queue with an Amazon EventBridge rule to receive payment notifications from mobile devices. Configure the rule to validate payment notifications and send the notifications to the backend application. Deploy the backend application to Amazon EC2 Spot Instances. Set up a spot fleet with a predetermined allocation strategy.

D. Create an Amazon API Gateway API. Integrate the API with AWS Lambda to receive payment notifications from mobile devices. Invoke a Lambda function to validate payment notifications and send the notifications to the backend application. Deploy the backend application to Amazon Elastic Container Service (Amazon ECS). Configure Amazon ECS with an AWS Fargate launch type.

A

D. Create an Amazon API Gateway API. Integrate the API with AWS Lambda to receive payment notifications from mobile devices. Invoke a Lambda function to validate payment notifications and send the notifications to the backend application. Deploy the backend application to Amazon Elastic Container Service (Amazon ECS). Configure Amazon ECS with an AWS Fargate launch type.

D. Amazon API Gateway with AWS Lambda and Amazon ECS with Fargate:
● Amazon API Gateway: Receives payment notifications.
● AWS Lambda: Used for basic validation of payment notifications.
● Amazon ECS with Fargate: Offers serverless container orchestration, eliminating the need to manage
infrastructure.
● Operational Overhead: This option involves the least operational overhead as it leverages fully managed services
like AWS Lambda and Fargate, where AWS manages the underlying infrastructure.
Given the requirement for the least operational overhead, Option D is the most suitable. It leverages fully managed services (AWS Lambda and Amazon ECS with Fargate) for handling payment notifications and running the backend application, minimizing the operational burden on the company.
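A rough sketch of the validation Lambda behind API Gateway for option D. The required fields and the handoff mechanism (an SQS queue feeding the Fargate-backed ECS service) are assumptions for illustration; the question only says the notifications are validated and passed to the backend.

    import json
    import boto3

    sqs = boto3.client("sqs")
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111111111111/payment-notifications"  # placeholder

    REQUIRED_FIELDS = {"payment_id", "amount", "currency", "device_id"}  # assumed schema

    def handler(event, context):
        body = json.loads(event.get("body") or "{}")

        # Basic validation only; heavy processing happens in the ECS/Fargate backend
        missing = REQUIRED_FIELDS - body.keys()
        if missing or float(body.get("amount", 0)) <= 0:
            return {"statusCode": 400,
                    "body": json.dumps({"error": f"invalid notification, missing {sorted(missing)}"})}

        sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(body))
        return {"statusCode": 202, "body": json.dumps({"status": "accepted"})}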


246
Q

818#A solutions architect is designing a user authentication solution for a company. The solution should invoke two-factor authentication for users who log in from inconsistent geographic locations, IP addresses, or devices. The solution must also be able to scale to accommodate millions of users. What solution will meet these requirements?

A. Configure Amazon Cognito user pools for user authentication. Enable the risk-based adaptive authentication feature with multi-factor authentication (MFA).

B. Configure Amazon Cognito identity pools for user authentication. Enable multi-factor authentication (MFA).

C. Configure AWS Identity and Access Management (IAM) users for user authentication. Attach an IAM policy that allows the AllowManageOwnUserMFA action.

D. Configure AWS IAM Identity Center authentication (AWS Single Sign-On) for user authentication. Configure permission sets to require multi-factor authentication (MFA).

A

A. Configure Amazon Cognito user pools for user authentication with risk-based adaptive authentication and MFA:
● Amazon Cognito User Pools: Provides user authentication and management service.
● Risk-based Adaptive Authentication: Allows you to define authentication rules based on user behavior, such as
inconsistent geographical locations, IP addresses, or devices.
● Multi-factor Authentication (MFA): Enhances security by requiring users to provide two or more verification factors.
● Scalability: Amazon Cognito is designed to scale to accommodate millions of users.
● Explanation: This option aligns well with the requirements as it leverages Amazon Cognito’s risk-based adaptive
authentication feature to detect suspicious activities based on user behavior and trigger MFA when necessary. Additionally, Amazon Cognito is highly scalable and suitable for accommodating millions of users.
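A minimal boto3 sketch of option A, assuming a new user pool is being created (all names are placeholders). Advanced security must be enabled on the pool before the risk-based adaptive authentication responses can be configured.

    import boto3

    cognito = boto3.client("cognito-idp")

    # Create a user pool with advanced security (prerequisite for adaptive authentication)
    pool = cognito.create_user_pool(
        PoolName="customer-auth",                            # placeholder
        MfaConfiguration="OPTIONAL",                         # MFA prompted when risk is detected
        UserPoolAddOns={"AdvancedSecurityMode": "ENFORCED"},
    )["UserPool"]

    # Allow TOTP (software token) as the second factor
    cognito.set_user_pool_mfa_config(
        UserPoolId=pool["Id"],
        SoftwareTokenMfaConfiguration={"Enabled": True},
        MfaConfiguration="OPTIONAL",
    )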

247
Q

819#A company has an Amazon S3 data lake. The company needs a solution that transforms data from the data lake and loads it into a data warehouse every day. The data warehouse must have massively parallel processing (MPP) capabilities. Next, data analysts need to create and train machine learning (ML) models by using SQL commands on the data. The solution should use serverless AWS services whenever possible. What solution will meet these requirements?

A. Run an Amazon EMR daily job to transform the data and load it into Amazon Redshift. Use Amazon Redshift ML to create and train ML models.

B. Run an Amazon EMR daily job to transform the data and load it to Amazon Aurora Serverless. Use Amazon Aurora ML to create and train ML models.

C. Run an AWS Glue daily job to transform the data and load it into Amazon Redshift Serverless. Use Amazon Redshift ML to create and train ML models.

D. Run an AWS Glue daily job to transform the data and load it into Amazon Athena tables. Use Amazon Athena ML to create and train ML models.

A

C. Run an AWS Glue daily job to transform the data and load it into Amazon Redshift Serverless. Use Amazon Redshift ML to create and train ML models.

The only serverless data warehouse available on AWS is Amazon Redshift Serverless. Option C uses an AWS Glue job, which is serverless, to transform the data and load it into Amazon Redshift Serverless, and Amazon Redshift ML lets the data analysts create and train ML models using SQL commands.
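As a sketch of how the analysts would train a model with SQL on Redshift Serverless, the snippet below submits a Redshift ML CREATE MODEL statement through the Redshift Data API. The workgroup, database, table, and IAM role names are placeholders.

    import boto3

    rsd = boto3.client("redshift-data")

    create_model_sql = """
    CREATE MODEL churn_model
    FROM (SELECT age, plan, monthly_spend, churned FROM analytics.customers)
    TARGET churned
    FUNCTION predict_churn
    IAM_ROLE 'arn:aws:iam::111111111111:role/RedshiftMLRole'
    SETTINGS (S3_BUCKET 'redshift-ml-artifacts-example');
    """

    rsd.execute_statement(
        WorkgroupName="analytics-serverless",   # Redshift Serverless workgroup (placeholder)
        Database="dev",
        Sql=create_model_sql,
    )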

Other Options:

Amazon EMR is not a serverless service; you have to provision an EMR cluster before you can run jobs, which rules out options A and B. Option A also uses provisioned Amazon Redshift rather than Amazon Redshift Serverless, which is another reason to eliminate it. If you want a serverless way to run EMR-style jobs, that is essentially what AWS Glue provides (behind the scenes it runs managed Spark jobs without you provisioning anything), and there is now an EMR Serverless offering as well. Option B can also be ruled out because Aurora is not a data warehouse, and option D because Athena is not a data warehouse either.

248
Q

820#A company runs containers in a Kubernetes environment in the company’s on-premises data center. The company wants to use Amazon Elastic Kubernetes Service (Amazon EKS) and other AWS managed services. Data must remain locally in the company’s data center and cannot be stored on any remote site or cloud to maintain compliance. What solution will meet these requirements?

A. Deploy AWS Local Zones in the company's data center.

B. Use AWS Snowmobile in the company data center.

C. Install an AWS Outposts rack in the company data center.

D. Install an AWS Snowball Edge Storage Optimized node in the data center.

A

C. Install an AWS Outposts rack in the company data center.

● AWS Outposts: AWS Outposts brings native AWS services, infrastructure, and operating models to virtually any data center, co-location space, or on-premises facility. With AWS Outposts, companies can run AWS infrastructure and services locally on premises and use the same APIs, control plane, tools, and hardware on-premises as in the AWS cloud.
● By installing an AWS Outposts rack in the company’s data center, the company can leverage Amazon EKS (Elastic Kubernetes Service) and other AWS managed services while ensuring that all data remains within the local data center, meeting compliance requirements.
● AWS Outposts provides a consistent hybrid experience with seamless integration with AWS services, allowing the company to run containerized workloads in the local Kubernetes environment alongside AWS services without data leaving the local premises.

249
Q

821#A social media company has workloads that collect and process data. The workloads store data on local NFS storage. The data store cannot scale fast enough to meet the company's expanding business needs. The company wants to migrate the current data storage solution to AWS. Which solution will meet these requirements in the MOST cost-effective way?

A. Configure an AWS Storage Gateway Volume Gateway. Use an Amazon S3 lifecycle policy to transition data to the appropriate storage class.

B. Set up an AWS Storage Gateway, Amazon S3 File Gateway. Use an Amazon S3 lifecycle policy to transition data to the appropriate storage class.

C. Use the Amazon Elastic File System (Amazon EFS) Standard-Infrequent Access (Standard-IA) storage class. Activate the infrequent access lifecycle policy.

D. Use the Amazon Elastic File System (Amazon EFS) One Zone-Infrequent Access (One Zone-IA) storage class. Activate the infrequent access lifecycle policy.

A

B. Set up an AWS Storage Gateway, Amazon S3 File Gateway. Use an Amazon S3 lifecycle policy to transition data to the appropriate storage class.

● File Gateway allows applications to store files as objects in Amazon S3 while accessing them through a Network File System (NFS) interface.
● Similar to Option A, this solution involves using Amazon S3 Lifecycle policies to transition data to the appropriate storage class.
● Cost-effectiveness: This option could be more cost-effective compared to Option A, as it eliminates the need for managing EBS snapshots and associated costs.

Considering cost-effectiveness and the ability to meet the requirements, Option B (AWS Storage Gateway Amazon S3 File Gateway with Amazon S3 Lifecycle Policy) seems to be the most cost-effective solution. It leverages Amazon S3’s scalability and cost-effectiveness while using Storage Gateway to seamlessly integrate with the company’s existing NFS storage.
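A rough sketch of option B with boto3, assuming an S3 File Gateway appliance has already been activated (the gateway ARN, IAM role, and bucket are placeholders): it exposes the bucket as an NFS share and adds a lifecycle rule that transitions older objects to a cheaper storage class.

    import uuid
    import boto3

    sgw = boto3.client("storagegateway")
    s3 = boto3.client("s3")

    # Expose the S3 bucket to the on-premises workloads as an NFS file share
    sgw.create_nfs_file_share(
        ClientToken=str(uuid.uuid4()),
        GatewayARN="arn:aws:storagegateway:us-east-1:111111111111:gateway/sgw-EXAMPLE",  # placeholder
        Role="arn:aws:iam::111111111111:role/FileGatewayS3Access",                        # placeholder
        LocationARN="arn:aws:s3:::social-media-data-lake",                                # placeholder bucket
        DefaultStorageClass="S3_STANDARD",
    )

    # Transition objects to Standard-IA after 30 days to reduce storage cost
    s3.put_bucket_lifecycle_configuration(
        Bucket="social-media-data-lake",
        LifecycleConfiguration={"Rules": [{
            "ID": "tier-down",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
        }]},
    )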

Other Options:
A. AWS Storage Gateway Volume Gateway with Amazon S3 Lifecycle Policy:
● With Volume Gateway, on-premises applications can use block storage in the form of volumes that are stored as Amazon EBS snapshots.
● This option allows the company to store data in on-premises NFS storage and synchronize it with Amazon S3 using Storage Gateway. Amazon S3 Lifecycle policies can be used to transition the data to the appropriate storage class, such as S3 Standard-IA or S3 Intelligent-Tiering.
● Cost-effectiveness: This option may incur additional costs for maintaining the Storage Gateway Volume Gateway and EBS snapshots, which might not be the most cost-effective solution depending on the volume of data and frequency of access.

250
Q

822#A company uses high-concurrency AWS Lambda functions to process an increasing number of messages in a message queue during marketing events. Lambda functions use CPU-intensive code to process messages. The company wants to reduce computing costs and maintain service latency for its customers. What solution will meet these requirements?

A. Configure reserved concurrency for Lambda functions. Decrease the memory allocated to Lambda functions.

B. Configure reserved concurrency for Lambda functions. Increase memory according to AWS Compute Optimizer recommendations.

C. Configure provisioned concurrency for Lambda functions. Decrease the memory allocated to Lambda functions.

D. Configure provisioned concurrency for Lambda functions. Increase memory according to AWS Compute Optimizer recommendations.

A

D. Configure provisioned concurrency for Lambda functions. Increase memory according to AWS Compute Optimizer recommendations.

● Provisioned concurrency can help maintain low latency by pre-warming Lambda functions.
● Increasing memory might improve performance for CPU-intensive tasks.
● AWS Compute Optimizer recommendations can guide in optimizing resources for cost and performance.
● This option combines the benefits of provisioned concurrency for low latency and AWS Compute Optimizer recommendations for cost optimization and performance improvement.
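As a sketch of the two API calls involved in option D (function name, memory size, and concurrency figures are assumptions):

    import boto3

    lam = boto3.client("lambda")
    FN = "message-processor"   # placeholder function name

    # Raise memory (CPU scales with memory) per the Compute Optimizer recommendation
    lam.update_function_configuration(FunctionName=FN, MemorySize=2048)  # assumed recommended size

    # Publish a version and keep warm capacity ready for the marketing-event spike
    version = lam.publish_version(FunctionName=FN)["Version"]
    lam.put_provisioned_concurrency_config(
        FunctionName=FN,
        Qualifier=version,                     # provisioned concurrency needs a version or alias
        ProvisionedConcurrentExecutions=100,   # assumed capacity
    )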

Other Options:
B. Configure reserved concurrency for the Lambda functions. Increase the memory according to AWS Compute Optimizer recommendations:
● Reserved concurrency helps control costs by limiting the number of concurrent executions.
● Increasing memory might improve performance for CPU-intensive tasks if the Lambda functions are
memory-bound.
● AWS Compute Optimizer provides recommendations for optimizing resources based on utilization metrics.
● This option addresses both cost optimization and potential performance improvements based on recommendations.

251
Q

823#A company runs its workloads on Amazon Elastic Container Service (Amazon ECS). The container images that the company's ECS task definitions use must be scanned for common vulnerabilities and exposures (CVEs). The company must also scan any new container images that are created. Which solution will meet these requirements with the LEAST changes to workloads?

A. Use Amazon Elastic Container Registry (Amazon ECR) as a private image repository to store the container images. Specify scan on push filters for the ECR basic scan.

B. Store the container images in an Amazon S3 bucket. Use Amazon Macie to scan the images. Use an S3 event notification to start a Macie scan for each event with an event type of s3:ObjectCreated:Put.

C. Deploy the workloads to Amazon Elastic Kubernetes Service (Amazon EKS). Use Amazon Elastic Container Registry (Amazon ECR) as a private image repository. Specify scan on push filters for ECR enhanced scanning.

D. Store the container images in an Amazon S3 bucket that has versioning enabled. Configure an S3 event notification for s3:ObjectCreated:* events to invoke an AWS Lambda function. Configure the Lambda function to start an Amazon Inspector scan.

A

A. Use Amazon Elastic Container Registry (Amazon ECR) as a private image repository to store the container images. Specify scan on push filters for the ECR basic scan.

● Amazon ECR supports scanning container images for vulnerabilities using its built-in scan on push feature.
● With scan on push filters, every new image pushed to the repository triggers a scan for vulnerabilities.
● This option requires minimal changes to the existing ECS setup as it leverages Amazon ECR, which is commonly
used with ECS for storing container images.
● It directly integrates with the container image repository without additional services.

NOTE: the other options might be technically feasible, but each adds more changes.
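A minimal sketch of enabling basic scan on push (repository names are placeholders); existing repositories can be switched with put_image_scanning_configuration.

    import boto3

    ecr = boto3.client("ecr")

    # New repository with basic scanning triggered on every push
    ecr.create_repository(
        repositoryName="orders-service",                       # placeholder
        imageScanningConfiguration={"scanOnPush": True},
    )

    # Or enable it on an existing repository
    ecr.put_image_scanning_configuration(
        repositoryName="payments-service",                     # placeholder
        imageScanningConfiguration={"scanOnPush": True},
    )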

252
Q

824#A company uses an AWS batch job to run its sales process at the end of the day. The company needs a serverless solution that invokes a third-party reporting application when the AWS Batch job is successful. The reporting application has an HTTP API interface that uses username and password authentication. What solution will meet these requirements?

A. Configure an Amazon EventBridge rule to match incoming AWS Batch job SUCCEEDED events. Configure the third-party API as an EventBridge API destination with a username and password. Set the API destination as the EventBridge rule target.

B. Configure Amazon EventBridge Scheduler to match incoming AWS Batch job events. Configure an AWS Lambda function to invoke the third-party API using a username and password. Set the Lambda function as the target of the EventBridge rule.

C. Configure an AWS Batch job to publish SUCCEEDED job events to an Amazon API Gateway REST API. Configure an HTTP proxy integration in the API Gateway REST API to invoke the third-party API using a username and password.

D. Configure an AWS Batch job to publish SUCCEEDED job events to an Amazon API Gateway REST API. Set up a proxy integration on the API Gateway REST API to an AWS Lambda function. Configure the Lambda function to invoke the third-party API using a username and password.

A

A. Configure an Amazon EventBridge rule to match incoming AWS Batch job SUCCEEDED events. Configure the third-party API as an EventBridge API destination with a username and password. Set the API destination as the EventBridge rule target.

● This option aligns well with using EventBridge rules to trigger actions based on AWS Batch job state changes.
● By configuring an EventBridge rule to match AWS Batch job SUCCEEDED events and sending them to an API destination (the third-party API with username and password authentication), you can achieve the desired outcome efficiently.
● EventBridge provides seamless integration with various AWS services, including AWS Batch and API Gateway, making it a suitable choice for event-driven architectures.
● Overall, this option meets the requirements effectively.

Other Options:
B. Configure EventBridge Scheduler with an AWS Lambda function:
● While EventBridge Scheduler allows triggering events at specific times or intervals, it may not be the best fit for triggering actions based on job completion events like AWS Batch job success.
● Using a Lambda function as a target for EventBridge Scheduler adds unnecessary complexity and may not align well with the event-driven nature of the requirement.
● Therefore, this option is less suitable compared to Option A.
C. Configure AWS Batch job to publish events to an Amazon API Gateway REST API:
● This option involves publishing AWS Batch job success events to an API Gateway REST API.
● While it’s feasible, it introduces additional complexity by requiring setup and management of API Gateway
resources.
● Directly invoking the third-party API from CloudWatch Events (Option A) is more straightforward and aligns better
with the requirements.
D. Configure AWS Batch job to publish events to an Amazon API Gateway REST API with a proxy integration to AWS Lambda:
● This option adds an extra layer of indirection by invoking an AWS Lambda function through API Gateway.
● While it offers flexibility, it increases complexity without significant benefits over directly invoking the third-party API
from CloudWatch Events.
● Therefore, it’s less preferable compared to Option A.
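The sketch below wires up option A with boto3: a connection that stores the username/password, an API destination pointing at the reporting endpoint, and a rule that matches AWS Batch SUCCEEDED events. The endpoint URL, credentials, and invocation role ARN are placeholders.

    import json
    import boto3

    events = boto3.client("events")

    conn_arn = events.create_connection(
        Name="reporting-api-basic-auth",
        AuthorizationType="BASIC",
        AuthParameters={"BasicAuthParameters": {"Username": "report_user", "Password": "example-secret"}},
    )["ConnectionArn"]

    dest_arn = events.create_api_destination(
        Name="reporting-api",
        ConnectionArn=conn_arn,
        InvocationEndpoint="https://reports.example.com/api/run",   # placeholder third-party API
        HttpMethod="POST",
    )["ApiDestinationArn"]

    events.put_rule(
        Name="batch-job-succeeded",
        EventPattern=json.dumps({
            "source": ["aws.batch"],
            "detail-type": ["Batch Job State Change"],
            "detail": {"status": ["SUCCEEDED"]},
        }),
    )

    events.put_targets(
        Rule="batch-job-succeeded",
        Targets=[{
            "Id": "reporting-api",
            "Arn": dest_arn,
            "RoleArn": "arn:aws:iam::111111111111:role/EventBridgeInvokeApiDestination",  # placeholder
        }],
    )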

253
Q

825#A company collects and processes data from a vendor. The provider stores its data in an Amazon RDS for MySQL database in the vendor’s own AWS account. The company’s VPC does not have an Internet gateway, an AWS Direct Connect connection, or an AWS Site-to-Site VPN connection. The company needs to access the data that is in the vendor database. What solution will meet this requirement?

A. Instruct the vendor to enroll in the AWS Hosted Connection Direct Connect program. Use VPC peering to connect the company VPC and the vendor VPC.

B. Configure a client VPN connection between the company’s VPC and the provider’s VPC. Use VPC peering to connect your company’s VPC and your provider’s VPC.

C. Instruct the vendor to create a network load balancer (NLB). Place the NLB in front of the Amazon RDS for MySQL database. Use AWS PrivateLink to integrate your company’s VPC and the vendor’s VPC.

D. Use AWS Transit Gateway to integrate the enterprise VPC and the provider VPC. Use VPC peering to connect your company’s VPC and your provider’s VPC.

A

C. Instruct the vendor to create a network load balancer (NLB). Place the NLB in front of the Amazon RDS for MySQL database. Use AWS PrivateLink to integrate your company’s VPC and the vendor’s VPC.

● This solution involves the vendor setting up a Network Load Balancer (NLB) in front of the RDS database and using AWS PrivateLink to integrate the company’s VPC and the vendor’s VPC.
● AWS PrivateLink provides private connectivity between VPCs without requiring internet gateways, VPN connections, or Direct Connect.
● By using PrivateLink, the company can securely access resources in the vendor’s VPC without exposing them to the public internet.
● Overall, this solution provides secure and private connectivity between the VPCs without the need for complex networking setups, making it a strong contender for meeting the requirements.
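As a sketch of option C, the vendor publishes its NLB as an endpoint service and the company creates an interface endpoint to it. In practice the two calls run in different AWS accounts; all ARNs, IDs, and names below are placeholders.

    import boto3

    # Vendor account: expose the NLB (which fronts the RDS for MySQL database) via PrivateLink
    vendor_ec2 = boto3.client("ec2")
    service = vendor_ec2.create_vpc_endpoint_service_configuration(
        NetworkLoadBalancerArns=["arn:aws:elasticloadbalancing:us-east-1:222222222222:loadbalancer/net/vendor-db/abc"],
        AcceptanceRequired=True,
    )["ServiceConfiguration"]

    # Company account: create an interface endpoint to the vendor's endpoint service
    company_ec2 = boto3.client("ec2")
    company_ec2.create_vpc_endpoint(
        VpcEndpointType="Interface",
        VpcId="vpc-company111",                          # placeholder
        ServiceName=service["ServiceName"],              # e.g. com.amazonaws.vpce.us-east-1.vpce-svc-...
        SubnetIds=["subnet-aaa", "subnet-bbb"],          # placeholders
        SecurityGroupIds=["sg-mysql-client"],            # placeholder; allow TCP 3306
    )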

Other Options:
A. AWS Hosted Connection Direct Connect Program with VPC peering:
● This solution involves the vendor signing up for the AWS Hosted Connection Direct Connect Program, which establishes a dedicated connection between the company’s VPC and the vendor’s VPC.
● VPC peering is then used to connect the two VPCs, allowing traffic to flow securely between them.
● While this solution provides a direct and secure connection between the VPCs, it requires coordination with the
vendor to set up the direct connect connection, which might introduce additional complexity and dependencies.
● Overall, this solution can be effective but might involve more coordination and setup effort.
B. Client VPN connection with VPC peering:
● This solution involves setting up a client VPN connection between the company’s VPC and the vendor’s VPC, allowing secure access to resources in the vendor’s VPC.
● VPC peering is then used to establish connectivity between the two VPCs.
● While this solution provides secure access to the vendor’s resources, setting up and managing a client VPN
connection might introduce additional overhead and complexity.
● Moreover, client VPN connections are typically used for remote access scenarios, and using them for inter-VPC
communication might not be the most straightforward approach.
● Overall, this solution might be less optimal due to the additional complexity and overhead of managing a client VPN
connection.

D. AWS Transit Gateway with VPC peering:
● This solution involves using AWS Transit Gateway to integrate the company’s VPC and the vendor’s VPC, allowing for centralized management and routing of traffic between multiple VPCs.
● VPC peering is then used to establish connectivity between the company’s VPC and the vendor’s VPC.
● While AWS Transit Gateway provides centralized management and routing capabilities, it might introduce additional
complexity and overhead, especially if the setup is not already in place.
● Additionally, since the company’s VPC does not have internet access, Transit Gateway might not be the most
straightforward solution for this scenario.
● Overall, while Transit Gateway offers scalability and flexibility, it might be overkill for the specific requirement of
accessing a single RDS database in the vendor’s VPC.

Considering the requirements and the constraints specified (no internet gateway, Direct Connect, or VPN connection), Option C (using a Network Load Balancer with AWS PrivateLink) appears to be the most suitable solution. It provides secure and private connectivity between the VPCs without the need for complex networking setups and dependencies on external services.

254
Q

826#A company wants to set up Amazon Managed Grafana as its visualization tool. The company wants to visualize the data in its Amazon RDS database as one data source. The company needs a secure solution that does not expose data over the Internet. What solution will meet these requirements?

A. Create an Amazon Managed Grafana workspace without a VPC. Create a public endpoint for the RDS database. Configure the public endpoint as a data source in Amazon Managed Grafana.

B. Create an Amazon Managed Grafana workspace in a VPC. Create a private endpoint for the RDS database. Configure the private endpoint as a data source in Amazon Managed Grafana.

C. Create an Amazon Managed Grafana workspace without a VPC. Create an AWS PrivateLink endpoint to establish a connection between Amazon Managed Grafana and Amazon RDS. Configure Amazon RDS as a data source in Amazon Managed Grafana.

D. Create an Amazon Managed Grafana workspace in a VPC. Create a public endpoint for the RDS database. Configure the public endpoint as a data source in Amazon Managed Grafana.

A

At first glance, both B and C appear workable, but B is the better fit.

B. Create an Amazon Managed Grafana workspace in a VPC. Create a private endpoint for the RDS database. Configure the private endpoint as a data source in Amazon Managed Grafana.

● This solution involves creating an Amazon Managed Grafana workspace in a VPC and configuring it with a private endpoint, ensuring that it is not accessible over the public internet.
● The RDS database also has a private endpoint within the same VPC, ensuring that data transfer between Grafana and RDS remains within the AWS network and does not traverse the public internet.
● By using private endpoints and keeping the communication within the VPC, this option provides a more secure solution compared to Option A.
● Overall, this option aligns well with the requirement for a secure solution that does not expose the data over the internet.

Other Options:
C. Public Amazon Managed Grafana workspace with AWS PrivateLink to RDS:
● This solution involves creating an Amazon Managed Grafana workspace without a VPC but establishing a connection between Grafana and RDS using AWS PrivateLink.
● AWS PrivateLink allows private connectivity between services across different VPCs or accounts without exposing the data over the internet.
● While this option could leverage AWS PrivateLink for secure communication between Grafana and RDS, it is designed for other uses: AWS PrivateLink provides private connectivity between virtual private clouds (VPCs), supported AWS services, and your on-premises networks without exposing your traffic to the public internet. Interface VPC endpoints, powered by PrivateLink, connect you to services hosted by AWS Partners and supported solutions available in AWS Marketplace.

255
Q

827#A company hosts a data lake on Amazon S3. The data lake ingests data in Apache Parquet format from various data sources. The company uses multiple transformation steps to prepare the ingested data. The steps include filtering out anomalies, normalizing the data to standard date and time values, and generating aggregates for analyses. The company must store the transformed data in S3 buckets that are accessed by data analysts. The company needs a pre-built solution for data transformation that requires no code. The solution must provide data lineage and data profiling. The company needs to share data transformation steps with employees throughout the company. What solution will meet these requirements?

A. Set up an AWS Glue Studio visual canvas to transform the data. Share transformation steps with employees using AWS Glue jobs.

B. Configure Amazon EMR Serverless to transform data. Share transformation steps with employees using EMR serverless jobs.

C. Configure AWS Glue DataBrew to transform the data. Share the transformation steps with employees using DataBrew recipes.

D. Create Amazon Athena tables for the data. Write Athena SQL queries to transform data. Share Athena SQL queries with employees.

A

C. Configure AWS Glue DataBrew to transform the data. Share the transformation steps with employees using DataBrew recipes.

● AWS Glue DataBrew is a visual data preparation tool that allows users to clean and transform data without writing code.
● DataBrew provides an intuitive interface for creating data transformation recipes, making it accessible to users without coding expertise.
● Users can easily share transformation steps (recipes) with employees by using DataBrew’s collaboration features.
● DataBrew also offers data lineage and data profiling capabilities, ensuring visibility into data transformations.
● Overall, this option aligns well with the requirements, providing a code-free solution for data transformation, along
with data lineage, data profiling, and easy sharing of transformation steps.

While Option A (AWS Glue Studio) provides a visual canvas and could also avoid writing code, sharing the transformation steps as Glue jobs still requires more setup from employees and is not as prebuilt as DataBrew recipes.
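A small boto3 sketch of publishing a shareable DataBrew recipe. The recipe name, column, and the operation shown (UPPER_CASE) are illustrative assumptions; real recipes are normally built interactively in the DataBrew visual interface.

    import boto3

    databrew = boto3.client("databrew")

    databrew.create_recipe(
        Name="normalize-recipes",                # placeholder
        Steps=[{
            "Action": {
                "Operation": "UPPER_CASE",       # assumed/illustrative transformation
                "Parameters": {"sourceColumn": "ingredient"},
            },
        }],
    )

    # Publishing a version makes the recipe easy to share and reuse across projects and jobs
    databrew.publish_recipe(Name="normalize-recipes")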

256
Q

828#A solutions architect runs a web application on multiple Amazon EC2 instances that reside in individual target groups behind an application load balancer (ALB). Users can reach the application through a public website. The solutions architect wants to allow engineers to use a development version of the website to access a specific EC2 development instance to test new features for the application. The solutions architect wants to use a zone hosted on Amazon Route 53 to give engineers access to the development instance. The solution should automatically route to the development instance, even if the development instance is replaced. What solution will meet these requirements?

A. Create an A record for the development website that has the value set in the ALB. Create a listener rule on the ALB that forwards requests for the development website to the target group that contains the development instance.

B. Recreate the development instance with a public IP address. Create an A record for the development website that has the value set to the public IP address of the development instance.

C. Create an A record for the development website that has the value set in the ALB. Create a listener rule in the ALB to redirect requests from the development website to the public IP address of the development instance.

D. Place all instances in the same target group. Create an A record for the development website. Set the value to the ALB. Create a listener rule in the ALB that forwards requests for the development website to the target group.

A

A. Create an A record for the development website that has the value set in the ALB. Create a listener rule on the ALB that forwards requests for the development website to the target group that contains the development instance.

● This option sets up a DNS record pointing to the ALB, ensuring that requests for the development website are routed to the load balancer.
● By creating a listener rule on the ALB, requests for the development website can be forwarded to the target group containing the development instance.
● This setup allows for automatic routing to the development instance even if it is replaced, as long as the instance is registered in the target group.
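A sketch of option A with boto3: an alias A record for the development hostname pointing at the ALB, plus a listener rule that forwards that hostname to the development target group. The hosted zone ID, hostname, and ALB/target group values are placeholders.

    import boto3

    elbv2 = boto3.client("elbv2")
    route53 = boto3.client("route53")

    # Alias A record: dev.example.com -> the ALB (survives instance replacement)
    route53.change_resource_record_sets(
        HostedZoneId="Z0EXAMPLE",                           # placeholder hosted zone
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "dev.example.com",
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": "Z35SXDOTRQ7X7K",       # ALB's canonical hosted zone ID (placeholder)
                    "DNSName": "my-alb-1234567890.us-east-1.elb.amazonaws.com",
                    "EvaluateTargetHealth": False,
                },
            },
        }]},
    )

    # Listener rule: requests for the dev hostname go to the development target group
    elbv2.create_rule(
        ListenerArn="arn:aws:elasticloadbalancing:...:listener/app/my-alb/...",          # placeholder
        Priority=10,
        Conditions=[{"Field": "host-header", "HostHeaderConfig": {"Values": ["dev.example.com"]}}],
        Actions=[{"Type": "forward", "TargetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/dev/..."}],  # placeholder
    )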

Other Options:
C. Create an A Record for the development website pointing to the ALB, with a listener rule to redirect requests to the public IP of the development instance:
● Similar to Option A, this option uses an A Record to point to the ALB and creates a listener rule to handle requests for the development website.
● However, instead of forwarding requests to the target group, it redirects them to the public IP of the development instance.
● While this setup may work, it introduces complexity and potential performance overhead due to the redirection, and it may not provide seamless failover if the development instance is replaced.

257
Q

829#A company runs a container application in a Kubernetes cluster in the company's data center. The application uses Advanced Message Queuing Protocol (AMQP) to communicate with a message queue. The data center cannot scale fast enough to meet the company's growing business needs. The company wants to migrate workloads to AWS. Which solution will meet these requirements with the LEAST operational overhead?

A. Migrate the container application to Amazon Elastic Container Service (Amazon ECS). Use Amazon Simple Queue Service (Amazon SQS) to retrieve messages.

B. Migrate the container application to Amazon Elastic Kubernetes Service (Amazon EKS). Use Amazon MQ to retrieve messages.

C. Use highly available Amazon EC2 instances to run the application. Use Amazon MQ to retrieve messages.

D. Use AWS Lambda functions to run the application. Use Amazon Simple Queue Service (Amazon SQS) to retrieve messages.

A

B. Migrate the container application to Amazon Elastic Kubernetes Service (Amazon EKS). Use Amazon MQ to retrieve messages.

● Amazon MQ supports industry-standard messaging protocols, including AMQP, making it a suitable option for applications that require AMQP support.
● By using Amazon EKS for container orchestration and Amazon MQ for message queuing, the company can meet the requirements while minimizing operational overhead.

NOTE: SQS does not support AMQP.
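To illustrate why Amazon MQ fits here, a containerized worker can keep speaking AMQP with only the broker endpoint changing. This sketch assumes an Amazon MQ for RabbitMQ broker (AMQP 0-9-1, via the pika library); the endpoint, credentials, and queue name are placeholders.

    import pika  # pip install pika

    # Amazon MQ (RabbitMQ) exposes an AMQPS endpoint; only the URL changes for the migrated app
    params = pika.URLParameters(
        "amqps://app_user:example-secret@b-1234abcd.mq.us-east-1.amazonaws.com:5671/%2f"
    )
    connection = pika.BlockingConnection(params)
    channel = connection.channel()

    channel.queue_declare(queue="jobs", durable=True)
    channel.basic_publish(exchange="", routing_key="jobs", body=b'{"task": "resize-image"}')

    connection.close()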

258
Q

830#An online gaming company hosts its platform on Amazon EC2 instances behind Network Load Balancers (NLB) in multiple AWS Regions. NLBs can route requests to targets over the Internet. The company wants to improve the customer gaming experience by reducing end-to-end loading time for its global customer base. What solution will meet these requirements?

A. Create application load balancers (ALBs) in each region to replace existing NLBs. Register existing EC2 instances as targets for ALBs in each region.

B. Configure Amazon Route 53 to route traffic with equal weight to the NLBs in each region.

C. Create additional NLB and EC2 instances in other regions where the company has a large customer base.

D. Create a standard accelerator in AWS Global Accelerator. Configure the existing NLBs as destination endpoints.

A

D. Create a standard accelerator in AWS Global Accelerator. Configure the existing NLBs as destination endpoints.

● AWS Global Accelerator is a networking service that improves the availability and performance of your applications with local and global traffic load balancing, as well as health checks.
● By configuring the existing NLBs as target endpoints in Global Accelerator, traffic can be intelligently routed over AWS’s global network to the closest entry point to the AWS network, reducing the end-to-end load time for customers globally.
● This solution provides a centralized approach to optimize global traffic flow without requiring changes to the existing infrastructure setup.
Considering the requirement to reduce end-to-end load time for the global customer base, option D, utilizing AWS Global Accelerator, would be the most effective solution. It provides a centralized and efficient way to optimize traffic routing globally, leading to improved customer experience with reduced latency.
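A sketch of option D with boto3 (the Global Accelerator API is served from us-west-2; the NLB ARNs and Regions are placeholders): one accelerator, a TCP listener, and an endpoint group per Region pointing at the existing NLBs.

    import uuid
    import boto3

    ga = boto3.client("globalaccelerator", region_name="us-west-2")

    acc = ga.create_accelerator(Name="gaming-platform", IdempotencyToken=str(uuid.uuid4()))["Accelerator"]

    listener = ga.create_listener(
        AcceleratorArn=acc["AcceleratorArn"],
        Protocol="TCP",
        PortRanges=[{"FromPort": 443, "ToPort": 443}],
        IdempotencyToken=str(uuid.uuid4()),
    )["Listener"]

    # One endpoint group per Region, each pointing at that Region's existing NLB
    for region, nlb_arn in {
        "us-east-1": "arn:aws:elasticloadbalancing:us-east-1:111111111111:loadbalancer/net/game-use1/abc",
        "eu-west-1": "arn:aws:elasticloadbalancing:eu-west-1:111111111111:loadbalancer/net/game-euw1/def",
    }.items():
        ga.create_endpoint_group(
            ListenerArn=listener["ListenerArn"],
            EndpointGroupRegion=region,
            EndpointConfigurations=[{"EndpointId": nlb_arn, "Weight": 128}],
            IdempotencyToken=str(uuid.uuid4()),
        )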

259
Q

831#A company has an on-premises application that uses SFTP to collect financial data from multiple vendors. The company is migrating to the AWS Cloud. The company has created an application that uses Amazon S3 APIs to upload files from vendors. Some vendors run their systems on legacy applications that do not support S3 APIs. The vendors want to continue using SFTP-based applications to upload data. The company wants to use managed services for the needs of the vendors that use legacy applications. Which solution will meet these requirements with the LEAST operational overhead?

A. Create an instance of AWS Database Migration Service (AWS DMS) to replicate storage data from vendors using legacy applications to Amazon S3. Provide vendors with credentials to access the AWS DMS instance.

B. Create an AWS Transfer Family endpoint for vendors that use legacy applications.

C. Configure an Amazon EC2 instance to run an SFTP server. Instruct vendors using legacy applications to use the SFTP server to upload data.

D. Configure an Amazon S3 file gateway for vendors that use legacy applications to upload files to an SMB file share.

A

B. Create an AWS Transfer Family endpoint for vendors that use legacy applications.

● AWS Transfer Family provides fully managed SFTP, FTPS, and FTP servers for easy migration of file transfer workloads to AWS.
● With AWS Transfer Family, you can create SFTP endpoints that allow vendors with legacy applications to upload data securely to Amazon S3 using their existing SFTP clients.
● This solution eliminates the need for managing infrastructure or servers, as AWS handles the underlying infrastructure, scaling, and maintenance.

260
Q

832#A marketing team wants to build a campaign for an upcoming multi-sports event. The team has news reports for the last five years in PDF format. The team needs a solution to extract information about content and sentiment from news reports. The solution must use Amazon Textract to process news reports. Which solution will meet these requirements with the LEAST operating overhead?

A. Provide the extracted information to Amazon Athena for analysis. Store the extracted information and analysis in an Amazon S3 bucket.

B. Store the extracted knowledge in an Amazon DynamoDB table. Use Amazon SageMaker to create a sentiment model.

C. Provide the extracted insights to Amazon Comprehend for analysis. Save the analysis to an Amazon S3 bucket.

D. Store the extracted insights in an Amazon S3 bucket. Use Amazon QuickSight to visualize and analyze data.

A

C. Provide the extracted insights to Amazon Comprehend for analysis. Save the analysis to an Amazon S3 bucket.

● Amazon Comprehend is a fully managed natural language processing (NLP) service that can perform sentiment analysis on text data.
● Sending the extracted insights directly to Comprehend for sentiment analysis reduces operational overhead as Comprehend handles the analysis.
● Saving the analysis results to S3 allows for further storage and downstream processing if needed.
● This approach minimizes the need for additional setup or management, as Comprehend is fully managed by AWS.
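A small sketch of the Comprehend step in option C, assuming the report text has already been extracted by Amazon Textract; the bucket and key names are placeholders.

    import json
    import boto3

    comprehend = boto3.client("comprehend")
    s3 = boto3.client("s3")

    extracted_text = "The host city celebrated a record-breaking opening ceremony..."  # Textract output (assumed)

    sentiment = comprehend.detect_sentiment(Text=extracted_text, LanguageCode="en")

    s3.put_object(
        Bucket="campaign-analysis",                 # placeholder
        Key="reports/2019-opening-ceremony.json",
        Body=json.dumps({
            "sentiment": sentiment["Sentiment"],            # e.g. POSITIVE / NEGATIVE / NEUTRAL / MIXED
            "scores": sentiment["SentimentScore"],
        }),
    )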

261
Q

833#A company’s application runs on Amazon EC2 instances that are located in multiple availability zones. The application needs to ingest real-time data from third-party applications. The company needs a data ingestion solution that places the ingested raw data into an Amazon S3 bucket. What solution will meet these requirements?

A. Create Amazon Kinesis data streams for data ingestion. Create Amazon Kinesis Data Firehose delivery streams to consume Kinesis data streams. Specify the S3 bucket as the destination for delivery streams.

B. Create database migration tasks in AWS Database Migration Service (AWS DMS). Specify replication instances of the EC2 instances as the source endpoints. Specify the S3 bucket as the destination endpoint. Set the migration type to migrate existing data and replicate ongoing changes.

C. Create and configure AWS DataSync agents on the EC2 instances. Configure DataSync tasks to transfer data from EC2 instances to the S3 bucket.

D. Create an AWS Direct Connect connection to the application for data ingestion. Create Amazon Kinesis Data Firehose delivery streams to consume direct PUT operations from your application. Specify the S3 bucket as the destination for delivery streams.

A

A. Create Amazon Kinesis data streams for data ingestion. Create Amazon Kinesis Data Firehose delivery streams to consume Kinesis data streams. Specify the S3 bucket as the destination for delivery streams.

● Kinesis Data Streams is well-suited for real-time data ingestion scenarios, allowing applications to ingest and process large streams of data in real time.
● Data Firehose can then deliver the processed data to S3, providing scalability and reliability for data delivery.
● This solution is suitable for handling continuous streams of data from third-party applications in real time.

Considering the requirement to ingest real-time data and place the raw data into an S3 bucket, option A is the most suitable. Kinesis Data Streams handles real-time ingestion from the third-party applications, and Kinesis Data Firehose delivers the raw records to the S3 bucket without any servers to manage.
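A sketch of option A with boto3: an on-demand Kinesis data stream, and a Firehose delivery stream that reads from it and writes the raw records to the S3 bucket. Stream names, roles, and the bucket are placeholders.

    import boto3

    kinesis = boto3.client("kinesis")
    firehose = boto3.client("firehose")

    kinesis.create_stream(StreamName="ingest-raw", StreamModeDetails={"StreamMode": "ON_DEMAND"})

    firehose.create_delivery_stream(
        DeliveryStreamName="ingest-raw-to-s3",
        DeliveryStreamType="KinesisStreamAsSource",
        KinesisStreamSourceConfiguration={
            "KinesisStreamARN": "arn:aws:kinesis:us-east-1:111111111111:stream/ingest-raw",
            "RoleARN": "arn:aws:iam::111111111111:role/FirehoseReadKinesis",        # placeholder
        },
        S3DestinationConfiguration={
            "RoleARN": "arn:aws:iam::111111111111:role/FirehoseWriteS3",            # placeholder
            "BucketARN": "arn:aws:s3:::raw-ingest-bucket",                          # placeholder
        },
    )

    # Producers (the EC2-hosted application) then push records with kinesis.put_record(...)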

262
Q

834#A company application is receiving data from multiple data sources. Data size varies and is expected to increase over time. The current maximum size is 700 KB. The volume and size of data continues to grow as more data sources are added. The company decides to use Amazon DynamoDB as the primary database for the application. A solutions architect needs to identify a solution that handles large data sizes. Which solution will meet these requirements in the MOST operationally efficient manner?

A. Create an AWS Lambda function to filter data that exceeds DynamoDB item size limits. Store larger data in an Amazon DocumentDB database (with MongoDB support).

B. Store the large data as objects in an Amazon S3 bucket. In a DynamoDB table, create an item that has an attribute that points to the S3 URL of the data.

C. Split all incoming large data into a collection of items that have the same partition key. Write data to a DynamoDB table in a single operation using the BatchWriteItem API operation.

D. Create an AWS Lambda function that uses gzip compression to compress large objects as they are written to a DynamoDB table.

A

B. Store the large data as objects in an Amazon S3 bucket. In a DynamoDB table, create an item that has an attribute that points to the S3 URL of the data.

● This approach leverages the scalability and cost-effectiveness of Amazon S3 for storing large objects.
● DynamoDB stores metadata or pointers to the objects in S3, allowing efficient retrieval when needed.
● It’s a commonly used pattern for handling large payloads in DynamoDB, providing a scalable and efficient solution.

Storing large data in Amazon S3 and referencing them in DynamoDB allows leveraging the scalability and cost-effectiveness of S3 for storing large objects while keeping DynamoDB lightweight and efficient for metadata and quick lookups. This approach simplifies data management, retrieval, and scalability, making it a practical and efficient solution for handling large data volumes in DynamoDB.
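A sketch of the S3-pointer pattern from option B (the bucket, table, and attribute names are assumptions): the payload goes to S3 and the DynamoDB item stores only metadata plus the object's location.

    import uuid
    import boto3

    s3 = boto3.client("s3")
    table = boto3.resource("dynamodb").Table("events")   # placeholder table

    def store_event(source_id: str, payload: bytes) -> None:
        # Large payload lives in S3, well beyond DynamoDB's 400 KB item limit
        key = f"payloads/{source_id}/{uuid.uuid4()}.bin"
        s3.put_object(Bucket="app-large-payloads", Key=key, Body=payload)   # placeholder bucket

        # DynamoDB keeps lightweight metadata and a pointer to the object
        table.put_item(Item={
            "source_id": source_id,
            "payload_s3_uri": f"s3://app-large-payloads/{key}",
            "size_bytes": len(payload),
        })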

263
Q

835#A company is migrating a legacy application from an on-premises data center to AWS. The application is based on hundreds of cron jobs that run between 1 and 20 minutes at different recurring times throughout the day. The company wants a solution to schedule and run cron jobs on AWS with minimal refactoring. The solution must support running cron jobs in response to an event in the future. What solution will meet these requirements?

A. Create a container image for cron jobs. Use Amazon EventBridge Scheduler to create a recurring schedule. Run the cron job tasks as AWS Lambda functions.

B. Create a container image for cron jobs. Use AWS Batch on Amazon Elastic Container Service (Amazon ECS) with a scheduling policy to run cron jobs.

C. Create a container image for the cron jobs. Use Amazon EventBridge Scheduler to create a recurring schedule. Run the cron job tasks on AWS Fargate.

D. Create a container image for cron jobs. Create a workflow in AWS Step Functions that uses a wait state to run cron jobs at a specific time. Use the RunTask action to run cron job tasks in AWS Fargate.

A

C. Create a container image for the cron jobs. Use Amazon EventBridge Scheduler to create a recurring schedule. Run the cron job tasks on AWS Fargate.

● This option aligns with the recommended approach in the provided resource. It suggests using Fargate to run the containerized cron jobs triggered by EventBridge Scheduler. Fargate provides serverless compute for containers, allowing for easy scaling and management without the need to provision or manage servers.

Based on the provided information and the recommended approach in the resource, option C appears to be the most suitable solution. It leverages Amazon EventBridge Scheduler for scheduling and AWS Fargate for running the containerized cron jobs, providing a scalable and efficient solution with minimal operational overhead.
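A rough sketch of option C with boto3, assuming the ECS cluster, task definition, IAM role, and subnets already exist (all ARNs and IDs are placeholders). The same call with an at(...) expression covers the one-time, run-in-the-future case.

    import boto3

    scheduler = boto3.client("scheduler")

    scheduler.create_schedule(
        Name="nightly-report-job",
        ScheduleExpression="cron(30 2 * * ? *)",        # recurring; use "at(2025-01-01T09:00:00)" for a one-off
        FlexibleTimeWindow={"Mode": "OFF"},
        Target={
            "Arn": "arn:aws:ecs:us-east-1:111111111111:cluster/cron-jobs",                  # placeholder cluster
            "RoleArn": "arn:aws:iam::111111111111:role/SchedulerRunTask",                   # placeholder
            "EcsParameters": {
                "TaskDefinitionArn": "arn:aws:ecs:us-east-1:111111111111:task-definition/report:3",  # placeholder
                "LaunchType": "FARGATE",
                "NetworkConfiguration": {
                    "awsvpcConfiguration": {"Subnets": ["subnet-aaa"], "AssignPublicIp": "DISABLED"},
                },
            },
        },
    )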

264
Q

836#A company uses Salesforce. The company needs to load existing data and ongoing data changes from Salesforce to Amazon Redshift for analysis. The company does not want data to be transmitted over the public Internet. Which solution will meet these requirements with the LEAST development effort?

A. Establish a VPN connection from the VPC to Salesforce. Use AWS Glue DataBrew to transfer data.

B. Establish an AWS Direct Connect connection from the VPC to Salesforce. Use AWS Glue DataBrew to transfer data.

C. Create an AWS PrivateLink connection in the VPC to Salesforce. Use Amazon AppFlow to transfer data.

D. Create a VPC peering connection to Salesforce. Use Amazon AppFlow to transfer data.

A

C. Create an AWS PrivateLink connection in the VPC to Salesforce. Use Amazon AppFlow to transfer data.

AWS PrivateLink Connection: AWS PrivateLink allows you to securely connect your VPC to supported AWS services and Salesforce privately, without using the public internet. This ensures data transfer occurs over private connections, enhancing security and compliance.
Amazon AppFlow: Amazon AppFlow is a fully managed integration service that enables you to securely transfer data between AWS services and SaaS applications like Salesforce. It provides pre-built connectors for Salesforce, simplifying the data transfer process without the need for custom development.
Least Development Effort: Option C offers the least development effort because it leverages the capabilities of AWS PrivateLink and Amazon AppFlow, which are managed services. You do not need to build or maintain custom VPN connections (Option A and B) or manage VPC peering connections (Option D). Instead, you can quickly set up the PrivateLink connection and configure data transfer using AppFlow’s user-friendly interface, reducing development time and effort.
Therefore, Option C is the most efficient solution with the least development effort while meeting the company’s requirement to securely transfer data from Salesforce to Amazon Redshift without using the public internet.

265
Q

837#A company recently migrated its application to AWS. The application runs on Amazon EC2 Linux instances in an Auto Scaling group across multiple Availability Zones. The application stores data on an Amazon Elastic File System (Amazon EFS) file system that uses EFS Standard-Infrequent Access storage. The application indexes company files. The index is stored in an Amazon RDS database. The company needs to optimize storage costs with some changes to the applications and services. Which solution will meet these requirements in the MOST cost-effective way?

A. Create an Amazon S3 bucket that uses an Intelligent-Tiering lifecycle policy. Copy all files to the S3 bucket. Update the application to use the Amazon S3 API to store and retrieve files.

B. Deploy Amazon FSx for Windows File Server file shares. Update the application to use the CIFS protocol to store and retrieve files.

C. Deploy Amazon FSx for OpenZFS file system shares. Update the application to use the new mount point to store and retrieve files.

D. Create an Amazon S3 bucket that uses S3 Glacier Flexible Retrieval. Copy all files to S3 bucket. Update the application to use the Amazon S3 API to store and retrieve files as standard fetches.

A

A. Create an Amazon S3 bucket that uses an Intelligent-Tiering lifecycle policy. Copy all files to the S3 bucket. Update the application to use the Amazon S3 API to store and retrieve files.

  • Amazon S3 Intelligent-Tiering: This option leverages S3 Intelligent-Tiering, which automatically moves objects between two access tiers: frequent access and infrequent access, based on their access patterns. This can help optimize storage costs by moving less frequently accessed data to the infrequent access tier.
  • Application Update: The application needs to be updated to use the Amazon S3 API for storing and retrieving files instead of Amazon EFS. This requires development effort to modify the application’s file storage logic.
  • Cost-Effectiveness: This option can be cost-effective as it leverages S3 Intelligent-Tiering, which automatically adjusts storage costs based on access patterns. However, it requires effort to migrate data from Amazon EFS to S3 and update the application.

Among the options, Option A (using Amazon S3 Intelligent-Tiering) appears to be the most cost-effective solution as it leverages S3 Intelligent-Tiering’s automatic tiering based on access patterns, potentially reducing storage costs without significant application changes. However, the specific choice depends on factors such as the application’s compatibility with S3 APIs and the effort involved in migrating data and updating the application logic.
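A minimal sketch of the application change for option A (the bucket and key are placeholders): files are written through the S3 API directly into the Intelligent-Tiering storage class, so S3 moves them between access tiers automatically.

    import boto3

    s3 = boto3.client("s3")

    with open("indexed-file.pdf", "rb") as f:                   # placeholder local file
        s3.put_object(
            Bucket="company-file-index",                        # placeholder bucket
            Key="documents/indexed-file.pdf",
            Body=f,
            StorageClass="INTELLIGENT_TIERING",
        )

    # Reads go through get_object; the RDS index would store the bucket/key instead of an EFS path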

266
Q

838#A robotics company is designing a solution for medical surgery. The robots will use advanced sensors, cameras and AI algorithms to sense their surroundings and complete surgeries. The company needs a public load balancer in the AWS cloud that ensures seamless communication with backend services. The load balancer must be able to route traffic based on query strings to different target groups. Traffic must also be encrypted. What solution will meet these requirements?

A. Use a network load balancer with an attached AWS Certificate Manager (ACM) certificate. Use routing based on query parameters.

B. Use a gateway load balancer. Import a generated certificate into AWS Identity and Access Management (IAM). Attach the certificate to the load balancer. Use HTTP path-based routing.

C. Use an application load balancer with a certificate attached from AWS Certificate Manager (ACM). Use query parameters-based routing.

D. Use a network load balancer. Import a generated certificate into AWS Identity and Access Management (IAM). Attach the certificate to the load balancer. Use routing based on query parameters.

A

C. Use an application load balancer with a certificate attached from AWS Certificate Manager (ACM). Use query parameters-based routing.

● Application Load Balancer (ALB) is designed to route traffic at the application layer (Layer 7) of the OSI model. It supports advanced routing features such as HTTP and HTTPS traffic routing based on various attributes, including HTTP headers, URL paths, and query parameters.
● ACM Certificate can be attached to ALB to ensure that traffic to the load balancer is encrypted.
● ALB supports query parameter-based routing, allowing you to route traffic based on specific parameters within the HTTP request. This aligns with the requirement for routing traffic based on query strings.

Option C (Use an Application Load Balancer with query parameter-based routing and ACM certificate) aligns with the requirements of ensuring seamless communication with backend services and routing traffic based on query strings. ALB’s support for query parameter-based routing makes it suitable for the scenario described, providing flexibility and ease of configuration for routing traffic based on specific criteria.
https://exampleloadbalancer.com/advanced_request_routing_queryparam_overview.html
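A small sketch of the query-string routing piece of option C (the listener and target group ARNs, and the parameter name, are placeholders). The HTTPS listener itself carries the ACM certificate so traffic to the ALB is encrypted.

    import boto3

    elbv2 = boto3.client("elbv2")

    # Route requests like https://api.example.com/telemetry?robot=surgical-arm to a dedicated target group
    elbv2.create_rule(
        ListenerArn="arn:aws:elasticloadbalancing:...:listener/app/robotics-alb/...",      # placeholder
        Priority=5,
        Conditions=[{
            "Field": "query-string",
            "QueryStringConfig": {"Values": [{"Key": "robot", "Value": "surgical-arm"}]},
        }],
        Actions=[{"Type": "forward",
                  "TargetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/surgical-backend/..."}],  # placeholder
    )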

267
Q

839#A company has an application that runs on a single Amazon EC2 instance. The application uses a MySQL database running on the same EC2 instance. The business needs a highly available and automatically scalable solution to handle increased traffic. What solution will meet these requirements?

A. Deploy the application to EC2 instances running in an auto-scaling group behind an application load balancer. Create an Amazon Redshift cluster that has multiple MySQL-compatible nodes.

B. Deploy the application to EC2 instances that are configured as a target group behind an application load balancer. Create an Amazon RDS for MySQL cluster that has multiple instances.

C. Deploy the application to EC2 instances that run in an auto-scaling group behind an application load balancer. Create an Amazon Aurora serverless MySQL cluster for the database layer.

D. Deploy the application to EC2 instances that are configured as a target group behind an application load balancer. Create an Amazon ElastiCache for Redis cluster that uses the MySQL connector.

A

C. Deploy the application to EC2 instances that run in an auto-scaling group behind an application load balancer. Create an Amazon Aurora serverless MySQL cluster for the database layer.

● High Availability: Amazon Aurora automatically replicates data across multiple Availability Zones, providing built-in high availability. This ensures that the database remains accessible even in the event of an AZ failure.
● Scalability: Amazon Aurora Serverless automatically adjusts database capacity based on application demand, scaling compute and storage capacity up or down. This provides seamless scalability without the need for manual intervention.

268
Q

840#A company is planning to migrate data to an Amazon S3 bucket. The data must be encrypted at rest within the S3 bucket. The encryption key must be rotated automatically every year. Which solution will meet these requirements with the LEAST operational overhead?

A. Migrate data to the S3 bucket. Use server-side encryption with Amazon S3 Managed Keys (SSE-S3). Use the built-in key rotation behavior of SSE-S3 encryption keys.

B. Create an AWS Key Management Service (AWS KMS) customer-managed key. Enable automatic key rotation. Set the default S3 bucket encryption behavior to use the customer-managed KMS key. Migrate data to S3 bucket.

C. Create an AWS Key Management Service (AWS KMS) customer-managed key. Set the default S3 bucket encryption behavior to use the customer-managed KMS key. Migrate the data to the S3 bucket. Manually rotate the KMS key every year.

D. Use customer key material to encrypt the data. Migrate the data to the S3 bucket. Create an AWS Key Management Service (AWS KMS) key without key material. Import the customer key material into the KMS key. Enable automatic key rotation.

A

A. Migrate data to the S3 bucket. Use server-side encryption with Amazon S3 Managed Keys (SSE-S3). Use the built-in key rotation behavior of SSE-S3 encryption keys.

● Encryption at Rest: Server-side encryption with S3 managed keys (SSE-S3) automatically encrypts objects at rest in S3 using strong encryption.
● Automatic Key Rotation: SSE-S3 keys are managed by AWS, and key rotation is handled automatically without any additional configuration or operational overhead.
● Operational Overhead: This option has the least operational overhead as key rotation is automatically managed by AWS.

Based on the evaluation, Option A (SSE-S3 with automatic key rotation) meets the requirements with the least operational overhead, as it leverages AWS-managed keys with automatic key rotation, eliminating the need for manual key management tasks.
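
For comparison, the default-encryption setting in option A is a one-time bucket configuration. A minimal boto3 sketch is shown below (the bucket name is a placeholder), with the option B alternative included only for contrast:

    import boto3

    s3 = boto3.client("s3")
    kms = boto3.client("kms")

    # Option A: default SSE-S3 encryption; keys and rotation are fully managed by AWS.
    s3.put_bucket_encryption(
        Bucket="example-migration-bucket",
        ServerSideEncryptionConfiguration={
            "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
        },
    )

    # Option B (more overhead): a customer-managed KMS key with yearly automatic rotation.
    key = kms.create_key(Description="S3 default encryption key")
    kms.enable_key_rotation(KeyId=key["KeyMetadata"]["KeyId"])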

269
Q

841#A company is migrating applications from an on-premises Microsoft Active Directory that the company manages to AWS. The company deploys the applications to multiple AWS accounts. The company uses AWS Organizations to centrally manage the accounts. The company’s security team needs a single sign-on solution across all of the company’s AWS accounts. The company must continue to manage users and groups in the on-premises Active Directory. Which solution will meet these requirements?

A. Create an Enterprise Edition Active Directory in AWS Directory Service for Microsoft Active Directory. Configure the Active Directory to be the identity source for AWS IAM Identity Center.

B. Enable AWS IAM Identity Center. Set up a two-way forest trust relationship to connect the company’s self-managed Active Directory with IAM Identity Center by using AWS Directory Service for Microsoft Active Directory.

C. Use the AWS directory service and create a two-way trust relationship with the company’s self-managed Active Directory.

D. Implement an identity provider (IdP) on Amazon EC2. Link the IdP as an identity source within the AWS IAM Identity Center.

A

B. Enable AWS IAM Identity Center. Set up a two-way forest trust relationship to connect the company’s self-managed Active Directory with IAM Identity Center by using AWS Directory Service for Microsoft Active Directory.

● This option involves establishing a trust relationship between the company’s on-premises Active Directory and AWS IAM Identity Center using AWS Directory Service for Microsoft AD.
● It allows for single sign-on across AWS accounts by leveraging the existing on-premises Active Directory for user authentication.
● With a two-way trust relationship, users and groups managed in the on-premises Active Directory can be used to access AWS resources without needing to duplicate user management efforts.

Based on the requirements and evaluation, Option B (establishing a two-way trust relationship between the company’s on-premises Active Directory and AWS IAM Identity Center) appears to be the most suitable solution. It allows for single sign-on across AWS accounts while leveraging the existing user management capabilities of the on-premises Active Directory.
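
If the trust in option B were scripted, it might look roughly like the boto3 sketch below; the directory ID, domain name, and DNS IP addresses are placeholder assumptions, and the matching trust must also be created on the on-premises domain controllers:

    import boto3

    ds = boto3.client("ds")

    # Two-way forest trust between AWS Managed Microsoft AD and the on-premises forest.
    ds.create_trust(
        DirectoryId="d-9067012345",                  # existing AWS Managed Microsoft AD
        RemoteDomainName="corp.example.com",         # on-premises AD forest (placeholder)
        TrustPassword="REPLACE_ME",                  # same password set on the on-premises side
        TrustDirection="Two-Way",
        TrustType="Forest",
        ConditionalForwarderIpAddrs=["10.0.10.10", "10.0.10.11"],  # on-premises DNS servers
    )
    # IAM Identity Center is then pointed at this directory as its identity source
    # (configured in the IAM Identity Center console).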

Other Options:
A. Create an Enterprise Edition Active Directory in AWS Directory Service for Microsoft Active Directory. Configure the Active Directory to be the identity source for AWS IAM Identity Center.
● This option involves deploying an AWS Managed Microsoft AD in AWS Directory Service and configuring it as the identity source for IAM Identity Center.
● While this setup can provide integration between AWS IAM and the AWS Managed Microsoft AD, it does not directly integrate with the company’s on-premises Active Directory.
● Users and groups from the on-premises Active Directory would need to be synchronized or manually managed in the AWS Managed Microsoft AD, which could add complexity and overhead.

C. Use AWS Directory Service and create a two-way trust relationship with the company’s self-managed Active Directory.
● Similar to option B, this option involves establishing a trust relationship between the company’s on-premises Active Directory and AWS Directory Service.
● However, AWS Directory Service does not inherently provide IAM Identity Center capabilities. Additional configuration would be needed to integrate with IAM for single sign-on across AWS accounts.
D. Deploy an identity provider (IdP) on Amazon EC2. Link the IdP as an identity source within AWS IAM Identity Center.
● This option involves deploying and managing an identity provider (IdP) on Amazon EC2, which adds operational overhead.
● It also requires manual configuration to link the IdP as an identity source within AWS IAM Identity Center.
● While it’s technically feasible, it may not be the most efficient or scalable solution compared to utilizing AWS
Directory Service or IAM Identity Center directly.

270
Q

842#A company is planning to deploy its application to an Amazon Aurora PostgreSQL Serverless v2 cluster. The application will receive large amounts of traffic. The company wants to optimize the cluster’s storage performance as the load on the application increases. Which solution will meet these requirements in the MOST cost-effective way?

A. Configure the cluster to use the standard Aurora storage configuration.

B. Set the cluster storage type to Provisioned IOPS.

C. Configure the cluster storage type to General Purpose.

D. Configure the cluster to use the Aurora I/O optimized storage configuration.

A

C. Configure the cluster storage type to General Purpose.

Based on the requirements for optimizing storage performance while maintaining cost-effectiveness, Option C (configuring the cluster storage type as General Purpose) seems to be the most suitable choice. It offers a good balance between performance and cost, making it well-suited for handling varying levels of traffic without incurring excessive expenses.

● General Purpose storage provides a baseline level of performance with the ability to burst to higher levels when needed.
● It offers a good balance of performance and cost, making it suitable for workloads with varying levels of activity.
● This option can provide adequate performance for the application while optimizing costs, especially if the workload experiences periodic spikes in traffic.

Other Options:
A. Configure the cluster to use the Aurora Standard storage configuration.
● The Aurora Standard storage configuration provides a balance of performance and cost.
● It dynamically adjusts storage capacity based on the workload’s needs.
● However, it may not provide the highest level of performance during peak traffic periods, as it prioritizes cost-effectiveness over performance.
B. Configure the cluster storage type as Provisioned IOPS.
● Provisioned IOPS (input/output operations per second) allows you to specify a consistent level of I/O performance by provisioning a specific amount of IOPS.
● While this option ensures predictable performance, it may not be the most cost-effective solution, especially if the application workload varies significantly over time.
● Provisioned IOPS typically incurs higher costs compared to other storage types.

D. Configure the cluster to use the Aurora I/O-Optimized storage configuration.
● Aurora I/O-Optimized storage is designed to deliver high levels of I/O performance for demanding workloads.
● It is optimized for applications with high throughput and low latency requirements.
● While this option may provide the best performance, it may also come with higher costs compared to other storage
configurations.

271
Q

843#A financial services company running on AWS has designed its security controls to meet industry standards. Industry standards include the National Institute of Standards and Technology (NIST) and the Payment Card Industry Data Security Standard (PCI DSS). The company’s external auditors need evidence that the designed controls have been implemented and are working correctly. The company has hundreds of AWS accounts in a single organization in AWS Organizations. The company needs to monitor the current status of controls across all accounts. What solution will meet these requirements?

A. Designate an account as the Amazon Inspector delegated administrator account for your organization’s management account. Integrate Inspector with organizations to discover and scan resources across all AWS accounts. Enable inspector industry standards for NIST and PCI DSS.

B. Designate an account as the Amazon GuardDuty delegated administrator account from the organization management account. In the designated GuardDuty administrator account, enable GuardDuty to protect all member accounts. Enable GuardDuty industry standards for NIST and PCI DSS.

C. Configure an AWS CloudTrail organization trail in the organization management account. Designate one account as the compliance account. Enable CloudTrail security standards for NIST and PCI DSS in the compliance account.

D. Designate one account as the AWS Security Hub delegated administrator account from the organization management account. In the designated Security Hub administrator account, enable Security Hub for all member accounts. Enable Security Hub standards for NIST and PCI DSS.

A

D. Designate one account as the AWS Security Hub delegated administrator account from the organization management account. In the designated Security Hub administrator account, enable Security Hub for all member accounts. Enable Security Hub standards for NIST and PCI DSS.

D. AWS Security Hub (Correct):
● Explanation: This option designates one AWS account as the AWS Security Hub delegated administrator account and enables Security Hub for all member accounts within the organization. NIST and PCI DSS standards are enabled for compliance checks.
● Why it’s correct: AWS Security Hub is specifically designed for centralized security monitoring and compliance checks across AWS environments. It aggregates findings from various security services and third-party tools, providing a comprehensive view of security alerts and compliance status. By enabling Security Hub standards for NIST and PCI DSS, the company can ensure continuous evaluation of its security posture against these industry standards across all AWS accounts within the organization.
In summary, option D (AWS Security Hub) is the correct choice for meeting the company’s requirements of monitoring security controls across multiple AWS accounts while ensuring compliance with industry standards like NIST and PCI DSS. It offers centralized monitoring, comprehensive security insights, and continuous compliance checks, making it the most suitable solution for the scenario provided.
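
A hedged boto3 sketch of option D is shown below; the account ID and standards ARNs are illustrative (use securityhub.describe_standards() to confirm the exact ARNs in your Region):

    import boto3

    # Run from the Organizations management account: delegate Security Hub administration.
    boto3.client("securityhub").enable_organization_admin_account(
        AdminAccountId="111122223333"  # placeholder delegated administrator account
    )

    # Run from the delegated administrator account.
    sh = boto3.client("securityhub")
    sh.update_organization_configuration(AutoEnable=True)  # enable Security Hub in member accounts

    sh.batch_enable_standards(
        StandardsSubscriptionRequests=[
            {"StandardsArn": "arn:aws:securityhub:us-east-1::standards/nist-800-53/v/5.0.0"},
            {"StandardsArn": "arn:aws:securityhub:us-east-1::standards/pci-dss/v/3.2.1"},
        ]
    )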

Other Options:
A. Amazon Inspector (Incorrect):
● Explanation: This option involves designating one AWS account as the Amazon Inspector delegated administrator account and integrating it with AWS Organizations. Inspector would be used to discover and scan resources across all AWS accounts, with NIST and PCI DSS industry standards enabled for security assessments.
● Why it’s incorrect: While Amazon Inspector can perform security assessments, it’s primarily focused on host and application-level vulnerabilities. While it can provide valuable insights into specific vulnerabilities, it may not offer the comprehensive monitoring and compliance checks required across the entire AWS environment. Additionally, Inspector is not optimized for continuous monitoring and compliance reporting across multiple accounts.
B. Amazon GuardDuty (Incorrect):
● Explanation: This option designates one AWS account as the GuardDuty delegated administrator account, enabling GuardDuty to protect all member accounts within the organization. NIST and PCI DSS industry standards are enabled for threat detection.
● Why it’s incorrect: Amazon GuardDuty is a threat detection service rather than a comprehensive compliance monitoring tool. While it can detect suspicious activity and potential threats, it may not provide the extensive compliance checks needed to ensure adherence to industry standards like NIST and PCI DSS. GuardDuty is more focused on detecting malicious activity rather than ensuring compliance with specific security standards.
C. AWS CloudTrail (Incorrect):
● Explanation: This option involves configuring an AWS CloudTrail organization trail in the Organizations management account and designating one account as the compliance account. CloudTrail security standards for NIST and PCI DSS are enabled in the compliance account.
● Why it’s incorrect: While AWS CloudTrail is essential for auditing and logging AWS API activity, it’s primarily focused on providing an audit trail rather than actively monitoring and ensuring compliance. CloudTrail can capture API activity and changes made to AWS resources, but it may not offer the comprehensive compliance checks and real-time monitoring capabilities needed to ensure adherence to industry standards across multiple accounts.

272
Q

844#A company uses an Amazon S3 bucket as its data lake storage platform. The S3 bucket contains a large amount of data that is accessed randomly by multiple computers and hundreds of applications. The company wants to reduce S3 storage costs and provide immediate availability for frequently accessed objects. What is the most operationally efficient solution that meets these requirements?

A. Create an S3 lifecycle rule to transition objects to the S3 intelligent tiering storage class.

B. Store objects in Amazon S3 Glacier. Use S3 Select to provide applications with access to data.

C. Use the S3 storage class analysis data to create S3 lifecycle rules to automatically transition objects to the S3 Standard-Infrequent Access (S3 Standard-IA) storage class.

D. Transition objects to the S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Create an AWS Lambda function to transition objects to the S3 Standard storage class when an application accesses them.

A

A. Create an S3 lifecycle rule to transition objects to the S3 intelligent tiering storage class.

● S3 Intelligent-Tiering automatically moves objects between two access tiers: frequent access and infrequent access.
● Objects that are frequently accessed remain in the frequent access tier, providing immediate availability.
● Objects that are infrequently accessed are moved to the infrequent access tier, reducing storage costs.
● This option provides a balance between cost optimization and immediate availability for frequently accessed
objects without requiring manual management of storage classes.

Based on the requirements for reducing storage costs and providing immediate availability for frequently accessed objects in a data lake scenario, Option A (Create an S3 Lifecycle rule to transition objects to the S3 Intelligent-Tiering storage class) appears to be the most operationally efficient solution. It automatically manages the storage tiers based on access patterns, optimizing costs while ensuring immediate availability for frequently accessed data without the need for manual intervention or complex setups.
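
The lifecycle rule in option A is a small piece of bucket configuration. A minimal boto3 sketch (the bucket name is a placeholder) could look like this:

    import boto3

    s3 = boto3.client("s3")

    # Move every object into S3 Intelligent-Tiering as soon as the rule allows (Days=0).
    s3.put_bucket_lifecycle_configuration(
        Bucket="example-data-lake-bucket",
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "all-objects-to-intelligent-tiering",
                    "Status": "Enabled",
                    "Filter": {"Prefix": ""},  # apply to the whole bucket
                    "Transitions": [{"Days": 0, "StorageClass": "INTELLIGENT_TIERING"}],
                }
            ]
        },
    )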

Other Options:
B. Store objects in Amazon S3 Glacier. Use S3 Select to provide applications with access to the data.
● Storing objects in Amazon S3 Glacier offers the lowest storage costs among S3 storage classes.
● However, retrieving data from Glacier can have retrieval latency, which may not meet the requirement for immediate
availability for frequently accessed objects.
● Using S3 Select allows applications to retrieve specific data from objects stored in Glacier without needing to
restore the entire object.
● While this option offers cost savings, it may not provide the required immediate availability for frequently accessed
objects.
C. Use data from S3 storage class analysis to create S3 Lifecycle rules to automatically transition objects to the S3 Standard-Infrequent Access (S3 Standard-IA) storage class.
● S3 storage class analysis provides insights into the access patterns of objects, helping identify objects that are candidates for transition to the infrequent access storage class.
● Automatically transitioning objects to S3 Standard-IA reduces storage costs for infrequently accessed data while maintaining availability.
● This option efficiently optimizes storage costs based on access patterns without manual intervention.
D. Transition objects to the S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Create an AWS Lambda
function to transition objects to the S3 Standard storage class when they are accessed by an application.
● Transitioning objects to S3 Standard-IA reduces storage costs for infrequently accessed data.
● Using an AWS Lambda function to transition objects back to the S3 Standard storage class when accessed by an
application ensures immediate availability for frequently accessed objects.
● However, this approach adds complexity with the need to manage and maintain the Lambda function, and it may
not be as operationally efficient as other options.

273
Q

845#A company has 5 TB of data sets. The data sets consist of 1 million user profiles and 10 million connections. User profiles have many-to-many relationship connections. The company needs an efficient way to find mutual connections of up to five levels. What solution will meet these requirements?

A. Use an Amazon S3 bucket to store the data sets. Use Amazon Athena to perform SQL JOIN queries and find connections.

B. Use Amazon Neptune to store the data sets with edges and vertices. Query the data to find connections.

C. Use an Amazon S3 bucket to store the data sets. Use Amazon QuickSight to view connections.

D. Use Amazon RDS to store your data sets with multiple tables. Perform SQL JOIN queries to find connections.

A

B. Use Amazon Neptune to store the data sets with edges and vertices. Query the data to find connections.

● Amazon Neptune is a fully managed graph database service that is optimized for storing and querying graph data.
● Graph databases like Neptune are specifically designed to handle complex relationships such as many-to-many
relationships and multi-level connections efficiently.
● With Neptune, you can use graph traversal algorithms to find mutual connections up to five levels with high
performance.
● This solution is well-suited for the requirements of efficiently querying complex relationship data.

Based on the requirements for efficiently finding mutual connections up to five levels in a dataset with many-to-many relationships, Option B (Use Amazon Neptune to store the datasets with edges and vertices) is the most suitable solution. Neptune is specifically designed for handling graph data and complex relationship queries efficiently, making it well-suited for this scenario.
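
To make the graph-traversal idea concrete, here is a hypothetical Gremlin query issued through the gremlinpython driver; the endpoint, the user vertex label, the userId property, and the connected edge label are all assumptions, not part of the question:

    from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
    from gremlin_python.process.anonymous_traversal import traversal
    from gremlin_python.process.graph_traversal import __

    conn = DriverRemoteConnection("wss://my-neptune-cluster:8182/gremlin", "g")
    g = traversal().withRemote(conn)

    # Users reachable from "alice" within up to five "connected" hops
    # who are also directly connected to "bob" (i.e., mutual connections).
    mutual = (
        g.V().has("user", "userId", "alice")
        .repeat(__.both("connected").simplePath()).times(5).emit()
        .dedup()
        .where(__.both("connected").has("user", "userId", "bob"))
        .values("userId")
        .toList()
    )
    conn.close()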

Other Options:
A. Use an Amazon S3 bucket to store the datasets. Use Amazon Athena to perform SQL JOIN queries to find connections.
● Amazon Athena allows you to run SQL queries directly on data stored in Amazon S3, making it suitable for querying large datasets.
● However, performing complex JOIN operations on large datasets might not be the most efficient approach, especially when dealing with many-to-many relationships and multiple levels of connections.
● While Amazon Athena is capable of handling SQL JOINs, the performance may not be optimal for complex relationship queries involving multiple levels.

C. Use an Amazon S3 bucket to store the datasets. Use Amazon QuickSight to visualize connections.
● Amazon QuickSight is a business intelligence (BI) service that allows you to visualize and analyze data.
● While QuickSight can visualize connections, it is not designed for performing complex relationship queries like
finding mutual connections up to five levels.
● This solution may provide visualization capabilities but lacks the necessary querying capabilities for the given
requirements.
D. Use Amazon RDS to store the datasets with multiple tables. Perform SQL JOIN queries to find connections.
● Amazon RDS is a managed relational database service that supports SQL JOIN queries.
● Similar to Option A, while RDS supports JOIN operations, it may not be the most efficient approach for querying complex relationship data with many-to-many relationships and multiple levels of connections.
● RDS might struggle with performance when dealing with large datasets and complex queries.

274
Q

846#A company needs a secure connection between its on-premises environment and AWS. This connection does not need a lot of bandwidth and will handle a small amount of traffic. The connection should be set up quickly. What is the MOST cost-effective method to establish this type of connection?

A. Deploy a client VPN.

B. Implement AWS Direct Connect.

C. Deploy a bastion host on Amazon EC2.

D. Implement an AWS site-to-site VPN connection.

A

D. Implement an AWS site-to-site VPN connection.

An AWS Site-to-Site VPN can be set up quickly over the existing internet connection, is billed per connection hour plus data transfer, and easily handles low-bandwidth traffic, making it the most cost-effective choice here. AWS Direct Connect requires a dedicated circuit with significant lead time and cost, AWS Client VPN is designed for individual user access rather than a network-to-network connection, and a bastion host does not create a secure network-level connection between the two environments.
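
The moving parts of a Site-to-Site VPN can be created in a few API calls; the sketch below is illustrative only, with a placeholder customer gateway IP, ASN, VPC ID, and CIDR range:

    import boto3

    ec2 = boto3.client("ec2")

    # The on-premises VPN device (customer gateway).
    cgw = ec2.create_customer_gateway(BgpAsn=65000, PublicIp="203.0.113.10", Type="ipsec.1")

    # The AWS side: a virtual private gateway attached to the VPC.
    vgw = ec2.create_vpn_gateway(Type="ipsec.1")
    ec2.attach_vpn_gateway(VpcId="vpc-0123456789abcdef0", VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"])

    # The VPN connection itself (static routing for a simple, low-traffic setup).
    vpn = ec2.create_vpn_connection(
        CustomerGatewayId=cgw["CustomerGateway"]["CustomerGatewayId"],
        VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"],
        Type="ipsec.1",
        Options={"StaticRoutesOnly": True},
    )
    ec2.create_vpn_connection_route(
        VpnConnectionId=vpn["VpnConnection"]["VpnConnectionId"],
        DestinationCidrBlock="192.168.0.0/16",  # on-premises network range (placeholder)
    )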

275
Q

847#A company has a local SFTP file transfer solution. The company is migrating to the AWS Cloud to scale the file transfer solution and optimize costs by using Amazon S3. Company employees will use their on-premises Microsoft Active Directory (AD) credentials to access the new solution. The company wants to maintain the current authentication and file access mechanisms. Which solution will meet these requirements with the LEAST operational overhead?

A. Set up an S3 file gateway. Create SMB file shares on the file gateway that use the existing Active Directory to authenticate.

B. Configure an Auto Scaling group with Amazon EC2 instances to run an SFTP solution. Configure the group to scale up at 60% CPU utilization.

C. Create an AWS Transfer Family server with SFTP endpoints. Choose the AWS Directory Service option as the identity provider. Use AD Connector to connect the on-premises Active Directory.

D. Create an AWS Transfer Family SFTP endpoint. Configure the endpoint to use the AWS Directory Service option as an identity provider to connect to the existing Active Directory.

A

C. Create an AWS Transfer Family server with SFTP endpoints. Choose the AWS Directory Service option as the identity provider. Use AD Connector to connect the on-premises Active Directory.

● This option utilizes AWS Transfer Family, which simplifies the setup of SFTP endpoints and supports integration with AWS Directory Service for Microsoft Active Directory.
● By using AD Connector, it enables seamless authentication against the on-premises Active Directory without the need to change user credentials.
● This approach minimizes operational overhead and provides a straightforward solution for migrating the file transfer solution to AWS.

Based on the requirement to seamlessly integrate with the existing on-premises Active Directory without changing user credentials and minimizing operational overhead, Option C is the most suitable choice. It leverages AWS Transfer Family with AD Connector to achieve this integration efficiently.
https://docs.aws.amazon.com/transfer/latest/userguide/directory-services-users.html#dir-services-ms-ad
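
A boto3 sketch of option C might look like the following; the directory ID, role ARN, bucket path, and AD group SID are placeholder assumptions:

    import boto3

    transfer = boto3.client("transfer")

    # SFTP endpoint backed by S3, authenticating against AWS Directory Service
    # (an AD Connector pointed at the on-premises Active Directory).
    server = transfer.create_server(
        Protocols=["SFTP"],
        Domain="S3",
        IdentityProviderType="AWS_DIRECTORY_SERVICE",
        IdentityProviderDetails={"DirectoryId": "d-9067012345"},  # AD Connector directory
    )

    # Grant an Active Directory group access to an S3 home directory.
    transfer.create_access(
        ServerId=server["ServerId"],
        ExternalId="S-1-5-21-1111111111-2222222222-3333333333-1234",  # AD group SID (placeholder)
        Role="arn:aws:iam::111122223333:role/sftp-s3-access",
        HomeDirectory="/example-sftp-bucket/shared",
    )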

Other Options:
A. Configure an S3 File Gateway. Create SMB file shares on the file gateway that use the existing Active Directory to authenticate.
● This option involves using AWS Storage Gateway to create an S3 File Gateway, which enables accessing objects in S3 via SMB shares. Users can authenticate using their existing Active Directory credentials.
● The operational overhead is relatively low as it leverages existing Active Directory credentials for authentication.
● However, this option doesn’t directly support SFTP file transfers, which may be a requirement in some scenarios.
B. Configure an Auto Scaling group with Amazon EC2 instances to run an SFTP solution. Configure the group to scale up at 60% CPU utilization.
● This option involves setting up and managing EC2 instances to host an SFTP solution.
● It offers flexibility and control over the SFTP environment, but it comes with higher operational overhead, including
managing server instances, scaling, and maintenance.
● Additionally, it may require additional configurations for integrating with Active Directory for user authentication.

D. Create an AWS Transfer Family SFTP endpoint. Configure the endpoint to use the AWS Directory Service option as the identity provider to connect to the existing Active Directory.
● Similar to Option C, this option involves using AWS Transfer Family for setting up SFTP endpoints and integrating with AWS Directory Service.
● However, it connects to an AWS-managed Active Directory (AWS Managed Microsoft AD) rather than the on-premises Active Directory, which might not align with the requirement to use existing on-premises credentials.

276
Q

Question #: 651
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company stores a large volume of image files in an Amazon S3 bucket. The images need to be readily available for the first 180 days. The images are infrequently accessed for the next 180 days. After 360 days, the images need to be archived but must be available instantly upon request. After 5 years, only auditors can access the images. The auditors must be able to retrieve the images within 12 hours. The images cannot be lost during this process.

A developer will use S3 Standard storage for the first 180 days. The developer needs to configure an S3 Lifecycle rule.

Which solution will meet these requirements MOST cost-effectively?

A. Transition the objects to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 180 days, S3 Glacier Instant Retrieval after 360 days, and S3 Glacier Deep Archive after 5 years.
B. Transition the objects to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 180 days, S3 Glacier Flexible Retrieval after 360 days, and S3 Glacier Deep Archive after 5 years.
C. Transition the objects to S3 Standard-Infrequent Access (S3 Standard-IA) after 180 days, S3 Glacier Instant Retrieval after 360 days, and S3 Glacier Deep Archive after 5 years.
D. Transition the objects to S3 Standard-Infrequent Access (S3 Standard-IA) after 180 days, S3 Glacier Flexible Retrieval after 360 days, and S3 Glacier Deep Archive after 5 years.

A

C. Transition the objects to S3 Standard-Infrequent Access (S3 Standard-IA) after 180 days, S3 Glacier Instant Retrieval after 360 days, and S3 Glacier Deep Archive after 5 years.

S3 Glacier Instant Retrieval returns data in milliseconds, whereas S3 Glacier Flexible Retrieval typically takes minutes to hours (up to 12 hours for bulk retrievals). Because the images must be available instantly after 360 days, only Glacier Instant Retrieval satisfies that tier, which rules out options B and D. In addition, S3 One Zone-IA stores data in a single Availability Zone, so it does not meet the requirement that images cannot be lost; S3 Standard-IA is the right class after 180 days, which rules out option A. Finally, S3 Glacier Deep Archive after 5 years meets the auditors' 12-hour retrieval window at the lowest cost.
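
Expressed as an S3 Lifecycle configuration, answer C could look roughly like this boto3 sketch (the bucket name is a placeholder and 5 years is approximated as 1,825 days):

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_lifecycle_configuration(
        Bucket="example-image-bucket",
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "image-archival-schedule",
                    "Status": "Enabled",
                    "Filter": {"Prefix": ""},
                    "Transitions": [
                        {"Days": 180, "StorageClass": "STANDARD_IA"},    # infrequent access
                        {"Days": 360, "StorageClass": "GLACIER_IR"},     # archived, instant retrieval
                        {"Days": 1825, "StorageClass": "DEEP_ARCHIVE"},  # auditors, ~12-hour retrieval
                    ],
                }
            ]
        },
    )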

277
Q

848#A company is designing an event-based order processing system. Each order requires several validation steps after the order is created. An idempotent AWS Lambda function performs each validation step. Each validation step is independent of the other validation steps. The individual validation steps only need a subset of the order event information. The company wants to ensure that each validation step of the Lambda function has access to only the order event information that the function requires. The components of the order processing system must be loosely coupled to adapt to future business changes. What solution will meet these requirements?

A. Create an Amazon Simple Queue Service (Amazon SQS) queue for each validation step. Create a new Lambda function to transform the order data into the format required by each validation step and to post the messages to the appropriate SQS queues. Subscribe each validation-step Lambda function to its corresponding SQS queue.

B. Create an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe the validation-step Lambda functions to the SNS topic. Use message body filtering to send only the necessary data to each subscribed Lambda function.

C. Create an Amazon EventBridge event bus. Create an event rule for each validation step. Configure the input transformer to send only the required data to each target validation step Lambda function.

D. Create an Amazon Simple Queue Service (Amazon SQS) queue. Create a new Lambda function to subscribe to the SQS queue and transform the order data into the format required by each validation step. Use the new Lambda function to perform synchronous invocations of the validation-step Lambda functions in parallel on separate threads.

A

C. Create an Amazon EventBridge event bus. Create an event rule for each validation step. Configure the input transformer to send only the required data to each target validation step Lambda function.

C. Amazon EventBridge Event Bus Approach:
● EventBridge allows creating event rules for each validation step and configuring input transformers to send only the required data to each target Lambda function.
● This approach provides loose coupling and enables fine-grained control over the event data sent to each validation step Lambda function.
● EventBridge’s input transformation capabilities make it suitable for this scenario, as it allows for extracting subsets of event data efficiently.

Evaluation:
Given the requirement for loosely coupled components and the need to provide each validation step Lambda function with only the necessary data, Option C, utilizing Amazon EventBridge Event Bus with input transformation, appears to be the most suitable solution. It offers the flexibility to tailor event data for each validation step efficiently while maintaining loose coupling between components.
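
A minimal boto3 sketch of option C for a single validation step is shown below; the bus name, event pattern, Lambda ARN, and JSON paths are illustrative assumptions:

    import boto3
    import json

    events = boto3.client("events")

    # Rule on a custom event bus that matches newly created orders.
    events.put_rule(
        Name="order-created-address-validation",
        EventBusName="orders",
        EventPattern=json.dumps({"source": ["com.example.orders"], "detail-type": ["OrderCreated"]}),
    )

    # Target the address-validation Lambda function, passing only the fields it needs.
    events.put_targets(
        Rule="order-created-address-validation",
        EventBusName="orders",
        Targets=[
            {
                "Id": "address-validator",
                "Arn": "arn:aws:lambda:us-east-1:111122223333:function:validate-address",
                "InputTransformer": {
                    "InputPathsMap": {
                        "orderId": "$.detail.orderId",
                        "address": "$.detail.shippingAddress",
                    },
                    "InputTemplate": '{"orderId": <orderId>, "address": <address>}',
                },
            }
        ],
    )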

Other Options:
A. Amazon Simple Queue Service (Amazon SQS) Approach:
● In this approach, each validation step has its own SQS queue, and a Lambda function transforms the order data and publishes messages to the appropriate SQS queues.
● This solution offers loose coupling between validation steps and allows each Lambda function to receive only the information it needs.
● However, setting up and managing multiple SQS queues and orchestrating the transformation and message publishing logic could introduce complexity.

B. Amazon Simple Notification Service (Amazon SNS) Approach:
● This approach involves using an SNS topic to which all validation step Lambda functions subscribe.
● SNS message body filtering is utilized to send only the required data to each subscribed Lambda function.
● While this approach offers loose coupling and message filtering capabilities, SNS filtering is limited compared to
EventBridge’s transformation capabilities.

D. Amazon Simple Queue Service (Amazon SQS) Queue with Lambda Approach:
● This approach involves using a single SQS queue and a Lambda function to transform the order data and invoke validation step Lambda functions synchronously.
● While this solution may simplify the architecture by using a single queue, the synchronous invocation of Lambda functions may not be the most efficient approach, especially if the validation steps can be executed concurrently.

278
Q

849#A company is migrating a three-tier application to AWS. The application requires a MySQL database. In the past, app users have reported poor app performance when creating new entries. These performance issues were caused by users generating different real-time reports from the application during work hours. Which solution will improve application performance when moved to AWS?

A. Import the data into a provisioned Amazon DynamoDB table. Refactor the application to use DynamoDB for reporting.

B. Create the database on a compute-optimized Amazon EC2 instance. Ensure that the compute resources exceed those of the on-premises database.

C. Create an Amazon Aurora MySQL Multi-AZ DB cluster with multiple read replicas. Configure the application to use the reader endpoint for reports.

D. Create an Amazon Aurora MySQL Multi-AZ DB cluster. Configure the application to use the cluster backup instance as the endpoint for reporting.

A

C. Create an Amazon Aurora MySQL Multi-AZ DB cluster with multiple read replicas. Configure the application to use the reader endpoint for reports.

● Amazon Aurora is a high-performance relational database engine that is fully compatible with MySQL.
● Multi-AZ deployment ensures high availability, and read replicas can offload read queries, improving overall
performance.
● Using the reader endpoint for reports allows read traffic to be distributed among read replicas, reducing the load on
the primary instance during report generation.
● This solution provides scalability, high availability, and improved performance for read-heavy workloads without
significant application changes.

Evaluation:
Option C, creating an Amazon Aurora MySQL Multi-AZ DB cluster with multiple read replicas and configuring the application to use the reader endpoint for reports, is the most suitable solution. It offers scalability, high availability, and improved performance for read-heavy workloads without requiring significant application changes. Additionally, Aurora’s performance and reliability make it a strong candidate for supporting the application’s database needs.
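
In application code, option C mostly comes down to using the two cluster endpoints for different workloads. A hedged sketch with PyMySQL and placeholder endpoint names, credentials, and schema:

    import pymysql

    # Writer (cluster) endpoint for transactional inserts and updates.
    writer = pymysql.connect(
        host="app-cluster.cluster-abc123xyz.us-east-1.rds.amazonaws.com",
        user="app", password="REPLACE_ME", database="orders",
    )

    # Reader endpoint: Aurora load-balances these connections across read replicas,
    # keeping report queries off the writer instance.
    reader = pymysql.connect(
        host="app-cluster.cluster-ro-abc123xyz.us-east-1.rds.amazonaws.com",
        user="report", password="REPLACE_ME", database="orders",
    )

    with reader.cursor() as cur:
        cur.execute("SELECT region, SUM(total) FROM orders GROUP BY region")
        report_rows = cur.fetchall()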

Other Options:
A. Import data into Amazon DynamoDB:
● DynamoDB is a fully managed NoSQL database service that can provide high performance and scalability.
● By importing data into DynamoDB and refactoring the application to use DynamoDB for reports, the application can
benefit from DynamoDB’s scalability and low-latency read operations.
● However, DynamoDB may require significant application refactoring, especially if the application relies heavily on
SQL queries that are not easily translated to DynamoDB’s query model.
B. Create database on a compute optimized Amazon EC2 instance:
● While running MySQL on a compute-optimized EC2 instance might provide better performance compared to on-premises hardware, it may not fully address the scalability and performance issues during peak usage periods.
● Scaling resources vertically (by increasing the instance size) may have limits and could become costly.
D. Create Amazon Aurora MySQL Multi-AZ DB cluster with backup instance for reports:
● Configuring the application to use the backup instance for reports may not be the most efficient approach.
● While the backup instance can serve read traffic, it may not be optimized for performance, especially during peak
usage periods when the primary instance is under load.

279
Q

850#A company is extending a secure on-premises network to the AWS cloud by using an AWS Direct Connect connection. The local network does not have direct access to the Internet. An application running on the local network needs to use an Amazon S3 bucket. Which solution will meet these requirements in the MOST cost-effective way?

A. Create a public virtual interface (VIF). Route AWS traffic over the public VIF.

B. Create a VPC and a NAT gateway. Route AWS traffic from the on-premises network to the NAT gateway.

C. Create a VPC and an Amazon S3 interface endpoint. Route AWS traffic from the on-premises network to the S3 interface endpoint.

D. Create a VPC peering connection between the on-premises network and Direct Connect. Route AWS traffic through the peering connection.

A

C. Create a VPC and an Amazon S3 interface endpoint. Route AWS traffic from the on-premises network to the S3 interface endpoint.

An S3 interface endpoint (powered by AWS PrivateLink) places elastic network interfaces with private IP addresses in the VPC, and those addresses are reachable from the on-premises network over the Direct Connect private virtual interface. The application can therefore reach Amazon S3 without any internet path (unlike an S3 gateway endpoint, which is accessible only from within the VPC), and a NAT gateway still depends on an internet gateway, which the design is trying to avoid.
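
A boto3 sketch of creating the interface endpoint (the VPC, subnet, and security group IDs are placeholders):

    import boto3

    ec2 = boto3.client("ec2")

    # Interface endpoint for S3: ENIs with private IPs that are reachable
    # from on premises over the Direct Connect private virtual interface.
    ec2.create_vpc_endpoint(
        VpcId="vpc-0123456789abcdef0",
        ServiceName="com.amazonaws.us-east-1.s3",
        VpcEndpointType="Interface",
        SubnetIds=["subnet-0aaa1111bbb22222c"],
        SecurityGroupIds=["sg-0123456789abcdef0"],  # must allow HTTPS from the on-premises range
    )
    # The application then uses the endpoint-specific DNS names (or private DNS)
    # to reach the bucket without an internet path.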

280
Q

851#A company serves its website by using an Auto Scaling group of Amazon EC2 instances in a single AWS Region. The website does not require a database. The company is expanding, and the company’s engineering team deploys the website in a second Region. The company wants to spread traffic across both Regions to accommodate growth and for disaster recovery purposes. The solution should not serve traffic from a Region where the website is unhealthy. What policy or resource should the company use to meet these requirements?

A. An Amazon Route 53 simple routing policy

B. An Amazon Route 53 multivalue answer routing policy

C. An application load balancer in a region with a target group that specifies the EC2 instance IDs of both regions

D. An application load balancer in a region with a target group that specifies the IP addresses of the EC2 instances in both regions

A

B. An Amazon Route 53 multivalue answer routing policy

B. Amazon Route 53 multivalue answer routing policy:
● The multivalue answer routing policy lets you create multiple records for the same DNS name and associate a health check with each record. Route 53 responds to DNS queries with up to eight healthy records selected at random.
● Because each record can point to an endpoint in a different Region, this policy spreads traffic across both Regions.
● When a record’s health check fails, Route 53 stops returning that record, so traffic is not served from a Region where the website is unhealthy.

The other options do not meet the requirements: a simple routing policy (option A) does not support health checks, and an Application Load Balancer is a Regional resource, so a single ALB with targets registered by instance ID or IP address (options C and D) is not a suitable way to distribute traffic across two Regions for disaster recovery.
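
As a sketch, one multivalue answer record per Region with an attached health check might be created like this (the hosted zone ID, domain names, and IP address are placeholders):

    import boto3

    r53 = boto3.client("route53")

    # Health check against one Region's web tier.
    hc = r53.create_health_check(
        CallerReference="use1-web-check-1",
        HealthCheckConfig={
            "Type": "HTTP",
            "FullyQualifiedDomainName": "use1.example.com",
            "Port": 80,
            "ResourcePath": "/health",
        },
    )

    # Multivalue answer record for that Region; repeat with a different
    # SetIdentifier, value, and health check for the second Region.
    r53.change_resource_record_sets(
        HostedZoneId="Z0123456789ABCDEFGHIJ",
        ChangeBatch={
            "Changes": [
                {
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        "Name": "www.example.com",
                        "Type": "A",
                        "SetIdentifier": "us-east-1",
                        "MultiValueAnswer": True,
                        "TTL": 60,
                        "ResourceRecords": [{"Value": "203.0.113.10"}],
                        "HealthCheckId": hc["HealthCheck"]["Id"],
                    },
                }
            ]
        },
    )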

281
Q

852#A company runs its applications on Amazon EC2 instances that are backed by the Amazon Elastic Block Store (Amazon EBS). EC2 instances run the latest version of Amazon Linux. Applications are experiencing availability issues when company employees store and retrieve files that are 25 GB or larger. The company needs a solution that does not require the company to transfer files between EC2 instances. The files must be available on many EC2 instances and in multiple availability zones. What solution will meet these requirements?

A. Migrate all files to an Amazon S3 bucket. Instruct employees to access files from the S3 bucket.

B. Take a snapshot of the existing EBS volume. Mount the snapshot as an EBS volume across the EC2 instances. Instruct employees to access files from EC2 instances.

C. Mount an Amazon Elastic File System (Amazon EFS) file system across all EC2 instances. Instruct employees to access files from EC2 instances.

D. Create an Amazon Machine Image (AMI) from the EC2 instances. Configure new EC2 instances from the AMI that use an instance store volume. Instruct employees to access files from EC2 instances.

A

C. Mount an Amazon Elastic File System (Amazon EFS) file system across all EC2 instances. Instruct employees to access files from EC2 instances.

● Amazon EFS allows concurrent access to files from multiple EC2 instances and offers low-latency access to data.
● While it may incur slightly higher costs compared to Amazon S3, it provides better performance for applications
requiring frequent access to large files.

While Amazon S3 offers scalability and durability, it is object storage rather than a shared file system, and frequent access to large files from many instances can add latency and request costs. Because the application expects file-level access across many EC2 instances in multiple Availability Zones, Amazon EFS provides a better fit by offering low-latency shared file access while still ensuring scalability and durability. Therefore, Option C, mounting an Amazon EFS file system, is the more suitable solution when both performance and access patterns are taken into account.
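
Provisioning the shared file system in option C is a handful of calls; the subnet and security group IDs below are placeholders, and each instance then mounts the file system (for example with the amazon-efs-utils mount helper):

    import boto3

    efs = boto3.client("efs")

    fs = efs.create_file_system(
        CreationToken="shared-large-files",
        PerformanceMode="generalPurpose",
        Encrypted=True,
    )

    # One mount target per Availability Zone so every instance has a local access point.
    for subnet_id in ["subnet-0aaa1111bbb22222c", "subnet-0ddd3333eee44444f"]:
        efs.create_mount_target(
            FileSystemId=fs["FileSystemId"],
            SubnetId=subnet_id,
            SecurityGroups=["sg-0123456789abcdef0"],  # must allow NFS (TCP 2049) from the instances
        )

    # On each EC2 instance (Amazon Linux): sudo mount -t efs <FileSystemId>:/ /mnt/shared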

282
Q

853#A company is running a highly sensitive application on Amazon EC2 backed by an Amazon RDS database. Compliance regulations require that all personally identifiable information (PII) be encrypted at rest. Which solution should a solutions architect recommend to meet this requirement with the LEAST amount of infrastructure changes?

A. Deploy AWS Certificate Manager to generate certificates. Use the certificates to encrypt the database volume.

B. Deploy AWS CloudHSM, generate encryption keys, and use the keys to encrypt the database volumes.

C. Configure SSL encryption using AWS Key Management Service (AWS KMS) keys to encrypt database volumes.

D. Configure Amazon Elastic Block Store (Amazon EBS) encryption and Amazon RDS encryption with AWS Key Management Service (AWS KMS) keys to encrypt instance and database volumes.

A

D. Configure Amazon Elastic Block Store (Amazon EBS) encryption and Amazon RDS encryption with AWS Key Management Service (AWS KMS) keys to encrypt instance and database volumes.

Explanation of Option D:
Amazon EBS encryption: Amazon EBS encryption allows you to encrypt the Amazon EBS volumes attached to your EC2 instances. By enabling EBS encryption, you can ensure that data stored on these volumes, including the operating system and application data, is encrypted at rest. This encryption is transparent to your application and
does not require any changes to the application itself. It’s a straightforward configuration change at the volume level.
Amazon RDS encryption: Amazon RDS supports encryption of data at rest using AWS Key Management Service (AWS KMS) keys. By enabling RDS encryption, you can ensure that data stored in your RDS databases, including sensitive PII, is encrypted at rest. This encryption is also transparent to your application and does not require any changes to the database schema or application code. You simply enable encryption for your RDS instance using AWS KMS keys.
Combining Amazon EBS encryption and Amazon RDS encryption with AWS KMS keys provides a comprehensive solution for encrypting both the instance volumes (where the application runs) and the database volumes (where the data resides). This ensures that all sensitive data, including PII, is encrypted at rest, thereby meeting compliance regulations.
While options A, B, and C also involve encryption, they may require more changes to the infrastructure or introduce additional complexities:
● Option A involves using AWS Certificate Manager to generate SSL/TLS certificates for encrypting data in transit, but it does not directly address encrypting data at rest on the volumes.
● Option B involves deploying AWS CloudHSM, which is a hardware security module (HSM) service, and generating encryption keys. This option introduces additional infrastructure and management overhead.
● Option C mentions configuring SSL encryption using AWS KMS keys, but it’s not clear how this specifically addresses encrypting data at rest on the volumes.

Therefore, Option D is the most appropriate choice for meeting the encryption requirements with the least amount of changes to the infrastructure.
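
A hedged boto3 sketch of option D follows; the identifiers are placeholders, a customer-managed KMS key is used here for clarity, and any existing unencrypted volumes or databases would need to be re-created from encrypted snapshot copies:

    import boto3

    kms = boto3.client("kms")
    ec2 = boto3.client("ec2")
    rds = boto3.client("rds")

    key_id = kms.create_key(Description="PII encryption at rest")["KeyMetadata"]["KeyId"]

    # Encrypt all new EBS volumes in this Region by default with the KMS key.
    ec2.enable_ebs_encryption_by_default()
    ec2.modify_ebs_default_kms_key_id(KmsKeyId=key_id)

    # New RDS instances are encrypted at rest with the same key.
    rds.create_db_instance(
        DBInstanceIdentifier="pii-db",
        DBInstanceClass="db.m6g.large",
        Engine="mysql",
        MasterUsername="admin",
        MasterUserPassword="REPLACE_ME",
        AllocatedStorage=100,
        StorageEncrypted=True,
        KmsKeyId=key_id,
    )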

283
Q

854#A company runs an AWS Lambda function in private subnets in a VPC. The subnets have a default route to the internet through an Amazon EC2 NAT instance. The Lambda function processes input data and saves its output as an object in Amazon S3. Intermittently, the Lambda function times out while attempting to upload the object because of heavy network traffic on the NAT instance. The company wants to access Amazon S3 without going through the internet. What solution will meet these requirements?

A. Replace the EC2 NAT instance with an AWS managed NAT gateway.

B. Increase the size of the NAT EC2 instance in the VPC to a network-optimized instance type.

C. Provision a gateway endpoint for Amazon S3 on the VPC. Update the subnet route tables accordingly.

D. Provision a transit gateway. Place the transit gateway attachments on the private subnets where the Lambda function is running.

A

C. Provision a gateway endpoint for Amazon S3 on the VPC. Update the subnet route tables accordingly.

  • Gateway Endpoint for Amazon S3: Amazon VPC endpoints enable private connectivity between your VPC and supported AWS services. By provisioning a gateway endpoint for Amazon S3 in the VPC, the Lambda function can access S3 directly without traffic leaving the AWS network or traversing the internet. This ensures that the Lambda function can upload objects to S3 without encountering timeouts due to network congestion on the NAT instance.
  • Private Connectivity: The S3 gateway endpoint provides a private connection to Amazon S3 from within the VPC, eliminating the need for internet gateway or NAT instances for S3 access. Traffic between the Lambda function and S3 stays within the AWS network, enhancing security and reducing latency.
  • Route Table Update: After provisioning the S3 gateway endpoint, you need to update the route tables of the private subnets to route S3 traffic through the endpoint. This ensures that traffic intended for S3 is directed to the endpoint, allowing the Lambda function to communicate with S3 securely and efficiently.
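
For reference, the gateway endpoint described in the bullets above is created and wired into the private subnets' route tables in a single call; the IDs are placeholders:

    import boto3

    ec2 = boto3.client("ec2")

    # Gateway endpoint for S3: adds prefix-list routes to the listed route tables,
    # so the Lambda function's S3 traffic bypasses the NAT instance entirely.
    ec2.create_vpc_endpoint(
        VpcId="vpc-0123456789abcdef0",
        ServiceName="com.amazonaws.us-east-1.s3",
        VpcEndpointType="Gateway",
        RouteTableIds=["rtb-0aaa1111bbb22222c", "rtb-0ddd3333eee44444f"],  # private subnet route tables
    )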

Other Options:
A. Replace the EC2 NAT instance with an AWS managed NAT gateway:
AWS Managed NAT Gateway: NAT gateways are managed services provided by AWS that allow instances in private subnets to initiate outbound traffic to the internet while preventing inbound traffic from initiating a connection with them. Unlike EC2 NAT instances, NAT gateways are fully managed by AWS, providing higher availability, scalability, and better performance.
Traversing Public Internet: However, NAT gateways still route traffic through the public internet. Therefore, while replacing the EC2 NAT instance with a managed NAT gateway might improve availability and scalability, it does not address the company’s requirement to access S3 without traversing the public internet.
B. Increase the size of the EC2 NAT instance in the VPC to a network optimized instance type:
Larger NAT Instance: Increasing the size of the EC2 NAT instance might alleviate some of the performance issues caused by network saturation. By using a larger instance type, the NAT instance can handle more network traffic, potentially reducing timeouts experienced by the Lambda function.
Limitations Remain: However, even with a larger instance type, the EC2 NAT instance still routes traffic through the public internet. As a result, this solution does not address the company’s requirement to access S3 without traversing the internet.
D. Provision a transit gateway. Place transit gateway attachments in the private subnets where the Lambda function is running:
Transit Gateway: Transit Gateway is a service that simplifies network connectivity between VPCs and on-premises networks. It acts as a hub that connects multiple VPCs and VPN connections, allowing for centralized management of network routing.
Transit Gateway Attachments: While transit gateways can provide centralized routing, they do not inherently address the requirement to access S3 without traversing the public internet. In this scenario, placing transit
gateway attachments in the private subnets would still result in traffic passing through the public internet when accessing S3.

In summary, options A, B, and D do not directly address the company’s requirement to access Amazon S3 without traversing the public internet, making option C the most appropriate solution for the given scenario.

284
Q

855#A news company that has reporters all over the world is hosting its broadcast system on AWS. Reporters send live broadcasts to the broadcast system. Reporters use software on their phones to send live streams via Real-Time Messaging Protocol (RTMP). A solutions architect must design a solution that gives reporters the ability to send the highest quality broadcasts. The solution should provide accelerated TCP connections back to the broadcast system. What should the solutions architect use to meet these requirements?

A. Amazon CloudFront

B. AWS Global Accelerator

C. AWS VPN Client

D. Amazon EC2 Instances and AWS Elastic IP Addresses

A

B. AWS Global Accelerator

AWS Global Accelerator is a networking service that improves the availability and performance of applications with global users. It uses the AWS global network to optimize the path from users to applications, improving the performance of TCP and UDP traffic. Global Accelerator can provide accelerated TCP connections back to the broadcast system, making it suitable for real-time streaming scenarios where low-latency and high-performance connections are crucial. This option aligns well with the requirements of the news company.
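
A minimal boto3 sketch of the ingest path (the names, Regions, and load balancer ARN are placeholders; RTMP uses TCP port 1935):

    import boto3

    # The Global Accelerator API is served from the us-west-2 Region.
    ga = boto3.client("globalaccelerator", region_name="us-west-2")

    acc = ga.create_accelerator(Name="rtmp-ingest", IpAddressType="IPV4", Enabled=True)

    listener = ga.create_listener(
        AcceleratorArn=acc["Accelerator"]["AcceleratorArn"],
        Protocol="TCP",
        PortRanges=[{"FromPort": 1935, "ToPort": 1935}],
        ClientAffinity="SOURCE_IP",  # keep a reporter's stream on the same endpoint
    )

    # Endpoint group in the Region that hosts the broadcast system's load balancer.
    ga.create_endpoint_group(
        ListenerArn=listener["Listener"]["ListenerArn"],
        EndpointGroupRegion="us-east-1",
        EndpointConfigurations=[
            {
                "EndpointId": "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/ingest/abc123",
                "Weight": 128,
            }
        ],
    )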