sthithapragnakk -- SAA Exam Dumps Jan 24 1-100 Flashcards
651# A company uses Amazon FSx for NetApp ONTAP in its primary AWS Region for CIFS and NFS file shares. Applications running on Amazon EC2 instances access the file shares. The company needs a storage disaster recovery (DR) solution in a secondary Region. Data that is replicated in the secondary Region needs to be accessed by using the same protocols as in the primary Region. Which solution will meet these requirements with the LEAST operational overhead?
A. Create an AWS Lambda function to copy the data to an Amazon S3 bucket. Replicate the S3 bucket to the secondary region.
B. Create an FSx backup for ONTAP volumes using AWS Backup. Copy the volumes to the secondary region. Create a new FSx instance for ONTAP from the backup.
C. Create an FSx instance for ONTAP in the secondary region. Use NetApp SnapMirror to replicate data from the primary region to the secondary region.
D. Create an Amazon Elastic File System (Amazon EFS) volume. Migrate the current data to the volume. Replicate the volume to the secondary region.
C. Create an FSx instance for ONTAP in the secondary region. Use NetApp SnapMirror to replicate data from the primary region to the secondary region.
NetApp SnapMirror is a data replication feature built into ONTAP, enabling efficient replication between a primary and a secondary file system. It meets the requirement of replicating data that remains accessible over the same protocols (CIFS and NFS) in the secondary Region, and it involves less operational overhead than the other options.
Question #: 652
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company has a large data workload that runs for 6 hours each day. The company cannot lose any data while the process is running. A solutions architect is designing an Amazon EMR cluster configuration to support this critical data workload.
Which solution will meet these requirements MOST cost-effectively?
A. Configure a long-running cluster that runs the primary node and core nodes on On-Demand Instances and the task nodes on Spot Instances.
B. Configure a transient cluster that runs the primary node and core nodes on On-Demand Instances and the task nodes on Spot Instances.
C. Configure a transient cluster that runs the primary node on an On-Demand Instance and the core nodes and task nodes on Spot Instances.
D. Configure a long-running cluster that runs the primary node on an On-Demand Instance, the core nodes on Spot Instances, and the task nodes on Spot Instances.
B. Configure a transient cluster that runs the primary node and core nodes on On-Demand Instances and the task nodes on Spot Instances.
B and C look similar, but the difference is that B keeps the primary node and the core nodes on On-Demand Instances, while C keeps only the primary node On-Demand and puts the core nodes and the task nodes on Spot Instances. A transient cluster is the right choice because the workload only runs 6 hours a day, so there is no need to pay for a long-running cluster. But if the core nodes are on Spot Instances, you can lose the instances that actually store and process your data: the core nodes are the worker nodes that hold HDFS data, and the question says no data can be lost. Even with the primary node On-Demand, you still need the core nodes to stay available. With option B, both the primary node and the core nodes are On-Demand, so data and processing capacity are always there; the task nodes only add extra compute, so it is fine if the Spot task nodes disappear. For that reason, we pick option B.
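As a rough boto3 sketch of that cluster definition (the cluster name, instance types, counts, and IAM role names are placeholders, not from the question), a transient cluster is simply run_job_flow with KeepJobFlowAliveWhenNoSteps set to False, the MASTER and CORE groups On-Demand, and the TASK group on Spot:
```python
import boto3

emr = boto3.client("emr", region_name="us-east-1")

response = emr.run_job_flow(
    Name="daily-6h-workload",
    ReleaseLabel="emr-6.15.0",
    ServiceRole="EMR_DefaultRole",          # assumed to exist
    JobFlowRole="EMR_EC2_DefaultRole",      # assumed to exist
    Instances={
        "KeepJobFlowAliveWhenNoSteps": False,   # transient: terminate when steps finish
        "InstanceGroups": [
            # Primary and core nodes On-Demand so HDFS data is never lost
            {"InstanceRole": "MASTER", "Market": "ON_DEMAND",
             "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"InstanceRole": "CORE", "Market": "ON_DEMAND",
             "InstanceType": "m5.xlarge", "InstanceCount": 3},
            # Task nodes only add compute, so Spot is acceptable
            {"InstanceRole": "TASK", "Market": "SPOT",
             "InstanceType": "m5.xlarge", "InstanceCount": 6},
        ],
    },
    Steps=[],  # the actual processing steps for the 6-hour workload would go here
)
print(response["JobFlowId"])
```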
Question #: 653
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company maintains an Amazon RDS database that maps users to cost centers. The company has accounts in an organization in AWS Organizations. The company needs a solution that will tag all resources that are created in a specific AWS account in the organization. The solution must tag each resource with the cost center ID of the user who created the resource.
Which solution will meet these requirements?
A. Move the specific AWS account to a new organizational unit (OU) in Organizations from the management account. Create a service control policy (SCP) that requires all existing resources to have the correct cost center tag before the resources are created. Apply the SCP to the new OU.
B. Create an AWS Lambda function to tag the resources after the Lambda function looks up the appropriate cost center from the RDS database. Configure an Amazon EventBridge rule that reacts to AWS CloudTrail events to invoke the Lambda function.
C. Create an AWS CloudFormation stack to deploy an AWS Lambda function. Configure the Lambda function to look up the appropriate cost center from the RDS database and to tag resources. Create an Amazon EventBridge scheduled rule to invoke the CloudFormation stack.
D. Create an AWS Lambda function to tag the resources with a default value. Configure an Amazon EventBridge rule that reacts to AWS CloudTrail events to invoke the Lambda function when a resource is missing the cost center tag.
B. Create an AWS Lambda function to tag the resources after the Lambda function looks up the appropriate cost center from the RDS database. Configure an Amazon EventBridge rule that reacts to AWS CloudTrail events to invoke the Lambda function.
Option A can be crossed out: it suggests using a service control policy (SCP) to enforce tagging before resource creation, but SCPs only allow or deny API actions; they cannot look up a cost center in an RDS database or apply tags themselves. Option C deploys a Lambda function with CloudFormation and invokes it from a scheduled EventBridge rule, which adds unnecessary complexity and, because it runs on a schedule, does not ensure resources are tagged immediately when they are created. Option D tags resources with a default value and then reacts to events to fix the tag later, which introduces delay and never guarantees the correct cost center. Option B is the right one: the Lambda function looks up the appropriate cost center in the RDS database, so each resource is tagged with the correct cost center ID rather than a default value, and the EventBridge rule reacts to CloudTrail events (not a schedule), so the Lambda function is invoked as soon as a resource is created. Tagging is therefore initiated automatically whenever a relevant creation event occurs.
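A minimal sketch of what that Lambda handler could look like (the cost-center lookup is stubbed out, and the way resource ARNs are pulled from the CloudTrail record varies by service, so treat the field names as assumptions):
```python
import boto3

tagging = boto3.client("resourcegroupstaggingapi")

def lookup_cost_center(user_arn):
    """Look up the user's cost center in the RDS mapping table; stubbed here."""
    # In the real function this would run a SQL query against the RDS database.
    return "CC-1234"

def handler(event, context):
    # EventBridge delivers the CloudTrail record under "detail"
    detail = event["detail"]
    user_arn = detail["userIdentity"]["arn"]
    cost_center = lookup_cost_center(user_arn)

    # CloudTrail exposes created resources differently per service; here we
    # assume the event's "resources" list carries the ARNs we need.
    resource_arns = [r["ARN"] for r in detail.get("resources", []) if "ARN" in r]
    if resource_arns:
        tagging.tag_resources(
            ResourceARNList=resource_arns,
            Tags={"CostCenter": cost_center},
        )
```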
Question #: 654
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company recently migrated its web application to the AWS Cloud. The company uses an Amazon EC2 instance to run multiple processes to host the application. The processes include an Apache web server that serves static content. The Apache web server makes requests to a PHP application that uses a local Redis server for user sessions.
The company wants to redesign the architecture to be highly available and to use AWS managed solutions.
Which solution will meet these requirements?
A. Use AWS Elastic Beanstalk to host the static content and the PHP application. Configure Elastic Beanstalk to deploy its EC2 instance into a public subnet. Assign a public IP address.
B. Use AWS Lambda to host the static content and the PHP application. Use an Amazon API Gateway REST API to proxy requests to the Lambda function. Set the API Gateway CORS configuration to respond to the domain name. Configure Amazon ElastiCache for Redis to handle session information.
C. Keep the backend code on the EC2 instance. Create an Amazon ElastiCache for Redis cluster that has Multi-AZ enabled. Configure the ElastiCache for Redis cluster in cluster mode. Copy the frontend resources to Amazon S3. Configure the backend code to reference the EC2 instance.
D. Configure an Amazon CloudFront distribution with an Amazon S3 endpoint to an S3 bucket that is configured to host the static content. Configure an Application Load Balancer that targets an Amazon Elastic Container Service (Amazon ECS) service that runs AWS Fargate tasks for the PHP application. Configure the PHP application to use an Amazon ElastiCache for Redis cluster that runs in multiple Availability Zones.
D. Configure an Amazon CloudFront distribution with an Amazon S3 endpoint to an S3 bucket that is configured to host the static content. Configure an Application Load Balancer that targets an Amazon Elastic Container Service (Amazon ECS) service that runs AWS Fargate tasks for the PHP application. Configure the PHP application to use an Amazon ElastiCache for Redis cluster that runs in multiple Availability Zones.
Whenever you see static content, think S3 plus CloudFront; those two are the go-to combination for serving static content, and with that hint option D clearly stands out. Still, let's see why the other options are wrong. Option A uses Elastic Beanstalk, a managed service that makes it easy to deploy and run applications, but it is not the best fit for serving static content directly, and a single EC2 instance with a public IP address in a public subnet is neither highly available nor a managed solution for static content delivery. Option B puts the static content and the PHP application on Lambda behind API Gateway; those services are common for serverless applications, but they are a poor fit for hosting an entire PHP web application, and although ElastiCache for Redis for session data is good practice, the option lacks a clean separation between static content, dynamic content, and session management. Option C keeps the backend code on the EC2 instance, but the company asked for managed, highly available solutions; ElastiCache for Redis with Multi-AZ is a good choice for sessions and copying the frontend to S3 helps, but the single EC2 instance remains a single point of failure. Option D uses CloudFront for low-latency global delivery of the static content hosted in S3, an Application Load Balancer in front of an ECS service running Fargate tasks for the PHP application, and an ElastiCache for Redis cluster spanning multiple Availability Zones for session data. That gives high availability, scalability, and a clean separation of concerns using AWS managed services, which is exactly what the question asks for.
Question #: 655
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company runs a web application on Amazon EC2 instances in an Auto Scaling group that has a target group. The company designed the application to work with session affinity (sticky sessions) for a better user experience.
The application must be available publicly over the internet as an endpoint. A WAF must be applied to the endpoint for additional security. Session affinity (sticky sessions) must be configured on the endpoint.
Which combination of steps will meet these requirements? (Choose two.)
A. Create a public Network Load Balancer. Specify the application target group.
B. Create a Gateway Load Balancer. Specify the application target group.
C. Create a public Application Load Balancer. Specify the application target group.
D. Create a second target group. Add Elastic IP addresses to the EC2 instances.
E. Create a web ACL in AWS WAF. Associate the web ACL with the endpoint
C. Create a public Application Load Balancer. Specify the application target group.
E. Create a web ACL in AWS WAF. Associate the web ACL with the endpoint
Option A: a Network Load Balancer is one of the load balancer types, alongside the Application Load Balancer, the Classic Load Balancer, and the Gateway Load Balancer. Here is the tip to remember: NLB is for TCP/UDP traffic, ALB is for HTTP/HTTPS traffic. NLBs operate at the connection level, do not offer the cookie-based sticky sessions this application needs, and cannot have AWS WAF attached, so an ALB is far more suitable. Option B: a Gateway Load Balancer handles traffic at the network and transport layers for inline appliances; it is not used for HTTP/HTTPS traffic and does not support session affinity. Option D: creating a second target group and adding Elastic IP addresses to the EC2 instances has nothing to do with session affinity or a WAF; stickiness is managed by the load balancer, and AWS WAF is a separate service for application security. That leaves C and E. An ALB is designed for HTTP/HTTPS traffic and supports sticky sessions, so creating a public ALB exposes the web application to the internet with the required routing and load balancing. AWS WAF protects against common web exploits; if you did the Cloud Practitioner exam you know that whenever you hear SQL injection or cross-site scripting, the answer is AWS WAF. By creating a web ACL you define rules to filter and control the web traffic, and associating the web ACL with the endpoint (the ALB) applies those protections. In summary, C and E together expose the application with session affinity and add a WAF for security.
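A quick boto3 sketch of the two finishing touches: enabling sticky sessions on the application target group and attaching an existing web ACL to the ALB (all ARNs below are placeholders):
```python
import boto3

elbv2 = boto3.client("elbv2")
wafv2 = boto3.client("wafv2")

# Turn on duration-based cookie stickiness on the application target group
elbv2.modify_target_group_attributes(
    TargetGroupArn="arn:aws:elasticloadbalancing:...:targetgroup/app-tg/abc123",  # placeholder
    Attributes=[
        {"Key": "stickiness.enabled", "Value": "true"},
        {"Key": "stickiness.type", "Value": "lb_cookie"},
        {"Key": "stickiness.lb_cookie.duration_seconds", "Value": "86400"},
    ],
)

# Associate an existing regional web ACL with the public ALB
wafv2.associate_web_acl(
    WebACLArn="arn:aws:wafv2:...:regional/webacl/app-acl/def456",                # placeholder
    ResourceArn="arn:aws:elasticloadbalancing:...:loadbalancer/app/my-alb/xyz",   # placeholder
)
```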
Question #: 656
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company runs a website that stores images of historical events. Website users need the ability to search and view images based on the year that the event in the image occurred. On average, users request each image only once or twice a year. The company wants a highly available solution to store and deliver the images to users.
Which solution will meet these requirements MOST cost-effectively?
A. Store images in Amazon Elastic Block Store (Amazon EBS). Use a web server that runs on Amazon EC2.
B. Store images in Amazon Elastic File System (Amazon EFS). Use a web server that runs on Amazon EC2.
C. Store images in Amazon S3 Standard. Use S3 Standard to directly deliver images by using a static website.
D. Store images in Amazon S3 Standard-Infrequent Access (S3 Standard-IA). Use S3 Standard-IA to directly deliver images by using a static website.
D. Store images in Amazon S3 Standard-Infrequent Access (S3 Standard-IA). Use S3 Standard-IA to directly deliver images by using a static website.
Whenever the question asks for storage for images, audio, or video files, go with S3; there is no more cost-effective way to store and deliver that kind of content. That immediately eliminates A and B, leaving C and D, which both use S3, so which one do we choose? Option C uses S3 Standard and serves the images directly from a static website. That works, but for the MOST cost-effective solution it is too much: users request each image only once or twice a year, and S3 Standard is the pricier class meant for frequently accessed data. Option D uses S3 Standard-Infrequent Access, which, as the name suggests, is designed for data that is accessed infrequently, exactly our case, while still being highly available. So all the options could technically serve the images, but C and D are the right pattern for this content, and D is the most cost-effective, so D is the answer. As mentioned before, this is really three or four questions rolled into one; if you don't know what each of these services and storage classes does, it is hard to answer.
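A small boto3 sketch of the idea, uploading an image straight into the Standard-IA storage class and turning on static website hosting (the bucket and object names are made up):
```python
import boto3

s3 = boto3.client("s3")
BUCKET = "historical-events-images"  # hypothetical bucket name

# Upload an image directly into the Standard-IA storage class
with open("moon-landing.jpg", "rb") as image:
    s3.put_object(
        Bucket=BUCKET,
        Key="1969/moon-landing.jpg",
        Body=image,
        StorageClass="STANDARD_IA",
        ContentType="image/jpeg",
    )

# Enable static website hosting so the bucket serves the images directly
s3.put_bucket_website(
    Bucket=BUCKET,
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)
```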
Question #: 657
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company has multiple AWS accounts in an organization in AWS Organizations that different business units use. The company has multiple offices around the world. The company needs to update security group rules to allow new office CIDR ranges or to remove old CIDR ranges across the organization. The company wants to centralize the management of security group rules to minimize the administrative overhead that updating CIDR ranges requires.
Which solution will meet these requirements MOST cost-effectively?
A. Create VPC security groups in the organization’s management account. Update the security groups when a CIDR range update is necessary.
B. Create a VPC customer managed prefix list that contains the list of CIDRs. Use AWS Resource Access Manager (AWS RAM) to share the prefix list across the organization. Use the prefix list in the security groups across the organization.
C. Create an AWS managed prefix list. Use an AWS Security Hub policy to enforce the security group update across the organization. Use an AWS Lambda function to update the prefix list automatically when the CIDR ranges change.
D. Create security groups in a central administrative AWS account. Create an AWS Firewall Manager common security group policy for the whole organization. Select the previously created security groups as primary groups in the policy.
B. Create a VPC customer managed prefix list that contains the list of CIDRs. Use AWS Resource Access Manager (AWS RAM) to share the prefix list across the organization. Use the prefix list in the security groups across the organization.
Option A, creating VPC security groups in the organization's management account, is not the answer: security groups are per-account and per-Region, so you would still end up creating and updating many of them by hand, which is exactly the administrative overhead we are trying to avoid and does not scale. Option C talks about an AWS-managed prefix list plus a Security Hub policy and a Lambda function; AWS-managed prefix lists are maintained by AWS, so you cannot put your own office CIDRs in them, and the extra automation adds complexity and cost. Option D is similar to A but adds AWS Firewall Manager; Firewall Manager is generally used to manage AWS WAF, AWS Shield Advanced, and security group policies at scale, which is overkill for this specific use case and introduces additional cost. When a question asks to minimize administrative overhead, use the built-in feature that fits instead of bolting on extra services. That is what option B does: a VPC customer managed prefix list lets you define the list of CIDR ranges in one place; AWS Resource Access Manager (AWS RAM) shares that prefix list across the accounts in the organization; and the security groups across the organization reference the shared prefix list. When an office CIDR range changes, you update the prefix list once and every security group that references it picks up the change. This minimizes administrative overhead, gives centralized control, and scales across the organization, which makes B the most cost-effective and suitable solution.
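A hedged boto3 sketch of the three steps (all IDs, CIDRs, and ARNs are placeholders): create the customer managed prefix list, share it through AWS RAM, and reference it from a security group rule:
```python
import boto3

ec2 = boto3.client("ec2")
ram = boto3.client("ram")

# 1. Customer managed prefix list holding the office CIDR ranges
pl = ec2.create_managed_prefix_list(
    PrefixListName="office-cidrs",
    AddressFamily="IPv4",
    MaxEntries=20,
    Entries=[
        {"Cidr": "203.0.113.0/24", "Description": "London office"},
        {"Cidr": "198.51.100.0/24", "Description": "Tokyo office"},
    ],
)
prefix_list = pl["PrefixList"]

# 2. Share the prefix list with the organization via AWS RAM
ram.create_resource_share(
    name="shared-office-cidrs",
    resourceArns=[prefix_list["PrefixListArn"]],
    principals=["arn:aws:organizations::111122223333:organization/o-example"],  # placeholder org ARN
)

# 3. Reference the prefix list in a security group rule (done in each member account)
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "PrefixListIds": [{"PrefixListId": prefix_list["PrefixListId"],
                           "Description": "HTTPS from office ranges"}],
    }],
)
```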
674 Question #: 658
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company uses an on-premises network-attached storage (NAS) system to provide file shares to its high performance computing (HPC) workloads. The company wants to migrate its latency-sensitive HPC workloads and its storage to the AWS Cloud. The company must be able to provide NFS and SMB multi-protocol access from the file system.
Which solution will meet these requirements with the LEAST latency? (Choose two.)
A. Deploy compute optimized EC2 instances into a cluster placement group.
B. Deploy compute optimized EC2 instances into a partition placement group.
C. Attach the EC2 instances to an Amazon FSx for Lustre file system.
D. Attach the EC2 instances to an Amazon FSx for OpenZFS file system.
E. Attach the EC2 instances to an Amazon FSx for NetApp ONTAP file system.
A. Deploy compute optimized EC2 instances into a cluster placement group.
E. Attach the EC2 instances to an Amazon FSx for NetApp ONTAP file system.
This question splits into two choices: A versus B for the placement group, and C, D, E for the file system. Between A and B, go with A. A cluster placement group packs instances close together inside a single Availability Zone to give low-latency, high-throughput networking, which is exactly what tightly coupled HPC workloads need; cluster placement groups and HPC go hand in hand. A partition placement group can also give good networking, but cluster placement groups are generally preferred for HPC because they provide a higher degree of network performance optimization. Among C, D, and E, pick based on the NFS and SMB multi-protocol requirement: only Amazon FSx for NetApp ONTAP supports both NFS and SMB. FSx for OpenZFS supports NFS but not SMB, and FSx for Lustre provides neither. If you look at the FSx comparison table, SMB plus NFS is listed only under NetApp ONTAP. So A and E together satisfy the whole question.
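For the compute half, a short boto3 sketch (the AMI, subnet, instance type, and count are placeholders) showing a cluster placement group and instances launched into it:
```python
import boto3

ec2 = boto3.client("ec2")

# Cluster placement group: packs instances together in one AZ for the lowest latency
ec2.create_placement_group(GroupName="hpc-cluster", Strategy="cluster")

# Launch compute optimized instances into the placement group
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",       # placeholder AMI
    InstanceType="c6i.16xlarge",
    MinCount=4,
    MaxCount=4,
    Placement={"GroupName": "hpc-cluster"},
    SubnetId="subnet-0123456789abcdef0",   # placeholder, single AZ
)
```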
675 Question #: 483
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C02 Questions]
A company is relocating its data center and wants to securely transfer 50 TB of data to AWS within 2 weeks. The existing data center has a Site-to-Site VPN connection to AWS that is 90% utilized.
Which AWS service should a solutions architect use to meet these requirements?
A. AWS DataSync with a VPC endpoint
B. AWS Direct Connect
C. AWS Snowball Edge Storage Optimized
D. AWS Storage Gateway
C. AWS Snowball Edge Storage Optimized
Since the VPN is already 90% utilized, the remaining 10% of bandwidth is not going to move 50 TB within two weeks. Whenever a question asks you to move a large amount of data within a week or two, it is usually pointing you at the Snowball family. AWS DataSync with a VPC endpoint still sends the data over the existing network connection, and there is not enough bandwidth available, so cross that out. AWS Direct Connect is a dedicated connection between on premises and AWS, but it is not suited to a one-time secure transfer like this: provisioning a new Direct Connect connection typically takes weeks or longer, which makes it overkill and blows the two-week deadline. AWS Storage Gateway enables hybrid cloud storage between on-premises environments and AWS, but it also works over the network and is meant for ongoing access to cloud storage from on premises, not a one-time migration. That leaves AWS Snowball Edge Storage Optimized, a physical device built for transferring large amounts of data to and from AWS: it is shipped to your data center, you load the data onto it, ship it back, and AWS imports the data into your S3 bucket. So whenever the question says the existing bandwidth is already used up, it is hinting at a physical transfer device.
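A quick back-of-the-envelope check of why the leftover bandwidth does not work; the question never states the VPN link speed, so the 1 Gbps figure below is purely an assumption to illustrate the math:
```python
# Rough feasibility check (assumed numbers: the question does not give the link speed)
link_gbps = 1.0                  # assumed total VPN bandwidth
free_fraction = 0.10             # 90% already utilized
data_tb = 50

free_bps = link_gbps * 1e9 * free_fraction        # usable bits per second
data_bits = data_tb * 1e12 * 8                    # 50 TB expressed in bits
days = data_bits / free_bps / 86400

print(f"~{days:.0f} days at {free_fraction:.0%} of a {link_gbps} Gbps link")
# ~46 days, far beyond the 2-week deadline, so ship a Snowball Edge instead
```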
676 Question #: 660
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company hosts an application on Amazon EC2 On-Demand Instances in an Auto Scaling group. Application peak hours occur at the same time each day. Application users report slow application performance at the start of peak hours. The application performs normally 2-3 hours after peak hours begin. The company wants to ensure that the application works properly at the start of peak hours.
Which solution will meet these requirements?
A. Configure an Application Load Balancer to distribute traffic properly to the instances.
B. Configure a dynamic scaling policy for the Auto Scaling group to launch new instances based on memory utilization.
C. Configure a dynamic scaling policy for the Auto Scaling group to launch new instances based on CPU utilization.
D. Configure a scheduled scaling policy for the Auto Scaling group to launch new instances before peak hours.
D. Configure a scheduled scaling policy for the Auto Scaling group to launch new instances before peak hours.
Notice how much the question tells you: the company knows exactly when the daily peak starts, that the slowdown lasts 2-3 hours into it, and so on. Whenever the question says the company knows the usage pattern, when the peak happens and how long it lasts, it is hinting at scheduled scaling. If the question instead said the company does not know when peaks occur, it would be hinting at dynamic scaling, because you cannot schedule what you cannot predict. So you can go straight to D, but let's look at why the other options are wrong. Option A configures an Application Load Balancer; that distributes traffic across the existing instances but does not add capacity, so it does not fix insufficient capacity at the start of peak hours. Option B scales dynamically on memory utilization and option C on CPU utilization; both are reactive, so new instances only launch after the peak has already begun, which is exactly why users see 2-3 hours of slowness, and those metrics do not necessarily reflect actual application demand. Option D uses a scheduled scaling policy to launch new instances before peak hours, so the capacity is already in place when the peak starts, which addresses the requirement directly. Just remember that scheduled scaling only works because the peak time is known; if it were not, dynamic scaling would be the appropriate choice.
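A small boto3 sketch of a scheduled action pair (the group name, times, and sizes are made-up examples): one scale-out before the known peak and one scale-in afterwards:
```python
import boto3

autoscaling = boto3.client("autoscaling")

# Scale out 30 minutes before the known daily peak (08:30 UTC is an assumed time)
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-app-asg",          # placeholder
    ScheduledActionName="pre-peak-scale-out",
    Recurrence="30 8 * * *",                     # cron: every day at 08:30 UTC
    MinSize=4,
    MaxSize=12,
    DesiredCapacity=8,
)

# Scale back in after the peak window
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-app-asg",
    ScheduledActionName="post-peak-scale-in",
    Recurrence="0 14 * * *",
    MinSize=2,
    MaxSize=12,
    DesiredCapacity=2,
)
```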
677 Question #: 661
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company runs applications on AWS that connect to the company’s Amazon RDS database. The applications scale on weekends and at peak times of the year. The company wants to scale the database more effectively for its applications that connect to the database.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use Amazon DynamoDB with connection pooling with a target group configuration for the database. Change the applications to use the DynamoDB endpoint.
B. Use Amazon RDS Proxy with a target group for the database. Change the applications to use the RDS Proxy endpoint.
C. Use a custom proxy that runs on Amazon EC2 as an intermediary to the database. Change the applications to use the custom proxy endpoint.
D. Use an AWS Lambda function to provide connection pooling with a target group configuration for the database. Change the applications to use the Lambda function.
B. Use Amazon RDS Proxy with a target group for the database. Change the applications to use the RDS Proxy endpoint.
As I always say, least operational overhead means using a feature that is already part of the service mentioned in the question rather than building your own solution. Options A, C, and D all build something new, while option B uses an RDS feature, which is practically a giveaway. Still, let's go through why the others are wrong. Option A is not suitable because DynamoDB is a NoSQL database service, not a drop-in replacement for Amazon RDS, which the applications already use. Option C, a custom proxy on EC2, means you have to build, patch, scale, and maintain it yourself; it would work, but it is the opposite of least operational overhead. Option D, Lambda for connection pooling, is a poor fit because of Lambda's stateless nature and its limitations around persistent connections. That leaves option B: Amazon RDS Proxy is a managed database proxy that provides connection pooling, failover, and security features for database applications. It lets applications scale more effectively by managing database connections on their behalf, integrates directly with RDS, and minimizes operational overhead.
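A hedged boto3 sketch of setting up the proxy (the proxy name, secret ARN, role ARN, subnets, and DB identifier are all placeholders); afterwards the applications just point their connection string at the proxy endpoint:
```python
import boto3

rds = boto3.client("rds")

# Create the proxy in front of the existing RDS database
rds.create_db_proxy(
    DBProxyName="app-db-proxy",
    EngineFamily="MYSQL",                              # or POSTGRESQL, depending on the engine
    Auth=[{"AuthScheme": "SECRETS",
           "SecretArn": "arn:aws:secretsmanager:...:secret:db-creds",  # placeholder
           "IAMAuth": "DISABLED"}],
    RoleArn="arn:aws:iam::111122223333:role/rds-proxy-role",           # placeholder
    VpcSubnetIds=["subnet-aaa", "subnet-bbb"],                         # placeholders
)

# Register the database instance behind the proxy's default target group
rds.register_db_proxy_targets(
    DBProxyName="app-db-proxy",
    DBInstanceIdentifiers=["app-database"],            # placeholder instance identifier
)
# Applications then switch their connection endpoint to the RDS Proxy endpoint.
```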
678 Question #: 662
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company uses AWS Cost Explorer to monitor its AWS costs. The company notices that Amazon Elastic Block Store (Amazon EBS) storage and snapshot costs increase every month. However, the company does not purchase additional EBS storage every month. The company wants to optimize monthly costs for its current storage usage.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use logs in Amazon CloudWatch Logs to monitor the storage utilization of Amazon EBS. Use Amazon EBS Elastic Volumes to reduce the size of the EBS volumes.
B. Use a custom script to monitor space usage. Use Amazon EBS Elastic Volumes to reduce the size of the EBS volumes.
C. Delete all expired and unused snapshots to reduce snapshot costs.
D. Delete all nonessential snapshots. Use Amazon Data Lifecycle Manager to create and manage the snapshots according to the company’s snapshot policy requirements.
D. Delete all nonessential snapshots. Use Amazon Data Lifecycle Manager to create and manage the snapshots according to the company’s snapshot policy requirements.
Again a least-operational-overhead question, so the previous logic applies. Option A: CloudWatch Logs can give insight into storage utilization, and EBS Elastic Volumes can resize volumes, but this still means watching the numbers and resizing volumes by hand, which is operationally heavy. Option B is the same idea, only worse, because it swaps CloudWatch for a custom script you now have to write and maintain. Option C, deleting expired and unused snapshots, is good practice, but it is a one-time manual cleanup and does not by itself keep the monthly costs from growing again; without ongoing management the snapshot spend creeps back up.
Option D addresses the non-essential snapshots and then uses Amazon Data Lifecycle Manager to automate snapshot creation and retention according to the company's snapshot policy requirements, which is the more streamlined, automated approach with the least operational overhead. Think of it like S3 Lifecycle, but for EBS: S3 Lifecycle manages objects, while Data Lifecycle Manager manages EBS snapshots.
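A minimal boto3 sketch of a Data Lifecycle Manager policy (the role ARN, target tag, schedule, and retention count are example values): daily snapshots of tagged volumes, keeping only the last seven:
```python
import boto3

dlm = boto3.client("dlm")

dlm.create_lifecycle_policy(
    ExecutionRoleArn="arn:aws:iam::111122223333:role/AWSDataLifecycleManagerDefaultRole",  # placeholder
    Description="Daily EBS snapshots, 7-day retention",
    State="ENABLED",
    PolicyDetails={
        "ResourceTypes": ["VOLUME"],
        "TargetTags": [{"Key": "Backup", "Value": "true"}],   # volumes tagged Backup=true
        "Schedules": [{
            "Name": "daily-snapshots",
            "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["03:00"]},
            "RetainRule": {"Count": 7},                       # old snapshots age out automatically
        }],
    },
)
```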
679 Question #: 663
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company is developing a new application on AWS. The application consists of an Amazon Elastic Container Service (Amazon ECS) cluster, an Amazon S3 bucket that contains assets for the application, and an Amazon RDS for MySQL database that contains the dataset for the application. The dataset contains sensitive information. The company wants to ensure that only the ECS cluster can access the data in the RDS for MySQL database and the data in the S3 bucket.
Which solution will meet these requirements?
A. Create a new AWS Key Management Service (AWS KMS) customer managed key to encrypt both the S3 bucket and the RDS for MySQL database. Ensure that the KMS key policy includes encrypt and decrypt permissions for the ECS task execution role.
B. Create an AWS Key Management Service (AWS KMS) AWS managed key to encrypt both the S3 bucket and the RDS for MySQL database. Ensure that the S3 bucket policy specifies the ECS task execution role as a user.
C. Create an S3 bucket policy that restricts bucket access to the ECS task execution role. Create a VPC endpoint for Amazon RDS for MySQL. Update the RDS for MySQL security group to allow access from only the subnets that the ECS cluster will generate tasks in.
D. Create a VPC endpoint for Amazon RDS for MySQL. Update the RDS for MySQL security group to allow access from only the subnets that the ECS cluster will generate tasks in. Create a VPC endpoint for Amazon S3. Update the S3 bucket policy to allow access from only the S3 VPC endpoint.
D. Create a VPC endpoint for Amazon RDS for MySQL. Update the RDS for MySQL security group to allow access from only the subnets that the ECS cluster will generate tasks in. Create a VPC endpoint for Amazon S3. Update the S3 bucket policy to allow access from only the S3 VPC endpoint.
Work through the requirements: only the ECS cluster may access the data in the RDS for MySQL database and the data in the S3 bucket. Whenever the question says only one thing should be able to reach another, think network-level controls: for RDS, a security group that only allows the subnets the ECS tasks run in; for S3, which is normally reached over public endpoints, a VPC endpoint so the traffic stays on the private AWS network instead of the public internet (the data is sensitive, so we do not want it traversing the internet). That combination points straight at option D, the only option that locks down both sides with VPC endpoints, but let's go through the options. Option A creates a customer managed KMS key for both the bucket and the database and gives the ECS task execution role encrypt and decrypt permissions. Encryption protects data at rest, but a KMS key policy is not a direct way to restrict who can reach the data sources, so it does not meet the requirement. Option B is the same idea with an AWS managed key, so cross it out for the same reason. Option C is closer: restricting the bucket policy to the ECS task execution role is a good practice, and it adds a VPC endpoint for RDS plus a security group rule limited to the ECS subnets. But the S3 side relies only on an IAM principal in the bucket policy and has no VPC endpoint, so the S3 access is not kept private. Option D does both halves: a VPC endpoint for RDS for MySQL with the security group allowing only the ECS subnets, and a VPC endpoint for S3 with the bucket policy allowing access only through that S3 VPC endpoint. It looks almost the same as C, but the difference is that the bucket policy restricts access to the VPC endpoint rather than just to the task execution role.
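A sketch of the S3 half of option D, a bucket policy that denies any request not arriving through the S3 VPC endpoint (the bucket name and endpoint ID are placeholders; note this pattern also blocks console access from outside the VPC):
```python
import json
import boto3

s3 = boto3.client("s3")

BUCKET = "app-assets-bucket"        # placeholder
S3_VPCE_ID = "vpce-0abc123def456"   # placeholder S3 VPC endpoint ID

# Only allow requests that arrive through the S3 VPC endpoint
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowOnlyFromS3VpcEndpoint",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
        "Condition": {"StringNotEquals": {"aws:SourceVpce": S3_VPCE_ID}},
    }],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```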
680 Question #: 664
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company has a web application that runs on premises. The application experiences latency issues during peak hours. The latency issues occur twice each month. At the start of a latency issue, the application’s CPU utilization immediately increases to 10 times its normal amount.
The company wants to migrate the application to AWS to improve latency. The company also wants to scale the application automatically when application demand increases. The company will use AWS Elastic Beanstalk for application deployment.
Which solution will meet these requirements?
A. Configure an Elastic Beanstalk environment to use burstable performance instances in unlimited mode. Configure the environment to scale based on requests.
B. Configure an Elastic Beanstalk environment to use compute optimized instances. Configure the environment to scale based on requests.
C. Configure an Elastic Beanstalk environment to use compute optimized instances. Configure the environment to scale on a schedule.
D. Configure an Elastic Beanstalk environment to use burstable performance instances in unlimited mode. Configure the environment to scale on predictive metrics.
D. Configure an Elastic Beanstalk environment to use burstable performance instances in unlimited mode. Configure the environment to scale on predictive metrics.
The question already gives away that the deployment is Elastic Beanstalk; what you have to choose is the instance type and the scaling configuration, and if you are not familiar with those this is hard to answer, so let's take a moment. A and D pair up (burstable performance instances in unlimited mode) and B and C pair up (compute optimized instances). Option A: burstable instances in unlimited mode handle bursty workloads well, but scaling the environment based on requests does not directly address CPU utilization jumping to 10 times its normal level during the latency issues. Options B and C use compute optimized instances, which give better baseline performance but are not the point here; B scales on requests, with the same problem as A, and C scales on a schedule, which is not responsive enough for spikes that only happen twice a month and are not precisely predictable. Option D combines burstable performance instances in unlimited mode, which can sustain bursts above baseline CPU without being throttled, with scaling on predictive metrics, which proactively adds capacity based on anticipated demand rather than on a fixed schedule or raw request counts. That aligns with the requirement to scale automatically when CPU utilization increases 10 times during the latency issues, so D is the most suitable solution.
681 Question #: 665
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company has customers located across the world. The company wants to use automation to secure its systems and network infrastructure. The company’s security team must be able to track and audit all incremental changes to the infrastructure.
Which solution will meet these requirements?
A. Use AWS Organizations to set up the infrastructure. Use AWS Config to track changes.
B. Use AWS CloudFormation to set up the infrastructure. Use AWS Config to track changes.
C. Use AWS Organizations to set up the infrastructure. Use AWS Service Catalog to track changes.
D. Use AWS CloudFormation to set up the infrastructure. Use AWS Service Catalog to track changes.
B. Use AWS CloudFormation to set up the infrastructure. Use AWS Config to track changes.
Whenever you want to track and audit incremental changes to infrastructure, that is, configuration changes to resources over time, the service is AWS Config; that is exactly what it is built for. Two options use AWS Organizations and two use AWS Service Catalog, so let's sort out what they are so we can pick correctly. Organizations is focused on managing multiple AWS accounts in an organization; it does not set up or automate infrastructure, so options A and C miss the automation part of the requirement. CloudFormation is the service that gives you automated, repeatable provisioning of infrastructure as code, so the answer has to be B or D. Between Config and Service Catalog: Service Catalog is designed for creating and managing catalogs of approved IT resources; it can help with governance, but it does not record and audit every configuration change the way AWS Config does. Cross out D, and option B, CloudFormation to set up the infrastructure and AWS Config to track changes, is the correct answer.
683 Question #: 667
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company is moving its data and applications to AWS during a multiyear migration project. The company wants to securely access data on Amazon S3 from the company’s AWS Region and from the company’s on-premises location. The data must not traverse the internet. The company has established an AWS Direct Connect connection between its Region and its on-premises location.
Which solution will meet these requirements?
A. Create gateway endpoints for Amazon S3. Use the gateway endpoints to securely access the data from the Region and the on-premises location.
B. Create a gateway in AWS Transit Gateway to access Amazon S3 securely from the Region and the on-premises location.
C. Create interface endpoints for Amazon S3. Use the interface endpoints to securely access the data from the Region and the on-premises location.
D. Use an AWS Key Management Service (AWS KMS) key to access the data securely from the Region and the on-premises location.
C. Create interface endpoints for Amazon S3. Use the interface endpoints to securely access the data from the Region and the on-premises location.
Amazon S3 supports both gateway endpoints and interface endpoints. With a gateway endpoint, you can access Amazon S3 from your VPC, without requiring an internet gateway or NAT device for your VPC, and with no additional cost. However, gateway endpoints do not allow access from on-premises networks, from peered VPCs in other AWS Regions, or through a transit gateway. For those scenarios, you must use an interface endpoint, which is available for an additional cost. For more information, see Types of VPC endpoints for Amazon S3 in the Amazon S3 User Guide. https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-s3.html
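A hedged boto3 sketch of creating the S3 interface endpoint (the VPC, subnet, and security group IDs are placeholders, and the example DNS name in the comment is only illustrative):
```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Interface endpoint for S3: reachable from on premises over Direct Connect,
# unlike a gateway endpoint, which only works from within the VPC
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",                 # placeholder
    ServiceName="com.amazonaws.us-east-1.s3",
    SubnetIds=["subnet-aaa", "subnet-bbb"],        # placeholders
    SecurityGroupIds=["sg-0123456789abcdef0"],     # placeholder
)
# On-premises clients then use the endpoint-specific DNS names
# (e.g. bucket.vpce-xxxx.s3.us-east-1.vpce.amazonaws.com) over Direct Connect.
```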
684 Question #: 668
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company created a new organization in AWS Organizations. The organization has multiple accounts for the company’s development teams. The development team members use AWS IAM Identity Center (AWS Single Sign-On) to access the accounts. For each of the company’s applications, the development teams must use a predefined application name to tag resources that are created.
A solutions architect needs to design a solution that gives the development team the ability to create resources only if the application name tag has an approved value.
Which solution will meet these requirements?
A. Create an IAM group that has a conditional Allow policy that requires the application name tag to be specified for resources to be created.
B. Create a cross-account role that has a Deny policy for any resource that has the application name tag.
C. Create a resource group in AWS Resource Groups to validate that the tags are applied to all resources in all accounts.
D. Create a tag policy in Organizations that has a list of allowed application names.
D. Create a tag policy in Organizations that has a list of allowed application names.
The requirement is to let developers create resources only when the application name tag has an approved value, and since we are working across an organization, the control should live at the organization level. Option A: IAM policies can include conditions, but they are focused on actions and resources, and a conditional Allow on an IAM group in each account is not a good way to centrally enforce specific approved tag values. Option B: a cross-account role with a Deny policy for any resource that has the application name tag would block correctly tagged resources, and broad Deny policies are generally discouraged unless absolutely necessary because they quickly become complex; tag enforcement is not done that way. Option C: AWS Resource Groups helps you organize and search resources based on tags, but it does not enforce or control which tags can be applied. Option D: a tag policy in Organizations can define the tag key and the list of allowed application names (and enforce it for chosen resource types) across all accounts, which is the robust, centralized way to ensure only approved values are used. Therefore D, creating a tag policy in Organizations, is the most appropriate solution.
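A hedged sketch of what that could look like with boto3 (the application names, resource type, and target ID are made-up examples; the tag-policy JSON follows the documented @@assign structure):
```python
import json
import boto3

org = boto3.client("organizations")

# Tag policy: the "AppName" tag may only take one of the approved values,
# enforced here for EC2 instances as an example
tag_policy = {
    "tags": {
        "AppName": {
            "tag_key": {"@@assign": "AppName"},
            "tag_value": {"@@assign": ["inventory-service", "billing-portal"]},  # example approved names
            "enforced_for": {"@@assign": ["ec2:instance"]},
        }
    }
}

response = org.create_policy(
    Name="approved-application-names",
    Description="Allow only approved AppName tag values",
    Type="TAG_POLICY",
    Content=json.dumps(tag_policy),
)

# Attach it to the root, an OU, or individual accounts
org.attach_policy(
    PolicyId=response["Policy"]["PolicySummary"]["Id"],
    TargetId="r-exampleroot",   # placeholder root/OU/account ID
)
```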
685 Question #: 669
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company runs its databases on Amazon RDS for PostgreSQL. The company wants a secure solution to manage the master user password by rotating the password every 30 days.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use Amazon EventBridge to schedule a custom AWS Lambda function to rotate the password every 30 days.
B. Use the modify-db-instance command in the AWS CLI to change the password.
C. Integrate AWS Secrets Manager with Amazon RDS for PostgreSQL to automate password rotation.
D. Integrate AWS Systems Manager Parameter Store with Amazon RDS for PostgreSQL to automate password rotation.
C. Integrate AWS Secrets Manager with Amazon RDS for PostgreSQL to automate password rotation.
Whenever a question talks about managing and rotating database credentials, API keys, or similar secrets, one service should come to mind: AWS Secrets Manager. That is option C; you barely need to look at anything else. Secrets Manager integrates with Amazon RDS for PostgreSQL and can rotate the master user password automatically on a 30-day schedule without any custom code, which is the least operational overhead.
password rotation = AWS Secrets Manager
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/rds-secrets-manager.html#rds-secrets-manager-overview
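A hedged boto3 sketch of the rotation schedule (the secret name is a placeholder, and it assumes the RDS master credentials are already stored in a Secrets Manager secret with rotation configured for the database; exact setup steps vary):
```python
import boto3

secrets = boto3.client("secretsmanager")

# Set (or update) the rotation schedule for the RDS master-user secret to 30 days
secrets.rotate_secret(
    SecretId="rds-postgres-master-credentials",     # placeholder secret name
    RotationRules={"AutomaticallyAfterDays": 30},   # rotate every 30 days
)
```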
686 Question #: 670
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company performs tests on an application that uses an Amazon DynamoDB table. The tests run for 4 hours once a week. The company knows how many read and write operations the application performs to the table each second during the tests. The company does not currently use DynamoDB for any other use case. A solutions architect needs to optimize the costs for the table.
Which solution will meet these requirements?
A. Choose on-demand mode. Update the read and write capacity units appropriately.
B. Choose provisioned mode. Update the read and write capacity units appropriately.
C. Purchase DynamoDB reserved capacity for a 1-year term.
D. Purchase DynamoDB reserved capacity for a 3-year term.
B. Choose provisioned mode. Update the read and write capacity units appropriately.
With provisioned capacity mode, you specify the number of reads and writes per second that you expect your application to require, and you are billed based on that. Furthermore if you can forecast your capacity requirements you can also reserve a portion of DynamoDB provisioned capacity and optimize your costs even further. https://docs.aws.amazon.com/wellarchitected/latest/serverless-applications-lens/capacity.html
The question says "The company knows how many read and write operations the application performs to the table each second during the tests," so those known rates can be set directly as the provisioned capacity.
Also note that "Update the read and write capacity units appropriately" only makes sense in provisioned mode; in on-demand mode there are no capacity units to set, because capacity is handled automatically.
A solutions architect needs to optimize the cost for the table, and the question literally tells you the read and write rates. When you already know the access pattern, you do not choose on-demand; on-demand is for when you cannot predict the traffic or how many reads and writes the workload needs. So cross out option A. And since the tests run only 4 hours once a week, buying reserved capacity for one or three years (options C and D) makes no sense either. That leaves the other capacity mode, provisioned mode: you manually set the read and write capacity units based on your known workload, and the question clearly says the company knows those numbers. They can provision exactly the capacity needed for the test window and scale it back down afterwards, optimizing costs by not paying for unused capacity the rest of the time.
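A small boto3 sketch of that pattern (the table name and capacity numbers are examples, and the table is assumed to already use provisioned mode): bump the provisioned throughput just before the weekly test window, then drop it back afterwards:
```python
import boto3

dynamodb = boto3.client("dynamodb")

# Before the weekly 4-hour test window: raise capacity to the known requirement
dynamodb.update_table(
    TableName="test-table",                       # placeholder
    ProvisionedThroughput={
        "ReadCapacityUnits": 1000,                # example values, use the measured rates
        "WriteCapacityUnits": 500,
    },
)

# After the tests: scale back down so you are not paying for idle capacity
dynamodb.update_table(
    TableName="test-table",
    ProvisionedThroughput={
        "ReadCapacityUnits": 1,
        "WriteCapacityUnits": 1,
    },
)
```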
687 Question #: 671
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company runs its applications on Amazon EC2 instances. The company performs periodic financial assessments of its AWS costs. The company recently identified unusual spending.
The company needs a solution to prevent unusual spending. The solution must monitor costs and notify responsible stakeholders in the event of unusual spending.
Which solution will meet these requirements?
A. Use an AWS Budgets template to create a zero spend budget.
B. Create an AWS Cost Anomaly Detection monitor in the AWS Billing and Cost Management console.
C. Create AWS Pricing Calculator estimates for the current running workload pricing details.
D. Use Amazon CloudWatch to monitor costs and to identify unusual spending.
B. Create an AWS Cost Anomaly Detection monitor in the AWS Billing and Cost Management console.
AWS Cost Anomaly Detection is designed to automatically detect unusual spending patterns based on machine learning algorithms. It can identify anomalies and send notifications when it detects unexpected changes in spending. This aligns well with the requirement to prevent unusual spending and notify stakeholders.
https://aws.amazon.com/aws-cost-management/aws-cost-anomaly-detection/
There is a purpose-built tool for this. Pricing Calculator can be eliminated: it is used before moving to the cloud, or before services are deployed, to estimate costs. AWS Budgets notifies you when a defined cost threshold is breached (for example, when account spend exceeds $100), but the requirement here is not to cap spend at a fixed amount; it is to be alerted whenever spending is unusual. CloudWatch can monitor many metrics, but it is not specifically designed for cost anomaly detection. The feature built for this purpose is option B, AWS Cost Anomaly Detection, which uses machine learning to identify unexpected spending patterns, automatically detect anomalies, and send notifications, making it suitable for this scenario. Think of it like a credit card fraud alert: if you swipe your card in a location you do not normally use, the bank contacts you to confirm the transaction. That is anomaly detection, and AWS provides exactly such an anomaly detection monitor for costs.
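For illustration only, here is a minimal boto3 sketch of a Cost Anomaly Detection monitor with an email subscription; the monitor name, threshold, and address are assumptions, and email subscribers use a daily or weekly alert frequency:

```python
import boto3

ce = boto3.client("ce")  # the Cost Explorer API hosts Cost Anomaly Detection

# Monitor spend per AWS service for anomalies.
monitor = ce.create_anomaly_monitor(
    AnomalyMonitor={
        "MonitorName": "service-spend-monitor",
        "MonitorType": "DIMENSIONAL",
        "MonitorDimension": "SERVICE",
    }
)

# Email the stakeholders when an anomaly's total impact is at least $100.
ce.create_anomaly_subscription(
    AnomalySubscription={
        "SubscriptionName": "finance-alerts",
        "MonitorArnList": [monitor["MonitorArn"]],
        "Subscribers": [{"Type": "EMAIL", "Address": "finance@example.com"}],
        "Frequency": "DAILY",
        "ThresholdExpression": {
            "Dimensions": {
                "Key": "ANOMALY_TOTAL_IMPACT_ABSOLUTE",
                "Values": ["100"],
                "MatchOptions": ["GREATER_THAN_OR_EQUAL"],
            }
        },
    }
)
```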
688 Question #: 672
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A marketing company receives a large amount of new clickstream data in Amazon S3 from a marketing campaign. The company needs to analyze the clickstream data in Amazon S3 quickly. Then the company needs to determine whether to process the data further in the data pipeline.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create external tables in a Spark catalog. Configure jobs in AWS Glue to query the data.
B. Configure an AWS Glue crawler to crawl the data. Configure Amazon Athena to query the data.
C. Create external tables in a Hive metastore. Configure Spark jobs in Amazon EMR to query the data.
D. Configure an AWS Glue crawler to crawl the data. Configure Amazon Kinesis Data Analytics to use SQL to query the data.
B. Configure an AWS Glue crawler to crawl the data. Configure Amazon Athena to query the data.
Option B - leverages serverless services that minimise management tasks and allows the company to focus on querying and analysing the data with the LEAST operational overhead. AWS Glue with Athena (Option B): AWS Glue is a fully managed extract, transform, and load (ETL) service, and Athena is a serverless query service that allows you to analyze data directly in Amazon S3 using SQL queries. By configuring an AWS Glue crawler to crawl the data, you can create a schema for the data, and then use Athena to query the data directly without the need to load it into a separate database. This minimizes operational overhead.
The requirement is simply to analyze the clickstream data in S3 quickly, which usually means running SQL queries. S3 Select might come to mind, but as discussed in a previous question, S3 Select operates on a single object, not a whole set of files. The next quick option is Athena, which is serverless and therefore the least operational overhead, but let's go through the other options as well. Option A uses a Spark catalog with AWS Glue jobs; while Glue can query data with Spark, Spark jobs typically require more configuration and maintenance, so this adds operational overhead. Option C uses a Hive metastore with Spark jobs on Amazon EMR; like option A, this introduces additional complexity compared with a serverless solution. Option D looks similar to option B in its first half, but Amazon Kinesis Data Analytics is intended for real-time analytics on streaming data and would be over-engineered for analyzing clickstream data already sitting in S3. Option B uses a Glue crawler (not Glue jobs) to automatically discover and catalog metadata about the clickstream data in S3, then Athena, a serverless query service, for quick ad hoc SQL queries without any infrastructure to set up or manage. Therefore option B is the most suitable solution for quickly analyzing the data with minimal operational overhead.
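A minimal sketch of option B with boto3, assuming placeholder names, S3 paths, and an existing IAM role for the crawler:

```python
import boto3

glue = boto3.client("glue")
athena = boto3.client("athena")

# Crawl the clickstream prefix so the data is cataloged for Athena.
glue.create_crawler(
    Name="clickstream-crawler",
    Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",
    DatabaseName="clickstream_db",
    Targets={"S3Targets": [{"Path": "s3://example-clickstream-bucket/raw/"}]},
)
glue.start_crawler(Name="clickstream-crawler")

# Ad hoc SQL once the table exists; query results land in an S3 location.
athena.start_query_execution(
    QueryString="SELECT page, COUNT(*) AS clicks FROM raw "
                "GROUP BY page ORDER BY clicks DESC LIMIT 10",
    QueryExecutionContext={"Database": "clickstream_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
```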
689 Question #: 673
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company runs an SMB file server in its data center. The file server stores large files that the company frequently accesses for up to 7 days after the file creation date. After 7 days, the company needs to be able to access the files with a maximum retrieval time of 24 hours.
Which solution will meet these requirements?
A. Use AWS DataSync to copy data that is older than 7 days from the SMB file server to AWS.
B. Create an Amazon S3 File Gateway to increase the company’s storage space. Create an S3 Lifecycle policy to transition the data to S3 Glacier Deep Archive after 7 days.
C. Create an Amazon FSx File Gateway to increase the company’s storage space. Create an Amazon S3 Lifecycle policy to transition the data after 7 days.
D. Configure access to Amazon S3 for each user. Create an S3 Lifecycle policy to transition the data to S3 Glacier Flexible Retrieval after 7 days.
B. Create an Amazon S3 File Gateway to increase the company’s storage space. Create an S3 Lifecycle policy to transition the data to S3 Glacier Deep Archive after 7 days.
S3 File Gateway connects the SMB file server to S3, and a lifecycle policy moves objects to S3 Glacier Deep Archive, which supports retrieval within 12 hours. https://aws.amazon.com/blogs/aws/new-amazon-s3-storage-class-glacier-deep-archive/
Amazon S3 File Gateway supports SMB and NFS; Amazon FSx File Gateway supports SMB for Windows workloads.
S3 file gateway supports SMB and S3 Glacier Deep Archive can retrieve data within 12 hours. https://aws.amazon.com/storagegateway/file/s3/ https://docs.aws.amazon.com/prescriptive-guidance/latest/backup-recovery/amazon-s3-glacier.html
Option A uses DataSync, which, as we have seen, is a data transfer service for copying data from on premises to the cloud; simply copying files older than seven days does not provide the tiering and retrieval behavior required here. Option C uses an Amazon FSx File Gateway, which targets Windows SMB workloads backed by FSx, and it never specifies which storage class the data transitions to, so it does not address the 24-hour maximum retrieval time at all. Option D gives each user direct access to S3, but the files live on the SMB file server, so per-user S3 access is not an efficient fit, and S3 Glacier Flexible Retrieval is also more expensive than Deep Archive. Option B creates an S3 File Gateway, which presents cloud storage to the on-premises SMB clients and so increases the company's usable storage space, and a lifecycle policy transitions the data to S3 Glacier Deep Archive after 7 days. Deep Archive standard retrievals complete within 12 hours, which satisfies the 24-hour requirement, and Deep Archive is cheaper than Flexible Retrieval. So we go with option B.
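As an illustrative sketch (the bucket name is a placeholder), the lifecycle half of option B could look like this in boto3:

```python
import boto3

s3 = boto3.client("s3")

# Lifecycle rule on the bucket behind the S3 File Gateway: move objects to
# S3 Glacier Deep Archive 7 days after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-file-gateway-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-after-7-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to all objects
                "Transitions": [{"Days": 7, "StorageClass": "DEEP_ARCHIVE"}],
            }
        ]
    },
)
```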
690 Question #: 674
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company runs a web application on Amazon EC2 instances in an Auto Scaling group. The application uses a database that runs on an Amazon RDS for PostgreSQL DB instance. The application performs slowly when traffic increases. The database experiences a heavy read load during periods of high traffic.
Which actions should a solutions architect take to resolve these performance issues? (Choose two.)
A. Turn on auto scaling for the DB instance.
B. Create a read replica for the DB instance. Configure the application to send read traffic to the read replica.
C. Convert the DB instance to a Multi-AZ DB instance deployment. Configure the application to send read traffic to the standby DB instance.
D. Create an Amazon ElastiCache cluster. Configure the application to cache query results in the ElastiCache cluster.
E. Configure the Auto Scaling group subnets to ensure that the EC2 instances are provisioned in the same Availability Zone as the DB instance.
B. Create a read replica for the DB instance. Configure the application to send read traffic to the read replica.
D. Create an Amazon ElastiCache cluster. Configure the application to cache query results in the ElastiCache cluster.
B: Read replicas distribute the load and help improve performance. D: Caching of any kind will help with performance. Remember: "The database experiences a heavy read load during periods of high traffic."
By creating a read replica, you offload read traffic from the primary DB instance to the replica, distributing the load and improving overall performance during periods of heavy read traffic. Amazon ElastiCache can be used to cache frequently accessed data, reducing the load on the database. This is particularly effective for read-heavy workloads, as it allows the application to retrieve data from the cache rather than making repeated database queries.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/creating-elasticache-cluster-with-RDS-settings.html
Option A, turning on auto scaling for the DB instance, does not help: RDS storage autoscaling only grows storage, and there is no compute auto scaling for a single RDS for PostgreSQL instance, so it does not address a heavy read load. Option C, converting to a Multi-AZ DB instance deployment, is mainly about availability and fault tolerance; in a Multi-AZ DB instance deployment the standby cannot serve read traffic, so it does not improve read performance. Option E, provisioning the EC2 instances in the same Availability Zone as the DB instance, is about network locality and likewise does nothing for the read load. Whenever a question describes a heavy read load, look for read replicas among the options: a read replica offloads read traffic from the primary instance, distributing the load and improving overall performance, which is the common way to scale read-heavy database workloads horizontally (option B). The other correct option uses Amazon ElastiCache, a managed caching service that improves application performance by caching frequently accessed data. Caching query results reduces the load on the PostgreSQL database for repeated reads: after the first request, the same data is served from the cache instead of hitting the database again.
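A minimal boto3 sketch of the read-replica half of the answer, with placeholder instance identifiers and instance class:

```python
import boto3

rds = boto3.client("rds")

# Create a read replica of the PostgreSQL instance; the application then
# sends read-only traffic to the replica endpoint and writes to the primary.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="app-db-read-1",
    SourceDBInstanceIdentifier="app-db-primary",
    DBInstanceClass="db.r6g.large",
)

# The replica gets its own endpoint (absent while the replica is still creating).
replica = rds.describe_db_instances(DBInstanceIdentifier="app-db-read-1")
print(replica["DBInstances"][0].get("Endpoint"))
```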
Question #: 675
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company uses Amazon EC2 instances and Amazon Elastic Block Store (Amazon EBS) volumes to run an application. The company creates one snapshot of each EBS volume every day to meet compliance requirements. The company wants to implement an architecture that prevents the accidental deletion of EBS volume snapshots. The solution must not change the administrative rights of the storage administrator user.
Which solution will meet these requirements with the LEAST administrative effort?
A. Create an IAM role that has permission to delete snapshots. Attach the role to a new EC2 instance. Use the AWS CLI from the new EC2 instance to delete snapshots.
B. Create an IAM policy that denies snapshot deletion. Attach the policy to the storage administrator user.
C. Add tags to the snapshots. Create retention rules in Recycle Bin for EBS snapshots that have the tags.
D. Lock the EBS snapshots to prevent deletion.
D. Lock the EBS snapshots to prevent deletion.
The “lock” feature in AWS allows you to prevent accidental deletion of resources, including EBS snapshots. This can be set at the snapshot level, providing a straightforward and effective way to meet the requirements without changing the administrative rights of the storage administrator user. Exactly what a locked EBS snapshot is used for https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-snapshot-lock.html
When the question asks for the LEAST administrative effort, look for an existing feature you can simply enable rather than building something new. Options A and B build something new, while C and D use built-in snapshot features. Option A creates a new IAM role with delete permission, attaches it to an EC2 instance, and routes deletions through the AWS CLI on that instance; it can work, but it adds components and complexity, which is too much administrative effort. Option B attaches an IAM policy that denies snapshot deletion to the storage administrator user; that does prevent accidental deletion, but it changes the administrator's rights, and the question explicitly says the administrative rights must not change. Option C tags the snapshots and creates Recycle Bin retention rules based on those tags, which adds tagging and another service for what is a simple requirement. Option D uses the built-in EBS snapshot lock feature: a locked snapshot cannot be deleted while the lock is in effect. It is a straightforward, effective solution that needs no extra IAM roles, policies, or tags and directly addresses the requirement with minimal administrative effort.
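If your boto3/AWS CLI version includes the newer EC2 snapshot lock operation, option D can be as small as the following sketch; the snapshot ID and duration are placeholders, and the exact parameters should be checked against the current API reference:

```python
import boto3

ec2 = boto3.client("ec2")

# Lock a daily snapshot for 30 days so it cannot be deleted while locked.
ec2.lock_snapshot(
    SnapshotId="snap-0123456789abcdef0",
    LockMode="governance",   # "compliance" locks cannot be shortened or removed
    LockDuration=30,         # days
)
```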
692 Question #: 676
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company’s application uses Network Load Balancers, Auto Scaling groups, Amazon EC2 instances, and databases that are deployed in an Amazon VPC. The company wants to capture information about traffic to and from the network interfaces in near real time in its Amazon VPC. The company wants to send the information to Amazon OpenSearch Service for analysis.
Which solution will meet these requirements?
A. Create a log group in Amazon CloudWatch Logs. Configure VPC Flow Logs to send the log data to the log group. Use Amazon Kinesis Data Streams to stream the logs from the log group to OpenSearch Service.
B. Create a log group in Amazon CloudWatch Logs. Configure VPC Flow Logs to send the log data to the log group. Use Amazon Kinesis Data Firehose to stream the logs from the log group to OpenSearch Service.
C. Create a trail in AWS CloudTrail. Configure VPC Flow Logs to send the log data to the trail. Use Amazon Kinesis Data Streams to stream the logs from the trail to OpenSearch Service.
D. Create a trail in AWS CloudTrail. Configure VPC Flow Logs to send the log data to the trail. Use Amazon Kinesis Data Firehose to stream the logs from the trail to OpenSearch Service.
B. Create a log group in Amazon CloudWatch Logs. Configure VPC Flow Logs to send the log data to the log group. Use Amazon Kinesis Data Firehose to stream the logs from the log group to OpenSearch Service.
CloudTrail is for logging administrative actions, we need CloudWatch. We want the data in another AWS service (OpenSearch), not Kinesis, thus we need Firehose, not Streams. VPC Flow Logs capture information about the IP traffic going to and from network interfaces in a VPC. By configuring VPC Flow Logs to send the log data to a log group in Amazon CloudWatch Logs, you can then use Amazon Kinesis Data Firehose to stream the logs from the log group to Amazon OpenSearch Service for analysis. This approach provides near real-time streaming of logs to the analytics service.
The key phrases are near real time and capture information about traffic to and from the network interfaces. Whenever you need details about traffic to and from network interfaces within a VPC, VPC Flow Logs should come to mind, and all four options use them, so the difference lies elsewhere. Option A sends the flow logs to a CloudWatch Logs log group and then uses Kinesis Data Streams to stream them to OpenSearch Service. This can be made to work, but Data Streams adds unnecessary complexity: you would have to build and manage a consumer to deliver the records to OpenSearch. Options C and D use CloudTrail, which logs API activity on the account; it is not a destination for VPC Flow Logs and does not capture the detailed network traffic information that flow logs provide, so both are out. That leaves A versus B, and the difference is Data Streams versus Data Firehose: Kinesis Data Firehose is a managed delivery service that can deliver data directly to destinations such as OpenSearch Service with no consumer to build, making option B the better fit for near-real-time log analysis in this scenario.
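A sketch of option B with boto3, assuming a Firehose delivery stream that already has OpenSearch Service as its destination and using placeholder ARNs and IDs:

```python
import boto3

ec2 = boto3.client("ec2")
logs = boto3.client("logs")

# 1) Publish VPC Flow Logs to a CloudWatch Logs log group.
ec2.create_flow_logs(
    ResourceType="VPC",
    ResourceIds=["vpc-0123456789abcdef0"],
    TrafficType="ALL",
    LogDestinationType="cloud-watch-logs",
    LogGroupName="/vpc/flow-logs",
    DeliverLogsPermissionArn="arn:aws:iam::123456789012:role/FlowLogsToCWL",
)

# 2) Subscribe the log group to the Kinesis Data Firehose delivery stream
#    that delivers to OpenSearch Service.
logs.put_subscription_filter(
    logGroupName="/vpc/flow-logs",
    filterName="to-opensearch",
    filterPattern="",  # forward everything
    destinationArn="arn:aws:firehose:us-east-1:123456789012:deliverystream/flowlogs-to-opensearch",
    roleArn="arn:aws:iam::123456789012:role/CWLToFirehose",
)
```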
693 Question #: 677
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company is developing an application that will run on a production Amazon Elastic Kubernetes Service (Amazon EKS) cluster. The EKS cluster has managed node groups that are provisioned with On-Demand Instances.
The company needs a dedicated EKS cluster for development work. The company will use the development cluster infrequently to test the resiliency of the application. The EKS cluster must manage all the nodes.
Which solution will meet these requirements MOST cost-effectively?
A. Create a managed node group that contains only Spot Instances.
B. Create two managed node groups. Provision one node group with On-Demand Instances. Provision the second node group with Spot Instances.
C. Create an Auto Scaling group that has a launch configuration that uses Spot Instances. Configure the user data to add the nodes to the EKS cluster.
D. Create a managed node group that contains only On-Demand Instances.
B. Create two managed node groups. Provision one node group with On-Demand Instances. Provision the second node group with Spot Instances.
The keywords are infrequent and resiliency. This solution allows you to have a mix of On-Demand Instances and Spot Instances within the same EKS cluster. You can use the On-Demand Instances for the development work where you need dedicated resources and then leverage Spot Instances for testing the resiliency of the application. Spot Instances are generally more cost-effective but can be terminated with short notice, so using a combination of On-Demand and Spot Instances provides a balance between cost savings and stability. Option A (Create a managed node group that contains only Spot Instances) might be cost-effective, but it could introduce potential challenges for tasks that require dedicated resources and might not be the best fit for all scenarios.
The company needs a dedicated EKS cluster for development, will use it infrequently to test application resiliency, requires the EKS cluster to manage all the nodes, and wants the most cost-effective solution. Option A, a managed node group with only Spot Instances, is the cheapest on paper, but Spot capacity can be reclaimed at short notice; if no Spot capacity is available, the development cluster would have no nodes at all, so Spot alone is not suitable. Option C builds an Auto Scaling group with a launch configuration and user data to join the nodes to the cluster; those would be self-managed nodes, which violates the requirement that the EKS cluster manage all the nodes, and it adds operational work compared with managed node groups. Option D, only On-Demand Instances, is reliable but the most expensive. Option B uses two managed node groups, one On-Demand and one Spot: the On-Demand group provides stable, predictable capacity so the development cluster is never left without nodes, while the Spot group provides cheap capacity for the infrequent resiliency tests, where interruption is acceptable. Therefore option B is the most cost-effective solution that meets all the requirements.
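A boto3 sketch of option B's two managed node groups, with placeholder cluster name, subnets, node role, and instance types:

```python
import boto3

eks = boto3.client("eks")

common = {
    "clusterName": "dev-cluster",
    "subnets": ["subnet-0aaa", "subnet-0bbb"],
    "nodeRole": "arn:aws:iam::123456789012:role/EksNodeRole",
    "scalingConfig": {"minSize": 1, "maxSize": 3, "desiredSize": 1},
}

# Small On-Demand group so the dev cluster always has some capacity.
eks.create_nodegroup(
    nodegroupName="dev-on-demand",
    capacityType="ON_DEMAND",
    instanceTypes=["m5.large"],
    **common,
)

# Spot group for cheap, interruption-tolerant resiliency testing.
eks.create_nodegroup(
    nodegroupName="dev-spot",
    capacityType="SPOT",
    instanceTypes=["m5.large", "m5a.large", "m4.large"],
    **common,
)
```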
694 Question #: 678
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company stores sensitive data in Amazon S3. A solutions architect needs to create an encryption solution. The company needs to fully control the ability of users to create, rotate, and disable encryption keys with minimal effort for any data that must be encrypted.
Which solution will meet these requirements?
A. Use default server-side encryption with Amazon S3 managed encryption keys (SSE-S3) to store the sensitive data.
B. Create a customer managed key by using AWS Key Management Service (AWS KMS). Use the new key to encrypt the S3 objects by using server-side encryption with AWS KMS keys (SSE-KMS).
C. Create an AWS managed key by using AWS Key Management Service (AWS KMS). Use the new key to encrypt the S3 objects by using server-side encryption with AWS KMS keys (SSE-KMS).
D. Download S3 objects to an Amazon EC2 instance. Encrypt the objects by using customer managed keys. Upload the encrypted objects back into Amazon S3.
B. Create a customer managed key by using AWS Key Management Service (AWS KMS). Use the new key to encrypt the S3 objects by using server-side encryption with AWS KMS keys (SSE-KMS).
This option allows you to create a customer managed key using AWS KMS. With a customer managed key, you have full control over key lifecycle management, including the ability to create, rotate, and disable keys with minimal effort. SSE-KMS also integrates with AWS Identity and Access Management (IAM) for fine-grained access control.
When you see encryption, think of KMS, but other options such as SSE-S3 exist too. The requirement here is to create, rotate, and disable keys. With SSE-S3 the keys are managed by Amazon S3 itself, so while it encrypts the data, it does not give you control over key management, and it can be eliminated. Option C does use AWS KMS, but AWS managed keys do not provide the same level of control over key creation, rotation, and disabling as customer managed keys, because KMS manages them for you. Option D, manually downloading, encrypting, and re-uploading S3 objects from an EC2 instance, is less efficient and more error-prone than leveraging KMS server-side encryption. That leaves option B: create a customer managed KMS key, which gives full control over the key lifecycle, including creating, rotating, and disabling keys as needed, and use SSE-KMS so the S3 objects are encrypted with that key. This provides a secure, managed approach to encrypting sensitive data in Amazon S3.
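For illustration, a small boto3 sketch of option B; the bucket name is a placeholder:

```python
import boto3

kms = boto3.client("kms")
s3 = boto3.client("s3")

# Customer managed key: the company controls creation, rotation and disabling.
key = kms.create_key(Description="S3 sensitive-data key")
key_arn = key["KeyMetadata"]["Arn"]
kms.enable_key_rotation(KeyId=key_arn)   # automatic annual rotation
# kms.disable_key(KeyId=key_arn)         # can be disabled at any time

# Default SSE-KMS encryption on the bucket using that key.
s3.put_bucket_encryption(
    Bucket="example-sensitive-data-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": key_arn,
                },
                "BucketKeyEnabled": True,
            }
        ]
    },
)
```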
695 Question #: 679
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A company wants to back up its on-premises virtual machines (VMs) to AWS. The company’s backup solution exports on-premises backups to an Amazon S3 bucket as objects. The S3 backups must be retained for 30 days and must be automatically deleted after 30 days.
Which combination of steps will meet these requirements? (Choose three.)
A. Create an S3 bucket that has S3 Object Lock enabled.
B. Create an S3 bucket that has object versioning enabled.
C. Configure a default retention period of 30 days for the objects.
D. Configure an S3 Lifecycle policy to protect the objects for 30 days.
E. Configure an S3 Lifecycle policy to expire the objects after 30 days.
F. Configure the backup solution to tag the objects with a 30-day retention period
A. Create an S3 bucket that has S3 Object Lock enabled.
C. Configure a default retention period of 30 days for the objects.
E. Configure an S3 Lifecycle policy to expire the objects after 30 days.
In theory, E alone would be enough, because the objects are "retained for 30 days" without any configuration as long as no one deletes them. But let's assume the intent is to prevent deletion. A: Yes, required to prevent deletion. Object Lock requires versioning, so a bucket created with S3 Object Lock enabled also has object versioning enabled automatically; otherwise the bucket could not be created. C: Yes, the "default retention period" specifies how long Object Lock applies to new objects by default, and we need this to protect the objects from deletion.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock.html
A. Create an S3 bucket that has S3 Object Lock enabled. Enable the S3 Object Lock feature on S3. C. Configure a default retention period of 30 days for the objects. To lock the objects for 30 days. E. Configure an S3 Lifecycle policy to expire the objects after 30 days. -> to delete the objects after 30 days.
The requirements are that the backups must be retained for 30 days and automatically deleted after 30 days. Option B, object versioning, is not needed on its own to meet these requirements; versioning matters when you manage multiple versions of objects, and it does not by itself define retention or deletion policies (it is turned on implicitly when you create a bucket with Object Lock). Option D, a lifecycle policy to "protect" the objects for 30 days, is not a real capability: lifecycle policies transition or expire objects, they do not protect them, and nothing in the question is about protection beyond retention. Option F, tagging the objects with a 30-day retention period, can be useful for organizational purposes but does not by itself enforce retention or deletion. The correct combination is therefore: A, create the bucket with S3 Object Lock enabled, so objects cannot be deleted or modified during their retention period; C, configure a default retention period of 30 days, so every new object is locked for 30 days, which enforces the retention requirement; and E, configure a lifecycle policy that expires the objects after 30 days, so they are deleted automatically once the retention ends.
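Putting A, C, and E together as a boto3 sketch (the bucket name and retention mode are placeholders, and a region-specific CreateBucketConfiguration is omitted):

```python
import boto3

s3 = boto3.client("s3")
bucket = "example-vm-backups"  # placeholder

# A: bucket with S3 Object Lock enabled (versioning is enabled automatically).
s3.create_bucket(Bucket=bucket, ObjectLockEnabledForBucket=True)

# C: default retention of 30 days for every new object.
s3.put_object_lock_configuration(
    Bucket=bucket,
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
    },
)

# E: expire objects after 30 days; noncurrent versions are cleaned up once
# their 30-day lock has lapsed.
s3.put_bucket_lifecycle_configuration(
    Bucket=bucket,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-after-30-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "Expiration": {"Days": 30},
                "NoncurrentVersionExpiration": {"NoncurrentDays": 1},
            }
        ]
    },
)
```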
696 Question #: 680
Topic #: 1
[All AWS Certified Solutions Architect - Associate SAA-C03 Questions]
A solutions architect needs to copy files from an Amazon S3 bucket to an Amazon Elastic File System (Amazon EFS) file system and another S3 bucket. The files must be copied continuously. New files are added to the original S3 bucket consistently. The copied files should be overwritten only if the source file changes.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create an AWS DataSync location for both the destination S3 bucket and the EFS file system. Create a task for the destination S3 bucket and the EFS file system. Set the transfer mode to transfer only data that has changed.
B. Create an AWS Lambda function. Mount the file system to the function. Set up an S3 event notification to invoke the function when files are created and changed in Amazon S3. Configure the function to copy files to the file system and the destination S3 bucket.
C. Create an AWS DataSync location for both the destination S3 bucket and the EFS file system. Create a task for the destination S3 bucket and the EFS file system. Set the transfer mode to transfer all data.
D. Launch an Amazon EC2 instance in the same VPC as the file system. Mount the file system. Create a script to routinely synchronize all objects that changed in the origin S3 bucket to the destination S3 bucket and the mounted file system.
A. Create an AWS DataSync location for both the destination S3 bucket and the EFS file system. Create a task for the destination S3 bucket and the EFS file system. Set the transfer mode to transfer only data that has changed.
A fulfils the requirement of “copied files should be overwritten only if the source file changes” so A is correct.
Transfer only data that has changed – DataSync copies only the data and metadata that differs between the source and destination location. Transfer all data – DataSync copies everything in the source to the destination without comparing differences between the locations. https://docs.aws.amazon.com/datasync/latest/userguide/configure-metadata.html
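As a sketch of option A in boto3 with placeholder location ARNs, TransferMode=CHANGED is what limits the copy to data that differs between source and destination:

```python
import boto3

datasync = boto3.client("datasync")

# One task per destination (the second S3 bucket and the EFS file system);
# both transfer only data that has changed and overwrite stale copies.
for destination_arn in [
    "arn:aws:datasync:us-east-1:123456789012:location/loc-dest-s3",
    "arn:aws:datasync:us-east-1:123456789012:location/loc-dest-efs",
]:
    datasync.create_task(
        SourceLocationArn="arn:aws:datasync:us-east-1:123456789012:location/loc-source-s3",
        DestinationLocationArn=destination_arn,
        Options={
            "TransferMode": "CHANGED",  # copy only data/metadata that differs
            "OverwriteMode": "ALWAYS",  # overwrite when the source file changed
        },
    )
```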
602# A company has five organizational units (OUs) as part of its organization in AWS Organizations. Each OU correlates with the five businesses the company owns. The company’s research and development (R&D) business is being separated from the company and will need its own organization. A solutions architect creates a new, separate management account for this purpose. What should the solutions architect do next in the new management account?
A. Make the AWS R&D account part of both organizations during the transition.
B. Invite the AWS R&D account to be part of the new organization after the R&D AWS account has left the old organization.
C. Create a new AWS R&D account in the new organization. Migrate resources from the old AWS R&D account to the new AWS R&D account.
D. Have the AWS R&D account join the new organization. Make the new management account a member of the old organization.
B. Invite the AWS R&D account to be part of the new organization after the R&D AWS account has left the old organization.
603# A company is designing a solution to capture customer activity across different web applications to process analytics and make predictions. Customer activity in web applications is unpredictable and can increase suddenly. The company needs a solution that integrates with other web applications. The solution must include an authorization step for security reasons. What solution will meet these requirements?
A. Configure a gateway load balancer (GWLB) in front of an Amazon Elastic Container Service (Amazon ECS) container instance that stores information that the business receives on an Amazon Elastic File System (Amazon EFS) file system. Authorization is resolved in the GWLB.
B. Configure an Amazon API Gateway endpoint in front of an Amazon Kinesis data stream that stores the information the business receives in an Amazon S3 bucket. Use an AWS Lambda function to resolve authorization.
C. Configure an Amazon API Gateway endpoint in front of an Amazon Kinesis Data Firehose that stores the information the business receives in an Amazon S3 bucket. Use an API Gateway Lambda authorizer to resolve authorization.
D. Configure a gateway load balancer (GWLB) in front of an Amazon Elastic Container Service (Amazon ECS) container instance that stores information that the business receives on an Amazon Elastic File System (Amazon EFS) file system. Use an AWS Lambda function to resolve authorization.
C. Configure an Amazon API Gateway endpoint in front of an Amazon Kinesis Data Firehose that stores the information the business receives in an Amazon S3 bucket. Use an API Gateway Lambda authorizer to resolve authorization.
Amazon API Gateway: Acts as a managed service to create, publish, and secure APIs at scale. Allows the creation of API endpoints that can be integrated with other web applications. Amazon Kinesis Data Firehose: Used to capture and upload streaming data to other AWS services. In this case, you can store the information in an Amazon S3 bucket. API Gateway Lambda Authorizer: Provides a way to control access to your APIs using Lambda functions. Allows you to implement custom authorization logic. This solution offers scalability, the ability to handle unpredictable surges in activity, and integration capabilities. Using a Lambda API Gateway authorizer ensures that the authorization step is performed securely.
The other options have some limitations or are less aligned with the specified requirements:
A. GWLB in front of Amazon ECS: This option puts a Gateway Load Balancer in front of ECS, which might be better suited to scenarios requiring container orchestration but introduces unnecessary complexity for the given requirements.
B. Amazon API Gateway in front of an Amazon Kinesis data stream: This option lacks the Lambda authorizer to resolve authorization and may not be as easy to integrate with other web applications.
D. GWLB in front of Amazon ECS with a Lambda function for authorization: Similar to option A, this introduces a load balancer and ECS, which might be more complex than necessary for the given requirements. In summary, option C offers a streamlined solution with the necessary scalability, integration capabilities, and authorization control.
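To make the authorization step concrete, here is a minimal token-based API Gateway Lambda authorizer sketch; the environment-variable check is a stand-in for real token validation (for example, verifying a JWT against an identity provider):

```python
import os

def handler(event, context):
    # API Gateway passes the caller's token for a TOKEN authorizer.
    token = event.get("authorizationToken", "")
    effect = "Allow" if token == os.environ.get("API_TOKEN") else "Deny"

    # Return an IAM policy that allows or denies invoking the API method.
    return {
        "principalId": "clickstream-client",
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Action": "execute-api:Invoke",
                    "Effect": effect,
                    "Resource": event["methodArn"],
                }
            ],
        },
    }
```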
604# An e-commerce company wants a disaster recovery solution for its Amazon RDS DB instances running Microsoft SQL Server Enterprise Edition. The company’s current recovery point objective (RPO) and recovery time objective (RTO) are 24 hours. Which solution will meet these requirements in the MOST cost-effective way?
A. Create a cross-region read replica and promote the read replica to the primary instance.
B. Use AWS Database Migration Service (AWS DMS) to create RDS replication between regions.
C. Use 24-hour cross-region replication to copy native backups to an Amazon S3 bucket.
D. Copy automatic snapshots to another region every 24 hours.
D. Copy automatic snapshots to another region every 24 hours.
Amazon RDS creates and saves automated backups of your DB instance or Multi-AZ DB cluster during your DB instance backup window. RDS creates a storage volume snapshot of your database instance, backing up the entire database instance and not just individual databases. RDS saves automated backups of your DB instance according to the backup retention period that you specify. If necessary, you can recover your DB instance at any time during the backup retention period.
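As a sketch of option D (identifiers, regions, and the KMS key are placeholders), a scheduled job, for example an EventBridge-invoked Lambda function, could run a cross-region snapshot copy like this every 24 hours:

```python
import boto3

# Run in the DR (destination) region; boto3 builds the presigned URL for the
# cross-region copy when SourceRegion is supplied.
rds_dr = boto3.client("rds", region_name="us-west-2")

rds_dr.copy_db_snapshot(
    SourceDBSnapshotIdentifier=(
        "arn:aws:rds:us-east-1:123456789012:snapshot:rds:sqlserver-prod-2024-01-24-06-00"
    ),
    TargetDBSnapshotIdentifier="sqlserver-prod-dr-copy",
    SourceRegion="us-east-1",
    # Required if the snapshot is encrypted: a KMS key in the destination region.
    KmsKeyId="arn:aws:kms:us-west-2:123456789012:key/1234abcd-00aa-11bb-22cc-3456dd7890ee",
)
```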
605# A company runs a web application on Amazon EC2 instances in an auto-scaling group behind an application load balancer that has sticky sessions enabled. The web server currently hosts the user’s session state. The company wants to ensure high availability and prevent the loss of user session state in the event of a web server outage. What solution will meet these requirements?
A. Use an Amazon ElastiCache for Memcached instance to store session data. Update the application to use ElastiCache for Memcached to store session state.
B. Use Amazon ElastiCache for Redis to store session state. Update the application to use ElastiCache for Redis to store session state.
C. Use an AWS Storage Gateway cached volume to store session data. Update the application to use the AWS Storage Gateway cached volume to store session state.
D. Use Amazon RDS to store session state. Update your application to use Amazon RDS to store session state.
B. Use Amazon ElastiCache for Redis to store session state. Update the application to use ElastiCache for Redis to store session state.
In summary, option B (Amazon ElastiCache for Redis) is a common and effective solution for maintaining user session state in a web application, providing high availability and preventing loss of session state during web server outages.
Amazon ElastiCache for Redis: Redis is an in-memory data store that can be used to store session data. It offers high availability and persistence options, making it suitable for maintaining session state. Sticky sessions and Auto Scaling group: Using ElastiCache for Redis enables centralized storage of session state, ensuring that sessions survive even if an EC2 instance becomes unavailable or is replaced due to automatic scaling.
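A small application-side sketch of externalized session state using the redis client library; the endpoint, key naming, and TTL are assumptions:

```python
import json
import redis  # pip install redis

# Connect to the ElastiCache for Redis primary endpoint (placeholder hostname).
r = redis.Redis(host="my-sessions.xxxxxx.ng.0001.use1.cache.amazonaws.com", port=6379)

SESSION_TTL_SECONDS = 1800  # 30-minute sliding session window

def save_session(session_id: str, data: dict) -> None:
    # Store the session centrally so any web server can serve the user.
    r.setex(f"session:{session_id}", SESSION_TTL_SECONDS, json.dumps(data))

def load_session(session_id: str):
    raw = r.get(f"session:{session_id}")
    if raw is None:
        return None
    r.expire(f"session:{session_id}", SESSION_TTL_SECONDS)  # refresh TTL on access
    return json.loads(raw)
```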
606# A company migrated a MySQL database from the company’s on-premises data center to an Amazon RDS for MySQL DB instance. The company sized the RDS database instance to meet the company’s average daily workload. Once a month, the database runs slowly when the company runs queries for a report. The company wants the ability to run reports and maintain performance of daily workloads. What solution will meet these requirements?
A. Create a read replica of the database. Direct queries to the read replica.
B. Create a backup of the database. Restore the backup to another database instance. Direct queries to the new database.
C. Export the data to Amazon S3. Use Amazon Athena to query the S3 bucket.
D. Resize the database instance to accommodate the additional workload.
A. Create a read replica of the database. Direct queries to the read replica.
This is the most cost-effective solution because it stays within Amazon RDS and does not require any additional AWS services. A read replica is a copy of the database that is kept synchronized with the primary database. You can route the report queries to the read replica so they do not impact the performance of the daily workload.
607# A company runs a container application using Amazon Elastic Kubernetes Service (Amazon EKS). The application includes microservices that manage customers and place orders. The business needs to direct incoming requests to the appropriate microservices. Which solution will meet this requirement in the MOST cost-effective way?
A. Use the AWS Load Balancer Controller to provision a network load balancer.
B. Use the AWS Load Balancer Controller to provision an application load balancer.
C. Use an AWS Lambda function to connect requests to Amazon EKS.
D. Use Amazon API Gateway to connect requests to Amazon EKS.
D. Use Amazon API Gateway to connect requests to Amazon EKS.
You are charged for each hour or partial hour that an application load balancer is running, and the number of load balancer capacity units (LCUs) used per hour. With Amazon API Gateway, you only pay when your APIs are in use. https://aws.amazon.com/blogs/containers/integrate-amazon-api-gateway-with-amazon-eks/
608# A company uses AWS and sells access to copyrighted images. The company’s global customer base needs to be able to access these images quickly. The company must deny access to users from specific countries. The company wants to minimize costs as much as possible. What solution will meet these requirements?
A. Use Amazon S3 to store images. Enable multi-factor authentication (MFA) and public access to the bucket. Provide clients with a link to the S3 bucket.
B. Use Amazon S3 to store images. Create an IAM user for each customer. Add users to a group that has permission to access the S3 bucket.
C. Use the Amazon EC2 instances that are behind the application load balancers (ALBs) to store the images. Deploy instances only to countries where your company serves. Provide customers with links to ALBs for their country-specific instances.
D. Use Amazon S3 to store images. Use Amazon CloudFront to distribute geo-restricted images. Provide a signed URL for each client to access data in CloudFront.
D. Use Amazon S3 to store images. Use Amazon CloudFront to distribute geo-restricted images. Provide a signed URL for each client to access data in CloudFront.
In summary, option D (use Amazon S3 to store the images, use Amazon CloudFront to distribute the geo-restricted images, and provide a signed URL for each client to access the data in CloudFront) is the most appropriate solution to meet the specified requirements.
Amazon S3 for storage: Amazon S3 is used to store the images. It provides scalable, durable, low-latency storage.
Amazon CloudFront for content delivery: CloudFront is used as a content delivery network (CDN) to distribute the images globally. This reduces latency and ensures fast access for customers around the world. Geo restrictions in CloudFront: CloudFront supports geo restrictions, allowing the company to deny access to users from specific countries. This satisfies the requirement of controlling access based on the user's location.
Signed URLs for secure access: Signed URLs are provided to clients for secure access to images. This ensures that only authorized customers can access the content.
Cost Minimization: CloudFront is a cost-effective solution for content delivery, and can significantly reduce data transfer costs by serving content from edge locations close to end users.
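For the signed-URL piece, here is a sketch using botocore's CloudFrontSigner; the key pair ID, private key file, and distribution domain are placeholders, and the geo restriction itself is configured on the CloudFront distribution rather than in this code:

```python
import datetime
from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

KEY_PAIR_ID = "K2JCJMDEHXQW5F"  # placeholder CloudFront public key ID

def rsa_signer(message: bytes) -> bytes:
    # Sign with the private key that matches the CloudFront public key.
    with open("cloudfront_private_key.pem", "rb") as f:
        key = serialization.load_pem_private_key(f.read(), password=None)
    return key.sign(message, padding.PKCS1v15(), hashes.SHA1())

signer = CloudFrontSigner(KEY_PAIR_ID, rsa_signer)

# URL valid for one hour for a single image.
signed_url = signer.generate_presigned_url(
    "https://d111111abcdef8.cloudfront.net/images/photo-0001.jpg",
    date_less_than=datetime.datetime.utcnow() + datetime.timedelta(hours=1),
)
print(signed_url)
```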
609# A solutions architect is designing a highly available solution based on Amazon ElastiCache for Redis. The solutions architect must ensure that failures do not result in performance degradation or data loss locally and within an AWS Region. The solution must provide high availability at the node level and at the region level. What solution will meet these requirements?
A. Use Multi-AZ Redis replication groups with shards containing multiple nodes.
B. Use Redis shards containing multiple nodes with Redis append-only files (AOF) enabled.
C. Use a Multi-AZ Redis cluster with more than one read replica in the replication group.
D. Use Redis shards containing multiple nodes with auto-scaling enabled.
A. Use Multi-AZ Redis replication groups with shards containing multiple nodes.
In summary, option A (Use Multi-AZ Redis Replication Groups with shards containing multiple nodes) is the most appropriate option to achieve high availability at both the node level and the AWS Region level in Amazon ElastiCache for Redis.
Multi-AZ Redis Replication Groups: Amazon ElastiCache provides Multi-AZ support for Redis, allowing the creation of replication groups that span multiple availability zones (AZs) within a region. This guarantees high availability at a regional level.
Shards with multiple nodes: Shards within the replication group can contain multiple nodes, providing scalability and redundancy at the node level. This contributes to high availability and performance.
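A boto3 sketch of such a deployment (identifiers and node type are placeholders): a cluster-mode Redis replication group with two shards, two replicas per shard, automatic failover, and Multi-AZ enabled:

```python
import boto3

elasticache = boto3.client("elasticache")

# Multi-AZ, sharded Redis replication group with automatic failover.
elasticache.create_replication_group(
    ReplicationGroupId="orders-cache",
    ReplicationGroupDescription="HA Redis with Multi-AZ shards",
    Engine="redis",
    CacheNodeType="cache.r6g.large",
    NumNodeGroups=2,           # shards
    ReplicasPerNodeGroup=2,    # replicas per shard, spread across AZs
    AutomaticFailoverEnabled=True,
    MultiAZEnabled=True,
)
```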
610# A company plans to migrate to AWS and use Amazon EC2 on-demand instances for its application. During the migration testing phase, a technical team observes that the application takes a long time to start and load memory to be fully productive. Which solution will reduce the app launch time during the next testing phase?
A. Start two or more EC2 instances on demand. Enable auto-scaling features and make EC2 on-demand instances available during the next testing phase.
B. Start EC2 Spot Instances to support the application and scale the application to be available during the next testing phase.
C. Start the EC2 on-demand instances with hibernation enabled. Configure EC2 Auto Scaling warm pools during the next testing phase.
D. Start EC2 on-demand instances with capacity reservations. Start additional EC2 instances during the next testing phase.
C. Start the EC2 on-demand instances with hibernation enabled. Configure EC2 Auto Scaling warm pools during the next testing phase.
In summary, option C (start EC2 on-demand instances with hibernation enabled and configure EC2 Auto Scaling warm pools during the next testing phase) addresses the problem of reducing launch time by using hibernation and maintaining warm pools of pre-initialized instances for a faster response.
EC2 On-Demand Instances with Hibernation: Hibernation allows EC2 instances to persist their in-memory state to Amazon EBS. When an instance is hibernated, it can quickly resume with its previous memory state intact. This is particularly useful for reducing startup time and loading memory quickly.
EC2 Auto Scaling Warm Pools: A warm pool keeps a set of pre-initialized instances (stopped, hibernated, or running) alongside the Auto Scaling group, even when demand is low. Warm pool instances are held in a state from which they can enter service quickly when demand increases. This helps reduce the time it takes for an instance to become fully productive.
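As a sketch of the moving parts, with placeholder identifiers (hibernation also requires a supported instance type and an encrypted root volume):

```python
import boto3

ec2 = boto3.client("ec2")
autoscaling = boto3.client("autoscaling")

# Launch an instance with hibernation enabled.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    HibernationOptions={"Configured": True},
)

# Keep pre-initialized, hibernated instances in a warm pool so the Auto Scaling
# group can bring them into service quickly during the next testing phase.
autoscaling.put_warm_pool(
    AutoScalingGroupName="app-asg",
    MinSize=2,
    PoolState="Hibernated",
)
```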
611# An enterprise’s applications run on Amazon EC2 instances in auto-scaling groups. The company notices that its apps experience traffic spikes on random days of the week. The company wants to maintain application performance during traffic surges. Which solution will meet these requirements in the MOST cost-effective way?
A. Use manual scaling to change the size of the auto-scaling group.
B. Use predictive scaling to change the size of the Auto Scaling group.
C. Use dynamic scaling to change the size of the Auto Scaling group.
D. Use scheduled scaling to change the size of the Auto Scaling group.
C. Use dynamic scaling to change the size of the Auto Scaling group.
Dynamic scaling: Dynamic scaling adjusts the size of the Auto Scaling group in response to changing demand. It allows the Auto Scaling group to automatically increase or decrease the number of instances based on defined policies, scaling out or in as needed. This makes it well suited to handling traffic surges on random days, because no schedule or forecast is required.
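For example, a target tracking policy (one form of dynamic scaling) in boto3; the group name and target value are placeholders:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Keep average CPU around 50%: add instances during surges, remove them
# automatically when traffic drops.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="cpu-target-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```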
612# An e-commerce application uses a PostgreSQL database running on an Amazon EC2 instance. During a monthly sales event, database usage increases and causes database connection issues for the application. Traffic is unpredictable for subsequent monthly sales events, which affects sales forecasting. The business needs to maintain performance when there is an unpredictable increase in traffic. Which solution solves this problem in the MOST cost-effective way?
A. Migrate the PostgreSQL database to Amazon Aurora Serverless v2.
B. Enable PostgreSQL database auto-scaling on the EC2 instance to accommodate increased usage.
C. Migrate the PostgreSQL database to Amazon RDS for PostgreSQL with a larger instance type.
D. Migrate the PostgreSQL database to Amazon Redshift to accommodate increased usage.
A. Migrate the PostgreSQL database to Amazon Aurora Serverless v2.
Amazon Aurora Serverless v2: Aurora Serverless v2 is designed for variable and unpredictable workloads. Automatically adjusts database capacity based on actual usage, allowing you to scale down during low demand periods and scale up during peak periods. This ensures that the application can handle increased traffic without manual intervention.
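A boto3 sketch of option A with placeholder identifiers; Serverless v2 capacity is expressed in ACUs and scales automatically between the configured bounds:

```python
import boto3

rds = boto3.client("rds")

# Aurora PostgreSQL cluster with Serverless v2 scaling.
rds.create_db_cluster(
    DBClusterIdentifier="ecommerce-aurora",
    Engine="aurora-postgresql",
    MasterUsername="appadmin",
    ManageMasterUserPassword=True,  # let RDS/Secrets Manager handle the password
    ServerlessV2ScalingConfiguration={"MinCapacity": 0.5, "MaxCapacity": 16},
)

# Serverless v2 instances use the special db.serverless instance class.
rds.create_db_instance(
    DBInstanceIdentifier="ecommerce-aurora-writer",
    DBClusterIdentifier="ecommerce-aurora",
    DBInstanceClass="db.serverless",
    Engine="aurora-postgresql",
)
```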