AWS Certified Solutions Architect Associate Practice Test 5 Flashcards
A company has a running m5ad.large EC2 instance with a default attached 75 GB SSD instance store-backed volume. You shut the instance down and then start it again. You notice that the data you saved earlier on the attached volume is no longer available.
What might be the cause of this?
A. The EC2 instance was using EBS-backed root volumes, which are ephemeral and only live for the life of the instance
B. The instance was hit by a virus that wipes out all data
C. The volume of the instance was not big enough to handle all of the processing data
D. The EC2 instance was using instance store volumes, which are ephemeral and only live for the life of the instance
D. The EC2 instance was using instance store volumes, which are ephemeral and only live for the life of the instance
Explanation:
An instance store provides temporary block-level storage for your instance. This storage is located on disks that are physically attached to the host computer. Instance store is ideal for temporary storage of information that changes frequently, such as buffers, caches, scratch data, and other temporary content, or for data that is replicated across a fleet of instances, such as a load-balanced pool of web servers.
An instance store consists of one or more instance store volumes exposed as block devices. The size of an instance store as well as the number of devices available varies by instance type. While an instance store is dedicated to a particular instance, the disk subsystem is shared among instances on a host computer.
The data in an instance store persists only during the lifetime of its associated instance. If an instance reboots (intentionally or unintentionally), data in the instance store persists. However, data in the instance store is lost under the following circumstances:
- The underlying disk drive fails
- The instance stops
- The instance terminates
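For reference, here is a minimal boto3 sketch (hypothetical instance ID) that confirms both facts programmatically: the instance store is a property of the m5ad.large instance type itself, while the root device type tells you whether the root volume is EBS-backed:

```python
import boto3

ec2 = boto3.client("ec2")

# Instance store volumes are a property of the instance *type*, not something
# you attach yourself. m5ad.large ships with 1 x 75 GB NVMe SSD.
info = ec2.describe_instance_types(InstanceTypes=["m5ad.large"])
storage = info["InstanceTypes"][0]["InstanceStorageInfo"]
print(storage["TotalSizeInGB"], storage["Disks"])  # 75, [{'SizeInGB': 75, ...}]

# For a running instance, the root device type shows whether the root volume
# survives a stop/start. For m5ad.large it is 'ebs'; data on the attached NVMe
# instance store is still lost when the instance stops.
resp = ec2.describe_instances(InstanceIds=["i-0123456789abcdef0"])  # hypothetical ID
instance = resp["Reservations"][0]["Instances"][0]
print(instance["RootDeviceType"])
```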
A company plans to use Route 53 instead of an ELB to load balance the incoming requests to the web application. The system is deployed to two EC2 instances to which the traffic needs to be distributed. You want to set a specific percentage of traffic to go to each instance.
Which routing policy would you use?
A. Geolocation
B. Weighted
C. Latency
D. Failover
B. Weighted
Explanation:
Weighted routing lets you associate multiple resources with a single domain name (tutorialsdojo.com) or subdomain name (portal.tutorialsdojo.com) and choose how much traffic is routed to each resource. This can be useful for a variety of purposes including load balancing and testing new versions of software. You can set a specific percentage of how much traffic will be allocated to the resource by specifying the weights.
For example, if you want to send a tiny portion of your traffic to one resource and the rest to another resource, you might specify weights of 1 and 255. The resource with a weight of 1 gets 1/256th of the traffic (1/(1+255)), and the other resource gets 255/256ths (255/(1+255)).
You can gradually change the balance by changing the weights. If you want to stop sending traffic to a resource, you can change the weight for that record to 0.
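As an illustration, here is a minimal boto3 sketch (hypothetical hosted zone ID and IP addresses) that creates the two weighted records from the 1-and-255 example above:

```python
import boto3

route53 = boto3.client("route53")

# Two weighted A records for the same name; Route 53 splits traffic in
# proportion to the weights (1/256 vs 255/256, as described above).
for ip, weight, set_id in [("203.0.113.10", 1, "canary"), ("203.0.113.20", 255, "primary")]:
    route53.change_resource_record_sets(
        HostedZoneId="Z1234567890ABC",  # hypothetical hosted zone ID
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "portal.tutorialsdojo.com",
                    "Type": "A",
                    "SetIdentifier": set_id,  # distinguishes the weighted records
                    "Weight": weight,
                    "TTL": 60,
                    "ResourceRecords": [{"Value": ip}],
                },
            }]
        },
    )
```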
Hence, the correct answer is Weighted.
Latency is incorrect because you cannot set a specific percentage of traffic for the 2 EC2 instances with this routing policy. Latency routing policy is primarily used when you have resources in multiple AWS Regions and if you need to automatically route traffic to a specific AWS Region that provides the best latency with less round-trip time.
Failover is incorrect because this type is commonly used if you want to set up an active-passive failover configuration for your web application.
Geolocation is incorrect because this is more suitable for routing traffic based on the location of your users, and not for distributing a specific percentage of traffic to two AWS resources.
In a startup company you are working for, you are asked to design a web application that requires a NoSQL database that has no limit on the storage size for a given table. The startup is still new in the market, and it has very limited human resources to take care of the database infrastructure.
Which is the most suitable service that you can implement that provides a fully managed, scalable and highly available NoSQL service?
A. DynamoDB
B. SimpleDB
C. Amazon Neptune
D. Amazon Aurora
A. DynamoDB
Explanation:
The term “fully managed” means that Amazon will manage the underlying infrastructure of the service; hence, you don’t need additional human resources to support or maintain the service. Therefore, Amazon DynamoDB is the right answer. Remember that Amazon RDS is a managed service but not “fully managed” as you still have the option to maintain and configure the underlying server of the database.
Amazon DynamoDB is a fast and flexible NoSQL database service for all applications that need consistent, single-digit millisecond latency at any scale. It is a fully managed cloud database and supports both document and key-value store models. Its flexible data model, reliable performance, and automatic scaling of throughput capacity make it a great fit for mobile, web, gaming, ad tech, IoT, and many other applications.
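For illustration, a minimal boto3 sketch (hypothetical table name) that creates a DynamoDB table in on-demand mode, where AWS handles all capacity and infrastructure concerns:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# On-demand (PAY_PER_REQUEST) mode: no capacity planning, no servers to
# manage, and no practical limit on table size.
dynamodb.create_table(
    TableName="Orders",  # hypothetical table name
    AttributeDefinitions=[{"AttributeName": "OrderId", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "OrderId", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)
```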
Amazon Neptune is incorrect because this is primarily used as a graph database.
Amazon Aurora is incorrect because this is a relational database and not a NoSQL database.
SimpleDB is incorrect. Although SimpleDB is also a highly available and scalable NoSQL database, it has a limit on the request capacity or storage size for a given table, unlike DynamoDB.
An operations team has an application running on EC2 instances inside two custom VPCs. The VPCs are located in the Ohio and N. Virginia Regions, respectively. The team wants to transfer data between the instances without traversing the public internet.
Which combination of steps will achieve this? (Select TWO.)
A. Launch a NAT Gateway in the public subnet of each VPC
B. Deploy a VPC endpoint on each region to enable a private connection
C. Set up a VPC peering connection between the VPCs
D. Re-configure the route table’s target and destination of the instances’ subnet
E. Create an Egress-Only Internet Gateway
C. Set up a VPC peering connection between the VPCs
D. Re-configure the route table’s target and destination of the instances’ subnet
Explanation:
A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IPv4 addresses or IPv6 addresses. Instances in either VPC can communicate with each other as if they are within the same network. You can create a VPC peering connection between your own VPCs, or with a VPC in another AWS account. The VPCs can be in different regions (also known as an inter-region VPC peering connection).
Inter-Region VPC Peering provides a simple and cost-effective way to share resources between regions or replicate data for geographic redundancy. Built on the same horizontally scaled, redundant, and highly available technology that powers VPC today, Inter-Region VPC Peering encrypts inter-region traffic with no single point of failure or bandwidth bottleneck. Traffic using Inter-Region VPC Peering always stays on the global AWS backbone and never traverses the public internet, thereby reducing threat vectors, such as common exploits and DDoS attacks.
Hence, the correct answers are:
- Set up a VPC peering connection between the VPCs.
- Re-configure the route table’s target and destination of the instances’ subnet.
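For reference, a minimal boto3 sketch of both steps, with hypothetical VPC, route table, and CIDR values (Ohio is us-east-2 and N. Virginia is us-east-1):

```python
import boto3

# Requester side in Ohio; the peer VPC is in N. Virginia.
ec2_ohio = boto3.client("ec2", region_name="us-east-2")
peering = ec2_ohio.create_vpc_peering_connection(
    VpcId="vpc-0a1b2c3d4e5f67890",      # hypothetical Ohio VPC ID
    PeerVpcId="vpc-0f9e8d7c6b5a43210",  # hypothetical N. Virginia VPC ID
    PeerRegion="us-east-1",
)
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# Accept the connection from the peer region.
boto3.client("ec2", region_name="us-east-1").accept_vpc_peering_connection(
    VpcPeeringConnectionId=pcx_id
)

# Route the peer VPC's CIDR through the peering connection; the mirror route
# must also be added in the N. Virginia VPC's route table.
ec2_ohio.create_route(
    RouteTableId="rtb-0a1b2c3d4e5f67890",  # route table of the instances' subnet
    DestinationCidrBlock="10.1.0.0/16",    # hypothetical peer VPC CIDR
    VpcPeeringConnectionId=pcx_id,
)
```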
The option that says: Create an Egress-Only Internet Gateway is incorrect because this will just enable outbound IPv6 communication from instances in a VPC to the internet. Take note that the scenario requires private communication to be enabled between VPCs from two different regions.
The option that says: Launch a NAT Gateway in the public subnet of each VPC is incorrect because NAT Gateways are used to allow instances in private subnets to access the public internet. Note that the requirement is to make sure that communication between instances will not traverse the internet.
The option that says: Deploy a VPC endpoint on each region to enable private connection is incorrect. VPC endpoints are region-specific only and do not support inter-region communication.
A web application is hosted in an Auto Scaling group of EC2 instances in AWS. The application receives a burst of traffic every morning, and a lot of users are complaining about request timeouts. The EC2 instance takes 1 minute to boot up before it can respond to user requests. The cloud architecture must be redesigned to better respond to the changing traffic of the application.
How should the Solutions Architect redesign the architecture?
A. Create a CloudFront distribution and set the EC2 instance as the origin
B. Create a new launch template and upgrade the size of the instance
C. Create a Network Load Balancer with slow start mode
D. Create a step scaling policy and configure an instance warm-up time condition
D. Create a step scaling policy and configure an instance warm-up time condition
Explanation:
Amazon EC2 Auto Scaling helps you maintain application availability and allows you to automatically add or remove EC2 instances according to conditions you define. You can use the fleet management features of EC2 Auto Scaling to maintain the health and availability of your fleet. You can also use the dynamic and predictive scaling features of EC2 Auto Scaling to add or remove EC2 instances. Dynamic scaling responds to changing demand and predictive scaling automatically schedules the right number of EC2 instances based on predicted demand. Dynamic scaling and predictive scaling can be used together to scale faster.
Step scaling applies “step adjustments” which means you can set multiple actions to vary the scaling depending on the size of the alarm breach. When you create a step scaling policy, you can also specify the number of seconds that it takes for a newly launched instance to warm up.
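For illustration, a minimal boto3 sketch (hypothetical group name and step thresholds) of a step scaling policy with a 60-second warm-up to match the instance boot time:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Step intervals are relative to the CloudWatch alarm threshold that
# triggers this policy: a small breach adds 1 instance, a large one adds 3.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",  # hypothetical group name
    PolicyName="scale-out-on-traffic",
    PolicyType="StepScaling",
    AdjustmentType="ChangeInCapacity",
    EstimatedInstanceWarmup=60,  # matches the ~1 minute boot time
    StepAdjustments=[
        {"MetricIntervalLowerBound": 0, "MetricIntervalUpperBound": 20, "ScalingAdjustment": 1},
        {"MetricIntervalLowerBound": 20, "ScalingAdjustment": 3},
    ],
)
```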
Hence, the correct answer is: Create a step scaling policy and configure an instance warm-up time condition.
The option that says: Create a Network Load Balancer with slow start mode is incorrect because Network Load Balancer does not support slow start mode. If you need to enable slow start mode, you should use Application Load Balancer.
The option that says: Create a new launch template and upgrade the size of the instance is incorrect because a larger instance does not always improve the boot time. Instead of upgrading the instance, you should create a step scaling policy and add a warm-up time.
The option that says: Create a CloudFront distribution and set the EC2 instance as the origin is incorrect because this approach only resolves the traffic latency. Take note that the requirement in the scenario is to resolve the timeout issue and not the traffic latency.
A financial company wants to store their data in Amazon S3, but at the same time, they want to store their frequently accessed data locally on their on-premises server. They do not have the option to extend their on-premises storage, which is why they are looking for a durable and scalable storage service to use in AWS.
What is the best solution for this scenario?
A. Use AWS Storage Gateway - Cached Volumes
B. Use a fleet of EC2 instances with EBS volumes to store the commonly used data
C. Use both ElastiCache and S3 for frequently accessed data
D. Use Amazon Glacier
A. Use AWS Storage Gateway - Cached Volumes
Explanation:
By using Cached volumes, you store your data in Amazon Simple Storage Service (Amazon S3) and retain a copy of frequently accessed data subsets locally in your on-premises network. Cached volumes offer substantial cost savings on primary storage and minimize the need to scale your storage on-premises. You also retain low-latency access to your frequently accessed data. This is the best solution for this scenario.
Using a fleet of EC2 instances with EBS volumes to store the commonly used data is incorrect because an EC2 instance is not a storage service and it does not provide the required durability and scalability.
Using both ElastiCache and S3 for frequently accessed data is incorrect as this is not efficient. Moreover, the question explicitly said that the frequently accessed data should be stored locally on their on-premises server and not on AWS.
Using Amazon Glacier is incorrect as this is mainly used for data archiving.
A healthcare company stores sensitive patient health records in their on-premises storage systems. These records must be kept indefinitely and protected from any type of modifications once they are stored. Compliance regulations mandate that the records must have granular access control and each data access must be audited at all levels. Currently, there are millions of obsolete records that are not accessed by their web application, and their on-premises storage is quickly running out of space. The Solutions Architect must design a solution to immediately move existing records to AWS and support the ever-growing number of new health records.
Which of the following is the most suitable solution that the Solutions Architect should implement to meet the above requirements?
A. Set up AWS DataSync to move the existing health records from the on-premises network to the AWS Cloud. Launch a new Amazon S3 bucket to store existing and new records. Enable AWS CloudTrail with Data Events and Amazon S3 Object Lock in the bucket
B. Set up AWS DataSync to move the existing health records from the on-premises network to the AWS Cloud. Launch a new Amazon S3 bucket to store existing and new records. Enable AWS CloudTrail with Management Events and Amazon S3 Object Lock in the bucket
C. Set up AWS Storage Gateway to move the existing health records from the on-premises network to the AWS Cloud. Launch a new Amazon S3 bucket to store existing and new records. Enable AWS CloudTrail with Management Events and Amazon S3 Object Lock in the bucket
D. Set up AWS Storage Gateway to move the existing health records from the on-premises network to the AWS Cloud. Launch an Amazon EBS-backed EC2 instance to store both the existing and new records. Enable Amazon S3 server access logging and S3 Object Lock in the bucket
A. Set up AWS DataSync to move the existing health records from the on-premises network to the AWS Cloud. Launch a new Amazon S3 bucket to store existing and new records. Enable AWS CloudTrail with Data Events and Amazon S3 Object Lock in the bucket
Explanation:
AWS Storage Gateway is a set of hybrid cloud services that gives you on-premises access to virtually unlimited cloud storage. Customers use Storage Gateway to integrate AWS Cloud storage with existing on-site workloads so they can simplify storage management and reduce costs for key hybrid cloud storage use cases. These include moving backups to the cloud, using on-premises file shares backed by cloud storage, and providing low latency access to data in AWS for on-premises applications.
AWS DataSync is an online data transfer service that simplifies, automates, and accelerates moving data between on-premises storage systems and AWS Storage services, as well as between AWS Storage services. You can use DataSync to migrate active datasets to AWS, archive data to free up on-premises storage capacity, replicate data to AWS for business continuity, or transfer data to the cloud for analysis and processing.
Both AWS Storage Gateway and AWS DataSync can send data from your on-premises data center to AWS and vice versa. However, AWS Storage Gateway is more suitable to be used in integrating your storage services by replicating your data while AWS DataSync is better for workloads that require you to move or migrate your data.
You can also use a combination of DataSync and File Gateway to minimize your on-premises infrastructure while seamlessly connecting on-premises applications to your cloud storage. AWS DataSync enables you to automate and accelerate online data transfers to AWS storage services. File Gateway is a fully managed solution that will automate and accelerate the replication of data between the on-premises storage systems and AWS storage services.
AWS CloudTrail is an AWS service that helps you enable governance, compliance, and operational and risk auditing of your AWS account. Actions taken by a user, role, or an AWS service are recorded as events in CloudTrail. Events include actions taken in the AWS Management Console, AWS Command Line Interface, and AWS SDKs and APIs.
There are two types of events that you configure your CloudTrail for:
- Management Events
- Data Events
Management Events provide visibility into management operations that are performed on resources in your AWS account. These are also known as control plane operations. Management events can also include non-API events that occur in your account.
Data Events, on the other hand, provide visibility into the resource operations performed on or within a resource. These are also known as data plane operations. It allows granular control of data event logging with advanced event selectors. You can currently log data events on different resource types such as Amazon S3 object-level API activity (e.g. GetObject, DeleteObject, and PutObject API operations), AWS Lambda function execution activity (the Invoke API), DynamoDB Item actions, and many more.
With S3 Object Lock, you can store objects using a write-once-read-many (WORM) model. Object Lock can help prevent objects from being deleted or overwritten for a fixed amount of time or indefinitely. You can use Object Lock to help meet regulatory requirements that require WORM storage or to simply add another layer of protection against object changes and deletion.
You can record the actions that are taken by users, roles, or AWS services on Amazon S3 resources and maintain log records for auditing and compliance purposes. To do this, you can use server access logging, AWS CloudTrail logging, or a combination of both. AWS recommends that you use AWS CloudTrail for logging bucket and object-level actions for your Amazon S3 resources.
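For reference, a minimal boto3 sketch (hypothetical bucket and trail names) that enables S3 Object Lock and CloudTrail data events, the two immutability and auditing controls described above:

```python
import boto3

s3 = boto3.client("s3")

# Object Lock must be enabled when the bucket is created.
s3.create_bucket(
    Bucket="health-records-bucket",  # hypothetical bucket name
    ObjectLockEnabledForBucket=True,
)
# COMPLIANCE mode prevents any modification or deletion during retention.
s3.put_object_lock_configuration(
    Bucket="health-records-bucket",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 10}},
    },
)

# CloudTrail data events capture object-level API activity on the bucket.
cloudtrail = boto3.client("cloudtrail")
cloudtrail.put_event_selectors(
    TrailName="records-audit-trail",  # hypothetical existing trail
    EventSelectors=[{
        "ReadWriteType": "All",
        "IncludeManagementEvents": True,
        "DataResources": [{
            "Type": "AWS::S3::Object",
            "Values": ["arn:aws:s3:::health-records-bucket/"],
        }],
    }],
)
```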
Hence, the correct answer is: Set up AWS DataSync to move the existing health records from the on-premises network to the AWS Cloud. Launch a new Amazon S3 bucket to store existing and new records. Enable AWS CloudTrail with Data Events and Amazon S3 Object Lock in the bucket.
The option that says: Set up AWS Storage Gateway to move the existing health records from the on-premises network to the AWS Cloud. Launch a new Amazon S3 bucket to store existing and new records. Enable AWS CloudTrail with Management Events and Amazon S3 Object Lock in the bucket is incorrect. The requirement explicitly says that the Solutions Architect must immediately move the existing records to AWS and not integrate or replicate the data. Using AWS DataSync is a more suitable service to use here since the primary objective is to migrate or move data. You also have to use Data Events here and not Management Events in CloudTrail, to properly track all the data access and changes to your objects.
The option that says: Set up AWS Storage Gateway to move the existing health records from the on-premises network to the AWS Cloud. Launch an Amazon EBS-backed EC2 instance to store both the existing and new records. Enable Amazon S3 server access logging and S3 Object Lock in the bucket is incorrect. Just as mentioned in the previous option, using AWS Storage Gateway is not a recommended service to use in this situation since the objective is to move the obsolete data. Moreover, using Amazon EBS to store health records is not a scalable solution compared with Amazon S3. Enabling server access logging can help audit the stored objects. However, it is better to use CloudTrail as it provides more granular access control and tracking.
The option that says: Set up AWS DataSync to move the existing health records from the on-premises network to the AWS Cloud. Launch a new Amazon S3 bucket to store existing and new records. Enable AWS CloudTrail with Management Events and Amazon S3 Object Lock in the bucket is incorrect. Although it is right to use AWS DataSync to move the health records, you still have to configure Data Events in AWS CloudTrail and not Management Events. This type of event only provides visibility into management operations that are performed on resources in your AWS account and not the data events that are happening in the individual objects in Amazon S3.
A company is running a batch job on an EC2 instance inside a private subnet. The instance gathers input data from an S3 bucket in the same region through a NAT Gateway. The company is looking for a solution that will reduce costs without imposing risks on redundancy or availability.
Which solution will accomplish this?
A. Replace the NAT Gateway with a NAT instance hosted on a burstable instance type
B. Re-assign the NAT Gateway to a lower EC2 instance type
C. Deploy a transit gateway to peer connection between the instance and the S3 bucket
D. Remove the NAT Gateway and use a Gateway VPC endpoint to access the S3 bucket from the instance
D. Remove the NAT Gateway and use a Gateway VPC endpoint to access the S3 bucket from the instance
Explanation:
A gateway endpoint is a gateway that you specify in your route table to access Amazon S3 from your VPC over the AWS network. Interface endpoints extend the functionality of gateway endpoints by using private IP addresses to route requests to Amazon S3 from within your VPC, on-premises, or from a different AWS Region. Interface endpoints are compatible with gateway endpoints. If you have an existing gateway endpoint in the VPC, you can use both types of endpoints in the same VPC.
There is no additional charge for using gateway endpoints. However, standard charges for data transfer and resource usage still apply.
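For illustration, a minimal boto3 sketch (hypothetical VPC and route table IDs) that creates the gateway endpoint for S3:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# The listed route tables get a route to S3's prefix list, so traffic from
# the private subnet reaches S3 over the AWS network at no additional charge.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0a1b2c3d4e5f67890",            # hypothetical VPC ID
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0a1b2c3d4e5f67890"],  # route table of the private subnet
)
```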
Hence, the correct answer is: Remove the NAT Gateway and use a Gateway VPC endpoint to access the S3 bucket from the instance.
The option that says: Replace the NAT Gateway with a NAT instance hosted on a burstable instance type is incorrect. This solution may possibly reduce costs, but the availability and redundancy will be compromised.
The option that says: Deploy a Transit Gateway to peer connection between the instance and the S3 bucket is incorrect. Transit Gateway is a service that is specifically used for connecting multiple VPCs through a central hub.
The option that says: Re-assign the NAT Gateway to a lower EC2 instance type is incorrect. NAT Gateways are fully managed resources. You cannot access nor modify the underlying instance that hosts it.
A company has 10 TB of infrequently accessed financial data files that would need to be stored in AWS. The data would be accessed infrequently during specific weeks when they are retrieved for auditing purposes. The retrieval time is not strict as long as it does not exceed 24 hours.
Which of the following would be a secure, durable, and cost-effective solution for this scenario?
A. Upload the data to S3 and set a lifecycle policy to transition data to Glacier after 0 days
B. Upload the data to S3 then use a lifecycle policy to transfer data to S3-IA
C. Upload the data to Amazon FSx for Windows File Server using the Server Message Block (SMB) protocol
D. Upload the data to S3 then use a lifecycle policy to transfer data to S3 One Zone-IA
A. Upload the data to S3 and set a lifecycle policy to transition data to Glacier after 0 days
Explanation:
Glacier is a cost-effective archival solution for large amounts of data. Bulk retrievals are S3 Glacier’s lowest-cost retrieval option, enabling you to retrieve large amounts, even petabytes, of data inexpensively in a day. Bulk retrievals typically complete within 5 – 12 hours. You can specify an absolute or relative time period (including 0 days) after which the specified Amazon S3 objects should be transitioned to Amazon Glacier.
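For reference, a minimal boto3 sketch (hypothetical bucket name) of a lifecycle rule that transitions objects to Glacier after 0 days:

```python
import boto3

s3 = boto3.client("s3")

# Transition everything to Glacier immediately (Days=0); bulk retrievals
# complete well within the 24-hour window the scenario allows.
s3.put_bucket_lifecycle_configuration(
    Bucket="financial-audit-data",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [{
            "ID": "to-glacier-day-0",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # empty prefix applies to all objects
            "Transitions": [{"Days": 0, "StorageClass": "GLACIER"}],
        }]
    },
)
```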
Hence, the correct answer is the option that says: Upload the data to S3 and set a lifecycle policy to transition data to Glacier after 0 days.
Glacier has a management console that you can use to create and delete vaults. However, you cannot directly upload archives to Glacier by using the management console. To upload data such as photos, videos, and other documents, you must either use the AWS CLI or write code to make requests by using either the REST API directly or by using the AWS SDKs.
Take note that uploading data to the S3 Console and setting its storage class to “Glacier” is a different story as the proper way to upload data to Glacier is still via its API or CLI. In this way, you can set up your vaults and configure your retrieval options. If you uploaded your data using the S3 console then it will be managed via S3 even though it is internally using a Glacier storage class.
Uploading the data to S3 then using a lifecycle policy to transfer data to S3-IA is incorrect because using Glacier would be a more cost-effective solution than using S3-IA. Since the required retrieval period should not exceed more than a day, Glacier would be the best choice.
Uploading the data to Amazon FSx for Windows File Server using the Server Message Block (SMB) protocol is incorrect because this option costs more than Amazon Glacier, which is more suitable for storing infrequently accessed data. Amazon FSx for Windows File Server provides fully managed, highly reliable, and scalable file storage that is accessible over the industry-standard Server Message Block (SMB) protocol.
Uploading the data to S3 then using a lifecycle policy to transfer data to S3 One Zone-IA is incorrect because with S3 One Zone-IA, the data will only be stored in a single availability zone and thus, this storage solution is not durable. It also costs more compared to Glacier.
A social media company needs to capture the detailed information of all HTTP requests that went through their public-facing Application Load Balancer every five minutes. The client’s IP address and network latencies must also be tracked. They want to use this data for analyzing traffic patterns and for troubleshooting their Docker applications orchestrated by the Amazon ECS Anywhere service.
Which of the following options meets the customer requirements with the LEAST amount of overhead?
A. Integrate Amazon EventBridge (Amazon CloudWatch Events) metrics on the Application Load Balancer to capture the client IP address. Use Amazon CloudWatch Container Insights to analyze traffic patterns
B. Enable AWS CloudTrail for their Application Load Balancer. Use the AWS CloudTrail Lake to analyze and troubleshoot the application traffic
C. Enable access logs on the Application Load Balancer. Integrate the Amazon ECS cluster with Amazon CloudWatch Application Insights to analyze traffic patterns and simplify troubleshooting
D. Install and run the AWS X-Ray daemon on the Amazon ECS cluster. Use the Amazon CloudWatch ServiceLens to analyze the traffic that goes through the application
C. Enable access logs on the Application Load Balancer. Integrate the Amazon ECS cluster with Amazon CloudWatch Application Insights to analyze traffic patterns and simplify troubleshooting
Explanation:
Amazon CloudWatch Application Insights facilitates observability for your applications and underlying AWS resources. It helps you set up the best monitors for your application resources to continuously analyze data for signs of problems with your applications. Application Insights, which is powered by SageMaker and other AWS technologies, provides automated dashboards that show potential problems with monitored applications, which help you to quickly isolate ongoing issues with your applications and infrastructure. The enhanced visibility into the health of your applications that Application Insights provides helps reduce the “mean time to repair” (MTTR) to troubleshoot your application issues.
When you add your applications to Amazon CloudWatch Application Insights, it scans the resources in the applications and recommends and configures metrics and logs on CloudWatch for application components. Example application components include SQL Server backend databases and Microsoft IIS/Web tiers. Application Insights analyzes metric patterns using historical data to detect anomalies and continuously detects errors and exceptions from your application, operating system, and infrastructure logs. It correlates these observations using a combination of classification algorithms and built-in rules. Then, it automatically creates dashboards that show the relevant observations and problem severity information to help you prioritize your actions.
Elastic Load Balancing provides access logs that capture detailed information about requests sent to your load balancer. Each log contains information such as the time the request was received, the client’s IP address, latencies, request paths, and server responses. You can use these access logs to analyze traffic patterns and troubleshoot issues.
Access logging is an optional feature of Elastic Load Balancing that is disabled by default. After you enable access logging for your load balancer, Elastic Load Balancing captures the logs and stores them in the Amazon S3 bucket that you specify as compressed files. You can disable access logging at any time.
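For illustration, a minimal boto3 sketch (hypothetical load balancer ARN and bucket name) that turns on ALB access logging:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Access logging is off by default; the target bucket's policy must grant
# the ELB log-delivery service permission to write to it.
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:"
                    "loadbalancer/app/social-alb/50dc6c495c0c9188",  # hypothetical ARN
    Attributes=[
        {"Key": "access_logs.s3.enabled", "Value": "true"},
        {"Key": "access_logs.s3.bucket", "Value": "alb-access-logs-bucket"},
        {"Key": "access_logs.s3.prefix", "Value": "social-app"},
    ],
)
```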
Hence, the correct answer is: Enable access logs on the Application Load Balancer. Integrate the Amazon ECS cluster with Amazon CloudWatch Application Insights to analyze traffic patterns and simplify troubleshooting.
The option that says: Enable AWS CloudTrail for their Application Load Balancer. Use the AWS CloudTrail Lake to analyze and troubleshoot the application traffic is incorrect because AWS CloudTrail is primarily used to monitor and record the account activity across your AWS resources and not your web applications. You cannot use CloudTrail to capture the detailed information of all HTTP requests that go through your public-facing Application Load Balancer (ALB). CloudTrail can only track the resource changes made to your ALB, but not the actual IP traffic that goes through it. For this use case, you have to enable the access logs feature instead. In addition, the AWS CloudTrail Lake feature is more suitable for running SQL-based queries on your API events and not for analyzing application traffic.
The option that says: Install and run the AWS X-Ray daemon on the Amazon ECS cluster. Use the Amazon CloudWatch ServiceLens to analyze the traffic that goes through the application is incorrect. Although this solution is possible, this won’t track the client’s IP address since the access log feature in the ALB is not enabled. Take note that the scenario explicitly mentioned that the client’s IP address and network latencies must also be tracked.
The option that says: Integrate Amazon EventBridge (Amazon CloudWatch Events) metrics on the Application Load Balancer to capture the client IP address. Use Amazon CloudWatch Container Insights to analyze traffic patterns is incorrect because Amazon EventBridge doesn’t track the actual traffic to your ALB. It is the Amazon CloudWatch service that monitors the changes to your ALB itself and the actual IP traffic that it distributes to the target groups. The primary function of CloudWatch Container Insights is to collect, aggregate, and summarize metrics and logs from your containerized applications and microservices.
A company plans to design a highly available architecture in AWS. They have two target groups with three EC2 instances each, which are added to an Application Load Balancer. In the security group of the EC2 instances, you have verified that port 80 for HTTP is allowed. However, the instances are still shown as out of service by the load balancer.
What could be the root cause of this issue?
A. The instances are using the wrong AMI
B. The wrong subnet was used in your VPC
C. The wrong instance type was used for the EC2 instance
D. The health check configuration is not properly defined
D. The health check configuration is not properly defined
Explanation:
Since the security group is properly configured, the issue may be caused by a wrong health check configuration in the Target Group.
Your Application Load Balancer periodically sends requests to its registered targets to test their status. These tests are called health checks. Each load balancer node routes requests only to the healthy targets in the enabled Availability Zones for the load balancer. Each load balancer node checks the health of each target, using the health check settings for the target group with which the target is registered. After your target is registered, it must pass one health check to be considered healthy. After each health check is completed, the load balancer node closes the connection that was established for the health check.
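For reference, a minimal boto3 sketch (hypothetical target group ARN) that corrects a common misconfiguration by pointing the health check at the right protocol, port, and path:

```python
import boto3

elbv2 = boto3.client("elbv2")

# A wrong path or port is a frequent reason targets show as out of service
# even though the security group is correct.
elbv2.modify_target_group(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:"
                   "targetgroup/web-tg/73e2d6bc24d8a067",  # hypothetical ARN
    HealthCheckProtocol="HTTP",
    HealthCheckPort="80",
    HealthCheckPath="/",  # must be a path the instances serve with HTTP 200
    HealthyThresholdCount=2,
    UnhealthyThresholdCount=3,
    HealthCheckIntervalSeconds=30,
    HealthCheckTimeoutSeconds=5,
)
```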
A Solutions Architect needs to ensure that all of the AWS resources in Amazon VPC don’t go beyond their respective service limits. The Architect should prepare a system that provides real-time guidance in provisioning resources that adheres to the AWS best practices.
Which of the following is the MOST appropriate service to use to satisfy this task?
A. Amazon Inspector
B. AWS Trusted Advisor
C. AWS Budgets
D. AWS Cost Explorer
B. AWS Trusted Advisor
Explanation:
AWS Trusted Advisor is an online tool that provides you with real-time guidance to help you provision your resources following AWS best practices. It inspects your AWS environment and makes recommendations for saving money, improving system performance and reliability, or closing security gaps.
Whether establishing new workflows, developing applications, or as part of ongoing improvement, take advantage of the recommendations provided by Trusted Advisor on a regular basis to help keep your solutions provisioned optimally.
Trusted Advisor includes an ever-expanding list of checks in the following five categories:
Cost Optimization – recommendations that can potentially save you money by highlighting unused resources and opportunities to reduce your bill.
Security – identification of security settings that could make your AWS solution less secure.
Fault Tolerance – recommendations that help increase the resiliency of your AWS solution by highlighting redundancy shortfalls, current service limits, and over-utilized resources.
Performance – recommendations that can help to improve the speed and responsiveness of your applications.
Service Limits – recommendations that will tell you when service usage is more than 80% of the service limit.
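For illustration, Trusted Advisor checks can also be read programmatically through the AWS Support API (available on Business and Enterprise support plans). A minimal boto3 sketch that lists the Service Limits check results:

```python
import boto3

# The Support API endpoint lives in us-east-1 only.
support = boto3.client("support", region_name="us-east-1")

checks = support.describe_trusted_advisor_checks(language="en")["checks"]
for check in checks:
    if check["category"] == "service_limits":
        result = support.describe_trusted_advisor_check_result(
            checkId=check["id"], language="en"
        )["result"]
        print(check["name"], result["status"])  # 'ok', 'warning', or 'error'
```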
Hence, the correct answer in this scenario is AWS Trusted Advisor.
AWS Cost Explorer is incorrect because this is just a tool that enables you to view and analyze your costs and usage. You can explore your usage and costs using the main graph, the Cost Explorer cost and usage reports, or the Cost Explorer RI reports. It has an easy-to-use interface that lets you visualize, understand, and manage your AWS costs and usage over time.
AWS Budgets is incorrect because it simply gives you the ability to set custom budgets that alert you when your costs or usage exceed (or are forecasted to exceed) your budgeted amount. You can also use AWS Budgets to set reservation utilization or coverage targets and receive alerts when your utilization drops below the threshold you define.
Amazon Inspector is incorrect because it is just an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. Amazon Inspector automatically assesses applications for exposure, vulnerabilities, and deviations from best practices.
An online shopping platform is hosted on an Auto Scaling group of On-Demand EC2 instances with a default Auto Scaling termination policy and no instance protection configured. The system is deployed across three Availability Zones in the US West region (us-west-1) with an Application Load Balancer in front to provide high availability and fault tolerance for the shopping platform. The us-west-1a, us-west-1b, and us-west-1c Availability Zones have 10, 8, and 7 running instances, respectively. Due to the low volume of incoming traffic, the scale-in operation has been triggered.
Which of the following will the Auto Scaling group do to determine which instance to terminate first in this scenario? (Select THREE.)
A. Select the instances with the most recent launch configuration
B. Select the instance that is farthest to the next billing hour
C. Choose the Availability Zone with the most instances, which is the us-west-1a Availability Zone in this scenario
D. Select the instance that is closest to the next billing hour
E. Select the instances with the oldest launch configuration
C. Choose the Availability Zone with the most instances, which is the us-west-1a Availability Zone in this scenario
D. Select the instance that is closest to the next billing hour
E. Select the instances with the oldest launch configuration
Explanation:
The default termination policy is designed to help ensure that your network architecture spans Availability Zones evenly. With the default termination policy, the behavior of the Auto Scaling group is as follows:
- If there are instances in multiple Availability Zones, choose the Availability Zone with the most instances and at least one instance that is not protected from scale in. If there is more than one Availability Zone with this number of instances, choose the Availability Zone with the instances that use the oldest launch configuration.
- Determine which unprotected instances in the selected Availability Zone use the oldest launch configuration. If there is one such instance, terminate it.
- If there are multiple instances to terminate based on the above criteria, determine which unprotected instances are closest to the next billing hour. (This helps you maximize the use of your EC2 instances and manage your Amazon EC2 usage costs.) If there is one such instance, terminate it.
- If there is more than one unprotected instance closest to the next billing hour, choose one of these instances at random.
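For reference, a minimal boto3 sketch (hypothetical names) showing the two knobs involved in this behavior, the termination policy itself and instance scale-in protection:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# The scenario uses the default policy, but it is configurable, e.g.
# TerminationPolicies=["OldestInstance"] to override the default order.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="shop-asg",  # hypothetical group name
    TerminationPolicies=["Default"],
)

# Scale-in protection (disabled in the scenario) excludes specific instances
# from every selection step described above.
autoscaling.set_instance_protection(
    AutoScalingGroupName="shop-asg",
    InstanceIds=["i-0123456789abcdef0"],  # hypothetical instance ID
    ProtectedFromScaleIn=True,
)
```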
A Solutions Architect is working for a fast-growing startup that just started operations during the past 3 months. They currently have an on-premises Active Directory and 10 computers. To save costs in procuring physical workstations, they decided to deploy virtual desktops for their new employees in a virtual private cloud in AWS. The new cloud infrastructure should leverage the existing security controls in AWS but can still communicate with their on-premises network.
Which set of AWS services will the Architect use to meet these requirements?
A. AWS Directory Services, VPN connection, and AWS Identity and Access Management
B. AWS Directory Services, VPN connection, and Amazon WorkSpaces
C. AWS Directory Services, VPN connection, and ClassicLink
D. AWS Directory Services, VPN connection, and Amazon S3
B. AWS Directory Services, VPN connection, and Amazon WorkSpaces
Explanation:
For this scenario, the best answer is: AWS Directory Services, VPN connection, and Amazon WorkSpaces.
First, you need a VPN connection to connect the VPC and your on-premises network. Second, you need AWS Directory Services to integrate with your on-premises Active Directory, and lastly, you need to use Amazon WorkSpaces to create the needed virtual desktops in your VPC.
An application is hosted on an EC2 instance with multiple EBS Volumes attached and uses Amazon Neptune as its database. To improve data security, you encrypted all of the EBS volumes attached to the instance to protect the confidential data stored in the volumes.
Which of the following statements are true about encrypted Amazon Elastic Block Store volumes? (Select TWO.)
A. Only the data in the volume is encrypted and not all the data moving between the volume and the instance
B. The volumes created from the encrypted snapshot are not encrypted
C. Snapshots are not automatically encrypted
D. All data moving between the volume and the instance is encrypted
E. Snapshots are automatically encrypted
D. All data moving between the volume and the instance is encrypted
E. Snapshots are automatically encrypted
Explanation:
Amazon Elastic Block Store (Amazon EBS) provides block-level storage volumes for use with EC2 instances. EBS volumes are highly available and reliable storage volumes that can be attached to any running instance that is in the same Availability Zone. EBS volumes that are attached to an EC2 instance are exposed as storage volumes that persist independently from the life of the instance. When you create an encrypted EBS volume and attach it to a supported instance type, the following types of data are encrypted:
- Data at rest inside the volume
- All data moving between the volume and the instance
- All snapshots created from the volume
- All volumes created from those snapshots
Encryption operations occur on the servers that host EC2 instances, ensuring the security of both data-at-rest and data-in-transit between an instance and its attached EBS storage. You can encrypt both the boot and data volumes of an EC2 instance.
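For illustration, a minimal boto3 sketch that turns on encryption by default for a region and demonstrates that snapshots of an encrypted volume are themselves encrypted automatically:

```python
import boto3

ec2 = boto3.client("ec2")

# Opt every new EBS volume in this region into encryption by default.
ec2.enable_ebs_encryption_by_default()

# Create an encrypted volume; snapshots taken from it are encrypted
# automatically, as are any volumes restored from those snapshots.
volume = ec2.create_volume(AvailabilityZone="us-east-1a", Size=100, Encrypted=True)
snapshot = ec2.create_snapshot(VolumeId=volume["VolumeId"])
print(snapshot["Encrypted"])  # True
```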
A company has a web-based ticketing service that utilizes Amazon SQS and a fleet of EC2 instances. The EC2 instances that consume messages from the SQS queue are configured to poll the queue as often as possible to keep end-to-end throughput as high as possible. The Solutions Architect noticed that polling the queue in tight loops is using unnecessary CPU cycles, resulting in increased operational costs due to empty responses.
In this scenario, what should the Solutions Architect do to make the system more cost-effective?
A. Configure Amazon SQS to use short polling by setting the ReceiveMessageWaitTimeSeconds to a number greater than zero
B. Configure Amazon SQS to use short polling by setting the ReceiveMessageWaitTimeSeconds to zero
C. Configure Amazon SQS to use long polling by setting the ReceiveMessageWaitTimeSeconds to a number greater than zero
D. Configure Amazon SQS to use long polling by setting the ReceiveMessageWaitTimeSeconds to zero
C. Configure Amazon SQS to use long polling by setting the ReceiveMessageWaitTimeSeconds to a number greater than zero
Explanation:
In this scenario, the application is deployed in a fleet of EC2 instances that are polling messages from a single SQS queue. Amazon SQS uses short polling by default, querying only a subset of the servers (based on a weighted random distribution) to determine whether any messages are available for inclusion in the response. Short polling works for scenarios that require higher throughput. However, you can also configure the queue to use Long polling instead, to reduce cost.
The ReceiveMessageWaitTimeSeconds is the queue attribute that determines whether you are using Short or Long polling. By default, its value is zero which means it is using Short polling. If it is set to a value greater than zero, then it is Long polling.
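For reference, a minimal boto3 sketch (hypothetical queue URL) showing both ways to enable long polling, as a queue-level default and per ReceiveMessage call:

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/111122223333/tickets"  # hypothetical

# Queue-level default: any consumer that doesn't override it long-polls.
sqs.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={"ReceiveMessageWaitTimeSeconds": "20"},  # maximum is 20 seconds
)

# Per-request override: the call blocks for up to 20 seconds instead of
# returning empty responses immediately, cutting the number of billable requests.
messages = sqs.receive_message(
    QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20
)
```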
Hence, configuring Amazon SQS to use long polling by setting the ReceiveMessageWaitTimeSeconds to a number greater than zero is the correct answer.
Quick facts about SQS Long Polling:
- Long polling helps reduce your cost of using Amazon SQS by reducing the number of empty responses when there are no messages available to return in reply to a ReceiveMessage request sent to an Amazon SQS queue and eliminating false empty responses when messages are available in the queue but aren’t included in the response.
- Long polling reduces the number of empty responses by allowing Amazon SQS to wait until a message is available in the queue before sending a response. Unless the connection times out, the response to the ReceiveMessage request contains at least one of the available messages, up to the maximum number of messages specified in the ReceiveMessage action.
- Long polling eliminates false empty responses by querying all (rather than a limited number) of the servers. Long polling returns messages as soon as a message becomes available.
A company has an application hosted in an Amazon ECS Cluster behind an Application Load Balancer. The Solutions Architect is building a sophisticated web filtering solution that allows or blocks web requests based on the country that the requests originate from. However, the solution should still allow specific IP addresses from that country.
Which combination of steps should the Architect implement to satisfy this requirement? (Select TWO.)
A. Using AWS WAF, create a web ACL with a rule that explicitly allows requests from approved IP addresses declared in an IP set
B. In the Application Load Balancer, create a listener rule that explicitly allows requests from approved IP addresses
C. Add another rule in the AWS WAF web ACL with a geo match condition that blocks requests that originate from a specific country
D. Place a transit gateway in front of the VPC where the application is hosted and set up Network ACLs that block requests that originate from a specific country
E. Set up a geo match condition in the Application Load Balancer that blocks requests from a specific country
A. Using AWS WAF, create a web ACL with a rule that explicitly allows requests from approved IP addresses declared in an IP set
C. Add another rule in the AWS WAF web ACL with a geo match condition that blocks requests that originate from a specific country
Explanation:
If you want to allow or block web requests based on the country that the requests originate from, create one or more geo-match conditions. A geo match condition lists countries that your requests originate from. Later in the process, when you create a web ACL, you specify whether to allow or block requests from those countries.
You can use geo-match conditions with other AWS WAF Classic conditions or rules to build sophisticated filtering. For example, if you want to block certain countries but still allow specific IP addresses from that country, you could create a rule containing a geo match condition and an IP match condition. Configure the rule to block requests that originate from that country and do not match the approved IP addresses. As another example, if you want to prioritize resources for users in a particular country, you could include a geo-match condition in two different rate-based rules. Set a higher rate limit for users in the preferred country and set a lower rate limit for all other users.
If you are using the CloudFront geo restriction feature to block a country from accessing your content, any request from that country is blocked and is not forwarded to AWS WAF Classic. So if you want to allow or block requests based on geography plus other AWS WAF Classic conditions, you should not use the CloudFront geo restriction feature. Instead, you should use an AWS WAF Classic geo match condition.
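The discussion above uses AWS WAF Classic terminology; in the current AWS WAF (wafv2) API, the same logic is expressed as a single rule that combines a geo match statement with a negated IP set reference. A minimal boto3 sketch with hypothetical names, country code, and addresses:

```python
import boto3

wafv2 = boto3.client("wafv2")

# IP set holding the approved addresses from the otherwise-blocked country.
ip_set = wafv2.create_ip_set(
    Name="approved-ips",           # hypothetical name
    Scope="REGIONAL",              # REGIONAL scope for an ALB association
    IPAddressVersion="IPV4",
    Addresses=["203.0.113.0/24"],  # hypothetical approved range
)["Summary"]

# One rule: block requests that match the country AND are NOT in the IP set.
wafv2.create_web_acl(
    Name="geo-filter",
    Scope="REGIONAL",
    DefaultAction={"Allow": {}},
    VisibilityConfig={"SampledRequestsEnabled": True,
                      "CloudWatchMetricsEnabled": True,
                      "MetricName": "geo-filter"},
    Rules=[{
        "Name": "block-country-except-approved",
        "Priority": 0,
        "Action": {"Block": {}},
        "VisibilityConfig": {"SampledRequestsEnabled": True,
                             "CloudWatchMetricsEnabled": True,
                             "MetricName": "block-country"},
        "Statement": {"AndStatement": {"Statements": [
            {"GeoMatchStatement": {"CountryCodes": ["CN"]}},  # hypothetical country
            {"NotStatement": {"Statement": {
                "IPSetReferenceStatement": {"ARN": ip_set["ARN"]}}}},
        ]}},
    }],
)
```

The resulting web ACL can then be attached to the Application Load Balancer with associate_web_acl.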
Hence, the correct answers are:
- Using AWS WAF, create a web ACL with a rule that explicitly allows requests from approved IP addresses declared in an IP Set.
- Add another rule in the AWS WAF web ACL with a geo match condition that blocks requests that originate from a specific country.
The option that says: In the Application Load Balancer, create a listener rule that explicitly allows requests from approved IP addresses is incorrect because a listener rule just checks for connection requests using the protocol and port that you configure. It only determines how the load balancer routes the requests to its registered targets.
The option that says: Set up a geo match condition in the Application Load Balancer that blocks requests that originate from a specific country is incorrect because you can’t configure a geo match condition in an Application Load Balancer. You have to use AWS WAF instead.
The option that says: Place a Transit Gateway in front of the VPC where the application is hosted and set up Network ACLs that block requests that originate from a specific country is incorrect because AWS Transit Gateway is simply a service that enables customers to connect their Amazon Virtual Private Clouds (VPCs) and their on-premises networks to a single gateway. Using this type of gateway is not warranted in this scenario. Moreover, Network ACLs are not suitable for blocking requests from a specific country. You have to use AWS WAF instead.
A company needs to accelerate the performance of its AI-powered medical diagnostic application by running its machine learning workloads on the edge of telecommunication carriers’ 5G networks. The application must be deployed to a Kubernetes cluster and have role-based access control (RBAC) access to IAM users and roles for cluster authentication.
Which of the following should the Solutions Architect implement to ensure single-digit millisecond latency for the application?
A. Host the application to an Amazon EKS cluster and run the Kubernetes pods on AWS Fargate. Create node groups in AWS Wavelength Zones for the Amazon EKS cluster. Add the EKS pod execution IAM role (AmazonEKSFargatePodExecutionRole) to your cluster and ensure that the Fargate profile has the same IAM role as your Amazon EC2 node groups
B. Launch the application to an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. Create node groups in Wavelength Zones for the Amazon EKS cluster via the AWS Wavelength service. Apply the AWS authenticator configuration map (aws-auth ConfigMap) to your cluster
C. Host the application to an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. Set up node groups in AWS Wavelength Zones for the Amazon EKS cluster. Attach the Amazon EKS connector agent role (AmazonEKSConnectorAgentRole) to your cluster and use AWS Control Tower for RBAC access
D. Launch the application to an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. Create VPC endpoints for the AWS Wavelength Zones and apply them to the Amazon EKS cluster. Install the AWS IAM Authenticator for Kubernetes (aws-iam-authenticator) to your cluster
B. Launch the application to an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. Create node groups in Wavelength Zones for the Amazon EKS cluster via the AWS Wavelength service. Apply the AWS authenticator configuration map (aws-auth ConfigMap) to your cluster
Explanation:
AWS Wavelength combines the high bandwidth and ultralow latency of 5G networks with AWS compute and storage services so that developers can innovate and build a new class of applications.
Wavelength Zones are AWS infrastructure deployments that embed AWS compute and storage services within telecommunications providers’ data centers at the edge of the 5G network, so application traffic can reach application servers running in Wavelength Zones without leaving the mobile providers’ network. This prevents the latency that would result from multiple hops to the internet and enables customers to take full advantage of 5G networks. Wavelength Zones extend AWS to the 5G edge, delivering a consistent developer experience across multiple 5G networks around the world. Wavelength Zones also allow developers to build the next generation of ultra-low latency applications using the same familiar AWS services, APIs, tools, and functionality they already use today.
Amazon EKS uses IAM to provide authentication to your Kubernetes cluster, but it still relies on native Kubernetes Role-Based Access Control (RBAC) for authorization. This means that IAM is only used for the authentication of valid IAM entities. All permissions for interacting with your Amazon EKS cluster’s Kubernetes API are managed through the native Kubernetes RBAC system.
Access to your cluster using AWS Identity and Access Management (IAM) entities is enabled by the AWS IAM Authenticator for Kubernetes, which runs on the Amazon EKS control plane. The authenticator gets its configuration information from the aws-auth ConfigMap (AWS authenticator configuration map).
The aws-auth ConfigMap is automatically created and applied to your cluster when you create a managed node group or when you create a node group using eksctl. It is initially created to allow nodes to join your cluster, but you also use this ConfigMap to add role-based access control (RBAC) access to IAM users and roles.
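For illustration, a minimal sketch using the official Kubernetes Python client (hypothetical role ARN and RBAC group) that adds a role mapping to the aws-auth ConfigMap; note that a patch replaces the whole mapRoles key, so in practice you would merge with the existing entries:

```python
from kubernetes import client, config

# Assumes kubeconfig already points at the EKS cluster
# (e.g. after running `aws eks update-kubeconfig`).
config.load_kube_config()
v1 = client.CoreV1Api()

map_roles = """
- rolearn: arn:aws:iam::111122223333:role/eks-developers  # hypothetical IAM role
  username: developer
  groups:
    - eks-read-only  # hypothetical RBAC group bound by a ClusterRoleBinding
"""

# Patching replaces the mapRoles key in kube-system/aws-auth.
v1.patch_namespaced_config_map(
    name="aws-auth",
    namespace="kube-system",
    body={"data": {"mapRoles": map_roles}},
)
```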
Hence, the correct answer is: Launch the application to an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. Create node groups in Wavelength Zones for the Amazon EKS cluster via the AWS Wavelength service. Apply the AWS authenticator configuration map (aws-auth ConfigMap) to your cluster.
The option that says: Host the application to an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. Set up node groups in AWS Wavelength Zones for the Amazon EKS cluster. Attach the Amazon EKS connector agent role (AmazonEKSConnectorAgentRole) to your cluster and use AWS Control Tower for RBAC access is incorrect. An Amazon EKS connector agent is only used to connect your externally hosted Kubernetes clusters and to allow them to be viewed in your AWS Management Console. AWS Control Tower doesn’t provide RBAC access to your EKS cluster either. This service is commonly used for setting up a secure multi-account AWS environment and not for providing cluster authentication using IAM users and roles.
The option that says: Launch the application to an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. Create VPC endpoints for the AWS Wavelength Zones and apply them to the Amazon EKS cluster. Install the AWS IAM Authenticator for Kubernetes (aws-iam-authenticator) to your cluster is incorrect because you cannot create VPC Endpoints in AWS Wavelength Zones. In addition, it is more appropriate to apply the AWS authenticator configuration map (aws-auth ConfigMap) to your Amazon EKS cluster to enable RBAC access.
The option that says: Host the application to an Amazon EKS cluster and run the Kubernetes pods on AWS Fargate. Create node groups in AWS Wavelength Zones for the Amazon EKS cluster. Add the EKS pod execution IAM role (AmazonEKSFargatePodExecutionRole) to your cluster and ensure that the Fargate profile has the same IAM role as your Amazon EC2 node groups is incorrect. Although this solution is possible, the security configuration of the Amazon EKS control plane is wrong. You have to ensure that the Fargate profile has a different IAM role as your Amazon EC2 node groups and not the other way around.