Neal Davis - Practice Test 3 - Correct Flashcards
Question 1:
A systems administrator of a company wants to detect and remediate the compromise of services such as Amazon EC2 instances and Amazon S3 buckets.
Which AWS service can the administrator use to protect the company against attacks?
A: Amazon GuardDuty
B: Amazon Inspector
C: Amazon Cognito
D: Amazon Macie
Explanation
Amazon GuardDuty gives you access to built-in detection techniques that are developed and optimized for the cloud. The detection algorithms are maintained and continuously improved upon by AWS Security. The primary detection categories include reconnaissance, instance compromise, account compromise, and bucket compromise.
Amazon GuardDuty offers HTTPS APIs, CLI tools, and Amazon CloudWatch Events to support automated security responses to security findings. For example, you can automate the response workflow by using CloudWatch Events as an event source to trigger an AWS Lambda function.
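As an illustration, a rule matching GuardDuty findings can be wired to a Lambda function with boto3. This is a minimal sketch; the rule name and Lambda ARN are hypothetical, and the function would also need a resource-based permission allowing events.amazonaws.com to invoke it.

import json
import boto3

events = boto3.client("events")

# Match all GuardDuty findings published to the default event bus
events.put_rule(
    Name="guardduty-findings-rule",  # hypothetical rule name
    EventPattern=json.dumps({
        "source": ["aws.guardduty"],
        "detail-type": ["GuardDuty Finding"],
    }),
)

# Send matched findings to a remediation Lambda function (hypothetical ARN)
events.put_targets(
    Rule="guardduty-findings-rule",
    Targets=[{
        "Id": "remediate",
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:remediate-finding",
    }],
)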
CORRECT: “Amazon GuardDuty” is the correct answer.
INCORRECT: “Amazon Cognito” is incorrect. Cognito provides sign-up and sign-in services for web and mobile apps.
INCORRECT: “Amazon Inspector” is incorrect. Inspector identifies vulnerabilities and evaluates resources against security best practices. It does not detect compromise.
INCORRECT: “Amazon Macie” is incorrect. Macie is used for detecting and protecting sensitive data that is in Amazon S3.
Question 2:
A company copies 250 TB of data from a recent land survey onto multiple AWS Snowball Edge Storage Optimized devices. The company has a high-performance computing (HPC) cluster that is hosted within AWS to look for items of archaeological interest. A solutions architect must provide the cluster with consistent low latency and high-throughput access to the data which is hosted on the Snowball Edge Storage Optimized devices. The company is sending the devices back to AWS.
Which solution will meet these requirements?
A: Create a bucket in Amazon S3 and import the data into the S3 bucket. Set up an AWS Storage Gateway file gateway to use the S3 bucket and access the file gateway from the HPC cluster instances.
B: Set up an Amazon Elastic File System (Amazon EFS) file system and an Amazon S3 bucket. Upload the data to the S3 bucket. Using the EFS file system, copy the data from the S3 bucket and access the EFS file system from the HPC cluster instances.
C: Set up an Amazon S3 bucket. Configure an Amazon FSx for Lustre file system and integrate it with the S3 bucket after importing the data then access the FSx for Lustre file system from the HPC cluster instances.
D: Create an Amazon FSx for Lustre file system and import the data directly into the FSx for Lustre file system and access the FSx for Lustre file system from the HPC cluster instances.
Explanation
Using an Amazon FSx for Lustre file system is ideal as it is designed for high-performance computing (HPC) workloads. The data from the Snowball Edge devices is imported into Amazon S3, and the native integration between FSx for Lustre and Amazon S3 ensures this solution meets the stated requirements.
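A minimal sketch of that integration with boto3 follows; the subnet ID, bucket name, and capacity are placeholders.

import boto3

fsx = boto3.client("fsx")

# Link a new FSx for Lustre file system to the S3 bucket that received
# the Snowball Edge import; objects are lazy-loaded into the file system.
fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=2400,  # GiB; size for the working set (placeholder)
    SubnetIds=["subnet-0123456789abcdef0"],  # placeholder
    LustreConfiguration={
        "DeploymentType": "SCRATCH_2",
        "ImportPath": "s3://survey-data-bucket",  # placeholder bucket
    },
)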
CORRECT: “Set up an Amazon S3 bucket. Configure an Amazon FSx for Lustre file system and integrate it with the S3 bucket after importing the data then access the FSx for Lustre file system from the HPC cluster instances” is the correct answer (as explained above.)
INCORRECT: “Create a bucket in Amazon S3 and import the data into the S3 bucket. Set up an AWS Storage Gateway file gateway to use the S3 bucket and access the file gateway from the HPC cluster instances” is incorrect. AWS Storage Gateway File Gateway is not designed to allow extremely low latency file systems. It is a hybrid cloud storage service not designed for this application.
INCORRECT: “Set up an Amazon Elastic File System (Amazon EFS) file system and an Amazon S3 bucket. Upload the data to the S3 bucket. Using the EFS file system, copy the data from the S3 bucket and access the EFS file system from the HPC cluster instances” is incorrect. Although this would work, a standard EFS file system would not provide enough performance to meet the application's requirements.
INCORRECT: “Create an Amazon FSx for Lustre file system and import the data directly into the FSx for Lustre file system and access the FSx for Lustre file system from the HPC cluster instances” is incorrect. The data cannot be imported from the Snowball Edge devices directly into an FSx for Lustre file system; it must first be imported into Amazon S3.
Question 3:
A company has deployed an application that consists of several microservices running on Amazon EC2 instances behind an Amazon API Gateway API. A Solutions Architect is concerned that the microservices are not designed to elastically scale when large increases in demand occur.
Which solution addresses this concern?
A: Spread the microservices across multiple Availability Zones and configure Amazon Data Lifecycle Manager to take regular snapshots.
B: Use Amazon CloudWatch alarms to notify operations staff when the microservices are suffering high CPU utilization.
C: Use an Elastic Load Balancer to distribute the traffic between the microservices. Configure Amazon CloudWatch metrics to monitor traffic to the microservices.
D: Create an Amazon SQS queue to store incoming requests. Configure the microservices to retrieve the requests from the queue for processing.
Explanation
The individual microservices are not designed to scale. Therefore, the best way to ensure they are not overwhelmed by requests is to decouple the requests from the microservices. An Amazon SQS queue can be created, and the API Gateway can be configured to add incoming requests to the queue. The microservices can then pick up the requests from the queue when they are ready to process them.
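A minimal consumer sketch with boto3 is shown below; the queue URL is a placeholder and process() is a stand-in for the real microservice logic.

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/requests"  # placeholder

def process(body):
    print("processing", body)  # stand-in for the microservice's work

while True:
    # Long polling (up to 20 seconds) reduces empty responses and cost
    resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        process(msg["Body"])
        # Delete only after successful processing so failures are retried
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])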
CORRECT: “Create an Amazon SQS queue to store incoming requests. Configure the microservices to retrieve the requests from the queue for processing” is the correct answer.
INCORRECT: “Use Amazon CloudWatch alarms to notify operations staff when the microservices are suffering high CPU utilization” is incorrect. This solution requires manual intervention and does not help the application to elastically scale.
INCORRECT: “Spread the microservices across multiple Availability Zones and configure Amazon Data Lifecycle Manager to take regular snapshots” is incorrect. This does not automate the elasticity of the application.
INCORRECT: “Use an Elastic Load Balancer to distribute the traffic between the microservices. Configure Amazon CloudWatch metrics to monitor traffic to the microservices” is incorrect. An ELB cannot spread traffic across many different individual microservices, as requests must be directed to specific microservices. You would need a target group per microservice, and you would still need Auto Scaling to scale the microservices.
Question 4:
A Solutions Architect is designing an application that will run on Amazon EC2 instances. The application will use Amazon S3 for storing image files and an Amazon DynamoDB table for storing customer information. The security team require that traffic between the EC2 instances and AWS services must not traverse the public internet.
How can the Solutions Architect meet the security team’s requirements?
A: Create a virtual private gateway and configure VPC route tables.
B: Create gateway VPC endpoints for Amazon S3 and DynamoDB.
C: Create interface VPC endpoints for Amazon S3 and DynamoDB.
D: Create a NAT gateway in a public subnet and configure route tables.
Explanation
A VPC endpoint enables private connections between your VPC and supported AWS services and VPC endpoint services powered by AWS PrivateLink. A gateway endpoint is used for Amazon S3 and Amazon DynamoDB. You specify a gateway endpoint as a route table target for traffic that is destined for the supported AWS services.
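For example, both gateway endpoints can be created with boto3 as sketched below; the VPC ID, route table ID, and Region are placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

for service in ("s3", "dynamodb"):
    ec2.create_vpc_endpoint(
        VpcEndpointType="Gateway",
        VpcId="vpc-0123456789abcdef0",            # placeholder
        ServiceName=f"com.amazonaws.us-east-1.{service}",
        RouteTableIds=["rtb-0123456789abcdef0"],  # placeholder
    )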
CORRECT: “Create gateway VPC endpoints for Amazon S3 and DynamoDB” is the correct answer.
INCORRECT: “Create a NAT gateway in a public subnet and configure route tables” is incorrect. A NAT gateway is used for enabling internet connectivity for instances in private subnets. Connections will traverse the internet.
INCORRECT: “Create interface VPC endpoints for Amazon S3 and DynamoDB” is incorrect. You should use a gateway VPC endpoint for S3 and DynamoDB.
INCORRECT: “Create a virtual private gateway and configure VPC route tables” is incorrect. Virtual private gateways (VGWs) are used for VPN connections; they do not provide private access to AWS services from a VPC.
Question 6:
A web app allows users to upload images for viewing online. The compute layer that processes the images is behind an Auto Scaling group. The processing layer should be decoupled from the front end and the ASG needs to dynamically adjust based on the number of images being uploaded.
How can this be achieved?
A: Create an Amazon SNS Topic to generate a notification each time an image is uploaded. Have the ASG scale based on the number of SNS messages
B: Create a scheduled policy that scales the ASG at times of expected peak load
C: Create a target tracking policy that keeps the ASG at 70% CPU utilization
D: Create an Amazon SQS queue and custom CloudWatch metric to measure the number of messages in the queue. Configure the ASG to scale based on the number of messages in the queue
Explanation
The best solution is to use Amazon SQS to decouple the front end from the processing compute layer. To do this you can create a custom CloudWatch metric that measures the number of messages in the queue and then configure the ASG to scale using a target tracking policy that tracks a certain value.
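One common pattern is to publish a backlog-per-instance metric and have the ASG track a target value for it. The sketch below uses illustrative names and numbers.

import boto3

cloudwatch = boto3.client("cloudwatch")
autoscaling = boto3.client("autoscaling")

# Publish the custom metric (e.g. from a scheduled job or Lambda function)
cloudwatch.put_metric_data(
    Namespace="ImageApp",  # illustrative namespace
    MetricData=[{"MetricName": "BacklogPerInstance", "Value": 12.0}],
)

# Target tracking policy that keeps the backlog near the target value
autoscaling.put_scaling_policy(
    AutoScalingGroupName="image-processing-asg",  # illustrative name
    PolicyName="backlog-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "MetricName": "BacklogPerInstance",
            "Namespace": "ImageApp",
            "Statistic": "Average",
        },
        "TargetValue": 10.0,  # illustrative acceptable backlog per instance
    },
)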
CORRECT: “Create an Amazon SQS queue and custom CloudWatch metric to measure the number of messages in the queue. Configure the ASG to scale based on the number of messages in the queue” is the correct answer.
INCORRECT: “Create an Amazon SNS Topic to generate a notification each time an image is uploaded. Have the ASG scale based on the number of SNS messages” is incorrect. The Amazon Simple Notification Service (SNS) is used for sending notifications using topics. Amazon SQS is a better solution for this scenario as it provides a decoupling mechanism where the actual images can be stored for processing. SNS does not provide somewhere for the images to be stored.
INCORRECT: “Create a target tracking policy that keeps the ASG at 70% CPU utilization” is incorrect. Using a target tracking policy with the ASG that tracks CPU utilization does not allow scaling based on the number of images being uploaded.
INCORRECT: “Create a scheduled policy that scales the ASG at times of expected peak load” is incorrect. Using a scheduled policy is less dynamic as though you may be able to predict usage patterns, it would be better to adjust dynamically based on actual usage.
Question 7:
A Financial Services company currently stores data in Amazon S3. Each bucket contains items which have different access patterns. The Chief Financial Officer of the organization has noticed a sharp increase in the S3 bill and wants to reduce the S3 spend as quickly as possible.
What is the quickest way to reduce the S3 spend with the LEAST operational overhead?
A: Create a Lambda function to scan your S3 buckets, check which objects are stored in the appropriate buckets, and move them there.
B: Transition the objects to the appropriate storage class by using an S3 Lifecycle configuration.
C: Place all objects in S3 Glacier Instant Retrieval.
D: Automate the move of your S3 objects to the best storage class with AWS Trusted Advisor.
Explanation
An S3 Lifecycle configuration is a set of rules that define actions that Amazon S3 applies to a group of objects. There are two types of actions, and a configuration sketch follows the list:
● Transition actions – These actions define when objects transition to another storage class. For example, you might choose to transition objects to the S3 Standard-IA storage class 30 days after creating them, or archive objects to the S3 Glacier Flexible Retrieval storage class one year after creating them. For more information, see Using Amazon S3 storage classes.
● Expiration actions – These actions define when objects expire. Amazon S3 deletes expired objects on your behalf.
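The sketch below shows what such a configuration could look like with boto3; the bucket name and day counts are illustrative.

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="financial-data-bucket",  # illustrative bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "tiering-rule",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to all objects
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 365, "StorageClass": "GLACIER"},  # Flexible Retrieval
            ],
        }]
    },
)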
CORRECT: “Transition the objects to the appropriate storage class by using an S3 Lifecycle configuration” is the correct answer (as explained above.)
INCORRECT: “Automate the move of your S3 objects to the best storage class with AWS Trusted Advisor” is incorrect. Trusted Advisor does not automatically transfer objects into the most appropriate buckets. You can use Trusted Advisor to review cost optimization options, and check for public access to your buckets but you cannot automatically transition objects.
INCORRECT: “Create a Lambda function to scan your S3 buckets, check which objects are stored in the appropriate buckets, and move them there” is incorrect. You could perhaps build a Lambda function to do this; however, the easiest way is to use an S3 Lifecycle configuration.
INCORRECT: “Place all objects in S3 Glacier Instant Retrieval” is incorrect. It states in the question that each bucket contains items which have different access patterns, so moving every object to a single archive storage class is not suitable.
Question 8:
An application runs on Amazon EC2 instances backed by Amazon EBS volumes and an Amazon RDS database. The application is highly sensitive and security compliance requirements mandate that all personally identifiable information (PII) be encrypted at rest.
Which solution should a Solutions Architect choose to meet this requirement?
A: Configure SSL/TLS encryption using AWS KMS customer master keys (CMKs) to encrypt database volumes.
B: Configure Amazon EBS encryption and Amazon RDS encryption with AWS KMS keys to encrypt instance and database volumes.
C: Deploy AWS CloudHSM, generate encryption keys, and use the customer master key (CMK) to encrypt database volumes.
D: Enable encryption on Amazon RDS during creation. Use Amazon Macie to identify sensitive data.
Explanation
The data must be encrypted at rest on both the EC2 instance’s attached EBS volumes and the RDS database. Both storage locations can be encrypted using AWS KMS keys. With RDS, KMS uses a customer master key (CMK) to encrypt the DB instance, all logs, backups, and snapshots.
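As a sketch, encryption can be requested at creation time for both layers; all identifiers and the key ARN below are placeholders.

import boto3

ec2 = boto3.client("ec2")
rds = boto3.client("rds")

# Encrypted EBS volume using a KMS key
ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,
    Encrypted=True,
    KmsKeyId="arn:aws:kms:us-east-1:123456789012:key/placeholder",
)

# Encrypted RDS instance; encryption must be selected at creation time
rds.create_db_instance(
    DBInstanceIdentifier="app-db",    # placeholder
    DBInstanceClass="db.m5.large",
    Engine="mysql",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",  # placeholder
    StorageEncrypted=True,
    KmsKeyId="arn:aws:kms:us-east-1:123456789012:key/placeholder",
)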
CORRECT: “Configure Amazon EBS encryption and Amazon RDS encryption with AWS KMS keys to encrypt instance and database volumes” is the correct answer.
INCORRECT: “Enable encryption on Amazon RDS during creation. Use Amazon Macie to identify sensitive data” is incorrect. This does not encrypt the EBS volumes attached to the EC2 instance and Macie cannot be used with RDS.
INCORRECT: “Configure SSL/TLS encryption using AWS KMS customer master keys (CMKs) to encrypt database volumes” is incorrect. SSL encryption encrypts data in transit but not at rest.
INCORRECT: “Deploy AWS CloudHSM, generate encryption keys, and use the customer master key (CMK) to encrypt database volumes” is incorrect. CloudHSM is not required for this solution, and we need to encrypt the database volumes and the EBS volumes.
Question 9:
An application is deployed on multiple AWS regions and accessed from around the world. The application exposes static public IP addresses. Some users are experiencing poor performance when accessing the application over the Internet.
What should a solutions architect recommend to reduce internet latency?
A: Set up AWS Global Accelerator and add endpoints
B: Set up an Amazon CloudFront distribution to access an application
C: Set up AWS Direct Connect locations in multiple Regions
D: Set up an Amazon Route 53 geoproximity routing policy to route traffic
Explanation
AWS Global Accelerator is a service in which you create accelerators to improve availability and performance of your applications for local and global users. Global Accelerator directs traffic to optimal endpoints over the AWS global network. This improves the availability and performance of your internet applications that are used by a global audience. Global Accelerator is a global service that supports endpoints in multiple AWS Regions, which are listed in the AWS Region Table.
By default, Global Accelerator provides you with two static IP addresses that you associate with your accelerator. (Or, instead of using the IP addresses that Global Accelerator provides, you can configure these entry points to be IPv4 addresses from your own IP address ranges that you bring to Global Accelerator.)
The static IP addresses are anycast from the AWS edge network and distribute incoming application traffic across multiple endpoint resources in multiple AWS Regions, which increases the availability of your applications. Endpoints can be Network Load Balancers, Application Load Balancers, EC2 instances, or Elastic IP addresses that are located in one AWS Region or multiple Regions.
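A sketch of this setup with boto3 follows; the endpoint ARN is a placeholder, and note that the Global Accelerator API is served from the us-west-2 Region.

import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

accel = ga.create_accelerator(Name="global-app", IpAddressType="IPV4", Enabled=True)

listener = ga.create_listener(
    AcceleratorArn=accel["Accelerator"]["AcceleratorArn"],
    Protocol="TCP",
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
)

# One endpoint group per Region; the load balancer ARN is a placeholder
ga.create_endpoint_group(
    ListenerArn=listener["Listener"]["ListenerArn"],
    EndpointGroupRegion="eu-west-1",
    EndpointConfigurations=[{
        "EndpointId": "arn:aws:elasticloadbalancing:eu-west-1:123456789012:loadbalancer/net/app/abc123",
    }],
)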
CORRECT: “Set up AWS Global Accelerator and add endpoints” is the correct answer.
INCORRECT: “Set up AWS Direct Connect locations in multiple Regions” is incorrect as this is used to connect from an on-premises data center to AWS. It does not improve performance for users who are not connected to the on-premises data center.
INCORRECT: “Set up an Amazon CloudFront distribution to access an application” is incorrect as CloudFront cannot expose static public IP addresses.
INCORRECT: “Set up an Amazon Route 53 geoproximity routing policy to route traffic” is incorrect as this does not reduce internet latency as well as using Global Accelerator. GA will direct users to the closest edge location and then use the AWS global network.
Question 10:
A company has some statistical data stored in an Amazon RDS database. The company wants to allow users to access this information using an API. A solutions architect must create a solution that allows sporadic access to the data, ranging from no requests to large bursts of traffic.
Which solution should the solutions architect suggest?
A: Set up an Amazon API Gateway and use Amazon ECS
B: Set up an Amazon API Gateway and use AWS Elastic Beanstalk
C: Set up an Amazon API Gateway and use AWS Lambda functions
D: Set up an Amazon API Gateway and use Amazon EC2 with Auto Scaling
Explanation
This question is simply asking you to work out the best compute service for the stated requirements. The key requirements are that the compute service should be suitable for a workload that can range quite broadly in demand from no requests to large bursts of traffic.
AWS Lambda is an ideal solution as you pay only when requests are made and it can easily scale to accommodate the large bursts in traffic. Lambda works well with both API Gateway and Amazon RDS.
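A minimal handler sketch for an API Gateway proxy integration is shown below; the statistics lookup is stubbed out, as the real function would query the RDS database.

import json

def lambda_handler(event, context):
    # Placeholder for a query against the RDS database
    # (e.g. via a connection made through RDS Proxy)
    stats = {"records": 1234}
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(stats),
    }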
CORRECT: “Set up an Amazon API Gateway and use AWS Lambda functions” is the correct answer.
INCORRECT: “Set up an Amazon API Gateway and use Amazon ECS” is incorrect as Lambda is a better fit for this use case as the traffic patterns are highly dynamic.
INCORRECT: “Set up an Amazon API Gateway and use AWS Elastic Beanstalk” is incorrect as Lambda is a better fit for this use case as the traffic patterns are highly dynamic.
INCORRECT: “Set up an Amazon API Gateway and use Amazon EC2 with Auto Scaling” is incorrect as Lambda is a better fit for this use case as the traffic patterns are highly dynamic.
Question 11:
A Solutions Architect has been tasked with migrating 30 TB of data from an on-premises data center within 20 days. The company has an internet connection that is limited to 25 Mbps and the data transfer cannot use more than 50% of the connection speed.
What should a Solutions Architect do to meet these requirements?
A: Use AWS Snowball.
B: Use AWS Storage Gateway.
C: Use a site-to-site VPN.
D: Use AWS DataSync.
Explanation
This is a simple case of working out roughly how long it will take to migrate the data using the 12.5 Mbps of bandwidth that is available for transfer and seeing which options are feasible. Transferring 30 TB of data at 12.5 Mbps would take upwards of 200 days.
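The arithmetic can be checked directly:

# 30 TB at the permitted 12.5 Mbps (50% of the 25 Mbps link)
bits = 30e12 * 8            # 30 TB expressed in bits
seconds = bits / 12.5e6     # transfer time at 12.5 Mbps
print(seconds / 86400)      # ~222 days, far beyond the 20-day deadline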
Therefore, we know that using the Internet connection will not meet the requirements and we can rule out any solution that will use the internet (all options except for Snowball). AWS Snowball is a physical device that is shipped to your office or data center. You can then load data onto it and ship it back to AWS where the data is uploaded to Amazon S3.
Snowball is the only solution that will achieve the data migration requirements within the 20-day period.
CORRECT: “Use AWS Snowball” is the correct answer.
INCORRECT: “Use AWS DataSync” is incorrect. This uses the internet which will not meet the 20-day deadline.
INCORRECT: “Use AWS Storage Gateway” is incorrect. This uses the internet which will not meet the 20-day deadline.
INCORRECT: “Use a site-to-site VPN” is incorrect. This uses the internet which will not meet the 20-day deadline.
Question 13:
A security team wants to limit access to specific services or actions in all of the team’s AWS accounts. All accounts belong to a large organization in AWS Organizations. The solution must be scalable and there must be a single point where permissions can be maintained.
What should a solutions architect do to accomplish this?
A: Create cross-account roles in each account to deny access to the services or actions
B: Create a security group to allow accounts and attach it to user groups
C: Create a service control policy in the root organizational unit to deny access to the services or actions
D: Create an ACL to provide access to the services or actions
Explanation
Service control policies (SCPs) offer central control over the maximum available permissions for all accounts in your organization, allowing you to ensure your accounts stay within your organization’s access control guidelines.
SCPs alone are not sufficient for allowing access in the accounts in your organization. Attaching an SCP to an AWS Organizations entity (root, OU, or account) defines a guardrail for what actions the principals can perform. You still need to attach identity-based or resource-based policies to principals or resources in your organization’s accounts to actually grant permissions to them.
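A sketch of creating and attaching a deny SCP at the root follows; the denied action and the root ID are placeholders.

import json
import boto3

org = boto3.client("organizations")

policy = org.create_policy(
    Name="deny-restricted-services",
    Description="Deny specific actions organization-wide",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Action": ["ec2:TerminateInstances"],  # placeholder action
            "Resource": "*",
        }],
    }),
)

# Attaching the SCP at the root applies the guardrail to every account
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="r-examplerootid",  # placeholder root ID
)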
CORRECT: “Create a service control policy in the root organizational unit to deny access to the services or actions” is the correct answer.
INCORRECT: “Create an ACL to provide access to the services or actions” is incorrect as access control lists are not used for permissions associated with IAM. Permissions policies are used with IAM.
INCORRECT: “Create a security group to allow accounts and attach it to user groups” is incorrect as security groups are instance level firewalls. They do not limit service actions.
INCORRECT: “Create cross-account roles in each account to deny access to the services or actions” is incorrect as this is a complex solution and does not provide centralized control.
Question 14:
A Solutions Architect working for a large financial institution is building an application to manage their customers' financial information and their sensitive personal information. The storage layer must be able to store immutable data out of the box, encrypt the data at rest, and provide ACID properties. They also want to use a containerized solution to manage the compute layer.
Which solution will meet these requirements with the LEAST amount of operational overhead?
A: Set up an ECS cluster behind an Application Load Balancer on AWS Fargate. Use Amazon Quantum Ledger Database (QLDB) to manage the storage layer.
B: Configure an ECS cluster on EC2 behind an Application Load Balancer within an Auto Scaling Group. Store data using Amazon DynamoDB.
C: Create an Auto Scaling Group with EC2 instances behind an Application Load Balancer. To manage the storage layer, use Amazon S3.
D: Create a cluster of ECS instances on AWS Fargate within an Auto Scaling Group behind an Application Load Balancer. To manage the storage layer, use Amazon S3.
Explanation
The solution requires that the storage layer be immutable. This immutability can only be delivered by Amazon Quantum Ledger Database (QLDB), as Amazon QLDB has a built-in immutable journal that stores an accurate and sequenced entry of every data change. The journal is append-only, meaning that data can only be added to a journal, and it cannot be overwritten or deleted.
Secondly, the compute layer needs to be containerized and implemented with the least possible operational overhead. The option that best fits these requirements is Amazon ECS on AWS Fargate, as AWS Fargate is a serverless, containerized deployment option.
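Creating the ledger itself is a single call, as sketched below with an illustrative name; QLDB encrypts data at rest by default.

import boto3

qldb = boto3.client("qldb")

qldb.create_ledger(
    Name="customer-finance",   # illustrative ledger name
    PermissionsMode="STANDARD",
    DeletionProtection=True,   # guard against accidental deletion
)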
CORRECT: “Set up an ECS cluster behind an Application Load Balancer on AWS Fargate. Use Amazon Quantum Ledger Database (QLDB) to manage the storage layer” is the correct answer (as explained above.)
INCORRECT: “Create an Auto Scaling Group with EC2 instances behind an Application Load Balancer. To manage the storage layer, use Amazon S3” is incorrect. EC2 instances are virtual machines, not a container product, and Amazon S3 is an object storage service which does not act as an immutable storage layer.
INCORRECT: “Configure an ECS cluster on EC2 behind an Application Load Balancer within an Auto Scaling Group. Store data using Amazon DynamoDB” is incorrect. ECS on EC2 involves more operational overhead than AWS Fargate, as Fargate is a serverless service.
INCORRECT: “Create a cluster of ECS instances on AWS Fargate within an Auto Scaling Group behind an Application Load Balancer. To manage the storage layer, use Amazon S3” is incorrect. Although Fargate would be a suitable deployment option, Amazon S3 is not suitable for the storage layer as it is not immutable by default.
Question 15:
A company has deployed an API in a VPC behind an internal Network Load Balancer (NLB). An application that consumes the API as a client is deployed in a second account in private subnets.
Which architectural configurations will allow the API to be consumed without using the public Internet? (Select TWO.)
A: Configure a VPC peering connection between the two VPCs. Access the API using the private address
B: Configure a ClassicLink connection for the API into the client VPC. Access the API using the ClassicLink address
C: Configure a PrivateLink connection for the API into the client VPC. Access the API using the PrivateLink address
D: Configure an AWS Resource Access Manager connection between the two accounts. Access the API using the private address
E: Configure an AWS Direct Connect connection between the two VPCs. Access the API using the private address
Explanation
You can create your own application in your VPC and configure it as an AWS PrivateLink-powered service (referred to as an endpoint service). Other AWS principals can create a connection from their VPC to your endpoint service using an interface VPC endpoint. You are the service provider, and the AWS principals that create connections to your service are service consumers.
This configuration is powered by AWS PrivateLink and clients do not need to use an internet gateway, NAT device, VPN connection or AWS Direct Connect connection, nor do they require public IP addresses.
Another option is to use a VPC Peering connection. A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IPv4 addresses or IPv6 addresses. Instances in either VPC can communicate with each other as if they are within the same network. You can create a VPC peering connection between your own VPCs, or with a VPC in another AWS account.
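The PrivateLink option is sketched below; the NLB ARN and all IDs are placeholders, and in practice the second call would be made with the client account's credentials.

import boto3

ec2 = boto3.client("ec2")

# Provider account: publish the internal NLB as an endpoint service
svc = ec2.create_vpc_endpoint_service_configuration(
    NetworkLoadBalancerArns=[
        "arn:aws:elasticloadbalancing:us-east-1:111111111111:loadbalancer/net/api-nlb/abc123",  # placeholder
    ],
    AcceptanceRequired=True,
)
service_name = svc["ServiceConfiguration"]["ServiceName"]

# Consumer account: create an interface endpoint to the service
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",              # placeholder client VPC
    ServiceName=service_name,
    SubnetIds=["subnet-0123456789abcdef0"],     # placeholder private subnet
    SecurityGroupIds=["sg-0123456789abcdef0"],  # placeholder
)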
CORRECT: “Configure a VPC peering connection between the two VPCs. Access the API using the private address” is a correct answer.
CORRECT: “Configure a PrivateLink connection for the API into the client VPC. Access the API using the PrivateLink address” is also a correct answer.
INCORRECT: “Configure an AWS Direct Connect connection between the two VPCs. Access the API using the private address” is incorrect. Direct Connect is used for connecting from on-premises data centers into AWS. It is not used from one VPC to another.
INCORRECT: “Configure a ClassicLink connection for the API into the client VPC. Access the API using the ClassicLink address” is incorrect. ClassicLink allows you to link EC2-Classic instances to a VPC in your account, within the same Region. This is not relevant to sending data between two VPCs.
INCORRECT: “Configure an AWS Resource Access Manager connection between the two accounts. Access the API using the private address” is incorrect. AWS RAM lets you share resources that are provisioned and managed in other AWS services. However, APIs are not shareable resources with AWS RAM.
Question 16:
A Solutions Architect is designing a solution for an application that requires very low latency between the client and the backend. The application uses the UDP protocol, and the backend is hosted on Amazon EC2 instances. The solution must be highly available across multiple Regions and users around the world should be directed to the most appropriate Region based on performance.
How can the Solutions Architect meet these requirements?
A: Deploy an Amazon CloudFront distribution with a custom origin pointing to Amazon EC2 instances in multiple Regions.
B: Deploy Amazon EC2 instances in multiple Regions. Create a multivalue answer routing record in Amazon Route 53 that includes all EC2 endpoints.
C: Deploy an Application Load Balancer in front of the EC2 instances in each Region. Use AWS WAF to direct traffic to the most optimal Regional endpoint.
D: Deploy a Network Load Balancer in front of the EC2 instances in each Region. Use AWS Global Accelerator to route traffic to the most optimal Regional endpoint.
Explanation
An NLB is ideal for latency-sensitive applications and can listen on UDP for incoming requests. As Elastic Load Balancers are region-specific it is necessary to have an NLB in each Region in front of the EC2 instances.
To direct traffic based on optimal performance, AWS Global Accelerator can be used. GA will ensure traffic is routed across the AWS global network to the most optimal endpoint based on performance.
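A sketch of the per-Region NLB with a UDP listener follows; all IDs and the port are placeholders. The accelerator is then pointed at each Regional NLB, as in the Global Accelerator example earlier.

import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

tg = elbv2.create_target_group(
    Name="udp-backend-targets",     # placeholder
    Protocol="UDP",
    Port=4000,
    VpcId="vpc-0123456789abcdef0",  # placeholder
    TargetType="instance",
)

nlb = elbv2.create_load_balancer(
    Name="udp-backend-nlb",         # placeholder
    Type="network",
    Scheme="internet-facing",
    Subnets=["subnet-0123456789abcdef0"],  # placeholder
)

elbv2.create_listener(
    LoadBalancerArn=nlb["LoadBalancers"][0]["LoadBalancerArn"],
    Protocol="UDP",
    Port=4000,
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": tg["TargetGroups"][0]["TargetGroupArn"],
    }],
)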
CORRECT: “Deploy a Network Load Balancer in front of the EC2 instances in each Region. Use AWS Global Accelerator to route traffic to the most optimal Regional endpoint” is the correct answer.
INCORRECT: “Deploy an Application Load Balancer in front of the EC2 instances in each Region. Use AWS WAF to direct traffic to the most optimal Regional endpoint” is incorrect. You cannot use WAF to direct traffic to endpoints based on performance.
INCORRECT: “Deploy an Amazon CloudFront distribution with a custom origin pointing to Amazon EC2 instances in multiple Regions” is incorrect. CloudFront cannot listen on UDP; it is used for HTTP/HTTPS.
INCORRECT: “Deploy Amazon EC2 instances in multiple Regions. Create a multivalue answer routing record in Amazon Route 53 that includes all EC2 endpoints” is incorrect. This configuration would not route incoming requests to the most optimal endpoint based on performance, it would provide multiple records in answers and traffic would be distributed across multiple Regions.
Question 17:
A security officer requires that access to company financial reports is logged. The reports are stored in an Amazon S3 bucket. Additionally, any modifications to the log files must be detected.
Which actions should a solutions architect take?
A: Use AWS CloudTrail to create a new trail. Configure the trail to log read and write data events on the S3 bucket that houses the reports. Log these events to a new bucket, and enable log file validation
B: Use AWS CloudTrail to create a new trail. Configure the trail to log read and write management events on the S3 bucket that houses the reports. Log these events to a new bucket, and enable log file validation
C: Use S3 server access logging on the bucket that houses the reports with the read and write data events and the log file validation options enabled
D: Use S3 server access logging on the bucket that houses the reports with the read and write management events and log file validation options enabled
Explanation
AWS CloudTrail can be used to log activity on the reports. The key difference between the two answers that include CloudTrail is that one references data events whereas the other references management events.
Data events provide visibility into the resource operations performed on or within a resource. These are also known as data plane operations. Data events are often high-volume activities.
Example data events include:
* Amazon S3 object-level API activity (for example, GetObject, DeleteObject, and PutObject API operations).
* AWS Lambda function execution activity (the Invoke API).
Management events provide visibility into management operations that are performed on resources in your AWS account. These are also known as control plane operations. Example management events include:
* Configuring security (for example, IAM AttachRolePolicy API operations)
* Registering devices (for example, Amazon EC2 CreateDefaultVpc API operations).
Therefore, to log data about access to the S3 objects the solutions architect should log read and write data events.
Log file validation can also be enabled on the trail, which makes it possible to detect whether the delivered log files were modified.
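A sketch of such a trail follows; the trail and bucket names are placeholders, and the destination bucket must carry a bucket policy that allows CloudTrail to write to it (omitted here).

import boto3

cloudtrail = boto3.client("cloudtrail")

# Create the trail with log file validation enabled
cloudtrail.create_trail(
    Name="reports-access-trail",       # placeholder
    S3BucketName="trail-logs-bucket",  # placeholder destination bucket
    EnableLogFileValidation=True,
)

# Log read and write S3 data events for the reports bucket only
cloudtrail.put_event_selectors(
    TrailName="reports-access-trail",
    EventSelectors=[{
        "ReadWriteType": "All",
        "IncludeManagementEvents": False,
        "DataResources": [{
            "Type": "AWS::S3::Object",
            "Values": ["arn:aws:s3:::financial-reports/"],  # placeholder bucket
        }],
    }],
)

cloudtrail.start_logging(Name="reports-access-trail")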
CORRECT: “Use AWS CloudTrail to create a new trail. Configure the trail to log read and write data events on the S3 bucket that houses the reports. Log these events to a new bucket, and enable log file validation” is the correct answer.
INCORRECT: “Use AWS CloudTrail to create a new trail. Configure the trail to log read and write management events on the S3 bucket that houses the reports. Log these events to a new bucket, and enable log file validation” is incorrect as data events should be logged rather than management events.
INCORRECT: “Use S3 server access logging on the bucket that houses the reports with the read and write data events and the log file validation options enabled” is incorrect as server access logging does not have an option for choosing data events or log file validation.
INCORRECT: “Use S3 server access logging on the bucket that houses the reports with the read and write management events and log file validation options enabled” is incorrect as server access logging does not have an option for choosing management events or log file validation.
Question 20:
A Solutions Architect needs a solution for hosting a website that will be used by a development team. The website contents will consist of HTML, CSS, client-side JavaScript, and images.
Which solution is MOST cost-effective?
A: Create an Application Load Balancer with an AWS Lambda target.
B: Launch an Amazon EC2 instance and host the website there.
C: Use a Docker container to host the website on AWS Fargate.
D: Create an Amazon S3 bucket and host the website there.
Explanation
Amazon S3 can be used for hosting static websites but cannot serve dynamic, server-side content. In this case the content is purely static, with only client-side code running. Therefore, an S3 static website will be the most cost-effective solution for hosting this website.
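Enabling website hosting on the bucket is a single call, sketched below with an illustrative bucket name; making the content publicly readable, e.g. via a bucket policy, is omitted.

import boto3

s3 = boto3.client("s3")

s3.put_bucket_website(
    Bucket="dev-team-site",  # illustrative bucket
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)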
CORRECT: “Create an Amazon S3 bucket and host the website there” is the correct answer.
INCORRECT: “Launch an Amazon EC2 instance and host the website there” is incorrect. This will be more expensive as it uses an EC2 instance.
INCORRECT: “Use a Docker container to host the website on AWS Fargate” is incorrect. A static website on S3 is sufficient for this use case and will be more cost-effective than Fargate.
INCORRECT: “Create an Application Load Balancer with an AWS Lambda target” is incorrect. This is also a more expensive solution and unnecessary for this use case.
Question 21:
A production application runs on an Amazon RDS MySQL DB instance. A solutions architect is building a new reporting tool that will access the same data. The reporting tool must be highly available and not impact the performance of the production application.
How can this be achieved?
A: Create a Multi-AZ RDS Read Replica of the production RDS DB instance
B: Create a cross-region Multi-AZ deployment and create a read replica in the second region
C: Create a Single-AZ RDS Read Replica of the production RDS DB instance. Create a second Single-AZ RDS Read Replica from the replica
D: Use Amazon Data Lifecycle Manager to automatically create and manage snapshots
Explanation
You can create a read replica as a Multi-AZ DB instance. Amazon RDS creates a standby of your replica in another Availability Zone for failover support for the replica. Creating your read replica as a Multi-AZ DB instance is independent of whether the source database is a Multi-AZ DB instance.
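A sketch of creating the Multi-AZ replica follows; both identifiers are placeholders.

import boto3

rds = boto3.client("rds")

rds.create_db_instance_read_replica(
    DBInstanceIdentifier="reporting-replica",       # placeholder
    SourceDBInstanceIdentifier="production-mysql",  # placeholder
    MultiAZ=True,  # RDS keeps a standby of the replica in another AZ
)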
CORRECT: “Create a Multi-AZ RDS Read Replica of the production RDS DB instance” is the correct answer.
INCORRECT: “Create a Single-AZ RDS Read Replica of the production RDS DB instance. Create a second Single-AZ RDS Read Replica from the replica” is incorrect. Read replicas are primarily used for horizontal scaling. The best solution for high availability is to use a Multi-AZ read replica.
INCORRECT: “Create a cross-region Multi-AZ deployment and create a read replica in the second region” is incorrect as you cannot create a cross-region Multi-AZ deployment with RDS.
INCORRECT: “Use Amazon Data Lifecycle Manager to automatically create and manage snapshots” is incorrect as using snapshots is not the best solution for high availability.
Question 22:
A company operates a production web application that uses an Amazon RDS MySQL database. The database has automated, non-encrypted daily backups. To increase the security of the data, it has been recommended that encryption should be enabled for backups. Unencrypted backups will be destroyed after the first encrypted backup has been completed.
What should be done to enable encryption for future backups?
A: Enable an encrypted read replica on RDS for MySQL. Promote the encrypted read replica to primary. Remove the original database instance
B: Modify the backup section of the database configuration to toggle the Enable encryption check box
C: Enable default encryption for the Amazon S3 bucket where backups are stored
D: Create a snapshot of the database. Copy it to an encrypted snapshot. Restore the database from the encrypted snapshot
Explanation
Amazon RDS uses snapshots for backup. Snapshots are encrypted when created only if the database is encrypted, and you can only select encryption for the database when you first create it. In this case the database, and hence the snapshots, are unencrypted.
However, you can create an encrypted copy of a snapshot. You can restore using that snapshot which creates a new DB instance that has encryption enabled. From that point on encryption will be enabled for all snapshots.
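The snapshot-copy-restore sequence could look like the sketch below; identifiers and the key ARN are placeholders, and waiting for each step to complete is omitted.

import boto3

rds = boto3.client("rds")

# 1. Snapshot the unencrypted database
rds.create_db_snapshot(
    DBSnapshotIdentifier="app-db-snap",
    DBInstanceIdentifier="app-db",
)

# 2. Copy the snapshot, specifying a KMS key to encrypt the copy
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="app-db-snap",
    TargetDBSnapshotIdentifier="app-db-snap-encrypted",
    KmsKeyId="arn:aws:kms:us-east-1:123456789012:key/placeholder",
)

# 3. Restore a new, encrypted instance from the encrypted copy
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="app-db-encrypted",
    DBSnapshotIdentifier="app-db-snap-encrypted",
)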
CORRECT: “Create a snapshot of the database. Copy it to an encrypted snapshot. Restore the database from the encrypted snapshot” is the correct answer.
INCORRECT: “Enable an encrypted read replica on RDS for MySQL. Promote the encrypted read replica to primary. Remove the original database instance” is incorrect as you cannot create an encrypted read replica from an unencrypted master.
INCORRECT: “Modify the backup section of the database configuration to toggle the Enable encryption check box” is incorrect as you cannot add encryption for an existing database.
INCORRECT: “Enable default encryption for the Amazon S3 bucket where backups are stored” is incorrect because you do not have access to the S3 bucket in which snapshots are stored.
Question 23:
A company is creating a solution that must offer disaster recovery across multiple AWS Regions. The solution requires a relational database that can support a Recovery Point Objective (RPO) of 1 second and a Recovery Time Objective (RTO) of 1 minute.
Which AWS solution can achieve this?
A: Amazon RDS with Multi-AZ enabled.
B: Amazon RDS with a cross-Region replica.
C: Amazon DynamoDB global tables.
D: Amazon Aurora Global Database.
Explanation
Aurora Global Database lets you easily scale database reads across the world and place your applications close to your users. Your applications enjoy quick data access regardless of the number and location of secondary regions, with typical cross-region replication latencies below 1 second.
If your primary region suffers a performance degradation or outage, you can promote one of the secondary regions to take read/write responsibilities. An Aurora cluster can recover in less than 1 minute even in the event of a complete regional outage. This provides your application with an effective Recovery Point Objective (RPO) of 1 second and a Recovery Time Objective (RTO) of less than 1 minute, providing a strong foundation for a global business continuity plan.
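A sketch of creating the global database and its primary cluster follows; identifiers are placeholders, and secondary Regions are added with further create_db_cluster calls that reference the same global cluster.

import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_global_cluster(
    GlobalClusterIdentifier="global-app-db",  # placeholder
    Engine="aurora-mysql",
)

# Primary Regional cluster attached to the global cluster
rds.create_db_cluster(
    DBClusterIdentifier="app-db-primary",     # placeholder
    Engine="aurora-mysql",
    GlobalClusterIdentifier="global-app-db",
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",          # placeholder
)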
CORRECT: “Amazon Aurora Global Database” is the correct answer.
INCORRECT: “Amazon RDS with Multi-AZ enabled” is incorrect. RDS Multi-AZ is across Availability Zones, not across Regions.
INCORRECT: “Amazon RDS with a cross-Region replica” is incorrect. A cross-Region replica for RDS cannot provide an RPO of 1 second as there is typically more latency. You also cannot achieve a one-minute RTO, as it takes much longer to promote a replica to a master.
INCORRECT: “Amazon DynamoDB global tables” is incorrect. This is not a relational database; it is a non-relational database (NoSQL).
Question 25:
An application runs on a fleet of Amazon EC2 instances in an Amazon EC2 Auto Scaling group behind an Elastic Load Balancer. The operations team has determined that the application performs best when the CPU utilization of the EC2 instances is at or near 60%.
Which scaling configuration should a Solutions Architect use to optimize the application's performance?
A: Use a target tracking policy to dynamically scale the Auto Scaling group.
B: Use a scheduled scaling policy to dynamically scale the Auto Scaling group.
C: Use a step scaling policy to dynamically scale the Auto Scaling group.
D: Use a simple scaling policy to dynamically scale the Auto Scaling group.
Explanation
With target-tracking scaling policies, you select a scaling metric and set a target value. Amazon EC2 Auto Scaling creates and manages the CloudWatch alarms that trigger the scaling policy and calculates the scaling adjustment based on the metric and the target value.
The scaling policy adds or removes capacity as required to keep the metric at, or close to, the specified target value. In addition to keeping the metric close to the target value, a target tracking scaling policy also adjusts to changes in the metric due to a changing load pattern.
For example, a target tracking policy can be configured to keep the CPU utilization of the EC2 instances at or close to 60%.
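A sketch of such a policy follows; the group name is a placeholder.

import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="app-asg",  # placeholder
    PolicyName="cpu-60-target",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 60.0,  # keep average CPU at or near 60%
    },
)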
CORRECT: “Use a target tracking policy to dynamically scale the Auto Scaling group” is the correct answer.
INCORRECT: “Use a simple scaling policy to dynamically scale the Auto Scaling group” is incorrect. Simple scaling is not used for maintaining a target utilization. It is used for making simple adjustments up or down based on a threshold value.
INCORRECT: “Use a step scaling policy to dynamically scale the Auto Scaling group” is incorrect. Step scaling is not used for maintaining a target utilization. It is used for making step adjustments that vary based on the size of the alarm breach.
INCORRECT: “Use a scheduled scaling policy to dynamically scale the Auto Scaling group” is incorrect. Scheduled scaling is not used for maintaining a target utilization. It is used for scheduling changes at specific dates and times.