Practice Exam 2 Flashcards
An online media company has created an application which provides analytical data to its clients. The application is hosted on EC2 instances in an Auto Scaling Group. You have been brought on as a consultant and add an Application Load Balancer to front the Auto Scaling Group and distribute the load between the instances. The VPC which houses this architecture is running both IPv4 and IPv6. The last thing you need to do to complete the configuration is point the domain name to the Application Load Balancer. Using Route 53, which record types at the zone apex will you use to point the domain name to the DNS name of the Application Load Balancer? Choose two.
Alias with an A type record set.
Alias with an AAAA type record set.
- Both alias record types are correct: because the VPC runs both IPv4 and IPv6, you point the zone apex at the load balancer with an alias “A” record (IPv4) and an alias “AAAA” record (IPv6).
- To route domain traffic to an ELB load balancer, use Amazon Route 53 to create an alias record that points to your load balancer.
- An alias record is a Route 53 extension to DNS; unlike a CNAME, an alias record can be used at the zone apex.
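A minimal sketch of creating both alias records with boto3 (the hosted zone ID, domain, and ALB values below are hypothetical placeholders; the ALB's own hosted zone ID comes from the ELB console or API):

```python
import boto3

route53 = boto3.client("route53")

# Hypothetical values: replace with your hosted zone, domain, and ALB details.
HOSTED_ZONE_ID = "Z111111111111"        # your Route 53 hosted zone
ALB_DNS_NAME = "my-alb-123456.us-east-1.elb.amazonaws.com"
ALB_HOSTED_ZONE_ID = "Z35SXDOTRQ7X7K"   # the ALB's zone ID, from the ELB API

changes = [
    {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "example.com.",      # zone apex
            "Type": record_type,         # "A" for IPv4, "AAAA" for IPv6
            "AliasTarget": {
                "HostedZoneId": ALB_HOSTED_ZONE_ID,
                "DNSName": ALB_DNS_NAME,
                "EvaluateTargetHealth": False,
            },
        },
    }
    for record_type in ("A", "AAAA")
]

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={"Changes": changes},
)
```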
Your company has recently converted to a hybrid cloud environment and will slowly be migrating to a fully AWS cloud environment. The AWS side needs some preparation for disaster recovery: a disaster recovery plan needs to be drawn up and disaster recovery drills need to be performed. The company wants to establish Recovery Time and Recovery Point Objectives, with a major component being a very aggressive RTO; cost is not a major factor. You have determined, and will recommend, that the best DR configuration to meet the cost and RTO/RPO objectives is to run a second AWS architecture in another Region in an active-active configuration. Which AWS disaster recovery pattern will best meet these requirements?
Multi-site
- Multi-site with an active-active architecture is correct.
- This pattern has the highest cost but the quickest failover, which matches the aggressive RTO (a hedged routing sketch follows).
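In an active-active multi-site pattern, Route 53 typically distributes live traffic across both Regions. One common way is weighted alias records; a hedged sketch (all IDs and DNS names below are hypothetical, and only the IPv4 records are shown for brevity):

```python
import boto3

route53 = boto3.client("route53")

# Hypothetical endpoints: one ALB per Region, each serving live traffic.
endpoints = [
    ("us-east-1-alb.example.aws", "Z35SXDOTRQ7X7K", "primary", 50),
    ("eu-west-1-alb.example.aws", "Z32O12XQLNTSW2", "secondary", 50),
]

changes = []
for dns_name, alb_zone_id, set_id, weight in endpoints:
    changes.append({
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com.",
            "Type": "A",
            "SetIdentifier": set_id,  # distinguishes the two weighted records
            "Weight": weight,         # 50/50 split for active-active
            "AliasTarget": {
                "HostedZoneId": alb_zone_id,
                "DNSName": dns_name,
                "EvaluateTargetHealth": True,  # shift traffic if one Region fails
            },
        },
    })

route53.change_resource_record_sets(
    HostedZoneId="Z111111111111",  # hypothetical hosted zone
    ChangeBatch={"Changes": changes},
)
```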
You work for an online retailer where any downtime at all can cause a significant loss of revenue. You have architected your application to be deployed on an Auto Scaling Group of EC2 instances behind a load balancer. You have configured and deployed these resources using a CloudFormation template. The Auto Scaling Group is configured with default settings and a simple CPU utilization scaling policy. You have also set up multiple Availability Zones for high availability. The load balancer does health checks against an HTML file generated by a script. When you begin performing load testing on your application, you notice in CloudWatch that the load balancer is not sending traffic to one of your EC2 instances. What could be the problem?
The EC2 instance has failed the load balancer health check.
- The load balancer will route incoming requests only to healthy instances.
- The EC2 instance may have passed its status checks and be considered healthy by the Auto Scaling Group, but the ELB will not send it traffic until it passes the ELB health check.
- The ELB health check has a default of 30 seconds between checks, and a default of 3 checks before making a decision.
- The instance could therefore appear available yet receive no traffic for at least 90 seconds before the console would show it as failed.
- In CloudWatch, where the issue was noticed, it would appear as a healthy EC2 instance receiving no traffic, which is exactly what was observed.
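If faster detection is needed, the health check can be tightened. A minimal boto3 sketch against an ALB target group (the ARN and path below are hypothetical placeholders):

```python
import boto3

elbv2 = boto3.client("elbv2")

# Hypothetical target group ARN and health check file path.
elbv2.modify_target_group(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                   "targetgroup/web/0123456789abcdef",
    HealthCheckPath="/health.html",    # the script-generated file
    HealthCheckIntervalSeconds=10,     # default is 30
    HealthyThresholdCount=2,           # consecutive successes to mark healthy
    UnhealthyThresholdCount=2,         # consecutive failures to mark unhealthy
)
```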
An accounting company has big data applications for analyzing actuary data. The company is migrating some of its services to the cloud, and for the foreseeable future, will be operating in a hybrid environment. They need a storage service that provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources. Which AWS service can meet these requirements?
EFS
- Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources.
- It is built to scale on demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth.
- Amazon EFS offers two storage classes: the Standard storage class, and the Infrequent Access storage class (EFS IA).
- EFS IA provides price/performance that’s cost-optimized for files not accessed every day.
- By simply enabling EFS Lifecycle Management on your file system, files not accessed according to the lifecycle policy you choose will be automatically and transparently moved into EFS IA.
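For example, enabling lifecycle management so files move to EFS IA after 30 days without access might look like this with boto3 (the file system ID is a placeholder):

```python
import boto3

efs = boto3.client("efs")

# Hypothetical file system ID; AFTER_30_DAYS is one of the supported policies.
efs.put_lifecycle_configuration(
    FileSystemId="fs-0123456789abcdef0",
    LifecyclePolicies=[{"TransitionToIA": "AFTER_30_DAYS"}],
)
```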
You are working as a Solutions Architect in a large healthcare organization. You have many Auto Scaling Groups that you need to create. One requirement is that you need to reuse some software licenses, and therefore need to use Dedicated Hosts for the EC2 instances in your Auto Scaling Groups. What step must you take to meet this requirement?
Use a launch template with your Auto Scaling Group.
- In addition to the features of Amazon EC2 Auto Scaling that you can configure by using launch configurations, launch templates provide more advanced Amazon EC2 configuration options.
- For example, you must use launch templates to use Amazon EC2 Dedicated Hosts (a minimal sketch follows at the end of this card).
- Dedicated Hosts are physical servers with EC2 instance capacity that are dedicated to your use.
- While Amazon EC2 Dedicated Instances also run on dedicated hardware, the advantage of using Dedicated Hosts over Dedicated Instances is that you can bring eligible software licenses from external vendors and use them on EC2 instances.
- If you currently use launch configurations, you can specify a launch template when you update an Auto Scaling Group that was created using a launch configuration.
- To create a launch template to use with an Auto Scaling Group, you can:
- create the template from scratch,
- create a new version of an existing template,
- or copy the parameters from a launch configuration, running instance, or other template.
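A hedged sketch of what this looks like with boto3: a launch template requesting Dedicated Host tenancy, then an Auto Scaling Group that launches from it (the AMI ID, names, and subnet are hypothetical placeholders):

```python
import boto3

ec2 = boto3.client("ec2")
autoscaling = boto3.client("autoscaling")

# Launch template requesting Dedicated Host tenancy (values are placeholders).
ec2.create_launch_template(
    LaunchTemplateName="licensed-app-template",
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",  # AMI carrying the BYOL software
        "InstanceType": "m5.large",
        "Placement": {"Tenancy": "host"},    # run on Dedicated Hosts
    },
)

# Auto Scaling Group that launches instances from the template.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="licensed-app-asg",
    LaunchTemplate={"LaunchTemplateName": "licensed-app-template",
                    "Version": "$Latest"},
    MinSize=1,
    MaxSize=4,
    VPCZoneIdentifier="subnet-0123456789abcdef0",  # hypothetical subnet
)
```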
You are working for a large financial institution and preparing for disaster recovery and upcoming DR drills. A key component in the DR plan will be the database instances and their data. An aggressive Recovery Time Objective (RTO) dictates that the database needs to be synchronously replicated. Which configuration can meet this requirement?
RDS Multi-AZ
- Amazon RDS Multi-AZ deployments provide enhanced availability and durability for RDS database (DB) instances, making them a natural fit for production database workloads.
- When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ).
- Each AZ runs on its own physically distinct, independent infrastructure and is engineered to be highly reliable.
- In case of an infrastructure failure, Amazon RDS performs an automatic failover to the standby (or to a read replica in the case of Amazon Aurora).
- Since the endpoint for your DB Instance remains the same after a failover, your application can resume database operations as soon as the failover is complete, without the need for manual administrative intervention.
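Enabling synchronous replication is a single flag at creation time. A minimal boto3 sketch (all identifiers and credentials below are placeholders):

```python
import boto3

rds = boto3.client("rds")

# MultiAZ=True provisions a synchronously replicated standby in another AZ.
rds.create_db_instance(
    DBInstanceIdentifier="trading-db",   # hypothetical name
    Engine="mysql",
    DBInstanceClass="db.m5.large",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",     # placeholder only
    MultiAZ=True,
)
```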
Your development team has created a gaming application that uses DynamoDB to store user statistics and provide fast game updates back to users. The team has begun testing the application but needs a consistent data set to perform tests with. The testing process alters the dataset, so the baseline data needs to be retrieved upon each new test. Which AWS service can meet this need by exporting data from DynamoDB and importing data into DynamoDB?
Elastic MapReduce (EMR)
You can use Amazon EMR with a customized version of Hive that includes connectivity to DynamoDB to perform operations on data stored in DynamoDB:
- Loading DynamoDB data into the Hadoop Distributed File System (HDFS) and using it as input into an Amazon EMR cluster
- Querying live DynamoDB data using SQL-like statements (HiveQL)
- Joining data stored in DynamoDB and exporting it or querying against the joined data
- Exporting data stored in DynamoDB to Amazon S3
- Importing data stored in Amazon S3 to DynamoDB
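A hedged sketch of submitting a Hive step to an existing EMR cluster to export a DynamoDB table to S3; the cluster ID, table, bucket, and script location are all hypothetical, and the HiveQL shown in the comment is only illustrative of the kind of script you would stage in S3:

```python
import boto3

emr = boto3.client("emr")

# The staged script (s3://my-bucket/export.q) might contain HiveQL such as:
#   CREATE EXTERNAL TABLE ddb_stats (user_id string, score bigint)
#   STORED BY 'org.apache.hadoop.hive.dynamodb.DynamoDBStorageHandler'
#   TBLPROPERTIES ("dynamodb.table.name" = "UserStats",
#                  "dynamodb.column.mapping" = "user_id:UserId,score:Score");
#   INSERT OVERWRITE DIRECTORY 's3://my-bucket/baseline/' SELECT * FROM ddb_stats;

emr.add_job_flow_steps(
    JobFlowId="j-0123456789ABC",  # hypothetical cluster ID
    Steps=[{
        "Name": "Export DynamoDB baseline to S3",
        "ActionOnFailure": "CONTINUE",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": ["hive-script", "--run-hive-script",
                     "--args", "-f", "s3://my-bucket/export.q"],
        },
    }],
)
```

Reversing the INSERT direction in a second script restores the baseline data before each new test run.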
A company has an application for sharing static content, such as photos. The popularity of the application has grown, and the company is now sharing content worldwide. This worldwide service has caused some issues with latency. What AWS services can be used to host a static website, serve content to globally dispersed users, and address latency issues, while keeping cost under control? Choose two.
S3
- Amazon S3 is an object storage built to store and retrieve any amount of data from anywhere on the Internet.
- It’s a simple storage service that offers an extremely durable, highly available, and infinitely scalable data storage infrastructure at very low costs.
- AWS Global Accelerator and Amazon CloudFront are separate services that use the AWS global network and its edge locations around the world.
- CloudFront improves performance for both cacheable content (such as images and videos) and dynamic content (such as API acceleration and dynamic site delivery).
- Global Accelerator improves performance for a wide range of applications over TCP or UDP by proxying packets at the edge to applications running in one or more AWS Regions.
- Global Accelerator is a good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT), or Voice over IP, as well as for HTTP use cases that specifically require static IP addresses or deterministic, fast regional failover.
- Both services integrate with AWS Shield for DDoS protection.
CloudFront
- Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds, all within a developer-friendly environment.
- CloudFront is integrated with AWS – both physical locations that are directly connected to the AWS global infrastructure, as well as other AWS services.
- CloudFront works seamlessly with services including AWS Shield for DDoS mitigation, Amazon S3, Elastic Load Balancing, or Amazon EC2 as origins for your applications, and Lambda@Edge to run custom code closer to customers’ users and to customize the user experience.
- Lastly, if you use AWS origins such as Amazon S3, Amazon EC2, or Elastic Load Balancing, you don’t pay for any data transferred between these services and CloudFront.
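A minimal sketch of the S3 half: enabling static website hosting on a bucket with boto3 (the bucket name is hypothetical); the bucket would then be configured as the origin of the CloudFront distribution:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket already containing the static content.
s3.put_bucket_website(
    Bucket="my-photo-site",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)
```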
A new startup is considering the advantages of using DynamoDB versus a traditional relational database in AWS RDS. The NoSQL nature of DynamoDB presents a small learning curve to the team members who all have experience with traditional databases. The company will have multiple databases, and the decision will be made on a case-by-case basis. Which of the following use cases would favor DynamoDB? Select two.
Managing web session data
- DynamoDB is a NoSQL database that supports key-value and document data structures.
- A key-value store is a database service that provides support for storing, querying, and updating collections of objects that are identified using a key and values that contain the actual content being stored.
- Meanwhile, a document data store provides support for storing, querying, and updating items in a document format such as JSON, XML, and HTML.
- DynamoDB’s fast and predictable performance characteristics make it a great match for handling session data.
- Plus, since it’s a fully-managed NoSQL database service, you avoid all the work of maintaining and operating a separate session store.
Storing metadata for S3 objects
- Storing metadata for Amazon S3 objects is correct because Amazon DynamoDB stores structured data indexed by primary key and allows low-latency read and write access to items ranging from 1 byte up to 400 KB.
- Amazon S3 stores unstructured blobs and is suited for storing large objects up to 5 TB.
- In order to optimize your costs across AWS services, large objects or infrequently accessed data sets should be stored in Amazon S3, while smaller data elements or file pointers (possibly to Amazon S3 objects) are best saved in Amazon DynamoDB.
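A hedged sketch of the session-data use case: writing and reading a session item keyed by session ID (the table name and attributes are hypothetical, and a TTL attribute lets DynamoDB expire stale sessions automatically):

```python
import time
import boto3

dynamodb = boto3.resource("dynamodb")
sessions = dynamodb.Table("WebSessions")  # hypothetical table, PK: session_id

# Store a session with a TTL one hour out (requires TTL enabled on 'expires_at').
sessions.put_item(Item={
    "session_id": "abc123",
    "user_id": "u-42",
    "cart": ["sku-1", "sku-2"],
    "expires_at": int(time.time()) + 3600,
})

# Fast key-value lookup on each request.
item = sessions.get_item(Key={"session_id": "abc123"}).get("Item")
```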
A new startup company decides to use AWS to host their web application. They configure a VPC as well as two subnets within the VPC. They also attach an internet gateway to the VPC. In the first subnet, they create an EC2 instance to host a web application. There is a network ACL and a security group, which both have the proper ingress and egress to and from the internet. There is a route in the route table to the internet gateway. The EC2 instances added to the subnet need to have a globally unique IP address to ensure internet access. Which is not a globally unique IP address?
Private IP address
- Public IPv4 address, elastic IP address, and IPv6 address are globally unique addresses.
- The IPv4 addresses that are not globally unique are private IP addresses.
- These are found in the following ranges: from 10.0.0.0 to 10.255.255.255, from 172.16.0.0 to 172.31.255.255, and from 192.168.0.0 to 192.168.255.255.
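Python's standard library can check these ranges directly, which is a handy way to verify the answer (no AWS calls involved):

```python
import ipaddress

# Private (RFC 1918) addresses are not globally unique.
for addr in ("10.1.2.3", "172.16.0.5", "192.168.1.10", "54.23.9.1"):
    print(addr, ipaddress.ip_address(addr).is_private)
# 10.1.2.3 True / 172.16.0.5 True / 192.168.1.10 True / 54.23.9.1 False
```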
A professional baseball league has chosen to use a key-value and document database for storage, processing, and data delivery. Many of the data requirements involve high-speed processing of data, such as a Doppler radar system that samples the position of the baseball 2,000 times per second. Which AWS data store can meet these requirements?
DynamoDB
- Amazon DynamoDB is a NoSQL database that supports key-value and document data models, and enables developers to build modern, serverless applications that can start small and scale globally to support petabytes of data and tens of millions of read and write requests per second.
- DynamoDB is designed to run high-performance, internet-scale applications that would overburden traditional relational databases.
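For write rates like 2,000 samples per second, batching writes is the usual pattern. A hedged sketch with boto3's batch writer (the table, keys, and sample feed are hypothetical):

```python
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("PitchTracking")  # hypothetical table

# Hypothetical radar feed: 2,000 (x, y, z) samples for one pitch.
radar_samples = [(0.01 * i, 1.5, 0.7) for i in range(2000)]

# batch_writer buffers items and sends them in batches of up to 25.
with table.batch_writer() as batch:
    for i, (x, y, z) in enumerate(radar_samples):
        batch.put_item(Item={
            "pitch_id": "game1-pitch42",   # partition key
            "sample_ts": i,                # sort key: sample index
            "position": {"x": str(x), "y": str(y), "z": str(z)},
        })
```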
You have been assigned to create an architecture which uses load balancers to direct traffic to an Auto Scaling Group of EC2 instances across multiple Availability Zones. You were considering using an Application Load Balancer, but some of the requirements you have been given seem to point to a Classic Load Balancer. Which requirement would be better served by an Application Load Balancer?
Path-based routing
Using an Application Load Balancer instead of a Classic Load Balancer has the following benefits:
- Support for path-based routing (see the sketch after this list).
- You can configure rules for your listener that forward requests based on the URL in the request.
- This enables you to structure your application as smaller services, and route requests to the correct service based on the content of the URL.
- Support for host-based routing.
- You can configure rules for your listener that forward requests based on the host field in the HTTP header.
- This enables you to route requests to multiple domains using a single load balancer.
- Support for routing based on fields in the request, such as standard and custom HTTP headers and methods, query parameters, and source IP addresses.
- Support for routing requests to multiple applications on a single EC2 instance.
- You can register each instance or IP address with the same target group using multiple ports.
- Support for redirecting requests from one URL to another.
- Support for returning a custom HTTP response.
- Support for registering targets by IP address, including targets outside the VPC for the load balancer.
- Support for registering Lambda functions as targets.
- Support for the load balancer to authenticate users of your applications through their corporate or social identities before routing requests.
- Support for containerized applications.
- Amazon Elastic Container Service (Amazon ECS) can select an unused port when scheduling a task and register the task with a target group using this port.
- This enables you to make efficient use of your clusters.
- Support for monitoring the health of each service independently, as health checks are defined at the target group level and many CloudWatch metrics are reported at the target group level.
- Attaching a target group to an Auto Scaling Group enables you to scale each service dynamically based on demand.
- Access logs contain additional information and are stored in compressed format.
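A minimal sketch of the path-based routing benefit called out above: a listener rule that forwards /api/* requests to their own target group (all ARNs below are hypothetical):

```python
import boto3

elbv2 = boto3.client("elbv2")

# Hypothetical listener and target group ARNs.
elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                "listener/app/web/0123456789abcdef/abcdef0123456789",
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/api/*"]}],
    Actions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:"
                          "123456789012:targetgroup/api/0123456789abcdef",
    }],
)
```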
Your company is slowly migrating to the cloud and is currently in a hybrid environment. The server team has been using Puppet for deployment automations, and the decision has been made to continue using Puppet in the AWS environment if possible. Which AWS service provides integration with Puppet?
AWS OpsWorks
- AWS OpsWorks for Puppet Enterprise is a fully-managed configuration management service that hosts Puppet Enterprise, a set of automation tools from Puppet for infrastructure and application management.
- OpsWorks also maintains your Puppet master server by automatically patching, updating, and backing up your server.
- OpsWorks eliminates the need to operate your own configuration management systems or worry about maintaining its infrastructure.
- OpsWorks gives you access to all of the Puppet Enterprise features, which you manage through the Puppet console.
- It also works seamlessly with your existing Puppet code.
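A hedged sketch of standing up a Puppet master with the OpsWorks CM API (every ARN, name, and profile below is a placeholder):

```python
import boto3

opsworkscm = boto3.client("opsworkscm")

# Hypothetical roles/profile; OpsWorks manages patching and backups of the server.
opsworkscm.create_server(
    Engine="Puppet",
    EngineModel="Monolithic",
    EngineVersion="2019",
    ServerName="puppet-master",
    InstanceProfileArn="arn:aws:iam::123456789012:instance-profile/opsworks-cm",
    InstanceType="m5.large",
    ServiceRoleArn="arn:aws:iam::123456789012:role/opsworks-cm-service",
    BackupRetentionCount=5,
)
```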
You are designing an architecture for a financial company which provides a day trading application to customers. After viewing the traffic patterns for the existing application you notice that traffic is fairly steady throughout the day, with the exception of large spikes at the opening of the market in the morning and at closing around 3 pm. Your architecture will include an Auto Scaling Group of EC2 instances. How can you configure the Auto Scaling Group to ensure that system performance meets the increased demands at opening and closing of the market?
Use a predictive scaling policy on the Auto Scaling Group to meet opening and closing spikes.
- Predictive scaling uses data collected from your actual EC2 usage, further informed by billions of data points drawn from AWS's own observations, with well-trained machine learning models to predict your expected traffic (and EC2 usage), including daily and weekly patterns.
- The model needs at least one day's worth of historical data to start making predictions.
- It is re-evaluated every 24 hours to create a forecast for the next 48 hours.
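A minimal sketch of attaching a predictive scaling policy to the group with boto3 (the group name and target value are hypothetical):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Forecasts load from history and scales ahead of the open/close spikes.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="day-trading-asg",  # hypothetical group
    PolicyName="market-hours-predictive",
    PolicyType="PredictiveScaling",
    PredictiveScalingConfiguration={
        "MetricSpecifications": [{
            "TargetValue": 40.0,             # target average CPU %
            "PredefinedMetricPairSpecification": {
                "PredefinedMetricType": "ASGCPUUtilization",
            },
        }],
        "Mode": "ForecastAndScale",
    },
)
```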
Your boss has tasked you with decoupling your existing web frontend from the backend. Both applications run on EC2 instances. After you investigate the existing architecture, you find that (on average) the backend resources are processing about 5,000 requests per second and will need something that supports that extreme level of message processing. It's also important that each request is processed only once. What can you do to decouple these resources?
Use SQS Standard.
- Include a unique ordering ID in each message, and have the backend application use it to deduplicate messages.
- This is a great choice, as SQS Standard can handle this level of extreme performance (see the sketch below).
- If the application didn't require this level of performance, then SQS FIFO would be the better and easier choice.
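A hedged sketch of the pattern: the producer stamps each message with a unique ID, and the consumer tracks seen IDs to drop duplicates (the queue URL is hypothetical, and a production consumer would persist seen IDs in something like DynamoDB rather than in memory):

```python
import uuid
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/backend-work"  # placeholder

# Producer: attach a unique ordering/dedup ID to every message.
sqs.send_message(
    QueueUrl=QUEUE_URL,
    MessageBody='{"action": "process-order"}',
    MessageAttributes={
        "DedupId": {"DataType": "String", "StringValue": str(uuid.uuid4())},
    },
)

# Consumer: skip any ID already processed (in-memory here for brevity).
seen = set()
resp = sqs.receive_message(
    QueueUrl=QUEUE_URL, MessageAttributeNames=["DedupId"], MaxNumberOfMessages=10
)
for msg in resp.get("Messages", []):
    dedup_id = msg["MessageAttributes"]["DedupId"]["StringValue"]
    if dedup_id not in seen:
        seen.add(dedup_id)
        # ... process the message ...
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```

Since SQS Standard offers at-least-once delivery, the consumer-side check is what enforces the "processed only once" requirement.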