Practice Exam 1 Flashcards
Your development team has created a gaming application that uses DynamoDB to store user statistics and provide fast game updates back to users. The team has begun testing the application but needs a consistent data set to perform tests with. The testing process alters the dataset, so the baseline data needs to be retrieved upon each new test. Which AWS service can meet this need by exporting data from DynamoDB and importing data into DynamoDB?
Elastic MapReduce (EMR)
- You can use Amazon EMR with a customized version of Hive that includes connectivity to DynamoDB to perform operations on data stored in DynamoDB:
- Loading DynamoDB data into the Hadoop Distributed File System (HDFS) and using it as input into an Amazon EMR cluster
- Querying live DynamoDB data using SQL-like statements (HiveQL)
- Joining data stored in DynamoDB and exporting it or querying against the joined data
- Exporting data stored in DynamoDB to Amazon S3
- Importing data stored in Amazon S3 to DynamoDB
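EMR with Hive is the answer the exam is after; for a small test dataset, the same export/import idea can be sketched with plain boto3. This is a minimal sketch, and the table name is hypothetical.

```python
"""Sketch: restoring a baseline DynamoDB dataset before each test run.
The table name is hypothetical; for large datasets, use EMR/Hive as above."""

BATCH_MAX = 25  # BatchWriteItem accepts at most 25 put/delete requests

def chunk(items, size=BATCH_MAX):
    """Split items into BatchWriteItem-sized chunks (useful when calling
    batch_write_item directly)."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def restore_baseline(table_name, baseline_items):
    """Overwrite the table's contents with the saved baseline items."""
    import boto3  # imported lazily so the pure helpers above run anywhere
    table = boto3.resource("dynamodb").Table(table_name)
    with table.batch_writer() as batch:  # batch_writer chunks and retries
        for item in baseline_items:
            batch.put_item(Item=item)
```

Before each test, the suite would call `restore_baseline("GameStats", items)` with items previously exported to S3.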
You have configured an Auto Scaling Group of EC2 instances fronted by an Application Load Balancer and backed by an RDS database. You want to begin monitoring the EC2 instances using CloudWatch metrics. Which metric is not readily available out of the box?
Memory utilization
Memory utilization is not available as an out-of-the-box metric in CloudWatch.
You can, however, collect memory metrics when you configure a custom metric for CloudWatch.
Types of custom metrics that you can set up include:
- Memory utilization
- Disk swap utilization
- Disk space utilization
- Page file utilization
- Log collection
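As a rough sketch of what a custom memory metric looks like, the call below publishes one data point. The namespace and dimension names are made up; in practice the CloudWatch agent is the usual way to ship these metrics.

```python
"""Sketch: publishing memory utilization as a CloudWatch custom metric."""

def memory_percent(used_kb, total_kb):
    """Convert raw meminfo numbers into a utilization percentage."""
    return round(100.0 * used_kb / total_kb, 2)

def publish_memory_metric(instance_id, used_kb, total_kb):
    """Push one MemoryUtilization data point to CloudWatch."""
    import boto3  # lazy import keeps the pure helper testable offline
    boto3.client("cloudwatch").put_metric_data(
        Namespace="Custom/System",  # hypothetical namespace
        MetricData=[{
            "MetricName": "MemoryUtilization",
            "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
            "Unit": "Percent",
            "Value": memory_percent(used_kb, total_kb),
        }],
    )
```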
You are working as a Solutions Architect in a large healthcare organization. You have many Auto Scaling Groups that utilize launch configurations. Many of these launch configurations are similar yet have subtle differences. You’d like to use multiple versions of these launch configurations. An ideal approach would be to have a default launch configuration and then have additional versions that add additional features. Which option best meets these requirements?
Use launch templates instead
- A launch template is similar to a launch configuration, in that it specifies instance configuration information.
- Included are the ID of the Amazon Machine Image (AMI), the instance type, a key pair, security groups, and the other parameters that you use to launch EC2 instances.
- However, defining a launch template instead of a launch configuration allows you to have multiple versions of a template.
- With versioning, you can create a subset of the full set of parameters and then reuse it to create other templates or template versions.
- For example, you can create a default template that defines common configuration parameters and allow the other parameters to be specified as part of another version of the same template.
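The default-plus-versions pattern can be sketched with boto3. The template name, AMI ID, instance types, and key pair below are hypothetical.

```python
"""Sketch: a default launch template plus a version that overrides a few
parameters."""

def merged_launch_data(base, overrides):
    """Effective data of a new version: the base with overrides applied."""
    return {**base, **overrides}

def create_default_and_variant():
    """Create version 1 (the defaults) and version 2 (added features)."""
    import boto3
    ec2 = boto3.client("ec2")
    ec2.create_launch_template(
        LaunchTemplateName="app-default",
        LaunchTemplateData={"ImageId": "ami-0123456789abcdef0",
                            "InstanceType": "t3.micro"},
    )
    # Version 2 starts from version 1 and specifies only what changes.
    ec2.create_launch_template_version(
        LaunchTemplateName="app-default",
        SourceVersion="1",
        LaunchTemplateData={"InstanceType": "c5.large", "KeyName": "ops-key"},
    )
```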
Your company is currently building out a second AWS region. Following best practices, they’ve been using CloudFormation to make the migration easier. They’ve run into a problem with the template though. Whenever the template is created in the new region, it’s still referencing the AMI in the old region. What steps can you take to automatically select the correct AMI when the template is deployed?
Create a mapping in the template. Define the unique AMI value per region.
- This is exactly what mappings are built for.
- By using mappings, you can easily automate this issue away.
- Make sure to copy your AMI to the new region before you try to run the template, though, as AMIs are region-specific.
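A minimal sketch of such a mapping in a template (the AMI IDs are placeholders):

```yaml
Mappings:
  RegionMap:
    us-east-1:
      AMI: ami-0aaaaaaaaaaaaaaa1
    us-west-2:
      AMI: ami-0bbbbbbbbbbbbbbb2
Resources:
  WebServer:
    Type: AWS::EC2::Instance
    Properties:
      # FindInMap picks the right AMI for whichever region runs the stack.
      ImageId: !FindInMap [RegionMap, !Ref "AWS::Region", AMI]
      InstanceType: t3.micro
```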
Your company is using a hybrid configuration because some legacy applications are not easily converted and migrated to AWS. With this configuration comes a typical scenario: the legacy apps must maintain the same private IP address and MAC address. You are attempting to convert the application to the Cloud and have configured an EC2 instance to house the application. What you are currently testing is removing the ENI from the legacy instance and attaching it to the EC2 instance. You want to attempt a warm attach. What does this mean?
Attach the ENI to an instance when it is stopped.
Some best practices for configuring network interfaces:
- Attach a network interface to an instance when it's running (hot attach).
- Attach a network interface to an instance when it's stopped (warm attach).
- Attach a network interface to an instance when it's being launched (cold attach).
- You can detach secondary network interfaces when the instance is running or stopped.
- However, you can’t detach the primary network interface.
- You can move a network interface from one instance to another, if the instances are in the same Availability Zone and VPC but in different subnets.
- When launching an instance using the CLI, API, or an SDK, you can specify the primary network interface and additional network interfaces.
- Launching an Amazon Linux or Windows Server instance with multiple network interfaces automatically configures interfaces, private IPv4 addresses, and route tables on the operating system of the instance.
- Instances running Amazon Linux or Windows Server automatically recognize a warm or hot attach and configure the new interface themselves.
- On other operating systems, a warm or hot attach of an additional network interface may require you to manually bring up the second interface, configure the private IPv4 address, and modify the route table accordingly.
- Attaching another network interface to an instance (for example, a NIC teaming configuration) cannot be used as a method to increase or double the network bandwidth to or from the dual-homed instance.
- If you attach two or more network interfaces from the same subnet to an instance, you may encounter networking issues such as asymmetric routing.
- If possible, use a secondary private IPv4 address on the primary network interface instead.
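The warm-attach workflow described above can be sketched as follows. All resource IDs are placeholders, and the state-to-kind mapping is my paraphrase of the hot/warm/cold terminology.

```python
"""Sketch: a warm attach, i.e. attaching a secondary ENI while the target
instance is stopped."""

# Paraphrase of the terminology above; "pending" approximates launch time.
ATTACH_KINDS = {"running": "hot", "stopped": "warm", "pending": "cold"}

def attach_kind(instance_state):
    """Classify the attach (hot/warm/cold) by instance state."""
    return ATTACH_KINDS.get(instance_state, "unsupported")

def warm_attach(eni_id, instance_id):
    """Attach the ENI only if the instance is actually stopped."""
    import boto3
    ec2 = boto3.client("ec2")
    state = ec2.describe_instances(InstanceIds=[instance_id]) \
        ["Reservations"][0]["Instances"][0]["State"]["Name"]
    if attach_kind(state) != "warm":
        raise RuntimeError(f"instance is {state}, not stopped")
    ec2.attach_network_interface(
        NetworkInterfaceId=eni_id,
        InstanceId=instance_id,
        DeviceIndex=1,  # index 0 is the primary ENI and can't be detached
    )
```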
A company has an Auto Scaling Group of EC2 instances hosting their retail sales application. Any significant downtime for this application can result in large losses of profit. Therefore the architecture also includes an Application Load Balancer and an RDS database in a Multi-AZ deployment. What will happen to preserve high availability if the primary database fails?
The CNAME is switched from the primary DB instance to the secondary.
- Amazon RDS Multi-AZ deployments provide enhanced availability and durability for RDS database (DB) instances, making them a natural fit for production database workloads.
- When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ).
- Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable.
- In case of an infrastructure failure, Amazon RDS performs an automatic failover to the standby (or to a read replica in the case of Amazon Aurora), so that you can resume database operations as soon as the failover is complete.
- Since the endpoint for your DB Instance remains the same after a failover, your application can resume database operation without the need for manual administrative intervention.
- Failover is automatically handled by Amazon RDS so that you can resume database operations as quickly as possible without administrative intervention.
- When failing over, Amazon RDS simply flips the canonical name record (CNAME) for your DB instance to point at the standby, which is in turn promoted to become the new primary.
You work for an online retailer where any downtime at all can cause a significant loss of revenue. You have architected your application to be deployed on an Auto Scaling Group of EC2 instances behind a load balancer. You have configured and deployed these resources using a CloudFormation template. The Auto Scaling Group is configured with default settings and a simple CPU utilization scaling policy. You have also set up multiple Availability Zones for high availability. The load balancer does health checks against an HTML file generated by a script. When you begin performing load testing on your application, you notice in CloudWatch that the load balancer is not sending traffic to one of your EC2 instances. What could be the problem?
The EC2 instance has failed the load balancer health check.
- The load balancer will route the incoming requests only to the healthy instances.
- The EC2 instance may have passed its status checks and be considered healthy by the Auto Scaling Group, but the ELB will not use it if the ELB health check has not been met.
- The ELB health check interval defaults to 30 seconds, and a configurable threshold of consecutive checks (three in this example) must fail before the instance is marked unhealthy.
- Therefore the instance could be visibly available but unused for roughly 90 seconds before the console would show it as failed.
- In CloudWatch, where the issue was noticed, the instance would appear healthy but receive no traffic, which is exactly what was observed.
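The timing above, and one way to tighten it, can be sketched as follows. The target group ARN is a placeholder, and the specific interval/threshold values are illustrative.

```python
"""Sketch: worst-case detection delay, plus tightening the ALB target group
health check so failures surface faster."""

def detection_seconds(interval_s, unhealthy_threshold):
    """Time before the load balancer marks a failing target unhealthy."""
    return interval_s * unhealthy_threshold

def tighten_health_check(target_group_arn):
    """Shorten the interval and threshold on the target group."""
    import boto3
    boto3.client("elbv2").modify_target_group(
        TargetGroupArn=target_group_arn,
        HealthCheckPath="/health.html",  # the script-generated HTML file
        HealthCheckIntervalSeconds=10,
        UnhealthyThresholdCount=2,
    )
```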
Your boss has tasked you with decoupling your existing web frontend from the backend. Both applications run on EC2 instances. After you investigate the existing architecture, you find that (on average) the backend resources are processing about 5,000 requests per second and will need something that supports this extreme level of message processing. It's also important that each request is processed only once. What can you do to decouple these resources?
Use SQS Standard. Include a unique ordering ID in each message, and have the backend application use this to deduplicate messages.
- This would be a great choice, as SQS Standard can handle this level of extreme performance.
- If the application didn’t require this level of performance, then SQS FIFO would be the better and easier choice.
You have just started work at a small startup in the Seattle area. Your first job is to help containerize your company’s microservices and move them to AWS. The team has selected ECS as their orchestration service of choice. You’ve discovered the code currently uses access keys and secret access keys in order to communicate with S3. How can you best handle this authentication for the newly containerized application?
Attach a role with the appropriate permissions to the task definition in ECS.
- It’s always a good idea to use roles over hard-coded credentials.
- One of the best parts of using ECS is the ease of attaching roles to your containers.
- This allows the container to have an individual role even if it’s running with other containers on the same EC2 instance.
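A task definition carrying a role instead of keys can be sketched as below. The family name, role ARN, and image are hypothetical.

```python
"""Sketch: an ECS task definition that uses a task role instead of
hard-coded access keys."""

def build_task_definition(family, task_role_arn, image):
    """taskRoleArn gives the container temporary, auto-rotated credentials,
    so no access key or secret key appears anywhere in the definition."""
    return {
        "family": family,
        "taskRoleArn": task_role_arn,
        "containerDefinitions": [
            {"name": "app", "image": image, "memory": 512},
        ],
    }

def register(task_definition):
    """Register the definition with ECS."""
    import boto3
    boto3.client("ecs").register_task_definition(**task_definition)
```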
A team of architects is designing a new AWS environment for a company which wants to migrate to the Cloud. The architects are considering the use of EC2 instances with instance store volumes. The architects realize that the data on instance store volumes is ephemeral. Which action will not cause the data to be deleted on an instance store volume?
Reboot
- Some Amazon Elastic Compute Cloud (Amazon EC2) instance types come with a form of directly attached, block-device storage known as the instance store.
- The instance store is ideal for temporary storage, because the data stored in instance store volumes is not persistent through instance stops, terminations, or hardware failures.
- The data does, however, survive an operating system reboot of the instance, which is why Reboot is the correct answer.
A software gaming company has produced an online racing game that uses CloudFront for fast delivery to worldwide users. The game also uses DynamoDB for storing in-game and historical user data. The DynamoDB table has a preconfigured read and write capacity. Users have been reporting slowdown issues, and an analysis has revealed the DynamoDB table has begun throttling during peak traffic times. What step can you take to improve game performance?
Adjust your auto scaling thresholds to scale more aggressively.
- Amazon DynamoDB auto scaling uses the AWS Application Auto Scaling service to dynamically adjust provisioned throughput capacity on your behalf in response to actual traffic patterns.
- This enables a table or a global secondary index to increase its provisioned read and write capacity to handle sudden increases in traffic without throttling.
- When the workload decreases, Application Auto Scaling decreases the throughput so you don’t pay for unused provisioned capacity.
- Note that if you use the AWS Management Console to create a table or a global secondary index, DynamoDB auto scaling is enabled by default.
- You can modify your auto scaling settings at any time.
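The more-aggressive-scaling settings can be sketched with Application Auto Scaling. The table name, capacity numbers, and target value are made up for illustration.

```python
"""Sketch: making DynamoDB auto scaling more aggressive via Application
Auto Scaling."""

def sustained_reads_before_throttle(max_capacity, target_utilization_pct):
    """Consumed read capacity the table can sustain at its scaling ceiling."""
    return max_capacity * target_utilization_pct / 100.0

def configure_read_scaling(table_name, min_cap, max_cap, target_pct):
    """Raise the ceiling and lower the target so scale-out starts sooner."""
    import boto3
    aas = boto3.client("application-autoscaling")
    resource_id = f"table/{table_name}"
    aas.register_scalable_target(
        ServiceNamespace="dynamodb",
        ResourceId=resource_id,
        ScalableDimension="dynamodb:table:ReadCapacityUnits",
        MinCapacity=min_cap,
        MaxCapacity=max_cap,  # raise the ceiling so peaks don't throttle
    )
    aas.put_scaling_policy(
        PolicyName="read-target-tracking",
        ServiceNamespace="dynamodb",
        ResourceId=resource_id,
        ScalableDimension="dynamodb:table:ReadCapacityUnits",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": target_pct,  # lower target => earlier scale-out
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "DynamoDBReadCapacityUtilization"},
        },
    )
```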
A professional baseball league has chosen to use a key-value and document database for storage, processing, and data delivery. Many of the data requirements involve high-speed processing of data such as a Doppler radar system which samples the position of the baseball 2000 times per second. Which AWS data storage can meet these requirements?
DynamoDB
- Amazon DynamoDB is a NoSQL database that supports key-value and document data models
- enables developers to build modern, serverless applications that can start small and scale globally to support petabytes of data and tens of millions of read and write requests per second.
- DynamoDB is designed to run high-performance, internet-scale applications that would overburden traditional relational databases.
Your team has provisioned several Auto Scaling Groups in a single Region. At max capacity, the Auto Scaling Groups would total 40 EC2 instances between them. However, you notice that the Auto Scaling Groups will only scale out to a portion of that number of instances at any one time. What could be the problem?
There is a vCPU-based On-Demand Instance limit per Region.
- Your AWS account has default quotas, formerly referred to as limits, for each AWS service.
- Unless otherwise noted, each quota is Region-specific.
- You can request increases for some quotas, and other quotas cannot be increased.
- Remember that EC2 instances can have varying numbers of vCPUs depending on their type and your configuration, so it's always wise to calculate your vCPU needs to make sure you are not going to hit quotas easily.
- Service Quotas is an AWS service that helps you manage your quotas for over 100 AWS services from one location.
- Along with looking up the quota values, you can also request a quota increase from the Service Quotas console.
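The vCPU arithmetic and the quota lookup can be sketched as follows. The quota code for Running On-Demand Standard instances is my assumption; confirm it in the Service Quotas console.

```python
"""Sketch: checking planned fleet capacity against the regional vCPU quota."""

def fits_quota(instance_count, vcpus_per_instance, quota_vcpus):
    """True if the fleet at max capacity stays within the vCPU quota."""
    return instance_count * vcpus_per_instance <= quota_vcpus

def current_vcpu_quota():
    """Look up the live quota value for this account and Region."""
    import boto3
    return boto3.client("service-quotas").get_service_quota(
        ServiceCode="ec2",
        QuotaCode="L-1216C47A",  # assumed quota code; verify in the console
    )["Quota"]["Value"]
```

For example, 40 instances of a 4-vCPU type need 160 vCPUs; a default quota below that would cap the scale-out exactly as described.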
You work for an advertising company that has a real-time bidding application. You are also using CloudFront on the front end to accommodate a worldwide user base. Your users begin complaining about response times and pauses in real-time bidding. What is the best service that can be used to reduce DynamoDB response times by an order of magnitude (milliseconds to microseconds)?
DAX
- Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache that can reduce Amazon DynamoDB response times from milliseconds to microseconds, even at millions of requests per second.
- While DynamoDB offers consistent single-digit millisecond latency, DynamoDB with DAX takes performance to the next level with response times in microseconds for millions of requests per second for read-heavy workloads.
- With DAX, your applications remain fast and responsive, even when a popular event or news story drives unprecedented request volumes your way.
- No tuning required.
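Because DAX is API-compatible with the low-level DynamoDB client, adoption is mostly a client swap. In this sketch the endpoint and table name are placeholders, and the third-party amazondax package is assumed to be installed.

```python
"""Sketch: reading through DAX versus straight from DynamoDB."""

def item_key(bid_id):
    """Build the low-level key for a lookup; identical for both clients."""
    return {"BidId": {"S": bid_id}}

def get_bid(use_dax, dax_endpoint, bid_id):
    """Read through the DAX cache when enabled, else direct from DynamoDB."""
    if use_dax:
        from amazondax import AmazonDaxClient  # assumed third-party package
        client = AmazonDaxClient(endpoint_url=dax_endpoint)
    else:
        import boto3
        client = boto3.client("dynamodb")
    # The get_item call itself is unchanged either way.
    return client.get_item(TableName="Bids", Key=item_key(bid_id))
```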
Your company uses IoT devices installed in businesses to provide those businesses with real-time data for analysis. You have decided to use AWS Kinesis Data Firehose to stream the data to multiple backend storage services for analytics. Which service listed is not a viable solution to stream the real-time data to?
Athena
- Amazon Athena is correct because Amazon Kinesis Data Firehose cannot load streaming data to Athena.
- Amazon Kinesis Data Firehose is the easiest way to load streaming data into data stores and analytics tools.
- It can capture, transform, and load streaming data into Amazon S3, Amazon Redshift, Amazon Elasticsearch Service, and Splunk, enabling near real-time analytics with existing business intelligence tools and dashboards you’re already using today.
- It is a fully managed service that automatically scales to match the throughput of your data and requires no ongoing administration.
- It can also batch, compress, and encrypt the data before loading it, minimizing the amount of storage used at the destination and increasing security.
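Shipping one IoT reading into a delivery stream can be sketched as below. The stream name and record shape are made up; Firehose then delivers to S3, Redshift, Elasticsearch, or Splunk, while Athena is a query service, not a delivery destination.

```python
"""Sketch: sending an IoT reading into a Kinesis Data Firehose stream."""
import json

def encode_record(reading):
    """Firehose records are raw bytes; newline-delimiting JSON keeps the
    batched files at the destination easy to parse."""
    return (json.dumps(reading) + "\n").encode("utf-8")

def ship(reading):
    """Send one reading to the delivery stream."""
    import boto3
    boto3.client("firehose").put_record(
        DeliveryStreamName="iot-readings",  # hypothetical stream name
        Record={"Data": encode_record(reading)},
    )
```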