Practice Exam 1 Flashcards

1
Q

Your development team has created a gaming application that uses DynamoDB to store user statistics and provide fast game updates back to users. The team has begun testing the application but needs a consistent data set to perform tests with. The testing process alters the dataset, so the baseline data needs to be retrieved upon each new test. Which AWS service can meet this need by exporting data from DynamoDB and importing data into DynamoDB?

A

Amazon EMR (Elastic MapReduce)

  1. You can use Amazon EMR with a customized version of Hive that includes connectivity to DynamoDB to perform operations on data stored in DynamoDB:
  2. Loading DynamoDB data into the Hadoop Distributed File System (HDFS) and using it as input into an Amazon EMR cluster
  3. Querying live DynamoDB data using SQL-like statements (HiveQL)
  4. Joining data stored in DynamoDB and exporting it or querying against the joined data
  5. Exporting data stored in DynamoDB to Amazon S3
  6. Importing data stored in Amazon S3 to DynamoDB
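A minimal boto3 sketch of this pattern, assuming an existing EMR cluster (hypothetical cluster ID) and a Hive script already uploaded to S3 that maps a DynamoDB-backed external table and writes it to S3 (or the reverse for an import); all names and paths are illustrative.

```python
import boto3

emr = boto3.client("emr")

# Run a Hive script on an existing EMR cluster. The script (not shown) would
# CREATE EXTERNAL TABLE ... using the DynamoDB storage handler and then
# INSERT OVERWRITE the data to an S3 location (or import it the other way).
emr.add_job_flow_steps(
    JobFlowId="j-XXXXXXXXXXXXX",  # hypothetical cluster ID
    Steps=[{
        "Name": "Export DynamoDB user stats to S3",
        "ActionOnFailure": "CONTINUE",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": ["hive-script", "--run-hive-script", "--args",
                     "-f", "s3://my-bucket/scripts/export_user_stats.q"],
        },
    }],
)
```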
2
Q

You have configured an Auto Scaling Group of EC2 instances fronted by an Application Load Balancer and backed by an RDS database. You want to begin monitoring the EC2 instances using CloudWatch metrics. Which metric is not readily available out of the box?

A

Memory utilization

Memory utilization is not available as an out of the box metric in CloudWatch.

You can, however, collect memory metrics when you configure a custom metric for CloudWatch.

Types of custom metrics that you can set up include:

  1. Memory utilization
  2. Disk swap utilization
  3. Disk space utilization
  4. Page file utilization
  5. Log collection
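In practice the CloudWatch agent reports these custom metrics for you, but the underlying API call is PutMetricData. A minimal boto3 sketch, with a hypothetical namespace and instance ID:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Publish memory utilization as a custom metric (values here are illustrative)
cloudwatch.put_metric_data(
    Namespace="Custom/EC2",
    MetricData=[{
        "MetricName": "MemoryUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        "Unit": "Percent",
        "Value": 63.5,
    }],
)
```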
3
Q

You are working as a Solutions Architect in a large healthcare organization. You have many Auto Scaling Groups that utilize launch configurations. Many of these launch configurations are similar yet have subtle differences. You’d like to use multiple versions of these launch configurations. An ideal approach would be to have a default launch configuration and then have additional versions that add additional features. Which option best meets these requirements?

A

Use launch templates instead

  1. A launch template is similar to a launch configuration, in that it specifies instance configuration information.
  2. Included are the ID of the Amazon Machine Image (AMI), the instance type, a key pair, security groups, and the other parameters that you use to launch EC2 instances.
  3. However, defining a launch template instead of a launch configuration allows you to have multiple versions of a template.
  4. With versioning, you can create a subset of the full set of parameters and then reuse it to create other templates or template versions.
  5. For example, you can create a default template that defines common configuration parameters and allow the other parameters to be specified as part of another version of the same template.
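A minimal boto3 sketch of that versioning workflow, with a hypothetical AMI ID, names, and instance types:

```python
import boto3

ec2 = boto3.client("ec2")

# Version 1: the default template with the common parameters
ec2.create_launch_template(
    LaunchTemplateName="app-base",
    LaunchTemplateData={"ImageId": "ami-0123456789abcdef0",
                        "InstanceType": "t3.micro",
                        "KeyName": "ops-key"},
)

# Version 2: start from version 1 and override only what differs
ec2.create_launch_template_version(
    LaunchTemplateName="app-base",
    SourceVersion="1",
    VersionDescription="larger instance for batch workers",
    LaunchTemplateData={"InstanceType": "c5.large"},
)
```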
4
Q

Your company is currently building out a second AWS region. Following best practices, they’ve been using CloudFormation to make the migration easier. They’ve run into a problem with the template though. Whenever the template is created in the new region, it’s still referencing the AMI in the old region. What steps can you take to automatically select the correct AMI when the template is deployed?

A

Create a mapping in the template. Define the unique AMI value per region.

  1. This is exactly what mappings are built for.
  2. By using mappings, you can easily automate this away.
  3. Make sure to copy your AMI to the new region before you try to run the template, though, as AMIs are region-specific.
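A sketch of what such a template looks like, expressed here as a Python dictionary handed to boto3; the AMI IDs, regions, and stack name are placeholders.

```python
import json
import boto3

template = {
    "Mappings": {
        "RegionMap": {
            "us-east-1": {"AMI": "ami-0aaaaaaaaaaaaaaaa"},
            "eu-west-1": {"AMI": "ami-0bbbbbbbbbbbbbbbb"},
        }
    },
    "Resources": {
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                # Look up the AMI for whichever region the stack is created in
                "ImageId": {"Fn::FindInMap": ["RegionMap",
                                              {"Ref": "AWS::Region"}, "AMI"]},
                "InstanceType": "t3.micro",
            },
        }
    },
}

boto3.client("cloudformation", region_name="eu-west-1").create_stack(
    StackName="web-stack", TemplateBody=json.dumps(template))
```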
5
Q

Your company is using a hybrid configuration because some legacy applications are not easily converted and migrated to AWS. With this configuration comes a typical scenario where the legacy apps must maintain the same private IP address and MAC address. You are attempting to convert the application to the Cloud and have configured an EC2 instance to house the application. You are currently testing removing the ENI from the legacy instance and attaching it to the EC2 instance. You want to attempt a warm attach. What does this mean?

A

Attach the ENI to an instance when it is stopped.

Some best practices for configuring network interfaces:

  • You can attach a network interface to an instance when it’s running (hot attach), when it’s stopped (warm attach), or when the instance is being launched (cold attach).
  • You can detach secondary network interfaces when the instance is running or stopped.
  • However, you can’t detach the primary network interface.
  • You can move a network interface from one instance to another, if the instances are in the same Availability Zone and VPC but in different subnets.
  • When launching an instance using the CLI, API, or an SDK, you can specify the primary network interface and additional network interfaces.
  • Launching an Amazon Linux or Windows Server instance with multiple network interfaces automatically configures interfaces, private IPv4 addresses, and route tables on the operating system of the instance.
  • A warm or hot attach of an additional network interface may require you to manually bring up the second interface, configure the private IPv4 address, and modify the route table accordingly.
  • Instances running Amazon Linux or Windows Server automatically recognize the warm or hot attach and configure themselves.
  • Attaching another network interface to an instance (for example, a NIC teaming configuration) cannot be used as a method to increase or double the network bandwidth to or from the dual-homed instance.
  • If you attach two or more network interfaces from the same subnet to an instance, you may encounter networking issues such as asymmetric routing.
  • If possible, use a secondary private IPv4 address on the primary network interface instead.
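A minimal boto3 sketch of a warm attach, assuming the ENI has already been detached from the legacy instance; the IDs are hypothetical.

```python
import boto3

ec2 = boto3.client("ec2")
instance_id = "i-0123456789abcdef0"   # the new EC2 instance
eni_id = "eni-0123456789abcdef0"      # the ENI carried over from the legacy host

# Warm attach: the instance must be stopped when the ENI is attached
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

ec2.attach_network_interface(NetworkInterfaceId=eni_id,
                             InstanceId=instance_id,
                             DeviceIndex=1)  # attach as a secondary interface

ec2.start_instances(InstanceIds=[instance_id])
```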
6
Q

A company has an Auto Scaling Group of EC2 instances hosting their retail sales application. Any significant downtime for this application can result in large losses of profit. Therefore the architecture also includes an Application Load Balancer and an RDS database in a Multi-AZ deployment. What will happen to preserve high availability if the primary database fails?

A

The CNAME is switched from the primary DB instance to the secondary.

  1. Amazon RDS Multi-AZ deployments provide enhanced availability and durability for RDS database (DB) instances, making them a natural fit for production database workloads.
  2. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ).
  3. Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable.
  4. In case of an infrastructure failure, Amazon RDS performs an automatic failover to the standby (or to a read replica in the case of Amazon Aurora), so that you can resume database operations as soon as the failover is complete.
  5. Since the endpoint for your DB Instance remains the same after a failover, your application can resume database operation without the need for manual administrative intervention.
  6. Failover is automatically handled by Amazon RDS so that you can resume database operations as quickly as possible without administrative intervention.
  7. When failing over, Amazon RDS simply flips the canonical name record (CNAME) for your DB instance to point at the standby, which is in turn promoted to become the new primary.
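A minimal boto3 sketch showing why no application change is needed: the instance is created with MultiAZ=True and the application only ever uses the endpoint address, which survives the CNAME flip. The identifier and credentials are placeholders.

```python
import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="retail-db",
    Engine="mysql",
    DBInstanceClass="db.m5.large",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",
    MultiAZ=True,                      # synchronous standby in another AZ
)

rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier="retail-db")

# This hostname is a CNAME; after a failover it resolves to the promoted standby
endpoint = rds.describe_db_instances(
    DBInstanceIdentifier="retail-db")["DBInstances"][0]["Endpoint"]["Address"]
print(endpoint)
```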
7
Q

You work for an online retailer where any downtime at all can cause a significant loss of revenue. You have architected your application to be deployed on an Auto Scaling Group of EC2 instances behind a load balancer. You have configured and deployed these resources using a CloudFormation template. The Auto Scaling Group is configured with default settings and a simple CPU utilization scaling policy. You have also set up multiple Availability Zones for high availability. The load balancer does health checks against an HTML file generated by a script. When you begin performing load testing on your application, you notice in CloudWatch that the load balancer is not sending traffic to one of your EC2 instances. What could be the problem?

A

The EC2 instance has failed the load balancer health check.

  1. The load balancer will route the incoming requests only to the healthy instances.
  2. The EC2 instance may have passed its status checks and be considered healthy by the Auto Scaling Group, but the ELB will not use it if the ELB health check has not been met.
  3. The ELB health check has a default of 30 seconds between checks, and a default of 3 checks before making a decision.
  4. Therefore, the instance could appear available but remain unused for at least 90 seconds before the GUI would show it as failed.
  5. In CloudWatch, where the issue was noticed, it would appear to be a healthy EC2 instance but with no traffic, which is what was observed.
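A boto3 sketch for checking and tuning the target group health check while troubleshooting; the ARN and path are hypothetical.

```python
import boto3

elbv2 = boto3.client("elbv2")
tg_arn = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/abc123"

# Point the health check at the script-generated page and set the thresholds
elbv2.modify_target_group(
    TargetGroupArn=tg_arn,
    HealthCheckPath="/healthcheck.html",
    HealthCheckIntervalSeconds=30,
    HealthyThresholdCount=3,
    UnhealthyThresholdCount=3,
)

# See which targets the load balancer considers healthy, and why if not
for t in elbv2.describe_target_health(TargetGroupArn=tg_arn)["TargetHealthDescriptions"]:
    print(t["Target"]["Id"], t["TargetHealth"]["State"], t["TargetHealth"].get("Reason"))
```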
8
Q

Your boss has tasked you with decoupling your existing web frontend from the backend. Both applications run on EC2 instances. After you investigate the existing architecture, you find that (on average) the backend resources are processing about 5,000 requests per second and will need something that supports their extreme level of message processing. It’s also important that each request is processed only once. What can you do to decouple these resources?

A

Use SQS Standard. Include a unique ordering ID in each message, and have the backend application use this to deduplicate messages.

  • This would be a great choice, as SQS Standard can handle this level of extreme performance.
  • If the application didn’t require this level of performance, then SQS FIFO would be the better and easier choice.
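A minimal sketch of the pattern with boto3: the producer stamps each message with a unique ordering ID, and the consumer skips IDs it has already seen. A real backend would keep the seen set in something durable such as DynamoDB or ElastiCache; the queue URL and payload are illustrative.

```python
import json
import uuid
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/frontend-to-backend"

# Producer: tag every message with a unique ordering ID
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody=json.dumps({"action": "checkout", "cart_id": "c-42"}),
    MessageAttributes={"OrderingId": {"DataType": "String",
                                      "StringValue": str(uuid.uuid4())}},
)

# Consumer: deduplicate, since SQS Standard is at-least-once delivery
seen_ids = set()
resp = sqs.receive_message(QueueUrl=queue_url,
                           MessageAttributeNames=["OrderingId"],
                           MaxNumberOfMessages=10, WaitTimeSeconds=10)
for msg in resp.get("Messages", []):
    ordering_id = msg["MessageAttributes"]["OrderingId"]["StringValue"]
    if ordering_id in seen_ids:
        continue                      # duplicate delivery; skip it
    seen_ids.add(ordering_id)
    # ... process the message ...
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```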
9
Q

You have just started work at a small startup in the Seattle area. Your first job is to help containerize your company’s microservices and move them to AWS. The team has selected ECS as their orchestration service of choice. You’ve discovered the code currently uses access keys and secret access keys in order to communicate with S3. How can you best handle this authentication for the newly containerized application?

A

Attach a role with the appropriate permissions to the task definition in ECS.

  • It’s always a good idea to use roles over hard-coded credentials.
  • One of the best parts of using ECS is the ease of attaching roles to your containers.
  • This allows the container to have an individual role even if it’s running with other containers on the same EC2 instance.
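A boto3 sketch of attaching a task role in the task definition, so the container gets S3 permissions without any access keys in the code; the role ARNs, image, and family name are hypothetical.

```python
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="stats-service",
    # Role assumed by the application code inside the container (replaces access keys)
    taskRoleArn="arn:aws:iam::123456789012:role/StatsS3AccessRole",
    # Role used by the ECS agent to pull images and ship logs
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    networkMode="awsvpc",
    requiresCompatibilities=["EC2"],
    cpu="256",
    memory="512",
    containerDefinitions=[{
        "name": "stats",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/stats:latest",
        "essential": True,
    }],
)
```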
10
Q

A team of architects is designing a new AWS environment for a company which wants to migrate to the Cloud. The architects are considering the use of EC2 instances with instance store volumes. The architects realize that the data on the instance store volumes is ephemeral. Which action will not cause the data to be deleted on an instance store volume?

A

Reboot

  • Some Amazon Elastic Compute Cloud (Amazon EC2) instance types come with a form of directly attached, block-device storage known as the instance store.
  • The instance store is ideal for temporary storage, because the data stored in instance store volumes is not persistent through instance stops, terminations, or hardware failures.
11
Q

A software gaming company has produced an online racing game that uses CloudFront for fast delivery to worldwide users. The game also uses DynamoDB for storing in-game and historical user data. The DynamoDB table has a preconfigured read and write capacity. Users have been reporting slowdown issues, and an analysis has revealed the DynamoDB table has begun throttling during peak traffic times. What step can you take to improve game performance?

A

Adjust your auto scaling thresholds to scale more aggressively.

  1. Amazon DynamoDB auto scaling uses the AWS Application Auto Scaling service to dynamically adjust provisioned throughput capacity on your behalf in response to actual traffic patterns.
  2. This enables a table or a global secondary index to increase its provisioned read and write capacity to handle sudden increases in traffic without throttling.
  3. When the workload decreases, Application Auto Scaling decreases the throughput so you don’t pay for unused provisioned capacity.
  4. Note that if you use the AWS Management Console to create a table or a global secondary index, DynamoDB auto scaling is enabled by default.
  5. You can modify your auto scaling settings at any time.
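Scaling “more aggressively” usually means a lower target utilization and wider min/max bounds. A boto3 sketch using Application Auto Scaling against a hypothetical GameStats table:

```python
import boto3

aas = boto3.client("application-autoscaling")

aas.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/GameStats",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    MinCapacity=100,
    MaxCapacity=4000,
)

aas.put_scaling_policy(
    PolicyName="GameStatsReadScaling",
    ServiceNamespace="dynamodb",
    ResourceId="table/GameStats",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        # Scale out once consumed capacity reaches 50% of provisioned capacity
        "TargetValue": 50.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBReadCapacityUtilization"},
    },
)
```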
12
Q

A professional baseball league has chosen to use a key-value and document database for storage, processing, and data delivery. Many of the data requirements involve high-speed processing of data such as a Doppler radar system which samples the position of the baseball 2000 times per second. Which AWS data storage can meet these requirements?

A

DynamoDB

  1. Amazon DynamoDB is a NoSQL database that supports key-value and document data models.
  2. It enables developers to build modern, serverless applications that can start small and scale globally to support petabytes of data and tens of millions of read and write requests per second.
  3. DynamoDB is designed to run high-performance, internet-scale applications that would overburden traditional relational databases.
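A small boto3 sketch of the key-value plus document model, with an assumed table name and key schema:

```python
from decimal import Decimal

import boto3

table = boto3.resource("dynamodb").Table("PitchTracking")  # hypothetical table

# Key-value access by (GameId, SampleTs), with a nested document attribute
table.put_item(Item={
    "GameId": "2024-07-04-SEA-NYY",      # partition key (assumed schema)
    "SampleTs": 1720137600123,           # sort key (assumed schema)
    "Pitch": {
        "speed_mph": Decimal("97.4"),
        "spin_rpm": 2450,
        "position": {"x": Decimal("0.3"), "y": Decimal("2.1")},
    },
})
```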
13
Q

Your team has provisioned multiple Auto Scaling Groups in a single Region. The Auto Scaling Groups at max capacity would total 40 EC2 instances between them. However, you notice that the Auto Scaling Groups will only scale out to a portion of that number of instances at any one time. What could be the problem?

A

There is a vCPU-based on-demand instance limit per region

  1. Your AWS account has default quotas, formerly referred to as limits, for each AWS service.
  2. Unless otherwise noted, each quota is Region-specific.
  3. You can request increases for some quotas, and other quotas cannot be increased.
  4. Remember that each EC2 instance can have a different number of vCPUs, depending on its type and your configuration, so it’s always wise to calculate your vCPU needs to make sure you are not going to hit the quota unexpectedly.
  5. Service Quotas is an AWS service that helps you manage your quotas for over 100 AWS services from one location.
  6. Along with looking up the quota values, you can also request a quota increase from the Service Quotas console.
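A boto3 sketch of checking and raising the vCPU quota with Service Quotas. The quota code shown is the one commonly used for Running On-Demand Standard instances, but verify it in your account; the desired value is illustrative.

```python
import boto3

quotas = boto3.client("service-quotas")

# "Running On-Demand Standard (A, C, D, H, I, M, R, T, Z) instances" - vCPU based
quota = quotas.get_service_quota(ServiceCode="ec2", QuotaCode="L-1216C47A")
print("Current vCPU quota:", quota["Quota"]["Value"])

# Request an increase if the ASGs need more headroom
quotas.request_service_quota_increase(ServiceCode="ec2",
                                      QuotaCode="L-1216C47A",
                                      DesiredValue=256)
```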
14
Q

You work for an advertising company that has a real-time bidding application. You are also using CloudFront on the front end to accommodate a worldwide user base. Your users begin complaining about response times and pauses in real-time bidding. What is the best service that can be used to reduce DynamoDB response times by an order of magnitude (milliseconds to microseconds)?

A

DAX

  1. Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache that can reduce Amazon DynamoDB response times from milliseconds to microseconds, even at millions of requests per second.
  2. While DynamoDB offers consistent single-digit millisecond latency, DynamoDB with DAX takes performance to the next level with response times in microseconds for millions of requests per second for read-heavy workloads.
  3. With DAX, your applications remain fast and responsive, even when a popular event or news story drives unprecedented request volumes your way.
  4. No tuning required.
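A sketch using the Python DAX client (the amazon-dax-client package), which acts as a drop-in replacement for the boto3 DynamoDB resource; the cluster endpoint, table, and key are placeholders.

```python
from amazondax import AmazonDaxClient  # pip install amazon-dax-client

# Point the client at the DAX cluster instead of DynamoDB directly
dax = AmazonDaxClient.resource(
    endpoint_url="daxs://bidding-cache.xxxxxx.dax-clusters.us-east-1.amazonaws.com")

bids = dax.Table("Bids")

# Reads are served from the in-memory cache when warm (microsecond latency)
item = bids.get_item(Key={"AuctionId": "a-123"})
print(item.get("Item"))
```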
15
Q

Your company uses IoT devices installed in businesses to provide those businesses with real-time data for analysis. You have decided to use AWS Kinesis Data Firehose to stream the data to multiple backend storage services for analytics. Which service listed is not a viable solution to stream the real-time data to?

A

Athena

  1. Amazon Athena is correct because Amazon Kinesis Data Firehose cannot load streaming data to Athena.
  2. Amazon Kinesis Data Firehose is the easiest way to load streaming data into data stores and analytics tools.
  3. It can capture, transform, and load streaming data into Amazon S3, Amazon Redshift, Amazon Elasticsearch Service, and Splunk, enabling near real-time analytics with existing business intelligence tools and dashboards you’re already using today.
  4. It is a fully managed service that automatically scales to match the throughput of your data and requires no ongoing administration.
  5. It can also batch, compress, and encrypt the data before loading it, minimizing the amount of storage used at the destination and increasing security.
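A minimal boto3 sketch of an IoT producer writing to Firehose; the delivery stream (with, say, an S3 or Redshift destination) is assumed to exist already, and the names are illustrative.

```python
import json
import boto3

firehose = boto3.client("firehose")

reading = {"device_id": "sensor-17", "temperature": 21.4, "ts": 1720137600}

# Firehose buffers and delivers records to the configured destination
firehose.put_record(
    DeliveryStreamName="iot-telemetry",
    Record={"Data": (json.dumps(reading) + "\n").encode("utf-8")},
)
```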
16
Q

You work for an oil and gas company as a lead in data analytics. The company is using IoT devices to better understand their assets in the field (for example, pumps, generators, valve assemblies, and so on). Your task is to monitor the IoT devices in real-time to provide valuable insight that can help you maintain the reliability, availability, and performance of your IoT devices. What tool can you use to process streaming data in real time with standard SQL without having to learn new programming languages or processing frameworks?

A

Kinesis Data Analytics

  1. Monitoring IoT devices in real-time can provide valuable insight that can help you maintain the reliability, availability, and performance of your IoT devices.
  2. You can track time series data on device connectivity and activity.
  3. This insight can help you react quickly to changing conditions and emerging situations.
  4. Amazon Web Services (AWS) offers a comprehensive set of powerful, flexible, and simple-to-use services that enable you to extract insights and actionable information in real time.
  5. Amazon Kinesis is a platform for streaming data on AWS, offering key capabilities to cost-effectively process streaming data at any scale.
  6. Kinesis capabilities include Amazon Kinesis Data Analytics, the easiest way to process streaming data in real time with standard SQL without having to learn new programming languages or processing frameworks.
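A sketch of what such a standard-SQL application looks like, created here with boto3; the stream and column names are illustrative, and the input mapping (for example, a Kinesis data stream mapped to SOURCE_SQL_STREAM_001) would be added separately.

```python
import boto3

# Continuous query in standard SQL - the core of a Kinesis Data Analytics app
application_code = """
CREATE OR REPLACE STREAM "ALERT_STREAM" (device_id VARCHAR(32), avg_temp DOUBLE);
CREATE OR REPLACE PUMP "ALERT_PUMP" AS
  INSERT INTO "ALERT_STREAM"
  SELECT STREAM "device_id", AVG("temperature")
  FROM "SOURCE_SQL_STREAM_001"
  GROUP BY "device_id",
           STEP("SOURCE_SQL_STREAM_001".ROWTIME BY INTERVAL '1' MINUTE);
"""

kda = boto3.client("kinesisanalytics")
kda.create_application(ApplicationName="iot-field-monitor",
                       ApplicationCode=application_code)
```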
17
Q

An Application Load Balancer is fronting an Auto Scaling Group of EC2 instances, and the instances are backed by an RDS database. The Auto Scaling Group has been configured to use the Default Termination Policy. You are testing the Auto Scaling Group and have triggered a scale-in. Which instance will be terminated first?

A

The instance launched from the oldest launch configuration

What do we know?

  1. The ASG is using the Default Termination Policy.
  2. The default termination policy is designed to help ensure that your instances span Availability Zones evenly for high availability.
  3. The default policy is kept generic and flexible to cover a range of scenarios.
  4. The default termination policy behavior is as follows:
  5. Determine which Availability Zones have the most instances, and at least one instance that is not protected from scale in.
  6. Determine which instances to terminate so as to align the remaining instances to the allocation strategy for the On-Demand or Spot Instance that is terminating. This only applies to an Auto Scaling Group that specifies allocation strategies. For example, after your instances launch, you change the priority order of your preferred instance types. When a scale-in event occurs, Amazon EC2 Auto Scaling tries to gradually shift the On-Demand Instances away from instance types that are lower priority.
  7. Determine whether any of the instances use the oldest launch template or configuration:
  • [For Auto Scaling Groups that use a launch template] Determine whether any of the instances use the oldest launch template, unless there are instances that use a launch configuration. Amazon EC2 Auto Scaling terminates instances that use a launch configuration before instances that use a launch template.
  • [For Auto Scaling Groups that use a launch configuration] Determine whether any of the instances use the oldest launch configuration.
  8. After applying all of the above criteria, if there are multiple unprotected instances to terminate, determine which instances are closest to the next billing hour.
  9. If there are multiple unprotected instances closest to the next billing hour, terminate one of these instances at random.
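A small boto3 sketch for inspecting, and optionally overriding, the termination policy of an Auto Scaling Group; the group name is hypothetical.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# "Default" unless you have explicitly changed it
asg = autoscaling.describe_auto_scaling_groups(
    AutoScalingGroupNames=["web-asg"])["AutoScalingGroups"][0]
print(asg["TerminationPolicies"])

# Example override: always terminate the oldest instances first
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    TerminationPolicies=["OldestInstance"])
```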
18
Q

You have configured an Auto Scaling Group of EC2 instances. You have begun testing the scaling of the Auto Scaling Group by using a stress tool to drive up the CPU utilization metric and force scale-out actions. The stress tool is also being manipulated by removing stress to force a scale-in. But you notice that these actions are only taking place in five-minute intervals. What is happening?

A

The Auto Scaling Group is following the default cooldown procedure.

  1. The cooldown period helps you prevent your Auto Scaling group from launching or terminating additional instances before the effects of previous activities are visible.
  2. You can configure the length of time based on your instance startup time or other application needs.
  3. When you use simple scaling, after the Auto Scaling group scales using a simple scaling policy, it waits for a cooldown period to complete before any further scaling activities due to simple scaling policies can start.
  4. An adequate cooldown period helps to prevent the initiation of an additional scaling activity based on stale metrics.
  5. By default, all simple scaling policies use the default cooldown period associated with your Auto Scaling Group, but you can configure a different cooldown period for certain policies, as described in the following sections.
  6. Note that Amazon EC2 Auto Scaling honors cooldown periods when using simple scaling policies, but not when using other scaling policies or scheduled scaling.
  7. A default cooldown period automatically applies to any scaling activities for simple scaling policies, and you can optionally request to have it apply to your manual scaling activities.
  8. When you use the AWS Management Console to update an Auto Scaling Group, or when you use the AWS CLI or an AWS SDK to create or update an Auto Scaling Group, you can set the optional default cooldown parameter.
  9. If a value for the default cooldown period is not provided, its default value is 300 seconds.
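A boto3 sketch of adjusting the cooldown, either as the group default or per simple scaling policy; the group and policy names are hypothetical.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Shorten the group-wide default cooldown from 300 seconds to 120
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    DefaultCooldown=120,
)

# Or give one simple scaling policy its own cooldown
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="cpu-scale-out",
    PolicyType="SimpleScaling",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=2,
    Cooldown=60,
)
```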
19
Q

You have been given an assignment to configure Network ACLs in your VPC. Before configuring the NACLs, you need to understand how the NACLs are evaluated. How are NACL rules evaluated?

A

NACL rules are evaluated by rule number from lowest to highest and executed immediately when a matching rule is found.

  • You can add or remove rules from the default network ACL, or create additional network ACLs for your VPC.
  • When you add or remove rules from a network ACL, the changes are automatically applied to the subnets that it’s associated with.
  • A network access control list (ACL) is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets.
  • You might set up network ACLs with rules similar to your security groups in order to add an additional layer of security to your VPC.

Rule evaluation:

  1. Rules are evaluated starting with the lowest-numbered rule.
  2. As soon as a rule matches traffic, it’s applied regardless of any higher-numbered rule that might contradict it.

The following are the parts of a network ACL rule:

  1. Type. The type of traffic, for example, SSH. You can also specify all traffic or a custom range.
  2. Protocol. You can specify any protocol that has a standard protocol number.
  3. Port range. The listening port or port range for the traffic.
  4. Source. [Inbound rules only] The source of the traffic (CIDR range).
  5. Destination. [Outbound rules only] The destination for the traffic (CIDR range).
  6. Allow/Deny. Whether to allow or deny the specified traffic.
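A boto3 sketch showing why rule numbers matter: with the rules below, inbound HTTPS traffic matches rule 100 and is allowed before the deny-all rule 200 is ever considered; the NACL ID is a placeholder.

```python
import boto3

ec2 = boto3.client("ec2")
nacl_id = "acl-0123456789abcdef0"

# Rule 100: allow inbound HTTPS from anywhere (evaluated first)
ec2.create_network_acl_entry(
    NetworkAclId=nacl_id, RuleNumber=100, Egress=False,
    Protocol="6", RuleAction="allow",
    CidrBlock="0.0.0.0/0", PortRange={"From": 443, "To": 443},
)

# Rule 200: deny all other inbound traffic (only reached if nothing matched earlier)
ec2.create_network_acl_entry(
    NetworkAclId=nacl_id, RuleNumber=200, Egress=False,
    Protocol="-1", RuleAction="deny",
    CidrBlock="0.0.0.0/0",
)
```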
20
Q

You have been evaluating the NACLs in your company. Currently, you are looking at the default network ACL. Which statement is true about NACLs?

A

The default configuration of the default NACL is Allow, and the default configuration of a custom NACL is Deny.

  1. Your VPC automatically comes with a modifiable default network ACL.
  2. By default, it allows all inbound and outbound IPv4 traffic and, if applicable, IPv6 traffic.
  3. You can create a custom network ACL and associate it with a subnet.
  4. By default, each custom network ACL denies all inbound and outbound traffic until you add rules.
21
Q

A consultant has been hired by a small company to configure an AWS environment. The consultant begins working with the VPC and launching EC2 instances within the VPC. The initial instances will be placed in a public subnet. The consultant begins to create security groups. What is true of security groups?

A

You can specify allow rules but not deny rules

The following are the basic characteristics of security groups for your VPC:

  1. There are quotas on the number of security groups that you can create per VPC, the number of rules that you can add to each security group, and the number of security groups that you can associate with a network interface.
  2. You can specify allow rules, but not deny rules.
  3. You can specify separate rules for inbound and outbound traffic.
  4. When you create a security group, it has no inbound rules.
  5. Therefore, no inbound traffic originating from another host to your instance is allowed until you add inbound rules to the security group.
  6. By default, a security group includes an outbound rule that allows all outbound traffic.
  7. You can remove the rule and add outbound rules that allow specific outbound traffic only.
  8. If your security group has no outbound rules, no outbound traffic originating from your instance is allowed.
  9. Security groups are stateful.
  10. If you send a request from your instance, the response traffic for that request is allowed to flow in regardless of inbound security group rules.
  11. Responses to allowed inbound traffic are allowed to flow out, regardless of outbound rules.
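A minimal boto3 sketch: security group rules only ever allow traffic, so you add allow rules (and revoke them to remove access) rather than writing deny rules; the group ID is a placeholder.

```python
import boto3

ec2 = boto3.client("ec2")
sg_id = "sg-0123456789abcdef0"

# Allow rule only - there is no equivalent call for a deny rule
ec2.authorize_security_group_ingress(
    GroupId=sg_id,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTPS from anywhere"}],
    }],
)

# Access is removed by revoking the rule, not by adding a deny:
# ec2.revoke_security_group_ingress(GroupId=sg_id, IpPermissions=[...])
```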
22
Q

An international company has many clients around the world. These clients need to transfer gigabytes to terabytes of data quickly and on a regular basis to an S3 bucket. Which S3 feature will enable these long distance data transfers in a secure and fast manner?

A

Transfer Acceleration

You might want to use Transfer Acceleration on a bucket for various reasons, including the following:

  • You have customers that upload to a centralized bucket from all over the world.
  • You transfer gigabytes to terabytes of data on a regular basis across continents.
  • You are unable to utilize all of your available bandwidth over the Internet when uploading to Amazon S3
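A boto3 sketch of enabling Transfer Acceleration on a bucket and uploading through the accelerated endpoint; the bucket and file names are illustrative.

```python
import boto3
from botocore.config import Config

s3 = boto3.client("s3")

# One-time: enable acceleration on the bucket
s3.put_bucket_accelerate_configuration(
    Bucket="global-uploads",
    AccelerateConfiguration={"Status": "Enabled"},
)

# Clients then upload via the <bucket>.s3-accelerate.amazonaws.com endpoint
accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
accel.upload_file("dataset.tar.gz", "global-uploads", "incoming/dataset.tar.gz")
```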
23
Q

An organization of about 100 employees has performed the initial setup of users in IAM. All users except administrators have the same basic privileges. But now it has been determined that 50 employees will have extra restrictions on EC2. They will be unable to launch new instances or alter the state of existing instances. What will be the quickest way to implement these restrictions?

A

Create the appropriate policy. Create a new group for the restricted users. Place the restricted users in the new group and attach the policy to the group.

  • You manage access in AWS by creating policies and attaching them to IAM identities (users, groups of users, or roles) or AWS resources.
  • A policy is an object in AWS that, when associated with an identity or resource, defines their permissions.
  • AWS evaluates these policies when an IAM principal (user or role) makes a request.
  • Permissions in the policies determine whether the request is allowed or denied.
  • Most policies are stored in AWS as JSON documents.

AWS supports six types of policies:

  1. identity-based policies,
  2. resource-based policies,
  3. permissions boundaries,
  4. Organizations SCPs (service control policies),
  5. ACLs,
  6. session policies.
  • IAM policies define permissions for an action regardless of the method that you use to perform the operation.
  • For example, if a policy allows the GetUser action, then a user with that policy can get user information from the AWS Management Console, the AWS CLI, or the AWS API.
  • When you create an IAM user, you can choose to allow console or programmatic access.
  • If console access is allowed, the IAM user can sign in to the console using a user name and password.
  • Or if programmatic access is allowed, the user can use access keys to work with the CLI or API.
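A boto3 sketch of the quickest path: one deny policy, one group, attach the policy, and add the 50 users; the policy, group, and user names are illustrative.

```python
import json
import boto3

iam = boto3.client("iam")

deny_ec2_state_changes = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": ["ec2:RunInstances", "ec2:StartInstances",
                   "ec2:StopInstances", "ec2:RebootInstances",
                   "ec2:TerminateInstances"],
        "Resource": "*",
    }],
}

policy = iam.create_policy(PolicyName="DenyEC2StateChanges",
                           PolicyDocument=json.dumps(deny_ec2_state_changes))

iam.create_group(GroupName="RestrictedEC2Users")
iam.attach_group_policy(GroupName="RestrictedEC2Users",
                        PolicyArn=policy["Policy"]["Arn"])

# Repeat for each of the 50 restricted users
iam.add_user_to_group(GroupName="RestrictedEC2Users", UserName="example.user")
```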
24
Q

A small company has nearly 200 users who already have AWS accounts in the company AWS environment. A new S3 bucket has been created which will allow roughly a third of all users access to sensitive information in the bucket. What is the most time efficient way to get these users access to the bucket?

A

Create a new policy which will grant permissions to the bucket. Create a group and attach the policy to that group. Add the users to this group.

An IAM group is a collection of IAM users.

Groups let you specify permissions for multiple users, which can make it easier to manage the permissions for those users.

Note that a group is not truly an “identity” in IAM because it cannot be identified as a Principal in a permission policy.

It is simply a way to attach policies to multiple users at one time.

Following are some important characteristics of groups:

  1. A group can contain many users, and a user can belong to multiple groups.
  2. Groups can’t be nested; they can contain only users, not other groups.
  3. There’s no default group that automatically includes all users in the AWS account. If you want to have a group like that, you need to create it and assign each new user to it.
  4. There’s a limit to the number of groups you can have, and a limit to how many groups a user can be in.
25
Q

You have been tasked with migrating an application and the servers it runs on to the company AWS cloud environment. You have created a checklist of steps necessary to perform this migration. A subsection in the checklist is security considerations. One of the things that you need to consider is the shared responsibility model. Which option does AWS handle under the shared responsibility model?

A

Physical Hardware Infrastructure

  1. Security and compliance is a shared responsibility between AWS and the customer.
  2. This shared model can help relieve the customer’s operational burden as AWS operates, manages, and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates.
  3. The customer assumes responsibility for, and management of, the guest operating system (including updates and security patches), other associated application software, and the configuration of the AWS provided security group firewall.
  4. Customers should carefully consider the services they choose, as their responsibilities vary depending on the services used, the integration of those services into their IT environment, and applicable laws and regulations.
  5. The nature of this shared responsibility also provides the flexibility and customer control that permits the deployment.

AWS responsibility “Security of the Cloud”:

  • AWS is responsible for protecting the infrastructure that runs all of the services offered in the AWS Cloud.
  • This infrastructure is composed of the hardware, software, networking, and facilities that run AWS Cloud services.
26
Q

After an IT Steering Committee meeting, you have been put in charge of configuring a hybrid environment for the company’s compute resources. You weigh the pros and cons of various technologies based on the requirements you are given. The main requirements to drive this selection are overall cost considerations and the ability to reuse existing internet connections. Which technology best meets these requirements?

A

AWS VPN

  1. AWS Managed VPN lets you reuse existing VPN equipment and processes, and reuse existing internet connections.
  2. It is an AWS-managed high availability VPN service.
  3. It supports static routes or dynamic Border Gateway Protocol (BGP) peering and routing policies.
27
Q

After an IT Steering Committee meeting, you have been put in charge of configuring a hybrid environment for the company’s compute resources. You weigh the pros and cons of various technologies, such as VPN and Direct Connect, and based on the requirements you have decided to configure a VPN connection. What features and advantages can a VPN connection provide?

A

It provides a connection between an on-premises network and a VPC, using a secure and private connection with IPsec and TLS.

  1. A VPC/VPN Connection utilizes IPSec to establish encrypted network connectivity between your intranet and Amazon VPC over the Internet.
  2. VPN Connections can be configured in minutes and are a good solution if you have an immediate need, have low-to-modest bandwidth requirements, and can tolerate the inherent variability in Internet-based connectivity.
  3. AWS Client VPN is a managed client-based VPN service that enables you to securely access your AWS resources or your on-premises network.
  4. With AWS Client VPN, you configure an endpoint to which your users can connect to establish a secure TLS VPN session.
  5. This enables clients to access resources in AWS or on-premises from any location using an OpenVPN-based VPN client.
  6. You can create an IPsec VPN connection between your VPC and your remote network.
  7. On the AWS side of the Site-to-Site VPN connection, a virtual private gateway or transit gateway provides two VPN endpoints (tunnels) for automatic failover.
  8. You configure your customer gateway device on the remote side of the Site-to-Site VPN connection.
28
Q

A company is running a teaching application which is consumed by users all over the world. The application is translated into 5 different languages. All of these language files need to be stored somewhere that is highly-durable and can be accessed frequently. As content is added to the site, the storage demands will grow by a factor of five, so the storage must be highly-scalable as well. Which storage option will be highly-durable, cost-effective, and highly-scalable?

A

Amazon S3

  • Amazon S3 is object storage built to store and retrieve any amount of data from anywhere on the Internet.
  • It’s a simple storage service that offers an extremely durable, highly-available, and infinitely-scalable data storage infrastructure at very low costs.
  • The total volume of data and number of objects you can store are unlimited.
  • Individual Amazon S3 objects can range in size from a minimum of 0 bytes to a maximum of 5 terabytes.
  • The largest object that can be uploaded in a single PUT is 5 gigabytes.
  • For objects larger than 100 megabytes, customers should consider using the Multipart Upload capability.

Amazon S3 offers a range of storage classes designed for different use cases.

  1. These include S3 Standard for general-purpose storage of frequently accessed data
  2. S3 Intelligent-Tiering for data with unknown or changing access patterns,
  3. S3 Standard-Infrequent Access (S3 Standard-IA),
  4. S3 One Zone-Infrequent Access (S3 One Zone-IA) for long-lived, but less frequently accessed data,
  5. Amazon S3 Glacier (S3 Glacier)
  6. Amazon S3 Glacier Deep Archive (S3 Glacier Deep Archive) for long-term archive and digital preservation.
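A boto3 sketch of the multipart-upload guidance above, using the transfer utility so objects over a chosen threshold are uploaded in parts automatically; the bucket, key, and sizes are illustrative.

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Objects above ~100 MB are split into 64 MB parts and uploaded in parallel
config = TransferConfig(multipart_threshold=100 * 1024 * 1024,
                        multipart_chunksize=64 * 1024 * 1024)

s3.upload_file("lessons_fr.tar", "language-content", "fr/lessons_fr.tar",
               Config=config)
```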
29
Q

A company needs to deploy EC2 instances to handle overnight batch processing. This includes media transcoding and some voice to text transcription. This is not high priority work, and it is OK if these batch runs get interrupted. What is the best EC2 instance purchasing option for this work?

A

Spot

Amazon EC2 provides the following purchasing options to enable you to optimize your costs based on your needs:

  1. On-Demand Instances – Pay, by the second, for the instances that you launch.
  2. Savings Plans – Reduce your Amazon EC2 costs by making a commitment to a consistent amount of usage, in USD per hour, for a term of 1 or 3 years.
  3. Reserved Instances – Reduce your Amazon EC2 costs by making a commitment to a consistent instance configuration, including instance type and Region, for a term of 1 or 3 years.
  4. Scheduled Instances – Purchase instances that are always available on the specified recurring schedule, for a one-year term.
  5. Spot Instances – Request unused EC2 instances, which can reduce your Amazon EC2 costs significantly.
  6. Dedicated Hosts – Pay for a physical host that is fully dedicated to running your instances, and bring your existing per-socket, per-core, or per-VM software licenses to reduce costs.
  7. Dedicated Instances – Pay, by the hour, for instances that run on single-tenant hardware.
  8. Capacity Reservations – Reserve capacity for your EC2 instances in a specific Availability Zone for any duration.
  9. A Spot Instance is an unused EC2 instance that is available for less than the On-Demand price.
  • Because Spot Instances enable you to request unused EC2 instances at steep discounts, you can lower your Amazon EC2 costs significantly.
  • The hourly price for a Spot Instance is called a Spot price.
  • The Spot price of each instance type in each Availability Zone is set by Amazon EC2, and adjusted gradually based on the long-term supply of and demand for Spot Instances.
  • Your Spot Instance runs whenever capacity is available and the maximum price per hour for your request exceeds the Spot price.
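A minimal boto3 sketch of launching the batch workers as Spot Instances; the AMI, instance type, and counts are placeholders, and interrupted instances are simply terminated since the runs can tolerate interruption.

```python
import boto3

ec2 = boto3.client("ec2")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="c5.2xlarge",
    MinCount=1,
    MaxCount=5,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {
            "SpotInstanceType": "one-time",
            "InstanceInterruptionBehavior": "terminate",
        },
    },
)
```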
30
Q

A testing team is using a group of EC2 instances to run batch, automated tests on an application. The tests run overnight, but don’t take all night. The instances sit idle for long periods of time and accrue unnecessary charges. What can you do to stop these instances when they are idle for long periods?

A

You can create a CloudWatch alarm that is triggered when the average CPU utilization percentage has been lower than 10 percent for 4 hours, and stops the instance.

Adding Stop Actions to Amazon CloudWatch Alarms:

  • You can create an alarm that stops an Amazon EC2 instance when a certain threshold has been met.
  • You can create an alarm that is triggered when the average CPU utilization percentage has been lower than 10 percent for 24 hours, signaling that it is idle and no longer in use.
  • You can adjust the threshold, duration, and period to suit your needs, plus you can add an SNS notification, so that you will receive an email when the alarm is triggered.
  • Amazon EC2 instances that use an Amazon Elastic Block Store volume as the root device can be stopped or terminated, whereas instances that use the instance store as the root device can only be terminated.
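A boto3 sketch of the alarm described in the answer (average CPU below 10 percent for 4 hours, then stop), using the built-in EC2 stop action; the region, instance ID, and periods are illustrative and can be tuned.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="stop-idle-batch-instance",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=3600,              # one-hour periods...
    EvaluationPeriods=4,      # ...evaluated over 4 hours
    Threshold=10.0,
    ComparisonOperator="LessThanThreshold",
    # Built-in action that stops the instance when the alarm fires
    AlarmActions=["arn:aws:automate:us-east-1:ec2:stop"],
)
```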
31
Q

You are managing data storage for your company, and there are many EBS volumes. Your management team has given you some new requirements. Certain metrics on the EBS volumes need to be monitored, and the database team needs to be notified by email when certain metric thresholds are exceeded. Which AWS services can be configured to meet these requirements?

A

CloudWatch and SNS

  • CloudWatch can be used to monitor the volume
  • SNS can be used to send emails to the Ops team.

Amazon SNS is for messaging-oriented applications, with multiple subscribers requesting and receiving “push” notifications of time-critical messages via a choice of transport protocols, including HTTP, Amazon SQS, and email.
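A boto3 sketch wiring the two services together: an SNS topic with an email subscription, and a CloudWatch alarm on an EBS metric that notifies the topic; the metric choice, thresholds, volume ID, and address are illustrative.

```python
import boto3

sns = boto3.client("sns")
topic_arn = sns.create_topic(Name="ebs-alerts")["TopicArn"]
sns.subscribe(TopicArn=topic_arn, Protocol="email",
              Endpoint="dba-team@example.com")  # subscription must be confirmed

cloudwatch = boto3.client("cloudwatch")
cloudwatch.put_metric_alarm(
    AlarmName="ebs-queue-length-high",
    Namespace="AWS/EBS",
    MetricName="VolumeQueueLength",
    Dimensions=[{"Name": "VolumeId", "Value": "vol-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=3,
    Threshold=10.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[topic_arn],   # email the database team via SNS
)
```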