Practice Exam 3 Flashcards
Your development team has created a gaming application that uses DynamoDB to store user statistics and provide fast game updates back to users. The team has begun testing the application but needs a consistent data set to perform tests with. The testing process alters the dataset, so the baseline data needs to be retrieved upon each new test. Which AWS service can meet this need by exporting data from DynamoDB and importing data into DynamoDB?
Amazon EMR (Elastic MapReduce)
- You can use Amazon EMR with a customized version of Hive that includes connectivity to DynamoDB to perform operations on data stored in DynamoDB:
- Loading DynamoDB data into the Hadoop Distributed File System (HDFS) and using it as input into an Amazon EMR cluster
- Querying live DynamoDB data using SQL-like statements (HiveQL)
- Joining data stored in DynamoDB and exporting it or querying against the joined data
- Exporting data stored in DynamoDB to Amazon S3
- Importing data stored in Amazon S3 to DynamoDB
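The export/import round trip above is typically expressed as HiveQL run on the EMR cluster. Below is a minimal sketch of those statements held as Python strings; the table name, column mapping, and S3 bucket are hypothetical:

```python
# Sketch: HiveQL an EMR cluster could run to export a DynamoDB table to S3
# (the baseline copy) and later re-import it. Table, columns, and bucket
# are hypothetical placeholders.
export_hql = """
CREATE EXTERNAL TABLE ddb_stats (user_id string, score bigint)
STORED BY 'org.apache.hadoop.hive.dynamodb.DynamoDBStorageHandler'
TBLPROPERTIES ("dynamodb.table.name" = "UserStats",
               "dynamodb.column.mapping" = "user_id:UserId,score:Score");

INSERT OVERWRITE DIRECTORY 's3://example-bucket/baseline/'
SELECT * FROM ddb_stats;
"""

# Restoring the baseline reverses the direction: read the S3 copy and
# INSERT OVERWRITE into the DynamoDB-backed external table.
import_hql = """
CREATE EXTERNAL TABLE s3_baseline (user_id string, score bigint)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\\001'
LOCATION 's3://example-bucket/baseline/';

INSERT OVERWRITE TABLE ddb_stats SELECT * FROM s3_baseline;
"""
```

Running the import before each test restores the dataset the tests altered.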
A company has an application for sharing static content, such as photos. The popularity of the application has grown, and the company is now sharing content worldwide. This worldwide service has caused some issues with latency. What AWS services can be used to host a static website, serve content to globally dispersed users, and address latency issues, while keeping cost under control? Choose two.
S3
- Amazon S3 is an object storage built to store and retrieve any amount of data from anywhere on the Internet.
- It’s a simple storage service that offers an extremely durable, highly available, and infinitely scalable data storage infrastructure at very low costs.
- AWS Global Accelerator and Amazon CloudFront are separate services that use the AWS global network and its edge locations around the world.
- CloudFront improves performance for both cacheable content (such as images and videos) and dynamic content (such as API acceleration and dynamic site delivery).
- Global Accelerator improves performance for a wide range of applications over TCP or UDP by proxying packets at the edge to applications running in one or more AWS Regions.
- Global Accelerator is a good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT), or Voice over IP, as well as for HTTP use cases that specifically require static IP addresses or deterministic, fast regional failover. Both services integrate with AWS Shield for DDoS protection.
CloudFront
- Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds, all within a developer-friendly environment.
- CloudFront is integrated with AWS – both physical locations that are directly connected to the AWS global infrastructure, as well as other AWS services.
- CloudFront works seamlessly with services including AWS Shield for DDoS mitigation, Amazon S3, Elastic Load Balancing, or Amazon EC2 as origins for your applications, and Lambda@Edge to run custom code closer to customers’ users and to customize the user experience.
- Lastly, if you use AWS origins such as Amazon S3, Amazon EC2, or Elastic Load Balancing, you don’t pay for any data transferred between these services and CloudFront.
Your company has recently converted to a hybrid cloud environment and will slowly be migrating to a fully AWS cloud environment. The AWS side needs some preparation for disaster recovery: a disaster recovery plan needs to be drawn up, and disaster recovery drills need to be performed. The company wants to establish Recovery Time and Recovery Point Objectives, with a major component being a very aggressive RTO, with cost not being a major factor. You have determined and will recommend that the best DR configuration to meet cost and RTO/RPO objectives will be to run a second AWS architecture in another Region in an active-active configuration. Which AWS disaster recovery pattern will best meet these requirements?
Multi-site
- Multi-site with the active-active architecture is correct.
- This pattern will have the highest cost but the quickest failover.
The AWS team in a large company is spending a lot of time monitoring EC2 instances and maintenance when the instances report health check failures. How can you most efficiently automate this monitoring and repair?
- Create an Amazon CloudWatch alarm that monitors an Amazon EC2 instance and automatically reboots the instance if a health check fails.
- You can create an Amazon CloudWatch alarm that monitors an Amazon EC2 instance and automatically reboots the instance.
- The reboot alarm action is recommended for Instance Health Check failures (as opposed to the recover alarm action, which is suited for System Health Check failures).
- An instance reboot is equivalent to an operating system reboot.
- In most cases, it takes only a few minutes to reboot your instance.
- When you reboot an instance, it remains on the same physical host, so your instance keeps its public DNS name, private IP address, and any data on its instance store volumes.
- Rebooting an instance doesn’t start a new instance billing hour, unlike stopping and restarting your instance.
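The alarm described above can be created with a single API call. Here is a sketch of the parameters (instance ID and Region are hypothetical); in practice the dict would be passed to `boto3.client("cloudwatch").put_metric_alarm(**alarm)`:

```python
# Sketch: CloudWatch alarm that reboots an EC2 instance after repeated
# instance status check failures. Instance ID and Region are hypothetical.
alarm = {
    "AlarmName": "reboot-on-instance-check-failure",
    "Namespace": "AWS/EC2",
    "MetricName": "StatusCheckFailed_Instance",
    "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    "Statistic": "Maximum",
    "Period": 60,
    "EvaluationPeriods": 3,  # three consecutive failed checks
    "Threshold": 1,
    "ComparisonOperator": "GreaterThanOrEqualToThreshold",
    # Reboot action for instance check failures; a system check failure
    # would use the recover action (arn:aws:automate:<region>:ec2:recover).
    "AlarmActions": ["arn:aws:automate:us-east-1:ec2:reboot"],
}
```

Once in place, the alarm removes the manual monitor-and-repair loop entirely.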
Your company uses IoT devices installed in businesses to provide those businesses with real-time data for analysis. You have decided to use Amazon Kinesis Data Firehose to stream the data to multiple backend storage services for analytics. Which listed service is not a viable destination for the real-time data stream?
Athena
- Amazon Athena is correct because Amazon Kinesis Data Firehose cannot load streaming data to Athena.
- Amazon Kinesis Data Firehose is the easiest way to load streaming data into data stores and analytics tools.
- It can capture, transform, and load streaming data into Amazon S3, Amazon Redshift, Amazon Elasticsearch Service, and Splunk, enabling near real-time analytics with existing business intelligence tools and dashboards you’re already using today.
- It is a fully managed service that automatically scales to match the throughput of your data and requires no ongoing administration.
- It can also batch, compress, and encrypt the data before loading it, minimizing the amount of storage used at the destination and increasing security.
Recently, you’ve been experiencing issues with your dynamic application that is running on EC2 instances. These instances aren’t able to keep up with the amount of traffic being sent to them, and customers are getting timeouts. Upon further investigation, there is no discernible traffic pattern for these surges. What can you do to fix the problem while keeping cost in mind?
Migrate the application to ECS (Elastic Container Service). Use Fargate to run the required tasks.
- This would be a perfect use case for Fargate, as the workload is unpredictable.
- It will automatically scale in and out based on the workload being thrown at it.
Your company is slowly migrating to the cloud and is currently in a hybrid environment. The server team has been using Puppet for deployment automation, and the decision has been made to continue using Puppet in the AWS environment if possible. Which AWS service provides integration with Puppet?
AWS OpsWorks
- AWS OpsWorks for Puppet Enterprise is a fully-managed configuration management service that hosts Puppet Enterprise, a set of automation tools from Puppet for infrastructure and application management.
- OpsWorks also maintains your Puppet master server by automatically patching, updating, and backing up your server.
- OpsWorks eliminates the need to operate your own configuration management systems or worry about maintaining its infrastructure.
- OpsWorks gives you access to all of the Puppet Enterprise features, which you manage through the Puppet console. It also works seamlessly with your existing Puppet code.
A large, big-box hardware chain is setting up a new inventory management system. They have developed a system using IoT sensors which captures the removal of items from the store shelves in near real-time and want to use this information to update their inventory system. The company wants to analyze this data in the hopes of being ahead of demand and properly managing logistics and delivery of in-demand items.
Which AWS service can be used to capture this data as close to real-time as possible, while being able to both transform and load the streaming data into Amazon S3 or Amazon Elasticsearch Service?
Kinesis Data Firehose
- Amazon Kinesis Data Firehose is the easiest way to reliably load streaming data into data lakes, data stores, and analytics tools.
- It can capture, transform, and load streaming data into Amazon S3, Amazon Redshift, Amazon Elasticsearch Service, and Splunk, enabling near-real-time analytics with existing business intelligence tools and dashboards you’re already using today.
- It is a fully-managed service that automatically scales to match the throughput of your data and requires no ongoing administration.
- It can also batch, compress, transform, and encrypt the data before loading it, minimizing the amount of storage used at the destination and increasing security.
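A delivery stream for this use case is defined by a single configuration object. The sketch below shows the shape of one with an S3 destination, including the batching and compression mentioned above (names and ARNs are hypothetical); it would be passed to `boto3.client("firehose").create_delivery_stream(**stream)`:

```python
# Sketch: Kinesis Data Firehose delivery stream with an S3 destination.
# Stream name, role ARN, and bucket ARN are hypothetical placeholders.
stream = {
    "DeliveryStreamName": "shelf-sensor-stream",
    "DeliveryStreamType": "DirectPut",  # sensors write directly to Firehose
    "ExtendedS3DestinationConfiguration": {
        "RoleARN": "arn:aws:iam::123456789012:role/firehose-delivery-role",
        "BucketARN": "arn:aws:s3:::example-inventory-bucket",
        # Buffer up to 5 MiB or 60 seconds of records per delivery batch.
        "BufferingHints": {"SizeInMBs": 5, "IntervalInSeconds": 60},
        "CompressionFormat": "GZIP",  # compress before loading to S3
    },
}
```

Swapping the destination block (for example, to an Elasticsearch configuration) changes where Firehose loads the data without touching the producers.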
You work for an oil and gas company as a lead in data analytics. The company is using IoT devices to better understand their assets in the field (for example, pumps, generators, valve assemblies, and so on). Your task is to monitor the IoT devices in real-time to provide valuable insight that can help you maintain the reliability, availability, and performance of your IoT devices. What tool can you use to process streaming data in real time with standard SQL without having to learn new programming languages or processing frameworks?
Kinesis Data Analytics
- Monitoring IoT devices in real-time can provide valuable insight that can help you maintain the reliability, availability, and performance of your IoT devices.
- You can track time series data on device connectivity and activity.
- This insight can help you react quickly to changing conditions and emerging situations. Amazon Web Services (AWS) offers a comprehensive set of powerful, flexible, and simple-to-use services that enable you to extract insights and actionable information in real time.
- Amazon Kinesis is a platform for streaming data on AWS, offering key capabilities to cost-effectively process streaming data at any scale.
- Kinesis capabilities include Amazon Kinesis Data Analytics, the easiest way to process streaming data in real time with standard SQL without having to learn new programming languages or processing frameworks.
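The "standard SQL" in question is a streaming dialect run inside the Kinesis Data Analytics application. Below is a sketch held as a Python string, computing a per-device rolling average over the default in-application input stream (the device and temperature columns are hypothetical):

```python
# Sketch: Kinesis Data Analytics streaming SQL that averages a sensor
# reading per device over one-minute tumbling windows. Column names are
# hypothetical; SOURCE_SQL_STREAM_001 is the default input stream name.
analytics_sql = """
CREATE OR REPLACE STREAM "DEST_STREAM" ("device_id" VARCHAR(16), "avg_temp" DOUBLE);

CREATE OR REPLACE PUMP "AGG_PUMP" AS
  INSERT INTO "DEST_STREAM"
  SELECT STREAM "device_id", AVG("temperature")
  FROM "SOURCE_SQL_STREAM_001"
  GROUP BY "device_id",
           STEP("SOURCE_SQL_STREAM_001".ROWTIME BY INTERVAL '1' MINUTE);
"""
```

No Hadoop, Spark, or custom consumer code is involved; the SQL is the whole processing layer.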
A financial institution has begun using AWS services and plans to migrate as much of their IT infrastructure and applications to AWS as possible. The nature of the business dictates that strict compliance practices be in place. The AWS team has configured AWS CloudTrail to help meet compliance requirements and be ready for any upcoming audits. Which item is not a feature of AWS CloudTrail?
Monitor Auto Scaling Groups and optimize resource utilization.
- This is not a CloudTrail feature. AWS CloudTrail records account activity and API calls across AWS services for governance, compliance, and auditing; monitoring Auto Scaling groups and optimizing resource utilization falls to other services, such as Amazon CloudWatch.
An organization of about 100 employees has performed the initial setup of users in IAM. All users except administrators have the same basic privileges. But now it has been determined that 50 employees will have extra restrictions on EC2. They will be unable to launch new instances or alter the state of existing instances. What will be the quickest way to implement these restrictions?
Create the appropriate policy. Create a new group for the restricted users. Place the restricted users in the new group and attach the policy to the group.
- You manage access in AWS by creating policies and attaching them to IAM identities (users, groups of users, or roles) or AWS resources.
- A policy is an object in AWS that, when associated with an identity or resource, defines their permissions.
- AWS evaluates these policies when an IAM principal (user or role) makes a request.
- Permissions in the policies determine whether the request is allowed or denied.
- Most policies are stored in AWS as JSON documents.
- AWS supports six types of policies: identity-based policies, resource-based policies, permissions boundaries, Organizations SCPs, ACLs, and session policies.
- IAM policies define permissions for an action regardless of the method that you use to perform the operation.
- For example, if a policy allows the GetUser action, then a user with that policy can get user information from the AWS Management Console, the AWS CLI, or the AWS API.
- When you create an IAM user, you can choose to allow console or programmatic access.
- If console access is allowed, the IAM user can sign in to the console using a user name and password.
- Or if programmatic access is allowed, the user can use access keys to work with the CLI or API.
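For the scenario above, the policy attached to the restricted group might look like the following sketch: an explicit deny on launching instances and on state-changing EC2 actions, layered on top of the basic privileges the users already have (the exact action list is an assumption about what "alter the state" covers):

```python
import json

# Sketch: identity-based policy for the restricted group. Explicit Deny
# overrides any Allow the users get elsewhere. The action list is an
# assumed interpretation of "launch new instances or alter the state".
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": [
            "ec2:RunInstances",
            "ec2:StartInstances",
            "ec2:StopInstances",
            "ec2:RebootInstances",
            "ec2:TerminateInstances",
        ],
        "Resource": "*",
    }],
}

print(json.dumps(policy, indent=2))
```

Attaching this one policy to the group applies it to all 50 users at once, which is why the group approach is the quickest.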
A small startup company has multiple departments with small teams representing each department. They have hired you to configure Identity and Access Management in their AWS account. The team expects to grow rapidly and to promote from within, which could mean promoted team members switching over to a new team fairly often. How can you configure IAM to prepare for this type of growth?
Create the user accounts, create a group for each department, create and attach an appropriate policy to each group, and place each user account into their department’s group. When new team members are onboarded, create their account and put them in the appropriate group. If an existing team member changes departments, move their account to their new IAM group.
- An IAM group is a collection of IAM users.
- Groups let you specify permissions for multiple users, which can make it easier to manage the permissions for those users.
- For example, you could have a group called Admins and give that group the types of permissions that administrators typically need.
- Any user in that group automatically has the permissions that are assigned to the group.
- If a new user joins your organization and needs administrator privileges, you can assign the appropriate permissions by adding the user to that group.
- Similarly, if a person changes jobs in your organization, instead of editing that user’s permissions, you can remove him or her from the old groups and add him or her to the appropriate new groups.
Your company has decided to go to a hybrid cloud environment. Part of this effort will be to move a large data warehouse to the cloud. The warehouse is 50TB, and will take over a month to migrate given the current bandwidth available. What is the best option available to perform this migration considering both cost and performance aspects?
AWS Snowball Edge
- The AWS Snowball Edge is a type of Snowball device with on-board storage and compute power for select AWS capabilities.
- Snowball Edge can undertake local processing and edge-computing workloads in addition to transferring data between your local environment and the AWS Cloud.
- Each Snowball Edge device can transport data at speeds faster than the internet.
- This transport is done by shipping the data in the appliances through a regional carrier.
- The appliances are rugged shipping containers, complete with E Ink shipping labels.
- The AWS Snowball Edge device differs from the standard Snowball because it can bring the power of the AWS Cloud to your on-premises location, with local storage and compute functionality.
- Snowball Edge devices have three options for device configurations: storage optimized, compute optimized, and with GPU.
- When this guide refers to Snowball Edge devices, it’s referring to all options of the device.
- Whenever specific information applies to only one or more optional configurations of devices, like how the Snowball Edge with GPU has an on-board GPU, it will be called out.
A company needs to deploy EC2 instances to handle overnight batch processing. This includes media transcoding and some voice-to-text transcription. This is not high-priority work, and it is OK if these batch runs get interrupted. What is the best EC2 instance purchasing option for this work?
Spot
Amazon EC2 provides the following purchasing options to enable you to optimize your costs based on your needs:
- On-Demand Instances – Pay, by the second, for the instances that you launch.
- Savings Plans – Reduce your Amazon EC2 costs by making a commitment to a consistent amount of usage, in USD per hour, for a term of 1 or 3 years.
- Reserved Instances – Reduce your Amazon EC2 costs by making a commitment to a consistent instance configuration, including instance type and Region, for a term of 1 or 3 years.
- Scheduled Instances – Purchase instances that are always available on the specified recurring schedule, for a one-year term.
- Spot Instances – Request unused EC2 instances, which can reduce your Amazon EC2 costs significantly.
- Dedicated Hosts – Pay for a physical host that is fully dedicated to running your instances, and bring your existing per-socket, per-core, or per-VM software licenses to reduce costs.
- A Spot Instance is an unused EC2 instance that is available for less than the On-Demand price.
- Because Spot Instances enable you to request unused EC2 instances at steep discounts, you can lower your Amazon EC2 costs significantly.
- The hourly price for a Spot Instance is called a Spot price.
- The Spot price of each instance type in each Availability Zone is set by Amazon EC2, and adjusted gradually based on the long-term supply of and demand for Spot Instances.
- Your Spot Instance runs whenever capacity is available and the maximum price per hour for your request exceeds the Spot price.
- Dedicated Instances – Pay, by the hour, for instances that run on single-tenant hardware.
- Capacity Reservations – Reserve capacity for your EC2 instances in a specific Availability Zone for any duration.
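For interruptible batch work like the scenario above, Spot capacity can be requested directly in a RunInstances call. Here is a sketch of the request (AMI ID, instance type, and counts are hypothetical); it would be passed to `boto3.client("ec2").run_instances(**request)`:

```python
# Sketch: RunInstances request asking for Spot capacity for the overnight
# batch fleet. AMI ID, instance type, and counts are hypothetical.
request = {
    "ImageId": "ami-0123456789abcdef0",
    "InstanceType": "c5.large",
    "MinCount": 1,
    "MaxCount": 10,
    "InstanceMarketOptions": {
        "MarketType": "spot",
        "SpotOptions": {
            # One-time request: interruptible batch work, so simply let
            # EC2 terminate the instances when capacity is reclaimed.
            "SpotInstanceType": "one-time",
            "InstanceInterruptionBehavior": "terminate",
        },
    },
}
```

Since the batch runs tolerate interruption, the steep Spot discount comes with no real downside here.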
You are managing data storage for your company, and there are many EBS volumes. Your management team has given you some new requirements. Certain metrics on the EBS volumes need to be monitored, and the database team needs to be notified by email when certain metric thresholds are exceeded. Which AWS services can be configured to meet these requirements?
SNS
- CloudWatch can be used to monitor the volume, and SNS can be used to send emails to the Ops team.
- Amazon SNS is for messaging-oriented applications, with multiple subscribers requesting and receiving “push” notifications of time-critical messages via a choice of transport protocols, including HTTP, Amazon SQS, and email.
CloudWatch
- Amazon CloudWatch collects metrics from EBS volumes (for example, VolumeReadOps and VolumeQueueLength) and lets you set alarms on thresholds for those metrics.
- When an alarm's threshold is exceeded, its alarm action can publish to an SNS topic, which delivers the email notification to the database team.
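Tying the two services together, the sketch below shows a CloudWatch alarm on an EBS volume metric whose action publishes to an SNS topic with the database team's email subscriptions (volume ID, threshold, and topic ARN are hypothetical):

```python
# Sketch: CloudWatch alarm on an EBS metric that notifies an SNS topic.
# Volume ID, threshold, and topic ARN are hypothetical placeholders; the
# database team's email addresses would be subscribed to the topic.
alarm = {
    "AlarmName": "ebs-queue-length-high",
    "Namespace": "AWS/EBS",
    "MetricName": "VolumeQueueLength",
    "Dimensions": [{"Name": "VolumeId", "Value": "vol-0123456789abcdef0"}],
    "Statistic": "Average",
    "Period": 300,
    "EvaluationPeriods": 2,
    "Threshold": 10,
    "ComparisonOperator": "GreaterThanThreshold",
    # SNS delivers the alarm notification to every subscriber on the topic.
    "AlarmActions": ["arn:aws:sns:us-east-1:123456789012:db-team-alerts"],
}
```

CloudWatch does the monitoring; SNS does the fan-out to email, so each requirement maps to one of the two services.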