Practice Exam 2 Flashcards

1
Q

An online media company has created an application which provides analytical data to its clients. The application is hosted on EC2 instances in an Auto Scaling Group. You have been brought on as a consultant and add an Application Load Balancer to front the Auto Scaling Group and distribute the load between the instances. The VPC which houses this architecture is running IPv4 and IPv6. The last thing you need to do to complete the configuration is point the domain name to the Application Load Balancer. Using Route 53, which record types at the zone apex will you use to point the domain name to the DNS name of the Application Load Balancer? Choose two.

A

Alias with an A type record set.

  • Alias with a type “AAAA” record set and Alias with a type “A” record set are correct.
  • To route domain traffic to an ELB load balancer, use Amazon Route 53 to create an alias record that points to your load balancer.
  • An alias record is a Route 53 extension to DNS.

Alias with an AAAA type record set.

  • To route domain traffic to an ELB, use Amazon Route 53 to create an alias record that points to your load balancer.
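As a sketch of what this looks like in practice, the change batch below creates both alias records at the zone apex. It mirrors the shape of a Route 53 ChangeResourceRecordSets request; the domain, ALB DNS name, and hosted zone ID are placeholder values, not from the question.

```python
# Sketch of a Route 53 change batch creating alias A (IPv4) and AAAA (IPv6)
# records at the zone apex, pointing at an ALB. All IDs and names below are
# placeholders.
def alias_change_batch(domain, alb_dns_name, alb_zone_id):
    """Build one UPSERT per record type: A for IPv4, AAAA for IPv6."""
    return {
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": domain,
                    "Type": record_type,
                    "AliasTarget": {
                        "HostedZoneId": alb_zone_id,  # the ALB's canonical zone ID, not yours
                        "DNSName": alb_dns_name,
                        "EvaluateTargetHealth": True,
                    },
                },
            }
            for record_type in ("A", "AAAA")
        ]
    }

batch = alias_change_batch(
    "example.com.",
    "my-alb-123456789.us-east-1.elb.amazonaws.com.",  # placeholder ALB DNS name
    "Z35SXDOTRQ7X7K",                                  # placeholder zone ID
)
```

The batch would be passed as the `ChangeBatch` argument of a `change_resource_record_sets` call against your own hosted zone.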
2
Q

Your company has recently converted to a hybrid cloud environment and will slowly be migrating to a fully AWS cloud environment. The AWS side needs some preparation for disaster recovery: a disaster recovery plan needs to be drawn up, and disaster recovery drills need to be performed. The company wants to establish Recovery Time and Recovery Point Objectives, with a major component being a very aggressive RTO; cost is not a major factor. You have determined, and will recommend, that the best DR configuration to meet the cost and RTO/RPO objectives is to run a second AWS architecture in another Region in an active-active configuration. Which AWS disaster recovery pattern will best meet these requirements?

A

Multi-site

  • Multi-site with the active-active architecture is correct.
  • This pattern will have the highest cost but the quickest failover.
3
Q

You work for an online retailer where any downtime at all can cause a significant loss of revenue. You have architected your application to be deployed on an Auto Scaling Group of EC2 instances behind a load balancer. You have configured and deployed these resources using a CloudFormation template. The Auto Scaling Group is configured with default settings and a simple CPU utilization scaling policy. You have also set up multiple Availability Zones for high availability. The load balancer performs health checks against an HTML file generated by a script. When you begin load testing your application, you notice in CloudWatch that the load balancer is not sending traffic to one of your EC2 instances. What could be the problem?

A

The EC2 instance has failed the load balancer health check.

  • The load balancer routes incoming requests only to healthy instances.
  • The EC2 instance may have passed its status checks and be considered healthy by the Auto Scaling Group, but the ELB will not use it until the ELB health check has passed.
  • The ELB health check has a default of 30 seconds between checks, and a default of 3 checks before making a decision.
  • Therefore the instance could be available but receive no traffic for at least 90 seconds before the console would show it as failed.
  • In CloudWatch, where the issue was noticed, the instance would appear healthy but receive no traffic, which is what was observed.
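The 90-second figure is just arithmetic on the stated values (check interval times unhealthy threshold); a minimal sketch:

```python
# Back-of-envelope: how long before an ELB marks an instance unhealthy, using
# the interval and threshold values stated above. Actual defaults vary by
# load balancer type and configuration.
def time_to_unhealthy(interval_seconds, unhealthy_threshold):
    """Seconds of consecutive failed checks before the instance is marked unhealthy."""
    return interval_seconds * unhealthy_threshold

seconds = time_to_unhealthy(30, 3)  # 30s interval x 3 checks = 90 seconds
```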
4
Q

An accounting company has big data applications for analyzing actuary data. The company is migrating some of its services to the cloud, and for the foreseeable future, will be operating in a hybrid environment. They need a storage service that provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources. Which AWS service can meet these requirements?

A

EFS

  • Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources.
  • It is built to scale on demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth.
  • Amazon EFS offers two storage classes: the Standard storage class, and the Infrequent Access storage class (EFS IA).
  • EFS IA provides price/performance that’s cost-optimized for files not accessed every day.
  • By simply enabling EFS Lifecycle Management on your file system, files not accessed according to the lifecycle policy you choose will be automatically and transparently moved into EFS IA.
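A minimal sketch of enabling the lifecycle management described above, mirroring the shape of the EFS PutLifecycleConfiguration call (the file system ID and the 30-day policy choice are placeholders):

```python
# Sketch of the parameters for enabling EFS lifecycle management, so files not
# accessed for 30 days move transparently to the EFS IA storage class.
# The file system ID is a placeholder.
lifecycle_params = {
    "FileSystemId": "fs-0123456789abcdef0",       # placeholder
    "LifecyclePolicies": [
        {"TransitionToIA": "AFTER_30_DAYS"},       # chosen lifecycle policy
    ],
}
```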
5
Q

You are working as a Solutions Architect in a large healthcare organization. You have many Auto Scaling Groups that you need to create. One requirement is that you need to reuse some software licenses and therefore need to use dedicated hosts on EC2 instances in your Auto Scaling Groups. What step must you take to meet this requirement?

A

Use a launch template with your Auto Scaling Group.

In addition to the features of Amazon EC2 Auto Scaling that you can configure by using launch templates, launch templates provide more advanced Amazon EC2 configuration options.

For example, you must use launch templates to use Amazon EC2 Dedicated Hosts.

  • Dedicated Hosts are physical servers with EC2 instance capacity that are dedicated to your use.
  • While Amazon EC2 Dedicated Instances also run on dedicated hardware, the advantage of using Dedicated Hosts over Dedicated Instances is that you can bring eligible software licenses from external vendors and use them on EC2 instances.

If you currently use launch configurations, you can specify a launch template when you update an Auto Scaling group that was created using a launch configuration.

To create a launch template to use with an Auto Scaling Group,

  1. create the template from scratch,
  2. create a new version of an existing template,
  3. or copy the parameters from a launch configuration, running instance, or other template.
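As a sketch, a launch template requesting Dedicated Host tenancy, which a launch configuration cannot express, might be built with parameters like these (mirroring the EC2 CreateLaunchTemplate shape; the template name and AMI ID are placeholders):

```python
# Sketch of create_launch_template parameters requesting Dedicated Host
# tenancy for license reuse. The name and AMI ID are placeholders.
launch_template_params = {
    "LaunchTemplateName": "licensed-app-template",   # placeholder
    "LaunchTemplateData": {
        "ImageId": "ami-0123456789abcdef0",          # placeholder AMI
        "InstanceType": "m5.large",
        "Placement": {"Tenancy": "host"},            # run on a Dedicated Host
    },
}
```

The Auto Scaling Group would then reference this template (by name or ID) instead of a launch configuration.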
6
Q

You are working for a large financial institution and preparing for disaster recovery and upcoming DR drills. A key component in the DR plan will be the database instances and their data. An aggressive Recovery Time Objective (RTO) dictates that the database needs to be synchronously replicated. Which configuration can meet this requirement?

A

RDS Multi-AZ

Amazon RDS Multi-AZ deployments provide enhanced availability and durability for RDS database (DB) instances, making them a natural fit for production database workloads.

When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ).

  • Each AZ runs on its own physically distinct, independent infrastructure and is engineered to be highly reliable.
  • In case of an infrastructure failure, Amazon RDS performs an automatic failover to the standby (or to a read replica in the case of Amazon Aurora)
  • You can resume database operations as soon as the failover is complete.
  • Since the endpoint for your DB Instance remains the same after a failover, your application can resume database operation without the need for manual administrative intervention.
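A sketch of the parameters that turn this behavior on when creating the instance, mirroring the RDS CreateDBInstance shape (identifiers and credentials are placeholders):

```python
# Sketch of create_db_instance parameters enabling Multi-AZ, which provisions
# a synchronously replicated standby in another AZ. Identifiers and
# credentials below are placeholders.
db_params = {
    "DBInstanceIdentifier": "prod-db",        # placeholder
    "Engine": "mysql",
    "DBInstanceClass": "db.m5.large",
    "AllocatedStorage": 100,                  # GiB
    "MultiAZ": True,                          # synchronous standby replica
    "MasterUsername": "admin",                # placeholder
    "MasterUserPassword": "change-me",        # placeholder
}
```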
7
Q

Your development team has created a gaming application that uses DynamoDB to store user statistics and provide fast game updates back to users. The team has begun testing the application but needs a consistent data set to perform tests with. The testing process alters the dataset, so the baseline data needs to be retrieved upon each new test. Which AWS service can meet this need by exporting data from DynamoDB and importing data into DynamoDB?

A

Elastic Map Reduce

You can use Amazon EMR with a customized version of Hive that includes connectivity to DynamoDB to perform operations on data stored in DynamoDB:

  • Loading DynamoDB data into the Hadoop Distributed File System (HDFS) and using it as input into an Amazon EMR cluster
  • Querying live DynamoDB data using SQL-like statements (HiveQL)
  • Joining data stored in DynamoDB and exporting it or querying against the joined data
  • Exporting data stored in DynamoDB to Amazon S3
  • Importing data stored in Amazon S3 to DynamoDB
8
Q

A company has an application for sharing static content, such as photos. The popularity of the application has grown, and the company is now sharing content worldwide. This worldwide service has caused some issues with latency. What AWS services can be used to host a static website, serve content to globally dispersed users, and address latency issues, while keeping cost under control? Choose two.

A

S3

  • Amazon S3 is an object storage built to store and retrieve any amount of data from anywhere on the Internet.
  • It’s a simple storage service that offers an extremely durable, highly available, and infinitely scalable data storage infrastructure at very low costs.
  • AWS Global Accelerator and Amazon CloudFront are separate services that use the AWS global network and its edge locations around the world.
  • CloudFront improves performance for both cacheable content (such as images and videos) and dynamic content (such as API acceleration and dynamic site delivery).
  • Global Accelerator improves performance for a wide range of applications over TCP or UDP by proxying packets at the edge to applications running in one or more AWS Regions.
    • Global Accelerator is a good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT), or Voice over IP, as well as for HTTP use cases that specifically require static IP addresses or deterministic, fast regional failover.
    • Both services integrate with AWS Shield for DDoS protection.

CloudFront

  • Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds, all within a developer-friendly environment.
  • CloudFront is integrated with AWS – both physical locations that are directly connected to the AWS global infrastructure, as well as other AWS services.
  • CloudFront works seamlessly with services including AWS Shield for DDoS mitigation, Amazon S3, Elastic Load Balancing, or Amazon EC2 as origins for your applications, and Lambda@Edge to run custom code closer to customers’ users and to customize the user experience.
  • Lastly, if you use AWS origins such as Amazon S3, Amazon EC2, or Elastic Load Balancing, you don’t pay for any data transferred between these services and CloudFront.
9
Q

A new startup is considering the advantages of using DynamoDB versus a traditional relational database in AWS RDS. The NoSQL nature of DynamoDB presents a small learning curve to the team members who all have experience with traditional databases. The company will have multiple databases, and the decision will be made on a case-by-case basis. Which of the following use cases would favor DynamoDB? Select two.

A

Managing web session data

  • DynamoDB is a NoSQL database that supports key-value and document data structures.
  • A key-value store is a database service that provides support for storing, querying, and updating collections of objects that are identified using a key and values that contain the actual content being stored.
  • Meanwhile, a document data store provides support for storing, querying, and updating items in a document format such as JSON, XML, and HTML.
  • DynamoDB’s fast and predictable performance characteristics make it a great match for handling session data.
  • Plus, since it’s a fully-managed NoSQL database service, you avoid all the work of maintaining and operating a separate session store.

Storing metadata for S3 objects

  • Storing metadata for Amazon S3 objects is correct because Amazon DynamoDB stores structured data indexed by primary key and allows low-latency read and write access to items ranging from 1 byte up to 400 KB.
  • Amazon S3 stores unstructured blobs and is suited for storing large objects up to 5 TB.
  • In order to optimize your costs across AWS services, large objects or infrequently accessed data sets should be stored in Amazon S3, while smaller data elements or file pointers (possibly to Amazon S3 objects) are best saved in Amazon DynamoDB.
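A sketch of the pointer pattern described above: the large object lives in S3, while a small, queryable item in DynamoDB holds its metadata. The table, bucket, and key names are illustrative.

```python
# Sketch of a DynamoDB item acting as a metadata pointer to a large S3 object,
# using the low-level typed attribute format. All names are placeholders.
item = {
    "object_key":   {"S": "photos/2024/ball-game.jpg"},   # partition key
    "s3_bucket":    {"S": "my-media-bucket"},             # placeholder bucket
    "size_bytes":   {"N": "5242880"},                     # numbers are passed as strings
    "content_type": {"S": "image/jpeg"},
}
put_item_params = {"TableName": "s3-object-metadata", "Item": item}
```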
10
Q

A new startup company decides to use AWS to host their web application. They configure a VPC as well as two subnets within the VPC. They also attach an internet gateway to the VPC. In the first subnet, they create an EC2 instance to host a web application. There is a network ACL and a security group, which both have the proper ingress and egress to and from the internet. There is a route in the route table to the internet gateway. The EC2 instances added to the subnet need to have a globally unique IP address to ensure internet access. Which is not a globally unique IP address?

A

Private IP address

  • Public IPv4 address, elastic IP address, and IPv6 address are globally unique addresses.
  • The IPv4 addresses known for not being unique are private IPs.
  • These are found in the following ranges: from 10.0.0.0 to 10.255.255.255, from 172.16.0.0 to 172.31.255.255, and from 192.168.0.0 to 192.168.255.255.
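Python's standard library can confirm these ranges directly; a quick check:

```python
# Verify that addresses in the RFC 1918 ranges listed above are non-routable
# (private), while an arbitrary public address is not.
import ipaddress

private_samples = ["10.0.0.5", "172.16.0.1", "192.168.1.10"]
public_sample = "52.94.76.1"  # an arbitrary public address for contrast

all_private = all(ipaddress.ip_address(a).is_private for a in private_samples)
public_is_private = ipaddress.ip_address(public_sample).is_private
```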
11
Q

A professional baseball league has chosen to use a key-value and document database for storage, processing, and data delivery. Many of the data requirements involve high-speed processing of data such as a Doppler radar system which samples the position of the baseball 2000 times per second. Which AWS data storage can meet these requirements?

A

DynamoDB

  • Amazon DynamoDB is a NoSQL database that supports key-value and document data models, and enables developers to build modern, serverless applications that can start small and scale globally to support petabytes of data and tens of millions of read and write requests per second.
  • DynamoDB is designed to run high-performance, internet-scale applications that would overburden traditional relational databases.
12
Q

You have been assigned to create an architecture which uses load balancers to direct traffic to an Auto Scaling Group of EC2 instances across multiple Availability Zones. You were considering using an Application Load Balancer, but some of the requirements you have been given seem to point to a Classic Load Balancer. Which requirement would be better served by an Application Load Balancer?

A

Path-based routing

Using an Application Load Balancer instead of a Classic Load Balancer has the following benefits:

  • Support for path-based routing.
  • You can configure rules for your listener that forward requests based on the URL in the request.
  • This enables you to structure your application as smaller services, and route requests to the correct service based on the content of the URL.
  • Support for host-based routing.
  • You can configure rules for your listener that forward requests based on the host field in the HTTP header.
  • This enables you to route requests to multiple domains using a single load balancer.

Support for routing based on fields in the request, such as standard and custom HTTP headers and methods, query parameters, and source IP addresses.

Support for routing requests to multiple applications on a single EC2 instance.

  • You can register each instance or IP address with the same target group using multiple ports.

Support for redirecting requests from one URL to another.

Support for returning a custom HTTP response.

Support for registering targets by IP address, including targets outside the VPC for the load balancer.

Support for registering Lambda functions as targets.

Support for the load balancer to authenticate users of your applications through their corporate or social identities before routing requests.

Support for containerized applications.

  • Amazon Elastic Container Service (Amazon ECS) can select an unused port when scheduling a task and register the task with a target group using this port.
  • This enables you to make efficient use of your clusters.

Support for monitoring the health of each service independently, as health checks are defined at the target group level and many CloudWatch metrics are reported at the target group level.

Attaching a target group to an Auto Scaling Group enables you to scale each service dynamically based on demand.

Access logs contain additional information and are stored in compressed format.
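A sketch of the path-based routing benefit, mirroring the shape of an ALB listener CreateRule call: requests matching /api/* are forwarded to a dedicated target group. Both ARNs are placeholders.

```python
# Sketch of an ALB listener rule for path-based routing. The listener and
# target group ARNs are placeholders, not real resources.
rule_params = {
    "ListenerArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/placeholder",
    "Priority": 10,
    "Conditions": [
        {"Field": "path-pattern", "PathPatternConfig": {"Values": ["/api/*"]}},
    ],
    "Actions": [
        {"Type": "forward",
         "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/api-service"},
    ],
}
```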

13
Q

Your company is slowly migrating to the cloud and is currently in a hybrid environment. The server team has been using Puppet for deployment automations. The decision has been made to continue using Puppet in the AWS environment if possible. If possible, which AWS service provides integration with Puppet?

A

AWS OpsWorks

  • AWS OpsWorks for Puppet Enterprise is a fully-managed configuration management service that hosts Puppet Enterprise, a set of automation tools from Puppet for infrastructure and application management.
  • OpsWorks also maintains your Puppet master server by automatically patching, updating, and backing up your server.
  • OpsWorks eliminates the need to operate your own configuration management systems or worry about maintaining its infrastructure.
  • OpsWorks gives you access to all of the Puppet Enterprise features, which you manage through the Puppet console.
  • It also works seamlessly with your existing Puppet code.
14
Q

You are designing an architecture for a financial company which provides a day trading application to customers. After viewing the traffic patterns for the existing application you notice that traffic is fairly steady throughout the day, with the exception of large spikes at the opening of the market in the morning and at closing around 3 pm. Your architecture will include an Auto Scaling Group of EC2 instances. How can you configure the Auto Scaling Group to ensure that system performance meets the increased demands at opening and closing of the market?

A

Use a predictive scaling policy on the Auto Scaling Group to meet opening and closing spikes.

  • Using data collected from your actual EC2 usage and further informed by billions of data points drawn from our own observations, we use well-trained Machine Learning models to predict your expected traffic (and EC2 usage) including daily and weekly patterns.
  • The model needs at least one day of historical data to start making predictions.
  • It is re-evaluated every 24 hours to create a forecast for the next 48 hours.
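A sketch of attaching such a policy, mirroring the EC2 Auto Scaling PutScalingPolicy shape (the group name, policy name, and target value are illustrative):

```python
# Sketch of put_scaling_policy parameters for predictive scaling keyed to
# average ASG CPU utilization. Names and the target value are placeholders.
policy_params = {
    "AutoScalingGroupName": "trading-app-asg",       # placeholder
    "PolicyName": "open-close-predictive",           # placeholder
    "PolicyType": "PredictiveScaling",
    "PredictiveScalingConfiguration": {
        "MetricSpecifications": [
            {
                "TargetValue": 50.0,                 # illustrative CPU target (%)
                "PredefinedMetricPairSpecification": {
                    "PredefinedMetricType": "ASGCPUUtilization",
                },
            },
        ],
        "Mode": "ForecastAndScale",                  # forecast AND act on it
    },
}
```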
15
Q

Your boss has tasked you with decoupling your existing web frontend from the backend. Both applications run on EC2 instances. After you investigate the existing architecture, you find that (on average) the backend resources are processing about 5,000 requests per second and will need something that supports their extreme level of message processing. It’s also important that each request is processed only 1 time. What can you do to decouple these resources?

A

Use SQS Standard.

  • Include a unique ordering ID in each message
  • Have the backend application use this to deduplicate messages.
  • This would be a great choice, as SQS Standard can handle this level of extreme performance.
  • If the application didn’t require this level of performance, then SQS FIFO would be the better and easier choice.
16
Q

Recently, you’ve been experiencing issues with your dynamic application that is running on EC2 instances. These instances aren’t able to keep up with the amount of traffic being sent to them, and customers are getting timeouts. Upon further investigation, there is no discernible traffic pattern for these surges. What can you do to fix the problem while keeping cost in mind?

A

Migrate the application to ECS. Use Fargate to run the required tasks.

  • This would be a perfect use case for Fargate, as the workload is unpredictable.
  • It will automatically scale in and out based on the workload being thrown at it.
17
Q

Your company uses IoT devices installed in businesses to provide those businesses with real-time data for analysis. You have decided to use AWS Kinesis Data Firehose to stream the data to multiple backend storage services for analytics. Which service listed is not a viable solution to stream the real-time data to?

A

Athena

  • Amazon Athena is correct because Amazon Kinesis Data Firehose cannot load streaming data to Athena.
  • Amazon Kinesis Data Firehose is the easiest way to load streaming data into data stores and analytics tools.
  • It can capture, transform, and load streaming data into Amazon S3, Amazon Redshift, Amazon Elasticsearch Service, and Splunk, enabling near real-time analytics with existing business intelligence tools and dashboards you’re already using today.
  • It is a fully managed service that automatically scales to match the throughput of your data and requires no ongoing administration.
  • It can also batch, compress, and encrypt the data before loading it, minimizing the amount of storage used at the destination and increasing security.
18
Q

You work for an oil and gas company as a lead in data analytics. The company is using IoT devices to better understand their assets in the field (for example, pumps, generators, valve assemblies, and so on). Your task is to monitor the IoT devices in real-time to provide valuable insight that can help you maintain the reliability, availability, and performance of your IoT devices. What tool can you use to process streaming data in real time with standard SQL without having to learn new programming languages or processing frameworks?

A

Kinesis Data Analytics

  • Monitoring IoT devices in real-time can provide valuable insight that can help you maintain the reliability, availability, and performance of your IoT devices.
  • You can track time series data on device connectivity and activity.
  • This insight can help you react quickly to changing conditions and emerging situations.
  • Amazon Web Services (AWS) offers a comprehensive set of powerful, flexible, and simple-to-use services that enable you to extract insights and actionable information in real time.
  • Amazon Kinesis is a platform for streaming data on AWS, offering key capabilities to cost-effectively process streaming data at any scale.
  • Kinesis capabilities include Amazon Kinesis Data Analytics, the easiest way to process streaming data in real time with standard SQL without having to learn new programming languages or processing frameworks.
19
Q

Your boss recently asked you to investigate how to move your containerized application into AWS. During this migration, you’ll need to be able to easily move containers back and forth between on-premises and AWS. It has also been requested that you use an open-source container orchestration service. Which AWS tool would you pick to meet these requirements?

A

EKS

EKS is a managed version of the open-source tool Kubernetes.

20
Q

You have been assigned to create an architecture which uses load balancers to direct traffic to an Auto Scaling Group of EC2 instances across multiple Availability Zones. The application to be deployed on these instances is a life insurance application which requires path-based and host-based routing. Which type of load balancer will you need to use?

A

Application Load Balancer

  • Only the Application Load Balancer can support path-based and host-based routing.
  • Using an Application Load Balancer instead of a Classic Load Balancer has the following benefits:
    • Support for path-based routing.
    • You can configure rules for your listener that forward requests based on the URL in the request.
    • This enables you to structure your application as smaller services, and route requests to the correct service based on the content of the URL.

Support for host-based routing.

You can configure rules for your listener that forward requests based on the host field in the HTTP header.

This enables you to route requests to multiple domains using a single load balancer.

Support for routing based on fields in the request, such as standard and custom HTTP headers and methods, query parameters, and source IP addresses.

Support for routing requests to multiple applications on a single EC2 instance.

You can register each instance or IP address with the same target group using multiple ports.

Support for redirecting requests from one URL to another.

Support for returning a custom HTTP response.

Support for registering targets by IP address, including targets outside the VPC for the load balancer.

Support for registering Lambda functions as targets.

Support for the load balancer to authenticate users of your applications through their corporate or social identities before routing requests.

Support for containerized applications.

  • Amazon Elastic Container Service (Amazon ECS) can select an unused port when scheduling a task and register the task with a target group using this port.
  • This enables you to make efficient use of your clusters.

Support for monitoring the health of each service independently, as health checks are defined at the target group level and many CloudWatch metrics are reported at the target group level.

Attaching a target group to an Auto Scaling group enables you to scale each service dynamically based on demand.

Access logs contain additional information and are stored in compressed format.

Improved load balancer performance.

21
Q

A small software team is creating an application which will give subscribers real-time weather updates. The application will run on EC2 and will make several requests to AWS services such as S3 and DynamoDB. What is the best way to grant permissions to these other AWS services?

A

Create an IAM role that you attach to the EC2 instance to give temporary security credentials to applications running on the instance.

Create an IAM role in the following situations:

  • You’re creating an application that runs on an Amazon Elastic Compute Cloud (Amazon EC2) instance and that application makes requests to AWS.

​ Don’t create an IAM user and pass the user’s credentials to the application or embed the credentials in the application.

  • Instead, create an IAM role that you attach to the EC2 instance to give temporary security credentials to applications running on the instance.
  • When an application uses these credentials in AWS, it can perform all of the operations that are allowed by the policies attached to the role.
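A sketch of the trust policy that makes such a role assumable by EC2; the permissions policies granting S3 and DynamoDB access would be attached to the role separately.

```python
# Standard trust policy allowing the EC2 service to assume the role, so
# applications on the instance receive temporary credentials automatically.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},  # EC2 may assume this role
            "Action": "sts:AssumeRole",
        }
    ],
}
```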
22
Q

You have begun creating a hybrid cloud environment. Now you need to create a bastion host in the company's custom VPC. Only personnel in the corporate data center should have SSH access to the bastion host. How can you configure the bastion host and set up access?

A

Create the bastion host (EC2 instance).

  • For the instance security group, add ingress on port 22, and specify the address range of the personnel in the data center.
  • Use a private key to connect to the bastion host.
  • Add an internet gateway, a route table, and a route to the internet gateway in the route table.
  • Including bastion hosts in your VPC environment enables you to securely connect to your Linux instances without exposing your environment to the internet.
  • After you set up your bastion hosts, you can access the other instances in your VPC through Secure Shell (SSH) connections on Linux.
  • Bastion hosts are also configured with security groups to provide fine-grained ingress control.
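A sketch of the bastion's ingress rule, mirroring the EC2 AuthorizeSecurityGroupIngress shape (the security group ID and data center CIDR below are placeholders):

```python
# Sketch of the ingress rule for the bastion's security group: SSH (TCP 22)
# allowed only from the corporate data center's address range.
# The group ID and CIDR are placeholders.
ingress_params = {
    "GroupId": "sg-0123456789abcdef0",               # placeholder
    "IpPermissions": [
        {
            "IpProtocol": "tcp",
            "FromPort": 22,
            "ToPort": 22,
            "IpRanges": [
                {"CidrIp": "203.0.113.0/24",          # placeholder data center range
                 "Description": "corporate data center SSH"}
            ],
        }
    ],
}
```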
23
Q

The company you work for has reshuffled teams a bit and you’ve been moved from the AWS IAM team to the AWS network team. One of your first assignments is to review the subnets in the main VPCs. You have recommended that the company add some private subnets and segregate databases from public traffic. What differentiates a public subnet from a private subnet?

A
  • If a subnet’s traffic is routed to an internet gateway, the subnet is known as a public subnet.
  • A public subnet is a subnet that’s associated with a route table that has a route to an internet gateway.
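A sketch of the route that makes a subnet public, mirroring the EC2 CreateRoute shape (both IDs are placeholders):

```python
# Sketch of the route table entry that turns a subnet public: all non-local
# traffic is sent to an internet gateway. Both IDs are placeholders.
route_params = {
    "RouteTableId": "rtb-0123456789abcdef0",     # placeholder route table
    "DestinationCidrBlock": "0.0.0.0/0",         # everything not matched by local routes
    "GatewayId": "igw-0123456789abcdef0",        # placeholder internet gateway
}
```

A private subnet's route table simply lacks this entry (or routes 0.0.0.0/0 to a NAT device instead).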
24
Q

A small company has nearly 200 users who already have AWS accounts in the company AWS environment. A new S3 bucket has been created which will allow roughly a third of all users access to sensitive information in the bucket. What is the most time efficient way to get these users access to the bucket?

A
  1. Create a new policy which will grant permissions to the bucket.
  2. Create a group and attach the policy to that group.
  3. Add the users to this group.

An IAM group is a collection of IAM users.

  • Groups let you specify permissions for multiple users, which can make it easier to manage the permissions for those users.

For example, you could have a group called Admins and give that group the types of permissions that administrators typically need. Any user in that group automatically has the permissions that are assigned to the group. If a new user joins your organization and needs administrator privileges, you can assign the appropriate permissions by adding the user to that group.

  • If a person changes jobs in your organization, instead of editing that user’s permissions, you can remove him or her from the old groups and add him or her to the appropriate new groups.
  • Note that a group is not truly an “identity” in IAM because it cannot be identified as a Principal in a permission policy.
  • It is simply a way to attach policies to multiple users at one time. Following are some important characteristics of groups:
  • A group can contain many users, and a user can belong to multiple groups.
  • Groups can’t be nested; they can contain only users, not other groups.
  • There’s no default group that automatically includes all users in the AWS account.
  • If you want to have a group like that, you need to create it and assign each new user to it.
  • There’s a limit to the number of groups you can have, and a limit to how many groups a user can be in.
25
Q

A consultant is hired by a small company to configure an AWS environment. The consultant begins working with the VPC and launching EC2 instances within the VPC. The initial instances will be placed in a public subnet. The consultant begins to create security groups. What is true of security groups?

A

You can specify allow rules but not deny rules.

The following are the basic characteristics of security groups for your VPC:

  • There are quotas on the number of security groups that you can create per VPC, the number of rules that you can add to each security group, and the number of security groups that you can associate with a network interface.
  • You can specify allow rules, but not deny rules.
  • You can specify separate rules for inbound and outbound traffic.
  • When you create a security group, it has no inbound rules.
  • Therefore, no inbound traffic originating from another host to your instance is allowed until you add inbound rules to the security group.
  • By default, a security group includes an outbound rule that allows all outbound traffic.
  • You can remove the rule and add outbound rules that allow specific outbound traffic only.
  • If your security group has no outbound rules, no outbound traffic originating from your instance is allowed.
  • Security groups are stateful.
  • If you send a request from your instance, the response traffic for that request is allowed to flow in regardless of inbound security group rules.
  • Responses to allowed inbound traffic are allowed to flow out, regardless of outbound rules.
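The allow-only, stateful behavior above can be modeled in a few lines. This is a minimal sketch of the semantics, not the EC2 API: rule dicts loosely mirror the `IpPermissions` shape, and the port/protocol values are assumptions for illustration.

```python
# Minimal model of security-group semantics: allow rules only, no deny rules,
# and stateful handling of response traffic.

def is_allowed(rules, protocol, port):
    """A packet matches only if some allow rule covers it; absence of a
    match means drop -- there is no way to express an explicit deny."""
    return any(
        r["IpProtocol"] == protocol and r["FromPort"] <= port <= r["ToPort"]
        for r in rules
    )

inbound = []  # a new security group starts with no inbound rules
assert not is_allowed(inbound, "tcp", 22)  # unsolicited inbound is dropped

inbound.append({"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80})
assert is_allowed(inbound, "tcp", 80)

# Statefulness: responses to allowed traffic flow regardless of rules in the
# other direction, so a connection-tracking set stands in for rule lookups.
established = {("tcp", 80)}

def response_allowed(conn):
    """Response traffic for a tracked connection skips the rule check."""
    return conn in established

print(response_allowed(("tcp", 80)))  # True
```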
26
Q

You have joined a newly formed software company as a Solutions Architect. It is a small company, and you are the only employee with AWS experience. The owner has asked for your recommendations to ensure that the AWS resources are deployed to proactively remain within budget. Which AWS service can you use to help ensure you don’t have cost overruns for your AWS resources?

A

AWS Budgets


  • AWS Budgets gives you the ability to set custom budgets that alert you when your costs or usage exceed (or are forecasted to exceed) your budgeted amount.
  • You can also use AWS Budgets to set reservation utilization or coverage targets and receive alerts when your utilization drops below the threshold you define.
  • And remember the keyword, proactively.
  • With AWS Budgets, we can be proactive about attending to cost overruns before they become a major budget issue at the end of the month or quarter.
  • Budgets can be tracked at the monthly, quarterly, or yearly level, and you can customize the start and end dates.
  • You can further refine your budget to track costs associated with multiple dimensions, such as AWS service, linked account, tag, and others.
  • Budget alerts can be sent via email and/or Amazon Simple Notification Service (SNS) topic.
  • RI utilization alerts support Amazon EC2, Amazon RDS, Amazon Redshift, and Amazon ElastiCache reservations.
  • Budgets can be created and tracked from the AWS Budgets dashboard, or via the Budgets API.
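A proactive budget like the one described can be expressed as the payload that boto3's `budgets.create_budget()` expects. This is a hedged sketch: the budget amount, the 80% threshold, and the email address are assumptions, not values from the card.

```python
# Illustrative monthly cost budget with a proactive (forecast-based) email
# alert, shaped like the budgets.create_budget() request parameters.

budget = {
    "BudgetName": "monthly-cost-budget",      # assumed name
    "BudgetLimit": {"Amount": "1000", "Unit": "USD"},
    "TimeUnit": "MONTHLY",                    # also supports QUARTERLY, ANNUALLY
    "BudgetType": "COST",
}

notification = {
    "Notification": {
        # FORECASTED alerts fire before the overrun actually happens --
        # the "proactive" keyword from the card.
        "NotificationType": "FORECASTED",
        "ComparisonOperator": "GREATER_THAN",
        "Threshold": 80.0,  # percent of the budgeted amount
    },
    "Subscribers": [
        {"SubscriptionType": "EMAIL", "Address": "finance@example.com"},
        # An SNS subscriber could be added here instead of (or alongside) email.
    ],
}
```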
27
Q

After an IT Steering Committee meeting, you have been put in charge of configuring a hybrid environment for the company’s compute resources. You weigh the pros and cons of various technologies based on the requirements you are given. The decision you make is to go with Direct Connect. Which option best describes the features Direct Connect provides?

A

A private, dedicated network connection between your facilities and AWS

  • AWS Direct Connect is a cloud service solution that makes it easy to establish a dedicated network connection from your premises to AWS, including one or more VPCs in the same Region.
  • Using a private virtual interface (VIF) on AWS Direct Connect, you can establish private connectivity between AWS and your data center, office, or colocation environment.
  • AWS Direct Connect can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than internet-based connections.
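For concreteness, a private VIF request roughly matches the shape below. This is a hedged sketch of the parameters boto3's `directconnect.create_private_virtual_interface()` takes; the connection ID, VLAN, ASN, and gateway ID are made-up placeholder assumptions.

```python
# Illustrative private virtual interface request -- the piece that links a
# Direct Connect connection to a VPC's virtual private gateway.

private_vif = {
    "connectionId": "dxcon-example",  # hypothetical Direct Connect connection ID
    "newPrivateVirtualInterface": {
        "virtualInterfaceName": "vif-to-vpc",
        "vlan": 101,                        # assumed VLAN tag for this VIF
        "asn": 65000,                       # assumed BGP ASN on your side
        "virtualGatewayId": "vgw-example",  # attaches the VIF to a VPC
    },
}
```

A public VIF (for reaching public AWS endpoints) would use a similar shape without the virtual gateway attachment.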
28
Q

You are managing data storage for your company, and there are many EBS volumes. Your management team has given you some new requirements. Certain metrics on the EBS volumes need to be monitored, and the database team needs to be notified by email when certain metric thresholds are exceeded. Which AWS services can be configured to meet these requirements?

A

CloudWatch

  • CloudWatch can be used to monitor the EBS volume metrics.
  • SNS can be used to send email notifications to the database team when an alarm threshold is exceeded.
  • Amazon SNS is for messaging-oriented applications, with multiple subscribers requesting and receiving “push” notifications of time-critical messages via a choice of transport protocols, including HTTP, Amazon SQS, and email.
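The CloudWatch-plus-SNS pattern above can be sketched as an alarm definition. This is an illustrative payload shaped like boto3's `cloudwatch.put_metric_alarm()`; the volume ID, topic ARN, and threshold are assumptions, though `AWS/EBS` and `VolumeQueueLength` are a real namespace and metric.

```python
# Hypothetical CloudWatch alarm on an EBS volume metric whose alarm action
# publishes to an SNS topic that emails the database team.

alarm = {
    "AlarmName": "ebs-queue-length-high",
    "Namespace": "AWS/EBS",
    "MetricName": "VolumeQueueLength",
    "Dimensions": [{"Name": "VolumeId", "Value": "vol-0123456789abcdef0"}],
    "Statistic": "Average",
    "Period": 300,              # evaluate over 5-minute windows
    "EvaluationPeriods": 2,     # require two consecutive breaches
    "Threshold": 32.0,          # assumed threshold for this example
    "ComparisonOperator": "GreaterThanThreshold",
    # The SNS topic would have the database team's addresses as email subscribers.
    "AlarmActions": ["arn:aws:sns:us-east-1:111122223333:db-team-alerts"],
}
```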
29
Q

You have recently migrated your small company to AWS and are looking for some general best practice guidance within the platform. Which AWS service can help you optimize your AWS environment by giving recommendations to reduce cost, increase security, and improve performance?

A

AWS Trusted Advisor

  • AWS Trusted Advisor is an online tool that provides real-time guidance to help you provision your resources following AWS best practices.
  • Trusted Advisor checks help optimize your AWS infrastructure, increase security and performance, reduce your overall costs, and monitor service limits.
  • Whether establishing new workflows, developing applications, or as part of ongoing improvement, take advantage of the recommendations provided by Trusted Advisor on a regular basis to help keep your solutions provisioned optimally.
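Trusted Advisor results can also be pulled programmatically through the AWS Support API (which requires a Business or Enterprise support plan). To stay runnable offline, this hedged sketch only shapes the call parameters and filters a mocked response; the check names here are illustrative, not a live listing.

```python
# Parameters shaped like support.describe_trusted_advisor_checks().
params = {"language": "en"}

# Mocked stand-in for the API response's check metadata; categories follow
# Trusted Advisor's pillars (cost, security, performance, etc.).
mock_checks = [
    {"name": "Low Utilization Amazon EC2 Instances", "category": "cost_optimizing"},
    {"name": "Security Groups - Unrestricted Access", "category": "security"},
]

# Pull out just the cost-optimization recommendations.
cost_checks = [c["name"] for c in mock_checks if c["category"] == "cost_optimizing"]
print(cost_checks)  # ['Low Utilization Amazon EC2 Instances']
```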