Domain 3: Design High-Performing Architectures Flashcards

1
Q

Your team has provisioned Auto Scaling groups in a single Region. The Auto Scaling groups, at max capacity, would total 40 EC2 On-Demand Instances between them. However, you notice that the Auto Scaling groups will only scale out to a portion of that number of instances at any one time. What could be the problem?

You can have only 20 instances per Region. This is a hard limit.

There is a vCPU-based On-Demand Instance limit per Region.

The associated load balancer can serve only 20 instances at one time.

You can have only 20 instances per Availability Zone.

A

There is a vCPU-based On-Demand Instance limit per Region.

Your AWS account has default quotas, formerly referred to as limits, for each AWS service. Unless otherwise noted, each quota is Region-specific. You can request increases for some quotas; other quotas cannot be increased. Remember that the number of vCPUs an EC2 instance consumes varies by instance type and configuration, so it is always wise to calculate your vCPU needs to make sure you will not hit the quota unexpectedly. Service Quotas is an AWS service that helps you manage your quotas for over 100 AWS services from one location. Along with looking up quota values, you can also request a quota increase from the Service Quotas console. References: AWS Service Quotas; Amazon EC2 Endpoints and Quotas

You can have only 20 instances per Availability Zone.

Incorrect. The On-Demand Instance quota is vCPU-based and applies per Region, not per Availability Zone.
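For illustration, here is a minimal boto3 sketch of checking the Regional On-Demand vCPU quota with Service Quotas. The Region and the quota code shown (L-1216C47A, "Running On-Demand Standard instances") are assumptions to verify in your own account before relying on them.

```python
import boto3

# Service Quotas is Region-specific, so point the client at the Region you care about.
quotas = boto3.client("service-quotas", region_name="us-east-1")

# Assumed quota code for "Running On-Demand Standard (A, C, D, H, I, M, R, T, Z) instances".
VCPU_QUOTA_CODE = "L-1216C47A"

quota = quotas.get_service_quota(ServiceCode="ec2", QuotaCode=VCPU_QUOTA_CODE)
print("Current On-Demand vCPU quota:", quota["Quota"]["Value"])

# If 40 instances at, say, 4 vCPUs each (160 vCPUs) exceeds the value above,
# request an increase rather than waiting for scale-out to be capped.
# quotas.request_service_quota_increase(
#     ServiceCode="ec2", QuotaCode=VCPU_QUOTA_CODE, DesiredValue=192
# )
```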


2
Q

An Application Load Balancer is fronting an Auto Scaling Group of EC2 instances, and the instances are backed by an RDS database. The Auto Scaling Group has been configured to use the Default Termination Policy. You are testing the Auto Scaling Group and have triggered a scale-in. Which instance will be terminated first?

The instance for which the load balancer stops sending traffic.

The instance launched from the oldest launch configuration.

The longest running instance.

The Auto Scaling Group will randomly select an instance to terminate.

A

The instance launched from the oldest launch configuration.

What do we know? The ASG is using the Default Termination Policy. The default termination policy is designed to help ensure that your instances span Availability Zones evenly for high availability. The policy is kept generic and flexible to cover a range of scenarios. The default termination policy behavior is as follows:

1. Determine which Availability Zones have the most instances, and at least one instance that is not protected from scale-in.
2. Determine which instances to terminate so as to align the remaining instances to the allocation strategy for the On-Demand or Spot Instance that is terminating. This applies only to Auto Scaling Groups that specify allocation strategies. For example, after your instances launch, you change the priority order of your preferred instance types; when a scale-in event occurs, Amazon EC2 Auto Scaling tries to gradually shift the On-Demand Instances away from instance types that are lower priority.
3. Determine whether any of the instances use the oldest launch template or launch configuration:
   - [For Auto Scaling Groups that use a launch template] Determine whether any of the instances use the oldest launch template, unless there are instances that use a launch configuration. Amazon EC2 Auto Scaling terminates instances that use a launch configuration before instances that use a launch template.
   - [For Auto Scaling Groups that use a launch configuration] Determine whether any of the instances use the oldest launch configuration.
4. After applying all of the above criteria, if there are multiple unprotected instances to terminate, determine which instances are closest to the next billing hour. If there are multiple unprotected instances closest to the next billing hour, terminate one of those instances at random.

https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-instance-termination.html

The Auto Scaling Group will randomly select an instance to terminate.

Incorrect. The instance launched from the oldest launch configuration will be terminated first.
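As a small, hedged boto3 sketch of working with this behavior (the group name and instance ID are placeholders): you can pin a group to the default termination policy explicitly, and protect individual instances so the policy never selects them during scale-in.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Keep (or reset to) the default termination policy on an existing group.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    TerminationPolicies=["Default"],
)

# Protect a specific instance from scale-in so the default policy skips it.
autoscaling.set_instance_protection(
    AutoScalingGroupName="web-asg",
    InstanceIds=["i-0123456789abcdef0"],
    ProtectedFromScaleIn=True,
)
```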

3
Q

You work for a large online education company that teaches IT using pre-recorded videos. They want to make their website accessible to the hearing impaired and need a way to convert the speech in their videos and audio into text, which can then be displayed as subtitles. Which AWS service should they use?

Amazon Rekognition

Amazon Comprehend

Amazon Transcribe

Amazon Translate

A

Amazon Transcribe

Amazon Transcribe converts speech to text automatically. You can use this service to generate subtitles.
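As a sketch of how the company might do this with boto3 (the bucket names, job name, and media format are assumptions), Amazon Transcribe can write SRT/VTT subtitle files alongside the transcript:

```python
import boto3

transcribe = boto3.client("transcribe", region_name="us-east-1")

# Transcribe a video stored in S3 (placeholder bucket/key) and also emit subtitle files.
transcribe.start_transcription_job(
    TranscriptionJobName="course-101-lesson-01",
    Media={"MediaFileUri": "s3://example-training-videos/lesson-01.mp4"},
    MediaFormat="mp4",
    LanguageCode="en-US",
    Subtitles={"Formats": ["srt", "vtt"]},
    OutputBucketName="example-transcripts",
)

# Check the job status (a production workflow would react to an EventBridge event instead).
job = transcribe.get_transcription_job(TranscriptionJobName="course-101-lesson-01")
print(job["TranscriptionJob"]["TranscriptionJobStatus"])
```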

4
Q

You have just started working at a company that is migrating from a physical data center into AWS. Currently, you have 25 TB of data that needs to be moved to an S3 bucket. Your company has just finished setting up a 1 Gbps Direct Connect connection, but you will not have a VPN up and running for 30 days. This data needs to be encrypted during transit and at rest and must be uploaded to the S3 bucket within 21 days. What is the best way to meet these requirements?

Order a Snowcone device to transmit the data.

Use a Snowball device to transmit the data.

Upload the data to S3 using your public internet connection.

Upload the data using Direct Connect.

A

Use a Snowball device to transmit the data.

This is the best choice for transferring the data. A Snowball device easily holds 25 TB, encrypts your data at rest with keys managed through AWS KMS, and the shipping and import turnaround typically fits well within the 21-day window. Direct Connect alone does not encrypt traffic in transit, and the VPN that would provide that encryption will not be ready in time. Reference: Encryption in AWS Snowball
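A rough boto3 sketch of creating the Snowball import job is below. The address ID, KMS key, IAM role, bucket ARN, and device type are placeholders you would create or confirm first (the console is the more common path for a one-off migration).

```python
import boto3

snowball = boto3.client("snowball", region_name="us-east-1")

# Import job that ships a Snow device to the data center and lands the data in S3.
# All identifiers below are placeholders.
response = snowball.create_job(
    JobType="IMPORT",
    Resources={"S3Resources": [{"BucketArn": "arn:aws:s3:::example-migration-bucket"}]},
    Description="25 TB data center migration",
    AddressId="ADID00000000-0000-0000-0000-000000000000",
    KmsKeyARN="arn:aws:kms:us-east-1:111122223333:key/example-key-id",
    RoleARN="arn:aws:iam::111122223333:role/snowball-import-role",
    SnowballType="EDGE_S",
    ShippingOption="SECOND_DAY",
)
print("Snowball job created:", response["JobId"])
```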

5
Q

You are designing an architecture that will house an Auto Scaling group of EC2 instances. The application hosted on the instances is expected to be extremely popular. Forecasts for traffic to this site predict very high traffic, and you will need a load balancer to handle tens of millions of requests per second while maintaining high throughput at ultra-low latency. You need to select the correct type of load balancer to front your Auto Scaling group to meet this high traffic requirement. Which load balancer should you select?

You will need a Network Load Balancer to meet this requirement.

You will need an Application Load Balancer to meet this requirement.

All the AWS load balancers meet the requirement and perform the same.

You will need a Classic Load Balancer to meet this requirement.

A

You will need a Network Load Balancer to meet this requirement.

If extreme performance is needed for your application, AWS recommends that you use a Network Load Balancer. A Network Load Balancer operates at the connection level (Layer 4), routing connections to targets (Amazon EC2 instances, microservices, and containers) within Amazon VPC, based on IP protocol data. It is ideal for load balancing both TCP and UDP traffic, and is capable of handling millions of requests per second while maintaining ultra-low latencies. It is optimized to handle sudden and volatile traffic patterns while using a single static IP address per Availability Zone, and is integrated with other popular AWS services such as Auto Scaling, Amazon EC2 Container Service (ECS), Amazon CloudFormation, and AWS Certificate Manager (ACM).

References: Network Load Balancer

Elastic Load Balancing Features
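To ground this, here is a minimal boto3 sketch of provisioning a Network Load Balancer with a TCP listener and a target group the Auto Scaling group can attach to; the subnet, VPC, and name values are placeholders.

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Layer 4 load balancer spanning two subnets (placeholder IDs).
nlb = elbv2.create_load_balancer(
    Name="bidding-nlb",
    Type="network",
    Scheme="internet-facing",
    Subnets=["subnet-0aaa1111", "subnet-0bbb2222"],
)
nlb_arn = nlb["LoadBalancers"][0]["LoadBalancerArn"]

# TCP target group; the Auto Scaling group registers its instances here.
tg = elbv2.create_target_group(
    Name="bidding-targets",
    Protocol="TCP",
    Port=443,
    VpcId="vpc-0ccc3333",
    TargetType="instance",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# Forward all TCP/443 connections to the target group.
elbv2.create_listener(
    LoadBalancerArn=nlb_arn,
    Protocol="TCP",
    Port=443,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)
```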

6
Q

You work for an advertising company that has a real-time bidding application. You are also using CloudFront on the front end to accommodate a worldwide user base. Your users begin complaining about response times and pauses in real-time bidding. What is the best service that can be used to reduce DynamoDB response times by an order of magnitude (milliseconds to microseconds)?

CloudFront Edge Caches

DynamoDB Auto Scaling

DAX

ElastiCache

A

DAX

Correct. Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache that can reduce Amazon DynamoDB response times from milliseconds to microseconds, even at millions of requests per second. While DynamoDB offers consistent single-digit millisecond latency, DynamoDB with DAX takes performance to the next level with response times in microseconds for millions of requests per second for read-heavy workloads. With DAX, your applications remain fast and responsive, even when a popular event or news story drives unprecedented request volumes your way. No tuning required. https://aws.amazon.com/dynamodb/dax/
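For reference, a hedged boto3 sketch of provisioning a small DAX cluster; the node type, subnet group, and IAM role are assumptions. The application then sends its DynamoDB reads to the cluster endpoint via the DAX SDK client.

```python
import boto3

dax = boto3.client("dax", region_name="us-east-1")

# Three-node cluster for high availability; all identifiers are placeholders.
cluster = dax.create_cluster(
    ClusterName="bidding-dax",
    NodeType="dax.r5.large",
    ReplicationFactor=3,
    IamRoleArn="arn:aws:iam::111122223333:role/dax-dynamodb-access",
    SubnetGroupName="bidding-dax-subnets",
)

# Applications connect to this discovery endpoint instead of DynamoDB directly.
print(cluster["Cluster"]["ClusterDiscoveryEndpoint"])
```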

7
Q

You work for a large healthcare provider as an AWS lead architect. There is a need to collect data in real time from devices throughout the organization. The data will include log and event data from sources such as servers, desktops, and mobile devices. The data initially captured will be technical device data, but the goal is to expand the effort to collect clinical data in real time from handheld devices used by nurses and doctors. Which AWS service best meets this requirement?

Kinesis Data Streams

Kinesis Video Streams

Amazon Redshift

AWS Lambda

A

Kinesis Data Streams

Correct. Kinesis Data Streams can be used to collect log and event data from sources such as servers, desktops, and mobile devices. You can then build Kinesis applications to continuously process the data, generate metrics, power live dashboards, and emit aggregated data into stores such as Amazon S3. https://aws.amazon.com/kinesis/data-streams/
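A minimal boto3 producer sketch follows; the stream name and record fields are assumptions, and a real device fleet would usually batch records with put_records.

```python
import json
from datetime import datetime, timezone

import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

# One log/event record from a device; the stream name and fields are placeholders.
event = {
    "device_id": "ward-3-tablet-17",
    "event_type": "app_heartbeat",
    "timestamp": datetime.now(timezone.utc).isoformat(),
}

kinesis.put_record(
    StreamName="device-telemetry",
    Data=json.dumps(event).encode("utf-8"),
    PartitionKey=event["device_id"],  # spreads devices across shards
)
```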

8
Q

You have landed your dream job at Amazon and are moving to the Alexa team. You will be tasked with product design and improvement. You meet a new colleague who does not come from a tech background, and they would like to know what services make up the Alexa service. Select the correct services. CHOOSE 3

Amazon Comprehend

Amazon Transcribe

Amazon Lex

Amazon Polly

A

Amazon Transcribe

Amazon Transcribe automatically converts speech to text (for example, to generate subtitles). It is part of the Alexa suite of services.

Amazon Lex

Amazon Lex allows you to build conversational interfaces in your applications using natural-language models. It is part of the Alexa suite of services.

Amazon Polly

Amazon Polly turns your text into lifelike speech and allows you to create applications that talk to and interact with you using a variety of languages and accents. It is part of the Alexa suite of services.
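As a small illustration of the Polly piece (the voice ID and output file name are arbitrary choices, not anything Alexa-specific):

```python
import boto3

polly = boto3.client("polly", region_name="us-east-1")

# Turn a text prompt into lifelike speech and save the MP3 locally.
response = polly.synthesize_speech(
    Text="Welcome to the course. Let's get started.",
    OutputFormat="mp3",
    VoiceId="Joanna",
)

with open("welcome.mp3", "wb") as f:
    f.write(response["AudioStream"].read())
```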

9
Q

Your application team stores critical data with a third-party SaaS vendor. The data comes from an internal application that runs on Amazon ECS with AWS Fargate and stores the data in Amazon S3 in a proprietary format. Currently, AWS Lambda functions are invoked via Amazon S3 event notifications to transfer the data to the SaaS application. Due to resource and time limits, you are exploring other means of completing this workflow of transferring data from AWS to the SaaS solution. Which AWS service offers the most efficiency and has the least operational overhead?

Amazon EKS

AWS Step Function Fargate Capacity

Amazon EventBridge

Amazon AppFlow

A

Amazon AppFlow

Amazon AppFlow is a fully managed integration service for easily automating the exchange of data between AWS services like Amazon S3 and SaaS applications, in either direction. Using it here removes the custom Lambda step and avoids its resource and time constraints with the least operational overhead.

Reference: What Is AppFlow? Reference: Amazon AppFlow
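Assuming a flow from the S3 bucket to the SaaS connector has already been defined in AppFlow (the flow name below is a placeholder), triggering and monitoring it from code is only a couple of calls:

```python
import boto3

appflow = boto3.client("appflow", region_name="us-east-1")

# Kick off an on-demand run of a pre-configured S3 -> SaaS flow (placeholder name).
run = appflow.start_flow(flowName="s3-to-saas-export")
print("Execution started:", run.get("executionId"))

# Review recent executions and their status.
history = appflow.describe_flow_execution_records(flowName="s3-to-saas-export", maxResults=5)
for record in history["flowExecutions"]:
    print(record["executionId"], record["executionStatus"])
```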

10
Q

You have been brought in as a consultant for a large-scale enterprise that requires assistance with a move to AWS. The application team that is part of the migration currently runs a messaging application that uses RabbitMQ as their message queue software. The consumer and producer applications run on virtual machines via Java runtimes, and they poll the RabbitMQ queues and process messages as they are found. The team wants to avoid any major coding changes on the initial move to the cloud.

Which AWS services would you recommend they use initially? CHOOSE 2

Configure the producers and consumers to leverage Amazon DynamoDB for storing messages.

Configure the producers and consumers to leverage Amazon SQS for messaging.

Install the Java application on Amazon EC2 instances

Set up Amazon MQ to easily integrate RabbitMQ into AWS.

Break the application functions into individual AWS Lambda functions.

A

Install the Java application on Amazon EC2 instances.

To avoid major coding changes, you can install and run the Java application on Amazon EC2 instances. This most closely resembles the on-premises virtual machines.

Reference: AWS EC2

Set up Amazon MQ to easily integrate RabbitMQ into AWS.

Leverage Amazon MQ to more easily migrate applications to AWS that currently rely on message brokers such as RabbitMQ and ActiveMQ, without rewriting the messaging code.

Reference: Amazon MQ Reference: Working with RabbitMQ
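A hedged boto3 sketch of standing up a RabbitMQ broker in Amazon MQ; the engine version, instance type, deployment mode, and credentials are placeholders to check against what Amazon MQ currently supports.

```python
import boto3

mq = boto3.client("mq", region_name="us-east-1")

# Multi-AZ RabbitMQ broker the existing Java producers and consumers can point at
# with little more than a connection-string change. Values below are placeholders.
broker = mq.create_broker(
    BrokerName="orders-rabbitmq",
    EngineType="RABBITMQ",
    EngineVersion="3.11.20",
    HostInstanceType="mq.m5.large",
    DeploymentMode="CLUSTER_MULTI_AZ",
    PubliclyAccessible=False,
    AutoMinorVersionUpgrade=True,
    Users=[{"Username": "app_user", "Password": "change-me-12345"}],
)
print("Broker ARN:", broker["BrokerArn"])
```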

11
Q

Your company is in the process of creating a multi-region disaster recovery solution for your database, and you have been tasked to implement it. The required RTO is 1 hour, and the RPO is 15 minutes. What steps can you take to ensure these thresholds are met?

Use Redshift to host your database. Enable “multi-region” failover with Redshift. In the event of a failure, do nothing, as Redshift will handle it for you.

Use RDS to host your database. Create a cross-region read replica of your database. In the event of a failure, promote the read replica to be a standalone database. Send new reads and writes to this database.

Take EBS snapshots of the required EC2 instances nightly. In the event of a disaster, restore the snapshots to another region.

Use RDS to host your database. Enable the Multi-AZ option for your database. In the event of a failure, cut over to the secondary database.

A

Use RDS to host your database. Create a cross-region read replica of your database. In the event of a failure, promote the read replica to be a standalone database. Send new reads and writes to this database.

This meets both the Recovery Time Objective and the Recovery Point Objective. The read replica in the secondary Region is kept up to date through asynchronous replication, which typically stays well within the 15-minute RPO, and promoting the replica to a standalone database can be completed within the 1-hour RTO. https://aws.amazon.com/rds/features/read-replicas/
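A sketch of the two API calls involved, using boto3; the instance identifiers, Regions, and instance class are placeholders. The replica is created from the DR Region, and the promotion call is what you run during a failover.

```python
import boto3

# Run replica creation from the DR Region, referencing the source database by ARN.
rds_dr = boto3.client("rds", region_name="us-west-2")

rds_dr.create_db_instance_read_replica(
    DBInstanceIdentifier="orders-db-replica",
    SourceDBInstanceIdentifier="arn:aws:rds:us-east-1:111122223333:db:orders-db",
    SourceRegion="us-east-1",  # lets boto3 handle the cross-Region copy details
    DBInstanceClass="db.r6g.large",
)

# During a disaster, promote the replica to a standalone, writable database,
# then repoint the application's connection string at it.
# rds_dr.promote_read_replica(DBInstanceIdentifier="orders-db-replica")
```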

12
Q

An application is hosted on an EC2 instance in a VPC. The instance is in a subnet in the VPC, and the instance has a public IP address. There is also an internet gateway and a security group with the proper ingress configured. But your testers are unable to access the instance from the Internet. What could be the problem?

A NAT gateway needs to be configured.

Add a route to the route table, from the subnet containing the instance, to the Internet Gateway.

A virtual private gateway needs to be configured.

Make sure the instance has a private IP address.

A

Add a route to the route table, from the subnet containing the instance, to the Internet Gateway.

Correct. The question doesn’t state if the subnet containing the instance is public or private. An internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between your VPC and the internet. To enable access to or from the internet for instances in a subnet in a VPC, you must do the following:

Attach an internet gateway to your VPC.
Add a route to your subnet’s route table that directs internet-bound traffic to the internet gateway. If a subnet is associated with a route table that has a route to an internet gateway, it’s known as a public subnet. If a subnet is associated with a route table that does not have a route to an internet gateway, it’s known as a private subnet.
Ensure that instances in your subnet have a globally unique IP address (public IPv4 address, Elastic IP address, or IPv6 address).
Ensure that your network access control lists and security group rules allow the relevant traffic to flow to and from your instance.
In your subnet route table, you can specify a route for the internet gateway to all destinations not explicitly known to the route table (0.0.0.0/0 for IPv4 or ::/0 for IPv6). Alternatively, you can scope the route to a narrower range of IP addresses, for example, the public IPv4 addresses of your company's public endpoints outside of AWS, or the Elastic IP addresses of other Amazon EC2 instances outside your VPC.

To enable communication over the internet for IPv4, your instance must have a public IPv4 address or an Elastic IP address that's associated with a private IPv4 address on your instance. Your instance is only aware of the private (internal) IP address space defined within the VPC and subnet. The internet gateway logically provides the one-to-one NAT on behalf of your instance, so that when traffic leaves your VPC subnet and goes to the internet, the reply address field is set to the public IPv4 address or Elastic IP address of your instance and not its private IP address. Conversely, traffic that's destined for the public IPv4 address or Elastic IP address of your instance has its destination address translated into the instance's private IPv4 address before the traffic is delivered to the VPC.

To enable communication over the internet for IPv6, your VPC and subnet must have an associated IPv6 CIDR block, and your instance must be assigned an IPv6 address from the range of the subnet. IPv6 addresses are globally unique, and therefore public by default.

https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Internet_Gateway.html
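The fix itself is small. A boto3 sketch, assuming the internet gateway already exists but the subnet's route table is missing the default route (all IDs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholder IDs for your existing VPC resources.
VPC_ID = "vpc-0abc1234"
IGW_ID = "igw-0def5678"
ROUTE_TABLE_ID = "rtb-0123abcd"  # route table associated with the instance's subnet

# If the gateway were not yet attached, attach it first.
# ec2.attach_internet_gateway(InternetGatewayId=IGW_ID, VpcId=VPC_ID)

# Add the missing default route so internet-bound traffic reaches the internet gateway.
ec2.create_route(
    RouteTableId=ROUTE_TABLE_ID,
    DestinationCidrBlock="0.0.0.0/0",
    GatewayId=IGW_ID,
)
```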

13
Q

A small startup company has begun using AWS for all of its IT infrastructure. The company has one AWS Solutions Architect and the demands for their time are overwhelming. The software team has been given permission to deploy their Python and PHP applications on their own. They would like to deploy these applications without having to worry about the underlying infrastructure. Which AWS service would they use for deployments?

CloudFormation

Elastic Beanstalk

CloudFront

CodeDeploy

A

Elastic Beanstalk

With Elastic Beanstalk, you can quickly deploy and manage applications in the AWS Cloud without having to learn about the infrastructure that runs those applications. Elastic Beanstalk reduces management complexity without restricting choice or control. You simply upload your application, and Elastic Beanstalk automatically handles the details of capacity provisioning, load balancing, scaling, and application health monitoring. Elastic Beanstalk supports applications developed in Go, Java, .NET, Node.js, PHP, Python, and Ruby. When you deploy your application, Elastic Beanstalk builds the selected supported platform version and provisions one or more AWS resources, such as Amazon EC2 instances, to run your application. https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/Welcome.html
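For example, a hedged boto3 sketch of deploying a Python application version to a new environment; the application name, S3 location of the source bundle, and the solution stack string are placeholders (use list_available_solution_stacks to find the current Python platform name).

```python
import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

# Register the application and a version built from a zipped source bundle in S3 (placeholders).
eb.create_application(ApplicationName="reporting-api")
eb.create_application_version(
    ApplicationName="reporting-api",
    VersionLabel="v1",
    SourceBundle={"S3Bucket": "example-deploy-artifacts", "S3Key": "reporting-api-v1.zip"},
)

# Launch the environment; the solution stack name is a placeholder, so confirm it with
# eb.list_available_solution_stacks() before running this.
eb.create_environment(
    ApplicationName="reporting-api",
    EnvironmentName="reporting-api-prod",
    VersionLabel="v1",
    SolutionStackName="64bit Amazon Linux 2023 v4.0.0 running Python 3.11",
)
```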

14
Q

A software company has developed a social gaming application that leverages EC2 web servers with Amazon DynamoDB to store player data, session history, and leaderboards for a huge number of concurrent users. The DynamoDB table has pre-configured read and write capacity units. Users have been reporting slowdown issues, and an analysis has revealed that the application requires response times in microseconds for optimal performance. What step can you take to enable this application to handle read-heavy or bursty workloads, while delivering the fastest possible response time for eventually consistent read operations?

Add a load balancer in front of the EC2 web servers to decouple your application requests synchronously, improving performance for read-heavy and bursty workloads.

Implement in-memory acceleration with DynamoDB Accelerator (DAX).

Deploy Amazon CloudFront to your architecture, so you can cache common Amazon DynamoDB queries and reduce response time to microseconds.

Configure Amazon SQS to queue requests that could be lost and improve the application response time.

A

Implement in-memory acceleration with DynamoDB Accelerator (DAX).

DAX is a DynamoDB-compatible caching service that enables you to benefit from fast in-memory performance for demanding applications. As an in-memory cache, DAX reduces the response times of eventually consistent read workloads by an order of magnitude from single-digit milliseconds to microseconds. AWS also recommends DAX for read-heavy or bursty workloads, since it provides increased throughput and potential operational cost savings by reducing the need to overprovision read capacity units. AWS Documentation: Use Cases for DAX.
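A hedged sketch of the application-side change, assuming the amazon-dax-client Python package and an existing table (the cluster endpoint and table/key names are placeholders): reads go through the DAX cache while the DynamoDB code stays essentially unchanged.

```python
from amazondax import AmazonDaxClient  # pip install amazon-dax-client (assumed)

# Placeholder cluster endpoint from the DAX console.
DAX_ENDPOINT = "daxs://bidding-dax.abc123.dax-clusters.us-east-1.amazonaws.com"

# Drop-in replacement for the boto3 DynamoDB resource: same Table/get_item interface.
dax = AmazonDaxClient.resource(endpoint_url=DAX_ENDPOINT)
leaderboard = dax.Table("Leaderboard")

# Eventually consistent read, served from the in-memory cache when the item is warm.
item = leaderboard.get_item(Key={"game_id": "g-100", "player_id": "p-42"})
print(item.get("Item"))
```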

15
Q

After an IT Steering Committee meeting you have been put in charge of configuring a hybrid environment for the company’s compute resources. You weigh the pros and cons of various technologies based on the requirements you are given. Your primary requirement is the necessity for a private, dedicated connection, which bypasses the Internet and can provide throughput of 10 Gbps. Which option will you select?

AWS VPN

AWS Direct Gateway

VPC Peering

AWS Direct Connect

A

AWS Direct Connect

Correct. AWS Direct Connect can reduce network costs, increase bandwidth throughput, and provide a more consistent network experience than internet-based connections. It uses industry-standard 802.1q VLANs to connect to Amazon VPC using private IP addresses. You can choose from an ecosystem of WAN service providers for integrating your AWS Direct Connect endpoint in an AWS Direct Connect location with your remote networks. AWS Direct Connect lets you establish 1 Gbps or 10 Gbps dedicated network connections (or multiple connections) between your network and one of the AWS Direct Connect locations. You can also work with your provider to create sub-1 Gbps connections or use a link aggregation group (LAG) to aggregate multiple 1 Gbps or 10 Gbps connections at a single AWS Direct Connect endpoint, allowing you to treat them as a single, managed connection. A Direct Connect gateway is a globally available resource to enable connections to multiple Amazon VPCs across different regions or AWS accounts. https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/aws-direct-connect.html
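A minimal boto3 sketch of requesting the dedicated connection (the location code and connection name are placeholders; the physical cross-connect is then completed with your colocation or WAN provider):

```python
import boto3

dx = boto3.client("directconnect", region_name="us-east-1")

# Request a 10 Gbps dedicated connection at a Direct Connect location (placeholder code).
connection = dx.create_connection(
    location="EqDC2",
    bandwidth="10Gbps",
    connectionName="corp-dc-to-aws-10g",
)
print(connection["connectionId"], connection["connectionState"])
```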
