Auto Scaling (SA Associate) Flashcards

1
Q

A commercial bank has a forex trading application. They created an Auto Scaling group of EC2 instances that allow the bank to cope with the current traffic and achieve cost-efficiency. They want the Auto Scaling group to behave in such a way that it will follow a predefined set of parameters before it scales down the number of EC2 instances, which protects the system from unintended slowdown or unavailability.

Which of the following statements are true regarding the cooldown period? (Select TWO.)

  1. It ensures that the Auto Scaling group launches or terminates additional EC2 instances without any downtime.
  2. Its default value is 600 seconds.
  3. It ensures that the Auto Scaling group does not launch or terminate additional EC2 instances before the previous scaling activity takes effect.
  4. It ensures that before the Auto Scaling group scales out, the EC2 instances have ample time to cool down.
  5. Its default value is 300 seconds.
A
  1. It ensures that the Auto Scaling group does not launch or terminate additional EC2 instances before the previous scaling activity takes effect.
  2. Its default value is 300 seconds.

In Auto Scaling, the following statements are correct regarding the cooldown period:

It ensures that the Auto Scaling group does not launch or terminate additional EC2 instances before the previous scaling activity takes effect.
Its default value is 300 seconds.
It is a configurable setting for your Auto Scaling group.
The following options are incorrect:

– It ensures that before the Auto Scaling group scales out, the EC2 instances have ample time to cool down.

– It ensures that the Auto Scaling group launches or terminates additional EC2 instances without any downtime.

– Its default value is 600 seconds.

These statements are inaccurate and don’t depict what the word “cooldown” actually means for Auto Scaling. The cooldown period is a configurable setting for your Auto Scaling group that helps to ensure that it doesn’t launch or terminate additional instances before the previous scaling activity takes effect. After the Auto Scaling group dynamically scales using a simple scaling policy, it waits for the cooldown period to complete before resuming scaling activities.
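As a sketch of how the cooldown is configured and applied (the field names follow the EC2 Auto Scaling API; the group name is hypothetical):

```python
# Parameters for an UpdateAutoScalingGroup call, shown as a plain dict.
# DefaultCooldown is in seconds; 300 is the default if you never set it.
asg_params = {
    "AutoScalingGroupName": "forex-trading-asg",  # hypothetical name
    "DefaultCooldown": 300,  # wait 300s after a simple-scaling activity
}

def cooldown_elapsed(seconds_since_last_activity, cooldown=300):
    """Simple scaling resumes only after the cooldown period completes."""
    return seconds_since_last_activity >= cooldown

print(cooldown_elapsed(120))  # still cooling down
print(cooldown_elapsed(400))  # ready to scale again
```

Note that the cooldown applies to simple scaling policies; step and target tracking policies use an instance warm-up instead.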

2
Q

A suite of web applications is hosted in an Auto Scaling group of EC2 instances across three Availability Zones and is configured with default settings. An Application Load Balancer forwards each request to the respective target group based on the URL path. The scale-in policy has been triggered due to the low volume of incoming traffic to the application.

Which EC2 instance will be the first one to be terminated by your Auto Scaling group?

  1. The EC2 instance which has been running for the longest time
  2. The instance will be randomly selected by the Auto Scaling group
  3. The EC2 instance which has the least number of user sessions
  4. The EC2 instance launched from the oldest launch template.
A
  1. The EC2 instance launched from the oldest launch template.

The default termination policy is designed to help ensure that your network architecture spans Availability Zones evenly. With the default termination policy, the behavior of the Auto Scaling group is as follows:

  1. If there are instances in multiple Availability Zones, choose the Availability Zone with the most instances and at least one instance that is not protected from scale in. If there is more than one Availability Zone with this number of instances, choose the Availability Zone with the instances that use the oldest launch template.
  2. Determine which unprotected instances in the selected Availability Zone use the oldest launch template. If there is one such instance, terminate it.
  3. If there are multiple instances to terminate based on the above criteria, determine which unprotected instances are closest to the next billing hour. (This helps you maximize the use of your EC2 instances and manage your Amazon EC2 usage costs.) If there is one such instance, terminate it.
  4. If there is more than one unprotected instance closest to the next billing hour, choose one of these instances at random.
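The four steps above can be sketched as a small selection function (instances are plain dicts; "lt_age" ranks launch-template age, "to_billing" is seconds to the next billing hour, and all values are illustrative):

```python
# A minimal simulation of the default termination policy described above.
import random

def pick_instance_to_terminate(instances):
    unprotected = [i for i in instances if not i["protected"]]
    # 1. Prefer the Availability Zone with the most unprotected instances.
    az_counts = {}
    for i in unprotected:
        az_counts[i["az"]] = az_counts.get(i["az"], 0) + 1
    busiest = max(az_counts.values())
    candidates = [i for i in unprotected if az_counts[i["az"]] == busiest]
    # 2. Among those, prefer the oldest launch template (higher lt_age = older).
    oldest = max(i["lt_age"] for i in candidates)
    candidates = [i for i in candidates if i["lt_age"] == oldest]
    # 3. Then the instance closest to the next billing hour.
    closest = min(i["to_billing"] for i in candidates)
    candidates = [i for i in candidates if i["to_billing"] == closest]
    # 4. Finally, pick at random among the remaining ties.
    return random.choice(candidates)

fleet = [
    {"id": "i-a", "az": "us-east-1a", "protected": False, "lt_age": 2, "to_billing": 900},
    {"id": "i-b", "az": "us-east-1a", "protected": False, "lt_age": 1, "to_billing": 100},
    {"id": "i-c", "az": "us-east-1b", "protected": False, "lt_age": 2, "to_billing": 50},
]
print(pick_instance_to_terminate(fleet)["id"])  # -> i-a (busiest AZ, oldest template)
```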

The option that says: The EC2 instance which has the least number of user sessions is incorrect because the number of user sessions is not a factor considered by Amazon EC2 Auto Scaling groups when deciding which instances to terminate during a scale-in event.

The option that says: The EC2 instance which has been running for the longest time is incorrect because the duration for which an EC2 instance has been running is not a factor considered by Amazon EC2 Auto Scaling groups when deciding which instances to terminate during a scale-in event.

The option that says: The instance will be randomly selected by the Auto Scaling group is incorrect because Amazon EC2 Auto Scaling groups do not pick an instance at random as the first step. Random selection is only used as a final tie-breaker after the Availability Zone, launch template, and billing criteria described above have been applied.

3
Q

A tech company is currently using Auto Scaling for their web application. A new AMI now needs to be used for launching a fleet of EC2 instances. Which of the following changes needs to be done?

  1. Create a new target group.
  2. Create a new target group and launch template.
  3. Create a new launch template.
  4. Do nothing. You can start directly launching EC2 instances in the Auto Scaling group with the same launch template.
A
  1. Create a new launch template.

A launch template is a template that an Auto Scaling group uses to launch EC2 instances. When you create a launch template, you specify information for the instances, such as the ID of the Amazon Machine Image (AMI), the instance type, a key pair, one or more security groups, and a block device mapping. If you’ve launched an EC2 instance before, you specified the same information in order to launch the instance.

You can use the same launch template with multiple Auto Scaling groups. However, an Auto Scaling group references only one launch template at a time, and an existing launch template version can't be modified after it has been created. Therefore, if you want to change the AMI for an Auto Scaling group, you must create a new launch template (or a new version of the existing one) and then update your Auto Scaling group to use it.

For this scenario, you have to create a new launch template. Remember that you can't modify an existing launch template version after you've created it.
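A sketch of this change, expressed as the request parameters you would pass to the EC2 and Auto Scaling APIs (the AMI ID, names, and security group ID are placeholders):

```python
# CreateLaunchTemplate parameters for a template that uses the new AMI.
new_template = {
    "LaunchTemplateName": "web-fleet-v2",            # hypothetical name
    "LaunchTemplateData": {
        "ImageId": "ami-0123456789abcdef0",          # the new AMI (placeholder)
        "InstanceType": "t3.medium",
        "KeyName": "web-fleet-key",
        "SecurityGroupIds": ["sg-0123456789abcdef0"],
    },
}

# UpdateAutoScalingGroup parameters pointing the group at the new template.
asg_update = {
    "AutoScalingGroupName": "web-fleet-asg",         # hypothetical name
    "LaunchTemplate": {"LaunchTemplateName": "web-fleet-v2", "Version": "$Latest"},
}
print(asg_update["LaunchTemplate"]["Version"])
```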

Hence, the correct answer is: Create a new launch template.

The option that says: Do nothing. You can start directly launching EC2 instances in the Auto Scaling group with the same launch template is incorrect because what you are trying to achieve is to change the AMI being used by your fleet of EC2 instances. Therefore, you need to change the launch template to update what your instances are using.

The options that say: Create a new target group and Create a new target group and launch template are both incorrect because you only want to change the AMI used by your instances, not the instances themselves. Target groups are used by Elastic Load Balancers, not by Auto Scaling groups, and the scenario didn't mention that the architecture has a load balancer. Therefore, you should be updating your launch template, not the target group.

4
Q

A major TV network has a web application running on eight Amazon EC2 T3 instances behind an Application Load Balancer. The number of requests that the application processes is consistent and does not experience spikes. A Solutions Architect must configure an Auto Scaling group for the instances to ensure that the application is running at all times.

Which of the following options can satisfy the given requirements?

  1. Deploy four EC2 instances with Auto Scaling in one Availability Zone and four in another availability zone in the same region behind an Amazon Elastic Load Balancer.
  2. Deploy two EC2 instances with Auto Scaling in four regions behind an Amazon Elastic Load Balancer.
  3. Deploy eight EC2 instances with Auto Scaling in one Availability Zone behind an Amazon Elastic Load Balancer.
  4. Deploy four EC2 instances with Auto Scaling in one region and four in another region behind an Amazon Elastic Load Balancer.
A
  1. Deploy four EC2 instances with Auto Scaling in one Availability Zone and four in another availability zone in the same region behind an Amazon Elastic Load Balancer.

The best option is to deploy four EC2 instances in one Availability Zone and four in another Availability Zone in the same region behind an Amazon Elastic Load Balancer. In this way, if one Availability Zone goes down, there is still another Availability Zone that can accommodate the traffic.
When the first AZ goes down, the second AZ will initially have only 4 EC2 instances, which Auto Scaling will eventually scale back up to 8 instances.

Running the full workload on only 4 servers might cause some degradation of the service, but not a total outage, since there are still instances handling the requests. Depending on the scaling configuration of your Auto Scaling group, the additional 4 EC2 instances can be launched in a matter of minutes.

T3 instances also have burstable performance: they can temporarily go beyond their baseline compute capacity when the workload requires it, so the remaining 4 servers can absorb the extra load for a short period of time. This elasticity and scalability is an advantage of cloud computing over a traditional on-premises architecture.

Take note that Auto Scaling will launch additional EC2 instances to the remaining Availability Zone/s in the event of an Availability Zone outage in the region. Hence, the correct answer is the option that says: Deploy four EC2 instances with Auto Scaling in one Availability Zone and four in another availability zone in the same region behind an Amazon Elastic Load Balancer.
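As a sketch, the chosen deployment can be expressed as CreateAutoScalingGroup parameters (the group name and subnet IDs are placeholders; each subnet is in a different Availability Zone of the same region):

```python
# CreateAutoScalingGroup parameters for 8 instances spread across two AZs.
asg_params = {
    "AutoScalingGroupName": "tv-network-web-asg",  # hypothetical name
    "MinSize": 8,
    "MaxSize": 8,
    "DesiredCapacity": 8,
    # One subnet per AZ; Auto Scaling balances instances across them.
    "VPCZoneIdentifier": "subnet-az1-placeholder,subnet-az2-placeholder",
}
subnet_count = len(asg_params["VPCZoneIdentifier"].split(","))
per_az = asg_params["DesiredCapacity"] // subnet_count
print(per_az)  # -> 4 instances per AZ
```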

The option that says: Deploy eight EC2 instances with Auto Scaling in one Availability Zone behind an Amazon Elastic Load Balancer is incorrect because this architecture is not highly available. If that Availability Zone goes down, then your web application will be unreachable.

The options that say: Deploy four EC2 instances with Auto Scaling in one region and four in another region behind an Amazon Elastic Load Balancer and Deploy two EC2 instances with Auto Scaling in four regions behind an Amazon Elastic Load Balancer are incorrect because the ELB is designed to only run in one region and not across multiple regions.

5
Q

An application is hosted in an Auto Scaling group of EC2 instances. To improve the monitoring process, you have to configure the current capacity to increase or decrease based on a set of scaling adjustments. This should be done by specifying the scaling metrics and threshold values for the CloudWatch alarms that trigger the scaling process.

Which of the following is the most suitable type of scaling policy that you should use?

  1. Target tracking scaling
  2. Step scaling
  3. Scheduled Scaling
  4. Simple scaling
A
  1. Step scaling

With step scaling, you choose scaling metrics and threshold values for the CloudWatch alarms that trigger the scaling process as well as define how your scalable target should be scaled when a threshold is in breach for a specified number of evaluation periods. Step scaling policies increase or decrease the current capacity of a scalable target based on a set of scaling adjustments, known as step adjustments. The adjustments vary based on the size of the alarm breach. After a scaling activity is started, the policy continues to respond to additional alarms, even while a scaling activity is in progress. Therefore, all alarms that are breached are evaluated by Application Auto Scaling as it receives the alarm messages.

When you configure dynamic scaling, you must define how to scale in response to changing demand. For example, you have a web application that currently runs on two instances and you want the CPU utilization of the Auto Scaling group to stay at around 50 percent when the load on the application changes. This gives you extra capacity to handle traffic spikes without maintaining an excessive amount of idle resources. You can configure your Auto Scaling group to scale automatically to meet this need. The policy type determines how the scaling action is performed.

Amazon EC2 Auto Scaling supports the following types of scaling policies:

Target tracking scaling – Increase or decrease the current capacity of the group based on a target value for a specific metric. This is similar to the way that your thermostat maintains the temperature of your home – you select a temperature and the thermostat does the rest.

Step scaling – Increase or decrease the current capacity of the group based on a set of scaling adjustments, known as step adjustments, that vary based on the size of the alarm breach.

Simple scaling – Increase or decrease the current capacity of the group based on a single scaling adjustment.

If you are scaling based on a utilization metric that increases or decreases proportionally to the number of instances in an Auto Scaling group, then it is recommended that you use target tracking scaling policies. Otherwise, it is better to use step scaling policies instead.
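The step-adjustment idea can be sketched as a small lookup: each step maps a range of alarm breach sizes to a capacity change. The thresholds and adjustments below are illustrative, not prescribed values:

```python
# Step adjustments relative to an alarm threshold of 50% CPU.
step_adjustments = [
    # (lower_bound, upper_bound, capacity_change) relative to the threshold
    (0, 10, 1),      # CPU 50-60%: add 1 instance
    (10, 20, 2),     # CPU 60-70%: add 2 instances
    (20, None, 3),   # CPU above 70%: add 3 instances
]

def adjustment_for(metric_value, threshold=50):
    """Return the capacity change for a given metric value."""
    breach = metric_value - threshold
    for lower, upper, change in step_adjustments:
        if breach >= lower and (upper is None or breach < upper):
            return change
    return 0  # alarm not in breach; no scaling action

print(adjustment_for(55))  # -> 1
print(adjustment_for(75))  # -> 3
```

The larger the alarm breach, the larger the adjustment, which is exactly what distinguishes step scaling from simple scaling's single adjustment.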

Hence, the correct answer in this scenario is Step Scaling.

Target tracking scaling is incorrect because the target tracking scaling policy increases or decreases the current capacity of the group based on a target value for a specific metric instead of a set of scaling adjustments.

Simple scaling is incorrect because the simple scaling policy increases or decreases the current capacity of the group based on a single scaling adjustment instead of a set of scaling adjustments.

Scheduled Scaling is incorrect because the scheduled scaling policy is based on a schedule that allows you to set your own scaling schedule for predictable load changes. This is not considered as one of the types of dynamic scaling.

6
Q

A web application is hosted in an Auto Scaling group of EC2 instances in AWS. The application receives a burst of traffic every morning, and many users are complaining about request timeouts. An EC2 instance takes 1 minute to boot up before it can respond to user requests. The cloud architecture must be redesigned to better respond to the changing traffic of the application.

How should the Solutions Architect redesign the architecture?

  1. Create a CloudFront distribution and set the EC2 instance as the origin.
  2. Create a step scaling policy and configure an instance warm-up time condition.
  3. Create a new launch template and upgrade the size of the instance.
  4. Create a Network Load Balancer with slow-start mode.
A
  1. Create a step scaling policy and configure an instance warm-up time condition.

Amazon EC2 Auto Scaling helps you maintain application availability and allows you to automatically add or remove EC2 instances according to conditions you define. You can use the fleet management features of EC2 Auto Scaling to maintain the health and availability of your fleet. You can also use the dynamic and predictive scaling features of EC2 Auto Scaling to add or remove EC2 instances. Dynamic scaling responds to changing demand and predictive scaling automatically schedules the right number of EC2 instances based on predicted demand. Dynamic scaling and predictive scaling can be used together to scale faster.

Step scaling applies “step adjustments” which means you can set multiple actions to vary the scaling depending on the size of the alarm breach. When you create a step scaling policy, you can also specify the number of seconds that it takes for a newly launched instance to warm up.
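A sketch of such a policy as PutScalingPolicy parameters (names are placeholders; the 60-second warm-up matches the 1-minute boot time in the scenario):

```python
# Step scaling policy with an instance warm-up, shown as a plain dict.
policy_params = {
    "AutoScalingGroupName": "web-burst-asg",   # hypothetical name
    "PolicyName": "cpu-step-scale-out",
    "PolicyType": "StepScaling",
    "AdjustmentType": "ChangeInCapacity",
    # New instances count toward metrics only after warming up:
    "EstimatedInstanceWarmup": 60,
    "StepAdjustments": [
        {"MetricIntervalLowerBound": 0, "ScalingAdjustment": 2},
    ],
}
print(policy_params["EstimatedInstanceWarmup"])
```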

Hence, the correct answer is: Create a step scaling policy and configure an instance warm-up time condition.

The option that says: Create a Network Load Balancer with slow start mode is incorrect because Network Load Balancer does not support slow start mode. If you need to enable slow start mode, you should use Application Load Balancer.

The option that says: Create a new launch template and upgrade the size of the instance is incorrect because a larger instance does not always improve the boot time. Instead of upgrading the instance, you should create a step scaling policy and add a warm-up time.

The option that says: Create a CloudFront distribution and set the EC2 instance as the origin is incorrect because this approach only resolves the traffic latency. Take note that the requirement in the scenario is to resolve the timeout issue and not the traffic latency.

7
Q

A commercial bank has designed its next-generation online banking platform to use a distributed system architecture. As their Software Architect, you have to ensure that their architecture is highly scalable, yet still cost-effective.

Which of the following will provide the most suitable solution for this scenario?

  1. Launch multiple EC2 instances behind an Application Load Balancer to host your application services and SNS which will act as a highly-scalable buffer that stores messages as they travel between distributed applications.
  2. Launch an Auto-Scaling group of EC2 instances to host your application services and an SQS queue. Include an Auto Scaling trigger to watch the SQS queue size which will either scale in or scale out the number of EC2 instances based on the queue.
  3. Launch multiple EC2 instances behind an Application Load Balancer to host your application services, and Step Functions which will act as a highly-scalable buffer that stores messages as they travel between distributed applications.
  4. Launch multiple On-Demand EC2 instances to host your application services and an SQS queue which will act as a highly-scalable buffer that stores messages as they travel between distributed applications.
A
  1. Launch an Auto-Scaling group of EC2 instances to host your application services and an SQS queue. Include an Auto Scaling trigger to watch the SQS queue size which will either scale in or scale out the number of EC2 instances based on the queue.

An Auto Scaling group allows you to maintain the health and availability of your application by scaling the number of EC2 instances up or down based on demand. This is particularly useful in a banking platform where traffic can be highly variable. The group can scale out (add more instances) during high-demand periods and scale in (remove instances) during low demand periods, ensuring optimal resource utilization and cost-effectiveness.

Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. In the context of a banking platform, SQS can act as a buffer for messages as they travel between distributed applications, ensuring that no message is lost and that the order of transactions is maintained.

There are three main parts in a distributed messaging system: the components of your distributed system, which can be hosted on EC2 instances; your queue (distributed on Amazon SQS servers); and the messages in the queue.
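A common way to drive the scaling trigger is the backlog-per-instance pattern: divide the queue depth by the number of messages one instance can work through within your latency target, and scale to that capacity. The numbers below are illustrative:

```python
# Sketch of queue-based scaling: derive a desired capacity from the
# SQS queue depth and an acceptable backlog per instance.
import math

def desired_capacity(queue_depth, acceptable_backlog_per_instance):
    """Instances needed to keep the backlog within the acceptable level."""
    return max(1, math.ceil(queue_depth / acceptable_backlog_per_instance))

print(desired_capacity(1200, 100))  # -> 12 instances (scale out)
print(desired_capacity(40, 100))    # -> 1 instance (scale in)
```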

Hence, the correct answer is: Launch an Auto-Scaling group of EC2 instances to host your application services and an SQS queue. Include an Auto Scaling trigger to watch the SQS queue size which will either scale in or scale out the number of EC2 instances based on the queue.

The option that says: Launch multiple EC2 instances behind an Application Load Balancer to host your application services and SNS which will act as a highly-scalable buffer that stores messages as they travel between distributed applications is incorrect because Amazon SNS is designed for pub-sub (publish-subscribe) messaging and mobile notifications. It is not the best choice for a banking platform where transactions and data consistency are critical.

The option that says: Launch multiple EC2 instances behind an Application Load Balancer to host your application services, and Step Functions which will act as a highly-scalable buffer that stores messages as they travel between distributed applications is incorrect because Step Functions is primarily designed for orchestrating complex workflows and business processes, rather than acting as a buffer for messages. For a banking platform, a more suitable service might be one that provides reliable message queuing and delivery, such as Amazon SQS.

The option that says: Launch multiple On-Demand EC2 instances to host your application services and an SQS queue which will act as a highly-scalable buffer that stores messages as they travel between distributed applications is incorrect because On-Demand EC2 instances are expensive and might not be the most cost-effective solution for a banking platform that needs to be highly scalable. SQS (Simple Queue Service) is a good choice for buffering messages between distributed applications, but the cost of running multiple On-Demand EC2 instances might outweigh the benefits.

8
Q

A tech company has a CRM application hosted on an Auto Scaling group of On-Demand EC2 instances with different instance types and sizes. The application is extensively used during office hours from 9 in the morning to 5 in the afternoon. Their users are complaining that the performance of the application is slow during the start of the day but then works normally after a couple of hours.

Which of the following is the MOST operationally efficient solution to implement to ensure the application works properly at the beginning of the day?

  1. Configure a Predictive scaling policy for the Auto Scaling group to automatically adjust the number of Amazon EC2 instances
  2. Configure a Dynamic scaling policy for the Auto Scaling group to launch new instances based on the CPU utilization.
  3. Configure a Dynamic scaling policy for the Auto Scaling group to launch new instances based on the Memory utilization.
  4. Configure a Scheduled scaling policy for the Auto Scaling group to launch new instances before the start of the day.
A
  1. Configure a Scheduled scaling policy for the Auto Scaling group to launch new instances before the start of the day.

Scaling based on a schedule allows you to scale your application in response to predictable load changes. For example, every week the traffic to your web application starts to increase on Wednesday, remains high on Thursday, and starts to decrease on Friday. You can plan your scaling activities based on the predictable traffic patterns of your web application.

To configure your Auto Scaling group to scale based on a schedule, you create a scheduled action. The scheduled action tells Amazon EC2 Auto Scaling to perform a scaling action at specified times. To create a scheduled scaling action, you specify the start time when the scaling action should take effect and the new minimum, maximum, and desired sizes for the scaling action. At the specified time, Amazon EC2 Auto Scaling updates the group with the values for minimum, maximum, and desired size specified by the scaling action. You can create scheduled actions for scaling one time only or for scaling on a recurring schedule.
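A sketch of the two scheduled actions for this scenario, as PutScheduledUpdateGroupAction parameters (the group name, sizes, and exact times are illustrative; Recurrence is a cron expression in the form minute hour day month weekday):

```python
# Scale out shortly before office hours start at 9 AM.
scale_out_action = {
    "AutoScalingGroupName": "crm-asg",        # hypothetical name
    "ScheduledActionName": "scale-out-before-office-hours",
    "Recurrence": "30 8 * * MON-FRI",         # 08:30 every weekday
    "MinSize": 6,
    "MaxSize": 12,
    "DesiredCapacity": 8,
}

# Scale back in after office hours end at 5 PM.
scale_in_action = {
    "AutoScalingGroupName": "crm-asg",
    "ScheduledActionName": "scale-in-after-office-hours",
    "Recurrence": "0 18 * * MON-FRI",         # 18:00 every weekday
    "MinSize": 2,
    "MaxSize": 12,
    "DesiredCapacity": 2,
}
print(scale_out_action["Recurrence"])
```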

Hence, configuring a Scheduled scaling policy for the Auto Scaling group to launch new instances before the start of the day is the correct answer. You need to configure a Scheduled scaling policy. This will ensure that the instances are already scaled up and ready before the start of the day since this is when the application is used the most.

The following options are both incorrect. Although these are valid solutions, it is still better to configure a Scheduled scaling policy as you already know the exact peak hours of your application. By the time either the CPU or Memory hits a peak, the application already has performance issues, so you need to ensure the scaling is done beforehand using a Scheduled scaling policy:

-Configure a Dynamic scaling policy for the Auto Scaling group to launch new instances based on the CPU utilization

-Configure a Dynamic scaling policy for the Auto Scaling group to launch new instances based on the Memory utilization

The option that says: Configure a Predictive scaling policy for the Auto Scaling group to automatically adjust the number of Amazon EC2 instances is incorrect. Although this type of scaling policy can be used in this scenario, it is not the most operationally efficient option. Take note that the scenario mentioned that the Auto Scaling group consists of Amazon EC2 instances with different instance types and sizes. Predictive scaling assumes that your Auto Scaling group is homogenous, which means that all EC2 instances are of equal capacity. The forecasted capacity can be inaccurate if you are using a variety of EC2 instance sizes and types on your Auto Scaling group.

9
Q

Your company runs a web application on a fleet of EC2 instances behind an Elastic Load Balancer (ELB). You need to ensure that the application can handle sudden traffic spikes. Which Auto Scaling policy should you use?

A. Scheduled Scaling
B. Target Tracking Scaling
C. Step Scaling
D. Predictive Scaling

A

Answer: B. Target Tracking Scaling

Explanation: Target Tracking Scaling adjusts the number of instances in your Auto Scaling group to maintain a target metric, such as CPU utilization, which is ideal for handling sudden traffic spikes.

10
Q

You have an Auto Scaling group with a minimum size of 2 instances, a desired capacity of 4 instances, and a maximum size of 6 instances. If one instance fails, what will Auto Scaling do?

A. Launch a new instance to replace the failed one.
B. Terminate another instance to maintain balance.
C. Do nothing until the desired capacity is manually adjusted.
D. Reduce the desired capacity to 3.

A

Answer: A. Launch a new instance to replace the failed one.

Explanation: Auto Scaling ensures that the desired capacity is maintained by launching a new instance to replace any failed instances.

11
Q

Your application requires a specific instance type for optimal performance, but during peak times, this instance type is not always available. How can you ensure your Auto Scaling group can still scale out?

A. Use a single instance type in the launch configuration.
B. Use multiple instance types in the launch template.
C. Increase the maximum size of the Auto Scaling group.
D. Use Spot Instances exclusively.

A

Answer: B. Use multiple instance types in the launch template.

Explanation: Using multiple instance types in the launch template allows the Auto Scaling group to choose from a variety of instance types, increasing the chances of finding available capacity.

12
Q

You need to ensure that your Auto Scaling group launches instances in multiple Availability Zones for high availability. What should you configure?

A. Set the desired capacity to the number of Availability Zones.
B. Specify multiple subnets in the Auto Scaling group.
C. Use a single subnet with multiple Availability Zones.
D. Enable cross-zone load balancing on the ELB.

A

Answer: B. Specify multiple subnets in the Auto Scaling group.

Explanation: Specifying multiple subnets in different Availability Zones ensures that instances are distributed across these zones, enhancing high availability.

13
Q

Your Auto Scaling group is not scaling out as expected during high CPU utilization. What could be a possible reason?

A. The scaling policy is set to scale in only.
B. The cooldown period is too short.
C. The CloudWatch alarm threshold is too high.
D. The Auto Scaling group is using Spot Instances.

A

Answer: C. The CloudWatch alarm threshold is too high.

Explanation: If the CloudWatch alarm threshold is set too high, the alarm may not trigger the scaling action even when CPU utilization is high.

14
Q

You want to ensure that your Auto Scaling group scales in gradually to avoid sudden drops in capacity. Which scaling policy should you use?

A. Simple Scaling
B. Step Scaling
C. Target Tracking Scaling
D. Predictive Scaling

A

Answer: B. Step Scaling

Explanation: Step Scaling allows you to define multiple steps for scaling in or out, enabling gradual adjustments to the number of instances.

15
Q

Your application runs on a mix of On-Demand and Spot Instances. How can you ensure that your Auto Scaling group maintains a balance between these instance types?

A. Use a single launch configuration.
B. Use a launch template with mixed instance policies.
C. Manually adjust the instance types.
D. Use Reserved Instances.

A

Answer: B. Use a launch template with mixed instance policies.

Explanation: A launch template with mixed instance policies allows you to specify a mix of On-Demand and Spot Instances, ensuring a balanced approach.

16
Q

You need to scale your Auto Scaling group based on the number of messages in an SQS queue. What should you configure?

A. A CloudWatch alarm based on CPU utilization.
B. A CloudWatch alarm based on the SQS queue length.
C. A scheduled scaling policy.
D. A target tracking policy based on memory usage.

A

Answer: B. A CloudWatch alarm based on the SQS queue length.

Explanation: Scaling based on the SQS queue length ensures that the number of instances adjusts according to the workload in the queue.

17
Q

Your Auto Scaling group is configured with a health check grace period of 300 seconds. What does this mean?

A. Instances are considered healthy for 300 seconds after launch.
B. Instances are terminated if unhealthy for 300 seconds.
C. Health checks are performed every 300 seconds.
D. Instances are launched every 300 seconds.

A

Answer: A. Instances are considered healthy for 300 seconds after launch.

Explanation: The health check grace period allows new instances time to start and pass initial health checks before being considered unhealthy.

18
Q

You need to ensure that your Auto Scaling group launches instances with the latest AMI. What should you do?

A. Manually update the launch configuration.
B. Use a launch template with versioning.
C. Use a scheduled scaling policy.
D. Enable automatic AMI updates.

A

Answer: B. Use a launch template with versioning.

Explanation: Launch templates with versioning allow you to specify the latest AMI version, ensuring that new instances use the most up-to-date image.
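A sketch of this pattern, as CreateLaunchTemplateVersion parameters plus an Auto Scaling group template reference (the template name and AMI ID are placeholders):

```python
# Create a new launch template version that carries the new AMI.
new_version = {
    "LaunchTemplateName": "web-fleet",                           # hypothetical
    "SourceVersion": "1",                                        # copy from v1
    "LaunchTemplateData": {"ImageId": "ami-0fedcba9876543210"},  # new AMI (placeholder)
}

# The Auto Scaling group tracks "$Latest", so new instances always
# launch from the newest template version.
asg_template_ref = {
    "LaunchTemplateName": "web-fleet",
    "Version": "$Latest",
}
print(asg_template_ref["Version"])
```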

19
Q

Your application requires a specific configuration file to be present on all instances. How can you ensure this file is available on all instances launched by your Auto Scaling group?

A. Use a custom AMI with the configuration file.
B. Use EC2 User Data to download the file at launch.
C. Manually copy the file to each instance.
D. Use an EFS file system.

A

Answer: B. Use EC2 User Data to download the file at launch.

Explanation: EC2 User Data scripts can be used to download and configure files on instances at launch, ensuring consistency.

20
Q

You need to ensure that your Auto Scaling group scales out when the average CPU utilization exceeds 70% for 5 minutes. What should you configure?

A. A simple scaling policy.
B. A step scaling policy.
C. A target tracking policy.
D. A scheduled scaling policy.

A

Answer: C. A target tracking policy.

Explanation: A target tracking policy automatically adjusts capacity to keep a metric, such as average CPU utilization, at the specified target value; Auto Scaling creates and manages the underlying CloudWatch alarms for you.
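A sketch of such a policy in the shape accepted by put_scaling_policy (policy name is illustrative):

```python
# Target tracking policy keeping average group CPU at 70%: the group adds
# instances when average CPU rises above the target and removes them when
# it falls below, without you managing CloudWatch alarms directly.
policy = {
    "PolicyName": "cpu-70",                # illustrative
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingConfiguration": {
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 70.0,
    },
}
```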

21
Q

Your Auto Scaling group is configured to use Spot Instances, but you want to ensure that at least 2 instances are always On-Demand. How can you achieve this?

A. Use a single launch configuration with On-Demand instances.
B. Use a launch template with mixed instance policies and set the On-Demand base capacity to 2.
C. Manually launch On-Demand instances.
D. Use Reserved Instances for the base capacity.

A

Answer: B. Use a launch template with mixed instance policies and set the On-Demand base capacity to 2.

Explanation: Mixed instance policies in a launch template allow you to specify a base capacity of On-Demand instances, ensuring a minimum number of On-Demand instances.
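A sketch of the relevant MixedInstancesPolicy fields (template name and values are illustrative):

```python
# The first 2 instances are always On-Demand (OnDemandBaseCapacity);
# capacity beyond that is split per OnDemandPercentageAboveBaseCapacity
# (0 here means everything above the base is Spot).
mixed_policy = {
    "LaunchTemplate": {
        "LaunchTemplateSpecification": {
            "LaunchTemplateName": "web-template",  # illustrative
            "Version": "$Latest",
        },
    },
    "InstancesDistribution": {
        "OnDemandBaseCapacity": 2,
        "OnDemandPercentageAboveBaseCapacity": 0,
        "SpotAllocationStrategy": "capacity-optimized",
    },
}
```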

22
Q

You need to ensure that your Auto Scaling group scales in only during off-peak hours. What should you configure?

A. A simple scaling policy.
B. A step scaling policy.
C. A target tracking policy.
D. A scheduled scaling policy.

A

Answer: D. A scheduled scaling policy.

Explanation: Scheduled scaling policies allow you to define specific times for scaling actions, such as scaling in during off-peak hours.
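A sketch of a scheduled action in the shape of put_scheduled_update_group_action (names, times, and sizes are illustrative):

```python
# Scheduled action lowering minimum capacity every night at 22:00 UTC,
# letting the group scale in during off-peak hours.
scheduled_action = {
    "AutoScalingGroupName": "web-asg",          # illustrative
    "ScheduledActionName": "off-peak-scale-in",
    "Recurrence": "0 22 * * *",                 # cron: daily at 22:00 UTC
    "MinSize": 1,
    "DesiredCapacity": 2,
}
```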

23
Q

Your application requires instances to be evenly distributed across multiple Availability Zones. How can you ensure this with your Auto Scaling group?

A. Set the desired capacity to the number of Availability Zones.
B. Use a launch configuration with multiple subnets.
C. Enable cross-zone load balancing on the ELB.
D. Set the Auto Scaling group to balance across Availability Zones.

A

Answer: D. Set the Auto Scaling group to balance across Availability Zones.

Explanation: An Auto Scaling group automatically attempts to balance instances evenly across the Availability Zones it is configured to use, keeping the application highly available even if one zone fails.
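A sketch of how the AZ spread is expressed: VPCZoneIdentifier lists subnets from different Availability Zones, and the group balances instances across them (subnet IDs are illustrative):

```python
# Spreading a group across AZs is done by listing subnets that sit in
# different Availability Zones; the group then balances instances evenly.
asg_network = {
    "AutoScalingGroupName": "web-asg",  # illustrative
    "VPCZoneIdentifier": ",".join([
        "subnet-aaaa1111",  # e.g. in us-east-1a (illustrative IDs)
        "subnet-bbbb2222",  # e.g. in us-east-1b
        "subnet-cccc3333",  # e.g. in us-east-1c
    ]),
}
```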

24
Q

You need to ensure that your Auto Scaling group scales out based on memory usage. What should you configure?

A. A CloudWatch alarm based on CPU utilization.
B. A CloudWatch alarm based on memory usage.
C. A scheduled scaling policy.
D. A target tracking policy based on memory usage.

A

Answer: B. A CloudWatch alarm based on memory usage.

Explanation: Memory usage is not a metric that CloudWatch collects from EC2 by default; the CloudWatch agent must publish it as a custom metric. A CloudWatch alarm on that metric can then trigger scaling actions when memory usage exceeds a specified threshold.
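A sketch of such an alarm, assuming the CloudWatch agent publishes mem_used_percent under its default CWAgent namespace (alarm name and threshold are illustrative):

```python
# Alarm on the agent-published memory metric. Memory is NOT a default
# EC2 metric: the CloudWatch agent must be installed and configured to
# publish it (mem_used_percent in the CWAgent namespace is its default).
alarm = {
    "AlarmName": "high-memory",            # illustrative
    "Namespace": "CWAgent",
    "MetricName": "mem_used_percent",
    "Statistic": "Average",
    "Period": 300,                         # seconds
    "EvaluationPeriods": 2,
    "Threshold": 80.0,                     # percent
    "ComparisonOperator": "GreaterThanThreshold",
}
```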

25
Q

Your company runs a high-traffic e-commerce website on EC2 instances behind an Application Load Balancer (ALB). During a flash sale, you notice that the instances are not scaling out quickly enough, causing performance issues. How can you improve the responsiveness of your Auto Scaling group?

A. Increase the cooldown period.
B. Decrease the cooldown period.
C. Use a combination of step scaling and target tracking policies.
D. Switch to a Network Load Balancer (NLB).

A

Answer: C. Use a combination of step scaling and target tracking policies.

Explanation: Combining step scaling and target tracking policies allows for more responsive scaling actions, addressing both immediate and sustained increases in traffic.

26
Q

You have an Auto Scaling group that needs to maintain a specific number of instances across multiple regions for disaster recovery purposes. How can you achieve this?

A. Use a single Auto Scaling group with cross-region replication.
B. Create separate Auto Scaling groups in each region and use AWS Global Accelerator.
C. Use AWS CloudFormation to manage cross-region scaling.
D. Implement a custom script to synchronize Auto Scaling groups across regions.

A

Answer: B. Create separate Auto Scaling groups in each region and use AWS Global Accelerator.

Explanation: Separate Auto Scaling groups in each region ensure regional independence and high availability, while AWS Global Accelerator provides global traffic management.

27
Q

Your application requires instances to be launched with specific tags for cost allocation and management. How can you ensure that all instances in your Auto Scaling group have the required tags?

A. Manually tag each instance after launch.
B. Use a launch template with the required tags.
C. Use AWS Lambda to tag instances after they are launched.
D. Configure the Auto Scaling group to apply tags to instances.

A

Answer: D. Configure the Auto Scaling group to apply tags to instances.

Explanation: Tags defined on the Auto Scaling group with the propagate-at-launch option enabled are applied automatically to every instance the group launches.
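A sketch of group tags with propagation enabled (keys and values are illustrative):

```python
# Tags attached to the Auto Scaling group with PropagateAtLaunch set to
# True are copied onto every instance the group launches.
tags = [
    {"Key": "CostCenter", "Value": "forex-trading", "PropagateAtLaunch": True},
    {"Key": "Environment", "Value": "production", "PropagateAtLaunch": True},
]
```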

28
Q

Your Auto Scaling group is configured to use Spot Instances, but you want to ensure that your application remains highly available even if Spot Instances are interrupted. What should you do?

A. Use a launch template with mixed instance policies and set a base capacity of On-Demand instances.
B. Increase the maximum size of the Auto Scaling group.
C. Use Reserved Instances for critical workloads.
D. Implement a fallback mechanism using Lambda functions.

A

Answer: A. Use a launch template with mixed instance policies and set a base capacity of On-Demand instances.

Explanation: Mixed instance policies with a base capacity of On-Demand instances ensure that critical workloads remain available even if Spot Instances are interrupted.

29
Q

You need to ensure that your Auto Scaling group scales in gradually to avoid sudden drops in capacity, but also scales out quickly during traffic spikes. Which combination of scaling policies should you use?

A. Simple Scaling and Scheduled Scaling
B. Step Scaling and Target Tracking Scaling
C. Predictive Scaling and Simple Scaling
D. Target Tracking Scaling and Scheduled Scaling

A

Answer: B. Step Scaling and Target Tracking Scaling

Explanation: Step Scaling allows for gradual scaling in, while Target Tracking Scaling ensures quick scaling out during traffic spikes.

30
Q

Your application runs on a mix of On-Demand and Spot Instances. You notice that during peak hours, the Spot Instances are frequently interrupted, causing instability. How can you improve the stability of your application?

A. Increase the number of Spot Instances.
B. Use a launch template with a higher base capacity of On-Demand instances.
C. Switch to using only On-Demand Instances.
D. Use Reserved Instances for peak hours.

A

Answer: B. Use a launch template with a higher base capacity of On-Demand instances.

Explanation: Increasing the base capacity of On-Demand instances ensures that a stable number of instances are always available, reducing the impact of Spot Instance interruptions.

31
Q

You need to ensure that your Auto Scaling group scales out based on the latency of requests to your application. What should you configure?

A. A CloudWatch alarm based on CPU utilization.
B. A CloudWatch alarm based on request latency.
C. A scheduled scaling policy.
D. A target tracking policy based on request latency.

A

Answer: B. A CloudWatch alarm based on request latency.

Explanation: A CloudWatch alarm based on a request-latency metric (for an ALB, TargetResponseTime) can trigger scaling actions when latency exceeds a specified threshold.
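A sketch of an alarm on the ALB's TargetResponseTime metric, which measures request latency from the load balancer to its targets (alarm name, dimension value, and threshold are illustrative):

```python
# Alarm on the ALB latency metric; when it fires, it can invoke a
# scaling policy to add instances to the group.
latency_alarm = {
    "AlarmName": "high-latency",                 # illustrative
    "Namespace": "AWS/ApplicationELB",
    "MetricName": "TargetResponseTime",
    "Dimensions": [
        # LoadBalancer dimension value is illustrative
        {"Name": "LoadBalancer", "Value": "app/web-alb/1234567890abcdef"},
    ],
    "Statistic": "Average",
    "Period": 60,                                # seconds
    "EvaluationPeriods": 3,
    "Threshold": 0.5,                            # seconds of latency
    "ComparisonOperator": "GreaterThanThreshold",
}
```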

32
Q
A