Practice Exam 1 Flashcards

1
Q
A development team has configured its AWS VPC with one public and one private subnet. The public subnet has an Amazon EC2 instance that hosts the application. The private subnet has the RDS database that the application needs to communicate with.

Which of the following would you identify as the correct way to configure a solution for the given requirement?

  • Subnets inside a VPC can communicate with each other without the need for any further configuration. Hence, no additional configurations are needed
  • Configure a VPC peering for enabling communication between the subnets
  • Elastic IP can be configured to initiate communication between private and public subnets
  • Create a Security Group that allows connection from different subnets inside a VPC
A

Explanation
Correct option:

Subnets inside a VPC can communicate with each other without the need for any further configuration. Hence, no additional configurations are needed - Subnets inside a VPC can communicate with each other by default, with no additional configuration required.

A virtual private cloud (VPC) is a virtual network dedicated to your AWS account. It is logically isolated from other virtual networks in the AWS Cloud. A subnet is a range of IP addresses in your VPC. You can launch AWS resources into a specified subnet. Use a public subnet for resources that must be connected to the internet, and a private subnet for resources that won’t be connected to the internet.

A route table contains a set of rules, called routes, that are used to determine where network traffic from your VPC is directed. You can explicitly associate a subnet with a particular route table. Otherwise, the subnet is implicitly associated with the main route table.

The first entry in the Main route table is the default entry for local routing in the VPC; this entry enables the instances (potentially belonging to different subnets) in the VPC to communicate with each other.

via - https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Scenario2.html
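
A minimal boto3 sketch of how you might verify this (the VPC ID is hypothetical; assumes default credentials and region). It lists the main route table's routes so you can see the local route that lets subnets reach each other:

```python
import boto3

# Minimal sketch: "vpc-0123456789abcdef0" is a hypothetical VPC ID.
# Lists the routes of the VPC's main route table so you can confirm the
# "local" route that enables subnet-to-subnet communication.
ec2 = boto3.client("ec2")

response = ec2.describe_route_tables(
    Filters=[
        {"Name": "vpc-id", "Values": ["vpc-0123456789abcdef0"]},
        {"Name": "association.main", "Values": ["true"]},
    ]
)

for table in response["RouteTables"]:
    for route in table["Routes"]:
        # The default entry has GatewayId == "local" and covers the VPC CIDR.
        print(route.get("DestinationCidrBlock"), route.get("GatewayId"))
```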

Incorrect options:

Elastic IP can be configured to initiate communication between private and public subnets - An Elastic IP address is a reserved public IP address that you can assign to any EC2 instance in a particular region until you choose to release it. Elastic IP is not needed for resources to talk across subnets in the same VPC.

Configure a VPC peering for enabling communication between the subnets - A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IPv4 addresses or IPv6 addresses. It is not needed for resources inside the same VPC.

Create a Security Group that allows connection from different subnets inside a VPC - A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. Security groups act at the instance level, not the subnet level.

References:

https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Scenario2.html
https://docs.aws.amazon.com/vpc/latest/userguide/how-it-works.html#what-is-route-tables
https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html

2
Q
As a SysOps Administrator, you have been contacted by a team to troubleshoot a security issue they are facing. A security check red flag is being raised for the security groups created by AWS Directory Service. The flag message says “Security Groups - Unrestricted Access.”

How will you troubleshoot this issue?

  • Ignore or suppress the red flag since it is safe to do so, in this scenario
  • AWS Directory Service might have been initiated from an account that does not have proper permissions. Check the permissions on the IAM roles and IAM users used to initiate the service
  • Use AWS Trusted Advisor to know the exact reason for this error and take action as recommended by the Trusted Advisor
  • The security group configurations have to be checked and edited to cater to AWS security standards
A

Explanation
Correct option:

Ignore or suppress the red flag since it is safe to do so, in this scenario - AWS Directory Service is a managed service that automatically creates an AWS security group in your VPC with network rules for traffic in and out of AWS managed domain controllers. The default inbound rules allow traffic from any source (0.0.0.0/0) to ports required by Active Directory. These rules do not introduce security vulnerabilities, as traffic to the domain controllers is limited to traffic from your VPC, other peered VPCs, or networks connected using AWS Direct Connect, AWS Transit Gateway or Virtual Private Network.

In addition, the ENIs that the security group is attached to do not and cannot have Elastic IPs attached to them, limiting inbound traffic to local VPC and VPC-routed traffic.

Incorrect options:

The security group configurations have to be checked and edited to cater to AWS security standards

Use AWS Trusted Advisor to know the exact reason for this error and take action as recommended by the Trusted Advisor

AWS Directory Service might have been initiated from an account that does not have proper permissions. Check the permissions on the IAM roles and IAM users used to initiate the service

These three options contradict the explanation provided above, so these options are incorrect.

Reference:

https://aws.amazon.com/premiumsupport/faqs/

3
Q

As part of ongoing system maintenance, a SysOps Administrator has decided to increase the storage capacity of an EBS volume that is attached to an Amazon EC2 instance. However, the increased size is not reflected in the file system.

What has gone wrong in the configuration and how can it be fixed?

  • After you increase the size of an EBS volume, you must extend the file system to a larger size
  • EBS volume needs to be detached and attached back again to the instance for the modifications to show
  • EBS volume might be encrypted. Encrypted EBS volumes will not show modifications done when still attached to the instance. Detach the EBS volume and attach it back
  • Linux servers automatically pick the modifications done to EBS volumes, but Windows servers do not offer this feature. Use the Windows Disk Management utility to increase the disk size to the new modified volume size
A

Explanation
Correct option:

After you increase the size of an EBS volume, you must extend the file system to a larger size - After you increase the size of an EBS volume, you must use the file-system specific commands to extend the file system to the larger size. You can resize the file system as soon as the volume enters the optimizing state.

The process for extending a file system on Linux is as follows:

Your EBS volume might have a partition that contains the file system and data. Increasing the size of a volume does not increase the size of the partition. Before you extend the file system on a resized volume, check whether the volume has a partition that must be extended to the new size of the volume.

Use a file system-specific command to resize each file system to the new volume capacity.
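
A minimal boto3 sketch of the API side of this process (the volume ID and new size are hypothetical; assumes default credentials and region). It resizes the volume and waits for the modification to reach the optimizing state, after which the partition and file system are extended from inside the instance (for example with growpart and resize2fs or xfs_growfs on Linux):

```python
import time

import boto3

# Minimal sketch: "vol-0123456789abcdef0" and the new size are hypothetical.
ec2 = boto3.client("ec2")
volume_id = "vol-0123456789abcdef0"

# Request the larger size (in GiB).
ec2.modify_volume(VolumeId=volume_id, Size=200)

# Wait until the modification is in the "optimizing" or "completed" state;
# the file system can be extended as soon as "optimizing" is reached.
while True:
    mods = ec2.describe_volumes_modifications(VolumeIds=[volume_id])
    state = mods["VolumesModifications"][0]["ModificationState"]
    if state in ("optimizing", "completed"):
        break
    time.sleep(15)

print("Volume modification state:", state)
```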

Incorrect options:

EBS volume needs to be detached and attached back again to the instance for the modifications to show - This is incorrect and has been added as a distractor.

EBS volume might be encrypted. Encrypted EBS volumes will not show modifications done when still attached to the instance. Detach the EBS volume and attach it back - EBS volume encryption has no bearing on the given scenario.

Linux servers automatically pick the modifications done to EBS volumes, but Windows servers do not offer this feature. Use the Windows Disk Management utility to increase the disk size to the new modified volume size - As discussed above, you need to manually extend the file system after increasing the size of the EBS volume.

On the Windows file system, after you increase the size of an EBS volume, use the Windows Disk Management utility or PowerShell to extend the disk size to the new size of the volume. You can begin resizing the file system as soon as the volume enters the optimizing state.

References:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/recognize-expanded-volume-linux.html
https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/recognize-expanded-volume-windows.html

4
Q

A systems administrator is configuring an Amazon EC2 status check alarm to publish a notification to an SNS topic when the instance fails either the instance status check or the system status check.

Which CloudWatch metric is the right choice for this configuration?

StatusCheckFailed
CombinedStatusCheckFailed
StatusCheckFailed_Instance
StatusCheckFailed_System

A

Explanation
Correct option:

StatusCheckFailed - The AWS/EC2 namespace includes a few status check metrics. By default, status check metrics are available at a 1-minute frequency at no charge. For a newly-launched instance, status check metric data is only available after the instance has completed the initialization state (within a few minutes of the instance entering the running state).

StatusCheckFailed - Reports whether the instance has passed both the instance status check and the system status check in the last minute. This metric can be either 0 (passed) or 1 (failed). By default, this metric is available at a 1-minute frequency at no charge.

List of EC2 status check metrics: via - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/viewing_metrics_with_cloudwatch.html#status-check-metrics
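
A minimal boto3 sketch of such an alarm (the instance ID and SNS topic ARN are hypothetical; assumes default credentials and region):

```python
import boto3

# Minimal sketch: instance ID and SNS topic ARN below are hypothetical.
cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="ec2-status-check-failed",
    Namespace="AWS/EC2",
    MetricName="StatusCheckFailed",   # combined instance + system status check
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=2,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
    TreatMissingData="breaching",
)
```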

Incorrect options:

CombinedStatusCheckFailed - This is a made-up option, given only as a distractor.

StatusCheckFailed_Instance - Reports whether the instance has passed the instance status check in the last minute. This metric can be either 0 (passed) or 1 (failed).

StatusCheckFailed_System - Reports whether the instance has passed the system status check in the last minute. This metric can be either 0 (passed) or 1 (failed).

Reference:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/viewing_metrics_with_cloudwatch.html#status-check-metrics

5
Q
An e-commerce company uses AWS Elastic Beanstalk to create test environments comprising an Amazon EC2 instance and an RDS instance whenever a new product or line-of-service is launched. The company is currently testing one such environment but wants to decouple the database from the environment to run some analysis and reports later in another environment. Since testing is in progress for a high-stakes product, the company wants to avoid downtime and database sync issues.

As a SysOps Administrator, which solution will you recommend to the company?

Use an Elastic Beanstalk blue (environment A)/green (environment B) deployment to decouple the RDS DB instance from environment A. Create a new Elastic Beanstalk environment (environment B) with the necessary information to connect to the decoupled RDS DB instance

Since it is a test environment, take a snapshot of the database and terminate the current environment. Create a new one without attaching an RDS instance directly to it (from the snapshot)

Use an Elastic Beanstalk Immutable deployment to make the entire architecture completely reliable. You can terminate the first environment whenever you are confident of the second environment working correctly

Decoupling an RDS instance that is part of a running Elastic Beanstalk environment is not currently supported by AWS. You will need to terminate the current environment after taking the snapshot of the database and create a new one with RDS configured outside the environment

A

Explanation
Correct option:

Use an Elastic Beanstalk blue (environment A)/green (environment B) deployment to decouple the RDS DB instance from environment A. Create a new Elastic Beanstalk environment (environment B) with the necessary information to connect to the decoupled RDS DB instance - Attaching an RDS DB instance to an Elastic Beanstalk environment is ideal for development and testing environments. However, it’s not recommended for production environments because the lifecycle of the database instance is tied to the lifecycle of your application environment. If you terminate the environment, then you lose your data because the RDS DB instance is deleted by the environment.

Since the current use case mentions not having downtime on the database, we can follow these steps for resolution:

1. Use an Elastic Beanstalk blue (environment A)/green (environment B) deployment to decouple the RDS DB instance from environment A. Create an RDS DB snapshot and enable deletion protection on the DB instance to safeguard your RDS DB instance from deletion.

2. Create a new Elastic Beanstalk environment (environment B) with the necessary information to connect to the RDS DB instance. Your new Elastic Beanstalk environment (environment B) must not include an RDS DB instance in the same Elastic Beanstalk application.

Step-by-step instructions to configure the above solution: via - https://aws.amazon.com/premiumsupport/knowledge-center/decouple-rds-from-beanstalk/
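
A minimal boto3 sketch of the snapshot and deletion-protection part of step 1 (the DB instance identifier is hypothetical; assumes default credentials and region):

```python
import boto3

# Minimal sketch: "beanstalk-env-a-db" is a hypothetical RDS instance identifier.
rds = boto3.client("rds")
db_id = "beanstalk-env-a-db"

# Take a snapshot before decoupling the database from environment A.
rds.create_db_snapshot(
    DBInstanceIdentifier=db_id,
    DBSnapshotIdentifier=f"{db_id}-pre-decouple",
)

# Turn on deletion protection so the DB instance survives environment termination.
rds.modify_db_instance(
    DBInstanceIdentifier=db_id,
    DeletionProtection=True,
    ApplyImmediately=True,
)
```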

Incorrect options:

Since it is a test environment, take a snapshot of the database and terminate the current environment. Create a new one without attaching an RDS instance directly to it (from the snapshot) - It is mentioned in the problem statement that the company is looking at a solution with no downtime. Hence, this is an incorrect option.

Use an Elastic Beanstalk Immutable deployment to make the entire architecture completely reliable. You can terminate the first environment whenever you are confident of the second environment working correctly - Immutable deployments perform an immutable update to launch a full set of new instances running the new version of the application in a separate Auto Scaling group, alongside the instances running the old version. Immutable deployments can prevent issues caused by partially completed rolling deployments. If the new instances don’t pass health checks, Elastic Beanstalk terminates them, leaving the original instances untouched. This solution is overkill for the test environment, even if the company is looking at a no-downtime option.

Decoupling an RDS instance that is part of a running Elastic Beanstalk environment is not currently supported by AWS. You will need to terminate the current environment after taking the snapshot of the database and create a new one with RDS configured outside the environment - This is a made-up option and given only as a distractor.

References:

https://aws.amazon.com/premiumsupport/knowledge-center/decouple-rds-from-beanstalk/
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.rolling-version-deploy.html

6
Q

As a SysOps Administrator, you have been asked to fix the network performance issues for a fleet of Amazon EC2 instances of a company.

Which of the following use-cases represents the right fit for using enhanced networking?

To support throughput near or exceeding 20K packets per second (PPS) on the VIF driver
To reach speeds up to 2,500 Gbps between EC2 instances
To configure multi-attach for an EBS volume that can be attached to a maximum of 16 EC2 instances in a single Availability Zone
To configure Direct Connect to reach speeds up to 25 Gbps between EC2 instances

A

Explanation
Correct option:

To support throughput near or exceeding 20K packets per second (PPS) on the VIF driver - Enhanced networking uses single root I/O virtualization (SR-IOV) to provide high-performance networking capabilities on supported instance types. SR-IOV is a method of device virtualization that provides higher I/O performance and lower CPU utilization when compared to traditional virtualized network interfaces. Enhanced networking provides higher bandwidth, higher packet per second (PPS) performance, and consistently lower inter-instance latencies. There is no additional charge for using enhanced networking.

Consider using enhanced networking for the following scenarios:

If your packets-per-second rate reaches its ceiling, you’ve likely reached the upper thresholds of the virtual network interface driver; consider moving to enhanced networking.

If your throughput is near or exceeding 20K packets per second (PPS) on the VIF driver, it’s a best practice to use enhanced networking.

All current generation instance types support enhanced networking, except for T2 instances.
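
A minimal boto3 sketch for checking and enabling the ENA attribute on an instance (the instance ID is hypothetical; assumes default credentials and region, and that the instance is stopped before the attribute is modified):

```python
import boto3

# Minimal sketch: "i-0123456789abcdef0" is a hypothetical instance ID.
# Checks whether enhanced networking (ENA) is enabled and enables it if not.
ec2 = boto3.client("ec2")
instance_id = "i-0123456789abcdef0"

attr = ec2.describe_instance_attribute(InstanceId=instance_id, Attribute="enaSupport")
if not attr.get("EnaSupport", {}).get("Value", False):
    # The instance must be in the stopped state before modifying this attribute.
    ec2.modify_instance_attribute(InstanceId=instance_id, EnaSupport={"Value": True})
```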

Incorrect options:

To reach speeds up to 2,500 Gbps between EC2 instances - If you need to reach speeds up to 25 Gbps between instances, launch instances in a cluster placement group along with ENA compatible instances. If you need to reach speeds up to 10 Gbps between instances, launch your instances into a cluster placement group with the enhanced networking instance type. This option has been added as a distractor, as it is not possible to support speeds up to 2,500 Gbps between EC2 instances.

To configure multi-attach for an EBS volume that can be attached to a maximum of 16 EC2 instances in a single Availability Zone - An EBS (io1 or io2) volume, when configured with the new Multi-Attach option, can be attached to a maximum of 16 EC2 instances in a single Availability Zone. Additionally, each Nitro-based EC2 instance can support the attachment of multiple Multi-Attach enabled EBS volumes. Multi-Attach capability makes it easier to achieve higher availability for applications that provide write-ordering to maintain storage consistency. You do not need to use enhanced networking to configure this option.

To configure Direct Connect to reach speeds up to 25 Gbps between EC2 instances - AWS Direct Connect is a networking service that provides an alternative to using the internet to connect your on-premises resources to AWS Cloud. In many circumstances, private network connections can reduce costs, increase bandwidth, and provide a more consistent network experience than internet-based connections. You cannot use enhanced networking to configure Direct Connect to reach speeds up to 25 Gbps between EC2 instances.

References:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html
https://aws.amazon.com/premiumsupport/knowledge-center/enable-configure-enhanced-networking/
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volumes-multi.html

7
Q

A multi-national company extensively uses AWS CloudFormation to model and provision its AWS resources. A human error had earlier deleted a critical service from the CloudFormation stack that resulted in business loss. The company is looking at a quick and effective solution to lock the critical resources from any updates or deletes.

As a SysOps Administrator, what will you suggest to address this requirement?

A

Explanation
Correct option:

Use Stack policies to protect critical stack resources from unintentional updates

Stack policies help protect critical stack resources from unintentional updates that could cause resources to be interrupted or even replaced. A stack policy is a JSON document that describes what update actions can be performed on designated resources. Specify a stack policy whenever you create a stack that has critical resources.

During a stack update, you must explicitly specify the protected resources that you want to update; otherwise, no changes are made to protected resources.

Example Stack policy: via - https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/protect-stack-resources.html
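
A minimal boto3 sketch of applying such a policy (the stack name and logical resource ID are hypothetical; assumes default credentials and region):

```python
import json

import boto3

# Minimal sketch: "prod-core" and "CriticalRDSInstance" are hypothetical names.
cloudformation = boto3.client("cloudformation")

stack_policy = {
    "Statement": [
        # Allow updates to everything by default...
        {"Effect": "Allow", "Action": "Update:*", "Principal": "*", "Resource": "*"},
        # ...but deny all update actions on the protected resource.
        {
            "Effect": "Deny",
            "Action": "Update:*",
            "Principal": "*",
            "Resource": "LogicalResourceId/CriticalRDSInstance",
        },
    ]
}

cloudformation.set_stack_policy(
    StackName="prod-core",
    StackPolicyBody=json.dumps(stack_policy),
)
```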

Incorrect options:

Use nested stacks that will retain the configuration in the parent configuration even if the child configuration is lost or cannot be used - Nested stacks are stacks that create other stacks. As your infrastructure grows, common patterns can emerge in which you declare the same components in each of your templates. You can separate these common components and create dedicated templates for them. Nested stacks make it easy to manage resources, but they do not protect those resources from updates.

Use revision controls to protect critical stack resources from unintentional updates - Your stack templates describe the configuration of your AWS resources, such as their property values. To review changes and to keep an accurate history of your resources, use code reviews and revision controls. Although it’s a useful feature, it is not relevant for the current scenario.

Use parameter constraints to specify the Identities that can update the Stack - With constraints, you can describe allowed input values so that AWS CloudFormation catches any invalid values before creating a stack. You can set constraints such as a minimum length, maximum length, and allowed patterns. However, you cannot protect resources from deletion.

Reference:

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/best-practices.html#nested

8
Q

A video streaming app uses Amazon Kinesis Data Streams for streaming data. The systems administration team needs to be informed of the shard capacity when it is reaching its limits.

How will you configure this requirement?

A

Explanation
Correct option:

Monitor Trusted Advisor service check results with Amazon CloudWatch Events - AWS Trusted Advisor checks for service usage that is more than 80% of the service limit.

A partial list of Trusted Advisor service limit checks: via - https://aws.amazon.com/premiumsupport/technology/trusted-advisor/best-practice-checklist/

You can use Amazon CloudWatch Events to detect and react to changes in the status of Trusted Advisor checks. Then, based on the rules that you create, CloudWatch Events invokes one or more target actions when a status check changes to the value you specify in a rule. Depending on the type of status change, you might want to send notifications, capture status information, take corrective action, initiate events, or take other actions.
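
A minimal boto3 sketch of such a rule (the SNS topic ARN is hypothetical; Trusted Advisor events are emitted in us-east-1; confirm the exact event-pattern field values against the documentation):

```python
import json

import boto3

# Minimal sketch: reacts to Trusted Advisor "Service Limits" check status changes.
events = boto3.client("events", region_name="us-east-1")

pattern = {
    "source": ["aws.trustedadvisor"],
    "detail-type": ["Trusted Advisor Check Item Refresh Notification"],
    "detail": {"check-name": ["Service Limits"], "status": ["WARN", "ERROR"]},
}

events.put_rule(
    Name="trusted-advisor-service-limits",
    EventPattern=json.dumps(pattern),
)
events.put_targets(
    Rule="trusted-advisor-service-limits",
    Targets=[{"Id": "notify-ops",
              "Arn": "arn:aws:sns:us-east-1:123456789012:ops-alerts"}],
)
# The SNS topic's access policy must also allow events.amazonaws.com to publish.
```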

Incorrect options:

Configure Amazon CloudWatch Events to pick data from Amazon Inspector - Amazon Inspector is an automated security assessment service that helps you test the network accessibility of your Amazon EC2 instances and the security state of your applications running on the instances. Not the right service for the given requirement.

Use CloudWatch ServiceLens to monitor data on service limits of various AWS services - CloudWatch ServiceLens enhances the observability of your services and applications by enabling you to integrate traces, metrics, logs, and alarms into one place. ServiceLens is useful only after the alarms have been defined in CloudWatch; it does not monitor service limits on its own.

Configure Amazon CloudTrail to generate logs for the service limits. CloudTrail and CloudWatch are integrated and hence alarm can be generated for customized service checks - AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail however, does not monitor service limits.

References:

https://docs.aws.amazon.com/awssupport/latest/user/cloudwatch-events-ta.html
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/ServiceLens.html
https://aws.amazon.com/premiumsupport/technology/trusted-advisor/best-practice-checklist/

9
Q

A developer has created rules for different events on Amazon EventBridge with AWS Lambda function as a target. The developer has also created an IAM Role with the necessary permissions and associated it with the rule. The rule however is failing, and on initial analysis, it is clear that the IAM Role associated with the rule is not being used when calling the Lambda function.

What could have gone wrong with the configuration and how can you fix the issue?

A

Explanation
Correct option:

For Lambda functions configured as a target to EventBridge, you need to provide resource-based policy. IAM Roles will not work - IAM roles for rules are only used for events related to Kinesis Streams. For Lambda functions and Amazon SNS topics, you need to provide resource-based permissions.

When a rule is triggered in EventBridge, all the targets associated with the rule are invoked. Invocation means invoking the AWS Lambda functions, publishing to the Amazon SNS topics, and relaying the event to the Kinesis streams. In order to be able to make API calls against the resources you own, EventBridge needs the appropriate permissions. For Lambda, Amazon SNS, Amazon SQS, and Amazon CloudWatch Logs resources, EventBridge relies on resource-based policies. For Kinesis streams, EventBridge relies on IAM roles.
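
A minimal boto3 sketch of adding the required resource-based permission to the Lambda function (the function name, rule ARN, account ID and Region are hypothetical):

```python
import boto3

# Minimal sketch: grants the EventBridge rule permission to invoke the function
# via a resource-based policy statement on the Lambda function.
lambda_client = boto3.client("lambda")

lambda_client.add_permission(
    FunctionName="process-events",
    StatementId="allow-eventbridge-rule",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn="arn:aws:events:us-east-1:123456789012:rule/my-event-rule",
)
```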

Incorrect options:

The IAM Role is wrongly configured. Delete the existing Role and recreate with necessary permissions and associate the newly created Role with the EventBridge rule - This option has been added as a distractor.

For Lambda, EventBridge relies on Access Control Lists (ACLs) to define permissions. IAM Roles will not work for Lambda when configured as a target for an EventBridge rule - Access Control Lists are not used with EventBridge and ACLs are defined at the account level and not at the individual user level.

AWS Command Line Interface (CLI) should not be used to add permissions to EventBridge targets - This statement is incorrect. AWS CLI can be used to add permissions to targets for EventBridge rules.

References:

https://docs.aws.amazon.com/eventbridge/latest/userguide/resource-based-policies-eventbridge.html
https://docs.aws.amazon.com/eventbridge/latest/userguide/eventbridge-troubleshooting.html

10
Q

An IT services company runs its technology infrastructure on AWS Cloud. The company runs audits for all the development and testing teams against the standards set by the organization. During a recent audit, the company realized that most of the patch compliance standards are not being followed by the teams. The teams have however tagged all their AWS resources as per the guidelines.

As a SysOps Administrator, which of the following would you recommend as an easy way of fixing the issue as quickly as possible?

A

Explanation
Correct option:

Use AWS Systems Manager Patch Manager to automate the process of patching managed instances

AWS Systems Manager Patch Manager automates the process of patching managed instances with both security-related and other types of updates. You can use Patch Manager to apply patches for both operating systems and applications. You can use Patch Manager to install Service Packs on Windows instances and perform minor version upgrades on Linux instances. You can patch fleets of EC2 instances or your on-premises servers and virtual machines (VMs) by operating system type.

Patch Manager uses patch baselines, which include rules for auto-approving patches within days of their release, as well as a list of approved and rejected patches. You can install patches on a regular basis by scheduling patching to run as a Systems Manager maintenance window task. You can also install patches individually or to large groups of instances by using Amazon EC2 tags. (Tags are keys that help identify and sort your resources within your organization.) You can add tags to your patch baselines themselves when you create or update them.

Patch Manager provides options to scan your instances and report compliance on a schedule, install available patches on a schedule, and patch or scan instances on demand whenever you need to.

Patch Manager integrates with AWS Identity and Access Management (IAM), AWS CloudTrail, and Amazon EventBridge to provide a secure patching experience that includes event notifications and the ability to audit usage.
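
A minimal boto3 sketch of patching a tagged fleet with the AWS-RunPatchBaseline document (the tag key/value are hypothetical; assumes the instances are Systems Manager managed instances):

```python
import boto3

# Minimal sketch: targets managed instances by a hypothetical tag and runs the
# AWS-RunPatchBaseline document against them.
ssm = boto3.client("ssm")

ssm.send_command(
    Targets=[{"Key": "tag:Environment", "Values": ["dev"]}],
    DocumentName="AWS-RunPatchBaseline",
    Parameters={"Operation": ["Install"]},  # use ["Scan"] to report compliance only
)
```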

Incorrect options:

Use Amazon Inspector to automate the process of patching instances that helps improve the security and compliance of the instances - Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. Amazon Inspector automatically assesses applications for exposure, vulnerabilities, and deviations from best practices. Inspector is not a patch management service.

Use Amazon Patch Manager to automate the process of patching instances - This is a made-up option and given only as a distractor.

Use AWS Systems Manager Automation to simplify the patch application process across all instances - Systems Manager Automation simplifies common maintenance and deployment tasks of EC2 instances and other AWS resources. Automation enables you to build Automation workflows to configure and manage instances and AWS resources, create custom workflows or use pre-defined workflows maintained by AWS, receive notifications about Automation tasks and workflows by using Amazon EventBridge, and monitor Automation progress and execution details by using the Amazon EC2 or the AWS Systems Manager console. Systems Manager Automation, however, does not include patch management.

References:

https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-patch.html
https://aws.amazon.com/inspector/
https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-automation.html

11
Q

As a SysOps Administrator, you create and maintain various system configurations for the teams you work with. You have created a CloudFront distribution with origin as an Amazon S3 bucket. The configuration has worked fine so far. However, for a few hours now, an error similar to this has cropped up - The authorization header is malformed; the region ‘’ is wrong; expecting ‘’.

What is the reason for this error and how will you fix it?

A

Explanation
Correct option:

This error indicates the configured Amazon S3 bucket has been moved from one AWS Region to the other. That is, deleted from one AWS Region and created with the same name in another. To fix this error, update your CloudFront distribution so that it finds the S3 bucket in the bucket’s current AWS Region - If CloudFront requests an object from your origin, and the origin returns an HTTP 4xx or 5xx status code, there’s a problem with communication between CloudFront and your origin.

Your CloudFront distribution might send error responses with HTTP status code 400 Bad Request, and a message similar to the following: The authorization header is malformed; the region ‘’ is wrong; expecting ‘’.

This problem can occur in the following scenario: 1) your CloudFront distribution’s origin is an Amazon S3 bucket, and 2) you moved the S3 bucket from one AWS Region to another. That is, you deleted the S3 bucket, then later you created a new bucket with the same bucket name, but in a different AWS Region than where the original S3 bucket was located.

To fix this error, update your CloudFront distribution so that it finds the S3 bucket in the bucket’s current AWS Region.
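
A minimal boto3 sketch of that fix (the distribution ID, bucket name and Region are hypothetical; assumes default credentials):

```python
import boto3

# Minimal sketch: points the S3 origin at the bucket's current Regional endpoint
# and pushes the updated configuration back with the required ETag.
cloudfront = boto3.client("cloudfront")
dist_id = "E1234567890ABC"

result = cloudfront.get_distribution_config(Id=dist_id)
config, etag = result["DistributionConfig"], result["ETag"]

# Update the origin domain name to the bucket's current AWS Region.
config["Origins"]["Items"][0]["DomainName"] = "my-bucket.s3.eu-west-1.amazonaws.com"

cloudfront.update_distribution(Id=dist_id, DistributionConfig=config, IfMatch=etag)
```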

Incorrect options:

This error indicates that the CloudFront distribution and Amazon S3 are not in the same AWS Region. Move one resource so that both the CloudFront distribution and Amazon S3 are in the same AWS Region - Amazon CloudFront uses a global network of edge locations and regional edge caches for content delivery. You can configure CloudFront to serve content from particular Regions, but CloudFront is not Region-specific.

This error indicates that the API key used for authorization is from an AWS Region that is different from the Region that S3 bucket is created in - This is a made-up option, given only as a distractor.

This error indicates that when CloudFront forwarded a request to the origin, the origin didn’t respond before the request expired. This could be an access issue caused by a firewall or a Security Group not allowing access to CloudFront to access S3 resources - When CloudFront forwards a request to the origin, and the origin didn’t respond before the request expired, a Gateway Timeout error is generated.

Reference:

https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/http-400-bad-request.html

12
Q

As a SysOps Administrator, you have been asked to calculate the total network usage for all the EC2 instances of a company and determine which instance used the most bandwidth within a date range.

Which Amazon CloudWatch metric(s) will help you get the needed data?

A

Explanation
Correct option:

NetworkIn and NetworkOut - You can determine which instance is causing high network usage using the Amazon CloudWatch NetworkIn and NetworkOut metrics. You can aggregate the data points from these metrics to calculate the network usage for your instance.

NetworkIn - The number of bytes received by the instance on all network interfaces. This metric identifies the volume of incoming network traffic to a single instance.

The number reported is the number of bytes received during the period. If you are using basic (five-minute) monitoring and the statistic is Sum, you can divide this number by 300 to find Bytes/second. If you have detailed (one-minute) monitoring and the statistic is Sum, divide it by 60. Units of this metric are Bytes.

NetworkOut - The number of bytes sent out by the instance on all network interfaces. This metric identifies the volume of outgoing network traffic from a single instance.

The number reported is the number of bytes sent during the period. If you are using basic (five-minute) monitoring and the statistic is Sum, you can divide this number by 300 to find Bytes/second. If you have detailed (one-minute) monitoring and the statistic is Sum, divide it by 60. Units of this metric are Bytes.
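
A minimal boto3 sketch that aggregates these two metrics for one instance over a date range (the instance ID and dates are hypothetical; assumes default credentials and region):

```python
from datetime import datetime, timezone

import boto3

# Minimal sketch: sums NetworkIn and NetworkOut for one instance over a range
# to estimate its total network usage in bytes.
cloudwatch = boto3.client("cloudwatch")
instance_id = "i-0123456789abcdef0"
start = datetime(2023, 5, 1, tzinfo=timezone.utc)
end = datetime(2023, 5, 8, tzinfo=timezone.utc)

total = 0
for metric in ("NetworkIn", "NetworkOut"):
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName=metric,
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        StartTime=start,
        EndTime=end,
        Period=3600,            # one datapoint per hour
        Statistics=["Sum"],
        Unit="Bytes",
    )
    total += sum(point["Sum"] for point in stats["Datapoints"])

print(f"Total network bytes for {instance_id}: {total}")
```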

Incorrect options:

DataTransfer-Out-Bytes - DataTransfer-Out-Bytes metric is used in AWS Cost Explorer reports and is not useful for the current scenario.

DiskReadBytes and DiskWriteBytes - DiskReadBytes is the bytes read from all instance store volumes available to the instance. This metric is used to determine the volume of the data the application reads from the hard disk of the instance. This can be used to determine the speed of the application.

DiskWriteBytes is the bytes written to all instance store volumes available to the instance. This metric is used to determine the volume of the data the application writes onto the hard disk of the instance. This can be used to determine the speed of the application.

NetworkTotalBytes - This is a made-up option, given only as a distractor.

Reference:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/viewing_metrics_with_cloudwatch.html

13
Q

An IT company runs its server infrastructure on Amazon EC2 instances configured in an Auto Scaling Group (ASG) fronted by an Elastic Load Balancer (ELB). For ease of deployment and flexibility in scaling, this AWS architecture is maintained via an Elastic Beanstalk environment. The Technology Lead of a project has requested to automate the replacement of unhealthy Amazon EC2 instances in the Elastic Beanstalk environment.

How will you configure a solution for this requirement?

A

Explanation
Correct option:

To automate the replacement of unhealthy EC2 instances, you must change the health check type of your instance’s Auto Scaling group from EC2 to ELB by using a configuration file of your Beanstalk environment

By default, the health check configuration of your Auto Scaling group is set as an EC2 type that performs a status check of EC2 instances. To automate the replacement of unhealthy EC2 instances, you must change the health check type of your instance’s Auto Scaling group from EC2 to ELB by using a configuration file.

The following are some important points to remember:

Status checks cover only an EC2 instance’s health, and not the health of your application, server, or any Docker containers running on the instance.

If your application crashes, the load balancer removes the unhealthy instances from its target. However, your Auto Scaling group doesn’t automatically replace the unhealthy instances marked by the load balancer.

By changing the health check type of your Auto Scaling group from EC2 to ELB, you enable the Auto Scaling group to automatically replace the unhealthy instances when the health check fails.

Complete list of steps to configure the above: via - https://aws.amazon.com/premiumsupport/knowledge-center/elastic-beanstalk-instance-automation/
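
A minimal sketch of what that configuration file might contain, written here as a Python snippet that drops it into the application source bundle's .ebextensions folder (the file name is hypothetical; the keys follow the AWS knowledge-center example):

```python
from pathlib import Path

# Minimal sketch: writes an .ebextensions configuration file that switches the
# Auto Scaling group's health check type from EC2 to ELB, so instances failing
# the load balancer health check are replaced automatically.
CONFIG = """\
Resources:
  AWSEBAutoScalingGroup:
    Type: "AWS::AutoScaling::AutoScalingGroup"
    Properties:
      HealthCheckType: ELB
      HealthCheckGracePeriod: "300"
"""

path = Path(".ebextensions/autoscaling.config")  # hypothetical file name
path.parent.mkdir(exist_ok=True)
path.write_text(CONFIG)
```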

Incorrect options:

To automate the replacement of unhealthy EC2 instances, you must change the health check type of your instance’s Auto Scaling group from ELB to EC2 by using a configuration file of your Beanstalk environment - As mentioned earlier, the health check type of your instance’s Auto Scaling group should be changed from EC2 to ELB.

Modify the Auto Scaling Group from Amazon EC2 console directly to change the health check type to ELB

Modify the Auto Scaling Group from Amazon EC2 console directly to change the health check type to EC2

You should configure your Amazon EC2 instances in an Elastic Beanstalk environment by using Elastic Beanstalk configuration files (.ebextensions). Configuration changes made to your Elastic Beanstalk environment won’t persist if you use the following configuration methods:

Configuring an Elastic Beanstalk resource directly from the console of a specific AWS service.

Installing a package, creating a file, or running a command directly from your Amazon EC2 instance.

Both these options contradict the above explanation and therefore these two options are incorrect.

Reference:

https://aws.amazon.com/premiumsupport/knowledge-center/elastic-beanstalk-configuration-files/

14
Q

A junior developer created multiple stacks of resources in different AWS Regions per the CloudFormation template given to him. The development team soon started having issues with the created resources and their behavior. Initial checks have confirmed that some resources were created and some omitted, though the same template has been used. As a SysOps Administrator, you have been tasked to resolve these issues.

Which of the following could be the possible reason for this unexpected behavior?

A

Explanation
Correct option:

The CloudFormation template might have custom named IAM resources that are responsible for the unintended behavior - If your template contains custom named IAM resources, don’t create multiple stacks reusing the same template. IAM resources must be globally unique within your account. If you use the same template to create multiple stacks in different Regions, your stacks might share the same IAM resources, instead of each having a unique one. Shared resources among stacks can have unintended consequences from which you can’t recover. For example, if you delete or update shared IAM resources in one stack, you will unintentionally modify the resources of other stacks.

Incorrect options:

There might have been dependency errors that resulted in the stack not being created completely - Any error during stack creation rolls back the entire stack creation process and, as a result, none of the mentioned resources are created.

Insufficient IAM permissions can lead to issues. When you work with an AWS CloudFormation stack, you not only need permissions to use AWS CloudFormation, you must also have permission to use the underlying services that are described in your template - If permissions were an issue, the stack wouldn’t be created at all.

The CloudFormation template was created using use-once only option and is not supposed to be reused for creating other stacks - This is a made-up option and given only as a distractor.

Reference:

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-iam-template.html

15
Q

As a SysOps Administrator, you have created two configuration files for CloudWatch Agent configuration. The first configuration file collects a set of metrics and logs from all servers and the second configuration file collects metrics from certain applications. You have given the same name to both files but stored them in different file paths.

What is the outcome when the CloudWatch Agent is started with the first configuration file and then the second configuration file is appended to it?

A

Explanation
Correct option:

The append command overwrites the information from the first configuration file instead of appending to it

You can set up the CloudWatch agent to use multiple configuration files. For example, you can use a common configuration file that collects a set of metrics and logs that you always want to collect from all servers in your infrastructure. You can then use additional configuration files that collect metrics from certain applications or in certain situations.

To set this up, first create the configuration files that you want to use. Any configuration files that will be used together on the same server must have different file names. You can store the configuration files on servers or in Parameter Store.

Start the CloudWatch agent using the fetch-config option and specify the first configuration file. To append the second configuration file to the running agent, use the same command but with the append-config option. All metrics and logs listed in either configuration file are collected.

Any configuration files appended to the configuration must have different file names from each other and from the initial configuration file. If you use append-config with a configuration file with the same file name as a configuration file that the agent is already using, the append command overwrites the information from the first configuration file instead of appending to it. This is true even if the two configuration files with the same file name are on different file paths.
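
A minimal sketch of the two agent commands, wrapped in Python for illustration (assumes a Linux instance with the agent installed at its default path; both configuration file paths are hypothetical and have different file names):

```python
import subprocess

# Minimal sketch: starts the agent with a first configuration file, then appends
# a second one. The paths below are hypothetical.
CTL = "/opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl"

subprocess.run(
    ["sudo", CTL, "-a", "fetch-config", "-m", "ec2",
     "-c", "file:/opt/configs/base-config.json", "-s"],
    check=True,
)
subprocess.run(
    ["sudo", CTL, "-a", "append-config", "-m", "ec2",
     "-c", "file:/opt/configs/app-config.json", "-s"],
    check=True,
)
```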

Incorrect options:

Second configuration file parameters are added to the Agent already running with the first configuration file parameters

Two different Agents are started with different configurations, collecting the metrics and logs listed in either of the configuration files

A CloudWatch Agent can have only one configuration file and all required parameters are defined in this file alone

These three options contradict the explanation provided above, so these options are incorrect.

Reference:

https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Agent-common-scenarios.html

16
Q

A highly critical financial services application is being moved to AWS Cloud from the on-premises data center. The application uses a fleet of Amazon EC2 instances provisioned in different geographical areas. The Chief Technology Officer (CTO) of the company needs to understand the communication network used between instances at various locations when they interact using public IP addresses.

Which of the following options would you identify as correct? (Select two)

A

Explanation
Correct option:

Traffic between EC2 instances in different AWS Regions stays within the AWS network, if there is an Inter-Region VPC Peering connection between the VPCs where the two instances reside

Traffic between two EC2 instances in the same AWS Region stays within the AWS network, even when it goes over public IP addresses

When two instances communicate using public IP addresses, the following three scenarios are possible:

1. Traffic between two EC2 instances in the same AWS Region stays within the AWS network, even when it goes over public IP addresses.

2. Traffic between EC2 instances in different AWS Regions stays within the AWS network if there is an Inter-Region VPC Peering connection between the VPCs where the two instances reside.

3. Traffic between EC2 instances in different AWS Regions where there is no Inter-Region VPC Peering connection between the VPCs where these instances reside is not guaranteed to stay within the AWS network.
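
A minimal boto3 sketch of requesting an Inter-Region VPC peering connection so that cross-Region traffic stays on the AWS network (all IDs, the account ID and the Regions are hypothetical):

```python
import boto3

# Minimal sketch: requests an Inter-Region VPC peering connection.
ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.create_vpc_peering_connection(
    VpcId="vpc-0aaaaaaaaaaaaaaaa",        # requester VPC in us-east-1
    PeerVpcId="vpc-0bbbbbbbbbbbbbbbb",    # accepter VPC in the peer Region
    PeerOwnerId="123456789012",
    PeerRegion="eu-west-1",
)
# The owner of the peer VPC must still accept the request, and both sides must
# add routes that point at the peering connection.
```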

Incorrect options:

Traffic between two EC2 instances always stays within the AWS network, even when it goes over public IP addresses by using AWS Global Infrastructure

Traffic between EC2 instances in different AWS Regions where there is no Inter-Region VPC Peering connection between the VPCs where these instances reside will use edge locations to communicate without going over the internet

These two options contradict the explanation provided above, so both options are incorrect.

Direct Connect is the default way of communication where there is no Inter-Region VPC Peering connection between the VPCs. All traffic between instances will use Direct Connect and does not go over the internet - AWS Direct Connect is a network service that provides an alternative to using the Internet to utilize AWS cloud services. AWS Direct Connect enables customers to have low latency and private connections to AWS for workloads that require higher speed or lower latency than the internet. Direct Connect is a paid service and is available only if the customer opts for it.

Reference:

https://aws.amazon.com/vpc/faqs/

17
Q

Consider this scenario - the primary instance of an Amazon Aurora cluster is unavailable because of an outage that has affected an entire AZ. The primary instance and all the reader instances are in the same AZ.

As a SysOps Administrator, what action will you take to get the database online?

A

Explanation
Correct option:

You must manually create one or more new DB instances in another AZ

Suppose that the primary instance in your cluster is unavailable because of an outage that affects an entire AZ. In this case, the way to bring a new primary instance online depends on whether your cluster uses a multi-AZ configuration. If the cluster contains any reader instances in other AZs, Aurora uses the failover mechanism to promote one of those reader instances to be the new primary instance. If your provisioned cluster only contains a single DB instance, or if the primary instance and all reader instances are in the same AZ, you must manually create one or more new DB instances in another AZ.
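
A minimal boto3 sketch of manually adding a DB instance to the cluster in an unaffected AZ (the identifiers, instance class, engine and AZ are hypothetical; match them to your cluster):

```python
import boto3

# Minimal sketch: adds a new Aurora DB instance to the existing cluster in a
# different Availability Zone so the cluster can serve traffic again.
rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="aurora-recovery-instance",
    DBClusterIdentifier="my-aurora-cluster",
    DBInstanceClass="db.r5.large",
    Engine="aurora-mysql",
    AvailabilityZone="us-east-1b",   # an AZ unaffected by the outage
)
```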

Incorrect options:

Aurora promotes an existing replica in another AZ to a new primary instance - The use case states that the primary instance and all the reader instances are in the same AZ. So, this is not possible.

Aurora automatically creates a new primary instance in the same AZ - If the primary instance in a DB cluster using single-master replication fails, Aurora automatically fails over to a new primary instance in one of two ways:

By promoting an existing Aurora Replica to the new primary instance
By creating a new primary instance
But, in this use case, the AZ itself has failed. So, creating a new primary in the same AZ is not possible.

For a cluster using single-master replication, Aurora can create up to 15 read-only Aurora Replicas to serve requests from users - Generally, an Aurora DB cluster can contain up to 15 Aurora Replicas. The Aurora Replicas can be distributed across the Availability Zones that a DB cluster spans within an AWS Region. But, this use case is a single AZ deployment with failure at the AZ level. So, this solution is not possible.

Reference:

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Replication.html

18
Q

An automobile company manages its AWS resource creation and maintenance process through AWS CloudFormation. The company has successfully used CloudFormation so far, and wishes to continue using the service. However, while moving to CloudFormation, the company only moved critical resources and left the other resources to be managed manually. To leverage the ease of creation and maintenance that CloudFormation offers, the company wants to move the rest of the resources to CloudFormation.

Which of the following options is the recommended way to configure this requirement?

A

Explanation
Correct option:

You can bring an existing resource into AWS CloudFormation management using resource import

If you created an AWS resource outside of AWS CloudFormation management, you can bring this existing resource into AWS CloudFormation management using resource import. You can manage your resources using AWS CloudFormation regardless of where they were created without having to delete and re-create them as part of a stack.

During an import operation, you create a change set that imports your existing resources into a stack or creates a new stack from your existing resources. You provide the following during import.

A template that describes the entire stack, including both the original stack resources and the resources you’re importing. Each resource to import must have a DeletionPolicy attribute.

Identifiers for the resources to import. You provide two values to identify each target resource.

a) An identifier property. This is a resource property that can be used to identify each resource type. For example, an AWS::S3::Bucket resource can be identified using its BucketName.
b) An identifier value. This is the target resource’s actual property value. For example, the actual value for the BucketName property might be MyS3Bucket.
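
A minimal boto3 sketch of an import change set for an existing S3 bucket (the stack name, template file, logical ID and bucket name are hypothetical; the template must declare the bucket with a DeletionPolicy attribute):

```python
import boto3

# Minimal sketch: imports an existing S3 bucket into a stack via an IMPORT
# change set.
cloudformation = boto3.client("cloudformation")

with open("template-with-bucket.yaml") as f:
    template_body = f.read()

cloudformation.create_change_set(
    StackName="my-existing-stack",
    ChangeSetName="import-existing-bucket",
    ChangeSetType="IMPORT",
    TemplateBody=template_body,
    ResourcesToImport=[
        {
            "ResourceType": "AWS::S3::Bucket",
            "LogicalResourceId": "ImportedBucket",
            "ResourceIdentifier": {"BucketName": "my-existing-bucket"},
        }
    ],
)
# After reviewing the change set, run execute_change_set to complete the import.
```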

Incorrect options:

Use Parameters section of CloudFormation template to input the required resources - Parameters are a way to provide inputs to your AWS CloudFormation template. They are useful when you want to reuse your templates, or when some inputs cannot be determined ahead of time. They aren’t useful for importing resources into CloudFormation.

You can use Mappings part of CloudFormation template to input the needed resources - Mappings are fixed variables within your CloudFormation Template. They’re very handy to differentiate between different environments (dev vs prod), regions (AWS regions), AMI types, etc. They aren’t useful for importing resources into CloudFormation.

Drift detection is the mechanism by which you add resources to the stack of CloudFormation resources already created - Performing a drift detection operation on a stack determines whether the stack has drifted from its expected template configuration, and returns detailed information about the drift status of each resource in the stack that supports drift detection. It is not useful for importing resources into CloudFormation.

Reference:

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/resource-import.html

19
Q

As part of regular maintenance, a systems administrator was checking through the configured Auto Scaling groups (ASGs). An error was raised by an Auto Scaling group when attempting to launch an instance that has an encrypted EBS volume. The service-linked role did not have access to the customer-managed CMK used to encrypt the volume.

Which of the following represents the best solution to fix this issue?

A

Explanation
Correct option:

Use a CMK in the same AWS account as the Auto Scaling group (ASG). Copy and re-encrypt the snapshot with another CMK that belongs to the same account as the Auto Scaling group. Allow the service-linked role to use the new CMK

Client.InternalError: Client error on launch error is thrown when an Auto Scaling group attempts to launch an instance that has an encrypted EBS volume, but the service-linked role does not have access to the customer-managed CMK used to encrypt it.

There are two possible solutions:

Solution 1: Use a CMK in the same AWS account as the Auto Scaling group. Copy and re-encrypt the snapshot with another CMK that belongs to the same account as the Auto Scaling group. Allow the service-linked role to use the new CMK.

Solution 2: Continue to use the CMK in a different AWS account from the Auto Scaling group. Determine which service-linked role to use for this Auto Scaling group. Allow the Auto Scaling group account access to the CMK. Define an IAM user or role in the Auto Scaling group account that can create a grant. Create a grant to the CMK with the service-linked role as the grantee principal. Update the Auto Scaling group to use the service-linked role.
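
A minimal boto3 sketch of the grant created in Solution 2 (the key ARN, account IDs and service-linked role ARN are hypothetical; assumes the cross-account key policy already allows the Auto Scaling group's account to use the CMK and that the call is made by an IAM principal allowed to create grants):

```python
import boto3

# Minimal sketch: creates a grant so the ASG's service-linked role can use the
# CMK when launching instances with encrypted EBS volumes.
kms = boto3.client("kms")

kms.create_grant(
    KeyId="arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
    GranteePrincipal=(
        "arn:aws:iam::444455556666:role/aws-service-role/"
        "autoscaling.amazonaws.com/AWSServiceRoleForAutoScaling"
    ),
    Operations=[
        "Decrypt",
        "GenerateDataKeyWithoutPlaintext",
        "ReEncryptFrom",
        "ReEncryptTo",
        "CreateGrant",
        "DescribeKey",
    ],
)
```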

Incorrect options:

Determine which service-linked role to use for this Auto Scaling group. Update the key policy on the CMK and allow the service-linked role to use the CMK. Update the Auto Scaling group to use the service-linked role - This is possible only when CMK and Auto Scaling group are in the same AWS account.

Export the CMK to the ASG account from the instance account. Then, define a role to access this CMK and attach the role to ASG - It is not possible to export CMKs.

It is not possible for ASGs to initiate EC2 instances that have encrypted volumes attached to them - This statement is incorrect and only given as a distractor.

Reference:

https://docs.aws.amazon.com/autoscaling/ec2/userguide/ts-as-instancelaunchfailure.html#ts-as-instancelaunchfailure-10

20
Q

A team noticed that it has accidentally deleted the AMI of Amazon EC2 instances belonging to the test environment. The team had configured backups via EBS snapshots for these instances.

Which of the following options would you suggest to recover/rebuild the accidentally deleted AMI? (Select two)

A

Explanation
Correct options:

Create a new AMI from Amazon EBS snapshots that were created as backups

Create a new AMI from Amazon EC2 instances that were launched before the deletion of AMI

It isn’t possible to restore or recover a deleted or deregistered AMI. However, you can create a new, identical AMI using one of the following:

Amazon Elastic Block Store (Amazon EBS) snapshots that were created as backups: When you delete or deregister an Amazon EBS-backed AMI, any snapshots created for the volume of the instance during the AMI creation process are retained. If you accidentally delete the AMI, you can launch an identical AMI using one of the retained snapshots.

Amazon Elastic Compute Cloud (Amazon EC2) instances that were launched from the deleted AMI: If you deleted the AMI and the snapshots are also deleted, then you can recover the AMI from any existing EC2 instances launched using the deleted AMI. Unless you have selected the No reboot option on the instance, performing this step will reboot the instance.
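
A minimal boto3 sketch of both recovery paths (the snapshot ID, instance ID and device names are hypothetical; assumes default credentials and region):

```python
import boto3

# Minimal sketch: rebuilds an identical AMI either from a retained EBS snapshot
# or from a running instance that was launched from the deleted AMI.
ec2 = boto3.client("ec2")

# Option 1: register a new AMI from the retained root-volume snapshot.
ec2.register_image(
    Name="recovered-from-snapshot",
    Architecture="x86_64",
    RootDeviceName="/dev/xvda",
    BlockDeviceMappings=[
        {"DeviceName": "/dev/xvda", "Ebs": {"SnapshotId": "snap-0123456789abcdef0"}}
    ],
    VirtualizationType="hvm",
)

# Option 2: create an identical AMI from an existing instance (reboots it unless
# NoReboot=True is acceptable for your consistency requirements).
ec2.create_image(
    InstanceId="i-0123456789abcdef0",
    Name="recovered-from-instance",
    NoReboot=False,
)
```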

Incorrect options:

AWS Support retains backups of AMIs. Write to the support team to get help for recovering the lost AMI - For security and privacy reasons, AWS Support doesn’t have visibility or access to customer data. If you don’t have backups of your deleted AMI, AWS Support can’t recover it for you.

Recover the AMI from the current Amazon EC2 instances that were launched before the deletion of AMI

Recover the AMI from Amazon EBS snapshots that were created as backups before the deletion of AMI

As discussed above, it is not possible to restore or recover a deleted or deregistered AMI. The only option is to create a new, identical AMI as discussed above.

Reference:

https://aws.amazon.com/premiumsupport/knowledge-center/recover-ami-accidentally-deleted-ec2/

21
Q

A systems administrator has configured Amazon EC2 instances in an Auto Scaling Group (ASG) for two separate development teams. However, only one group has the CloudWatch agent installed on its instances, whereas the other does not. The administrator has not manually installed the agent on either group of instances.

Which of the following would you identify as a root-cause behind this issue?

A

Explanation
Correct option:

If your AMI contains a CloudWatch agent, it’s automatically installed on EC2 instances when you create an EC2 Auto Scaling group. The developer needs to choose the AMI that has CloudWatch agent pre-configured on it

If your AMI contains a CloudWatch agent, it’s automatically installed on EC2 instances when you create an EC2 Auto Scaling group. With the stock Amazon Linux AMI, you need to install it yourself (AWS recommends installing it via yum).

Incorrect options:

CloudWatch agent can be configured to be loaded on the EC2 instances while configuring the ASG. The developer could have unintentionally checked this flag on one of the ASGs he created - This is incorrect and added only as a distractor.

The architecture of the InstanceType mentioned in your launch configuration does not match the image architecture. So, the ASG was created with errors, resulting in the CloudWatch agent being skipped. A thorough check is needed for such ASGs, as more services could have been skipped - This is incorrect. Either the ASG is created successfully or it fails completely. Partial installation of services will not take place.

The instance architecture might not have been compatible with the AMI chosen. The incompatibility results in various errors, one of which is, some of the AWS services will not be installed as expected - If there are compatibility issues, the ASG will not be able to spin up instances, and throws an error that explains the compatibility error.

Reference:

https://aws.amazon.com/ec2/autoscaling/faqs/

22
Q

An automobile company uses a hybrid environment to run its technology infrastructure using a mix of on-premises instances and AWS Cloud. The company has a few managed instances in Amazon VPC. The company wants to avoid using the internet for accessing AWS Systems Manager APIs from this VPC.

As a Systems Administrator, which of the following would you recommend to address this requirement?

A

Explanation
Correct option:

You can privately access AWS Systems Manager APIs from Amazon VPC by creating VPC Endpoint - A managed instance is any machine configured for AWS Systems Manager. You can configure EC2 instances or on-premises machines in a hybrid environment as managed instances.

You can improve the security posture of your managed instances (including managed instances in your hybrid environment) by configuring AWS Systems Manager to use an interface VPC endpoint in Amazon Virtual Private Cloud (Amazon VPC). An interface VPC endpoint (interface endpoint) enables you to connect to services powered by AWS PrivateLink, a technology that enables you to privately access Amazon EC2 and Systems Manager APIs by using private IP addresses. PrivateLink restricts all network traffic between your managed instances, Systems Manager, and Amazon EC2 to the Amazon network. This means that your managed instances don’t have access to the Internet. If you use PrivateLink, you don’t need an Internet gateway, a NAT device, or a virtual private gateway.

How to use AWS PrivateLink: via - https://docs.aws.amazon.com/vpc/latest/userguide/how-it-works.html#what-is-privatelink
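
As a rough sketch, an interface endpoint for the Systems Manager API could be created with boto3 as shown below; the VPC, subnet, and security group IDs are placeholders, and additional endpoints (ssmmessages, ec2messages) may be needed depending on the Systems Manager features you use.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Interface endpoint that keeps Systems Manager API traffic on the Amazon network
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.ssm",
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    PrivateDnsEnabled=True,  # lets instances resolve the default SSM endpoint name privately
)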

Incorrect options:

You can privately access AWS Systems Manager APIs from Amazon VPC by creating Internet Gateway

An internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between instances in your VPC and the internet. It therefore imposes no availability risks or bandwidth constraints on your network traffic. An internet gateway is attached to the VPC itself, and a subnet becomes public when its route table has a route to the internet gateway. Because traffic through an internet gateway traverses the public internet, it does not provide private access to the Systems Manager APIs.

You can privately access AWS Systems Manager APIs from Amazon VPC by creating NAT gateway

You can use a network address translation (NAT) gateway to enable instances in a private subnet to connect to the internet or other AWS services, but prevent the internet from initiating a connection with those instances.

You can privately access AWS Systems Manager APIs from Amazon VPC by creating VPN connection

By default, instances that you launch into an Amazon VPC can’t communicate with your own (remote) network. You can enable access to your remote network from your VPC by creating an AWS Site-to-Site VPN (Site-to-Site VPN) connection, and configuring routing to pass traffic through the connection.

These three options contradict the explanation above, so these options are incorrect.

Reference:

https://docs.aws.amazon.com/systems-manager/latest/userguide/setup-create-vpc.html

23
Q

A retail company has realized that their Amazon EBS volume backed EC2 instance is consistently over-utilized and needs an upgrade. A developer has connected with you to understand the key parameters to be considered when changing the instance type.

As a SysOps Administrator, which of the following would you identify as correct regarding the instance types for the given use-case? (Select three)

A

Explanation
Correct options:

Resizing of an instance is only possible if the root device for your instance is an EBS volume - If the root device for your instance is an EBS volume, you can change the size of the instance simply by changing its instance type, which is known as resizing it. If the root device for your instance is an instance store volume, you must migrate your application to a new instance with the instance type that you need.

You must stop your Amazon EBS–backed instance before you can change its instance type. AWS moves the instance to new hardware; however, the instance ID does not change - You must stop your Amazon EBS–backed instance before you can change its instance type. When you stop and start an instance, AWS moves the instance to new hardware; however, the instance ID does not change.

If your instance is in an Auto Scaling group, the Amazon EC2 Auto Scaling service marks the stopped instance as unhealthy, and may terminate it and launch a replacement instance - If your instance is in an Auto Scaling group, the Amazon EC2 Auto Scaling service marks the stopped instance as unhealthy, and may terminate it and launch a replacement instance. To prevent this, you can suspend the scaling processes for the group while you’re resizing your instance.
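
A hedged boto3 sketch of the resize workflow, assuming a hypothetical ASG named my-asg and a placeholder instance ID:

import boto3

ec2 = boto3.client("ec2")
autoscaling = boto3.client("autoscaling")
instance_id = "i-0123456789abcdef0"

# Suspend scaling processes so the ASG does not replace the stopped instance
autoscaling.suspend_processes(AutoScalingGroupName="my-asg",
                              ScalingProcesses=["ReplaceUnhealthy", "Terminate"])

ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

# Change the instance type while the instance is stopped
ec2.modify_instance_attribute(InstanceId=instance_id,
                              InstanceType={"Value": "m5.xlarge"})

ec2.start_instances(InstanceIds=[instance_id])
autoscaling.resume_processes(AutoScalingGroupName="my-asg")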

Incorrect options:

The new instance retains its public, private IPv4 addresses, any Elastic IP addresses, and any IPv6 addresses that were associated with the old instance - If your instance has a public IPv4 address, AWS releases the address and gives it a new public IPv4 address. The instance retains its private IPv4 addresses, any Elastic IP addresses, and any IPv6 addresses.

There is no downtime on the instance if you choose an instance of a compatible type since AWS starts the new instance and shifts the applications from current instance - AWS suggests that you plan for downtime while your instance is stopped. Stopping and resizing an instance may take a few minutes, and restarting your instance may take a variable amount of time depending on your application’s startup scripts.

Resizing of an instance is possible if the root device is either an EBS volume or an instance store volume. However, instance store volumes take longer to start on the new instance, since cache data is lost on these instances - As discussed above, resizing of an instance is possible only if the root device for the instance is an EBS volume.

Reference:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-resize.html

24
Q

A development team has written configurable scripts that need to be run every day to monitor the business endpoints and APIs. The team wants to integrate these scripts with Amazon CloudWatch service to help in overall monitoring and analysis.

What is the right way of configuring this requirement?

A

Explanation
Correct option:

Use CloudWatch Synthetics to create canaries, which create CloudWatch metrics to track and monitor the services - You can use Amazon CloudWatch Synthetics to create canaries, configurable scripts that run on a schedule, to monitor your endpoints and APIs. Canaries follow the same routes and perform the same actions as a customer, which makes it possible for you to continually verify your customer experience even when you don’t have any customer traffic on your applications. By using canaries, you can discover issues before your customers do.

Canaries are Node.js scripts. They create Lambda functions in your account that use Node.js as a framework. Canaries work over both HTTP and HTTPS protocols.

Canaries offer programmatic access to a headless Google Chrome browser via Puppeteer. For more information, see the Puppeteer documentation.

Canaries check the availability and latency of your endpoints and can store load time data and screenshots of the UI. They monitor your REST APIs, URLs, and website content, and they can check for unauthorized changes from phishing, code injection and cross-site scripting.

You can run a canary once or on a regular schedule. Scheduled canaries can run 24 hours a day, as often as once per minute.
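
If you prefer the API over the console, a canary could be created along these lines with boto3; the bucket names, role ARN, handler, and runtime version are placeholder assumptions and should be replaced with values supported in your account and Region.

import boto3

synthetics = boto3.client("synthetics")

synthetics.create_canary(
    Name="checkout-api-canary",
    Code={"S3Bucket": "my-canary-code",        # bucket holding the zipped script
          "S3Key": "canary.zip",
          "Handler": "apiCanary.handler"},      # hypothetical handler name
    ArtifactS3Location="s3://my-canary-artifacts/",
    ExecutionRoleArn="arn:aws:iam::111122223333:role/CanaryExecutionRole",
    Schedule={"Expression": "rate(5 minutes)"},  # scheduled run, as often as once per minute
    RuntimeVersion="syn-nodejs-puppeteer-3.9",   # pick a currently supported runtime
)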

Incorrect options:

Configure a CloudWatch Composite Alarm and integrate the configurable script, written by the team, with the CloudWatch logs - A composite alarm includes a rule expression that takes into account the alarm states of other alarms that you have created. The composite alarm goes into ALARM state only if all conditions of the rule are met. The alarms specified in a composite alarm’s rule expression can include metric alarms and other composite alarms. Not the right choice for the current scenario.

CloudWatch Dashboard settings can be used to integrate the user-written scripts into Alarms generated and managed by CloudWatch - This is a made-up option, given only as a distractor.

Use CloudWatch ServiceLens to integrate the custom script into CloudWatch system for generating metrics and logs - CloudWatch ServiceLens enhances the observability of your services and applications by enabling you to integrate traces, metrics, logs, and alarms into one place. ServiceLens integrates CloudWatch with AWS X-Ray to provide an end-to-end view of your application to help you more efficiently pinpoint performance bottlenecks and identify impacted users. A very useful service, but not for our current requirement.

References:

https: //docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Synthetics_Canaries.html
https: //docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html
https: //docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/ServiceLens.html

25
Q

A retail company has built its server infrastructure on Amazon EC2 instances that run on Windows OS. The development team has defined a few custom metrics that need to be collected by the unified CloudWatch agent.

As a SysOps Administrator, can you identify the correct configuration to be used for this scenario?

A

Explanation
Correct option:

Configure the CloudWatch agent with StatsD protocol to collect the necessary system metrics - You can retrieve custom metrics from your applications or services using the StatsD and collectd protocols. StatsD is supported on both Linux servers and servers running Windows Server. collectd is supported only on Linux servers. Here, the instances are running on Windows servers, hence StatsD is the right protocol.

More information on Collecting Metrics and Logs from Amazon EC2 Instances: via - https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Install-CloudWatch-Agent.html
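
A minimal sketch of the relevant section of the agent configuration file, expressed here as a Python dictionary written to JSON; the listener address and intervals shown are common defaults, not requirements.

import json

# StatsD listener section of the CloudWatch agent configuration
agent_config = {
    "metrics": {
        "metrics_collected": {
            "statsd": {
                "service_address": ":8125",
                "metrics_collection_interval": 60,
                "metrics_aggregation_interval": 60,
            }
        }
    }
}

with open("amazon-cloudwatch-agent.json", "w") as f:
    json.dump(agent_config, f, indent=2)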

Incorrect options:

Configure the CloudWatch agent with collectd protocol to collect the necessary system metrics - collectd is supported only on Linux servers and hence it is not the correct choice here.

CloudWatch agent can be configured with either StatsD protocol or collectd protocol to collect the necessary system metrics on windows servers - StatsD is supported on both Linux servers and servers running Windows Server. collectd is supported only on Linux servers.

Unified CloudWatch agent cannot be custom configured - This is an incorrect statement and used only as a distractor.

Reference:

https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Install-CloudWatch-Agent.html

26
Q

A retail company wants to get out of the business of owning and maintaining its own IT infrastructure. As part of this digital transformation, the company wants to archive about 5PB of data in its on-premises data center to durable long term storage.

As a SysOps Administrator, what is your recommendation to migrate this data in the MOST cost-optimal way?

A

Explanation
Correct option:

Transfer the on-premises data into multiple Snowball Edge Storage Optimized devices. Copy the Snowball Edge data into Amazon S3 and create a lifecycle policy to transition the data into AWS Glacier

Snowball Edge Storage Optimized is the optimal choice if you need to securely and quickly transfer dozens of terabytes to petabytes of data to AWS. It provides up to 80 TB of usable HDD storage, 40 vCPUs, 1 TB of SATA SSD storage, and up to 40 Gb network connectivity to address large scale data transfer and pre-processing use cases. The data stored on the Snowball Edge device can be copied into the S3 bucket and later transitioned into AWS Glacier via a lifecycle policy. You can’t directly copy data from Snowball Edge devices into AWS Glacier.
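
As an illustrative boto3 sketch (bucket name and rule ID are placeholders), the lifecycle transition could look like this:

import boto3

s3 = boto3.client("s3")

# Transition all objects in the archive bucket to the Glacier storage class
s3.put_bucket_lifecycle_configuration(
    Bucket="my-archive-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-to-glacier",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to every object in the bucket
                "Transitions": [{"Days": 1, "StorageClass": "GLACIER"}],
            }
        ]
    },
)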

Incorrect options:

Transfer the on-premises data into multiple Snowball Edge Storage Optimized devices. Copy the Snowball Edge data into AWS Glacier - As mentioned earlier, you can’t directly copy data from Snowball Edge devices into AWS Glacier. Hence, this option is incorrect.

Setup AWS direct connect between the on-premises data center and AWS Cloud. Use this connection to transfer the data into AWS Glacier - AWS Direct Connect lets you establish a dedicated network connection between your network and one of the AWS Direct Connect locations. Using industry-standard 802.1q VLANs, this dedicated connection can be partitioned into multiple virtual interfaces. Direct Connect involves significant monetary investment and takes more than a month to set up, therefore it’s not the correct fit for this use-case where just a one-time data transfer has to be done.

Setup Site-to-Site VPN connection between the on-premises data center and AWS Cloud. Use this connection to transfer the data into AWS Glacier - AWS Site-to-Site VPN enables you to securely connect your on-premises network or branch office site to your Amazon Virtual Private Cloud (Amazon VPC). VPN Connections are a good solution if you have an immediate need, and have low to modest bandwidth requirements. Because of the high data volume for the given use-case, Site-to-Site VPN is not the correct choice.

Reference:

https://aws.amazon.com/snowball/

27
Q

A financial services company runs its server infrastructure on a fleet of Amazon EC2 instances running behind an Auto Scaling Group (ASG). The SysOps Administrator has configured the instances to be protected from termination during scale-in.

A scale-in event has occurred. What is the outcome of the event?

A

Explanation
Correct option:

The desired capacity of the ASG is decremented, but ASG will not be able to terminate any instance - To control whether an Auto Scaling group can terminate a particular instance when scaling in, use instance scale-in protection. You can enable the instance scale-in protection setting on an Auto Scaling group or an individual Auto Scaling instance.

If all instances in an Auto Scaling group are protected from termination during scale in, and a scale-in event occurs, its desired capacity is decremented. However, the Auto Scaling group can’t terminate the required number of instances until their instance scale-in protection settings are disabled.

Instance scale-in protection does not protect Auto Scaling instances from the following:

Manual termination through the Amazon EC2 console, the terminate-instances command, or the TerminateInstances action. To protect Auto Scaling instances from manual termination, enable Amazon EC2 termination protection.

Health check replacement if the instance fails health checks. To prevent Amazon EC2 Auto Scaling from terminating unhealthy instances, suspend the ReplaceUnhealthy process.

Spot Instance interruptions. A Spot Instance is terminated when capacity is no longer available or the Spot price exceeds your maximum price.
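
For reference, a small boto3 sketch of enabling scale-in protection, assuming a hypothetical group named my-asg:

import boto3

autoscaling = boto3.client("autoscaling")

# Protect all newly launched instances at the group level
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="my-asg",
    NewInstancesProtectedFromScaleIn=True,
)

# Enable or disable protection for specific running instances
autoscaling.set_instance_protection(
    AutoScalingGroupName="my-asg",
    InstanceIds=["i-0123456789abcdef0"],
    ProtectedFromScaleIn=True,  # set False again to allow scale-in to terminate it
)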

Incorrect options:

The minimum capacity of the ASG is decremented, but ASG will not be able to terminate any instance

The desired capacity of the ASG is decremented and the instances are terminated based on the configuration

When all instances are termination protected, scale-in event is not generated

These three options contradict the explanation above, so these options are incorrect.

Reference:

https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-instance-termination.html

28
Q

A hospitality company runs their applications on its on-premises infrastructure but stores the critical customer data on AWS Cloud using AWS Storage Gateway. At a recent audit, the company has been asked if the customer data is secure while in-transit and at rest in the Cloud.

What is the correct answer to the auditor’s question? And what should the company change to meet the security requirements?

A

Explanation
Correct option:

AWS Storage Gateway uses SSL/TLS (Secure Socket Layers/Transport Layer Security) to encrypt data that is transferred between your gateway appliance and AWS storage. By default, Storage Gateway uses Amazon S3-Managed Encryption Keys to server-side encrypt all data it stores in Amazon S3

AWS Storage Gateway uses SSL/TLS (Secure Socket Layers/Transport Layer Security) to encrypt data that is transferred between your gateway appliance and AWS storage. By default, Storage Gateway uses Amazon S3-Managed Encryption Keys (SSE-S3) to server-side encrypt all data it stores in Amazon S3. You have an option to use the Storage Gateway API to configure your gateway to encrypt data stored in the cloud using server-side encryption with AWS Key Management Service (SSE-KMS) customer master keys (CMKs).

File, Volume, and Tape Gateway data is stored in Amazon S3 buckets by AWS Storage Gateway. Tape Gateway also supports archiving data to Amazon S3 Glacier in addition to standard S3 storage.

Encrypting a file share: For a file share, you can configure your gateway to encrypt your objects with AWS KMS–managed keys by using SSE-KMS.

Encrypting a volume: For cached and stored volumes, you can configure your gateway to encrypt volume data stored in the cloud with AWS KMS–managed keys by using the Storage Gateway API.

Encrypting a tape: For a virtual tape, you can configure your gateway to encrypt tape data stored in the cloud with AWS KMS–managed keys by using the Storage Gateway API.
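
As a hedged example, creating a KMS-encrypted NFS file share via the API could look roughly like this in boto3; all ARNs are placeholders:

import boto3
import uuid

storagegateway = boto3.client("storagegateway")

# NFS file share whose objects are encrypted with a customer-managed KMS key (SSE-KMS)
storagegateway.create_nfs_file_share(
    ClientToken=str(uuid.uuid4()),
    GatewayARN="arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-12345678",
    Role="arn:aws:iam::111122223333:role/StorageGatewayS3AccessRole",
    LocationARN="arn:aws:s3:::my-file-share-bucket",
    KMSEncrypted=True,
    KMSKey="arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
)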

Incorrect options:

AWS Storage Gateway uses IPsec to encrypt data that is transferred between your gateway appliance and AWS storage. File and Volume Gateway data stored on Amazon S3 is encrypted. Tape Gateway data cannot be encrypted at-rest

AWS Storage Gateway uses IPsec to encrypt data that is transferred between your gateway appliance and AWS storage. All three Gateway types store data in encrypted form at-rest

Storage Gateway does not use IPsec to encrypt in-transit data between your gateway appliance and AWS storage; it uses SSL/TLS for this. So both these options are incorrect.

AWS Storage Gateway uses SSL/TLS (Secure Socket Layers/Transport Layer Security) to encrypt data that is transferred between your gateway appliance and AWS storage. File and Volume Gateway data stored on Amazon S3 is encrypted. Tape Gateway data cannot be encrypted at-rest - For a virtual tape, you can configure your gateway to encrypt tape data stored in the cloud with AWS KMS–managed keys by using the Storage Gateway API. So this option is incorrect.

Reference:

https://docs.aws.amazon.com/storagegateway/latest/userguide/encryption.html

29
Q

An organization has multiple AWS accounts to manage different lines of business. A user from the Finance account has to access reports stored in Amazon S3 buckets of two other AWS accounts (belonging to the HR and Audit departments) and copy these reports back to the S3 bucket in the Finance account. The user has requested the necessary permissions from the systems administrator to perform this task.

As a SysOps Administrator, how will you configure a solution for this requirement?

A

Explanation
Correct option:

Create identity-based IAM policy in the Finance account that allows the user to make a request to the S3 buckets in the HR and Audit accounts. Also, create resource-based IAM policies in the HR, Audit accounts that will allow the requester from the Finance account to access the respective S3 buckets

Identity-based policies are attached to an IAM user, group, or role. These policies let you specify what that identity can do (its permissions).

Resource-based policies are attached to a resource. For example, you can attach resource-based policies to Amazon S3 buckets, Amazon SQS queues, and AWS Key Management Service encryption keys.

Identity-based policies and resource-based policies are both permissions policies and are evaluated together. For a request to which only permissions policies apply, AWS first checks all policies for a Deny. If one exists, then the request is denied. Then AWS checks for each Allow. If at least one policy statement allows the action in the request, the request is allowed. It doesn’t matter whether the Allow is in the identity-based policy or the resource-based policy.

For requests made from one account to another, the requester in Account A must have an identity-based policy that allows them to make a request to the resource in Account B. Also, the resource-based policy in Account B must allow the requester in Account A to access the resource. There must be policies in both accounts that allow the operation, otherwise, the request fails.

Comparing IAM policies: via - https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_identity-vs-resource.html
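
A rough boto3 sketch of the resource-based half of this setup, assuming placeholder account IDs, user names, and bucket names; the Finance account would attach a matching identity-based policy to the user:

import boto3
import json

# Resource-based (bucket) policy applied in the HR or Audit account, allowing a
# specific user from the Finance account to read the reports
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111111111111:user/finance-analyst"},
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": ["arn:aws:s3:::hr-reports-bucket",
                     "arn:aws:s3:::hr-reports-bucket/*"],
    }],
}
boto3.client("s3").put_bucket_policy(Bucket="hr-reports-bucket",
                                     Policy=json.dumps(bucket_policy))

# The identity-based policy in the Finance account would allow the same
# s3:GetObject / s3:ListBucket actions on arn:aws:s3:::hr-reports-bucket* for this user,
# plus s3:PutObject on the Finance bucket so the reports can be copied back.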

Incorrect options:

Create resource-based policies in the HR, Audit accounts that will allow the requester from the Finance account to access the respective S3 buckets - Creating resource-based policy alone will be sufficient when the request is made within a single AWS account.

Create resource-level permissions in the HR, Audit accounts to allow access to respective S3 buckets for the user in the Finance account - Resource-based policies differ from resource-level permissions. You can attach resource-based policies directly to a resource, as described in this topic. Resource-level permissions refer to the ability to use ARNs to specify individual resources in a policy. Resource-based policies are supported only by some AWS services.

Create IAM roles in the HR, Audit accounts, which can be assumed by the user from the Finance account when the user needs to access the S3 buckets of the accounts - Cross-account access with a resource-based policy has some advantages over cross-account access with a role. With a resource that is accessed through a resource-based policy, the principal still works in the trusted account and does not have to give up his or her permissions to receive the role permissions. In other words, the principal continues to have access to resources in the trusted account at the same time as he or she has access to the resource in the trusting account. This is useful for tasks such as copying information to or from the shared resource in the other account.

We chose resource-based policy, so the user from the Finance account will continue to have access to resources in his own account while also getting permissions on resources from other accounts.

References:

https: //docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_identity-vs-resource.html
https: //docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_compare-resource-policies.html

30
Q

An e-commerce web application is built on a fleet of Amazon EC2 instances with an Auto Scaling Group. The application performance remains consistent throughout the day. But, for a few weeks now, users have been complaining about lagging screens and failing orders between 5-6 PM almost every day. Server logs show a sharp spike in user activity for this one hour every day.

What is an optimal way to fix the issue while keeping the application available?

A

Explanation
Correct option:

Create a scheduled scaling action to scale up before the traffic spike hits the servers - Scheduled scaling allows you to set your own scaling schedule. For example, let’s say that every week the traffic to your web application starts to increase on Wednesday, remains high on Thursday, and starts to decrease on Friday. You can plan your scaling actions based on the predictable traffic patterns of your web application. Scaling actions are performed automatically as a function of time and date.

To configure your Auto Scaling group to scale based on a schedule, you create a scheduled action. The scheduled action tells Amazon EC2 Auto Scaling to perform a scaling action at specified times. To create a scheduled scaling action, you specify the start time when the scaling action should take effect, and the new minimum, maximum, and desired sizes for the scaling action. At the specified time, Amazon EC2 Auto Scaling updates the group with the values for minimum, maximum, and desired size that are specified by the scaling action.

You can create scheduled actions for scaling one time only, or for scaling on a recurring schedule.
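
For example, scheduled actions around the 5-6 PM spike could be created with boto3 as sketched below; the group name, capacities, and UTC cron expressions are assumptions to adapt to your workload:

import boto3

autoscaling = boto3.client("autoscaling")

# Scale out shortly before the daily spike (cron fields are in UTC)
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",
    ScheduledActionName="daily-evening-scale-out",
    Recurrence="45 16 * * *",
    MinSize=4, MaxSize=12, DesiredCapacity=8,
)

# Scale back in once the spike has passed
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",
    ScheduledActionName="daily-evening-scale-in",
    Recurrence="15 18 * * *",
    MinSize=2, MaxSize=12, DesiredCapacity=2,
)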

Incorrect options:

Modify the Auto Scaling Group launch configuration to include more number of instances - An Auto Scaling group is associated with one launch configuration at a time, and you can’t modify a launch configuration after you’ve created it.

You can choose to manually add few more instances to the ASG to deal with the sudden spike - At any time, you can change the size of an existing Auto Scaling group manually. You can either update the desired capacity of the Auto Scaling group, or update the instances that are attached to the Auto Scaling group. But this is not an optimal solution, since it requires user intervention on a daily basis when a more elegant and effective method is already available.

Configure an Elastic Load Balancer, to replace the ASG, and move all the instances to ELB - Elastic Load Balancer can balance the incoming traffic across instances. It cannot scale-out and launch new instances in the absence of an attached Auto Scaling Group.

References:

https: //docs.aws.amazon.com/autoscaling/ec2/userguide/scaling_plan.html
https: //docs.aws.amazon.com/autoscaling/ec2/userguide/as-manual-scaling.html

31
Q

A large online business uses multiple Amazon EBS volumes for their storage requirements. According to the company guidelines, the EBS snapshots have to be taken every few minutes to retain the business-critical data in case of failure.

As a SysOps Administrator, can you suggest an effective way of addressing this requirement?

A

Explanation
Correct option:

Use Amazon CloudWatch events to schedule automated EBS Snapshots - You can run CloudWatch Events rules according to a schedule. It is possible to create an automated snapshot of an existing Amazon Elastic Block Store (Amazon EBS) volume on a schedule. You can choose a fixed rate to create a snapshot every few minutes or use a cron expression to specify that the snapshot is made at a specific time of day.

Snapshots are incremental backups, which means that only the blocks on the device that have changed after your most recent snapshot are saved. This minimizes the time required to create the snapshot and saves on storage costs by not duplicating data. Each snapshot contains all of the information that is needed to restore your data (from the moment when the snapshot was taken) to a new EBS volume.

Steps to create a rule that takes snapshots on a schedule: via - https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/TakeScheduledSnapshot.html
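
A hedged boto3 sketch of the scheduled rule; the automation-action target ARN format shown is illustrative (the console generates it for you when you choose the EBS Create Snapshot built-in target), and the volume ARN is a placeholder:

import boto3
import json

events = boto3.client("events")

# Rule that fires every 15 minutes
events.put_rule(Name="ebs-snapshot-every-15-min",
                ScheduleExpression="rate(15 minutes)",
                State="ENABLED")

# Attach the "EBS Create Snapshot" built-in target; its input is the volume ARN
events.put_targets(
    Rule="ebs-snapshot-every-15-min",
    Targets=[{
        "Id": "create-snapshot",
        "Arn": "arn:aws:automation:us-east-1:111122223333:action/"
               "EBSCreateSnapshot/EBSCreateSnapshot_ebs-snapshot-every-15-min",
        "Input": json.dumps("arn:aws:ec2:us-east-1:111122223333:volume/vol-0123456789abcdef0"),
    }],
)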

Incorrect options:

Use AWS Lambda functions to initiate automatic EBS snapshots every few minutes - A Lambda function does not invoke itself; it needs an event source or scheduler to trigger it. Writing self-invoking code inside the Lambda function would result in runaway parallel invocations and would be a very expensive solution. Hence, this option is incorrect.

Use Amazon SNS Notification service to trigger AWS Lambda function that can initiate the EBS snapshots - Amazon SNS and AWS Lambda are integrated so you can invoke Lambda functions with Amazon SNS notifications. The Lambda function can be coded to take a snapshot of the EBS volume every few minutes. However, this approach is neither direct nor cost-effective for the stated requirement.

Automated EBS snapshots is a configurable item from Amazon EC2 configuration screen on AWS console - This is an incorrect statement and given only as a distractor.

Reference:

https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/TakeScheduledSnapshot.html

32
Q

A firm uses Amazon EC2 instances for running its flagship application. With new business expansion plans, the firm is looking at a bigger footprint for its AWS infrastructure. The development team needs to share Amazon Machine Images (AMIs) across AZs, AWS accounts and Regions.

What are the key points to be considered before planning the expansion? (Select two)

A

Explanation
Correct options:

You can only share AMIs that have unencrypted volumes and volumes that are encrypted with a customer-managed CMK - You can only share AMIs that have unencrypted volumes and volumes that are encrypted with a customer-managed CMK. If you share an AMI with encrypted volumes, you must also share any CMKs used to encrypt them.

You do not need to share the Amazon EBS snapshots that an AMI references in order to share the AMI - You do not need to share the Amazon EBS snapshots that an AMI references in order to share the AMI. Only the AMI itself needs to be shared; the system automatically provides the access to the referenced Amazon EBS snapshots for the launch.
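
For illustration, sharing an AMI with another account via boto3 could look like this; the AMI ID and account ID are placeholders:

import boto3

ec2 = boto3.client("ec2")

# Grant launch permission on the AMI to another AWS account
ec2.modify_image_attribute(
    ImageId="ami-0123456789abcdef0",
    LaunchPermission={"Add": [{"UserId": "444455556666"}]},
)

# If the AMI's volumes are encrypted with a customer-managed CMK, the target account
# must also be given permission to use that key (for example, via the KMS key policy
# or a KMS grant); an AWS-managed CMK cannot be shared this way.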

Incorrect options:

You can only share AMIs that have unencrypted volumes and volumes that are encrypted with an AWS-managed CMK - You cannot share an AMI that has volumes that are encrypted with an AWS-managed CMK.

You need to share any CMKs used to encrypt snapshots and any Amazon EBS snapshots that the AMI references - You do not need to share the Amazon EBS snapshots that an AMI references in order to share the AMI.

AMIs are regional resources and can be shared across Regions - AMIs are a regional resource. Therefore, sharing an AMI makes it available in that Region. To make an AMI available in a different Region, copy the AMI to that Region and then share it. You cannot share an AMI directly across Regions.

Reference:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/sharingamis-explicit.html

33
Q

A data analytics company has its server infrastructure built on Amazon EC2 instances fronted with Elastic Load Balancers (ELBs). The ELBs are maintained in two AZs with each ELB having two EC2 instances registered with it. Both the instances in one AZ have been recorded as unhealthy.

What is the status of traffic that flows to the ELB connected to unhealthy instances?

A

Explanation
Correct option:

The Load Balancer routes requests to the unhealthy targets - If there is at least one healthy target in a target group, the load balancer routes requests only to the healthy targets. If a target group contains only unhealthy targets, the load balancer routes requests to the unhealthy targets. Hence, it is advised to configure an Auto Scaling Group, if the instances are hosting a business-critical application.
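
To see which targets are failing health checks, you could inspect the target group with boto3, for example (the target group ARN is a placeholder):

import boto3

elbv2 = boto3.client("elbv2")

# List each registered target with its current health state and reason code
response = elbv2.describe_target_health(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:"
                   "targetgroup/my-targets/0123456789abcdef"
)
for target in response["TargetHealthDescriptions"]:
    print(target["Target"]["Id"],
          target["TargetHealth"]["State"],
          target["TargetHealth"].get("Reason", ""))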

Incorrect options:

HTTP 503: Service unavailable will be received as response - 503 error is returned if the target groups for the load balancer have no registered targets.

The Load Balancer will display an 'unhealthy' status and will not accept any incoming requests - This is a made-up option, given only as a distractor.

HTTP 403: Forbidden will be returned - 403 error is returned if you configured an AWS WAF web access control list (web ACL) to monitor requests to your Application Load Balancer and it blocked the request.

Reference:

https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-troubleshooting.html

34
Q

The technology team at a retail company has set the DisableApiTermination attribute for a business-critical Amazon EC2 Windows instance to prevent termination of the instance via an API. This instance is behind an Auto Scaling Group (ASG) and the InstanceInitiatedShutdownBehavior attribute is set for the instance. A developer has initiated shutdown from the instance using operating system commands.

What will be the outcome of the above scenario?

A

Explanation
Correct option:

The instance will be terminated - By default, you can terminate your instance using the Amazon EC2 console, command line interface, or API. To prevent your instance from being accidentally terminated using Amazon EC2, you can enable termination protection for the instance. The DisableApiTermination attribute controls whether the instance can be terminated using the console, CLI, or API. By default, termination protection is disabled for your instance. You can set the value of this attribute when you launch the instance, while the instance is running, or while the instance is stopped (for Amazon EBS-backed instances).

The DisableApiTermination attribute does not prevent you from terminating an instance by initiating shutdown from the instance (using an operating system command for system shutdown) when the InstanceInitiatedShutdownBehavior attribute is set.
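
A small boto3 sketch of both attributes, using a placeholder instance ID:

import boto3

ec2 = boto3.client("ec2")
instance_id = "i-0123456789abcdef0"

# Termination protection: blocks TerminateInstances from the console, CLI, or API
ec2.modify_instance_attribute(InstanceId=instance_id,
                              DisableApiTermination={"Value": True})

# Shutdown behavior: "stop" or "terminate". With "terminate", an OS-level shutdown
# still terminates the instance even though DisableApiTermination is set.
ec2.modify_instance_attribute(InstanceId=instance_id,
                              InstanceInitiatedShutdownBehavior={"Value": "terminate"})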

Incorrect options:

The instance will not shutdown because DisableApiTermination attribute is set - As discussed above, this flag only controls instance termination from the console, command line interface, or API. It does not protect against shutdown commands issued from the operating system of the instance when the InstanceInitiatedShutdownBehavior attribute is set.

The operating system of the instance will send an Amazon SNS notification to the concerned person, that was configured when DisableApiTermination attribute was set. The operating system will hold the shutdown for few configured minutes and then progress with instance shutdown - This is a made-up option and given only as a distractor.

ASG cannot terminate an instance whose DisableApiTermination attribute is set - This statement is false. The DisableApiTermination attribute does not prevent Amazon EC2 Auto Scaling from terminating an instance.

Reference:

https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/terminating-instances.html#Using_ChangingDisableAPITermination

35
Q

A company initially used a manual process to create and manage different IAM roles needed for the organization. As the company expanded and lines of business grew, different AWS accounts were created to manage the AWS resources as well as the users. The manual process has resulted in errors with IAM roles getting created with insufficient permissions. The company is looking at automating the process of creating and managing the necessary IAM roles for multiple AWS accounts. The company already uses AWS Organizations to manage multiple AWS accounts.

As a SysOps Administrator, can you suggest an effective way to automate this process?

A

Explanation
Correct option:

Use CloudFormation StackSets with AWS Organizations to deploy and manage IAM roles to multiple AWS accounts simultaneously

CloudFormation StackSets allow you to roll out CloudFormation stacks over multiple AWS accounts and in multiple Regions with just a couple of clicks. When AWS launched StackSets, grouping accounts was primarily for billing purposes. Since the launch of AWS Organizations, you can centrally manage multiple AWS accounts across diverse business needs including billing, access control, compliance, security and resource sharing.

You can now centrally orchestrate any AWS CloudFormation enabled service across multiple AWS accounts and regions. For example, you can deploy your centralized AWS Identity and Access Management (IAM) roles, provision Amazon Elastic Compute Cloud (EC2) instances or AWS Lambda functions across AWS Regions and accounts in your organization. CloudFormation StackSets simplify the configuration of cross-accounts permissions and allow for automatic creation and deletion of resources when accounts are joining or are removed from your Organization.

You can get started by enabling data sharing between CloudFormation and Organizations from the StackSets console. Once done, you will be able to use StackSets in the Organizations master account to deploy stacks to all accounts in your organization or in specific organizational units (OUs). A new service managed permission model is available with these StackSets. Choosing Service managed permissions allows StackSets to automatically configure the necessary IAM permissions required to deploy your stack to the accounts in your organization.

How to use AWS CloudFormation StackSets for Multiple Accounts in an AWS Organization: via - https://aws.amazon.com/blogs/aws/new-use-aws-cloudformation-stacksets-for-multiple-accounts-in-an-aws-organization/
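
As a rough boto3 sketch (template URL and OU ID are placeholders), a service-managed stack set could be created and deployed like this:

import boto3

cloudformation = boto3.client("cloudformation")

# Stack set that deploys an IAM-role template to every account in the target OUs
cloudformation.create_stack_set(
    StackSetName="org-iam-roles",
    TemplateURL="https://s3.amazonaws.com/my-templates/iam-roles.yaml",
    Capabilities=["CAPABILITY_NAMED_IAM"],
    PermissionModel="SERVICE_MANAGED",
    AutoDeployment={"Enabled": True, "RetainStacksOnAccountRemoval": False},
)

# Deploy stack instances to the chosen organizational units and Regions
cloudformation.create_stack_instances(
    StackSetName="org-iam-roles",
    DeploymentTargets={"OrganizationalUnitIds": ["ou-ab12-cdefgh34"]},
    Regions=["us-east-1"],
)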

Incorrect options:

Create CloudFormation templates and reuse them to create necessary IAM roles in each of the AWS accounts - CloudFormation templates can ease the current manual process that the company is using. However, it’s not a completely automated process that the company needs.

Use AWS Directory Service with AWS Organizations to automatically associate necessary IAM roles with the Microsoft Active Directory users - AWS Directory Service for Microsoft Active Directory, or AWS Managed Microsoft AD, lets you run Microsoft Active Directory (AD) as a managed service. AWS Directory Service makes it easy to set up and run directories in the AWS Cloud or connect your AWS resources with an existing on-premises Microsoft Active Directory. It is not meant for the automatic creation of IAM roles across AWS accounts.

Use AWS Resource Access Manager that integrates with AWS Organizations to deploy and manage shared resources across AWS accounts - AWS Resource Access Manager (AWS RAM) enables you to share specified AWS resources that you own with other AWS accounts. It’s a centralized service that provides a consistent experience for sharing different types of AWS resources across multiple accounts. This service enables you to share resources across AWS accounts. It’s not meant for re-creating the same resource definitions in different AWS accounts.

References:

https: //aws.amazon.com/blogs/aws/new-use-aws-cloudformation-stacksets-for-multiple-accounts-in-an-aws-organization/
https: //docs.aws.amazon.com/organizations/latest/userguide/services-that-can-integrate-ram.html

36
Q

A financial analytics company stores its confidential reports in an Amazon S3 bucket. These reports are no longer valid or useful to the company after 5 years. Manual deletion is often delayed, which results in higher storage costs for the company.

As a SysOps Administrator, what would you do to delete the expired reports on time to save costs?

A

Explanation
Correct option:

Configure the “Retain Until Date” in the object lock settings to a date that is 5 years away from the current date - A retention period protects an object version for a fixed amount of time. When you place a retention period on an object version, Amazon S3 stores a timestamp in the object version’s metadata to indicate when the retention period expires. After the retention period expires, the object version can be overwritten or deleted unless you also placed a legal hold on the object version.

You can place a retention period on an object version either explicitly or through a bucket default setting. When you apply a retention period to an object version explicitly, you specify a Retain Until Date for the object version. Amazon S3 stores the Retain Until Date setting in the object version’s metadata and protects the object version until the retention period expires.
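
For illustration, setting an explicit Retain Until Date on an object with boto3 might look like this; the bucket, key, date, and retention mode are placeholder assumptions:

import boto3
from datetime import datetime, timezone

s3 = boto3.client("s3")

# The bucket must have Object Lock enabled (which requires versioning).
# Mode can be GOVERNANCE or COMPLIANCE depending on how strict the lock should be.
s3.put_object_retention(
    Bucket="confidential-reports",
    Key="2023/q4-report.pdf",
    Retention={
        "Mode": "GOVERNANCE",
        "RetainUntilDate": datetime(2028, 12, 31, tzinfo=timezone.utc),
    },
)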

Incorrect options:

Configure the Amazon S3 bucket default settings to specify the “Retain Until Date” for all the objects in the bucket - When you use bucket default settings, you don’t specify a Retain Until Date. Instead, you specify a duration, in either days or years, for which every object version placed in the bucket should be protected.

Disable versioning on the S3 bucket for which the retention period is being set, to avoid creating retention periods for all versions of the object. Then, configure the retention period in the object lock settings to 5 years - This statement is incorrect. Object Lock works only in versioned buckets, and retention periods and legal holds apply to individual object versions. When you lock an object version, Amazon S3 stores the lock information in the metadata for that object version. Placing a retention period or legal hold on an object protects only the version specified in the request.

Use S3 replication to replicate the latest data to another bucket and delete the entire bucket - This method is resource-, time- and cost-intensive, since the replication would have to be done quite often to delete the old objects. This is an inelegant way to address the given requirement.

References:

https: //docs.aws.amazon.com/AmazonS3/latest/dev/object-lock.html
https: //docs.aws.amazon.com/AmazonS3/latest/dev/object-lock-overview.html
https: //docs.aws.amazon.com/AmazonS3/latest/dev/replication.html

37
Q

A team needs to create an AMI from their Amazon EC2 instances for use in another environment.

What is the right way to create an application-consistent AMI from existing EC2 instances?

A

Explanation
Correct option:

Create the AMI by disabling the No reboot option - On the Create image page, there is a No reboot flag. By default, Amazon EC2 shuts down the instance, takes snapshots of any attached volumes, creates and registers the AMI, and then reboots the instance. When the No reboot option is selected, the instance is not shut down while creating the AMI. This option is not selected by default.

If you select No reboot, the AMI will be crash-consistent (all the volumes are snapshotted at the same time), but not application-consistent (all the operating system buffers are not flushed to disk before the snapshots are created).
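
A minimal boto3 sketch, assuming a placeholder instance ID:

import boto3

ec2 = boto3.client("ec2")

# NoReboot=False (the default) lets EC2 shut the instance down first, so operating
# system buffers are flushed and the resulting AMI is application-consistent
ec2.create_image(
    InstanceId="i-0123456789abcdef0",
    Name="app-consistent-ami",
    Description="AMI created with the instance shut down for consistency",
    NoReboot=False,
)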

Incorrect options:

Create the AMI with No reboot option enabled - If the No reboot flag is selected, the instance is not shut down while creating the AMI. This implies that the operating system buffers are not flushed before creating the AMI, so data integrity could be an issue with AMIs created this way. Such AMIs are crash-consistent but not application-consistent.

Create an EBS-backed AMI for application consistency - When you create a new instance from an EBS-backed AMI, you are using persistent storage. No reboot flag should still be unchecked to ensure that everything on the instance is stopped and in a consistent state during the creation process.

Create the AMI with Delete on termination enabled - If you select Delete on termination, when you terminate the instance created from this AMI, the EBS volume is deleted. If you clear Delete on termination, when you terminate the instance, the EBS volume is not deleted. This option has been added as a distractor.

Reference:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/creating-an-ami-ebs.html

38
Q

The development team at a retail company manages the deployment and scaling of their web application through AWS Elastic Beanstalk. After configuring the Elastic Beanstalk environment, the team has realized that Beanstalk is not handling the scaling activities the way they expected. This has impacted the application’s ability to respond to the variations in traffic.

How should the environment be configured to get the best of Beanstalk’s auto-scaling capabilities?

A

Explanation
Correct option:

The Auto Scaling group in your Elastic Beanstalk environment uses two default Amazon CloudWatch alarms to trigger scaling operations. These alarms must be configured based on the parameters appropriate for your application

The Auto Scaling group in your Elastic Beanstalk environment uses two Amazon CloudWatch alarms to trigger scaling operations. Default Auto Scaling triggers are configured to scale when the average outbound network traffic (NetworkOut) from each instance is higher than 6 MB or lower than 2 MB over a period of five minutes.

For more efficient Amazon EC2 Auto Scaling, configure triggers that are appropriate for your application, instance type, and service requirements. You can scale based on several statistics including latency, disk I/O, CPU utilization, and request count.
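
As a hedged example, switching the environment's trigger to CPU-based thresholds could be done with boto3 roughly as follows; the environment name and threshold values are assumptions:

import boto3

elasticbeanstalk = boto3.client("elasticbeanstalk")

# Replace the default NetworkOut trigger with CPU-based scaling thresholds
elasticbeanstalk.update_environment(
    EnvironmentName="my-web-env",
    OptionSettings=[
        {"Namespace": "aws:autoscaling:trigger", "OptionName": "MeasureName", "Value": "CPUUtilization"},
        {"Namespace": "aws:autoscaling:trigger", "OptionName": "Statistic", "Value": "Average"},
        {"Namespace": "aws:autoscaling:trigger", "OptionName": "Unit", "Value": "Percent"},
        {"Namespace": "aws:autoscaling:trigger", "OptionName": "UpperThreshold", "Value": "70"},
        {"Namespace": "aws:autoscaling:trigger", "OptionName": "LowerThreshold", "Value": "30"},
    ],
)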

Incorrect options:

The IAM Role attached to the Auto Scaling group might not have enough permissions to scale instances on-demand - The Auto Scaling group will not be able to spin up Amazon EC2 instances if the IAM Role associated with Beanstalk does not have enough permissions. Since the current use-case talks about scaling not happening at the expected rate, this should not be the issue.

By default, Auto Scaling group created from Beanstalk uses Elastic Load Balancing health checks. Configure the Beanstalk to use Amazon EC2 status checks - This statement is incorrect. By default, Auto Scaling group created from Beanstalk uses Amazon EC2 status checks.

The Auto Scaling group in your Elastic Beanstalk environment uses the number of logged-in users, as the criteria to trigger auto-scaling action. These alarms must be configured based on the parameters appropriate for your application - The default scaling criterion has already been discussed above (and it is not the number of logged-in users).

Reference:

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.alarms.html

39
Q

As a SysOps Administrator, you maintain the development account of a large team that comprises both developers and testers. The Development account has two IAM groups: Developers and Testers. Users in both groups have permission to work in the development account and access resources there. From time to time, a developer must update the live S3 Bucket in the production account.

How will you configure the permissions for developers to access the production environment?

A

Explanation
Correct option:

Create a Role in production account, that defines the Development account as a trusted entity and specify a permissions policy that allows trusted users to update the bucket. Then, modify the IAM group policy in development account, so that testers are denied access to the newly created role. Developers can use the newly created role to access the live S3 buckets in production environment -

First, you use the AWS Management Console to establish trust between the production account and the development account. You start by creating an IAM role. When you create the role, you define the development account as a trusted entity and specify a permissions policy that allows trusted users to update the production bucket.

You need to then modify the IAM group policy so that Testers are explicitly denied access to the created role.

Finally, as a developer, you use the created role to update the bucket in the Production account.
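
For illustration, the developer's side of this flow could look like the following boto3 sketch; the role ARN, bucket, and key are placeholders:

import boto3

# The developer (signed in to the development account) assumes the role created
# in the production account
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::999999999999:role/UpdateProductionBucketRole",
    RoleSessionName="developer-prod-update",
)["Credentials"]

# Use the temporary credentials to update the live bucket in the production account
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
s3.put_object(Bucket="prod-live-bucket", Key="release/config.json", Body=b"{}")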

Incorrect options:

Create a Role in development account, that defines the production account as a trusted entity and specify a permissions policy that allows trusted users to update the bucket. Then, modify the IAM group policy in development account, so that testers are denied access to the newly created role. Developers can use the newly created role to access the live S3 buckets in production environment - The role has to be created in the production account, since the resource to be accessed is in that account.

Use Inline policies to be sure that the permissions in a policy are not inadvertently assigned to an identity other than the one they’re intended for - An inline policy is a policy that’s embedded in an IAM identity (a user, group, or role). That is, the policy is an inherent part of the identity. You can create a policy and embed it in an identity, either when you create the identity or later.

Inline policies are useful if you want to maintain a strict one-to-one relationship between a policy and the identity that it’s applied to. For example, you want to be sure that the permissions in a policy are not inadvertently assigned to an identity other than the one they’re intended for. When you use an inline policy, the permissions in the policy cannot be inadvertently attached to the wrong identity.

Create a Role in Production account, that defines the Development account as a trusted entity and specify a permissions policy that allows trusted users to update the bucket. Developers can use the newly created role to access the live S3 buckets in production environment - This option does not deny access to Testers, so it is not correct.

References:

https: //docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_cross-account-with-roles.html
https: //docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html#customer-managed-policies

40
Q

As a SysOps Administrator, you are writing a CloudFormation template in YAML. The template creates an EC2 instance and one RDS resource. Once your resources are created, you would like to output the connection endpoint for the RDS database.

Which intrinsic function returns the value needed?

A

Explanation
Correct option:

AWS CloudFormation provides several built-in functions that help you manage your stacks. Intrinsic functions are used in templates to assign values to properties that are not available until runtime.

!GetAtt - The Fn::GetAtt intrinsic function returns the value of an attribute from a resource in the template. For example, !GetAtt myELB.DNSName in YAML (or "Fn::GetAtt" : [ "myELB", "DNSName" ] in JSON) returns a string containing the DNS name of the load balancer with the logical name myELB. Similarly, for the RDS resource in this template, using Fn::GetAtt with the DB instance's logical name and the Endpoint.Address attribute returns the connection endpoint.

Incorrect options:

!Sub - The intrinsic function Fn::Sub substitutes variables in an input string with values that you specify. In your templates, you can use this function to construct commands or outputs that include values that aren’t available until you create or update a stack.

!Ref - The intrinsic function Ref returns the value of the specified parameter or resource.

!FindInMap - The intrinsic function Fn::FindInMap returns the value corresponding to keys in a two-level map that is declared in the Mappings section. For example, you can use this in the Mappings section that contains a single map, RegionMap, that associates AMIs with AWS regions.

References:

https: //docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference.html
https: //docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-getatt.html


41
Q

Multiple teams of an e-commerce company use the same AWS CloudFormation template to create stacks of resources needed by them. For the next deployment, the teams need to update the stacks and have been testing the changes through change sets. However, the teams suddenly realized that all their change sets have been lost. Unable to figure out the error, they have approached you.

As a SysOps Administrator, how will you identify the error and suggest a way to fix the issue?

A

Explanation
Correct option:

A change set was successfully executed and this resulted in the rest of the change sets being deleted by CloudFormation

Change sets allow you to preview how proposed changes to a stack might impact your existing resources, for example, whether your changes will delete or replace any critical resources. AWS CloudFormation makes the changes to your stack only when you decide to execute the change set, allowing you to decide whether to proceed with your proposed changes or explore other changes by creating another change set. You can create and manage change sets using the AWS CloudFormation console, AWS CLI, or AWS CloudFormation API.

After you execute a change, AWS CloudFormation removes all change sets that are associated with the stack because they aren’t applicable to the updated stack.

How to use change sets to update a stack: via - https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-changesets.html

Incorrect options:

An invalid change set was executed and this resulted in all stacks and change sets getting deleted - This option has been added as a distractor. An invalid change set won’t result in any resource changes as it won’t go through for provisioning.

The change set while being validated, surpassed the account limit of some AWS resource. Since the stacks cannot be updated when the account limit is reached, the change sets have been deleted by CloudFormation - Change sets don’t indicate whether AWS CloudFormation will successfully update a stack. For example, a change set doesn’t check if you will surpass an account limit, if you’re updating a resource that doesn’t support updates, or if you have insufficient permissions to modify a resource, all of which can cause a stack update to fail. If an update fails, AWS CloudFormation attempts to roll back your resources to their original state.

CloudFormation had issued a rollback on the change sets while validating them and deleted all the invalid sets - There is no rollback for change sets, since there is no real change. When they are applied on a stack and stack fails, the stack is rolled back to its previous state.

Reference:

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-changesets.html

42
Q

After configuring Amazon EC2 Auto Scaling, a systems administrator tried to launch the Auto Scaling Group. But the following launch failure message was displayed - Client.InternalError: Client error on launch.

What is the cause of this error and how can it be fixed?

A

Explanation
Correct option:

This error can be caused when an Auto Scaling group attempts to launch an instance that has an encrypted EBS volume, but the service-linked role does not have access to the customer-managed CMK used to encrypt it

Client.InternalError: Client error on launch error is caused when an Auto Scaling group attempts to launch an instance that has an encrypted EBS volume, but the service-linked role does not have access to the customer-managed CMK used to encrypt it. Additional setup is required to allow the Auto Scaling group to launch instances.

There are two possible scenarios: 1) the CMK and the Auto Scaling group are in the same AWS account, or 2) the CMK and the Auto Scaling group are in different AWS accounts.

Full instructions for configuring the above two scenarios: via - https://docs.aws.amazon.com/autoscaling/ec2/userguide/ts-as-instancelaunchfailure.html#ts-as-instancelaunchfailure-12

Incorrect options:

The security group specified in your launch configuration might have been deleted - This configuration will generate an error like so - “The security group does not exist. Launching EC2 instance failed.”

The block device mappings in your launch configuration might contain block device names that are not available or currently not supported - This configuration will generate an error like so - “Invalid device name upload. Launching EC2 instance failed.”

Your cluster placement group contains an invalid instance type - This configuration will generate an error like so - “Placement groups may not be used with instances of type ‘m1.large’. Launching EC2 instance failed.”

Reference:

https://docs.aws.amazon.com/autoscaling/ec2/userguide/ts-as-instancelaunchfailure.html#ts-as-instancelaunchfailure-12

43
Q

A large IT company manages several projects on AWS Cloud and has decided to use AWS X-Ray to trace application workflows. The company uses a plethora of AWS services like API Gateway, Amazon EC2 instances, Amazon S3 storage service, Elastic Load Balancers and AWS Lambda functions.

Which of the following should the company keep in mind while using AWS X-Ray for the AWS services they use?

A

Explanation
Correct option:

Application Load balancers do not send data to X-Ray - Elastic Load Balancing application load balancers add a trace ID to incoming HTTP requests in a header named X-Amzn-Trace-Id. Load balancers do not send data to X-Ray and do not appear as a node on your service map.

Incorrect options:

AWS X-Ray does not integrate with Amazon S3 and you need to use CloudTrail for tracking requests on S3 - AWS X-Ray integrates with Amazon S3 to trace upstream requests to update your application’s S3 buckets.

AWS X-Ray cannot be used to trace your AWS Lambda functions since they are not integrated - You can use AWS X-Ray to trace your AWS Lambda functions. Lambda runs the X-Ray daemon and records a segment with details about the function invocation and execution.

You cannot use X-Ray to trace or analyze user requests to your Amazon API Gateway APIs - You can use X-Ray to trace and analyze user requests as they travel through your Amazon API Gateway APIs to the underlying services. API Gateway supports X-Ray tracing for all API Gateway endpoint types: Regional, edge-optimized, and private. You can use X-Ray with Amazon API Gateway in all AWS Regions where X-Ray is available.

Reference:

https://docs.aws.amazon.com/xray/latest/devguide/xray-services-elb.html

44
Q

Your company has decided that certain users should have Multi-Factor Authentication (MFA) enabled for their sign-in credentials. A newly hired manager has a Gemalto MFA device that he used in his earlier company. He has approached you to configure it for his AWS account.

How will you configure his existing Gemalto MFA device so he can seamlessly connect with AWS services in the new company?

A

Explanation
Correct option:

AWS MFA does not support the use of your existing Gemalto device - AWS MFA relies on knowing a unique secret associated with your hardware MFA (Gemalto) device in order to support its use. Because of security constraints that mandate such secrets never be shared between multiple parties, AWS MFA cannot support the use of your existing Gemalto device. Only a compatible hardware MFA device purchased from Gemalto can be used with AWS MFA. You can re-use an existing U2F security key with AWS MFA, as U2F security keys do not share any secrets between multiple parties.
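
By contrast, a compatible hardware MFA device is associated with an IAM user by supplying its serial number and two consecutive authentication codes. A hedged boto3 sketch (the user name, serial number, and codes below are placeholders):

    import boto3

    iam = boto3.client("iam")

    # Associate a compatible hardware MFA device with an IAM user.
    iam.enable_mfa_device(
        UserName="new-manager",
        SerialNumber="GAHT12345678",      # serial number printed on the device
        AuthenticationCode1="123456",     # first code shown by the device
        AuthenticationCode2="654321",     # next consecutive code
    )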

Incorrect options:

You can re-use an existing Gemalto device with AWS MFA, as Gemalto devices do not share any secrets between multiple parties - As discussed above, you cannot re-use an existing Gemalto device with AWS MFA because secrets cannot be shared with multiple parties.

AWS MFA relies on knowing a unique secret associated with your hardware MFA. This has to be generated again with AWS MFA for the Gemalto device to work with AWS - As discussed above, an existing Gemalto device cannot be used with AWS MFA.

Security constraints mandate that sharing of secrets between multiple parties can only happen in edge cases. Hence, formal approval is needed between AWS and the previous company to use the same Gemalto device - This is a made-up option, given only as a distractor.

Reference:

https://aws.amazon.com/iam/faqs/

45
Q

45. A junior systems administrator has created read replicas for Amazon RDS for MySQL. The read replicas are consistently running into errors.

As a SysOps Administrator, which of the following items would you suggest while troubleshooting read replica errors? (Select two)

A

Explanation
Correct option:

Writing to tables on a read replica can break the replication - If you’re writing to tables on the read replica, it can break replication.

If the value for the max_allowed_packet parameter for a read replica is less than the max_allowed_packet parameter for the source DB instance, replica errors occur - The max_allowed_packet parameter is a custom parameter that you can set in a DB parameter group. The max_allowed_packet parameter is used to specify the maximum size of data manipulation language (DML) that can be run on the database. If the max_allowed_packet value for the source DB instance is larger than the max_allowed_packet value for the read replica, the replication process can throw an error and stop replication.

Diagnosing MySQL read replication failure: via - https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/CHAP_Troubleshooting.html#CHAP_Troubleshooting.MySQL.ReplicaLag
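
If the mismatch is in max_allowed_packet, one way to fix it is to raise the value in the custom DB parameter group attached to the read replica so that it is at least as large as the source's value. A hedged boto3 sketch (the parameter group name and value are placeholders):

    import boto3

    rds = boto3.client("rds")

    # Align max_allowed_packet on the replica's parameter group with the source.
    rds.modify_db_parameter_group(
        DBParameterGroupName="replica-mysql-params",
        Parameters=[{
            "ParameterName": "max_allowed_packet",
            "ParameterValue": str(64 * 1024 * 1024),  # e.g. 64 MiB, matching the source
            "ApplyMethod": "immediate",
        }],
    )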

Incorrect options:

To safely write to tables on a read replica, create indexes on the table after setting the read_only parameter to 0 - As discussed above, writing to tables on a read replica breaks the replication process. Setting the read_only parameter to 0 will not help.

Though read replicas can work on both transactional and nontransactional storage engines, nontransactional engines are error-prone because of the way memory is managed on these engines - Read replicas can only work on a transactional storage engine. Using a nontransactional storage engine such as MyISAM can break the replication process.

Statements containing non-deterministic functions like SYSDATE() should be predefined in the configuration to successfully create the read replica - This statement is incorrect. Using unsafe nondeterministic queries such as SLEEP(), SYSDATE(), or SYSTEM_USER() can break replication. There is no option to predefine such functions in the configuration.

References:

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/CHAP_Troubleshooting.html#CHAP_Troubleshooting.MySQL.ReplicaLag
https://dev.mysql.com/doc/refman/8.0/en/replication-rbr-safe-unsafe.html
https://aws.amazon.com/premiumsupport/knowledge-center/rds-read-replica/