Practice Exam 1 Flashcards
- A development team has configured its AWS VPC with one public and one private subnet. The public subnet has an Amazon EC2 instance that hosts the application. The private subnet has the RDS database that the application needs to communicate with.
Which of the following would you identify as the correct way to configure a solution for the given requirement?
- Subnets inside a VPC can communicate with each other without the need for any further configuration. Hence, no additional configurations are needed
- Configure a VPC peering for enabling communication between the subnets
- Elastic IP can be configured to initiate communication between private and public subnets
- Create a Security Group that allows connection from different subnets inside a VPC
Subnets inside a VPC can communicate with each other without the need for any further configuration. Hence, no additional configurations are needed - Subnets inside a VPC can communicate with each other without any additional configurations.
A virtual private cloud (VPC) is a virtual network dedicated to your AWS account. It is logically isolated from other virtual networks in the AWS Cloud. A subnet is a range of IP addresses in your VPC. You can launch AWS resources into a specified subnet. Use a public subnet for resources that must be connected to the internet, and a private subnet for resources that won’t be connected to the internet.
A route table contains a set of rules, called routes, that are used to determine where network traffic from your VPC is directed. You can explicitly associate a subnet with a particular route table. Otherwise, the subnet is implicitly associated with the main route table.
The first entry in the Main route table is the default entry for local routing in the VPC; this entry enables the instances (potentially belonging to different subnets) in the VPC to communicate with each other.
via - https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Scenario2.html
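As an illustration only, the minimal boto3 sketch below (Python is used purely for illustration; the VPC ID is a placeholder) lists the routes of a VPC's route tables so you can see the implicit local route that lets subnets communicate:

```python
import boto3

ec2 = boto3.client("ec2")

# List routes for every route table in the VPC; the "local" route is what lets
# subnets in the same VPC communicate without any extra configuration.
response = ec2.describe_route_tables(
    Filters=[{"Name": "vpc-id", "Values": ["vpc-0123456789abcdef0"]}]  # placeholder VPC ID
)
for table in response["RouteTables"]:
    for route in table["Routes"]:
        print(table["RouteTableId"], route.get("DestinationCidrBlock"), route.get("GatewayId"))
```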
Incorrect options:
Elastic IP can be configured to initiate communication between private and public subnets - An Elastic IP address is a reserved public IP address that you can assign to any EC2 instance in a particular region until you choose to release it. Elastic IP is not needed for resources to talk across subnets in the same VPC.
Configure a VPC peering for enabling communication between the subnets - A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IPv4 addresses or IPv6 addresses. It is not needed for resources inside the same VPC.
Create a Security Group that allows connection from different subnets inside a VPC - A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. Security groups act at the instance level, not the subnet level.
References:
https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Scenario2.html
https://docs.aws.amazon.com/vpc/latest/userguide/how-it-works.html#what-is-route-tables
https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html
- As a SysOps Administrator, you have been contacted by a team to troubleshoot a security issue. A security check red flag is being raised for the security groups created by AWS Directory Service. The flag message says “Security Groups - Unrestricted Access.”
How will you troubleshoot this issue?
- Ignore or suppress the red flag since it is safe to do so, in this scenario
- AWS Directory Service might have been initiated from an account that does not have proper permissions. Check the permissions on the IAM roles and IAM users used to initiate the service
- Use AWS Trusted Advisor to know the exact reason for this error and take action as recommended by the Trusted Advisor
- The security group configurations have to be checked and edited to cater to AWS security standards
Explanation
Correct option:
Ignore or suppress the red flag since it is safe to do so, in this scenario - AWS Directory Service is a managed service that automatically creates an AWS security group in your VPC with network rules for traffic in and out of AWS managed domain controllers. The default inbound rules allow traffic from any source (0.0.0.0/0) to ports required by Active Directory. These rules do not introduce security vulnerabilities, as traffic to the domain controllers is limited to traffic from your VPC, other peered VPCs, or networks connected using AWS Direct Connect, AWS Transit Gateway, or Virtual Private Network.
In addition, the ENIs that the security group is attached to do not and cannot have Elastic IPs attached to them, limiting inbound traffic to local VPC and VPC-routed traffic.
Incorrect options:
The security group configurations have to be checked and edited to cater to AWS security standards
Use AWS Trusted Advisor to know the exact reason for this error and take action as recommended by the Trusted Advisor
AWS Directory Service might have been initiated from an account that does not have proper permissions. Check the permissions on the IAM roles and IAM users used to initiate the service
These three options contradict the explanation provided above, so these options are incorrect.
Reference:
https://aws.amazon.com/premiumsupport/faqs/
- As part of the ongoing system maintenance, a SysOps Administrator has decided to increase the storage capacity of an EBS volume that is attached to an Amazon EC2 instance. However, the increased size is not reflected in the file system.
What has gone wrong in the configuration and how can it be fixed?
- After you increase the size of an EBS volume, you must extend the file system to a larger size
- EBS volume needs to be detached and attached back again to the instance for the modifications to show
- EBS volume might be encrypted. Encrypted EBS volumes will not show modifications done when still attached to the instance. Detach the EBS volume and attach it back
- Linux servers automatically pick the modifications done to EBS volumes, but Windows servers do not offer this feature. Use the Windows Disk Management utility to increase the disk size to the new modified volume size
Explanation
Correct option:
After you increase the size of an EBS volume, you must extend the file system to a larger size - After you increase the size of an EBS volume, you must use the file-system specific commands to extend the file system to the larger size. You can resize the file system as soon as the volume enters the optimizing state.
The process for extending a file system on Linux is as follows:
Your EBS volume might have a partition that contains the file system and data. Increasing the size of a volume does not increase the size of the partition. Before you extend the file system on a resized volume, check whether the volume has a partition that must be extended to the new size of the volume.
Use a file system-specific command to resize each file system to the new volume capacity.
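As a rough sketch of the API side of this (the volume ID and target size are placeholders), the volume can be resized and polled until the modification reaches the optimizing state; the OS-level partition and file-system steps above still follow:

```python
import time
import boto3

ec2 = boto3.client("ec2")
volume_id = "vol-0123456789abcdef0"  # placeholder volume ID

# Request the size increase (new size in GiB is an example value).
ec2.modify_volume(VolumeId=volume_id, Size=200)

# Wait until the modification reaches the optimizing (or completed) state;
# only then extend the partition and file system from the operating system.
while True:
    mods = ec2.describe_volumes_modifications(VolumeIds=[volume_id])["VolumesModifications"]
    if mods and mods[0]["ModificationState"] in ("optimizing", "completed"):
        break
    time.sleep(15)
```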
Incorrect options:
EBS volume needs to be detached and attached back again to the instance for the modifications to show - This is incorrect and has been added as a distractor.
EBS volume might be encrypted. Encrypted EBS volumes will not show modifications done when still attached to the instance. Detach the EBS volume and attach it back - EBS volume encryption has no bearing on the given scenario.
Linux servers automatically pick the modifications done to EBS volumes, but Windows servers do not offer this feature. Use the Windows Disk Management utility to increase the disk size to the new modified volume size - As discussed above, you need to manually extend the file system after increasing the size of the EBS volume.
On the Windows file system, after you increase the size of an EBS volume, use the Windows Disk Management utility or PowerShell to extend the disk size to the new size of the volume. You can begin resizing the file system as soon as the volume enters the optimizing state.
References:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/recognize-expanded-volume-linux.html
https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/recognize-expanded-volume-windows.html
- A systems administrator is configuring an Amazon EC2 status check alarm to publish a notification to an SNS topic when the instance fails either the instance status check or the system status check.
Which CloudWatch metric is the right choice for this configuration?
StatusCheckFailed
CombinedStatusCheckFailed
StatusCheckFailed_Instance
StatusCheckFailed_System
Explanation
Correct option:
StatusCheckFailed - The AWS/EC2 namespace includes a few status check metrics. By default, status check metrics are available at a 1-minute frequency at no charge. For a newly-launched instance, status check metric data is only available after the instance has completed the initialization state (within a few minutes of the instance entering the running state).
StatusCheckFailed - Reports whether the instance has passed both the instance status check and the system status check in the last minute. This metric can be either 0 (passed) or 1 (failed). By default, this metric is available at a 1-minute frequency at no charge.
List of EC2 status check metrics: via - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/viewing_metrics_with_cloudwatch.html#status-check-metrics
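A minimal boto3 sketch of such an alarm, assuming a placeholder instance ID, alarm name, and SNS topic ARN:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm on the combined status check metric and notify an SNS topic when it fails.
cloudwatch.put_metric_alarm(
    AlarmName="ec2-status-check-failed",  # hypothetical alarm name
    Namespace="AWS/EC2",
    MetricName="StatusCheckFailed",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=2,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # placeholder topic ARN
)
```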
Incorrect options:
CombinedStatusCheckFailed - This is a made-up option, given only as a distractor.
StatusCheckFailed_Instance - Reports whether the instance has passed the instance status check in the last minute. This metric can be either 0 (passed) or 1 (failed).
StatusCheckFailed_System - Reports whether the instance has passed the system status check in the last minute. This metric can be either 0 (passed) or 1 (failed).
Reference:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/viewing_metrics_with_cloudwatch.html#status-check-metrics
- An e-commerce company uses AWS Elastic Beanstalk to create test environments comprising an Amazon EC2 instance and an RDS instance whenever a new product or line-of-service is launched. The company is currently testing one such environment but wants to decouple the database from the environment to run some analysis and reports later in another environment. Since testing is in progress for a high-stakes product, the company wants to avoid downtime and database sync issues.
As a SysOps Administrator, which solution will you recommend to the company?
Use an Elastic Beanstalk blue (environment A)/green (environment B) deployment to decouple the RDS DB instance from environment A. Create a new Elastic Beanstalk environment (environment B) with the necessary information to connect to the decoupled RDS DB instance
Since it is a test environment, take a snapshot of the database and terminate the current environment. Create a new one without attaching an RDS instance directly to it (from the snapshot)
Use an Elastic Beanstalk Immutable deployment to make the entire architecture completely reliable. You can terminate the first environment whenever you are confident of the second environment working correctly
Decoupling an RDS instance that is part of a running Elastic Beanstalk environment is not currently supported by AWS. You will need to terminate the current environment after taking the snapshot of the database and create a new one with RDS configured outside the environment
Explanation
Correct option:
Use an Elastic Beanstalk blue (environment A)/green (environment B) deployment to decouple the RDS DB instance from environment A. Create a new Elastic Beanstalk environment (environment B) with the necessary information to connect to the decouple RDS DB instance - Attaching an RDS DB instance to an Elastic Beanstalk environment is ideal for development and testing environments. However, it’s not recommended for production environments because the lifecycle of the database instance is tied to the lifecycle of your application environment. If you terminate the environment, then you lose your data because the RDS DB instance is deleted by the environment.
Since the current use case mentions not having downtime on the database, we can follow these steps for resolution: 1. Use an Elastic Beanstalk blue (environment A)/green (environment B) deployment to decouple an RDS DB instance from environment A. Create an RDS DB snapshot and enable deletion protection on the DB instance to safeguard your RDS DB instance from deletion. 2. Create a new Elastic Beanstalk environment (environment B) with the necessary information to connect to the RDS DB instance. Your new Elastic Beanstalk environment (environment B) must not include an RDS DB instance in the same Elastic Beanstalk application.
Step-by-step instructions to configure the above solution: via - https://aws.amazon.com/premiumsupport/knowledge-center/decouple-rds-from-beanstalk/
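Step 1 above (snapshot plus deletion protection) could look roughly like the boto3 sketch below; the DB instance and snapshot identifiers are placeholders:

```python
import boto3

rds = boto3.client("rds")

# Safeguard the database before decoupling it from environment A.
rds.modify_db_instance(
    DBInstanceIdentifier="beanstalk-test-db",   # placeholder identifier
    DeletionProtection=True,
    ApplyImmediately=True,
)
rds.create_db_snapshot(
    DBSnapshotIdentifier="beanstalk-test-db-pre-decouple",  # placeholder identifier
    DBInstanceIdentifier="beanstalk-test-db",
)
```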
Incorrect options:
Since it is a test environment, take a snapshot of the database and terminate the current environment. Create a new one without attaching an RDS instance directly to it (from the snapshot) - It is mentioned in the problem statement that the company is looking at a solution with no downtime. Hence, this is an incorrect option.
Use an Elastic Beanstalk Immutable deployment to make the entire architecture completely reliable. You can terminate the first environment whenever you are confident of the second environment working correctly - Immutable deployments perform an immutable update to launch a full set of new instances running the new version of the application in a separate Auto Scaling group, alongside the instances running the old version. Immutable deployments can prevent issues caused by partially completed rolling deployments. If the new instances don’t pass health checks, Elastic Beanstalk terminates them, leaving the original instances untouched. This solution is overkill for the test environment, even if the company is looking at a no-downtime option.
Decoupling an RDS instance that is part of a running Elastic Beanstalk environment is not currently supported by AWS. You will need to terminate the current environment after taking the snapshot of the database and create a new one with RDS configured outside the environment - This is a made-up option and given only as a distractor.
References:
https://aws.amazon.com/premiumsupport/knowledge-center/decouple-rds-from-beanstalk/
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.rolling-version-deploy.html
As a SysOps Administrator, you have been asked to fix the network performance issues for a fleet of Amazon EC2 instances of a company.
Which of the following use-cases represents the right fit for using enhanced networking?
To support throughput near or exceeding 20K packets per second (PPS) on the VIF driver
To reach speeds up to 2,500 Gbps between EC2 instances
To configure multi-attach for an EBS volume that can be attached to a maximum of 16 EC2 instances in a single Availability Zone
To configure Direct Connect to reach speeds up to 25 Gbps between EC2 instances
Explanation
Correct option:
To support throughput near or exceeding 20K packets per second (PPS) on the VIF driver - Enhanced networking uses single root I/O virtualization (SR-IOV) to provide high-performance networking capabilities on supported instance types. SR-IOV is a method of device virtualization that provides higher I/O performance and lower CPU utilization when compared to traditional virtualized network interfaces. Enhanced networking provides higher bandwidth, higher packet per second (PPS) performance, and consistently lower inter-instance latencies. There is no additional charge for using enhanced networking.
Consider using enhanced networking for the following scenarios:
If your packets-per-second rate reaches its ceiling, you’ve likely reached the upper thresholds of the virtual network interface driver; consider moving to enhanced networking.
If your throughput is near or exceeding 20K packets per second (PPS) on the VIF driver, it’s a best practice to use enhanced networking.
All current generation instance types support enhanced networking, except for T2 instances.
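As an illustrative check only (the instance ID is a placeholder; enabling ENA on an existing instance generally requires it to be stopped and to use an ENA-capable AMI and instance type):

```python
import boto3

ec2 = boto3.client("ec2")
instance_id = "i-0123456789abcdef0"  # placeholder instance ID

# Check whether ENA-based enhanced networking is already enabled.
attr = ec2.describe_instance_attribute(InstanceId=instance_id, Attribute="enaSupport")
print("ENA enabled:", attr.get("EnaSupport", {}).get("Value", False))

# Enable it if needed (stop the instance first).
ec2.modify_instance_attribute(InstanceId=instance_id, EnaSupport={"Value": True})
```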
Incorrect options:
To reach speeds up to 2,500 Gbps between EC2 instances - If you need to reach speeds up to 25 Gbps between instances, launch instances in a cluster placement group along with ENA compatible instances. If you need to reach speeds up to 10 Gbps between instances, launch your instances into a cluster placement group with the enhanced networking instance type. This option has been added as a distractor, as it is not possible to support speeds up to 2,500 Gbps between EC2 instances.
To configure multi-attach for an EBS volume that can be attached to a maximum of 16 EC2 instances in a single Availability Zone - An EBS (io1 or io2) volume, when configured with the new Multi-Attach option, can be attached to a maximum of 16 EC2 instances in a single Availability Zone. Additionally, each Nitro-based EC2 instance can support the attachment of multiple Multi-Attach enabled EBS volumes. Multi-Attach capability makes it easier to achieve higher availability for applications that provide write-ordering to maintain storage consistency. You do not need to use enhanced networking to configure this option.
To configure Direct Connect to reach speeds up to 25 Gbps between EC2 instances - AWS Direct Connect is a networking service that provides an alternative to using the internet to connect your on-premises resources to AWS Cloud. In many circumstances, private network connections can reduce costs, increase bandwidth, and provide a more consistent network experience than internet-based connections. You cannot use enhanced networking to configure Direct Connect to reach speeds up to 25 Gbps between EC2 instances.
References:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html
https://aws.amazon.com/premiumsupport/knowledge-center/enable-configure-enhanced-networking/
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volumes-multi.html
A multi-national company extensively uses AWS CloudFormation to model and provision its AWS resources. A human error had earlier deleted a critical service from the CloudFormation stack that resulted in business loss. The company is looking at a quick and effective solution to lock the critical resources from any updates or deletes.
As a SysOps Administrator, what will you suggest to address this requirement?
Explanation
Correct option:
Use Stack policies to protect critical stack resources from unintentional updates
Stack policies help protect critical stack resources from unintentional updates that could cause resources to be interrupted or even replaced. A stack policy is a JSON document that describes what update actions can be performed on designated resources. Specify a stack policy whenever you create a stack that has critical resources.
During a stack update, you must explicitly specify the protected resources that you want to update; otherwise, no changes are made to protected resources.
Example Stack policy: via - https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/protect-stack-resources.html
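A sketch of applying such a policy with boto3; the stack name and logical resource ID are placeholders:

```python
import json
import boto3

cfn = boto3.client("cloudformation")

# Allow all updates except Replace/Delete on one critical resource.
stack_policy = {
    "Statement": [
        {"Effect": "Allow", "Action": "Update:*", "Principal": "*", "Resource": "*"},
        {
            "Effect": "Deny",
            "Action": ["Update:Replace", "Update:Delete"],
            "Principal": "*",
            "Resource": "LogicalResourceId/ProductionDatabase",  # placeholder logical ID
        },
    ]
}

cfn.set_stack_policy(StackName="critical-stack", StackPolicyBody=json.dumps(stack_policy))
```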
Incorrect options:
Use nested stacks that will retain the configuration in the parent configuration even if the child configuration is lost or cannot be used - Nested stacks are stacks that create other stacks. As your infrastructure grows, common patterns can emerge in which you declare the same components in each of your templates. You can separate these common components and create dedicated templates for them. Nested stacks make it easier to manage resources, but they do not protect those resources from updates or deletion.
Use revision controls to protect critical stack resources from unintentional updates - Your stack templates describe the configuration of your AWS resources, such as their property values. To review changes and to keep an accurate history of your resources, use code reviews and revision controls. Although it’s a useful feature, it is not relevant for the current scenario.
Use parameter constraints to specify the Identities that can update the Stack - With constraints, you can describe allowed input values so that AWS CloudFormation catches any invalid values before creating a stack. You can set constraints such as a minimum length, maximum length, and allowed patterns. However, you cannot protect resources from deletion.
Reference:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/best-practices.html#nested
A video streaming app uses Amazon Kinesis Data Streams for streaming data. The systems administration team needs to be informed of the shard capacity when it is reaching its limits.
How will you configure this requirement?
Explanation
Correct option:
Monitor Trusted Advisor service check results with Amazon CloudWatch Events - AWS Trusted Advisor checks for service usage that is more than 80% of the service limit.
A partial list of Trusted Advisor service limit checks: via - https://aws.amazon.com/premiumsupport/technology/trusted-advisor/best-practice-checklist/
You can use Amazon CloudWatch Events to detect and react to changes in the status of Trusted Advisor checks. Then, based on the rules that you create, CloudWatch Events invokes one or more target actions when a status check changes to the value you specify in a rule. Depending on the type of status change, you might want to send notifications, capture status information, take corrective action, initiate events, or take other actions.
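A hedged boto3 sketch of such a rule follows; the event pattern fields mirror the documented Trusted Advisor event format but should be verified against your account's events, Trusted Advisor events are emitted in us-east-1, and the rule name and topic ARN are placeholders:

```python
import json
import boto3

events = boto3.client("events", region_name="us-east-1")

pattern = {
    "source": ["aws.trustedadvisor"],
    "detail-type": ["Trusted Advisor Check Item Refresh Notification"],
    "detail": {"check-name": ["Service Limits"], "status": ["WARN", "ERROR"]},
}

events.put_rule(Name="trusted-advisor-service-limits", EventPattern=json.dumps(pattern))
events.put_targets(
    Rule="trusted-advisor-service-limits",
    Targets=[{"Id": "notify-ops", "Arn": "arn:aws:sns:us-east-1:123456789012:ops-alerts"}],  # placeholder
)
```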
Incorrect options:
Configure Amazon CloudWatch Events to pick data from Amazon Inspector - Amazon Inspector is an automated security assessment service that helps you test the network accessibility of your Amazon EC2 instances and the security state of your applications running on the instances. Not the right service for the given requirement.
Use CloudWatch ServiceLens to monitor data on service limits of various AWS services - CloudWatch ServiceLens enhances the observability of your services and applications by enabling you to integrate traces, metrics, logs, and alarms into one place. So, ServiceLens can be used once we define the alarms in CloudWatch, not without it.
Configure Amazon CloudTrail to generate logs for the service limits. CloudTrail and CloudWatch are integrated and hence alarm can be generated for customized service checks - AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail however, does not monitor service limits.
References:
https://docs.aws.amazon.com/awssupport/latest/user/cloudwatch-events-ta.html
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/ServiceLens.html
https://aws.amazon.com/premiumsupport/technology/trusted-advisor/best-practice-checklist/
A developer has created rules for different events on Amazon EventBridge with AWS Lambda function as a target. The developer has also created an IAM Role with the necessary permissions and associated it with the rule. The rule however is failing, and on initial analysis, it is clear that the IAM Role associated with the rule is not being used when calling the Lambda function.
What could have gone wrong with the configuration and how can you fix the issue?
Explanation
Correct option:
For Lambda functions configured as a target to EventBridge, you need to provide resource-based policy. IAM Roles will not work - IAM roles for rules are only used for events related to Kinesis Streams. For Lambda functions and Amazon SNS topics, you need to provide resource-based permissions.
When a rule is triggered in EventBridge, all the targets associated with the rule are invoked. Invocation means invoking the AWS Lambda functions, publishing to the Amazon SNS topics, and relaying the event to the Kinesis streams. In order to be able to make API calls against the resources you own, EventBridge needs the appropriate permissions. For Lambda, Amazon SNS, Amazon SQS, and Amazon CloudWatch Logs resources, EventBridge relies on resource-based policies. For Kinesis streams, EventBridge relies on IAM roles.
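A minimal boto3 sketch of granting that resource-based permission; the function name and rule ARN are placeholders:

```python
import boto3

lam = boto3.client("lambda")

# Allow the EventBridge rule to invoke the target function.
lam.add_permission(
    FunctionName="video-events-handler",          # placeholder function name
    StatementId="allow-eventbridge-rule",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn="arn:aws:events:us-east-1:123456789012:rule/my-event-rule",  # placeholder rule ARN
)
```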
Incorrect options:
The IAM Role is wrongly configured. Delete the existing Role and recreate with necessary permissions and associate the newly created Role with the EventBridge rule - This option has been added as a distractor.
For Lambda, EventBridge relies on Access Control Lists (ACLs) to define permissions. IAM Roles will not work for Lambda when configured as a target for an EventBridge rule - Access Control Lists are not used with EventBridge and ACLs are defined at the account level and not at the individual user level.
AWS Command Line Interface (CLI) should not be used to add permissions to EventBridge targets - This statement is incorrect. AWS CLI can be used to add permissions to targets for EventBridge rules.
References:
https://docs.aws.amazon.com/eventbridge/latest/userguide/resource-based-policies-eventbridge.html
https://docs.aws.amazon.com/eventbridge/latest/userguide/eventbridge-troubleshooting.html
An IT services company runs its technology infrastructure on AWS Cloud. The company runs audits for all the development and testing teams against the standards set by the organization. During a recent audit, the company realized that most of the patch compliance standards are not being followed by the teams. The teams have however tagged all their AWS resources as per the guidelines.
As a SysOps Administrator, which of the following would you recommend as an easy way of fixing the issue as quickly as possible?
Explanation
Correct option:
Use AWS Systems Manager Patch Manager to automate the process of patching managed instances
AWS Systems Manager Patch Manager automates the process of patching managed instances with both security-related and other types of updates. You can use Patch Manager to apply patches for both operating systems and applications. You can use Patch Manager to install Service Packs on Windows instances and perform minor version upgrades on Linux instances. You can patch fleets of EC2 instances or your on-premises servers and virtual machines (VMs) by operating system type.
Patch Manager uses patch baselines, which include rules for auto-approving patches within days of their release, as well as a list of approved and rejected patches. You can install patches on a regular basis by scheduling patching to run as a Systems Manager maintenance window task. You can also install patches individually or to large groups of instances by using Amazon EC2 tags. (Tags are keys that help identify and sort your resources within your organization.) You can add tags to your patch baselines themselves when you create or update them.
Patch Manager provides options to scan your instances and report compliance on a schedule, install available patches on a schedule, and patch or scan instances on demand whenever you need to.
Patch Manager integrates with AWS Identity and Access Management (IAM), AWS CloudTrail, and Amazon EventBridge to provide a secure patching experience that includes event notifications and the ability to audit usage.
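For example, patching a tagged group of managed instances on demand might look roughly like this sketch (the tag value is a placeholder, and the instances must already be managed by Systems Manager):

```python
import boto3

ssm = boto3.client("ssm")

# Run the AWS-RunPatchBaseline document against instances in a given Patch Group tag.
ssm.send_command(
    DocumentName="AWS-RunPatchBaseline",
    Targets=[{"Key": "tag:Patch Group", "Values": ["dev-servers"]}],  # placeholder tag value
    Parameters={"Operation": ["Install"]},  # use "Scan" to report compliance only
)
```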
Incorrect options:
Use Amazon Inspector to automate the process of patching instances that helps improve the security and compliance of the instances - Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. Amazon Inspector automatically assesses applications for exposure, vulnerabilities, and deviations from best practices. Inspector is not a patch management service.
Use Amazon Patch Manager to automate the process of patching instances - This is a made-up option and given only as a distractor.
Use AWS Systems Manager Automation to simplify the patch application process across all instances - Systems Manager Automation simplifies common maintenance and deployment tasks of EC2 instances and other AWS resources. Automation enables you to do the following: Build Automation workflows to configure and manage instances and AWS resources, Create custom workflows or use pre-defined workflows maintained by AWS, Receive notifications about Automation tasks and workflows by using Amazon EventBridge, Monitor Automation progress and execution details by using the Amazon EC2 or the AWS Systems Manager console. Systems Manager Automation, however, does not include patch management.
References:
https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-patch.html
https://aws.amazon.com/inspector/
https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-automation.html
As a SysOps Administrator, you create and maintain various system configurations for the teams you work with. You have created a CloudFront distribution with origin as an Amazon S3 bucket. The configuration has worked fine so far. However, for a few hours now, an error similar to this has cropped up - The authorization header is malformed; the region ‘’ is wrong; expecting ‘’.
What is the reason for this error and how will you fix it?
Explanation
Correct option:
This error indicates the configured Amazon S3 bucket has been moved from one AWS Region to the other. That is, deleted from one AWS Region and created with the same name in another. To fix this error, update your CloudFront distribution so that it finds the S3 bucket in the bucket’s current AWS Region - If CloudFront requests an object from your origin, and the origin returns an HTTP 4xx or 5xx status code, there’s a problem with communication between CloudFront and your origin.
Your CloudFront distribution might send error responses with HTTP status code 400 Bad Request, and a message similar to the following: The authorization header is malformed; the region ‘’ is wrong; expecting ‘’.
This problem can occur in the following scenario: 1)Your CloudFront distribution’s origin is an Amazon S3 bucket, 2)You moved the S3 bucket from one AWS Region to another. That is, you deleted the S3 bucket, then later you created a new bucket with the same bucket name, but in a different AWS Region than where the original S3 bucket was located.
To fix this error, update your CloudFront distribution so that it finds the S3 bucket in the bucket’s current AWS Region.
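A rough boto3 sketch of that fix, assuming a placeholder distribution ID and a placeholder bucket/Region; in practice, review the full distribution configuration before updating it:

```python
import boto3

cloudfront = boto3.client("cloudfront")
dist_id = "E1ABCDEFGHIJKL"  # placeholder distribution ID

resp = cloudfront.get_distribution_config(Id=dist_id)
config, etag = resp["DistributionConfig"], resp["ETag"]

# Point the S3 origin at the bucket's current regional endpoint (placeholder values).
config["Origins"]["Items"][0]["DomainName"] = "my-bucket.s3.eu-west-1.amazonaws.com"

cloudfront.update_distribution(Id=dist_id, DistributionConfig=config, IfMatch=etag)
```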
Incorrect options:
This error indicates that the CloudFront distribution and Amazon S3 are not in the same AWS Region. Move one resource so that both the CloudFront distribution and Amazon S3 are in the same AWS Region - Amazon CloudFront uses a global network of edge locations and regional edge caches for content delivery. You can configure CloudFront to serve content from particular Regions, but CloudFront is not Region-specific.
This error indicates that the API key used for authorization is from an AWS Region that is different from the Region that S3 bucket is created in - This is a made-up option, given only as a distractor.
This error indicates that when CloudFront forwarded a request to the origin, the origin didn’t respond before the request expired. This could be an access issue caused by a firewall or a Security Group not allowing access to CloudFront to access S3 resources - When CloudFront forwards a request to the origin, and the origin didn’t respond before the request expired, a Gateway Timeout error is generated.
Reference:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/http-400-bad-request.html
As a SysOps Administrator, you have been asked to calculate the total network usage for all the EC2 instances of a company and determine which instance used the most bandwidth within a date range.
Which Amazon CloudWatch metric(s) will help you get the needed data?
Explanation
Correct option:
NetworkIn and NetworkOut - You can determine which instance is causing high network usage using the Amazon CloudWatch NetworkIn and NetworkOut metrics. You can aggregate the data points from these metrics to calculate the network usage for your instance.
NetworkIn - The number of bytes received by the instance on all network interfaces. This metric identifies the volume of incoming network traffic to a single instance.
The number reported is the number of bytes received during the period. If you are using basic (five-minute) monitoring and the statistic is Sum, you can divide this number by 300 to find Bytes/second. If you have detailed (one-minute) monitoring and the statistic is Sum, divide it by 60. Units of this metric are Bytes.
NetworkOut - The number of bytes sent out by the instance on all network interfaces. This metric identifies the volume of outgoing network traffic from a single instance.
The number reported is the number of bytes sent during the period. If you are using basic (five-minute) monitoring and the statistic is Sum, you can divide this number by 300 to find Bytes/second. If you have detailed (one-minute) monitoring and the statistic is Sum, divide it by 60. Units of this metric are Bytes.
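A minimal sketch of aggregating these metrics per instance over a date range; the instance ID and dates are placeholders:

```python
from datetime import datetime, timezone
import boto3

cloudwatch = boto3.client("cloudwatch")

def total_bytes(metric_name, instance_id, start, end):
    """Sum NetworkIn or NetworkOut for one instance over a date range (hourly periods)."""
    resp = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName=metric_name,
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        StartTime=start,
        EndTime=end,
        Period=3600,
        Statistics=["Sum"],
        Unit="Bytes",
    )
    return sum(point["Sum"] for point in resp["Datapoints"])

instance = "i-0123456789abcdef0"                   # placeholder instance ID
start = datetime(2023, 1, 1, tzinfo=timezone.utc)  # example date range
end = datetime(2023, 1, 8, tzinfo=timezone.utc)
usage = total_bytes("NetworkIn", instance, start, end) + total_bytes("NetworkOut", instance, start, end)
print(f"{instance}: {usage} bytes")
```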
Incorrect options:
DataTransfer-Out-Bytes - DataTransfer-Out-Bytes metric is used in AWS Cost Explorer reports and is not useful for the current scenario.
DiskReadBytes and DiskWriteBytes - DiskReadBytes is the bytes read from all instance store volumes available to the instance. This metric is used to determine the volume of the data the application reads from the hard disk of the instance. This can be used to determine the speed of the application.
DiskWriteBytes is the bytes written to all instance store volumes available to the instance. This metric is used to determine the volume of the data the application writes onto the hard disk of the instance. This can be used to determine the speed of the application.
NetworkTotalBytes - This is a made-up option, given only as a distractor.
Reference:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/viewing_metrics_with_cloudwatch.html
An IT company runs its server infrastructure on Amazon EC2 instances configured in an Auto Scaling Group (ASG) fronted by an Elastic Load Balancer (ELB). For ease of deployment and flexibility in scaling, this AWS architecture is maintained via an Elastic Beanstalk environment. The Technology Lead of a project has requested to automate the replacement of unhealthy Amazon EC2 instances in the Elastic Beanstalk environment.
How will you configure a solution for this requirement?
Explanation
Correct option:
To automate the replacement of unhealthy EC2 instances, you must change the health check type of your instance’s Auto Scaling group from EC2 to ELB by using a configuration file of your Beanstalk environment
By default, the health check configuration of your Auto Scaling group is set as an EC2 type that performs a status check of EC2 instances. To automate the replacement of unhealthy EC2 instances, you must change the health check type of your instance’s Auto Scaling group from EC2 to ELB by using a configuration file.
The following are some important points to remember:
Status checks cover only an EC2 instance’s health, and not the health of your application, server, or any Docker containers running on the instance.
If your application crashes, the load balancer removes the unhealthy instances from its target. However, your Auto Scaling group doesn’t automatically replace the unhealthy instances marked by the load balancer.
By changing the health check type of your Auto Scaling group from EC2 to ELB, you enable the Auto Scaling group to automatically replace the unhealthy instances when the health check fails.
Complete list of steps to configure the above: via - https://aws.amazon.com/premiumsupport/knowledge-center/elastic-beanstalk-instance-automation/
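The configuration file itself is YAML placed under .ebextensions in the application source bundle; the small Python helper below simply writes one such file. The resource name AWSEBAutoScalingGroup and the property values mirror the knowledge-center article referenced above, so treat them as assumptions to validate in a test environment:

```python
from pathlib import Path

# Writes .ebextensions/autoscaling.config to switch the ASG health check type to ELB.
config = """\
Resources:
  AWSEBAutoScalingGroup:
    Type: "AWS::AutoScaling::AutoScalingGroup"
    Properties:
      HealthCheckType: ELB
      HealthCheckGracePeriod: 300
"""

path = Path(".ebextensions/autoscaling.config")
path.parent.mkdir(exist_ok=True)
path.write_text(config)
```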
Incorrect options:
To automate the replacement of unhealthy EC2 instances, you must change the health check type of your instance’s Auto Scaling group from ELB to EC2 by using a configuration file of your Beanstalk environment - As mentioned earlier, the health check type of your instance’s Auto Scaling group should be changed from EC2 to ELB.
Modify the Auto Scaling Group from Amazon EC2 console directly to change the health check type to ELB
Modify the Auto Scaling Group from Amazon EC2 console directly to change the health check type to EC2
You should configure your Amazon EC2 instances in an Elastic Beanstalk environment by using Elastic Beanstalk configuration files (.ebextensions). Configuration changes made to your Elastic Beanstalk environment won’t persist if you use the following configuration methods:
Configuring an Elastic Beanstalk resource directly from the console of a specific AWS service.
Installing a package, creating a file, or running a command directly from your Amazon EC2 instance.
Both these options contradict the above explanation and therefore these two options are incorrect.
Reference:
https://aws.amazon.com/premiumsupport/knowledge-center/elastic-beanstalk-configuration-files/
A junior developer created multiple stacks of resources in different AWS Regions per the CloudFormation template given to him. The development team soon started having issues with the created resources and their behavior. Initial checks have confirmed that some resources were created and some omitted, though the same template has been used. As a SysOps Administrator, you have been tasked to resolve these issues.
Which of the following could be the possible reason for this unexpected behavior?
Explanation
Correct option:
The CloudFormation template might have custom named IAM resources that are responsible for the unintended behavior - If your template contains custom named IAM resources, don’t create multiple stacks reusing the same template. IAM resources must be globally unique within your account. If you use the same template to create multiple stacks in different Regions, your stacks might share the same IAM resources, instead of each having a unique one. Shared resources among stacks can have unintended consequences from which you can’t recover. For example, if you delete or update shared IAM resources in one stack, you will unintentionally modify the resources of other stacks.
Incorrect options:
There might have been dependency errors that resulted in the stack not being created completely - Any error during stack creation rolls back the entire stack creation process and, as a result, none of the mentioned resources are created.
Insufficient IAM permissions can lead to issues. When you work with an AWS CloudFormation stack, you not only need permissions to use AWS CloudFormation, you must also have permission to use the underlying services that are described in your template - If permissions were an issue, the stack wouldn’t be created at all.
The CloudFormation template was created using use-once only option and is not supposed to be reused for creating other stacks - This is a made-up option and given only as a distractor.
Reference:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-iam-template.html
As a SysOps Administrator, you have created two configuration files for CloudWatch Agent configuration. The first configuration file collects a set of metrics and logs from all servers and the second configuration file collects metrics from certain applications. You have given the same name to both files but stored these files in different file paths.
What is the outcome when the CloudWatch Agent is started with the first configuration file and then the second configuration file is appended to it?
Explanation
Correct option:
The append command overwrites the information from the first configuration file instead of appending to it
You can set up the CloudWatch agent to use multiple configuration files. For example, you can use a common configuration file that collects a set of metrics and logs that you always want to collect from all servers in your infrastructure. You can then use additional configuration files that collect metrics from certain applications or in certain situations.
To set this up, first create the configuration files that you want to use. Any configuration files that will be used together on the same server must have different file names. You can store the configuration files on servers or in Parameter Store.
Start the CloudWatch agent using the fetch-config option and specify the first configuration file. To append the second configuration file to the running agent, use the same command but with the append-config option. All metrics and logs listed in either configuration file are collected.
Any configuration files appended to the configuration must have different file names from each other and from the initial configuration file. If you use append-config with a configuration file with the same file name as a configuration file that the agent is already using, the append command overwrites the information from the first configuration file instead of appending to it. This is true even if the two configuration files with the same file name are on different file paths.
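As a hedged sketch of the correct sequence on an EC2 Linux host (the agent control script path, the flags, and both file paths are assumptions to verify against your installation; note that the two file names differ, so the second configuration is appended rather than overwritten):

```python
import subprocess

# Default install path of the agent control script on Amazon Linux; adjust if different.
CTL = "/opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl"

# Start the agent with the first (common) configuration, then append the application-specific one.
subprocess.run([CTL, "-a", "fetch-config", "-m", "ec2", "-s",
                "-c", "file:/opt/configs/base-config.json"], check=True)   # placeholder path
subprocess.run([CTL, "-a", "append-config", "-m", "ec2", "-s",
                "-c", "file:/opt/configs/app-config.json"], check=True)    # placeholder path
```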
Incorrect options:
Second configuration file parameters are added to the Agent already running with the first configuration file parameters
Two different Agents are started with different configurations, collecting the metrics and logs listed in either of the configuration files
A CloudWatch Agent can have only one configuration file and all required parameters are defined in this file alone
These three options contradict the explanation provided above, so these options are incorrect.
Reference:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Agent-common-scenarios.html
A highly critical financial services application is being moved to AWS Cloud from the on-premises data center. The application uses a fleet of Amazon EC2 instances provisioned in different geographical areas. The Chief Technology Officer (CTO) of the company needs to understand the communication network used between instances at various locations when they interact using public IP addresses.
Which of the following options would you identify as correct? (Select two)
Explanation
Correct option:
Traffic between EC2 instances in different AWS Regions stays within the AWS network, if there is an Inter-Region VPC Peering connection between the VPCs where the two instances reside
Traffic between two EC2 instances in the same AWS Region stays within the AWS network, even when it goes over public IP addresses
When two instances communicate using public IP addresses, the following three scenarios are possible: 1. Traffic between two EC2 instances in the same AWS Region stays within the AWS network, even when it goes over public IP addresses.
2. Traffic between EC2 instances in different AWS Regions stays within the AWS network if there is an Inter-Region VPC Peering connection between the VPCs where the two instances reside.
3. Traffic between EC2 instances in different AWS Regions where there is no Inter-Region VPC Peering connection between the VPCs where these instances reside is not guaranteed to stay within the AWS network.
Incorrect options:
Traffic between two EC2 instances always stays within the AWS network, even when it goes over public IP addresses by using AWS Global Infrastructure
Traffic between EC2 instances in different AWS Regions where there is no Inter-Region VPC Peering connection between the VPCs where these instances reside will use edge locations to communicate without going over the internet
These two options contradict the explanation provided above, so both options are incorrect.
Direct Connect is the default way of communication where there is no Inter-Region VPC Peering connection between the VPCs. All traffic between instances will use Direct Connect and does not go over the internet - AWS Direct Connect is a network service that provides an alternative to using the Internet to utilize AWS cloud services. AWS Direct Connect enables customers to have low latency and private connections to AWS for workloads that require higher speed or lower latency than the internet. Direct Connect is a paid service and is available only if the customer opts for it.
Reference:
https://aws.amazon.com/vpc/faqs/
Consider this scenario - the primary instance of an Amazon Aurora cluster is unavailable because of an outage that has affected an entire AZ. The primary instance and all the reader instances are in the same AZ.
As a SysOps Administrator, what action will you take to get the database online?
Explanation
Correct option:
You must manually create one or more new DB instances in another AZ
Suppose that the primary instance in your cluster is unavailable because of an outage that affects an entire AZ. In this case, the way to bring a new primary instance online depends on whether your cluster uses a multi-AZ configuration. If the cluster contains any reader instances in other AZs, Aurora uses the failover mechanism to promote one of those reader instances to be the new primary instance. If your provisioned cluster only contains a single DB instance, or if the primary instance and all reader instances are in the same AZ, you must manually create one or more new DB instances in another AZ.
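Creating the replacement instance in a healthy AZ could look roughly like the sketch below; all identifiers, the engine, the instance class, and the AZ are placeholders:

```python
import boto3

rds = boto3.client("rds")

# Add a new DB instance to the existing Aurora cluster in an unaffected AZ.
rds.create_db_instance(
    DBInstanceIdentifier="aurora-cluster-instance-2",  # placeholder identifier
    DBClusterIdentifier="aurora-cluster",              # placeholder cluster name
    Engine="aurora-mysql",                             # must match the cluster's engine
    DBInstanceClass="db.r5.large",
    AvailabilityZone="us-east-1b",
)
```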
Incorrect options:
Aurora promotes an existing replica in another AZ to a new primary instance - The use case states that the primary instance and all the reader instances are in the same AZ. So, this is not possible.
Aurora automatically creates a new primary instance in the same AZ - If the primary instance in a DB cluster using single-master replication fails, Aurora automatically fails over to a new primary instance in one of two ways:
By promoting an existing Aurora Replica to the new primary instance
By creating a new primary instance
But, in this use case, the AZ itself has failed. So, creating a new primary in the same AZ is not possible.
For a cluster using single-master replication, Aurora can create up to 15 read-only Aurora Replicas to serve requests from users - Generally, an Aurora DB cluster can contain up to 15 Aurora Replicas. The Aurora Replicas can be distributed across the Availability Zones that a DB cluster spans within an AWS Region. But, this use case is a single AZ deployment with failure at the AZ level. So, this solution is not possible.
Reference:
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Replication.html
An automobile company manages its AWS resource creation and maintenance process through AWS CloudFormation. The company has successfully used CloudFormation so far, and wishes to continue using the service. However, while moving to CloudFormation, the company only moved critical resources and left out the other resources to be managed manually. To leverage the ease of creation and maintenance that CloudFormation offers, the company wants to move rest of the resources to CloudFormation.
Which of the following options is the recommended way to configure this requirement?
Explanation
Correct option:
You can bring an existing resource into AWS CloudFormation management using resource import
If you created an AWS resource outside of AWS CloudFormation management, you can bring this existing resource into AWS CloudFormation management using resource import. You can manage your resources using AWS CloudFormation regardless of where they were created without having to delete and re-create them as part of a stack.
During an import operation, you create a change set that imports your existing resources into a stack or creates a new stack from your existing resources. You provide the following during import.
A template that describes the entire stack, including both the original stack resources and the resources you’re importing. Each resource to import must have a DeletionPolicy attribute.
Identifiers for the resources to import. You provide two values to identify each target resource.
a) An identifier property. This is a resource property that can be used to identify each resource type. For example, an AWS::S3::Bucket resource can be identified using its BucketName.
b) An identifier value. This is the target resource’s actual property value. For example, the actual value for the BucketName property might be MyS3Bucket.
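A sketch of the import change set with boto3; the stack name, logical ID, bucket name, and template file are placeholders, and the template must describe the imported bucket with a DeletionPolicy attribute:

```python
from pathlib import Path
import boto3

cfn = boto3.client("cloudformation")

# Create an IMPORT change set that brings an existing bucket under the stack's management.
cfn.create_change_set(
    StackName="existing-stack",
    ChangeSetName="import-legacy-bucket",
    ChangeSetType="IMPORT",
    ResourcesToImport=[
        {
            "ResourceType": "AWS::S3::Bucket",
            "LogicalResourceId": "LegacyBucket",                       # placeholder logical ID
            "ResourceIdentifier": {"BucketName": "my-legacy-bucket"},  # placeholder bucket name
        }
    ],
    TemplateBody=Path("stack-with-legacy-bucket.yaml").read_text(),    # placeholder template
)
# Review the change set, then execute it once it reaches CREATE_COMPLETE.
cfn.execute_change_set(ChangeSetName="import-legacy-bucket", StackName="existing-stack")
```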
Incorrect options:
Use Parameters section of CloudFormation template to input the required resources - Parameters are a way to provide inputs to your AWS CloudFormation template. They are useful when you want to reuse your templates, and for inputs that cannot be determined ahead of time. However, parameters aren’t useful for importing resources into CloudFormation.
You can use Mappings part of CloudFormation template to input the needed resources - Mappings are fixed variables within your CloudFormation Template. They’re very handy to differentiate between different environments (dev vs prod), regions (AWS regions), AMI types, etc. They aren’t useful for importing resources into CloudFormation.
Drift detection is the mechanism by which you add resources to the stack of Cloudformation resources already created - Performing a drift detection operation on a stack determines whether the stack has drifted from its expected template configuration, and returns detailed information about the drift status of each resource in the stack that supports drift detection. It is not useful for importing resources into CloudFormation.
Reference:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/resource-import.html