Neal Davis - Practice Test 6 - Incorrect Flashcards
Question 1:
A legacy application is being migrated into AWS. The application has a large amount of data that is rarely accessed. When files are accessed they are retrieved sequentially. The application will be migrated onto an Amazon EC2 instance.
What is the LEAST expensive EBS volume type for this use case?
A. Provisioned IOPS SSD (io1)
B. Throughput Optimized HDD (st1)
C. Cold HDD (sc1)
D. General Purpose SSD (gp2)
Explanation
The Cold HDD (sc1) EBS volume type is the lowest-cost option that is suitable for this use case. The sc1 volume type is designed for infrequently accessed data and throughput-oriented workloads such as sequential data access.
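For illustration only, a minimal boto3 sketch of creating an sc1 volume for this kind of workload (the Availability Zone and size shown are hypothetical):

import boto3

ec2 = boto3.client("ec2")

# Create a Cold HDD (sc1) volume suited to infrequently accessed, sequential workloads
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",   # hypothetical AZ - must match the EC2 instance's AZ
    Size=500,                        # sc1 has a minimum volume size (currently 125 GiB)
    VolumeType="sc1",
)
print(volume["VolumeId"])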
CORRECT: “Cold HDD (sc1)” is the correct answer.
INCORRECT: “Provisioned IOPS SSD (io1)” is incorrect. This is the most expensive option and used for use cases that demand high IOPS.
INCORRECT: “General Purpose SSD (gp2)” is incorrect. This is a more expensive SSD volume type that is used for general use cases.
INCORRECT: “Throughput Optimized HDD (st1)” is incorrect. This volume type is also aimed at throughput-oriented use cases; however, it is higher cost than sc1 and better suited to frequently accessed data.
Question 4:
An application uses an Amazon RDS database and Amazon EC2 instances in a web tier. The web tier instances must not be directly accessible from the internet to improve security.
How can a Solutions Architect meet these requirements?
A. Launch the EC2 instances in a public subnet and use AWS WAF to protect the instances from internet-based attacks
B. Launch the EC2 Instances in a private subnet and create an Application Load Balancer in a public subnet
C. Launch the EC2 instances in a public subnet and create an Application Load Balancer in a public subnet
D. Launch the EC2 instances in a private subnet with a NAT gateway and update the route table
Explanation
To prevent direct connectivity to the EC2 instances from the internet, deploy the EC2 instances in a private subnet and place the load balancer in a public subnet. To configure this, you must attach a public subnet to the load balancer in the same Availability Zone as each private subnet that contains instances.
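As a rough sketch of this pattern using boto3 (the names, subnet, security group, VPC, and instance IDs are all hypothetical), the load balancer is created in public subnets while only instances in private subnets are registered as targets:

import boto3

elbv2 = boto3.client("elbv2")

# ALB in public subnets (one per AZ)
alb = elbv2.create_load_balancer(
    Name="web-alb",
    Subnets=["subnet-0aaa111bbb222ccc3", "subnet-0ddd444eee555fff6"],
    SecurityGroups=["sg-0123456789abcdef0"],
    Scheme="internet-facing",
    Type="application",
)["LoadBalancers"][0]

tg = elbv2.create_target_group(
    Name="web-tg", Protocol="HTTP", Port=80,
    VpcId="vpc-0123456789abcdef0", TargetType="instance",
)["TargetGroups"][0]

# Register instances that live in private subnets - they never need public IP addresses
elbv2.register_targets(
    TargetGroupArn=tg["TargetGroupArn"],
    Targets=[{"Id": "i-0aaa111bbb222ccc3"}, {"Id": "i-0ddd444eee555fff6"}],
)

elbv2.create_listener(
    LoadBalancerArn=alb["LoadBalancerArn"], Protocol="HTTP", Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
)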
CORRECT: “Launch the EC2 instances in a private subnet and create an Application Load Balancer in a public subnet” is the correct answer.
INCORRECT: “Launch the EC2 instances in a private subnet with a NAT gateway and update the route table” is incorrect. A NAT gateway provides outbound internet access for instances in private subnets but does not allow inbound connections, so this configuration would not make the application accessible from the internet. The aim is only to prevent direct access to the EC2 instances, not to block access to the application itself.
INCORRECT: “Launch the EC2 instances in a public subnet and use AWS WAF to protect the instances from internet-based attacks” is incorrect. With the EC2 instances in a public subnet, direct access from the internet is possible; a single security group misconfiguration or software exploit could leave an instance vulnerable to attack.
INCORRECT: “Launch the EC2 instances in a public subnet and create an Application Load Balancer in a public subnet” is incorrect. The EC2 instances should be launched in a private subnet.
Question 7:
An Auto Scaling group of Amazon EC2 instances behind an Elastic Load Balancer (ELB) is running in an Amazon VPC. Health checks are configured on the ASG to use EC2 status checks. The ELB has determined that an EC2 instance is unhealthy and has removed it from service. A Solutions Architect noticed that the instance is still running and has not been terminated by EC2 Auto Scaling.
What would be an explanation for this behavior?
A. The health check grace period has not yet expired
B. The ASG is waiting for the cooldown timer to expire before terminating the instance
C. The ELB health check type has not been selected for the ASG and so it is unaware that the instance has been determined to be unhealthy by the ELB and has been removed from service
D. Connection draining is enabled and the ASG is waiting for in-flight requests to complete
Explanation
When using an ELB it is best to enable ELB health checks on the Auto Scaling group; otherwise, EC2 status checks may report an instance as healthy even though the ELB has determined it is unhealthy. In that case the instance will be removed from service by the ELB but will not be terminated by Auto Scaling. A minimal configuration sketch follows the notes below.
More information on ASG health checks:
- By default uses EC2 status checks.
- Can also use ELB health checks and custom health checks.
- ELB health checks are in addition to the EC2 status checks.
- If any health check returns an unhealthy status the instance will be terminated.
- With ELB health checks, an instance is marked as unhealthy if the ELB reports it as OutOfService.
- A healthy instance enters the InService state.
- If an instance is marked as unhealthy it will be scheduled for replacement.
- If connection draining is enabled, Auto Scaling waits for in-flight requests to complete or timeout before terminating instances.
- The health check grace period allows a period of time for a new instance to warm up before performing a health check (300 seconds by default).
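A minimal boto3 sketch of enabling ELB health checks on an existing Auto Scaling group (the group name is hypothetical):

import boto3

autoscaling = boto3.client("autoscaling")

# Switch the ASG from EC2 status checks to ELB health checks
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",        # hypothetical group name
    HealthCheckType="ELB",                 # also honour the load balancer's health checks
    HealthCheckGracePeriod=300,            # warm-up time before health checks apply
)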
CORRECT: “The ELB health check type has not been selected for the ASG and so it is unaware that the instance has been determined to be unhealthy by the ELB and has been removed from service” is the correct answer.
INCORRECT: “The ASG is waiting for the cooldown timer to expire before terminating the instance” is incorrect. The cooldown timer relates to scaling activities; the ASG does not wait for it to expire before terminating an unhealthy instance.
INCORRECT: “Connection draining is enabled and the ASG is waiting for in-flight requests to complete” is incorrect. Connection draining is not the correct answer as the ELB has taken the instance out of service so there are no active connections.
INCORRECT: “The health check grace period has not yet expired” is incorrect. The health check grace period allows a period of time for a new instance to warm up before performing a health check.
Question 8:
A company has a fleet of Amazon EC2 instances behind an Elastic Load Balancer (ELB) that are a mixture of c4.2xlarge instance types and c5.large instances. The load on the CPUs on the c5.large instances has been very high, often hitting 100% utilization, whereas the c4.2xlarge instances have been performing well.
What should a Solutions Architect recommend to resolve the performance issues?
A. Add all of the instances into a Placement Group
B. Enable the weighted routing policy on the ELB and configure a higher weighting for the c4.2xlarge instances
C. Change the configuration to use only c4.2xlarge instance types
D. Add more c5.large instances to spread the load more evenly
Explanation
The c4.2xlarge instance type provides significantly more vCPUs than the c5.large type. The best answer is to use the larger instance type for all instances, as its CPU utilization has remained at acceptable levels.
CORRECT: “Change the configuration to use only c4.2xlarge instance types” is the correct answer.
INCORRECT: “Enable the weighted routing policy on the ELB and configure a higher weighting for the c4.2xlarge instances” is incorrect. The weighted routing policy is a Route 53 feature that would not assist in this situation.
INCORRECT: “Add all of the instances into a Placement Group” is incorrect. A placement group helps provide low-latency connectivity between instances and would not help here.
INCORRECT: “Add more c5.large instances to spread the load more evenly” is incorrect. This would not help as this instance type is underperforming with high CPU utilization rates.
Question 10:
A Solutions Architect needs to run a PowerShell script on a fleet of Amazon EC2 instances running Microsoft Windows. The instances have already been launched in an Amazon VPC. What tool can be run from the AWS Management Console to execute the script on all target EC2 instances?
A. AWS OpsWorks
B. AWS CodeDeploy
C. AWS Config
D. Run Command
Explanation
Run Command, a capability of AWS Systems Manager, is designed to support a wide range of enterprise scenarios including installing software, running ad hoc scripts or Microsoft PowerShell commands, configuring Windows Update settings, and more.
Run Command can be used to implement configuration changes across Windows instances on a consistent yet ad hoc basis and is accessible from the AWS Management Console, the AWS Command Line Interface (CLI), the AWS Tools for Windows PowerShell, and the AWS SDKs.
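For illustration, a minimal boto3 sketch of invoking Run Command against two hypothetical instance IDs; this assumes the instances have the SSM Agent running and an instance profile that permits Systems Manager:

import boto3

ssm = boto3.client("ssm")

# Run a PowerShell command on the target Windows instances via Run Command
response = ssm.send_command(
    InstanceIds=["i-0aaa111bbb222ccc3", "i-0ddd444eee555fff6"],   # hypothetical IDs
    DocumentName="AWS-RunPowerShellScript",
    Parameters={"commands": ["Get-Service | Where-Object {$_.Status -eq 'Running'}"]},
)
print(response["Command"]["CommandId"])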
CORRECT: “Run Command” is the correct answer.
INCORRECT: “AWS CodeDeploy” is incorrect. AWS CodeDeploy is a deployment service that automates application deployments to Amazon EC2 instances, on-premises instances, serverless Lambda functions, or Amazon ECS services.
INCORRECT: “AWS Config” is incorrect. AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. It is not used for ad-hoc script execution.
INCORRECT: “AWS OpsWorks” is incorrect. AWS OpsWorks provides managed instances of Chef and Puppet for configuration management; it is not used to run ad hoc scripts from the console.
Question 14:
Some data has become corrupted in an Amazon RDS database. A Solutions Architect plans to use point-in-time restore to recover the data to the last known good configuration. Which of the following statements is correct about restoring an RDS database to a specific point-in-time? (choose 2)
A. You can restore up to the last 5 minutes
B. The database restore overwrites the existing database
C. You can restore up to the last 1 minute
D. Custom DB security groups are applied to the new DB instance
E. The default DB security group is applied to the new DB instance
Explanation
You can restore a DB instance to a specific point in time, creating a new DB instance. When you restore a DB instance to a point in time, the default DB security group is applied to the new DB instance. If you need custom DB security groups applied to your DB instance, you must apply them explicitly using the AWS Management Console, the AWS CLI modify-db-instance command, or the Amazon RDS API ModifyDBInstance operation after the DB instance is available.
A restored DB is always a new RDS instance with a new DNS endpoint, and you can restore up to approximately the last 5 minutes.
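A hedged boto3 sketch of the restore-and-reapply-security-groups flow (the instance identifiers and security group ID are hypothetical):

import boto3

rds = boto3.client("rds")

# Restore to the latest restorable time - this always creates a NEW DB instance
rds.restore_db_instance_to_point_in_time(
    SourceDBInstanceIdentifier="prod-db",
    TargetDBInstanceIdentifier="prod-db-restored",
    UseLatestRestorableTime=True,
)

# Wait for the new instance to become available before modifying it
rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier="prod-db-restored")

# The new instance comes up with the default security group; re-apply custom groups afterwards
rds.modify_db_instance(
    DBInstanceIdentifier="prod-db-restored",
    VpcSecurityGroupIds=["sg-0123456789abcdef0"],   # hypothetical custom security group
    ApplyImmediately=True,
)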
CORRECT: “You can restore up to the last 5 minutes” is a correct answer.
CORRECT: “The default DB security group is applied to the new DB instance” is also a correct answer.
INCORRECT: “Custom DB security groups are applied to the new DB instance” is incorrect. Only the default DB parameter group and security group are applied on restore; you must manually associate any custom parameter groups and security groups afterwards.
INCORRECT: “You can restore up to the last 1 minute” is incorrect. You can restore up to the last 5 minutes.
INCORRECT: “The database restore overwrites the existing database” is incorrect. You cannot restore from a DB snapshot to an existing DB – a new instance is created when you restore.
Question 18:
A company has multiple Amazon VPCs that are peered with each other. The company would like to use a single Elastic Load Balancer (ELB) to route traffic to multiple EC2 instances in peered VPCs within the same region. How can this be achieved?
A. This is possible using the Classic Load Balancer (CLB) if using Instance IDs
B. This is not possible with ELB, you would need to use Route 53
C. This is possible using the Network Load Balancer (NLB) and Application Load Balancer (ALB) if using IP addresses as targets
D. This is not possible, the instances an ELB routes traffic to must be in the same VPC
Explanation
With the ALB and NLB, IP addresses can be used to register the following as targets (a minimal registration sketch follows this list):
- Instances in a peered VPC.
- AWS resources that are addressable by IP address and port.
- On-premises resources linked to AWS through Direct Connect or a VPN connection.
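A minimal boto3 sketch of registering an instance in a peered VPC by its private IP address (IDs and addresses are hypothetical; for IPs outside the load balancer's VPC the AvailabilityZone value "all" is typically required):

import boto3

elbv2 = boto3.client("elbv2")

# Target group that registers targets by IP address rather than by instance ID
tg = elbv2.create_target_group(
    Name="peered-vpc-tg", Protocol="HTTP", Port=80,
    VpcId="vpc-0123456789abcdef0",    # VPC of the load balancer
    TargetType="ip",
)["TargetGroups"][0]

# Register the private IP of an instance in the peered VPC
elbv2.register_targets(
    TargetGroupArn=tg["TargetGroupArn"],
    Targets=[{"Id": "10.1.0.10", "Port": 80, "AvailabilityZone": "all"}],
)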
CORRECT: “This is possible using the Network Load Balancer (NLB) and Application Load Balancer (ALB) if using IP addresses as targets” is the correct answer.
INCORRECT: “This is not possible, the instances that an ELB routes traffic to must be in the same VPC” is incorrect. Instances can be in peered VPCs.
INCORRECT: “This is possible using the Classic Load Balancer (CLB) if using Instance IDs” is incorrect. This is not possible with the CLB.
INCORRECT: “This is not possible with ELB, you would need to use Route 53” is incorrect. This is not true, as detailed above.
Question 20:
A Solutions Architect has logged into an Amazon EC2 Linux instance using SSH and needs to determine a few pieces of information including what IAM role is assigned, the instance ID and the names of the security groups that are assigned to the instance.
From the options below, what would be the best source of this information?
A. Metadata
B. Tags
C. Parameters
D. User data
Explanation
Instance metadata is data about your instance that you can use to configure or manage the running instance. Instance metadata is divided into categories, for example, host name, events, and security groups.
Instance metadata is available at http://169.254.169.254/latest/meta-data.
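For illustration, a short Python sketch that queries these metadata categories using the token-based IMDSv2 flow (it only works when run on the instance itself):

import urllib.request

# IMDSv2: request a session token, then query metadata categories with it
token_req = urllib.request.Request(
    "http://169.254.169.254/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
token = urllib.request.urlopen(token_req).read().decode()

def metadata(path):
    req = urllib.request.Request(
        "http://169.254.169.254/latest/meta-data/" + path,
        headers={"X-aws-ec2-metadata-token": token},
    )
    return urllib.request.urlopen(req).read().decode()

print(metadata("instance-id"))        # the instance ID
print(metadata("security-groups"))    # names of the attached security groups
print(metadata("iam/info"))           # includes the instance profile / role ARN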
CORRECT: “Metadata” is the correct answer.
INCORRECT: “Tags” is incorrect. Tags are used to categorize and label resources.
INCORRECT: “User data” is incorrect. User data is used to configure the system at launch time and specify scripts.
INCORRECT: “Parameters” is incorrect. Parameters are configuration values, such as those in RDS DB parameter groups or AWS Systems Manager Parameter Store; they do not describe a running instance.
Question 21:
A Solutions Architect needs to capture information about the traffic that reaches an Amazon Elastic Load Balancer. The information should include the source, destination, and protocol.
What is the most secure and reliable method for gathering this data?
A. Use Amazon CloudWatch Logs to review detailed logging information
B. Create a VPC flow log for each network interface associated with the ELB
C. Enable Amazon CloudTrail logging and configure packet capturing
D. Create a VPC flow log for the subnets in which the ELB is running
Explanation
You can use VPC Flow Logs to capture detailed information about the traffic going to and from your Elastic Load Balancer. Create a flow log for each network interface for your load balancer. There is one network interface per load balancer subnet.
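A rough boto3 sketch of this approach (the load balancer name, log group, and IAM role ARN are hypothetical; ELB network interfaces can usually be identified by their description):

import boto3

ec2 = boto3.client("ec2")

# Find the load balancer's network interfaces (ELB ENIs carry an "ELB ..." description)
enis = ec2.describe_network_interfaces(
    Filters=[{"Name": "description", "Values": ["ELB app/my-alb/*"]}]   # hypothetical ALB name
)["NetworkInterfaces"]

# Create flow logs for those ENIs, publishing to a CloudWatch Logs group
ec2.create_flow_logs(
    ResourceType="NetworkInterface",
    ResourceIds=[eni["NetworkInterfaceId"] for eni in enis],
    TrafficType="ALL",
    LogGroupName="elb-flow-logs",
    DeliverLogsPermissionArn="arn:aws:iam::123456789012:role/flow-logs-role",  # hypothetical role
)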
CORRECT: “Create a VPC flow log for each network interface associated with the ELB” is the correct answer.
INCORRECT: “Enable Amazon CloudTrail logging and configure packet capturing” is incorrect. CloudTrail performs auditing of API actions, it does not do packet capturing.
INCORRECT: “Use Amazon CloudWatch Logs to review detailed logging information” is incorrect. CloudWatch Logs does not record this traffic information by itself; VPC Flow Logs must be created to capture it.
INCORRECT: “Create a VPC flow log for the subnets in which the ELB is running” is incorrect. A subnet flow log captures traffic for every resource in the subnet; creating flow logs on the ELB's network interfaces is the more precise and secure option.
Question 22:
The load on a MySQL database running on Amazon EC2 is increasing and performance has been impacted. Which of the options below would help to increase storage performance? (choose 2)
A. Use EBS optimized instances
B. Use Provisioned IOPS (io1) EBS volumes
C. Create a RAID1 array from multiple EBS volumes
D. Use HDD, Cold (sc1) EBS volumes
E. Use a larger instance size within the instance family
Explanation
EBS optimized instances provide dedicated capacity for Amazon EBS I/O. EBS optimized instances are designed for use with all EBS volume types.
Provisioned IOPS (io1) EBS volumes allow you to specify the number of IOPS you require, up to 50 IOPS per GiB. Within this limit you can select the IOPS needed to improve the performance of your volume.
RAID can be used to increase IOPS; however, RAID 1 does not. For example:
- RAID 0 (striping): data is written across multiple disks, which increases performance but provides no redundancy.
- RAID 1 (mirroring): creates two copies of the data, which provides redundancy but does not increase performance.
Cold HDD (sc1) provides the lowest-cost storage but low performance.
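For illustration, a minimal boto3 sketch of both correct options, an io1 volume with provisioned IOPS and an EBS-optimized instance (the AZ, AMI, and instance type are hypothetical):

import boto3

ec2 = boto3.client("ec2")

# Provisioned IOPS volume - IOPS can be set up to 50x the size in GiB for io1
ec2.create_volume(
    AvailabilityZone="us-east-1a",   # hypothetical AZ
    Size=200,
    VolumeType="io1",
    Iops=10000,                      # 200 GiB x 50 = 10,000 IOPS maximum at this size
)

# Launch an EBS-optimized instance so EBS I/O gets dedicated bandwidth
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical AMI
    InstanceType="m5.xlarge",
    MinCount=1, MaxCount=1,
    EbsOptimized=True,
)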
CORRECT: “Use Provisioned IOPS (io1) EBS volumes” is a correct answer.
CORRECT: “Use EBS optimized instances” is also a correct answer.
INCORRECT: “Use a larger instance size within the instance family” is incorrect as this may not increase storage performance.
INCORRECT: “Use HDD, Cold (sc1) EBS volumes” is incorrect, as this will likely decrease storage performance.
INCORRECT: “Create a RAID 1 array from multiple EBS volumes” is incorrect. As explained above, mirroring does not increase performance.
Question 26:
A Solutions Architect has created a new Network ACL in an Amazon VPC. No rules have been created. Which of the statements below are correct regarding the default state of the Network ACL? (choose 2)
A. There is a default inbound rule denying all traffic
B. There is a default inbound rule allowing traffic from the VPC CIDR block
C. There is a default outbound rule allowing all traffic
D. There is a default outbound rule allowing traffic to the Internet Gateway
E. There is a default outbound rule denying all traffic
Explanation
A VPC automatically comes with a default network ACL which allows all inbound/outbound traffic. A custom NACL denies all traffic both inbound and outbound by default.
Network ACLs function at the subnet level, and you can have both allow and deny rules. Network ACLs have separate inbound and outbound rules and each rule can allow or deny traffic.
Network ACLs are stateless, so return traffic is subject to the rules for that direction. NACLs apply only to traffic entering or leaving the subnet, not to traffic between instances within the subnet.
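A small boto3 sketch showing that a custom NACL starts with no rules, so traffic must be explicitly allowed (the VPC ID and the rule shown are hypothetical):

import boto3

ec2 = boto3.client("ec2")

# A newly created custom network ACL denies everything until entries are added
nacl = ec2.create_network_acl(VpcId="vpc-0123456789abcdef0")["NetworkAcl"]

# Example entry: explicitly allow inbound HTTPS; without entries like this, all traffic is denied
ec2.create_network_acl_entry(
    NetworkAclId=nacl["NetworkAclId"],
    RuleNumber=100,
    Protocol="6",                # TCP
    RuleAction="allow",
    Egress=False,                # inbound rule
    CidrBlock="0.0.0.0/0",
    PortRange={"From": 443, "To": 443},
)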
CORRECT: “There is a default inbound rule denying all traffic” is a correct answer.
CORRECT: “There is a default outbound rule denying all traffic” is also a correct answer.
INCORRECT: “There is a default inbound rule allowing traffic from the VPC CIDR block” is incorrect as inbound traffic is not allowed from anywhere by default.
INCORRECT: “There is a default outbound rule allowing traffic to the Internet Gateway” is incorrect as outbound traffic is not allowed to anywhere by default.
INCORRECT: “There is a default outbound rule allowing all traffic” is incorrect as all traffic is denied.
Question 29:
A Solutions Architect is designing the system monitoring and deployment layers of a serverless application. The system monitoring layer will manage system visibility through recording logs and metrics and the deployment layer will deploy the application stack and manage workload changes through a release management process.
The Architect needs to select the most appropriate AWS services for these functions. Which services and frameworks should be used for the system monitoring and deployment layers? (choose 2)
A. Use AWS Lambda to package, test, and deploy the serverless application stack
B. Use AWS SAM to package, test, and deploy the serverless application stack
C. Use Amazon CloudWatch for consolidating system and application logs and monitoring custom metrics
D. Use AWS X-Ray to package, test, and deploy the serverless application stack
E. Use AWS CloudTrail for consolidating system and application logs and monitoring custom metrics
Explanation
AWS Serverless Application Model (AWS SAM) is an extension of AWS CloudFormation that is used to package, test, and deploy serverless applications.
With Amazon CloudWatch, you can access system metrics on all the AWS services you use, consolidate system and application level logs, and create business key performance indicators (KPIs) as custom metrics for your specific needs.
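For illustration, a minimal boto3 sketch of publishing a business KPI as a custom CloudWatch metric (the namespace, metric name, and dimension are hypothetical):

import boto3

cloudwatch = boto3.client("cloudwatch")

# Publish a custom metric data point for the monitoring layer
cloudwatch.put_metric_data(
    Namespace="ServerlessApp",
    MetricData=[{
        "MetricName": "OrdersProcessed",
        "Value": 1,
        "Unit": "Count",
        "Dimensions": [{"Name": "Stage", "Value": "prod"}],
    }],
)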
CORRECT: “Use AWS SAM to package, test, and deploy the serverless application stack” is a correct answer.
CORRECT: “Use Amazon CloudWatch for consolidating system and application logs and monitoring custom metrics” is also a correct answer.
INCORRECT: “Use AWS CloudTrail for consolidating system and application logs and monitoring custom metrics” is incorrect as CloudTrail is used for auditing not performance monitoring.
INCORRECT: “Use AWS X-Ray to package, test, and deploy the serverless application stack” is incorrect. AWS X-Ray lets you analyze and debug serverless applications by providing distributed tracing and service maps to easily identify performance bottlenecks by visualizing a request end-to-end.
INCORRECT: “Use AWS Lambda to package, test, and deploy the serverless application stack” is incorrect. AWS Lambda is used for executing your code as functions, it is not used for packaging, testing and deployment. AWS Lambda is used with AWS SAM.
Question 30:
An Amazon DynamoDB table has a variable load, ranging from sustained heavy usage some days, to only having small spikes on others. The load is 80% read and 20% write. The provisioned throughput capacity has been configured to account for the heavy load to ensure throttling does not occur.
What would be the most efficient solution to optimize cost?
A. Use DynamoDB DAX to increase the performance of the database
B. Create a DynamoDB Auto Scaling scaling policy
C. Create a CloudWatch alarm that notifies you of increased/decreased load, and manually adjust the provisioned throughput
D. Create a CloudWatch alarm that triggers an AWS Lambda function that adjusts the provisioned throughput
Explanation
Amazon DynamoDB auto scaling uses the AWS Application Auto Scaling service to dynamically adjust provisioned throughput capacity on your behalf, in response to actual traffic patterns. This is the most efficient and cost-effective solution to optimizing for cost.
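A minimal boto3 sketch of configuring DynamoDB auto scaling for read capacity through Application Auto Scaling (the table name and capacity limits are hypothetical):

import boto3

aas = boto3.client("application-autoscaling")

# Register the table's read capacity as a scalable target
aas.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/MyTable",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    MinCapacity=5,
    MaxCapacity=500,
)

# Target-tracking policy that keeps consumed read capacity around 70% of provisioned
aas.put_scaling_policy(
    PolicyName="read-capacity-tracking",
    ServiceNamespace="dynamodb",
    ResourceId="table/MyTable",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
        },
    },
)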
CORRECT: “Create a DynamoDB Auto Scaling scaling policy” is the correct answer.
INCORRECT: “Create a CloudWatch alarm that triggers an AWS Lambda function that adjusts the provisioned throughput” is incorrect. Using AWS Lambda to modify the provisioned throughput is possible but it would be more cost-effective to use DynamoDB Auto Scaling as there is no cost to using it.
INCORRECT: “Create a CloudWatch alarm that notifies you of increased/decreased load, and manually adjust the provisioned throughput” is incorrect. Manually adjusting the provisioned throughput is not efficient.
INCORRECT: “Use DynamoDB DAX to increase the performance of the database” is incorrect. DynamoDB DAX is an in-memory cache that increases the performance of DynamoDB. However, it costs money and there is no requirement to increase performance.
Question 37:
A Solutions Architect is designing the disk configuration for an Amazon EC2 instance. The instance needs to support a MapReduce process that requires high throughput for a large dataset with large I/O sizes.
Which Amazon EBS volume is the MOST cost-effective solution for these requirements?
A. EBS General Purpose SSD
B. EBS Throughput Optimized HDD
C. EBS Provisioned IOPS SSD
D. EBS General Purpose SSD in a RAID1 configuration
Explanation
EBS Throughput Optimized HDD (st1) is good for the following use cases (and is the most cost-effective option):
- Frequently accessed, throughput intensive workloads with large datasets and large I/O sizes, such as MapReduce, Kafka, log processing, data warehouse, and ETL workloads.
Throughput is measured in MB/s, and includes the ability to burst up to 250 MB/s per TB, with a baseline throughput of 40 MB/s per TB and a maximum throughput of 500 MB/s per volume.
CORRECT: “EBS Throughput Optimized HDD” is the correct answer.
INCORRECT: “EBS General Purpose SSD in a RAID 1 configuration” is incorrect. This is not the best solution for the requirements or the most cost-effective.
INCORRECT: “EBS Provisioned IOPS SSD” is incorrect. SSD disks are more expensive.
INCORRECT: “EBS General Purpose SSD” is incorrect. SSD disks are more expensive.
Question 38:
Several Amazon EC2 Spot instances are being used to process messages from an Amazon SQS queue and store results in an Amazon DynamoDB table. Shortly after picking up a message from the queue AWS terminated the Spot instance. The Spot instance had not finished processing the message. What will happen to the message?
A. The message will be lost as it would have been deleted from the queue when processed
B. The message will remain in the queue and be immediately picked up by another instance
C. The results may be duplicated in DynamoDB as the message will likely be processed multiple times
D. The message will become available for processing again after the visibility timeout expires
Explanation
The visibility timeout is the amount of time a message is hidden from other consumers after a reader picks it up. If the message is processed and deleted by the consumer within the visibility timeout, it is removed from the queue. If it is not processed within the visibility timeout, the message becomes visible again and could be delivered more than once. The maximum visibility timeout for an Amazon SQS message is 12 hours.
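For illustration, a minimal boto3 sketch of the receive/process/delete loop that relies on the visibility timeout (the queue URL is hypothetical and the processing step is a placeholder):

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/jobs"   # hypothetical queue URL

# Receive a message; it stays in the queue but is hidden for the visibility timeout
response = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=1,
    WaitTimeSeconds=20,
    VisibilityTimeout=120,   # message is hidden for 2 minutes while this worker processes it
)

for message in response.get("Messages", []):
    print("processing", message["Body"])          # placeholder for real work
    # Only an explicit delete removes the message. If the worker (e.g. a Spot instance)
    # is terminated before this call, the message reappears after the visibility timeout.
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])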
CORRECT: “The message will become available for processing again after the visibility timeout expires” is the correct answer.
INCORRECT: “The message will be lost as it would have been deleted from the queue when processed” is incorrect. The message will not be lost and will not be immediately picked up by another instance.
INCORRECT: “The message will remain in the queue and be immediately picked up by another instance” is incorrect. As mentioned above it will be available for processing in the queue again after the timeout expires.
INCORRECT: “The results may be duplicated in DynamoDB as the message will likely be processed multiple times” is incorrect. As the instance had not finished processing the message, it should only be fully processed once; depending on your application logic, however, it is possible that some data was already written to DynamoDB.