AWS Solutions Architect Exam Flashcards
You have taken over management of several instances in the company AWS environment. You want to quickly review the scripts used to bootstrap the instances at launch. This data is available from the instance metadata service via a simple HTTP request. What can you append to the URL http://169.254.169.254/latest/ to retrieve it?
A. instance-data
B. user-data
C. instance-demographic-data
D. meta-data
B. user-data
When you launch an instance in Amazon EC2, you have the option of passing user data to the instance that can be used to perform common automated configuration tasks and even run scripts after the instance starts. You can pass two types of user data to Amazon EC2: shell scripts and cloud-init directives.
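A minimal sketch of what this looks like in practice, run from within an instance using Python's standard library (the token TTL value is arbitrary): appending user-data to the base URL returns the bootstrap script, with an IMDSv2 session token supplied in the request headers.

```python
import urllib.request

BASE = "http://169.254.169.254/latest"

# IMDSv2: request a short-lived session token first.
token_req = urllib.request.Request(
    f"{BASE}/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "300"},
)
token = urllib.request.urlopen(token_req).read().decode()

# Appending user-data to the base URL returns the bootstrap script.
data_req = urllib.request.Request(
    f"{BASE}/user-data",
    headers={"X-aws-ec2-metadata-token": token},
)
print(urllib.request.urlopen(data_req).read().decode())
```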
You are working for a large financial institution and have been tasked with creating a relational database solution to deal with a read-heavy workload. The database needs to be highly available within the Oregon region and quickly recover if an Availability Zone goes offline. Which of the following would you select to meet these requirements?
A. Create a read replica and point your read workloads to the new endpoint RDS provides.
B. Using RDS, create a read replica. If a region fails, RDS will automatically cut over to the read replica.
C. Enable Multi-AZ support for the RDS database.
D. Split your database into multiple RDS instances across different regions. In the event of a failure, point your application to the new region.
A. Create a read replica and point your read workloads to the new endpoint RDS provides.
Amazon RDS uses the MariaDB, MySQL, Oracle, PostgreSQL, and Microsoft SQL Server DB engines’ built-in replication functionality to create a special type of DB instance called a read replica from a source DB instance. Updates made to the source DB instance are asynchronously copied to the read replica. You can reduce the load on your source DB instance by routing read queries from your applications to the read replica. Using read replicas, you can elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads.
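As a rough illustration (the boto3 identifiers and Availability Zone below are placeholders, not from the question), creating a read replica and reading back the endpoint that read traffic should use might look like this:

```python
import boto3

rds = boto3.client("rds", region_name="us-west-2")  # Oregon

# Create the read replica from the source DB instance.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="orders-db-replica",
    SourceDBInstanceIdentifier="orders-db",
    AvailabilityZone="us-west-2b",  # place the replica in another AZ
)

# Once the replica is available, point read queries at its endpoint.
replica = rds.describe_db_instances(
    DBInstanceIdentifier="orders-db-replica"
)["DBInstances"][0]
print(replica["Endpoint"]["Address"])
```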
Several instances you are creating have a specific data requirement. The requirement states that the data on the root device needs to persist independently from the lifetime of the instance. After considering AWS storage options, which is the simplest way to meet these requirements?
A. Create a cron job to migrate the data to S3.
B. Store the data on the local instance store.
C. Send the data to S3 using S3 lifecycle rules.
D. Store your root device data on Amazon EBS and set the DeleteOnTermination attribute to false using a block device mapping.
D. Store your root device data on Amazon EBS and set the DeleteOnTermination attribute to false using a block device mapping.
An Amazon EBS-backed instance can be stopped and later restarted without affecting data stored in the attached volumes. By default, the root volume for an AMI backed by Amazon EBS is deleted when the instance terminates. You can change the default behavior to ensure that the volume persists after the instance terminates. To change the default behavior, set the DeleteOnTermination attribute to false using a block device mapping.
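A hedged boto3 sketch of that block device mapping (the AMI ID and root device name are assumptions and depend on your AMI):

```python
import boto3

ec2 = boto3.client("ec2")

# Launch an instance whose root EBS volume survives termination.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    BlockDeviceMappings=[
        {
            "DeviceName": "/dev/xvda",               # root device name of the AMI
            "Ebs": {"DeleteOnTermination": False},   # keep the volume after terminate
        }
    ],
)
```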
A company has a great deal of data in S3 buckets for which they want to create a database. Creating the RDS database, normalizing the data, and migrating to the RDS database will take time and is the long-term plan. But there’s an immediate need to query this data to retrieve information necessary for an audit. Which AWS service will enable querying data in S3 using standard SQL commands?
A. There is no such service, but there are third-party tools.
B. DynamoDB
C. Amazon SQL Connector
D. Amazon Athena
D. Amazon Athena
Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you only pay for the queries you run.
Athena is easy to use. Simply point to your data in Amazon S3, define the schema, and start querying using standard SQL. Most results are delivered within seconds. With Athena, there’s no need for complex ETL jobs to prepare your data for analysis. This makes it easy for anyone with SQL skills to quickly analyze large-scale datasets.
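For example, a minimal boto3 sketch of running a standard SQL query against data in S3 (the database, table, and results bucket names are made up for illustration):

```python
import boto3

athena = boto3.client("athena")

# Kick off a standard SQL query over data already sitting in S3.
run = athena.start_query_execution(
    QueryString="SELECT account_id, SUM(amount) FROM audit_records GROUP BY account_id",
    QueryExecutionContext={"Database": "audit_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)

# Check the execution state; fetch results once it reports SUCCEEDED.
status = athena.get_query_execution(QueryExecutionId=run["QueryExecutionId"])
print(status["QueryExecution"]["Status"]["State"])
```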
Your application is housed on an Auto Scaling Group of EC2 instances. The application is backed by a Multi-AZ MySQL RDS database and an additional read replica. You need to simulate some failures for disaster recovery drills. Which event will not cause RDS to perform a failover to the standby replica?
A. Compute unit failure on primary
B. Storage failure on primary
C. Read replica failure
D. Loss of network connectivity to primary
C. Read replica failure
Correct. When you provision a Multi-AZ DB instance, Amazon RDS automatically creates a primary DB instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure and is engineered to be highly reliable. In case of an infrastructure failure, Amazon RDS performs an automatic failover to the standby (or to a read replica in the case of Amazon Aurora), so that you can resume database operations as soon as the failover is complete. Since the endpoint for your DB instance remains the same after a failover, your application can resume database operation without the need for manual administrative intervention. Reference: https://aws.amazon.com/rds/features/multi-az/
Amazon RDS handles failovers automatically so you can resume database operations as quickly as possible without administrative intervention. The primary DB instance switches over automatically to the standby replica if any of the following conditions occur:
An Availability Zone outage
The primary DB instance fails
The DB instance’s server type is changed
The operating system of the DB instance is undergoing software patching
A manual failover of the DB instance was initiated using Reboot with failover
There are several ways to determine if your Multi-AZ DB instance has failed over:
DB event subscriptions can be set up to notify you by email or SMS that a failover has been initiated. For more information about events, see Using Amazon RDS Event Notification.
You can view your DB events by using the Amazon RDS console or API operations.
You can view the current state of your Multi-AZ deployment by using the Amazon RDS console and API operations.
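For a DR drill like the one described, one way to trigger a Multi-AZ failover deliberately is the "Reboot with failover" option; a minimal boto3 sketch (the DB instance identifier is a placeholder):

```python
import boto3

rds = boto3.client("rds")

# Reboot the primary and force a failover to the standby in the other AZ.
rds.reboot_db_instance(
    DBInstanceIdentifier="analytics-db",
    ForceFailover=True,
)
```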
You have several EC2 instances in an Auto Scaling Group fronted by a Network Load Balancer. The instances will need access to S3 and DynamoDB. The Auto Scaling Group was created from a launch template. What needs to be configured in the launch template to give newly launched instances access to S3 and DynamoDB?
A. IAM policy attached to newly launched instances with permissions to S3 and DynamoDB.
B. An IAM Group for EC2 with policies giving permission to S3 and DynamoDB.
C. Access keys to be passed to newly launched EC2 instances.
D. An IAM Role attached to newly launched instances with permissions to S3 and DynamoDB.
D. An IAM Role attached to newly launched instances with permissions to S3 and DynamoDB.
Correct. IAM roles are designed so that your applications can securely make API requests from your instances, without requiring you to manage the security credentials that the applications use. Instead of creating and distributing your AWS credentials, you can delegate permission to make API requests using IAM roles.
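A minimal boto3 sketch of wiring this into the launch template (names and IDs are placeholders; the instance profile is assumed to wrap a role granting S3 and DynamoDB permissions):

```python
import boto3

ec2 = boto3.client("ec2")

# New instances launched from this template receive temporary credentials
# for the role via its instance profile.
ec2.create_launch_template(
    LaunchTemplateName="web-tier",
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",
        "InstanceType": "t3.micro",
        "IamInstanceProfile": {"Name": "app-s3-dynamodb-role"},
    },
)
```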
Your company has recently converted to a hybrid cloud environment and will slowly be migrating to a fully AWS cloud environment. The AWS side is in need of some steps to prepare for disaster recovery. A disaster recovery plan needs to be drawn up, and disaster recovery drills need to be performed for compliance reasons. The company wants to establish Recovery Time and Recovery Point Objectives. The RTO and RPO can be fairly relaxed; the main point is to have a plan in place, with as much cost savings as possible. Which AWS disaster recovery pattern will best meet these requirements?
A. Multi Site
B. Warm Standby
C. Backup and restore
D. Pilot Light
C. Backup and restore
Correct: This is the least expensive option and cost is the overriding factor.
An online media company has created an application which provides analytical data to its clients. The application is hosted on EC2 instances in an Auto Scaling Group. You have been brought on as a consultant and add an Application Load Balancer to front the Auto Scaling Group and distribute the load between the instances. The VPC which houses this architecture is running IPv4 and IPv6. The last thing you need to do to complete the configuration is point the domain name to the Application Load Balancer. Using Route 53, which record types at the zone apex will you use to point to the DNS name of the Application Load Balancer?
Choose two.
A. Alias with an AAAA type record set.
B. Alias with a CNAME record set.
C. Alias with an A type record set.
D. Alias with an MX type record set.
A. Alias with an AAAA type record set.
C. Alias with an A type record set.
Alias with a type “AAAA” record set and Alias with a type “A” record set are correct. To route domain traffic to an ELB load balancer, use Amazon Route 53 to create an alias record that points to your load balancer. An alias record is a Route 53 extension to DNS; unlike a CNAME record, it can be created at the zone apex. Because the VPC runs both IPv4 and IPv6, create an alias A record for IPv4 traffic and an alias AAAA record for IPv6 traffic.
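A rough boto3 sketch of creating both alias records at the apex (the hosted zone IDs, domain, and ALB DNS name are placeholders; look up the ALB's canonical hosted zone ID for its Region):

```python
import boto3

r53 = boto3.client("route53")

# Alias target describing the Application Load Balancer.
alias = {
    "HostedZoneId": "Z1H1FL5HABSF5",   # placeholder: ALB canonical hosted zone ID for its Region
    "DNSName": "my-alb-1234567890.us-west-2.elb.amazonaws.com",
    "EvaluateTargetHealth": False,
}

# Create an alias A record (IPv4) and an alias AAAA record (IPv6) at the apex.
r53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",  # placeholder: your public hosted zone
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "example.com.",
                    "Type": record_type,
                    "AliasTarget": alias,
                },
            }
            for record_type in ("A", "AAAA")
        ]
    },
)
```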
Your company is using a hybrid configuration because there are some legacy applications which are not easily converted and migrated to AWS. With this configuration comes a typical scenario where the legacy apps must maintain the same private IP address and MAC address. You are attempting to convert the application to the cloud and have configured an EC2 instance to house the application. What you are currently testing is removing the ENI from the legacy instance and attaching it to the EC2 instance. You want to attempt a cold attach. What does this mean?
A. Attach ENI before the public IP address is assigned.
B. Attach ENI when the instance is being launched.
C. Attach ENI to an instance when it’s running.
D. Attach ENI when it’s stopped.
B. Attach ENI when the instance is being launched.
You can attach a network interface to an instance when it’s running (hot attach), when it’s stopped (warm attach), or when the instance is being launched (cold attach). You can detach secondary network interfaces when the instance is running or stopped; however, you can’t detach the primary network interface. You can move a network interface from one instance to another if the instances are in the same Availability Zone and VPC but in different subnets.
When launching an instance using the CLI, API, or an SDK, you can specify the primary network interface and additional network interfaces. Launching an Amazon Linux or Windows Server instance with multiple network interfaces automatically configures interfaces, private IPv4 addresses, and route tables on the operating system of the instance. A warm or hot attach of an additional network interface may require you to manually bring up the second interface, configure the private IPv4 address, and modify the route table accordingly. Instances running Amazon Linux or Windows Server automatically recognize the warm or hot attach and configure themselves.
Attaching another network interface to an instance (for example, a NIC teaming configuration) cannot be used as a method to increase or double the network bandwidth to or from the dual-homed instance. If you attach two or more network interfaces from the same subnet to an instance, you may encounter networking issues such as asymmetric routing. If possible, use a secondary private IPv4 address on the primary network interface instead. For more information, see Assigning a Secondary Private IPv4 Address.
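To make the distinction concrete, a hedged boto3 sketch (ENI, AMI, and instance IDs are placeholders): a cold attach supplies the existing ENI at launch, while a warm/hot attach adds it to an instance that already exists.

```python
import boto3

ec2 = boto3.client("ec2")

# Cold attach: provide the existing ENI as the primary interface at launch time.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    NetworkInterfaces=[
        {"DeviceIndex": 0, "NetworkInterfaceId": "eni-0abc123def456789a"}
    ],
)

# Warm/hot attach: add the ENI to an instance that is stopped or running.
ec2.attach_network_interface(
    NetworkInterfaceId="eni-0abc123def456789a",
    InstanceId="i-0123456789abcdef0",
    DeviceIndex=1,
)
```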
A new startup company decides to use AWS to host their web application. They configure a VPC as well as two subnets within the VPC. They also attach an internet gateway to the VPC. In the first subnet, they create an EC2 instance to host a web application. There is a network ACL and a security group, which both have the proper ingress and egress to and from the internet. There is a route in the route table to the internet gateway. The EC2 instances added to the subnet need to have a globally unique IP address to ensure internet access. Which is not a globally unique IP address?
A. Public IP address
B. Private IP address
C. Elastic IP address
D. IPv6 address
B. Private IP address
Public IPv4 addresses, Elastic IP addresses, and IPv6 addresses are globally unique. Private IPv4 addresses are not globally unique; they are drawn from the ranges 10.0.0.0 to 10.255.255.255, 172.16.0.0 to 172.31.255.255, and 192.168.0.0 to 192.168.255.255. Reference: RFC 1918.
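A quick way to sanity-check this with Python's standard library (the sample addresses are arbitrary):

```python
import ipaddress

# is_private is True for the RFC 1918 ranges, which are not globally unique.
for addr in ("10.1.2.3", "172.20.0.5", "192.168.1.10", "54.230.1.1"):
    ip = ipaddress.ip_address(addr)
    print(addr, "private" if ip.is_private else "globally routable")
```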
A small startup company has begun using AWS for all of its IT infrastructure. The company has two AWS Solutions Architects, and they are very proficient with AWS deployments. They want to choose a deployment service that best meets the given requirements. Those requirements include version control of their infrastructure documentation and granular control of all of the services to be deployed. Which AWS service would best meet these requirements?
A. CloudFormation
B. OpsWorks
C. Elastic Beanstalk
D. Terraform
A. CloudFormation
Correct. CloudFormation is infrastructure as code, and the CloudFormation feature of templates allows this infrastructure as code to be version controlled. While it can be argued that both OpsWorks and Elastic Beanstalk provide some granular control of services, this is not the main feature of either.
(Terraform - Incorrect. Terraform is not an AWS product.)
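As an illustration of treating infrastructure as version-controlled code, a minimal boto3 sketch of deploying a template file kept in source control (the stack name and template path are assumptions):

```python
import boto3

cfn = boto3.client("cloudformation")

# The template file lives in the repository, so changes to it are versioned.
with open("infrastructure/vpc.yaml") as f:
    template_body = f.read()

cfn.create_stack(
    StackName="core-vpc",
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_NAMED_IAM"],  # only needed if the template creates IAM resources
)
```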
You are working as a Solutions Architect in a large healthcare organization. You have many Auto Scaling Groups that utilize launch configurations. Many of these launch configurations are similar yet have subtle differences. You’d like to use multiple versions of these launch configurations. An ideal approach would be to have a default launch configuration and then have additional versions that add additional features. Which option best meets these requirements?
A. Use launch templates instead.
B. Simply create the needed versions. Launch configurations already have versioning.
C. Create the launch configurations in CloudFormation and version the templates accordingly.
D. Store the launch configurations in S3 and turn on versioning.
A. Use launch templates instead.
A launch template is similar to a launch configuration, in that it specifies instance configuration information. Included are the ID of the Amazon Machine Image (AMI), the instance type, a key pair, security groups, and the other parameters that you use to launch EC2 instances. However, defining a launch template instead of a launch configuration allows you to have multiple versions of a template. With versioning, you can create a subset of the full set of parameters and then reuse it to create other templates or template versions. For example, you can create a default template that defines common configuration parameters and allow the other parameters to be specified as part of another version of the same template.
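A minimal boto3 sketch of that pattern (names, AMI ID, and instance types are placeholders): a default template, then a second version that overrides only what differs.

```python
import boto3

ec2 = boto3.client("ec2")

# Version 1: the default template with the common configuration.
ec2.create_launch_template(
    LaunchTemplateName="app-base",
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",
        "InstanceType": "t3.micro",
    },
)

# Version 2: reuse version 1 and change only the instance type.
ec2.create_launch_template_version(
    LaunchTemplateName="app-base",
    SourceVersion="1",
    LaunchTemplateData={"InstanceType": "t3.large"},
)
```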
You have just started working at a company that is migrating from a physical data center into AWS. Currently, you have 25 TB of data that needs to be moved to an S3 bucket. Your company has just finished setting up a 1 Gbps Direct Connect connection, but you do not have a VPN currently up and running. This data needs to be encrypted during transit and at rest and must be uploaded to the S3 bucket within 21 days. How can you meet these requirements?
A. Use a Snowball device to transmit the data.
B. Upload the data using Direct Connect.
C. Upload the data to S3 using your public internet connection.
D. Order a Snowcone device to transmit the data.
A. Use a Snowball device to transmit the data.
Correct. A Snowball device encrypts your data, and 25 TB can be loaded and shipped well within the 21-day window, so both the security and timing requirements are met. Direct Connect on its own does not encrypt traffic in transit, and there is no VPN in place, so uploading over Direct Connect would not satisfy the encryption-in-transit requirement.
Your team has provisioned Auto Scaling Groups in a single Region. At maximum capacity, the Auto Scaling Groups would total 40 EC2 instances between them. However, you notice that the Auto Scaling Groups will only scale out to a portion of that number of instances at any one time. What could be the problem?
A. You can only have 20 instances per region. This is a hard limit.
B. The associated load balancer can only serve 20 instances at one time.
C. There is a vCPU-based on-demand instance limit per region.
D. You can only have 20 instances per Availability Zone.
C. There is a vCPU-based on-demand instance limit per region.
Correct. Your AWS account has default quotas, formerly referred to as limits, for each AWS service. Unless otherwise noted, each quota is Region-specific. You can request increases for some quotas, and other quotas cannot be increased. Remember that each EC2 instance can have a variance of the number of vCPUs, depending on its type and your configuration, so it’s always wise to calculate your vCPU needs to make sure you are not going to hit quotas easily. Service Quotas is an AWS service that helps you manage your quotas for over 100 AWS services from one location. Along with looking up the quota values, you can also request a quota increase from the Service Quotas console.
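A hedged boto3 sketch of checking the relevant quota with Service Quotas (the quota code shown is an assumption for the Running On-Demand Standard instances limit; confirm it with list_service_quotas):

```python
import boto3

quotas = boto3.client("service-quotas")

# Look up the Region's vCPU quota for running On-Demand Standard instances.
resp = quotas.get_service_quota(
    ServiceCode="ec2",
    QuotaCode="L-1216C47A",  # assumed quota code; verify via list_service_quotas
)
print(resp["Quota"]["QuotaName"], resp["Quota"]["Value"])
```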
An Application Load Balancer is fronting an Auto Scaling Group of EC2 instances, and the instances are backed by an RDS database. The Auto Scaling Group has been configured to use the Default Termination Policy. You are testing the Auto Scaling Group and have triggered a scale-in. Which instance will be terminated first?
A. The Auto Scaling Group will randomly select an instance to terminate.
B. The longest running instance.
C. The instance for which the load balancer stops sending traffic.
D. The instance launched from the oldest launch configuration.
D. The instance launched from the oldest launch configuration.
What do we know? The ASG is using the Default Termination Policy. The default termination policy is designed to help ensure that your instances span Availability Zones evenly for high availability. The default policy is kept generic and flexible to cover a range of scenarios. The default termination policy behavior is as follows:
Determine which Availability Zones have the most instances, and at least one instance that is not protected from scale in.
Determine which instances to terminate so as to align the remaining instances to the allocation strategy for the On-Demand or Spot instance that is terminating. This only applies to an Auto Scaling Group that specifies allocation strategies. For example, after your instances launch, you change the priority order of your preferred instance types. When a scale-in event occurs, Amazon EC2 Auto Scaling tries to gradually shift the On-Demand instances away from instance types that are lower priority.
Determine whether any of the instances use the oldest launch template or configuration:
[For Auto Scaling Groups that use a launch template] Determine whether any of the instances use the oldest launch template, unless there are instances that use a launch configuration. Amazon EC2 Auto Scaling terminates instances that use a launch configuration before instances that use a launch template.
[For Auto Scaling Groups that use a launch configuration] Determine whether any of the instances use the oldest launch configuration.
After applying all of the above criteria, if there are multiple unprotected instances to terminate, determine which instances are closest to the next billing hour. If there are multiple unprotected instances closest to the next billing hour, terminate one of these instances at random.