AWS Solutions Architect Exam Flashcards

1
Q

You have taken over management of several instances in the company AWS environment. You want to quickly review the scripts used to bootstrap the instances at runtime. This data can be retrieved with a simple HTTP request from within an instance. What can you append to the URL http://169.254.169.254/latest/ to retrieve this data?

A. instance-data
B. user-data
C. instance-demographic-data
D. meta-data

A

B. user-data

When you launch an instance in Amazon EC2, you have the option of passing user data to the instance that can be used to perform common automated configuration tasks and even run scripts after the instance starts. You can pass two types of user data to Amazon EC2: shell scripts and cloud-init directives.
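
As a minimal sketch, run from within the instance itself: the plain GET described in the answer works with IMDSv1, while newer instances may require the IMDSv2 session-token flow shown here. The token TTL value is arbitrary.

```python
import urllib.request

BASE = "http://169.254.169.254/latest"

# IMDSv2: obtain a session token first (IMDSv1 allows a plain GET instead).
token_request = urllib.request.Request(
    f"{BASE}/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
token = urllib.request.urlopen(token_request).read().decode()

# Append user-data to the base URL to retrieve the bootstrap script.
request = urllib.request.Request(
    f"{BASE}/user-data",
    headers={"X-aws-ec2-metadata-token": token},
)
print(urllib.request.urlopen(request).read().decode())
```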

2
Q

You are working for a large financial institution and have been tasked with creating a relational database solution to deal with a read-heavy workload. The database needs to be highly available within the Oregon region and quickly recover if an Availability Zone goes offline. Which of the following would you select to meet these requirements?

A. Create a read replica and point your read workloads to the new endpoint RDS provides.
B. Using RDS, create a read replica. If a region fails, RDS will automatically cut over to the read replica.
C. Enable Multi-AZ support for the RDS database.
D. Split your database into multiple RDS instances across different regions. In the event of a failure, point your application to the new region.

A

A. Create a read replica and point your read workloads to the new endpoint RDS provides.

Amazon RDS uses the MariaDB, MySQL, Oracle, PostgreSQL, and Microsoft SQL Server DB engines’ built-in replication functionality to create a special type of DB instance called a read replica from a source DB instance. Updates made to the source DB instance are asynchronously copied to the read replica. You can reduce the load on your source DB instance by routing read queries from your applications to the read replica. Using read replicas, you can elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads.
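
A minimal boto3 sketch of this setup; the DB identifiers are placeholders, with the region set to Oregon per the scenario.

```python
import boto3

rds = boto3.client("rds", region_name="us-west-2")  # Oregon

# Create the replica from the source instance (identifiers are hypothetical).
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="finance-db-replica",
    SourceDBInstanceIdentifier="finance-db",
)

# Once the replica is available, route read traffic to its own endpoint.
replica = rds.describe_db_instances(
    DBInstanceIdentifier="finance-db-replica"
)["DBInstances"][0]
print(replica["Endpoint"]["Address"])
```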

3
Q

Several instances you are creating have a specific data requirement. The requirement states that the data on the root device needs to persist independently from the lifetime of the instance. After considering AWS storage options, which is the simplest way to meet these requirements?

A. Create a cron job to migrate the data to S3.
B. Store the data on the local instance store.
C. Send the data to S3 using S3 lifecycle rules.
D. Store your root device data on Amazon EBS and set the DeleteOnTermination attribute to false using a block device mapping.

A

D. Store your root device data on Amazon EBS and set the DeleteOnTermination attribute to false using a block device mapping.

An Amazon EBS-backed instance can be stopped and later restarted without affecting data stored in the attached volumes. By default, the root volume for an AMI backed by Amazon EBS is deleted when the instance terminates. You can change the default behavior to ensure that the volume persists after the instance terminates. To change the default behavior, set the DeleteOnTermination attribute to false using a block device mapping.
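
A hedged boto3 sketch of setting this attribute at launch; the AMI ID is a placeholder, and the root device name (/dev/xvda here) depends on the AMI.

```python
import boto3

ec2 = boto3.client("ec2")

# Launch an EBS-backed instance whose root volume survives termination.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    BlockDeviceMappings=[
        {
            "DeviceName": "/dev/xvda",  # root device for many Amazon Linux AMIs
            "Ebs": {"DeleteOnTermination": False},
        }
    ],
)
```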

4
Q

A company has a great deal of data in S3 buckets for which they want to create a database. Creating the RDS database, normalizing the data, and migrating to the RDS database will take time and is the long-term plan. But there’s an immediate need to query this data to retrieve information necessary for an audit. Which AWS service will enable querying data in S3 using standard SQL commands?

A. There is no such service, but there are third-party tools.
B. DynamoDB
C. Amazon SQL Connector
D. Amazon Athena

A

D. Amazon Athena

Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you only pay for the queries you run.

Athena is easy to use. Simply point to your data in Amazon S3, define the schema, and start querying using standard SQL. Most results are delivered within seconds. With Athena, there’s no need for complex ETL jobs to prepare your data for analysis. This makes it easy for anyone with SQL skills to quickly analyze large-scale datasets.
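
A minimal boto3 sketch of running such a query; the database, table, and results bucket are hypothetical. Athena runs queries asynchronously and writes results to the S3 output location.

```python
import boto3

athena = boto3.client("athena")

# Kick off a standard SQL query against data already sitting in S3.
query = athena.start_query_execution(
    QueryString="SELECT account_id, amount FROM audit_logs WHERE year = '2023'",
    QueryExecutionContext={"Database": "audit_db"},
    ResultConfiguration={"OutputLocation": "s3://query-results-bucket/athena/"},
)
print(query["QueryExecutionId"])  # poll get_query_execution for completion
```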

5
Q

Your application is hosted on an Auto Scaling Group of EC2 instances. The application is backed by a Multi-AZ MySQL RDS database and an additional read replica. You need to simulate some failures for disaster recovery drills. Which event will not cause RDS to perform a failover to the standby replica?

A. Compute unit failure on primary
B. Storage failure on primary
C. Read replica failure
D. Loss of network connectivity to primary

A

C. Read replica failure

Correct. A read replica is a separate DB instance from the Multi-AZ pair, so a read replica failure does not trigger a Multi-AZ failover.

When you provision a Multi-AZ DB instance, Amazon RDS automatically creates a primary DB instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure and is engineered to be highly reliable. In case of an infrastructure failure, Amazon RDS performs an automatic failover to the standby (or to a read replica in the case of Amazon Aurora), so that you can resume database operations as soon as the failover is complete. Since the endpoint for your DB instance remains the same after a failover, your application can resume database operations without the need for manual administrative intervention. Reference: https://aws.amazon.com/rds/features/multi-az/

Amazon RDS handles failovers automatically so you can resume database operations as quickly as possible without administrative intervention. The primary DB instance switches over automatically to the standby replica if any of the following conditions occur:

An Availability Zone outage
The primary DB instance fails
The DB instance’s server type is changed
The operating system of the DB instance is undergoing software patching
A manual failover of the DB instance was initiated using Reboot with failover
There are several ways to determine if your Multi-AZ DB instance has failed over:

DB event subscriptions can be set up to notify you by email or SMS that a failover has been initiated. For more information about events, see Using Amazon RDS Event Notification.
You can view your DB events by using the Amazon RDS console or API operations.
You can view the current state of your Multi-AZ deployment by using the Amazon RDS console and API operations.
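
For a drill like the one described, a failover can be forced with a reboot with failover. A minimal boto3 sketch with a placeholder instance identifier:

```python
import boto3

rds = boto3.client("rds")

# "Reboot with failover": forces the Multi-AZ primary to fail over to the
# standby, a common way to exercise a DR drill.
rds.reboot_db_instance(
    DBInstanceIdentifier="prod-mysql",  # placeholder identifier
    ForceFailover=True,
)
```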

6
Q

You have several EC2 instances in an Auto Scaling Group fronted by a Network Load Balancer. The instances will need access to S3 and DynamoDB. The Auto Scaling Group was created from a launch template. What needs to be configured in the launch template to give newly launched instances access to S3 and DynamoDB?

A. IAM policy attached to newly launched instances with permissions to S3 and DynamoDB.
B. An IAM Group for EC2 with policies giving permission to S3 and DynamoDB.
C. Access keys to be passed to newly launched EC2 instances.
D. An IAM Role attached to newly launched instances with permissions to S3 and DynamoDB.

A

D. An IAM Role attached to newly launched instances with permissions to S3 and DynamoDB.

Correct. IAM roles are designed so that your applications can securely make API requests from your instances, without requiring you to manage the security credentials that the applications use. Instead of creating and distributing your AWS credentials, you can delegate permission to make API requests using IAM roles.
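
One possible shape for such a launch template in boto3; the AMI ID and instance profile name are placeholders, and the profile must wrap an IAM role whose policies grant the needed S3 and DynamoDB access.

```python
import boto3

ec2 = boto3.client("ec2")

# Every instance launched from this template gets temporary credentials
# via the role inside the named instance profile.
ec2.create_launch_template(
    LaunchTemplateName="web-fleet",
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",  # placeholder AMI
        "InstanceType": "t3.micro",
        "IamInstanceProfile": {"Name": "s3-dynamodb-access-profile"},
    },
)
```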

7
Q

Your company has recently converted to a hybrid cloud environment and will slowly be migrating to a fully AWS cloud environment. The AWS side needs some preparation for disaster recovery. A disaster recovery plan needs to be drawn up, and disaster recovery drills need to be performed for compliance reasons. The company wants to establish Recovery Time and Recovery Point Objectives. The RTO and RPO can be fairly relaxed; the main point is to have a plan in place, with as much cost savings as possible. Which AWS disaster recovery pattern will best meet these requirements?

A. Multi Site
B. Warm Standby
C. Backup and restore
D. Pilot Light

A

C. Backup and restore

Correct. Backup and restore is the least expensive DR pattern, and cost is the overriding factor given the relaxed RTO and RPO.

8
Q

An online media company has created an application which provides analytical data to its clients. The application is hosted on EC2 instances in an Auto Scaling Group. You have been brought on as a consultant and add an Application Load Balancer to front the Auto Scaling Group and distribute the load between the instances. The VPC which houses this architecture is running IPv4 and IPv6. The last thing you need to do to complete the configuration is point the domain name to the Application Load Balancer. Using Route 53, which record type at the zone apex will you use to point the DNS name of the Application Load Balancer?
Choose two.

A. Alias with an AAAA type record set.
B. Alias with a CNAME record set.
C. Alias with an A type record set.
D. Alias with an MX type record set.

A

A. Alias with an AAAA type record set.

C. Alias with an A type record set.

Alias with a type "AAAA" record set and Alias with a type "A" record set are correct. To route domain traffic to an ELB load balancer, use Amazon Route 53 to create an alias record that points to your load balancer. An alias record is a Route 53 extension to DNS. Because the VPC runs both IPv4 and IPv6, you need an alias A record (IPv4) and an alias AAAA record (IPv6) at the zone apex.
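
A boto3 sketch that upserts both alias records; the hosted zone ID, domain, and load balancer values are all placeholders (look up the ALB's CanonicalHostedZoneId via elbv2 describe_load_balancers).

```python
import boto3

route53 = boto3.client("route53")

# Create both an A and an AAAA alias at the zone apex for the ALB.
changes = [
    {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "example.com",
            "Type": record_type,  # "A" for IPv4, "AAAA" for IPv6
            "AliasTarget": {
                "HostedZoneId": "ZALBEXAMPLE123",  # ALB's canonical zone ID
                "DNSName": "my-alb-1234567890.us-west-2.elb.amazonaws.com",
                "EvaluateTargetHealth": True,
            },
        },
    }
    for record_type in ("A", "AAAA")
]
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",  # your hosted zone
    ChangeBatch={"Changes": changes},
)
```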

9
Q

Your company is using a hybrid configuration because there are some legacy applications which are not easily converted and migrated to AWS. With this configuration comes a typical scenario where the legacy apps must maintain the same private IP address and MAC address. You are attempting to convert the application to the cloud and have configured an EC2 instance to house the application. What you are currently testing is removing the ENI from the legacy instance and attaching it to the EC2 instance. You want to attempt a cold attach. What does this mean?

A. Attach ENI before the public IP address is assigned.
B. Attach ENI when the instance is being launched.
C. Attach ENI to an instance when it’s running.
D. Attach ENI when it’s stopped.

A

B. Attach ENI when the instance is being launched.

You can attach a network interface to an instance when it’s running (hot attach), when it’s stopped (warm attach), or when the instance is being launched (cold attach). You can detach secondary network interfaces when the instance is running or stopped; however, you can’t detach the primary network interface.

You can move a network interface from one instance to another if the instances are in the same Availability Zone and VPC but in different subnets. When launching an instance using the CLI, API, or an SDK, you can specify the primary network interface and additional network interfaces.

Launching an Amazon Linux or Windows Server instance with multiple network interfaces automatically configures interfaces, private IPv4 addresses, and route tables on the operating system of the instance. A warm or hot attach of an additional network interface may require you to manually bring up the second interface, configure the private IPv4 address, and modify the route table accordingly. Instances running Amazon Linux or Windows Server automatically recognize the warm or hot attach and configure themselves.

Attaching another network interface to an instance (for example, a NIC teaming configuration) cannot be used as a method to increase or double the network bandwidth to or from the dual-homed instance. If you attach two or more network interfaces from the same subnet to an instance, you may encounter networking issues such as asymmetric routing. If possible, use a secondary private IPv4 address on the primary network interface instead. For more information, see Assigning a Secondary Private IPv4 Address.
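
A minimal boto3 sketch of a cold attach, supplying a placeholder ENI ID as the primary interface at launch:

```python
import boto3

ec2 = boto3.client("ec2")

# Cold attach: the existing ENI (already detached from the legacy instance)
# is supplied at launch time. IDs are placeholders.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    NetworkInterfaces=[
        {
            "DeviceIndex": 0,  # primary interface
            "NetworkInterfaceId": "eni-0123456789abcdef0",
        }
    ],
)
```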

10
Q

A new startup company decides to use AWS to host their web application. They configure a VPC as well as two subnets within the VPC. They also attach an internet gateway to the VPC. In the first subnet, they create an EC2 instance to host a web application. There is a network ACL and a security group, which both have the proper ingress and egress to and from the internet. There is a route in the route table to the internet gateway. The EC2 instances added to the subnet need to have a globally unique IP address to ensure internet access. Which is not a globally unique IP address?

A. Public IP address
B. Private IP address
C. Elastic IP address
D. IPv6 address

A

B. Private IP address

Public IPv4 addresses, Elastic IP addresses, and IPv6 addresses are globally unique. Private IPv4 addresses are not unique: they are drawn from the RFC 1918 ranges 10.0.0.0 to 10.255.255.255, 172.16.0.0 to 172.31.255.255, and 192.168.0.0 to 192.168.255.255. Reference: RFC 1918.

11
Q

A small startup company has begun using AWS for all of its IT infrastructure. The company has two AWS Solutions Architects, and they are very proficient with AWS deployments. They want to choose a deployment service that best meets the given requirements. Those requirements include version control of their infrastructure documentation and granular control of all of the services to be deployed. Which AWS service would best meet these requirements?

A. CloudFormation
B. OpsWorks
C. Elastic Beanstalk
D. Terraform

A

A. CloudFormation

Correct. CloudFormation is infrastructure as code, and the CloudFormation feature of templates allows this infrastructure as code to be version controlled. While it can be argued that both OpsWorks and Elastic Beanstalk provide some granular control of services, this is not the main feature of either.

(Terraform - Incorrect. Terraform is not an AWS product.)

12
Q

You are working as a Solutions Architect in a large healthcare organization. You have many Auto Scaling Groups that utilize launch configurations. Many of these launch configurations are similar yet have subtle differences. You’d like to use multiple versions of these launch configurations. An ideal approach would be to have a default launch configuration and then have additional versions that add additional features. Which option best meets these requirements?

A. Use launch templates instead.
B. Simply create the needed versions. Launch configurations already have versioning.
C. Create the launch configurations in CloudFormation and version the templates accordingly.
D. Store the launch configurations in S3 and turn on versioning.

A

A. Use launch templates instead.

A launch template is similar to a launch configuration, in that it specifies instance configuration information. Included are the ID of the Amazon Machine Image (AMI), the instance type, a key pair, security groups, and the other parameters that you use to launch EC2 instances. However, defining a launch template instead of a launch configuration allows you to have multiple versions of a template. With versioning, you can create a subset of the full set of parameters and then reuse it to create other templates or template versions. For example, you can create a default template that defines common configuration parameters and allow the other parameters to be specified as part of another version of the same template.
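
A boto3 sketch of this versioning pattern; the template name, AMI ID, and instance types are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Version 1: the default, defining common configuration parameters.
ec2.create_launch_template(
    LaunchTemplateName="base-fleet",
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",
        "InstanceType": "t3.micro",
    },
)

# Version 2: inherits from version 1 and overrides only what differs.
ec2.create_launch_template_version(
    LaunchTemplateName="base-fleet",
    SourceVersion="1",
    LaunchTemplateData={"InstanceType": "c5.large"},
)
```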

13
Q

You have just started working at a company that is migrating from a physical data center into AWS. Currently, you have 25 TB of data that needs to be moved to an S3 bucket. Your company has just finished setting up a 1 Gbps Direct Connect drop, but you do not have a VPN currently up and running. This data needs to be encrypted during transit and at rest and must be uploaded to the S3 bucket within 21 days. How can you meet these requirements?

A. Use a Snowball device to transmit the data.
B. Upload the data using Direct Connect.
C. Upload the data to S3 using your public internet connection.
D. Order a Snowcone device to transmit the data.

A

A. Use a Snowball device to transmit the data.

This would be the perfect choice to transmit your data. Snowball automatically encrypts your data (with keys managed through AWS KMS), so the encryption requirement is met, and the device can comfortably move 25 TB within the 21-day window. Direct Connect alone does not encrypt data in transit, and there is no VPN in place to add encryption; a Snowcone's capacity (8 TB on the original device) is too small for this job.

14
Q

Your team has provisioned Auto Scaling Groups in a single Region. At max capacity, these Auto Scaling Groups would total 40 EC2 instances between them. However, you notice that the Auto Scaling Groups will only scale out to a portion of that number of instances at any one time. What could be the problem?

A. You can only have 20 instances per region. This is a hard limit.
B. The associated load balancer can only serve 20 instances at one time.
C. There is a vCPU-based on-demand instance limit per region.
D. You can only have 20 instances per Availability Zone.

A

C. There is a vCPU-based on-demand instance limit per region.

Correct. Your AWS account has default quotas, formerly referred to as limits, for each AWS service. Unless otherwise noted, each quota is Region-specific. You can request increases for some quotas, and other quotas cannot be increased. Remember that each EC2 instance can have a variance of the number of vCPUs, depending on its type and your configuration, so it’s always wise to calculate your vCPU needs to make sure you are not going to hit quotas easily. Service Quotas is an AWS service that helps you manage your quotas for over 100 AWS services from one location. Along with looking up the quota values, you can also request a quota increase from the Service Quotas console.
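
A boto3 sketch of checking and raising the vCPU quota. L-1216C47A is assumed here to be the quota code for Running On-Demand Standard instances; verify it with list_service_quotas(ServiceCode="ec2") before relying on it.

```python
import boto3

quotas = boto3.client("service-quotas")

# Look up the current vCPU limit for On-Demand Standard instances.
quota = quotas.get_service_quota(
    ServiceCode="ec2",
    QuotaCode="L-1216C47A",  # assumed quota code; confirm via list_service_quotas
)
print(quota["Quota"]["Value"])  # current vCPU limit

# Request an increase if the fleet needs more vCPUs at max capacity.
quotas.request_service_quota_increase(
    ServiceCode="ec2",
    QuotaCode="L-1216C47A",
    DesiredValue=256.0,
)
```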

15
Q

An Application Load Balancer is fronting an Auto Scaling Group of EC2 instances, and the instances are backed by an RDS database. The Auto Scaling Group has been configured to use the Default Termination Policy. You are testing the Auto Scaling Group and have triggered a scale-in. Which instance will be terminated first?

A. The Auto Scaling Group will randomly select an instance to terminate.
B. The longest running instance.
C. The instance for which the load balancer stops sending traffic.
D. The instance launched from the oldest launch configuration.

A

D. The instance launched from the oldest launch configuration.

What do we know? The ASG is using the Default Termination Policy. The default termination policy is designed to help ensure that your instances span Availability Zones evenly for high availability. The default policy is kept generic and flexible to cover a range of scenarios. The default termination policy behavior is as follows:

First, determine which Availability Zones have the most instances, and at least one instance that is not protected from scale in.

Next, determine which instances to terminate so as to align the remaining instances to the allocation strategy for the On-Demand or Spot Instance that is terminating. This only applies to an Auto Scaling Group that specifies allocation strategies. For example, after your instances launch, you change the priority order of your preferred instance types. When a scale-in event occurs, Amazon EC2 Auto Scaling tries to gradually shift the On-Demand Instances away from instance types that are lower priority.

Then, determine whether any of the instances use the oldest launch template or configuration. For Auto Scaling Groups that use a launch template, determine whether any of the instances use the oldest launch template, unless there are instances that use a launch configuration; Amazon EC2 Auto Scaling terminates instances that use a launch configuration before instances that use a launch template. For Auto Scaling Groups that use a launch configuration, determine whether any of the instances use the oldest launch configuration.

After applying all of the above criteria, if there are multiple unprotected instances to terminate, determine which instances are closest to the next billing hour. If there are multiple unprotected instances closest to the next billing hour, terminate one of these instances at random.

16
Q

You have just been hired by a large organization which uses many different AWS services in their environment. Some of the services which handle data include: RDS, Redshift, ElastiCache, DynamoDB, S3, and Glacier. You have been instructed to configure a web application using stateless web servers. Which services can you use to handle session state data?

A. Amazon RDS
B. Amazon DynamoDB
C. Amazon ElastiCache
D. Amazon S3 Glacier
E. Amazon Redshift

A

A. Amazon RDS

Correct. Amazon RDS can store session state data. It is slower than Amazon DynamoDB, but may be fast enough for some situations.

B. Amazon DynamoDB

C. Amazon ElastiCache

Correct. ElastiCache and DynamoDB can both be used to store session data.

17
Q

A software gaming company has produced an online racing game that uses CloudFront for fast delivery to worldwide users. The game also uses DynamoDB for storing in-game and historical user data. The DynamoDB table has a preconfigured read and write capacity. Users have been reporting slowdown issues, and an analysis has revealed the DynamoDB table has begun throttling during peak traffic times. What step can you take to improve game performance?

A. Add an SQS queue to queue requests that could be lost.
B. Cache common queries in CloudFront.
C. Add a load balancer in front of the web servers.
D. Adjust your auto scaling thresholds to scale more aggressively.

A

D. Adjust your auto scaling thresholds to scale more aggressively.

Amazon DynamoDB auto scaling uses the AWS Application Auto Scaling service to dynamically adjust provisioned throughput capacity on your behalf in response to actual traffic patterns. This enables a table or a global secondary index to increase its provisioned read and write capacity to handle sudden increases in traffic without throttling. When the workload decreases, Application Auto Scaling decreases the throughput so you don’t pay for unused provisioned capacity. Note that if you use the AWS Management Console to create a table or a global secondary index, DynamoDB auto scaling is enabled by default. You can modify your auto scaling settings at any time.
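
A boto3 sketch of enabling target-tracking auto scaling on a table's read capacity; the table name and capacity bounds are placeholders (write capacity would be configured the same way).

```python
import boto3

aas = boto3.client("application-autoscaling")

# Register the table's read capacity as a scalable target.
aas.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/race-results",  # placeholder table name
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    MinCapacity=50,
    MaxCapacity=2000,
)

# Track ~70% read capacity utilization, scaling up before throttling starts.
aas.put_scaling_policy(
    PolicyName="race-results-read-scaling",
    ServiceNamespace="dynamodb",
    ResourceId="table/race-results",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
        },
    },
)
```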

18
Q

You have configured an Auto Scaling Group of EC2 instances. You have begun testing the scaling of the Auto Scaling Group, using a stress tool to drive up the CPU utilization metric that triggers scale-out actions, then removing the stress to force a scale-in. But you notice that these actions only take place at five-minute intervals. What is happening?

A. The stress tool is configured to run for five minutes.
B. A load balancer is managing the load and limiting the effectiveness of stressing the servers.
C. Auto Scaling Groups can only scale in intervals of five minutes or greater.
D. The Auto Scaling Group is following the default cooldown procedure.

A

D. The Auto Scaling Group is following the default cooldown procedure.

The cooldown period helps you prevent your Auto Scaling Group from launching or terminating additional instances before the effects of previous activities are visible. You can configure the length of time based on your instance startup time or other application needs.

When you use simple scaling, after the Auto Scaling Group scales using a simple scaling policy, it waits for a cooldown period to complete before any further scaling activities due to simple scaling policies can start. An adequate cooldown period helps to prevent the initiation of an additional scaling activity based on stale metrics. By default, all simple scaling policies use the default cooldown period associated with your Auto Scaling Group, but you can configure a different cooldown period for certain policies. Note that Amazon EC2 Auto Scaling honors cooldown periods when using simple scaling policies, but not when using other scaling policies or scheduled scaling.

A default cooldown period automatically applies to any scaling activities for simple scaling policies, and you can optionally request to have it apply to your manual scaling activities. When you use the AWS Management Console to update an Auto Scaling Group, or when you use the AWS CLI or an AWS SDK to create or update an Auto Scaling Group, you can set the optional default cooldown parameter. If a value for the default cooldown period is not provided, its default value is 300 seconds.
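
If the default 300-second cooldown is longer than your instances actually need, it can be shortened. A boto3 sketch with placeholder values:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Shorten the default cooldown so simple scaling policies can react sooner.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="stress-test-asg",  # placeholder group name
    DefaultCooldown=120,
)
```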

19
Q

After an IT Steering Committee meeting you have been put in charge of configuring a hybrid environment for the company’s compute resources. You weigh the pros and cons of various technologies based on the requirements you are given. Your primary requirement is the necessity for a private, dedicated connection, which bypasses the Internet and can provide throughput of 10 Gbps. Which option will you select?

A. AWS Direct Gateway
B. AWS VPN
C. AWS Direct Connect
D. VPC Peering

A

C. AWS Direct Connect

Correct. AWS Direct Connect can reduce network costs, increase bandwidth throughput, and provide a more consistent network experience than internet-based connections. It uses industry-standard 802.1Q VLANs to connect to Amazon VPC using private IP addresses. You can choose from an ecosystem of WAN service providers for integrating your AWS Direct Connect endpoint in an AWS Direct Connect location with your remote networks.

AWS Direct Connect lets you establish 1 Gbps or 10 Gbps dedicated network connections (or multiple connections) between AWS networks and one of the AWS Direct Connect locations. You can also work with your provider to create a sub-1G connection, or use a link aggregation group (LAG) to aggregate multiple 1 Gbps or 10 Gbps connections at a single AWS Direct Connect endpoint, allowing you to treat them as a single, managed connection.

A Direct Connect gateway, by contrast, is a globally available resource used to enable connections to multiple Amazon VPCs across different Regions or AWS accounts.

20
Q

A large financial institution is gradually moving their infrastructure and applications to AWS. The company has data needs that will utilize all of RDS, DynamoDB, Redshift, and ElastiCache. Which description best describes Amazon Redshift?

A. Near real-time complex querying on massive data sets.
B. Cloud-based relational database.
C. Can be used to significantly improve latency and throughput for many read-heavy application workloads.
D. Key-value and document database that delivers single-digit millisecond performance at any scale.

A

A. Near real-time complex querying on massive data sets.

Amazon Redshift is a fast, fully managed cloud data warehouse that makes it simple and cost-effective to analyze all your data using standard SQL and your existing Business Intelligence (BI) tools. It allows you to run complex analytic queries against terabytes to petabytes of structured data, using sophisticated query optimization, columnar storage on high-performance storage, and massively parallel query execution. Most results come back in seconds.

With Redshift, you can start small for just $0.25 per hour with no commitments and scale out to petabytes of data for $1,000 per terabyte per year, less than a tenth the cost of traditional on-premises solutions.

Amazon Redshift also includes Amazon Redshift Spectrum, allowing you to run SQL queries directly against exabytes of unstructured data in Amazon S3 data lakes. No loading or transformation is required, and you can use open data formats, including Avro, CSV, Grok, Amazon Ion, JSON, ORC, Parquet, RCFile, RegexSerDe, Sequence, Text, and TSV. Redshift Spectrum automatically scales query compute capacity based on the data retrieved, so queries against Amazon S3 run fast, regardless of data set size.

21
Q

You have been put in charge of S3 buckets for your company. The buckets are separated based on the type of data they are holding and the level of security required for that data. You have several buckets that have data you want to safeguard from accidental deletion. Which configuration will meet this requirement?

A. Enable versioning on the bucket and multi-factor authentication delete as well.
B. Archive sensitive data to Amazon Glacier.
C. Signed URLs to all users to access the bucket.
D. Configure cross-account access with an IAM Role prohibiting object deletion in the bucket.

A

A. Enable versioning on the bucket and multi-factor authentication delete as well.

Correct. Versioning is a means of keeping multiple variants of an object in the same bucket. You can use versioning to preserve, retrieve, and restore every version of every object stored in your Amazon S3 bucket. With versioning, you can easily recover from both unintended user actions and application failures. When you enable versioning for a bucket, if Amazon S3 receives multiple write requests for the same object simultaneously, it stores all of the objects. Key point: versioning is turned off by default. If a bucket’s versioning configuration is MFA Delete–enabled, the bucket owner must include the x-amz-mfa request header in requests to permanently delete an object version or change the versioning state of the bucket.
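
A boto3 sketch of enabling both protections; the bucket name and MFA device/token values are placeholders, and MFA Delete can only be enabled using the bucket owner's (root) credentials.

```python
import boto3

s3 = boto3.client("s3")

# Enable versioning and MFA Delete in one call. The MFA argument is the
# device serial (ARN) and a current token, separated by a space.
s3.put_bucket_versioning(
    Bucket="sensitive-data-bucket",
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
    MFA="arn:aws:iam::123456789012:mfa/root-device 123456",
)
```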

22
Q

The company you work for has reshuffled teams a bit and you’ve been moved from the AWS IAM team to the AWS Network team. One of your first assignments is to review the subnets in the main VPCs. What are two key concepts regarding subnets?

A. A subnet spans all the Availability Zones in a Region.
B. Every subnet you create is associated with the main route table for the VPC.
C. Each subnet is associated with one security group.
D. Private subnets can only hold databases.
E. Each subnet maps to a single Availability Zone.

A

B. Every subnet you create is associated with the main route table for the VPC.

Each subnet must be associated with a route table, which specifies the allowed routes for outbound traffic leaving the subnet. Every subnet that you create is automatically associated with the main route table for the VPC. You can change the association, and you can change the contents of the main route table.

E. Each subnet maps to a single Availability Zone.

When you create a subnet, you specify the CIDR block for the subnet, which is a subset of the VPC CIDR block. Each subnet must reside entirely within one Availability Zone and cannot span zones.
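
A boto3 sketch showing that the AZ is fixed per subnet at creation time; the VPC ID, CIDR, and zone are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Each subnet lives in exactly one AZ, chosen when the subnet is created.
subnet = ec2.create_subnet(
    VpcId="vpc-0123456789abcdef0",
    CidrBlock="10.0.1.0/24",        # must be a subset of the VPC CIDR
    AvailabilityZone="us-east-1a",  # the subnet cannot span zones
)
print(subnet["Subnet"]["SubnetId"])
```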

23
Q

A company has created a mobile application that is hugely popular. The initial plan was to give each user login credentials to the application, but due to the volume of users, this idea has become impractical. What service can you use to allow outside users to log in through a third party such as Facebook, Amazon, Google, or Apple?

A. AWS IAM
B. Google Authenticator
C. AWS cross account access
D. Amazon Cognito

A

D. Amazon Cognito

Correct - Amazon Cognito provides authentication, authorization, and user management for your web and mobile apps. Your users can sign in directly with a user name and password or through a third party such as Facebook, Amazon, Google, or Apple. The two main components of Amazon Cognito are user pools and identity pools. User pools are user directories that provide sign-up and sign-in options for your app users. Identity pools enable you to grant your users access to other AWS services. You can use identity pools and user pools separately or together.

24
Q

A consultant is hired by a small company to configure an AWS environment. The consultant begins working with the VPC and launching EC2 instances within the VPC. The initial instances will be placed in a public subnet. The consultant begins to create security groups. The consultant has launched several instances, created security groups, and has associated security groups with instances. The consultant wants to change the security groups for an instance. Which statement is true?

A. You can change the security groups for an instance when the instance is in the running or stopped state.
B. You can’t change security groups. Create a new instance and attach the desired security groups.
C. You can change the security groups for an instance when the instance is in the pending or stopped state.
D. You can’t change the security groups for an instance when the instance is in the running or stopped state.

A

A. You can change the security groups for an instance when the instance is in the running or stopped state.

After you launch an instance into a VPC, you can change the security groups that are associated with it while the instance is in the running or stopped state.
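
A boto3 sketch; the instance and security group IDs are placeholders. Note that the call replaces the instance's entire security group list with the groups given.

```python
import boto3

ec2 = boto3.client("ec2")

# Swap the security groups on a running or stopped instance.
ec2.modify_instance_attribute(
    InstanceId="i-0123456789abcdef0",
    Groups=["sg-0123456789abcdef0", "sg-0fedcba9876543210"],
)
```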

25
Q

You have been evaluating the NACLs in your company. Currently, you are looking at the default network ACL. Which statement is true about NACLs?

A. The default configuration of the default NACL is Deny, and the default configuration of a custom NACL is Allow.
B. The default configuration of the default NACL is Allow, and the default configuration of a custom NACL is Deny.
C. The default configuration of the default NACL is Deny, and the default configuration of a custom NACL is Deny.
D. The default configuration of the default NACL is Allow, and the default configuration of a custom NACL is Allow.

A

B. The default configuration of the default NACL is Allow, and the default configuration of a custom NACL is Deny.

Your VPC automatically comes with a modifiable default network ACL. By default, it allows all inbound and outbound IPv4 traffic and, if applicable, IPv6 traffic. You can create a custom network ACL and associate it with a subnet. By default, each custom network ACL denies all inbound and outbound traffic until you add rules.

26
Q

A small software team is creating an application which will give subscribers real-time weather updates. The application will run on EC2 and will make several requests to AWS services such as S3 and DynamoDB. What is the best way to grant permissions to these other AWS services?

A. Create an IAM user, grant the user permissions, and pass the user credentials to the application.
B. Create an IAM policy that you attach to the EC2 instance to give temporary security credentials to applications running on the instance.
C. Embed the appropriate credentials to access AWS services in the application.
D. Create an IAM role that you attach to the EC2 instance to give temporary security credentials to applications running on the instance.

A

D. Create an IAM role that you attach to the EC2 instance to give temporary security credentials to applications running on the instance.

Create an IAM role in the following situations: You’re creating an application that runs on an Amazon Elastic Compute Cloud (Amazon EC2) instance and that application makes requests to AWS. Don’t create an IAM user and pass the user’s credentials to the application or embed the credentials in the application. Instead, create an IAM role that you attach to the EC2 instance to give temporary security credentials to applications running on the instance. When an application uses these credentials in AWS, it can perform all of the operations that are allowed by the policies attached to the role. For details, see Using an IAM Role to Grant Permissions to Applications Running on Amazon EC2 Instances.
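
A boto3 sketch of attaching a role (via its instance profile) to an already running instance; the profile name and instance ID are placeholders. Attaching the role at launch works the same way.

```python
import boto3

ec2 = boto3.client("ec2")

# Applications on the instance pick up temporary credentials from the role
# wrapped by this instance profile; no access keys are distributed.
ec2.associate_iam_instance_profile(
    IamInstanceProfile={"Name": "weather-app-profile"},
    InstanceId="i-0123456789abcdef0",
)
```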

27
Q

Your company has a small web application hosted on an EC2 instance. The application has just been deployed, but no one is able to connect to it from a browser. You recently SSH’d into this EC2 instance to perform a small update without any problems, yet you also cannot browse to the application from Google Chrome. You have checked, and there is an internet gateway attached to the VPC and a route in the route table to the internet gateway. Which situation most likely exists?

A. The instance security group has no ingress on port 22 or port 80.
B. The instance security group has ingress on port 443 but not port 22.
C. The instance security group has ingress on port 80 but not port 22.
D. The instance security group has ingress on port 22 but not port 80.

A

D. The instance security group has ingress on port 22 but not port 80.

SSH works because port 22 has an inbound rule, but there is no inbound rule for port 80, so browser traffic is blocked. The following are the basic characteristics of security groups for your VPC:

There are quotas on the number of security groups that you can create per VPC, the number of rules that you can add to each security group, and the number of security groups that you can associate with a network interface. For more information, see Amazon VPC quotas.

You can specify allow rules, but not deny rules. You can specify separate rules for inbound and outbound traffic.

When you create a security group, it has no inbound rules. Therefore, no inbound traffic originating from another host to your instance is allowed until you add inbound rules to the security group. By default, a security group includes an outbound rule that allows all outbound traffic. You can remove the rule and add outbound rules that allow specific outbound traffic only. If your security group has no outbound rules, no outbound traffic originating from your instance is allowed.

Security groups are stateful. If you send a request from your instance, the response traffic for that request is allowed to flow in regardless of inbound security group rules. Responses to allowed inbound traffic are allowed to flow out, regardless of outbound rules.
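
A boto3 sketch of adding the missing HTTP rule; the group ID is a placeholder (port 443 would need a similar rule if the app also serves HTTPS).

```python
import boto3

ec2 = boto3.client("ec2")

# Allow inbound HTTP from anywhere so browsers can reach the application.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 80,
            "ToPort": 80,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        }
    ],
)
```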

28
Q

You are working in a large healthcare facility which uses EBS volumes on most of the EC2 instances. The CFO has approached you about some cost savings and it has been decided that some of the EC2 instances and EBS volumes would be deleted. What step can be taken to preserve the data on the EBS volumes and keep the data available on short notice?

A. Move the data to Amazon S3.
B. Take point-in-time snapshots of your Amazon EBS volumes.
C. Archive the data to Glacier.
D. Store the data in CloudFormation user data.

A

B. Take point-in-time snapshots of your Amazon EBS volumes.

You can back up the data on your Amazon EBS volumes to Amazon S3 by taking point-in-time snapshots. Snapshots are incremental backups, which means that only the blocks on the device that have changed after your most recent snapshot are saved. This minimizes the time required to create the snapshot and saves on storage costs by not duplicating data. When you delete a snapshot, only the data unique to that snapshot is removed. Each snapshot contains all of the information that is needed to restore your data (from the moment when the snapshot was taken) to a new EBS volume.
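
A minimal boto3 sketch with a placeholder volume ID:

```python
import boto3

ec2 = boto3.client("ec2")

# Snapshot the volume before deleting it; the data can be restored later
# to a new volume with create_volume(SnapshotId=...).
snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="Pre-decommission backup",
)
print(snapshot["SnapshotId"])
```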

29
Q

A company is running a teaching application which is consumed by users all over the world. The application is translated into 5 different languages. All of these language files need to be stored somewhere that is highly durable and can be accessed frequently. As content is added to the site, the storage demands will grow by a factor of five, so the storage must be highly scalable as well. Which storage option will be highly durable, cost-effective, and highly scalable?

A. Glacier
B. RDS
C. Amazon S3
D. EBS Instance Store Volumes

A

C. Amazon S3

Amazon S3 is object storage built to store and retrieve any amount of data from anywhere on the Internet. It’s a simple storage service that offers an extremely durable, highly available, and infinitely scalable data storage infrastructure at very low costs.

The total volume of data and number of objects you can store are unlimited. Individual Amazon S3 objects can range in size from a minimum of 0 bytes to a maximum of 5 terabytes. The largest object that can be uploaded in a single PUT is 5 gigabytes. For objects larger than 100 megabytes, customers should consider using the Multipart Upload capability.

Amazon S3 offers a range of storage classes designed for different use cases. These include S3 Standard for general-purpose storage of frequently accessed data, S3 Intelligent-Tiering for data with unknown or changing access patterns, S3 Standard-Infrequent Access (S3 Standard-IA), S3 One Zone-Infrequent Access (S3 One Zone-IA) for long-lived, but less frequently accessed data, Amazon S3 Glacier (S3 Glacier), and Amazon S3 Glacier Deep Archive (S3 Glacier Deep Archive) for long-term archive and digital preservation.

30
Q

Your company is storing stack traces for application errors in an S3 bucket. The engineers using these stack traces review them when addressing application issues. It has been decided that the files only need to be kept for four weeks, after which they can be purged. How can you meet this requirement in S3?

A. Write a cron job to purge the files after one month.
B. Add an S3 Lifecycle rule to archive these files to Glacier after one month.
C. Configure the S3 Lifecycle rules to purge the files after a month.
D. Create a bucket policy to purge the rules after one month.

A

C. Configure the S3 Lifecycle rules to purge the files after a month.

Correct: To manage your objects so that they are stored cost-effectively throughout their lifecycle, configure their Amazon S3 Lifecycle. An S3 Lifecycle configuration is a set of rules that define actions that Amazon S3 applies to a group of objects. There are two types of actions:

Transition actions define when objects transition to another storage class. For example, you might choose to transition objects to the S3 Standard-IA storage class 30 days after you created them, or archive objects to the S3 Glacier storage class one year after creating them.

Expiration actions define when objects expire. Amazon S3 deletes expired objects on your behalf.

The lifecycle expiration costs depend on when you choose to expire objects.
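
A boto3 sketch of such a rule, expiring objects after 28 days (four weeks); the bucket name and prefix are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Amazon S3 deletes expired objects on your behalf once the rule is in place.
s3.put_bucket_lifecycle_configuration(
    Bucket="stack-trace-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "purge-old-stack-traces",
                "Filter": {"Prefix": "traces/"},
                "Status": "Enabled",
                "Expiration": {"Days": 28},
            }
        ]
    },
)
```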

31
Q

A company needs to deploy EC2 instances to handle overnight batch processing. This includes media transcoding and some voice to text transcription. This is not high priority work, and it is OK if these batch runs get interrupted. What is the best EC2 instance purchasing option for this work?

A. Reserved
B. Dedicated Hosts
C. On-Demand
D. Spot

A

D. Spot

Correct. Spot Instances let you use spare EC2 capacity at a steep discount compared to On-Demand prices. Because the batch runs are low priority and can tolerate interruption, Spot is the most cost-effective purchasing option here.
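
A boto3 sketch of requesting Spot capacity directly at launch; the AMI ID and sizing are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Run the nightly batch fleet on Spot capacity; instances may be interrupted,
# which this workload tolerates.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="c5.xlarge",
    MinCount=1,
    MaxCount=10,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {"SpotInstanceType": "one-time"},
    },
)
```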

32
Q

Your company needs to shift an application to the cloud. You are looking for a solution to collect, process, gain immediate insight from, and then transfer the application data to AWS. Part of this effort also includes moving a large data warehouse into AWS. The warehouse is 50 TB, and it would take over a month to migrate the data using the currently available bandwidth. What is the best option to perform this one-time migration, considering both cost and performance?

A. AWS Direct Connect
B. AWS Snowball Edge
C. AWS VPN
D. AWS Snowmobile

A

B. AWS Snowball Edge

The AWS Snowball Edge is a type of Snowball device with on-board storage and compute power for select AWS capabilities. Snowball Edge can undertake local processing and edge-computing workloads in addition to transferring data between your local environment and the AWS Cloud.

Each Snowball Edge device can transport data at speeds faster than the internet. This transport is done by shipping the data in the appliances through a regional carrier. The appliances are rugged shipping containers, complete with E Ink shipping labels. The AWS Snowball Edge device differs from the standard Snowball because it can bring the power of the AWS Cloud to your on-premises location, with local storage and compute functionality.

Snowball Edge devices have three options for device configurations: storage optimized, compute optimized, and with GPU. When this guide refers to Snowball Edge devices, it’s referring to all options of the device. Whenever specific information applies to only one or more optional configurations of devices, like how the Snowball Edge with GPU has an on-board GPU, it will be called out. For more information, see Snowball Edge Device Options.