Neal Davis - Practice Test 6 - Correct Flashcards

1
Q

Question 2:
A Solutions Architect needs to upload a large (2GB) file to an S3 bucket. What is the recommended way to upload a single large file to an S3 bucket?

A. Use AWS Import/Export

B. Use a single PUT request to upload the large file

C. Use Amazon Snowball

D. Use Multipart Upload

A

Explanation
In general, when your object size reaches 100 MB, you should consider using multipart uploads instead of uploading the object in a single operation.

CORRECT: “Use Multipart Upload” is the correct answer.

INCORRECT: “Use AWS Import/Export” is incorrect. With AWS Import/Export you ship storage devices containing your data to AWS, and AWS imports the data into S3. It is not used for uploading single files.

INCORRECT: “Use a single PUT request to upload the large file” is incorrect. The largest object that can be uploaded in a single PUT is 5 gigabytes.

INCORRECT: “Use Amazon Snowball” is incorrect. Snowball is used for migrating large quantities (TB/PB) of data into AWS, it is overkill for this requirement.
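The 100 MB threshold and S3's multipart limits can be sketched as a part-planning calculation. This is a minimal illustration of the arithmetic only; in practice boto3's `TransferConfig` splits the object automatically when you call `upload_file`.

```python
MIB = 1024 * 1024

def plan_multipart(object_size, part_size=100 * MIB):
    """Split an object into S3 multipart upload part sizes.
    S3 limits: 1 to 10,000 parts; each part >= 5 MiB except the last."""
    if part_size < 5 * MIB:
        raise ValueError("S3 requires parts of at least 5 MiB (except the last)")
    parts = [part_size] * (object_size // part_size)
    if object_size % part_size:
        parts.append(object_size % part_size)  # smaller final part is allowed
    if len(parts) > 10_000:
        raise ValueError("S3 allows at most 10,000 parts; increase part_size")
    return parts

# A 2 GiB file with 100 MiB parts -> 20 full parts plus a 48 MiB final part
parts = plan_multipart(2 * 1024 * MIB)
```

With boto3 you would simply pass `Config=boto3.s3.transfer.TransferConfig(multipart_threshold=100 * MIB)` to `upload_file` and let the SDK upload the parts in parallel.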

2
Q

Question 3:
A Solutions Architect enabled Access Logs on an Application Load Balancer (ALB) and needs to process the log files using a hosted Hadoop service. What configuration changes and services can be leveraged to deliver this requirement?

A. Configure Access Logs to be delivered to DynamoDB and use EMR for processing the log files

B. Configure Access Logs to be delivered to S3 and use EMR for processing the log files

C. Configure Access Logs to be delivered to S3 and use Kinesis for processing the log files

D. Configure Access Logs to be delivered to EC2 and install Hadoop for processing the log files

A

Explanation
Access Logs can be enabled on ALB and configured to store data in an S3 bucket. Amazon EMR is a web service that enables businesses, researchers, data analysts, and developers to easily and cost-effectively process vast amounts of data. EMR utilizes a hosted Hadoop framework running on Amazon EC2 and Amazon S3.

CORRECT: “Configure Access Logs to be delivered to S3 and use EMR for processing the log files” is the correct answer.

INCORRECT: “Configure Access Logs to be delivered to EC2 and install Hadoop for processing the log files” is incorrect. EC2 does not provide a hosted Hadoop service.

INCORRECT: “Configure Access Logs to be delivered to DynamoDB and use EMR for processing the log files” is incorrect. You cannot configure access logs to be delivered to DynamoDB.

INCORRECT: “Configure Access Logs to be delivered to S3 and use Kinesis for processing the log files” is incorrect. Kinesis does not provide a hosted Hadoop service.
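Enabling ALB access logs is an attribute change on the load balancer. The sketch below builds the attribute list for `elbv2.modify_load_balancer_attributes`; the bucket and prefix names are hypothetical, and the bucket also needs a policy allowing log delivery.

```python
def access_log_attributes(bucket, prefix=""):
    """Attribute list enabling ALB access logs delivered to an S3 bucket."""
    attrs = [
        {"Key": "access_logs.s3.enabled", "Value": "true"},
        {"Key": "access_logs.s3.bucket", "Value": bucket},
    ]
    if prefix:
        attrs.append({"Key": "access_logs.s3.prefix", "Value": prefix})
    return attrs

attrs = access_log_attributes("my-alb-logs", "prod")
# boto3.client("elbv2").modify_load_balancer_attributes(
#     LoadBalancerArn=alb_arn, Attributes=attrs)
```

EMR can then read the delivered log objects directly from the S3 location.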

3
Q

Question 5:
An Amazon EC2 instance has been launched into an Amazon VPC. A Solutions Architect needs to ensure that instances have both private and public DNS hostnames. Assuming settings were not changed during creation of the VPC, how will DNS hostnames be assigned by default? (choose 2)

A. In a non-default VPC instances will be assigned a public and private DNS hostname

B. In a default VPC instances will be assigned a private but not public DNS hostname

C. In a non-default VPC instances will be assigned a private but not public DNS hostname

D. In all VPC instances no DNS hostnames will be assigned

E. In a default VPC instances will be assigned a public and private DNS hostname

A

Explanation
When you launch an instance into a default VPC, we provide the instance with public and private DNS hostnames that correspond to the public IPv4 and private IPv4 addresses for the instance.
When you launch an instance into a nondefault VPC, we provide the instance with a private DNS hostname and we might provide a public DNS hostname, depending on the DNS attributes you specify for the VPC and if your instance has a public IPv4 address.
All other statements are incorrect with default settings.

CORRECT: “In a default VPC instances will be assigned a public and private DNS hostname” is the correct answer.

CORRECT: “In a non-default VPC instances will be assigned a private but not a public DNS hostname” is the correct answer.

INCORRECT: “In all VPC instances no DNS hostnames will be assigned” is incorrect as explained above.

INCORRECT: “In a non-default VPC instances will be assigned a public and private DNS hostname” is incorrect as explained above.

INCORRECT: “In a default VPC instances will be assigned a private but not a public DNS hostname” is incorrect as explained above.

4
Q

Question 6:
The application development team in a company has developed a Java application and packaged it in a .war file. They would like to run the application on AWS resources and are looking for a service that can handle the provisioning and management of the underlying resources it will run on.
Which AWS service should a Solutions Architect recommend the Developers use to upload the Java source code file?

A. AWS CloudFormation
B. AWS CodeDeploy
C. AWS OpsWorks
D. AWS Elastic Beanstalk

A

Explanation
AWS Elastic Beanstalk can be used to quickly deploy and manage applications in the AWS Cloud. Developers upload applications and Elastic Beanstalk handles the deployment details of capacity provisioning, load balancing, auto-scaling, and application health monitoring.
Elastic Beanstalk supports applications developed in Go, Java, .NET, Node.js, PHP, Python, and Ruby, as well as different platform configurations for each language. To use Elastic Beanstalk, you create an application, upload an application version in the form of an application source bundle (for example, a Java .war file) to Elastic Beanstalk, and then provide some information about the application.

CORRECT: “AWS Elastic Beanstalk” is the correct answer.

INCORRECT: “AWS CodeDeploy” is incorrect. AWS CodeDeploy is a deployment service that automates application deployments to Amazon EC2 instances, on-premises instances, serverless Lambda functions, or Amazon ECS services.

INCORRECT: “AWS CloudFormation” is incorrect. AWS CloudFormation uses templates to deploy infrastructure as code. It is not a PaaS service like Elastic Beanstalk and is more focused on infrastructure than applications and management of applications.

INCORRECT: “AWS OpsWorks” is incorrect. AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet.

5
Q

Question 9:
A security officer has requested that all data associated with a specific customer is encrypted. The data resides on Elastic Block Store (EBS) volumes. Which of the following statements about using EBS encryption are correct? (choose 2)

A. There is no direct way to change the encryption state of a volume

B. All instance types support encryption

C. Not all EBS types support encryption

D. All attached EBS volumes must share the same encryption state

E. Data in transit between an instance and encrypted volume is also encrypted

A

Explanation
All EBS volume types and all instance families support encryption, but not all instance types do. There is no direct way to change the encryption state of an existing volume. Data in transit between an instance and an encrypted volume is also encrypted.

CORRECT: “Data in transit between an instance and an encrypted volume is also encrypted” is the correct answer.

CORRECT: “There is no direct way to change the encryption state of a volume” is the correct answer.

INCORRECT: “Not all EBS types support encryption” is incorrect as all EBS volume types support encryption.

INCORRECT: “All attached EBS volumes must share the same encryption state” is incorrect. You can have encrypted and non-encrypted EBS volumes on a single instance.

INCORRECT: “All instance types support encryption” is incorrect. All instance families support encryption, but not all instance types.
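Because a volume's encryption state cannot be changed directly, the standard workaround is to snapshot the volume and copy the snapshot with encryption enabled. The sketch below builds the parameters for `ec2.copy_snapshot`; the snapshot ID is hypothetical.

```python
def encrypt_via_snapshot_copy(snapshot_id, region, kms_key_id=None):
    """Parameters for ec2.copy_snapshot: copying an unencrypted snapshot
    with Encrypted=True produces an encrypted copy, from which a new
    encrypted volume can be created and attached."""
    params = {
        "SourceSnapshotId": snapshot_id,
        "SourceRegion": region,
        "Encrypted": True,
    }
    if kms_key_id:
        params["KmsKeyId"] = kms_key_id  # defaults to the AWS-managed EBS key
    return params

params = encrypt_via_snapshot_copy("snap-0123456789abcdef0", "us-east-1")
# encrypted = boto3.client("ec2").copy_snapshot(**params)
# then create a volume from the encrypted snapshot and swap it onto the instance
```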

6
Q

Question 11:
A web application runs on a series of Amazon EC2 instances behind an Application Load Balancer (ALB). A Solutions Architect is updating the configuration with a health check and needs to select the protocol to use. What options are available? (choose 2)

A. HTTP
B. HTTPS
C. ICMP
D. TCP
E. SSL

A

Explanation
An Application Load Balancer periodically sends requests to its registered targets to test their status. These tests are called health checks.
Each load balancer node routes requests only to the healthy targets in the enabled Availability Zones for the load balancer. Each load balancer node checks the health of each target, using the health check settings for the target groups with which the target is registered. After your target is registered, it must pass one health check to be considered healthy. After each health check is completed, the load balancer node closes the connection that was established for the health check.
If a target group contains only unhealthy registered targets, the load balancer nodes route requests across its unhealthy targets.
For an ALB the possible protocols are HTTP and HTTPS. The default is the HTTP protocol.

CORRECT: “HTTP” is the correct answer.

CORRECT: “HTTPS” is the correct answer.

INCORRECT: “SSL” is incorrect as this is not supported by the ALB.

INCORRECT: “TCP” is incorrect as this is not supported by the ALB.

INCORRECT: “ICMP” is incorrect as this is not supported by the ALB.

7
Q

Question 12:
A government agency is using CloudFront for a web application that receives personally identifiable information (PII) from citizens.
What feature of CloudFront applies an extra level of encryption at CloudFront edge locations to ensure the PII data is secured end-to-end?

A. Origin access identity
B. Object invalidation
C. Field-level encryption
D. RTMP distribution

A

Explanation
With Amazon CloudFront, you can enforce secure end-to-end connections to origin servers by using HTTPS. Field-level encryption adds an additional layer of security that lets you protect specific data throughout system processing so that only certain applications can see it.
Field-level encryption allows you to enable your users to securely upload sensitive information to your web servers. The sensitive information provided by your users is encrypted at the edge, close to the user, and remains encrypted throughout your entire application stack. This encryption ensures that only applications that need the data—and have the credentials to decrypt it—are able to do so.

CORRECT: “Field-level encryption” is the correct answer.

INCORRECT: “Object invalidation” is incorrect. Object invalidation is a method to remove objects from the cache.

INCORRECT: “RTMP distribution” is incorrect. An RTMP distribution is a method of streaming media using Adobe Flash.

INCORRECT: “Origin access identity” is incorrect. Origin access identity applies to S3 bucket origins, not web servers.

8
Q

Question 13:
A company is transitioning their web presence into the AWS cloud. As part of the migration the company will be running a web application both on-premises and in AWS for a period of time. During the period of co-existence the client would like 80% of the traffic to hit the AWS-based web servers and 20% to be directed to the on-premises web servers.
What method can a Solutions Architect use to distribute traffic as requested?

A. Use a Network Load Balancer to distribute traffic based on Instance ID

B. Use an Application Load Balancer to distribute traffic based on IP address

C. Use Route 53 with a simple routing policy

D. Use Route 53 with a weighted routing policy and configure the respective weights

A

Explanation
The Route 53 weighted routing policy is similar to simple routing, but you can specify a weight per record. You create records that have the same name and type and assign each record a relative weight (0 to 255); traffic is routed to each record in proportion to its weight divided by the sum of all weights, so an 80/20 split can be achieved with weights of 80 and 20. To stop sending traffic to a resource you can change the weight of the record to 0.

CORRECT: “Use Route 53 with a weighted routing policy and configure the respective weights” is the correct answer.

INCORRECT: “Use Route 53 with a simple routing policy” is incorrect as this will not split traffic based on weights as required.

INCORRECT: “Use an Application Load Balancer to distribute traffic based on IP address” is incorrect. An Application Load Balancer can distribute traffic to AWS and on-premises resources using IP addresses, but it cannot distribute traffic in a weighted manner.

INCORRECT: “Use a Network Load Balancer to distribute traffic based on Instance ID” is incorrect. A Network Load Balancer can distribute traffic to AWS and on-premises resources using IP addresses (not Instance IDs).
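The 80/20 split above can be sketched as a Route 53 change batch with two weighted A records. This is a minimal illustration; the record name and IPs (documentation ranges) are hypothetical.

```python
def weighted_change_batch(name, targets):
    """Route 53 change batch for weighted A records.
    targets: list of (identifier, ip, weight) tuples; weights are relative."""
    return {
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": name,
                    "Type": "A",
                    "SetIdentifier": ident,  # must be unique per weighted record
                    "Weight": weight,        # relative weight, 0 to 255
                    "TTL": 60,
                    "ResourceRecords": [{"Value": ip}],
                },
            }
            for ident, ip, weight in targets
        ]
    }

# 80% to the AWS endpoint, 20% to the on-premises endpoint
batch = weighted_change_batch("www.example.com", [
    ("aws", "203.0.113.10", 80),
    ("onprem", "198.51.100.20", 20),
])
# boto3.client("route53").change_resource_record_sets(
#     HostedZoneId="Z123EXAMPLE", ChangeBatch=batch)
```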

9
Q

Question 15:
A company runs an application on premises that stores a large quantity of semi-structured data using key-value pairs. The application code will be migrated to AWS Lambda and a highly scalable solution is required for storing the data.
Which datastore will be the best fit for these requirements?

A. Amazon RDS MySQL
B. Amazon DynamoDB
C. Amazon EFS
D. Amazon EBS

A

Explanation
Amazon DynamoDB is a NoSQL database that stores data using key-value pairs. It is ideal for storing large amounts of semi-structured data and is also highly scalable. This is the best solution for storing this data based on the requirements in the scenario.

CORRECT: “Amazon DynamoDB” is the correct answer.

INCORRECT: “Amazon EFS” is incorrect. The Amazon Elastic File System (EFS) is not suitable for storing key-value pairs.

INCORRECT: “Amazon RDS MySQL” is incorrect. Amazon Relational Database Service (RDS) is used for structured data as it is an SQL type of database.

INCORRECT: “Amazon EBS” is incorrect. Amazon Elastic Block Store (EBS) is a block-based storage system. You attach volumes to EC2 instances. It is not used for key-value pairs or to be used by Lambda functions.
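A key-value write to DynamoDB can be sketched as a `put_item` request using the low-level attribute-value format. The table and attribute names here are hypothetical.

```python
def build_put(table, key, value):
    """put_item request for a simple key-value layout: partition key "pk"
    holds the key, attribute "data" holds the (semi-structured) value."""
    return {
        "TableName": table,
        "Item": {
            "pk": {"S": key},     # "S" marks a string attribute
            "data": {"S": value},
        },
    }

req = build_put("SensorData", "device-42", '{"moisture": 17}')
# boto3.client("dynamodb").put_item(**req)
# Lambda functions can call put_item/get_item directly, which is why
# DynamoDB pairs well with serverless application code.
```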

10
Q

Question 16:
A distribution method is required for some static files. The requests will mainly be GET requests and a high volume of GETs is expected, often exceeding 2000 per second. The files are currently stored in an S3 bucket. According to AWS best practices, how can performance be optimized?

A. Use ElastiCache to cache the content

B. Use S3 Transfer Acceleration

C. Use cross-region replication to spread the load across regions

D. Integrate CloudFront with S3 to cache the content

A

Explanation
Amazon S3 automatically scales to high request rates. For example, your application can achieve at least 3,500 PUT/POST/DELETE and 5,500 GET requests per second per prefix in a bucket. There are no limits to the number of prefixes in a bucket.
If your workload is mainly sending GET requests, in addition to the preceding guidelines, you should consider using Amazon CloudFront for performance optimization. By integrating CloudFront with Amazon S3, you can distribute content to your users with low latency and a high data transfer rate.

CORRECT: “Integrate CloudFront with S3 to cache the content” is the correct answer.

INCORRECT: “Use cross-region replication to spread the load across regions” is incorrect. Cross-region replication creates a replica copy in another region but should not be used for spreading read requests across regions. There would be two S3 endpoints, and CRR is not designed for two-way sync, so this would not work well.

INCORRECT: “Use ElastiCache to cache the content” is incorrect. CloudFront is a better fit for this use case than using ElastiCache.

INCORRECT: “Use S3 Transfer Acceleration” is incorrect. Transfer Acceleration is used to accelerate object uploads to S3 over long distances (latency).

11
Q

Question 17:
A Solutions Architect is launching an Amazon EC2 instance with multiple attached volumes by modifying the block device mapping. Which block device can be specified in a block device mapping to be used with an EC2 instance? (choose 2)

A. EBS Volume
B. Snapshot
C. Instance store volume
D. S3 bucket
E. EFS Volume

A

Explanation
Each instance that you launch has an associated root device volume, either an Amazon EBS volume or an instance store volume.
You can use block device mapping to specify additional EBS volumes or instance store volumes to attach to an instance when it’s launched. You can also attach additional EBS volumes to a running instance.
You cannot use a block device mapping to attach an EFS file system or an S3 bucket; a snapshot can only serve as the source for an EBS volume in a mapping, not as an attachable device in its own right.

CORRECT: “EBS volume” is a correct answer.

CORRECT: “Instance store volume” is also a correct answer.

INCORRECT: “EFS volume” is incorrect as described above.

INCORRECT: “Snapshot” is incorrect as described above.

INCORRECT: “S3 bucket” is incorrect as described above.
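A block device mapping naming both device types can be sketched as the `BlockDeviceMappings` parameter of `ec2.run_instances`. The device names and sizes are illustrative; the instance store entry only takes effect on instance types that have ephemeral disks.

```python
def block_device_mappings():
    """BlockDeviceMappings for ec2.run_instances: one additional EBS
    volume and one instance store (ephemeral) volume."""
    return [
        {
            "DeviceName": "/dev/sdf",
            "Ebs": {"VolumeSize": 100, "VolumeType": "gp3",
                    "DeleteOnTermination": True},
        },
        {
            "DeviceName": "/dev/sdg",
            "VirtualName": "ephemeral0",  # instance store volume
        },
    ]

mappings = block_device_mappings()
# boto3.client("ec2").run_instances(ImageId=ami_id, InstanceType="m5d.large",
#     MinCount=1, MaxCount=1, BlockDeviceMappings=mappings)
```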

12
Q

Question 19:
A Solutions Architect has created a VPC and is in the process of formulating the subnet design. The VPC will be used to host a two-tier application that will include Internet facing web servers, and internal-only DB servers. Zonal redundancy is required.
How many subnets are required to support this requirement?

A. 4 Subnets
B. 1 Subnet
C. 6 Subnets
D. 2 Subnets

A

Explanation
Zonal redundancy indicates that the architecture should be split across multiple Availability Zones. Each subnet resides entirely within a single AZ, so redundancy requires subnets in at least two AZs.
A public subnet should be used for the Internet-facing web servers and a separate private subnet should be used for the internal-only DB servers. Therefore you need 4 subnets: one public and one private subnet in each of two AZs.

CORRECT: “4 subnets” is the correct answer.

INCORRECT: “2 subnets” is incorrect as explained above.

INCORRECT: “6 subnets” is incorrect as explained above.

INCORRECT: “1 Subnet” is incorrect as explained above.
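The four-subnet layout can be sketched by carving the VPC CIDR into /24 blocks, one public and one private per AZ. The VPC CIDR and AZ names are hypothetical.

```python
import ipaddress

def carve_subnets(vpc_cidr, azs):
    """Allocate one public and one private /24 subnet per AZ
    for zonal redundancy."""
    nets = ipaddress.ip_network(vpc_cidr).subnets(new_prefix=24)
    return {az: {"public": str(next(nets)), "private": str(next(nets))}
            for az in azs}

plan = carve_subnets("10.0.0.0/16", ["us-east-1a", "us-east-1b"])
# 4 subnets total, e.g. 10.0.0.0/24 (public, 1a), 10.0.1.0/24 (private, 1a),
# 10.0.2.0/24 (public, 1b), 10.0.3.0/24 (private, 1b)
```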

13
Q

Question 23:
An application runs on EC2 instances in a private subnet behind an Application Load Balancer in a public subnet. The application is highly available and distributed across multiple AZs. The EC2 instances must make API calls to an internet-based service. How can the Solutions Architect enable highly available internet connectivity?

A. Configure an internet gateway. Add a route to the gateway to each private subnet route table

B. Create a NAT gateway in the public subnet of each AZ. Update the route tables for each private subnet to direct internet-bound traffic to the NAT gateway

C. Create a NAT gateway and attach it to the VPC. Add a route to the gateway to each private subnet route table

D. Create a NAT instance in the private subnet of each AZ. Update the route tables for each private subnet to direct internet-bound traffic to the NAT instance

A

Explanation
The only solution presented that actually works is to create a NAT gateway in the public subnet of each AZ. They must be created in the public subnet as they gain public IP addresses and use an internet gateway for internet access.
The route tables in the private subnets must then be configured with a route to the NAT gateway and then the EC2 instances will be able to access the internet (subject to security group configuration).

CORRECT: “Create a NAT gateway in the public subnet of each AZ. Update the route tables for each private subnet to direct internet-bound traffic to the NAT gateway” is the correct answer.

INCORRECT: “Create a NAT gateway and attach it to the VPC. Add a route to the gateway to each private subnet route table” is incorrect. You do not attach NAT gateways to VPCs, you add them to public subnets.

INCORRECT: “Configure an internet gateway. Add a route to the gateway to each private subnet route table” is incorrect. Adding a route to an internet gateway would turn the private subnets into public subnets, and the instances have no public IP addresses, so this would not give them internet access.

INCORRECT: “Create a NAT instance in the private subnet of each AZ. Update the route tables for each private subnet to direct internet-bound traffic to the NAT instance” is incorrect. You do not create NAT instances in private subnets, they must be created in public subnets.
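The per-AZ routing described above can be sketched as `ec2.create_route` parameters: one default route per private route table, each pointing at the NAT gateway in that AZ's public subnet. The route table and NAT gateway IDs are hypothetical.

```python
def nat_route(route_table_id, nat_gateway_id):
    """Default route sending internet-bound traffic from a private subnet
    to the NAT gateway in the same AZ's public subnet."""
    return {
        "RouteTableId": route_table_id,
        "DestinationCidrBlock": "0.0.0.0/0",
        "NatGatewayId": nat_gateway_id,
    }

# One NAT gateway per AZ keeps the internet path available even if an
# entire AZ fails, and avoids cross-AZ data transfer for NAT traffic.
routes = [
    nat_route("rtb-private-1a", "nat-0aaa1111bbbb2222c"),
    nat_route("rtb-private-1b", "nat-0ddd3333eeee4444f"),
]
# for r in routes:
#     boto3.client("ec2").create_route(**r)
```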

14
Q

Question 24:
The database layer of an on-premises web application is being migrated to AWS. The database currently uses an in-memory cache. A Solutions Architect must deliver a solution that supports high availability and replication for the caching layer.
Which service should the Solutions Architect recommend?

A. Amazon ElastiCache Redis
B. Amazon DynamoDB
C. Amazon RDS Multi-AZ
D. Amazon ElastiCache Memcached

A

Explanation
Amazon ElastiCache Redis is an in-memory database cache that supports high availability through read replicas and Multi-AZ with automatic failover.

CORRECT: “Amazon ElastiCache Redis” is the correct answer.

INCORRECT: “Amazon ElastiCache Memcached” is incorrect as it does not support high availability or multi-AZ.

INCORRECT: “Amazon RDS Multi-AZ” is incorrect. This is not an in-memory database and it is not suitable for use as a caching layer.

INCORRECT: “Amazon DynamoDB” is incorrect. DynamoDB is a non-relational database. You would not use it for a caching layer. Also, the in-memory, low-latency caching for DynamoDB is implemented using DynamoDB Accelerator (DAX).

15
Q

Question 25:
A web application receives order processing information from customers and places the messages on an Amazon SQS queue. A fleet of Amazon EC2 instances are configured to pick up the messages, process them, and store the results in a DynamoDB table. The current configuration has been resulting in a large number of empty responses to ReceiveMessage API requests.
A Solutions Architect needs to eliminate empty responses to reduce operational overhead. How can this be done?

A. Configure Long Polling to eliminate empty responses by allowing Amazon SQS to wait until a message is available in a queue before sending a response

B. Use a Standard queue to provide at-least-once delivery, which means that each message is delivered at least once

C. Configure Short Polling to eliminate empty responses by reducing the length of time a connection request remains open

D. Use a FIFO queue to preserve the exact order in which messages are sent and received

A

Explanation
The correct answer is to use Long Polling which will eliminate empty responses by allowing Amazon SQS to wait until a message is available in a queue before sending a response.
The problem does not relate to the order in which the messages are processed, and there are no concerns about messages being delivered more than once, so it doesn’t matter whether you use a FIFO or standard queue.
Long Polling:
– Uses fewer requests and reduces cost.
– Eliminates false empty responses by querying all servers.
– SQS waits until a message is available in the queue before sending a response.
Short Polling:
– Does not wait for messages to appear in the queue.
– It queries only a subset of the available servers for messages (based on a weighted random distribution).
– Short polling is the default.
– ReceiveMessageWaitTime is set to 0.

CORRECT: “Configure Long Polling to eliminate empty responses by allowing Amazon SQS to wait until a message is available in a queue before sending a response” is the correct answer.

INCORRECT: “Use a Standard queue to provide at-least-once delivery, which means that each message is delivered at least once” is incorrect as explained above.

INCORRECT: “Use a FIFO (first-in-first-out) queue to preserve the exact order in which messages are sent and received” is incorrect as explained above.

INCORRECT: “Configure Short Polling to eliminate empty responses by reducing the length of time a connection request remains open” is incorrect as explained above.
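Long polling can be enabled per request by setting `WaitTimeSeconds` on `receive_message` (or per queue via the `ReceiveMessageWaitTimeSeconds` attribute). The sketch below builds the request parameters; the queue URL is hypothetical.

```python
def long_polling_request(queue_url, wait_seconds=20):
    """receive_message parameters enabling long polling: SQS holds the
    connection open up to wait_seconds (maximum 20) until a message
    arrives, eliminating empty responses."""
    if not 1 <= wait_seconds <= 20:
        raise ValueError("WaitTimeSeconds must be between 1 and 20")
    return {
        "QueueUrl": queue_url,
        "MaxNumberOfMessages": 10,
        "WaitTimeSeconds": wait_seconds,
    }

req = long_polling_request(
    "https://sqs.us-east-1.amazonaws.com/123456789012/orders")
# msgs = boto3.client("sqs").receive_message(**req)
```

Setting `ReceiveMessageWaitTimeSeconds` on the queue itself applies long polling to every consumer without code changes.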

16
Q

Question 27:
A Solutions Architect is deploying a production application that will use several Amazon EC2 instances and run constantly on an ongoing basis. The application cannot be interrupted or restarted. Which EC2 pricing model would be best for this workload?

A. Reserved instances
B. On-demand instances
C. Spot instances
D. Flexible instances

A

Explanation
In this scenario, for a stable workload that will run constantly on an ongoing basis, Reserved Instances (RIs) will be the most affordable solution.
RIs provide you with a significant discount (up to 75%) compared to On-Demand instance pricing. You have the flexibility to change families, OS types, and tenancies while benefitting from RI pricing when you use Convertible RIs.

CORRECT: “Reserved instances” is the correct answer.

INCORRECT: “On-demand instances” is incorrect. On-demand is useful for short-term, ad-hoc requirements where the job cannot afford to be interrupted; it is typically more expensive than Spot instances.

INCORRECT: “Spot instances” is incorrect. Spot is more suited to short-term jobs that can afford to be interrupted and offers the lowest price of all options.

INCORRECT: “Flexible instances” is incorrect. There’s no such thing as flexible instances.

17
Q

Question 28:
A Solutions Architect has created a new security group in an Amazon VPC. No rules have been created. Which of the statements below are correct regarding the default state of the security group? (choose 2)

A. There is an outbound rule that allows all traffic to all IP addresses

B. There is an inbound rule allowing traffic from the Internet to port 22 for management

C. There is an inbound rule that allows traffic from the Internet Gateway

D. There is an outbound rule allowing traffic to the Internet Gateway

E. There are no inbound rules and traffic will be implicitly denied

A

Explanation
Custom security groups do not have inbound allow rules (all inbound traffic is denied by default) whereas default security groups do have inbound allow rules (allowing traffic from within the group). All outbound traffic is allowed by default in both custom and default security groups.
Security groups act as a stateful firewall at the instance level. Specifically, security groups operate at the network interface level of an EC2 instance. You can only create allow rules in a security group; you cannot create deny rules, and anything not explicitly allowed is implicitly denied. All rules are evaluated before a decision is made. You can create ingress and egress rules.

CORRECT: “There is an outbound rule that allows all traffic to all IP addresses” is the correct answer.

CORRECT: “There are no inbound rules and traffic will be implicitly denied” is the correct answer.

INCORRECT: “There is an inbound rule allowing traffic from the Internet to port 22 for management” is incorrect. This is not true.

INCORRECT: “There is an inbound rule that allows traffic from the Internet Gateway” is incorrect. There are no inbound allow rules by default.

INCORRECT: “There is an outbound rule allowing traffic to the Internet Gateway” is incorrect. There is an outbound allow rule but it allows traffic to anywhere, it does not specify the internet gateway.
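The default state of a new custom security group can be sketched as a rule table: an empty inbound list (implicit deny) and a single allow-all outbound rule, mirroring what `ec2.describe_security_groups` would report.

```python
def new_security_group_rules():
    """Effective rules of a newly created custom security group:
    no inbound allow rules, one outbound allow-all rule."""
    return {
        # Nothing is allowed in: traffic is implicitly denied.
        "inbound": [],
        # Everything is allowed out, to any destination.
        "outbound": [
            {"IpProtocol": "-1",  # -1 means all protocols
             "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}
        ],
    }

rules = new_security_group_rules()
```

Compare this with a default VPC's default security group, which additionally allows inbound traffic from members of the same group.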

18
Q

Question 31:
An application makes calls to a REST API running on Amazon EC2 instances behind an Application Load Balancer (ALB). Most API calls complete quickly. However, a single endpoint is making API calls that require much longer to complete and this is introducing overall latency into the system. What steps can a Solutions Architect take to minimize the effects of the long-running API calls?

A. Change the EC2 instance to one with enhanced networking to reduce latency

B. Change the ALB to a Network Load Balancer (NLB) and use SSL/TLS termination

C. Increase the ALB idle timeout to allow the long-running requests to complete

D. Create an Amazon SQS queue and decouple the long-running API calls

A

Explanation
An Amazon Simple Queue Service (SQS) queue can be used to offload and decouple the long-running requests, which can then be processed asynchronously by separate EC2 instances. This is the best way to reduce the overall latency introduced by the long-running API call.

CORRECT: “Create an Amazon SQS queue and decouple the long-running API calls” is the correct answer.

INCORRECT: “Change the EC2 instance to one with enhanced networking to reduce latency” is incorrect. This will not reduce the latency of the API call as network latency is not the issue here, it is the latency of how long the API call takes to complete.

INCORRECT: “Increase the ALB idle timeout to allow the long-running requests to complete” is incorrect. The issue is not the connection being interrupted, it is that the API call takes a long time to complete.

INCORRECT: “Change the ALB to a Network Load Balancer (NLB) and use SSL/TLS termination” is incorrect. SSL/TLS termination is not of benefit here as the problem is not encryption or processing of encryption. The issue is API call latency.

19
Q

Question 32:
An Amazon EC2 instance behind an Elastic Load Balancer (ELB) is in the process of being de-registered. Which ELB feature is used to allow existing connections to close cleanly?

A. Connection Draining
B. Sticky Sessions
C. Deletion Protection
D. Proxy Protocol

A

Explanation
Connection draining is enabled by default and provides a period of time for existing connections to close cleanly. While connection draining is in progress, a Classic Load Balancer (CLB) reports the instance status as “InService: Instance deregistration currently in progress”.

CORRECT: “Connection Draining” is the correct answer.

INCORRECT: “Sticky Sessions” is incorrect. Session stickiness uses cookies and ensures a client is bound to an individual back-end instance for the duration of the cookie lifetime.

INCORRECT: “Proxy Protocol” is incorrect. The Proxy Protocol header helps you identify the IP address of a client when you have a load balancer that uses TCP for back-end connections.

INCORRECT: “Deletion Protection” is incorrect. Deletion protection is used to protect the ELB from deletion.

20
Q

Question 33:
An organization in the agriculture sector is deploying sensors and smart devices around factory plants and fields. The devices will collect information and send it to cloud applications running on AWS.
Which AWS service will securely connect the devices to the cloud applications?

A. AWS DMS
B. AWS IoT Core
C. AWS Lambda
D. AWS Glue

A

Explanation
AWS IoT Core is a managed cloud service that lets connected devices easily and securely interact with cloud applications and other devices. AWS IoT Core can support billions of devices and trillions of messages, and can process and route those messages to AWS endpoints and to other devices reliably and securely.

CORRECT: “AWS IoT Core” is the correct answer.

INCORRECT: “AWS Glue” is incorrect. AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for analytics.

INCORRECT: “AWS DMS” is incorrect. AWS Database Migration Service helps you migrate databases to AWS quickly and securely.

INCORRECT: “AWS Lambda” is incorrect. AWS Lambda lets you run code without provisioning or managing servers.

21
Q

Question 34:
A company needs to ensure that they can failover between AWS Regions in the event of a disaster seamlessly with minimal downtime and data loss. The applications will run in an active-active configuration.
Which DR strategy should a Solutions Architect recommend?

A. Backup and restore
B. Warm standby
C. Pilot light
D. Multi-site

A

Explanation
A multi-site solution runs on AWS as well as on your existing on-site infrastructure in an active-active configuration. The data replication method that you employ will be determined by the Recovery Point Objective (RPO) that you choose: the maximum allowable time window in which you will accept the loss of transactions during the DR process. This is distinct from the Recovery Time Objective (RTO), the maximum allowable downtime before degraded operations are restored.

CORRECT: “Multi-site” is the correct answer.

INCORRECT: “Backup and restore” is incorrect. This is the lowest cost DR approach that simply entails creating online backups of all data and applications.

INCORRECT: “Pilot light” is incorrect. With a pilot light strategy a core minimum of services are running and the remainder are only brought online during a disaster recovery situation.

INCORRECT: “Warm standby” is incorrect. The term warm standby is used to describe a DR scenario in which a scaled-down version of a fully functional environment is always running in the cloud.

22
Q

Question 35:
A customer has requested some advice on how to implement security measures in their Amazon VPC. The client has recently been the victim of some hacking attempts. The client wants to implement measures to mitigate further threats. The client has explained that the attacks always come from the same small block of IP addresses.
What would be a quick and easy measure to help prevent further attacks?

A. Create a Bastion Host and restrict all connections to the Bastion Host only

B. Use a Network ACL rule that denies connections from the block of IP addresses

C. Use CloudFront's DDoS prevention features

D. Use a Security Group rule that denies connections from the block of IP addresses

A

Explanation
With NACLs you can have both allow and deny rules. Network ACLs contain a numbered list of rules that are evaluated in order, starting with the lowest numbered rule; the first rule that matches the traffic is applied, and an implicit deny applies if no rule matches. Network ACLs have separate inbound and outbound rules and each rule can allow or deny traffic.
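The evaluation order can be sketched as a small function: rules are checked from the lowest rule number upward, the first match wins, and an implicit deny applies at the end. The rule numbers and CIDR blocks below are hypothetical, chosen to match this scenario (deny the attacker's block, allow everything else).

```python
import ipaddress

def evaluate_nacl(rules, source_ip):
    """Return 'allow' or 'deny' for source_ip.

    Rules are evaluated from the lowest rule number upward;
    the first matching rule wins. If no rule matches, the
    implicit deny ('*' rule) at the end of every NACL applies.
    """
    for number, cidr, action in sorted(rules):
        if ipaddress.ip_address(source_ip) in ipaddress.ip_network(cidr):
            return action
    return "deny"  # implicit deny

# Hypothetical rule set: deny the attacker's /28 block, allow all other traffic.
rules = [
    (100, "203.0.113.0/28", "deny"),   # attacker block (example range)
    (200, "0.0.0.0/0", "allow"),       # allow everything else
]

print(evaluate_nacl(rules, "203.0.113.5"))   # attacker IP -> deny
print(evaluate_nacl(rules, "198.51.100.7"))  # normal client -> allow
```

Note the deny rule must have a lower number than the allow-all rule, or it would never be reached.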

CORRECT: “Use a Network ACL rule that denies connections from the block of IP addresses” is the correct answer.

INCORRECT: “Use a Security Group rule that denies connections from the block of IP addresses” is incorrect. With Security Groups you can only assign permit rules, you cannot assign deny rules.

INCORRECT: “Use CloudFront’s DDoS prevention features” is incorrect. CloudFront does have DDoS prevention features but we don’t know that this is a DDoS style of attack and CloudFront can only help where the traffic is using the CloudFront service to access cached content.

INCORRECT: “Create a Bastion Host and restrict all connections to the Bastion Host only” is incorrect. A bastion host is typically used for admin purposes, allowing access to a single endpoint in the AWS cloud for administration using SSH/RDP. From the bastion instance you then connect to other EC2 instances in your subnets. This is not used as a method of adding security to production systems and cannot stop traffic from hitting application ports.

23
Q

Question 36:
An on-premises server runs a MySQL database and will be migrated to the AWS Cloud. The company requires a managed solution that supports high availability and automatic failover in the event of the outage of an Availability Zone (AZ).
Which solution is the BEST fit for these requirements?

A. Use the AWS Database Migration Service (DMS) to directly migrate the database to Amazon RDS MySQL. Use the Schema Conversion Tool (SCT) to enable conversion from MySQL to Amazon RDS.

B. Create a snapshot of the MySQL database server and use AWS DataSync to migrate the data to Amazon S3. Launch a new Amazon RDS MySQL Multi-AZ deployment from the snapshot

C. Use the AWS Database Migration Service (DMS) to directly migrate the database to an Amazon EC2 MySQL Multi-AZ deployment

D. Use the AWS Database Migration Service (DMS) to directly migrate the database to an Amazon RDS MySQL Multi-AZ deployment

A

Explanation
The AWS DMS service can be used to directly migrate the MySQL database to an Amazon RDS Multi-AZ deployment. The entire process can be online and is managed for you. There is no need to perform schema translation between MySQL and RDS (assuming you choose the MySQL RDS engine).

CORRECT: “Use the AWS Database Migration Service (DMS) to directly migrate the database to an Amazon RDS MySQL Multi-AZ deployment” is the correct answer.

INCORRECT: “Use the AWS Database Migration Service (DMS) to directly migrate the database to an Amazon EC2 MySQL Multi-AZ deployment” is incorrect as there is no “Multi-AZ” feature for MySQL running on Amazon EC2; you must use RDS.

INCORRECT: “Create a snapshot of the MySQL database server and use AWS DataSync to migrate the data to Amazon S3. Launch a new Amazon RDS MySQL Multi-AZ deployment from the snapshot” is incorrect. You cannot create a snapshot of a MySQL database server running on-premises.

INCORRECT: “Use the AWS Database Migration Service (DMS) to directly migrate the database to Amazon RDS MySQL. Use the Schema Conversion Tool (SCT) to enable conversion from MySQL to Amazon RDS” is incorrect. There is no need to convert the schema when migrating from MySQL to Amazon RDS (MySQL engine).

24
Q

Question 39:
The Solutions Architect in charge of a critical application must ensure the Amazon EC2 instances are able to be launched in another AWS Region in the event of a disaster.
What steps should the Solutions Architect take? (Select TWO.)

A. Launch instances in the second Region from the AMIs

B. Copy the snapshots using Amazon S3 cross-region replication

C. Launch instances in the second Region using the S3 API

D. Create AMIs of the instances and copy them to another Region

E. Enable cross-region snapshots for the Amazon EC2 instances

A

Explanation
You can create AMIs of the EC2 instances and then copy them across Regions. This provides a point-in-time copy of the state of the EC2 instance in the remote Region.
Once you’ve created AMIs of EC2 instances and copied them to the second Region, you can then launch the EC2 instances from the AMIs in that Region.
This is a good DR strategy as you have moved stateful EC2 instances to another Region.
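As a sketch, the request parameters for the two steps might look like the following. All IDs, names, Regions, and instance types are placeholders, and the boto3 calls are shown only as comments because they require AWS credentials.

```python
# Step 1: parameters for copying an AMI into the DR Region.
# copy_image is called against the DESTINATION Region and names the source.
copy_image_params = {
    "Name": "critical-app-dr",               # name for the copy (placeholder)
    "SourceImageId": "ami-0123456789abcdef0",  # placeholder source AMI ID
    "SourceRegion": "us-east-1",               # placeholder source Region
}

# Step 2: parameters for launching instances from the copied AMI.
run_instances_params = {
    "ImageId": "ami-0fedcba9876543210",  # ID returned by the copy (placeholder)
    "InstanceType": "m5.large",          # placeholder instance type
    "MinCount": 1,
    "MaxCount": 1,
}

# With credentials configured these would be invoked as:
#   ec2 = boto3.client("ec2", region_name="eu-west-1")  # the DR Region
#   ec2.copy_image(**copy_image_params)
#   ec2.run_instances(**run_instances_params)
```

The key design point is that `copy_image` is issued in the destination Region, with `SourceRegion` identifying where the AMI currently lives.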

CORRECT: “Create AMIs of the instances and copy them to another Region” is the correct answer.

CORRECT: “Launch instances in the second Region from the AMIs” is also a correct answer.

INCORRECT: “Launch instances in the second Region using the S3 API” is incorrect. Though snapshots (and EBS-backed AMIs) are stored on Amazon S3, you cannot actually access them using the S3 API. You must use the EC2 API.

INCORRECT: “Enable cross-region snapshots for the Amazon EC2 instances” is incorrect. You cannot enable “cross-region snapshots” as this is not a feature that currently exists.

INCORRECT: “Copy the snapshots using Amazon S3 cross-region replication” is incorrect. You cannot work with snapshots using Amazon S3 at all, including leveraging the cross-region replication feature.

25
Q

Question 40:
A tool needs to analyze data stored in an Amazon S3 bucket. Processing the data takes a few seconds and results are then written to another S3 bucket. Less than 256 MB of memory is needed to run the process. What would be the MOST cost-effective compute solution for this use case?

A. AWS Lambda functions
B. AWS Fargate tasks
C. Amazon Elastic Beanstalk
D. Amazon EC2 spot instances

A

Explanation
AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume. Lambda has a maximum execution time of 900 seconds and memory can be allocated up to 3008 MB. Therefore, the most cost-effective solution will be AWS Lambda.
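A minimal Lambda handler for this pattern might look like the following sketch. The event shape is the standard S3 notification structure Lambda receives; the bucket names and the processing step are placeholders, and the S3 read/write calls are left as comments since they need AWS credentials.

```python
def parse_s3_event(event):
    """Extract (bucket, key) pairs from a standard S3 event notification."""
    return [
        (rec["s3"]["bucket"]["name"], rec["s3"]["object"]["key"])
        for rec in event["Records"]
    ]

def handler(event, context):
    """Lambda entry point: read each object, process it, write the result.

    The results bucket and processing logic are placeholders; with
    credentials this would use boto3's get_object/put_object.
    """
    for bucket, key in parse_s3_event(event):
        # body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        # result = process(body)   # a few seconds, well under 256 MB memory
        # s3.put_object(Bucket="results-bucket", Key=key, Body=result)
        pass

# Sample event in the shape S3 sends to Lambda.
event = {"Records": [{"s3": {"bucket": {"name": "input-bucket"},
                             "object": {"key": "data/file1.csv"}}}]}
print(parse_s3_event(event))  # [('input-bucket', 'data/file1.csv')]
```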

CORRECT: “AWS Lambda functions” is the correct answer.

INCORRECT: “AWS Fargate tasks” is incorrect. Fargate runs Docker containers and is serverless. However, you do pay for the running time of the tasks so it will not be as cost-effective.

INCORRECT: “Amazon EC2 spot instances” is incorrect. EC2 instances must run continually waiting for jobs to process so even with spot this would be less cost-effective (and subject to termination).

INCORRECT: “Amazon Elastic Beanstalk” is incorrect. This service also relies on Amazon EC2 instances so it would not be as cost-effective.

26
Q

Question 41:
A financial services company regularly runs an analysis of the day’s transaction costs, execution reporting, and market performance. The company currently uses third-party commercial software for provisioning, managing, monitoring, and scaling the computing jobs which utilize a large fleet of EC2 instances.
The company is seeking to reduce costs and utilize AWS services. Which AWS service could be used in place of the third-party software?

A. AWS Batch
B. AWS Systems Manager
C. Amazon Lex
D. Amazon Athena

A

Explanation
AWS Batch eliminates the need to operate third-party commercial or open source batch processing solutions. There is no batch software or servers to install or manage. AWS Batch manages all the infrastructure for you, avoiding the complexities of provisioning, managing, monitoring, and scaling your batch computing jobs.

CORRECT: “AWS Batch” is the correct answer.

INCORRECT: “Amazon Athena” is incorrect. Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL.

INCORRECT: “AWS Systems Manager” is incorrect. AWS Systems Manager gives you visibility and control of your infrastructure on AWS.

INCORRECT: “Amazon Lex” is incorrect. Amazon Lex is a service for building conversational interfaces into any application using voice and text.

27
Q

Question 42:
Three AWS accounts are owned by the same company but in different regions. Account Z has two AWS Direct Connect connections to two separate company offices. Accounts A and B require the ability to route across account Z’s Direct Connect connections to each company office. A Solutions Architect has created an AWS Direct Connect gateway in account Z.
How can the required connectivity be configured?

A. Associate the Direct Connect gateway to a virtual private gateway in account A and B

B. Create a VPC Endpoint to the Direct Connect gateway in account A and B

C. Create a PrivateLink connection in Account Z and ENIs in accounts A and B

D. Associate the Direct Connect gateway to a transit gateway in each region

A

Explanation
You can associate an AWS Direct Connect gateway with either of the following gateways:
- A transit gateway when you have multiple VPCs in the same Region.
- A virtual private gateway.
In this case account Z owns the Direct Connect gateway, so a virtual private gateway in accounts A and B must be associated with it to enable this configuration to work. After Account Z accepts the proposals, Account A and Account B can route traffic from their virtual private gateways to the Direct Connect gateway.

CORRECT: “Associate the Direct Connect gateway to a virtual private gateway in account A and B” is the correct answer.

INCORRECT: “Associate the Direct Connect gateway to a transit gateway in each region” is incorrect. This would be a good solution if the accounts were in VPCs within a region rather than across regions.

INCORRECT: “Create a VPC Endpoint to the Direct Connect gateway in account A and B” is incorrect. You cannot create a VPC endpoint for Direct Connect gateways.

INCORRECT: “Create a PrivateLink connection in Account Z and ENIs in accounts A and B” is incorrect. You cannot use PrivateLink connections to publish a Direct Connect gateway.

28
Q

Question 44:
A company requires an Elastic Load Balancer (ELB) for an application they are planning to deploy on AWS. The application requires extremely high throughput and extremely low latencies. The connections will be made using the TCP protocol and the ELB must support load balancing to multiple ports on an instance. Which ELB should the company use?

A. Route 53
B. Application Load Balancer
C. Network Load Balancer
D. Classic Load Balancer

A

Explanation
The Network Load Balancer operates at the connection level (Layer 4), routing connections to targets (Amazon EC2 instances, containers, and IP addresses) based on IP protocol data. It is architected to handle millions of requests per second and sudden volatile traffic patterns while providing extremely high throughput and consistently low latencies. The NLB also supports load balancing to multiple ports on an instance.

CORRECT: “Network Load Balancer” is the correct answer.

INCORRECT: “Classic Load Balancer” is incorrect. The CLB operates using the TCP, SSL, HTTP and HTTPS protocols. It is not the best choice for requirements of extremely high throughput and low latency and does not support load balancing to multiple ports on an instance.

INCORRECT: “Application Load Balancer” is incorrect. The ALB operates at the HTTP and HTTPS level only (does not support TCP load balancing).

INCORRECT: “Route 53” is incorrect. Route 53 is a DNS service, it is not a type of ELB (though you can do some types of load balancing with it).

29
Q

Question 45:
A company runs a web-based application that uses Amazon EC2 instances for the web front-end and Amazon RDS for the database back-end. The web application writes transaction log files to an Amazon S3 bucket and the quantity of files is becoming quite large. It is acceptable to retain the most recent 60 days of log files and permanently delete the rest.
Which action can a Solutions Architect take to enable this to happen automatically?

A. Use an S3 lifecycle policy with object expiration configured to automatically remove objects that are more than 60 days old

B. Write a Ruby script that checks the age of objects and deletes any that are more than 60 days old

C. Use an S3 bucket policy that deletes objects that are more than 60 days old

D. Use an S3 lifecycle policy to move the log files that are more than 60 days old to the GLACIER storage class

A

Explanation
To manage your objects so that they are stored cost effectively throughout their lifecycle, configure their Amazon S3 Lifecycle. An S3 Lifecycle configuration is a set of rules that define actions that Amazon S3 applies to a group of objects. There are two types of actions:
- Transition actions—Define when objects transition to another storage class. For example, you might choose to transition objects to the S3 Standard-IA storage class 30 days after you created them, or archive objects to the S3 Glacier storage class one year after creating them.
- Expiration actions—Define when objects expire. Amazon S3 deletes expired objects on your behalf.
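The expiration rule for this scenario can be expressed as a lifecycle configuration. The rule ID and prefix below are placeholders; with credentials this would be applied via boto3's `put_bucket_lifecycle_configuration`.

```python
# Lifecycle configuration that permanently deletes log objects after 60 days.
# The rule ID and key prefix are placeholders for this scenario.
lifecycle_config = {
    "Rules": [
        {
            "ID": "expire-transaction-logs",
            "Filter": {"Prefix": "logs/"},   # apply only to the log files
            "Status": "Enabled",
            "Expiration": {"Days": 60},      # S3 deletes objects on your behalf
        }
    ]
}

# With credentials configured this would be applied with:
#   boto3.client("s3").put_bucket_lifecycle_configuration(
#       Bucket="my-log-bucket", LifecycleConfiguration=lifecycle_config)
```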

CORRECT: “Use an S3 lifecycle policy with object expiration configured to automatically remove objects that are more than 60 days old” is the correct answer.

INCORRECT: “Write a Ruby script that checks the age of objects and deletes any that are more than 60 days old” is incorrect as the automated method is to use object expiration.

INCORRECT: “Use an S3 bucket policy that deletes objects that are more than 60 days old” is incorrect as you cannot do this with bucket policies.

INCORRECT: “Use an S3 lifecycle policy to move the log files that are more than 60 days old to the GLACIER storage class” is incorrect. Moving logs to Glacier may save cost but the question requests that the files are permanently deleted.

30
Q

Question 47:
A development team needs to run up a few lab servers on a weekend for a new project. The servers will need to run uninterrupted for a few hours. Which EC2 pricing option would be most suitable?

A. On-demand
B. Dedicated instances
C. Reserved
D. Spot

A

Explanation
On-Demand pricing ensures that instances will not be terminated and is the most economical option that meets the requirement for uninterrupted operation. Use On-Demand for ad-hoc requirements where you cannot tolerate interruption.

CORRECT: “On-Demand” is the correct answer.

INCORRECT: “Spot” is incorrect. Spot pricing may be the most economical option for a short duration over a weekend but you may have the instances terminated by AWS and there is a requirement that the servers run uninterrupted.

INCORRECT: “Reserved” is incorrect. Reserved pricing provides a reduced cost for a contracted period (1 or 3 years), and is not suitable for ad hoc requirements.

INCORRECT: “Dedicated instances” is incorrect. Dedicated instances run on hardware that’s dedicated to a single customer and are more expensive than regular On-Demand instances.

31
Q

Question 49:
An application is generating a large amount of clickstream events data that is being stored on S3. The business needs to understand customer behaviour and wants to run complex analytics queries against the data.
Which AWS service can be used for this requirement?

A. Amazon RedShift
B. Amazon Kinesis Firehose
C. Amazon RDS
D. Amazon Neptune

A

Explanation
Amazon Redshift is a fast, fully managed data warehouse that makes it simple and cost-effective to analyze all your data using standard SQL and existing Business Intelligence (BI) tools.
RedShift is used for running complex analytic queries against petabytes of structured data, using sophisticated query optimization, columnar storage on high-performance local disks, and massively parallel query execution.
With RedShift you can load data from Amazon S3 and perform analytics queries. RedShift Spectrum can analyze data directly in Amazon S3, but was not presented as an option.

CORRECT: “Amazon RedShift” is the correct answer.

INCORRECT: “Amazon Neptune” is incorrect. Amazon Neptune is a new product that offers a fully-managed Graph database.

INCORRECT: “Amazon RDS” is incorrect. RDS is a relational database that is used for transactional workloads not analytics workloads.

INCORRECT: “Amazon Kinesis Firehose” is incorrect. Amazon Kinesis Firehose processes streaming data, not data stored on S3.

32
Q

Question 51:
A fleet of Amazon EC2 instances running Linux will be launched in an Amazon VPC. An application development framework and some custom software must be installed on the instances. The installation will be initiated using some scripts. What feature enables a Solutions Architect to specify the scripts so that the software can be installed during the EC2 instance launch?

A. AWS Config
B. Metadata
C. Run command
D. User data

A

Explanation
When you launch an instance in Amazon EC2, you have the option of passing user data to the instance that can be used to perform common automated configuration tasks and even run scripts after the instance starts. You can pass two types of user data to Amazon EC2: shell scripts and cloud-init directives.
User data is data that is supplied by the user at instance launch in the form of a script and is limited to 16KB.
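As a sketch, a user data shell script for this scenario and the base64 encoding the raw EC2 API expects (boto3's `run_instances` encodes the string for you). The package name and install script path are placeholders.

```python
import base64

# User data script executed by cloud-init as root on first boot.
# The framework package and install script are placeholders.
user_data = """#!/bin/bash
yum update -y
yum install -y my-app-framework          # hypothetical package name
/opt/setup/install-custom-software.sh    # hypothetical install script
"""

# The raw EC2 RunInstances API expects user data base64-encoded
# (boto3 performs this encoding automatically).
encoded = base64.b64encode(user_data.encode()).decode()

# Round-trip check: decoding recovers the original script.
assert base64.b64decode(encoded).decode() == user_data
```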

CORRECT: “User Data” is the correct answer.

INCORRECT: “Metadata” is incorrect. Instance metadata is data about your instance that you can use to configure or manage the running instance. Instance metadata is divided into categories, for example, host name, events, and security groups.

INCORRECT: “Run Command” is incorrect. The AWS Systems Manager run command is used to manage the configuration of existing instances by using remotely executed commands. User data is better for specifying scripts to run at startup.

INCORRECT: “AWS Config” is incorrect. This service is used to manage the configuration of AWS resources, it does not run scripts on instances.

33
Q

Question 52:
A Solutions Architect created a new IAM user account for a temporary employee who recently joined the company. The user does not have permissions to perform any actions. Which statement is true about newly created users in IAM?

A. They are created with limited permissions
B. They are created with full permissions
C. They are created with user privileges
D. They are created with no permissions

A

Explanation
Every IAM user starts with no permissions. In other words, by default, users can do nothing, not even view their own access keys. To give a user permission to do something, you can add the permission to the user (that is, attach a policy to the user). Or you can add the user to a group that has the intended permission.
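As a sketch of granting a first permission, here is a minimal identity-based policy that could be attached to the user (or a group). The bucket name and user/policy names are placeholders.

```python
import json

# Minimal identity-based policy: read-only access to one bucket.
# Bucket name is a placeholder.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket", "s3:GetObject"],
            "Resource": [
                "arn:aws:s3:::example-bucket",     # for ListBucket
                "arn:aws:s3:::example-bucket/*",   # for GetObject
            ],
        }
    ],
}

policy_document = json.dumps(policy)

# With credentials, this could be attached inline to the user via:
#   boto3.client("iam").put_user_policy(UserName="temp-employee",
#       PolicyName="s3-read-only", PolicyDocument=policy_document)
```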

CORRECT: “They are created with no permissions” is the correct answer.

INCORRECT: “They are created with limited permissions” is incorrect as they are created with no permissions.

INCORRECT: “They are created with full permissions” is incorrect as they are created with no permissions.

INCORRECT: “They are created with user privileges” is incorrect as they are created with no permissions.

34
Q

Question 53:
A customer is deploying services in a hybrid cloud model. The customer has mandated that data is transferred directly between cloud data centers, bypassing ISPs.
Which AWS service can be used to enable hybrid cloud connectivity?

A. Amazon Route 53
B. Amazon VPC
C. AWS Direct Connect
D. IPSec VPN

A

Explanation
With AWS Direct Connect, you can connect to all your AWS resources in an AWS Region, transfer your business-critical data directly from your datacenter, office, or colocation environment into and from AWS, bypassing your Internet service provider and removing network congestion.

CORRECT: “AWS Direct Connect” is the correct answer.

INCORRECT: “Amazon VPC” is incorrect. Amazon Virtual Private Cloud (Amazon VPC) enables you to launch AWS resources into a virtual network that you’ve defined.

INCORRECT: “IPSec VPN” is incorrect. An IPSec VPN can be used to connect to AWS however it does not bypass the ISPs or Internet.

INCORRECT: “Amazon Route 53” is incorrect. Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service.

35
Q

Question 57:
A company is deploying a new two-tier web application that uses EC2 web servers and a DynamoDB database backend. An Internet facing ELB distributes connections between the web servers.
The Solutions Architect has created a security group for the web servers and needs to create a security group for the ELB. What rules should be added? (choose 2)

A. Add an Inbound rule that allows HTTP/HTTPS, and specify the source as 0.0.0.0/32

B. Add an Outbound rule that allows ALL TCP, and specify the destination as the Internet Gateway

C. Add an Outbound rule that allows HTTP/HTTPS, and specify the destination as the web server security group

D. Add an Outbound rule that allows HTTP/HTTPS, and specify the destination as VPC CIDR

E. Add an Inbound rule that allows HTTP/HTTPS, and specify the source as 0.0.0.0/0

A

Explanation
An inbound rule should be created for the relevant protocols (HTTP/HTTPS) and the source should be set to any address (0.0.0.0/0).
The outbound rule should forward the relevant protocols (HTTP/HTTPS) and the destination should be set to the web server security group.
Note that on the web server security group you’d want to add an Inbound rule allowing HTTP/HTTPS from the ELB security group.
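In boto3 terms, the two rules could be expressed as the following IP permission structures. The security group ID is a placeholder, and the calls that would apply them are shown only as comments.

```python
# Inbound: allow HTTP/HTTPS from anywhere to the ELB.
inbound_rules = [
    {"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
     "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
    {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
     "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
]

# Outbound: allow HTTP/HTTPS only to the web server security group.
# "sg-0webservers123" is a placeholder group ID.
outbound_rules = [
    {"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
     "UserIdGroupPairs": [{"GroupId": "sg-0webservers123"}]},
    {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
     "UserIdGroupPairs": [{"GroupId": "sg-0webservers123"}]},
]

# With credentials these would be applied to the ELB's security group via:
#   ec2 = boto3.client("ec2")
#   ec2.authorize_security_group_ingress(GroupId=elb_sg, IpPermissions=inbound_rules)
#   ec2.authorize_security_group_egress(GroupId=elb_sg, IpPermissions=outbound_rules)
```

Referencing the web server security group in `UserIdGroupPairs` (rather than a CIDR) is what locks the outbound path to the web tier only.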

CORRECT: “Add an Outbound rule that allows HTTP/HTTPS, and specify the destination as the web server security group” is a correct answer.

CORRECT: “Add an Inbound rule that allows HTTP/HTTPS, and specify the source as 0.0.0.0/0” is also a correct answer.

INCORRECT: “Add an Outbound rule that allows ALL TCP, and specify the destination as the Internet Gateway” is incorrect as the relevant protocol should be specified and the destination should be the web server security group.

INCORRECT: “Add an Outbound rule that allows HTTP/HTTPS, and specify the destination as VPC CIDR” is incorrect. Using the VPC CIDR would not be secure and you cannot specify an Internet Gateway in a security group (not that you’d want to anyway).

INCORRECT: “Add an Inbound rule that allows HTTP/HTTPS, and specify the source as 0.0.0.0/32” is incorrect. The address 0.0.0.0/32 is incorrect as the 32 mask means an exact match is required (0.0.0.0).

36
Q

Question 58:
A Solutions Architect has created an AWS Organization with several AWS accounts. Security policy requires that the use of specific API actions is limited across all accounts. The Solutions Architect requires a method of centrally controlling these actions.
What is the SIMPLEST method of achieving the requirements?

A. Create an IAM policy in the root account and attach it to users and groups in each account

B. Create cross-account roles in each account to limit access to the services and actions that are allowed

C. Create a Network ACL that limits access to the services or actions and attach it to all relevant subnets

D. Create a service control policy in the root organizational unit to deny access to the services or actions

A

Explanation
Service control policies (SCPs) offer central control over the maximum available permissions for all accounts in your organization allowing you to ensure your accounts stay within your organization’s access control guidelines.
For example, a policy attached to an OU could restrict all users in the accounts of that OU from launching any EC2 instance type other than t2.micro.
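A sketch of such an SCP follows; the `Sid` is an arbitrary label, and the policy is attached to the OU through AWS Organizations separately.

```python
import json

# SCP denying launch of any EC2 instance type other than t2.micro.
# Attached to an OU, it caps the permissions of every account in that OU,
# regardless of the IAM policies inside those accounts.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "RequireMicroInstanceType",   # arbitrary label
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {
                "StringNotEquals": {"ec2:InstanceType": "t2.micro"}
            },
        }
    ],
}

print(json.dumps(scp, indent=2))
```

Note the deny-with-condition pattern: rather than allowing t2.micro, the SCP denies `RunInstances` whenever the requested instance type is anything else.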

CORRECT: “Create a service control policy in the root organizational unit to deny access to the services or actions” is the correct answer.

INCORRECT: “Create a Network ACL that limits access to the services or actions and attach it to all relevant subnets” is incorrect. Network ACLs control network traffic - not API actions.

INCORRECT: “Create an IAM policy in the root account and attach it to users and groups in each account” is incorrect. This is not an efficient or centrally managed method of applying the security restrictions.

INCORRECT: “Create cross-account roles in each account to limit access to the services and actions that are allowed” is incorrect. This is another example of a complex and inefficient method of providing access across accounts and does not restrict API actions within the account.

37
Q

Question 59:
One of the departments in a company has been generating a large amount of data on Amazon S3 and costs are increasing. Data older than 90 days is rarely accessed but must be retained for several years. If this data does need to be accessed, at least 24 hours' notice is provided.
How can a Solutions Architect optimize the costs associated with storage of this data whilst ensuring it is accessible if required?

A. Implement archival software that automatically moves data to tape

B. Use S3 lifecycle policies to move data to GLACIER after 90 days

C. Use S3 lifecycle policies to move data to the STANDARD_IA storage class

D. Select the older data and manually migrate it to GLACIER

A

Explanation
To manage your objects so that they are stored cost effectively throughout their lifecycle, configure their lifecycle. A lifecycle configuration is a set of rules that define actions that Amazon S3 applies to a group of objects. Transition actions define when objects transition to another storage class.
For example, you might choose to transition objects to the STANDARD_IA storage class 30 days after you created them, or archive objects to the GLACIER storage class one year after creating them.
GLACIER retrieval times:
- Standard retrieval is 3-5 hours, which is well within the requirements here.
- You can use Expedited retrievals to access data in 1 – 5 minutes.
- You can use Bulk retrievals to access up to petabytes of data in approximately 5 – 12 hours.
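The transition rule for this scenario can be sketched as a lifecycle rule like the one below; the rule ID and key prefix are placeholders, and it would be applied through `put_bucket_lifecycle_configuration` as part of the bucket's lifecycle configuration.

```python
# Lifecycle rule transitioning objects to Glacier 90 days after creation.
# The rule ID and key prefix are placeholders.
transition_rule = {
    "ID": "archive-after-90-days",
    "Filter": {"Prefix": "department-data/"},
    "Status": "Enabled",
    "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
}

# This rule would be one entry in the "Rules" list passed to:
#   boto3.client("s3").put_bucket_lifecycle_configuration(
#       Bucket="department-bucket",
#       LifecycleConfiguration={"Rules": [transition_rule]})
```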

CORRECT: “Use S3 lifecycle policies to move data to GLACIER after 90 days” is the correct answer.

INCORRECT: “Implement archival software that automatically moves data to tape” is incorrect as this solution can be fully automated using lifecycle policies.

INCORRECT: “Use S3 lifecycle policies to move data to the STANDARD_IA storage class” is incorrect. STANDARD_IA is good for infrequently accessed data and provides faster access times than GLACIER but is more expensive so not the best option here.

INCORRECT: “Select the older data and manually migrate it to GLACIER” is incorrect as a lifecycle policy can automate the process.

38
Q

Question 60:
An Amazon EC2 instance is generating a very high packets-per-second rate and the performance of the application stack is being impacted. A Solutions Architect needs to determine a resolution to the issue that results in improved performance.
Which action should the Architect take?

A. Add multiple Elastic IP addresses to the instance

B. Configure a RAID 1 array from multiple EBS volumes

C. Use enhanced networking

D. Create a placement group and put the EC2 instance in it

A

Explanation
Enhanced networking provides higher bandwidth, higher packet-per-second (PPS) performance, and consistently lower inter-instance latencies. If your packets-per-second rate appears to have reached its ceiling, you should consider moving to enhanced networking because you have likely reached the upper thresholds of the VIF driver. It is only available for certain instance types and only supported in VPC. You must also launch an HVM AMI with the appropriate drivers.
AWS currently supports enhanced networking capabilities using SR-IOV. SR-IOV provides direct access to network adapters, provides higher performance (packets-per-second) and lower latency.

CORRECT: “Use enhanced networking” is the correct answer.

INCORRECT: “Configure a RAID 1 array from multiple EBS volumes” is incorrect. You do not need to create a RAID 1 array (which is more for redundancy than performance anyway).

INCORRECT: “Create a placement group and put the EC2 instance in it” is incorrect. A placement group is used to increase network performance between instances. In this case there is only a single instance so it won’t help.

INCORRECT: “Add multiple Elastic IP addresses to the instance” is incorrect. Adding multiple IP addresses is not a way to increase performance of the instance as the same amount of bandwidth is available to the Elastic Network Interface (ENI).

39
Q

Question 61:
A company has launched a multi-tier application architecture. The web tier and database tier run on Amazon EC2 instances in private subnets within the same Availability Zone.
Which combination of steps should a Solutions Architect take to add high availability to this architecture? (Select TWO.)

A. Create an Amazon EC2 Auto Scaling group and Application Load Balancer (ALB) spanning multiple AZs

B. Create new private subnets in the same VPC but in a different AZ. Create a database using Amazon EC2 in one AZ

C. Create new private subnets in the same VPC but in a different AZ. Migrate the database to an Amazon RDS multi-AZ deployment

D. Create new public subnets in the same AZ for high availability and move the web tier to the public subnets

E. Add the existing web application instances to an Auto Scaling group behind an Application Load Balancer (ALB)

A

Explanation
The Solutions Architect can use an Auto Scaling group across multiple AZs with an ALB in front to create an elastic and highly available architecture. Then, migrate the database to an Amazon RDS multi-AZ deployment to create HA for the database tier. This results in a fully redundant architecture that can withstand the failure of an availability zone.

CORRECT: “Create an Amazon EC2 Auto Scaling group and Application Load Balancer (ALB) spanning multiple AZs” is a correct answer.

CORRECT: “Create new private subnets in the same VPC but in a different AZ. Migrate the database to an Amazon RDS multi-AZ deployment” is also a correct answer.

INCORRECT: “Create new public subnets in the same AZ for high availability and move the web tier to the public subnets” is incorrect. If subnets share the same AZ they are not suitable for splitting your tier across them for HA, as the failure of an AZ will take out both subnets.

INCORRECT: “Add the existing web application instances to an Auto Scaling group behind an Application Load Balancer (ALB)” is incorrect. The instances are in a single AZ so the Solutions Architect should create a new auto scaling group and launch instances across multiple AZs.

INCORRECT: “Create new private subnets in the same VPC but in a different AZ. Create a database using Amazon EC2 in one AZ” is incorrect. A database in a single AZ will not be highly available.

40
Q

Question 62:
An application receives a high traffic load between 7:30am and 9:30am daily. The application uses an Auto Scaling group to maintain three instances most of the time but during the peak period it requires six instances.
How can a Solutions Architect configure Auto Scaling to perform a daily scale-out event at 7:30am and a scale-in event at 9:30am to account for the peak load?

A. Use a Dynamic scaling policy
B. Use a Scheduled scaling policy
C. Use a Simple scaling policy
D. Use a Step scaling policy

A

Explanation
The following scaling options are available:
Manual – you set a fixed capacity yourself by changing the ASG’s min/desired/max or attaching/detaching instances.
Scheduled – used for predictable load changes; can be a single event or a recurring schedule.
Dynamic – scales in response to demand, based on a CloudWatch alarm or tracked metric.
Simple – a dynamic policy that applies a single scaling adjustment when one alarm breaches.
Step – a dynamic policy that applies multiple scaling adjustments depending on the size of the alarm breach.

CORRECT: “Use a Scheduled scaling policy” is the correct answer.

INCORRECT: “Use a Simple scaling policy” is incorrect. Simple scaling responds to a CloudWatch alarm breach, not to a time of day.

INCORRECT: “Use a Dynamic scaling policy” is incorrect. Dynamic scaling reacts to real-time demand metrics, but here the load pattern is known in advance, so a schedule is more appropriate.

INCORRECT: “Use a Step scaling policy” is incorrect. Step scaling is also alarm-driven and is not triggered at fixed times.

41
Q

Question 63:
A large multi-national client has requested a design for a multi-region database. The master database will be in the EU (Frankfurt) region and databases will be located in 4 other regions to service local read traffic. The database should be a managed service including the replication.
The solution should be cost-effective and secure. Which AWS service can deliver these requirements?

A. ElastiCache with Redis and clustering mode enabled
B. RDS with Multi-AZ
C. RDS with cross-region Read Replicas
D. EC2 instances with EBS Replication

A

Explanation
Amazon RDS Read Replicas are used for read-heavy databases and the replication is asynchronous. Read Replicas are used for workload sharing and offloading, and they can be created in other regions. This solution improves query performance for users in the other AWS regions and is a fully managed service, including the replication.
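As a sketch of how one such replica might be created, the parameters below could be passed to boto3’s `rds.create_db_instance_read_replica(**replica_params)`. The call is made in the destination region, and for a cross-region replica the source instance is referenced by its ARN; all identifiers here are hypothetical.

```python
# Hedged sketch: cross-region RDS Read Replica parameters.
# Source is the master in eu-central-1 (Frankfurt); identifiers are hypothetical.
replica_params = {
    # Replica created in the destination (read-traffic) region
    "DBInstanceIdentifier": "app-db-replica-us-east-1",   # hypothetical name
    # Cross-region sources must be referenced by ARN
    "SourceDBInstanceIdentifier": (
        "arn:aws:rds:eu-central-1:123456789012:db:app-db"  # hypothetical ARN
    ),
    "DBInstanceClass": "db.r5.large",
}
```

One such call per destination region would give the four local read endpoints the scenario asks for, with replication handled entirely by RDS.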

CORRECT: “RDS with cross-region Read Replicas” is the correct answer.

INCORRECT: “RDS with Multi-AZ” is incorrect. RDS Multi-AZ provides failover within a single region only and does not serve cross-region read traffic.

INCORRECT: “EC2 instances with EBS replication” is incorrect. EC2 instances with EBS replication is not a suitable solution.

INCORRECT: “ElastiCache with Redis and clustering mode enabled” is incorrect. ElastiCache is an in-memory key/value store used for caching, not a managed relational database, and cluster mode operates only within a single region.