Test - 2 Flashcards

1
Q

Question 397



A VPC public subnet is one that (choose the correct option below):

A. Has at least one route in its associated routing table that uses an Internet gateway

B. Includes a route in its associated routing table via a Network Address Translation (NAT) instance.

C. Has a Network Access Control List (NACL) permitting outbound traffic to 0.0.0.0/0

D. Has the public Subnet option selected in its configuration

A

Answer: A

The public subnet has a route table containing a route that uses the internet gateway.

For more information on public subnets please visit the below URL:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario1.html
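As a minimal boto3 sketch (the route table and gateway IDs below are placeholders, not values from the question), this is the route that makes a subnet public:

```python
import boto3

ec2 = boto3.client("ec2")

# A subnet is "public" when its associated route table contains a
# route like this one, sending internet-bound traffic to an IGW.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",  # placeholder route table ID
    DestinationCidrBlock="0.0.0.0/0",
    GatewayId="igw-0123456789abcdef0",     # placeholder internet gateway ID
)
```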

2
Q

Question 398



What action is required to establish a VPN connection between an on-premises data center and a VPC virtual private gateway?

A. Assign a static internet-routable IP Address to an Amazon VPC customer gateway

B. Modify the main route table to allow traffic to a network address translation instance.

C. Use a dedicated network address translation instance in the public subnet

D. Establish a dedicated networking connection using Direct Connect

A

Answer: A

When defining a VPN connection between the on-premises network and the VPC, you need to have a customer gateway defined. Since this is accessed over the internet, it needs to have a static internet-routable IP address.

For more information on VPC VPN connections please visit the below URLs:

http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Introduction.html
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpn-connections.html
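A minimal boto3 sketch of the required step, assuming placeholder values for the device's static public IP and BGP ASN:

```python
import boto3

ec2 = boto3.client("ec2")

# The customer gateway resource represents the on-premises VPN device.
# It must be reachable at a static, internet-routable IP address.
response = ec2.create_customer_gateway(
    Type="ipsec.1",
    PublicIp="203.0.113.12",  # placeholder static internet-routable IP
    BgpAsn=65000,             # placeholder BGP ASN of the device
)
print(response["CustomerGateway"]["CustomerGatewayId"])
```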

3
Q

Question 399



A startup company hired you to help them build a mobile application that will ultimately store billions of images and videos on S3. The company is lean on funding and wants to minimize operational costs; however, they have an aggressive marketing plan and expect to double their current installation base every six months. Due to the nature of their business, they are expecting sudden and large increases in traffic to and from S3 and need to ensure that it can handle the performance needs of their application. What other information must you gather from this customer in order to determine whether S3 is the right option?

A. You must know how many customers the company has today because this is critical in understanding what their customer base will be in 2 years.

B. You must find out the total number of requests per second at peak usage.

C. You must know the size of the individual objects being written to S3, in order to properly design the key namespace.

D. In order to build the key namespace correctly you must understand the total amount of storage needs for each S3 bucket.

A

Answer: B

S3 billing is based on the number of requests in addition to the amount of storage.

If you go to the URL http://calculator.s3.amazonaws.com/index.html, which is the calculator for S3 costs, you can see that the cost is related to the total number of requests in addition to the storage. To estimate the cost of S3 and verify that it can meet the application's performance needs, you need the total number of requests per second at peak usage.

4
Q

Question 400



You are configuring a solution which uses EC2 Instances and an Elastic Load Balancer. Which of the following protocols can be used to ensure that traffic is secure from the client machine to the Elastic Load Balancer? Choose 2 answers from the options given below.

A. HTTP

B. HTTPS

C. TCP

D. SSL

A

Answer: B, D

The HTTPS protocol uses the SSL protocol to establish secure connections over the HTTP layer. You can also use the SSL protocol to establish secure connections over the TCP layer.

For more information on ELB Listener configuration please see the below link:
http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-listener-config.html
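As a hedged sketch (the load balancer name, ports, and certificate ARN are placeholders), a Classic Load Balancer can be created with both kinds of secure listeners via boto3:

```python
import boto3

elb = boto3.client("elb")  # Classic Load Balancer API

# HTTPS secures HTTP traffic from the client to the ELB; SSL secures
# arbitrary TCP traffic in the same way.
elb.create_load_balancer(
    LoadBalancerName="my-secure-elb",  # placeholder name
    AvailabilityZones=["us-east-1a"],  # placeholder AZ
    Listeners=[
        {"Protocol": "HTTPS", "LoadBalancerPort": 443,
         "InstanceProtocol": "HTTP", "InstancePort": 80,
         "SSLCertificateId": "arn:aws:iam::123456789012:server-certificate/my-cert"},
        {"Protocol": "SSL", "LoadBalancerPort": 465,
         "InstanceProtocol": "TCP", "InstancePort": 465,
         "SSLCertificateId": "arn:aws:iam::123456789012:server-certificate/my-cert"},
    ],
)
```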

5
Q

Question 401



After creating a new AWS account, you use the API to request 40 on-demand EC2 instances in a single AZ. After 20 successful requests, subsequent requests failed. What could be a reason for this issue, and how can you resolve it?

A. You encountered a soft limit of 20 instances per region. Submit the limit increase form and retry the failed requests once approved.

B. AWS allows you to provision no more than 20 instances per AZ. Select a different AZ and retry the failed request.

C. You need to use VPC in order to provision more than 20 instances in a single AZ. Simply terminate the resources already provisioned and re-launch them all in a VPC.

D. You encountered an API throttling situation and should try the failed requests using an exponential decay retry algorithm.

A

Answer: A

There is a soft limit of 20 instances. Since this limit applies across the region,
option B is wrong because it will not work even if you try another Availability Zone.

For more information on all service limits please visit the below URL:

https://aws.amazon.com/ec2/faqs/#How_many_instances_can_I_run_in_Amazon_EC2
http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html

6
Q

Question 402



You have been tasked with creating a VPC network topology for your company. The VPC network must support both internet-facing applications and internally-facing applications accessed only over VPN. Both internet-facing and internally-facing applications must be able to leverage at least 3 AZs for high availability. At a minimum, how many subnets must you create within your VPC to accommodate these requirements?

A. 2

B. 3

C. 4

D. 6

A

Answer: D

Internet-facing as well as internal (private) applications must be able to make use of at least three Availability Zones for high availability. So 3 subnets for the internet-facing tier plus 3 subnets for the private tier is 6 subnets in total.

For more information on VPC and subnets please visit the below URL:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html
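A minimal boto3 sketch of the subnet math (the VPC ID, AZ names, and CIDR ranges are placeholders): one public and one private subnet per AZ across three AZs gives six subnets:

```python
import boto3

ec2 = boto3.client("ec2")

VPC_ID = "vpc-0123456789abcdef0"                  # placeholder VPC ID
AZS = ["us-east-1a", "us-east-1b", "us-east-1c"]  # placeholder AZs

cidrs = iter(f"10.0.{i}.0/24" for i in range(6))
for az in AZS:
    for tier in ("public", "private"):
        # One public and one private subnet per AZ -> 6 subnets total.
        subnet = ec2.create_subnet(
            VpcId=VPC_ID, CidrBlock=next(cidrs), AvailabilityZone=az
        )
        print(tier, az, subnet["Subnet"]["SubnetId"])
```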

7
Q

Question 403



You receive a Linux Spot Instance at a bid of $0.05/hr. After 30 minutes, the Spot price increases to $0.06/hr and your Spot Instance is terminated by AWS. What was the total EC2 compute cost of running your Spot Instance?

A. $0.025

B. $0.03

C. $0.05

D. $0.06

A

Answer: A

From 2nd October 2017, per-second billing has come into effect for some EC2 instances and EBS. AWS per-second billing applies to Linux On-Demand, Reserved, and Spot EC2 instances. However, per-second billing is not applicable to Microsoft Windows instances or to all Linux distributions, so some Linux AMIs may still have an hourly charge.

https://aws.amazon.com/blogs/aws/new-per-second-billing-for-ec2-instances-and-ebs-volumes/

With per-second billing in effect, if AWS terminates your instance, you are billed for exactly what you have used. For example, if AWS terminates your instance after half an hour of use, you only pay for the 30 minutes instead of a full hour: $0.05/hr × 0.5 hr = $0.025.

For more information on spot instance pricing please visit the below URL:
https://aws.amazon.com/ec2/spot/pricing/

8
Q

Question 404



Which of the following is a durable key-value store?

A. Amazon Simple Storage Service

B. Amazon Simple Queue Service

C. Amazon Simple Workflow Service

D. Amazon Simple Notification Service

A

Answer: A

Amazon S3 stores data as objects identified by unique keys, making it a durable key-value store. This is clearly given in the AWS documentation.

For more information on S3 please visit the below URLs:

http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingObjects.html
https://aws.amazon.com/s3/details

9
Q

Question 405



In reviewing the Auto-Scaling events for your application you notice that your application is scaling up and down multiple times in the same hour. What design choice could you make to optimize for costs while preserving elasticity? Select 2 options.

A. Modify the Auto Scaling policy to use scheduled scaling actions

B. Modify the Auto Scaling Group cool down timers

C. Modify the Amazon CloudWatch alarm period that triggers your Auto Scaling scale-down policy.

D. Modify the Auto Scaling group termination policy to terminate the newest instance first.

A

Answer: B, C

The Auto Scaling cooldown period is a configurable setting for your Auto Scaling group that helps to ensure that Auto Scaling doesn’t launch or terminate additional instances before the previous scaling activity takes effect. After the Auto Scaling group dynamically scales using a simple scaling policy, Auto Scaling waits for the cooldown period to complete before resuming scaling activities. When you manually scale your Auto Scaling group, the default is not to wait for the cooldown period, but you can override the default and honor the cooldown period. Note that if an instance becomes unhealthy, Auto Scaling does not wait for the cooldown period to complete before replacing the unhealthy instance.

For more information on Autoscale cool down timers please visit the URL:
http://docs.aws.amazon.com/autoscaling/latest/userguide/Cooldown.html

You can also modify the CloudWatch triggers to ensure the thresholds are appropriate for the scale down policy.

For more information on Autoscaling user guide please visit the URL:
http://docs.aws.amazon.com/autoscaling/latest/userguide/as-scale-based-on-demand.html
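A minimal boto3 sketch (the group name and timer value are placeholder assumptions) of the cooldown adjustment described above:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# A longer default cooldown gives each scaling activity time to take
# effect before the next one fires, reducing scale up/down thrash.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="my-asg",  # placeholder group name
    DefaultCooldown=600,            # seconds; placeholder value
)
```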

10
Q

Question 406



Which route must be added to your routing table in order to allow connections to the internet from your subnet?

A. Destination: 0.0.0.0/0 -> Target: your internet gateway

B. Destination: 192.168.1.257/0 -> Target: your internet gateway

C. Destination: 0.0.0.0/33 -> Target: your virtual private gateway

D. Destination: 0.0.0.0/0 -> Target: 0.0.0.0/24

A

Answer: A

The question indicates a public subnet. The public subnet has a route table that uses the internet gateway.

For more information on public subnets please visit the below URL:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario1.html

11
Q

Question 407



You are deploying an application on Amazon EC2 that must call AWS APIs. Which method should you use to securely pass credentials to the application?

A. Embed the API credentials into your JAR files.

B. Use the AWS Identity and Access Management (IAM) roles for EC2 instances

C. Store API credentials as an object in S3.

D. Pass API credentials to the instance using instance user data.

A

Answer: B

An IAM role is similar to a user, in that it is an AWS identity with permission policies that determine what the identity can and cannot do in AWS. However, instead of being uniquely associated with one person, a role is intended to be assumable by anyone who needs it. Also, a role does not have any credentials (password or access keys) associated with it. Instead, if a user is assigned to a role, access keys are created dynamically and provided to the user.

For more information on IAM role please visit the below URL:
http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html
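A short sketch of why the role-based approach is secure: with an IAM role attached to the instance, application code (boto3 is assumed here purely for illustration) contains no credentials at all; the SDK fetches and rotates temporary keys automatically:

```python
import boto3

# No access keys appear in code, configuration, or user data. On an
# EC2 instance with an IAM role (instance profile) attached, boto3
# retrieves temporary credentials from the instance metadata service.
s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])
```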

12
Q

Question 408

What are some of the metrics that are monitored by AWS Lambda? Choose 3 answers from the options given below.

A. Invocations

B. Duration

C. Errors

D. Database Changes

A

Answer: A, B, C

AWS Lambda automatically monitors functions on your behalf, reporting metrics through Amazon CloudWatch. These metrics include Invocations, Duration, and Errors.

For more information on Lambda metrics please visit the below URL:

https://docs.aws.amazon.com/lambda/latest/dg/monitoring-functions-metrics.html
http://docs.aws.amazon.com/lambda/latest/dg/monitoring-functions-access-metrics.html
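As a hedged illustration (the function name is a placeholder), the Invocations metric can be read back from the AWS/Lambda CloudWatch namespace like this:

```python
import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client("cloudwatch")

# Invocations, Duration, and Errors all live in the AWS/Lambda
# namespace, keyed by FunctionName.
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/Lambda",
    MetricName="Invocations",
    Dimensions=[{"Name": "FunctionName", "Value": "my-function"}],  # placeholder
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Sum"],
)
print(stats["Datapoints"])
```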

13
Q

Question 409



There is a new facility from AWS which allows for fast, easy, and secure transfers of files over long distances between your client and your Amazon S3 bucket. What is this service called?

A. File Transfer

B. HTTP Transfer

C. S3 Transfer Acceleration

D. Kinesis Acceleration

A

Answer: C

To know more about S3 transfer acceleration, please visit the below URL:
http://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration.html
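A minimal boto3 sketch (the bucket and file names are placeholders) of enabling acceleration on a bucket and then uploading through the accelerate endpoint:

```python
import boto3
from botocore.config import Config

s3 = boto3.client("s3")

# Step 1: enable Transfer Acceleration on the bucket.
s3.put_bucket_accelerate_configuration(
    Bucket="my-bucket",  # placeholder bucket name
    AccelerateConfiguration={"Status": "Enabled"},
)

# Step 2: route requests through the accelerate endpoint.
s3_accel = boto3.client(
    "s3", config=Config(s3={"use_accelerate_endpoint": True})
)
s3_accel.upload_file("large-video.mp4", "my-bucket", "large-video.mp4")
```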

14
Q

Question 410



What are the languages currently supported by AWS Lambda? Choose 3 answers from the options given below.

A. Node.js

B. Angular.js

C. Java

D. C#

A

Answer: A, C, D

AWS Lambda runs your code on a high-availability compute infrastructure and performs all of the administration of the compute resources, including server and operating system maintenance, capacity provisioning and automatic scaling, code monitoring and logging. All you need to do is supply your code in one of the languages that AWS Lambda supports (currently Node.js, Java, C# and Python).

For more information on Lambda please visit the below URL:
http://docs.aws.amazon.com/lambda/latest/dg/welcome.html

15
Q

Question 411



Your company has an application hosted in AWS which makes use of DynamoDB. There is a requirement from the IT security department to ensure that all source IP addresses which make calls to the DynamoDB tables are recorded. Which of the following services can be used to ensure this requirement is fulfilled?

A. AWS CodeCommit

B. AWS CodePipeline

C. AWS CloudTrail

D. AWS CloudWatch

A

Answer: C

The AWS documentation mentions the following: DynamoDB is integrated with CloudTrail, a service that captures low-level API requests made by or on behalf of DynamoDB in your AWS account and delivers the log files to an Amazon S3 bucket that you specify. CloudTrail captures calls made from the DynamoDB console or from the DynamoDB low-level API. Using the information collected by CloudTrail, you can determine what request was made to DynamoDB, the source IP address from which the request was made, who made the request, when it was made, and so on.

For more information on DynamoDB and Cloudtrail, please refer to the below link:
http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/logging-using-cloudtrail.html

16
Q

Question 412



Which of the following statements is false about Amazon Glacier? Choose one answer from the options given below.

A. It supports archive operations of Upload, Download and Delete

B. The archives are mutable

C. Uploading an archive is a synchronous operation

D. Archives can be as large as 40 TB

A

Answer: B

This is clearly given in the AWS documentation. A single archive can be as large as 40 terabytes. You can store an unlimited number of archives and an unlimited amount of data in Amazon Glacier. Each archive is assigned a unique archive ID at the time of creation, and the content of the archive is immutable, meaning that after an archive is created it cannot be updated.

For more information on AWS Glacier please visit the below URL:
https://aws.amazon.com/glacier/details/

17
Q

Question 413



Your company currently has a web application hosted on a single EC2 Instance. The load on the application has increased over time and now users are complaining of slow response times. Which of the following implementations can help alleviate this issue?

A. Attach an additional EBS Volume to the EC2 Instance and direct the application to make the reads from this new volume.

B. Attach an additional network interface with an Elastic IP so that requests can be made onto multiple IP’s.

C. Launch additional EC2 Instances in a web server farm type configuration and place them behind an Elastic Load Balancer.

D. Launch additional EC2 Instances in a web server farm type configuration and place them behind Route53.

A

Answer: C

The AWS documentation mentions the following about the Elastic Load Balancer, which can be used to help in this issue: A load balancer distributes incoming application traffic across multiple EC2 instances in multiple Availability Zones. This increases the fault tolerance of your applications. Elastic Load Balancing detects unhealthy instances and routes traffic only to healthy instances. Your load balancer serves as a single point of contact for clients. This increases the availability of your application. You can add and remove instances from your load balancer as your needs change, without disrupting the overall flow of requests to your application. Elastic Load Balancing scales your load balancer as traffic to your application changes over time. Elastic Load Balancing can scale to the vast majority of workloads automatically.

For more information on the Elastic Load Balancer, please refer to the below link:
http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/introduction.html

18
Q

Question 414



Which of the following are used to get data records from Amazon Kinesis? Choose an answer from the options below

A. Consumer

B. Stream

C. Producer

D. None of the above

A

Answer: A

A consumer gets data records from Amazon Kinesis streams. A consumer, known as an Amazon Kinesis Streams application, processes the data records from a stream.

For more information on AWS Kinesis consumers please visit the below URL:
http://docs.aws.amazon.com/streams/latest/dev/amazon-kinesis-consumers.html
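A minimal consumer sketch in boto3 (the stream and shard names are placeholders): records are fetched through a shard iterator:

```python
import boto3

kinesis = boto3.client("kinesis")

# A consumer reads records from a shard via a shard iterator.
iterator = kinesis.get_shard_iterator(
    StreamName="my-stream",            # placeholder stream name
    ShardId="shardId-000000000000",    # placeholder shard ID
    ShardIteratorType="TRIM_HORIZON",  # start from the oldest record
)["ShardIterator"]

response = kinesis.get_records(ShardIterator=iterator, Limit=100)
for record in response["Records"]:
    print(record["Data"])
```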

19
Q

Question 415



What is the maximum possible retention period for data in Kinesis Streams? Choose an answer from the options below.

A. 5 days

B. 7 days

C. 10 days

D. 24 hours

A

Answer: B

Data records are accessible for a default of 24 hours from the time they are added to a stream. This time frame is called the retention period and is configurable in hourly increments from 24 to 168 hours (1 to 7 days).

For more information on AWS Kinesis please visit the below URL:
http://docs.aws.amazon.com/streams/latest/dev/amazon-kinesis-consumers.html
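A one-call boto3 sketch (the stream name is a placeholder) that raises retention from the 24-hour default to the 168-hour maximum described above:

```python
import boto3

kinesis = boto3.client("kinesis")

# Extend retention from the default 24 hours to the 7-day maximum.
kinesis.increase_stream_retention_period(
    StreamName="my-stream",    # placeholder stream name
    RetentionPeriodHours=168,  # 7 days
)
```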

20
Q

Question 416



Which of the following is false when you create an encrypted EBS volume?

A. Data is encrypted at rest inside the volume

B. Data is encrypted when it is moved from one instance to another in the same subnet.

C. Data is encrypted when data is moved between the volume and the instance

D. All snapshots created from the volume are encrypted

A

Answer: B

The AWS documentation mentions the following about EBS encryption: Amazon EBS encryption offers you a simple encryption solution for your EBS volumes without the need for you to build, maintain, and secure your own key management infrastructure. When you create an encrypted EBS volume and attach it to a supported instance type, the following types of data are encrypted:

- Data at rest inside the volume

- All data moving between the volume and the instance

- All snapshots created from the volume

Traffic moving from one instance to another within a subnet is not covered by EBS encryption, which is why option B is the false statement.

For more information on EBS Encryption, please refer to the below link:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html
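A minimal boto3 sketch (the AZ, size, and volume type are placeholder assumptions) showing that encryption is a single flag at volume creation; the three kinds of data listed above are then covered automatically:

```python
import boto3

ec2 = boto3.client("ec2")

# Encrypted=True covers data at rest, data in transit between the
# volume and the instance, and all snapshots taken from the volume.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",  # placeholder AZ
    Size=100,                       # GiB, placeholder
    VolumeType="gp2",               # placeholder type
    Encrypted=True,
)
print(volume["VolumeId"], volume["Encrypted"])
```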

21
Q

Question 417



In AWS what constitutes temporary security credentials? Choose 3 answers from the options given below

A. AWS Access Key ID

B. Secret Access Key

C. Security Token

D. SSL Keys

A

Answer: A, B, C

Temporary security credentials consist of an access key ID, a secret access key, and a security token. This is given in the AWS documentation.

For more information on IAM please visit the below URL:
https://aws.amazon.com/iam/faqs/
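As an illustration, an STS call returns exactly the three components named in the answer (plus an expiry); this boto3 sketch prints them:

```python
import boto3

sts = boto3.client("sts")

# Temporary security credentials = access key ID + secret access key
# + security (session) token, valid until the expiration time.
creds = sts.get_session_token(DurationSeconds=3600)["Credentials"]
print(creds["AccessKeyId"])
print(creds["SecretAccessKey"])
print(creds["SessionToken"])
print(creds["Expiration"])
```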

22
Q

Question 418



Your company has a set of resources hosted in AWS. Your IT supervisor is concerned with the costs being incurred by the current set of AWS resources and wants to monitor the cost usage. Which of the following mechanisms can be used to monitor the costs of the AWS resources and also look at the possibility of cost optimization? Choose 3 answers from the options given below.

A. Use the Cost Explorer to see the costs of AWS resources

B. Create budgets in the billing section so that budgets are set beforehand

C. Send all logs to Cloudwatch logs and inspect the logs for billing details

D. Consider using the Trusted Advisor

A

Answer: A, B, D

The AWS Documentation mentions the following

1) For a quick, high-level analysis use Cost Explorer, which is a free tool that you can use to view graphs of your AWS spend data. It includes a variety of filters and preconfigured views, as well as forecasting capabilities. Cost Explorer displays data from the last 13 months, the current month, and the forecasted costs for the next three months, and it updates this data daily.
2) Consider using budgets if you have a defined spending plan for a project or service and you want to track how close your usage and costs are to exceeding your budgeted amount. Budgets use data from Cost Explorer to provide you with a quick way to see your usage-to-date and current estimated charges from AWS. You can also set up notifications that warn you if you exceed or are about to exceed your budgeted amount.
3) Visit the AWS Trusted Advisor console regularly. Trusted Advisor works like a customized cloud expert, analyzing your AWS environment and providing best practice recommendations to help you save money, improve system performance and reliability, and close security gaps.

For more information on cost optimization, please visit the below URL:
https://aws.amazon.com/answers/account-management/cost-optimization-monitor/

23
Q

Question 419



Who are federated users when it comes to AWS? Choose an answer from the options given below

A. These are IAM users in AWS

B. These are IAM groups in AWS

C. Federated users (external identities) are users you manage outside of AWS, for example in your corporate directory

D. None of the above

A

Answer: C

This is given in the AWS documentation.

For more information on IAM please visit the below URL:
https://aws.amazon.com/iam/faqs/

24
Q

Question 420



As a solution architect, you have been asked to decide whether to use Amazon EBS-backed or instance store-backed instances. What is one key difference between an Amazon EBS-backed and an instance store-backed instance that you need to keep in mind?

A. Amazon EBS-backed instances can be stopped and restarted.

B. Instance-store backed instances can be stopped and restarted.

C. Auto scaling requires using Amazon EBS-backed instances.

D. Virtual Private Cloud (VPC) requires EBS backed instances.

A

Answer: A

Amazon EBS-backed instances can be stopped and restarted.

Please visit the below URL for the key differences between EBS and instance store volumes:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ComponentsAMIs.html

An instance launched from an Amazon EBS-backed AMI can be placed in the stopped state, where the instance is not running but the root volume is persisted in Amazon EBS. An instance launched from an instance store-backed AMI cannot be placed in the stopped state; such instances are either running or terminated.

25
Q

Question 421



Which of the following are not supported in the classic load balancer service provided by AWS? Choose an answer from the options given below.

A. Health Checks

B. Cloudwatch Metrics

C. Host Based Routing

D. Access Logs

A

Answer: C

Host-based routing is a feature of the Application Load Balancer and is not supported by the Classic Load Balancer. This is clearly given in the AWS documentation. For more information on ELB please visit the below URL:
https://aws.amazon.com/elasticloadbalancing/classicloadbalancer/faqs/

26
Q

Question 422



Your company has an on-premises Active Directory setup in place. The company has extended their footprint on AWS, but still wants the ability to use the on-premises Active Directory for authentication. Which of the following AWS services can be used to ensure that AWS resources such as AWS Workspaces can continue to use the existing credentials stored in the on-premises Active Directory?

A. Use the Active Directory service on AWS

B. Use the AWS Simple AD service

C. Use the Active Directory connector service on AWS

D. Use the ClassicLink feature on AWS

A

Answer: C

The AWS documentation mentions the following: AD Connector is a directory gateway with which you can redirect directory requests to your on-premises Microsoft Active Directory without caching any information in the cloud. AD Connector comes in two sizes, small and large. A small AD Connector is designed for smaller organizations of up to 500 users. A large AD Connector can support larger organizations of up to 5,000 users.

For more information on the AD connector, please refer to the below URL:
http://docs.aws.amazon.com/directoryservice/latest/admin-guide/directory_ad_connector.html

27
Q

Question 423



Which DNS record types does Amazon Route 53 support? Select 3 options.

A. A(address record)

B. AAAA(IPv6 address record)

C. TXT (txt record)

D. Host Information records (HINFO)

A

Answer: A, B, C

For more information on Route53, please visit the below URL:
https://aws.amazon.com/route53/faqs/
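A hedged boto3 sketch (the zone ID, record name, and IP are placeholders) of creating one of the supported record types, an A record:

```python
import boto3

route53 = boto3.client("route53")

# A, AAAA, and TXT are all supported Route 53 record types; HINFO is not.
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEFGHIJ",  # placeholder hosted zone ID
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com",  # placeholder record name
                "Type": "A",
                "TTL": 300,
                "ResourceRecords": [{"Value": "203.0.113.10"}],
            },
        }]
    },
)
```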

28
Q

Question 424



A user has been created in IAM, but the user is still not able to make API calls. After creating a new IAM user, which of the following must be done before they can successfully make API calls?

A. Add a password to the user.

B. Enable Multi-Factor Authentication for the user.

C. Assign a Password Policy to the user.

D. Create a set of Access Keys for the user.

A

Answer: D

In IAM, when you create a user, you need to create and download an access key ID and secret access key so that the user can make API calls to AWS.

For more information on IAM please visit the following URL:
http://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html
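A minimal boto3 sketch (the user name is a placeholder) of creating the access keys the user needs for API calls; the secret key is returned only once, at creation time:

```python
import boto3

iam = boto3.client("iam")

# The secret access key can only be retrieved at creation time,
# so it must be saved immediately.
key = iam.create_access_key(UserName="api-user")["AccessKey"]  # placeholder user
print(key["AccessKeyId"])
print(key["SecretAccessKey"])
```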

29
Q

Question 425



Which of the following is not supported by AWS Import/Export?

A. Import to Amazon S3

B. Export from Amazon S3

C. Import to Amazon EBS

D. Import to Amazon Glacier

E. Export from Amazon Glacier

A

Answer: E

The AWS documentation mentions the following: AWS Import/Export accelerates transferring data between the AWS cloud and portable storage devices that you mail to us. AWS Import/Export is a good choice if you have 16 terabytes (TB) or less of data to import into Amazon Simple Storage Service or Amazon Elastic Block Store (Amazon EBS). You can also export data from Amazon S3 with AWS Import/Export. Before Amazon Glacier data can be exported, it needs to be restored to Amazon S3 using the S3 lifecycle restore feature.

For more information on AWS Import/Export, please refer to the below links:
https://docs.aws.amazon.com/amazonglacier/latest/dev/uploading-an-archive.html
http://docs.aws.amazon.com/AWSImportExport/latest/DG/whatisdisk.html

30
Q

Question 426



Which of the following programming languages have an officially supported AWS SDK? Select 2 options.

A. PHP

B. Pascal

C. Java

D. SQL

E. Perl

A

Answer: A, C

This is as per the AWS documentation. For more information on the available AWS toolkits, please refer to the below URL: https://aws.amazon.com/tools/

31
Q

Question 427



When using the following AWS services, which should be implemented in multiple Availability Zones for high availability solutions? Choose 2 answers from the options below.

A. Amazon DynamoDB

B. Amazon Elastic Compute Cloud (EC2)

C. Amazon Elastic Load Balancing

D. Amazon Simple Storage Service (S3)

A

Answer: B, C

The AWS documentation shows how the ELB and EC2 instances are set up for high availability: the ELB is placed in front of the instances, and the instances are placed in different AZs. For more information on the ELB, please visit the below URL: https://aws.amazon.com/elasticloadbalancing/

Option A is wrong because the service runs across Amazon’s proven, high-availability data centers. The service replicates data across three facilities in an AWS Region to provide fault tolerance in the event of a server failure or Availability Zone outage.

Option D is wrong because Amazon S3 Standard and Standard-IA redundantly store your objects on multiple devices across multiple facilities in an AWS Region. The service is designed to sustain concurrent device failures by quickly detecting and repairing any lost redundancy.

32
Q

Question 428



Which of the following statements are true with regards to EBS Volumes. Choose 3 correct answers from the options given below

A. EBS Volumes are automatically replicated within that zone to prevent data loss due to failure of any single hardware component

B. EBS Volumes can be attached to any EC2 Instance in any AZ.

C. After you attach a volume, it appears as a native block device similar to a hard drive or other physical device.

D. An EBS volume can be attached to only one instance at a time

A

Answer: A, C, D

When you create an EBS volume in an Availability Zone, it is automatically replicated within that zone to prevent data loss due to failure of any single hardware component. After you create a volume, you can attach it to any EC2 instance in the same Availability Zone. After you attach a volume, it appears as a native block device similar to a hard drive or other physical device. At that point, the instance can interact with the volume just as it would with a local drive; the instance can format the EBS volume with a file system, such as ext3, and then install applications. An EBS volume can be attached to only one instance at a time within the same Availability Zone. However, multiple volumes can be attached to a single instance.

Option B is invalid because you can attach EBS Volumes only to EC2 instances in the same Availability Zone.

For more information on EBS Volumes, please visit the below URL:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumes.html

33
Q

Question 429



You are planning on hosting a static website on an EC2 Instance. Which of the below aspects can be used to create a highly available environment? Choose 3 answers from the options given below.

A. An auto scaling group to recover from EC2 instance failures

B. Elastic Load Balancer

C. An SQS queue

D. Multiple Availability Zones

A

Answer: A, B, D

The diagram from the AWS documentation shows an example of a highly available architecture for hosting EC2 Instances. Here you have:

1) The ELB, which is placed in front of the users and helps direct traffic to the EC2 Instances.
2) The EC2 Instances, which are placed as part of an Auto Scaling group.
3) Multiple subnets, which are mapped to multiple Availability Zones.

For a static web site, SQS is not required to build such an environment. If you have a system such as an order processing system, which has that sort of queuing of requests, then that could be a candidate for using SQS queues.

For more information on high availability, please visit the below URL:
https://media.amazonwebservices.com/architecturecenter/AWS_ac_ra_ftha_04.pdf

34
Q

Question 430



Which of the following services do not natively encrypt data at rest within an AWS region?
(Choose two.)

A. AWS Storage Gateway

B. Amazon DynamoDB

C. Amazon CloudFront

D. Amazon Glacier

E. Amazon Simple Queue Service

A

Answer: C, E

CloudFront and SQS do not have Encryption at Rest. All remaining options have Encryption at Rest. This is clearly given in the AWS documentation

For information on Amazon Glacier, please refer to the below link:
https://aws.amazon.com/glacier/faqs/

For information on Amazon Storage gateways, please refer to the below link:
https://aws.amazon.com/storagegateway/faqs/

On Feb 8, 2018, Amazon announced encryption at rest for DynamoDB.

For information on Amazon DynamoDB encryption at rest, please refer to the below link:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/EncryptionAtRest.html

35
Q

Question 431



Amazon Redshift uses which block size for its columnar storage?

A. 2KB

B. 8KB

C. 16KB

D. 32KB

E. 1024KB

A

Answer: E

Columnar storage for database tables is an important factor in optimizing analytic query performance because it drastically reduces the overall disk I/O requirements and reduces the amount of data you need to load from disk. Typical database block sizes range from 2 KB to 32 KB. Amazon Redshift uses a block size of 1 MB, which is more efficient and further reduces the number of I/O requests needed to perform any database loading or other operations that are part of query execution.

For more information on Redshift columnar storage, please visit the below URL:
http://docs.aws.amazon.com/redshift/latest/dg/c_columnar_storage_disk_mem_mgmnt.html

36
Q

Question 432



Which procedure for backing up a relational database on EC2 that is using a set of RAIDed EBS volumes for storage minimizes the time during which the database cannot be written to and results in a consistent backup?

A. 1. Detach EBS volumes, 2. Start EBS snapshot of volumes, 3. Re-attach EBS volumes

B. 1. Stop the EC2 Instance. 2. Snapshot the EBS volumes

C. 1. Suspend disk I/O, 2. Create an image of the EC2 Instance, 3. Resume disk I/O

D. 1. Suspend disk I/O, 2. Start EBS snapshot of volumes, 3. Resume disk I/O

E. 1. Suspend disk I/O, 2. Start EBS snapshot of volumes, 3. Wait for snapshots to complete, 4. Resume disk I/O

A

Answer: E

The AWS documentation mentions the following when considering snapshots of EBS volumes in a RAID configuration: When you take a snapshot of an attached Amazon EBS volume that is in use, the snapshot excludes data cached by applications or the operating system. For a single EBS volume, this is often not a problem. However, when cached data is excluded from snapshots of multiple EBS volumes in a RAID array, restoring the volumes from the snapshots can degrade the integrity of the array. When creating snapshots of EBS volumes that are configured in a RAID array, it is critical that there is no data I/O to or from the volumes when the snapshots are created. RAID arrays introduce data interdependencies and a level of complexity not present in a single EBS volume configuration. For more information on this,

please refer to the below link:
https://aws.amazon.com/premiumsupport/knowledge-center/snapshot-ebs-raid-array/

37
Q

Question 433



For which of the following use cases are Simple Workflow Service (SWF) and Amazon EC2 an appropriate solution? Choose 2 answers

A. Using as an endpoint to collect thousands of data points per hour from a distributed fleet of sensors

B. Managing a multi-step and multi-decision checkout process of an e-commerce website

C. Orchestrating the execution of distributed and auditable business processes

D. Using as an SNS (Simple Notification Service) endpoint to trigger execution of video transcoding jobs

E. Using as a distributed session store for your web application

A

Answer: B, C

The AWS documentation mentions the following on the AWS Simple Workflow Service: The Amazon Simple Workflow Service (Amazon SWF) makes it easier to develop asynchronous and distributed applications by providing a programming model and infrastructure for coordinating distributed components and maintaining their execution state in a reliable way. By relying on Amazon SWF, you are freed to focus on building the aspects of your application that differentiate it. For more information on the Simple Workflow Service,

please refer to the below link:
http://docs.aws.amazon.com/amazonswf/latest/developerguide/swf-dg-intro-to-swf.html

38
Q

Question 434



An instance transitions through several states over the course of its lifecycle. Choose 3 options which are valid states of the instance lifecycle.

A. rebooting

B. pending

C. running

D. Shutdown

A

Answer: A, B, C

The question refers to the different instance states in the EC2 instance lifecycle. For more information on instance states,

please visit the url
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-lifecycle.html

39
Q

Question 435



Which of the following can be used as an origin server in CloudFront? Choose 3 answers from the options given below.

A. A web server running on EC2

B. A web server running in your own datacenter

C. An RDS instance

D. An Amazon S3 bucket

A

Answer: A, B, D

Currently CloudFront supports the following types of origins: S3 buckets - When you use Amazon S3 as an origin for your distribution, you place any objects that you want CloudFront to deliver in an Amazon S3 bucket. Custom origin - A custom origin is an HTTP server, for example, a web server. The HTTP server can be an Amazon EC2 instance or an HTTP server that you manage privately. When you use a custom origin, you specify the DNS name of the server, along with the HTTP and HTTPS ports and the protocol that you want CloudFront to use when fetching objects from your origin.

For more information on Cloudfront Distributions, please visit the url
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/DownloadDistS3AndCustomOrigins.html

40
Q

Question 436



A company has configured and peered two VPCs: VPC-1 and VPC-2. VPC-1 contains only private subnets, and VPC-2 contains only public subnets. The company uses a single AWS Direct Connect connection and private virtual interface to connect their on-premises network with VPC-1. Which two methods increase the fault tolerance of the connection to VPC-1? (Choose two.)

A. Establish a hardware VPN over the internet between VPC-2 and the on-premises network.

B. Establish a hardware VPN over the internet between VPC-1 and the on-premises network.

C. Establish a new AWS Direct Connect connection and private virtual interface in the same region as VPC-2.

D. Establish a new AWS Direct Connect connection and private virtual interface in a different AWS region than VPC-1.

E. Establish a new AWS Direct Connect connection and private virtual interface in the same AWS region as VPC-1

A

Answer: B, E

Having a VPN connection is considered a backup to a Direct Connect connection.

Please find the below article on configuring a VPN connection as a backup https://aws.amazon.com/premiumsupport/knowledge-center/configure-vpn-backup-dx/

One can also have another Direct Connect connection, so that if one goes down, the other one would still be active. This needs to be in the same region as VPC-1.

41
Q

Question 437



By default, what happens to data when an EC2 instance terminates? Select 3 options.

A. For EBS backed AMI, the root EBS volume with the operating system is preserved by default.

B. For EBS backed AMI, any volume attached apart from the OS volume is preserved

C. All the snapshots of the EBS volume with operating system is preserved

D. For S3 backed AMI, all the data in the local (ephemeral) hard drive is deleted

A

Answer: B, C, D

Option B is correct because when an instance is terminated, any additionally attached volume remains unless you specifically choose to delete it. The root volume created with the instance is deleted on termination by default, but when you add a new volume, the “Delete on termination” flag is unchecked by default. So unless you check this flag, the volume will remain.

Option C is correct because this is the whole idea of snapshots to remain even if the volume or instance is deleted.

Option D is correct because ephemeral storage is temporary storage by default and gets deleted when the system is terminated.

For more information on EBS volumes, please visit the link -
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumes.html

42
Q

Question 438



When storing sensitive data on the cloud which of the below options should be carried out on AWS? Choose 3 answers from the options given below.

A. With AWS you do not need to worry about encryption

B. Enable EBS Encryption

C. Encrypt the file system on an EBS volume using Linux tools

D. Enable S3 Encryption

A

Answer: B, C, D

Amazon EBS encryption offers you a simple encryption solution for your EBS volumes without the need for you to build, maintain, and secure your own key management infrastructure. When you create an encrypted EBS volume and attach it to a supported instance type, the following types of data are encrypted:

- Data at rest inside the volume

- All data moving between the volume and the instance

- All snapshots created from the volume

For more information on EBS Encryption, please refer to the below link
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html

Data protection refers to protecting data while in-transit (as it travels to and from Amazon S3) and at rest (while it is stored on disks in Amazon S3 data centers). You can protect data in transit by using SSL or by using client-side encryption.

For more information on S3 Encryption, please refer to the below link
http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingEncryption.html
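A minimal boto3 sketch (the bucket, key, and body are placeholders) of server-side encryption at upload; SSL on the connection covers the in-transit half:

```python
import boto3

s3 = boto3.client("s3")

# SSE-S3 (AES256) encrypts the object at rest; boto3 uses HTTPS by
# default, which protects the same data in transit.
s3.put_object(
    Bucket="my-bucket",          # placeholder bucket
    Key="sensitive/report.csv",  # placeholder key
    Body=b"confidential data",   # placeholder body
    ServerSideEncryption="AES256",
)
```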

43
Q

Question 439

When using the following AWS services, which should be implemented in multiple Availability Zones for high availability solutions? Choose 2 answers

A. Amazon DynamoDB

B. Amazon Elastic Compute Cloud (EC2)

C. Amazon Elastic Load Balancing

D. Amazon Simple Notification Service (SNS)

E. Amazon Simple Storage Service (S3)

A

Answer: B, C

This is a sample architecture using Elastic Load Balancing, EC2, and Auto Scaling. Here the web servers are scaled on demand using Auto Scaling. They are then placed behind an ELB, which is used to distribute traffic amongst the instances. Also, the web servers are placed across multiple Availability Zones for fault tolerance.

For more information on best practices for web hosting, please refer to the below URL:
https://d0.awsstatic.com/whitepapers/aws-web-hosting-best-practices.pdf

44
Q

Question 440

What is the default period for EC2 CloudWatch data with detailed monitoring disabled?

A. One second

B. Five seconds

C. One minute

D. Three minutes

E. Five minutes

A

Answer: E

In Amazon CloudWatch, for basic monitoring of EC2 instances, the important metrics are collected at five-minute intervals and stored for two weeks:

  • CPU load
  • disk I/O
  • network I/O

For more information on Amazon Cloudwatch EC2 basic monitoring, please visit
https://aws.amazon.com/blogs/aws/amazon-cloudwatch-basic-monitoring-for-ec2-at-no-charge/

45
Q

Question 441

You are a solutions architect working for a large digital media company. Your company is migrating their production estate to AWS and you are in the process of setting up access to the AWS console using Identity Access Management (IAM). You have created 5 users for your system administrators. What further steps do you need to take to enable your system administrators to get access to the AWS console?

A. Generate an Access Key ID & Secret Access Key, and give these to your system administrators.

B. Enable multi-factor authentication on their accounts and define a password policy.

C. Generate a password for each user created and give these passwords to your system administrators.

D. Give the system administrators the secret access key and access key id, and tell them to use these credentials to log in to the AWS console.

A

Answer: C

In order to allow the users to log into the console, you need to provide a password for the users. For more information on how to allow users to sign into an account,

please refer to the below URL:
http://docs.aws.amazon.com/IAM/latest/UserGuide/getting-started_how-users-sign-in.html
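A hedged boto3 sketch (the user names and password are placeholders) of granting console access to the five administrators by creating login profiles:

```python
import boto3

iam = boto3.client("iam")

# A login profile is what gives an existing IAM user a console password.
for user in ["sysadmin1", "sysadmin2", "sysadmin3", "sysadmin4", "sysadmin5"]:
    iam.create_login_profile(
        UserName=user,                 # placeholder user names
        Password="Initial-Passw0rd!",  # placeholder initial password
        PasswordResetRequired=True,    # force a change at first sign-in
    )
```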

46
Q

Question 442



Which technique can be used to integrate AWS IAM (Identity and Access Management) with an on-premises LDAP (Lightweight Directory Access Protocol) directory service?

A. Use an IAM policy that references the LDAP account identifiers and the AWS credentials.

B. Use SAML (Security Assertion Markup Language) to enable single sign-on between AWS and LDAP.

C. Use AWS Security Token Service from an identity broker to issue short-lived AWS credentials.

D. Use IAM roles to automatically rotate the IAM credentials when LDAP credentials are updated.

E. Use the LDAP credentials to restrict a group of users from launching specific EC2 instance types.

A

Answer: C

An identity broker authenticates users against the on-premises LDAP directory and then calls the AWS Security Token Service to issue short-lived AWS credentials on their behalf.

For more information on AWS identity federation, please refer to the below URL:
https://aws.amazon.com/blogs/aws/aws-identity-and-access-management-now-with-identity-federation/

47
Q

Question 443



Your Fortune 500 company has undertaken a TCO analysis evaluating the use of Amazon S3 versus acquiring more hardware. The outcome was that all employees would be granted access to use Amazon S3 for storage of their personal documents. Which of the following will you need to consider so you can set up a solution that incorporates single sign-on from your corporate AD or LDAP directory and restricts access for each user to a designated user folder in a bucket? Choose three answers from the options given below.

A. Setting up a federation proxy or identity provider

B. Using AWS Security Token Service to generate temporary tokens

C. Tagging each folder in the bucket

D. Configuring an IAM role

E. Setting up a matching IAM user for every user in your corporate directory that needs access to a folder in the bucket

A

Answer: A, B, D

The diagram shows how the setup is done using the Security Token Service to achieve integration between AWS and an on-premises Active Directory infrastructure. You need to have an identity provider such as Active Directory Federation Services. The Security Token Service is used to generate temporary credentials. These credentials are then mapped to corresponding IAM roles.

For more information please refer to the below link:
http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html

48
Q

Question 444



Which of the following are true regarding encrypted Amazon Elastic Block Store (EBS) volumes? Choose 2 answers

A. Supported on all Amazon EBS volume types

B. Snapshots are automatically encrypted

C. Available to all instance types

D. Existing volumes can be encrypted

E. shared volumes can be encrypted

A

Answer: A, B

Please note the keyword “encrypted” in the question.

Option C is wrong because encrypted EBS volumes are available only on supported instance types, not on all instance types.

Option D is wrong because existing volumes cannot be directly encrypted.

Option E is wrong because shared volumes cannot be encrypted.

Encryption is supported on all of the EBS volume types.
For more information on EBS encryption, please visit the link:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html

49
Q

Question 445



For which of the following use cases are Simple Workflow Service (SWF) and Amazon EC2 an appropriate solution? Choose 2 answers

A. Using as an endpoint to collect thousands of data points per hour from a distributed fleet of sensors

B. Managing a multi-step and multi-decision checkout process of an e-commerce website

C. Orchestrating the execution of distributed and auditable business processes

D. Using as an SNS (Simple Notification Service) endpoint to trigger execution of video transcoding jobs

E. Using as a distributed session store for your web application

A

Answer: B, C

Amazon Simple Workflow Service (SWF) is a web service that makes it easy to coordinate work across distributed application components. Amazon SWF enables applications for a range of use cases, including media processing, web application back-ends, business process workflows, and analytics pipelines, to be designed as a coordination of tasks.

Option A is wrong because collection of data points is normally done via Amazon Kinesis. Option B is correct because in SWF you can create multi-step and multi-decision processes, such as managing approvals during a checkout workflow. Option C is correct because business processes can be orchestrated in SWF. Option D is wrong because video transcoding jobs generally rely on SQS rather than SWF. Option E is wrong because a distributed session store calls for a caching solution, not SWF.

For more information on AWS SWF, please visit the URL:
https://aws.amazon.com/swf/faqs/

For more information on aws SWF - Please visit the URL -
https://aws.amazon.com/swf/faqs/

50
Q

Question 446



You need to configure an Amazon S3 bucket to serve static assets for your public-facing web application. Which methods ensure that all objects uploaded to the bucket are set to public read? Choose 2 answers

A. Set permissions on the object to public read during upload.

B. Configure the bucket ACL to set all objects to public read.

C. Configure the bucket policy to set all objects to public read.

D. Use AWS Identity and Access Management roles to set the bucket to public read.

E. Amazon S3 objects default to public read, so no action is needed.

A

Answer: A, C

To set permissions on buckets and objects, you can grant permissions on the bucket beforehand, or you can set the permissions on each object as it is uploaded to S3. Option B is incorrect because a bucket ACL cannot set all objects to public read. Option D is incorrect because even though IAM creates identities, it cannot be used to give public read to a bucket. Option E is incorrect because public read is not set by default. To grant public read, go to the bucket's Permissions section, click on Add more permissions, choose the Grantee as Everyone, ensure the permissions are given, and then click on the Save button.

For more information on access control, please visit the link:
http://docs.aws.amazon.com/AmazonS3/latest/dev/s3-access-control.html
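A minimal boto3 sketch of both working methods (the bucket name, key, and body are placeholders): an object-level ACL at upload time, and a bucket policy granting public read on all objects:

```python
import boto3
import json

s3 = boto3.client("s3")

# Method 1: set the object's ACL to public-read during upload.
s3.put_object(
    Bucket="my-assets", Key="logo.png",  # placeholders
    Body=b"...", ACL="public-read",
)

# Method 2: a bucket policy that makes every object publicly readable.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::my-assets/*",  # placeholder bucket ARN
    }],
}
s3.put_bucket_policy(Bucket="my-assets", Policy=json.dumps(policy))
```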

51
Q

Question 447



Which of the following are valid statements about Amazon S3? Choose 2 options.

A. S3 provides read-after-write consistency for any type of PUT or DELETE.

B. Consistency is not guaranteed for any type of PUT or DELETE.

C. A successful response to a PUT request only occurs when a complete object is saved.

D. Partially saved objects are immediately readable with a GET after an overwrite PUT.

E. S3 provides eventual consistency for overwrite PUTS and DELETES.

A

Answer: C, E

The S3 documentation clearly describes the read and write consistency model for objects in S3: a successful response to a PUT request occurs only when a complete object is saved, and S3 provides eventual consistency for overwrite PUTs and DELETEs. Based on this information, options C and E are the right options.

For more information on S3, please visit the link -
https://aws.amazon.com/s3/faqs/

52
Q

Question 448



Which of the following are characteristics of a standard reserved instance? Choose 3 answers

A. It can be migrated across Availability Zones

B. It is specific to an Amazon Machine Image (AMI)

C. It can be applied to instances launched by Auto Scaling

D. It is specific to an instance Type

E. It can be used to lower Total Cost of Ownership (TCO) of a system

A

Answer: A, C, E

Option A is correct because you can migrate reserved instances between AZs.

Please refer to the link for confirmation of this:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ri-modifying.html

Option D is incorrect because a reservation is specific to an instance family; the instance type within that family can be changed. Also, when you create a reserved instance, you can see the instance type as an option.

Option E is correct, because reserved instances can be used to lower costs. Reserved Instances provide you with a discount on usage of EC2 instances, and a capacity reservation when they are applied to a specific Availability Zone, giving you additional confidence that you will be able to launch the instances you have reserved when you need them.

For more information on reserved instances, please visit the link -
https://aws.amazon.com/ec2/pricing/reserved-instances/

53
Q

Question 449



If you’re unable to connect via SSH to your EC2 instance, which of the following should you check and possibly correct to restore connectivity?

A. Adjust Security Group to permit egress traffic over TCP port 443 from your IP.

B. Configure the IAM role to permit changes to security group settings.

C. Modify the instance security group to allow ingress of ICMP packets from your IP.

D. Adjust the instance’s Security Group to permit ingress traffic over port 22 from your IP.

E. Apply the most recently released Operating System security patches.

A

Answer: D

A security group acts as a virtual firewall that controls the traffic for one or more instances. When you launch an instance, you associate one or more security groups with the instance. You add rules to each security group that allow traffic to or from its associated instances. For connecting via SSH on EC2, you need to ensure that port 22 is open on the security group for the EC2 instance.

Option A is wrong because port 443 is for HTTPS, not for SSH.

Option B is wrong because an IAM role is not pertinent to security group settings.

Option C is wrong because SSH uses TCP port 22, not ICMP.

Option E is wrong because the patches on the system do not affect SSH connectivity.

So in your EC2 Dashboard, go to Security groups, choose the relevant security group. Then click on Inbound rules and ensure there is a rule for TCP on port 22.

For more information on EC2 Security groups, please visit the url -
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html
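A hedged boto3 sketch (the group ID and source CIDR are placeholders) of the inbound rule described above; restricting the source to your own IP is preferable to 0.0.0.0/0:

```python
import boto3

ec2 = boto3.client("ec2")

# Allow inbound SSH (TCP port 22) from a single source address.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder security group ID
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{"CidrIp": "203.0.113.25/32"}],  # placeholder source IP
    }],
)
```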

54
Q

Question 450



An Auto-Scaling group spans 3 AZs and currently has 4 running EC2 instances. When Auto Scaling needs to terminate an EC2 instance by default, Auto Scaling will: Choose 2 answers.

A. Allow at least five minutes for Windows/Linux shutdown scripts to complete, before terminating the instance.

B. Terminate the instance with the least active network connections. If multiple instances meet this criterion, one will be randomly selected.

C. Send a SNS notification, if configured to do so.

D. Terminate an instance in the AZ which currently has 2 running EC2 instances.

E. Randomly select one of the 3 AZs, and then terminate an instance in that AZ.

A

Answer: C, D

In the above scenario, you would probably have 2 instances running in one AZ and one each running in the other AZs. The diagram in the documentation shows how instances are selected for termination and the policy used by Auto Scaling: it selects the AZ with the most running instances, hence option D is correct and options A, B, and E are wrong. Also, Auto Scaling allows for notification via SNS, so if that is enabled, it will send out the notification accordingly.

For more information on Auto scaling Termination, please visit the link:
http://docs.aws.amazon.com/autoscaling/latest/userguide/as-instance-termination.html

55
Q

Question 451



In order to optimize performance for a compute cluster that requires low inter-node latency, which of the following features should you use?

A. Multiple Availability Zones

B. AWS Direct Connect

C. EC2 Dedicated Instances

D. Placement Groups

E. VPC private subnets

A

Answer: D

Option A is wrong because multiple AZs are used to distribute your AWS resources and are not connected to low-latency clustering. Option B is wrong because Direct Connect is used to connect on-premises data centers to AWS. Option C is wrong because dedicated instances do not guarantee low latency. Option E is wrong because VPC private subnets do not guarantee low latency. A placement group is a logical grouping of instances within a single Availability Zone. Placement groups are recommended for applications that benefit from low network latency, high network throughput, or both.

For more information on placement groups please visit
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
56
Q

Question 452



A company is preparing to give AWS Management Console access to developers. Company policy mandates identity federation and role-based access control. Roles are currently assigned using groups in the corporate Active Directory. Which combination of the following will give developers access to the AWS console? Choose 2 answers

A. AWS Directory Service AD Connector

B. AWS Directory Service Simple AD

C. AWS Identity and Access Management groups

D. AWS Identity and Access Management roles

E. AWS Identity and Access Management users

A

Answer: A,D

To enable a trust relationship between your on-premises Active Directory and AWS Directory Service, you need to create a new IAM role. After that, you assign Active Directory users or groups to those IAM roles. If suitable roles already exist, you can assign Active Directory users or groups to the existing IAM roles.

Find details below:
https://aws.amazon.com/blogs/security/how-to-connect-your-on-premises-active-directory-to-aws-using-ad-connector/

AWS Directory Service provides multiple ways to use Microsoft Active Directory with other AWS services. You can choose the directory service with the features you need at a cost that fits your budget. Use Simple AD if you need an inexpensive Active Directory-compatible service with the common directory features. Select AWS Directory Service for Microsoft Active Directory (Enterprise Edition) for a feature-rich managed Microsoft Active Directory hosted on the AWS cloud. The third option, AD Connector, lets you simply connect your existing on-premises Active Directory to AWS.

For more information on the AD Connector, please visit
http://docs.aws.amazon.com/directoryservice/latest/admin-guide/what_is.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
57
Q

Question 453



Which of the following are true regarding encrypted Amazon Elastic Block Store (EBS) volumes? Choose two answers from the options given below

A. Supported on all Amazon EBS volume types

B. Snapshots are automatically encrypted

C. Available to all instance types

D. Existing volumes can be encrypted

E. Shared volumes can be encrypted

A

Answer: A, B

The AWS Documentation mentions the following on EBS volumes: encryption is available for all volume types. You can create encrypted General Purpose SSD (gp2), Provisioned IOPS SSD (io1), Throughput Optimized HDD (st1), and Cold HDD (sc1) volumes up to 16 TiB in size. The snapshots of encrypted EBS volumes are automatically encrypted, as stated in the AWS documentation.
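As a brief boto3 sketch, creating an encrypted volume and snapshotting it (Availability Zone and size are placeholders):

import boto3

ec2 = boto3.client("ec2")

# Create an encrypted gp2 volume; snapshots taken from it are encrypted automatically.
volume = ec2.create_volume(
    AvailabilityZone="us-west-2a",
    Size=100,  # GiB
    VolumeType="gp2",
    Encrypted=True,
)
snapshot = ec2.create_snapshot(VolumeId=volume["VolumeId"])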

For more information on EBS Volumes, please refer to the below URL:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEBS.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
58
Q

Question 454



Which of the following are characteristics of Amazon VPC subnets? Choose 2 answers from the options given below

A. Each subnet spans at least 2 Availability Zones to provide a high-availability environment.

B. Each subnet maps to a single Availability Zone.

C. CIDR block mask of /25 is the smallest range supported.

D. By default, all subnets can route between each other, whether they are private or public.

E. Instances in a private subnet can communicate with the Internet only if they have an Elastic IP.

A

Answer: B, D

A subnet can map to only one Availability Zone, so between Options A and B, B is correct. The smallest CIDR block you can assign to a subnet is /28, so Option C is wrong. Option E is wrong because EC2 instances in a private subnet cannot route traffic to the internet even with an Elastic IP, since the subnet's route table has no internet gateway route. By default, all subnets within a VPC can route between each other, so Option D is correct.
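To illustrate, a boto3 sketch of creating a subnet that maps to a single Availability Zone (IDs and CIDR are placeholders):

import boto3

ec2 = boto3.client("ec2")

# A subnet lives in exactly one Availability Zone; /28 is the smallest allowed mask.
ec2.create_subnet(
    VpcId="vpc-0123456789abcdef0",
    CidrBlock="10.0.1.0/28",
    AvailabilityZone="us-west-2a",
)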

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
59
Q

Question 455



Per the AWS Acceptable Use Policy, penetration testing of EC2 instances:

A. May be performed by AWS, and will be performed by AWS upon customer request.

B. May be performed by AWS, and is periodically performed by AWS.

C. Are expressly prohibited under all circumstances.

D. May be performed by the customer on their own instances with prior authorization from AWS.

E. May be performed by the customer on their own instances, only if performed from EC2 instances.

A

You need to obtain prior authorization from AWS before performing a penetration test on EC2 instances.

Please refer to the below url for more details:
https://aws.amazon.com/security/penetration-testing/

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
60
Q

Question 456



How can you secure data at rest on an EBS volume?

A. Attach the volume to an instance using EC2’s SSL interface.

B. Write the data randomly instead of sequentially.

C. Encrypt the volume using the S3 server-side encryption service.

D. Create an IAM policy that restricts read and write access to the volume.

E. Use an encrypted file system on top of the EBS volume.

A

Answer: E

In order to secure data at rest on an EBS volume, you either encrypt the volume when it is created or encrypt the data at the file-system level on top of the volume, which is what Option E describes.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
61
Q

Question 457



If you want to launch Amazon Elastic Compute Cloud (EC2) instances and assign each instance a private IP address in a predetermined range, you should: (choose one of the correct answers below)

A. Launch the instance from a private Amazon Machine Image (AMI).

B. Assign a group of sequential Elastic IP address to the instances.

C. Launch the instances in the Amazon Virtual Private Cloud (VPC).

D. Launch the instances in a Placement Group.

E. Use standard EC2 instances since each instance gets a private Domain Name Service (DNS) already.

A

Answer: C

Launching the instances in a VPC is the correct choice, because a VPC lets you define your own subnets, and you can assign each instance a private IP address from the subnet's predetermined range when it is launched.
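A minimal boto3 sketch of launching an instance with a specific private IP from the subnet's range (all IDs and addresses are placeholders):

import boto3

ec2 = boto3.client("ec2")

# Assign a predetermined private IP from the subnet's CIDR range at launch.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",
    PrivateIpAddress="10.0.1.25",
)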

For more information on private IP addresses, please refer to the below URL:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-instance-addressing.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
62
Q

Question 458



Which of the following notification endpoints or clients are supported by Amazon Simple Notification Service? Choose 2 answers from the options below.

A. Email

B. CloudFront distribution

C. File Transfer Protocol

D. Short Message Service

E. Simple Network Management Protocol

A

Answer: A, D

When you create a subscription in SNS, the available protocols include HTTP/HTTPS, Email, Email-JSON, Amazon SQS, AWS Lambda, and SMS. Email and Short Message Service are among them, so Options A and D are correct.
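For example, subscribing an email address and a phone number to a topic with boto3 (topic ARN, address and number are placeholders):

import boto3

sns = boto3.client("sns")

topic_arn = "arn:aws:sns:us-west-2:123456789012:alerts"

# Email and SMS are both supported subscription protocols.
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="ops@example.com")
sns.subscribe(TopicArn=topic_arn, Protocol="sms", Endpoint="+15555550100")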

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
63
Q

Question 459



Which of the following instance types are available as Amazon EBS-backed only? Choose 2 answers from the options below.

A. General purpose T2

B. General purpose M3

C. Compute-optimized C4

D. Compute-optimized C3

E. Storage-optimized I2

A

Answer: A, C

For details on all instance types, please visit the url -
https://aws.amazon.com/ec2/instance-types/

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
64
Q

Question 460



There is an urgent requirement to monitor a few database metrics for a database hosted on AWS and send notifications. Which AWS services can accomplish this requirement? Choose 2 answers from the options given below.

A. Amazon Simple Email Service

B. Amazon CloudWatch

C. Amazon Simple Queue Service (SQS)

D. Amazon Route 53

E. Amazon Simple Notification Service (SNS)

A

Answer: B, E

Amazon CloudWatch will be used to monitor the IOPS metrics from the RDS instance, and Amazon Simple Notification Service will be used to send the notification if any alarm is triggered.
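A rough boto3 sketch of a CloudWatch alarm on an RDS metric that notifies an SNS topic (DB identifier, threshold and topic ARN are placeholders):

import boto3

cloudwatch = boto3.client("cloudwatch")

# Raise an alarm when average write IOPS stays above the threshold for 5 minutes.
cloudwatch.put_metric_alarm(
    AlarmName="rds-high-write-iops",
    Namespace="AWS/RDS",
    MetricName="WriteIOPS",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "mydb"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1000.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-west-2:123456789012:db-alerts"],
)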

For more information on CloudWatch and SNS, please visit the below URLs:

https://aws.amazon.com/cloudwatch/
https://aws.amazon.com/sns/

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
65
Q

Question 461

A customer's nightly EMR job processes a single 2-TB data file stored on S3. The EMR job runs on 2 on-demand core nodes and 3 on-demand task nodes. Which of the following may help reduce the EMR job completion time? Choose 2 answers from the options below.

A. Use 3 spot instances rather than 3 on-demand instances for the task nodes.

B. Change the input split size in the MapReduce job configuration

C. Use a bootstrap action to present the S3 bucket as a local filesystem

D. Launch the core nodes and the task nodes with a VPC

E. Adjust the number of simultaneous mapper tasks

A

Answer: B, E

As per the AWS documentation, if you have too few tasks, then you have nodes sitting idle. You can increase the number of simultaneous mapper tasks and reduce the input split size in the MapReduce job configuration so that more tasks run in parallel.

For more information on EMR tasks please visit the below URL:
http://docs.aws.amazon.com/emr/latest/DeveloperGuide/TaskConfiguration_H1.0.3.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
66
Q

Question 462

What combination of the following options will protect S3 objects from both accidental deletion and accidental overwriting? Choose 2 answers from the options below

A. Enable S3 versioning on the bucket

B. Access S3 data using only signed URL’s

C. Disable S3 delete using an IAM bucket policy

D. Enable S3 RRS

E. Enable MFA protected access

A

Answer: A, E

Versioning keeps multiple variants of an object so that accidental overwrites and deletions can be recovered, and MFA protected access requires additional authentication before object versions can be permanently deleted. This is clearly given in the AWS documentation:
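As a minimal boto3 sketch, versioning is switched on per bucket (the bucket name is a placeholder); MFA Delete additionally requires the root account's MFA device:

import boto3

s3 = boto3.client("s3")

# Enable versioning so overwrites and deletes can be rolled back.
s3.put_bucket_versioning(
    Bucket="my-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)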

For more information on S3 please visit the below URL:
https://aws.amazon.com/s3/faqs/

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
67
Q

Question 463

You have an application running in us-west-2 that requires 6 EC2 instances running at all times. With 3 AZ available in that region, which of the following deployments provides 100% fault tolerance if any single AZ in us-west-2 becomes unavailable? Choose 2 answers from the options below:

A. us-west-2a with 2 instances, us-west-2b with 2 instances, us-west-2c with 2 instances

B. us-west-2a with 3 instances, us-west-2b with 3 instances, us-west-2c with 0 instances

C. us-west-2a with 4 instances, us-west-2b with 2 instances, us-west-2c with 2 instances

D. us-west-2a with 6 instances, us-west-2b with 6 instances, us-west-2c with 0
instances

E. us-west-2a with 3 instances, us-west-2b with 3 instances, us-west-2c with 3
instances

A

Answer: D, E

If you read the question carefully, it asks about the scenario where only one AZ becomes unavailable at a time; the requirement is that 6 instances remain running even if any single AZ goes down. Options D and E both ensure that 6 instances are still running if any one AZ fails. Option A is invalid because if any one Availability Zone goes down, only 4 running instances remain. Option B is invalid because if either us-west-2a or us-west-2b goes down, fewer than 6 instances remain. Option C is invalid because if us-west-2a goes down, fewer than 6 instances remain.

For more information on building fault tolerant applications in AWS, please refer to the below link
http://media.amazonwebservices.com/AWS_Building_Fault_Tolerant_Applications.pdf

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
68
Q

Question 464

You are developing a highly available web application using stateless web servers. Which services are suitable for storing session state data? Choose 3 answers from the options below

A. Amazon RDS

B. Amazon ElastiCache

C. Amazon CloudWatch

D. Elastic Load Balancing (ELB)

E. Amazon DynamoDB

A

Answer: A, B, E

Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while managing time-consuming database administration tasks, freeing you up to focus on your applications and business. Amazon ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory data store or cache in the cloud. The service improves the performance of web applications by allowing you to retrieve information from fast, managed, in-memory data stores, instead of relying entirely on slower disk-based databases.

Amazon DynamoDB is a fast and flexible NoSQL database service for all applications that need consistent, single-digit millisecond latency at any scale. It is a fully managed cloud database and supports both document and key-value store models. Its flexible data model and reliable performance make it a great fit for mobile, web, gaming, ad tech, IoT, and many other applications.
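To illustrate the DynamoDB option, a sketch of storing session state keyed by session ID (the table and attribute names are hypothetical):

import time
import boto3

dynamodb = boto3.resource("dynamodb")
sessions = dynamodb.Table("sessions")  # hypothetical table keyed on session_id

# Persist session state outside the stateless web servers.
sessions.put_item(Item={
    "session_id": "abc123",
    "user_id": "user-42",
    "expires_at": int(time.time()) + 3600,
})

# Any web server behind the load balancer can read the same session back.
item = sessions.get_item(Key={"session_id": "abc123"}).get("Item")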

For more information on Amazon RDS please visit the below URL:
https://aws.amazon.com/rds/

For more information on Amazon ElastiCache please visit the below URL:
https://aws.amazon.com/elasticache/

For more information on Amazon DynamoDB please visit the below URL:
https://aws.amazon.com/dynamodb/

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
69
Q

Question 465

You are developing a highly available web application using stateless web servers. Which services are suitable for storing session state data? Choose 3 answers

A. Amazon CloudWatch

B. Amazon Relational Database Service (RDS)

C. Elastic Load Balancing

D. Amazon ElastiCache

E. AWS Storage Gateway

F. Amazon DynamoDB

A

Answer: B, D, F

Please find the AWS Documentation references for ElastiCache and DynamoDB below. Relational databases have always been a common store for session data. Amazon ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory data store or cache in the cloud. The service improves the performance of web applications by allowing you to retrieve information from fast, managed, in-memory data stores, instead of relying entirely on slower disk-based databases.

For more information on ElastiCache, please refer to the below link
https://aws.amazon.com/elasticache/

An example of managing session state via DynamoDB is given below
http://docs.aws.amazon.com/sdk-for-net/v2/developer-guide/dynamodb-session-net-sdk.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
70
Q

Question 466

A customer needs corporate IT governance and cost oversight of all AWS resources consumed by its divisions. The divisions want to maintain administrative control of the discrete AWS resources they consume and keep those resources separate from the resources of other divisions. Which of the following options, when used together will support the autonomy/control of divisions while enabling corporate IT to maintain governance and cost oversight? Choose two answers from the options given below

A. Use AWS Consolidated Billing and disable AWS root account access for the child accounts.

B. Enable IAM cross-account access for all corporate IT administrators in each child account.

C. Create separate VPCs for each division within the corporate IT AWS account.

D. Use AWS Consolidated Billing by creating AWS Organizations to link the divisions' accounts to a parent corporate account.

E. Write all child AWS CloudTrail and Amazon CloudWatch logs to each child account’s Amazon S3 ‘Log’ bucket.

A

Answer: B, D

Since the resources need to be kept separate and each division requires its own governance model, it is better to have a separate AWS account for each division. Each division's AWS account can then be linked to the main corporate account for consolidated billing by creating an AWS Organization. The IT administrators can then be granted access via cross-account IAM roles.

For more information on consolidated billing, please visit the below URL:
http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/consolidated-billing.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
71
Q

Question 467

Which of the following are use cases for Amazon DynamoDB? Choose 3 answers

A. Storing BLOB data.

B. Managing web sessions.

C. Storing JSON documents.

D. Storing metadata for Amazon S3 objects.

E. Running relational joins and complex updates.

F. Storing large amounts of infrequently accessed data

A

Answer: B, C, D

Amazon DynamoDB stores structured data, indexed by primary key, and allows low-latency read and write access to items ranging from 1 byte up to 400 KB. Amazon S3 stores unstructured blobs and is suited for storing large objects of up to 5 TB. DynamoDB is a good choice to store the metadata for a BLOB, such as name, date created, owner, and so on; the Binary Large OBject itself would be stored in S3.

For more information on Amazon Dynamo DB, please visit
https://aws.amazon.com/dynamodb/faqs/

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
72
Q

Question 468

A company is storing data on Amazon Simple Storage Service (S3). The company’s security policy mandates that data is encrypted at rest. Which of the following methods can achieve this? (Choose three.)

A. Use Amazon S3 server-side encryption with AWS Key Management Service managed keys.

B. Use Amazon S3 server-side encryption with customer-provided keys.

C. Use Amazon S3 server-side encryption with EC2 key pair.

D. Use Amazon S3 bucket policies to restrict access to the data at rest.

E. Encrypt the data on the client-side before ingesting to Amazon S3 using their own master key.

F. Use SSL to encrypt the data while in transit to Amazon S3.

A

Answer: A, B, E

One can encrypt data in an S3 bucket using both server-side encryption and client-side encryption. The following techniques are available (a brief sketch of the server-side options follows the list):

• Use Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)

• Use Server-Side Encryption with AWS KMS-Managed Keys (SSE-KMS)

• Use Server-Side Encryption with Customer-Provided Keys (SSE-C)

• Use Client-Side Encryption with an AWS KMS-Managed Customer Master Key (CMK)

• Use Client-Side Encryption Using a Client-Side Master Key
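As a rough boto3 sketch, server-side encryption can be requested per object at upload time (bucket, keys and KMS key ID are placeholders):

import boto3

s3 = boto3.client("s3")

# SSE-S3: Amazon S3 manages the encryption keys.
s3.put_object(Bucket="my-bucket", Key="doc1.txt", Body=b"data",
              ServerSideEncryption="AES256")

# SSE-KMS: encrypt with an AWS KMS-managed key.
s3.put_object(Bucket="my-bucket", Key="doc2.txt", Body=b"data",
              ServerSideEncryption="aws:kms",
              SSEKMSKeyId="1234abcd-12ab-34cd-56ef-1234567890ab")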

For more information on using encryption, please refer to the below URL:
http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingEncryption.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
73
Q

Question 469

You are using an m1.small EC2 Instance with one 300 GB EBS volume to host a relational database. You determined that write throughput to the database needs to be increased. Which of the following approaches can help achieve this? Choose 2 answers

A. Use an array of EBS volumes.

B. Enable Multi-AZ mode.

C. Place the instance in an Auto Scaling Groups

D. Add an EBS volume and place into RAID 5.

E. Increase the size of the EC2 Instance.

F. Put the database behind an Elastic Load Balancer.

A

Answer: A, E

The AWS Documentation mentions the following: with Amazon EBS, you can use any of the standard RAID configurations that you can use with a traditional bare-metal server, as long as that particular RAID configuration is supported by the operating system for your instance. This is because all RAID is accomplished at the software level.

For greater I/O performance than you can achieve with a single volume, RAID 0 can stripe multiple volumes together; for on-instance redundancy, RAID 1 can mirror two volumes together.
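A boto3 sketch of provisioning two extra volumes to stripe as RAID 0 inside the operating system (instance ID, AZ and sizes are placeholders):

import boto3

ec2 = boto3.client("ec2")

# Create and attach two volumes; the OS then stripes them together as RAID 0.
for device in ("/dev/sdf", "/dev/sdg"):
    vol = ec2.create_volume(AvailabilityZone="us-west-2a", Size=300,
                            VolumeType="gp2")
    ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])
    ec2.attach_volume(VolumeId=vol["VolumeId"],
                      InstanceId="i-0123456789abcdef0", Device=device)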

For more information on RAID configuration, please refer to the below URL:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/raid-config.html

In addition, a larger EC2 instance type provides more compute capacity and throughput, so increasing the size of the instance (Option E) also helps write performance.

For more information on Instance types, please refer to the below URL:
https://aws.amazon.com/ec2/instance-types/

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
74
Q

Question 470

You are designing a social media site and are considering how to mitigate distributed denial-of-service (DDoS) attacks. Which of the below are viable mitigation techniques? Choose 3 answers from the options below

A. Add multiple elastic network interfaces (ENIs) to each EC2 instance to increase the network bandwidth.

B. Use dedicated instances to ensure that each instance has the maximum performance possible.

C. Use an Amazon CloudFront distribution for both static and dynamic content.

D. Use an Elastic Load Balancer with Auto Scaling groups at the web and application tiers, restricting direct internet traffic to the Amazon Relational Database Service (RDS) tier.

E. Add Amazon CloudWatch alarms to look for high NetworkIn and CPU utilization.

F. Create processes and capabilities to quickly add and remove rules to the instance OS firewall.

A

Answer: C, D, E

The AWS documentation on DDoS best practices recommends minimizing the attack surface with CloudFront and load balancers, scaling to absorb attacks with Auto Scaling, and monitoring for abnormal traffic with CloudWatch, which corresponds to Options C, D and E.

For best practices against DDoS attacks, please visit the below link
https://d0.awsstatic.com/whitepapers/DDoS_White_Paper_June2015.pdf

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
75
Q

Question 471

In AWS, which security aspects are the customer’s responsibility? Choose 4 answers

A. Security Group and ACL (Access Control List) settings

B. Decommissioning storage devices

C. Patch management on the EC2 instance’s operating system

D. Life-cycle management of IAM credentials

E. Controlling physical access to compute resources

F. Encryption of EBS (Elastic Block Storage) volumes

A

Answer: A, C, D, F

Please review the shared responsibility model published by AWS:
https://aws.amazon.com/compliance/shared-responsibility-model/

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
76
Q

Question 472

A Solutions Architect is developing a document sharing application and needs a storage layer. The storage should provide automatic support for versioning so that users can easily roll back to a previous version or recover a deleted file. Which AWS service will meet the requirements?

A. Amazon S3

B. Amazon EBS

C. Amazon EFS

D. Amazon Storage Gateway VTL

A

Answer: A

Amazon S3 is a perfect storage layer for storing documents and other types of objects. Amazon S3 also offers versioning: versioning is enabled at the bucket level and can be used to recover prior versions of an object.

For more information on Amazon S3, please visit the following URL:
https://aws.amazon.com/s3/

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
77
Q

Question 473

You have an application running in us-west-2 that requires 6 EC2 Instances running at all times. With 3 Availability Zones in that region (us-west-2a, us-west-2b, us-west-2c), which of the following deployments provides fault tolerance if any Availability Zone in us-west-2 becomes unavailable? Choose 2 answers from the options given below

A. 2 EC2 Instances in us-west-2a, 2 EC2 Instances in us-west-2b, 2 EC2 Instances in us-west-2c

B. 3 EC2 Instances in us-west-2a, 3 EC2 Instances in us-west-2b, no EC2 Instances in us-west-2c

C. 4 EC2 Instances in us-west-2a, 2 EC2 Instances in us-west-2b, 2 EC2 Instances in us-west-2c

D. 6 EC2 Instances in us-west-2a, 6 EC2 Instances in us-west-2b, no EC2 Instances in us-west-2c

E. 3 EC2 Instances in us-west-2a, 3 EC2 Instances in us-west-2b, 3 EC2 Instances in us-west-2c

A

Answer: D, E

Option A is incorrect because if one AZ becomes unavailable, then you would only have 4 instances available which does not meet the requirement. Option B is incorrect because if either us-west-2a or us-west-2b becomes unavailable, then you would only have 3 instances available which does not meet the requirement. Option C is incorrect because if us-west-2a becomes unavailable, then you would only have 4 instances available which does not meet the requirement.

For more information on AWS Regions and Availability Zones, please visit the following URL:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.RegionsAndAvailabilityZones.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
78
Q

Question 474

An application allows a manufacturing site to upload files. Each 3 GB file is then processed to extract metadata, with the processing taking a few seconds per file. The frequency of updates is unpredictable: there may be no updates for hours, then several files uploaded concurrently. What architecture will address this workload most cost efficiently?

A. Use a Kinesis data delivery stream to store the file and use Lambda for processing

B. Use an SQS queue to store the file, which is then accessed by a fleet of EC2 Instances.

C. Store the file in an EBS volume which can then be accessed by another EC2 Instance for processing.

D. Store the file in an S3 bucket and use Amazon S3 event notification to invoke a Lambda function to process the file

A

Answer: D

You can create a Lambda function containing the code to process the file, and then use an S3 event notification to invoke the Lambda function whenever a file is uploaded to the bucket.
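A minimal boto3 sketch of wiring the bucket notification to a Lambda function (bucket name and function ARN are placeholders; the function's resource policy must also allow S3 to invoke it):

import boto3

s3 = boto3.client("s3")

# Invoke the Lambda function for every object created in the bucket.
s3.put_bucket_notification_configuration(
    Bucket="uploads-bucket",
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [{
            "LambdaFunctionArn": "arn:aws:lambda:us-west-2:123456789012:function:process-file",
            "Events": ["s3:ObjectCreated:*"],
        }]
    },
)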

For more information on Amazon S3 event notification, please visit the following URL:
https://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
79
Q

Question 475

A company is migrating an on-premise 10TB MySQL database to AWS. The company expects the database to quadruple in size and the business requirement is that replica lag must be kept under 100 milliseconds. Which Amazon RDS engine meets these requirements?

A. MySQL

B. Microsoft SQL Server

C. Oracle

D. Amazon Aurora

A

Answer: D

The requirements are supported by Amazon Aurora, as the AWS Documentation describes: Amazon Aurora (Aurora) is a fully managed, MySQL- and PostgreSQL-compatible relational database engine. It combines the speed and reliability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. It delivers up to five times the throughput of MySQL and up to three times the throughput of PostgreSQL without requiring changes to most of your existing applications. All Aurora Replicas return the same data for query results with minimal replica lag, usually much less than 100 milliseconds after the primary instance has written an update.

For more information on AWS Aurora, please visit the following URL:
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Aurora.Overview.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
80
Q

Question 476

For which of the following workloads should a Solutions Architect consider using Elastic Beanstalk? Choose 2 answers from the options given below

A. A Web application using Amazon RDS

B. An Enterprise data warehouse

C. A long-running worker process

D. A static Website

E. A management task run once nightly

A

Answer: A, C

The AWS Documentation clearly mentions that Elastic Beanstalk can be used to create web server environments and worker environments.

For more information on AWS Elastic beanstalk Web server environments, please visit the following URL: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/concepts-webserver.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
81
Q

Question 477

An application with a 150 GB relational database runs on an EC2 Instance. The application is used infrequently with small peaks in the morning and evening. What is the MOST cost effective storage type? Choose 2 correct answers.

A. Amazon EBS provisioned IOPS SSD

B. Amazon EBS Throughput Optimized HDD

C. Amazon EBS General Purpose SSD

D. Amazon EFS

A

Answer: A, C

Since the database is used infrequently rather than throughout the day, and the question asks for the MOST cost-effective storage type, you should choose EBS General Purpose SSD over EBS Provisioned IOPS SSD.

For more information on AWS EBS Volumes, please visit the following URL:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumes.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
82
Q

Question 478

An administrator runs a highly available application in AWS. The administrator needs a file storage layer that can be shared between instances and that makes scaling the platform easier. Which AWS service can perform this action?

A. Amazon EBS

B. Amazon EFS

C. Amazon S3

D. Amazon EC2 Instance store

A

Answer: B

The AWS Documentation mentions the following: Amazon EFS provides scalable file storage for use with Amazon EC2. You can create an EFS file system and configure your instances to mount the file system. You can use an EFS file system as a common data source for workloads and applications running on multiple instances.

For more information on AWS EFS, please visit the following URL:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEFS.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
83
Q

Question 479

A company runs a service on AWS to provide offsite backups for images on laptops and phones. The solution must support millions of customers with thousands of images per customer. Images will be retrieved infrequently but must be available for retrieval immediately. Which is the MOST cost efficient storage option that meets these requirements?

A. Amazon Glacier with expedited retrievals

B. Amazon S3 Standard Infrequent Access

C. Amazon EFS

D. Amazon S3 Standard

A

Answer: B

Amazon S3 Standard Infrequent Access is perfect for storing data that is not frequently accessed, and it is much more cost-effective than Option D, Amazon S3 Standard. If you choose Amazon Glacier with expedited retrievals, you defeat the whole purpose of the requirement, because retrievals are not immediate and expedited retrievals increase cost.

For more information on AWS Storage classes, please visit the following URL:
https://aws.amazon.com/s3/storage-classes/

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
84
Q

Question 480

A Solutions Architect is designing a solution to store and archive corporate documents and has determined that Amazon Glacier is the right solution. Data must be delivered within 10 minutes of a retrieval request. Which feature in Amazon Glacier can help meet this requirement?

A. Vault Lock

B. Expedited retrieval

C. Bulk retrieval

D. Standard retrieval

A

Answer: B

The AWS Documentation mentions the following: expedited retrievals allow you to quickly access your data when occasional urgent requests for a subset of archives are required, typically making the data available within 1-5 minutes, which meets the 10-minute requirement.
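As a sketch, an expedited retrieval is requested per archive job with boto3 (vault name and archive ID are placeholders):

import boto3

glacier = boto3.client("glacier")

# Request an expedited archive retrieval; data is typically ready in 1-5 minutes.
glacier.initiate_job(
    accountId="-",  # "-" means the account owning the credentials
    vaultName="corporate-docs",
    jobParameters={
        "Type": "archive-retrieval",
        "ArchiveId": "EXAMPLE-ARCHIVE-ID",
        "Tier": "Expedited",
    },
)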

For more information on AWS Glacier retrieval, please visit the following URL:
https://docs.aws.amazon.com/amazonglacier/latest/dev/downloading-an-archive-two-steps.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
85
Q

Question 481

A data processing application in AWS must pull data from an Internet service. A Solutions Architect must design a highly available solution to access data without placing bandwidth constraints on the application traffic. Which solution meets these requirements?

A. Launch a NAT gateway and add routes for 0.0.0.0/0

B. Attach a VPC endpoint and add routes for 0.0.0.0/0

C. Attach an Internet gateway and add routes for 0.0.0.0/0

D. Deploy NAT instances in a public subnet and add routes for 0.0.0.0/0

A

Answer: C

The AWS Documentation mentions the following: an Internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between instances in your VPC and the Internet. It therefore imposes no availability risks or bandwidth constraints on your network traffic.
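A rough boto3 sketch of attaching an internet gateway and adding the default route (VPC and route table IDs are placeholders):

import boto3

ec2 = boto3.client("ec2")

# Create and attach the internet gateway, then route 0.0.0.0/0 through it.
igw = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw, VpcId="vpc-0123456789abcdef0")
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",
    DestinationCidrBlock="0.0.0.0/0",
    GatewayId=igw,
)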

For more information on the Internet gateway, please visit the following URL:
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Internet_Gateway.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
86
Q

Question 482

In reviewing the Auto Scaling events for your application, you notice that the application is scaling up and down multiple times in the same hour. What design choices could you make to optimize for cost while preserving elasticity? Choose 2 answers from the options given below

A. Modify the Autoscaling group termination policy to terminate the older instance first

B. Modify the Autoscaling group termination policy to terminate the newest instance first

C. Modify the Autoscaling group cool down timers

D. Modify the Autoscaling group to use scheduled scaling actions

E. Modify the Cloudwatch alarm period that triggers your AutoScaling scale down policy

A

Answer: C, E

One of the main reasons for this thrashing is that not enough time is being given for the scaling activity to take effect and for the entire infrastructure to stabilize after the scaling activity. This can be addressed by increasing the Auto Scaling group cooldown timers.
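For example, the cooldown can be raised with a single boto3 call (group name and value are placeholders):

import boto3

autoscaling = boto3.client("autoscaling")

# Give the group more time to stabilize between scaling activities.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="my-asg",
    DefaultCooldown=600,  # seconds
)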

For more information on Autoscaling cool down, please visit the following URL:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/Cooldown.html

The other option is to modify the CloudWatch alarm period that triggers the scale-down policy, so that brief dips in load do not cause premature scale-in.

For more information on Autoscaling dynamic scaling, please visit the following URL:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scale-based-on-demand.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
87
Q

Question 483

A company hosts a popular web application that connects to an Amazon RDS MySQL DB instance running in a private VPC subnet that was created with default ACL settings. The web servers must be accessible only to customers on an SSL connection. The database should only be accessible to web servers in a public subnet. Which solution meets these requirements without impacting other running applications? Select 2 answers from the options given below

A. Create a network ACL on the web server’s subnets, allow HTTPS port 443 inbound and specify the source as 0.0.0.0/0

B. Create a web server security group that allows HTTPS port 443 inbound traffic from anywhere (0.0.0.0/0) and apply it to the web servers.

C. Create a DB server security group that allows MySQL port 3306 inbound and specify the source as the web server security group

D. Create a network ACL on the DB subnet, allow MySQL port 3306 inbound for web servers and deny all outbound traffic.

E. Create a DB Server security groups that allows the HTTPS port 443 inbound and specify the source as a web server security group

A

Answer: B, C

This sort of setup is given in the AWS documentation.

1) To ensure that secure traffic can reach the web servers from anywhere, allow inbound HTTPS on port 443 from 0.0.0.0/0 in the web server security group.
2) To ensure that the web servers can reach the database, allow inbound MySQL on port 3306 in the DB security group, specifying the web server security group as the source. The rules tables in the AWS documentation for this scenario match these requirements.
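A boto3 sketch of both rules (the security group IDs are placeholders):

import boto3

ec2 = boto3.client("ec2")

# Web tier: HTTPS from anywhere.
ec2.authorize_security_group_ingress(
    GroupId="sg-web-0123456789abcdef0",
    IpPermissions=[{"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
                    "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}],
)

# DB tier: MySQL only from the web tier's security group.
ec2.authorize_security_group_ingress(
    GroupId="sg-db-0123456789abcdef0",
    IpPermissions=[{"IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
                    "UserIdGroupPairs": [{"GroupId": "sg-web-0123456789abcdef0"}]}],
)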

For more information on this use case scenario, please visit the following URL:
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario2.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
88
Q

Question 484

An application will read and write objects to an S3 bucket. When the application is fully deployed, the read/write traffic will be very high. How should the architect maximize Amazon S3 performance?

A. Prefix each object name with a random string

B. Use the STANDARD _IA storage class

C. Prefix each object name with the current date

D. Enable versioning on the S3 bucket

A

Answer: A

If the request rate is high, you can prefix object names with hash keys or random strings. The partitions used to store the objects will then be better distributed, allowing better read/write performance for your objects.
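A small pure-Python sketch of deriving such a prefix (the hashing scheme is illustrative only):

import hashlib

def prefixed_key(key: str) -> str:
    # A short hash prefix spreads keys across S3 index partitions.
    prefix = hashlib.md5(key.encode()).hexdigest()[:4]
    return f"{prefix}/{key}"

print(prefixed_key("2018-03-04/photo-0001.jpg"))  # e.g. "a1b2/2018-03-04/photo-0001.jpg"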

For more information on how to ensure performance in S3, please visit the following URL:
https://docs.aws.amazon.com/AmazonS3/latest/dev/request-rate-perf-considerations.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
89
Q

Question 485

You are deploying an application on Amazon EC2 that must call AWS APIs. What method of securely passing credentials to the application should you use?

A. Pass API credentials to the instance using instance userdata

B. Store API credentials as an object in Amazon S3

C. Embed the API credentials into your application

D. Assign IAM roles to the EC2 Instances

A

Answer: D

The AWS Documentation mentions the following: you can use roles to delegate access to users, applications, or services that don't normally have access to your AWS resources. It is not a best practice to embed long-term IAM credentials in a production application; using IAM roles is always the better practice.

For more information on IAM Roles, please visit the following URL:
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
90
Q

Question 486

A website runs on EC2 Instances behind an ELB Application Load Balancer. The instances run in an AutoScaling Group across multiple Availability Zones. The instances deliver several large files that are stored on a shared Amazon EFS file system. The company needs to avoid serving the files from EC2 Instances every time a user requests these digital assets. What should the company do to improve the user experience of the web site?

A. Move the digital assets to Amazon Glacier

B. Cache static content using Cloudfront

C. Resize the images so that they are smaller

D. Use reserved EC2 Instances

A

Answer: B

The AWS Documentation mentions the following on the benefits of using CloudFront: Amazon CloudFront is a web service that speeds up distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users. CloudFront delivers your content through a worldwide network of data centers called edge locations. When a user requests content that you're serving with CloudFront, the user is routed to the edge location that provides the lowest latency (time delay), so that content is delivered with the best possible performance. If the content is already in the edge location with the lowest latency, CloudFront delivers it immediately.

For more information on AWS Cloudfront, please visit the following URL:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
91
Q

Question 487

A Solutions Architect is designing a highly scalable system to track records. Records must remain available for immediate download for three months and then the records must be deleted. What is the most appropriate decision for this use case?

A. Store the files in Amazon EBS and create a lifecycle policy to remove the files after 3 months.

B. Store the files in Amazon S3 and create a lifecycle policy to remove the files after 3 months.

C. Store the files in Amazon Glacier and create a lifecycle policy to remove the files after 3 months.

D. Store the files in Amazon EFS and create a lifecycle policy to remove the files after 3 months.

A

Answer: B

Option A is invalid since the records need to be stored in a highly scalable system.

Option C is invalid since the records must be available for immediate download, and Glacier retrievals are not immediate.

Option D is invalid because EFS does not support lifecycle policies.

The AWS Documentation mentions the following on lifecycle policies: lifecycle configuration enables you to specify the lifecycle management of objects in a bucket.

The configuration is a set of one or more rules, where each rule defines an action for Amazon S3 to apply to a group of objects. These actions can be classified as follows (a brief sketch follows the list):

• Transition actions: define when objects transition to another storage class. For example, you may choose to transition objects to the STANDARD_IA (IA, for infrequent access) storage class 30 days after creation, or archive objects to the GLACIER storage class one year after creation.

• Expiration actions: specify when the objects expire. Amazon S3 then deletes the expired objects on your behalf.
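A boto3 sketch of an expiration-only lifecycle rule for the three-month requirement (the bucket name is a placeholder):

import boto3

s3 = boto3.client("s3")

# Delete objects 90 days after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="records-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "expire-after-3-months",
            "Filter": {"Prefix": ""},  # apply to the whole bucket
            "Status": "Enabled",
            "Expiration": {"Days": 90},
        }]
    },
)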

For more information on AWS S3 Lifecycle policies, please visit the following URL:
https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
92
Q

Question 488

A consulting firm repeatedly builds large architectures for its customers using AWS resources from many AWS services, including IAM, Amazon EC2, Amazon RDS, DynamoDB and Amazon VPC. The consultants have architecture diagrams for each of their architectures, and they are frustrated that they cannot use them to automatically create their resources. Which service should provide immediate benefits to the organization?

A. AWS Beanstalk

B. AWS Cloudformation

C. AWS CodeBuild

D. AWS CodeDeploy

A

Answer: B

The AWS Documentation mentions the below on AWS CloudFormation, which matches the requirement that the consultants translate their architecture diagrams into CloudFormation templates. AWS CloudFormation is a service that helps you model and set up your Amazon Web Services resources so that you can spend less time managing those resources and more time focusing on your applications that run in AWS. You create a template that describes all the AWS resources that you want (like Amazon EC2 instances or Amazon RDS DB instances), and AWS CloudFormation takes care of provisioning and configuring those resources for you.
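For instance, once a diagram has been translated into a template, a stack can be provisioned with one boto3 call (stack name and template URL are placeholders):

import boto3

cfn = boto3.client("cloudformation")

# Create all resources described in the template as a single stack.
cfn.create_stack(
    StackName="customer-architecture",
    TemplateURL="https://s3.amazonaws.com/my-templates/architecture.yaml",
    Capabilities=["CAPABILITY_IAM"],  # required when the template creates IAM resources
)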

For more information on AWS Cloudformation, please visit the following URL:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
93
Q

Question 489

The security policy of an organization requires an application to encrypt data before writing to the disk. Which solution should the organization use to meet this requirement?

A. AWS KMS API

B. AWS Certificate Manager

C. API Gateway with STS

D. IAM Access Key

A

Answer: A

Option B is incorrect - AWS Certificate Manager is used to provision SSL certificates that encrypt traffic in transit, not data at rest.

Option C is incorrect - API Gateway with STS is again concerned with issuing tokens and securing traffic in transit.

Option D is incorrect - IAM access keys are used to sign AWS API requests, not to encrypt data.

The AWS Documentation mentions the following on AWS KMS: AWS Key Management Service (AWS KMS) is a managed service that makes it easy for you to create and control the encryption keys used to encrypt your data. AWS KMS is integrated with other AWS services including Amazon Elastic Block Store (Amazon EBS), Amazon Simple Storage Service (Amazon S3), Amazon Redshift, Amazon Elastic Transcoder, Amazon WorkMail, Amazon Relational Database Service (Amazon RDS), and others to make it simple to encrypt your data with encryption keys that you manage.
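A minimal boto3 sketch of encrypting data with a KMS key before writing it to disk (the key ID is a placeholder; large payloads would normally use a generated data key instead):

import boto3

kms = boto3.client("kms")

# Encrypt the data under a KMS-managed key, then persist only the ciphertext.
ciphertext = kms.encrypt(
    KeyId="1234abcd-12ab-34cd-56ef-1234567890ab",
    Plaintext=b"sensitive-data",
)["CiphertextBlob"]

# Decrypt later when the application needs the plaintext back.
plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]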

For more information on AWS KMS, please visit the following URL:
https://docs.aws.amazon.com/kms/latest/developerguide/overview.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
94
Q

Question 490

An application currently stores all data on Amazon EBS Volumes. All EBS volumes must be backed up durably across multiple Availability Zones. What is the MOST resilient way to backup the volumes?

A. Take regular EBS snapshots

B. Enable EBS volume encryption

C. Create a script to copy data to an EC2 Instance store

D. Mirror data across 2 EBS volumes

A

Answer: A

Option B is incorrect because encryption does not improve the durability of EBS volumes.

Option C is incorrect since EC2 instance stores are not durable.

Option D is incorrect since mirroring data across EBS volumes is inefficient when you already have the option of EBS snapshots.

The AWS Documentation mentions the following on AWS EBS snapshots: you can back up the data on your Amazon EBS volumes to Amazon S3 by taking point-in-time snapshots. Snapshots are incremental backups, which means that only the blocks on the device that have changed after your most recent snapshot are saved. This minimizes the time required to create the snapshot and saves on storage costs by not duplicating data. When you delete a snapshot, only the data unique to that snapshot is removed. Each snapshot contains all of the information needed to restore your data (from the moment when the snapshot was taken) to a new EBS volume.

For more information on AWS EBS Snapshots, please visit the following URL:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSSnapshots.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
95
Q

Question 491

A retailer exports data from its transactional databases daily into an S3 bucket. The retailer’s data warehousing team wants to import that data into an existing Amazon Redshift cluster in their VPC. Corporate security policy mandates that the data can only be transported within a VPC. What combination of the following steps will satisfy the security policy? Choose 2 answers from the options given below

A. Enable Amazon Redshift Enhanced VPC routing

B. Create a cluster security group to allow the Amazon Redshift cluster to access Amazon S3

C. Create a NAT gateway in a public subnet to allow the Amazon Redshift cluster to access Amazon S3.

D. Create and configure an Amazon S3 VPC endpoint.

E. Setup a NAT gateway in a private subnet to allow the Amazon Redshift cluster to Access Amazon S3

A

Answer: C, D

The AWS Documentation mentions the following on the benefits of using NAT gateways and VPC endpoints for secure communication from private resources to public endpoints like S3: you can use a network address translation (NAT) gateway to enable instances in a private subnet to connect to the internet or other AWS services, but prevent the internet from initiating a connection with those instances.

For more information on AWS NAT Gateway, please visit the following URL:
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-nat-gateway.html

A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network.
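A boto3 sketch of creating the gateway endpoint for S3 (VPC, route table and region are placeholders):

import boto3

ec2 = boto3.client("ec2")

# Route S3 traffic through the endpoint so it never leaves the Amazon network.
ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-west-2.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
)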

For more information on AWS VPC endpoints, please visit the following URL:
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-endpoints.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
96
Q

Question 492

A team is building an application that must persist and index JSON files in a highly available data store. Latency of data access must remain consistent despite very high application traffic. Which service should the team choose?

A. Amazon EFS

B. Amazon Redshift

C. DynamoDB

D. AWS Cloudformation

A

Answer: C

The AWS Documentation mentions the following on DynamoDB: Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. DynamoDB can store and index JSON documents, which makes it the right data store for the requirement in the question.

For more information on AWS DynamoDB, please visit the following URL:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
97
Q

Question 493

An organization hosts a multi-language website on AWS. The website is served using Cloudfront. The language is specified in the HTTP request

http://d11111f8.cloudfront.net/main.html?language=de
http://d11111f8.cloudfront.net/main.html?language=en
http://d11111f8.cloudfront.net/main.html?language=es

How should AWS Cloudfront be configured to deliver the cached data in the correct language?

A. Forward cookies to the origin

B. Based on query string parameters

C. Cache objects at the origin

D. Serve dynamic content

A

Answer: B

Since the language is specified in a query string parameter, CloudFront should be configured to forward query strings to the origin and cache separate versions of the object based on the query string parameter values.

For more information on configuring cloudfront via Query string parameters, please visit the following URL:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/QueryStringParameters.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
98
Q

Question 494

A Solutions Architect is designing a web page for event registrations and needs a managed service to send a text message to users every time users sign up for an event. Which AWS Service should the Architect use to achieve this?

A. Amazon STS

B. Amazon SQS

C. AWS Lambda

D. Amazon SNS

A

Answer: D

The AWS Documentation mentions the following: you can use Amazon SNS to send text messages, or SMS messages, to SMS-enabled devices. You can send a message directly to a phone number, or you can send a message to multiple phone numbers at once by subscribing those phone numbers to a topic and sending your message to the topic.
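A one-call boto3 sketch of sending the sign-up text (the phone number is a placeholder):

import boto3

sns = boto3.client("sns")

# Publish an SMS message directly to a phone number.
sns.publish(
    PhoneNumber="+15555550100",
    Message="Thanks for signing up for the event!",
)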

For more information on configuring SNS and SMS messages, please visit the
following URL: https://docs.aws.amazon.com/sns/latest/dg/SMSMessages.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
99
Q

Question 495

A Solutions Architect is designing a shared service for hosting containers from several customers on Amazon ECS. These containers will use several AWS services. A container from one customer must not be able to access data from another customer. Which solution should the architect use to meet these requirements?

A. IAM roles for tasks

B. IAM roles for EC2 Instances

C. IAM Instance profile for EC2 Instances

D. Security Group rules

A

Answer: A

The AWS Documentation mentions the following: with IAM roles for Amazon ECS tasks, you can specify an IAM role that can be used by the containers in a task. Applications must sign their AWS API requests with AWS credentials, and this feature provides a strategy for managing credentials for your applications to use, similar to the way that Amazon EC2 instance profiles provide credentials to EC2 instances.
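For illustration, a boto3 sketch of registering a task definition with its own task role, so one customer's containers cannot use another customer's credentials (all names and ARNs are placeholders):

import boto3

ecs = boto3.client("ecs")

# Each customer's task assumes a role scoped to that customer's data only.
ecs.register_task_definition(
    family="customer-a-app",
    taskRoleArn="arn:aws:iam::123456789012:role/customer-a-task-role",
    containerDefinitions=[{
        "name": "app",
        "image": "123456789012.dkr.ecr.us-west-2.amazonaws.com/customer-a:latest",
        "memory": 512,
    }],
)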

For more information on configuring IAM Roles for tasks in ECS, please visit the following URL:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
100
Q

Question 496

A company is generating large datasets with millions of rows that must be summarized by column. Existing business intelligence tools will be used to build daily reports. Which storage service meets the requirements?

A. Amazon Redshift

B. Amazon RDS

C. ElastiCache

D. DynamoDB

A

Answer: A

The AWS Documentation mentions the following: Amazon Redshift is a fully managed, petabyte-scale data warehouse service in the cloud. You can start with just a few hundred gigabytes of data and scale to a petabyte or more. Its columnar storage is well suited to summarizing millions of rows by column, and it integrates with existing business intelligence tools.

For more information on AWS Redshift, please visit the following URL:
https://docs.aws.amazon.com/redshift/latest/mgmt/welcome.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
101
Q

Question 497

A company is developing a web application that will be hosted in AWS. The application needs a data store for session data. As an AWS Solutions Architect, which of the following would you recommend for this requirement? Choose 2 answers from the options given below

A. CloudWatch

B. DynamoDB

C. Elastic Load Balancing

D. ElastiCache

E. Storage Gateway

A

Answer: B, D

DynamoDB and ElastiCache are the perfect options for storing session data. The AWS Documentation mentions the following on these services: Amazon DynamoDB is a fast and flexible NoSQL database service for all applications that need consistent, single-digit millisecond latency at any scale. It is a fully managed cloud database and supports both document and key-value store models. Its flexible data model, reliable performance, and automatic scaling of throughput capacity make it a great fit for mobile, web, gaming, ad tech, IoT, and many other applications.

For more information on AWS DynamoDB, please visit the following URL:
https://aws.amazon.com/dynamodb/

ElastiCache is a web service that makes it easy to set up, manage, and scale a distributed in-memory data store or cache environment in the cloud. It provides a high-performance, scalable, and cost-effective caching solution, while removing the complexity associated with deploying and managing a distributed cache environment.

For more information on AWS Elasticache, please visit the following URL:
https://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/WhatIs.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
102
Q

Question 498

A company needs to store images that are uploaded by users via a mobile application. There is also a need to ensure that there is a security measure in place to protect against users accidentally deleting images. Which action will protect against unintended user actions?

A. Store data in an EBS volume and create snapshots once a week.

B. Store data in an S3 bucket and enable versioning.

C. Store data in two S3 buckets in different AWS regions.

D. Store data on EC2 instance storage

A

Answer: B

Amazon S3 offers versioning. Versioning is enabled at the bucket level and can be used to recover prior versions of an object, which protects against accidental deletion.

For more information on AWS S3 versioning, please visit the following URL:
https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
103
Q

Question 499

An application needs to have a data store hosted in AWS. The following requirements are in place for the data store

a) Ability to have an initial storage of 8 TB
b) The database will grow by 8 GB every day.
c) The ability to have 4 read replicas

Which of the following data stores would you choose for this requirement?

A. DynamoDB

B. Amazon S3

C. Amazon Aurora

D. Amazon Redshift

A

Answer: D

Amazon Redshift has all the features which meet the requirements. The AWS Documentation mentions the following: Amazon Redshift is a fully managed, petabyte-scale data warehouse service in the cloud. You can start with just a few hundred gigabytes of data and scale to a petabyte or more. Amazon Redshift replicates all your data within your data warehouse cluster when it is loaded and also continuously backs up your data to S3. Amazon Redshift always attempts to maintain at least three copies of your data (the original and replica on the compute nodes and a backup in Amazon S3). Redshift can also asynchronously replicate your snapshots to S3 in another region for disaster recovery.

For more information on AWS Redshift, please visit the following
URL: https://docs.aws.amazon.com/redshift/latest/mgmt/welcome.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
104
Q

Question 500

There is a requirement to host a database on an EC2 Instance, where the EBS volume must support 12,000 IOPS. Which Amazon EBS volume type can meet the performance requirements of this database?

A. EBS Provisioned IOPS SSD

B. EBS Throughput Optimized HDD

C. EBS General Purpose SSD

D. EBS Cold HDD

A

Answer: A

Since there is a high-performance requirement of 12,000 IOPS, you need to opt for EBS Provisioned IOPS SSD. The AWS Documentation recommends Provisioned IOPS volumes for I/O-intensive database workloads, because they let you specify a consistent IOPS rate when you create the volume.
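A boto3 sketch of creating such a volume (size and AZ are placeholders; io1 requires the IOPS-to-size ratio to stay within AWS limits):

import boto3

ec2 = boto3.client("ec2")

# Provisioned IOPS SSD volume delivering a consistent 12,000 IOPS.
ec2.create_volume(
    AvailabilityZone="us-west-2a",
    Size=500,  # GiB
    VolumeType="io1",
    Iops=12000,
)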

For more information on AWS EBS Volume types, please visit the following URL:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
105
Q

Question 501

Development teams in your organization use S3 buckets to store log files for various applications hosted in development environments in AWS. The developers want to keep the logs for one month for troubleshooting purposes and then purge them. Which feature will enable this requirement?

A. Adding a bucket policy on the S3 bucket.

B. Configuring lifecycle configuration rules on the S3 bucket.

C. Creating an IAM policy for the S3 bucket.

D. Enabling CORS on the S3 bucket.

A

Answer: B

The AWS Documentation mentions the following on lifecycle policies: lifecycle configuration enables you to specify the lifecycle management of objects in a bucket. The configuration is a set of one or more rules, where each rule defines an action for Amazon S3 to apply to a group of objects. These actions can be classified as follows:

• Transition actions: define when objects transition to another storage class. For example, you may choose to transition objects to the STANDARD_IA (IA, for infrequent access) storage class 30 days after creation, or archive objects to the GLACIER storage class one year after creation.
• Expiration actions: specify when the objects expire. Amazon S3 then deletes the expired objects on your behalf.

For more information on AWS S3 Lifecycle policies, please visit the following URL:
https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
106
Q

Question 502

A legacy application needs a proprietary file system. Which of the following can be used to store the data which can be used by an EC2 Instance?

A. AWS EBS Volumes

B. AWS S3

C. AWS Glacier

D. AWS EFS

A

Answer: D

The AWS Documentation mentions the following: Amazon Elastic File System (Amazon EFS) provides simple, scalable file storage for use with Amazon EC2 instances in the AWS Cloud. Amazon EFS is easy to use and offers a simple interface that allows you to create and configure file systems quickly and easily. With Amazon EFS, storage capacity is elastic, growing and shrinking automatically as you add and remove files, so your applications have the storage they need, when they need it. When mounted on Amazon EC2 instances, an Amazon EFS file system provides a standard file system
interface and file system access semantics, allowing you to seamlessly integrate Amazon EFS with your existing applications and tools. Multiple Amazon EC2 instances can access an Amazon EFS file system at the same time, allowing Amazon EFS to provide a common data source for workloads and applications running on more than one Amazon EC2 instance.

For more information on AWS EFS, please visit the
following URL: https://aws.amazon.com/efs/

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
107
Q

Question 503

Which of the following can be used to host an application which uses NGINX and can be scaled at any point in time?

A. AWS EC2

B. AWS Elastic Beanstalk

C. AWS SQS

D. AWS ELB

A

Answer: B

The AWS Documentation lists the servers available for Web server environments that can be created via Elastic Beanstalk. The list shows that NGINX servers can be provisioned via the Elastic Beanstalk service.

For more information on the supported platforms for AWS Elastic beanstalk, please visit the following URL:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/concepts.platforms.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
108
Q

Question 504

There is a requirement to upload a million images to S3. Which of the following can be used to ensure optimal performance?

A. Use a sequential ID for the prefix

B. Use a hexadecimal hash for the prefix

C. Use a hexadecimal hash for the suffix

D. Use a sequential ID for the suffix

A

Answer: B

This recommendation for increasing performance when you have a high request rate in S3 is given in the AWS documentation: introducing randomness, such as a hexadecimal hash prefix, into key names distributes objects across multiple index partitions.

For more information on S3 performance considerations, please visit the following URL:
https://docs.aws.amazon.com/AmazonS3/latest/dev/request-rate-perf-considerations.html
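
To make the idea concrete, here is a small illustrative Python sketch (the file name is hypothetical) that derives a hexadecimal hash prefix for each key:

import hashlib

def hashed_key(filename):
    # A short hex hash spreads keys across S3 index partitions instead of
    # letting sequential names pile up under one hot prefix.
    prefix = hashlib.md5(filename.encode("utf-8")).hexdigest()[:4]
    return f"{prefix}/{filename}"

print(hashed_key("image-000001.jpg"))  # e.g. '1f2a/image-000001.jpg'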

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
109
Q

Question 505

There is a requirement to get the IP addresses for resources accessed in a private subnet. Which of the following can be used?

A. Trusted Advisor

B. VPC Flow Logs

C. Use Cloudwatch metrics

D. Use Cloudtrail

A

Answer: B

The AWS Documentation mentions the following: VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC. Flow log data is stored using Amazon CloudWatch Logs. After you’ve created a flow log, you can view and retrieve its data in Amazon CloudWatch Logs.

For more information on VPC Flow Logs, please visit the following URL:
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/flow-logs.html
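
As a hedged example, enabling flow logs for a VPC with boto3 might look like this (the VPC ID, log group, and IAM role ARN are hypothetical):

import boto3

ec2 = boto3.client("ec2")
ec2.create_flow_logs(
    ResourceIds=["vpc-0abc1234"],   # hypothetical VPC ID
    ResourceType="VPC",
    TrafficType="ALL",              # capture both accepted and rejected traffic
    LogGroupName="vpc-flow-logs",   # CloudWatch Logs group that receives the records
    DeliverLogsPermissionArn="arn:aws:iam::123456789012:role/flow-logs-role",  # hypothetical role
)

Each captured record includes the source and destination IP addresses, which satisfies the requirement above.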

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
110
Q

Question 506

There is a requirement for 500 messages to be sent and processed in order. Which service can be used in this regard?

A. AWS SQS

B. AWS SNS

C. AWS Config

D. AWS ELB

A

Answer: A

One can use SQS FIFO queues for this purpose. The AWS Documentation mentions the following on SQS FIFO Queues. Amazon SQS is a reliable and highly-scalable managed message queue service for storing messages in transit between application components. FIFO queues complement the existing Amazon SQS standard queues, which offer high throughput, best-effort ordering, and at-least-once delivery. FIFO queues have essentially the same features as standard queues, but provide the added benefits of supporting ordering and exactly-once processing. FIFO queues provide additional features that help prevent unintentional duplicates from being sent by message producers or from being received by message consumers. Additionally, message groups allow multiple separate ordered message streams within the same queue.

For more information on SQS FIFO Queues, please visit the following URL:
https://aws.amazon.com/about-aws/whats-new/2016/11/amazon-sqs-introduces-fifo-queues-with-exactly-once-processing-and-lower-prices-for-standard-queues/
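
For illustration, creating a FIFO queue and sending an ordered message with boto3 might look like this (the queue and group names are hypothetical):

import boto3

sqs = boto3.client("sqs")
queue = sqs.create_queue(
    QueueName="orders.fifo",  # FIFO queue names must end in .fifo
    Attributes={"FifoQueue": "true", "ContentBasedDeduplication": "true"},
)
sqs.send_message(
    QueueUrl=queue["QueueUrl"],
    MessageBody="message-1",
    MessageGroupId="order-stream",  # messages within a group are delivered in order
)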

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
111
Q

Question 507

There is a requirement for a database for a two tier application. The data would go
through multiple schema changes. The database needs to be durable and also changes
to the database should not result in downtime for the database. Which of the following
is the best option for data storage?

A. AWS S3

B. AWS Redshift

C. AWS DynamoDB

D. AWS Aurora

A

Answer: C

AWS DynamoDB is a database that is schema-less and hence is ideal if you have multiple schema changes. It is also durable. Option A is incorrect because S3 is an object storage service and not a database. Option B is more of a data warehousing solution. Option D requires a fixed schema and hence is not an ideal solution.

For more information on AWS Aurora, please visit the following URL:
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Aurora.Overview.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
112
Q

Question 508

A Redshift cluster currently contains 60TB of data. There is a requirement to ensure that a disaster recovery site in a region located 600 KM away is put in place. Which of the following solutions would help ensure that this requirement is fulfilled?

A. Take a copy of the underlying EBS volumes to S3 and then do cross region replication

B. Enable cross region snapshots for the Redshift Cluster

C. Create a Cloudformation template to restore the Cluster in another region

D. Enable cross availability zone snapshots for the Redshift Cluster

A

Answer: B

Cross-region snapshots are available for Redshift clusters, which enables a cluster to be made available in a different region.

For more information on managing Redshift snapshots, please visit the following URL:
https://docs.aws.amazon.com/redshift/latest/mgmt/managing-snapshots-console.html
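
As an illustrative sketch, enabling cross-region snapshot copy with boto3 might look like this (the cluster name and regions are hypothetical):

import boto3

redshift = boto3.client("redshift", region_name="us-east-1")
redshift.enable_snapshot_copy(
    ClusterIdentifier="dw-cluster",   # hypothetical cluster identifier
    DestinationRegion="us-west-2",    # the disaster recovery region
    RetentionPeriod=7,                # days to keep the copied snapshots
)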

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
113
Q

Question 509

A company is using a Redshift cluster to store their data warehouse. There is a requirement from the Internal IT Security team to ensure that data gets encrypted for the Redshift database. How can this be achieved?

A. Encrypt the EBS volumes of the underlying EC2 Instances

B. Use AWS KMS Customer Default master key

C. Use SSL/TLS for encrypting the data

D. Use S3 Encryption

A

Answer: B

The AWS Documentation mentions the following: Amazon Redshift uses a hierarchy of encryption keys to encrypt the database. You can use either AWS Key Management Service (AWS KMS) or a hardware security module (HSM) to manage the top-level encryption keys in this hierarchy. The process that Amazon Redshift uses for encryption differs depending on how you manage keys.

For more information on Redshift encryption, please visit the following URL:
https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-db-encryption.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
114
Q

Question 510

There is a requirement for block-level storage which would be able to store 500GB of data. Encryption of the data is also required. Which of the following can be used in such a case?

A. AWS EBS Volumes

B. AWS S3

C. AWS Glacier

D. AWS EFS

A

Answer: A

When you consider block-level storage, you need to consider EBS Volumes.

Options B and C are incorrect since they are object-level storage. Option D is incorrect since it is file-level storage.

For more information on EBS volumes, please visit the following URL:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumes.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
115
Q

Question 511

An application requires an EC2 Instance to do continuous batch processing activities requiring at least 500 MiB/s of throughput. Which of the following is the best storage option for this?

A. EBS IOPS

B. EBS SSD

C. EBS Throughput Optimized

D. EBS Cold Storage

A

Answer: C

When you are considering storage volume types for batch processing activities with large throughput, consider using the EBS Throughput Optimized volume type.

This is also mentioned in the AWS Documentation. For more information on EBS volume types, please visit the following URL:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
116
Q

Question 512

An application needs to access data in another AWS account in the same region. Which of the following can be used to ensure the data can be accessed as required?

A. Establish a NAT instance between both accounts

B. Use a VPN between both accounts

C. Use a NAT gateway between both accounts

D. Use VPC Peering between both accounts

A

Answer: D

Options A and C are incorrect because you normally use these options when you want private resources to access the Internet.

Option B is incorrect; since the resources are in the same region, you don’t need a VPN connection.

The AWS Documentation mentions the following about VPC Peering: A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them privately. Instances in either VPC can communicate with each other as if they are within the same network. You can create a VPC peering connection between your own VPCs, with a VPC in another AWS account, or with a VPC in a different AWS Region.

For more information on VPC Peering, please visit the following URL:
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-peering.html
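
A minimal boto3 sketch of requesting a peering connection (all IDs are hypothetical):

import boto3

ec2 = boto3.client("ec2")
peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-0aaa1111",        # requester VPC in this account
    PeerVpcId="vpc-0bbb2222",    # accepter VPC in the other account
    PeerOwnerId="210987654321",  # the other AWS account ID
)
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]
# The peer account must accept the request, and both sides must add routes
# for each other's CIDR ranges that target the peering connection:
# ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)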

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
117
Q

Question 513

An application currently uses a NAT instance and now wants to use a NAT gateway. Which of the following can be used to accomplish this?

A. Use NAT Instances along with the NAT Gateway

B. Host the NAT Instance in the private subnet

C. Migrate NAT Instance to NAT Gateway and host the NAT Gateway in the public
subnet

D. Convert the NAT Instance to a NAT Gateway

A

Answer: C

One can simply start using the NAT gateway service and stop using the deployed NAT instances. But you need to ensure that the NAT gateway is deployed in the public subnet.

For more information on migrating to a NAT gateway, please visit the following URL:
https://aws.amazon.com/premiumsupport/knowledge-center/migrate-nat-instance-gateway/

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
118
Q

Question 514

An application consists of the following architecture.

a. EC2 Instances in multiple AZs behind an ELB.
b. The EC2 Instances are launched via an Autoscaling Group
c. There is a NAT instance which is used to ensure that instances can download updates from the internet.

Which of the following is the bottleneck in the architecture?

A. The EC2 Instances

B. The ELB

C. The NAT Instance

D. The Autoscaling Group
A

Answer: C

Since there is only one NAT instance, this is a bottleneck for the architecture. For high availability, launch NAT instances in multiple Availability Zones and make them part of an Autoscaling Group.

For more information on NAT Instances, please visit the following URL:
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_NAT_Instance.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
119
Q

Question 515

A company owns an API which currently gets 1000 requests per second. They want to host this using AWS. Which of the following is the most cost-effective solution for this? The API is currently hosted on a t2.xlarge instance.

A. Use API gateway with the backend services as it is.

B. Use the API gateway along with AWS Lambda

C. Use Cloudfront along with the API backend service as it is.

D. Use Elastic Cache along with the API backend service as it is.

A

Answer: B

Since the company has full ownership of the API, the best solution would be to convert the code for the API and use it in a Lambda function. You can save on cost, since with Lambda you don’t pay for idle infrastructure and only pay for the time the Lambda function runs. You can then use the API Gateway along with the AWS Lambda function, which can scale accordingly.

For more information on using API gateway with AWS Lambda, please visit the following URL:
https://docs.aws.amazon.com/apigateway/latest/developerguide/getting-started-with-lambda-integration.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
120
Q

Question 516

There is a requirement to host a database application which will have a lot of resource-intensive reads and writes. Which of the following is the best storage option to ensure that the data is persistent?

A. EBS IOPS

B. EBS SSD

C. EBS Throughput Optimized

D. EBS Cold Storage

A

Answer: A

Since there is a high-performance requirement with high IOPS needed, one needs to opt for EBS Provisioned IOPS SSD. The AWS Documentation recommends using Provisioned IOPS for better IOPS performance for database-based applications.

For more information on AWS EBS Volume types, please visit the following URL:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
121
Q

Question 517

An application sends images to S3. The metadata for these images needs to be saved in persistent storage, and the metadata needs to be indexed. Which of the following can be used for the underlying storage?

A. AWS Aurora

B. AWS S3

C. AWS DynamoDB

D. AWS RDS

A

Answer: C

The most efficient storage mechanism for just storing metadata is DynamoDB. DynamoDB is normally used in conjunction with the Simple Storage Service. So after storing the images in S3, you can store their metadata in DynamoDB. You can also create secondary indexes for DynamoDB tables.

For more information on managing indexes in DynamoDB, please visit the following URL:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/SQLtoNoSQL-Indexes.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
122
Q

Question 518

An application is hosted on EC2 Instances. There is a promotion campaign due to start in 2 weeks for the application. There is a mandate from management to ensure that no performance problems are encountered due to traffic growth during this time. Which of the following must be done to the Autoscaling Group to ensure this requirement can be fulfilled?

A. Configure step scaling for the Autoscaling Group

B. Configure Dynamic scaling for the Autoscaling Group

C. Configure Scheduled scaling for the Autoscaling Group

D. Configure static scaling for the Autoscaling Group

A

Answer: C

The AWS Documentation mentions the following: Scaling based on a schedule allows you to scale your application in response to predictable load changes. For example, every week the traffic to your web application starts to increase on Wednesday, remains high on Thursday, and starts to decrease on Friday. You can plan your scaling activities based on the predictable traffic patterns of your web application. To configure your Auto Scaling group to scale based on a schedule, you create a scheduled action, which tells Amazon EC2 Auto Scaling to perform a scaling action at specified times.

For more information on Autoscaling scheduled scaling, please visit the following URL:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/schedule_time.html
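
For illustration, a scheduled action could be created ahead of the campaign with boto3 like this (the group name, date, and sizes are hypothetical):

import boto3
from datetime import datetime

autoscaling = boto3.client("autoscaling")
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",           # hypothetical group name
    ScheduledActionName="campaign-scale-out",
    StartTime=datetime(2018, 7, 1, 0, 0),     # when the promotion begins
    MinSize=4,
    MaxSize=20,
    DesiredCapacity=10,
)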

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
123
Q

Question 519

Currently a company makes use of EBS snapshots to back up their EBS Volumes. As part of the business continuity requirement, these snapshots need to be made available in another region. How can this be achieved?

A. Directly create the snapshot in the other region

B. Create a snapshot and then copy it to the new region

C. Copy the snapshot to an S3 bucket and then enable cross region replication for the bucket.

D. Copy the EBS Snapshot to an EC2 instance in another region

A

Answer: B

The AWS Documentation mentions the following: A snapshot is constrained to the region where it was created. After you create a snapshot of an EBS volume, you can use it to create new volumes in the same region. For more information, see Restoring an Amazon EBS Volume from a Snapshot. You can also copy snapshots across regions, making it possible to use multiple regions for geographical expansion, data center migration, and disaster recovery.

For more information on EBS Snapshots, please visit the following URL:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSSnapshots.html
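
As a hedged sketch, the copy is issued from the destination region (the IDs and regions are hypothetical):

import boto3

# copy_snapshot is called on a client in the *destination* region
ec2 = boto3.client("ec2", region_name="us-west-2")
ec2.copy_snapshot(
    SourceRegion="us-east-1",
    SourceSnapshotId="snap-0abc1234",   # hypothetical snapshot ID
    Description="DR copy of daily backup",
)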

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
124
Q

Question 520

A company has an application hosted in AWS. The application consists of EC2 Instances which sit behind an ELB. The following are the requirements from an administrative perspective

a) Ensure notifications are sent when the read requests go beyond 1000 requests per minute
b) Ensure notifications are sent when the latency goes beyond 10 seconds
c) Any API activity which calls for sensitive data should be monitored

Which of the following can be used to achieve this requirement? Choose 2 answers from the options given below

A. Use Cloudtrail to monitor the API Activity

B. Use Cloudwatch logs to monitor the API Activity

C. Use Cloudwatch metrics for whatever metrics need to be monitored.

D. Use a custom log software to monitor the latency and read requests to the ELB

A

Answer: A, C

AWS Cloudtrail can be used to monitor the API calls.

For more information on Cloudtrail, please visit the following URL: https://aws.amazon.com/cloudtrail/

When you use Cloudwatch metrics for an ELB, you can get the amount of read requests and latency out of the box.

For more information on using Cloudwatch with the ELB, please visit the following
URL: https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-cloudwatch-metrics.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
125
Q

Question 521

A company has resources hosted in their AWS Account. There is a requirement to
monitor all API activity for all regions. The audit needs to be applied for future regions
as well. Which of the following can be used to fulfil this requirement?

A. Enable Cloudtrail for each region. Then enable it for each future region.

B. Ensure one Cloudtrail trail is enabled for all regions.

C. Create a Cloudtrail for each region. Use Cloudformation to enable the trail for all
future regions.

D. Create a Cloudtrail for each region. Use AWS Config to enable the trail for all future regions.

A

Answer: B

The AWS Documentation mentions the following: You can now turn on a trail across all regions for your AWS account. CloudTrail will deliver log files from all regions to the Amazon S3 bucket and an optional CloudWatch Logs log group you specified. Additionally, when AWS launches a new region, CloudTrail will create the same trail in the new region. As a result, you will receive log files containing API activity for the new region without taking any action.

For more information on this feature, please visit the following URL: https://aws.amazon.com/about-aws/whats-new/2015/12/turn-on-cloudtrail-across-all-regions-and-support-for-multiple-trails/
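
For illustration, a multi-region trail could be created with boto3 as follows (the trail and bucket names are hypothetical, and the bucket must already grant CloudTrail write access):

import boto3

cloudtrail = boto3.client("cloudtrail")
cloudtrail.create_trail(
    Name="org-wide-trail",
    S3BucketName="my-cloudtrail-logs",  # hypothetical bucket with a CloudTrail bucket policy
    IsMultiRegionTrail=True,            # one trail covering all current and future regions
)
cloudtrail.start_logging(Name="org-wide-trail")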

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
126
Q

Question 522

There is a requirement for an iSCSI device, and the legacy application needs local storage. Which of the following can be used to meet the demands of the application?

A. Configure the Simple storage service

B. Configure Storage gateway cached volume

C. Configure Storage gateway stored volume

D. Configure Amazon Glacier

A

Answer: C

The AWS Documentation mentions the following: If you need low-latency access to your entire dataset, first configure your on-premises gateway to store all your data locally. Then asynchronously back up point-in-time snapshots of this data to Amazon S3. This configuration provides durable and inexpensive offsite backups that you can recover to your local data center or Amazon EC2. For example, if you need replacement capacity for disaster recovery, you can recover the backups to Amazon EC2.

For more information on the Storage gateway, please visit the following URL: https://docs.aws.amazon.com/storagegateway/latest/userguide/WhatIsStorageGateway.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
127
Q

Question 523

There is a requirement for EC2 Instances in a private subnet to access an S3 bucket. The traffic should not traverse the internet. Which of the following can be used to fulfill this requirement?

A. VPC endpoint

B. NAT Instance

C. NAT gateway

D. Internet gateway

A

Answer: A

A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network.

For more information on AWS VPC endpoints, please visit the following URL: https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-endpoints.html
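
A minimal boto3 sketch of creating a gateway endpoint for S3 (the VPC ID, region, and route table ID are hypothetical):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
ec2.create_vpc_endpoint(
    VpcId="vpc-0abc1234",                        # hypothetical VPC ID
    ServiceName="com.amazonaws.us-east-1.s3",    # S3 gateway endpoint for the region
    RouteTableIds=["rtb-0def5678"],              # route table of the private subnet
)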

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
128
Q

Question 524

There is an application which consists of EC2 Instances behind a classic ELB. An EC2 proxy is used for content management to the backend instances. The application might not be able to scale properly. Which of the following can be used to scale the proxy and backend instances appropriately? Choose 2 answers from the options given below

A. Use Autoscaling for the proxy servers

B. Use Autoscaling for the backend instances

C. Replace the Classic ELB with Application ELB

D. Use Application ELB for both the front end and backend instances

A

Answer: A, B

As soon as you see a requirement for scaling, think of the Autoscaling service provided by AWS. This can be used to scale both the proxy servers and the backend instances.

For more information on Autoscaling, please visit the following URL:
https://docs.aws.amazon.com/autoscaling/plans/userguide/what-is-aws-auto-scaling.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
129
Q

Question 525

There is a marketing application hosted in AWS that might get a lot of traffic over the next couple of weeks. Which of the following can be used to reduce the potential disruption to users in case of any issues?

A. Use an ELB to divert traffic to an Infrastructure hosted in another region

B. Use an ELB to divert traffic to an Infrastructure hosted in another AZ

C. Use Cloudformation to create backup resources in another AZ

D. Use Route53 to route to a static web site

A

Answer: D

In a disaster recovery scenario, the best of the above options is to divert the traffic to a static web site. Option A is wrong because an ELB can only balance traffic in one region, not across regions. Options B and C are incorrect because using backups across AZs is not enough for disaster recovery purposes.

For more information on disaster recovery in AWS, please visit the following URL:
https://aws.amazon.com/disaster-recovery/

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
130
Q

Question 526

You have a requirement to host a static web site for a domain called mycompany.com in AWS. You need to ensure that the traffic is scaled properly. How can this be achieved? Choose 2 answers from the options given below

A. Host the static site on an EC2 Instance

B. Use Route53 with static web site in S3

C. Enter the NS records from Route53 in the domain registrar

D. Place the EC2 instance behind the ELB

A

Answer: B, C

You can host a static web site in S3. You need to ensure that the nameserver
records for the Route53 hosted zone are entered in your domain registrar.

For more information on website hosting in S3, please visit the following URL:
https://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
131
Q

Question 527
A database is hosted using the AWS RDS service. The database is getting a lot of database queries and has now become a bottleneck for the associated application. What can be used to ensure that the database is not a performance bottleneck?

A. Setup a CloudFront distribution in front of the database

B. Setup an ELB in front of the database

C. Setup Elasticache in front of the database

D. Setup SNS in front of the database

A

Answer: C

ElastiCache is an in-memory solution that can be used in front of a database to cache the common queries issued against the database. This can reduce the overall load on the database.

Option A is incorrect because CloudFront is normally used for content distribution.

Option B is partially correct, but you would need an additional database instance to act as an internal load-balancing solution.

Option D is incorrect because SNS is a simple notification service.

For more information on Elasticache, please visit the following
URL: https://aws.amazon.com/elasticache/

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
132
Q

Question 528

A database is being hosted using the AWS RDS service. The database is now going to be made into a production database. There is a requirement for the database to be made highly available. Which of the following can be used to achieve this requirement?

A. Use Multi-AZ for the RDS instance to ensure that a secondary database is created in another region

B. Use the Read Replica feature to create another instance of the DB in another region

C. Use Multi-AZ for the RDS instance to ensure that a secondary database is created in another Availability zone.

D. Use the Read Replica feature to create another instance of the DB in another Availability zone.

A

Answer: C

Option A is incorrect because the Multi-AZ feature allows for high availability across Availability Zones, not regions. Options B and D are incorrect because Read Replicas can be used to offload database reads; if you want high availability, opt for the Multi-AZ feature. The AWS Documentation mentions the following: Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database (DB) Instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ).

For more information on AWS RDS Multi-AZ, please visit the following URL:
https://aws.amazon.com/rds/details/multi-az/
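
As an illustrative sketch, a Multi-AZ instance can be requested at creation time (the identifiers and credentials are placeholders):

import boto3

rds = boto3.client("rds")
rds.create_db_instance(
    DBInstanceIdentifier="prod-db",
    DBInstanceClass="db.m4.large",
    Engine="mysql",
    MasterUsername="admin",
    MasterUserPassword="change-me",  # placeholder credential
    AllocatedStorage=100,
    MultiAZ=True,                    # synchronous standby in another Availability Zone
)

An existing instance can likewise be converted with modify_db_instance(..., MultiAZ=True).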

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
133
Q

Question 529

A company wants to host a web application and a database layer in AWS. This will be done with the use of subnets in a VPC. Which of the following is the proper architecture design for supporting the required tiers of the application?

A. Use a public subnet for the web tier and a public subnet for the database layer

B. Use a public subnet for the web tier and a private subnet for the database layer

C. Use a private subnet for the web tier and a private subnet for the database layer

D. Use a private subnet for the web tier and a public subnet for the database layer

A

Answer: B

The ideal setup is to ensure that the web server is hosted in the public subnet so that it can be accessed by users on the internet. The database server can be hosted in the private subnet, as illustrated in the AWS Documentation for this scenario.

For more information on public and private subnets in AWS, please visit the following URL:
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario2.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
134
Q

Question 530

You require the ability to analyze a customer’s clickstream data on a website so they can do behavioral analysis. Your customer needs to know what sequence of pages and ads their customers clicked on. This data will be used in real time to modify the page layouts as customers click through the site to increase stickiness and advertising click-through. Which option meets the requirements for capturing and analyzing this data?

A. Log clicks in weblogs by URL store to Amazon S3, and then analyze with Elastic MapReduce

B. Push web clicks by session to Amazon Kinesis and analyze behavior using Kinesis workers

C. Write click events directly to Amazon Redshift and then analyze with SQL

D. Publish web clicks by session to an Amazon SQS queue. Then send the events to AWS RDS for further processing

A

Answer: B

The AWS Documentation mentions the following: Amazon Kinesis Data Streams enables you to build custom applications that process or analyze streaming data for specialized needs. Kinesis Data Streams can continuously capture and store terabytes of data per hour from hundreds of thousands of sources such as website clickstreams, financial transactions, social media feeds, IT logs, and location-tracking events.

For more information on Amazon Kinesis, please visit the following URL:
https://aws.amazon.com/kinesis/data-streams/
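
For illustration, each click event could be pushed into a stream with boto3 like this (the stream name and payload are hypothetical):

import boto3
import json

kinesis = boto3.client("kinesis")
kinesis.put_record(
    StreamName="clickstream",  # hypothetical stream name
    Data=json.dumps({"session": "abc123", "page": "/checkout"}).encode("utf-8"),
    PartitionKey="abc123",     # keying by session keeps a session's clicks ordered on one shard
)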

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
135
Q

Question 531

A company has an infrastructure that consists of machines that send log information every 5 minutes. The number of machines can run into the thousands. There is a requirement to ensure that the data can be analyzed at a later stage. Which of the following would help in fulfilling this requirement?

A. Use Kinesis Firehose with S3 to take the logs and store them in S3 for further processing

B. Launch an Elastic beanstalk application to take the processing job of the logs

C. Launch an EC2 instance with enough EBS volumes to consume the logs which can be used for further processing

D. Use Cloudtrail to store all the logs which can be analyzed at a later stage

A

Answer: A

The AWS Documentation mentions the following, which perfectly matches this requirement: Amazon Kinesis Data Firehose is the easiest way to load streaming data into data stores and analytics tools. It can capture, transform, and load streaming data into Amazon S3, Amazon Redshift, Amazon Elasticsearch Service, and Splunk, enabling near real-time analytics with existing business intelligence tools and dashboards you’re already using today.

For more information on Amazon Kinesis firehose, please visit the following URL: https://aws.amazon.com/kinesis/data-firehose/
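
As a hedged sketch, each machine could push log records into a Firehose delivery stream configured to deliver to S3 (the stream name and payload are hypothetical):

import boto3
import json

firehose = boto3.client("firehose")
firehose.put_record(
    DeliveryStreamName="machine-logs",  # hypothetical stream with an S3 destination
    Record={"Data": (json.dumps({"host": "srv-042", "msg": "heartbeat"}) + "\n").encode("utf-8")},
)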

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
136
Q

Question 532

An application hosted in AWS allows users to upload videos to an S3 bucket. There is a requirement for a user to upload videos during one week based on their profile. How can this be accomplished in the best way possible?

A. Create an IAM bucket policy to provide access for a week’s duration

B. Create a pre-signed URL for each profile which will last for a week’s duration

C. Create an S3 bucket policy to provide access for a week’s duration

D. Create an IAM role to provide access for a week’s duration

A

Answer: B

Pre-signed URLs are the perfect solution when you want to give users temporary access to S3 buckets. So whenever a new profile is created, you can create a pre-signed URL that lasts for a week to allow users to upload the required objects.

For more information on pre-signed URLs, please visit the following URL:
https://docs.aws.amazon.com/AmazonS3/latest/dev/PresignedUrlUploadObject.html
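
For illustration, the bucket owner could generate a one-week upload URL with boto3 (the bucket and key are hypothetical):

import boto3

s3 = boto3.client("s3")
url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "video-uploads", "Key": "profile-123/video.mp4"},  # hypothetical names
    ExpiresIn=7 * 24 * 3600,  # one week, in seconds
)
# The user can HTTP PUT the video to `url` until it expires.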

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
137
Q

Question 533

A company is planning to use Docker containers and the necessary container orchestration tools for their batch processing requirements. There is a requirement for batch processing of both critical and non-critical data. Which of the following is the best implementation step for this requirement, to ensure that cost is effectively managed?

A. Use Kubernetes for container orchestration and Reserved instances for all underlying instances

B. Use ECS orchestration and use Reserved instances for all underlying instances

C. Use Docker for container orchestration and a combination of Spot and Reserved instances for the underlying instances

D. Use ECS for container orchestration and a combination of Spot and Reserved instances for the underlying instances

A

Answer: D

The Elastic Container Service from AWS can be used for container orchestration. Since there are both critical and non-critical workloads, one can use Spot instances for the non-critical workloads to keep costs at a minimum.

For more information on AWS ECS, please visit the following URL:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
138
Q

Question 534

A company has a requirement for archival of 6 TB of data. There is an agreement with the stakeholders for an 8-hour agreed retrieval time. Which of the following can be used as the MOST cost-effective storage option?

A. AWS S3 Standard

B. AWS S3 Infrequent Access

C. AWS Glacier

D. AWS EBS Volumes

A

Answer: C

Amazon Glacier is the perfect solution for this. Since the agreed retrieval timeframe of 8 hours can be met, this will be the most cost-effective option.

For more information on AWS Glacier, please visit the following URL:
https://aws.amazon.com/documentation/glacier/

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
139
Q

Question 535

A company hosts 5 web servers in AWS. They want to ensure that Route53 can be used to randomly provide users with a web server when they request the underlying web application. Which routing policy should be used to fulfil this requirement?

A. Simple

B. Weighted

C. Multivalue Answer

D. Latency

A

Answer: C

The AWS Documentation mentions the following to support this: If you want to route traffic approximately randomly to multiple resources, such as web servers, you can create one multivalue answer record for each resource and, optionally, associate an Amazon Route 53 health check with each record. For example, suppose you manage an HTTP web service with a dozen web servers that each have their own IP address. No one web server could handle all of the traffic, but if you create a dozen multivalue answer records, Amazon Route 53 responds to DNS queries with up to eight healthy records in response to each DNS query. Amazon Route 53 gives different answers to different DNS resolvers. If a web server becomes unavailable after a resolver caches a response, client software can try another IP address in the response.

For more information on this option, please visit the following URL:
https://aws.amazon.com/about-aws/whats-new/2017/06/amazon-route-53-announces-support-for-multivalue-answers-in-response-to-dns-queries/
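
A minimal boto3 sketch of one such record; you would create one per web server, each with its own SetIdentifier and IP (the zone ID and addresses are hypothetical):

import boto3

route53 = boto3.client("route53")
route53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",  # hypothetical hosted zone ID
    ChangeBatch={"Changes": [{
        "Action": "CREATE",
        "ResourceRecordSet": {
            "Name": "www.example.com",
            "Type": "A",
            "SetIdentifier": "web-1",   # unique per record in the set
            "MultiValueAnswer": True,
            "TTL": 60,
            "ResourceRecords": [{"Value": "203.0.113.10"}],
            # optionally attach a HealthCheckId so unhealthy servers are dropped
        },
    }]},
)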

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
140
Q

Question 536

A company has a requirement for a managed database in AWS. There is a requirement that joins need to be performed on the underlying queries. Which of the following can be used as the underlying database?

A. AWS Aurora

B. AWS DynamoDB

C. AWS S3

D. AWS Redshift

A

Answer: A

In this case AWS Aurora would be the perfect choice.

Option B is incorrect because joins are not supported in DynamoDB

Option C is incorrect because this is more an option for object storage

Option D is incorrect because this option is better for data warehousing solutions

For more information on AWS Aurora, please visit the following URL:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Aurora.Overview.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
141
Q

Question 537

A customer wants to create a set of EBS Volumes in AWS. The customer has a requirement to ensure that the data on the volumes is encrypted at rest. How can this be achieved?

A. Create an SSL certificate and attach it to the EBS Volume

B. Use KMS to generate encryption keys which can be used to encrypt the volume

C. Use Cloudfront in front of the EBS volume to encrypt all requests.

D. Use EBS snapshots to encrypt the requests.

A

Answer: B

When you create a volume, you have the option to encrypt the volume using keys generated by the Key Management service.

For more information on using KMS, please refer to the below URL:
https://docs.aws.amazon.com/kms/latest/developerguide/services-ebs.html
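
As an illustrative sketch, an encrypted 500 GiB volume could be created like this (the AZ and key alias are hypothetical; omitting KmsKeyId uses the default EBS key):

import boto3

ec2 = boto3.client("ec2")
ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=500,                        # GiB
    VolumeType="gp2",
    Encrypted=True,                  # data at rest, disk I/O, and snapshots are encrypted
    KmsKeyId="alias/my-ebs-key",     # hypothetical KMS key alias
)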

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
142
Q

Question 538

A company has a requirement to store 100 TB of data in AWS. The data will be transferred to AWS using AWS Snowball. The data then needs to reside in a database layer. The database should have the facility to be queried from a business intelligence application. Each item is roughly 500KB in size. Which of the following is the ideal storage mechanism for the underlying data layer?

A. AWS DynamoDB

B. AWS Aurora

C. AWS RDS

D. AWS Redshift

A

Answer: D

For the sheer data size, the ideal storage unit would be AWS Redshift. The AWS Documentation mentions the following on AWS Redshift: Amazon Redshift is a fully managed, petabyte-scale data warehouse service in the cloud. You can start with just a few hundred gigabytes of data and scale to a petabyte or more. This enables you to use your data to acquire new insights for your business and customers. The first step to create a data warehouse is to launch a set of nodes, called an Amazon Redshift cluster. After you provision your cluster, you can upload your data set and then perform data analysis queries. Regardless of the size of the data set, Amazon Redshift offers fast query performance using the same SQL-based tools and business intelligence applications that you use today.

For more information on AWS Redshift, please refer to the below URL:
https://docs.aws.amazon.com/redshift/latest/mgmt/welcome.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
143
Q

Question 539

A company is planning on testing a large set of IoT enabled devices. These devices will be streaming data every second. A proper service needs to be chosen in AWS which could be used to collect and analyze these streams in real time. Which of the following could be used for this purpose?

A. Use AWS EMR to store and process the streams

B. Use AWS Kinesis streams to process and analyze the data

C. Use AWS SQS to store the data

D. Use SNS to store the data

A

Answer: B

The AWS Documentation mentions the following on Amazon Kinesis: Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. Amazon Kinesis offers key capabilities to cost-effectively process streaming data at any scale, along with the flexibility to choose the tools that best suit the requirements of your application. With Amazon Kinesis, you can ingest real-time data such as video, audio, application logs, website clickstreams, and IoT telemetry data for machine learning, analytics, and other applications.

For more information on Amazon Kinesis, please refer to the below URL:
https://aws.amazon.com/kinesis/

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
144
Q

Question 540

Your company currently has a set of EC2 Instances hosted in AWS. The state of the instances needs to be monitored and each state change needs to be recorded. Which of the following can help fulfil this requirement? Choose 2 answers from the options given below

A. Use Cloudwatch logs to store the state change of the instances

B. Use Cloudwatch Events to monitor the state change of the events

C. Use SQS to trigger a record to be added to a DynamoDB table.

D. Use AWS Lambda to store a change record in a DynamoDB table.

A

Answer: B, D

Cloudwatch Events can be used to monitor the state change of EC2 Instances; when creating the rule, you choose the Event Source and the Event Type. You can then have an AWS Lambda function as a target, which can be used to store the record in a DynamoDB table.

For more information on Cloudwatch events, please refer to the below URL:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/WhatIsCloudWatchEvents.html
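
For illustration, the rule and Lambda target might be wired up like this (the rule name, function ARN, and account ID are hypothetical; the Lambda function also needs a resource policy permitting events.amazonaws.com to invoke it):

import boto3
import json

events = boto3.client("events")
events.put_rule(
    Name="ec2-state-change",
    EventPattern=json.dumps({
        "source": ["aws.ec2"],
        "detail-type": ["EC2 Instance State-change Notification"],
    }),
    State="ENABLED",
)
events.put_targets(
    Rule="ec2-state-change",
    Targets=[{
        "Id": "record-state-change",
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:RecordStateChange",  # hypothetical
    }],
)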

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
145
Q

Question 541

You have instances hosted in a private subnet in a VPC. There is a need for the instances to download updates from the internet. As an architect, what change can you suggest to the IT operations team which would be MOST efficient and secure?

A. Create a new public subnet and move the instance to that subnet

B. Create a new EC2 Instance to download the updates separately and then push them to the required instance.

C. Use a NAT gateway to allow the instances in the private subnet to download the updates

D. Create a VPC link to the internet to allow the instances in the private subnet to download the updates

A

Answer: C

The NAT gateway is the ideal option to ensure that instances in the private subnet
have the ability to download updates from the internet.

For more information on the NAT gateway, please refer to the below URL:
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-nat-gateway.html
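
A hedged boto3 sketch of the setup (the subnet and route table IDs are hypothetical; in practice you would wait for the gateway to become available before adding the route):

import boto3

ec2 = boto3.client("ec2")
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0pub1234",      # the NAT gateway must live in a public subnet
    AllocationId=eip["AllocationId"],
)
ec2.create_route(
    RouteTableId="rtb-0priv5678",    # route table of the private subnet
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat["NatGateway"]["NatGatewayId"],
)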

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
146
Q

Question 542

A company has opted to store their cold data on EBS volumes. To ensure optimal cost, which of the following would be the ideal EBS volume type to host this type of data?

A. General Purpose SSD

B. Provisioned IOPS SSD

C. Throughput Optimized HDD

D. Cold HDD

A

Answer: D

The AWS Documentation also shows that the ideal and most cost-efficient storage type for cold data is Cold HDD.

For more information on EBS volume types, please refer to the below URL:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
147
Q

Question 543

A company is planning to have their application hosted in AWS. The application consists of users uploading files and then having a public URL for downloading them at a later stage. Which of the following designs would help fulfil this requirement?

A. Have EBS volumes hosted on EC2 Instances to store the files

B. Use Amazon S3 to host the files

C. Use Amazon Glacier to host the files since this would be the cheapest storage option

D. Use EBS snapshots attached to EC2 Instances to store the files

A

Answer: B

If you need storage for the Internet, then AWS Simple Storage service is the best option. Each file uploaded would automatically get a public URL which could be used to download the file at a later point in time.

For more information on Amazon S3, please refer to the below URL:
https://aws.amazon.com/s3/

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
148
Q

Question 544

You are planning on hosting a web application on AWS. You create an EC2 Instance in a public subnet. This instance needs to connect to an EC2 Instance that will host an Oracle database. Which of the following steps should be followed to ensure a secure setup is in place? Choose 2 answers from the options given below.

A. Place the EC2 Instance with the Oracle database in the same public subnet as the Web server for faster communication.

B. Place the EC2 Instance with the Oracle database in a separate private subnet

C. Create a database security group and ensure that the web security group is allowed incoming access

D. Ensure the database security group allows incoming traffic from 0.0.0.0/0

A

Answer: B, C

The most secure option is to place the database in a private subnet, as illustrated in the AWS Documentation for this scenario. Also ensure that access is not allowed from all sources but just from the web servers.

For more information on this type of setup, please refer to the below URL:
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario2.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
149
Q

Question 545

An EC2 Instance hosts a Java based application that accesses a DynamoDB table. This EC2 Instance is currently serving production users. Which of the following is a secure way of ensuring that the EC2 Instance can access the DynamoDB table?

A. Use IAM Roles with permissions to interact with DynamoDB and assign it to the EC2 Instance

B. Use KMS keys with the right permissions to interact with DynamoDB and assign it to the EC2 Instance

C. Use IAM Access Keys with the right permissions to interact with DynamoDB and assign it to the EC2 Instance

D. Use IAM Access Groups with the right permissions to interact with DynamoDB and assign it to the EC2 Instance

A

Answer: A

To ensure secure access to AWS resources from EC2 Instances, always assign a Role to the EC2 Instance.

For more information on IAM Roles, please refer to the below URL:
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
150
Q

Question 546

A company is planning on building and deploying a web application on AWS. They need to have a data store to store session data. Which of the below services can be used to meet this requirement?

A. AWS RDS

B. AWS SQS

C. AWS ELB

D. AWS Elasticache

A

Answer: D

The AWS Documentation mentions the following: Amazon ElastiCache offers fully managed Redis and Memcached. Seamlessly deploy, operate, and scale popular open source compatible in-memory data stores. Build data-intensive apps or improve the performance of your existing apps by retrieving data from high throughput and low latency in-memory data stores. Amazon ElastiCache is a popular choice for Gaming, Ad-Tech, Financial Services, Healthcare, and IoT apps.

For more information on Elasticache, please refer to the below URL:
https://aws.amazon.com/elasticache/

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
151
Q

Question 547

A company has setup an application in AWS that interacts with DynamoDB. There is a requirement that when an item is modified in a DynamoDB table, an immediate entry is made in an associated application. How can this be accomplished? Choose 2 correct answers.

A. Setup Cloudwatch to monitor the DynamoDB table for any changes. Then trigger a Lambda function to send the changes to the application.

B. Setup Cloudwatch logs to monitor the DynamoDB table for any changes. Then trigger AWS SQS to send the changes to the application.

C. Use DynamoDB streams to monitor the changes to the DynamoDB table

D. Use an AWS Lambda function on a scheduled basis to monitor the changes to the DynamoDB table

A

Answer: C, D

One can use DynamoDB streams to monitor the changes to a DynamoDB table. The AWS Documentation mentions the following: A DynamoDB stream is an ordered flow of information about changes to items in an Amazon DynamoDB table. When you enable a stream on a table, DynamoDB captures information about every modification to data items in the table.

For more information on DynamoDB streams, please refer to the below URL:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.html

If you enable DynamoDB Streams on a table, you can associate the stream ARN with a Lambda function that you write. Immediately after an item in the table is modified, a new record appears in the table’s stream. AWS Lambda polls the stream and invokes your Lambda function synchronously when it detects new stream records. Since we have a requirement that when an item is modified in a DynamoDB table an immediate entry needs to be made in an associated application, a Lambda function is also required.

For more information on DynamoDB streams Lambda, please refer to the below URL:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.Lambda.html
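
For illustration, the stream can be wired to a function with an event source mapping (the stream ARN and function name are hypothetical):

import boto3

lambda_client = boto3.client("lambda")
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:dynamodb:us-east-1:123456789012:table/Items/stream/2018-01-01T00:00:00.000",  # hypothetical
    FunctionName="NotifyApplication",  # hypothetical Lambda that forwards changes to the application
    StartingPosition="LATEST",
    BatchSize=100,
)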

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
152
Q

Question 548

A company currently has an application hosted on their On-premise environment. The application has a combination of web instances with worker instances and Rabbit-MQ for messaging purposes. They now want to move this infrastructure to the AWS Cloud. How could they easily start using messaging on the AWS Cloud?

A. Continue using Rabbit-MQ. Host it on a separate EC2 Instance.

B. Make use of AWS SQS to manage the messages

C. Make use of DynamoDB to store the messages

D. Make use of AWS RDS to store the messages

A

Answer: B

The ideal option would be to make use of AWS Simple Queue Service to manage the messages between the application components. The AWS SQS service is a highly scalable and durable service.

For more information on Amazon SQS, please refer to the
below URL: https://aws.amazon.com/sqs/

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
153
Q

Question 549

An application currently uses AWS RDS MySQL as its data layer. Recently, they have been getting a lot of performance issues with the database. They are planning to separate the querying part of the application by setting up a separate reporting layer. Which of the following additional steps could also potentially assist in improving the performance of the underlying database?

A. Make use of Multi-AZ to setup a secondary database in another Availability Zone

B. Make use of Multi-AZ to setup a secondary database in another Region

C. Make use of Read Replicas to setup a secondary read-only database

D. Make use of Read Replicas to setup a secondary read and write database

A

Answer: C

The AWS Documentation mentions the following: Amazon RDS Read Replicas provide enhanced performance and durability for database (DB) instances. This feature makes it easy to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads. You can create one or more replicas of a given source DB Instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput.

For more information on Amazon Read Replicas, please refer to the below URL:
https://aws.amazon.com/rds/details/read-replicas/
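
As an illustrative sketch, a replica for the reporting layer could be created like this (the instance identifiers are hypothetical):

import boto3

rds = boto3.client("rds")
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="app-db-replica-1",  # hypothetical replica name
    SourceDBInstanceIdentifier="app-db",      # the existing MySQL primary
    DBInstanceClass="db.m4.large",
)
# Point the reporting layer's connection string at the replica's endpoint.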

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
154
Q

Question 550

A company is asking their developers to store the application logs in an S3 bucket. These logs are only required for a temporary period of time, after which they can be deleted. Which of the following steps can be used to effectively manage this?

A. Create a cron job to detect the stale logs and delete them accordingly.

B. Use a bucket policy to manage the deletion

C. Use an IAM policy to manage the deletion

D. Use S3 lifecycle policies to manage the deletion

A

Answer: D

The AWS Documentation mentions the following, which can be used to support the requirement: Lifecycle configuration enables you to specify the lifecycle management of objects in a bucket. The configuration is a set of one or more rules, where each rule defines an action for Amazon S3 to apply to a group of objects.

These actions can be classified as follows:

  • Transition actions — In which you define when objects transition to another storage class. For example, you may choose to transition objects to the STANDARD_IA (IA, for infrequent access) storage class 30 days after creation, or archive objects to the GLACIER storage class one year after creation.
  • Expiration actions — In which you specify when the objects expire. Then Amazon S3 deletes the expired objects on your behalf.

For more information on S3 lifecycle policies, please refer to the below URL: https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
155
Q

Question 551

An application running on EC2 instances processes sensitive information stored on Amazon S3. The information is accessed over the Internet. The security team is concerned that the Internet connectivity to Amazon S3 is a security risk. Which solution will resolve the security concern?

A. Access the data through an Internet Gateway.

B. Access the data through a VPN connection.

C. Access the data through a NAT Gateway.

D. Access the data through a VPC endpoint for Amazon S3

A

Answer: D

The AWS Documentation mentions the following: A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network.

For more information on VPC endpoints, please refer to the below URL:
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-endpoints.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
156
Q

Question 552

You have setup a Redshift cluster in AWS. You are trying to access the Redshift Cluster, but are not able to do so. What can be done to ensure you can access the Redshift Cluster?

A. Ensure the Cluster is created in the right Availability Zone

B. Ensure the Cluster is created in the right Region

C. Change the security groups for the cluster

D. Change the encryption key associated with the cluster

A

Answer: C

The AWS Documentation mentions the following: When you provision an Amazon Redshift cluster, it is locked down by default so nobody has access to it. To grant other users inbound access to an Amazon Redshift cluster, you associate the cluster with a security group.

For more information on Redshift Security Groups, please refer to the below URL: https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-security-groups.html
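
For a cluster in a VPC, this means adding an inbound rule on the cluster's security group; an illustrative boto3 sketch (the group ID and client CIDR are hypothetical):

import boto3

ec2 = boto3.client("ec2")
ec2.authorize_security_group_ingress(
    GroupId="sg-0abc1234",     # security group attached to the Redshift cluster
    IpProtocol="tcp",
    FromPort=5439,             # Redshift's default port
    ToPort=5439,
    CidrIp="203.0.113.0/24",   # hypothetical client network
)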

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
157
Q

Question 553

You have a web application hosted on an EC2 Instance in AWS. The application is now being accessed by users across the globe. The Operations team is getting support requests from users in some regions who are experiencing extreme slowness. What can be done to the architecture to improve the response time for users?

A. Add more EC2 Instances to support the load

B. Change the Instance type to a higher instance type

C. Add Route53 health checks to improve the performance

D. Place the EC2 Instance behind Cloudfront

A

Answer: D

The AWS Documentation mentions the following: Amazon CloudFront is a web service that speeds up distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users. CloudFront delivers your content through a worldwide network of data centers called edge locations. When a user requests content that you’re serving with CloudFront, the user is routed to the edge location that provides the lowest latency (time delay), so that content is delivered with the best possible performance.

For more information on Amazon Cloudfront, please refer to the below URL:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
158
Q

Question 554

You currently have a NAT gateway defined for your private instances. You need to make the NAT gateway highly available. How can this be accomplished?

A. Create another NAT gateway and place it behind an ELB

B. Create a NAT gateway in another Availability Zone

C. Create a NAT gateway in another Region

D. Use Autoscaling groups to scale the NAT gateway

A

Answer: B

The AWS Documentation mentions the following: If you have resources in multiple Availability Zones and they share one NAT gateway, in the event that the NAT gateway’s Availability Zone is down, resources in the other Availability Zones lose internet access. To create an Availability Zone-independent architecture, create a NAT gateway in each Availability Zone and configure your routing to ensure that resources use the NAT gateway in the same Availability Zone.

For more information on the NAT gateway, please refer to the below URL:
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-nat-gateway.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
159
Q

Question 555

A company wants to have a fully managed data store in AWS. It should be a MySQL-compatible database, since that is an application requirement. Which of the following databases can be used for this purpose?

A. AWS RDS

B. AWS Aurora

C. AWS DynamoDB

D. AWS Redshift

A

Answer: B

The AWS Documentation mentions the following: Amazon Aurora (Aurora) is a fully managed, MySQL- and PostgreSQL-compatible, relational database engine. It combines the speed and reliability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. It delivers up to five times the throughput of MySQL and up to three times the throughput of PostgreSQL without requiring changes to most of your existing applications.

For more information on AWS Aurora, please refer to the below URL:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Aurora.Overview.html

160
Q

Question 556

A Solutions Architect is designing an online shopping application running in a VPC on EC2 instances behind an ELB Application Load Balancer. The instances run in an Auto Scaling group across multiple Availability Zones. The application tier must read and write data to a customer managed database cluster. There should be no access to the database from the Internet, but the cluster must be able to obtain software patches from the Internet. Which VPC design meets these requirements?

A. Public subnets for both the application tier and the database cluster

B. Public subnets for the application tier, and private subnets for the database cluster

C. Public subnets for the application tier and NAT Gateway, and private subnets for the database cluster

D. Public subnets for the application tier, and private subnets for the database cluster and NAT Gateway

A

Answer: C

This setup matches the scenario in the AWS Documentation: the NAT Gateway sits in a public subnet alongside the application tier, while the database cluster sits in private subnets. The database can reach the Internet for software patches through the NAT Gateway but cannot be reached from the Internet.

For more information on this setup, please refer to the below URL:
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-nat-gateway.html

161
Q

Question 557

A mobile based application has the need to upload images to S3. But as an architect you don’t want to make use of the existing web server to upload the images due to the load it would incur. How could this be handled?

A. Create a secondary S3 bucket. Then use an AWS Lambda to sync the contents to the primary bucket

B. Use pre-signed URLs instead to upload the images

C. Use ECS containers to upload the images

D. Upload the images to SQS and then push them to the S3 bucket

A

Answer: B

One can directly create a pre-signed URL for the images to be uploaded. The S3 bucket owner can generate pre-signed URLs that allow the mobile clients to upload the images directly to S3, without going through the web server.

For more information on pre-signed URLs, please refer to the below URL:
https://docs.aws.amazon.com/AmazonS3/latest/dev/PresignedUrlUploadObject.html
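
A minimal sketch of generating such a pre-signed upload URL with boto3 (bucket and key names are hypothetical):

import boto3

s3 = boto3.client('s3')

# The bucket owner generates a time-limited URL; the mobile client then
# HTTP PUTs the image bytes directly to S3, bypassing the web server.
url = s3.generate_presigned_url(
    'put_object',
    Params={'Bucket': 'image-uploads', 'Key': 'photos/cat.jpg'},  # hypothetical names
    ExpiresIn=3600  # URL is valid for one hour
)
print(url)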

162
Q

Question 558

A company has a requirement to use the AWS RDS service to host a MySQL database. The database is going to be used for production purposes. It is expected that the database will experience a high number of read/write activities. Which of the below underlying EBS volume types would be ideal for the database?

A. General Purpose SSD

B. Provisioned IOPS SSD

C. Throughput Optimized HDD

D. Cold HDD

A

Answer: B

Provisioned IOPS SSD is the ideal storage option here because it provides a consistently high number of IOPS for the underlying database.

For more information on EBS volume types, please refer to the below URL:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html

163
Q

Question 559

You have a set of On-premise virtual machines which are used to serve a web based application. This is placed behind an On-premise load balancing solution. You need to ensure that if a virtual machine is unhealthy, then it is taken out of rotation. Which of the following would quickly help fulfil this requirement?

A. Use Route53 health checks to monitor the endpoints

B. Move the solution to AWS and use a Classic load balancer

C. Move the solution to AWS and use an Application load balancer

D. Move the solution to AWS and use a Network load balancer

A

Answer: A

Route53 health checks can be used for any endpoint which can be accessed via the Internet. Hence this would be an ideal option for monitoring the endpoints. The AWS Documentation mentions the following: You can configure a health check that monitors an endpoint that you specify either by IP address or by domain name. At regular intervals that you specify, Route 53 submits automated requests over the internet to your application, server, or other resource to verify that it’s reachable, available and functional.

For more information on Route53 Health checks, please refer to the below URL: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover-simple-configs.html
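
A rough sketch of creating such a health check with boto3 (the IP address and path are hypothetical):

import boto3

route53 = boto3.client('route53')

# Route 53 probes the on-premise endpoint over the internet at the
# configured interval and marks it unhealthy after repeated failures.
route53.create_health_check(
    CallerReference='onprem-web-01',          # any unique string
    HealthCheckConfig={
        'IPAddress': '203.0.113.10',          # hypothetical public IP of the VM
        'Port': 80,
        'Type': 'HTTP',
        'ResourcePath': '/healthz',           # hypothetical health endpoint
        'RequestInterval': 30,
        'FailureThreshold': 3
    }
)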

164
Q

Question 560

A company has a set of web servers. They want to ensure that all the logs from these web servers can be analyzed in real time for any sort of threat detection. Which of the following would assist in this regard?

A. Upload all the logs to the SQS service and then use EC2 Instances to scan the logs

B. Upload the logs to Amazon Kinesis and then analyze the logs accordingly.

C. Upload the logs to Cloudtrail and then analyze the logs accordingly.

D. Upload the logs to Glacier and then analyze the logs accordingly.

A

Answer: B

The AWS Documentation provides the following information that can be used to support this requirement: Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. Amazon Kinesis offers key capabilities to cost-effectively process streaming data at any scale, along with the flexibility to choose the tools that best suit the requirements of your application. With Amazon Kinesis, you can ingest real-time data such as video, audio, application logs, website clickstreams, and IoT telemetry data for machine learning, analytics, and other applications.

For more information on Amazon Kinesis, please refer to the below URL:
https://aws.amazon.com/kinesis/
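
As an illustrative sketch, a web server could push log lines into a Kinesis stream with boto3 (the stream name is hypothetical):

import boto3

kinesis = boto3.client('kinesis')

# Each log line is sent as a record; the partition key groups records
# from the same server onto the same shard, preserving their order.
kinesis.put_record(
    StreamName='web-logs',                                   # hypothetical stream
    Data=b'127.0.0.1 - GET /index.html 200\n',
    PartitionKey='web-server-01'
)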

165
Q

Question 561

You currently have the following architecture in AWS.

a. A couple of EC2 Instances located in us-west-2a
b. The EC2 Instances are launched via an Autoscaling group
c. The EC2 Instances sit behind a Classic ELB

Which of the following additional steps should be taken to ensure the above architecture conforms to a well-architected framework?

A. Convert the classic ELB to an Application ELB

B. Add an additional Autoscaling Group

C. Add additional EC2 Instances to us-west-2a

D. Add or spread existing instances across multiple Availability Zones

A

Answer: D

The AWS Documentation provides the following information to support this concept: Balancing resources across Availability Zones is a best practice for well-architected applications, as this greatly increases aggregate system availability. Auto Scaling automatically balances EC2 instances across zones when you configure multiple zones in your Auto Scaling group settings. Auto Scaling always launches new instances such that they are balanced between zones as evenly as possible across the entire fleet.

For more information on Managing resources with Autoscaling, please refer to the
below URL: https://aws.amazon.com/blogs/compute/fleet-management-made-easy-with-auto-scaling/

166
Q

Question 562

Your company manages an application that currently allows users to upload images to an S3 bucket. These images are then picked up by EC2 Instances for processing and then placed in another S3 bucket. You need an area where the metadata for these images can be stored. Which of the following would be the ideal data store for this?

A. AWS Redshift

B. AWS Glacier

C. AWS DynamoDB

D. AWS SQS

A

Answer: C

Option A is incorrect because this is normally used for petabyte-scale storage

Option B is incorrect because this is used for archive storage

Option D is incorrect because this is used for messaging purposes.

AWS DynamoDB is the best lightweight and durable storage option for the metadata.

For more information on DynamoDB, please
refer to the below URL: https://aws.amazon.com/dynamodb/
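
A minimal sketch of writing image metadata to a DynamoDB table with boto3 (the table and attribute names are hypothetical):

import boto3

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('ImageMetadata')   # hypothetical table keyed on ImageId

# Store the metadata for one processed image.
table.put_item(Item={
    'ImageId': 'img-0001',
    'Bucket': 'processed-images',
    'SizeBytes': 204800,
    'Format': 'jpeg'
})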

167
Q

Question 563

An application team needs to quickly provision a development environment which consists of a web and database layer.

Which of the following would be the quickest and ideal way to get this setup in place?

A. Create Spot instances and install the Web and database components.

B. Create reserved instances and install the Web and database components.

C. Use AWS Lambda to create the web components and AWS RDS for the database layer.

D. Use Elastic Beanstalk to quickly provision the environment

A

Answer: D

The AWS Documentation mentions the following: With Elastic Beanstalk, you can quickly deploy and manage applications in the AWS Cloud without worrying about the infrastructure that runs those applications. AWS Elastic Beanstalk reduces management complexity without restricting choice or control. You simply upload your application, and Elastic Beanstalk automatically handles the details of capacity provisioning, load balancing, scaling, and application health monitoring.

For more information on AWS Elastic Beanstalk, please refer to the below URL:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/Welcome.html

168
Q

Question 564

A company has a requirement for having a file system which can be used across a set of instances. Which of the following storage options would be ideal for this requirement?

A. AWS S3

B. AWS EBS Volumes

C. AWS EFS

D. AWS EBS snapshots

A

Answer: C

Amazon EFS provides scalable file storage for use with Amazon EC2. You can create an EFS file system and configure your instances to mount the file system. You can use an EFS file system as a common data source for workloads and applications running on multiple instances.

For more information on AWS EFS, please visit the following URL:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEFS.html

169
Q

Question 565

A company has an application that stores images and thumbnails for those images on S3. The thumbnail images themselves need to be available for download immediately, while the full-size images are not accessed that frequently. Which is the MOST cost efficient storage option that meets these requirements?

A. Amazon Glacier with expedited retrievals

B. Amazon S3 Standard Infrequent Access

C. Amazon EFS

D. Amazon S3 Standard

A

Answer: B

Amazon S3 Standard Infrequent Access is perfect if you want to store data that is not frequently accessed. It is much more cost effective than Option D, Amazon S3 Standard. And if you chose Amazon Glacier with expedited retrievals, you would defeat the whole purpose of the requirement, because you would have an increased cost with this option.

For more information on AWS Storage classes, please visit the following URL:
https://aws.amazon.com/s3/storage-classes/

170
Q

Question 566

You have an EC2 Instance placed inside a subnet. You have created the VPC from scratch, created the subnet, and then added the EC2 Instance to the subnet. You need to ensure that the EC2 Instance has complete access to the Internet, since it is going to be used by users on the Internet. Which of the following would help ensure this can be accomplished?

A. Launch a NAT gateway and add routes for 0.0.0.0/0

B. Attach a VPC endpoint and add routes for 0.0.0.0/0

C. Attach an Internet gateway and add routes for 0.0.0.0/0

D. Deploy NAT instances in a public subnet and add routes for 0.0.0.0/0

A

Answer: C

The AWS Documentation mentions the following: An Internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between instances in your VPC and the Internet. It therefore imposes no availability risks or bandwidth constraints on your network traffic.

For more information on the Internet gateway, please visit the following URL:
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Internet_Gateway.html
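
As a rough sketch with boto3, attaching an Internet gateway and adding the route could look like this (the VPC and route table IDs are hypothetical; the instance also needs a public or Elastic IP):

import boto3

ec2 = boto3.client('ec2')

# Create the Internet gateway and attach it to the VPC.
igw = ec2.create_internet_gateway()
igw_id = igw['InternetGateway']['InternetGatewayId']
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId='vpc-11112222')  # hypothetical VPC

# Route all Internet-bound traffic from the subnet through the gateway.
ec2.create_route(RouteTableId='rtb-11112222',       # hypothetical route table
                 DestinationCidrBlock='0.0.0.0/0',
                 GatewayId=igw_id)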

171
Q

Question 567

You have an application hosted on AWS. It consists of EC2 Instances launched via an Autoscaling Group. You are noticing that the EC2 Instances are not scaling out on demand. What checks can be done to ensure that the scaling occurs as expected?

A. Ensure that the right metrics are being used to trigger the scale out.

B. Ensure that ELB health checks are being used

C. Ensure that the instances are placed across multiple Availability Zones

D. Ensure that the instances are placed across multiple Regions

A

Answer: A

If your scaling events are not based on the right metrics, with the right thresholds defined, then the scaling will not occur as you want it to happen.

For more information on Autoscaling Dynamic Scaling, please visit the following URL: https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scale-based-on-demand.html
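
As an illustrative sketch, a target tracking scaling policy on average CPU could be defined with boto3 (the group and policy names are hypothetical):

import boto3

autoscaling = boto3.client('autoscaling')

# Scale out or in so that average CPU across the group stays near 50%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName='web-asg',        # hypothetical group name
    PolicyName='cpu-target-50',
    PolicyType='TargetTrackingScaling',
    TargetTrackingConfiguration={
        'PredefinedMetricSpecification': {
            'PredefinedMetricType': 'ASGAverageCPUUtilization'
        },
        'TargetValue': 50.0
    }
)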

172
Q

Question 568

A company hosts a popular web application that connects to an Amazon RDS MySQL DB instance running in a private VPC subnet that was created with default ACL settings. The web servers must be accessible only to customers on an SSL connection. The database should only be accessible to web servers in a public subnet. As an architect, which of the following would you not recommend for such an architecture?

A. Ensure to create separate web server and database server security groups

B. Ensure the web server security group allows HTTPS port 443 inbound traffic from anywhere (0.0.0.0/0) and apply it to the web servers.

C. Ensure the web server security group allows MySQL port 3306 inbound traffic from anywhere (0.0.0.0/0) and apply it to the web servers.

D. Ensure the DB server security group allows MySQL port 3306 inbound and specify the source as the web server security group

A

Answer: C

This sort of setup is given in the AWS documentation.

1) To ensure that traffic can flow into your web server from anywhere over a secure connection, you need to allow inbound traffic on port 443
2) And then ensure that traffic can flow from the web servers to the database server by specifying the web server security group as the source in the database server security group

The rules tables for these security groups in the AWS Documentation relate to the same requirements as the question.

For more information on this use case scenario, please visit the following URL:
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario2.html
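
A minimal sketch of those two rules with boto3 (both security group IDs are hypothetical):

import boto3

ec2 = boto3.client('ec2')

# Web server security group: HTTPS from anywhere.
ec2.authorize_security_group_ingress(
    GroupId='sg-0123456789abcdef0',        # hypothetical web server SG
    IpPermissions=[{'IpProtocol': 'tcp', 'FromPort': 443, 'ToPort': 443,
                    'IpRanges': [{'CidrIp': '0.0.0.0/0'}]}])

# DB security group: MySQL only from the web server security group.
ec2.authorize_security_group_ingress(
    GroupId='sg-0fedcba9876543210',        # hypothetical DB SG
    IpPermissions=[{'IpProtocol': 'tcp', 'FromPort': 3306, 'ToPort': 3306,
                    'UserIdGroupPairs': [{'GroupId': 'sg-0123456789abcdef0'}]}])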

173
Q

Question 569

You have an application hosted on AWS that writes images to an S3 bucket. The concurrent number of users on the application is expected to reach around 10,000, with around 500 reads and writes expected per second. How should the architect maximize Amazon S3 performance?

A. Prefix each object name with a random string

B. Use the STANDARD _IA storage class

C. Prefix each object name with the current date

D. Enable versioning on the S3 bucket

A

Answer: A

If the request rate is high, then you can use hash keys or random strings to prefix the object name. In such a case, the partitions used to store the objects will be better distributed and hence allow for better read/write performance for your objects.

For more information on how to ensure performance in S3, please visit the following URL:
https://docs.aws.amazon.com/AmazonS3/latest/dev/request-rate-perf-considerations.html
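
A small Python sketch of this key-naming technique (the naming scheme itself is just an example):

import hashlib

def randomized_key(filename):
    # A short hash prefix spreads keys evenly across S3's index partitions,
    # instead of piling them up under one common (e.g. date-based) prefix.
    prefix = hashlib.md5(filename.encode()).hexdigest()[:4]
    return f'{prefix}-{filename}'

# 'photo-42.jpg' becomes something like '8c1d-photo-42.jpg'
print(randomized_key('photo-42.jpg'))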

174
Q

Question 570

A company has an entire infrastructure hosted on AWS. They want to create code templates which can be used to provision the same set of resources in another region in case of a disaster in the primary region. Which of the following services can help in this regard?

A. AWS Beanstalk

B. AWS Cloudformation

C. AWS CodeBuild

D. AWS CodeDeploy

A

Answer: B

The AWS Documentation provides the following information to support this requirement: AWS CloudFormation provisions your resources in a safe, repeatable manner, allowing you to build and rebuild your infrastructure and applications, without having to perform manual actions or write custom scripts. CloudFormation takes care of determining the right operations to perform when managing your stack, and rolls back changes automatically if errors are detected.

For more information on AWS Cloudformation, please visit the following URL:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html

175
Q

Question 571

A company has a set of EBS volumes that need to be protected in case of a disaster. How could one achieve this in an efficient manner using the existing AWS services?

A. Create a script to copy the EBS volume to another availability zone

B. Create a script to copy the EBS volume to another region

C. Use EBS Snapshots to create the volumes in another region

D. Use EBS Snapshots to create the volumes in another Availability Zone

A

Answer: C

Options A and B are incorrect, because you can’t directly copy EBS volumes.

Option D is incorrect, because disaster recovery always looks at ensuring resources are created in another region.

The AWS Documentation provides the following information to support this requirement: A snapshot is constrained to the region where it was created. After you create a snapshot of an EBS volume, you can use it to create new volumes in the same region. For more information, see Restoring an Amazon EBS Volume from a Snapshot. You can also copy snapshots across regions, making it possible to use multiple regions for geographical expansion, data center migration, and disaster recovery.

For more information on EBS Snapshots, please visit the following URL:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSSnapshots.html
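
As an illustrative sketch, snapshotting a volume and copying the snapshot to a second region with boto3 (the volume ID and regions are hypothetical):

import boto3

src = boto3.client('ec2', region_name='us-east-1')

# Snapshot the volume and wait for it to complete.
snap = src.create_snapshot(VolumeId='vol-0123456789abcdef0',   # hypothetical volume
                           Description='DR snapshot')
src.get_waiter('snapshot_completed').wait(SnapshotIds=[snap['SnapshotId']])

# Copy the completed snapshot into the disaster recovery region.
dst = boto3.client('ec2', region_name='us-west-2')
dst.copy_snapshot(SourceRegion='us-east-1',
                  SourceSnapshotId=snap['SnapshotId'],
                  Description='DR copy')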

176
Q

Question 572

Your company currently has a web distribution hosted using the AWS Cloudfront service. The IT Security department has now confirmed that the application using this web distribution falls under the scope of PCI compliance. Which of the following steps need to be carried out to ensure that the compliance objectives can be met? Choose 2 answers from the options given below.

A. Enable CloudFront access logs.

B. Enable Cache in Cloudfront

C. Capture requests that are sent to the CloudFront API.

D. Enable VPC Flow Logs

A

Answer: A, C

The AWS Documentation mentions the following: If you run PCI or HIPAA-compliant workloads, based on the AWS Shared Responsibility Model, we recommend that you log your CloudFront usage data for the last 365 days for future auditing purposes. To log usage data, you can do the following:

  • Enable CloudFront access logs.
  • Capture requests that are sent to the CloudFront API.

For more information on compliance with Cloudfront, please visit the following URL:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/compliance.html

177
Q

Question 573

You need to host a subscription service in AWS. Users can subscribe to this service and then get notifications on new updates to the service. Which of the following services can be used to fulfil this requirement?

A. Use the SQS service to send the notification

B. Host an EC2 Instance and use the Rabbit-MQ service to send the notification

C. Use the SNS service to send the notification

D. Use the AWS DynamoDB streams to send the notification

A

Answer: C

The AWS Documentation mentions the following: Amazon Simple Notification Service (Amazon SNS) is a web service that coordinates and manages the delivery or sending of messages to subscribing endpoints or clients.

For more information on AWS SNS, please visit the following URL:
https://docs.aws.amazon.com/sns/latest/dg/welcome.html
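
A minimal sketch of the subscription flow with boto3 (the topic name and email address are hypothetical):

import boto3

sns = boto3.client('sns')

# Create the topic, let a user subscribe, then publish an update;
# SNS fans the message out to every confirmed subscriber.
topic = sns.create_topic(Name='service-updates')                 # hypothetical topic
sns.subscribe(TopicArn=topic['TopicArn'],
              Protocol='email',
              Endpoint='user@example.com')                       # hypothetical subscriber
sns.publish(TopicArn=topic['TopicArn'],
            Subject='New update',
            Message='Version 2.1 of the service has been released.')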

178
Q

Question 574

Your company has a set of EC2 Instances hosted in AWS. They now have a mandate to prepare for a disaster and come up with the necessary disaster recovery procedures. Which of the following would help in mitigating the effects of a disaster for the EC2 Instances?

A. Place an ELB in front of the EC2 Instances

B. Use Autoscaling to ensure the minimum number of instances are always running

C. Use Cloudfront in front of the EC2 Instances

D. Use AMI’s to recreate the EC2 Instances in another region

A

Answer: D

One can create AMIs from the EC2 Instances and then copy them to another region. In case of a disaster, you can create EC2 Instances from the AMIs. Options A and B are good options for fault tolerance, but cannot help completely in a disaster recovery for the EC2 Instances. Option C is incorrect because we don’t know what is hosted on the EC2 Instances, so we cannot judge whether CloudFront would be helpful in this scenario.

For more information on AWS AMI’s, please visit the following URL:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html

179
Q

Question 575

A company currently hosts a Redshift cluster in AWS. For security reasons, it must be ensured that all traffic to and from the Redshift cluster does not go through the Internet. Which of the following features can be used to fulfil this requirement in an efficient manner?

A. Enable Amazon Redshift Enhanced VPC routing

B. Create a NAT gateway to route the traffic

C. Create a NAT instance to route the traffic

D. Create a VPN connection to ensure traffic does not flow through the internet

A

Answer: A

The AWS Documentation mentions the following: When you use Amazon Redshift Enhanced VPC Routing, Amazon Redshift forces all COPY and UNLOAD traffic between your cluster and your data repositories through your Amazon VPC. If Enhanced VPC Routing is not enabled, Amazon Redshift routes traffic through the Internet, including traffic to other services within the AWS network.

For more information on redshift Enhanced routing, please visit the following URL:
https://docs.aws.amazon.com/redshift/latest/mgmt/enhanced-vpc-routing.html

180
Q

Question 576

A company has a set of Hyper-V machines and VMware virtual machines. They are now planning on migrating these instances to the AWS Cloud. Which of the following can be used to move these resources to the AWS Cloud?

A. DB Migration utility

B. Use the VM import tools

C. Use AWS Migration tools

D. Use AWS Config tools

A

Answer: B

The AWS Documentation mentions the following: You can import Windows and Linux VMs that use VMware ESX or Workstation, Microsoft Hyper-V, and Citrix Xen virtualization formats.

For more information on VM Import, please visit the following URL:
https://aws.amazon.com/ec2/vm-import/

181
Q

Question 577

A company has a set of Linux based instances on their On-premise infrastructure. They want to have an equivalent block storage device on AWS which can be used to store the same datasets as on the Linux based instances. As an architect, which of the following storage devices would you recommend?

A. AWS EBS

B. AWS S3

C. AWS EFS

D. AWS DynamoDB

A

Answer: A

The AWS Documentation mentions the following on EBS volumes: Amazon Elastic Block Store (Amazon EBS) provides block level storage volumes for use with EC2 instances. EBS volumes are highly available and reliable storage volumes that can be attached to any running instance that is in the same Availability Zone. EBS volumes that are attached to an EC2 instance are exposed as storage volumes that persist independently from the life of the instance.

For more information on Amazon EBS, please visit the following URL:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEBS.html

182
Q

Question 578

A company has a set of Admin jobs which are currently written in the C# programming language. They are moving their infrastructure to AWS. Which of the following would be an efficient means of hosting the Admin related jobs in AWS?

A. Use AWS DynamoDB to store the jobs and then run them on demand

B. Use AWS Lambda functions with C# for the Admin jobs

C. Use AWS S3 to store the jobs and then run them on demand

D. Use AWS Config functions with C# for the Admin jobs

A

Answer: B

The best and most efficient option is to host the jobs using AWS Lambda. This service can run code written in the C# programming language. The AWS Documentation mentions the following on AWS Lambda: AWS Lambda is a compute service that lets you run code without provisioning or managing servers. AWS Lambda executes your code only when needed and scales automatically, from a few requests per day to thousands per second. You pay only for the compute time you consume - there is no charge when your code is not running. With AWS Lambda, you can run code for virtually any type of application or backend service - all with zero administration.

For more information on AWS Lambda, please visit the following URL:
https://docs.aws.amazon.com/lambda/latest/dg/welcome.html

183
Q

Question 579

Your company has a set of resources hosted on the AWS Cloud. As part of the new
governing model, there is a requirement that all activity on AWS resources be
monitored. What is the most efficient way to have this implemented?

A. Use VPC flow logs to monitor all activity in your VPC

B. Use AWS Trusted Advisor to monitor all of your AWS resources

C. Use AWS Inspector to inspect all of the resources in your account

D. Use AWS Cloudtrail to monitor all API activity

A

Answer: D

The AWS Documentation mentions the following on AWS Cloudtrail: AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. This event history simplifies security analysis, resource change tracking, and troubleshooting.

For more information on AWS Cloudtrail, please visit the following URL:
https://aws.amazon.com/cloudtrail/

184
Q

Question 580

There is a requirement for a data store in AWS. Below are the requirements for the data store

a) Ability to perform SQL queries
b) Integration with existing business intelligence tools
c) High concurrency workload that generally involves reading and writing all of the columns for a small number of records at a time

Which of the following would be the ideal data stores that can be used for such requirements? Choose 2 answers from the options below.

A. AWS Redshift

B. AWS RDS

C. AWS Aurora

D. AWS S3

A

Answer: B, C

The AWS Documentation mentions this as a best practice: Because Amazon Redshift is a SQL-based relational database management system (RDBMS), it is compatible with other RDBMS applications and business intelligence tools. Although Amazon Redshift provides the functionality of a typical RDBMS, including online transaction processing (OLTP) functions, it is not designed for these workloads. If you expect a high concurrency workload that generally involves reading and writing all of the columns for a small number of records at a time, you should instead consider using Amazon RDS or Amazon DynamoDB.

For more information on AWS Cloud best practices, please visit the following URL:
https://d1.awsstatic.com/whitepapers/AWS_Cloud_Best_Practices.pdf

185
Q

Question 581

A company is currently using Redshift in AWS. There is a mandate that the Redshift cluster be used in a cost effective manner. As an architect, which of the following should be considered to ensure cost effectiveness?

A. Use Spot instances for the underlying nodes in the cluster

B. Ensure that unnecessary manual snapshots of the cluster are deleted.

C. Ensure VPC Enhanced Routing is enabled

D. Ensure that Cloudwatch metrics are disabled

A

Answer: B

The AWS Documentation mentions the following: Amazon Redshift provides free storage for snapshots that is equal to the storage capacity of your cluster until you delete the cluster. After you reach the free snapshot storage limit, you are charged for any additional storage at the normal rate. Because of this, you should evaluate how many days you need to keep automated snapshots and configure their retention period accordingly, and delete any manual snapshots that you no longer need.

For more information on working with Redshift snapshots, please visit the following URL:
https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-snapshots.html

186
Q

Question 582

A company has a set of resources hosted in a VPC. They have acquired another company, which has its own set of resources hosted in AWS. The requirement now is to ensure that resources in the VPC of the parent company can access the resources in the VPC of the child company. How can this be accomplished?

A. Establish a NAT instance to establish communication across VPCs

B. Establish a NAT gateway to establish communication across VPCs

C. Use a VPN connection to peer both VPCs

D. Use VPC Peering to peer both VPCs

A

Answer: D

The AWS Documentation mentions the following about VPC Peering: A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them privately. Instances in either VPC can communicate with each other as if they are within the same network. You can create a VPC peering connection between your own VPCs, with a VPC in another AWS account, or with a VPC in a different AWS Region.

For more information on VPC Peering, please visit the
following URL: https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-peering.html

187
Q

Question 583

An application consists of the following architecture.

a. EC2 Instances in a single AZ behind an ELB.

b. A NAT instance which is used to ensure that instances can download updates from the internet.

Which of the following can be used to ensure better fault tolerance in this setup? Choose 2 answers from the options given below.

A. Add more instances in the existing Availability Zone

B. Add an Autoscaling Group to the setup

C. Add more instances in another Availability Zone

D. Add another ELB for more fault tolerance

A

Answer: B, C

The AWS Documentation mentions the following: Adding Auto Scaling to your application architecture is one way to maximize the benefits of the AWS cloud. When you use Auto Scaling, your applications gain the following benefits: Better fault tolerance. Auto Scaling can detect when an instance is unhealthy, terminate it, and launch an instance to replace it. You can also configure Auto Scaling to use multiple Availability Zones. If one Availability Zone becomes unavailable, Auto Scaling can launch instances in another one to compensate. Better availability. Auto Scaling can help you ensure that your application always has the right amount of capacity to handle the current traffic demands.

For more information on the benefits of AutoScaling, please visit the following URL:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/auto-scaling-benefits.html

188
Q

Question 584

A company currently has a lot of data hosted on their On-premise infrastructure. They are now running out of storage space and looking towards a quick win solution using AWS. Which of the following would allow them to easily extend their data infrastructure to AWS?

A. Let the company start using Gateway Cached volumes

B. Let the company start using Gateway Stored volumes

C. Let the company start using the Simple Storage service

D. Let the company start using Amazon Glacier

A

Answer: A

One can easily start using Volume Gateways, specifically cached volumes, to store data in S3. The AWS Documentation mentions the following: You store your data in Amazon Simple Storage Service (Amazon S3) and retain a copy of frequently accessed data subsets locally. Cached volumes offer a substantial cost savings on primary storage and minimize the need to scale your storage on-premises. You also retain low-latency access to your frequently accessed data.

For more information on Storage gateways, please visit the following URL:
https://docs.aws.amazon.com/storagegateway/latest/userguide/WhatIsStorageGateway.html

189
Q

Question 585

Company salespeople upload their sales figures daily. A Solutions Architect needs a durable storage solution for these documents that also protects against users accidentally deleting important documents. Which action will protect against unintended user actions?

A. Store data in an EBS volume and create snapshots once a week.

B. Store data in an S3 bucket and enable versioning.

C. Store data in two S3 buckets in different AWS regions.

D. Store data on EC2 instance storage.

A

Answer: B

Amazon S3 offers versioning at the bucket level, which can be used to recover prior versions of an object and protects documents against accidental deletion.

For more information on Amazon S3, please visit the following URL:
https://aws.amazon.com/s3/
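
A one-call sketch of enabling versioning with boto3 (the bucket name is hypothetical):

import boto3

s3 = boto3.client('s3')

# Once enabled, a delete only adds a delete marker; prior versions of
# the sales documents remain recoverable.
s3.put_bucket_versioning(
    Bucket='sales-figures',                        # hypothetical bucket
    VersioningConfiguration={'Status': 'Enabled'}
)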

190
Q

Question 586

An application requires a highly available relational database with an initial storage capacity of 8 TB. The database will grow by 8 GB every day. To support expected traffic, at least eight read replicas will be required to handle database reads. Which option will meet these requirements?

A. DynamoDB

B. Amazon S3

C. Amazon Aurora

D. Amazon Redshift

A

Answer: C

Amazon Aurora is a fully managed, MySQL- and PostgreSQL-compatible relational database engine that is highly available and supports up to 15 low-latency Aurora Replicas, which covers the requirement for at least eight read replicas and the ongoing storage growth. Amazon Redshift is a data warehouse intended for analytics workloads, not a highly available transactional relational database with read replicas.

For more information on Amazon Aurora, please visit the following URL:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Aurora.Overview.html

191
Q

Question 587

A company has an application in which objects from S3 are given to users. Some users across the globe are complaining of slow response times. Which of the following additional steps would allow for a COST effective solution and also ensure that the users get the desired optimal response for objects from S3?

A. Use S3 replication to replicate the objects to regions closest to the users.

B. Ensure S3 transfer acceleration is enabled to ensure all users get the desired response times.

C. Place an ELB in front of S3 to distribute the load across S3

D. Place the S3 bucket behind a Cloudfront distribution

A

Answer: D

The AWS Documentation mentions the following: If your workload is mainly sending GET requests, in addition to the preceding guidelines, you should consider using Amazon CloudFront for performance optimization. Integrating Amazon CloudFront with Amazon S3, you can distribute content to your users with low latency and a high data transfer rate. You will also send fewer direct requests to Amazon S3, which will reduce your costs. For example, suppose that you have a few objects that are very popular. Amazon CloudFront fetches those objects from Amazon S3 and caches them. Amazon CloudFront can then serve future requests for the objects from its cache, reducing the number of GET requests it sends to Amazon S3.

For more information on performance considerations in S3, please visit the following URL:
https://docs.aws.amazon.com/AmazonS3/latest/dev/request-rate-perf-considerations.html

192
Q

Question 588

An application needs to have a messaging system in AWS. It is of the utmost importance that the order of messages is preserved and duplicate messages are not sent. Which of the following services can help fulfil this requirement?

A. AWS SQS

B. AWS SNS

C. AWS Config

D. AWS ELB

A

Answer: A

One can use SQS FIFO queues for this purpose. The AWS Documentation mentions the following on SQS FIFO Queues: Amazon SQS is a reliable and highly-scalable managed message queue service for storing messages in transit between application components. FIFO queues complement the existing Amazon SQS standard queues, which offer high throughput, best-effort ordering, and at-least-once delivery. FIFO queues have essentially the same features as standard queues, but provide the added benefits of supporting ordering and exactly-once processing. FIFO queues provide additional features that help prevent unintentional duplicates from being sent by message producers or from being received by message consumers. Additionally, message groups allow multiple separate ordered message streams within the same queue.

For more information on SQS FIFO Queues, please visit the following URL:
https://aws.amazon.com/about-aws/whats-new/2016/11/amazon-sqs-introduces-fifo-queues-with-exactly-once-processing-and-lower-prices-for-standard-queues/
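
An illustrative sketch of a FIFO queue with boto3 (the queue and group names are hypothetical):

import boto3

sqs = boto3.client('sqs')

# FIFO queue names must end in '.fifo'; content-based deduplication
# drops accidental duplicate sends of the same message body.
queue = sqs.create_queue(
    QueueName='orders.fifo',
    Attributes={'FifoQueue': 'true', 'ContentBasedDeduplication': 'true'})

# Messages sharing a MessageGroupId are delivered strictly in order.
sqs.send_message(QueueUrl=queue['QueueUrl'],
                 MessageBody='order placed',
                 MessageGroupId='customer-42')      # hypothetical group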

193
Q

Question 589

A company is planning on building an application using the services available on AWS. The application will be stateless in nature. Which of the following would be an ideal compute service that can be used? The service should have the ability to scale accordingly.

A. AWS DynamoDB

B. AWS Lambda

C. AWS S3

D. AWS SQS

A

Answer: B

The following is mentioned in the AWS Whitepaper, which supports the use of AWS Lambda for this requirement: A stateless application is an application that needs no knowledge of previous interactions and stores no session information. An example could be an application that, given the same input, provides the same response to any end user. A stateless application can scale horizontally since any request can be serviced by any of the available compute resources (e.g., EC2 instances, AWS Lambda functions).

For more information on AWS Cloud best practices, please visit the following URL:
https://d1.awsstatic.com/whitepapers/AWS_Cloud_Best_Practices.pdf

194
Q

Question 590

A company has a set of EC2 Instances hosted on the AWS Cloud. These instances form a web server farm which services a web application that is accessed by users on the internet. Which of the following would help make this architecture more fault tolerant? Choose 2 answers from the options given below.

A. Ensure the Instances are placed in separate Availability Zones

B. Ensure the Instances are placed in separate Regions

C. Use an AWS Load Balancer to distribute the traffic

D. Use Autoscaling to distribute the traffic

A

Answer: A, C

The AWS Documentation mentions the following: A load balancer distributes incoming application traffic across multiple EC2 instances in multiple Availability Zones. This increases the fault tolerance of your applications. Elastic Load Balancing detects unhealthy instances and routes traffic only to healthy instances.

For more information on the AWS Classic Load balancer, please visit the following URL:
https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/introduction.html

195
Q

Question 591

You are planning on hosting an application on EC2 Instances that will be used to process logs. This application is not that critical and can resume even after an interruption. Which of the following steps can help provide a COST effective solution?

A. Ensure to use Reserved Instances for the underlying EC2 Instances

B. Ensure to use Provisioned IOPS for the underlying EBS volumes

C. Ensure to use Spot Instances for the underlying EC2 Instances

D. Ensure to use S3 as the underlying data layer

A

Answer: C

One effective solution would be to use Spot Instances in this scenario. Additionally, the AWS Documentation mentions the following: Spot Instances are a cost-effective choice if you can be flexible about when your applications run and if your applications can be interrupted. For example, Spot Instances are well-suited for data analysis, batch jobs, background processing, and optional tasks.

For more information on using Spot Instances, please visit the following URL:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-spot-instances.html

196
Q

Question 592

A company stores their log data in an S3 bucket. They now need to have search capabilities available for the data in S3. How can this be achieved in an efficient and ongoing manner? Choose 2 answers from the options below. Each answer is part of the solution.

A. Use an AWS Lambda function which gets triggered whenever data is added to the S3 bucket.

B. Create a Lifecycle policy for the S3 bucket

C. Load the data into Amazon ElasticSearch

D. Load the data into Glacier

A

Answer: A, C

Amazon Elasticsearch Service provides full search capabilities and can be used for the log files stored in the S3 bucket. The AWS Documentation mentions the following with regards to the integration of Elasticsearch with S3: You can integrate your Amazon ES domain with Amazon S3 and AWS Lambda. Any new data sent to an S3 bucket triggers an event notification to Lambda, which then runs your custom Java or Node.js application code. After your application processes the data, it streams the data to your domain.

For more information on integration between Elastic Search and S3, please visit the following URL: https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-aws-integrations.html

197
Q

Question 593

A company is planning on deploying a batch processing application in AWS. Which of the following is an ideal way to host this application? Choose 2 answers from the options below. Each answer is part of the solution.

A. Copy the batch processing application to an ECS container

B. Create a docker image of your batch processing application.

C. Deploy the image as an Amazon ECS task

D. Deploy the container behind the ELB

A

Answer: B, C

The AWS Documentation mentions the following: Docker containers are particularly suited for batch job workloads. Batch jobs are often short-lived and embarrassingly parallel. You can package your batch processing application into a Docker image so that you can deploy it anywhere, such as in an Amazon ECS task.

For more information on the use cases for AWS ECS, please visit the following URL: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/common_use_cases.html

198
Q

Question 594

An architecture consists of the following:

a) A primary and secondary infrastructure hosted in AWS.
b) Both infrastructures consist of ELB, Autoscaling and EC2 resources

How should Route53 be configured to ensure proper failover in case the primary infrastructure goes down?

A. Configure a primary routing policy

B. Configure a weighted routing policy

C. Configure a Multi-Answer routing policy

D. Configure a failover routing policy

A

Answer: D

The AWS Documentation mentions the following: You can create an active-passive failover configuration by using failover records. You create a primary and a secondary failover record that have the same name and type, and you associate a health check with each.

For more information on DNS failover using Route53, please visit the following URL: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover-configuring-options.html
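
As a rough sketch, the primary and secondary records could be created with boto3 (the zone ID, health check ID, names and IPs are hypothetical; for ELBs you would use alias records instead of plain A records):

import boto3

route53 = boto3.client('route53')

route53.change_resource_record_sets(
    HostedZoneId='Z1111111111111',               # hypothetical hosted zone
    ChangeBatch={'Changes': [
        # Primary record: served while its health check passes.
        {'Action': 'UPSERT', 'ResourceRecordSet': {
            'Name': 'app.example.com', 'Type': 'A', 'TTL': 60,
            'SetIdentifier': 'primary', 'Failover': 'PRIMARY',
            'HealthCheckId': '11111111-2222-3333-4444-555555555555',  # hypothetical
            'ResourceRecords': [{'Value': '203.0.113.10'}]}},
        # Secondary record: served only when the primary is unhealthy.
        {'Action': 'UPSERT', 'ResourceRecordSet': {
            'Name': 'app.example.com', 'Type': 'A', 'TTL': 60,
            'SetIdentifier': 'secondary', 'Failover': 'SECONDARY',
            'ResourceRecords': [{'Value': '203.0.113.20'}]}}]})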

199
Q

Question 596

A company wants to self-manage a database environment. Which of the following should be adopted to fulfil this requirement?

A. Use the DynamoDB service

B. Provision the database using the AWS RDS service

C. Provision the database using the AWS Aurora service

D. Create an EC2 Instance and install the database service accordingly.

A

Answer: D

If you want to self-manage a database, then you should install it on an EC2 Instance, where you will have complete control over the underlying database instance.

For more information on Amazon EC2, please visit the following URL: https://aws.amazon.com/ec2/

200
Q

Question 597

A company is migrating an on-premise 5TB MySQL database to AWS. The company expects the database to continue increasing in size. Which Amazon RDS engine meets these requirements?

A. MySQL

B. Microsoft SQL Server

C. Oracle

D. Amazon Aurora

A

Answer: D

The AWS Documentation mentions the following, which shows that AWS Aurora supports these requirements: Amazon Aurora (Aurora) is a fully managed, MySQL- and PostgreSQL-compatible, relational database engine. It combines the speed and reliability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. It delivers up to five times the throughput of MySQL and up to three times the throughput of PostgreSQL without requiring changes to most of your existing applications. All Aurora Replicas return the same data for query results with minimal replica lag—usually much less than 100 milliseconds after the primary instance has written an update.

For more information on AWS Aurora, please visit the following URL:
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Aurora.Overview.html

201
Q

Question 598

A company wants to have a 50 Mbps dedicated connection to its AWS resources. Which of the below services can help fulfill this requirement?

A. Virtual private gateway

B. Virtual private connection

C. Direct Connect

D. Internet gateway

A

Answer: C

AWS Direct Connect makes it easy to establish a dedicated network connection from your premises to AWS. Using AWS Direct Connect, you can establish private connectivity between AWS and your datacenter, office, or colocation environment, which in many cases can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than Internet-based connections.

For more information on AWS direct connect please visit the below URL:
https://aws.amazon.com/directconnect/

202
Q

Question 599

You work for a company that stores records for a minimum of 10 years. Most of these records will never be accessed but must be made available upon request (within a few hours). What is the most cost-effective storage option? Choose the correct answer from the options below

A. Simple Storage Service

B. EBS Volumes

C. Glacier

D. AWS Import/Export

A

Answer: C

Amazon Glacier is a secure, durable, and extremely low-cost cloud storage service for data archiving and long-term backup. Customers can reliably store large or small amounts of data for as little as $0.004 per gigabyte per month, a significant savings compared to on-premises solutions. To keep costs low yet suitable for varying retrieval needs, Amazon Glacier provides three options for access to archives, from a few minutes to several hours.

For more information on Amazon Glacier, please refer to the below link:
https://aws.amazon.com/glacier/

203
Q

Question 600

A company is building a two-tier web application to serve dynamic transaction- based content. The data tier is leveraging an Online Transactional Processing (OLTP) database. What services should you leverage to enable an elastic and scalable web tier?

A. Elastic Load Balancing, Amazon EC2, and Auto Scaling

B. Elastic Load Balancing, Amazon RDS with Multi-AZ, and Amazon S3

C. Amazon RDS with Multi-AZ and Auto Scaling

D. Amazon EC2, Amazon Dynamo DB, and Amazon S3

A

Answer: A

The question asks about a scalable web tier, not a database tier, so Options B, C and D are eliminated, since we do not need a database option. An Elastic Load Balancer distributing traffic to EC2 instances managed by Auto Scaling is an example of an elastic and scalable web tier. By scalable we mean that the Auto Scaling process will increase or decrease the number of EC2 instances as required.

For more information on the Elastic Load Balancer, please refer to the below link:
https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/introduction.html

204
Q

Question 601

A customer is planning on hosting an AWS RDS instance. They need to ensure that the underlying data is encrypted. How can this be achieved? Choose 2 answers from the options given below.

A. Ensure the right instance class is chosen for the underlying Instance

B. Choose only General Purpose SSD since only this volume type supports encryption of data

C. Encrypt the database during creation

D. Enable encryption of the underlying EBS Volume

A

Answer: A, C

Encryption for the database can be enabled during the creation of the database. You also need to ensure that the underlying instance class supports DB encryption.

For more information on database encryption, please refer to the below URL:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html
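
A minimal sketch of creating an encrypted instance with boto3 (all identifiers and sizes are hypothetical; encryption can only be chosen at creation time and requires a supported instance class):

import boto3

rds = boto3.client('rds')

rds.create_db_instance(
    DBInstanceIdentifier='prod-db',        # hypothetical identifier
    Engine='mysql',
    DBInstanceClass='db.m5.large',         # a class that supports encryption
    AllocatedStorage=100,
    MasterUsername='admin',
    MasterUserPassword='REPLACE_ME',       # placeholder credential
    StorageEncrypted=True                  # encrypts the underlying storage
)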

205
Q

Question 602

You are developing a new mobile application and are considering storing user preferences in AWS. Each data item is expected to be 20KB in size. There would initially be thousands of customers who would be using the mobile application. You need to have a data store which could be used to store the user preferences. The solution needs to be cost-effective, highly available, scalable and secure. How would you design the data layer?

A. Create a new AWS MySQL RDS instance and store the user data there.

B. Create a DynamoDB table with the required Read and Write capacity and use it as the data layer

C. Use Amazon Glacier to store the user data

D. Use a Amazon Redshift cluster for managing the user preferences

A

Answer: B

In this case, since each data item is 20KB and since DynamoDB is an ideal data layer for storing user preferences, this would be an ideal option. DynamoDB is also a highly scalable and available service.

For more information on AWS DynamoDB, please refer to the below URL:
https://aws.amazon.com/dynamodb/

206
Q

Question 603

Your operations department is using an incident based application hosted on a set of EC2 Instances. These instances are placed behind an Autoscaling Group to ensure the right number of instances are in place to support the application. The Operations department is complaining that every day at 9:00 the application has very poor performance, and at around 9:45 the performance is back to normal. But there is a lot of customer dissatisfaction due to the problems faced at 9:00. What can be done to ensure that this issue gets fixed?

A. Create another Dynamic scaling policy to ensure the scaling happens at 9:00

B. Add another Autoscaling group to support the current one

C. Change the cool down timers for the existing Autoscaling Group

D. Add a scheduled scaling policy at 8:30

A

Answer: D

One can use scheduled scaling to ensure that capacity is increased before 9:00 in the morning. The AWS Documentation further mentions the below on scheduled scaling: Scaling based on a schedule allows you to scale your application in response to predictable load changes. For example, every week the traffic to your web application starts to increase on Wednesday, remains high on Thursday, and starts to decrease on Friday. You can plan your scaling activities based on the predictable traffic patterns of your web application.

For more information on Scheduled scaling, please refer to the below URL:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/schedule_time.html
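
An illustrative sketch of such a scheduled action with boto3 (the group name, times and sizes are hypothetical):

import boto3

autoscaling = boto3.client('autoscaling')

# Scale out every weekday at 08:30 UTC so capacity is in place
# before the 9:00 load arrives.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName='incident-app-asg',     # hypothetical group
    ScheduledActionName='scale-out-before-9am',
    Recurrence='30 8 * * MON-FRI',               # cron syntax, in UTC
    MinSize=4,
    DesiredCapacity=6
)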

207
Q

Question 604

A database hosted in AWS is currently encountering a large number of write operations and is not able to handle the load. What can be done to the architecture to ensure that the write operations are not lost under any circumstance?

A. Add more IOPS to the existing EBS volume used by the database

B. Consider using DynamoDB instead of AWS RDS

C. Use SQS queues to queue the database writes

D. Use SNS to send notification on missed database writes and then add them manually at a later stage.

A

Answer: C

SQS queues can be used to store the pending database writes, and these writes can then be applied to the database from the queue. It is the perfect queuing system for such an architecture. Note that adding more IOPS may help the situation but will not totally eliminate the chance of database writes being lost.

For more information on AWS SQS, please refer to the below URL:
https://aws.amazon.com/sqs/faqs/

208
Q

Question 605

You have created an AWS Lambda function that will write data to a DynamoDB table. Which of the following must be in place to ensure that the Lambda function can interact with the DynamoDB table?

A. Ensure an IAM Role is attached to the Lambda function which has the required DynamoDB privileges

B. Ensure an IAM User is attached to the Lambda function which has the required DynamoDB privileges

C. Ensure the Access keys are embedded in the AWS Lambda function

D. Ensure the IAM user password is embedded in the AWS Lambda function

A

Answer: A

The AWS Documentation mentions the following, which supports this requirement: Each Lambda function has an IAM role (execution role) associated with it. You specify the IAM role when you create your Lambda function. Permissions you grant to this role determine what AWS Lambda can do when it assumes the role. If your Lambda function code accesses other AWS resources, such as to read an object from an S3 bucket or write logs to CloudWatch Logs, you need to grant permissions for the relevant Amazon S3 and CloudWatch actions to the role.

For more information on the permission Role model for AWS Lambda please refer to the below URL:
https://docs.aws.amazon.com/lambda/latest/dg/intro-permission-model.html
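
A rough sketch of granting the execution role an inline DynamoDB policy with boto3 (the role name, table name and account ID are hypothetical):

import json
import boto3

iam = boto3.client('iam')

# Attach an inline policy to the Lambda function's execution role so
# the function can write items to the table.
iam.put_role_policy(
    RoleName='lambda-execution-role',          # hypothetical execution role
    PolicyName='dynamodb-write',
    PolicyDocument=json.dumps({
        'Version': '2012-10-17',
        'Statement': [{
            'Effect': 'Allow',
            'Action': ['dynamodb:PutItem', 'dynamodb:UpdateItem'],
            'Resource': 'arn:aws:dynamodb:us-east-1:123456789012:table/MyTable'  # hypothetical
        }]
    })
)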

209
Q

Question 606

Your company has a requirement to host a static web site in AWS. Which of the following steps would help implement a quick and COST effective solution for this requirement? Choose 2 answers from the options given below. Each answer forms part of the solution.

A. Upload the static content to an S3 bucket

B. Create an EC2 Instance and install a web server

C. Enable web site hosting for the S3 bucket

D. Upload the code to the web server on the EC2 instance

A

Answer: A, C

The AWS Documentation mentions the following on using S3 for static web site hosting. This would be an ideal and cost effective solution. You can host a static website on Amazon Simple Storage Service (Amazon S3). On a static website, individual webpages include static content. They might also contain client-side scripts.

For more information on static web site hosting using S3, please refer to the below URL: https://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html
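
As an illustrative sketch with boto3 (the bucket name is hypothetical and must be globally unique; the bucket must also allow public reads for the site to be reachable):

import boto3

s3 = boto3.client('s3')

bucket = 'my-static-site-example'              # hypothetical bucket
s3.create_bucket(Bucket=bucket)

# Turn on website hosting and upload the landing page.
s3.put_bucket_website(
    Bucket=bucket,
    WebsiteConfiguration={
        'IndexDocument': {'Suffix': 'index.html'},
        'ErrorDocument': {'Key': 'error.html'}
    })
s3.upload_file('index.html', bucket, 'index.html',
               ExtraArgs={'ContentType': 'text/html'})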

210
Q

Question 607

Your company currently has data hosted in an Amazon Aurora MySQL DB. Since the data is critical, there is a need to ensure that the data can be made available in another region in case of a disaster. How can this be achieved?

A. Make a copy of the underlying EBS volumes in the Aurora cluster in another region

B. Enable Multi-AZ for the Aurora database

C. Create a Read replica for the database

D. Create an EBS snapshot of the underlying EBS volumes in the Aurora cluster and then copy them to another region

A

Answer: C

The AWS Documentation mentions the following: You can create an Amazon Aurora MySQL DB cluster as a Read Replica in a different AWS Region than the source DB cluster. Taking this approach can improve your disaster recovery capabilities, let you scale read operations into a region that is closer to your users, and make it easier to migrate from one region to another.

For more information on Amazon Aurora cross region replication please refer to the below URL:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/AuroraMySQL.Replication.CrossRegion.html

211
Q

Question 608

A company currently stores a set of documents in the AWS Simple Storage Service. They are worried about the potential loss if documents were ever deleted. Which of the following can be used to ensure protection from loss for the underlying documents stored in S3?

A. Enable versioning for the underlying S3 bucket

B. Copy the bucket data to an EBS volume as a backup

C. Create a snapshot of the S3 bucket

D. Enable an IAM policy which does not allow deletion of any document from the S3 bucket

A

Answer: A

Amazon S3 also offers versioning, which is set at the bucket level and can be used to recover prior versions of an object.

For more information on S3 versioning please refer to the below URL:
https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html

212
Q

Question 609

An application with a 150 GB relational database runs on an EC2 Instance. The application will be used frequently, and there are going to be a lot of database reads and writes. What is the MOST cost effective storage type?

A. Amazon EBS provisioned IOPS SSD

B. Amazon EBS Throughput Optimized HDD

C. Amazon EBS General Purpose SSD

D. Amazon EFS

A

Answer: A

The AWS Documentation mentions the following: Provisioned IOPS SSD (io1) volumes are designed to meet the needs of I/O-intensive workloads, particularly database workloads that are sensitive to storage performance and consistency.

For more information on AWS EBS Volumes, please visit the following URL:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumes.html

213
Q

Question 610

A company has a set of EC2 Linux based instances hosted in AWS. They need to have a standard file interface to files which can be used across all the Linux based instances. Which of the following can be used for this purpose?

A. Consider using the Simple Storage service

B. Consider using Amazon Glacier

C. Consider using AWS RDS

D. Consider using AWS EFS

A

Answer: D

The AWS Documentation mentions the following: When mounted on Amazon EC2 instances, an Amazon EFS file system provides a standard file system interface and file system access semantics, allowing you to seamlessly integrate Amazon EFS with your existing applications and tools. Multiple Amazon EC2 instances can access an Amazon EFS file system at the same time, allowing Amazon EFS to provide a common data source for workloads and applications running on more than one Amazon EC2 instance.

For more information on AWS EFS, please visit the following URL:
https://aws.amazon.com/efs/

214
Q

Question 611

Your company is planning on using Route53 as the DNS provider. They want to ensure that their company domain name points to an existing CloudFront distribution. How can this be achieved?

A. Create an Alias record which points to the Cloudfront distribution

B. Create a host record which points to the Cloudfront distribution

C. Create a CNAME record which points to the Cloudfront distribution

D. Create a non-alias record which points to the Cloudfront distribution

A

Answer: A

The AWS Documentation mentions the following. While ordinary Amazon Route 53 records are standard DNS records, alias records provide a Route 53-specific extension to DNS functionality. Instead of an IP address or a domain name, an alias record contains a pointer to a CloudFront distribution, an Elastic Beanstalk environment, an ELB Classic, Application, or Network Load Balancer, an Amazon S3 bucket that is configured as a static website, or another Route 53 record in the same hosted zone. When Route 53 receives a DNS query that matches the name and type in an alias record, Route 53 follows the pointer and responds with the applicable value
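
As a minimal boto3 sketch (the hosted zone ID and domain name are placeholders; Z2FDTNDATAQYW2 is the fixed hosted zone ID AWS publishes for all CloudFront distributions):

    import boto3

    route53 = boto3.client("route53")

    route53.change_resource_record_sets(
        HostedZoneId="Z1EXAMPLE",  # your hosted zone (placeholder)
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com",
                    "Type": "A",
                    "AliasTarget": {
                        "HostedZoneId": "Z2FDTNDATAQYW2",  # CloudFront's fixed zone ID
                        "DNSName": "d111111abcdef8.cloudfront.net",
                        "EvaluateTargetHealth": False,
                    },
                },
            }]
        },
    )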

For more information on Route53 Alias records, please visit the following URL: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-choosing-alias-non-alias.html

215
Q

Question 612

A company needs to extend their storage infrastructure to the AWS Cloud. The storage needs to be available as iSCSI devices for your on-premises application servers. Which of the following would be able to fulfil this requirement?

A. Create a Glacier vault. Use a Glacier connector and mount it as an iSCSI device

B. Create an S3 bucket. Use an S3 connector and mount it as an iSCSI device

C. Use the EFS file service and mount the different file systems to the on premise servers

D. Use the AWS Storage gateway cached volumes service

A

Answer: D

The AWS Documentation mentions the following: By using cached volumes, you can use Amazon S3 as your primary data storage, while retaining frequently accessed data locally in your storage gateway. Cached volumes minimize the need to scale your on-premises storage infrastructure, while still providing your applications with low-latency access to their frequently accessed data. You can create storage volumes up to 32 TiB in size and attach to them as iSCSI devices from your on-premises application servers. Your gateway stores data that you write to these volumes in Amazon S3 and retains recently read data in your on-premises storage gateway’s cache and upload buffer storage.

For more information on AWS Storage gateways, please visit the following URL:
https://docs.aws.amazon.com/storagegateway/latest/userguide/StorageGatewayConcepts.html

216
Q

Question 613

Your infrastructure in AWS currently consists of a private and a public subnet. The private subnet contains database servers and the public subnet has a NAT instance which helps the instances in the private subnet communicate with the Internet. The NAT instance is now becoming a bottleneck. Which of the following changes to the architecture can help prevent this issue from occurring in the future?

A. Use a NAT gateway instead of the NAT Instance

B. Use another Internet gateway for better bandwidth

C. Use a VPC connection for better bandwidth

D. Consider changing the Instance type for the underlying NAT instance

A

Answer: A

The NAT gateway is a managed resource which can be used in place of a NAT instance. Even though you can consider changing the Instance type for the underlying NAT instance, this would still not guarantee that the issue will not occur in the future.

For more information on the NAT gateway, please visit the following URL:
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-nat-gateway.html

217
Q

Question 614

Your current setup in AWS consists of the following architecture: 2 public subnets, one containing the web servers accessed by users across the internet, and the other containing the database server. Which of the following changes to the architecture would add a better security boundary to the resources hosted in your setup?

A. Consider moving the web server to a private subnet

B. Consider moving the database server to a private subnet

C. Consider moving both the web and database server to a private subnet

D. Consider creating a private subnet and adding a NAT instance to that subnet

A

Answer: B

The ideal setup is to ensure that the web server is hosted in the public subnet so that it can be accessed by users on the internet, while the database server is hosted in the private subnet. The AWS Documentation illustrates how this can be set up.

For more information on public and private subnets in AWS, please visit the following URL:
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario2.html

218
Q

Question 615

Your company has a set of applications that make use of Docker containers used by the development team. These need to be moved to AWS. What would be the best method to set up these Docker containers in a separate environment in AWS?

A. Create EC2 Instances, Install Docker and then upload the containers.

B. Create EC2 Container registries, Install Docker and then upload the containers.

C. Create an Elastic beanstalk environment with the necessary Docker containers.

D. Create EBS Optimized EC2 Instances, Install Docker and then upload the containers.

A

Answer: C

One can use the Elastic Beanstalk service to host Docker containers. The AWS Documentation further mentions the following: Elastic Beanstalk supports the deployment of web applications from Docker containers. With Docker containers, you can define your own runtime environment. You can choose your own platform, programming language, and any application dependencies (such as package managers or tools) that aren’t supported by other platforms. Docker containers are self-contained and include all the configuration information and software your web application requires to run.

For more information on using Elastic Beanstalk for Docker containers, please visit the following URL:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker.html

219
Q

Question 616

Instances in your private subnet hosted in AWS need access to important documents in S3. Due to the confidential nature of these documents, you have to ensure that this traffic does not traverse the internet. As an architect, how would you implement this solution?

A. Consider using a VPC endpoint

B. Consider using an EC2 endpoint

C. Move the instances to a public subnet

D. Create a VPN connection and access the S3 resources from the EC2 Instance

A

Answer: A

The AWS Documentation mentions the following: A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network.
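
A minimal sketch of creating a gateway endpoint for S3 with boto3 (the VPC ID, route table ID, and region are placeholders):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Adds a route so S3-bound traffic from the private subnet stays on
    # the Amazon network instead of traversing the internet
    ec2.create_vpc_endpoint(
        VpcId="vpc-0abc1234",
        ServiceName="com.amazonaws.us-east-1.s3",
        RouteTableIds=["rtb-0abc1234"],
    )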

For more information on VPC endpoints, please visit the following URL: https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-endpoints.html

220
Q

Question 617

You have a video transcoding application running on Amazon EC2. Each instance polls a queue to find out which video should be transcoded, and then runs a transcoding process. If this process is interrupted, the video will be transcoded by another instance based on the queuing system. You have a large backlog of videos which need to be transcoded and would like to reduce this backlog by adding more instances. You will need these instances only until the backlog is reduced. Which type of Amazon EC2 instances should you use to reduce the backlog in the most cost efficient way?

A. Reserved instances

B. Spot instances

C. Dedicated instances

D. On-demand instances

A

Answer: B

Since this is like a batch processing job, the best type of instance to use is a Spot instance. Spot instances are normally used in batch processing jobs. Since these jobs don’t last for the entire duration of the year, they can be bid upon and allocated and de-allocated as required. Reserved Instances and Dedicated instances cannot be used because this is not a 100% utilized application. The question does not mention a continuous demand for work, so there is no need to use On-demand instances.
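
For illustration, a batch of Spot capacity for the transcoding fleet could be requested like this with boto3 (the AMI ID and instance type are placeholders):

    import boto3

    ec2 = boto3.client("ec2")

    # Transient workers for the backlog; instances may be reclaimed, but the
    # queue-based design lets another worker pick up an interrupted video
    ec2.request_spot_instances(
        InstanceCount=10,
        LaunchSpecification={
            "ImageId": "ami-0abc1234",
            "InstanceType": "c5.large",
        },
    )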

For more information on Spot Instances, please visit the following URL:
https://aws.amazon.com/ec2/spot/

221
Q

Question 618

A company has a workflow that sends video files from their on-premise system to AWS for transcoding. They use EC2 worker instances that pull transcoding jobs from SQS. Why is SQS an appropriate service for this scenario?

A. SQS guarantees the order of the messages.

B. SQS synchronously provides transcoding output.

C. SQS checks the health of the worker instances.

D. SQS helps to facilitate horizontal scaling of encoding tasks.

A

Answer: D

Even though SQS guarantees the order of messages for FIFO queues, that is not the reason SQS is appropriate here. The normal reason for using SQS is to decouple systems, which facilitates horizontal scaling of AWS resources. SQS neither provides transcoding output nor checks the health of the worker instances. The health of the worker instances can be monitored via ELB or CloudWatch.

For more information on SQS, please visit the following URL:
https://aws.amazon.com/sqs/faqs/

222
Q

Question 619

You run an ad-supported photo sharing website using S3 to serve photos to visitors of your site. At some point you find out that other sites have been linking to the photos on your site, causing loss to your business. What is an effective method to mitigate this?

A. Remove public read access and use signed URLs with expiry dates.

B. Use CloudFront distributions for static content.

C. Block the IPs of the offending websites in Security Groups.

D. Store photos on an EBS volume of the web server.

A

Answer: A

CloudFront is only used for the distribution of content across edge or regional locations. It is not used for restricting access to content, so Option B is wrong. Blocking IPs is challenging because they are dynamic in nature and you will not know which sites are accessing your main site, so Option C is also not feasible. Storing photos on an EBS volume is not a good practice or architecture approach for an AWS Solutions Architect.
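
As a small sketch (the bucket and key names are placeholders), a time-limited signed URL can be generated with boto3:

    import boto3

    s3 = boto3.client("s3")

    # The URL embeds a signature and stops working after ExpiresIn seconds,
    # so hot-linked copies of the link go stale
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "photo-site-assets", "Key": "photos/cat.jpg"},
        ExpiresIn=3600,  # one hour
    )
    print(url)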

For more information on pre-signed URL’s, please visit the following URL:
https://docs.aws.amazon.com/AmazonS3/latest/dev/PresignedUrlUploadObject.html

223
Q

Question 620

A company wants to create standard templates for deployment of their Infrastructure. This would also be used to provision resources in another region in disaster recovery scenarios. Which AWS service can be used in this regard?

A. Amazon Simple Workflow Service

B. AWS Elastic Beanstalk

C. AWS CloudFormation

D. AWS OpsWorks

A

Answer: C

AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion. You can use AWS Cloud Formation’s sample templates or create your own templates to describe the AWS resources, and any associated dependencies or runtime parameters, required to run your application. You don’t need to figure out the order for provisioning AWS services or the subtleties of making those dependencies work. CloudFormation takes care of this for you. After the AWS resources are deployed, you can modify and update them in a controlled and predictable way, in effect applying version control to your AWS infrastructure the same way you do with your software. You can also visualize your templates as diagrams and edit them using a drag-and-drop interface with the AWS CloudFormation Designer.

For more information on AWS Cloudformation, please visit the following URL:
https://aws.amazon.com/cloudformation/

224
Q

Question 621

A company currently hosts their architecture in the US region. They now need to duplicate that architecture to the Europe region and extend the application hosted on this architecture to the new region. In order to ensure that users across the globe get the same seamless experience from either setup, what needs to be done?

A. Create a Classic Elastic Load Balancer setup to route traffic to both locations

B. Create a weighted Route53 policy to route traffic based on the weightage for each location

C. Create an Application Elastic Load Balancer setup to route traffic to both locations

D. Create a geolocation Route53 policy to route traffic based on the location.

A

Answer: D

The AWS Documentation mentions the following to support this requirement: Geolocation routing lets you choose the resources that serve your traffic based on the geographic location of your users, meaning the location that DNS queries originate from.
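
A sketch of a geolocation record for European users with boto3 (the zone ID, domain, and IP are placeholders); a matching record would be created for the US resources:

    import boto3

    route53 = boto3.client("route53")

    route53.change_resource_record_sets(
        HostedZoneId="Z1EXAMPLE",
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "A",
                "SetIdentifier": "europe-users",        # distinguishes this record
                "GeoLocation": {"ContinentCode": "EU"},  # match European queries
                "TTL": 60,
                "ResourceRecords": [{"Value": "203.0.113.10"}],
            },
        }]},
    )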

For more information on AWS Route53 Routing policies, please visit the following URL: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html

225
Q

Question 622

You have a set of EC2 Instances that support an application. They are currently hosted in the US Region. In the event of a disaster, you need a way to ensure that you can quickly provision the resources in another region. How could this be accomplished? Choose 2 answers from the options given below

A. Copy the underlying EBS Volumes to the destination region

B. Create EBS Snapshots and then copy them to the destination region

C. Create AMIs for the underlying Instances

D. Copy the metadata for the EC2 Instances to S3.

A

Answer: B, C

An AMI serves as a template of the underlying instance and can be copied to another region. You can also take snapshots of the volumes and then copy them to the destination region.
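
A minimal sketch of both copy operations with boto3 (the regions and IDs are placeholders); note that the calls are issued in the destination region:

    import boto3

    # Client in the disaster-recovery (destination) region
    ec2 = boto3.client("ec2", region_name="eu-west-1")

    # Copy an AMI from the primary region
    ec2.copy_image(
        Name="dr-copy-of-app-server",
        SourceImageId="ami-0abc1234",
        SourceRegion="us-east-1",
    )

    # Copy an EBS snapshot from the primary region
    ec2.copy_snapshot(
        SourceSnapshotId="snap-0abc1234",
        SourceRegion="us-east-1",
    )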

For more information on AMIs and EBS Snapshots, please visit the following URLs:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSSnapshots.html
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html

226
Q

Question 623

A company wants to have a NoSQL database hosted on the AWS Cloud. They don’t have the necessary staff to manage the underlying infrastructure. Which of the following would be ideal for this requirement?

A. AWS Aurora

B. AWS RDS

C. AWS DynamoDB

D. AWS Redshift

A

Answer: C

The AWS Documentation mentions the following: Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. DynamoDB lets you offload the administrative burdens of operating and scaling a distributed database, so that you don’t have to worry about hardware provisioning, setup and configuration, replication, software patching, or cluster scaling.

For more information on AWS DynamoDB, please visit the following URL:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html

227
Q

Question 624

You are building an automated transcription service in which Amazon EC2 worker instances process an uploaded audio file and generate a text file. You must store both of these files in the same durable storage until the text file is retrieved. You do not know what the storage capacity requirements are. Which storage option is both cost-efficient and scalable?

A. Multiple Amazon EBS volume with snapshots

B. A single Amazon Glacier vault

C. A single Amazon S3 bucket

D. Multiple instance stores

A

Answer: C

Amazon S3 is the perfect storage solution for audio files and text files. It is a highly available and durable storage device.

For more information on Amazon S3, please visit the following URL:
https://aws.amazon.com/s3/

228
Q

Question 625

A customer has an instance hosted in the AWS Public Cloud. The VPC and subnet used to host the Instance have been created with the default settings for the Network Access Control Lists. They need to provide an IT Administrator secure access to the underlying instance. How can this be accomplished?

A. Ensure the Network Access Control Lists allow Inbound SSH traffic from the IT Administrator’s Workstation

B. Ensure the Network Access Control Lists allow Outbound SSH traffic from the IT Administrator’s Workstation

C. Ensure that the security group allows Inbound SSH traffic from the IT Administrator’s Workstation

D. Ensure that the security group allows Outbound SSH traffic from the IT Administrator’s Workstation

A

Answer: D

If the VPC and subnet are created with the default settings, then the Network Access Control Lists would already allow all traffic. So you only need to ensure that the Security groups are changed to allow SSH traffic into the Instance.

For more information on VPC Security groups, please visit the following URL: https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_SecurityGroups.html

229
Q

Question 626

A company has an On-premise infrastructure which they want to extend to the AWS Cloud. There is a need to ensure that communication across both environments is possible over the Internet. What would you create in this case to fulfil this requirement?

A. Create a VPC peering connection between the On-premise and the AWS Environment

B. Create an AWS Direct Connect connection between the On-premise and the AWS Environment

C. Create a VPN connection between the On-premise and the AWS Environment

D. Create a Virtual private gateway connection between the On-premise and the AWS Environment

A

Answer: C

The AWS Documentation mentions the following: One can create a VPN connection to establish communication across both environments over the Internet.

For more information on VPN connections, please visit the following URL: https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_VPN.html

230
Q

Question 627

A company wants to build a brand new application on the AWS Cloud. They want to ensure that the application follows a microservices architecture. Which of the following services can be used to build this sort of architecture? Choose 3 answers from the options given below.

A. AWS Lambda

B. AWS ECS

C. AWS API gateway

D. AWS Config

A

Answer: A,B,C

AWS Lambda is a serverless compute service that allows you to build independent services. ECS is the Elastic Container Service that can be used to manage containers. The API Gateway is a serverless component for managing access to APIs.

For more information on Microservices on AWS, please visit the following URL:
https://aws.amazon.com/microservices/

231
Q

Question 628

You are deploying an application to track GPS coordinates of delivery trucks in the United States. Coordinates are transmitted from each delivery truck once every three seconds. You need to design an architecture that will enable real-time processing of these coordinates from multiple consumers. Which service should you use to implement data ingestion?

A. Amazon Kinesis

B. AWS Data Pipeline

C. Amazon AppStream

D. Amazon Simple Queue Service

A

Answer: A

The AWS Documentation mentions the following: Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. Amazon Kinesis offers key capabilities to cost-effectively process streaming data at any scale, along with the flexibility to choose the tools that best suit the requirements of your application. With Amazon Kinesis, you can ingest real-time data such as video, audio, application logs, website clickstreams, and IoT telemetry data for machine learning, analytics, and other applications. Amazon Kinesis enables you to process and analyze data as it arrives and respond instantly instead of having to wait until all your data is collected before the processing can begin.

For more information on Amazon Kinesis, please visit the following URL:
https://aws.amazon.com/kinesis/

232
Q

Question 629

A company is planning on hosting a set of EC2 Instances on the AWS Cloud. They also need to ensure data can be stored on the EC2 Instances. Which block-level storage device would make this possible?

A. Amazon S3

B. Amazon Glacier

C. Amazon Storage Gateway

D. Amazon EBS Volumes

A

Answer: D

The AWS Documentation mentions the following: An Amazon EBS volume is a durable, block-level storage device that you can attach to a single EC2 instance. You can use EBS volumes as primary storage for data that requires frequent updates, such as the system drive for an instance or storage for a database application. You can also use them for throughput-intensive applications that perform continuous disk scans.

For more information on Amazon EBS Volumes, please visit the following URL:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumes.html

233
Q

Question 630

A company is planning on using the AWS Redshift service. The data on Redshift and the service itself would be used continuously for the next 3 years as per the current business plan. Which of the following would help in a more COST effective solution when using the Redshift service?

A. Consider using On-demand instances for the Redshift Cluster

B. Enable Automated backup

C. Consider using Reserved instances for the Redshift Cluster

D. Consider not using a cluster for the Redshift nodes

A

Answer: C

The AWS Documentation mentions the following: If you intend to keep your Amazon Redshift cluster running continuously for a prolonged period, you should consider purchasing reserved node offerings. These offerings provide significant savings over on-demand pricing, but they require you to reserve compute nodes and commit to paying for those nodes for either a one-year or three-year duration.

For more information on Reserved Nodes in Redshift , please visit the following URL:
https://docs.aws.amazon.com/redshift/latest/mgmt/purchase-reserved-node-instance.html

234
Q

Question 631

A company is planning to run a number of admin-related scripts using the AWS Lambda service. There is a need to understand if any errors are encountered when the scripts run. How can this be accomplished in the most effective manner?

A. Use Cloudwatch metrics and logs to watch for errors

B. Use Cloudtrail to monitor for errors

C. Use the AWS Config service to monitor for errors

D. Use the AWS Inspector service to monitor for errors

A

Answer: A

The AWS Documentation mentions the following: AWS Lambda automatically monitors Lambda functions on your behalf, reporting metrics through Amazon CloudWatch. To help you troubleshoot failures in a function, Lambda logs all requests handled by your function and also automatically stores logs generated by your code through Amazon CloudWatch Logs.
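
For example, the stored logs can be searched for errors with a filter pattern via boto3 (the function name is a placeholder; Lambda writes to a log group named /aws/lambda/<function-name>):

    import boto3

    logs = boto3.client("logs")

    # Pull only the log events that contain the word ERROR
    resp = logs.filter_log_events(
        logGroupName="/aws/lambda/admin-script",
        filterPattern="ERROR",
    )
    for event in resp["events"]:
        print(event["message"])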

For more information on Monitoring Lambda functions , please visit the following URL:
https://docs.aws.amazon.com/lambda/latest/dg/monitoring-functions-logs.html

235
Q

Question 632

A CloudFront distribution is being used to distribute content from an S3 bucket. There is a requirement to ensure that only the right set of users get access to certain content. How can this be accomplished?

A. Create IAM Users for each user and then provide access to the S3 bucket content

B. Create IAM Groups for each set of users and then provide access to the S3 bucket content

C. Create CloudFront signed URLs and then distribute these URLs to the users

D. Use IAM Policies for the underlying S3 buckets to restrict content

A

Answer: C

The AWS Documentation mentions the following: Many companies that distribute content via the internet want to restrict access to documents, business data, media streams, or content that is intended for selected users, for example, users who have paid a fee.

To securely serve this private content using CloudFront, you can do the following (a short sketch follows the list):

  • Require that your users access your private content by using special CloudFront signed URLs or signed cookies.

  • Require that your users access your Amazon S3 content using CloudFront URLs, not Amazon S3 URLs. Requiring CloudFront URLs isn’t required, but we recommend it to prevent users from bypassing the restrictions that you specify in signed URLs or signed cookies.
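
As a sketch of generating a signed URL with botocore's CloudFrontSigner (the key pair ID, key file path, and distribution domain are placeholders; the third-party rsa package does the signing):

    import datetime
    import rsa
    from botocore.signers import CloudFrontSigner

    def rsa_signer(message):
        # Sign with the private key of the CloudFront key pair (path is a placeholder)
        with open("private_key.pem", "rb") as f:
            private_key = rsa.PrivateKey.load_pkcs1(f.read())
        return rsa.sign(message, private_key, "SHA-1")

    signer = CloudFrontSigner("KEY_PAIR_ID", rsa_signer)  # key pair ID placeholder

    # The URL is only valid until the given expiry date
    url = signer.generate_presigned_url(
        "https://d111111abcdef8.cloudfront.net/private/video.mp4",
        date_less_than=datetime.datetime(2025, 1, 1),
    )
    print(url)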

For more information on serving private content via CloudFront, please visit the following URL:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PrivateContent.html

236
Q

Question 633

You are planning on creating a VPC from scratch and launch EC2 Instances in the subnet. What should be done to ensure that one can access the EC2 Instance from the Internet?

A. Attach an Internet gateway to the VPC and add a route for 0.0.0.0/0 to the Route table

B. Attach a NAT gateway to the VPC and add a route for 0.0.0.0/0 to the Route table

C. Attach a NAT gateway to the VPC and add a route for 0.0.0.0/32 to the Route table

D. Attach an Internet gateway to the VPC and add a route for 0.0.0.0/32 to the Route table

A

Answer: A

As illustrated in the AWS Documentation, the VPC needs an Internet gateway attached and the subnet’s route table needs a route for 0.0.0.0/0 pointing to that gateway.
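
A minimal boto3 sketch of both steps (the VPC and route table IDs are placeholders):

    import boto3

    ec2 = boto3.client("ec2")

    # Create the Internet gateway and attach it to the VPC
    igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
    ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId="vpc-0abc1234")

    # Default route sends all internet-bound traffic through the gateway
    ec2.create_route(
        RouteTableId="rtb-0abc1234",
        DestinationCidrBlock="0.0.0.0/0",
        GatewayId=igw_id,
    )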

For more information on the Internet gateway , please visit the following URL:
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Internet_Gateway.html

237
Q

Question 634

Your company currently has an entire data warehouse of assets that need to be migrated to the AWS Cloud. Which of the following services should this be migrated to?

A. AWS DynamoDB

B. AWS RDS

C. AWS RDS

D. AWS Redshift

A

Answer: D

The AWS Documentation mentions the following: Amazon Redshift is a fully managed, petabyte-scale data warehouse service in the cloud. You can start with just a few hundred gigabytes of data and scale to a petabyte or more. This enables you to use your data to acquire new insights for your business and customers.

For more information on AWS Redshift, please visit the following URL: https://docs.aws.amazon.com/redshift/latest/mgmt/welcome.html

238
Q

Question 635

Your company has confidential documents stored in the Simple Storage Service. Due to compliance requirements, you have to ensure that the data in the S3 bucket is available in a different geographical location. As an architect, what change would you make to comply with this requirement?

A. Apply Multi-AZ for the underlying S3 bucket

B. Copy the data to an EBS Volume in another Region

C. Create a snapshot of the S3 bucket and copy it to another region

D. Enable Cross region replication for the S3 bucket

A

Answer: D

This is mentioned clearly as a use case for S3 cross-region replication. You might configure cross-region replication on a bucket for various reasons, including the following:

  • Compliance requirements — Although, by default, Amazon S3 stores your data across multiple geographically distant Availability Zones, compliance requirements might dictate that you store data at even further distances. Cross-region replication allows you to replicate data between distant AWS Regions to satisfy these compliance requirements.

For more information on S3 cross-region replication, please visit the following URL:
https://docs.aws.amazon.com/AmazonS3/latest/dev/crr.html

239
Q

Question 636

A company has a requirement to have a stack-based model for their resources in AWS. They want to have different stacks for the Development and Production environments. Which of the following can be used to fulfil this requirement?

A. Use EC2 tags to define different stack layers for your resources.

B. Define the metadata for the different layers in DynamoDB

C. Use AWS Opsworks to define the different layers for your application

D. Use AWS Config to define the different layers for your application

A

Answer: C

This can be done via the OpsWorks service. Below is the documentation from AWS to support this requirement: AWS OpsWorks Stacks lets you manage applications and servers on AWS and on-premises. With OpsWorks Stacks, you can model your application as a stack containing different layers, such as load balancing, database, and application server. You can deploy and configure Amazon EC2 instances in each layer or connect other resources such as Amazon RDS databases.

For more information on OpsWorks Stacks, please visit the following URL:
https://aws.amazon.com/opsworks/stacks/

240
Q

Question 637

You are designing a web application that stores static assets in an Amazon Simple Storage Service (S3) bucket. You expect this bucket to immediately receive over 150 PUT requests per second. What should you do to ensure optimal performance?

A. Use multi-part upload.

B. Add a random prefix to the key names.

C. Amazon S3 will automatically manage performance at this scale.

D. Use a predictable naming scheme, such as sequential numbers or date time sequences, in the key names

A

Answer: B

If your workload in an Amazon S3 bucket routinely exceeds 100 PUT/LIST/DELETE requests per second or more than 300 GET requests per second, then you need to follow some guidelines for your S3 bucket. One way to introduce randomness to key names is to add a hash string as a prefix to the key name. For example, you can compute an MD5 hash of the character sequence that you plan to assign as the key name and use its first few characters as the prefix.
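
A tiny illustration of the hash-prefix idea (the key format is hypothetical):

    import hashlib

    def randomized_key(original_key: str) -> str:
        # Prefix the key with the first four hex characters of its MD5 hash so
        # keys spread across S3 partitions instead of sharing one hot prefix
        prefix = hashlib.md5(original_key.encode()).hexdigest()[:4]
        return f"{prefix}/{original_key}"

    # e.g. "2014-01-01/photo1.jpg" becomes "<4-char-hash>/2014-01-01/photo1.jpg"
    print(randomized_key("2014-01-01/photo1.jpg"))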

For performance considerations in the Simple Storage Service, please visit the following URL:
http://docs.aws.amazon.com/AmazonS3/latest/dev/request-rate-perf-considerations.html

241
Q

Question 638

An infrastructure is being hosted in AWS using the following resources

a) A couple of EC2 instances serving a web based application
b) An Elastic Load Balancer in front of the EC2 Instances
c) An AWS RDS with Multi-AZ enabled

Which of the following can be added to the setup to ensure scalability?

A. Add another ELB to the setup

B. Add more EC2 Instances to the setup

C. Enable Read Replicas for the AWS RDS

D. Add an Autoscaling Group to the setup

A

Answer: D

The AWS Documentation mentions the following: AWS Auto Scaling enables you to configure automatic scaling for the scalable AWS resources for your application in a matter of minutes. AWS Auto Scaling uses the Auto Scaling and Application Auto Scaling services to configure scaling policies for your scalable AWS resources.
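
A minimal sketch of creating an Auto Scaling group with boto3 (all names and subnet IDs are placeholders, and a launch configuration is assumed to already exist):

    import boto3

    autoscaling = boto3.client("autoscaling")

    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="web-asg",
        LaunchConfigurationName="web-launch-config",  # assumed to exist
        MinSize=2,
        MaxSize=6,
        # Spread instances across two subnets/AZs behind the ELB
        VPCZoneIdentifier="subnet-0abc1234,subnet-0def5678",
    )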

For more information on AWS Autoscaling, please visit the URL:
https://docs.aws.amazon.com/autoscaling/plans/userguide/what-is-aws-auto-scaling.html

242
Q

Question 639

A company wants to store their documents in AWS. Initially these documents will be used frequently. After a duration of 6 months, these documents need to be archived.
How would you architect this requirement?

A. Store the files in Amazon EBS and create a lifecycle policy to remove the files after 6 months.

B. Store the files in Amazon S3 and create a lifecycle policy to remove the files after 6 months.

C. Store the files in Amazon Glacier and create a lifecycle policy to remove the files after 6 months.

D. Store the files in Amazon EFS and create a lifecycle policy to remove the files after 6 months.

A

Answer: B

The AWS Documentation mentions the following on lifecycle policies: Lifecycle configuration enables you to specify the lifecycle management of objects in a bucket. The configuration is a set of one or more rules, where each rule defines an action for Amazon S3 to apply to a group of objects. These actions can be classified as follows (a short sketch follows the list):

  • Transition actions — In which you define when objects transition to another storage class. For example, you may choose to transition objects to the STANDARD_IA (IA, for infrequent access) storage class 30 days after creation, or archive objects to the GLACIER storage class one year after creation.
  • Expiration actions — In which you specify when the objects expire. Then Amazon S3 deletes the expired objects on your behalf.
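
As an illustrative boto3 sketch for the 6-month archive requirement (the bucket name and rule ID are placeholders):

    import boto3

    s3 = boto3.client("s3")

    # Transition every object to Glacier roughly 6 months (180 days) after creation
    s3.put_bucket_lifecycle_configuration(
        Bucket="company-documents",
        LifecycleConfiguration={
            "Rules": [{
                "ID": "archive-after-6-months",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to the whole bucket
                "Transitions": [{"Days": 180, "StorageClass": "GLACIER"}],
            }]
        },
    )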

For more information on AWS S3 Lifecycle policies, please visit the following URL:
https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html

243
Q

Question 640

When managing permissions for the API Gateway, what can be used to ensure that the right level of permissions is given to developers, IT admins and users? These permissions should be easily managed.

A. Use the secure token service to manage the permissions for the different users

B. Use IAM Policies to create different policies for the different types of users.

C. Use the AWS Config tool to manage the permissions for the different users

D. Use IAM Access Keys to create sets of keys for the different types of users.

A

Answer: B

The AWS Documentation mentions the following: You control access to Amazon API Gateway with IAM permissions by controlling access to the following two API Gateway component processes:

  • To create, deploy, and manage an API in API Gateway, you must grant the API developer permissions to perform the required actions supported by the API management component of API Gateway.
  • To call a deployed API or to refresh the API caching, you must grant the API caller permissions to perform required IAM actions supported by the API execution component of API Gateway.

For more information on permissions with the API gateway, please visit the following URL:
https://docs.aws.amazon.com/apigateway/latest/developerguide/permissions.html

244
Q

Question 641

Your development team wants to start making use of EC2 Instances to host their application and web servers. In terms of automation, they want the Instances to always download the latest version of the web and application servers when the Instances are launched. As an architect, what would you recommend?

A. Ask the development team to create scripts which can be added to the User Data section when the instance is launched

B. Ask the development team to create scripts which can be added to the Meta Data section when the instance is launched

C. Use Autoscaling Groups to install the Web and application servers when the instances are launched

D. Use EC2 Config to install the Web and application servers when the instances are launched

A

Answer: A

The AWS Documentation mentions the following: When you launch an instance in Amazon EC2, you have the option of passing user data to the instance that can be used to perform common automated configuration tasks and even run scripts after the instance starts. You can pass two types of user data to Amazon EC2: shell scripts and cloud-init directives. You can also pass this data into the launch wizard as plain text, as a file (this is useful for launching instances using the command line tools), or as base64-encoded text (for API calls).
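
For illustration, a user data shell script can be passed when launching the instance with boto3 (the AMI ID and the install commands are placeholders for the team's actual bootstrap steps):

    import boto3

    ec2 = boto3.client("ec2")

    # Runs as root on first boot; replace with the commands that pull the
    # latest web/application server versions
    user_data = (
        "#!/bin/bash\n"
        "yum update -y\n"
        "yum install -y httpd\n"
        "systemctl start httpd\n"
    )

    ec2.run_instances(
        ImageId="ami-0abc1234",
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        UserData=user_data,  # boto3 base64-encodes this for the API
    )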

For more information on User Data, please visit the following URL:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html

245
Q

Question 642

Your company has an application that handles the uploading, processing and publishing of videos posted by users. They currently have the following architecture for this application:

a) A set of EC2 Instances which take the videos uploaded by users and put them in S3 buckets
b) A set of EC2 worker processes to process and publish the videos
c) An Autoscaling Group for the EC2 worker processes

Which of the following can be added to the architecture to make it more reliable?

A. Amazon SQS

B. Amazon SNS

C. Amazon Cloudfront

D. Amazon SES

A

Answer: A

Amazon SQS is used to decouple systems. It can store the requests to process videos, which can then be picked up by the worker processes. The AWS Documentation mentions the following: Amazon Simple Queue Service (Amazon SQS) offers a reliable, highly-scalable hosted queue for storing messages as they travel between applications or microservices. It moves data between distributed application components and helps you decouple these components.
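
A small sketch of the decoupling with boto3 (the queue URL is a placeholder): the upload tier enqueues a job and a worker polls, processes, and deletes it:

    import boto3

    sqs = boto3.client("sqs")
    queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/video-jobs"

    # Upload tier: enqueue a pointer to the uploaded video
    sqs.send_message(QueueUrl=queue_url, MessageBody="s3://uploads/video123.mp4")

    # Worker tier: long-poll for work, process it, then delete the message
    resp = sqs.receive_message(
        QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=20
    )
    for msg in resp.get("Messages", []):
        # ... transcode the video referenced in msg["Body"] ...
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])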

For more information on AWS SQS, please visit the following URL:
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/Welcome.html

246
Q

Question 643

There is an urgent requirement to monitor some database metrics for a database hosted on AWS and send notifications. Which AWS services can accomplish this? Choose 2 answers from the options given below.

A. Amazon Simple Email Service

B. Amazon CloudWatch

C. Amazon Simple Queue Service

D. Amazon Route 53

E. Amazon Simple Notification Service

A

Answer: B, E

Amazon CloudWatch will be used to monitor the IOPS metrics from the RDS instance, and Amazon Simple Notification Service will be used to send the notification if any alarm is triggered.

For more information on CloudWatch and SNS, please visit the below URLs:
https://aws.amazon.com/cloudwatch/
https://aws.amazon.com/sns/

247
Q

Question 644

You have a business-critical two tier web app currently deployed in 2 availability zones in a single region, using Elastic Load Balancing and Auto-Scaling. The app depends on synchronous replication at the database layer. The application needs to remain fully available even if one application AZ goes off-line and AutoScaling cannot launch new instances in the remaining AZ. How can the current architecture be enhanced to ensure this?

A. Deploy in 2 regions using Weighted Round Robin with AutoScaling minimums set of 50% peak load per Region.

B. Deploy in 3 AZ with Autoscaling minimum set to handle 33 percent peak load per zone.

C. Deploy in 3 AZ with Autoscaling minimum set to handle 50 percent peak load per zone.

D. Deploy in 2 regions using Weighted Round Robin with AutoScaling minimums set of 100% peak load per Region.

A

Answer: C

Since the requirement is that the application should remain fully available even if an AZ becomes unavailable, Options A and D are incorrect because cross-region deployment is not possible for ELB. ELBs can manage traffic within a region, not between regions. Option B is incorrect because if one AZ goes down, we would be operating at only 66% of capacity and not the required 100%.

For more information on Autoscaling please visit the below URL:
https://aws.amazon.com/autoscaling/

248
Q

Question 645

You have been tasked with creating a VPC network topology for your company. The VPC network must support both internet-facing applications and internally-facing applications accessed only over VPN. Both Internet-facing and internally-facing applications must be able to leverage at least 3 AZs for high availability. At a minimum, how many subnets must you create within your VPC to accommodate these requirements?

A. 2

B. 3

C. 4

D. 6

A

Answer: D

Since each subnet corresponds to one Availability Zone and you need 3 AZs each for the internet-facing and internal applications, you need 6 subnets (3 public and 3 private).

For more information on VPC and subnets please visit the below URL:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html

249
Q

Question 646

You have the following architecture deployed in AWS

a) A set of EC2 Instances which sit behind an ELB
b) A database hosted in AWS RDS

Of late, the performance of the database has been degrading due to the high number of read requests. Which of the following can be added to the architecture to alleviate the performance issue?

A. Enable Multi-AZ to add a secondary read-only DB in another AZ

B. Use ElastiCache in front of the database

C. Use AWS Cloudfront in front of the database

D. Use DynamoDB to offload all the reads. Populate the common read items in a separate table.

A

Answer: B

Amazon ElastiCache is an in-memory cache which can be used to cache common read requests. The AWS Documentation illustrates how caching can be added to an existing architecture.

For more information on database caching, please visit the below URL:
https://aws.amazon.com/caching/database-caching/

250
Q

Question 647

An application is currently hosted on an EC2 Instance which has attached EBS volumes. The data on these volumes is frequently accessed, but after a duration of a week the documents need to be moved to infrequent-access storage. Which of the following would be the ideal EBS volume type to use?

A. EBS Provisioned IOPS SSD

B. EBS Throughput Optimized HDD

C. EBS General Purpose SSD

D. EBS Cold HDD

A

Answer: D

The AWS Documentation mentions the following: Cold HDD (sc1) volumes provide low-cost magnetic storage that defines performance in terms of throughput rather than IOPS. With a lower throughput limit than st1, sc1 is a good fit for large, sequential cold-data workloads. If you require infrequent access to your data and are looking to save costs, sc1 provides inexpensive block storage.

For more information on the various EBS Volume types please visit the below URL:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html

251
Q

Question 648

A customer wants to import their existing virtual machines to the cloud. Which service can they use for this? Choose one answer from the options given below.

A. VM Import/Export

B. AWS Import/Export

C. AWS Storage Gateway

D. DB Migration service

A

Answer: A

VM Import/Export enables customers to import Virtual Machine (VM) images in order to create Amazon EC2 instances. Customers can also export previously imported EC2 instances to create VMs. Customers can use VM Import/Export to leverage their previous investments in building VMs by migrating their VMs to Amazon EC2.

For more information on AWS VM Import, please visit the URL:
https://aws.amazon.com/ec2/vm-import/

252
Q

Question 649

There is a company website that is going to be launched in the coming weeks. There is a probability that the traffic will be quite high in the first couple of weeks. In the event of a load failure, how can you set up DNS failover to a static website? Choose the correct answer from the options given below.

A. Duplicate the exact application architecture in another region and configure DNS weight-based routing

B. Enable failover to an on-premise data center to the application hosted there.

C. Use Route 53 with the failover option to failover to a static S3 website bucket or CloudFront distribution.

D. Add more servers in case the application fails.

A

Answer: C

Amazon Route 53 health checks monitor the health and performance of your web applications, web servers, and other resources. If you have multiple resources that perform the same function, you can configure DNS failover so that Amazon Route 53 will route your traffic from an unhealthy resource to a healthy resource. For example, if you have two web servers and one web server becomes unhealthy, Amazon Route 53 can route traffic to the other web server. So you can route traffic to a website hosted on S3 or to a CloudFront distribution.

For more information on DNS failover using Route53, please refer to the below link:
http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover.html

253
Q

Question 650

A company is running three production web server reserved EC2 instances with EBS-backed root volumes. These instances have a consistent CPU load of 80%. Traffic is being distributed to these instances by an Elastic Load Balancer. They also have production and development Multi-AZ RDS MySQL databases. What recommendation would you make to reduce cost in this environment without affecting availability of mission-critical systems? Choose the correct answer from the options given below

A. Consider using on-demand instances instead of reserved EC2 instances

B. Consider not using a Multi-AZ RDS deployment for the development database

C. Consider using spot instances instead of reserved EC2 instances

D. Consider removing the Elastic Load Balancer

A

Answer: B

Multi-AZ databases are better suited for production environments than for development environments, so you can reduce costs by not using Multi-AZ for the development database. Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database (DB) Instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. In case of an infrastructure failure, Amazon RDS performs an automatic failover to the standby (or to a read replica in the case of Amazon Aurora), so that you can resume database operations as soon as the failover is complete. Since the endpoint for your DB Instance remains the same after a failover, your application can resume database operation without the need for manual administrative intervention.

For more information on Multi-AZ RDS, please refer to the below link:
https://aws.amazon.com/rds/details/multi-az/

254
Q

Question 651

An application consists of a couple of EC2 Instances. One EC2 Instance hosts a web application and the other instance hosts the database server. Which of the following changes can be made to ensure high availability of the database layer?

A. Enable Read Replicas for the database

B. Enable Multi-AZ for the database

C. Have another EC2 Instance in the same availability zone with replication configured

D. Have another EC2 Instance in the another availability zone with replication configured

A

Answer: D

Since this is a self-managed database and not an AWS RDS instance, Options A and B are incorrect. To ensure high availability, place the second EC2 Instance in another Availability Zone, so that even if one goes down, the other one will still be available.

One can refer to the following link for achieving high availability in AWS:
https://media.amazonwebservices.com/architecturecenter/AWS_ac_ra_ftha_04.pdf

255
Q

Question 652

You are designing an architecture on AWS with disaster recovery in mind. Currently the architecture consists of an ELB and underlying EC2 Instances in a primary and secondary region. How can you establish a switch over in case of failure in the primary region?

A. Use Route53 Health checks and then do a failover

B. Use Cloudwatch metrics to detect the failure and then do a failover

C. Use scripts to scan Cloudwatch logs to detect the failure and then do a failover

D. Use Cloudtrail to detect the failure and then do a failover

A

Answer: A

The AWS Documentation mentions the following: If you have multiple resources that perform the same function, you can configure DNS failover so that Route 53 will route your traffic from an unhealthy resource to a healthy resource. For example, if you have two web servers and one web server becomes unhealthy, Route 53 can route traffic to the other web server.

For more information on configuring DNS failover using Route53, one can refer to the below link:
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover.html

256
Q

Question 653

A company has assigned two web server instances to an Elastic Load Balancer. However, the instances and the ELB are not reachable via the URL of the Elastic Load Balancer serving the web app data from the EC2 instances. How might you resolve the issue so that your instances are serving the web app data to the public Internet? Choose the correct answer from the options given below

A. Attach an Internet gateway to the VPC and route it to the subnet

B. Add an elastic IP address to the instance

C. Use Amazon Elastic Load Balancer to serve requests to your instances located in the internal subnet

D. None of the above

A

Answer: A

The Internet gateway is a prerequisite for instances to be accessed from the internet; if it is not attached to the VPC, the instances will not be reachable.

For more information on Internet gateways, please refer to the below link:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Internet_Gateway.html

257
Q

Question 654

Your company currently has an infrastructure hosted On-premise. They have requested you to devise an architecture on AWS which can be used for migrating some of the On-premise components. They are currently concerned with the data storage layer. They also want minimal administrative overhead for the underlying infrastructure in AWS. Which of the following would be included in the architecture you propose? Choose 2 answers from the options given below

A. Use DynamoDB to store data in tables

B. Use EC2 to host the data on EBS volumes

C. Use the Simple Storage service to store data

D. Use AWS RDS to store the data

A

Answer: A, C

Both the Simple Storage Service and DynamoDB are completely serverless offerings from AWS where you don’t need to manage the infrastructure.

For more information on S3 and DynamoDB, please refer to the below links:

https://aws.amazon.com/s3/
https://aws.amazon.com/dynamodb/

258
Q

Question 655

Currently you’re helping design and architect a highly available application. After building the initial environment, you’ve found that part of your application does not work correctly until port 443 is added to the security group. After adding port 443 to the appropriate security group, how much time will it take before the changes are applied and the application begins working correctly? Choose the correct answer from the options below

A. Generally, it takes 2-5 minutes in order for the rules to propagate

B. Immediately after a reboot of the EC2 instances belonging to that security group

C. Changes apply instantly to the security group, and the application should be able to respond to 443 requests

D. It will take 60 seconds for the rules to apply to all availability zones within the region

A

Answer: C

As per the AWS documentation, changes to security group rules apply immediately. For more information on Security Groups, please refer to the below link:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_SecurityGroups.html

259
Q

Question 656

A company hosts data in S3. There is now a mandate that going forward, all data in the S3 bucket needs to be encrypted at rest. How can this be achieved?

A. Use AWS Access keys to encrypt the data

B. Use SSL certificates to encrypt the data

C. Enable server side encryption on the S3 bucket

D. Enable MFA on the S3 bucket

A

Answer: C

The AWS Documentation mentions the following: Server-side encryption is about data encryption at rest—that is, Amazon S3 encrypts your data at the object level as it writes it to disks in its data centers and decrypts it for you when you access it. As long as you authenticate your request and you have access permissions, there is no difference in the way you access encrypted or unencrypted objects.
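
For illustration, default encryption at rest (SSE-S3, AES-256) can be set on the bucket with boto3 (the bucket name is a placeholder):

    import boto3

    s3 = boto3.client("s3")

    # Every new object written to the bucket is encrypted with AES-256 by default
    s3.put_bucket_encryption(
        Bucket="company-data",
        ServerSideEncryptionConfiguration={
            "Rules": [{
                "ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}
            }]
        },
    )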

For more information on S3 server side encryption, please refer to the below link:
https://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html

260
Q

Question 657

A company hosts data in S3. There is a requirement to control access to the S3 buckets. Which are the 2 ways in which this can be achieved?

A. Use Bucket policies

B. Use the Secure Token service

C. Use IAM user policies

D. Use AWS Access Keys

A

Answer: A, C

The AWS Documentation mentions the following: Amazon S3 offers access policy options broadly categorized as resource-based policies and user policies. Access policies you attach to your resources (buckets and objects) are referred to as resource-based policies. For example, bucket policies and access control lists (ACLs) are resource-based policies. You can also attach access policies to users in your account. These are called user policies. You may choose to use resource-based policies, user policies, or some combination of these to manage permissions to your Amazon S3 resources.

For more information on S3 access control, please refer to the below link:
https://docs.aws.amazon.com/AmazonS3/latest/dev/s3-access-control.html

261
Q

Question 658

Your application provides data transformation services. Files containing data to be transformed are first uploaded to Amazon S3 and then transformed by a fleet of spot EC2 instances. Files submitted by your premium customers must be transformed with the highest priority. How should you implement such a system?

A. Use a DynamoDB table with an attribute defining the priority level. Transformation instances will scan the table for tasks, sorting the results by priority level.

B. Use Route 53 latency based-routing to send high priority tasks to the closest transformation instances.

C. Use two SQS queues, one for high priority messages, the other for default priority. Transformation instances first poll the high priority queue; if there is no message, they poll the default priority queue.

D. Use a single SQS queue. Each message contains the priority level. Transformation instances poll high-priority messages first.

A

Answer: C

The best way is to use 2 SQS queues. Each queue can be polled separately. The high priority queue can be polled first.
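
A sketch of the two-queue polling loop with boto3 (the queue URLs are placeholders):

    import boto3

    sqs = boto3.client("sqs")

    HIGH_PRIORITY = "https://sqs.us-east-1.amazonaws.com/123456789012/transform-high"
    DEFAULT = "https://sqs.us-east-1.amazonaws.com/123456789012/transform-default"

    def next_task():
        # Premium customers' queue is checked first; the default queue is
        # polled only when no high-priority work is waiting
        for queue_url in (HIGH_PRIORITY, DEFAULT):
            resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
            messages = resp.get("Messages", [])
            if messages:
                return queue_url, messages[0]
        return None, None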

For more information on AWS SQS, please refer to the below link:
https://aws.amazon.com/sqs/

262
Q

Question 659

A VPC has been set up with a public subnet and an internet gateway. You set up an EC2 instance with a public IP, but you are still not able to connect to it via the Internet. You can see that the right Security Groups are in place. What should you do to ensure you can connect to the EC2 instance from the internet?

A. Set an Elastic IP Address to the EC2 instance

B. Set a Secondary Private IP Address to the EC2 instance

C. Ensure the right route entry is there in the Route table

D. There must be some issue in the EC2 instance. Check the system logs.

A

Answer: C

You have to ensure that the Route table has an entry to the Internet gateway, because this is required for instances to communicate over the internet. The AWS Documentation shows the configuration of the public subnet in a VPC.

Option A is wrong because the instance already has a public IP assigned, which should be enough to connect to the Internet once routing is in place.

Option B is wrong because private IPs cannot be accessed from the internet.

Option D is wrong because the Route table is what is causing the issue, not the instance itself.

For more information on AWS public subnets, please visit the link:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario1.html

263
Q

Question 660

A customer has a single 3-TB volume on-premises that is used to hold a large repository of images and print layout files. This repository is growing at 500 GB a year and must be presented as a single logical volume. The customer is becoming increasingly constrained with their local storage capacity and wants an off-site backup of this data, while maintaining low-latency access to their frequently accessed data. Which AWS Storage Gateway configuration meets the customer requirements?

A. Gateway-Cached volumes with snapshots scheduled to Amazon S3

B. Gateway-Stored volumes with snapshots scheduled to Amazon S3

C. Gateway-Virtual Tape Library with snapshots to Amazon S3

D. Gateway-Virtual Tape Library with snapshots to Amazon Glacier

A

Answer: A

Gateway-cached volumes let you use Amazon Simple Storage Service (Amazon S3) as your primary data storage while retaining frequently accessed data locally in your storage gateway. Gateway-cached volumes minimize the need to scale your on-premises storage infrastructure, while still providing your applications with low-latency access to their frequently accessed data. You can create storage volumes up to 32 TiB in size and attach to them as iSCSI devices from your on-premises application servers. Your gateway stores data that you write to these volumes in Amazon S3 and retains recently read data in your on-premises storage gateway’s cache and upload buffer storage.

For more information on Storage gateways, please visit the link:
http://docs.aws.amazon.com/storagegateway/latest/userguide/storage-gateway-cached-concepts.html

264
Q

Question 661

A company is planning to use the AWS ECS service to work with containers. There is a need for the least amount of administrative overhead when launching containers. How can this be achieved?

A. Use the Fargate launch type in AWS ECS

B. Use the EC2 launch type in AWS ECS

C. Use the Autoscaling launch type in AWS ECS

D. Use the ELB launch type in AWS ECS

A

Answer: A

The AWS Documentation mentions the following: The Fargate launch type allows you to run your containerized applications without the need to provision and manage the backend infrastructure. Just register your task definition and Fargate launches the container for you.
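
A minimal sketch of running a task with the Fargate launch type via boto3 (the cluster, task definition, and subnet are placeholders):

    import boto3

    ecs = boto3.client("ecs")

    # No EC2 instances to manage; Fargate provisions the compute for the task
    ecs.run_task(
        cluster="app-cluster",
        launchType="FARGATE",
        taskDefinition="web-app:1",
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-0abc1234"],
                "assignPublicIp": "ENABLED",
            }
        },
    )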

For more information on the different launch types, please visit the link:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/launch_types.html

265
Q

Question 662

You currently manage a set of web servers hosted on EC2 Instances with public IP addresses. These IP addresses are mapped to domain names. There was an urgent maintenance activity that had to be carried out on the servers, and the servers had to be restarted. Now the web application hosted on these EC2 Instances is not accessible via the domain names configured earlier. Which of the following could be a reason for this?

A. The Route53 hosted zone needs to be restarted.

B. The network interfaces need to be initialized again.

C. The public IP addresses need to be associated with the ENI again.

D. The public IP addresses have changed after the instance was stopped and started

A

Answer: D

By default, the public IP address of an EC2 Instance is released after the instance is stopped and started. Hence the earlier IP addresses which were mapped to the domain names would have become invalid now.
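
A common remediation (not stated in the question itself) is to use Elastic IP addresses, which persist across stop/start cycles. A minimal boto3 sketch, assuming a placeholder instance ID:

```python
import boto3

ec2 = boto3.client("ec2")

# Allocate an Elastic IP in the VPC scope and attach it to the instance.
# "i-0123456789abcdef0" is a hypothetical instance ID.
allocation = ec2.allocate_address(Domain="vpc")
ec2.associate_address(
    InstanceId="i-0123456789abcdef0",
    AllocationId=allocation["AllocationId"],
)
print("Stable public IP:", allocation["PublicIp"])
```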

For more information on public IP addressing, please visit the below URL:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-instance-addressing.html#concepts-public-addresses

266
Q

Question 663

You are responsible for deploying a critical application onto AWS. Part of the requirements for this application is to ensure that the controls set for this application meet PCI compliance. There is also a need to monitor web application logs to identify any malicious activity. Which of the following services can be used to fulfil this requirement? Choose 2 answers from the options given below

A. Amazon Cloudwatch Logs

B. Amazon VPC Flow Logs

C. Amazon AWS Config

D. Amazon Cloudtrail

A

Answer: A, D

The AWS Documentation mentions the following about these services: AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. This event history simplifies security analysis, resource change tracking, and troubleshooting.

For more information on Cloudtrail, please refer to below URL:
https://aws.amazon.com/cloudtrail/

You can use Amazon CloudWatch Logs to monitor, store, and access your log files from Amazon Elastic Compute Cloud (Amazon EC2) instances, AWS CloudTrail, Amazon Route 53, and other sources. You can then retrieve the associated log data from CloudWatch Logs.

For more information on Cloudwatch logs, please refer to below URL:
http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html

267
Q

Question 664

There is a requirement to host a database server. This server should not be able to connect to the internet except in the case of downloading the required database patches. Which of the following solutions would be the best to satisfy all the above requirements? Choose the correct answer from the options below

A. Set up the database in a private subnet with a security group which only allows outbound traffic.

B. Set up the database in a public subnet with a security group which only allows inbound traffic.

C. Set up the database in a local data center and use a private gateway to connect the application to the database.

D. Set up the database in a private subnet which connects to the Internet via a NAT instance.

A

Answer: D

This sort of setup, as per the AWS documentation, coincides with Scenario 2 of setting up a VPC: a VPC with public and private subnets, where instances in the private subnet reach the internet through a NAT device in the public subnet.
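
As a sketch of the routing side of this setup (all resource IDs are hypothetical placeholders; a NAT gateway is shown here for brevity, while the answer itself mentions a NAT instance):

```python
import boto3

ec2 = boto3.client("ec2")

# Create a NAT gateway in the public subnet using a pre-allocated Elastic IP.
# All resource IDs here are placeholder values.
nat = ec2.create_nat_gateway(
    SubnetId="subnet-public123",
    AllocationId="eipalloc-12345678",
)
nat_id = nat["NatGateway"]["NatGatewayId"]
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

# Point the private subnet's default route at the NAT device so the
# database can reach the internet for patches but remains unreachable
# from the outside.
ec2.create_route(
    RouteTableId="rtb-private123",
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_id,
)
```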

For more information on the VPC Scenario for public and private subnets please see the below link:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario2.html

268
Q

Question 665

You have both production and development instances running in your VPC. You want to ensure that the people responsible for the development instances don't have access to work on the production instances, to ensure better security. Using policies, which of the following would be the best way to accomplish this?

Choose the correct answer from the options given below:

A. Launch the test and production instances in separate VPCs and use VPC peering

B. Create an IAM policy with a condition which allows access to only instances that are used for production or development

C. Launch the test and production instances in different Availability Zones and use Multi Factor Authentication

D. Define the tags on the test and production servers and add a condition to the IAM policy which allows access to specific tags

A

Answer: D

You can easily add tags which define which instances are production and which are development, and then ensure these tags are used when controlling access via an IAM policy.
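
A minimal sketch of such a tag-conditioned policy, assuming a hypothetical Environment tag with the value development (the policy name and actions are illustrative only):

```python
import json
import boto3

iam = boto3.client("iam")

# Allow instance actions only on resources tagged Environment=development.
# The policy name and tag key/value are hypothetical.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ec2:StartInstances", "ec2:StopInstances"],
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {
                "StringEquals": {"ec2:ResourceTag/Environment": "development"}
            },
        }
    ],
}
iam.create_policy(
    PolicyName="DevInstancesOnly",
    PolicyDocument=json.dumps(policy),
)
```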

For more information on tagging your resources, please refer to the below link:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html

269
Q

Question 666

A company is planning on building a 2 tier architecture which consists of a web server and a database server, hosted on EC2 Instances accordingly. The database server will experience a lot of read/write operations whereas the web server will have a standard workload. Which of the following underlying EBS volumes are optimal to use for the underlying EC2 Instances? Choose 2 answers from the options given below.

A. General Purpose SSD for the web server

B. Provisioned IOPS for the web server

C. General Purpose SSD for the database server

D. Provisioned IOPS for the database server

A

Answer: A, D

If the database is going to have a lot of read/write requests, then the ideal solution would be to have the underlying EBS volume as Provisioned IOPS. Since the web server has a standard workload, a General Purpose SSD should be sufficient. The excerpt from the documentation also shows the different EBS volume types for different workloads. For more information on EBS Volume types,

please visit the following URL:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html

270
Q

Question 667

You are hosting a web server on an EC2 Instance. The number of requests is now consuming a large part of the CPU, and the response performance of the application is degrading. Which of the following would help alleviate the problem and provide a better response time?

A. Place the EC2 Instance behind a classic load balancer

B. Place the EC2 Instance behind an Application load balancer

C. Place the EC2 Instance in an Autoscaling Group with the max size as 1.

D. Place a Cloudfront distribution in front of the EC2 Instance

A

Answer: D

Since there is only a mention of one EC2 instance, placing it behind an ELB would not make much sense, hence Options A and B are invalid. Having it in an Autoscaling Group with just one instance would not make much sense either. A Cloudfront distribution would help alleviate the load on the EC2 Instance because of its edge locations and caching feature.

For more information on Cloudfront, please visit the following URL:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html

271
Q

Question 668

A company is hosting a MySQL database in AWS using the AWS RDS service. To offload the reads, a read replica has been created and reports are run off the read replica database. But at certain times, the reports are showing stale data. Why is this the case?

A. The Read replica has not been created properly

B. The backup of the original database has not been set properly

C. This is due to the replication lag

D. The Multi-AZ feature is not enabled

A

Answer: C

An AWS white paper on the caveats of Read Replicas, which must be taken into consideration by designers, mentions the following: Read replicas are separate database instances that are replicated asynchronously. As a result, they are subject to replication lag and might be missing some of the latest transactions. Application designers need to consider which queries have tolerance to slightly stale data. Those queries can be executed on a read replica, while the rest should run on the primary node. Read replicas also cannot accept any write queries.

For more information on AWS Cloud best practices, please visit the following URL: https://d1.awsstatic.com/whitepapers/AWS_Cloud_Best_Practices.pdf

272
Q

Question 669

One is planning on using SQS queues and AWS Lambda to leverage the serverless aspects of the AWS Cloud. Each invocation of AWS Lambda will send a message to an SQS queue. In order for messages to be sent, which of the following must be in place?

A. The queue must be a FIFO queue

B. An IAM Role with the required permissions

C. The code for Lambda must be written in C#

D. An IAM Group with the required permissions

A

Answer: B

When working with AWS Lambda functions, if there is a need to access other resources, then ensure that an IAM role is in place. The IAM role will have the required permissions to access the SQS queue.

For more information on AWS IAM Roles, please visit the following URL:
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html

273
Q

Question 670

You have enabled Cloudtrail logs for your company’s AWS account. In addition, the IT Security department has mentioned that the logs need to be encrypted. How can this be achieved?

A. Enable SSL certificates for the Cloudtrail logs

B. There is no need to do anything since the logs will already be encrypted

C. Enable Server side encryption for the trail

D. Enable Server side encryption for the destination S3 bucket

A

Answer: B

The AWS Documentation mentions the following: By default, CloudTrail event log files are encrypted using Amazon S3 server-side encryption (SSE). You can also choose to encrypt your log files with an AWS Key Management Service (AWS KMS) key. You can store your log files in your bucket for as long as you want. You can also define Amazon S3 lifecycle rules to archive or delete log files automatically. If you want notifications about log file delivery and validation, you can set up Amazon SNS notifications.
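
If stronger control over the keys were required (beyond the default SSE), a trail could be switched to a KMS key. A minimal boto3 sketch, where the trail name and key ARN are hypothetical:

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Logs are SSE-encrypted by default; optionally re-encrypt with a KMS CMK.
# "management-trail" and the key ARN are placeholder values.
cloudtrail.update_trail(
    Name="management-trail",
    KmsKeyId="arn:aws:kms:us-east-1:111122223333:key/abcd1234-ab12-cd34-ef56-abcdef123456",
)
```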

For more information on how Cloudtrail works, please visit the following
URL: https://docs.aws.amazon.com/awscloudtrail/latest/userguide/how-cloudtrail-works.html

274
Q

Question 671

A company has set up their data layer in the Simple Storage Service. There are a number of requests which include reads/writes and updates to objects in an S3 bucket. Users sometimes complain that updates to an object are not being reflected. Which of the following could be a reason for this?

A. Versioning is not enabled for the bucket, so the newer version is not reflecting the right data

B. The updates are being made to the same key for the object

C. Encryption is enabled for the bucket, hence it is taking time for the update to occur

D. The metadata for the S3 bucket is incorrectly configured

A

Answer: B

Updates made to objects in S3 follow an eventual consistency model. Hence, when updates are made to the same key, there can be a slight delay before the updated object is returned to the user on the next read request.

For more information on the various aspects for the Simple Storage service, please visit the following URL: https://aws.amazon.com/s3/faqs/

275
Q

Question 672

A company needs to have a fully managed NoSQL database on the AWS Cloud. The database should have the ability for backups and high availability. Which Amazon database meets these requirements?

A. MySQL

B. Microsoft SQL Server

C. DynamoDB

D. Amazon Aurora

A

Answer: C

The AWS Documentation mentions the following: Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. DynamoDB lets you offload the administrative burdens of operating and scaling a distributed database, so that you don’t have to worry about hardware provisioning, setup and configuration, replication, software patching, or cluster scaling.

For more information on AWS DynamoDB, please visit the following URL:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html

276
Q

Question 673

A company is planning to move to the AWS Cloud. They want to leverage their existing Chef recipes for configuration management of their infrastructure. Which AWS service would be ideal for this requirement?

A. AWS Elastic Load Balancer

B. AWS Elastic beanstalk

C. AWS OpsWorks

D. AWS Inspector

A

Answer: C

The AWS Documentation mentions the following, which supports this requirement: AWS OpsWorks is a configuration management service that helps you configure and operate applications in a cloud enterprise by using Puppet or Chef. AWS OpsWorks Stacks and AWS OpsWorks for Chef Automate let you use Chef cookbooks and solutions for configuration management, while AWS OpsWorks for Puppet Enterprise lets you configure a Puppet Enterprise master server in AWS. Puppet offers a set of tools for enforcing the desired state of your infrastructure and automating on-demand tasks.

For more information on AWS OpsWorks,

please visit the following URL:
https://docs.aws.amazon.com/opsworks/latest/userguide/welcome.html

277
Q

Question 674

An application consists of a web server and database server hosted on separate EC2 Instances. There are a lot of read requests on the database, which is degrading the performance of the application. Which of the following can help improve the performance of the database under the heavy load?

A. Enable Multi-AZ for the database

B. Put ElastiCache in front of the database

C. Place another web server in the architecture to take the load

D. Place a Cloudfront distribution in front of the database

A

Answer: B

The ideal solution would be to use ElastiCache. The AWS Documentation further mentions the following with respect to ElastiCache: ElastiCache is a web service that makes it easy to set up, manage, and scale a distributed in-memory data store or cache environment in the cloud. It provides a high-performance, scalable, and cost-effective caching solution, while removing the complexity associated with deploying and managing a distributed cache environment.

For more information on AWS Elastic Cache, please visit the following URL:
https://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/WhatIs.html

278
Q

Question 675

You need to have the ability to store archive documents in AWS. This needs to be a COST effective solution. Which of the following would you use to meet this requirement?

A. Amazon Glacier

B. Amazon S3 Standard Infrequent Access

C. Amazon EFS

D. Amazon S3 Standard

A

Answer: A

The AWS Documentation mentions the following on Amazon Glacier: Amazon Glacier is an extremely low-cost storage service that provides durable storage with security features for data archiving and backup. With Amazon Glacier, customers can store their data cost effectively for months, years, or even decades. Amazon Glacier enables customers to offload the administrative burdens of operating and scaling storage to AWS, so they don’t have to worry about capacity planning, hardware provisioning, data replication, hardware failure detection and recovery, or time-consuming hardware migrations. For more information on Amazon Glacier,

please visit the following URL:
https://docs.aws.amazon.com/amazonglacier/latest/dev/introduction.html

279
Q

Question 676

You are planning on hosting a web application which consists of a web server and database server. They are going to be hosted on different EC2 Instances in different subnets in a VPC. Which of the following can be used to ensure that the database server only allows traffic from the web server?

A. Make use of Security Groups

B. Make use of VPC Flow Logs

C. Make use of Network Access Control Lists

D. Make use of IAM Roles

A

Answer: A

Security groups can be used to control traffic into an EC2 Instance. The AWS Documentation provides the rules tables for the security groups for a sample web and database server setup.

For more information on this use case scenario, please visit the following URL:
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario2.html

280
Q

Question 677

Your IT Supervisor is worried about users accidentally deleting objects in an S3 bucket. Which of the following can help prevent accidental deletion of objects in an S3 bucket? Choose 2 answers from the options given below

A. Enable encryption for the S3 bucket

B. Enable MFA Delete on the S3 bucket

C. Enable versioning on the S3 bucket

D. Enable IAM Roles on the S3 bucket

A

Answer: B, C

The AWS Documentation mentions the following: When a user performs a DELETE operation on an object, subsequent simple (un-versioned) requests will no longer retrieve the object. However, all versions of that object will continue to be preserved in your Amazon S3 bucket and can be retrieved or restored. Versioning’s MFA Delete capability, which uses multi-factor authentication, can be used to provide an additional layer of security. By default, all requests to your Amazon S3 bucket require your AWS account credentials. If you enable Versioning with MFA Delete on your Amazon S3 bucket, two forms of authentication are required to permanently delete a version of an object: your AWS account credentials and a valid six-digit code and serial number from an authentication device in your physical possession
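
A rough boto3 sketch of enabling both protections (the bucket name and MFA device serial/code are placeholders; MFA Delete can only be configured by the root account via the API or CLI):

```python
import boto3

s3 = boto3.client("s3")

# Enable versioning together with MFA Delete on the bucket.
# "example-bucket" and the MFA serial/token are hypothetical values.
s3.put_bucket_versioning(
    Bucket="example-bucket",
    MFA="arn:aws:iam::111122223333:mfa/root-account-mfa-device 123456",
    VersioningConfiguration={
        "Status": "Enabled",
        "MFADelete": "Enabled",
    },
)
```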

For more information on the features of S3, please visit the following URL:
https://aws.amazon.com/s3/faqs/

281
Q

Question 678

A company has an application that uses an S3 bucket as the data layer. As per the monitoring on the S3 bucket, it can be seen that the number of GET requests is 400 requests per second. The IT operations team is also getting service requests that users are getting HTTP 500 or 503 errors when accessing the application. Which of the following can be done to resolve the errors? Choose 2 answers from the options given below.

A. Add a Cloudfront distribution in front of the bucket.

B. Add randomness to the key names.

C. Add an ELB in front of the S3 bucket

D. Enable versioning for the S3 bucket

A

Answer: A, B

The AWS Documentation mentions the following: When your workload is sending mostly GET requests, you can add randomness to key names. In addition, you can integrate Amazon CloudFront with Amazon S3 to distribute content to your users with low latency and a high data transfer rate.
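
A small illustration of adding randomness to key names by prepending a short hash prefix (the key layout shown is hypothetical; S3 has since improved per-prefix scaling, but this was the guidance at the time):

```python
import hashlib

def randomized_key(original_key: str) -> str:
    """Prepend a short hash so keys spread across S3 index partitions."""
    prefix = hashlib.md5(original_key.encode()).hexdigest()[:4]
    return f"{prefix}/{original_key}"

# Sequential upload names no longer share one hot prefix:
print(randomized_key("2018-06-01/photo-0001.jpg"))
print(randomized_key("2018-06-01/photo-0002.jpg"))
```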

For more information on S3 bucket performance, please visit the following URL:
https://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-performance-improve/

282
Q

Question 679

A company has a Redshift cluster defined in AWS. The IT operations team have ensured that both automated and manual snapshots are in place. Since the cluster is going to be run for a long duration of a couple of years, Reserved Instances have been purchased. There has been a recent concern on the cost being incurred by the cluster. Which of the following steps can be carried out to minimize the costs being incurred by the cluster?

A. Disable the manual snapshots

B. Set the retention period of the automated snapshots to 35 days

C. Choose to use Spot Instances instead of Reserved Instances

D. Choose to use Instance store volumes to store the cluster data

A

Answer: A

The AWS Documentation mentions the following: Regardless of whether you enable automated snapshots, you can take a manual snapshot whenever you want. Amazon Redshift will never automatically delete a manual snapshot. Manual snapshots are retained even after you delete your cluster. Because manual snapshots accrue storage charges, it’s important that you manually delete them if you no longer need them.
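
A sketch of finding and deleting manual snapshots that are no longer needed (the cluster identifier is a placeholder; in practice you would filter by age or tags before deleting):

```python
import boto3

redshift = boto3.client("redshift")

# List manual snapshots for a hypothetical cluster and delete them.
snapshots = redshift.describe_cluster_snapshots(
    ClusterIdentifier="analytics-cluster",
    SnapshotType="manual",
)
for snap in snapshots["Snapshots"]:
    redshift.delete_cluster_snapshot(
        SnapshotIdentifier=snap["SnapshotIdentifier"]
    )
```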

For more information on working with Snapshots, please visit the following URL:
https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-snapshots.html

283
Q

Question 680

A company has a collection of EC2 Instances that are backed by EBS Volumes. The IT policy of the company states that all data must be backed up in an efficient manner. What is the MOST resilient way to backup the volumes?

A. Take regular EBS snapshots

B. Enable EBS volume encryption

C. Create a script to copy data to an EC2 Instance store

D. Mirror data across 2 EBS volumes

A

Answer: A

Option B is incorrect because it does not help in durability of EBS Volumes

Option C is incorrect since EC2 Instance stores are not durable

Option D is incorrect since mirroring data across EBS volumes is inefficient when you already have the option for EBS snapshots

The AWS Documentation mentions the following on AWS EBS Snapshots: You can back up the data on your Amazon EBS volumes to Amazon S3 by taking point-in-time snapshots. Snapshots are incremental backups, which means that only the blocks on the device that have changed after your most recent snapshot are saved. This minimizes the time required to create the snapshot and saves on storage costs by not duplicating data. When you delete a snapshot, only the data unique to that snapshot is removed. Each snapshot contains all of the information needed to restore your data (from the moment when the snapshot was taken) to a new EBS volume.
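
A minimal boto3 sketch of taking a point-in-time snapshot (the volume ID is a hypothetical placeholder):

```python
import boto3

ec2 = boto3.client("ec2")

# Create an incremental, point-in-time snapshot of an EBS volume.
# "vol-0123456789abcdef0" is a placeholder volume ID.
snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="Nightly backup",
)
print(snapshot["SnapshotId"])
```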

For more information on AWS EBS Snapshots, please visit the following URL:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSSnapshots.html

284
Q

Question 681

A company currently hosts a lot of data at their on-premise location. They want to start storing backups of this data on AWS. How can this be achieved in the most efficient way possible?

A. Create EBS volumes and store the data

B. Create EBS Snapshots and store the data

C. Make use of Storage Gateway Cached Volumes

D. Make use of Amazon Glacier

A

Answer: C

If a backup of on-premise data is required, the most efficient way would be to make use of Storage Gateway Cached Volumes. The AWS Documentation mentions the following on Cached Volumes: Cached volumes — You store your data in Amazon Simple Storage Service (Amazon S3) and retain a copy of frequently accessed data subsets locally. Cached volumes offer a substantial cost savings on primary storage and minimize the need to scale your storage on-premises. You also retain low-latency access to your frequently accessed data.

For more information on the Storage gateway, please visit the following URL:
https://docs.aws.amazon.com/storagegateway/latest/userguide/WhatIsStorageGateway.html

285
Q

Question 682

A company is planning on moving their PostgreSQL database to AWS. They want to have the ability to have Replicas for the database and automated backups. Which of the following databases would be ideal for this scenario?

A. AWS Aurora

B. AWS PostgreSQL

C. AWS DynamoDB

D. AWS Redshift

A

Answer: A

The AWS Documentation mentions the following on Amazon Aurora: Amazon Aurora is a drop-in replacement for MySQL and PostgreSQL. The code, tools and applications you use today with your existing MySQL and PostgreSQL databases can be used with Amazon Aurora.

For more information on Amazon Aurora, please visit the following URL:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Aurora.Overview.html

286
Q

Question 683

You currently have a set of Lambda functions which have business logic embedded in them. You want customers to have the ability to call these functions via HTTP. How can this be achieved?

A. Use the API gateway and provide the integration with the AWS Lambda functions

B. Enable HTTP access on the AWS Lambda function

C. Add EC2 Instances with an API server installed. Integrate the server with AWS Lambda functions.

D. Use S3 websites to make calls to the Lambda functions

A

Answer: A

The API gateway provides the ideal access to your backend services via APIs.

For more information on the API gateway service, please visit the following URL:
https://docs.aws.amazon.com/apigateway/latest/developerguide/welcome.html

287
Q

Question 684

Users within a company need a place to store their documents. Each user must have their own location for placing their set of documents. Each user should not be able to view the other person’s documents. Also the users should be able to retrieve their documents easily. Which AWS service would be ideal for this requirement?

A. AWS Simple Storage Service

B. AWS Glacier

C. AWS Redshift

D. AWS RDS MySQL

A

Answer: A

The Simple Storage Service is the perfect place to store the documents. You can define buckets for each user and have policies which restrict access so that each user can only access their own files.

For more information on the S3 service, please visit the following URL:
https://aws.amazon.com/s3/

288
Q

Question 685

A Solutions Architect is designing a solution to store and archive corporate documents and has determined that Amazon Glacier is the right solution. Data can be retrieved within 3-5 hrs as directed by management. Which feature in Amazon Glacier can help meet this requirement and ensure COST effectiveness?

A. Vault Lock

B. Expedited retrieval

C. Bulk retrieval

D. Standard retrieval

A

Answer: D

The AWS Documentation mentions the following on Standard retrievals: Standard retrievals are a low-cost way to access your data within just a few hours. For example, you can use Standard retrievals to restore backup data, retrieve archived media content for same-day editing or distribution, or pull and analyze logs to drive business decisions within hours.

For more information on Amazon Glacier retrievals, please visit the following URL: https://aws.amazon.com/glacier/faqs/#dataretrievals

289
Q

Question 686

You currently have an EC2 instance hosting a web application. The number of users is expected to increase in the coming months, and hence you need to add more elasticity to your setup. Which of the following methods can help add elasticity to your existing setup? Choose 2 answers from the options given below

A. Setup your web app on more EC2 instances and set them behind an Elastic Load balancer

B. Setup an Elastic Cache in front of the EC2 instance.

C. Setup your web app on more EC2 instances and use Route53 to route requests accordingly.

D. Setup DynamoDB behind your EC2 Instances

A

Answer: A, C

The Elastic Load balancer can be used to distribute traffic to EC2 Instances. So to add elasticity to your setup, one can either do this or use Route53. In Route53, you can set up weighted routing policies to distribute requests to multiple EC2 Instances.

For more information on architecting for the cloud, please visit the following URL:
https://aws.amazon.com/whitepapers/architecting-for-the-aws-cloud-best-practices/

290
Q

Question 687

A company is hosting EC2 instances whose workloads are non-production, non-priority batch loads. Also, these processes can be interrupted at any time. What is the best pricing model which can be used for EC2 instances in this case?

A. Reserved Instances

B. On-Demand Instances

C. Spot Instances

D. Regular Instances

A

Answer: C

Spot instances enable you to bid on unused EC2 instances, which can lower your Amazon EC2 costs significantly. The hourly price for a Spot instance (of each instance type in each Availability Zone) is set by Amazon EC2, and fluctuates depending on the supply of and demand for Spot instances. Your Spot instance runs whenever your bid exceeds the current market price. Spot instances are a cost-effective choice if you can be flexible about when your applications run and if your applications can be interrupted. For example, Spot instances are well-suited for data analysis, batch jobs, background processing, and optional tasks

Option A is invalid because even though Reserved Instances can reduce costs, they are best for workloads that will be active for a longer period of time, rather than for batch processes which could last for a shorter period of time.

Option B is not right because On-Demand Instances tend to be more expensive than Spot Instances.

Option D is invalid because there is no concept of Regular instances in AWS

For more information on Spot Instances, please visit the below URL:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-spot-instances.html

291
Q

Question 688

A company wants to deploy docker containers to the AWS Cloud. They also want a highly scalable service which can help manage the orchestration of these containers. Which of the following would be ideal for such a requirement?

A. Use the Amazon Elastic Container Service for Kubernetes

B. Install a custom orchestration tool on EC2 Instances

C. Use SQS to orchestrate the messages between docker containers

D. Use AWS Lambda functions to embed the logic for container orchestration.

A

Answer: A

The AWS Documentation mentions the following: Amazon Elastic Container Service for Kubernetes (Amazon EKS) is a managed service that makes it easy for you to run Kubernetes on AWS without needing to install and operate your own Kubernetes clusters. Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications. Operating Kubernetes for production applications presents a number of challenges. You need to manage the scaling and availability of your Kubernetes masters and persistence layer by ensuring that you have chosen appropriate instance types, running them across multiple Availability Zones, monitoring their health, and replacing unhealthy nodes. You need to patch and upgrade your masters and worker nodes to ensure that you are running the latest version of Kubernetes. This all requires expertise and a lot of manual work. With Amazon EKS, upgrades and high availability are managed for you by AWS. Amazon EKS runs three Kubernetes masters across three Availability Zones in order to ensure high availability. Amazon EKS automatically detects and replaces unhealthy masters, and it provides automated version upgrades and patching for the masters.

For more information on the Elastic Container service, please visit the below URL:
https://aws.amazon.com/eks/

292
Q

Question 689

When using the following AWS services, which should be implemented in multiple Availability Zones for high availability solutions? Choose 2 answers from the options below

A. Amazon DynamoDB

B. Amazon Elastic Compute Cloud (EC2)

C. Amazon Elastic Load Balancing

D. Amazon Simple Storage Service (S3)

A

Answer: B, C

The snapshot from the AWS documentation shows how the ELB and EC2 instances are set up for high availability: the ELB is placed in front of the instances, and the instances are placed in different AZs.

For more information on the ELB, please visit the below URL: https://aws.amazon.com/elasticloadbalancing/

Option A is wrong because DynamoDB runs across Amazon’s proven, high-availability data centers. The service replicates data across three facilities in an AWS Region to provide fault tolerance in the event of a server failure or Availability Zone outage.

Option D is wrong because Amazon S3 Standard and Standard-IA redundantly store your objects on multiple devices across multiple facilities in an Amazon S3 Region. The service is designed to sustain concurrent device failures by quickly detecting and repairing any lost redundancy.

293
Q

Question 690

Your company’s management team has asked you to devise a strategy for disaster recovery for the current resources hosted in AWS. They want to minimize costs, but be able to spin up the Infrastructure when needed in another region. How could you accomplish this with the LEAST costs in mind?

A. Create a duplicate of the entire infrastructure in another region

B. Create a Pilot light infrastructure in another region

C. Use Elastic Beanstalk to create another copy of the infrastructure in another region if a disaster occurs in the primary region

D. Use Cloudformation to spin up resources in another region if a disaster occurs in the primary region

A

Answer: D

Since cost is a factor, both options A and B are invalid. The best and most cost-effective option is to create Cloudformation templates which can be used to spin up resources in another region in a disaster recovery scenario.

For more information on Cloudformation please visit the below URL:
https://aws.amazon.com/cloudformation/

294
Q

Question 691

You create an Autoscaling Group which is used to spin up instances on demand. As an architect, you need to ensure that the instances are pre-installed with software when they are launched. What are the ways in which you can achieve this? Choose 2 answers from the options given below.

A. Add the software installation to the configuration for the Autoscaling Group

B. Add the scripts for the installation in the User data section.

C. Create a golden image and then create a launch configuration.

D. Ask the IT operations team to install the software as soon as the instance is launched

A

Answer: B, C

The User data section of an Instance launch can be used to pre-configure software after the instance is initially booted.

For more information on UserData please visit the below URL:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html
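
A sketch combining both answers: a launch configuration whose user data installs software at boot. The AMI ID, names, and install commands are placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# User data script runs at first boot and pre-installs the software.
user_data = """#!/bin/bash
yum install -y httpd
systemctl enable --now httpd
"""

# "web-lc" and the AMI ID are hypothetical; a golden AMI with the software
# baked in could be referenced here instead, with no user data needed.
autoscaling.create_launch_configuration(
    LaunchConfigurationName="web-lc",
    ImageId="ami-0123456789abcdef0",
    InstanceType="t2.micro",
    UserData=user_data,
)
```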

Also, you can create an AMI or a golden image with the software already installed and then create a launch configuration which can be used by that Autoscaling Group.

For more information on AMI’s please visit the below URL:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html

295
Q

Question 692

You are building a stateless architecture for an application. This will consist of web servers and an Autoscaling Group. Which of the following would be an ideal storage mechanism for the session data?

A. AWS DynamoDB

B. AWS Redshift

C. AWS EBS Volumes

D. AWS S3

A

Answer: A

The diagram from the AWS Documentation shows what the stateless architecture would look like.

For more information on architecting for the cloud, please visit the below URL:
https://aws.amazon.com/whitepapers/architecting-for-the-aws-cloud-best-practices/

296
Q

Question 693

You have a set of IIS Servers running on EC2 Instances. You want to collect and process the log files generated from the IIS Servers. Which of the below services is ideal to use in this scenario?

A. Amazon S3 for storing the log files and Amazon EMR for processing the log files

B. Amazon S3 for storing the log files and EC2 Instances for processing the log files

C. Amazon EC2 for storing and processing the log files

D. Amazon DynamoDB to store the logs and EC2 for running custom log analysis scripts

A

Answer: A

Amazon EMR is a managed cluster platform that simplifies running big data frameworks, such as Apache Hadoop and Apache Spark, on AWS to process and analyze vast amounts of data. By using these frameworks and related open-source projects, such as Apache Hive and Apache Pig, you can process data for analytics purposes and business intelligence workloads. Additionally, you can use Amazon EMR to transform and move large amounts of data into and out of other AWS data stores and databases, such as Amazon Simple Storage Service (Amazon S3) and Amazon DynamoDB.

Options B and C, even though partially correct, would be an overhead for EC2 Instances to process the log files when you already have a ready-made service which can help in this regard.

Option D is invalid because DynamoDB is not an ideal option to store log files.

For more information on EMR, please visit the below URL:
http://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-what-is-emr.html

297
Q

Question 694

You need to ensure that objects in an S3 bucket are available in another region. This is because of the criticality of the data that is hosted in the S3 bucket. How can you achieve this in the easiest way possible?

A. Enable cross region replication for the bucket

B. Write a script to copy the objects to another bucket in the destination region

C. Create an S3 snapshot in the destination region

D. Enable versioning which will copy the objects to the destination region

A

Answer: A

The AWS Documentation mentions the following: Cross-region replication is a bucket-level configuration that enables automatic, asynchronous copying of objects across buckets in different AWS Regions.
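
A minimal sketch of enabling cross-region replication (versioning must already be enabled on both buckets; the bucket names and IAM role ARN are hypothetical placeholders):

```python
import boto3

s3 = boto3.client("s3")

# Replicate every new object in "source-bucket" to "dest-bucket".
# Both buckets must have versioning enabled; the role ARN is hypothetical.
s3.put_bucket_replication(
    Bucket="source-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111122223333:role/s3-crr-role",
        "Rules": [
            {
                "ID": "replicate-all",
                "Prefix": "",
                "Status": "Enabled",
                "Destination": {"Bucket": "arn:aws:s3:::dest-bucket"},
            }
        ],
    },
)
```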

For more information on Cross region replication in the Simple Storage Service, please visit the below URL:
https://docs.aws.amazon.com/AmazonS3/latest/dev/crr.html

298
Q

Question 695

You want to build and deploy code functions in the AWS Cloud, but don’t want to manage the infrastructure. Which of the following services can meet this requirement?

A. AWS EC2

B. AWS API Gateway

C. AWS Lambda

D. AWS DynamoDB

A

Answer: C

The AWS Documentation mentions the following: AWS Lambda is a compute service that lets you run code without provisioning or managing servers. AWS Lambda executes your code only when needed and scales automatically, from a few requests per day to thousands per second. You pay only for the compute time you consume - there is no charge when your code is not running. With AWS Lambda, you can run code for virtually any type of application or backend service - all with zero administration.

For more information on AWS Lambda, please visit the below URL:
https://docs.aws.amazon.com/lambda/latest/dg/welcome.html

299
Q

Question 696

A storage solution is required in AWS to store videos uploaded by the user. After a period of a month, these videos can be deleted. How should this be implemented in a cost effective manner?

A. Use EBS Volumes to store the videos. Create script to delete the videos after a month

B. Store the videos in S3 and then use Lifecycle policies

C. Store the videos in Amazon Glacier and then use Lifecycle policies

D. Store the videos using Stored Volumes. Create script to delete the videos after a month

A

Answer: C

The AWS Documentation mentions the following on lifecycle policies: Lifecycle configuration enables you to specify the lifecycle management of objects in a bucket. The configuration is a set of one or more rules, where each rule defines an action for Amazon S3 to apply to a group of objects. These actions can be classified as follows:

  • Transition actions — In which you define when objects transition to another storage class. For example, you may choose to transition objects to the STANDARD_IA (IA, for infrequent access) storage class 30 days after creation, or archive objects to the GLACIER storage class one year after creation.
  • Expiration actions — In which you specify when the objects expire. Then Amazon S3 deletes the expired objects on your behalf.
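
As a sketch of the expiration action described above (the bucket name and prefix are hypothetical), a rule that deletes uploaded videos 30 days after creation:

```python
import boto3

s3 = boto3.client("s3")

# Expire (delete) objects under the videos/ prefix 30 days after creation.
# "media-bucket" and the prefix are placeholder values.
s3.put_bucket_lifecycle_configuration(
    Bucket="media-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-videos",
                "Filter": {"Prefix": "videos/"},
                "Status": "Enabled",
                "Expiration": {"Days": 30},
            }
        ]
    },
)
```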

For more information on AWS S3 Lifecycle policies, please visit the following URL:
https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html

300
Q

Question 697

You want to ensure that you keep a check on the Active EBS Volumes, Active snapshots and Elastic IP addresses you use so that you don’t go beyond the service limit. Which of the below services can help in this regard?

A. AWS Cloudwatch

B. AWS EC2

C. AWS Trusted Advisor

D. AWS SNS

A

Answer: C

A snapshot of the service limits that Trusted Advisor can monitor can be found in the AWS documentation.

Option A is invalid because even though you can monitor resources, they cannot be checked against the service limits.

Option B is invalid because this is the Elastic Compute Cloud service itself. Option D is invalid because it can send notifications but not check on service limits.

For more information on the Trusted Advisor monitoring, please visit the below URL:
https://aws.amazon.com/premiumsupport/ta-faqs/

301
Q

Question 698

You have an EC2 Instance in a particular region. This EC2 Instance has preconfigured software running on it. You have been requested to create a disaster recovery solution in case the instance in the region fails. Which of the following is the best solution?

A. Create a duplicate EC2 Instance in another AZ. Keep it in the shutdown state. When required, bring it back up.

B. Backup the EBS data volume. If the instance fails, bring up a new EC2 instance and attach the volume.

C. Store the EC2 data on S3. If the instance fails, bring up a new EC2 instance and restore the data from S3.

D. Create an AMI of the EC2 Instance and copy it to another region

A

Answer: D

You can copy an Amazon Machine Image (AMI) within or across an AWS region using the AWS Management Console, the AWS command line tools or SDKs, or the Amazon EC2 API, all of which support the CopyImage action. You can copy both Amazon EBS-backed AMIs and instance store-backed AMIs. You can copy AMIs with encrypted snapshots and encrypted AMIs. Copying a source AMI results in an identical but distinct target AMI with its own unique identifier. In the case of an Amazon EBS-backed AMI, each of its backing snapshots is, by default, copied to an identical but distinct target snapshot.

Option A is invalid because it is a maintenance overhead to maintain another non-running instance.

Option B is invalid because the pre-configured software could have settings on the root volume.

Option C is invalid because this is a long and inefficient way to restore a failed instance.
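
A minimal sketch of copying an AMI to another region (the image ID, name, and regions are hypothetical placeholders; copy_image is called in the destination region):

```python
import boto3

# Copy the AMI from us-east-1 into the DR region (us-west-2 here).
# The image ID and name are hypothetical.
ec2_dr = boto3.client("ec2", region_name="us-west-2")
copy = ec2_dr.copy_image(
    Name="dr-copy-of-app-server",
    SourceImageId="ami-0123456789abcdef0",
    SourceRegion="us-east-1",
)
print(copy["ImageId"])
```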

For more information on Copying AMI’s, please visit the below URL:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/CopyingAMIs.html

302
Q

Question 699

You are working in the media industry and you have created a web application where users will be able to upload photos they create to your website. This web application must be able to call the S3 API in order to function. Where should you store your API credentials whilst maintaining the maximum level of security?

A. Save the API credentials to your PHP files.

B. Don’t save your API credentials. Instead create a role in IAM and assign this role to an EC2 instance when you first create it.

C. Save your API credentials in a public Github repository.

D. Pass API credentials to the instance using instance userdata.

A

Answer: B

Applications must sign their API requests with AWS credentials. Therefore, if you are an application developer, you need a strategy for managing credentials for your applications that run on EC2 instances. For example, you can securely distribute your AWS credentials to the instances, enabling the applications on those instances to use your credentials to sign requests, while protecting your credentials from other users. However, it’s challenging to securely distribute credentials to each instance, especially those that AWS creates on your behalf, such as Spot Instances or instances in Auto Scaling groups. You must also be able to update the credentials on each instance when you rotate your AWS credentials. IAM roles are designed so that your applications can securely make API requests from your instances, without requiring you to manage the security credentials that the applications use.

For more information on IAM Roles, please visit the below URL:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html

303
Q

Question 700

You need to ensure that data stored in S3 is encrypted, but you don’t want to manage the encryption keys. Which of the following encryption mechanisms can be used in such a case?

A. SSE-S3

B. SSE-C

C. SSE-KMS

D. SSE-SSL

A

Answer: A

The AWS Documentation mentions the following on encryption keys:

  • SSE-S3 requires that Amazon S3 manage the data and master encryption keys.
  • SSE-C requires that you manage the encryption key.
  • SSE-KMS requires that AWS manage the data key but you manage the master key in AWS KMS.
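
A minimal sketch of requesting SSE-S3 on upload, so Amazon manages all keys (the bucket and key names are placeholder values):

```python
import boto3

s3 = boto3.client("s3")

# Ask S3 to encrypt the object with S3-managed keys (SSE-S3 / AES256).
# "example-bucket" and the object key are hypothetical values.
s3.put_object(
    Bucket="example-bucket",
    Key="reports/2018/summary.csv",
    Body=b"col1,col2\n1,2\n",
    ServerSideEncryption="AES256",
)
```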

For more information on using the Key Management service for S3, please visit the below URL: https://docs.aws.amazon.com/kms/latest/developerguide/services-s3.html

304
Q

Question 701

An organization is managing a Redshift Cluster in AWS. They need to monitor the performance of the cluster to ensure that it is performing as efficiently as possible. Which of the following services can be used to achieve this requirement?

A. Cloudtrail

B. VPC Flow Logs

C. Cloudwatch

D. AWS Trusted Advisor

A

Answer: C

The AWS Documentation mentions the following on monitoring Redshift Clusters: Amazon CloudWatch metrics help you monitor physical aspects of your cluster, such as CPU utilization, latency, and throughput. Metric data is displayed directly in the Amazon Redshift console. You can also view it in the Amazon CloudWatch console, or you can consume it in any other way you work with metrics, such as with the Amazon CloudWatch Command Line Interface (CLI) or one of the AWS Software Development Kits (SDKs).

For more information on monitoring Redshift please visit the below URL:
https://docs.aws.amazon.com/redshift/latest/mgmt/metrics.html

305
Q

Question 702

Your company currently has an S3 bucket in AWS. The objects in S3 are accessed quite frequently. Which of the following is an implementation step that can be considered to reduce the cost of accessing contents from the S3 bucket?

A. Place the S3 bucket behind a Cloudfront distribution

B. Enable versioning on the S3 bucket

C. Enable encryption on the S3 bucket

D. Place the S3 bucket behind an API gateway

A

Answer: A

The AWS Documentation mentions the following: Using CloudFront can be more cost effective if your users access your objects frequently because, at higher usage, the price for CloudFront data transfer is lower than the price for Amazon S3 data transfer.
In addition, downloads are faster with CloudFront than with Amazon S3 alone because your objects are stored closer to your users.

For more information on using Cloudfront with S3 please visit the below URL:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/MigrateS3ToCloudFront.html

306
Q

Question 703

You have an application in which users can subscribe to a service using their email ID, and they should be able to receive messages published by the service. This all needs to be done using AWS components. Which of the below would be a probable service included in this architecture?

A. AWS SNS

B. AWS Config

C. AWS S3

D. AWS Glacier

A

Answer: A

The AWS Documentation mentions the following: Amazon Simple Notification Service (Amazon SNS) is a web service that coordinates and manages the delivery or sending of messages to subscribing endpoints or clients. In Amazon SNS, there are two types of clients—publishers and subscribers—also referred to as producers and consumers. Publishers communicate asynchronously with subscribers by producing and sending a message to a topic, which is a logical access point and communication channel. Subscribers (i.e., web servers, email addresses, Amazon SQS queues, AWS Lambda functions) consume or receive the message or notification over one of the supported protocols (i.e., Amazon SQS, HTTP/S, email, SMS, Lambda) when they are subscribed to the topic.
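
A minimal sketch of the email-subscription flow (the topic name and email address are placeholders; the subscriber must confirm the email before delivery starts):

```python
import boto3

sns = boto3.client("sns")

# Create a topic, subscribe an email endpoint, and publish a message.
# The topic name and address are hypothetical.
topic = sns.create_topic(Name="service-updates")
sns.subscribe(
    TopicArn=topic["TopicArn"],
    Protocol="email",
    Endpoint="user@example.com",
)
sns.publish(
    TopicArn=topic["TopicArn"],
    Subject="New update",
    Message="A new version of the service is available.",
)
```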

For more information on the Simple Notification service, please visit the below URL: https://docs.aws.amazon.com/sns/latest/dg/welcome.html

307
Q

Question 704

You are using IoT sensors to monitor the number of bags that are handled at an airport. The data gets sent back to a Kinesis stream with default settings. Every alternate day, the data from the stream is sent to S3 for processing. But you notice that S3 is not receiving all of the data that is being sent to the Kinesis stream. What could be the reason for this?

A. The sensors probably stopped working on some days hence data is not sent to the stream.

B. S3 can only store data for a day

C. Data records are only accessible for a default of 24 hours from the time they are added to a stream

D. Kinesis streams are not meant to handle IoT related data

A

Answer: C

Kinesis Streams supports changes to the data record retention period of your stream. A Kinesis stream is an ordered sequence of data records meant to be written to and read from in real time. Data records are therefore stored in shards in your stream temporarily. The time period from when a record is added to when it is no longer accessible is called the retention period. A Kinesis stream stores records for 24 hours by default, up to 168 hours.
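
A minimal sketch of extending the retention period so the every-other-day export stops missing records (the stream name is a hypothetical placeholder):

```python
import boto3

kinesis = boto3.client("kinesis")

# Raise retention from the default 24 hours to 72 hours so records
# survive until the every-other-day export runs. Name is hypothetical.
kinesis.increase_stream_retention_period(
    StreamName="baggage-events",
    RetentionPeriodHours=72,
)
```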

Option A, even though a possibility, cannot be taken for granted as the right option.

Option B is invalid since S3 can store data indefinitely unless you have a lifecycle policy defined.

Option D is invalid because the Kinesis service is perfect for this sort of data ingestion

For more information on Kinesis data retention, please refer to the below URL:
http: //docs.aws.amazon.com/streams/latest/dev/kinesis-extended-retention.html

308
Q

Question 705

A company needs to have a columnar database due to the underlying analytic query performance that can be achieved. Which of the following can meet this requirement for a database?

A. Amazon Redshift

B. Amazon RDS

C. ElastiCache

D. DynamoDB

A

Answer: A

The AWS Documentation mentions the following: Amazon Redshift is a column-oriented, fully managed, petabyte-scale data warehouse that makes it simple and cost-effective to analyze all your data using your existing business intelligence tools. Amazon Redshift achieves efficient storage and optimum query performance through a combination of massively parallel processing, columnar data storage, and very efficient, targeted data compression encoding schemes.

For more information on columnar database in AWS, please refer to the below URL:
https://aws.amazon.com/nosql/columnar/

309
Q

Question 706

There is a requirement to host a database on an EC2 Instance. There is a requirement for the EBS volume to support a high rate of IOPS since there are going to be a large number of read and write requests on the database. Which Amazon EBS volume type can meet the performance requirements of this database?

A. EBS Provisioned IOPS SSD

B. EBS Throughput Optimized HDD

C. EBS General Purpose SSD

D. EBS Cold HDD

A

Answer: A

Since there is a high performance requirement with high IOPS needed, one needs to opt for EBS Provisioned IOPS SSD. The snapshot from the AWS Documentation mentions the need to use Provisioned IOPS for better IOPS performance in database-based applications.
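
A minimal sketch of provisioning such a volume (the size, IOPS, and Availability Zone are placeholder values; io1 was the Provisioned IOPS SSD type at the time):

```python
import boto3

ec2 = boto3.client("ec2")

# Create a Provisioned IOPS SSD (io1) volume sized for the database.
# Size, IOPS, and Availability Zone are hypothetical values.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=500,            # GiB
    VolumeType="io1",
    Iops=10000,
)
print(volume["VolumeId"])
```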

For more information on AWS EBS Volume types, please visit the following URL:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html

310
Q

Question 707

You have a requirement to deploy an existing Java based application to AWS. There is a need for automatic scaling of the underlying environment. Which of the following can be used to deploy this environment in the quickest way possible?

A. Deploy to an S3 bucket and enable web site hosting.

B. Use the Elastic beanstalk service to provision the environment.

C. Use EC2 with Autoscaling for the environment

D. Use AMI’s to build EC2 instances for deployment.

A

Answer: B

The AWS Documentation mentions the following: AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS. You can simply upload your code and Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, and auto-scaling to application health monitoring. At the same time, you retain full control over the AWS resources powering your application and can access the underlying resources at any time.

For more information on the Elastic beanstalk service, please visit the following URL:
https://aws.amazon.com/elasticbeanstalk/

311
Q

Question 708

There is a requirement to upload a million files to S3. Which of the following can be used to ensure optimal performance?

A. Use a date for the prefix

B. Use a hexadecimal hash for the prefix

C. Use a date for the suffix

D. Use a sequential ID for the suffix

A

Answer: B

This recommendation for increasing performance when you have a high request rate in S3 is given in the AWS Documentation.

For more information on S3 performance considerations, please visit the following URL:
https://docs.aws.amazon.com/AmazonS3/latest/dev/request-rate-perf-considerations.html

312
Q

Question 709

You want to build a decoupled, highly available and fault tolerant architecture for your application in AWS. You decide to use EC2, the Classic Load balancer, Autoscaling and Route53. Which of the following is an additional service you should involve in this architecture?

A. AWS SNS

B. AWS SQS

C. AWS API Gateway

D. AWS Config

A

Answer: B

The Simple Queue service can be used to build a decoupled architecture. The AWS Documentation further mentions the following: Amazon Simple Queue Service (SQS) is a fully managed message queuing service that makes it easy to decouple and scale microservices, distributed systems, and serverless applications. Building applications from individual components that each perform a discrete function improves scalability and reliability, and is best practice design for modern applications.

For more information on the Simple Queue Service, please visit the following URL:
https://aws.amazon.com/sqs/

313
Q

Question 710

You have been tasked with architecting an application in AWS. The architecture would consist of EC2, the Classic Load balancer, Autoscaling and Route53. There is a directive to ensure that Blue Green deployments are possible in this architecture. Which routing policy could you ideally use in Route53 for achieving Blue Green deployments?

A. Simple

B. Multivalue answer

C. Latency

D. Weighted

A

Answer: D

The AWS Documentation mentions that the weighted routing policy is good for testing new versions of software, and this is the ideal approach for Blue Green deployments. Weighted routing lets you associate multiple resources with a single domain name (example.com) or subdomain name (acme.example.com) and choose how much traffic is routed to each resource. This can be useful for a variety of purposes, including load balancing and testing new versions of software.
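
A sketch of shifting 10% of traffic to a green environment with weighted records (the hosted zone ID, domain name, and IP addresses are hypothetical placeholders):

```python
import boto3

route53 = boto3.client("route53")

def weighted_record(identifier, ip, weight):
    """Build one weighted A record for the same name."""
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com",
            "Type": "A",
            "SetIdentifier": identifier,
            "Weight": weight,
            "TTL": 60,
            "ResourceRecords": [{"Value": ip}],
        },
    }

# 90% of queries answer with blue, 10% with green. All values hypothetical.
route53.change_resource_record_sets(
    HostedZoneId="Z1234567890ABC",
    ChangeBatch={
        "Changes": [
            weighted_record("blue", "203.0.113.10", 90),
            weighted_record("green", "203.0.113.20", 10),
        ]
    },
)
```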

For more information on Route53 routing policies, please visit the following URL:
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html

314
Q

Question 711

A company is planning to deploy an application in AWS. The application requires an EC2 Instance to continuously perform log processing activities, which require at least 500 MiB/s of throughput. Which of the following is the best storage option for this?

A. EBS IOPS

B. EBS SSD

C. EBS Throughput Optimized

D. EBS Cold Storage

A

Answer: C

When you are considering storage volume types for batch processing activities with large throughput, consider using the EBS Throughput Optimized volume type. This is also mentioned in the AWS Documentation.

For more information on EBS volume types, please visit the following URL:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html

315
Q

Question 712

You have a need to connect 2 VPCs in different accounts. How could this be achieved?

A. Use the Security Groups to do the mapping of both VPCs

B. Use the VPC Route tables to do the mapping of both VPCs

C. Use Consolidated billing to connect both accounts.

D. Use VPC peering to connect both VPCs

A

Answer: D

The AWS Documentation mentions the following about VPC Peering: A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them privately. Instances in either VPC can communicate with each other as if they are within the same network. You can create a VPC peering connection between your own VPCs, with a VPC in another AWS account, or with a VPC in a different AWS Region.
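
A rough sketch of the cross-account flow: the requester creates the peering connection and the owner of the other account accepts it (all VPC IDs and the account ID are hypothetical placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Request a peering connection to a VPC in another account.
# VPC IDs and the peer account ID are hypothetical.
peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-1111aaaa",
    PeerVpcId="vpc-2222bbbb",
    PeerOwnerId="222233334444",
)
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# The peer account then accepts the request (run with their credentials):
# ec2_peer.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)
# Finally, each side adds routes to the other VPC's CIDR via pcx_id.
```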

For more information on VPC Peering, please visit the following URL: https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-peering.html

316
Q

Question 713

You need to ensure that instances in a private subnet can access the Internet. The solution should be highly available and ensure less maintenance overhead. Which of the following would ideally fit this requirement?

A. Host the NAT instance in the private subnet

B. Host the NAT instance in the public subnet

C. Use the NAT gateway in the private subnet

D. Use the NAT gateway in the public subnet

A

Answer: D

If you look at the comparison of the NAT gateway and NAT instances in the AWS Documentation, you can see that the NAT gateway is highly available and requires less management.

For more information on the comparison, please visit the following URL: https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-nat-comparison.html

317
Q

Question 714

You need to have a data storage layer in AWS. Following are the key requirements

a) Storage of JSON documents
b) Availability of Indexes
c) Automatic scaling

Which would be the ideal storage layer for this?

A. AWS DynamoDB

B. AWS EBS Volumes

C. AWS S3

D. AWS Glacier

A

Answer: A

The AWS Documentation mentions the following Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. DynamoDB enables customers to offload the administrative burdens of operating and scaling distributed databases to AWS so that they don’t have to worry about hardware provisioning, setup and configuration, throughput capacity planning, replication, software patching, or cluster scaling.

For more information on DynamoDB please visit the following URL:
https://aws.amazon.com/dynamodb/faqs/

318
Q

Question 715

You have a set of Docker images that you use for building containers. You want to start using the Elastic Container Service and utilize these Docker images. You need a place to store these Docker images. Which of the following can be used for this purpose?

A. Use AWS DynamoDB to store the docker Images

B. Use AWS RDS to store the docker Images

C. Use EC2 Instances with EBS Volumes to store the docker Images

D. Use the ECR service to store the docker Images

A

Answer: D

The AWS Documentation mentions the following: Amazon Elastic Container Registry (ECR) is a fully-managed Docker container registry that makes it easy for developers to store, manage, and deploy Docker container images. Amazon ECR is integrated with Amazon Elastic Container Service (ECS), simplifying your development to production workflow.

For more information on the Elastic container service please visit the following URL:
https://aws.amazon.com/ecr/?nc2=h_m1

319
Q

Question 716

You need to start using resources in AWS to build a big data processing system. Which of the following is a service that you would ideally use for this requirement?

A. AWS DynamoDB

B. AWS EMR

C. AWS ECS

D. AWS ECR

A

Answer: B

The AWS Documentation mentions the following: Amazon EMR provides a managed Hadoop framework that makes it easy, fast, and cost-effective to process vast amounts of data across dynamically scalable Amazon EC2 instances. You can also run other popular distributed frameworks such as Apache Spark, HBase, Presto, and Flink in Amazon EMR, and interact with data in other AWS data stores such as Amazon S3 and Amazon DynamoDB. Amazon EMR securely and reliably handles a broad set of big data use cases, including log analysis, web indexing, data transformations (ETL), machine learning, financial analysis, scientific simulation, and bioinformatics.

For more information on the EMR service please visit the following URL:
https://aws.amazon.com/emr/?nc2=h_m1

320
Q

Question 717

Your company asked you to create a mobile application. The application is built to work with DynamoDB as the backend and JavaScript as the frontend. During usage of the application you notice that there are spikes in traffic, especially on the DynamoDB side. Which option provides the most cost effective and scalable architecture for this application? Choose an answer from the options below.

A. Auto scale DynamoDB to meet the requirements

B. Increase write capacity of DynamoDB tables to meet the peak loads

C. Create a service that pulls SQS messages and writes these to DynamoDB to handle sudden spikes in dynamoDB

D. Launch DynamoDB in Multi-AZ configuration with a global index to balance writes

A

Answer: C

When scalability is the concern, SQS is the best option. DynamoDB is normally scalable on its own, but since a cost effective solution is required, the messaging in SQS can help manage the spikes mentioned in the question.

Amazon Simple Queue Service (SQS) is a fully-managed message queuing service for reliably communicating among distributed software components and microservices - at any scale. Building applications from individual components that each perform a discrete function improves scalability and reliability, and is best practice design for modern applications. SQS makes it simple and cost-effective to decouple and coordinate the components of a cloud application. Using SQS, you can send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be always available.

For more information on SQS, please refer to the URL:
https://aws.amazon.com/sqs/

321
Q

Question 718

You are building a large-scale confidential documentation web server on AWS and all of the documentation for it will be stored on S3. One of the requirements is that it cannot be publicly accessible from S3 directly, and you will need to use CloudFront to accomplish this. Which of the methods listed below would satisfy the requirements as outlined? Choose an answer from the options below

A. Create an Identity and Access Management (IAM) user for CloudFront and grant access to the objects in your S3 bucket to that IAM user.

B. Create an Origin Access Identity (OAI) for CloudFront and grant access to the objects in your S3 bucket to that OAI.

C. Create individual policies for each bucket the documents are stored in and in that policy grant access to only CloudFront.

D. Create an S3 bucket policy that lists the CloudFront distribution ID as the Principal and the target bucket as the Amazon Resource Name (ARN).

A

Answer: B

If you want to use CloudFront signed URLs or signed cookies to provide access to objects in your Amazon S3 bucket, you probably also want to prevent users from accessing your Amazon S3 objects using Amazon S3 URLs. If users access your objects directly in Amazon S3, they bypass the controls provided by CloudFront signed URLs or signed cookies, for example, control over the date and time that a user can no longer access your content and control over which IP addresses can be used to access content. In addition, if users access objects both through CloudFront and directly by using Amazon S3 URLs, CloudFront access logs are less useful because they’re incomplete.

For more information on Origin Access Identity please see the below link:
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html

322
Q

Question 719

Your company is planning on hosting their development, test and production applications on EC2 Instances in AWS. They are worried on how access control can be given to relevant IT Admins for the respective environments. As an architect, what can you suggest for managing the relevant access?

A. Add tags to the instances marking each environment and then segregate access using IAM policies.

B. Add Userdata to the underlying instances to mark each environment

C. Add Metadata to the underlying instances to mark each environment

D. Add each environment to a separate Autoscaling Group

A

Answer: A

The AWS Documentation mentions the following, which helps support this requirement: Tags enable you to categorize your AWS resources in different ways, for example, by purpose, owner, or environment. This is useful when you have many resources of the same type; you can quickly identify a specific resource based on the tags you’ve assigned to it. Each tag consists of a key and an optional value, both of which you define. For example, you could define a set of tags for your account’s Amazon EC2 instances that helps you track each instance’s owner and stack level. We recommend that you devise a set of tag keys that meets your needs for each resource type. Using a consistent set of tag keys makes it easier for you to manage your resources. You can search and filter the resources based on the tags you add.

For more information on using tags, please see the below link:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html
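
As an illustration only, here is a minimal boto3 sketch of tagging an instance for such a scheme; the instance ID and tag values are placeholders:

  import boto3

  ec2 = boto3.client("ec2")

  # Tag the instance so IAM policies can match on ec2:ResourceTag/Environment
  ec2.create_tags(
      Resources=["i-0123456789abcdef0"],  # placeholder instance ID
      Tags=[{"Key": "Environment", "Value": "production"}],
  )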

323
Q

Question 720

You want to set up a public website on AWS. The things that you require are as follows:

  • You want the database and the application server running on AWS VPC.
  • You want the database to be able to connect to the Internet, specifically for any patch upgrades.
  • You do not want to receive any incoming requests from the Internet to the database.

Which of the following solutions would be the best to satisfy all the above requirements for your planned public website on AWS? Choose the correct answer from the options below

A. Set up the database in a private subnet with a security group which only allows outbound traffic.

B. Set up the database in a public subnet with a security group which only allows inbound traffic.

C. Set up the database in a local data center and use a private gateway to connect the application to the database.

D. Set up the public website on a public subnet and set up the database in a private subnet which connects to the Internet via a NAT instance.

A

Answer: D

The VPC scenario diagram in the AWS documentation showcases this architecture: the web server sits in a public subnet, while the database sits in a private subnet that reaches the Internet through a NAT instance.

For more information on the VPC Scenario for public and private subnets please see the below link:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario2.html

324
Q

Question 721

A company has a Redshift cluster for petabyte-scale data warehousing. The data within the cluster is easily reproducible from additional data stored on Amazon S3. The company wants to reduce the overall total cost of running this Redshift cluster. Which scenario would best meet the needs of the running cluster, while still reducing total overall cost of ownership of the cluster? Choose the correct answer from the options below

A. Instead of implementing automatic daily backups, write a CLI script that creates manual snapshots every few days. Copy the manual snapshot to a secondary AWS region for disaster recovery situations.

B. Enable automated snapshots but set the retention period to a lower number to reduce storage costs

C. Implement daily backups, but do not enable multi-region copy to save data transfer costs.

D. Disable automated and manual snapshots on the cluster

A

Answer: D

Snapshots are point-in-time backups of a cluster. There are two types of snapshots: automated and manual. Amazon Redshift stores these snapshots internally in Amazon S3 by using an encrypted Secure Sockets Layer (SSL) connection. If you need to restore from a snapshot, Amazon Redshift creates a new cluster and imports data from the snapshot that you specify. Since the question already mentions that the cluster is easily reproducible from additional data stored on Amazon S3, you don’t need to maintain any sort of snapshots.

For more information on Redshift snapshots, please visit the below URL:
http://docs.aws.amazon.com/redshift/latest/mgmt/working-with-snapshots.html

325
Q

Question 722

You have the following application to be set up in AWS

a) A web tier hosted on EC2 Instances
b) Session data to be written to DynamoDB
c) Log files to be written to Microsoft SQL Server

How can you allow an application to write data to a DynamoDB table?

A. Add an IAM user to a running EC2 instance.

B. Add an IAM user that allows write access to the DynamoDB table.

C. Create an IAM role that allows read access to the DynamoDB table.

D. Create an IAM role that allows write access to the DynamoDB table.

A

Answer: D

IAM roles are designed so that your applications can securely make API requests from your instances, without requiring you to manage the security credentials that the applications use. Instead of creating and distributing your AWS credentials, you delegate permission to make API requests using IAM roles.

For more information on IAM Roles please refer to the below link:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
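
As a rough sketch of what such a role could look like, here is boto3 code creating a role that EC2 can assume, with an inline policy granting write access to one table; the role name, table ARN, and account ID are placeholders:

  import json
  import boto3

  iam = boto3.client("iam")

  # Trust policy letting EC2 instances assume the role
  trust = {
      "Version": "2012-10-17",
      "Statement": [{
          "Effect": "Allow",
          "Principal": {"Service": "ec2.amazonaws.com"},
          "Action": "sts:AssumeRole",
      }],
  }
  iam.create_role(RoleName="app-dynamodb-writer",
                  AssumeRolePolicyDocument=json.dumps(trust))

  # Inline policy granting write access to a single table
  policy = {
      "Version": "2012-10-17",
      "Statement": [{
          "Effect": "Allow",
          "Action": ["dynamodb:PutItem", "dynamodb:UpdateItem",
                     "dynamodb:BatchWriteItem"],
          "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/sessions",
      }],
  }
  iam.put_role_policy(RoleName="app-dynamodb-writer",
                      PolicyName="dynamodb-write",
                      PolicyDocument=json.dumps(policy))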

326
Q

Question 723

You are doing a load testing exercise on your application hosted on AWS. While testing your Amazon RDS MySQL DB instance, you notice that when you hit 100% CPU utilization on it, your application becomes non-responsive. Your application is read-heavy. What are methods to scale your data tier to meet the application’s needs? Choose three answers from the options given below

A. Add Amazon RDS DB read replicas, and have your application direct read queries to them.

B. Add your Amazon RDS DB instance to an Auto Scaling group and configure your CloudWatch metric based on CPU utilization.

C. Use an Amazon SQS queue to throttle data going to the Amazon RDS DB instance.

D. Use ElastiCache in front of your Amazon RDS DB to cache common queries.

E. Shard your data set among multiple Amazon RDS DB instances.

F. Enable Multi-AZ for your Amazon RDS DB instance.

A

Answer: A, D, E

Amazon RDS Read Replicas provide enhanced performance and durability for database (DB) instances. This replication feature makes it easy to elastically scale out beyond the capacity constraints of a single DB Instance for read-heavy database workloads. You can create one or more replicas of a given source DB Instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput.

For more information on Read Replicas please refer to the below link:
https://aws.amazon.com/rds/details/read-replicas/

Sharding is a common concept to split data across multiple tables in a database.

For more information on sharding please refer to the below link:
https://forums.aws.amazon.com/thread.jspa?messageID=203052

Amazon ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory data store or cache in the cloud. The service improves the performance of web applications by allowing you to retrieve information from fast, managed, in-memory data stores, instead of relying entirely on slower disk-based databases.

For more information on ElastiCache please refer to the below link:

https://aws.amazon.com/elasticache/

Option B is not an ideal way to scale a database.

Option C is not ideal for storing the data which would go into a database, because of the message size.

Option F is invalid because the Multi-AZ feature is only a failover option.
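
As a small sketch, creating a read replica of an existing RDS instance with boto3 looks roughly like this; the instance identifiers are placeholders:

  import boto3

  rds = boto3.client("rds")

  # Create a read replica; the application then directs read queries
  # to the replica's endpoint instead of the primary
  rds.create_db_instance_read_replica(
      DBInstanceIdentifier="mydb-replica-1",  # placeholder replica name
      SourceDBInstanceIdentifier="mydb",      # placeholder source instance
  )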

327
Q

Question 724

You work for a very large company that has multiple applications which are very different and built on different programming languages. How can you deploy applications as quickly as possible?

A. Develop each app in one Docker container and deploy using ElasticBeanstalk

B. Create a Lambda function deployment package consisting of code and any dependencies

C. Develop each app in a separate Docker container and deploy using Elastic Beanstalk

D. Develop each app in a separate Docker containers and deploy using CloudFormation

A

Answer: C

Elastic Beanstalk supports the deployment of web applications from Docker containers. With Docker containers, you can define your own runtime environment. You can choose your own platform, programming language, and any application dependencies (such as package managers or tools) that aren’t supported by other platforms. Docker containers are self-contained and include all the configuration information and software your web application requires to run.

Option A is not an efficient way to use Docker; the entire idea of Docker is that each application gets its own separate environment.

Option B is not ideal because Lambda deployment packages are for running code, not for packaging entire applications with their dependencies.

Option D is not ideal because CloudFormation is not the deployment mechanism for Docker containers here; Elastic Beanstalk is.

For more information on Docker and Elastic Beanstalk, please visit the below URL:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker.html

328
Q

Question 725

You are designing a system which needs, at minimum, 8 m4.large instances operating to service traffic. When designing a system for high availability in the us-east-1 region, which has 6 Availability Zones, your company needs to be able to handle the loss of a full Availability Zone. How should you distribute the servers to save as much cost as possible, assuming all of the EC2 nodes are properly linked to an ELB? Your VPC account can utilize us-east-1’s AZs a through f, inclusive.

A. 3 servers in each of AZs a through d, inclusive.

B. 8 servers in each of AZs a and b.

C. 2 servers in each of AZs a through e, inclusive.

D. 4 servers in each of AZs a through c, inclusive.

A

Answer: C

The best way is to distribute the instances across multiple AZs to avoid a disaster scenario. With Option C, you deploy 10 servers across 5 AZs, so you still have the required minimum of 8 servers even if one AZ goes down.

Options A and D also survive the loss of an AZ, but they each require 12 servers instead of 10, so Option C is the most cost effective distribution.

For more information on High Availability and Fault tolerance, please refer to the below link:
https://media.amazonwebservices.com/architecturecenter/AWS_ac_ra_ftha_o4.pdf

329
Q

Question 726

You have been given a business requirement to retain log files for your application for 10 years. You need to regularly retrieve the most recent logs for troubleshooting. Your logging system must be cost-effective, given the large volume of logs.

What technique should you use to meet these requirements?

A. Store your log in Amazon CloudWatch Logs.

B. Store your logs in Amazon Glacier.

C. Store your logs in Amazon S3, and use lifecycle policies to archive to Amazon Glacier.

D. Store your logs on Amazon EBS, and use Amazon EBS snapshots to archive them.

A

Answer: C

Option A is invalid because CloudWatch Logs is not a cost-effective option for retaining this volume of logs for 10 years.

Option B is invalid because it won’t serve the purpose of regularly retrieving the most recent logs for troubleshooting; you would need to pay more to retrieve logs quickly from this storage.

Option D is invalid because it is neither an ideal nor a cost-effective option.

For more information on Lifecycle management please refer to the below link:
http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html
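
A minimal boto3 sketch of such a lifecycle rule, assuming a hypothetical bucket and prefix:

  import boto3

  s3 = boto3.client("s3")

  # Transition logs to Glacier after 30 days; expire them after ~10 years
  s3.put_bucket_lifecycle_configuration(
      Bucket="my-log-bucket",  # placeholder bucket name
      LifecycleConfiguration={
          "Rules": [{
              "ID": "archive-old-logs",
              "Status": "Enabled",
              "Filter": {"Prefix": "logs/"},
              "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
              "Expiration": {"Days": 3650},
          }]
      },
  )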

330
Q

Question 727

An application in AWS is currently running in the Singapore region. You have been asked to implement disaster recovery, so if the application goes down in the Singapore region, it has to be started in the Asia region. Your application relies on pre-built AMIs. As part of your disaster recovery strategy, which of the below points should you consider?

A. Nothing, because all AMIs by default are available in any region as long as they are created within the same account

B. Copy the AMI from the Singapore region to the Asia region. Modify the Auto Scaling groups in the backup region to use the new AMI ID in the backup region

C. Modify the image permissions and share the AMI to the Asia region.

D. Modify the image permissions to share the AMI with another account, then set the default region to the backup region

A

Answer: B

If you need an AMI across multiple regions, then you have to copy the AMI across regions.

Note that by default, AMIs that you have created are not available across all regions. So option A is automatically invalid.

Next, you can share AMIs with other users, but they will still not be available across regions. So options C and D are invalid.

For more information on copying AMIs, please refer to the below URL:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/CopyingAMIs.html
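
A minimal boto3 sketch of copying an AMI into a backup region; the AMI ID and region names are placeholders:

  import boto3

  # The client is created in the destination (backup) region
  ec2 = boto3.client("ec2", region_name="ap-northeast-1")

  resp = ec2.copy_image(
      Name="app-ami-dr-copy",                 # placeholder name
      SourceImageId="ami-0123456789abcdef0",  # placeholder AMI ID
      SourceRegion="ap-southeast-1",          # Singapore
  )

  # Use this new AMI ID in the backup region's Auto Scaling group
  print(resp["ImageId"])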

331
Q

Question 728

You are an AWS Solution Architect and architecting an application environment on AWS. Which service or service feature might you enable to take advantage of monitoring to ensure auditing the environment for compliance is easy and follows the strict security compliance requirements?

A. CloudTrail for security logs

B. SSL Logging

C. Encrypted data storage

D. Multi Factor Authentication

A

Answer: A

AWS CloudTrail is the de facto service provided by AWS for monitoring all API calls made to AWS, and is used for logging and monitoring for compliance purposes. CloudTrail records every call made to AWS and creates a log which can then be used for further analysis.

For more information on AWS CloudTrail, please visit the link:
https://aws.amazon.com/cloudtrail/

332
Q

Question 729

As part of your application architecture requirements, the company you are working for has requested the ability to run analytics against all combined log files from the Elastic Load Balancer. Which services are used together to collect logs and process log file analysis in an AWS environment? Choose the correct option.

A. Amazon DynamoDB to store the logs and EC2 for running custom log analysis scripts

B. Amazon EC2 for storing and processing the log files

C. Amazon S3 for storing the ELB log files and EC2 for processing the log files in analysis

D. Amazon S3 for storing ELB log files and Amazon EMR for processing the log files in analysis

A

Answer: D

This question is not that complicated, even if you don’t understand all the options. When you see “collection of logs and processing of logs”, think of AWS EMR. Amazon EMR provides a managed Hadoop framework that makes it easy, fast, and cost-effective to process vast amounts of data across dynamically scalable Amazon EC2 instances. You can also run other popular distributed frameworks such as Apache Spark, HBase, Presto, and Flink in Amazon EMR, and interact with data in other AWS data stores such as Amazon S3 and Amazon DynamoDB. Amazon EMR securely and reliably handles a broad set of big data use cases, including log analysis, web indexing, data transformations (ETL), machine learning, financial analysis, scientific simulation, and bioinformatics.

For more information on EMR, please visit the link:
https://aws.amazon.com/emr/

333
Q

Question 730

You have the requirement for storing documents in AWS. You need the documents to be version controlled. Which of the following storage options would be ideal for this scenario?

A. Amazon S3

B. Amazon EBS

C. Amazon EFS

D. Amazon Glacier

A

Answer: A

Amazon S3 is a perfect storage layer for storing documents and other types of objects. Amazon S3 also has the option of versioning. Versioning is enabled at the bucket level and can be used to recover prior versions of an object.

For more information on Amazon S3, please visit the following URL:
https://aws.amazon.com/s3/

334
Q

Question 731

An application currently consists of an EC2 Instance hosting a web application. The web application connects to an AWS RDS database. Which of the following can be used to ensure that the database layer is highly available?

A. Create another EC2 Instance in another availability zone and host a replica of the database

B. Create another EC2 Instance in another availability zone and host a replica of the web server

C. Enable Read Replica for the AWS RDS database

D. Enable Multi-AZ for the AWS RDS database

A

Answer: D

The AWS Documentation mentions the following: Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database (DB) Instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. In case of an infrastructure failure, Amazon RDS performs an automatic failover to the standby (or to a read replica in the case of Amazon Aurora), so that you can resume database operations as soon as the failover is complete. Since the endpoint for your DB Instance remains the same after a failover, your application can resume database operation without the need for manual administrative intervention.

For more information on AWS RDS Multi-AZ, please visit the following URL:
https://aws.amazon.com/rds/details/multi-az/

335
Q

Question 732

An application currently allows users to upload files to an S3 bucket. You want to ensure that the file name of each uploaded file is stored in a DynamoDB table. How can this be achieved? Choose 2 answers from the options given below. Each answer forms part of the solution.

A. Create an AWS Lambda function to insert the required entry

B. Use AWS Cloudwatch to probe for any S3 event

C. Add an event to the S3 bucket

D. Add the Cloudwatch event to the DynamoDB table streams section

A

Answer: A, C

One can create a Lambda function which can contain the code to process the file and add the name of the file to the DynamoDB table. You can then use the Event notification from the S3 bucket to invoke the Lambda function whenever the file is uploaded.

For more information on Amazon S3 event notification, please visit the following URL:
https://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html
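
As a sketch, a Lambda function handling such an S3 event could look like this; the table and attribute names are placeholders:

  import boto3

  dynamodb = boto3.resource("dynamodb")
  table = dynamodb.Table("uploaded-files")  # placeholder table name

  def handler(event, context):
      # An S3 event notification carries the bucket and key of each object
      for record in event["Records"]:
          key = record["s3"]["object"]["key"]
          table.put_item(Item={"FileName": key})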

336
Q

Question 733

A company is migrating an on-premise MySQL database to AWS.

Following are the key requirements

a) Ability to support an initial size of 5TB
b) Ability for the database to double in size
c) Replication Lag to be kept under 100 milliseconds.

Which Amazon RDS engine meets these requirements?

A. MySQL

B. Microsoft SQL Server

C. Oracle

D. Amazon Aurora

A

Answer: D

These requirements are supported by Amazon Aurora. The AWS Documentation mentions the following: Amazon Aurora (Aurora) is a fully managed, MySQL- and PostgreSQL-compatible, relational database engine. It combines the speed and reliability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. It delivers up to five times the throughput of MySQL and up to three times the throughput of PostgreSQL without requiring changes to most of your existing applications. All Aurora Replicas return the same data for query results with minimal replica lag, usually much less than 100 milliseconds after the primary instance has written an update.

For more information on AWS Aurora, please visit the following URL:
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Aurora.Overview.html

337
Q

Question 734

A company has a requirement to host a static web site in AWS. Which of the following would be an easy and cost effective way to set this up in AWS?

A. Use Cloudformation templates to have the web site setup

B. Create an EC2 Instance, install the web server and then have the site setup

C. Use S3 web site hosting to host the web site

D. Use Elastic beanstalk to host the web site

A

Answer: C

The AWS Documentation mentions the following: You can host a static website on Amazon Simple Storage Service (Amazon S3). On a static website, individual webpages include static content. They might also contain client-side scripts.

For more information on AWS S3 web site hosting, please visit the following URL:
https://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html

338
Q

Question 735

An application needs to have a database hosted in AWS. The database will be hosted on an EC2 Instance. The application itself does not have a high usage ratio, hence the reads and writes on the database would be kept to a bare minimum. What is the MOST cost effective storage type that could be used by the underlying EC2 instance hosting the database?

A. Amazon EBS provisioned IOPS SSD

B. Amazon EBS Throughput Optimized HDD

C. Amazon EBS General Purpose SSD

D. Amazon EFS

A

Answer: C

Since the database is not going to be used that frequently you should ideally choose the EBS General Purpose SSD over EBS provisioned IOPS SSD.

For more information on AWS EBS Volumes, please visit the following URL:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumes.html

339
Q

Question 736

An application needs to have files stored in AWS. The file system needs to have the ability to be mounted from various Linux EC2 Instances. Which of the following would be the ideal storage service for this requirement?

A. Amazon EBS

B. Amazon EFS

C. Amazon S3

D. Amazon EC2 Instance store

A

Answer: B

The AWS Documentation mentions the following: Amazon EFS provides scalable file storage for use with Amazon EC2. You can create an EFS file system and configure your instances to mount the file system. You can use an EFS file system as a common data source for workloads and applications running on multiple instances.

For more information on AWS EFS, please visit the following URL:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEFS.html

340
Q

Question 737

An application allows users to upload images to an S3 bucket. Initially these images will be downloaded quite frequently, but after some time the images might only be accessed once a week. What could be done to ensure a COST effective solution? Choose 2 answers from the options below. Each answer forms part of the solution

A. Store the objects in Amazon Glacier

B. Store the objects in S3 — Standard storage

C. Create a lifecycle policy to transfer the objects to S3 — Standard storage after a certain duration of time

D. Create a lifecycle policy to transfer the objects to S3 — Infrequent Access storage after a certain duration of time

A

Answer: B, D

Store the images initially in Standard storage since they are accessed frequently, then define lifecycle policies to move the images to Infrequent Access storage to save on costs. Amazon S3 Infrequent Access is perfect if you want to store data that is not frequently accessed, and it is much more cost effective than keeping the objects in S3 Standard indefinitely. If you chose Amazon Glacier with expedited retrievals instead, you would defeat the whole purpose of the requirement, because the retrieval costs would increase.

For more information on AWS Storage classes, please visit the following URL:
https://aws.amazon.com/s3/storage-classes/

341
Q

Question 738

A company needs a solution to store and archive corporate documents and has determined that Amazon Glacier is the right solution. Data must be delivered within 5 minutes of a retrieval request. Which feature in Amazon Glacier can help meet this requirement?

A. Defining a Vault Lock

B. Using Expedited retrieval

C. Using Bulk retrieval

D. Using Standard retrieval

A

Answer: B

The AWS Documentation mentions the following: Expedited retrievals allow you to quickly access your data when occasional urgent requests for a subset of archives are required.

For more information on AWS Glacier Retrieval, please visit the following URL:
https://docs.aws.amazon.com/amazonglacier/latest/dev/downloading-an-archive-two-steps.html
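
A minimal boto3 sketch of an expedited archive retrieval; the vault name and archive ID are placeholders:

  import boto3

  glacier = boto3.client("glacier")

  glacier.initiate_job(
      accountId="-",  # "-" means the account owning the credentials
      vaultName="corporate-documents",  # placeholder vault name
      jobParameters={
          "Type": "archive-retrieval",
          "ArchiveId": "EXAMPLE-ARCHIVE-ID",  # placeholder archive ID
          "Tier": "Expedited",  # data is typically available in 1-5 minutes
      },
  )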

342
Q

Question 739

A company wants to use Kubernetes as an orchestration tool for their application containers. They need a fully managed solution for this. Which of the following services would help fulfill this requirement?

A. AWS ECS

B. AWS Lambda

C. AWS API Gateway

D. AWS ELB

A

Answer: A

The AWS Documentation mentions the following: Amazon Elastic Container Service for Kubernetes (Amazon EKS) is a managed service that makes it easy for you to run Kubernetes on AWS without needing to install and operate your own Kubernetes clusters. EKS is the Kubernetes offering within the Elastic Container Service family referenced in Option A.

For more information on AWS Elastic Container service for Kubernetes, please visit the following URL: https://aws.amazon.com/eks/

343
Q

Question 740

You are currently planning on using Auto Scaling Groups for processing purposes for an application. How can you ensure that when an instance is spun up via the Auto Scaling Group, sufficient time is provided for the application to stabilize?

A. Modify the Instance User Data property with a timeout interval

B. Increase the Autoscaling cool down timer value

C. Enable the Autoscaling cross zone balancing feature

D. Disable Cloudwatch alarms till the application stabilizes

A

Answer: B

The AWS Documentation mentions the following: The cooldown period is a configurable setting for your Auto Scaling group that helps to ensure that it doesn’t launch or terminate additional instances before the previous scaling activity takes effect. After the Auto Scaling group dynamically scales using a simple scaling policy, it waits for the cooldown period to complete before resuming scaling activities.

For more information on Autoscaling cooldown, please visit the following URL:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/Cooldown.html
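
A minimal boto3 sketch of raising the default cooldown on an existing group; the group name and value are placeholders:

  import boto3

  autoscaling = boto3.client("autoscaling")

  # Wait 10 minutes after each scaling activity before scaling again,
  # giving the application time to stabilize
  autoscaling.update_auto_scaling_group(
      AutoScalingGroupName="processing-asg",  # placeholder group name
      DefaultCooldown=600,
  )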

344
Q

Question 741

A company hosts a popular web application that connects to an Amazon RDS MySQL DB instance running in a private VPC subnet that was created with default ACL settings. The IT Security department suspects that a DDoS attack is coming from a suspected IP. How can you protect the subnets from this attack?

A. Change the Inbound Security Groups to deny access from the suspected IP

B. Change the Outbound Security Groups to deny access from the suspected IP

C. Change the Inbound NACL to deny access from the suspected IP

D. Change the Outbound NACL to deny access from the suspected IP

A

Answer: C

Options A and B are invalid because Security Group rules can only allow traffic; they cannot be used to explicitly deny a specific IP. You can use NACLs as an additional security layer for the subnet to deny traffic.

Option D is invalid since changing just the Inbound NACL rules is sufficient.

The AWS Documentation mentions the following: A network access control list (ACL) is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets. You might set up network ACLs with rules similar to your security groups in order to add an additional layer of security to your VPC.

For more information on Network Access Control Lists, please visit the following URL:
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_ACLs.html

345
Q

Question 742

A company is planning on allowing their users to upload and read objects from an S3 bucket. Due to the large number of users, the read/write traffic will be very high. How should the architect maximize Amazon S3 performance?

A. Prefix each object name with a random string

B. Use the STANDARD _IA storage class

C. Prefix each object name with the current date

D. Enable versioning on the S3 bucket

A

Answer: A

If the request rate is high, then you can use hash keys or random strings to prefix the object name. In such a case, the partitions used to store the objects will be better distributed and hence allow for better read/write performance for your objects.

For more information on how to ensure performance in S3, please visit the following URL:
https://docs.aws.amazon.com/AmazonS3/latest/dev/request-rate-perf-considerations.html
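
As a small illustration, a random prefix can be generated like this; the function name is hypothetical:

  import uuid

  def prefixed_key(filename):
      # A short random prefix spreads keys across S3 index partitions
      return f"{uuid.uuid4().hex[:8]}/{filename}"

  print(prefixed_key("photo.jpg"))  # e.g. '3f1a9c2b/photo.jpg'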

346
Q

Question 743

An EC2 Instance is set up in AWS. It will host an application that will make API calls to the Simple Storage Service. Which is the ideal way for the application to access the Simple Storage Service?

A. Pass API credentials to the instance using instance userdata

B. Store API credentials as an object in a separate Amazon S3 bucket

C. Embed the API credentials into your application

D. Create and Assign an IAM role to the EC2 Instance

A

Answer: D

The AWS Documentation mentions the following: You can use roles to delegate access to users, applications, or services that don’t normally have access to your AWS resources. It’s not a best practice to use IAM credentials for any production based application; it’s always a good practice to use IAM Roles.

For more information on IAM Roles, please visit the following URL:
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html

347
Q

Question 744

You have videos which you upload to an S3 bucket. You want to provide access to users to view the videos. You want to provide the best user experience no matter where the user is located. What is the best way to achieve this?

A. Enable Cross region replication for the S3 bucket to all regions

B. Use CloudFront with the S3 bucket as the source

C. Use API gateway with S3 bucket as the source

D. Use AWS Lambda functions to deliver the content to users

A

Answer: B

The AWS Documentation mentions the following, which backs up this requirement: Amazon CloudFront is a web service that speeds up distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users. CloudFront delivers your content through a worldwide network of data centers called edge locations. When a user requests content that you’re serving with CloudFront, the user is routed to the edge location that provides the lowest latency (time delay), so that content is delivered with the best possible performance. If the content is already in the edge location with the lowest latency, CloudFront delivers it immediately. If the content is not in that edge location, CloudFront retrieves it from an Amazon S3 bucket or an HTTP server (for example, a web server) that you have identified as the source for the definitive version of your content.

For more information on Amazon CloudFront, please visit the following URL:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html

348
Q

Question 745

An organization has the requirement to store 10TB worth of scanned files. There is also a requirement for a search application which can be used to search through the scanned files. Which of the below mentioned options is the best option for implementing the search facility?

A. Use S3 with reduced redundancy to store and serve the scanned files. Install a commercial search application on EC2 Instances and configure with auto-scaling and an Elastic Load Balancer.

B. Model the environment using CloudFormation. Use an EC2 instance running Apache webserver and an open source search application, stripe multiple standard EBS volumes together to store the scanned files with a search index.

C. Use S3 with standard redundancy to store and serve the scanned files. Use CloudSearch for query processing, and use Elastic Beanstalk to host the website across multiple availability zones.

D. Use a single-AZ RDS MySQL instance to store the search index for the scanned files and use an EC2 instance with a custom application to search based on the index.

A

Answer: C

With Amazon CloudSearch, you can quickly add rich search capabilities to your website or application. You don’t need to become a search expert or worry about hardware provisioning, setup, and maintenance. With a few clicks in the AWS
Management Console, you can create a search domain and upload the data that you want to make searchable, and Amazon CloudSearch will automatically provision the required resources and deploy a highly tuned search index. You can easily change your search parameters, fine tune search relevance, and apply new settings at any time. As your volume of data and traffic fluctuates, Amazon CloudSearch seamlessly scales to meet your needs.

For more information on AWS cloudsearch , please visit the below link:
https://aws.amazon.com/cloudsearch/

349
Q

Question 746

You work as an AWS Architect for a company that has an on-premise data center. They want to connect this setup to the AWS Cloud. Note that the connection must have the maximum throughput and be dedicated to the company. How could this be achieved?

A. Use AWS Express Route

B. Use AWS Direct Connect

C. Use AWS VPC Peering

D. Use AWS VPN

A

Answer: B

The AWS Documentation mentions the following: AWS Direct Connect makes it easy to establish a dedicated network connection from your premises to AWS. Using AWS Direct Connect, you can establish private connectivity between AWS and your datacenter, office, or colocation environment, which in many cases can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than Internet-based connections.

For more information on AWS Direct Connect, please visit the below link:
https://aws.amazon.com/directconnect/

350
Q

Question 747

You currently have developers who have access to your production AWS account. There is a concern raised that the developers could potentially delete production EC2 resources. Which of the below options could help alleviate this concern? Choose 2 answers from the options given below.

A. Tag the production instances with a production-identifying tag and add resource-level permissions to the developers with an explicit deny on the terminate API call to instances with the production tag.

B. Create a separate AWS account and add the developers to that account.

C. Modify the IAM policy on the developers to require MFA before deleting EC2 instances and disable MFA access for the employee

D. Modify the IAM policy on the developers to require MFA before deleting EC2 instances

A

Answer: A, B

Tags enable you to categorize your AWS resources in different ways, for example, by purpose, owner, or environment. This is useful when you have many resources of the same type — you can quickly identify a specific resource based on the tags you’ve assigned to it. Each tag consists of a key and an optional value, both of which you define

For more information on tagging aws resources please refer to the below URL:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html
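
As a sketch of option A, an IAM policy with an explicit deny conditioned on the tag could be attached to the developers group; the group name, tag key, and tag value are placeholders:

  import json
  import boto3

  iam = boto3.client("iam")

  deny_policy = {
      "Version": "2012-10-17",
      "Statement": [{
          "Effect": "Deny",
          "Action": "ec2:TerminateInstances",
          "Resource": "*",
          # Explicit deny only for instances tagged as production
          "Condition": {
              "StringEquals": {"ec2:ResourceTag/Environment": "production"}
          },
      }],
  }

  iam.put_group_policy(
      GroupName="developers",  # placeholder group name
      PolicyName="deny-terminate-production",
      PolicyDocument=json.dumps(deny_policy),
  )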

351
Q

Question 748

A company needs to monitor the read and write IOPS metrics for their AWS MySQL RDS instance and send real-time alerts to their operations team. Which AWS services can accomplish this? Choose 2 answers from the options given below.

A. Amazon Simple Email Service

B. Amazon CloudWatch

C. Amazon Simple Queue Service

D. Amazon Route 53

E. Amazon Simple Notification Service

A

Answer: B, E

Amazon CloudWatch will be used to monitor the IOPS metrics from the RDS instance, and Amazon Simple Notification Service will be used to send the notification if any alarm is triggered.

For more information on CloudWatch metrics, please refer to the link:
http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CW_Support_For_AWS.html
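
A minimal boto3 sketch of such an alarm; the DB identifier, threshold, and SNS topic ARN are placeholders:

  import boto3

  cloudwatch = boto3.client("cloudwatch")

  # Alarm when average ReadIOPS over 5 minutes crosses the threshold;
  # the alarm action publishes to an SNS topic the operations team subscribes to
  cloudwatch.put_metric_alarm(
      AlarmName="rds-read-iops-high",
      Namespace="AWS/RDS",
      MetricName="ReadIOPS",
      Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "mydb"}],
      Statistic="Average",
      Period=300,
      EvaluationPeriods=1,
      Threshold=1000.0,
      ComparisonOperator="GreaterThanThreshold",
      AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
  )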

352
Q

Question 749

You run an ad-supported photo sharing website using S3 to serve photos to visitors of your site. At some point you find out that other sites have been linking to the photos on your site, causing loss to your business. What is an effective method to mitigate this? Choose the correct answer from the options below

A. Use CloudFront distributions for static content.

B. Store photos on an EBS volume of the web server.

C. Remove public read access and use signed URLs with expiry dates.

D. Block the IPs of the offending websites in Security Groups.

A

Answer: C

You can distribute private content using a signed URL that is valid for only a short time—possibly for as little as a few minutes. Signed URLs that are valid for such a short period are good for distributing content on-the-fly to a user for a limited purpose, such as distributing movie rentals or music downloads to customers on demand.

For more information on Signed URL’s please visit the below link:
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-signed-urls.html

353
Q

Question 750

A company wants to setup a template for deploying resources to AWS. They want this to be dynamic in nature so that the template can pick up parameters and then spin up resources based on those parameters. Which of the following AWS service would be ideal for this requirement

A. AWS Beanstalk

B. AWS Cloudformation

C. AWS CodeBuild

D. AWS CodeDeploy

A

Answer: B

The AWS Documentation mentions the below on AWS CloudFormation, which supports the requirement for a dynamic, parameter-driven template. AWS CloudFormation is a service that helps you model and set up your Amazon Web Services resources so that you can spend less time managing those resources and more time focusing on your applications that run in AWS. You create a template that describes all the AWS resources that you want (like Amazon EC2 instances or Amazon RDS DB instances), and AWS CloudFormation takes care of provisioning and configuring those resources for you.

For more information on AWS Cloudformation, please visit the following URL:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html

354
Q

Question 751

Your IT Security department has mandated that all data on EBS volumes created for underlying EC2 Instances need to be encrypted. Which of the following can help achieve this?

A. AWS KMS API

B. AWS Certificate Manager

C. API Gateway with STS

D. IAM Access Key

A

Answer: A

Option B is incorrect - the AWS Certificate Manager can be used to generate SSL certificates that encrypt traffic in transit, but not at rest.

Option C is incorrect - this is again used for issuing tokens when using the API gateway, for traffic in transit.

Option D is incorrect - IAM Access Keys are used for programmatic access to AWS, not for encrypting data at rest.

The AWS Documentation mentions the following on AWS KMS: AWS Key Management Service (AWS KMS) is a managed service that makes it easy for you to create and control the encryption keys used to encrypt your data. AWS KMS is integrated with other AWS services including Amazon Elastic Block Store (Amazon EBS), Amazon Simple Storage Service (Amazon S3), Amazon Redshift, Amazon Elastic Transcoder, Amazon WorkMail, Amazon Relational Database Service (Amazon RDS), and others to make it simple to encrypt your data with encryption keys that you manage.

For more information on AWS KMS, please visit the following URL:
https://docs.aws.amazon.com/kms/latest/developerguide/overview.html
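
A minimal boto3 sketch of creating a KMS-encrypted EBS volume; the AZ, size, and key alias are placeholders:

  import boto3

  ec2 = boto3.client("ec2")

  ec2.create_volume(
      AvailabilityZone="us-east-1a",
      Size=100,           # GiB
      VolumeType="gp2",
      Encrypted=True,
      KmsKeyId="alias/ebs-data-key",  # omit to use the default aws/ebs key
  )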

355
Q

Question 752

A company’s Business Continuity department is worried about the EBS volumes hosted in AWS. They want to ensure that redundancy is achieved for the underlying EBS Volumes. What must be done to achieve this in a COST effective manner?

A. Nothing, since by default EBS Volumes are replicated across Availability Zones.

B. Copy the data to S3 bucket for data redundancy

C. Create EBS Snapshots in another Availability Zone for data redundancy

D. Copy the data to a DynamoDB table for data redundancy

A

Answer: C

The AWS Documentation mentions the following: You can back up the data on your Amazon EBS volumes to Amazon S3 by taking point-in-time snapshots. Snapshots are incremental backups, which means that only the blocks on the device that have changed after your most recent snapshot are saved. This minimizes the time required to create the snapshot and saves on storage costs by not duplicating data.

For more information on EBS snapshots, please visit the following URL:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSSnapshots.html

356
Q

Question 753

A mobile application hosted on AWS needs a data store in AWS. Each item will be around 10KB in size. Latency of data access must remain consistent despite very high application traffic. Which would be the ideal data store for the application?

A. AWS DynamoDB

B. AWS EBS Volumes

C. AWS Glacier

D. AWS Redshift

A

Answer: A

The AWS Documentation mentions the following: Amazon DynamoDB is a fast and flexible NoSQL database service for all applications that need consistent, single-digit millisecond latency at any scale. It is a fully managed cloud database and supports both document and key-value store models. Its flexible data model, reliable performance, and automatic scaling of throughput capacity make it a great fit for mobile, web, gaming, ad tech, IoT, and many other applications.

For more information on AWS DynamoDB, please visit the following URL: https://aws.amazon.com/dynamodb/

357
Q

Question 754

A company is planning to design a microservices-based application that will be hosted in AWS. The entire architecture needs to be decoupled. Which of the following services can help achieve this?

A. AWS SNS

B. AWS ELB

C. AWS Autoscaling

D. AWS SQS

A

Answer: D

The AWS Documentation mentions the following: Amazon Simple Queue Service (SQS) is a fully managed message queuing service that makes it easy to decouple and scale microservices, distributed systems, and serverless applications. Building applications from individual components that each perform a discrete function improves scalability and reliability, and is best practice design for modern applications. SQS makes it simple and cost-effective to decouple and coordinate the components of a cloud application. Using SQS, you can send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be always available.

For more information on AWS SQS, please visit the following URL:
https://aws.amazon.com/sqs/

358
Q

Question 755

You are developing a mobile application that needs to issue temporary security credentials to users, so that security is not compromised in the application. Which of the below services can help achieve this?

A. AWS STS

B. AWS Config

C. AWS Trusted Advisor

D. AWS Inspector

A

Answer: A

The AWS Documentation mentions the following: You can use the AWS Security Token Service (AWS STS) to create and provide trusted users with temporary security credentials that can control access to your AWS resources. Temporary security credentials are short-term, as the name implies. They can be configured to last for anywhere from a few minutes to several hours. After the credentials expire, AWS no longer recognizes them or allows any kind of access from API requests made with them.

For more information on the Secure Token Service, please visit the following URL:
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html
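
A minimal boto3 sketch of issuing temporary credentials by assuming a role; the role ARN and session name are placeholders:

  import boto3

  sts = boto3.client("sts")

  resp = sts.assume_role(
      RoleArn="arn:aws:iam::123456789012:role/mobile-app-user",  # placeholder
      RoleSessionName="mobile-session",
      DurationSeconds=3600,  # the credentials expire after one hour
  )

  # Hand these short-lived credentials to the mobile client
  creds = resp["Credentials"]  # AccessKeyId, SecretAccessKey, SessionToken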

359
Q

Question 756

Your architecture for an application currently consists of EC2 Instances sitting behind a Classic ELB. The EC2 Instances are used to serve an application to Internet users. How can you scale this architecture in the event that the number of users accessing the application increases?

A. Add another ELB to the architecture

B. Use Autoscaling Groups

C. Use an Application Load balancer instead

D. Use the Elastic container service

A

Answer: B

The AWS Documentation mentions the following: AWS Auto Scaling monitors your applications and automatically adjusts capacity to maintain steady, predictable performance at the lowest possible cost. Using AWS Auto Scaling, it’s easy to set up application scaling for multiple resources across multiple services in minutes.

For more information on AWS Autoscaling, please visit the following URL:
https://aws.amazon.com/autoscaling/

360
Q

Question 757

You are an architect for a gaming application. This application is still in the design phase. Which of the following services can be used to ensure optimal performance and the lowest latency for the gaming users?

A. AWS Autoscaling

B. AWS ELB

C. AWS ElastiCache

D. AWS VPC

A

Answer: C

The AWS Documentation mentions the following: Amazon ElastiCache offers fully managed Redis and Memcached. Seamlessly deploy, operate, and scale popular open source compatible in-memory data stores. Build data-intensive apps or improve the performance of your existing apps by retrieving data from high throughput and low latency in-memory data stores. Amazon ElastiCache is a popular choice for Gaming, Ad-Tech, Financial Services, Healthcare, and IoT apps.

For more information on AWS ElastiCache, please visit the following URL: https://aws.amazon.com/elasticache/

361
Q

Question 758

You are the architect for a business intelligence application. The application reads data from a MySQL database hosted on an EC2 Instance. The application experiences a high number of read and write requests. Which Amazon EBS volume type can meet the performance requirements of this database?

A. EBS Provisioned IOPS SSD

B. EBS Throughput Optimized HDD

C. EBS General Purpose SSD

D. EBS Cold HDD

A

Answer: A

Since there is a high performance requirement with high IOPS needed, one needs to opt for EBS Provisioned IOPS SSD. The AWS Documentation recommends Provisioned IOPS for better IOPS performance in database-based applications.

For more information on AWS EBS Volume types, please visit the following URL:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html

362
Q

Question 759

An organization is planning to use AWS for their production roll out. The organization wants to implement automation for deployment, such that it will automatically create a LAMP stack, download the latest PHP installable from S3 and set up the ELB. Which of the below mentioned AWS services meets the requirement for making an orderly deployment of the software?

A. AWS Elastic Beanstalk

B. AWS Cloudfront

C. AWS Cloudformation

D. AWS DevOps

A

Answer: C

When you want to automate deployment, the automatic choice is CloudFormation. Below is the excerpt from AWS on CloudFormation: AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion. You can use AWS CloudFormation’s sample templates or create your own templates to describe the AWS resources, and any associated dependencies or runtime parameters, required to run your application. You don’t need to figure out the order for provisioning AWS services or the subtleties of making those dependencies work. CloudFormation takes care of this for you. After the AWS resources are deployed, you can modify and update them in a controlled and predictable way, in effect applying version control to your AWS infrastructure the same way you do with your software.

For more information on Cloud Formation, please visit the URL:
https://aws.amazon.com/cloudformation/

363
Q

Question 760

Your company is planning on using the API Gateway service to manage APIs for developers and users. There needs to be a segregation of control over what the developers and the users can access in the APIs themselves. How can this be accomplished?

A. Use IAM permissions to control the access

B. Use AWS Access keys to manage the access

C. Use AWS KMS service to manage the access

D. Use AWS config service to control the access

A

Answer: A

The AWS Documentation mentions the following: You control access to Amazon API Gateway with IAM permissions by controlling access to the following two API Gateway component processes:

  • To create, deploy, and manage an API in API Gateway, you must grant the API developer permissions to perform the required actions supported by the API management component of API Gateway.
  • To call a deployed API or to refresh the API caching, you must grant the API caller permissions to perform the required IAM actions supported by the API execution component of API Gateway.

For more information on permissions for the API gateway, please visit the URL:
https://docs.aws.amazon.com/apigateway/latest/developerguide/permissions.html

364
Q

Question 761

You currently have 2 development environments hosted in 2 different VPCs in an AWS account, in the same region. There is now a need for resources in one VPC to access the other. How can this be accomplished?

A. Establish a Direct Connect connection

B. Establish a VPN connection

C. Establish VPC Peering

D. Establish subnet peering

A

Answer: C

The AWS Documentation mentions the following: A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them privately. Instances in either VPC can communicate with each other as if they are within the same network. You can create a VPC peering connection between your own VPCs, with a VPC in another AWS account, or with a VPC in a different AWS Region.

For more information on VPC peering, please visit the URL:
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-peering.html
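
A minimal boto3 sketch of peering two VPCs in the same account; the VPC IDs are placeholders:

  import boto3

  ec2 = boto3.client("ec2")

  resp = ec2.create_vpc_peering_connection(
      VpcId="vpc-11111111",      # placeholder requester VPC
      PeerVpcId="vpc-22222222",  # placeholder accepter VPC
  )
  pcx_id = resp["VpcPeeringConnection"]["VpcPeeringConnectionId"]

  # The owner of the peer VPC accepts the request
  ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

  # Routes pointing at pcx_id must then be added to each VPC's route table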

365
Q

Question 762

Your company is planning on using the EMR service available in AWS for running their big data framework. They want to minimize the cost of running the EMR service. Which of the following would help achieve this?

A. Running the EMR cluster in a dedicated VPC

B. Choosing Spot Instances for the underlying nodes

C. Choosing On-Demand Instances for the underlying nodes

D. Disable automated backups

A

Answer: B

The AWS Documentation mentions the following: Spot Instances in Amazon EMR provide an option for you to purchase Amazon EC2 instance capacity at a reduced cost as compared to On-Demand purchasing.

For more information on Instance types for EMR, please visit the URL:
https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-instance-purchasing-options.html

366
Q

Question 763

You have an S3 bucket hosted in AWS. This is used to host promotional videos that you have uploaded. You need to provide access to users to view the videos for a limited duration of time. How can this be achieved?

A. Use versioning and enable a timestamp for each version

B. Use Pre-signed URLs

C. Use IAM Roles with a timestamp to limit the access

D. Use IAM policies with a timestamp to limit the access

A

Answer: B

The AWS Documentation mentions the following: All objects by default are private. Only the object owner has permission to access these objects. However, the object owner can optionally share objects with others by creating a pre-signed URL, using their own security credentials, to grant time-limited permission to download the objects.

For more information on pre-signed URLs, please visit the URL:
https://docs.aws.amazon.com/AmazonS3/latest/dev/ShareObjectPreSignedURL.html
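
A minimal boto3 sketch of generating such a URL; the bucket, key, and expiry are placeholders:

  import boto3

  s3 = boto3.client("s3")

  url = s3.generate_presigned_url(
      "get_object",
      Params={"Bucket": "promo-videos", "Key": "launch.mp4"},  # placeholders
      ExpiresIn=3600,  # the link stops working after one hour
  )
  print(url)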

367
Q

Question 764

An application is currently writing a large number of records to a DynamoDB table
in one region. There is a requirement for a secondary application to just take in the
changes to the DynamoDB table every 2 hours and process the updates accordingly.
Which of the following is an ideal way to ensure the secondary application can get the
relevant changes from the DynamoDB table?

A. Insert a timestamp for each record and then scan the entire table for the timestamp as per the last 2 hours.

B. Create another DynamoDB table with the records modified in the last 2 hours.

C. Use DynamoDB streams to monitor the changes in the DynamoDB table.

D. Transfer the records to S3 which were modified in the last 2 hours

A

Answer: C

The AWS Documentation mentions the following: A DynamoDB stream is an ordered flow of information about changes to items in an Amazon DynamoDB table. When you enable a stream on a table, DynamoDB captures information about every modification to data items in the table. Whenever an application creates, updates, or deletes items in the table, DynamoDB Streams writes a stream record with the primary key attribute(s) of the items that were modified. A stream record contains information about a data modification to a single item in a DynamoDB table. You can configure the stream so that the stream records capture additional information, such as the “before” and “after” images of modified items.

For more information on DynamoDB streams, please visit the below URL:
http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.html
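
A rough boto3 sketch (the table name is a placeholder) of enabling a stream and reading change records from it:

    import boto3

    ddb = boto3.client('dynamodb')
    streams = boto3.client('dynamodbstreams')

    # Enable the stream on a placeholder table, capturing before/after images
    ddb.update_table(
        TableName='orders',
        StreamSpecification={'StreamEnabled': True,
                             'StreamViewType': 'NEW_AND_OLD_IMAGES'},
    )

    # The secondary application reads change records from the stream
    arn = ddb.describe_table(TableName='orders')['Table']['LatestStreamArn']
    shard = streams.describe_stream(StreamArn=arn)['StreamDescription']['Shards'][0]
    it = streams.get_shard_iterator(
        StreamArn=arn, ShardId=shard['ShardId'],
        ShardIteratorType='TRIM_HORIZON',
    )['ShardIterator']
    for record in streams.get_records(ShardIterator=it)['Records']:
        print(record['eventName'], record['dynamodb'].get('Keys'))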

368
Q

Question 765

Your company has just started using a host of AWS services. There is now a drive from a costing perspective to ensure cost is optimized for all services used by the company. Which of the below services would give a cost optimization perspective for resources hosted on the AWS Cloud?

A. AWS Inspector

B. AWS Trusted Advisor

C. AWS WAF

D. AWS Config

A

Answer: B

The AWS Documentation mentions the following on the Trusted Advisor: An online resource to help you reduce cost, increase performance, and improve security by optimizing your AWS environment, Trusted Advisor provides real time guidance to help you provision your resources following AWS best practices.

For more information on the Trusted Advisor, please visit the below URL:
https://aws.amazon.com/premiumsupport/trustedadvisor/

369
Q

Question 766

Your IT Security department has mandated that all traffic flowing from the EC2 Instances need to be monitored. Which of the below services can help achieve this?

A. Trusted Advisor

B. VPC Flow Logs

C. Use Cloudwatch metrics

D. Use Cloudtrail

A

Answer: B

The AWS Documentation mentions the following: VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC. Flow log data is stored using Amazon CloudWatch Logs. After you’ve created a flow log, you can view and retrieve its data in Amazon CloudWatch Logs.

For more information on VPC Flow Logs, please visit the following URL:
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/flow-logs.html
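
A minimal boto3 sketch, assuming a placeholder VPC ID and a pre-existing log group and IAM role:

    import boto3

    ec2 = boto3.client('ec2')

    # Capture all traffic for a placeholder VPC into CloudWatch Logs;
    # the log group and delivery role are assumed to exist already.
    ec2.create_flow_logs(
        ResourceIds=['vpc-11111111'],
        ResourceType='VPC',
        TrafficType='ALL',
        LogGroupName='vpc-flow-logs',
        DeliverLogsPermissionArn='arn:aws:iam::123456789012:role/flow-logs-role',
    )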

370
Q

Question 767

A company has a Redshift cluster defined in AWS. They need to have a disaster recovery mechanism in place in the event the Redshift cluster goes down for any reason. Which of the following can help get the cluster available immediately in the event the primary one goes down?

A. Take a copy of the underlying EBS volumes to S3 and then do cross region replication

B. Enable cross region snapshots for the Redshift Cluster

C. Create a Cloudformation template to restore the Cluster in another region

D. Enable cross availability zone snapshots for the Redshift Cluster

A

Answer: B

Cross-region snapshots are available for Redshift clusters, which enables a cluster to be restored in a different region in the event the primary one goes down.

For more information on managing Redshift snapshots, please visit the following URL:
https://docs.aws.amazon.com/redshift/latest/mgmt/managing-snapshots-console.html
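
A one-call boto3 sketch (the cluster identifier and regions are placeholder assumptions) that enables cross-region snapshot copy:

    import boto3

    redshift = boto3.client('redshift', region_name='us-east-1')

    # Copy automated snapshots of a placeholder cluster to another region
    redshift.enable_snapshot_copy(
        ClusterIdentifier='analytics-cluster',
        DestinationRegion='us-west-2',
        RetentionPeriod=7,   # days to keep the copied snapshots
    )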

371
Q

Question 768

You have an AWS RDS database hosted in the Singapore region. You need to ensure that a backup database is in place and the data is asynchronously copied. Which of the following would help fulfill this requirement?

A. Enable Multi-AZ for the database

B. Enable Read Replicas for the database

C. Enable Asynchronous replication for the database

D. Enable manual backups for the database

A

Answer: B

The AWS Documentation mentions the following: Amazon RDS Read Replicas enable you to create one or more read-only copies of your database instance within the same AWS Region or in a different AWS Region. Updates made to the source database are then asynchronously copied to your Read Replicas. In addition to providing scalability for read-heavy workloads, Read Replicas can be promoted to become a standalone database instance when needed.

For more information on Read Replicas, please visit the following URL:
https://aws.amazon.com/rds/details/read-replicas/
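
A hedged boto3 sketch of creating a cross-region replica; the identifiers and regions are placeholders, and the source must be referenced by its ARN:

    import boto3

    # Call from the destination region; boto3 handles the cross-region
    # presigned request when SourceRegion is given.
    rds = boto3.client('rds', region_name='us-west-2')
    rds.create_db_instance_read_replica(
        DBInstanceIdentifier='mydb-replica',
        SourceDBInstanceIdentifier='arn:aws:rds:ap-southeast-1:123456789012:db:mydb',
        SourceRegion='ap-southeast-1',
    )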

372
Q

Question 769

Your current log analysis application takes more than four hours to generate a report of the top 10 users of your web application. You have been asked to implement a system that can report this information in real time, ensure that the report is always up to date, and handle increases in the number of requests to your web application. Choose the option that is cost-effective and can fulfill the requirements.

A. Publish your data to CloudWatch Logs, and configure your application to autoscale to handle the load on demand.

B. Publish your log data to an Amazon S3 bucket. Use AWS CloudFormation to create an Auto Scaling group to scale your post-processing application which is configured to pull down your log files stored in Amazon S3.

C. Post your log data to an Amazon Kinesis data stream, and subscribe your log-processing application so that it is configured to process your logging data.

D. Configure an Auto Scaling group to increase the size of your Amazon EMR cluster

A

Answer: C

The AWS Documentation mentions the below: Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. Amazon Kinesis offers key capabilities to cost effectively process streaming data at any scale, along with the flexibility to choose the tools that best suit the requirements of your application. With Amazon Kinesis, you can ingest real-time data such as application logs, website clickstreams, and IoT telemetry data for machine learning, analytics, and other applications, or build your own real-time applications using this data. Amazon Kinesis enables you to process and analyze data as it arrives and respond in real-time instead of having to wait until all your data is collected before the processing can begin.

For more information on AWS Kinesis please see the below link:
https://aws.amazon.com/kinesis/
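
A minimal boto3 sketch (the stream name and payload are assumptions) of publishing a log record to a Kinesis data stream:

    import json
    import boto3

    kinesis = boto3.client('kinesis')

    # Each log line becomes a record on a placeholder stream; the
    # partition key spreads records across shards.
    kinesis.put_record(
        StreamName='weblogs',
        Data=json.dumps({'user': 'u123', 'path': '/home'}).encode(),
        PartitionKey='u123',
    )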

373
Q

Question 770

Your company wants to automate the deployment of new EC2 Instances. They want to have pre-baked Images so that the deployment of instances can be done in a faster manner. Which of the following can help achieve this?

A. Create an Elastic Beanstalk image

B. Create an Opswork image

C. Create an Amazon Machine Image

D. Create an EC2 Image

A

Answer: C

The AWS Documentation mentions the below: An Amazon Machine Image (AMI) provides the information required to launch an instance, which is a virtual server in the cloud. You must specify a source AMI when you launch an instance. You can launch multiple instances from a single AMI when you need multiple instances with the same configuration. You can use different AMIs to launch instances when you need instances with different configurations.

For more information on AMIs, please see the below link:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html
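
A short boto3 sketch, assuming a placeholder instance ID, of baking an AMI from a configured instance and launching from it:

    import boto3

    ec2 = boto3.client('ec2')

    # Bake an image from a pre-configured instance (placeholder ID),
    # then launch new instances from it.
    ami = ec2.create_image(InstanceId='i-0abc1234567890def',
                           Name='web-server-baked-v1')['ImageId']
    ec2.run_instances(ImageId=ami, InstanceType='t2.micro',
                      MinCount=1, MaxCount=1)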

374
Q

Question 771

There is a requirement to load a lot of data from your on-premise network on to AWS Redshift. Which of the below can be used for this data transfer? Choose 2 answers from the options given below.

A. Data Pipeline

B. Direct Connect

C. Snowball

D. AWS VPN

A

Answer: B, C

The AWS documentation mentions the following about the respective services: With a Snowball, you can transfer hundreds of terabytes or petabytes of data between your on-premises data centers and Amazon Simple Storage Service (Amazon S3). AWS Snowball uses Snowball appliances and provides powerful interfaces that you can use to create jobs, transfer data, and track the status of your jobs through to completion. By shipping your data in Snowballs, you can transfer large amounts of data at a significantly faster rate than if you were transferring that data over the Internet, saving you time and money. AWS Direct Connect links your internal network to an AWS Direct Connect location over a standard 1-gigabit or 10-gigabit Ethernet fiber-optic cable. One end of the cable is connected to your router, the other to an AWS Direct Connect router. With this connection in place, you can create virtual interfaces directly to public AWS services (for example, to Amazon S3) or to Amazon VPC, bypassing Internet service providers in your network path.

For more information on Direct Connect, please refer to the below URL:
http://docs.aws.amazon.com/directconnect/latest/UserGuide/Welcome.html

For more information on AWS Snowball, please refer to the below URL:
http://docs.aws.amazon.com/snowball/latest/ug/whatissnowball.html

375
Q

Question 772

You have just created a Redshift cluster in AWS. You are trying to use SQL Client tools from an EC2 Instance, but you are not able to connect to the Redshift Cluster. What must you do to ensure that you are able to connect to the Redshift Cluster from
the EC2 Instance?

A. Install Redshift client tools on the EC2 Instance first.

B. Modify the VPC Security Groups

C. Use the AWS CLI instead of the Redshift client tools.

D. Modify the NACL on the subnet

A

Answer: B

The AWS Documentation mentions the following: By default, any cluster that you create is closed to everyone. IAM credentials only control access to the Amazon Redshift API-related resources: the Amazon Redshift console, command line interface (CLI), API, and SDK. To enable access to the cluster from SQL client tools via JDBC or ODBC, you use security groups:

  • If you are using the EC2-Classic platform for your Amazon Redshift cluster, you must use Amazon Redshift security groups.
  • If you are using the EC2-VPC platform for your Amazon Redshift cluster, you must use VPC security groups.

For more information on Amazon Redshift, please refer to the below URL:
http://docs.aws.amazon.com/redshift/latest/mgmt/overview.html
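
A hedged boto3 sketch (the security group ID and CIDR are placeholders) of opening the default Redshift port, 5439, in the cluster's VPC security group:

    import boto3

    ec2 = boto3.client('ec2')

    # Allow the EC2 Instance's subnet (placeholder CIDR) to reach the
    # cluster's VPC security group on the default Redshift port.
    ec2.authorize_security_group_ingress(
        GroupId='sg-0123456789abcdef0',
        IpProtocol='tcp', FromPort=5439, ToPort=5439,
        CidrIp='10.0.1.0/24',
    )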

376
Q

Question 773

You currently work for a company that looks at baggage handling. There are GPS devices located on the baggage delivery units to deliver the coordinates of the unit every 10 seconds. You need to process these coordinates in real-time from multiple sources. Which tool should you use to ingest the data?

A. Amazon EMR

B. Amazon SQS

C. AWS Data Pipeline

D. Amazon Kinesis

A

Answer: D

The AWS Documentation mentions the following: Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. Amazon Kinesis offers key capabilities to cost-effectively process streaming data at any scale, along with the flexibility to choose the tools that best suit the requirements of your application. With Amazon Kinesis, you can ingest real-time data such as video, audio, application logs, website clickstreams, and IoT telemetry data for machine learning, analytics, and other applications. Amazon Kinesis enables you to process and analyze data as it arrives and respond instantly instead of having to wait until all your data is collected before the processing can begin.

For more information on Amazon Kinesis, please visit the link:
https://aws.amazon.com/kinesis/

377
Q

Question 774

You are planning on hosting a web and database application in an AWS VPC. The database should only be able to talk to the web server. Which of the following would you change to fulfil this requirement?

A. Network Access Control Lists

B. AWS RDS parameter groups

C. Route Tables

D. Security groups

A

Answer: D

You would use VPC Security Groups for this. The AWS Documentation additionally mentions the following: A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. When you launch an instance in a VPC, you can assign up to five security groups to the instance. Security groups act at the instance level, not the subnet level. Therefore, each instance in a subnet in your VPC could be assigned to a different set of security groups. If you don’t specify a particular group at launch time, the instance is automatically assigned to the default security group for the VPC.

For more information on VPC Security Groups, please visit the link:
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_SecurityGroups.html
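
A rough boto3 sketch (both group IDs are placeholders) where the database tier's security group admits traffic only from members of the web tier's security group rather than from a CIDR range:

    import boto3

    ec2 = boto3.client('ec2')

    # Database security group admits MySQL traffic only from instances
    # that belong to the web tier's security group.
    ec2.authorize_security_group_ingress(
        GroupId='sg-11111111111111111',   # database tier SG (placeholder)
        IpPermissions=[{
            'IpProtocol': 'tcp', 'FromPort': 3306, 'ToPort': 3306,
            'UserIdGroupPairs': [{'GroupId': 'sg-22222222222222222'}],  # web tier SG
        }],
    )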

378
Q

Question 775

A company has a requirement for block level storage which would be able to store 800GB of data. Encryption of the data is also required. Which of the following can be used in such a case?

A. AWS EBS Volumes

B. AWS S3

C. AWS Glacier

D. AWS EFS

A

Answer: A

When you consider block level storage, you need to consider EBS Volumes. Options B and C are incorrect since S3 and Glacier are object level storage. Option D is incorrect since EFS is file level storage.

For more information on EBS volumes, please visit the following URL:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumes.html
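
A minimal boto3 sketch of provisioning an 800 GiB encrypted volume; the Availability Zone and volume type are assumptions:

    import boto3

    ec2 = boto3.client('ec2')

    # An 800 GiB encrypted volume; the AZ is a placeholder and must
    # match the instance's AZ before attachment.
    ec2.create_volume(AvailabilityZone='us-east-1a',
                      Size=800, VolumeType='gp2', Encrypted=True)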

379
Q

Question 776

An application requires storage for an EC2 Instance which would be used to store infrequently accessed data. Which of the following is the most cost-effective storage option for this?

A. EBS IOPS

B. EBS SSD

C. EBS Throughput Optimized

D. EBS Cold HDD

A

Answer: D

If you need storage for infrequently accessed data, then EBS Cold HDD is the best option. This is also mentioned in the AWS Documentation.

For more information on EBS volume types, please visit the following URL:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html

380
Q

Question 777

There are multiple issues reported from an EC2 instance. It is required to analyze the logs files. What can be used in AWS to store and analyze the log files from the EC2 Instance? Choose one answer from the options below

A. AWS SQS

B. AWS S3

C. AWS Cloudtrail

D. AWS Cloudwatch Logs

A

Answer: D

You can use Amazon CloudWatch Logs to monitor, store, and access your log files from Amazon Elastic Compute Cloud (Amazon EC2) instances, AWS CloudTrail, and other sources.

For more information on Cloudwatch Logs, please visit the following URL:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html
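
A hedged boto3 sketch of the underlying API calls (the group and stream names are placeholders; in practice the CloudWatch Logs agent on the instance ships the files):

    import time
    import boto3

    logs = boto3.client('logs')

    # Create a group and a per-instance stream, then write one event
    # (the first write to a new stream needs no sequence token).
    logs.create_log_group(logGroupName='/ec2/app')
    logs.create_log_stream(logGroupName='/ec2/app', logStreamName='i-0abc123')
    logs.put_log_events(
        logGroupName='/ec2/app', logStreamName='i-0abc123',
        logEvents=[{'timestamp': int(time.time() * 1000),
                    'message': 'ERROR: connection timed out'}],
    )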

381
Q

Question 778

You are currently hosting an infrastructure and most of the EC2 instances are near 90-100% utilized. What is the type of EC2 instances you would utilize to ensure costs are minimized?

A. Reserved instances

B. On-demand instances

C. Spot instances

D. Regular instances

A

Answer: A

When you have instances that will be used continuously and throughout the year, the best option is to buy reserved instances. By buying reserved instances, you are actually allocated an instance for the entire year or the duration you specify with a reduced cost.

To understand more on reserved instances, please visit the below URL:
https://aws.amazon.com/ec2/pricing/reserved-instances/

382
Q

Question 779

As a Solutions architect, it is your job to design for high availability and fault tolerance. Company-A is utilizing Amazon S3 to store large amounts of file data. What steps would you take to ensure that if an availability zone was lost due to a natural disaster your files would still be in place and accessible?

A. Copy the S3 bucket to an EBS optimized backed EC2 instance

B. Amazon S3 is highly available and fault tolerant by design and requires no additional configuration

C. Enable AWS Storage Gateway using gateway-stored setup

D. Enable Cross region replication for the S3 bucket

A

Answer: B

AWS S3 is already highly available and fault tolerant.

This is very clearly mentioned in its FAQs; the link is given below:
https://aws.amazon.com/s3/faqs/

383
Q

Question 780

A company wants to utilize AWS storage. For them, low storage cost is paramount; the data is rarely retrieved, and data retrieval times of several hours are acceptable for them. What is the best storage option to use?

A. Amazon Glacier

B. S3-Reduced Redundancy Storage

C. EBS backed storage connected to EC2

D. AWS Cloud Front

A

Answer: A

With the above requirements, the best option is to opt for Amazon Glacier. The AWS Documentation further mentions the following: Amazon Glacier is a secure, durable, and extremely low-cost cloud storage service for data archiving and long-term backup. It is designed to deliver 99.999999999% durability, and provides comprehensive security and compliance capabilities that can help meet even the most stringent regulatory requirements.

For more information on Amazon Glacier, please refer to the below URL:
https://aws.amazon.com/glacier/
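
One common way to land data in Glacier is an S3 lifecycle rule; a hedged boto3 sketch, with the bucket name and transition timing as assumptions:

    import boto3

    s3 = boto3.client('s3')
    s3.put_bucket_lifecycle_configuration(
        Bucket='archive-bucket',   # placeholder bucket
        LifecycleConfiguration={'Rules': [{
            'ID': 'archive-after-30-days',
            'Status': 'Enabled',
            'Filter': {'Prefix': ''},   # apply to all objects
            'Transitions': [{'Days': 30, 'StorageClass': 'GLACIER'}],
        }]},
    )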

384
Q

Question 781

A company is building a service in which Amazon EC2 worker instances process an uploaded audio file and generate a text file. You must store both of these files in the same durable storage until the text file is retrieved. You do not know what the storage capacity requirements are. Which storage option is both cost-efficient and scalable?

A. Multiple Amazon EBS volume with snapshots

B. A single Amazon Glacier vault

C. A single Amazon S3 bucket

D. Multiple instance stores

A

Answer: C

Amazon S3 is the best storage option for this. It is durable and highly available.

For more information on Amazon S3, please refer to the below URL:
https://aws.amazon.com/s3/

385
Q

Question 782

You have an application currently running on five EC2 instances as part of an Auto Scaling group. For the past 30 minutes all five instances have been running at 100% CPU Utilization; however, the Auto Scaling group has not added any more instances to the group. What is the most likely cause?

Choose 2 likely answers from the options given below

A. You already have 20 on-demand instances running.

B. The Auto Scaling group’s MAX size is set at five.

C. The Auto Scaling group’s scale down policy is too high.

D. The Auto Scaling group’s scale up policy has not yet been reached.

A

Answer: A, B

This is provided in the AWS documentation. The group will not scale out if it has already reached its MAX size, or if the account has hit the default limit of 20 On-Demand instances in the region.

For more information on troubleshooting Autoscaling, please refer to the
following link: http://docs.aws.amazon.com/autoscaling/latest/userguide/ts-as-capacity.html
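
A short boto3 sketch (the group name and new size are placeholders) of checking the group's limits and raising MaxSize so scaling can resume:

    import boto3

    autoscaling = boto3.client('autoscaling')

    # Inspect the group's current limits
    grp = autoscaling.describe_auto_scaling_groups(
        AutoScalingGroupNames=['web-asg'])['AutoScalingGroups'][0]
    print(grp['MinSize'], grp['MaxSize'], grp['DesiredCapacity'])

    # Raise the ceiling so the scale-out policy can add instances
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName='web-asg', MaxSize=10)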

386
Q

Question 783

Your CloudFront distribution is performing well, but you are still getting too many requests at the origin locations. What could be one way to increase CloudFront performance? Choose the correct answer from the options below

A. Change the origin location from an S3 bucket to an ELB

B. Use a faster Internet connection

C. Increase the cache expiration time

D. Create an “invalidation” for all your objects, and recache them

A

Answer: C

You can control how long your objects stay in a CloudFront cache before CloudFront forwards another request to your origin. Reducing the duration allows you to serve dynamic content. Increasing the duration means your users get better performance because your objects are more likely to be served directly from the edge cache. A longer duration also reduces the load on your origin.

For more information on Cloudfront cache expiration, please refer to the following link:
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Expiration.html
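
One way to lengthen the cache duration is a longer Cache-Control max-age on the origin objects; a hedged boto3 sketch with a placeholder bucket, key, and body:

    import boto3

    s3 = boto3.client('s3')

    # Objects served through CloudFront honor the origin's Cache-Control
    # header; a longer max-age keeps them in the edge cache longer.
    s3.put_object(
        Bucket='my-origin-bucket',       # placeholder origin bucket
        Key='images/logo.png',
        Body=b'<image bytes>',           # placeholder content
        CacheControl='max-age=86400',    # cache at the edge for 24 hours
    )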

387
Q

Question 784

You have been instructed by your supervisor to devise a disaster recovery model for the resources in their AWS account. The key requirement is to ensure that the cost is at a minimum when devising the solution. Which of the following disaster recovery mechanism would you employ in such a scenario?

A. Backup and Restore

B. Pilot Light

C. Warm standby

D. Multi-Site

A

Answer: A

Since the cost needs to be at a minimum, the best option is to back up all the resources and then perform a restore in the event of a disaster.

For more information on disaster recovery, please refer to the below link:
https://media.amazonwebservices.com/AWS_Disaster_Recovery.pdf

388
Q

Question 785

An application consists of the following architecture.

a. EC2 Instances in multiple AZs behind an ELB.
b. The EC2 Instances are launched via an Autoscaling Group
c. There is a NAT instance which is used to ensure that instances can download updates from the internet.

Due to the high bandwidth being consumed by the NAT instance, it has been decided to use a NAT gateway. How should this be implemented?

A. Use NAT Instances along with the NAT gateway

B. Host the NAT instance in the private subnet

C. Migrate the NAT Instance to NAT Gateway and host the NAT Gateway in the public subnet

D. Convert the NAT instance to a NAT gateway

A

Answer: C

One can simply start using the NAT gateway service and stop using the deployed NAT instances. However, you need to ensure that the NAT gateway is deployed in the public subnet.

For more information on migrating to a NAT gateway, please visit the following URL:
https://aws.amazon.com/premiumsupport/knowledge-center/migrate-nat-instance-gateway/
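
A rough boto3 sketch of the migration (the subnet, Elastic IP allocation, and route table IDs are placeholders):

    import boto3

    ec2 = boto3.client('ec2')

    # Create the NAT gateway in a public subnet using an Elastic IP
    nat = ec2.create_nat_gateway(
        SubnetId='subnet-11111111',
        AllocationId='eipalloc-11111111',
    )['NatGateway']['NatGatewayId']

    # Point the private subnet's default route at the NAT gateway
    ec2.replace_route(RouteTableId='rtb-22222222',
                      DestinationCidrBlock='0.0.0.0/0',
                      NatGatewayId=nat)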

389
Q

Question 786

A company has an application hosted in AWS. This application consists of EC2 Instances which sit behind an ELB. The following are requirements from an administrative perspective

a) Investigate any issues for the ELB by searching through the relevant logs
b) Ensure notifications are sent when the latency goes beyond 10 seconds

Which of the following can be used to achieve this requirement? Choose 2 answers from the options given below

A. Use Cloudwatch metrics for whatever metrics need to be monitored.

B. Enable Cloudwatch logs and then investigate the logs whenever there is an issue.

C. Enable the logs on the ELB and then investigate the logs whenever there is an issue.

D. Use Cloudtrail to monitor whatever metrics need to be monitored.

A

Answer: A,C

When you use Cloudwatch metrics for an ELB, you can get the number of requests and the latency out of the box.

For more information on using Cloudwatch with the ELB, please visit the following URL:
https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-cloudwatch-metrics.html

Elastic Load Balancing provides access logs that capture detailed information about requests sent to your load balancer. Each log contains information such as the time the request was received, the client’s IP address, latencies, request paths, and server responses. You can use these access logs to analyze traffic patterns and to troubleshoot issues.

For more information on using ELB logs, please visit the following URL:
https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/access-log-collection.html
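
For requirement (b), a hedged boto3 sketch of a CloudWatch alarm on the classic ELB Latency metric; the load balancer name and SNS topic are placeholder assumptions:

    import boto3

    cloudwatch = boto3.client('cloudwatch')

    # Notify an (assumed) SNS topic when average latency exceeds 10 seconds
    cloudwatch.put_metric_alarm(
        AlarmName='elb-high-latency',
        Namespace='AWS/ELB',
        MetricName='Latency',
        Dimensions=[{'Name': 'LoadBalancerName', 'Value': 'my-elb'}],
        Statistic='Average',
        Period=60,
        EvaluationPeriods=1,
        Threshold=10.0,
        ComparisonOperator='GreaterThanThreshold',
        AlarmActions=['arn:aws:sns:us-east-1:123456789012:ops-alerts'],
    )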

390
Q

Question 787

A company has a requirement to extend their storage model to the AWS cloud. They should be able to connect their on-premises servers to the storage layers via iSCSI. Which of the following would be best suited for this?

A. Configure the Simple storage service

B. Configure Storage gateway cached volume

C. Configure Storage gateway stored volume

D. Configure Amazon Glacier

A

Answer: C

The AWS Documentation mentions the following: If you need low-latency access to your entire dataset, first configure your on-premises gateway to store all your data locally. Then asynchronously back up point-in-time snapshots of this data to Amazon S3. This configuration provides durable and inexpensive offsite backups that you can recover to your local data center or Amazon EC2. For example, if you need replacement capacity for disaster recovery, you can recover the backups to Amazon EC2.

For more information on the Storage gateway, please visit the following URL:
https://docs.aws.amazon.com/storagegateway/latest/userguide/WhatIsStorageGateway.html

391
Q

Question 788

An IT company has a set of EC2 Instances hosted in a VPC. They are hosted in a private subnet. These instances now need to access resources hosted in an S3 bucket. The traffic should not traverse the internet. The addition of which of the following would help fulfil this requirement?

A. VPC endpoint

B. NAT Instance

C. NAT gateway

D. Internet gateway

A

Answer: A

A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network.

For more information on AWS VPC endpoints, please visit the following URL:
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-endpoints.html
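
A minimal boto3 sketch (the VPC and route table IDs are placeholders) of creating a gateway endpoint for S3:

    import boto3

    ec2 = boto3.client('ec2')

    # A gateway endpoint keeps S3 traffic on the Amazon network
    ec2.create_vpc_endpoint(
        VpcId='vpc-11111111',
        ServiceName='com.amazonaws.us-east-1.s3',
        RouteTableIds=['rtb-11111111'],   # the private subnet's route table
    )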

392
Q

Question 789

You need to host a set of web servers and database servers in an AWS VPC. Which of the following is the proper architecture design for supporting such a set of servers?

A. Use a public subnet for the web tier and a public subnet for the database layer

B. Use a public subnet for the web tier and a private subnet for the database layer

C. Use a private subnet for the web tier and a private subnet for the database layer

D. Use a private subnet for the web tier and a public subnet for the database layer

A

Answer: B

The ideal setup is to ensure that the web server is hosted in the public subnet so that it can be accessed by users on the internet. The database server can be hosted in the private subnet. The scenario in the AWS Documentation shows how this can be set up.

For more information on public and private subnets in AWS, please visit the following URL:
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario2.html

393
Q

Question 790

An IT company is looking at ways it can secure their resources in their AWS Account. Which of the following are ways to secure data at rest and in transit in AWS? Choose 3 answers from the options given below

A. Encrypt all EBS volumes attached to EC2 Instances

B. Use server side encryption for S3

C. Use SSL/HTTPS when using the Elastic Load Balancer

D. Use IOPS volumes when working with EBS volumes on EC2 Instances

A

Answer: A, B, C

The AWS documentation mentions the following: Amazon EBS encryption offers you a simple encryption solution for your EBS volumes without the need for you to build, maintain, and secure your own key management infrastructure. When you create an encrypted EBS volume and attach it to a supported instance type, the following types of data are encrypted:

  • Data at rest inside the volume
  • All data moving between the volume and the instance
  • All snapshots created from the volume

Data protection refers to protecting data while in-transit (as it travels to and from Amazon S3) and at rest (while it is stored on disks in Amazon S3 data centers). You can protect data in transit by using SSL or by using client-side encryption. You have the following options for protecting data at rest in Amazon S3.

-Use Server-Side Encryption: You request Amazon S3 to encrypt your object before saving it on disks in its data centers and decrypt it when you download the objects.

-Use Client-Side Encryption: You can encrypt data client-side and upload the encrypted data to Amazon S3. In this case, you manage the encryption process, the encryption keys, and related tools.

You can create a load balancer that uses the SSL/TLS protocol for encrypted connections (also known as SSL offload). This feature enables traffic encryption between your load balancer and the clients that initiate HTTPS sessions, and for connections between your load balancer and your EC2 instances.

For more information on securing data at rest, please refer to the below link:
https://d0.awsstatic.com/whitepapers/aws-securing-data-at-rest-with-encryption.pdf

394
Q

Question 791

Your company currently has a set of EC2 Instances running a web application which sits behind an Elastic Load Balancer. You also have an Amazon RDS instance which is used by the web application. You have been asked to ensure that this architecture is self-healing in nature and cost effective. Which of the following would fulfill this requirement? Choose 2 answers from the options given below

A. Use Cloudwatch metrics to check the utilization of the web layer. Use Autoscaling Group to scale the web instances accordingly based on the CloudWatch metrics.

B. Use Cloudwatch metrics to check the utilization of the databases servers. Use Autoscaling Group to scale the database instances accordingly based on the CloudWatch metrics.

C. Utilize the Read Replica feature for the Amazon RDS layer

D. Utilize the Multi-AZ feature for the Amazon RDS layer

A

Answer: A, D

AWS showcases a self-healing architecture where you have a set of EC2 servers as web servers being launched by an Autoscaling Group. The AWS Documentation mentions the following: Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database (DB) Instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. In case of an infrastructure failure, Amazon RDS performs an automatic failover to the standby (or to a read replica in the case of Amazon Aurora), so that you can resume database operations as soon as the failover is complete. Since the endpoint for your DB Instance remains the same after a failover, your application can resume database operation without the need for manual administrative intervention.

For more information on Multi-AZ RDS, please refer to the below link:
https://aws.amazon.com/rds/details/multi-az/
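
A hedged boto3 sketch of provisioning the Multi-AZ RDS instance; all identifiers, sizes, and credentials below are placeholder assumptions:

    import boto3

    rds = boto3.client('rds')

    # MultiAZ=True provisions the synchronous standby that enables
    # automatic failover.
    rds.create_db_instance(
        DBInstanceIdentifier='webapp-db',
        Engine='mysql',
        DBInstanceClass='db.m4.large',
        AllocatedStorage=100,
        MasterUsername='admin',
        MasterUserPassword='change-me-123',   # placeholder credential
        MultiAZ=True,
    )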

395
Q

Question 792

Your company has a set of EC2 Instances that access data objects stored in an S3 bucket. Your IT Security department is concerned about the security of this architecture and wants you to implement the following

1) Ensure that the EC2 Instance securely accesses the data objects stored in the S3 bucket
2) Prevent accidental deletion of objects

Which of the following would help fulfil the requirements of the IT Security department. Choose 2 answers from the options given below

A. Create an IAM user and ensure the EC2 Instances use the IAM user credentials to access the data in the bucket.

B. Create an IAM Role and ensure the EC2 Instances use the IAM Role to access the data in the bucket.

C. Use S3 Cross Region replication to replicate the objects so that the integrity of data is maintained.

D. Use an S3 bucket policy that ensures that MFA Delete is set on the objects in the bucket

A

Answer: B, D

The AWS Documentation mentions the following: IAM roles are designed so that your applications can securely make API requests from your instances, without requiring you to manage the security credentials that the applications use. Instead of creating and distributing your AWS credentials, you can delegate permission to make API requests using IAM roles.

For more information on IAM Roles, please refer to the below link:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html

MFA Delete can be used to add another layer of security to S3 Objects to prevent accidental deletion of objects.

For more information on MFA Delete, please refer to the below link:
https://aws.amazon.com/blogs/security/securing-access-to-aws-using-mfa-part-3/
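
A hedged boto3 sketch of enabling versioning with MFA Delete; the bucket, MFA device serial, and code are placeholders, and this call must be made by the root account:

    import boto3

    s3 = boto3.client('s3')

    # The MFA value is "<device serial> <current code>" (placeholders below)
    s3.put_bucket_versioning(
        Bucket='critical-data-bucket',
        MFA='arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456',
        VersioningConfiguration={'Status': 'Enabled', 'MFADelete': 'Enabled'},
    )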

396
Q

Question 793

You have the requirement to get a snapshot of the current configuration of the resources in your AWS Account. Which of the following services can be used for this purpose?

A. AWS CodeDeploy

B. AWS Trusted Advisor

C. AWS Config

D. AWS IAM

A

Answer: C

The AWS Documentation mentions the following: With AWS Config, you can do the following:

  • Evaluate your AWS resource configurations for desired settings.
  • Get a snapshot of the current configurations of the supported resources that are associated with your AWS account.
  • Retrieve configurations of one or more resources that exist in your account.
  • Retrieve historical configurations of one or more resources.
  • Receive a notification whenever a resource is created, modified, or deleted.
  • View relationships between resources.

For example, you might want to find all resources that use a particular security group.

For more information on AWS Config, please visit the below URL:
http://docs.aws.amazon.com/config/latest/developerguide/WhatIsConfig.html
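
A minimal boto3 sketch, assuming the Config recorder and a delivery channel named 'default' already exist, of requesting a configuration snapshot:

    import boto3

    config = boto3.client('config')

    # Requests a point-in-time snapshot of the recorded resource
    # configurations, delivered to the channel's S3 bucket.
    snapshot_id = config.deliver_config_snapshot(
        deliveryChannelName='default'
    )['configSnapshotId']
    print(snapshot_id)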

397
Q

Question 794

Your company is hosting an application in AWS. The application consists of a set of web servers and AWS RDS. The application is a read intensive application. It has been noticed that the response time of the application increases due to the load on the AWS RDS instance. Which of the following measures can be taken to scale the data tier? Choose 2 answers from the options given below

A. Create Amazon DB Read Replicas. Configure the application layer to query the read replicas for query needs.

B. Use Autoscaling to scale out and scale in the database tier

C. Use SQS to cache the database queries

D. Use ElastiCache in front of your Amazon RDS DB to cache common queries.

A

Answer: A, D

The AWS documentation mentions the following: Amazon RDS Read Replicas provide enhanced performance and durability for database (DB) instances. This replication feature makes it easy to elastically scale out beyond the capacity constraints of a single DB Instance for read-heavy database workloads. You can create one or more replicas of a given source DB Instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput. Read replicas can also be promoted when needed to become standalone DB instances.

For more information on AWS RDS Read Replicas, please visit the below URL:
https://aws.amazon.com/rds/details/read-replicas/

Amazon ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory data store or cache in the cloud. The service improves the performance of web applications by allowing you to retrieve information from fast, managed, in-memory data stores, instead of relying entirely on slower disk-based databases.

For more information on Amazon ElastiCache, please visit the below URL:
https://aws.amazon.com/elasticache/
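
A rough cache-aside sketch, assuming the third-party redis client library, a placeholder ElastiCache endpoint, and a hypothetical fetch_from_read_replica helper that queries the RDS read replica:

    import json
    import redis  # third-party client, assumed installed

    cache = redis.Redis(host='my-cache.abc123.0001.use1.cache.amazonaws.com',
                        port=6379)

    def get_product(product_id, fetch_from_read_replica):
        # Serve common queries from ElastiCache; fall back to the RDS
        # read replica on a miss and cache the result for 5 minutes.
        key = 'product:%s' % product_id
        cached = cache.get(key)
        if cached:
            return json.loads(cached)
        row = fetch_from_read_replica(product_id)  # hypothetical helper
        cache.setex(key, 300, json.dumps(row))
        return row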