Test - 1 Flashcards


Question 1

In AWS what is used for encrypting and decrypting login information to EC2 instances?

A. Templates

B. AMI’s

C. Key pairs

D. None of the above

Answer: C.

Amazon EC2 uses public-key cryptography to encrypt and decrypt login information. Public-key cryptography uses a public key to encrypt a piece of data, such as a password, then the recipient uses the private key to decrypt the data. The public and private keys are known as a key pair. To log in to your instance, you must create a key pair, specify the name of the key pair when you launch the instance, and provide the private key when you connect to the instance. Linux instances have no password, and you use a key pair to log in using SSH. With Windows instances, you use a key pair to obtain the administrator password and then log in using RDP.

For more information on key pairs, please visit the below url
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html


Question 2

For DynamoDB, what are the scenarios in which you would want to enable Cross-region replication?

A. Live data migration

B. Easier Traffic management

C. Disaster Recovery

D. All of the above

Answer: D

The AWS documentation lists live data migration, easier traffic management, and disaster recovery as scenarios in which you would want to enable Cross-Region Replication.

For more information on DynamoDB, please visit the url
https://aws.amazon.com/dynamodb/faqs/


Question 3

You have launched two web servers in a private subnet and one internet-facing ELB in a public subnet of your VPC. Yet you are still unable to access your web application through the internet. Which of the following would likely be the cause of this?

Choose two correct options

A. Web servers must be launched inside the public subnet and not the private subnet.

B. Route table for public subnet is not configured to route to VPC internet gateway.

C. No elastic IP is assigned to web servers.

D. No internet gateway is attached to the VPC.

Answer: B, D

In order for EC2 instances or ELBs to be accessible from the internet, an internet gateway must be attached to the VPC, and the route table for the public subnet must be configured to route traffic to that internet gateway.
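As an illustrative sketch, a public subnet's route table needs a default route pointing at the VPC's internet gateway. The gateway ID below is a made-up example value:

```python
# Sketch of the route entry a public subnet's route table needs. The gateway
# ID is a hypothetical example, not a real resource.
public_subnet_route = {
    "DestinationCidrBlock": "0.0.0.0/0",   # all non-local traffic...
    "GatewayId": "igw-0123456789abcdef0",  # ...is sent to the internet gateway
}

def routes_to_internet(route):
    """True if the route sends default traffic to an internet gateway."""
    return (route["DestinationCidrBlock"] == "0.0.0.0/0"
            and route["GatewayId"].startswith("igw-"))
```

A route whose target is `local` (the VPC's own CIDR range) does not provide internet access; only the route to the `igw-` target does.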

For information on VPC Route Tables and VPC Internet Gateway, please visit the link:
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Route_Tables.html

https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Internet_Gateway.html


Question 4

Which of the following is mandatory when defining a CloudFormation template?

A. Resources

B. Parameters

C. Outputs

D. Mappings

Answer: A

The Resources section is the only required section in a CloudFormation template; Parameters, Mappings, and Outputs are all optional.

For more information on Cloudformation templates, please visit the url
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-anatomy.html
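A minimal sketch of a valid template illustrates this: only Resources is required. The logical ID MyExampleBucket is an arbitrary example name.

```python
import json

# Minimal CloudFormation template sketch: Resources is the only required
# top-level section. The format version line is optional but conventional.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "MyExampleBucket": {           # arbitrary logical ID
            "Type": "AWS::S3::Bucket"  # resource type to create
        }
    }
}

template_body = json.dumps(template, indent=2)
```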


Question 5

In IAM, what is the representation of a person or service?

A. User

B. Group

C. Team

D. Role

Answer: A

An IAM user is an entity that you create in AWS. The IAM user represents the person or service who uses the IAM user to interact with AWS. An IAM group is a collection of IAM users. You can use groups to specify permissions for a collection of users, which can make those permissions easier to manage for those users. An IAM role is very similar to a user, in that it is an identity with permission policies that determine what the identity can and cannot do in AWS.

For more information on IAM entities, please visit the url
http://docs.aws.amazon.com/IAM/latest/UserGuide/id.html


Question 6

Which of the below instance types is normally used for massively parallel computations?

A. Spot Instances

B. On-Demand Instances

C. Dedicated Instances

D. This is not possible in AWS

Answer: A

Spot Instances let you use spare EC2 capacity at a steep discount, which makes them well suited for large-scale, fault-tolerant, massively parallel workloads.

For more information on Spot Instances, please visit the link:
https://aws.amazon.com/ec2/spot/


Question 7

Which of the below are incremental backups of your EBS volumes? Choose one answer from the options given below.

A. Volumes

B. State Manager

C. Placement Groups

D. Snapshots

Answer: D

Snapshots are incremental backups of your EBS volumes: only the blocks that have changed since your most recent snapshot are saved. You can easily create a snapshot from a volume while the instance is running and the volume is in use, for example from the EC2 dashboard.

For more information on EBS snapshots, please visit the link-
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSSnapshots.html


Question 8

There is a requirement to host a NoSQL database with a need for low latency. Which class of instances from the below list should they choose?

A. T2

B. I2

C. T1

D. G2

Answer: B

I2 instances are optimized to deliver tens of thousands of low-latency, random I/O operations per second (IOPS) to applications. They are well suited for the following scenarios: NoSQL databases (for example, Cassandra and MongoDB), clustered databases, and online transaction processing (OLTP) systems.

For more information on I2 instances, please visit the link:
https://aws.amazon.com/blogs/aws/amazon-ec2-new-i2-instance-type-available-now/


Question 9

You are designing a site for a new start up which generates cartoon images for people automatically. Customers will log on to the site and upload an image, which is stored in S3. The application then passes a job to AWS SQS, and a fleet of EC2 instances poll the queue to receive new processing jobs. These EC2 instances will then turn the picture into a cartoon and will then need to store the processed job somewhere. Users will typically download the image once (immediately), and then never download the image again. What is the most commercially feasible method to store the processed images?

A. Rather than use S3, store the images inside a BLOB on RDS with Multi-AZ configured for redundancy.

B. Store the images on S3 RRS, and create a lifecycle policy to delete the image after 24 hours.

C. Store the images on glacier instead of S3.

D. Use elastic block storage volumes to store the images.

Answer: B

Use S3 Reduced Redundancy Storage (RRS) to save on costs, then use a lifecycle policy to delete the images after 24 hours, since they are not required after the initial download.

For more information on AWS Reduced Redundancy storage, please refer to the below link
https://aws.amazon.com/s3/reduced-redundancy/

The AWS documentation mentions the following on lifecycle policies: Lifecycle configuration enables you to specify the lifecycle management of objects in a bucket. The configuration is a set of one or more rules, where each rule defines an action for Amazon S3 to apply to a group of objects.

For more information on S3 Lifecycle policies, please refer to the below link
http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html
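A sketch of such a lifecycle rule (the rule ID and key prefix are hypothetical example names). Note that S3 lifecycle expiration works in whole days, so 1 day is the closest expression of the 24-hour requirement:

```python
# Sketch of an S3 lifecycle configuration that expires processed images one
# day after creation. The rule ID and prefix are hypothetical example names.
lifecycle_config = {
    "Rules": [
        {
            "ID": "expire-processed-images",
            "Filter": {"Prefix": "processed/"},
            "Status": "Enabled",
            "Expiration": {"Days": 1},  # lifecycle granularity is whole days
        }
    ]
}
```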


Question 10

You have a high performance compute application and you need to minimize network latency between EC2 instances as much as possible. What can you do to achieve this?

A. Use Elastic Load Balancing to load balance traffic between availability zones

B. Create a CloudFront distribution and to cache objects from an S3 bucket at Edge Locations.

C. Create a placement group within an Availability Zone and place the EC2 instances within that placement group.

D. Deploy your EC2 instances within the same region, but in different subnets and different availability zones so as to maximize redundancy.

Answer: C

The AWS documentation mentions the following on placement groups: A placement group is a logical grouping of instances within a single Availability Zone. Placement groups are recommended for applications that benefit from low network latency, high network throughput, or both. To provide the lowest latency and the highest packet-per-second network performance for your placement group, choose an instance type that supports enhanced networking.

For more information on placement groups, please refer to the below link:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html


Question 11

Which of the below elements can you manage in the Billing dashboard? Select 2 options.

A. Budgets

B. Policies

C. Credential Report 

D. Cost Explorer

Answer: A, D

When you go to your Billing dashboard, Budgets and Cost Explorer are among the elements which can be configured; policies and credential reports are managed through IAM instead.

For more information on AWS cloud billing and pricing, please visit the link http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/billing-getting-started.html


Question 12

What is the name of the VPC that is automatically created for your AWS account the first time you provision EC2 resources?

A. Primary VPC

B. First VPC 

C. Default VPC 

D. Initial VPC

Answer: C

A default VPC is a logically isolated virtual network in the AWS cloud that is automatically created for your AWS account the first time you provision Amazon EC2 resources. When you launch an instance without specifying a subnet ID, your instance will be launched in your default VPC.

For more information on VPC, please refer to the link
https://aws.amazon.com/vpc/faqs/


Question 13

Which of the following databases support the read replica feature? Select 3 options.

A. MySQL

B. MariaDB 

C. PostgreSQL 

D. Oracle

Answer: A, B, C

Read replicas are available in Amazon RDS for MySQL, MariaDB, and PostgreSQL. When you create a read replica, you specify an existing DB Instance as the source. Amazon RDS takes a snapshot of the source instance and creates a read-only instance from the snapshot. For MySQL, MariaDB and PostgreSQL, Amazon RDS uses those engines’ native asynchronous replication to update the read replica whenever there is a change to the source DB instance. The read replica operates as a DB instance that allows only read-only connections; applications can connect to a read replica just as they would to any DB instance. Amazon RDS replicates all databases in the source DB instance.

For more information on RDS read replicas, please refer to the link https://aws.amazon.com/rds/details/read-replicas/


Question 14

What can be used from AWS to import existing Virtual Machines Images into AWS?

A. VM Import/Export

B. AWS Import/Export

C. AWS Storage Gateway

D. This is not possible in AWS

Answer: A

VM Import/Export enables customers to import Virtual Machine (VM) images in order to create Amazon EC2 instances. Customers can also export previously imported EC2 instances to create VMs. Customers can use VM Import/Export to leverage their previous investments in building VMs by migrating their VMs to Amazon EC2.

For more information on AWS EC2, please visit
https://aws.amazon.com/ec2/faqs/


Question 15

What is the service used by AWS to segregate control over the various AWS services?

A. AWS RDS 

B. AWS Integrity Management 

C. AWS Identity and Access Management

D. Amazon EMR

Answer: C

AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources for your users. You use IAM to control who can use your AWS resources (authentication) and what resources they can use and in what ways (authorization).

For more information on IAM, please visit:
http://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html


Question 16

How long can messages live in an SQS queue?

A. 12 hours 

B. 10 days 

C. 14 days 

D. 1 year

Answer: C

The SQS message retention period is configurable from 60 seconds up to a maximum of 14 days; the default is 4 days.

For more information on SQS, please visit the following url
https://aws.amazon.com/sqs/faqs/
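As a sketch, the retention window maps onto the queue's MessageRetentionPeriod attribute, which is expressed in seconds:

```python
# SQS message retention limits, expressed in seconds as accepted by the
# MessageRetentionPeriod queue attribute (60 seconds up to 14 days).
MIN_RETENTION_SECONDS = 60                  # 1 minute
MAX_RETENTION_SECONDS = 14 * 24 * 60 * 60   # 14 days = 1,209,600 seconds

def valid_retention(seconds):
    """True if the value is within the range SQS accepts."""
    return MIN_RETENTION_SECONDS <= seconds <= MAX_RETENTION_SECONDS
```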


Question 17

You are working in the media industry and you have created a web application where users will be able to upload photos they create to your website. This web application must be able to call the S3 API in order to function. Where should you store your API credentials whilst maintaining the maximum level of security?

A. Save the API credentials to your php files. 

B. Don’t save your API credentials. Instead create a role in IAM and assign this role to an EC2 instance when you first create it.

C. Save your API credentials in a public Github repository. 

D. Pass API credentials to the instance using instance userdata.

Answer: B

Always use IAM roles for accessing AWS resources from EC2 instances. The AWS documentation mentions the following: IAM roles are designed so that your applications can securely make API requests from your instances, without requiring you to manage the security credentials that the applications use. Instead of creating and distributing your AWS credentials, you can delegate permission to make API requests using IAM roles.

For more information on IAM Roles for EC2 Instances, please refer to the below link:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
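A sketch of the trust policy behind such a role: it lets the EC2 service assume the role, after which the application on the instance receives temporary credentials automatically. The permission policies (for example, S3 access) are attached to the role separately.

```python
import json

# Sketch of an IAM role trust policy allowing EC2 to assume the role, so
# that no long-lived API credentials ever need to live on the instance.
ec2_trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

policy_json = json.dumps(ec2_trust_policy)
```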


Question 18

What are some of the common causes why you cannot connect to a DB instance on AWS? Select 3 options.

A. There is a read replica being created, hence you cannot connect 

B. The DB is still being created 

C. The local firewall is stopping the communication traffic 

D. The security groups for the DB are not properly configured.

Answer: B, C, D

Common causes include the DB instance still being created, a local firewall blocking the communication traffic, and the DB instance's security groups not being properly configured. Creating a read replica does not prevent connections to the source instance.

For more information on RDS troubleshooting, please visit the below link
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Troubleshooting.html


Question 19

For which of the following databases does Amazon RDS provide high availability and failover support using Amazon’s failover technology for DB instances using Multi-AZ deployments? Select 3 options.

A. SQL Server 

B. MySQL 

C. Oracle 

D. MariaDB

Answer: B, C, D

Multi-AZ deployments for MySQL, Oracle, and MariaDB DB instances use Amazon's failover technology, whereas SQL Server DB instances use SQL Server Database Mirroring instead.

For more information on MultiAZ please visit the below link
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html


Question 20

Which is the service provided by AWS for providing a petabyte-scale data warehouse?

A. Amazon DynamoDB 

B. Amazon Redshift 

C. Amazon Kinesis 

D. Amazon Simple Queue Service

Answer: B

Amazon Redshift is a fast, fully managed, petabyte-scale data warehouse that makes it simple and cost-effective to analyze all your data using your existing business intelligence tools. Start small for $0.25 per hour with no commitments and scale to petabytes for $1,000 per terabyte per year, less than a tenth the cost of traditional solutions. Option A is wrong because DynamoDB is a NoSQL solution. Option C is wrong because Kinesis is used for processing streams, not for storage. Option D is wrong because SQS is a de-coupling solution.

For more information on Redshift, please visit the below url
https://aws.amazon.com/redshift/


Question 21

An image named photo.jpg has been uploaded to a bucket named examplebucket in the us-east-1 region. Which of the below is the right URL to access the image, if it were made public? Consider that S3 is used as a static website.

A. http://examplebucket.s3-website-us-east-1.amazonaws.com/photo.jpg 

B. http://examplebucket.website-us-east-1.amazonaws.com/photo.jpg 

C. http://examplebucket.s3-us-east-1.amazonaws.com/photo.jpg

D. http://examplebucket.amazonaws.s3-website-us-east-1./photo.jpg

Answer: A

The URL format for an S3 static website is shown in the KB article:

[bucket-name].s3-website-[AWS-region].amazonaws.com

Hence the right option is option A. When you configure a bucket for website hosting, the website is available via the region-specific website endpoint. Website endpoints are different from the endpoints where you send REST API requests.

For more information about the differences between the endpoints, see Key Differences Between the Amazon Website and the REST API Endpoint. The two general forms of an Amazon S3 website endpoint are as follows:

—> bucket-name.s3-website-region.amazonaws.com

—> bucket-name.s3-website.region.amazonaws.com

Which form is used for the endpoint depends on what region the bucket is in.

For example, if your bucket is named example-bucket and it resides in the US East (N. Virginia) region, the website is available at the following Amazon S3 website endpoint: http://example-bucket.s3-website-us-east-1.amazonaws.com

For more information on the bucket and the URL format for S3 buckets, please visit:

http://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteEndpoints.html
http://docs.aws.amazon.com/AmazonS3/latest/dev/HostingWebsiteOnS3Setup.html
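The endpoint pattern can be sketched as a small helper. This sketch uses the dash form (s3-website-region), which applies to us-east-1; some regions use the dot form instead.

```python
# Sketch: build the region-specific S3 static-website URL for an object,
# using the dash endpoint form (s3-website-<region>) used by us-east-1.
def s3_website_url(bucket, region, key):
    return "http://{0}.s3-website-{1}.amazonaws.com/{2}".format(bucket, region, key)
```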


Question 23

A company has an EC2 instance that is hosting a web solution which is mostly used for read-only purposes. The CPU utilization is constantly 100% on the EC2 instance. Which of the below solutions can help alleviate and provide a quick resolution to the problem?

A. Use Cloudfront and place the EC2 instance as the origin

B. Let the EC2 instance continue to run at 100%, since the AWS environment can handle the load.

C. Use SNS to notify the IT admin when it reaches 100% so that they can disconnect some sessions to help alleviate the load

D. Use SES to notify the IT admin when it reaches 100% so that they can disconnect some sessions to help alleviate the load

Answer: A

CloudFront can be used to alleviate the load on web-based solutions by caching recent reads in its edge locations, reducing the burden on the EC2 instance. Amazon CloudFront is a global content delivery network (CDN) service that accelerates delivery of your websites, APIs, video content or other web assets.

For more information on AWS Cloudfront please visit the below url
https://aws.amazon.com/cloudfront/


Question 24

Which of the mentioned AWS services uses the concept of shards, a uniquely identified group of data records in a stream?

A. Cloudfront

B. SQS

C. Kinesis

D. SES

Answer: C

In Amazon Kinesis, a shard is a uniquely identified group of data records in a stream. A stream is composed of one or more shards, each of which provides a fixed unit of capacity. Each shard can support up to 5 transactions per second for reads, up to a maximum total data read rate of 2 MB per second, and up to 1,000 records per second for writes, up to a maximum total data write rate of 1 MB per second (including partition keys).

For more information on AWS Kinesis please visit the below url
http://docs.aws.amazon.com/streams/latest/dev/key-concepts.html
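From the per-shard write limits above, a rough shard-count estimate can be sketched (illustrative only; read throughput and resharding are ignored):

```python
import math

# Estimate shards needed for a write workload from the per-shard limits
# quoted above: 1,000 records/second and 1 MB/second of writes per shard.
def shards_needed(records_per_sec, mb_per_sec):
    by_records = math.ceil(records_per_sec / 1000.0)  # record-count limit
    by_bytes = math.ceil(mb_per_sec / 1.0)            # data-volume limit
    return max(by_records, by_bytes, 1)               # at least one shard
```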


Question 25

Which of the below mentioned services are the building blocks for creating a basic high availability architecture in AWS? Select 2 options.

A. EC2

B. SQS

C. Elastic Load Balancer

D. Cloudwatch

Answer: A, C

Having EC2 instances that host your applications in multiple subnets, and hence multiple AZs, and placing them behind an ELB is the basic building block of a high availability architecture in AWS.

For more information on High availability and Fault tolerance please visit the below url
https://media.amazonwebservices.com/architecturecenter/AWS_ac_ra_ftha_04.pdf


Question 26

You have a set of EC2 instances launched via Auto Scaling. You now want to change the instance type for the instances that will be launched in the future via Auto Scaling. What would you do in such a case?

A. Change the Launch configuration to reflect the new instance type

B. Change the Autoscaling Group and add the new instance type.

C. Create a new Launch Configuration with the new instance type and replace the existing Launch configuration attached to the Autoscaling Group.

D. Create a new Launch Configuration with the new instance type and add it along with the existing Launch configuration attached to the Autoscaling Group.

Answer: C

The AWS documentation mentions the following: When you create an Auto Scaling group, you must specify a launch configuration. You can specify your launch configuration with multiple Auto Scaling groups. However, you can only specify one launch configuration for an Auto Scaling group at a time, and you can’t modify a launch configuration after you’ve created it. Therefore, if you want to change the launch configuration for your Auto Scaling group, you must create a launch configuration and then update your Auto Scaling group with the new launch configuration.

For more information on Launch Configuration for Autoscaling, please refer to the below link
http://docs.aws.amazon.com/autoscaling/latest/userguide/LaunchConfiguration.html


Question 27

Which of the following services provides an object store which can also be used to store files?

A. S3

B. SQS

C. SNS

D. EC2

Answer: A

Amazon Simple Storage Service is storage for the Internet. It is designed to make web-scale computing easier for developers. Amazon S3 has a simple web services interface that you can use to store and retrieve any amount of data, at any time, from anywhere on the web. It gives any developer access to the same highly scalable, reliable, fast, inexpensive data storage infrastructure that Amazon uses to run its own global network of web sites.

For more information on S3 please visit the below url
http://docs.aws.amazon.com/AmazonS3/latest/dev/Welcome.html


Question 28

What are the different types of scale out options available in the AutoScaling service provided by AWS? Select 3 options.

A. Scheduled Scaling

B. Dynamic Scaling

C. Manual Scaling

D. Static Scaling

Answer: A, B, C

Auto Scaling supports manual scaling, scheduled scaling, and dynamic scaling; there is no such option as static scaling.

For more information on Autoscaling please visit the below URL:
http://docs.aws.amazon.com/autoscaling/latest/userguide/AutoScalingGroupLifecycle.html


Question 29

Which of the following services provides edge locations that can be used to cache frequently accessed pages of a web application?

A. SQS

B. Cloudfront

C. Subnets

D. EC2

Answer: B

Amazon CloudFront is a global content delivery network (CDN) service that accelerates delivery of your websites, APIs, video content or other web assets.

For more information on AWS Cloudfront please visit the below url
https://aws.amazon.com/cloudfront/


Question 30

You are a security architect working for a large antivirus company. The production environment has recently been moved to AWS and is in a public subnet. You are able to view the production environment over HTTP however when your customers try to update their virus definition files over a custom port, that port is blocked. You log in to the console and you allow traffic in over the custom port. How long will this take to take effect?

A. Straight away but to the new instances only.

B. Immediately.

C. After a few minutes this should take effect.

D. Straight away to the new instances, but old instances must be stopped and restarted before the new rules apply.

Answer: B

When you make a change to security groups or network ACLs, it is applied immediately. This is clearly given in the AWS documentation.

For more information on Security Groups, please refer to the below link:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_SecurityGroups.html


Question 31

You work for a company who are deploying a hybrid cloud approach. Their legacy servers will remain on premise within their own datacenter however they will need to be able to communicate to the AWS environment over a site to site VPN connection. What do you need to do to establish the VPN connection?

A. Connect to the environment using AWS Direct Connect.

B. Assign a static routable address to the customer gateway

C. Create a dedicated NAT and deploy this to the public subnet.

D. Update your route table to add a route for the NAT to 0.0.0.0/0.

Answer: B

This requirement is given in the AWS documentation for the customer gateway. Traffic from the VPC's gateway must be able to leave the VPC and traverse the internet to the customer gateway. Hence the customer gateway needs to be assigned a static IP address that is routable over the internet.

For more information on VPC Virtual Private connections, please refer to the below link:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_VPN.html


Question 32

Which is the service provided by AWS for collecting and processing large streams of data in real time?

A. Amazon Kinesis

B. AWS Data Pipeline

C. Amazon AppStream

D. Amazon Simple Queue Service

Answer: A

Use Amazon Kinesis Streams to collect and process large streams of data records in real time. You’ll create data-processing applications, known as Amazon Kinesis Streams applications. A typical Amazon Kinesis Streams application reads data from an Amazon Kinesis stream as data records. These applications can use the Amazon Kinesis Client Library, and they can run on Amazon EC2 instances. The processed records can be sent to dashboards, used to generate alerts, dynamically change pricing and advertising strategies, or send data to a variety of other AWS services.

For more information on Amazon Kinesis, please visit this URL:
http://docs.aws.amazon.com/streams/latest/dev/introduction.html


Question 33

Which of the following are among the criteria that must be met when attaching an EC2 instance to an existing Auto Scaling group? Select 3 options.

A. The instance is in the running state.

B. The AMI used to launch the instance must still exist.

C. The instance is not a member of another Auto Scaling group.

D. They should have the same private key

Answer: A, B, C

To attach an instance, it must be in the running state, the AMI used to launch it must still exist, and it must not already be a member of another Auto Scaling group. The private key is not relevant to attaching instances.

For more information on the criteria for attaching an EC2 instance to an existing AutoScaling group please visit the below URL:
http://docs.aws.amazon.com/autoscaling/latest/userguide/attach-instance-asg.html


Question 34

A company wants to make use of serverless code. Which service in AWS provides such a facility?

A. SQS

B. Cloudfront

C. EC2

D. Lambda

Answer: D

AWS Lambda is a compute service that lets you run code without provisioning or managing servers. AWS Lambda executes your code only when needed and scales automatically, from a few requests per day to thousands per second. You pay only for the compute time you consume; there is no charge when your code is not running.

For more information on AWS Lambda please visit the below URL:
http://docs.aws.amazon.com/lambda/latest/dg/welcome.html
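A minimal handler sketch: only the (event, context) signature is the Lambda convention; the greeting logic and return shape here are invented for illustration.

```python
# Minimal AWS Lambda handler sketch. Lambda invokes this function with the
# event payload and a context object; the body below is purely illustrative.
def lambda_handler(event, context):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": "Hello, {0}!".format(name)}
```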


Question 35

A t2.medium EC2 instance type must be launched with what type of Amazon Machine Image (AMI)?

A. An Instance store Hardware Virtual Machine AMI

B. An Instance store Paravirtual AMI

C. An Amazon EBS-backed Hardware Virtual Machine AMI

D. An Amazon EBS-backed Paravirtual AMI

Answer: C

The AWS documentation mentions the following: Linux Amazon Machine Images use one of two types of virtualization: paravirtual (PV) or hardware virtual machine (HVM). The main difference between PV and HVM AMIs is the way in which they boot and whether they can take advantage of special hardware extensions (CPU, network, and storage) for better performance. The instance type matrix shows that the T2 instance family requires an HVM, Amazon EBS-backed AMI.

For more information on the instance types for Linux AMIs, please refer to the below link https://aws.amazon.com/amazon-linux-ami/instance-type-matrix/


Question 36

You are building an automated transcription service in which Amazon EC2 worker instances process an uploaded audio file and generate a text file. You must store both of these files in the same durable storage until the text file is retrieved. You do not know what the storage capacity requirements are. Which storage option is both cost-efficient and scalable?

A. Multiple Amazon EBS volume with snapshots

B. A single Amazon Glacier vault

C. A single Amazon S3 bucket

D. Multiple instance stores

Answer: C

The AWS Simple Storage Service is the best option for this scenario. The AWS documentation provides the following information on the Simple Storage Service: Amazon S3 is object storage built to store and retrieve any amount of data from anywhere: web sites and mobile apps, corporate applications, and data from IoT sensors or devices. It is designed to deliver 99.999999999% durability, and stores data for millions of applications used by market leaders in every industry.

For more information on the Simple Storage Service, please refer to the below link
https://aws.amazon.com/s3/
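As a back-of-the-envelope sketch of what that durability figure means:

```python
# What "11 nines" durability implies: with an average annual object loss
# rate of 0.000000001%, storing 10,000,000 objects you would expect to
# lose a single object roughly once every 10,000 years.
annual_loss_rate = 1e-11
objects_stored = 10_000_000
expected_losses_per_year = annual_loss_rate * objects_stored  # ~1e-4
years_per_single_loss = 1 / expected_losses_per_year          # ~10,000
```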


Question 37

You have some EC2 instances hosted in your AWS environment. You have a concern that not all of the EC2 instances are being utilized. Which of the below mentioned services can help you find underutilized resources in AWS? Select 2 options.

A. AWS Cloudwatch

B. SNS

C. AWS Trusted Advisor

D. Cloudtrail

A

Answer: A, C

AWS Trusted Advisor can help you identify underutilized resources in AWS. For more information on AWS Trusted Advisor, please visit the below URL: https://aws.amazon.com/premiumsupport/trustedadvisor/. In CloudWatch, you can graph the CPU utilization of your resources and see the trend over time.

For more information on AWS Cloudwatch please visit the below URL:
https://aws.amazon.com/cloudwatch/
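As a rough sketch of the underutilization check described above (the threshold and CPU datapoints below are hypothetical examples, not Trusted Advisor's actual rules):

```python
# Flag an instance as underutilized when its average CPU utilization stays
# below a threshold, the same idea Trusted Advisor applies to CloudWatch data.
# The sample datapoints are hypothetical, not from a real account.

def is_underutilized(cpu_datapoints, threshold=10.0):
    """Return True when the average CPU utilization (%) is below threshold."""
    return sum(cpu_datapoints) / len(cpu_datapoints) < threshold

busy_instance = [55.0, 60.2, 71.8, 48.9]   # % CPU per period
idle_instance = [1.2, 0.8, 2.5, 1.1]

print(is_underutilized(busy_instance))  # False
print(is_underutilized(idle_instance))  # True
```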

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
38
Q

Question 38

Which of the following features can be used to capture information for outgoing and incoming IP traffic from network interfaces in a VPC?

A. AWS Cloudwatch

B. AWS EC2

C. AWS SQS

D. AWS VPC Flow Logs

A

Answer: D

VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC. Flow log data is stored using Amazon CloudWatch Logs. After you’ve created a flow log, you can view and retrieve its data in Amazon CloudWatch Logs.

For more information on VPC Flowlogs please visit the below URL:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/flow-logs.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
39
Q

Question 39

What are the main benefits of AWS regions? Select 2 options.

A. Regions allow you to design applications to conform to specific laws and regulations for specific parts of the world.

B. All regions offer the same service at the same prices.

C. Regions allow you to choose a location in any country in the world.

D. Regions allow you to place AWS resources in the area of the world closest to your customers who access those resources.

A

Answer: A, D

AWS operates data centers across the world so that you can deploy solutions as close to your customers as possible. Regions also let you keep resources in specific jurisdictions to comply with the laws and regulations of those parts of the world. AWS does not have data centers in every country, hence option C is invalid. Services and prices are specific to every region, hence option B is invalid.

For more information on Regions please visit the below URL:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
40
Q

Question 40

Which of the following are ways that users can interface with AWS? Select 2 options.

A. AWS Cloudfront

B. AWS CLI

C. AWS Console

D. AWS Cloudwatch

A

Answer: B, C

AWS Management Console: the console is a browser-based interface to manage IAM and AWS resources.

AWS Command Line Tools: you can use the AWS command line tools to issue commands at your system's command line to perform IAM and AWS tasks; this can be faster and more convenient than using the console.

For more information on interfacing with AWS please visit the below URL:
http://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
41
Q

Question 41

What are the two layers of security provided by AWS in the VPC?

A. Security Groups and NACLs

B. NACLs and DHCP Options

C. Route Tables and Internet gateway

D. None of the above

A

Answer: A

This is clearly given in the AWS documentation: security groups act as a firewall at the instance level, while network ACLs (NACLs) act as a firewall at the subnet level.

For more information on VPC Security please visit the below URL: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Security.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
42
Q

Question 42

A company wants to have a 50 Mbps dedicated connection to its AWS resources. Which of the below services can help fulfil this requirement ?

A. Virtual Private Gateway

B. Virtual Private Connection (VPN)

C. Direct Connect

D. Internet Gateway

A

Answer: C

AWS Direct Connect makes it easy to establish a dedicated network connection from your premises to AWS. Using AWS Direct Connect, you can establish private connectivity between AWS and your datacenter, office, or colocation environment, which in many cases can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than Internet-based connections.

For more information on AWS Direct Connect, please visit the below URL: https://aws.amazon.com/directconnect/

If you require a port speed of less than 1 Gbps (such as the 50 Mbps in this scenario), you cannot request a connection using the console. Instead, contact an APN partner, who will create a hosted connection for you. The hosted connection appears in your AWS Direct Connect console, and must be accepted before use.

Please find an exact explanation in AWS documentation below:
http://docs.aws.amazon.com/directconnect/latest/UserGuide/getting_started.html#ConnectionRequest

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
43
Q

Question 43

What is the service name in AWS that can display costs in a chart format?

A. Cost Explorer

B. Cost Allocation Tags

C. AWS Budgets

D. Payment History

A

Answer: A

Cost Explorer is a free tool that you can use to view charts of your costs (also known as spend data) for up to the last 13 months, and forecast how much you are likely to spend for the next three months. You can use Cost Explorer to see patterns in how much you spend on AWS resources over time, identify areas that need further inquiry, and see trends that you can use to understand your costs. You can also specify time ranges for the data you want to see, and you can view time data by day or by month.

For more information on Cost Explorer, please visit the URL: http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-explorer-what-is.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
44
Q

Question 44

In the shared responsibility model, what is the customer not responsible for?

A. Edge locations

B. Installation of custom firewall software

C. Security Groups

D. Applying an SSL Certificate to an ELB

A

Answer: A

AWS has published the Shared Responsibility Model, under which the physical infrastructure, including edge locations, is the responsibility of AWS. Customers remain responsible for items such as security groups, custom firewall software, and SSL certificates on their load balancers.

For more information on the Shared Responsibility Model, please refer to the below URL:
https://aws.amazon.com/compliance/shared-responsibility-model/

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
45
Q

Question 45

Which of the below options best describes how EBS snapshots work?

A. Each snapshot stores the entire volume in S3

B. Snapshots are not possible for EBS volumes

C. Snapshots are incremental in nature and are stored in S3

D. Snapshots are stored in DynamoDB

A

Answer: C

You can back up the data on your EBS volumes to Amazon S3 by taking point-in-time snapshots. Snapshots are incremental backups, which means that only the blocks on the device that have changed after your most recent snapshot are saved. This minimizes the time required to create the snapshot and saves on storage costs. When you delete a snapshot, only the data unique to that snapshot is removed.

For more information on EBS Snapshots, please refer to the below link
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSSnapshots.html
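The incremental behaviour described above can be sketched in a few lines (block IDs and contents here are made up for illustration):

```python
# Conceptual sketch of incremental EBS snapshots: each snapshot records only
# the blocks that changed since the previous one.

def incremental_snapshot(previous_blocks, current_blocks):
    """Return only the blocks that are new or changed since the last snapshot."""
    return {bid: data for bid, data in current_blocks.items()
            if previous_blocks.get(bid) != data}

volume_v1 = {0: "boot", 1: "appdata", 2: "logs"}
snap1 = incremental_snapshot({}, volume_v1)        # first snapshot: all blocks

volume_v2 = {**volume_v1, 2: "logs-rotated"}       # only block 2 changed
snap2 = incremental_snapshot(volume_v1, volume_v2) # second snapshot: 1 block

print(len(snap1), len(snap2))  # 3 1
```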

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
46
Q

Question 46

You work for a market analysis firm who are designing a new environment. They will ingest large amounts of market data via Kinesis and then analyze this data using Elastic Map Reduce. The data is then imported in to a high performance NoSQL Cassandra database which will run on EC2 and then be accessed by traders from around the world. The database volume itself will sit on 2 EBS volumes that will be grouped into a RAID 0 volume. They are expecting very high demand during peak times, with an IOPS performance level of approximately 15,000. Which EBS volume should you recommend?

A. Magnetic

B. General Purpose SSD

C. Provisioned IOPS (PIOPS)

D. Turbo IOPS (TIOPS)

A

Answer: C

When you are looking at hosting I/O-intensive applications such as databases that need a consistent, high level of IOPS (15,000 in this case), Provisioned IOPS is the preferred EBS volume type. Note that option D, Turbo IOPS, is not a real EBS volume type.

For more information on the various EBS Volume types, please refer to the below link
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
47
Q

Question 47

Which of the following best describes the main feature of an Elastic Load Balancer (ELB) in AWS?

A. To evenly distribute traffic among multiple EC2 instances in separate Availability Zones.

B. To evenly distribute traffic among multiple EC2 instances in a single Availability Zone.

C. To evenly distribute traffic among multiple EC2 instances in multiple regions.

D. To evenly distribute traffic among multiple EC2 instances in multiple countries.

A

Answer: A

Elastic Load Balancing automatically distributes incoming application traffic across multiple Amazon EC2 instances. It enables you to achieve fault tolerance in your applications, seamlessly providing the required amount of load balancing capacity needed to route application traffic. An ELB is best used with EC2 instances across multiple AZs; you cannot use an ELB to distribute traffic across regions.

For more information on AWS ELB, please refer to the below link:
https://aws.amazon.com/elasticloadbalancing/
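The even-distribution idea can be illustrated with a minimal round-robin sketch (instance IDs and AZ names here are made up):

```python
# Minimal sketch of even traffic distribution, the core ELB behaviour
# described above: requests alternate across targets in separate AZs.
from itertools import cycle
from collections import Counter

instances = ["i-az1-a", "i-az2-b"]  # targets in separate Availability Zones
target = cycle(instances)           # simple round-robin rotation

hits = Counter(next(target) for _ in range(10))
print(hits["i-az1-a"], hits["i-az2-b"])  # 5 5
```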

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
48
Q

Question 48

What are the 2 main components of AutoScaling? Select 2 options.

A. Launch Configuration

B. Cloudtrail

C. Cloudwatch

D. AutoScaling Groups

A

Answer: A, D

Groups: your EC2 instances are organized into groups so that they can be treated as a logical unit for the purposes of scaling and management. When you create a group, you can specify its minimum, maximum, and desired number of EC2 instances.

Launch configurations: your group uses a launch configuration as a template for its EC2 instances. When you create a launch configuration, you can specify information such as the AMI ID, instance type, key pair, security groups, and block device mapping for your instances.

For more information on AWS Autoscaling, please refer to the below link:
http://docs.aws.amazon.com/autoscaling/latest/userguide/WhatIsAutoScaling.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
49
Q

Question 49

What are two primary requirements of a NAT Instance? Choose the correct answer from the options below:

A. A NAT instance must be provisioned into a private subnet, and it must be part of the private subnet’s route table.

B. A NAT instance must be provisioned into a public subnet, and it must be part of the private subnet’s route table.

C. A NAT instance must be provisioned into a private subnet, and does not require a public IP address.

D. A NAT instance must be provisioned into a public subnet, and must be combined with a bastion host.

A

Answer: B

As shown in the AWS documentation, a NAT instance must be placed in the public subnet, and the private subnet’s route table must have a route to it.

For more information on AWS NAT Instance, please refer to the below link:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_NAT_Instance.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
50
Q

Question 50

You have just provisioned a fleet of EC2 instances and realized that none of them have a public IP address. What settings would need to be changed for the next fleet of instances to be created with public IP addresses?

A. Modify the auto-assign public IP setting on the subnet.

B. Modify the auto-assign public IP setting on the instance type.

C. Modify the auto-assign public IP setting on the route table.

D. Modify the auto-assign public IP setting on the VPC.

A

Answer: A

This setting is done at the subnet level and if marked as true, all instances launched in that subnet will get a public IP address by default.

For more information on AWS IP Addressing, please refer to the below link:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-ip-addressing.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
51
Q

Question 51

You keep on getting an error while trying to attach an Internet Gateway to a VPC. What is the most likely cause of the error?

A. You need to have a customer gateway defined first before attaching an internet gateway

B. You need to have a public subnet defined first before attaching an internet gateway

C. You need to have a private subnet defined first before attaching an internet gateway

D. An Internet gateway is already attached to the VPC

A

Answer: D

You can only have one internet gateway attached to your VPC at a time, so the error most likely occurs because an internet gateway is already attached.

For more information on Internet gateways, please refer to the below link: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Internet_Gateway.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
52
Q

Question 52

You work for a company that stores records for a minimum of 10 years. Most of
these records will never be accessed but must be made available upon request (within a few hours).

What is the most cost-effective storage option?

A. S3-IA

B. Reduced Redundancy Storage (RRS)

C. Glacier

D. AWS Import/Export

A

Answer: C

Amazon Glacier is a secure, durable, and extremely low-cost cloud storage service for data archiving and long-term backup. Customers can reliably store large or small amounts of data for as little as $0.004 per gigabyte per month, a significant savings compared to on-premises solutions. To keep costs low while remaining suitable for varying retrieval needs, Amazon Glacier provides several options for access to archives, ranging from a few minutes to several hours.

For more information on Amazon Glacier, please refer to the below link:
https://aws.amazon.com/glacier/
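A quick sanity check using the $0.004 per GB-month figure quoted above (the 5 TB record volume is a hypothetical example, and actual Glacier pricing varies by region):

```python
# Monthly cost estimate at the archival rate quoted in this card.
GLACIER_RATE = 0.004          # USD per GB per month (figure from the card)
tb_stored = 5                 # hypothetical record volume
gb_stored = tb_stored * 1024

monthly = gb_stored * GLACIER_RATE
print(round(monthly, 2))      # 20.48 USD/month for 5 TiB
```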

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
53
Q

Question 53

Which AWS service helps you migrate databases to AWS easily?

A. AWS Snowball

B. AWS Direct Connect

C. AWS Database Migration Service (DMS)

D. None of the above

A

Answer: C

AWS Database Migration Service helps you migrate databases to AWS easily and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. The AWS Database Migration Service can migrate your data to and from most widely used commercial and open-source databases.

For more information on aws database migration service, please visit the URL:
https://aws.amazon.com/dms

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
54
Q

Question 54

Your company has petabytes of data that it wants to move from their on-premise location to AWS. Which of the following can be used to fulfil this requirement?

A. AWS VPN

B. AWS Migration

C. AWS VPC

D. AWS Snowball

A

Answer: D

Snowball is a petabyte-scale data transport solution that uses secure appliances to transfer large amounts of data into and out of the AWS cloud. Using Snowball addresses common challenges with large-scale data transfers including high network costs, long transfer times, and security concerns. Transferring data with Snowball is simple, fast, secure, and can be as little as one-fifth the cost of high-speed Internet.

For more information, please refer the below link:
http://docs.aws.amazon.com/snowball/latest/ug/whatissnowball.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
55
Q

Question 55

There are multiple issues reported from an EC2 instance hence it is required to analyze the logs files. What can be used in AWS to store and analyze the log files?

A. SQS

B. S3

C. Cloudtrail

D. Cloudwatch Logs

A

Answer: D

You can use Amazon CloudWatch Logs to monitor, store, and access your log files from Amazon Elastic Compute Cloud (Amazon EC2) instances, AWS CloudTrail, and other sources. You can then retrieve the associated log data from CloudWatch Logs.

To enable CloudWatch Logs, follow the below steps:

Step 1) Go to the CloudWatch section and click on Logs.

Step 2) Once you create a log group, you can then configure your EC2 instance to send logs to this log group.

For more information on Cloudwatch logs please visit the link:
http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
56
Q

Question 56

You are not able to connect to an EC2 instance via SSH, and you have already verified that the instance has a public IP and the internet gateway and route tables are in place. What should you check next?

A. Adjust the security group to allow traffic to port 22

B. Adjust the security group to allow traffic to port 3389

C. Restart the instance since there might be some issue with the instance

D. Create a new instance since there might be some issue with the instance

A

Answer: A

The likely reason you cannot connect to the instance is that the SSH protocol has not been allowed in the security group. Go to your EC2 security groups, click on the required security group, open the Inbound tab, and ensure the inbound rules include a rule for the SSH protocol (port 22).

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
57
Q

Question 57

Your company VPC has a need to communicate with another company VPC within the same AWS region. What can be used from AWS to interface between the two VPC?

A. VPC Connection

B. VPN Connection

C. Direct Connect

D. VPC Peering

A

Answer: D

A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IP addresses. Instances in either VPC can communicate with each other as if they are within the same network. You can create a VPC peering connection between your own VPCs, or with a VPC in another AWS account within a single region. Note that peering is not transitive: if VPC A is peered with VPC B and with VPC C, VPC B cannot communicate with VPC C unless they are peered directly.

For more information on VPC peering, please visit the URL:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-peering.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
58
Q

Question 58

You currently have an EC2 instance hosting a web application. The number of users is expected to increase in the coming months and hence you need to add more elasticity to your setup. Which of the following methods can help add elasticity to your existing setup? Choose 2 answers from the options given below.

A. Setup your web app on more EC2 instances and set them behind an Elastic Load balancer

B. Setup an Elastic Cache in front of the EC2 instance.

C. Setup your web app on more EC2 instances and use Route53 to route requests accordingly.

D. Setup DynamoDB behind your EC2 Instances

A

Answer: A,C

The Elastic Load Balancer is the ideal solution for adding elasticity to your application: you can place multiple EC2 instances behind an ELB, and all requests are then routed accordingly to these instances.

The other alternative is to create a routing policy in Route 53 with a weighted routing policy. Weighted resource record sets let you associate multiple resources with a single DNS name, and enable Route 53 to route traffic to different resources in specified proportions (weights). To create a group of weighted resource record sets, two or more resource record sets are created with the same combination of DNS name and type, and each resource record set is assigned a unique identifier and a relative weight.

Option B is not valid because this will just cache the reads, and will not add the desired elasticity to your application.

Option D is not valid because there is no mention of a persistence layer in the question that would require the use of DynamoDB.

For more information on Elastic Load Balancer, please visit the below URL:
https://aws.amazon.com/elasticloadbalancing/

For more information on Route53, please visit the below URL:
http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/Welcome.html
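The weighted routing behaviour described above can be sketched as a simple proportion calculation (the record names and weights here are hypothetical):

```python
# Sketch of Route 53 weighted routing: each record receives traffic in
# proportion to weight / sum(all weights).

def traffic_share(records):
    """Map record name -> fraction of queries it should receive."""
    total = sum(records.values())
    return {name: weight / total for name, weight in records.items()}

shares = traffic_share({"web-a.example.com": 3, "web-b.example.com": 1})
print(shares["web-a.example.com"], shares["web-b.example.com"])  # 0.75 0.25
```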

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
59
Q

Question 59

You are creating a Provisioned IOPS volume in AWS. The size of the volume is 8 GiB. Which of the following is a possible value for the IOPS of the volume?

A. 400

B. 500

C. 600

D. 1000

A

Answer: A

The maximum ratio of IOPS to volume size is 50:1, so if the volume size is 8 GiB, the maximum IOPS of the volume can be 400. If you request a value beyond this, the console will return an error.

For more information on Provisioned IOPS, please visit the below URL:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html
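The 50:1 ratio quoted above can be expressed as simple arithmetic (note that current AWS limits may differ from the figure quoted in this card):

```python
# Maximum Provisioned IOPS for a volume, using the 50:1 IOPS-to-size ratio
# stated in the answer above.

def max_piops(volume_gib, ratio=50):
    return volume_gib * ratio

print(max_piops(8))  # 400 -> option A is the only valid choice
```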

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
60
Q

Question 60

A company is hosting EC2 instances for non-production, non-priority batch workloads. These processes can be interrupted at any time. What is the best pricing model for EC2 instances in this case?

A. Reserved Instances

B. On-Demand Instances

C. Spot Instances

D. Regular Instances

A

Answer: C

Spot instances enable you to bid on unused EC2 instances, which can lower your Amazon EC2 costs significantly. The hourly price for a Spot instance (of each instance type in each Availability Zone) is set by Amazon EC2, and fluctuates depending on the supply of and demand for Spot instances. Your Spot instance runs whenever your bid exceeds the current market price. Spot instances are a cost-effective choice if you can be flexible about when your applications run and if your applications can be interrupted. For example, Spot instances are well-suited for data analysis, batch jobs, background processing, and optional tasks.

Option A is invalid because even though Reserved Instances can reduce costs, they are best for workloads that are active for a longer period of time rather than for batch loads that last for a shorter period.

Option B is not right because On-Demand Instances tend to be more expensive than Spot Instances.

Option D is invalid because there is no concept of Regular instances in AWS

For more information on Spot Instances, please visit the below URL:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-spot-instances.html
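The Spot rule stated above, that your instance runs whenever your bid exceeds the current market price, can be sketched as (the prices below are hypothetical):

```python
# Sketch of the Spot pricing rule from the explanation above.

def spot_running(bid, market_price):
    """The instance runs while the bid exceeds the current market price."""
    return bid > market_price

print(spot_running(bid=0.05, market_price=0.03))  # True  (instance runs)
print(spot_running(bid=0.05, market_price=0.07))  # False (instance interrupted)
```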

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
61
Q

Question 61

You have 2 Ubuntu instances located in different subnets in the same VPC. To your understanding, these instances should be able to communicate with each other, but when you try to ping from one instance to another, you get a timeout. The route tables seem valid and have an entry with the target 'local' for your VPC CIDR. Which of the following could be a valid reason for this issue?

A. The Instances are of the wrong AMI , hence you are not able to ping the instances.

B. The Security Group has not been modified to allow the required traffic.

C. The Instances don’t have Public IP, so that the ping commands can be routed

D. The Instances don’t have Elastic IP, so that the ping commands can be routed

A

Answer: B

The security groups need to be configured to ensure that ping commands can go through: the ICMP protocol must be allowed in the inbound rules of the instances' security group so that the ping packets can reach the instances.

Option A is invalid because the AMI will not impact the ping command.

Options C and D are invalid because even if you have a public IP or Elastic IP allocated to the instance, you still need a route to the internet gateway and correctly configured security groups.

For more information on Security Groups, please visit the below URL:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_SecurityGroups.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
62
Q

Question 62

What is the best way to move an EBS volume currently attached to an EC2 instance from one availability zone to another?

A. Detach the volume and attach to an EC2 instance in another AZ.

B. Create a new volume in the other AZ and specify the current volume as the source.

C. Create a snapshot of the volume and then create a volume from the snapshot in the other AZ

D. Create a new volume in the AZ and do a disk copy of contents from one volume to another.

A

Answer: C

In order for a volume to be available in another Availability Zone, you first create a snapshot of the volume. When you then create a new volume from that snapshot, you can specify the new Availability Zone.

Option A is invalid because an instance and a volume have to be in the same AZ for the volume to be attached.

Option B is invalid because there is no way to specify a volume as a source.

Option D is invalid because a disk copy would be a tedious, error-prone process.

For more information on snapshots, please visit the below URL:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSSnapshots.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
63
Q

Question 63

When it comes to API credentials, what is the best practice recommended by AWS?

A. Create a role which has the necessary permissions and can be assumed by the EC2 instance.

B. Use the API credentials from an EC2 instance.

C. Use the API credentials from a bastion host.

D. Use the API credentials from a NAT Instance.

A

Answer: A

IAM roles are designed so that your applications can securely make API requests from your instances, without requiring you to manage the security credentials that the applications use. Options B, C, and D are invalid because it is not secure to store long-lived API credentials on any EC2 instance; the credentials can be tampered with, and hence this is not the ideal way to make API calls.

For more information on IAM roles for EC2, please visit the below URL:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
64
Q

Question 64

You want to retrieve the public IP address assigned to a running instance via the instance metadata. Which of the below URLs is valid for retrieving this data?

A. http://169.254.169.254/latest/meta-data/public-ipv4

B. http://254.169.254.169/latest/meta-data/public-ipv4

C. http://254.169.254.169/meta-data/latest/public-ipv4

D. http://169.254.169.254/meta-data/latest/public-ipv4

A

Answer: A

As per the AWS documentation, option A is the right way to access the instance metadata.

For more information on Instance metadata, please visit the below URL:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html
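A sketch of how the option A URL is built; actually querying it only works from inside an EC2 instance, so this example only constructs the URL (note that IMDSv2 additionally requires a session token, which this sketch omits):

```python
# The link-local metadata endpoint from option A. Any metadata path is
# appended to this base; the request itself must be made from the instance.

METADATA_BASE = "http://169.254.169.254/latest/meta-data/"

def metadata_url(path):
    return METADATA_BASE + path

print(metadata_url("public-ipv4"))
# http://169.254.169.254/latest/meta-data/public-ipv4
```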

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
65
Q

Question 65

You are planning to use MySQL RDS in AWS. You have a requirement to ensure that you are able to recover from a database crash. Which of the below is not a recommended practice when you want to fulfill this requirement?

A. Ensure that automated backups are enabled for the RDS

B. Ensure that you use the MyISAM storage engine for MySQL

C. Ensure that the database does not grow too large

D. Ensure that file sizes for the RDS is well under 16 TB.

A

Answer: B

AWS recommends the InnoDB storage engine for MySQL on RDS; MyISAM does not support reliable crash recovery, so option B is not a recommended practice. The other options are all recommended best practices for MySQL storage on RDS.

For more information on best practices for MySQL Storage, please visit the below URL:
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_BestPractices.html#CHAP_BestPractices.MySQLStorage

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
66
Q

Question 66

Which of the following is a valid bucket name?

A. demo

B. Example

C. .example

D. demo.

A

Answer: A

Following are the restrictions when naming buckets in S3: Bucket names must be at least 3 and no more than 63 characters long. Bucket names must be a series of one or more labels, with adjacent labels separated by a single period (.). Bucket names can contain lowercase letters, numbers, and hyphens, and each label must start and end with a lowercase letter or a number. Bucket names must not be formatted as an IP address (e.g., 192.168.5.4). When using virtual hosted-style buckets with SSL, the SSL wildcard certificate only matches buckets that do not contain periods; to work around this, use HTTP or write your own certificate verification logic. AWS recommends that you do not use periods (".") in bucket names.

Option B is invalid because it has an upper case character

Option C is invalid because the bucket name cannot start with a period (.).

Option D is invalid because the bucket name cannot end with a period (.).

For more information on S3 Bucket restrictions, please visit the below URL:
http://docs.aws.amazon.com/AmazonS3/latest/dev/BucketRestrictions.html
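A small validator for the naming rules listed above (a sketch covering the rules quoted in this card, not the full, current S3 specification):

```python
# Validate an S3 bucket name against the restrictions quoted above:
# 3-63 characters, lowercase labels separated by single periods, each
# label starting/ending with a letter or digit, and not an IP address.
import re

LABEL = re.compile(r"^[a-z0-9]([a-z0-9-]*[a-z0-9])?$")
IP = re.compile(r"^\d{1,3}(\.\d{1,3}){3}$")

def valid_bucket_name(name):
    if not 3 <= len(name) <= 63 or IP.match(name):
        return False
    return all(LABEL.match(label) for label in name.split("."))

print(valid_bucket_name("demo"))      # True  (option A)
print(valid_bucket_name("Example"))   # False (uppercase letter)
print(valid_bucket_name(".example"))  # False (leading period)
print(valid_bucket_name("demo."))     # False (trailing period)
```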

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
67
Q

Question 67

Which of the following is not a feature provided by Route53?

A. Registration of Domain Names

B. Routing of internet traffic to domain resources

C. Offloading content to cache locations

D. Health check of resources

A

Answer: C

The following features are available in Route 53, hence options A, B, and D are valid:

Register domain names: your website needs a name, such as example.com. Amazon Route 53 lets you register a name for your website or web application, known as a domain name.

Route internet traffic to the resources for your domain: when a user opens a web browser and enters your domain name in the address bar, Amazon Route 53 helps the Domain Name System (DNS) connect the browser with your website or web application.

Check the health of your resources: Amazon Route 53 sends automated requests over the internet to a resource, such as a web server, to verify that it’s reachable, available, and functional. You can also choose to receive notifications when a resource becomes unavailable and choose to route internet traffic away from unhealthy resources.

Option C, offloading content to cache locations, is a feature of Amazon CloudFront, the AWS content delivery service.

For more information on Route53, please visit the below URL:
http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/Welcome.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
68
Q

Question 68

When working with API Gateway in AWS, what type of endpoints are exposed?

A. HTTP

B. HTTPS

C. JSON

D. XML

A

Answer: B

All of the endpoints created with API Gateway are HTTPS endpoints. Option A is incorrect because Amazon API Gateway does not support unencrypted (HTTP) endpoints.

Options C and D are invalid because JSON and XML are data formats, not endpoint types; API Gateway exposes HTTPS endpoints only.

For more information on API Gateways, please visit the below URL:
https://aws.amazon.com/api-gateway/faqs/

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
69
Q

Question 69

Which of the following verbs are supported with the API Gateway?

A. GET

B. POST

C. PUT

D. All of the above

A

Answer: D

Each resource within a REST API can support one or more of the standard HTTP
methods. You define which verbs should be supported for each resource (GET, POST, PUT, PATCH, DELETE, HEAD, OPTIONS) and their implementation.

For more information on API Gateways, please visit the below URL:
https://aws.amazon.com/api-gateway/faqs/

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
70
Q

Question 70

Which of the following container technologies are currently supported by the AWS ECS service?

Choose 2 answers.

A. Kubernetes

B. Docker

C. Mesosphere

D. Canonical LXD

A

Answer: A,B

Docker is the container technology supported by the EC2 Container Service (ECS), and Kubernetes is supported through Amazon Elastic Container Service for Kubernetes (EKS).

For more information on ECS, please visit the below URL:
https://aws.amazon.com/ecs/faqs/
https://aws.amazon.com/blogs/aws/amazon-elastic-container-service-for-kubernetes/

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
71
Q

Question 71

Which of the following, when used alongside the AWS Security Token Service, can be used to provide a single sign-on experience for existing users who are part of an organization using on-premises applications?

A. OpenID Connect

B. JSON

C. SAML 2.0

D. OAuth

A

Answer: C

You can authenticate users in your organization’s network, and then provide those users access to AWS without creating new AWS identities for them and requiring them to sign in with a separate user name and password. This is known as the single sign-on (SSO) approach to temporary access. AWS STS supports open standards like Security Assertion Markup Language (SAML) 2.0, with which you can use Microsoft AD FS to leverage your Microsoft Active Directory.

Options A and D are incorrect because these are used when you want users to sign in with a well-known third-party identity provider such as Login with Amazon, Facebook, or Google.

Option B is incorrect because this is more of a data exchange protocol.

For more information on STS, please visit the below URL:
http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html

72
Q

Question 72

While performing status checks on your volume in AWS, you see that the volume check has a status of “insufficient data”. What can you derive from this status check?

A. All checks have passed

B. A particular check has failed only

C. All checks have failed

D. The check on the volume is still in progress.

A

Answer: D

Volume status checks enable you to better understand, track, and manage potential inconsistencies in the data on an Amazon EBS volume. They are designed to provide you with the information that you need to determine whether your Amazon EBS volumes are impaired, and to help you control how a potentially inconsistent volume is handled. If the status is insufficient-data, the checks may still be in progress on the volume.

Option A is incorrect because if all checks have passed, then the status of the volume is OK.

Options B and C are incorrect because if a check fails, the status of the volume is impaired.

For more information on Volume status checks, please visit the below URL:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring-volume-status.html

73
Q

Question 73

Which of the following best describes the term ‘Golden Image’?

A. This is the basic AMI which is available in AWS.

B. This refers to an instance which has been bootstrapped.

C. This refers to an AMI that has been constructed from a customized Image.

D. This refers to a special type of Linux AMI.

A

Answer: C

You can customize an Amazon EC2 instance and then save its configuration by creating an Amazon Machine Image (AMI). You can launch as many instances from the AMI as you need, and they will all include the customizations that you’ve made. Each time you want to change your configuration, you will need to create a new golden image, so you will need a versioning convention to manage your golden images over time.

Because of the above explanation, all of the remaining options are invalid.

For more information on AMI’s, please visit the below URL:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html

74
Q

Question 74

When designing a health check for your web application, which is hosted behind an Elastic Load Balancer, which of the following health checks is ideal to implement?

A. A TCP health check

B. A UDP health check

C. An HTTP health check

D. A combination of TCP and UDP health checks

A

Answer: C

Options B and D are invalid because UDP health checks are not supported.

Option A is partially valid.

A simple TCP health check would not detect the scenario where the instance itself is healthy but the web server process has crashed. Instead, you should assess whether the web server can return an HTTP 200 response for a simple request.

For more information on ELB health checks, please visit the below URL:
http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-healthchecks.html

75
Q

Question 75

Which of the following is an example of synchronous replication in AWS?

A. AWS RDS Read Replicas for MySQL, MariaDB and PostgreSQL

B. AWS Multi-AZ RDS

C. Redis engine for Amazon ElastiCache replication

D. AWS RDS Read Replicas for Oracle

A

Answer: B

Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database (DB) Instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ).

For more information on Multi-AZ, please visit the below URL:
https://aws.amazon.com/rds/details/multi-az/

Option A is invalid because Amazon RDS takes a snapshot of the source instance and creates a read-only instance from the snapshot. For MySQL, MariaDB and PostgreSQL, Amazon RDS uses those engines’ native asynchronous replication to update the read replica whenever there is a change to the source DB instance.

Option C is invalid because the Redis engine for Amazon ElastiCache supports replication with automatic failover, but the Redis engine’s replication is asynchronous.

Option D is invalid because this is not supported by AWS.

76
Q

Question 76

You want to get the reason for your EC2 Instance termination from the CLI. Which of the below commands is ideal for getting the reason?

A. aws ec2 describe-instances

B. aws ec2 describe-images

C. aws ec2 get-console-screenshot

D. aws ec2 describe-volume-status

A

Answer: A

Execute the aws ec2 describe-instances CLI command with the instance ID, as shown below:

aws ec2 describe-instances --instance-ids instance_id

In the JSON response that is displayed, locate the StateReason element. An example is shown below. This will help in understanding why the instance was shut down.

"StateReason": {
    "Message": "Client.UserInitiatedShutdown: User initiated shutdown",
    "Code": "Client.UserInitiatedShutdown"
}

Option B is invalid because this command describes one or more of the images (AMIs, AKIs, and ARIs) available to you

Option C is invalid because it retrieves a JPG-format screenshot of a running instance, which might not fully explain why the instance was terminated.

Option D is invalid because this command describes the status of the specified volumes.

For more information on the command, please visit the below URL:
http://docs.aws.amazon.com/cli/latest/reference/ec2/describe-instances.html
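As an illustration, the StateReason element can be pulled out of a describe-instances JSON response with a short script. This is a sketch in plain Python; the abridged sample response below is adapted from the answer above, and the instance ID is hypothetical.

```python
import json

# Abridged describe-instances response, matching the example in the answer.
response = json.loads("""
{
  "Reservations": [
    {
      "Instances": [
        {
          "InstanceId": "i-1234567890abcdef0",
          "State": {"Name": "terminated"},
          "StateReason": {
            "Code": "Client.UserInitiatedShutdown",
            "Message": "Client.UserInitiatedShutdown: User initiated shutdown"
          }
        }
      ]
    }
  ]
}
""")

def state_reasons(resp):
    """Yield (instance-id, reason message) for every instance in the response."""
    for reservation in resp["Reservations"]:
        for instance in reservation["Instances"]:
            reason = instance.get("StateReason", {})
            yield instance["InstanceId"], reason.get("Message", "unknown")

for instance_id, message in state_reasons(response):
    print(instance_id, "->", message)
```

The same traversal works on the output of `aws ec2 describe-instances` piped to a file, since the CLI emits this JSON shape.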

77
Q

Question 77

An application is currently configured on an EC2 instance to process messages in SQS. The queue has been created with the default settings. The application is configured to just read the messages once a week. It has been noticed that not all the messages are being picked by the application. What could be the issue?

A. The application is configured to long polling, so some messages are not being picked up

B. The application is configured to short polling, so some messages are not being picked up

C. Some of the messages have surpassed the retention period defined for the queue

D. Some of the messages don’t have the right permissions to be picked up by the application

A

Answer: C

When you create an SQS queue with the default options, the message retention period is 4 days. So if the application processes the messages just once a week, messages sent at the start of the week may be deleted before they can be picked up by the application.

Options A and B are invalid because, whether you use short or long polling, the application should eventually be able to read the messages.

Option D is invalid because you can provide permissions at the queue level.

For more information on SQS, please visit the below URL:
https://aws.amazon.com/sqs/faqs/
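The effect of the 4-day default retention period can be sketched with a small simulation (plain Python, not the SQS API; the dates are made up for illustration):

```python
from datetime import datetime, timedelta

# Default SQS message retention period (configurable from 1 minute to 14 days).
RETENTION = timedelta(days=4)

def surviving_messages(sent_at, read_at):
    """Return the send times of messages still retained when the reader polls."""
    return [t for t in sent_at if read_at - t <= RETENTION]

now = datetime(2023, 1, 8)                               # the weekly read
sent = [now - timedelta(days=d) for d in (6, 5, 2, 1)]   # sent through the week
alive = surviving_messages(sent, now)
print(len(alive))   # messages sent 6 and 5 days ago have already expired
```

Only the two messages younger than the retention period survive until the weekly read, which is exactly the symptom described in the question.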

78
Q

Question 78

Your application is on an EC2 instance in AWS. Users use the application to upload a file to S3. The message first goes to an SQS queue, before it is picked up by a worker process, which fetches the object and uploads it to S3. An email is then sent on successful completion of the upload. You notice, though, that you are getting numerous emails for each request, when ideally you should be getting only one final email notification for each successful upload. Which of the below could be the possible reason for this?

A. The application is configured for long polling so the messages are being picked up multiple times.

B. The application is not deleting the messages from SQS.

C. The application is configured to short polling, so some messages are not being picked up

D. The application is not reading the message properly from the SQS queue.

A

Answer: B

When you look at the message lifecycle for SQS queues in AWS, one of the most important aspects is to delete messages after they have been read from the queue.

Options A and C are invalid because, whether you use short or long polling, the application can still read the messages. The real issue is that messages are not being deleted after they have been read.

Option D is invalid because if the messages were not being read properly, the application would not send successful notifications.

For more information on SQS message lifecycle, please visit the below URL:
http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-message-lifecycle.html
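A minimal sketch of why this happens (a hypothetical in-memory queue, not the real SQS API): a message that is read but never deleted becomes visible again after its visibility timeout expires, so each polling cycle sends another email for the same upload.

```python
class Queue:
    """Toy stand-in for an SQS queue with a visibility timeout."""
    def __init__(self):
        self.messages = []          # [body, invisible_until] pairs
        self.clock = 0

    def send(self, body):
        self.messages.append([body, 0])

    def receive(self, visibility_timeout=30):
        for msg in self.messages:
            if msg[1] <= self.clock:
                msg[1] = self.clock + visibility_timeout  # hidden, NOT deleted
                return msg[0]
        return None

    def delete(self, body):
        self.messages = [m for m in self.messages if m[0] != body]

q = Queue()
q.send("upload-request-1")

emails = []
for _ in range(3):                  # three polling cycles
    body = q.receive()
    if body is not None:
        emails.append(f"email for {body}")
        # BUG: q.delete(body) is never called...
    q.clock += 60                   # ...so the message reappears next cycle

print(len(emails))   # 3 emails for one upload instead of 1
```

Adding the `q.delete(body)` call after successful processing is the fix option B describes.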

79
Q

Question 79

You have created your own VPC and subnet in AWS. You have launched an instance in that subnet. You have noticed that the instance is not receiving a DNS name. Which of the below options could be a valid reason for this issue?

A. The CIDR block for the VPC is invalid

B. The CIDR block for the subnet is invalid

C. The VPC configuration needs to be changed.

D. The subnet configuration needs to be changed.

A

Answer: C

If the DNS hostnames option of the VPC is not set to ‘Yes’, then instances launched in the subnet will not get DNS names. You can change the option by choosing your VPC and clicking ‘Edit DNS Hostnames’.

Options A and B are invalid because if the CIDR blocks were invalid, the VPC or subnet would not have been created.

Option D is invalid because the subnet configuration has no effect on DNS hostnames.

For more information on VPC’s, please visit the below URL:
https://aws.amazon.com/vpc/

80
Q

Question 80

You have created your own VPC and subnet in AWS. You have launched an instance in that subnet. You have attached an internet gateway to the VPC and seen that the instance has a public IP. The route table has only the entry for 10.0.0.0/16. The instance still cannot be reached from the internet. Which of the below changes needs to be made to the route table to resolve the issue?

A. Add the following entry to the route table — 0.0.0.0/0->Internet Gateway

B. Modify the above route table — 10.0.0.0/16 ->Internet Gateway

C. Add the following entry to the route table — 10.0.0.0/16 ->Internet Gateway

D. Add the following entry to the route table - 0.0.0.0/16->Internet Gateway

A

Answer: A

The route table needs an entry routing 0.0.0.0/0 to the Internet gateway to ensure that traffic from the internet can reach the instance. Hence all other options are invalid.

For more information on Route Tables, please visit the below URL:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Route_Tables.html
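The check the answer describes can be sketched in a few lines (a hypothetical data model, not the real EC2 API; the gateway ID is taken from the option text and is illustrative):

```python
def has_internet_route(routes):
    """A subnet is public only if 0.0.0.0/0 routes to an internet gateway."""
    return any(dest == "0.0.0.0/0" and target.startswith("igw-")
               for dest, target in routes)

before = [("10.0.0.0/16", "local")]                  # the failing route table
after = before + [("0.0.0.0/0", "igw-a97272cc")]     # after applying option A

print(has_internet_route(before), has_internet_route(after))
```

The local 10.0.0.0/16 route only covers traffic inside the VPC; without the 0.0.0.0/0 entry, replies to internet clients have nowhere to go.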

81
Q

Question 81

You want to have a VPC created in AWS which will host an application. The application will consist of just web and database servers, and it only needs to be accessed from the internet by internet users. Which of the following VPC configuration wizard options would you use?

A. VPC with a Single Public Subnet Only

B. VPC with Public and Private Subnets

C. VPC with Public and Private Subnets and Hardware VPN Access

D. VPC with a Private Subnet Only and Hardware VPN Access

A

Answer: B

The configuration for this scenario includes a virtual private cloud (VPC) with a public subnet and a private subnet. We recommend this scenario if you want to run a public-facing web application while maintaining back-end servers that aren’t publicly accessible. A common example is a multi-tier website, with the web servers in a public subnet and the database servers in a private subnet. You can set up security and routing so that the web servers can communicate with the database servers.

Option A is invalid because ideally you need a private subnet to host the database server.

Options C and D are invalid because there is no need to access the application from on-premises locations using VPN connections.

For more information on this scenario, please visit the below URL:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario2.html

82
Q

Question 82

You are a solutions architect working for a large oil and gas company. Your company runs their production environment on AWS and has a custom VPC. The VPC contains 3 subnets, 1 of which is public and the other 2 are private. Inside the public subnet is a fleet of EC2 instances which belong to an Auto Scaling group. All EC2 instances are in the same security group. Your company has created a new custom application which connects to mobile devices using a custom port. This application has been rolled out to production and you need to open this port globally to the internet. What steps should you take to do this, and how quickly will the change occur?

A. Open the port on the existing network Access Control List. Your EC2 instances will be able to communicate on this port after a reboot.

B. Open the port on the existing network Access Control List. Your EC2 instances will be able to communicate over this port immediately.

C. Open the port on the existing security group. Your EC2 instances will be able to communicate over this port immediately.

D. Open the port on the existing security group. Your EC2 instances will be able to communicate over this port as soon as the relevant Time To Live (TTL) expires.

A

Answer: C

You can change the security group’s inbound rules so that traffic is allowed on the custom port. When you make a change to security groups or network ACLs, it is applied immediately, as clearly stated in the AWS documentation.

For more information on Security Groups, please refer to the below link
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_SecurityGroups.html
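The intent of the inbound-rule change can be sketched as follows (a hypothetical rule model, not the real EC2 API; the port numbers and CIDRs are illustrative):

```python
def port_open_to_world(rules, port):
    """True if some inbound rule covers this port from any source (0.0.0.0/0)."""
    return any(r["from_port"] <= port <= r["to_port"] and r["cidr"] == "0.0.0.0/0"
               for r in rules)

rules = [
    {"from_port": 22, "to_port": 22, "cidr": "203.0.113.0/24"},   # SSH, office only
    {"from_port": 8443, "to_port": 8443, "cidr": "0.0.0.0/0"},    # custom app port
]

print(port_open_to_world(rules, 8443), port_open_to_world(rules, 22))
```

In the real console or CLI you would add the second rule to the existing security group; because security-group changes apply immediately, no reboot or TTL wait is involved.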

83
Q

Question 83

You are designing various CloudFormation templates, each template to be used for a different purpose. What determines the cost of using the CloudFormation templates?

A. CloudFormation does not have a cost itself.

B. You are charged based on the size of the template.

C. You are charged based on the time it takes to launch the template.

D. It has a basic charge of $1.10

A

Answer: A

As the AWS documentation clearly states, you are only charged for the underlying resources created by CloudFormation templates. Because of this, all other options are invalid.

For more information on CloudFormation, please visit the below URL:
https://aws.amazon.com/cloudformation/faqs/

84
Q

Question 84

You are creating a number of EBS volumes for your EC2 instances and are concerned about backups of the EBS volumes. Which of the below is a way to back up the EBS volumes?

A. Configure Amazon Storage Gateway with EBS volumes as the data source and store the backups on premise through the storage gateway

B. Write a cronjob that uses the AWS CLI to take a snapshot of production EBS volumes.

C. Use a lifecycle policy to back up EBS volumes stored on Amazon S3 for durability

D. Write a cronjob on the server that compresses the data and then copy it to Glacier

A

Answer: B

A point-in-time snapshot of an EBS volume, can be used as a baseline for new volumes or for data backup. If you make periodic snapshots of a volume, the snapshots are incremental—only the blocks on the device that have changed after your last snapshot are saved in the new snapshot. Even though snapshots are saved incrementally, the snapshot deletion process is designed so that you need to retain only the most recent snapshot in order to restore the entire volume. You can create a snapshot via the CLI command - create-snapshot

Option A is incorrect because you normally use the Storage gateway to backup your on-premise data.

Option C is incorrect because this is used for S3 storage

Option D is incorrect because compression is another maintenance task and storing it in Glacier is not an ideal option

For more information on snapshots, please visit the below URL:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-creating-snapshot.html
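A sketch of what such a cron job might run (the volume IDs are hypothetical): this just builds the `aws ec2 create-snapshot` command line for each production volume; a real job would execute these commands, e.g. via subprocess.

```python
def snapshot_commands(volume_ids, description="nightly backup"):
    """Build one create-snapshot CLI invocation per volume."""
    return [
        f'aws ec2 create-snapshot --volume-id {vol} --description "{description}"'
        for vol in volume_ids
    ]

for cmd in snapshot_commands(["vol-0abc1234", "vol-0def5678"]):
    print(cmd)
```

Because snapshots are incremental, running this nightly stores only changed blocks after the first full snapshot.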

85
Q

Question 85

You have a set of IIS servers running on EC2 instances for a high-traffic web site. You want to collect and process the log files generated from the IIS servers. Which of the below services is ideal to run in this scenario?

A. Amazon S3 for storing the log files and Amazon EMR for processing the log
files

B. Amazon S3 for storing the log files and EC2 Instances for processing the log
files

C. Amazon EC2 for storing and processing the log files

D. Amazon DynamoDB to store the logs and EC2 for running custom log analysis
scripts

A

Answer: A

Amazon EMR is a managed cluster platform that simplifies running big data frameworks, such as Apache Hadoop and Apache Spark, on AWS to process and analyze vast amounts of data. By using these frameworks and related open-source projects, such as Apache Hive and Apache Pig, you can process data for analytics purposes and business intelligence workloads. Additionally, you can use Amazon EMR to transform and move large amounts of data into and out of other AWS data stores and databases, such as Amazon Simple Storage Service (Amazon S3) and Amazon DynamoDB.

Options B and C, even though partially correct, would add overhead: EC2 instances would have to process the log files when a ready-made service already exists for this purpose.

Option D is invalid because DynamoDB is not an ideal option for storing log files.

For more information on EMR, please visit the below URL:
http://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-what-is-emr.html

86
Q

Question 86

You are trying to configure Cross-Region Replication for your S3 bucket, but the Cross-Region Replication option is disabled and cannot be selected. Which of the below could be the possible reason for this?

A. The feature is not available in that region

B. You need to enable versioning on the bucket

C. The source region is currently down

D. The destination region is currently down

A

Answer: B

Requirements for cross-region replication:

The source and destination buckets must be versioning-enabled.
The source and destination buckets must be in different AWS regions.
You can replicate objects from a source bucket to only one destination bucket.
Amazon S3 must have permission to replicate objects from the source bucket to the destination bucket on your behalf.

If the source bucket owner also owns the object, the bucket owner has full permissions to replicate the object. If not, the source bucket owner must have permission for the Amazon S3 actions s3:GetObjectVersion and s3:GetObjectVersionACL to read the object and object ACL. If you are setting up cross-region replication in a cross-account scenario (where the source and destination buckets are owned by different AWS accounts), the source bucket owner must have permission to replicate objects in the destination bucket, and the destination bucket owner needs to grant these permissions via a bucket policy.

Option A is invalid because the feature is available in all regions.

Option C is invalid because, if the source region were down, you would not be able to access S3 in that region at all.

Option D is invalid because you have not reached the configuration stage to select the destination bucket

For more information on S3 Cross Region Replication, please visit the below URL:
https://docs.aws.amazon.com/AmazonS3/latest/dev/crr.html
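The two prerequisites the answer highlights can be sketched as a simple check (a hypothetical bucket model, not the real S3 API; the region names are illustrative):

```python
def crr_allowed(source, destination):
    """Cross-region replication needs versioning on both buckets and distinct regions."""
    return (source["versioning"] and destination["versioning"]
            and source["region"] != destination["region"])

src = {"region": "us-east-1", "versioning": False}
dst = {"region": "eu-west-1", "versioning": True}

print(crr_allowed(src, dst))     # False: enable versioning on the source first
src["versioning"] = True
print(crr_allowed(src, dst))     # True
```

The disabled console option in the question corresponds to the first case: versioning has not been enabled on the bucket.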

87
Q

Question 87

What amount of temporary disk space (/tmp) is allocated to a Lambda function per invocation?

A. 256 MB

B. 512 MB

C. 2 GiB

D. 16 GiB

A

Answer: B

AWS Lambda allocates 512 MB of ephemeral disk space in /tmp for each invocation, as listed among the service limits in the AWS Lambda documentation.

For more information on AWS Lambda, please visit the below URL:
http://docs.aws.amazon.com/lambda/latest/dg/limits.html
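A sketch of how a Lambda handler would use that scratch space (runnable locally on any Linux-like system; the event shape and file name are hypothetical):

```python
import os

def handler(event, context=None):
    # /tmp is the only writable filesystem path in the classic Lambda
    # environment, limited to 512 MB of ephemeral, per-container storage.
    path = "/tmp/scratch.txt"
    with open(path, "w") as f:
        f.write(event.get("payload", ""))
    return {"written_bytes": os.path.getsize(path)}

print(handler({"payload": "hello"}))   # {'written_bytes': 5}
```

Anything larger than the /tmp limit should be streamed to S3 instead of staged on local disk.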

88
Q

Question 88

You have a requirement to create a subnet in an AWS VPC which will host around 20 hosts. This subnet will be used to host web servers. Which of the below could be the possible CIDR block allocated for the subnet?

A. 10.0.1.0/27

B. 10.0.1.0/28

C. 10.0.1.0/29

D. 10.0.1.0/30

A

Answer: A

A /27 block provides 32 addresses; AWS reserves 5 addresses in every subnet, leaving 27 usable hosts, which fits the requirement.

Option B is invalid because a /28 provides only 16 addresses (11 usable after the 5 AWS-reserved addresses), which is not enough for 20 hosts.
Options C and D are invalid because the allowed subnet block size is between a /16 netmask and a /28 netmask, so /29 and /30 are not permitted.

For more information on Subnets, please visit the below URL:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html
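The subnet-sizing arithmetic above can be checked with the standard-library ipaddress module. AWS reserves 5 addresses in every subnet (network address, VPC router, DNS, future use, and broadcast), so usable hosts = total addresses minus 5:

```python
import ipaddress

def usable_aws_hosts(cidr):
    """Usable host count for an AWS subnet: total addresses minus the 5 reserved."""
    return ipaddress.ip_network(cidr).num_addresses - 5

for cidr in ("10.0.1.0/27", "10.0.1.0/28"):
    print(cidr, "->", usable_aws_hosts(cidr), "usable hosts")
```

A /27 yields 27 usable hosts and a /28 only 11, which is why only option A satisfies the 20-host requirement.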

89
Q

Question 89

You run a website which hosts videos and you have two types of members, premium fee paying members and free members. All videos uploaded by both your premium members and free members are processed by a fleet of EC2 instances which will poll SQS as videos are uploaded. However you need to ensure that your premium fee paying members’ videos have a higher priority than your free members. How do you design SQS?

A. SQS allows you to set priorities on individual items within the queue, so simply set the fee-paying members at a higher priority than your free members.

B. Create two SQS queues, one for premium members and one for free members. Program your EC2 fleet to poll the premium queue first and if empty, to then poll your free members SQS queue.

C. SQS would not be suitable for this scenario. It would be much better to use SNS to encode the videos.

D. Use SNS to notify when a premium member has uploaded a video and then process that video accordingly.

A

Answer: B

In this case, you can have multiple SQS queues. The SQS queues for the premium members can be polled first by the EC2 Instances and then those messages can be processed.

For information on SQS best practices, please refer to the below link
http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-best-practices.html
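The polling order option B describes can be sketched with in-memory stand-ins for the two queues (plain Python, not the SQS API; the job names are made up):

```python
from collections import deque

premium = deque(["premium-video-1", "premium-video-2"])
free = deque(["free-video-1"])

def next_job():
    """Poll the premium queue first; fall back to free only when it is empty."""
    if premium:
        return premium.popleft()
    if free:
        return free.popleft()
    return None

processed = [next_job() for _ in range(3)]
print(processed)   # premium jobs drain before any free job is processed
```

With real SQS the worker would issue a ReceiveMessage call against the premium queue URL first and only poll the free queue when no premium message is returned.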

90
Q

Question 90

In a VPC, you have launched two web servers and attached them to an internet-facing ELB. Both your web servers and the ELB are located in the public subnet. Yet, you are still not able to access your web application via the ELB’s DNS through the internet. What could be done to resolve this issue?

A. Attach an Internet gateway to the VPC and route it to the subnet

B. Add an elastic IP address to the instance

C. Use Amazon Elastic Load Balancer to serve requests to your instances located in the internal subnet

D. Recreate the instances again

A

Answer: A

You need to ensure that the VPC has an internet gateway attached and the route table properly configured for the subnet.

Option B is invalid because even the ELB itself is not accessible from the internet, so an elastic IP on the instance would not help.

Option C is invalid because the instances and ELB are not reachable via the internet if no internet gateway is attached to the VPC.

Option D is invalid because this will not have an impact on the issue.

For more information on troubleshooting ELB, please visit the below URL:
https://aws.amazon.com/premiumsupport/knowledge-center/elb-connectivity-troubleshooting/

91
Q

Question 91

You want to keep a check on the active volumes, active snapshots and Elastic IP addresses you use so that you don’t go beyond the service limits. Which of the below services can help in this regard?

A. AWS Cloudwatch

B. AWS EC2

C. AWS Trusted Advisor

D. AWS SNS

A

Answer: C

Trusted Advisor is an online resource to help you reduce cost, increase performance, and improve security by optimizing your AWS environment. It provides real-time guidance to help you provision your resources following AWS best practices, and its service-limit checks cover items such as active EBS volumes, active snapshots, and Elastic IP addresses.

Option A is invalid because even though CloudWatch can monitor resources, it does not check usage against service limits.

Option B is invalid because EC2 is the Elastic Compute Cloud service itself.

Option D is invalid because SNS can send notifications but does not check service limits.

For more information on the Trusted Advisor monitoring, please visit the below URL:
https://aws.amazon.com/premiumsupport/ta-faqs/

92
Q

Question 92

You are building an automated transcription service in which Amazon EC2 worker instances process an uploaded audio file and generate a text file. You must store both of these files in the same durable storage until the text file is retrieved. You do not know what the storage capacity requirements are. Which storage option is both cost-efficient and scalable?

A. Multiple Amazon EBS volume with snapshots

B. A single Amazon Glacier vault

C. A single Amazon S3 bucket

D. Multiple instance stores

A

Answer: C

The AWS Simple Storage Service is the best option for this scenario. The AWS documentation provides the following information on the Simple Storage Service: Amazon S3 is object storage built to store and retrieve any amount of data from anywhere — web sites and mobile apps, corporate applications, and data from IoT sensors or devices. It is designed to deliver 99.999999999% durability, and stores data for millions of applications used by market leaders in every industry.

For more information on the Simple Storage Service, please refer to the below link
https://aws.amazon.com/s3/

93
Q

Question 93

You are an AWS administrator for your company. The company currently has a set of AWS resources hosted in a particular region. You have been asked by your supervisor to create a script which could create duplicate resources in another region in case of a disaster. Which of the below AWS services could help fulfill this requirement?

A. AWS Elastic Beanstalk

B. AWS SQS

C. AWS Cloudformation

D. AWS SNS

A

Answer: C

AWS CloudFormation is a service that helps you model and set up your Amazon Web Services resources so that you can spend less time managing those resources and more time focusing on your applications that run in AWS. You create a template that describes all the AWS resources that you want (like Amazon EC2 instances or Amazon RDS DB instances), and AWS CloudFormation takes care of provisioning and configuring those resources for you.

Option A is invalid because Elastic Beanstalk is good for getting a defined set of resources up and running, but it cannot be used to duplicate infrastructure as code.

Option B is invalid because this is the Simple Queue Service, which is used for sending messages.

Option D is invalid because this is the Simple Notification Service, which is used for sending notifications.

For more information on Cloudformation, please visit the below URL:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html

94
Q

Question 94

What are bastion hosts?

A. They are instances in the public subnet which are used as a jump server to resources within other subnets.

B. They are instances in the private subnet which are used as a jump server to resources within other subnets.

C. They are instances in the public subnet which are used to host web resources that can be accessed by users.

D. They are instances in the private subnet which are used to host web resources that can be accessed by users.

A

Answer: A

As the number of EC2 instances in your AWS environment grows, so too does the number of administrative access points to those instances. Depending on where your administrators connect to your instances from, you may consider enforcing stronger network-based access controls. A best practice in this area is to use a bastion. A bastion is a special-purpose server instance that is designed to be the primary access point from the Internet and acts as a proxy to your other EC2 instances.

Option B is invalid because bastion hosts need to be in the public subnet.
Options C and D are invalid because bastion hosts are not used to host web resources.

For more information on Bastion hosts, please visit the below URL:
https://aws.amazon.com/blogs/security/controlling-network-access-to-ec2-instances-using-a-bastion-server/

95
Q

Question 95

You have several AWS Reserved Instances in your account. They have been running for some time but now need to be shut down, since they are no longer required. The data is still required for future purposes. Which 2 of the below steps can be taken?

A. Convert the instance to on-demand instances

B. Sell the instances on the AWS Reserved Instance Marketplace

C. Take snapshots of the EBS volumes and terminate the instances

D. Convert the instance to spot instances

A

Answer: B,C

The Reserved Instance Marketplace is a platform that supports the sale of third-party and AWS customers’ unused Standard Reserved Instances, which vary in term lengths and pricing options. For example, you may want to sell Reserved Instances after moving instances to a new AWS region, changing to a new instance type, ending projects before the term expiration, when your business needs change, or if you have unneeded capacity.

For more information on selling instances, please visit the below URL:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ri-market-general.html

Since the data is still required, it’s better to take snapshots of the existing volumes and then terminate the instances.

For more information on EBS Snapshots, please visit the below URL:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSSnapshots.html

Options A and D are invalid because you cannot convert Reserved Instances to either On-Demand Instances or Spot Instances.

96
Q

Question 96

You have an EC2 Instance in a particular region. This EC2 Instance has preconfigured software running on it. You have been requested to create a disaster recovery solution in case the instance in the region fails. Which of the following is the best solution?

A. Create a duplicate EC2 Instance in another AZ. Keep it in the shutdown state. When required, bring it back up.

B. Backup the EBS data volume. If the instance fails, bring up a new EC2 instance and attach the volume.

C. Store the EC2 data on S3. If the instance fails, bring up a new EC2 instance and restore the data from S3.

D. Create an AMI of the EC2 Instance and copy it to another region

A

Answer: D

You can copy an Amazon Machine Image (AMI) within or across an AWS region using the AWS Management Console, the AWS command line tools or SDKs, or the Amazon EC2 API, all of which support the CopyImage action. You can copy both Amazon EBS-backed AMIs and instance store-backed AMIs. You can copy AMIs with encrypted snapshots and encrypted AMIs. Copying a source AMI results in an identical but distinct target AMI with its own unique identifier. In the case of an Amazon EBS-backed AMI, each of its backing snapshots is, by default, copied to an identical but distinct target snapshot.

Option A is invalid because it is a maintenance overhead to maintain another non-running instance.

Option B is invalid because the preconfigured software could have settings on the root volume.

Option C is invalid because this is a long and inefficient way to restore a failed instance.

For more information on Copying AMI’s, please visit the below URL:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/CopyingAMIs.html

97
Q

Question 97

You have an EC2 instance located in a subnet in AWS. You have installed a web application on this instance. The security group attached to this instance allows SSH access from 0.0.0.0/0. The VPC has the CIDR block 10.0.0.0/16 attached to it. You can SSH into the instance from the internet, but you are not able to access the web server via the web browser. Which of the below steps would resolve the issue?

A. Add an HTTP rule to the Security Group

B. Remove the SSH rule from the security group

C. Add the route 10.0.0.0/16 -> igw-a97272cc to the Route Table

D. Add the route 0.0.0.0/0 -> local to the Route Table

A

Answer: A

You need to add an HTTP rule to the security group so that HTTP traffic can reach the server. Add the rules to the security group as desired.

Option B is invalid because then you will not be able to access the server via SSH.

Options C and D are invalid because these routes are not ideal routes to add to the VPC.

For more information on security groups, please visit the below URL:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_SecurityGroups.html

98
Q

Question 98

You are working in the media industry and you have created a web application where users will be able to upload photos they create to your website. This web application must be able to call the S3 API in order to function. Where should you store your API credentials whilst maintaining the maximum level of security?

A. Save the API credentials to your php files.

B. Don’t save your API credentials. Instead create a role in IAM and assign this role to an EC2 instance when you first create it.

C. Save your API credentials in a public Github repository.

D. Pass API credentials to the instance using instance userdata.

A

Answer: B

Applications must sign their API requests with AWS credentials. Therefore, if you are an application developer, you need a strategy for managing credentials for your applications that run on EC2 instances. For example, you can securely distribute your AWS credentials to the instances, enabling the applications on those instances to use your credentials to sign requests, while protecting your credentials from other users. However, it's challenging to securely distribute credentials to each instance, especially those that AWS creates on your behalf, such as Spot Instances or instances in Auto Scaling groups. You must also be able to update the credentials on each instance when you rotate your AWS credentials. IAM roles are designed so that your applications can securely make API requests from your instances, without requiring you to manage the security credentials that the applications use.

For more information on IAM Roles, please visit the below URL:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html

99
Q

Question 99

You are a systems administrator and you need to monitor the health of your production environment. You decide to do this using CloudWatch; however, you notice that you cannot see the health of every important metric in the default dashboard. Which of the following metrics do you need to design a custom CloudWatch metric for, when monitoring the health of your EC2 instances?

A. CPU Usage

B. Memory usage

C. Disk read operations

D. Network in

A

Answer: B

When you look at your CloudWatch metric dashboard, you can see the metrics for CPU usage, disk read operations and network in. You need to add a custom metric for memory usage.

An example of enabling the custom metric is shown at the below URL:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/mon-scripts.html
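Since the hypervisor cannot see inside the guest OS, a memory metric has to be published from within the instance. Below is a minimal sketch of building PutMetricData-style parameters for such a custom metric; the helper name, instance ID and memory figures are illustrative, and a real script would read actual memory usage and send these parameters with an AWS SDK or the CLI.

```python
def memory_metric_params(instance_id, used_mb, total_mb, namespace="System/Linux"):
    """Build PutMetricData-style parameters for a custom memory metric."""
    return {
        "Namespace": namespace,
        "MetricData": [{
            "MetricName": "MemoryUtilization",
            "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
            "Unit": "Percent",
            "Value": round(100.0 * used_mb / total_mb, 2),
        }],
    }

params = memory_metric_params("i-0123456789abcdef0", used_mb=3072, total_mb=4096)
print(params["MetricData"][0]["Value"])  # 75.0
```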

100
Q

Question 100

In order for an EC2 instance to be accessed from the internet, which of the following are required? Choose 3 answers from the options given below.

A. An Internet gateway attached to the VPC

B. A private IP address attached to the instance

C. A public IP address attached to the instance

D. A route entry to the Internet gateway in the Route table

A

Answer: A, C, D

The image in the KB article shows the configuration of an instance which can be accessed from the internet. The key requirements are:

1) An Internet gateway attached to the VPC
2) A public IP or Elastic IP address attached to the instance
3) A route entry to the Internet gateway in the Route table

Option B is invalid, because this is only required for communication between instances in the VPC.

For more information on Public subnets, please refer to the below URL:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario1.html
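The three requirements above can be expressed as a small checklist function; this is a toy model with illustrative parameter names, not an AWS API.

```python
def is_reachable_from_internet(has_internet_gateway: bool,
                               has_public_ip: bool,
                               route_to_igw: bool) -> bool:
    """Toy model: all three conditions must hold for internet access."""
    return has_internet_gateway and has_public_ip and route_to_igw

# A private IP alone (option B) adds nothing to internet reachability:
print(is_reachable_from_internet(True, False, True))  # False
print(is_reachable_from_internet(True, True, True))   # True
```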

101
Q

Question 101

You are using IoT sensors to monitor the number of bags that are handled at an airport. The data gets sent back to a Kinesis stream with default settings. Every alternate day, the data from the stream is sent to S3 for processing. But you notice that S3 is not receiving all of the data that is being sent to the Kinesis stream. What could be the reason for this?

A. The sensors probably stopped working on some days hence data is not sent to the stream.

B. S3 can only store data for a day

C. Data records are only accessible for a default of 24 hours from the time they are added to a stream

D. Kinesis streams are not meant to handle IoT related data

A

Answer: C

Kinesis Streams supports changes to the data record retention period of your stream. A Kinesis stream is an ordered sequence of data records meant to be written to and read from in real time. Data records are therefore stored in shards in your stream temporarily. The time period from when a record is added to when it is no longer accessible is called the retention period. A Kinesis stream stores records for 24 hours by default, up to 168 hours.

Option A, even though a possibility, cannot be taken for granted as the right option.

Option B is invalid since S3 can store data indefinitely unless you have a lifecycle policy defined.

Option D is invalid because the Kinesis service is perfect for this sort of data ingestion.

For more information on Kinesis data retention, please refer to the below URL:
http://docs.aws.amazon.com/streams/latest/dev/kinesis-extended-retention.html
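The 24-hour default retention explains the data loss: a consumer that only reads every alternate day misses every record older than the retention window. A rough simulation with timestamps in hours (purely illustrative, not the Kinesis API):

```python
DEFAULT_RETENTION_HOURS = 24  # Kinesis default; extendable up to 168 hours

def readable_records(record_times, read_time, retention=DEFAULT_RETENTION_HOURS):
    """Return the record timestamps still inside the retention window."""
    return [t for t in record_times if read_time - t <= retention]

# One record per hour for 48 hours; the consumer only reads at hour 48.
records = list(range(48))
print(len(readable_records(records, read_time=48)))       # 24: hours 0-23 expired
print(len(readable_records(records, 48, retention=168)))  # 48: nothing lost
```

Extending the retention period to cover the read interval (or reading more often) avoids the loss.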

102
Q

Question 102

You have defined the following Network ACL for your subnet:

Rule:100-ALL TRAFFIC-ALL PROTOCOL-ALL PORTS-Source-0.0.0.0/0-ALLOW,

Rule 101-Custom TCP Rule-TCP(6) PROTOCOL-3000 PORT-Source:54.12.34.34/32-DENY &

Rule * - ALL TRAFFIC-ALL PROTOCOL-ALL PORTS-Source:0.0.0.0/0-DENY.

What will be the outcome when a workstation with IP 54.12.34.34 tries to access your subnet?

A. The request will be allowed

B. The request will be denied

C. The request will be allowed initially and then denied

D. The request will be denied initially and then allowed

A

Answer: A

The following are the parts of a network ACL rule: Rule number. Rules are evaluated starting with the lowest numbered rule. As soon as a rule matches traffic, it's applied regardless of any higher-numbered rule that may contradict it. Since the first rule, number 100, allows all traffic, then no matter what rules you put after it, all traffic will be allowed. Hence, all options except A are incorrect.

For more information on Network ACL, please refer to the below URL:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_ACLs.html
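The first-match evaluation described above can be sketched as a scan over rules sorted by number. The rule list mirrors the question; the data structure is hypothetical, not an AWS API.

```python
import ipaddress

# (rule number, source CIDR, action); the '*' catch-all is modeled as infinity.
RULES = [
    (100, "0.0.0.0/0", "ALLOW"),
    (101, "54.12.34.34/32", "DENY"),
    (float("inf"), "0.0.0.0/0", "DENY"),  # the '*' rule
]

def evaluate_nacl(rules, source_ip):
    """Evaluate rules in ascending numeric order; the FIRST match wins,
    regardless of any higher-numbered rule that contradicts it."""
    ip = ipaddress.ip_address(source_ip)
    for _num, cidr, action in sorted(rules, key=lambda r: r[0]):
        if ip in ipaddress.ip_network(cidr):
            return action
    return "DENY"  # implicit deny if nothing matches

print(evaluate_nacl(RULES, "54.12.34.34"))  # ALLOW: rule 100 matches first
```

Swapping the rule numbers (DENY at 100, ALLOW at 101) would flip the outcome, which is the whole point of first-match ordering.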

103
Q

Question 103

You are a solutions architect working for a company. They store their data on S3; however, recently someone accidentally deleted some critical files in S3. You've been asked to prevent this from happening in the future. Which option below can prevent this?

A. Make sure you provide signed URL’s to all users.

B. Enable S3 versioning and Multi-Factor Authentication (MFA) on the bucket.

C. Use S3 Infrequently Accessed storage to store the data on.

D. Create an IAM bucket policy that disables deletes.

A

Answer: B

Versioning is a means of keeping multiple variants of an object in the same bucket. You can use versioning to preserve, retrieve, and restore every version of every object stored in your Amazon S3 bucket. With versioning, you can easily recover from both unintended user actions and application failures. You can optionally add another layer of security by configuring a bucket to enable MFA (Multi-Factor Authentication) Delete, which requires additional authentication for either of the following operations:

1) Change the versioning state of your bucket
2) Permanently delete an object version

Option A is invalid because this would be a maintenance overhead.

Option C is invalid because changing the storage option will not prevent accidental deletion.

Option D is invalid because the question does not ask to remove the delete permission completely.

For more information on S3 versioning, please refer to the below URL:
http://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html

104
Q

Question 104

You run an automobile reselling company that has a popular online store on AWS. The application sits behind an Auto Scaling group and requires new instances of the Auto Scaling group to identify their public and private IP addresses. How can you achieve this?

A. By using Ipconfig for windows or Ifconfig for Linux.

B. By using a cloud watch metric.

C. Using a Curl or Get Command to get the latest meta-data from http://169.254.169.254/latest/meta-data/

D. Using a Curl or Get Command to get the latest user-data from http://169.254.169.254/latest/user-data/

A

Answer: C

To get the private and public IP addresses, you can query the following URLs on the running instance:
http://169.254.169.254/latest/meta-data/local-ipv4
http://169.254.169.254/latest/meta-data/public-ipv4

Option A is partially correct, but is an overhead when you already have the service running in AWS.
Option B is incorrect because you cannot get the IP addresses from a CloudWatch metric.
Option D is incorrect because user-data cannot provide the IP addresses.

For more information on instance metadata, please refer to the below URL:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html
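A sketch of querying the two metadata paths; the HTTP fetch is stubbed out with a dictionary so the snippet runs outside EC2, and on a real instance you would fetch the same URLs with curl or urllib.

```python
METADATA_BASE = "http://169.254.169.254/latest/meta-data/"

def instance_ips(fetch):
    """Return (private_ip, public_ip) via a caller-supplied fetch(url) callable."""
    return (fetch(METADATA_BASE + "local-ipv4"),
            fetch(METADATA_BASE + "public-ipv4"))

# Stub standing in for an HTTP GET, with made-up addresses:
fake = {METADATA_BASE + "local-ipv4": "10.0.1.25",
        METADATA_BASE + "public-ipv4": "54.20.1.5"}
print(instance_ips(fake.get))  # ('10.0.1.25', '54.20.1.5')
```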

105
Q

Question 105

You are the solution architect for a company. The company has a requirement to deploy an application which will need to have session management in place. Which of the following services can be used to store session data for session management?

A. AWS Storage Gateway, Elasticache & ELB

B. ELB, Elasticache & RDS

C. Cloudwatch, RDS & DynamoDb

D. RDS, DynamoDB & Elasticache.

A

Answer: D

These options are the best when it comes to storing session data. Amazon ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory data store or cache in the cloud. The service improves the performance of web applications by allowing you to retrieve information from fast, managed, in-memory data stores, instead of relying entirely on slower disk-based databases.

For more information, please visit the below URL:
https://aws.amazon.com/elasticache/

For DynamoDB, this is also evident from the AWS documentation

For more information, please visit the below URL:
http://docs.aws.amazon.com/gettingstarted/latest/awsgsg-intro/gsg-aws-database.html

And by default, in the industry, RDS has been used to store session data. The Elastic Load Balancer, AWS Storage Gateway and CloudWatch cannot store session data.

106
Q

Question 106

You are working for an enterprise and have been asked to get a support plan in place from AWS. The requirements are:

1) 24x7 access to support.
2) Access to the full set of Trusted Advisor checks.

Which of the following would meet these requirements, ensuring that cost is kept at a minimum?

A. Basic

B. Developer

C. Business

D. Enterprise

A

Answer: C

Some of the features of Business support are

1) 24x7 access to customer service, documentation, whitepapers, and support forums
2) Access to full set of Trusted Advisor checks
3) 24x7 access to Cloud Support Engineers via email, chat & phone

Options A and B are invalid because they have access to the 6 core Trusted Advisor checks only, and they don't have 24x7 support.

Option D is invalid because even though it fulfills all requirements, it is an expensive option; since Business support already covers the requirement, it should be selected when you are taking cost into account.

For a full comparison of plans, please visit the following URL:
https://aws.amazon.com/premiumsupport/compare-plans/

107
Q

Question 107

Which of the following is incorrect with regards to Private IP addresses?

A. In Amazon EC2 classic, the private IP addresses are only returned to Amazon EC2 when the instance is stopped or terminated

B. In Amazon VPC, an instance retains its private IP addresses when the instance is stopped.

C. In Amazon VPC, an instance does NOT retain its private IP addresses when the instance is stopped.

D. In Amazon EC2 classic, the private IP address is associated exclusively with the instance for its lifetime

A

Answer: C

The following is true with regards to private IP addressing. For instances launched in a VPC, a private IPv4 address remains associated with the network interface when the instance is stopped and restarted, and is released when the instance is terminated. For instances launched in EC2-Classic, the private IPv4 address is released when the instance is stopped or terminated. If you restart your stopped instance, it receives a new private IPv4 address.

For more information on IP addressing, please refer to the below link:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-instance-addressing.html

108
Q

Question 108

Which of the following are best practices for monitoring your EC2 instances?

A. Create and implement a monitoring plan that collects monitoring data from all of the parts in your AWS solution

B. Automate monitoring tasks as much as possible

C. Check the log files on your EC2 instances

D. All of the above

A

Answer: D

Use the following best practices for monitoring to help you with your Amazon EC2 monitoring tasks. Make monitoring a priority to head off small problems before they become big ones. Create and implement a monitoring plan that collects monitoring data from all of the parts in your AWS solution so that you can more easily debug a multi-point failure if one occurs. Your monitoring plan should address, at a minimum, the following questions:

1) What are your goals for monitoring?
2) What resources will you monitor?
3) How often will you monitor these resources?
4) What monitoring tools will you use?
5) Who will perform the monitoring tasks?
6) Who should be notified when something goes wrong?

Automate monitoring tasks as much as possible, and check the log files on your EC2 instances.

For more information on monitoring EC2, please refer to the below link:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring_ec2.html

109
Q

Question 109

You work for a major news network in Europe. They have just released a new app which allows users to report on events as and when they happen using their mobile phone. Users are able to upload pictures from the app and then other users will be able to view these pictures. Your organization expects this app to grow very quickly, essentially doubling its user base every month. The app uses S3 to store the media, and you are expecting sudden and large increases in traffic to S3 when a major news event takes place (as people will be uploading content in huge numbers). You need to keep your storage costs to a minimum, however, and it does not matter if some objects are lost. Which storage media should you use to keep costs as low as possible?

A. S3 - Infrequently Accessed Storage.

B. S3 - Reduced Redundancy Storage (RRS).

C. Glacier.

D. S3 - Provisioned IOPS.

A

Answer: B

Since the requirement mentions that it does not matter if objects are lost and you need a low cost storage option then Reduced Redundancy Storage is the best option. The AWS Documentation mentions the below on Reduced Redundancy Storage Reduced Redundancy Storage (RRS) is an Amazon S3 storage option that enables customers to store noncritical, reproducible data at lower levels of redundancy than Amazon S3’s standard storage. It provides a highly available solution for distributing or sharing content that is durably stored elsewhere, or for storing thumbnails, transcoded media, or other processed data that can be easily reproduced

For more information on RRS, please refer to the below link:
https://aws.amazon.com/s3/reduced-redundancy/

110
Q

Question 110

Amazon EC2 provides a repository of public data sets that can be seamlessly integrated into AWS cloud-based applications. What is the monthly charge for using the public data sets?

A. A one-time charge of $1 for all the datasets.

B. $1 per dataset per month

C. $10 per month for all datasets

D. There is no charge for using public data sets

A

Answer: D

AWS hosts a variety of public datasets that anyone can access for free. Previously, large datasets such as the mapping of the Human Genome required hours or days to locate, download, customize, and analyze. Now, anyone can access these datasets via the AWS centralized data repository and analyze those using Amazon EC2 instances or Amazon EMR (Hosted Hadoop) clusters. By hosting this important data where it can be quickly and easily processed with elastic computing resources, AWS hopes to enable more innovation, more quickly.

For more information on datasets please visit the below link
https://aws.amazon.com/public-datasets/

111
Q

Question 111

In VPCs with private and public subnets, database servers should ideally be
launched into:

A. The public subnet

B. The private subnet

C. Either of them

D. Not recommended, they should ideally be launched outside VPC

A

Answer: B

Normally database servers should not be exposed to the internet and should reside in private subnets. The web servers will be part of the public subnet and exposed to the end users.

112
Q

Question 112

You have written a CloudFormation template that creates 1 elastic load balancer fronting 2 EC2 instances. Which section of the template should you edit so that the DNS of the load balancer is returned upon creation of the stack?

A. Resources

B. Parameters

C. Outputs

D. Mappings

A

Answer: C

The example below shows a simple CloudFormation template. It creates an EC2 instance based on the AMI ami-d6f32ab5. When the instance is created, the Outputs section will return the Availability Zone in which it is created.

{
  "Resources": {
    "MyEC2Instance": {
      "Type": "AWS::EC2::Instance",
      "Properties": { "ImageId": "ami-d6f32ab5" }
    }
  },
  "Outputs": {
    "Availability": {
      "Description": "The Instance Availability Zone",
      "Value": { "Fn::GetAtt": [ "MyEC2Instance", "AvailabilityZone" ] }
    }
  }
}
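For the load balancer in the question, the Outputs section would use Fn::GetAtt on the ELB's DNSName attribute instead. A sketch of such an Outputs fragment, built as a Python dict and printed as JSON; the logical name MyLoadBalancer is an assumed placeholder.

```python
import json

# Hypothetical Outputs fragment returning the ELB's DNS name on stack creation.
outputs = {
    "Outputs": {
        "LoadBalancerDNS": {
            "Description": "DNS name of the load balancer",
            "Value": {"Fn::GetAtt": ["MyLoadBalancer", "DNSName"]},
        }
    }
}
print(json.dumps(outputs, indent=2))
```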

113
Q

Question 113

Is it true that EBS can always tolerate an Availability Zone failure?

A. No, an EBS volume is stored in a single Availability Zone

B. Yes, EBS volume has multiple copies so it should be fine

C. Depends on how it is setup

D. Depends on the Region where EBS volume is initiated

A

Answer: A

An EBS volume is replicated across physical hardware within the same Availability Zone, so if the AZ fails, the EBS volume will fail with it. That is why AWS recommends always keeping EBS volume snapshots, which are stored in S3, for high durability.

When you create an EBS volume in an Availability Zone, it is automatically replicated within that zone to prevent data loss due to the failure of any single hardware component.

Option B is wrong: an EBS volume has multiple copies, but within the same AZ, so the volume will not persist in case of AZ failure.

Option C is wrong because there is no special setup available to persist an EBS volume across Regions or AZs.

Option D is wrong as an EBS volume has the same behavior regardless of region.

As per AWS user guide:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumes.html

114
Q

Question 114

Which of the following benefits does adding Multi-AZ deployment in RDS provide? Choose 2 answers from the options given below

A. MultiAZ deployed database can tolerate an Availability Zone failure

B. Decrease latencies if app servers accessing database are in multiple Availability zones

C. Make database access times faster for all app servers

D. Make database more available during maintenance tasks

A

Answer: A, D

Some of the advantages of Multi-AZ RDS deployments are given below. If an Availability Zone failure or DB Instance failure occurs, your availability impact is limited to the time automatic failover takes to complete. The availability benefits of Multi-AZ deployments also extend to planned maintenance and backups. In the case of system upgrades like OS patching or DB Instance scaling, these operations are applied first on the standby, prior to the automatic failover. As a result, your availability impact is, again, only the time required for automatic failover to complete. If a storage volume on your primary fails in a Multi-AZ deployment, Amazon RDS automatically initiates a failover to the up-to-date standby.

For more information on Multi-AZ RDS deployments, please visit the link
https://aws.amazon.com/rds/details/multi-az/

115
Q

Question 115

A company has the following EC2 instance configuration. They are trying to connect to the instance from the internet. They have verified the existence of the Internet gateway and the route tables are in place. What could be the issue?

A. It’s launched in the wrong Availability Zone

B. The AMI used to launch the instance cannot be accessed from the internet

C. The private IP is wrongly assigned

D. There is no Elastic IP Assigned

A

Answer: D

An instance must either have a public or Elastic IP in order to be accessible from the internet. A public IP address is reachable from the Internet. You can use public IP addresses for communication between your instances and the Internet. An Elastic IP address is a static IP address designed for dynamic cloud computing. An Elastic IP address is associated with your AWS account. With an Elastic IP address, you can mask the failure of an instance or software by rapidly remapping the address to another instance in your account. An Elastic IP address is a public IP address, which is reachable from the Internet. If your instance does not have a public IP address, you can associate an Elastic IP address with your instance to enable communication with the Internet; for example, to connect to your instance from your local computer.

For more information on Elastic IP’s, please visit the link
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html

116
Q

Question 116

What is the basic requirement to login into an EC2 instance on the AWS cloud?

A. Volumes

B. AMIs

C. Key Pairs

D. S3

A

Answer: C

Amazon EC2 uses public-key cryptography to encrypt and decrypt login information. Public-key cryptography uses a public key to encrypt a piece of data, such as a password, then the recipient uses the private key to decrypt the data. The public and private keys are known as a key pair. To log in to your instance, you must create a key pair, specify the name of the key pair when you launch the instance, and provide the private key when you connect to the instance. Linux instances have no password, and you use a key pair to log in using SSH. With Windows instances, you use a key pair to obtain the administrator password and then log in using RDP.

117
Q

Question 117

Which of the below features allows you to take backups of your EBS volumes? Choose one answer from the options given below.

A. Volumes

B. State Manager

C. Placement Groups

D. Snapshots

A

Answer: D

You can easily create a snapshot from a volume while the instance is running and the volume is in use. You can do this from the EC2 dashboard.

For more information on EBS snapshots, please visit the link-
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSSnapshots.html

118
Q

Question 118

A company wants to host a selection of MongoDB instances. They are expecting a high load and want to have as low latency as possible. Which class of instances from the below list should they choose?

A. T2

B. I2

C. T1

D. G2

A

Answer: B

I2 instances are optimized to deliver tens of thousands of low-latency, random I/O operations per second (IOPS) to applications. They are well suited for the following scenarios: NoSQL databases (for example, Cassandra and MongoDB) Clustered databases Online transaction processing (OLTP) systems

For more information on I2 instances, please visit the link
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/i2-instances.html

119
Q

Question 119

Which of the below elements can you manage in the IAM dashboard? Choose 3 answers from the options given below

A. Users

B. Encryption Keys

C. Cost Allocation Reports

D. Policies

A

Answer: A, B, D

When you go to your IAM dashboard, you will see the set of elements which can be configured.

120
Q

Question 120

What are the languages currently supported by AWS Lambda? Choose 3 answers from the options given below.

A. Node.js

B. Angular JS

C. Java

D. Python

A

Answer: A, C, D

AWS Lambda supports code written in Node.js (JavaScript), Python, Java (Java 8 compatible), and C# (using the .NET Core runtime).

For more information on Amazon Lambda, please visit
https://aws.amazon.com/lambda/?nc2=h_m1

121
Q

Question 121

A photo-sharing service stores pictures in Amazon Simple Storage Service (S3) and allows application sign-in using an OpenID Connect-compatible identity provider. Which AWS Security Token Service approach to temporary access should you use for the Amazon S3 operations?

A. SAML-based Identity Federation

B. Cross-Account Access

C. AWS Identity and Access Management roles

D. Web Identity Federation

A

Answer: D

The AWS documentation mentions the below. With web identity federation, you don't need to create custom sign-in code or manage your own user identities. Instead, users of your app can sign in using a well-known identity provider (IdP), such as Login with Amazon, Facebook, Google, or any other OpenID Connect (OIDC)-compatible IdP, receive an authentication token, and then exchange that token for temporary security credentials in AWS that map to an IAM role with permissions to use the resources in your AWS account. Using an IdP helps you keep your AWS account secure, because you don't have to embed and distribute long-term security credentials with your application.

For more information on Web Identity Federation, please visit the below URL:
http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_oidc.html

122
Q

Question 122

Where does CloudTrail store all of the logs that it creates? Choose one answer from the options given below.

A. A separate EC2 instance with EBS storage

B. A RDS instance

C. A DynamoDB instance

D. Amazon S3

A

Answer: D

When you enable CloudTrail, you need to provide an S3 bucket where all the logs can be written to.

For more information on AWS CloudTrail, please visit
https://aws.amazon.com/cloudtrail/

123
Q

Question 123

In the event of an unplanned outage of your primary DB, AWS RDS automatically switches over to the secondary. In such a case which record in Route 53 is changed? Select one answer from the options given below

A. DNAME

B. CNAME

C. TXT

D. MX

A

Answer: B

The AWS documentation clearly highlights what happens in the event of an automatic failover for an AWS RDS instance.

For more information on AWS RDS, please visit
https://aws.amazon.com/rds/faqs/

124
Q

Question 124

Which of the below resources cannot be tagged in AWS?

A. Images

B. EBS Volumes

C. VPC endpoint

D. VPC

A

Answer: C

Tags enable you to categorize your AWS resources in different ways, for example, by purpose, owner, or environment. This is useful when you have many resources of the same type — you can quickly identify a specific resource based on the tags you’ve assigned to it. Each tag consists of a key and an optional value, both of which you define. But you cannot tag a VPC endpoint

For more information on AWS Resourcing Tagging, please visit
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html

125
Q

Question 125

What type of monitoring for EBS volumes is available automatically in 5 minute periods at no charge?

A. Basic

B. Primary

C. Detailed

D. Local

A

Answer: A

Visit the AWS documentation for the types of monitoring data.

For more information on Volume monitoring, please visit
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring-volume-status.html

126
Q

Question 126

There is a company website that is going to be launched in the coming weeks. There is a probability that the traffic will be quite high in the first couple of weeks. In the event of a load failure, how can you set up DNS failover to a static website? Choose the correct answer from the options given below.

A. Duplicate the exact application architecture in another region and configure DNS weight-based routing

B. Enable failover to an on-premise data center to the application hosted there.

C. Use Route 53 with the failover option to failover to a static S3 website bucket or CloudFront distribution.

D. Add more servers in case the application fails.

A

Answer: C

Amazon Route 53 health checks monitor the health and performance of your web applications, web servers, and other resources. If you have multiple resources that perform the same function, you can configure DNS failover so that Amazon Route 53 will route your traffic from an unhealthy resource to a healthy resource. For example, if you have two web servers and one web server becomes unhealthy, Amazon Route 53 can route traffic to the other web server. So you can route traffic to a website hosted on S3 or to a cloudfront distribution.

For more information on DNS failover using Route 53, please refer to the below link
http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover.html

127
Q

Question 127

What is one of the major advantages of having a VPN in AWS?

A. You don’t have to worry about security, this is managed by AWS.

B. You can connect your cloud resources to on-premise data centers using VPN connections

C. You can provision unlimited number of S3 resources.

D. None of the above

A

Answer: B

One of the major advantages is that you can connect your on-premises data center to AWS via a VPN connection. You can create an IPsec hardware VPN connection between your VPC and your remote network. On the AWS side of the VPN connection, a virtual private gateway provides two VPN endpoints for automatic failover. You configure your customer gateway, which is the physical device or software application on the remote side of the VPN connection.

For more information on VPN connections, please refer to the below link
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpn-connections.html

128
Q

Question 128

One of your instances is reporting an unhealthy system status check. However, this is not something you should have to monitor and repair on your own. How might you automate the repair of the system status check failure in an AWS environment?

Choose the correct answer from the options given below

A. Create CloudWatch alarms that stop and start the instance based off of status check alarms

B. Write a script that queries the EC2 API for each instance status check

C. Write a script that periodically shuts down and starts instances based on certain stats.

D. Implement a third party monitoring tool.

A

Answer: A

Using Amazon CloudWatch alarm actions, you can create alarms that automatically stop, terminate, reboot, or recover your EC2 instances. You can use the stop or terminate actions to help you save money when you no longer need an instance to be running. You can use the reboot and recover actions to automatically reboot those instances or recover them onto new hardware if a system impairment occurs.

For more information on using alarm actions, please refer to the below link
http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/UsingAlarmActions.html
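As a sketch of what such an alarm looks like, the parameters below describe a recover action tied to the StatusCheckFailed_System metric. The instance ID and region are placeholders, and the action list is illustrative; with boto3 these parameters would be passed to `put_metric_alarm`.

```python
# Hypothetical sketch: parameters for a CloudWatch alarm that recovers an
# EC2 instance when the system status check fails. Instance ID and region
# below are placeholders, not real resources.
alarm_params = {
    "AlarmName": "recover-web-server",
    "Namespace": "AWS/EC2",
    "MetricName": "StatusCheckFailed_System",
    "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    "Statistic": "Minimum",
    "Period": 60,
    "EvaluationPeriods": 2,
    "Threshold": 0,
    "ComparisonOperator": "GreaterThanThreshold",
    # The recover action is an ARN of this form (region is a placeholder):
    "AlarmActions": ["arn:aws:automate:us-east-1:ec2:recover"],
}
# To create the alarm for real you would pass these parameters to
# boto3.client("cloudwatch").put_metric_alarm(**alarm_params)
```

Note that the recover action moves the instance to new underlying hardware while keeping its instance ID, EBS volumes, and private IP addresses.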

129
Q

Question 129

A company is running three production web server reserved EC2 instances with EBS-backed root volumes. These instances have a consistent CPU load of 80%. Traffic is being distributed to these instances by an Elastic Load Balancer. They also have production and development Multi-AZ RDS MySQL databases. What recommendation would you make to reduce cost in this environment without affecting availability of mission-critical systems? Choose the correct answer from the options given below

A. Consider using on-demand instances instead of reserved EC2 instances

B. Consider not using a Multi-AZ RDS deployment for the development database

C. Consider using spot instances instead of reserved EC2 instances

D. Consider removing the Elastic Load Balancer

A

Answer: B

Multi-AZ deployments are intended for production environments rather than development environments, so you can reduce costs by disabling Multi-AZ for the development database. Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database (DB) Instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. In case of an infrastructure failure, Amazon RDS performs an automatic failover to the standby (or to a read replica in the case of Amazon Aurora), so that you can resume database operations as soon as the failover is complete. Since the endpoint for your DB Instance remains the same after a failover, your application can resume database operation without the need for manual administrative intervention.

For more information on Multi-AZ RDS, please refer to the below link
https://aws.amazon.com/rds/details/multi-az/

130
Q

Question 130

A company has assigned two web server instances in a VPC subnet to an Elastic Load Balancer (ELB). However, the instances and the ELB are not reachable via URL to the Elastic Load Balancer (ELB). How can you resolve the issue so that your web server instances can start serving the web app data to the public Internet? Choose the correct answer from the options given below

A. Attach an Internet gateway to the VPC and route it to the subnet

B. Add an elastic IP address to the instance

C. Use Amazon Elastic Load Balancer to serve requests to your instances located in the internal subnet

D. None of the above

A

Answer: A

An Internet gateway attached to the VPC is a prerequisite for instances to be reachable from the internet; without it, the instances will not be reachable. You can register instances from a private subnet with an ELB. If the ELB itself is launched in a private subnet, AWS automatically assigns it the “Internal” scheme; if its subnet is public, AWS assigns the “Internet-facing” scheme. You can attach an Internet gateway to the VPC and add an IGW route to the subnet’s route table to make it reachable over the internet; in that case AWS will still show the ELB scheme as internal, but internet traffic will be allowed to reach the instances.

See internal load balancer details here:
http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-create-internal-load-balancer.html

For more information on Internet gateways, please refer to the below link:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Internet_Gateway.html

131
Q

Question 131

A company has EC2 instances running in AWS behind an Auto Scaling solution. Many application requests or work items are being lost because of the load on the servers. The Auto Scaling solution launches new instances to take the load, but some application requests are still being lost. Which of the following is likely to provide the most cost-effective solution to avoid losing recently submitted requests?

Choose the correct answer from the options given below

A. Use an SQS queue to decouple the application components

B. Keep one extra EC2 instance always powered on in case a spike occurs

C. Use larger instances for your application

D. Pre-warm your Elastic Load Balancer

A

Answer: A

Amazon Simple Queue Service (SQS) is a fully-managed message queuing service for reliably communicating among distributed software components and microservices - at any scale. Building applications from individual components that each perform a discrete function improves scalability and reliability, and is best practice design for modern applications

For more information on SQS, please refer to the below link
https://aws.amazon.com/sqs/

132
Q

Question 132

After migrating an application architecture from on-premises to AWS, for which of the following services that your application uses will you not be responsible for ongoing package maintenance? Choose the 2 correct answers from the options below.

A. Elastic Beanstalk

B. RDS

C. DynamoDB

D. EC2

A

Answer: B, C

Both RDS and DynamoDB are managed solutions provided by AWS. Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while managing time-consuming database administration tasks, freeing you up to focus on your applications and business.

For more information on RDS, please refer to the below link
https://aws.amazon.com/rds/

Amazon DynamoDB is a fast and flexible NoSQL database service for all applications that need consistent, single-digit millisecond latency at any scale. It is a fully managed cloud database and supports both document and key-value store models.

For more information on DynamoDB, please refer to the below link
https://aws.amazon.com/dynamodb/

133
Q

Question 133

What is the difference between an availability zone and an edge location? Choose the correct answer from the options below

A. Edge locations are used as control stations for AWS resources

B. An edge location is used as a link when building load balancing between regions

C. An Availability Zone is an isolated location inside a region; an edge location will deliver cached content to the closest location to reduce latency

D. An availability zone is a grouping of AWS resources in a specific region; an edge location is a specific resource within the AWS region

A

Answer: C

Edge locations: Using a network of edge locations around the world, Amazon CloudFront caches copies of your static content close to viewers, lowering latency when they download your objects and giving you the high, sustained data transfer rates needed to deliver large popular objects to end users at scale.

For more information on Cloudfront and edge locations, please refer to the below link
https://aws.amazon.com/cloudfront/

Availability Zones: Each region is completely independent. Each Availability Zone is isolated, but the Availability Zones in a region are connected through low-latency links.

For more information on AZ, please refer to the below link
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html

134
Q

Question 134

What is an AWS service which can help protect web applications from common security threats from the outside world?

Choose one answer from the options below

A. NAT

B. WAF

C. SQS

D. SES

A

Answer: B

Option A is wrong because NAT is used to relay traffic from private subnets to the internet.

Option C is wrong because SQS is used as a queuing service in AWS.

Option D is wrong because SES is used as an emailing service in AWS.

AWS WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources. AWS WAF gives you control over which traffic to allow or block to your web applications by defining customizable web security rules. You can use AWS WAF to create custom rules that block common attack patterns, such as SQL injection or cross-site scripting, and rules that are designed for your specific application. New rules can be deployed within minutes, letting you respond quickly to changing traffic patterns. Also, AWS WAF includes a full-featured API that you can use to automate the creation, deployment, and maintenance of web security rules. In WAF, you can create a set of Conditions and Rules to protect your network against attacks from outside.

For more information on AWS WAF please visit the below link
https://aws.amazon.com/waf/

135
Q

Question 135

Your supervisor asks you to create a decoupled application whose process includes dependencies on EC2 instances and servers located in your company’s on-premises data center. Which of these are you least likely to recommend as part of that process? Choose the correct answer from the options below:

A. SQS polling from an EC2 instance deployed with an IAM role

B. An SWF workflow

C. SQS polling from an EC2 instance using IAM user credentials

D. SQS polling from an on-premises server using IAM user credentials

A

Answer: C

Note that the question asks you for the least likely recommended option.

The correct answer is C: SQS polling from an EC2 instance using IAM user credentials. When deploying EC2 instances, an IAM role should be used to grant permissions rather than storing IAM user credentials on the instance; IAM roles provide secure communication between EC2 instances and other AWS resources. The most likely scenario is therefore option A, SQS polling from an EC2 instance deployed with an IAM role. An IAM role is similar to a user in that it is an AWS identity with permission policies that determine what the identity can and cannot do in AWS. However, instead of being uniquely associated with one person, a role is intended to be assumable by anyone who needs it. A role also does not have long-term credentials (password or access keys) associated with it; instead, when a user assumes a role, temporary access keys are created dynamically and provided to the user. You should never use IAM user API keys on an instance to authenticate SQS polling, which makes option C the least likely recommendation.

For more information on IAM Roles, please refer to the below link:
http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html

136
Q

Question 136

An EC2 instance retrieves a message from an SQS queue, begins
processing the message, then crashes. What happens to the message? Choose
the correct answer from the options below:

A. When the message visibility timeout expires, the message becomes
available for processing by other EC2 instances

B. It will remain in the queue and stay assigned to the same EC2 instance
if the instance comes back online within the visibility timeout.

C. The message is deleted and becomes duplicated when the EC2 instance
comes online.

A

Answer: A

When a consumer receives and processes a message from a queue, the
message remains in the queue. Amazon SQS doesn’t automatically delete the
message: Because it’s a distributed system, there is no guarantee that the
component will actually receive the message (the connection can break or a
component can fail to receive the message). Thus, the consumer must delete
the message from the queue after receiving and processing it.

Q: How does Amazon SQS allow multiple readers to access the same
message queue without losing messages or processing them multiple times?

Every Amazon SQS queue has a configurable visibility timeout. A
message is not visible to any other reader for a designated amount of time
when it is read from a message queue. As long as the amount of time it takes
to process the message is less than the visibility timeout, every message is
processed and deleted. If the component processing of the message fails or
becomes unavailable, the message again becomes visible to any component
reading the message queue once the visibility timeout ends. This allows
multiple components to read messages from the same message queue, each
one working to process different messages.
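The behavior above can be sketched as a toy queue model. This is not the real SQS API — the class, method names, and timing values below are invented purely to illustrate how a visibility timeout hides a received message and then re-exposes it if the consumer never deletes it.

```python
import time

class TinyQueue:
    """Toy model of SQS visibility-timeout behavior (not the real API)."""
    def __init__(self, visibility_timeout):
        self.visibility_timeout = visibility_timeout
        self.messages = {}          # message id -> body
        self.invisible_until = {}   # message id -> timestamp

    def send(self, msg_id, body):
        self.messages[msg_id] = body

    def receive(self, now=None):
        now = time.time() if now is None else now
        for msg_id, body in self.messages.items():
            if self.invisible_until.get(msg_id, 0) <= now:
                # Hand the message to one consumer and hide it from others
                self.invisible_until[msg_id] = now + self.visibility_timeout
                return msg_id, body
        return None

    def delete(self, msg_id):
        # The consumer must delete after successful processing
        self.messages.pop(msg_id, None)
        self.invisible_until.pop(msg_id, None)

queue = TinyQueue(visibility_timeout=30)
queue.send("m1", "work item")

first = queue.receive(now=0)    # consumer A receives the message...
second = queue.receive(now=10)  # ...consumer B sees nothing (still invisible)
third = queue.receive(now=40)   # A "crashed": timeout expired, B receives it
```

The key point the exam question tests is the third call: because consumer A never deleted the message, it becomes visible again once the timeout ends.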

For more information on SQS visibility timeout, please refer to the below link

http://sqs-public-images.s3.amazonaws.com/Building Scalabale EC2 applications with SQS2.pdf

(this document explains in detail how EC2 and SQS work together in all scenarios. There is also an explanation of what happens if the EC2 instance crashes before it deletes a message from the queue)

http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDevelope

137
Q

Question 137

You are running an instance store based instance. You shutdown and
then start the instance. You then notice that the data which you have saved
earlier is no longer available. What might be the cause of this? Choose the
correct answer from the options below

A. The volume was not big enough to handle all of the processing data

B. The EC2 instance was using EBS backed root volumes, which are
ephemeral and only live for the life of the instance

C. The EC2 instance was using instance store volumes, which are
ephemeral and only live for the life of the instance

D. The instance might have been compromised

A

Answer: C

The data in an instance store persists only during the lifetime of its
associated instance. If an instance reboots (intentionally or unintentionally),
data in the instance store persists. However, data in the instance store is lost
under the following circumstances: the underlying disk drive fails, the
instance stops, or the instance terminates.

For more information on the instance store, please refer to the below link
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html

138
Q

Question 138

You have been told that you need to set up a bastion host by your
manager in the cheapest, most secure way, and that you should be the only
person that can access it via SSH. Which of the following setups would satisfy your manager’s request? Choose the correct answer from the options
below

A. A small EC2 instance and a security group which only allows access
on port 22 via your IP address

B. A large EC2 instance and a security group which only allows access on
port 22 via your IP address

C. A large EC2 instance and a security group which only allows access on
port 22

D. A small EC2 instance and a security group which only allows access
on port 22

A

Answer: A

The bastion host’s security group should only allow access from a
particular IP address for maximum security. Since the request is for the
cheapest infrastructure, you should use a small instance.
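As an illustration, the rule described above can be expressed in the shape that boto3 uses for security-group ingress permissions. The admin IP below is a documentation-range placeholder, and the surrounding call is only sketched in a comment.

```python
# Hypothetical ingress rule for a bastion host security group:
# SSH (port 22) allowed only from a single admin IP (placeholder address).
ingress_rule = {
    "IpProtocol": "tcp",
    "FromPort": 22,
    "ToPort": 22,
    "IpRanges": [{"CidrIp": "203.0.113.10/32"}],  # /32 = exactly one address
}
# With boto3, a rule in this shape would be passed to
# ec2.authorize_security_group_ingress(GroupId=..., IpPermissions=[ingress_rule])
```

The /32 suffix is what makes this “only your IP address”: a wider CIDR such as 0.0.0.0/0 would open SSH to the whole internet.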

139
Q

Question 139

Which of the following are Invalid VPC peering configurations? Choose
3 answers from the options below

A. Overlapping CIDR blocks

B. Transitive Peering

C. Edge to Edge routing via a gateway

D. One to one relationship between 2 VPC’s

A

Answer: A, B, C

These invalid configurations are listed in the AWS documentation.

For more information on VPC peering configurations, please refer to the below link
http://docs.aws.amazon.com/AmazonVPC/latest/PeeringGuide/invalid-peering-configurations.html
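The overlapping-CIDR case in option A is easy to check locally. The sketch below uses Python’s standard `ipaddress` module; the function name and example CIDR ranges are invented for illustration.

```python
import ipaddress

def can_peer(cidr_a, cidr_b):
    """VPC peering is invalid when the two VPC CIDR blocks overlap."""
    a = ipaddress.ip_network(cidr_a)
    b = ipaddress.ip_network(cidr_b)
    return not a.overlaps(b)

print(can_peer("10.0.0.0/16", "10.0.0.0/16"))    # False: identical ranges
print(can_peer("10.0.0.0/16", "10.0.128.0/17"))  # False: one contains the other
print(can_peer("10.0.0.0/16", "172.31.0.0/16"))  # True: disjoint ranges
```

This is why planning non-overlapping CIDR ranges up front matters: two VPCs with matching or nested blocks can never be peered.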

140
Q

Question 140

You’ve been tasked with building out a duplicate environment in another
region for disaster recovery purposes. Part of your environment relies on
EC2 instances with preconfigured software. What steps would you take to
configure the instances in another region? Choose the correct answer from
the options below

A. Create an AMI of the EC2 instance

B. Create an AMI of the EC2 instance and copy the AMI to the desired
region

C. Make the EC2 instance shareable among other regions through IAM
permissions

D. None of the above

A

Answer: B

You can copy an Amazon Machine Image (AMI) within or across an
AWS region using the AWS Management Console, the AWS command line
tools or SDKs, or the Amazon EC2 API, all of which support
the CopyImage action. You can copy both Amazon EBS-backed AMIs and
instance store-backed AMIs. You can copy AMIs with encrypted snapshots
and encrypted AMIs.

For more information on copying AMIs, please refer to the below link
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/CopyingAMIs.html

141
Q

Question 141

In order to establish a successful site-to-site VPN connection from your
on-premise network to the VPC (Virtual Private Cloud), which of the
following needs to be configured outside of the VPC? Choose the correct
answer from the options below

A. The main route table to route traffic through a NAT instance

B. A public IP address on the customer gateway for the on-premise
network

C. A dedicated NAT instance in a public subnet

D. An Elastic IP address to the Virtual Private Gateway

A

Answer: B

On the customer gateway side, you need a public IP address that the VPN
connection can reach.

For more information on VPN connections, please refer to the below link:

http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpn-connections.html

142
Q

Question 142

You have 5 CloudFormation templates. Each template has been defined
for a specific purpose. What determines the cost of using the
CloudFormation templates? Choose the correct answer from the options
below

A. $1.10 per template per month

B. The length of time it takes to build the architecture with
CloudFormation

C. It depends on the region the template is created in

D. CloudFormation does not have a cost but you are charged for the
underlying resources it builds

A

Answer: D

CloudFormation itself has no cost; you are charged only for the underlying
AWS resources the templates create.

For more information on CloudFormation pricing, please refer to the below link
https://aws.amazon.com/cloudformation/pricing/

143
Q

Question 143

Does S3 provide read-after-write consistency for new objects? Choose
the correct answer from the options below

A. Yes, for all regions

B. No, not for any region

C. Yes, but only for certain regions and for new objects

D. Yes, but only for certain regions, not the us-standard region

A

Answer: A

As stated in the AWS documentation, S3 provides read-after-write consistency for PUTs of new objects in all regions.

For more information on S3, please refer to the below link
https://aws.amazon.com/s3/faqs/

144
Q

Question 144

Your organization has been using a HSM (Hardware Security Module)
for secure key storage. It is only used for generating keys for your EC2
instances. Unfortunately, the HSM has been zeroized after someone
attempted to log in as the administrator three times using an invalid
password. This means that the encryption keys on it have been wiped. You did not have a copy of the keys stored anywhere else. How can you obtain a
new copy of the keys that you had stored on HSM? Choose the correct answer
from the options below

A. You cannot; the keys are lost if you did not have a copy.

B. Contact AWS Support; your incident will be routed to the team that
supports AWS CloudHSM and a copy of the keys will be sent to you after
verification

C. Restore a snapshot of the HSM

D. You can still connect via CLI; use the command ‘get-client-configuration’
and you can get a copy of the keys

A

Answer: A

This is given in the AWS documentation.

For more information on CloudHSM, please refer to the below link
https://aws.amazon.com/cloudhsm/faqs/

145
Q

Question 145

What service from AWS can help manage the budgets for all resources in
AWS? Choose one answer from the options below

A. Cost Explorer

B. Cost Allocation Tags

C. AWS Budgets

D. Payment History

A

Answer: C

A budget is a way to plan your usage and your costs (also known as spend data), and to track how close your usage and costs are to exceeding your budgeted amount. Budgets use data from Cost Explorer to provide you
with a quick way to see your usage-to-date and current estimated charges
from AWS, and to see how much your predicted usage accrues in charges by
the end of the month. Budgets also compare the current estimated usage and
charges to the amount that you indicated that you want to use or spend, and
lets you see how much of your budget has been used. AWS updates your
budget status several times a day. Budgets track your unblended costs,
subscriptions, and refunds. You can create budgets for different types of
usage and different types of cost. For example, you can create a budget to see
how many EC2 hours you have used, or how many GB you have stored in an
S3 bucket. You can also create a budget to see how much you are spending on
a particular service, or how often you call a particular API operation. Budgets
use the same data filters as Cost Explorer.

To create your budget, you can perform the below steps

Step 1) Go to your billing section, go to Budgets and create a new Budget

Step 2) In the next screen, you can then mention the budget amount and
what services to link the budget to.

For more information on AWS Budgets please visit the below link
http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/budgets-managing-costs.html

146
Q

Question 146

A customer wants to leverage Amazon Simple Storage Service (S3) and
Amazon Glacier as part of their backup and archive infrastructure. The
customer plans to use third-party software to support this integration. Which
approach will limit the access of the third party software to only the Amazon
S3 bucket named “company-backup”?

A. A custom bucket policy limited to the Amazon S3 API in the Amazon
Glacier archive “company-backup”

B. A custom bucket policy limited to the Amazon S3 API in “company-
backup”

C. A custom IAM user policy limited to the Amazon S3 API for the
Amazon Glacier archive “company-backup”.

D. A custom IAM user policy limited to the Amazon S3 API in
“company-backup”.

A

Answer: D

You can use IAM user policies and attach them to users/groups that
need specific access to S3 buckets. An example of creating such policies is
given in the link below
https://aws.amazon.com/blogs/security/writing-iam-policies-how-to-grant-access-to-an-amazon-s3-bucket/
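A minimal sketch of such a user policy, built as a plain JSON document, might look like the following. The action list is illustrative, not prescriptive — in practice you would grant only the operations the backup tool actually needs.

```python
import json

# Hedged sketch of an IAM user policy scoped to the "company-backup" bucket.
# Note the two resource forms: the bucket ARN for ListBucket, and the
# bucket-contents ARN (with /*) for object-level actions.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": "arn:aws:s3:::company-backup",
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::company-backup/*",
        },
    ],
}
policy_json = json.dumps(policy, indent=2)
```

Attaching a policy like this to the IAM user whose credentials the third-party software uses limits that software to the single named bucket, which is exactly what the question asks for.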

147
Q

Question 147

Currently you’re helping design and architect a highly available
application. After building the initial environment, you’ve found that part of
your application does not work correctly until port 443 is added to the
security group. After adding port 443 to the appropriate security group, how
much time will it take before the changes are applied and the application
begins working correctly? Choose the correct answer from the options
below

A. Generally, it takes 2-5 minutes in order for the rules to propagate

B. Immediately after a reboot of the EC2 instances belonging to that security group

C. Changes apply instantly to the security group, and the application should be able to respond to 443 requests

D. It will take 60 seconds for the rules to apply to all availability zones within the region

A

Answer: C

As given in the AWS documentation, changes to security group rules apply
immediately.

For more information on security groups, please refer to the below link
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_SecurityGroups.html

148
Q

Question 148

Which of the following services allow the administrator access to the
underlying operating system? Choose the 2 correct answers from the options
below

A. Amazon RDS

B. Amazon EMR

C. Amazon EC2

D. DynamoDB

A

Answer: B, C

Amazon Elastic Compute Cloud (Amazon EC2) is a web service that
provides secure, resizable compute capacity in the cloud. It is designed to
make web-scale cloud computing easier for developers.

For more information on EC2, please refer to the below link
https://aws.amazon.com/ec2/

Your security credentials identify you to services in AWS and grant you
unlimited use of your AWS resources, such as your Amazon EC2 resources.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/UsingIAM.html

Amazon EMR provides a managed Hadoop framework that makes it
easy, fast, and cost-effective to process vast amounts of data across
dynamically scalable Amazon EC2 instances. You can also run other popular
distributed frameworks such as Apache Spark, HBase, Presto, and Flink in
Amazon EMR, and interact with data in other AWS data stores such as
Amazon S3 and Amazon DynamoDB.

For more information on EMR, please refer to the below link
https://aws.amazon.com/emr/

Amazon EMR and applications such as Hadoop need permission to
access other AWS resources when running jobs on behalf of users.

http://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-iam-roles.html

149
Q

Question 149

Besides regions and their included availability zones, which of the
following is another “regional” data center location used for content
distribution? Choose the correct answer from the options below

A. Edge Location

B. Front Location

C. Backend Location

D. Cloud Location

A

Answer: A

Using a network of edge locations around the world, Amazon
CloudFront caches copies of your static content close to viewers, lowering
latency when they download your objects and giving you the high, sustained
data transfer rates needed to deliver large popular objects to end users at scale.

For more information on CloudFront and edge locations, please refer to the below link
https://aws.amazon.com/cloudfront/

150
Q

Question 150

What are the main benefits of IAM groups? Choose 2 answers from the
options below

A. Ability to create custom permission policies.

B. Allow for EC2 instances to gain access to S3.

C. Easier user/policy management.

D. Assign IAM permission policies to more than one user at a
time.

A

Answer: C, D

An IAM group is a collection of IAM users. Groups let you specify
permissions for multiple users, which can make it easier to manage the
permissions for those users. For example, you could have a group
called Admins and give that group the types of permissions that
administrators typically need. Any user in that group automatically has the
permissions that are assigned to the group. If a new user joins your
organization and needs administrator privileges, you can assign the
appropriate permissions by adding the user to that group. Similarly, if a
person changes jobs in your organization, instead of editing that user’s
permissions, you can remove him or her from the old groups and add him or
her to the appropriate new groups.

For more information on IAM groups, please refer to the below link
http://docs.aws.amazon.com/IAM/latest/UserGuide/id_groups.html

151
Q

Question 151

API Access Keys are required to make programmatic call to AWS from
which of the following? Choose the 3 correct answers from the options
below

A. AWS Tools for Windows PowerShell

B. Managing AWS resources through the AWS console

C. Direct HTTP call using the API

D. AWS CLI

A

Answer: A, C, D

By default, when you create an access key, its status is Active, which
means the user can use the access key for AWS CLI, Tools for Windows
PowerShell, and API calls. Each user can have two active access keys, which
is useful when you must rotate the user’s access keys. You can disable a user’s
access key, which means it can’t be used for API calls. You might do this
while you’re rotating keys or to revoke API access for a user

For more information on API Access keys, please refer to the below link

http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html

152
Q

Question 152

A customer is leveraging Amazon Simple Storage Service in eu-west-1 to
store static content for a web-based property. The customer is storing objects
using the Standard Storage class. Where are the customer’s objects
replicated?

A. A single facility in eu-west-1 and a single facility in eu-central-1

B. A single facility in eu-west-1 and a single facility in us-east-1

C. Multiple facilities in eu-west-1

D. A single facility in eu-west-1

A

Answer: C

It is clearly mentioned in the AWS documentation that data in an S3
bucket is replicated to multiple facilities in the same region. For more
information on S3 product details, please refer to the below link
https://aws.amazon.com/s3/details/

153
Q

Question 153

How are Network access rules evaluated? Choose the correct answer
from the options below

A. Rules are evaluated by rule number, from highest to lowest, and
executed immediately when a matching allow/deny rule is found.

B. All rules are evaluated before any traffic is allowed or denied.

C. Rules are evaluated by rule number, from lowest to highest, and
executed immediately when a matching allow/deny rule is found.

D. Rules are evaluated by rule number, from lowest to highest, and
executed after all rules are checked for conflicting allow/deny rules.

A

Answer: C

Network ACL rules are evaluated in order, starting with the lowest
numbered rule; the first rule that matches the traffic is applied
immediately, regardless of any higher-numbered rule that might contradict
it. For more information on NACLs, please refer to the below link
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_ACLs.html
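The evaluation order can be sketched as a small simulation (a hypothetical helper, not an AWS API): rules are checked by ascending rule number, the first match wins, and an implicit final rule denies everything else.

```python
def evaluate_nacl(rules, port):
    """Evaluate network ACL rules the way AWS does: by ascending rule
    number, applying the first rule that matches the traffic.  Each
    rule is (rule_number, port_or_None_for_all, 'allow'|'deny')."""
    for number, rule_port, action in sorted(rules):
        if rule_port is None or rule_port == port:
            return action          # executed immediately on first match
    return "deny"                  # implicit default deny (the '*' rule)

rules = [
    (200, 80, "deny"),    # higher-numbered deny for HTTP...
    (100, 80, "allow"),   # ...never reached: rule 100 matches first
    (300, None, "deny"),  # catch-all deny
]
print(evaluate_nacl(rules, 80))   # rule 100 wins -> allow
print(evaluate_nacl(rules, 22))   # falls through to rule 300 -> deny
```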

154
Q

Question 154

What are three attributes of DynamoDB? Choose the 3 correct answers from the options below

A. Used for data warehousing

B. A NoSQL database platform

C. Uses key-value store

D. Fully-managed

A

Answer: B, C, D

Amazon DynamoDB is a fast and flexible NoSQL database service for all
applications that need consistent, single-digit millisecond latency at any
scale. It is a fully managed cloud database and supports both document and
key-value store models. Its flexible data model and reliable performance
make it a great fit for mobile, web, gaming, ad tech, IoT, and many other
applications. AWS Redshift can be used for data warehousing. For more
information on DynamoDB, please refer to the below link
https://aws.amazon.com/dynamodb/

155
Q

Question 155

If you cannot connect to your EC2 instance via Remote Desktop, and you have already verified that the instance has a public IP and the Internet gateway and route tables are in place, what should you check next? Choose one answer from the options given below

A. Adjust the security group to allow traffic from port 22

B. Adjust the security group to allow traffic from port 3389

C. Restart the instance since there might be some issue with the instance

D. Create a new instance since there might be some issue with the
instance

A

Answer: B

You cannot connect to the instance because, by default, the RDP
protocol (port 3389) is not enabled on the Security Group. Option A is wrong
because port 22 is for the SSH protocol, and here we want to RDP into the
instance. Options C and D are wrong because there is no mention of anything
being wrong with the instance itself.

Step 1) Go to your EC2 Security groups, click on the required security
groups to make the changes. Go to the Inbound Tab.

Step 2) Make sure to add a rule for the RDP protocol for the instance
and then click the Save button.
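The same change can be scripted. A minimal sketch of the rule you would pass to EC2's authorize_security_group_ingress call, assuming boto3 is available; the helper below only builds the parameters, so it runs anywhere, and the group id shown is hypothetical.

```python
def rdp_ingress_rule(cidr="0.0.0.0/0"):
    """Build the IpPermissions entry that opens TCP port 3389 (RDP),
    in the shape expected by boto3's authorize_security_group_ingress."""
    return {
        "IpProtocol": "tcp",
        "FromPort": 3389,
        "ToPort": 3389,
        "IpRanges": [{"CidrIp": cidr}],
    }

rule = rdp_ingress_rule()

# With boto3 installed and credentials configured, you would apply it as:
# import boto3
# boto3.client("ec2").authorize_security_group_ingress(
#     GroupId="sg-0123456789abcdef0",   # hypothetical group id
#     IpPermissions=[rule])
```

In practice you would restrict the CIDR to your own network rather than 0.0.0.0/0.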

156
Q

Question 156

What database service should you choose if you need petabyte-scale data warehousing? Choose the correct answer from the options below

A. DynamoDB

B. ElastiCache

C. RDS

D. Redshift

A

Answer: D

Amazon Redshift is a fast, fully managed data warehouse that makes it
simple and cost-effective to analyze all your data using standard SQL and
your existing Business Intelligence (BI) tools. It allows you to run complex
analytic queries against petabytes of structured data, using sophisticated
query optimization, columnar storage on high-performance local disks, and
massively parallel query execution. For more information on Redshift, please
refer to the below link

http://docs.aws.amazon.com/redshift/latest/mgmt/welcome.html

157
Q

Question 157

Which feature in AWS allows two VPCs to talk to each other? Choose one answer from the options given below

A. VPC Connection

B. VPN Connection

C. Direct Connect

D. VPC Peering

A

Answer: D

A VPC peering connection is a networking connection between two VPCs
that enables you to route traffic between them using private IP addresses.
Instances in either VPC can communicate with each other as if they are
within the same network. You can create a VPC peering connection between
your own VPCs, or with a VPC in another AWS account within a single region.
For example, if VPC A is peered with both VPC B and VPC C, VPC B still
cannot communicate with VPC C, because VPC peering is not transitive and
there is no peering connection between them.

For more information on VPC peering, please visit the url
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-peering.html

158
Q

Question 158

In AWS Security Groups what are the 2 types of rules you can define?
Select 2 options.

A. Inbound
B. Transitional
C. Bi-Directional
D. Outbound

A

Answer: A, D

A security group acts as a virtual firewall that controls the traffic for one
or more instances. When you launch an instance, you associate one or more
security groups with the instance. You add rules to each security group that
allow traffic to or from its associated instances. You can modify the rules for
a security group at any time; the new rules are automatically applied to all
instances that are associated with the security group. Rules can be defined
for both Inbound and Outbound traffic. For more information on Security
Groups, please visit the url
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html

159
Q

Question 159

A VPC has been set up with a public subnet and an internet gateway. You set up an EC2 instance with a public IP, but you are still not able to connect to it via the Internet. You can see that the right Security Groups are in place. What should you do to ensure you can connect to the EC2 instance from the internet?

A. Set an Elastic IP Address to the EC2 instance

B. Set a Secondary Private IP Address to the EC2 instance

C. Ensure the right route entry is there in the Route table

D. There must be some issue in the EC2 instance. Check the system logs.

A

Answer: C

You have to ensure that the Route table has an entry pointing to the Internet gateway, because this is required for instances to communicate over the internet.

Option A is wrong because you already have a public IP assigned to the instance, so this should be enough to connect to the Internet.

Option B is wrong because private IPs cannot be accessed from the internet.

Option D is wrong because the Route table is what is causing the issue, not the
instance itself.

For more information on aws public subnet, please visit the link
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario1.html
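The missing route entry can be sketched as follows, assuming boto3; the helper builds the create_route parameters for a default route to the internet gateway, and the ids shown are hypothetical.

```python
def default_route_to_igw(route_table_id, igw_id):
    """Parameters for boto3's ec2.create_route: send all
    non-local traffic (0.0.0.0/0) to the internet gateway."""
    return {
        "RouteTableId": route_table_id,
        "DestinationCidrBlock": "0.0.0.0/0",
        "GatewayId": igw_id,
    }

params = default_route_to_igw("rtb-11aa22bb", "igw-33cc44dd")  # hypothetical ids

# With boto3 installed and credentials configured:
# boto3.client("ec2").create_route(**params)
```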

160
Q

Question 160

Which of the following approaches provides the lowest cost for Amazon Elastic Block Store snapshots while giving you the ability to fully restore data?

A. Maintain two snapshots: the original snapshot and the latest incremental snapshot.

B. Maintain a volume snapshot; subsequent snapshots will overwrite one another.

C. Maintain a single snapshot; the latest snapshot is both incremental and complete.

D. Maintain the most current snapshot, archive the original and incremental to Amazon Glacier.

A

Answer: C

EBS snapshots are incremental and complete, so you don’t need to maintain multiple snapshots if you are looking at reducing costs. You can easily create a snapshot from a volume while the instance is running and the volume is in use. You can do this from the EC2 dashboard.

AWS Docs provides following details:

In state 3, the volume has not changed since State 2, but Snapshot A has been deleted. The 6 GiB of data stored in Snapshot A that were referenced by Snapshot B have now been moved to Snapshot B, as shown by the heavy arrow. As a result, you are still charged for storing 10 GiB of data: 6 GiB of unchanged data preserved from Snap A, and 4 GiB of changed data from Snap B. So, as the diagram in the AWS documentation shows, when we take multiple snapshots the storage occupied would be Snap A (10 GiB) + Snap B (4 GiB) + Snap C (2 GiB), which is 16 GiB in total. In a real production environment, the data on the EBS volume is changing every minute or even every second, so taking multiple snapshots would definitely consume more
storage.

Hence, Option C is correct: it provides the lowest cost while still being able to fully restore the data. Please refer to the following document on how incremental backup works:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSSnapshots.html#how_snapshots_work

For more information on EBS snapshots, please visit the link -
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSSnapshots.html

161
Q

Question 161

An existing application stores sensitive information on a non-boot
Amazon EBS data volume attached to an Amazon Elastic Compute Cloud instance. Which of the following approaches would protect the sensitive data on an Amazon EBS volume?

A. Upload your customer keys to AWS CloudHSM. Associate the
Amazon EBS volume with AWS CloudHSM. Remount the Amazon EBS volume.

B. Create and mount a new, encrypted Amazon EBS volume. Move the data to the new volume. Delete the old Amazon EBS volume.

C. Unmount the EBS volume. Toggle the encryption attribute to True. Re-mount the Amazon EBS volume.

D. Snapshot the current Amazon EBS volume. Restore the snapshot to a new, encrypted Amazon EBS volume. Mount the Amazon EBS volume

A

Answer: B

Here the only option available is to create and mount a new encrypted
volume. Option A is wrong because you cannot encrypt a volume once it is
created; you would need to use some local encryption mechanism if you want
to encrypt the data on the existing volume. Option C is wrong because even
if you unmount the volume, you cannot encrypt it; encryption has to be done
during volume creation. Option D is wrong because if the volume is not
encrypted, the snapshot will not be encrypted either. You cannot create an
encrypted snapshot of an unencrypted volume or change an existing volume
from unencrypted to encrypted. You have to create a new encrypted volume
and transfer the data to it. The other option is to encrypt a volume’s data by
means of snapshot copying:

1. Create a snapshot of your unencrypted EBS volume. This snapshot is also
unencrypted.

2. Copy the snapshot while applying encryption parameters. The resulting
target snapshot is encrypted.

3. Restore the encrypted snapshot to a new volume, which is also encrypted.

However, that option is not listed.

Find more details here:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html
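The snapshot-copy route described above can be outlined in code. This is a hedged sketch of the three boto3 EC2 calls involved; the helper only assembles the parameter shapes (with placeholder snapshot ids and a hypothetical volume id), so nothing is sent to AWS.

```python
def encrypt_via_snapshot_copy_plan(volume_id, region="eu-west-1"):
    """Outline the three EC2 API calls (boto3 parameter shapes) that
    move data from an unencrypted volume to an encrypted one."""
    return [
        # Step 1: snapshot the unencrypted volume (snapshot is unencrypted)
        ("create_snapshot", {"VolumeId": volume_id}),
        # Step 2: copy the snapshot with encryption applied
        ("copy_snapshot", {"SourceRegion": region,
                           "SourceSnapshotId": "<snapshot id from step 1>",
                           "Encrypted": True}),
        # Step 3: restore the encrypted snapshot to a new (encrypted) volume
        ("create_volume", {"SnapshotId": "<snapshot id from step 2>",
                           "AvailabilityZone": region + "a"}),
    ]

plan = encrypt_via_snapshot_copy_plan("vol-0a1b2c3d")  # hypothetical id
for call, params in plan:
    print(call, sorted(params))
```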

162
Q

Question 162

In Amazon CloudWatch, what is the retention period for a one-minute datapoint? Choose the right answer from the options given below

A. 10 days

B. 15 days

C. 1 month

D. 1 year

A

Answer: B

Amazon CloudWatch is a monitoring service for AWS cloud resources and the applications you run on AWS. You can use Amazon CloudWatch to collect and track metrics, collect and monitor log files, set alarms, and automatically react to changes in your AWS resources. Amazon CloudWatch can monitor AWS resources such as Amazon EC2 instances, Amazon DynamoDB tables, and Amazon RDS DB instances, as well as custom metrics generated by your applications and services, and any log files your applications generate. You can use Amazon CloudWatch to gain system-wide visibility into resource utilization, application performance, and operational health. CloudWatch Metrics supports the following three retention schedules:

1 minute datapoints are available for 15 days
5 minute datapoints are available for 63 days
1 hour datapoints are available for 455 days

For more information on Amazon Cloudwatch, please visit
https://aws.amazon.com/cloudwatch/
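The schedules above can be captured in a small lookup table (the numbers come directly from the text; the helper name is hypothetical):

```python
# CloudWatch metric retention by datapoint period (seconds -> days)
RETENTION_DAYS = {
    60:   15,   # 1-minute datapoints: 15 days
    300:  63,   # 5-minute datapoints: 63 days
    3600: 455,  # 1-hour datapoints:  455 days
}

def retention_for(period_seconds):
    """Return how many days CloudWatch keeps datapoints of the
    given period, per the published retention schedules."""
    return RETENTION_DAYS[period_seconds]

print(retention_for(60))  # -> 15
```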

163
Q

Question 163

A customer wants to apply a group of database-specific settings to the
Relational Database instances in their AWS account. Which of the following
options can be used to apply the settings in one go for all of the Relational
Database instances?

A. Security Groups

B. NACL Groups

C. Parameter Groups

D. IAM Roles.

A

Answer: C

DB Parameter Groups are used to assign specific settings which can be
applied to a set of RDS instances in aws. In your RDS, when you go to
Parameter Groups, you can create a new parameter group. In the parameter
group itself, you have a lot of database related settings that can be assigned
to the database. Options A, B and D are wrong because these are specific to
what resources have access to the database. For more information on DB
parameter groups, please visit
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithParamGroups.html

164
Q

Question 164

Before I delete an EBS volume, what can I do if I want to recreate the
volume later?

A. Create a copy of the EBS volume (not a snapshot)
B. Store a snapshot of the volume
C. Download the content to an EC2 instance
D. Back up the data in to a physical disk

A

Answer: B

After you no longer need an Amazon EBS volume, you can delete it.
After deletion, its data is gone and the volume can’t be attached to any
instance. However, before deletion, you can store a snapshot of the volume,
which you can use to re-create the volume later. See more details here:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-deleting-volume.html

Snapshots occur asynchronously; the point-in-time snapshot is
created immediately, but the status of the snapshot is pending until the
snapshot is complete (when all of the modified blocks have been transferred
to Amazon S3), which can take several hours for large initial snapshots or
subsequent snapshots where many blocks have changed. While it is
completing, an in-progress snapshot is not affected by ongoing reads and
writes to the volume. You can easily create a snapshot from a volume while
the instance is running and the volume is in use. You can do this from the
EC2 dashboard. For more information on EBS snapshots, please visit the
link
- http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSSnapshots.html

165
Q

Question 165

All Amazon EC2 instances are assigned two IP addresses at launch. Which
one of them can only be reached from within the Amazon EC2
network?

A. Multiple IP address

B. Public IP address

C. Private IP address

D. Elastic IP Address

A

Answer: C

A private IP address is an IP address that’s not reachable over the
Internet. You can use private IP addresses for communication between
instances in the same network (EC2-Classic or a VPC). When an instance is
launched a private IP address is allocated for the instance using DHCP. Each
instance is also given an internal DNS hostname that resolves to the private
IP address of the instance; for example, ip-10-251-50-12.ec2.internal. You
can use the internal DNS hostname for communication between instances in
the same network, but the hostname cannot be resolved outside the
network that the instance is in. For more information on IP Addressing,
please visit the link -
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-instance-addressing.html

166
Q

Question 166

Where does AWS Elastic Beanstalk store the application files and server
log files? Choose one answer from the options given below

A. On the local server within Elastic beanstalk

B. AWS S3

C. AWS Cloudtrail

D. AWS DynamoDB

A

Answer: B

AWS Elastic Beanstalk stores your application files and, optionally,
server log files in Amazon S3. If you are using the AWS Management
Console, the AWS Toolkit for Visual Studio, or AWS Toolkit for Eclipse, an Amazon S3 bucket will be created in your account for you and the files you
upload will be automatically copied from your local client to Amazon S3.
Optionally, you may configure Elastic Beanstalk to copy your server log files
every hour to Amazon S3. You do this by editing the environment
configuration settings

For more information on Elastic Beanstalk visit the below link
https://aws.amazon.com/elasticbeanstalk/faqs/

167
Q

Question 167

A customer is looking for a hybrid cloud solution and learns about AWS
Storage Gateway. What is the main use case of AWS Storage Gateway?

A. It allows you to integrate on-premises IT environments with Cloud
Storage.

B. A direct encrypted connection to Amazon S3.

C. It’s a backup solution that provides an on-premises Cloud storage.

D. It provides an encrypted SSL endpoint for backups in the
Cloud.

A

Answer: A

Option B is wrong because it is not an encrypted connection to S3. Option C
is wrong because you can use S3 itself as a backup solution. Option D is wrong
because an SSL endpoint can be achieved via S3. The AWS Storage Gateway’s
software appliance is available for download as a virtual machine (VM) image
that you install on a host in your datacenter. Once you’ve installed your
gateway and associated it with your AWS Account through our activation
process, you can use the AWS Management Console to create either gateway-
cached volumes, gateway-stored volumes, or a gateway-virtual tape library
(VTL), which can be mounted as iSCSI devices by your on-premises applications.

You have primarily 2 types of volumes

1) Gateway-cached volumes allow you to utilize Amazon S3 for your
primary data, while retaining some portion of it locally in a cache for
frequently accessed data.

2) Gateway-stored volumes store your primary data locally, while
asynchronously backing up that data to AWS

For more information on AWS Storage gateways visit the below link
https://aws.amazon.com/storagegateway/details/

168
Q

Question 168

What is the base URI for all requests for instance metadata? Choose one answer from the options given below

A. http://254.169.169.254/latest/

B. http://169.169.254.254/latest/

C. http://127.0.0.1/latest/

D. http://169.254.169.254/latest/

A

Answer: D

Instance metadata is data about your instance that you can use to
configure or manage the running instance. Because your instance metadata is
available from within your running instance, you do not need to use the
Amazon EC2 console or the AWS CLI. This can be helpful when you’re writing
scripts to run from your instance. For example, you can access the local IP
address of your instance from instance metadata to manage a connection to
an external application: http://169.254.169.254/latest/meta-data/

For more information on Instance Metadata visit the below link

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html
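A small helper for building metadata request URLs from the documented base URI. Fetching only works from inside an EC2 instance, so the actual network call is left commented out; the helper name is an assumption for illustration.

```python
METADATA_BASE = "http://169.254.169.254/latest/"

def metadata_url(path):
    """Build the URL for an instance-metadata category, e.g.
    'meta-data/instance-id' or 'meta-data/local-ipv4'."""
    return METADATA_BASE + path.lstrip("/")

url = metadata_url("meta-data/local-ipv4")
print(url)  # http://169.254.169.254/latest/meta-data/local-ipv4

# From within an EC2 instance you could then fetch it:
# import urllib.request
# local_ip = urllib.request.urlopen(url, timeout=2).read().decode()
```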

169
Q

Question 169

When you disable automated backups for AWS RDS, what are you
compromising on? Choose one answer from the options given below

A. Nothing, you are actually saving resources on AWS

B. You are disabling the point-in-time recovery.

C. Nothing really, you can still take manual backups.

D. You cannot disable automated backups in RDS.

A

Answer: B

Amazon RDS creates a storage volume snapshot of your DB instance,
backing up the entire DB instance and not just individual databases. You can
set the backup retention period when you create a DB instance. If you don’t
set the backup retention period, Amazon RDS uses a default retention
period of one day. You can modify the backup retention period; valid values
are 0 (for no backup retention) to a maximum of 35 days. You will also
specifically see AWS mentioning the risk of not enabling automated backups.

For more information on Automated backups, please visit
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithAutomatedBackups.html

170
Q

Question 170

Your customer wants to consolidate their log streams (access logs,
application logs, security logs, etc.) in one single system. Once consolidated, the customer wants to analyze these logs in real time based on heuristics. From time to time, the customer needs to validate the heuristics, which requires going back to data samples extracted from the last 12 hours. What is the best approach to meet your customer’s requirements?

A. Send all the log events to Amazon SQS. Setup an Auto Scaling group of EC2 servers to consume the logs and apply the heuristics.

B. Send all the log events to Amazon Kinesis; develop a client process to apply heuristics on the logs

C. Configure Amazon CloudTrail to receive custom logs; use EMR to apply heuristics to the logs

D. Setup an Auto Scaling group of EC2 syslogd servers, store the logs on S3, and use EMR to apply heuristics on the logs

A

Answer: B

Amazon Kinesis is the best option for analyzing logs in real time. The
AWS documentation mentions the following for AWS Kinesis: Amazon
Kinesis makes it easy to collect, process, and analyze real-time, streaming
data so you can get timely insights and react quickly to new information.
Amazon Kinesis offers key capabilities to cost effectively process streaming
data at any scale, along with the flexibility to choose the tools that best suit
the requirements of your application. With Amazon Kinesis, you can ingest
real-time data such as application logs, website clickstreams, IoT telemetry
data, and more into your databases, data lakes and data warehouses, or build
your own real-time applications using this data.

For more information on AWS Kinesis, please refer to the below URL:
https://aws.amazon.com/kinesis/

171
Q

Question 171

Which events would cause Amazon RDS to initiate a failover to the
standby replica? Select 3 options.

A. Loss of availability in primary Availability Zone

B. Loss of network connectivity to primary

C. Storage failure on secondary

D. Storage failure on primary

A

Answer: A, B, D

Amazon RDS detects and automatically recovers from the most common
failure scenarios for Multi-AZ deployments so that you can resume database
operations as quickly as possible without administrative intervention.
Amazon RDS automatically performs a failover in the event of any of the
following: loss of availability in the primary Availability Zone, loss of
network connectivity to the primary, compute unit failure on the primary, or
storage failure on the primary.

Note: When operations such as DB Instance scaling or system upgrades
like OS patching are initiated for Multi-AZ deployments, for enhanced
availability, they are applied first on the standby prior to an automatic
failover. As a result, your availability impact is limited only to the time
required for automatic failover to complete. Note that Amazon RDS Multi-AZ
deployments do not failover automatically in response to database
operations such as long running queries, deadlocks or database corruption
errors.

For more information on read replicas, please visit
https://aws.amazon.com/rds/details/read-replicas/

172
Q

Question 172

What does the following command do with respect to the Amazon EC2
security groups? revoke-security-group-ingress

A. Removes one or more security groups from a rule.

B. Removes one or more security groups from an Amazon EC2 instance.

C. Removes one or more rules from a security group.

A

Answer: C

Removes one or more ingress rules from a security group. The values
that you specify in the revoke request (for example, ports) must match the
existing rule’s values for the rule to be removed. Each rule consists of the
protocol and the CIDR range or source security group.

For the TCP and UDP protocols, you must also specify the destination
port or range of ports. For the ICMP protocol, you must also specify the
ICMP type and code.

For more information on the revoke-security-group-ingress CLI command,
please visit
http://docs.aws.amazon.com/cli/latest/reference/ec2/revoke-security-group-ingress.html

173
Q

Question 173

What is the durability of S3 RRS?

A. 99.99%

B. 99.95%

C. 99.995%

D. 99.999999999%

A

Answer: A

RRS only has 99.99% durability, so there is a chance that data can be
lost. You need to ensure you have the right steps in place to replace lost
objects. For more information on RRS, visit the link
https://aws.amazon.com/s3/reduced-redundancy/

174
Q

Question 174

Which AWS service is used as a global content delivery network (CDN)
service?

A. Amazon SES

B. Amazon Cloudtrail

C. Amazon CloudFront

D. Amazon S3

A

Answer: C

Amazon CloudFront is a web service that gives businesses and web
application developers an easy and cost effective way to distribute content
with low latency and high data transfer speeds. Like other AWS services,
Amazon CloudFront is a self-service, pay-per-use offering, requiring no long
term commitments or minimum fees. With CloudFront, your files are
delivered to end-users using a global network of edge locations. For more
information on CloudFront, please visit the link
https://aws.amazon.com/cloudfront/

175
Q

Question 175

Which feature in AWS acts as a firewall that controls the traffic allowed to reach one or more instances?

A. Security group

B. ACL

C. IAM

D. Private IP Addresses

A

Answer: A

A security group acts as a virtual firewall that controls the traffic for one
or more instances. When you launch an instance, you associate one or more
security groups with the instance. You add rules to each security group that
allow traffic to or from its associated instances. For example, a security
group for an EC2 instance that allows SSH access would have an inbound
rule for TCP on port 22. For more information on EC2 Security groups,
please visit the url
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html

176
Q

Question 176

How many types of block devices does Amazon EC2 support? Choose one answer from the options below

A. 2

B. 3

C. 4

D. 1

A

Answer: A

A block device is a storage device that moves data in sequences of bytes
or bits (blocks). These devices support random access and generally use
buffered I/O. Examples include hard disks, CD-ROM drives, and flash drives.
A block device can be physically attached to a computer or accessed remotely
as if it were physically attached to the computer. Amazon EC2 supports two
types of block devices: instance store volumes (virtual devices whose
underlying hardware is physically attached to the host computer for the
instance) and EBS volumes (remote storage devices).

177
Q

Question 177

When running my DB Instance as a Multi-AZ deployment, can I use the standby for read and write operations?

A. Yes

B. Only with MSSQL based RDS

C. Only for Oracle RDS instances

D. No

A

Answer: D

It is clearly mentioned in the AWS documentation that you cannot use
the standby DB instance for read or write operations. Here is the overview of
Multi-AZ RDS Deployments: Amazon RDS Multi-AZ deployments provide
enhanced availability and durability for Database (DB) Instances, making
them a natural fit for production database workloads. When you provision a
Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB
Instance and synchronously replicates the data to a standby instance in a
different Availability Zone (AZ). Each AZ runs on its own physically distinct,
independent infrastructure, and is engineered to be highly reliable. In case of
an infrastructure failure, Amazon RDS performs an automatic failover to the
standby (or to a read replica in the case of Amazon Aurora), so that you can
resume database operations as soon as the failover is complete. Since the
endpoint for your DB Instance remains the same after a failover, your
application can resume database operation without the need for manual
administrative intervention.

For more information on Multi AZ RDS, please
visit the link https://aws.amazon.com/rds/details/multi-az/

178
Q

Question 178

Which Amazon service can I use to define a virtual network that closely resembles a traditional data center?

A. Amazon VPC

B. Amazon ServiceBus

C. Amazon EMR

D. Amazon RDS

A

Answer: A

Amazon Virtual Private Cloud (Amazon VPC) lets you provision a
logically isolated section of the Amazon Web Services (AWS) cloud where
you can launch AWS resources in a virtual network that you define. You have
complete control over your virtual networking environment, including
selection of your own IP address range, creation of subnets, and
configuration of route tables and network gateways. You can easily customize
the network configuration for your Amazon Virtual Private Cloud. For
example, you can create a public-facing subnet for your webservers that has
access to the Internet, and place your backend systems such as databases or
application servers in a private-facing subnet with no Internet access. You
can leverage multiple layers of security, including security groups and
network access control lists, to help control access to Amazon EC2 instances
in each subnet

For more information on Amazon VPC, please visit the link
https://aws.amazon.com/vpc/

179
Q

Question 179

The common use for IAM is to manage what? Select 3 options.

A. Security Groups

B. API Keys

C. Multi-Factor Authentication

D. Roles

A

Answer: B, C, D

You can use IAM to manage API keys and MFA, along with roles.

Please find specific details below:

http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html
http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_enable.html

If you go on the IAM console, you will see the options on the left hand
side. The Security groups are managed as part of the EC2 dashboard and not
the IAM console. For more information on IAM, please refer to the below
link https://aws.amazon.com/iam/

180
Q

Question 180

You have instances running on your VPC. You have both production and development instances running in the VPC. You want to ensure that people who are responsible for the development instances don’t have access to work on the production instances, to ensure better security. Using policies, which of the following would be the best way to accomplish this? Choose the correct answer from the options given below

A. Launch the test and production instances in separate VPC’s and use VPC peering

B. Create an IAM policy with a condition which allows access to only
instances that are used for production or development

C. Launch the test and production instances in different Availability Zones and use Multi Factor Authentication

D. Define the tags on the test and production servers and add a
condition to the IAM policy which allows access to specific tags

A

Answer: D

You can easily add tags which define which instances are production and
which are development instances, and then ensure these tags are used when
controlling access via an IAM policy. For more information on tagging your
resources, please refer to the below link
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html
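A sketch of such a tag-based policy. The document below is a hypothetical minimal example (the Environment tag key and Development value are assumptions); real policies typically also need a Describe* allowance so users can list instances.

```python
import json

# Allow EC2 actions only on instances tagged Environment=Development,
# using the ec2:ResourceTag condition key.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "ec2:*",
        "Resource": "arn:aws:ec2:*:*:instance/*",
        "Condition": {
            "StringEquals": {"ec2:ResourceTag/Environment": "Development"}
        },
    }],
}

print(json.dumps(policy, indent=2))
```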

181
Q

Question 181

Your company is concerned with EBS volume backup on Amazon EC2 and wants to ensure they have proper backups and that the data is durable. What solution would you implement and why? Choose the correct answer from the options below

A. Configure Amazon Storage Gateway with EBS volumes as the data
source and store the backups on premise through the storage gateway

B. Write a cronjob on the server that compresses the data that needs to be backed up using gzip compression, then use AWS CLI to copy the data into an S3 bucket for durability

C. Use a lifecycle policy to back up EBS volumes stored on Amazon S3 for durability

D. Write a cronjob that uses the AWS CLI to take a snapshot of
production EBS volumes. The data is durable because EBS snapshots are stored on the Amazon S3 standard storage class

A

Answer: D

You can take snapshots of EBS volumes and to automate the process you
can use the CLI. The snapshots are automatically stored on S3 for durability.
For more information on EBS snapshots, please refer to the below link
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSSnapshots.html
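The cron-driven snapshot could be sketched like this with boto3. The volume id is hypothetical, and the helper only builds the create_snapshot parameters (with a date-stamped description), so it runs anywhere; the actual API call is left commented out.

```python
import datetime

def snapshot_params(volume_id, now=None):
    """Build the parameters for boto3's ec2.create_snapshot with a
    date-stamped description, suitable for a daily cron job."""
    now = now or datetime.datetime.utcnow()
    return {
        "VolumeId": volume_id,
        "Description": "backup-%s-%s" % (volume_id, now.strftime("%Y-%m-%d")),
    }

params = snapshot_params("vol-0a1b2c3d",
                         now=datetime.datetime(2023, 5, 1))
print(params["Description"])  # backup-vol-0a1b2c3d-2023-05-01

# In the cron job itself, with boto3 installed and credentials configured:
# import boto3
# boto3.client("ec2").create_snapshot(**params)
```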

182
Q

Question 182

You are a consultant tasked with migrating an on-premise application
architecture to AWS. During your design process you have to give
consideration to current on-premise security and determine which security attributes you are responsible for on AWS. Which of the following does AWS provide for you as part of the shared responsibility model? Choose the correct answer from the options given below

A. Customer Data

B. Physical network infrastructure

C. Instance security

D. User access to the AWS environment

A

Answer: B

As per the Shared Responsibility model, the physical network infrastructure is handled by AWS. The model defines which layers are managed by the customer and which are managed by AWS.

For more information on the Shared Responsibility model, please refer to the below link
https://aws.amazon.com/compliance/shared-responsibility-model/

183
Q

Question 183

Which of the following will occur when an EC2 instance in a VPC with an associated Elastic IP is stopped and started? Select 2 options.

A. The underlying host for the instance can be changed

B. The ENI (Elastic Network Interface) is detached

C. All data on instance-store devices will be lost

D. The Elastic IP will be disassociated from the instance

A

Answer: A, C

Find more details here:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Stop_Start.html

EC2 instances are available with EBS-backed storage and instance store-backed storage. Since both storage types are in use, we need to consider both options while answering the question.

Find more details here: https://aws.amazon.com/ec2/instance-types/

If you have an EBS-backed instance, the underlying host can change when the instance is stopped and started. And if you have instance store volumes, the data on those devices will be lost.

For more information on the AMI types, please refer to the below link:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ComponentsAMIs.html

184
Q

Question 184

A customer is hosting their company website on a cluster of web servers that are behind a public-facing load balancer. The customer also uses Amazon Route 53 to manage their public DNS. How should the customer configure the DNS zone apex record to point to the load balancer?

A. Create an A record pointing to the IP address of the load balancer

B. Create a CNAME record pointing to the load balancer DNS name.

C. Create an alias for CNAME record to the load balancer DNS name.

D. Create an A record aliased to the load balancer DNS name

A

Answer: D

Alias resource record sets are virtual records that work like CNAME records. But they differ from CNAME records in that they are not visible to resolvers. Resolvers only see the A record and the resulting IP address of the target record. As such, unlike CNAME records, alias resource record sets are available to configure a zone apex (also known as a root domain or naked domain) in a dynamic environment. So when you create the record set in your hosted zone, mark 'Yes' for the Alias option and then choose the Elastic Load Balancer you have defined in AWS.

For more information on the zone apex, please visit
the link http://docs.aws.amazon.com/govcloud-us/latest/UserGuide/setting-up-route53-zoneapex-elb.html
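As an illustration, a boto3-style change batch for such an alias record might look like the sketch below; the domain, ELB DNS name, and hosted-zone ID are hypothetical placeholders.

```python
import json

# Sketch of the change batch Route 53 expects when creating a zone-apex
# A record aliased to an ELB. Alias records are type A, not CNAME, which
# is why they are allowed at the zone apex.
def alias_change_batch(domain, elb_dns_name, elb_hosted_zone_id):
    return {
        "Changes": [{
            "Action": "CREATE",
            "ResourceRecordSet": {
                "Name": domain,              # zone apex, e.g. "example.com."
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": elb_hosted_zone_id,  # the ELB's zone ID
                    "DNSName": elb_dns_name,
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    }

batch = alias_change_batch("example.com.",
                           "my-elb-1234.us-east-1.elb.amazonaws.com.",
                           "Z3EXAMPLE")
print(json.dumps(batch, indent=2))
```

A route53 client would pass this dict as the ChangeBatch argument of change_resource_record_sets.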

185
Q

Question 185

To maintain compliance with HIPAA laws, all data being backed up or stored on Amazon S3 needs to be encrypted at rest. What is the best method of encryption for your data, assuming S3 is being used for storing the healthcare-related data?

A. Enable SSE on an S3 bucket to make use of AES-256 encryption

B. Store the data in encrypted EBS snapshots

C. Encrypt the data locally using your own encryption keys, then copy
the data to Amazon S3 over HTTPS endpoints

D. Store the data on EBS volumes with encryption enabled instead of
using Amazon S3

A

Answer: A, C

Data protection refers to protecting data while in transit (as it travels to and from Amazon S3) and at rest (while it is stored on disks in Amazon S3 data centers). You can protect data in transit by using SSL or by using client-side encryption. You have the following options for protecting data at rest in Amazon S3. Use Server-Side Encryption: you request Amazon S3 to encrypt your object before saving it on disks in its data centers, and to decrypt it when you download the objects. Use Client-Side Encryption: you can encrypt data client-side and upload the encrypted data to Amazon S3. In this case, you manage the encryption process, the encryption keys, and related tools.

For more information on S3 encryption, please refer to the below link
http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingEncryption.html
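To illustrate the server-side option (A), the sketch below builds the parameters a boto3 put_object call might use to request SSE with AES-256; the bucket and key names are hypothetical.

```python
# Sketch: request parameters asking S3 to apply server-side encryption
# (SSE-S3, AES-256) to an object at upload time.
def sse_put_params(bucket, key, body):
    return {
        "Bucket": bucket,
        "Key": key,
        "Body": body,
        "ServerSideEncryption": "AES256",  # SSE-S3: keys managed by S3
    }

params = sse_put_params("patient-records", "scan-001.dcm", b"...")
# s3_client.put_object(**params)  # would perform the encrypted upload
print(params["ServerSideEncryption"])
```

For the client-side option (C), the data would instead be encrypted locally before upload, with the keys managed outside AWS.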

186
Q

Question 186

There is a requirement by a company that does online credit card processing to have a secure application environment on AWS. They are trying to decide whether to use KMS or CloudHSM. Which of the following statements is right when it comes to CloudHSM and KMS? Choose the correct answer from the options given below

A. It probably doesn’t matter as they both do the same thing

B. AWS CloudHSM does not support the processing, storage, and
transmission of credit card data by a merchant or service provider, as it has not been validated as being compliant with Payment Card Industry (PCI) Data Security Standard (DSS); hence, you will need to use KMS

C. KMS is probably adequate unless additional protection is necessary for some applications and data that are subject to strict contractual or regulatory requirements for managing cryptographic keys, then HSM should be used

D. AWS CloudHSM should be always be used for any payment
transactions

A

Answer: C

AWS Key Management Service (KMS) is a managed service that makes it easy for you to create and control the encryption keys used to encrypt your data, and uses Hardware Security Modules (HSMs) to protect the security of your keys. This is sufficient if you have only basic key-management security needs.

For more information on KMS, please refer to the below link
https://aws.amazon.com/kms/

For stricter security requirements, one can use CloudHSM. The AWS CloudHSM service helps you meet corporate, contractual and regulatory compliance requirements for data security by using dedicated Hardware Security Module (HSM) appliances within the AWS cloud. With CloudHSM, you control the encryption keys and cryptographic operations performed by the HSM.

For more information on CloudHSM, please refer to the below link
https://aws.amazon.com/cloudhsm/

187
Q

Question 187

You are building a system to distribute confidential training videos to
employees. Using CloudFront, what method would be used to serve content that is stored in S3, but not publicly accessible from S3 directly? Choose the correct answer from the options given below

A. Create an Origin Access Identity (OAI) for CloudFront and grant access to the objects in your S3 bucket to that OAI

B. Create an Identity and Access Management (IAM) user for
CloudFront and grant access to the objects in your S3 bucket to that IAM user.

C. Create a S3 bucket policy that lists the CloudFront distribution ID as the principal and the target bucket as the Amazon Resource Name
(ARN)

D. Add the CloudFront account security group

A

Answer: A

You can optionally secure the content in your Amazon S3 bucket so
users can access it through CloudFront but cannot access it directly by using
Amazon S3 URLs. This prevents anyone from bypassing CloudFront and
using the Amazon S3 URL to get content that you want to restrict access to.
This step isn’t required to use signed URLs, but we recommend it. To require
that users access your content through CloudFront URLs, you perform the
following tasks: Create a special CloudFront user called an origin access
identity. Give the origin access identity permission to read the objects in your
bucket. Remove permission for anyone else to use Amazon S3 URLs to read
the objects.

For more information on restricting access to AWS S3, please refer to the below link:
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html
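To make the permission step concrete, the sketch below builds an S3 bucket policy granting read access to an OAI; the bucket name and OAI ID are hypothetical placeholders.

```python
import json

# Sketch of an S3 bucket policy that lets only a CloudFront origin access
# identity (OAI) read objects, so direct S3 URLs stop working.
def oai_read_policy(bucket, oai_id):
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {
                "AWS": ("arn:aws:iam::cloudfront:user/"
                        f"CloudFront Origin Access Identity {oai_id}")
            },
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }]
    }

policy = oai_read_policy("training-videos", "E1EXAMPLE")
print(json.dumps(policy, indent=2))
```

This policy would be attached to the bucket, replacing any public-read grants.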

188
Q

Question 188

As part of your application architecture requirements, the company you are working for has requested the ability to run analytics against all combined log files from the Elastic Load Balancer. Which services are used together to collect logs and process log file analysis in an AWS environment?

Choose the correct answer from the options given below

A. Amazon S3 for storing the ELB log files and EC2 for processing the
log files in analysis

B. Amazon DynamoDB to store the logs and EC2 for running custom log analysis scripts

C. Amazon S3 for storing ELB log files and Amazon EMR for processing the log files in analysis

D. Amazon EC2 for storing and processing the log files

A

Answer: C

You can use Amazon EMR for processing the jobs. Amazon EMR provides a managed Hadoop framework that makes it easy, fast, and cost-effective to process vast amounts of data across dynamically scalable Amazon
EC2 instances. You can also run other popular distributed frameworks such
as Apache Spark, HBase, Presto, and Flink in Amazon EMR, and interact
with data in other AWS data stores such as Amazon S3 and Amazon
DynamoDB. Amazon EMR securely and reliably handles a broad set of big
data use cases, including log analysis, web indexing, data transformations
(ETL), machine learning, financial analysis, scientific simulation, and
bioinformatics.

For more information on Amazon EMR please refer to the below link
https://aws.amazon.com/emr/

189
Q

Question 189

Your company has moved a legacy application from an on-premise data center to the cloud. The legacy application requires a static IP address hard-coded into the backend, which prevents you from deploying the application with high availability and fault tolerance using the ELB. Which steps would you take to apply high availability and fault tolerance to this application?

Select 2 options.

A. Write a custom script that pings the health of the instance, and, if the instance stops responding, switches the elastic IP address to a standby instance

B. Ensure that the instance it’s using has an elastic IP address assigned to it

C. Do not migrate the application to the cloud until it can be converted to work with the ELB and Auto Scaling

D. Create an AMI of the instance and launch it using Auto Scaling which will deploy the instance again if it becomes unhealthy

A

Answer: A, B

The best option is to configure an Elastic IP that can be switched
between a primary and failover instance.

Here is a link on using Elastic IP for failover.
https://aws.amazon.com/articles/2127188135977316

190
Q

Question 190

As an IT administrator you have been requested to create a highly decoupled application in AWS. Which of the following helps you accomplish this goal? Choose the correct answer from the options below

A. An SQS queue to allow a second EC2 instance to process a failed
instance’s job

B. An Elastic Load Balancer to send web traffic to healthy EC2
instances

C. IAM user credentials on EC2 instances to grant permissions to
modify an SQS queue

D. An Auto Scaling group to recover from EC2
instance failures

A

Answer: A

Amazon Simple Queue Service (SQS) is a fully-managed message
queuing service for reliably communicating among distributed software
components and microservices - at any scale. Building applications from
individual components that each perform a discrete function improves
scalability and reliability, and is best practice design for modern applications.
SQS is the best option for creating a decoupled application.

For more information on SQS, please refer to the below link
https://aws.amazon.com/sqs/
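The decoupling pattern can be sketched with a local queue standing in for SQS; producer and worker share only the queue, so a second worker could pick up a failed instance's job. (With real SQS these calls would be boto3 send_message, receive_message, and delete_message.)

```python
import queue

# Local stand-in for an SQS queue, illustrating decoupling: the producer
# and the worker never call each other directly.
jobs = queue.Queue()

def producer(task_ids):
    for tid in task_ids:
        jobs.put({"task": tid})       # analogous to sqs.send_message(...)

def worker():
    done = []
    while not jobs.empty():
        msg = jobs.get()              # analogous to sqs.receive_message(...)
        done.append(msg["task"])      # process, then delete_message in SQS
    return done

producer(["resize-img-1", "resize-img-2"])
print(worker())
```

Because any worker can drain the queue, instances can fail or scale independently of the producers.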

191
Q

Question 191

A company has resources hosted in AWS and on on-premises servers. You have been requested to create a decoupled architecture for applications which make use of both types of resources. Which of the below options are valid? Select 2 options.

A. You can leverage SWF to utilize both on-premises servers and EC2
instances for your decoupled application

B. SQS is not a valid option to help you use on-premises servers and EC2 instances in the same application, as it cannot be polled by on-premises servers

C. You can leverage SQS to utilize both on-premises servers and EC2
instances for your decoupled application

D. SWF is not a valid option to help you use on-premises servers and
EC2 instances in the same application, as on-premises servers cannot be used as activity task workers

A

Answer: A, C

You can use both SWF and SQS to coordinate with EC2 instances and on-premise servers. Amazon Simple Queue Service (SQS) is a fully-managed
message queuing service for reliably communicating among distributed
software components and microservices - at any scale. Building applications
from individual components that each perform a discrete function improves
scalability and reliability, and is best practice design for modern applications.

For more information on SQS, please refer to the below link
https://aws.amazon.com/sqs/

The Amazon Simple Workflow Service (Amazon SWF) makes it easy to build applications that coordinate work across distributed components. In Amazon SWF, a task represents a logical unit of work that is performed by a component of your application. Coordinating tasks across the application involves managing intertask dependencies, scheduling, and concurrency in accordance with the logical flow of the application. Amazon SWF gives you full control over implementing tasks and coordinating them without worrying about
underlying complexities such as tracking their progress and maintaining their state.

For more information on SWF, please refer to the below link
http://docs.aws.amazon.com/amazonswf/latest/developerguide/swf-welcome.html

192
Q

Question 192

When reviewing the Auto Scaling events, it is noticed that an application is scaling up and down multiple times within the hour. What design change could you make to optimize cost while preserving elasticity? Choose the
correct answer from the options below

A. Change the scale down CloudWatch metric to a higher threshold

B. Increase the instance type in the launch configuration

C. Increase the base number of Auto Scaling instances for the Auto Scaling group

D. Add provisioned IOPS to the instances

A

Answer: A

If the scale-down threshold is set too close to the scale-up threshold, the instances will keep scaling down and back up rapidly. Hence it is best to set an optimal threshold for the metrics you define in CloudWatch.

For more information on scaling on demand, please refer to the below link
http://docs.aws.amazon.com/autoscaling/latest/userguide/as-scale-based-on-demand.html
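The effect of widening the gap between the two thresholds (hysteresis) can be sketched as below; the threshold values are illustrative, not AWS defaults.

```python
# Sketch: a wide band between scale-up and scale-down thresholds absorbs
# normal CPU fluctuation, preventing rapid scale-in/scale-out cycles.
def scaling_action(cpu_percent, up_at=70, down_at=25):
    if cpu_percent >= up_at:
        return "scale-up"
    if cpu_percent <= down_at:
        return "scale-down"
    return "no-change"   # the middle band triggers nothing

for cpu in (80, 50, 30, 10):
    print(cpu, scaling_action(cpu))
```

Raising the scale-up threshold or lowering the scale-down threshold both widen the no-change band, which is the cost optimization the answer describes.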

193
Q

Question 193

You are working for a startup company that is building an application that receives large amounts of data. Unfortunately, current funding has left the start-up short on cash; it cannot afford to purchase thousands of dollars of storage hardware and has opted to use AWS. Which service would you implement in order to store a virtually unlimited amount of data without any effort to scale when demand unexpectedly increases? Choose the correct answer from the options below

A. Amazon S3, because it provides unlimited amounts of storage data, scales automatically, is highly available, and durable

B. Amazon Glacier, to keep costs low for storage and scale infinitely

C. Amazon Import/Export, because Amazon assists in migrating large
amounts of data to Amazon S3

D. Amazon EC2, because EBS volumes can scale to hold any amount of data and, when used with Auto Scaling, can be designed for fault tolerance and high availability

A

Answer: A

The best option is to use S3, because you can host a large amount of data in S3 and it is the best storage option provided by AWS. The answer could be Glacier if the question were just asking for the cheapest option to store a large amount of data, but the trick in the question is that it mentions scaling when "demand unexpectedly increases". As Glacier requires 3 to 5 hours to retrieve data, it will not be able to handle an unexpected demand increase; thus S3 is the best choice here.

For more information on S3, please refer to the below link
http://docs.aws.amazon.com/AmazonS3/latest/dev/Welcome.html

194
Q

Question 194

A customer is running a multi-tier web application farm in a virtual
private cloud (VPC) that is not connected to their corporate network. They are connecting to the VPC over the Internet to manage all of their Amazon EC2 instances running in both the public and private subnets. They have only authorized the bastion-security-group with Microsoft Remote Desktop Protocol (RDP) access to the application instance security groups, but the company wants to further limit administrative access to all of the instances in the VPC. Which of the following Bastion deployment scenarios will meet this requirement?

A. Deploy a Windows Bastion host on the corporate network that has RDP access to all instances in the VPC.

B. Deploy a Windows Bastion host with an Elastic IP address in the public subnet and allow SSH access to the bastion from anywhere.

C. Deploy a Windows Bastion host with an Elastic IP address in the private subnet, and restrict RDP access to the bastion from only the corporate public IP addresses.

D. Deploy a Windows Bastion host with an Elastic IP address in the public subnet and allow RDP access to bastion only from corporate IP addresses.

A

Answer: D

The bastion host should be in a public subnet with either a public or
elastic IP and only allow RDP access from one IP from the corporate
network. A bastion host is a special purpose computer on a network
specifically designed and configured to withstand attacks. The computer
generally hosts a single application, for example a proxy server, and all other
services are removed or limited to reduce the threat to the computer. In AWS, a bastion host is kept in a public subnet. Users log on to the bastion host via SSH or RDP and then use that session to manage other hosts in the private subnets. This is a security practice adopted by many organizations to secure the assets in their private subnets.

195
Q

Question 195

You have started a new role as a solutions architect for an architectural firm that designs large skyscrapers in the Middle East. Your company hosts large volumes of data and has about 250 TB of data on internal servers. They have decided to store this data on S3 due to the redundancy it offers. The company currently has a telecoms line of 2 Mbps connecting their head office to the internet. What method should they use to import this data onto S3 in the fastest manner possible?

A. Upload it directly to S3

B. Purchase an AWS Direct Connect link and transfer the data over it once it is installed.

C. AWS Data pipeline

D. AWS Snowball

A

Answer: D

The AWS documentation mentions the following: Snowball is a petabyte-scale data transport solution that uses secure appliances to transfer large amounts of data into and out of the AWS cloud. Using Snowball addresses common challenges with large-scale data transfers including high network costs, long transfer times, and security concerns. Transferring data with Snowball is simple, fast, secure, and can be as little as one-fifth the cost of high-speed Internet.

For more information on AWS Snowball , please visit the below link: https://aws.amazon.com/snowball/

196
Q

Question 196

How does using ElastiCache help to improve database performance?
Choose the correct answer from the options below

A. It can store petabytes of data

B. It provides faster internet speeds

C. It can store high-taxing queries

D. It uses read replicas

A

Answer: C

Amazon ElastiCache is a web service that makes it easy to deploy,
operate, and scale an in-memory data store or cache in the cloud. The service
improves the performance of web applications by allowing you to retrieve
information from fast, managed, in-memory data stores, instead of relying
entirely on slower disk-based databases.

For more information on AWS Elastic Cache, please refer to the below link
https://aws.amazon.com/elasticache/

197
Q

Question 197

The Availability Zone that your RDS database instance is located in is
suffering from outages, and you have lost access to the database. What could you have done to prevent losing access to your database (in the event of this type of failure) without any downtime? Choose the correct answer from the options below

A. Made a snapshot of the database

B. Enabled multi-AZ failover

C. Increased the database instance size

D. Created a read replica

A

Answer: B

The best option is to enable Multi-AZ for the database. Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database (DB) Instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. In case of an infrastructure failure, Amazon RDS performs an automatic failover to the standby (or to a read replica in the case of Amazon Aurora), so that you can resume database operations as soon as the failover is complete. Since the endpoint for your DB Instance remains the same after a failover, your application can resume database operation without the need for manual administrative intervention

For more information on AWS Multi-AZ, please refer to the below link
https://aws.amazon.com/rds/details/multi-az/

198
Q

Question 198

As an AWS administrator you are trying to convince a team to use RDS Read Replicas. What are two benefits of using read replicas? Choose the 2 correct answers from the options below

A. Creates elasticity in RDS

B. Allows both reads and writes

C. Improves performance of the primary database by taking workload
from it

D. Automatic failover in the case of Availability Zone service
failures

A

Answer: A, C

By creating an RDS read replica, you can scale out the reads for your application, hence increasing its elasticity. It can also be used to reduce the load on the main database. Read Replicas don't support write operations, hence option B is wrong. And Multi-AZ, not read replicas, is used for failover, so option D is wrong.

For more information on Read Replica, please refer to the below link
https://aws.amazon.com/rds/details/read-replicas/

199
Q

Question 199

What is the purpose of an SWF decision task? Choose the correct answer from the options below

A. It tells the worker to perform a function.

B. It tells the decider the state of the work flow execution.

C. It defines all the activities in the workflow.

D. It represents a single task in the workflow.

A

Answer: B

A decider is an implementation of the coordination logic of your
workflow type that runs during the execution of your workflow. You can run
multiple deciders for a single workflow type. Because the execution state for
a workflow execution is stored in its workflow history, deciders can be
stateless. Amazon SWF maintains the workflow execution history and
provides it to a decider with each decision task

For more information on decider tasks, please refer to the below link
http://docs.aws.amazon.com/amazonswf/latest/developerguide/swf-dg-dev-deciders.html

200
Q

Question 200

What is the best definition of an SQS message? Choose an answer from the options below

A. A mobile push notification

B. A set of instructions stored in an SQS queue that can be up to 512KB in size

C. A notification sent via SNS

D. A set of instructions stored in an SQS queue that can be up to 256KB in size

A

Answer: D

As given in the AWS documentation, the maximum size of an SQS message is 256 KB.

For more information on SQS, please refer to the below link
https://aws.amazon.com/sqs/faqs/
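A sender can guard against the 256 KB limit before calling send_message; a minimal sketch:

```python
# Sketch: validate an SQS message body against the 256 KB size limit
# before sending (the limit covers the body plus message attributes).
MAX_SQS_BYTES = 256 * 1024

def fits_in_sqs(body: str) -> bool:
    return len(body.encode("utf-8")) <= MAX_SQS_BYTES

print(fits_in_sqs("small instruction payload"))   # True
print(fits_in_sqs("x" * (MAX_SQS_BYTES + 1)))     # False
```

Payloads over the limit are typically stored in S3 with only a pointer sent through the queue.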

201
Q

Question 201

CloudTrail can log API calls from? Choose the correct answer from the options below

A. The command line

B. The SDK

C. The Console

D. All of the above

A

Answer: D

CloudTrail can log all API calls made to AWS. AWS CloudTrail is a
service that enables governance, compliance, operational auditing, and risk
auditing of your AWS account. With CloudTrail, you can log, continuously
monitor, and retain events related to API calls across your AWS
infrastructure. CloudTrail provides a history of AWS API calls for your
account, including API calls made through the AWS Management Console,
AWS SDKs, command line tools, and other AWS services. This history
simplifies security analysis, resource change tracking, and troubleshooting.

For more information on AWS Cloudtrail, please refer to the below link
https://aws.amazon.com/cloudtrail/

202
Q

Question 202

What best describes Recovery Time Objective (RTO)? Choose the correct answer from the options below

A. The time it takes after a disruption to restore operations back to its
regular service level.

B. Minimal version of your production environment running on AWS.

C. A full clone of your production environment.

D. Acceptable amount of data loss measured in time.

A

Answer: A

The recovery time objective (RTO) is the targeted duration of time and a
service level within which a business process must be restored after a disaster
(or disruption) in order to avoid unacceptable consequences associated with
a break in business continuity.

Please refer to the below link for more details:
https://en.wikipedia.org/wiki/Recovery_time_objective

203
Q

Question 203

What AWS service, if used as part of your application’s architecture, has an added benefit of helping to mitigate DDoS attacks from hitting your back-end instances? Choose the correct answer from the options below

A. CloudWatch

B. CloudFront

C. CloudTrail

D. Kinesis

A

Answer: B

CloudFront sits in front of your back-end instances and absorbs and disperses traffic across its edge locations, which is why the AWS documentation lists it among the best-practice architectures for mitigating DDoS attacks.

For best practices against DDoS attacks, please visit the below link
https://d0.awsstatic.com/whitepapers/DDoS_White_Paper_June2015.pdf

204
Q

Question 204

Perfect Forward Secrecy is used to offer SSL/TLS cipher suites for which two AWS services? Choose the correct answer from the options below

A. EC2 and S3

B. CloudTrail and CloudWatch

C. Cloudfront and Elastic Load Balancing

D. Trusted advisor and GovCloud

A

Answer: C

It is currently available for CloudFront and Elastic Load Balancing.

Please find the below link for more details
https://aws.amazon.com/about-aws/whats-new/2014/02/19/elastic-load-balancing-perfect-forward-secrecy-and-more-new-security-features/

https://aws.amazon.com/blogs/aws/cloudfront-ssl-ciphers-session-ocsp-pfs/

205
Q

Question 205

A customer has a single 3-TB volume on-premises that is used to hold a large repository of images and print layout files. This repository is growing at 500 GB a year and must be presented as a single logical volume. The customer is becoming increasingly constrained with their local storage capacity and wants an off-site backup of this data, while maintaining low-latency access to their frequently accessed data. Which AWS Storage Gateway configuration meets the customer requirements?

A. Gateway-Cached volumes with snapshots scheduled to Amazon S3

B. Gateway-Stored volumes with snapshots scheduled to Amazon S3

C. Gateway-Virtual Tape Library with snapshots to Amazon S3

D. Gateway-Virtual Tape Library with snapshots to Amazon Glacier

A

Answer: A

Gateway-cached volumes let you use Amazon Simple Storage Service
(Amazon S3) as your primary data storage while retaining frequently
accessed data locally in your storage gateway. Gateway-cached volumes
minimize the need to scale your on-premises storage infrastructure, while
still providing your applications with low-latency access to their frequently
accessed data. You can create storage volumes up to 32 TiB in size and attach
to them as iSCSI devices from your on-premises application servers. Your
gateway stores data that you write to these volumes in Amazon S3 and
retains recently read data in your on-premises storage gateway’s cache and
upload buffer storage.

For more information on storage gateways, please visit the link
http://docs.aws.amazon.com/storagegateway/latest/userguide/storage-gateway-cached-concepts.html

206
Q

Question 206

Which of the following best describes what the CloudHSM has to offer? Choose the correct answer from the options given below

A. An AWS service for generating API keys

B. EBS Encryption method

C. S3 encryption method

D. A dedicated appliance that is used to store security keys

A

Answer: D

The AWS CloudHSM service helps you meet corporate, contractual and
regulatory compliance requirements for data security by using dedicated
Hardware Security Module (HSM) appliances within the AWS cloud. With
CloudHSM, you control the encryption keys and cryptographic operations
performed by the HSM.

For more information on CloudHSM, please refer to the below link https://aws.amazon.com/cloudhsm/

207
Q

Question 207

A company wants to launch EC2 instances on AWS. For the Linux instances, they want to ensure that Perl is installed automatically when the instance is launched. With which of the below configurations can you achieve what is required by the customer?

A. User data

B. EC2Config service

C. IAM roles

D. AWS Config

A

Answer: A

When you configure an instance during creation, you can add custom scripts to the User data section. So in Step 3 of creating an instance, in the Advanced Details section, we can enter custom scripts in the User Data section. Such a script can install Perl during the creation of the EC2 instance.
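A user-data script of this kind might look like the sketch below; the package-manager call assumes an Amazon Linux style AMI, so adjust it for your distribution.

```python
# Sketch: a user-data script (held here as a string) that installs Perl
# at launch. The yum commands assume an Amazon Linux style AMI.
USER_DATA = """#!/bin/bash
yum update -y
yum install -y perl
"""

# With boto3 this string would be passed at launch, e.g.:
# ec2.run_instances(..., UserData=USER_DATA)
print(USER_DATA)
```

EC2 runs the script as root on first boot, so no manual login is needed to install the package.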

208
Q

Question 208

A company is deploying a new two-tier web application in AWS. The
company wants to store their most frequently used data so that the response time for the application is improved. Which AWS service provides the solution for the company’s requirements?

A. MySQL Installed on two Amazon EC2 Instances in a single
Availability Zone

B. Amazon RDS for MySQL with Multi-AZ

C. Amazon ElastiCache

D. Amazon DynamoDB

A

Answer: C

Amazon ElastiCache is a web service that makes it easy to deploy,
operate, and scale an in-memory data store or cache in the cloud. The service improves the performance of web applications by allowing you to retrieve information from fast, managed, in-memory data stores, instead of relying entirely on slower disk-based databases

Option A is wrong because even if MySQL is installed on multiple systems, it will not help to serve the most frequently used data faster.

Option B is wrong because even the Multi-AZ option with RDS will not satisfy the customer's requirement.

Option D is wrong because this is a pure database option.

For more information on ElastiCache, please visit the link https://aws.amazon.com/elasticache/

209
Q

Question 209

Regarding the attaching of ENI to an instance, what does ‘warm attach’ refer to?

A. Attaching an ENI to an instance when it is stopped.

B. Attaching an ENI to an instance during the launch process

C. Attaching an ENI to an instance when it is running

A

Answer: A

You can attach an elastic network interface to an instance when it's running (hot attach), when it's stopped (warm attach), or when the instance is being launched (cold attach). An elastic network interface (ENI) is a virtual network interface that you can attach to an instance in a VPC. An elastic network interface can have the following:

A primary private IP address.

One or more secondary private IP addresses.

One Elastic IP address per private IP address.

One public IP address, which can be auto-assigned to the elastic network interface when you launch an instance (for more information, see Public IP Addresses for Network Interfaces).

One or more security groups.

A MAC address.

A source/destination check flag.

A description.

In the EC2 console, the instance details show where the ENI is attached. When you click on the network interface, you will get more details on it.

For more information on Elastic Network interfaces, please visit the url -
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#attach_eni_launch
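As a minimal sketch of what a warm attach involves, the snippet below builds the parameters you would pass to the EC2 AttachNetworkInterface API (e.g. boto3's `ec2.attach_network_interface`). The resource IDs are made-up placeholders, and the helper function is hypothetical; the call itself is identical for hot and warm attach, only the instance's state differs.

```python
# Hypothetical sketch: parameters for AttachNetworkInterface.
# A warm attach targets a stopped instance; a hot attach targets a
# running one - the request shape is the same in both cases.
def build_attach_request(eni_id, instance_id, device_index=1):
    return {
        "NetworkInterfaceId": eni_id,
        "InstanceId": instance_id,
        # Device index 0 is the primary interface created at launch,
        # so an additional ENI usually goes at index 1 or higher.
        "DeviceIndex": device_index,
    }

request = build_attach_request("eni-0123456789abcdef0", "i-0123456789abcdef0")
```

With real credentials you would pass this dict to `boto3.client("ec2").attach_network_interface(**request)` against a stopped instance for a warm attach.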

210
Q

Question 210

What can be used to monitor your EC2 instances and warn the
Operational Department in case there are any issues?

A. AWS Cloudtrail

B. AWS Cloudwatch

C. Configure scheduled jobs on the EC2 instance to notify the Ops
department in case of any CPU utilization hikes.

D. AWS SQS

A

Answer: B

A CloudWatch alarm is used to monitor any Amazon CloudWatch metric in your account. For example, you can create alarms on Amazon EC2 instance CPU utilization, Amazon ELB request latency, Amazon DynamoDB table throughput, Amazon SQS queue length, or even the charges on your AWS bill.

Option A is wrong because CloudTrail is used for logging purposes, not monitoring. Option C is partially workable, but since a managed AWS service is available, Option B is the better choice. Option D is wrong because SQS is a queuing service.

For more information on Cloudwatch, please visit the link -
https://aws.amazon.com/cloudwatch/faqs/
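To make the alarm idea concrete, here is a hedged sketch of the arguments for CloudWatch's PutMetricAlarm (e.g. boto3's `cloudwatch.put_metric_alarm`) that would warn the Ops department via SNS. The topic ARN, instance ID, and threshold are assumptions for illustration.

```python
# Hypothetical sketch: a CPU-utilization alarm that notifies an SNS
# topic the Ops department subscribes to. All identifiers are made up.
def build_cpu_alarm(instance_id, topic_arn, threshold=80.0):
    return {
        "AlarmName": f"high-cpu-{instance_id}",
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "Statistic": "Average",
        "Period": 300,              # evaluate in 5-minute windows
        "EvaluationPeriods": 2,     # two consecutive breaches before alarming
        "Threshold": threshold,     # percent CPU
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [topic_arn],  # notify Ops via SNS
    }

alarm = build_cpu_alarm("i-0123456789abcdef0",
                        "arn:aws:sns:us-east-1:111122223333:ops-alerts")
```

The same shape works for other metrics (ELB latency, SQS queue length) by changing the namespace and metric name.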

211
Q

Question 211

A company wants to store their primary data in S3, but at the same time they want to store frequently accessed data locally. Because they do not have the option to extend their on-premises storage, they are looking at AWS for an option. What is the best solution that can be provided?

A. an EC2 instance with EBS volumes to store the commonly used data.

B. A Redis cache for frequently accessed data and S3 for infrequently
accessed data

C. Use the Gateway Cached Volumes

D. There is no option available

A

Answer: C

Gateway-Cached Volumes provide a durable and inexpensive way to store your primary data in Amazon S3 and retain your frequently accessed data locally. Gateway-Cached Volumes provide substantial cost savings on primary storage, minimize the need to scale your storage on-premises, and provide low-latency access to your frequently accessed data. In addition to storing your primary data in Amazon S3 using Gateway-Cached Volumes, you can also take point-in-time snapshots of your Gateway-Cached volume data in Amazon S3, enabling you to make space-efficient versioned copies of your volumes for data protection and various data reuse needs.

Options A and B are invalid because manually syncing the most frequently used data onto EBS volumes or S3 would be a burden for the IT department.

For more information on Gateway-Cached Volumes, please visit the link
https://aws.amazon.com/storagegateway/faqs/

212
Q

Question 212

A customer wants to have the ability to transfer stale data from their S3 location to a low cost storage system. If there is a possibility to automate this, they would be more than happy. As an AWS Solution Architect, what is the best solution you can provide to them?

A. Use an EC2 instance and a scheduled job to transfer the stale data
from their S3 location to Amazon Glacier.

B. Use Life-Cycle Policies

C. Use AWS SQSD. There is no option, the users will have to download
the data and then transfer the data to aws manually.

A

Answer: B

With Amazon S3 lifecycle policies you can create transition actions in
which you define when objects transition to another Amazon S3 storage
class. For example, you may choose to transition objects to the
STANDARD_IA (IA, for infrequent access) storage class 30 days after
creation, or archive objects to the GLACIER storage class one year after
creation.

Follow the below steps to get this in place

Step 1) Go to the Lifecycle section of the S3 bucket and click on Add Rule

Step 2) Choose what you want to export

Step 3) Choose the Action to perform and then confirm on the Rule creation in the next screen.

For more information on Lifecycle management, click on the link
http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html
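The same rule can be defined programmatically. Below is a hedged sketch of the lifecycle configuration you would pass to boto3's `put_bucket_lifecycle_configuration`; the `logs/` prefix and the day counts are illustrative assumptions, not part of the question.

```python
# Hypothetical sketch of an S3 lifecycle configuration: stale objects
# move to STANDARD_IA after 30 days and to Glacier after a year.
def build_lifecycle_rules(prefix="logs/"):
    return {
        "Rules": [
            {
                "ID": "archive-stale-data",
                "Filter": {"Prefix": prefix},
                "Status": "Enabled",
                "Transitions": [
                    # Move to infrequent access after 30 days...
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    # ...then archive to Glacier after a year.
                    {"Days": 365, "StorageClass": "GLACIER"},
                ],
            }
        ]
    }

config = build_lifecycle_rules()
```

Once applied to a bucket, S3 performs the transitions automatically, which is exactly the automation the customer asked for.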

213
Q

Question 213

A company has a workflow that sends video files from their on-premise system to AWS for transcoding. They use EC2 worker instances that pull transcoding jobs from SQS. Why is SQS an appropriate service for this scenario?

A. SQS guarantees the order of the messages.

B. SQS synchronously provides transcoding output.

C. SQS checks the health of the worker instances.

D. SQS helps to facilitate horizontal scaling of encoding tasks.

A

Answer: D

Even though SQS guarantees message order for FIFO queues, that is not why it is
the appropriate service here. SQS is normally used to decouple systems and
helps facilitate horizontal scaling of AWS resources. SQS does not provide
transcoding output or check the health of the worker instances; instance health
checks can be done via ELB or CloudWatch.

For more information on SQS, please visit the link
- https://aws.amazon.com/sqs/faqs/

214
Q

Question 214

When creation of an EBS snapshot is initiated, but not completed, the EBS volume:

A. Can be used while the snapshot is in progress.

B. Cannot be detached or attached to an EC2 instance until the snapshot completes

C. Can be used in read-only mode while the snapshot is in progress.

D. Cannot be used until the snapshot completes.

A

Answer: A

Snapshots occur asynchronously; the point-in-time snapshot is created immediately, but the status of the snapshot is pending until the snapshot is complete (when all of the modified blocks have been transferred to Amazon S3), which can take several hours for large initial snapshots or subsequent snapshots where many blocks have changed. While it is completing, an in-progress snapshot is not affected by ongoing reads and writes to the volume. You can easily create a snapshot from a volume while the instance is running and the volume is in use. You can do this from the EC2 dashboard.

For more information on EBS snapshots, please visit the link
- http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSSnapshots.html

215
Q

Question 215

A customer needs to capture all client connection information from their ELB every five minutes. The company wants to use this data for analyzing traffic patterns and troubleshooting their applications. Which of the following options meets the customer requirements?

A. Enable AWS CloudTrail for the load balancer.

B. Enable access logs on the load balancer.

C. Install the Amazon CloudWatch Logs agent on the load balancer.

D. Enable Amazon CloudWatch metrics on the load balancer.

A

Answer: B

Elastic Load Balancing provides access logs that capture detailed information
about requests or connections sent to your load balancer. Each log contains information
such as the time it was received, the client’s IP address, latencies, request paths, and
server responses. You can use these access logs to analyze traffic patterns and to
troubleshoot issues. Perform the following steps to enable access logging:

Step 1) Go to the Description tab for your load balancer

Step 2) Go to the Attributes section and click on Edit Attributes

Step 3) In the next screen, enable Access logging and choose the S3 bucket where
the logs need to be added to.

For more information on ELB logging, please visit the link
— http://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-access-logs.html
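The console steps above map to a single API call. Here is a hedged sketch of the attribute list you would pass to the ALB ModifyLoadBalancerAttributes API (e.g. boto3's `elbv2.modify_load_balancer_attributes`); the bucket name and prefix are placeholders.

```python
# Hypothetical sketch: the flat key/value attributes that turn on ALB
# access logging and point it at an S3 bucket. Names are made up.
def build_access_log_attributes(bucket, prefix="elb-logs"):
    return [
        {"Key": "access_logs.s3.enabled", "Value": "true"},
        {"Key": "access_logs.s3.bucket", "Value": bucket},
        {"Key": "access_logs.s3.prefix", "Value": prefix},
    ]

attrs = build_access_log_attributes("my-elb-log-bucket")
```

Note the target bucket must have a policy granting the ELB service permission to write log objects, or the call is rejected.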

216
Q

Question 216

Which Amazon Elastic Compute Cloud feature can you query from within the instance to access instance properties?

A. Instance user data

B. Resource tags

C. Instance metadata

D. Amazon Machine Image

A

Answer: C

Instance metadata is data about your instance that you can use to configure or manage the running instance. Option A is incorrect because user data is what you enter when you launch an instance; the instance can access it later, but it is not a queryable set of instance properties. Option B is incorrect because tags are a feature you use to manage your instances, images, and other Amazon EC2 resources by optionally assigning your own metadata to each resource; they are not queried from within the instance.

For more information on metadata, please visit the link -
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html
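Instance metadata is served at the documented link-local address 169.254.169.254. The sketch below only builds the URL for a property; the actual request (shown commented out) works only when run from inside an EC2 instance.

```python
# EC2 instance metadata lives at a fixed link-local address, reachable
# only from within the instance itself.
METADATA_BASE = "http://169.254.169.254/latest/meta-data/"

def metadata_url(path):
    """Return the metadata URL for a property such as 'instance-id'."""
    return METADATA_BASE + path.lstrip("/")

url = metadata_url("instance-id")
# On an instance you could then fetch it, e.g.:
#   import urllib.request
#   instance_id = urllib.request.urlopen(url, timeout=2).read().decode()
```

Other useful paths include `ami-id`, `public-ipv4`, and `iam/security-credentials/`, all queried the same way.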

217
Q

Question 217

You are tasked with setting up a Linux bastion host for access to Amazon EC2 instances running in your VPC. Only clients connecting from the corporate external public IP address 72.34.51.100 should have SSH access to the host. Which option will meet the customer requirement?

A. Security Group Inbound Rule: Protocol - TCP, Port Range - 22, Source
72.34.51.100/32

B. Security Group Inbound Rule: Protocol - UDP, Port Range - 22, Source
72.34.51.100/32

C. Network ACL Inbound Rule: Protocol - UDP, Port Range - 22, Source
72.34.51.100/32

D. Network ACL Inbound Rule: Protocol - TCP, Port Range - 22,
Source 72.34.51.100/0

A

Answer: A

For SSH access, the protocol has to be TCP, so Options B and C are wrong. For a
bastion host, only the client's IP should be allowed (a /32), not the
72.34.51.100/0 given in Option D, which would match every IPv4 address, so that
option is also wrong. A bastion host is a special-purpose computer on a network
specifically designed and configured to withstand attacks. The computer
generally hosts a single application, for example a proxy server, and all other
services are removed or limited to reduce the threat to the computer. In AWS, a
bastion host is kept on a public subnet. Users log on to the bastion host via
SSH or RDP and then use that session to manage other hosts in the private
subnets. This is a security practice adopted by many organizations to secure
the assets in their private subnets.
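As a minimal sketch of the winning rule, here is the ingress permission from Option A as you would pass it to boto3's `authorize_security_group_ingress`. The helper function is hypothetical; the key point is the /32 mask restricting the source to a single address.

```python
# Hypothetical sketch: the Option A ingress rule for the bastion host.
def build_ssh_rule(client_ip):
    return {
        "IpProtocol": "tcp",        # SSH runs over TCP, never UDP
        "FromPort": 22,
        "ToPort": 22,
        # /32 restricts the rule to exactly one source address;
        # /0 would have allowed the whole internet, as in Option D.
        "IpRanges": [{"CidrIp": f"{client_ip}/32"}],
    }

rule = build_ssh_rule("72.34.51.100")
```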

218
Q

Question 218

You run an ad-supported photo sharing website using S3 to serve photos to visitors of your site. At some point you find out that other sites have been linking to the photos on your site, causing loss to your business. What is an effective method to mitigate this?

A. Remove public read access and use signed URLs with expiry dates.

B. Use Cloud Front distributions for static content.

C. Block the IPs of the offending websites in Security Groups.

D. Store photos on an EBS volume of the web server.

A

Answer: A

Removing public read access and serving the photos through signed URLs with
expiry dates ensures that only links generated by your site work, so Option A
is correct. CloudFront is used for distribution of content across edge
locations; it is not used for restricting access to content, so Option B is
wrong. Blocking IPs is challenging because they are dynamic in nature and you
will not know which sites are accessing your main site, so Option C is also not
feasible. Storing photos on an EBS volume of the web server is not a good
architecture approach for an AWS Solution Architect, so Option D is wrong as well.
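A hedged sketch of how the site could hand out short-lived signed URLs: the dict below collects the arguments for boto3's `s3.generate_presigned_url`. The call itself is shown commented out, since signing requires real AWS credentials; the bucket, key, and expiry are illustrative assumptions.

```python
# Hypothetical sketch: arguments for generating a presigned GET URL
# so photos can only be fetched via links your site issues.
def presign_request(bucket, key, expires_seconds=300):
    """Collect the arguments for s3.generate_presigned_url."""
    return {
        "ClientMethod": "get_object",
        "Params": {"Bucket": bucket, "Key": key},
        "ExpiresIn": expires_seconds,  # URL stops working after 5 minutes
    }

req = presign_request("photo-bucket", "albums/cat.jpg")
# With credentials configured you would then call:
#   url = boto3.client("s3").generate_presigned_url(**req)
```

A hotlinking site can copy a URL, but once it expires the link dies, which is what mitigates the revenue loss.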

219
Q

Question 219

What are the use case scenarios when you need Enhanced Networking? Choose 2 answers from the options given below

A. high packet-per-second performance

B. low packet-per-second performance

C. high latency networking

D. low latency networking

A

Answer: A, D

Enhanced networking uses single root I/O virtualization (SR-IOV) to provide high-
performance networking capabilities on supported instance types. SR-IOV is a method
of device virtualization that provides higher I/O performance and lower CPU utilization
when compared to traditional virtualized network interfaces. Enhanced networking
provides higher bandwidth, higher packet per second (PPS) performance, and
consistently lower inter-instance latencies

For more information on enhanced networking, please visit the link
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html
220
Q

Question 220

You are working with a customer who is using Chef Configuration management in their data center. Which service is designed to let the customer leverage existing Chef recipes in AWS?

A. Amazon Simple Workflow Service

B. AWS Elastic Beanstalk

C. AWS CloudFormation

D. AWS OpsWorks

A

Answer: D

AWS OpsWorks is a configuration management service that helps you configure
and operate applications of all shapes and sizes using Chef. You can define the
application’s architecture and the specification of each component including package
installation, software configuration and resources such as storage. Start from templates
for common technologies like application servers and databases or build your own to
perform any task that can be scripted. AWS OpsWorks includes automation to scale
your application based on time or load and dynamic configuration to orchestrate
changes as your environment scales.

For more information on OpsWorks, please visit the link -
https://aws.amazon.com/opsworks/

221
Q

Question 221

A company wants to create standard templates for deployment of their Infrastructure. Which AWS service can be used in this regard? Please choose one option.

A. Amazon Simple Workflow Service

B. AWS Elastic Beanstalk

C. AWS CloudFormation

D. AWS OpsWorks

A

Answer: C

AWS CloudFormation gives developers and systems administrators an easy way to
create and manage a collection of related AWS resources, provisioning and updating
them in an orderly and predictable fashion.

You can use AWS CloudFormation's sample templates or create your own templates to describe the AWS resources, and any associated dependencies or runtime parameters, required to run your application. You don't need to figure out the order for provisioning AWS services or the subtleties of making those dependencies work; CloudFormation takes care of this for you. After the AWS resources are deployed, you can modify and update them in a controlled and predictable way, in effect applying version control to your AWS infrastructure the same way you do with your software. You can also visualize your templates as diagrams and edit them using a drag-and-drop interface with the AWS CloudFormation Designer.

For more information on Cloudformation, please visit the link —
https://aws.amazon.com/cloudformation/

222
Q

Question 222

A company wants to create standard templates for deployment of their infrastructure. They have heard that AWS provides a service called CloudFormation which can meet their needs, but they are worried about the cost. As an AWS Architect, what advice can you give them with regard to the cost? Please choose one option.

A. You can tell them the cost is minimal and that they should not worry on that
aspect.

B. Tell them that yes, they have to bear the cost if they want automation

C. CloudFormation is a free service and you are only charged for the underlying AWS resources

D. Tell them to buy a product and implement it at their on-premises location.

A

Answer: C

There is no additional charge for AWS CloudFormation. You only pay for the AWS
resources that are created (e.g., Amazon EC2 instances, Elastic Load Balancing load
balancers etc.)

For more information on Cloudformation, please visit the link —
https://aws.amazon.com/cloudformation/

223
Q

Question 223

You have an environment that consists of a public subnet using Amazon VPC and 3 instances that are running in this subnet. These three instances can successfully communicate with other hosts on the Internet. You launch a fourth instance in the same subnet, using the same AMI and security group configuration you used for the others, but find that this instance cannot be accessed from the internet. What should you do to enable Internet access?

A. Deploy a NAT instance into the public subnet.

B. Assign an Elastic IP address to the fourth instance.

C. Configure a publicly routable IP Address in the host OS of the fourth instance.

D. Modify the routing table for the public subnet.

A

Answer: B

Option A is wrong because the instances are already in a public subnet; a NAT
instance is needed only when instances are in a private subnet. Option C is
wrong because the public IP address has to be configured in AWS and not in the
OS of the EC2 instance. Option D is wrong because if the routing table were
wrong, you would have an issue with the other three instances as well, and the
question says there is no issue with those instances.

For more information on Elastic IP’s, please visit the link:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html

224
Q

Question 224

You have a video transcoding application running on Amazon EC2. Each instance polls a queue to find out which video should be transcoded, and then runs a transcoding process. If this process is interrupted, the video will be transcoded by another instance
based on the queuing system. You have a large backlog of videos which need to be transcoded and would like to reduce this backlog by adding more instances. You will need these instances only until the backlog is reduced. Which type of Amazon EC2 instances should you use to reduce the backlog in the most cost efficient way?

A. Reserved instances

B. Spot instances

C. Dedicated instances

D. On-demand instances

A

Answer: B

Since this is like a batch-processing job, the best type of instance to use is a
Spot instance. Spot instances are normally used for batch-processing jobs. Since
these jobs don't last for the entire duration of the year, they can be bid upon,
allocated, and de-allocated as needed. Reserved or Dedicated instances cannot be
justified because this is not a continuously used application, and there is no
mention of continuous demand in the question, so there is no need for On-demand
instances either.

What is a Spot instance? These are spare, unused Amazon EC2 instances that you
can bid for. Once your bid exceeds the current spot price (which fluctuates in
real time based on demand and supply), the instance is launched. The instance
can go away anytime the spot price becomes greater than your bid price. Note
that a Spot instance is also a category of on-demand instance, but it is
obtained through low-cost bidding.

What is an On-demand instance? They let you pay for your computing capacity
needs by the hour. There is not much planning required from the user's end and
no one-time cost to pay upfront as in the case of Reserved instances. They are
suitable for use cases where you do not want any long-term commitment, like
testing and POCs, or spiky workloads that must not be interrupted.

For more information on Spot Instances, please visit the
URL - https://aws.amazon.com/ec2/spot/

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/how-spot-instances-work.html

225
Q

Question 225

You have a distributed application that periodically processes large volumes of data across multiple Amazon EC2 Instances. The application is designed to recover gracefully from Amazon EC2 instance failures. You are required to accomplish this task in the most cost-effective way. Which of the following will meet your requirements?

A. Spot Instances

B. Reserved instances

C. Dedicated instances

D. On-Demand instances

A

Answer: A

When you think of cost effectiveness, you have to choose between Spot and
Reserved instances. For a periodic processing job, the best choice is Spot
instances, and since the application is designed to recover gracefully from
Amazon EC2 instance failures, losing a Spot instance is not a problem because
the application can recover.

For more information on spot instances, please visit the link
https://aws.amazon.com/ec2/spot/

226
Q

Question 226

What are the possible Event Notifications available for S3 buckets? Please choose 3 answers from the options given below.

A. SNS

B. SES

C. SQS

D. Lambda function

A

Answer: A, C, D

Amazon S3 event notifications enable you to run workflows, send alerts, or perform
other actions in response to changes in your objects stored in Amazon S3. You can use
Amazon S3 event notifications to set up triggers to perform actions including
transcoding media files when they are uploaded, processing data files when they
become available, and synchronizing Amazon S3 objects with other data stores. When
you go to the Events section in S3, you can see the options present there for SNS, SQS
and Lambda function.
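To tie the three answer choices together, here is a hedged sketch of a single notification configuration combining all three supported targets, as you would pass it to boto3's `put_bucket_notification_configuration`. All ARNs and event choices are made-up placeholders.

```python
# Hypothetical sketch: S3 event notifications can target SNS topics,
# SQS queues, and Lambda functions - the three correct options above.
def build_notification_config(topic_arn, queue_arn, function_arn):
    return {
        "TopicConfigurations": [
            {"TopicArn": topic_arn, "Events": ["s3:ObjectCreated:*"]}
        ],
        "QueueConfigurations": [
            {"QueueArn": queue_arn, "Events": ["s3:ObjectRemoved:*"]}
        ],
        "LambdaFunctionConfigurations": [
            {"LambdaFunctionArn": function_arn,
             "Events": ["s3:ObjectCreated:Put"]}
        ],
    }

config = build_notification_config(
    "arn:aws:sns:us-east-1:111122223333:uploads",
    "arn:aws:sqs:us-east-1:111122223333:removals",
    "arn:aws:lambda:us-east-1:111122223333:function:process-upload",
)
```

There is no SES target type, which is why Option B is wrong.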

227
Q

Question 227

A company needs to deploy services to an AWS region which they have not previously used. The company currently has an AWS Identity and Access Management (IAM) role for the Amazon EC2 instances, which permits the instances to access Amazon DynamoDB. The company wants their EC2 instances in the new region to have
the same privileges. How should the company achieve this?

A. Create a new IAM role and associated policies within the new region

B. Assign the existing IAM role to the Amazon EC2 instances in the new region

C. Copy the IAM role and associated policies to the new region and attach it to the
instances

D. Create an Amazon Machine Image (AMI) of the instance and copy it to the desired region using the AMI Copy feature

A

Answer: B

Since you already have an existing role, you don't need to create a new one, so Option A is wrong.

Remember that IAM roles are a global service available across all regions, so Option C is also wrong.

Option D is wrong because this has to do with roles; there is no need to create an AMI image.

So when you create a role, choose the Amazon EC2 option in the Select Role type. In the next screen, you can select the Amazon DynamoDB type of access required. Once the role is created, choose the role in the Configure Instance Details screen when creating the EC2 instance.

228
Q

Question 228

You are deploying an application to collect votes for a very popular television show. Millions of users will submit votes using mobile devices. The votes must be collected into a durable, scalable, and highly available data store for real-time public tabulation. Which service should you use?

A. Amazon DynamoDB

B. Amazon Redshift

C. Amazon Kinesis

D. Amazon Simple Queue Service

A

Answer: A

Amazon DynamoDB is a fully managed NoSQL database service that provides fast
and predictable performance with seamless scalability. Amazon DynamoDB enables
customers to offload the administrative burdens of operating and scaling
distributed databases to AWS, so they don't have to worry about hardware
provisioning, setup and configuration, replication, software patching, or
cluster scaling. DynamoDB is a durable, scalable, and highly available data
store in AWS and can be used for real-time tabulation. Option B is wrong
because Redshift is a petabyte-scale storage engine used where an OLAP solution
is required. Option C is wrong because Kinesis is used for processing streams,
not for storage. Option D is wrong because SQS is a decoupling solution.

For more information on Amazon DynamoDB, please visit
https://aws.amazon.com/dynamodb/faqs/

229
Q

Question 229

Which of the below AWS services allows you to run code without the need to host EC2 instances?

A. AWS Lambda

B. AWS IoT

C. AWS SQS

D. AWS SES

A

Answer: A

AWS Lambda lets you run code without provisioning or managing servers. You pay
only for the compute time you consume - there is no charge when your code is not
running. With Lambda, you can run code for virtually any type of application or
backend service - all with zero administration. Just upload your code and Lambda takes
care of everything required to run and scale your code with high availability. You can
set up your code to automatically trigger from other AWS services or call it directly
from any web or mobile app

For more information on Amazon Lambda, please visit
https://aws.amazon.com/lambda/?nc2=h_m1

230
Q

Question 230

You are deploying an application to track GPS coordinates of delivery trucks in the United States. Coordinates are transmitted from each delivery truck once every three seconds. You need to design an architecture that will enable real-time processing of these coordinates from multiple consumers. Which service should you use to
implement data ingestion?

A. Amazon Kinesis

B. AWS Data Pipeline

C. Amazon AppStream

D. Amazon Simple Queue Service

A

Answer: A

Use Amazon Kinesis Streams to collect and process large streams of data records in
real time. You’ll create data-processing applications, known as Amazon Kinesis Streams
applications. A typical Amazon Kinesis Streams application reads data from an Amazon
Kinesis stream as data records. These applications can use the Amazon Kinesis Client
Library, and they can run on Amazon EC2 instances. The processed records can be sent
to dashboards, used to generate alerts, dynamically change pricing and advertising
strategies, or send data to a variety of other AWS services

For more information on Amazon Kinesis, please visit
http://docs.aws.amazon.com/streams/latest/dev/introduction.html
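For the GPS scenario, ingestion boils down to one PutRecord call per coordinate. Below is a hedged sketch of the arguments for boto3's `kinesis.put_record`; the stream name and payload shape are assumptions for illustration.

```python
import json

# Hypothetical sketch: one PutRecord call per GPS reading.
def build_gps_record(truck_id, lat, lon):
    return {
        "StreamName": "truck-coordinates",
        # Partitioning by truck keeps each truck's points in order
        # within a shard while spreading trucks across shards.
        "PartitionKey": truck_id,
        "Data": json.dumps({"truck": truck_id, "lat": lat, "lon": lon}),
    }

record = build_gps_record("truck-42", 40.7128, -74.0060)
```

Multiple consumers (dashboards, alerting, archival) can then read the same stream independently, which is what distinguishes Kinesis from SQS here.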

231
Q

Question 231

You have an application running on an Amazon Elastic Compute Cloud instance that uploads 5 GB video objects to Amazon Simple Storage Service (S3). Video uploads are taking longer than expected, resulting in poor application performance. Which method will help improve performance of your application?

A. Enable enhanced networking

B. Use Amazon S3 multipart upload

C. Leveraging Amazon CloudFront, use the HTTP POST method to reduce latency.

D. Use Amazon Elastic Block Store Provisioned IOPs and use an Amazon EBS-optimized instance

A

Answer: B

When uploading large videos it is always better to make use of S3 multipart
upload, which also lets you resume on failure. The advantages of multipart
upload are:

Improved throughput - you can upload parts in parallel to improve throughput.

Quick recovery from network issues - a smaller part size minimizes the impact
of restarting a failed upload due to a network error.

Pause and resume object uploads - you can upload object parts over time. Once
you initiate a multipart upload there is no expiry; you must explicitly
complete or abort the multipart upload.

Begin an upload before you know the final object size - you can upload an
object as you are creating it.

For more information on Multi-part file upload
for S3, please visit the URL - http://docs.aws.amazon.com/AmazonS3/latest/dev/qfacts.html
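A minimal sketch of how a 5 GB video would be split: S3's documented limits are parts of 5 MiB to 5 GiB (the last part may be smaller) and up to 10,000 parts per upload. The 100 MiB part size chosen here is an assumption for illustration.

```python
import math

MIN_PART = 5 * 1024**2        # 5 MiB minimum part size (except the last part)
MAX_PARTS = 10_000            # documented per-upload part limit

def plan_parts(object_size, part_size=100 * 1024**2):
    """Return the number of parts needed, validating S3's limits."""
    if part_size < MIN_PART:
        raise ValueError("part size below the 5 MiB minimum")
    parts = math.ceil(object_size / part_size)
    if parts > MAX_PARTS:
        raise ValueError("too many parts; increase the part size")
    return parts

five_gb = 5 * 1024**3
parts = plan_parts(five_gb)   # 5 GiB in 100 MiB parts -> 52 parts
```

Each of those parts can be uploaded in parallel and retried individually, which is exactly why multipart upload improves the application's performance.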

232
Q

Question 232

A customer wants to track access to their Amazon Simple Storage Service (S3) buckets and also use this information for their internal security and access audits. Which of the following will meet the customer requirements?

A. Enable AWS CloudTrail to audit all Amazon S3 bucket access.

B. Enable server access logging for all required Amazon S3 buckets.

C. Enable the Requester Pays option to track access via AWS Billing

D. Enable Amazon S3 event notifications for Put and Post.

A

Answer: B

Logging provides a way to get detailed access logs delivered to a bucket you
choose. An access log record contains details about the request, such as the
request type, the resources specified in the request, and the time and date the
request was processed. Since you don't want logging of every AWS service, there
is no need for CloudTrail, so you can eliminate Option A. Option C is not valid
because Requester Pays refers to billing. Option D is invalid because event
notifications are different from logging. To enable logging, just go to the
Logging section of your S3 bucket.

For more information
on S3 Logging, please visit the URL -
http://docs.aws.amazon.com/AmazonS3/latest/UG/ManagingBucketLogging.html
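Programmatically, enabling server access logging is one call. Here is a hedged sketch of the BucketLoggingStatus payload you would pass to boto3's `put_bucket_logging`; the bucket names and prefix are placeholders.

```python
# Hypothetical sketch: the payload that enables server access logging.
# The target bucket must grant the S3 log delivery service permission
# to write before this takes effect.
def build_logging_status(target_bucket, prefix="access-logs/"):
    return {
        "LoggingEnabled": {
            "TargetBucket": target_bucket,
            "TargetPrefix": prefix,  # keeps log objects grouped together
        }
    }

status = build_logging_status("my-audit-log-bucket")
```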

233
Q

Question 233

A company is deploying a two-tier, highly available web application to AWS. Which
service provides durable storage for static content while utilizing lower overall
CPU resources for the web tier?

A. Amazon EBS volume

B. Amazon S3

C. Amazon EC2 instance store

D. Amazon RDS instance

A

Answer: B

When you think of storage, the automatic choice should be Amazon S3. Amazon S3 is
storage for the Internet. It's a simple storage service that offers software developers a
highly scalable, reliable, and low-latency data storage infrastructure at very low costs.

For more information on S3, please visit the URL -
https://aws.amazon.com/s3/faqs/

234
Q

Question 234

A company is building a two-tier web application to serve dynamic transaction-based content. The data tier is leveraging an Online Transactional Processing (OLTP) database. What services should you leverage to enable an elastic and scalable web tier?

A. Elastic Load Balancing, Amazon EC2, and Auto Scaling

B. Elastic Load Balancing, Amazon RDS with Multi-AZ, and Amazon S3

C. Amazon RDS with Multi-AZ and Auto Scaling

D. Amazon EC2, Amazon Dynamo DB, and Amazon S3

A

Answer: A

The question mentions a scalable web tier and not a database tier, so Options B,
C, and D are automatically eliminated, since we do not need a database option.
For example, an Elastic Load Balancer in front of two EC2 instances managed by
Auto Scaling is an elastic and scalable web tier: by scalable we mean that the
Auto Scaling process will increase or decrease the number of EC2 instances as
required.

235
Q

Question 235

You are designing a web application that stores static assets in an Amazon Simple Storage Service (S3) bucket. You expect this bucket to immediately receive over 150 PUT requests per second. What should you do to ensure optimal performance?

A. Use multi-part upload.

B. Add a random prefix to the key names.

C. Amazon S3 will automatically manage performance at this scale.

D. Use a predictable naming scheme, such as sequential numbers or date time sequences, in the key names

A

Answer: B

If your workload in an Amazon S3 bucket routinely exceeds 100
PUT/LIST/DELETE requests per second, or more than 300 GET requests per second,
then you should follow some guidelines for your S3 bucket. One way to introduce
randomness into key names is to add a hash string as a prefix. For example, you
can compute an MD5 hash of the character sequence that you plan to assign as
the key name and prepend a few characters of it.

For performance considerations, please visit the URL
http://docs.aws.amazon.com/AmazonS3/latest/dev/request-rate-perf-considerations.html
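A minimal sketch of the hash-prefix technique described above: prepend a few characters of an MD5 hash so that sequential key names spread across S3's index partitions. (Note: current S3 scales per prefix automatically, but this matches the guidance in the linked document.)

```python
import hashlib

def randomized_key(key_name, prefix_len=4):
    """Prefix a key with the first few hex chars of its MD5 hash."""
    digest = hashlib.md5(key_name.encode("utf-8")).hexdigest()
    return f"{digest[:prefix_len]}-{key_name}"

# Date-based names like these would otherwise all share one prefix.
keys = [randomized_key(f"2017-03-01/photo{i}.jpg") for i in range(3)]
```

The prefix is deterministic, so the application can recompute it from the original name when it needs to read the object back.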

236
Q

Question 236

A company has an AWS account that contains three VPCs (Dev, Test, and Prod) in the same region. Test is peered to both Prod and Dev. All VPCs have non-overlapping CIDR blocks. The company wants to push minor code releases from Dev to Prod to speed up time to market. Which of the following options helps the company accomplish this?

A. Create a new peering connection Between Prod and Dev along with appropriate routes.

B. Create a new entry to Prod in the Dev route table using the peering connection as the target.

C. Attach a second gateway to Dev. Add a new entry in the Prod route table identifying the gateway as the target.

D. The VPCs have non-overlapping CIDR blocks in the same account. The route tables contain local routes for all VPCs.

A

Answer: A

Transitive VPC peering is not allowed in AWS. Here, the Test VPC is peered with
both the Dev VPC and the Prod VPC, but in order to establish private
communication between Dev and Prod resources, a new VPC peering connection has
to be created between the Dev VPC and the Prod VPC. A VPC peering connection is
a networking connection between two VPCs that enables you to route traffic
between them using private IP addresses. Instances in either VPC can
communicate with each other as if they are within the same network. You can
create a VPC peering connection between your own VPCs, or with a VPC in another
AWS account within a single region. As an example, if VPC A is peered with
VPC B and with VPC C, VPC B still cannot communicate with VPC C because there
is no peering between them. In the same way, since there is no peering between
Prod and Dev in this question, the only way for them to communicate is to set
up a VPC peering connection between them.

For more information on VPC peering, please visit the url
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-peering.html
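The non-transitive rule can be sketched as a simple reachability check over explicit peering edges. This is a toy model, not an AWS API; the VPC names are illustrative:

```python
def can_communicate(vpc_a: str, vpc_b: str, peerings: list) -> bool:
    """VPC peering is non-transitive: two VPCs can exchange private
    traffic only if a peering connection exists directly between them.
    Traffic never hops through an intermediate peered VPC."""
    return frozenset((vpc_a, vpc_b)) in {frozenset(p) for p in peerings}

# Test is peered to both Dev and Prod, but Dev and Prod are not peered.
peerings = [("Test", "Dev"), ("Test", "Prod")]
print(can_communicate("Dev", "Test", peerings))  # True
print(can_communicate("Dev", "Prod", peerings))  # False, until a Dev-Prod peering is created
```

Adding `("Dev", "Prod")` to the list (plus the corresponding route table entries in a real VPC) is what option A accomplishes.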

237
Q

Question 237

What is the service provided by AWS that allows developers to easily deploy and manage applications on the cloud?

A. CloudFormation

B. Elastic Beanstalk

C. Opswork

D. Container service

A

Answer: B

AWS Elastic Beanstalk makes it even easier for developers to quickly deploy and
manage applications in the AWS Cloud. Developers simply upload their application, and Elastic Beanstalk automatically handles the deployment details of capacity
provisioning, load balancing, auto-scaling, and application health monitoring.

For more information on Elastic Beanstalk, please visit the URL
https://aws.amazon.com/elasticbeanstalk/faqs/

238
Q

Question 238

What is the service provided by AWS that lets connected devices interact with cloud-based applications? Please choose one answer from the options below.

A. CloudFormation

B. Elastic Beanstalk

C. AWS IoT

D. Container service

A

Answer: C

AWS IoT is a managed cloud platform that lets connected devices easily and
securely interact with cloud applications and other devices. AWS IoT can support
billions of devices and trillions of messages, and can process and route those messages
to AWS endpoints and to other devices reliably and securely. With AWS IoT, your
applications can keep track of and communicate with all your devices, all the time, even
when they aren’t connected.

For more information on aws IoT, please visit the URL
https://aws.amazon.com/iot/

239
Q

Question 239

An account has an ID of 085566624145. Which of the below mentioned URLs would you provide to the IAM user to log in to AWS?

A. https://085566624145.signin.aws.amazon.com/console

B. https://signin.085566624145.aws.amazon.com/console

C. https://signin.aws.amazon.com/console

D. https://aws.amazon.com/console

A

Answer: A

After you create IAM users and passwords for each, users can sign in to the AWS
Management Console for your AWS account with a special URL. By default, the sign-in
URL for your account includes your account ID. You can create a unique sign-in URL
for your account so that the URL includes a name instead of an account ID. By default
the URL will be of the format shown below:
https://AWS-account-ID-or-alias.signin.aws.amazon.com/console
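The default URL format above is a straightforward string template. A minimal sketch (the function name is illustrative):

```python
def iam_signin_url(account_id_or_alias: str) -> str:
    """Build the default IAM console sign-in URL for an AWS account,
    following the format https://<account-ID-or-alias>.signin.aws.amazon.com/console"""
    return f"https://{account_id_or_alias}.signin.aws.amazon.com/console"

print(iam_signin_url("085566624145"))
```

Substituting an account alias instead of the numeric ID yields the friendlier custom sign-in URL.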

240
Q

Question 240

What are the different types of identities available in AWS? Please choose 3 answers from the options given below.

A. Roles

B. Users

C. EC2 Instances

D. Groups

A

Answer: A, B, D

An IAM user is an entity that you create in AWS. The IAM user represents the
person or service who uses it to interact with AWS. An IAM group is a
collection of IAM users. You can use groups to specify permissions for a collection of
users, which can make those permissions easier to manage. An
IAM role is very similar to a user, in that it is an identity with permission policies that determine what the identity can and cannot do in AWS.

For more information on Identities, please visit the URL
http://docs.aws.amazon.com/IAM/latest/UserGuide/id.html

241
Q

Question 241

Your company currently has an on-premise infrastructure. They are currently running low on storage and want to have the ability to extend their storage on to the cloud. Which of the following AWS services can help achieve this purpose?

A. Amazon EC2

B. Amazon Storage gateways

C. Amazon Storage devices

D. Amazon SQS

A

Answer: B

The AWS Documentation mentions the following on storage gateways AWS
Storage Gateway connects an on-premises software appliance with cloud-based storage
to provide seamless integration with data security features between your on-premises
IT environment and the Amazon Web Services (AWS) storage infrastructure. You can
use the service to store data in the AWS Cloud for scalable and cost-effective storage
that helps maintain data security.

For more information on Storage Gateways, please refer to the below URL:
http://docs.aws.amazon.com/storagegateway/latest/userguide/WhatIsStorageGateway.html

242
Q

Question 242

If you want to process data in real-time, what AWS service should you use? Choose the correct answer from the options below.

A. Kinesis

B. DynamoDB

C. Elastic MapReduce

D. Redshift

A

Answer: A

Amazon Kinesis is a platform for streaming data on AWS, offering powerful
services to make it easy to load and analyze streaming data, and also providing the
ability for you to build custom streaming data applications for specialized needs. Web
applications, mobile devices, wearables, industrial sensors, and many software
applications and services can generate staggering amounts of streaming data —
sometimes TBs per hour — that need to be collected, stored, and processed
continuously. Amazon Kinesis services enable you to do that simply and at a low cost.

For more information on Kinesis, please refer to the below link
https://aws.amazon.com/kinesis/

243
Q

Question 243

After an Amazon Kinesis consumer processes the records of a stream, which of the following are the preferred data stores in which the consumer can store the resulting records? Choose 3 answers from the options given below:

A. Amazon S3

B. DynamoDB

C. Amazon Redshift

D. SQS

A

Answer: A, B, C

In Amazon Kinesis, the producers continually push data to Streams and the consumers process the data in real time. Consumers can store their results using an
AWS service such as Amazon DynamoDB, Amazon Redshift, or Amazon S3. It is better
to put the records in a persistent data store for any further processing at a later point in
time.

For more information on the key concepts of Amazon Kinesis, please refer to the
below link: http://docs.aws.amazon.com/streams/latest/dev/key-concepts.html

244
Q

Question 244

Your company is currently running EC2 instances in the Europe region. These instances are based on pre-built AMIs. They now want to implement disaster recovery. What is one of the steps they would need to implement for disaster recovery? Choose the correct answer from the options given below

A. Copy the AMI from the current region to another region, modify any Auto Scaling groups if required in the backup region to use the new AMI ID in the backup region

B. Modify the image permissions to share the AMI with another account, then set the default region to the backup region

C. Nothing, because all AMI’s are available in any region as long as it is created within the same account

D. Modify the image permissions to share to the designated backup region

A

Answer: A

In order to implement disaster recovery, you need to copy the AMI to the desired
region, since AMIs are region-specific.

For more information on AMI’s, please visit the below url
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html

245
Q

Question 245

You are using IoT sensors to monitor all data by using Kinesis with the default settings. You then send the data to an S3 bucket after 2 days. When you go to interpret the data in S3 there is only data for the last day and nothing for the first day. Which of the following is the most probable cause of this? Choose the correct answer from the options below

A. Temporary loss of IoT device

B. You cannot send Kinesis data to the same bucket on consecutive days.

C. Data records are only accessible for a default of 24 hours from the time they are added to a stream.

D. The access to the S3 bucket is not given to the Kinesis stream

A

Answer: C

By default, records of a stream are accessible for up to 24 hours from the time they
are added to the stream. You can raise this limit to up to 7 days by enabling extended
data retention. Since the Kinesis stream was created with the default settings, the first
day's records had already expired before the data was sent to S3.

For more information on Kinesis streams, please visit the below url
https://aws.amazon.com/kinesis/streams/faqs/
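The retention behavior can be sketched as a simple time-window check. This is a toy model of the rule, not the Kinesis API; the 7-day extended limit reflects the service at the time this guide was written:

```python
from datetime import datetime, timedelta

DEFAULT_RETENTION = timedelta(hours=24)  # Kinesis default retention period

def record_available(added_at: datetime, read_at: datetime,
                     retention: timedelta = DEFAULT_RETENTION) -> bool:
    """A stream record can only be read within the retention window
    that starts when the record is added to the stream."""
    return read_at - added_at <= retention

added = datetime(2017, 1, 1, 12, 0)
print(record_available(added, added + timedelta(hours=12)))  # True
print(record_available(added, added + timedelta(days=2)))    # False: past the 24h default
```

Passing `retention=timedelta(days=7)` models the extended-retention setting that would have kept the first day's records readable.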

246
Q

Question 246

You are configuring EC2 instances in a subnet which currently is in a VPC with an Internet gateway attached. All of these instances are able to be accessed from the internet. You then launch another subnet and launch an EC2 instance in it, but you are not able to access the EC2 instance from the internet. What could be the possible two reasons for this? Select 2 options.

A. The EC2 instance does not have a public IP address associated with it

B. The EC2 instance is not a member of the same Auto Scaling group/policy

C. The EC2 instance is running in an availability zone that does not support Internet gateways

D. The route table of the new subnet is not configured to send traffic from the instance to the Internet through the internet gateway

A

Answer: A,D

The new subnet was likely created as a private subnet: its route table was not updated
with a route to the internet gateway, and no public IP address was attached to the
EC2 instance.

For more information on VPC and subnets, please visit the below url
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario2.html
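Both conditions must hold for internet reachability, which can be sketched as a toy check (the route table is modeled as a dict of destination CIDR to target; the IDs are illustrative):

```python
def reachable_from_internet(has_public_ip: bool, subnet_routes: dict) -> bool:
    """An instance is reachable from the internet only when it has a
    public IP address AND its subnet's route table sends default traffic
    (0.0.0.0/0) to an internet gateway (igw-*)."""
    return has_public_ip and subnet_routes.get("0.0.0.0/0", "").startswith("igw-")

print(reachable_from_internet(True, {"0.0.0.0/0": "igw-1a2b3c4d"}))   # True
print(reachable_from_internet(False, {"0.0.0.0/0": "igw-1a2b3c4d"}))  # False: no public IP
print(reachable_from_internet(True, {"10.0.0.0/16": "local"}))        # False: no IGW route
```

The two failing cases correspond to answers A and D in the question above.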

247
Q

Question 247

You are the system administrator for your company’s AWS account of
approximately 100 IAM users. A new company policy has just been introduced that will change the access of 20 of the IAM users to have a particular sort of access to S3 buckets. How can you implement this effectively so that there is no need to apply the policy at the individual user level? Choose the correct answer from the options below

A. Use IAM groups and add users, based upon their role, to different groups and apply the policy to the group

B. Create a policy and apply it to multiple users using a JSON script

C. Create an S3 bucket policy with unlimited access which includes each user’s AWS account ID

D. Create a new role and add each user to the IAM role

A

Answer: A

The best option is to group the set of users in a group and then apply a policy with
the required access to the group.

For more information on IAM Groups, please visit the below url http://docs.aws.amazon.com/IAM/latest/UserGuide/id_groups.html

248
Q

Question 248

As a system administrator, you have been requested to implement the best practices for using Autoscaling, SQS and EC2. Which of the following items is not a best practice?

A. Use the same AMI across all regions

B. Utilize AutoScaling to deploy new EC2 instances if the SQS queue grows too large

C. Utilize CloudWatch alarms to alert when the number of messages in the SQS queue grows too large

D. Utilize an IAM role to grant EC2 instances permission to modify the SQS queue

A

Answer: A

AMIs differ from region to region, hence this is not a best practice. You need to
copy the AMI from region to region if you want to implement disaster recovery as a
best practice.

For more information on AMI’s, please visit the below url
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html

249
Q

Question 249

A company is currently using Autoscaling for their application. A new AMI now needs to be used for launching the EC2 instances. Which of the following changes needs to be carried out? Choose an answer from the options below

A. Nothing, you can start directly launching instances in the Autoscaling group

B. Create a new launch configuration

C. Create a new target group

D. Create a new target group and launch configuration

A

Answer: B

Since the AMI has changed, you need to create a new launch configuration that can
be used by the Autoscaling group.

For more information on Launch configuration, please visit the below url
http://docs.aws.amazon.com/autoscaling/latest/userguide/LaunchConfiguration.html

250
Q

Question 250

In order to add current EC2 instances to an Autoscaling group, which of the following criteria must be met? Choose 3 options from the answers given below

A. The instance is in the stopped state.

B. The AMI used to launch the instance must still exist.

C. The instance is not a member of another Auto Scaling group.

D. The instance is in the same Availability Zone as the Auto Scaling group.

A

Answer: B, C, D

These criteria are given in the AWS documentation.

For more information on adding instances to Autoscaling groups, please visit the below url
http://docs.aws.amazon.com/autoscaling/latest/userguide/attach-instance-asg.html

251
Q

Question 251

When designing an application architecture utilizing EC2 instances and the ELB, which questions are important for determining the instance size required for your application?
Choose the 2 correct answers from the options below

A. Determine the required I/O operations

B. Determining the minimum memory requirements for an application

C. Determining where the client intends to serve most of the traffic

D. Determining the peak expected usage for a clients application

A

Answer: A, B

When deciding which EC2 instances to use, you need to know the I/O and
memory requirements. These are some of the core components of an EC2 instance type.

For more information on EC2 instance types, please visit the below url
https://aws.amazon.com/ec2/instance-types/

252
Q

Question 252

You have an order processing system which is currently using SQS. It was noticed that an order was processed twice, which led to great customer dissatisfaction. Your management has requested that this should not happen in the future. What can you do to avoid this happening in the future? Choose an answer from the options given below

A. Change the retention period of SQS

B. Change the visibility timeout of SQS

C. Change the system to use SWF

D. Change the message size in SQS

A

Answer: C

Amazon SWF promotes a separation between the control flow of your background
job’s stepwise logic and the actual units of work that contain your unique business logic.
This allows you to separately manage, maintain, and scale “state machinery” of your
application from the core business logic that differentiates it. As your business
requirements change, you can easily change application logic without having to worry
about the underlying state machinery, task dispatch, and flow control. When you use
SWF you are guaranteed that a message will be processed only once.

For more information on SWF, please visit the below url
https://aws.amazon.com/swf/
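The underlying problem is that standard SQS offers at-least-once delivery, so a message can occasionally be handed to a consumer twice; SWF guarantees a task is assigned only once. The sketch below is a toy consumer-side idempotency guard showing why a duplicate delivery otherwise causes double processing (names are illustrative, and this is not a substitute for SWF's built-in guarantee):

```python
processed_orders = set()

def process_order(order_id: str) -> bool:
    """Idempotent consumer: process each order at most once, even if the
    queue (at-least-once delivery, as with SQS) hands us a duplicate."""
    if order_id in processed_orders:
        return False              # duplicate delivery, skipped
    processed_orders.add(order_id)
    return True                   # processed for the first time

deliveries = ["order-1", "order-2", "order-1"]  # order-1 delivered twice
results = [process_order(o) for o in deliveries]
print(results)  # [True, True, False]
```

Without the `processed_orders` check, the second delivery of `order-1` would be processed again, which is exactly the scenario the question describes.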

253
Q

Question 253

You have a couple of EC2 instances that have just been added to an ELB. You have verified that the right security groups are open for port 80 for HTTP. But the EC2 instances are still showing out of service. What could be one of the possible reasons for this? Choose an answer from the options given below

A. The EC2 instances are using the wrong AMI

B. The page used for the health check does not exist on the EC2 instance

C. The wrong instance type was used for the EC2 instance

D. The wrong subnet was used

A

Answer: B

When defining a health check, in addition to the port number and protocol, you
also have to define the page that will be used for the health check. If that page does not
exist on the web server, the health check will always fail.

For more information on Health checks, please visit the below url
http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-healthchecks.html

254
Q

Question 254

Your web application front end consists of multiple EC2 instances behind an Elastic Load Balancer. You configured ELB to perform health checks on these EC2 instances, if an instance fails to pass health checks, which statement will be true?

A. The instance gets terminated automatically by the ELB.

B. The instance gets quarantined by the ELB for root cause analysis.

C. The instance is replaced automatically by the ELB.

D. The ELB stops sending traffic to the instance that failed its health check

A

Answer: D

To discover the availability of your EC2 instances, a load balancer periodically
sends pings, attempts connections, or sends requests to test the EC2 instances. These
tests are called health checks. The status of the instances that are healthy at the time of
the health check is InService. The status of any instances that are unhealthy at the time
of the health check is OutOfService.

The load balancer performs health checks on all registered instances, whether the
instance is in a healthy state or an unhealthy state. The load balancer routes requests
only to the healthy instances. When the load balancer determines that an instance is
unhealthy, it stops routing requests to that instance. The load balancer resumes routing
requests to the instance when it has been restored to a healthy state. You can see the
status of the instance in the Registered Instances section of the load balancer.
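The routing rule described above reduces to filtering the registered instances by health status. A minimal sketch (instance IDs and the dict-based fleet model are illustrative):

```python
def route_targets(instances: dict) -> list:
    """Return only instances whose status is InService; the load balancer
    stops routing requests to any instance marked OutOfService, and
    resumes once the instance passes health checks again."""
    return [name for name, status in instances.items() if status == "InService"]

fleet = {"i-aaa": "InService", "i-bbb": "OutOfService", "i-ccc": "InService"}
print(route_targets(fleet))  # ['i-aaa', 'i-ccc']
```

Note that `i-bbb` is neither terminated nor replaced, matching answer D: it simply receives no traffic until it is healthy again.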

255
Q

Question 255

A company is currently using SWF for their order processing. Some of the orders seem to be stuck for 3 weeks. What could be the possible reason for this? Choose the correct answer from the options below

A. SWF is awaiting human input from an activity task.

B. The last task has exceeded SWF’s 14-day maximum task execution time

C. The workflow has exceeded SWF’s 14-day maximum workflow execution time

D. SWF is not the right service to be used

A

Answer: A

The issue is probably due to the fact that a human interaction, such as an
approval, is required for the orders to be further processed.

For more information on SWF, please visit the below url
https://aws.amazon.com/swf/

256
Q

Question 256

You have a web application running on six Amazon EC2 instances, consuming about 45% of resources on each instance. You are using auto-scaling to make sure that six instances are running at all times. The number of requests this application processes is consistent and does not experience spikes. The application is critical to your business and you want high availability at all times. You want the load to be distributed evenly between all instances. You also want to use the same Amazon Machine Image (AMI) for all instances. Which of the following architectural choices should you make?

A. Deploy 6 EC2 instances in one availability zone and use Amazon Elastic Load Balancer.

B. Deploy 3 EC2 instances in one region and 3 in another region and use Amazon Elastic Load Balancer.

C. Deploy 3 EC2 instances in one availability zone and 3 in another availability zone and use Amazon Elastic Load Balancer.

D. Deploy 2 EC2 instances in three regions and use Amazon Elastic Load Balancer.

A

Answer: C

Option A is incorrect because the question asks for high availability: if that single
AZ goes down, the entire application fails.

For options B and D, the ELB is designed to run only within one region in AWS, not
across multiple regions, so these options are wrong. The correct option is C.

257
Q

Question 257

When you put objects in Amazon S3, what is the indication that an object was successfully stored?

A. HTTP 200 result code and MD5 checksum, taken together, indicate that the operation was successful.

B. Amazon S3 is engineered for 99.999999999% durability. Therefore there is no need to confirm that data was inserted.

C. A success code is inserted into the S3 object metadata.

D. Each S3 account has a special bucket named _s3_logs. Success codes are written to this bucket with a timestamp and checksum.

A

Answer: A

When an object is placed in S3, it is done via an HTTP POST or PUT object
request. On success, you will get a 200 HTTP response. But since a 200
response can also contain error information, a check of the MD5 checksum confirms
whether the request was a success or not.

For more information on the POST request for an object in S3, please visit the link:
http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPOST.html

For more information on the PUT request for an object in S3, please visit the link:
http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html
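The verification step can be sketched as comparing a locally computed MD5 digest against the ETag returned by S3. As an assumption worth noting: the ETag equals the body's MD5 only for simple (non-multipart, non-SSE-KMS) uploads, and the function name is illustrative:

```python
import hashlib

def upload_succeeded(status_code: int, body: bytes, returned_etag: str) -> bool:
    """A 200 response plus a matching MD5 checksum (the ETag for a
    simple, single-part PUT) confirms the object was stored intact."""
    local_md5 = hashlib.md5(body).hexdigest()
    return status_code == 200 and returned_etag.strip('"') == local_md5

body = b"hello world"
etag = '"' + hashlib.md5(body).hexdigest() + '"'
print(upload_succeeded(200, body, etag))          # True
print(upload_succeeded(200, body, '"deadbeef"'))  # False: checksum mismatch
```

Alternatively, sending a `Content-MD5` header with the PUT lets S3 itself reject a corrupted body.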

258
Q

Question 258

An instance is launched into a VPC subnet with the network ACL configured to allow all inbound traffic and deny all outbound traffic. The instance’s security group is configured to allow SSH from any IP address and deny all outbound traffic. What changes need to be made to allow SSH access to the instance?

A. The outbound security group needs to be modified to allow outbound traffic.

B. The outbound network ACL needs to be modified to allow outbound traffic.

C. Nothing, it can be accessed from any IP address using SSH.

D. Both the outbound security group and outbound network ACL need to be modified to allow outbound traffic.

A

Answer: B

For an EC2 instance to allow SSH, the inbound rules of both the security group and
the network ACL must allow the traffic. For outbound traffic, the network ACL must
also explicitly allow the response, while a ‘Deny’ in the security group’s outbound rules
does not matter.

The reason the network ACL has to allow both inbound and outbound traffic is that
network ACLs are stateless: responses to allowed inbound traffic are subject to the
rules for outbound traffic (and vice versa). Security groups, on the other hand, are
stateful: if an incoming request is allowed, the outgoing response is automatically
allowed as well.
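The stateful-versus-stateless distinction can be sketched as a toy evaluation of whether the SSH reply packet gets out (parameter names are illustrative; inbound SSH is assumed allowed by both layers):

```python
def ssh_reply_allowed(sg_outbound_allows: bool, nacl_outbound_allows: bool) -> bool:
    """Security groups are stateful: the reply to an allowed inbound SSH
    connection passes regardless of outbound SG rules (so the
    sg_outbound_allows argument is deliberately ignored). Network ACLs
    are stateless: the reply must match an explicit outbound allow rule."""
    sg_ok = True                     # stateful: reply tracked, always passes
    nacl_ok = nacl_outbound_allows   # stateless: needs its own outbound rule
    return sg_ok and nacl_ok

print(ssh_reply_allowed(sg_outbound_allows=False, nacl_outbound_allows=False))  # False
print(ssh_reply_allowed(sg_outbound_allows=False, nacl_outbound_allows=True))   # True
```

This is why only the network ACL's outbound rules need modifying (answer B): the security group's outbound deny is irrelevant to established sessions.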

259
Q

Question 259

A company AWS account consists of approximately 300 IAM users. Now there is a mandate that an access change is required for 100 IAM users to have unlimited privileges to S3. As a system administrator, how can you implement this effectively so that there is no need to apply the policy at the individual user level?

A. Create a new role and add each user to the IAM role

B. Use the IAM groups and add users, based upon their role, to different groups and apply the policy to group

C. Create a policy and apply it to multiple users using a JSON script

D. Create an S3 bucket policy with unlimited access which includes each user’s AWS account ID

A

Answer: B

An IAM group is used to collectively manage users who need the same set of
permissions. By having groups, it becomes easier to manage permissions. So if you
change the permissions on the group scale, it will affect all the users in that group.
Please find the steps below for the group creation.

Step 1) Go to IAM and click on the Groups section. Click on Create New Group.

Step 2) Provide a name for the Group

Step 3) Next you need to attach a policy. Since the question asks that this group
needs full access to S3, choose the AmazonS3FullAccess managed policy.

Step 4) Once the group is created, you can then add the 100 users to the group

For more information on users and groups, please visit the url
- http://docs.aws.amazon.com/IAM/latest/UserGuide/id.html

260
Q

Question 260

You are a consultant tasked with migrating an on-premise application architecture to AWS. During your design process you have to give consideration to current on-premise security and determine which security attributes you are responsible for on AWS. Which of the following does AWS provide for you as part of the shared responsibility model? Choose the 2 correct options

A. EC2 Instance security

B. Physical network infrastructure

C. User access to the AWS environment via IAM.

D. Virtualization infrastructure

A

Answer: B, D

Under the shared responsibility model, users are required to control EC2 instance
security via security groups and network access control lists, as well as user access to
the AWS environment via IAM. AWS, in turn, takes care of the physical network
infrastructure and the components that provide virtualization.

For more information on aws shared responsibility model, please visit the link - https://aws.amazon.com/blogs/security/tag/shared-responsibility-model/

261
Q

Question 261

There is a requirement to host an application in aws that requires access to a NoSQL database. But there are no human resources available who can take care of the database infrastructure. Which Amazon service provides a fully-managed and highly available NoSQL service? Choose the correct option

A. DynamoDB

B. ElasticMap Reduce

C. Amazon RDS

D. SimpleDB

A

Answer: A

DynamoDB is an AWS service that provides a NoSQL database option to users.
DynamoDB is a fully managed solution, so there is no requirement to manage the
environment, and the question clearly states there are no resources in place to do so.
DynamoDB lets you offload the administrative burdens of operating and scaling a
distributed database, so that you don’t have to worry about hardware provisioning,
setup and configuration, replication, software patching, or cluster scaling.
ElasticMapReduce is not a NoSQL solution. SimpleDB is a simplified database offering
and is not the preferred choice for a highly available NoSQL service.

For more information on DynamoDB, please visit the link - http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html

262
Q

Question 262

As a AWS Solution architect, you have been tasked to put the organization data on the cloud. But there is a concern from a security perspective on what can be put on the cloud. What are the best security options from the ones listed below which can be used from a security perspective. Please choose the 3 correct answers from the below options.

A. Enable EBS Encryption

B. Enable S3 Encryption

C. Encrypt the file system on an EBS volume using Linux tools

D. In AWS , you don’t need to worry as it encrypts all data

A

Answer: A, B, C

Encryption in AWS needs to be done by the users and can be done at different levels. For EBS, we can enable encryption at the volume level when the volume is created. On S3, for any object you can enable server-side encryption from the object's settings. And finally, one can use Linux-based tools to encrypt the file system on an EBS volume if the volume itself is not encrypted.

263
Q

Question 263

AWS provides a storage option known as Amazon Glacier.

What is this AWS service designed for. Please specify 2 correct options.

A. Cached session data

B. Infrequently accessed data

C. Data archives

D. Active database storage

A

Answer: B, C

Amazon Glacier is an extremely low-cost storage service that provides secure,
durable, and flexible storage for data backup and archival. So Amazon Glacier is used
for infrequently accessed data and data archives. For cached session data, the service
provided by AWS is ElastiCache, so Amazon Glacier is the wrong option there. Active
database storage is done via EBS volumes, so this option is also incorrect.

For more information on Amazon Glacier, please visit the link -
https://aws.amazon.com/glacier/faqs/

264
Q

Question 264

There is a requirement for a user to modify the configuration of one of your Elastic Load Balancers (ELB). This access is just required one time only. Which of the following choices would be the best way to allow this access?

A. Open up whichever port ELB uses in a security group and give the user access to that security group via a policy

B. Create an IAM Role and attach a policy allowing modification access to the ELB

C. Create a new IAM user who only has access to the ELB resources and delete that user when the work is completed.

D. Give them temporary access to the root account for 12 hours only and change the password once the activity is completed

A

Answer: B

The best practice for IAM is to create roles which have specific access to an AWS
service and then give the user permission to the AWS service via the role.

To get the role in place, follow the below steps

Step 1) Create a role which has the required ELB access

Step 2) Provide permissions to the underlying EC2 instances in the
Elastic Load Balancer

For the best practices on IAM policies, please visit the links:

http://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html
http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user.html

265
Q

Question 265

You are an AWS Solution Architect and architecting an application environment on AWS. Which services or service features might you enable to take advantage of monitoring to ensure auditing the environment for compliance is easy and follows the strict security compliance requirements?

Choose the correct option

A. CloudTrail for security logs

B. S3 logging

C. Encrypted data storage

D. Multi Factor Authentication

A

Answer: A

AWS CloudTrail is the de facto service provided by AWS for monitoring all API calls
to AWS and is used for logging and monitoring for compliance purposes. Amazon
CloudTrail records every call made to AWS and creates a log which can then be
used for further analysis.

For more information on Amazon CloudTrail, please visit the link:
https://aws.amazon.com/cloudtrail/

266
Q

Question 266

An application has been migrated from on-premise to AWS in your company and you will now be responsible for the ongoing maintenance of packages. Which of the below services allows access to the underlying infrastructure? Choose the 2 correct options

A. Elastic Beanstalk

B. EC2

C. DynamoDB

D. RDS

A

Answer: A, B

EC2 and Elastic Beanstalk are AWS services that allow the developer access to the
underlying infrastructure. When you create an Elastic Beanstalk environment, you will
have access to the underlying EC2 instance and its operating system. DynamoDB and
RDS are managed services whose infrastructure is handled by AWS.

267
Q

Question 267

To protect S3 data from both accidental deletion and accidental overwriting, you
should

A. Enable Multi-Factor Authentication (MFA) protected access

B. Disable S3 delete using an IAM bucket policy

C. Access S3 data using only signed URLs

D. Enable S3 versioning on the bucket

A

Answer: D

To protect objects in S3 from both accidental deletion and accidental overwriting,
the methodology adopted by AWS is to enable versioning on the bucket. Versioning
allows you to store every version of an object, so that if a version is deleted by mistake,
you can recover the other versions, because the entire object is not deleted. Enabling
Multi-Factor Authentication (MFA) protected access on S3 only adds an additional
security layer so that users are properly authenticated before having access to the
bucket, which is not what the question is asking. To enable versioning on S3, go to the
bucket and enable it in the bucket's properties.
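The protective behavior of versioning can be sketched with a toy in-memory model (the class and its delete-marker convention are illustrative, not the S3 API):

```python
class VersionedBucket:
    """Toy model of S3 versioning: overwrites and deletes append new
    versions (or a delete marker) instead of destroying earlier data."""

    def __init__(self):
        self.versions = {}  # key -> list of versions, newest last

    def put(self, key, body):
        self.versions.setdefault(key, []).append(body)

    def delete(self, key):
        self.versions.setdefault(key, []).append(None)  # delete marker

    def latest(self, key):
        history = self.versions.get(key, [])
        return history[-1] if history else None

    def recover(self, key):
        """Remove the newest version/delete marker, restoring the previous one."""
        self.versions[key].pop()
        return self.latest(key)

b = VersionedBucket()
b.put("report.csv", "v1")
b.put("report.csv", "v2")       # accidental overwrite: v1 is still retained
b.delete("report.csv")          # accidental delete: only a marker is added
print(b.recover("report.csv"))  # 'v2' -- the delete is undone
```

Both failure modes in the question (overwrite and delete) are recoverable precisely because nothing is ever removed from the version history.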

268
Q

Question 268

By default is data in S3 encrypted?

A. Yes, S3 always encrypts data for security purposes.

B. Yes, but only in government cloud data centers

C. No, but it can be when the right APIs are called for SSE

D. No, it must be encrypted before upload of any data to S3.

A

Answer: C

Please note that no, by default, encryption is not enabled, so options A and B are
incorrect. Also note that it is not necessary to encrypt data before every upload. For
any object you can enable server-side encryption (SSE) by calling the right APIs or
from the object's settings in S3.

For more information on Encryption for S3 , please refer to the link
- http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingEncryption.html

269
Q

Question 269

Your AWS environment contains several reserved EC2 instances dedicated to a project that has just been cancelled. You need to stop incurring charges for the reserved instances immediately. What steps would you take to avoid taking the hit on the charge for these reserved instances? Choose 2 correct options

A. Stop the instances as soon as possible.

B. Contact AWS and explain the situation to try and recover the costs.

C. Sell the reserved instances on the AWS Reserved Instance Marketplace

D. Terminate the instances as soon as possible.

A

Answer: C, D

Reserved Instances provide a significant discount (up to 75%) compared to On-
Demand instance pricing. There is a fixed quota of reserved capacity given to the
account, and you have the flexibility to change families, OS types, and tenancies while
benefitting from Reserved Instance pricing. Reserved Instances are bought upfront
for a specified duration. Unlike On-Demand instances, there is no cost difference if you
stop the instances, so option A is incorrect. Since you have already bought the reserved
instances, you cannot ask AWS to recover the costs. The only 2 options available are to
terminate the instances immediately and sell the reservations on the AWS Reserved
Instance Marketplace for a specified price. Note that all Reserved Instances in the
Marketplace are grouped according to the duration of the term remaining and the
hourly price; terminating the instances immediately helps preserve the value of the
remaining term. Reserved Instances can be purchased from the Reserved Instances
section of the EC2 dashboard, but buying them is an upfront commitment.

For more information on reserved instances please follow the link -
https://aws.amazon.com/ec2/pricing/reserved-instances/

270
Q

Question 270

A company has been asked to comply with the HIPAA laws, and they have been told that all data being backed up or stored on Amazon S3 needs to be encrypted at rest. What is the best method of encryption for your data? Please choose 2 options.

A. Encrypt the data locally using your own encryption keys, then copy the data to
Amazon S3 over HTTPS endpoints

B. Store the data on EBS volumes with encryption enabled instead of using
Amazon S3

C. Store the data in encrypted EBS snapshots

D. Enable SSE on an S3 bucket to make use of AES-256 encryption.

A

Answer: A, D

The question asks for encryption at rest for S3, so any option related to EBS
encryption is incorrect. For any object you can enable server-side encryption from
the object's properties in S3. For client-side encryption, you can encrypt the object
in your application before sending it to S3.

For the entire detailed description on Encryption strategies, please visit the link -
http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingEncryption.html

271
Q

Question 271

Which of the following is true of an SQS message? Choose the correct option

A. SQS messages are guaranteed to be delivered at least once

B. SQS messages must be less than 32 KB in size

C. SQS messages must be in JSON format

D. SQS messages can live in the queue up to thirty days

A

Answer: A

If you look at the SQS FAQ, it is clearly mentioned that SQS messages are
guaranteed to be delivered at least once. The message size for SQS can be up to
256 KB. Messages can be in XML, JSON, or unformatted text, and they can live in
the queue for a maximum of 14 days.

For more information on SQS messages, please follow the link
https://aws.amazon.com/sqs/faqs/

272
Q

Question 272

An EC2 instance has been running and data has been stored on the instance’s volumes. The instance was shutdown over the weekend to save costs. The next week, after starting the instance, you notice that all data is lost and is no longer available on the EC2 instance. What might be the cause of this?

A. The EC2 instance was using instance store volumes, which are ephemeral and
only live for the life of the instance

B. The EC2 instance was using EBS backed root volumes, which are ephemeral and
only live for the life of the instance

C. The EBS volume was not big enough to handle all of the processing data.

D. The instance has been compromised

A

Answer: A

Anything that is stored on an instance store volume is destroyed when the instance
is shut down. Instance store volumes are ephemeral, which means that they only survive
while the instance is active. EBS backed volumes are not ephemeral and persist even if
the instance is stopped and started, so Option B is wrong. Even if an EBS volume is not
big enough, the data would still be present when the instance is stopped and started,
so Option C is wrong. If the instance had been compromised, the instance would likely
not even start, so Option D is wrong.

For more information on instance store volumes, please visit
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html

273
Q

Question 273

Which of the following database services are provided by AWS? Choose the 3 correct answers.

A. Aurora

B. MariaDB

C. MySQL

D. DB2

A

Answer: A, B, C

DB2 is the only database service in the list not provided by AWS.

For the list of DB services, please visit the link -
https://aws.amazon.com/rds/

For more information on Aurora, please visit the link -
https://aws.amazon.com/rds/aurora/

For more information on MySQL, please visit the link -
https://aws.amazon.com/rds/mysql/

For more information on MariaDB , please visit the link -
https://aws.amazon.com/rds/mariadb/

274
Q

Question 274

There is a requirement to move a 10 TB data warehouse to the cloud. With the current bandwidth allocation it would take 2 months to transfer the data. Which service would allow you to quickly get the data into AWS? Choose the correct option.

A. Amazon Import/Export

B. Amazon Direct Connect

C. Amazon S3 MultiPart Upload

D. Amazon S3 Connector

A

Answer: A

AWS Import/Export is a service that accelerates transferring large amounts of data into and out of AWS using physical storage appliances, bypassing the Internet. Amazon S3 Multipart Upload still transfers the data over your existing bandwidth, so it would not help here; it is better to use Amazon Import/Export.

For more information on aws import/export, please visit the link -
https://aws.amazon.com/snowball/

Amazon Direct Connect is used as a connection between AWS and On-premise so this is the wrong option.

275
Q

Question 275

What is the difference between an availability zone and an edge location? Choose the correct option

A. An availability zone is a grouping of AWS resources in a specific region; an edge location is a specific resource within the AWS region

B. An availability zone is an isolated location within an AWS region, whereas an edge location will deliver cached content to the closest location to reduce latency

C. Edge locations are used as control stations for AWS resources

D. None of the above

A

Answer: B

In AWS, there are regions, with each region in a separate geographic area. Each region has multiple, isolated locations known as Availability Zones. An availability zone is used to host resources in a specific region.

For more information on Regions and availability zone, please visit the url
- http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html

An edge location is used to deliver content depending on the location of the user. So if the user is located in Australia, an edge location in Australia can be used to deliver content. If a user were in Australia and you delivered content from Asia, there would be a delay in the relay of information to the user. Each edge location synchronizes data so that integrity of data is maintained across all edge locations.

For more information on Edge locations, please visit the url -
https://aws.amazon.com/about-aws/global-infrastructure/

276
Q

Question 276

An order processing website is using EC2 instances to process messages from an SQS queue. A user reported an issue that their order was processed twice and hence charged twice. What action would you recommend to ensure this does not happen again? Choose the correct option

A. Insert code into the application to delete messages after processing

B. Increase the visibility timeout for the queue

C. Modify the order process to use SWF

D. Use long polling rather than short polling

A

Answer: C

This is a tricky question. Note that Options A, B and D can be used to decrease the likelihood of duplicate messages, but cannot remove the chance entirely.

For option A, even if the code is inserted, which should be the case already, the same issue can occur again if the EC2 instance goes down: the message will not be deleted, and when it reappears in the SQS queue it will be processed again.

For option B, even if you increase the visibility timeout, if the process has received the message but not deleted it, another consumer will process the message again after the visibility timeout expires, and the same issue will happen. If you use long polling instead of short polling, you still have the same problem as with Options A and B.

For more information on SQS, please visit the link -
https://aws.amazon.com/sqs/faqs/
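The delete-after-processing problem in option A can be illustrated with a minimal in-memory sketch. This is not real SQS; the `FakeQueue` class is ours and only mimics the fact that receiving a message does not remove it from the queue.

```python
# Illustrative sketch: a toy queue where receive() does NOT remove the
# message (as in SQS, where a message stays until explicitly deleted).
class FakeQueue:
    def __init__(self):
        self.messages = []

    def send(self, body):
        self.messages.append(body)

    def receive(self):
        # SQS merely hides the message for the visibility timeout;
        # here we simply return it while leaving it in the queue.
        return self.messages[0] if self.messages else None

    def delete(self, body):
        self.messages.remove(body)

q = FakeQueue()
q.send("order-42")

processed = []
for _ in range(2):  # two polling cycles, e.g. after a visibility timeout
    msg = q.receive()
    if msg is not None:
        processed.append(msg)  # bug: message is processed but never deleted

duplicate_processing = processed.count("order-42") > 1
```

Calling `q.delete(msg)` after a successful process would prevent the duplicate in the happy path, but, as the explanation notes, a crash between processing and deletion can still cause a repeat delivery.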

277
Q

Question 277

There is a connectivity issue reported on a client’s Amazon Virtual Private Cloud and EC2 instances. After logging into the environment, you notice that the client is using two instances that all belong to a subnet with an attached internet gateway. The instances also belong to the same security group. However, one of the instances is not able to send or receive traffic like the other one. You see that there is no OS level issue and the instance is working as it should. What could be the possible issue?

Choose the correct option.

A. A proper route table configuration that sends traffic from the instance to the Internet through the internet gateway

B. The EC2 instance is running in an availability zone that does not support Internet gateways

C. The EC2 instance is not a member of the same Auto Scaling group/policy

D. The EC2 instance does not have a public IP address associated with it

A

Answer: D

For an instance to be reachable from the internet, you need to ensure:

1) The Internet gateway is in place - this has been confirmed in the question.
2) There is a route entry for the internet gateway - this should be in place, because out of the 2 instances, one is working.
3) The EC2 instance has a public or Elastic IP - from the question, there is no mention of one being allocated to the problem instance. Hence option D is the right answer.

For more information on VPC public subnets, please visit the url -
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario1.html

278
Q

Question 278

Which of the following best describes what “bastion hosts” are? Choose the correct option.

A. Bastion hosts are instances that sit within a private subnet and are typically accessed using SSH or RDP. Once remote connectivity has been established with the bastion host, it then acts as a ‘jump’ server, allowing you to use SSH or RDP to log into other instances (within public subnets) deeper within your network.

B. Bastion hosts are instances that sit within your public subnet and are typically accessed using SSH or RDP. Once remote connectivity has been established with the bastion host, it then acts as a ‘jump’ server, allowing you to use HTTPS to log into other instances (within private subnets) deeper within your network.

C. Bastion hosts are instances that sit within your public subnet and are typically accessed using SSH or RDP. Once remote connectivity has been established with a bastion host, it then acts as a ‘jump’ server, allowing you to use SSH or RDP to log into other instances (within private subnets) deeper within your network.

D. Bastion hosts are instances that sit within your private subnet and are typically accessed using SSH or RDP. Once remote connectivity has been established with the bastion host, it then acts as a ‘jump’ server, allowing you to use HTTPS to log into other instances (within public subnets) deeper within your network.

A

Answer: C

A bastion host is a special purpose computer on a network specifically designed and configured to withstand attacks. The computer generally hosts a single application, for example a proxy server, and all other services are removed or limited to reduce the threat to the computer. In AWS, a bastion host is kept on a public subnet. Users log on to the bastion host via SSH or RDP and then use that session to manage other hosts in the private subnets. This is a security practice adopted by many organizations to secure the assets in their private subnets.

279
Q

Question 279

A web application is hosted on EC2 instances and using SQS. Requests are saved as messages in the SQS queue. The SQS queue is configured with the maximum message retention period. After 10 days you notice that the application was in a hung state and 2000 messages are still lying in the queue unprocessed. You are going to resolve the issue but you need to send a communication to the users on the issue. What information will you provide? Choose the correct option.

A. An apology for the delay in processing requests and telling them that unfortunately they have to resubmit all the requests.

B. An apology for the delay in processing requests, assurance that the application will be operational shortly, and a note that requests greater than five days old will need to be resubmitted.

C. An apology for the delay in processing requests, assurance that the application will be operational shortly, and a note that all received requests will be processed at that time.

D. An apology for the delay in processing requests and telling them that unfortunately they have to resubmit all the requests since the queue would not be able to process the 2000 messages together.

A

Answer: C

Since the question states that SQS is configured with the maximum retention period, messages can last for 14 days. So option A is invalid, since the messages will still be in the queue even after 10 days. Option B is invalid for the same reason. Option D is invalid because a queue can easily hold 2,000 messages (a standard queue supports up to 120,000 in-flight messages).

For more information on SQS, please visit the link:
https://aws.amazon.com/sqs/

280
Q

Question 280

A Company provides an online service that utilizes SQS to decouple system components for scalability. The SQS consumer’s EC2 instances poll the queue as often as possible to keep end-to-end throughput as high as possible. However, it is noticed that polling in tight loops is burning CPU cycles and increasing costs with empty responses. What can be done to reduce the number of empty responses? Choose the correct option.

A. Scale the component making the request using Auto Scaling based off the number of messages in the queue

B. Enable long polling by setting the ReceiveMessageWaitTimeSeconds to a number > 0

C. Enable short polling on the SQS queue by setting the ReceiveMessageWaitTimeSeconds to a number > 0

D. Enable short polling on the SQS message by setting the ReceiveMessageWaitTimeSeconds to a number = 0

A

Answer: B

By default an SQS queue is configured with short polling, which means each receive call returns immediately, even when the queue is empty. With long polling, SQS waits until a message is available (or the wait time expires) before responding, so far fewer empty responses are returned. To reduce the number of empty polling cycles, enable long polling by setting the ReceiveMessageWaitTimeSeconds attribute of the queue to a value greater than 0. You can do this by changing the queue attributes.

For more information on polling, please visit the link - http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-long-polling.html
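The setting described above can be sketched as the parameters a consumer would pass to ReceiveMessage. This is a hedged example: the queue URL is hypothetical, and with boto3 the dict would go to `sqs.receive_message(**params)`.

```python
# Illustrative sketch: building ReceiveMessage parameters that enable
# long polling on an SQS queue.
def long_poll_params(queue_url, wait_seconds=20):
    # SQS long polling allows a wait time of 1-20 seconds.
    if not 0 < wait_seconds <= 20:
        raise ValueError("long polling requires a wait time of 1-20 seconds")
    return {
        "QueueUrl": queue_url,
        # A value greater than 0 switches the call from short polling
        # (immediate, possibly empty response) to long polling.
        "WaitTimeSeconds": wait_seconds,
    }

params = long_poll_params(
    "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"
)
```

Setting the same value as a queue attribute (ReceiveMessageWaitTimeSeconds) makes long polling the default for every consumer of that queue.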

281
Q

Question 283

As part of your application architecture requirements, the company you are working for has requested the ability to run analytics against all combined log files from the Elastic Load Balancer. Which services are used together to collect logs and process log file analysis in an AWS environment? Choose the correct option.

A. Amazon DynamoDB to store the logs and EC2 for running custom log analysis scripts

B. Amazon EC2 for storing and processing the log files

C. Amazon S3 for storing the ELB log files and EC2 for processing the log files in analysis

D. Amazon S3 for storing ELB log files and Amazon EMR for processing the log files in analysis

A

Answer: D

This question is not that complicated, even if you don't understand all the options. By default when you see "collection of logs and processing of logs", think of AWS EMR. Amazon EMR provides a managed Hadoop framework that makes it easy, fast, and cost-effective to process vast amounts of data across dynamically scalable Amazon EC2 instances. You can also run other popular distributed frameworks such as Apache Spark, HBase, Presto, and Flink in Amazon EMR, and interact with data in other AWS data stores such as Amazon S3 and Amazon DynamoDB. Amazon EMR securely and reliably handles a broad set of big data use cases, including log analysis, web indexing, data transformations (ETL), machine learning, financial analysis, scientific simulation, and bioinformatics.

For more information on EMR, please visit the link -
https://aws.amazon.com/emr/

282
Q

Question 284

You have been told that you need to set up a bastion host by your manager in the cheapest, most secure way, and that you should be the only person that can access it via SSH. Which of the following setups would satisfy your manager's request? Choose the correct option

A. A large EC2 instance and a security group which only allows access on port 22

B. A large EC2 instance and a security group which only allows access on port 22 via your IP address

C. A small EC2 instance and a security group which only allows access on port 22

D. A small EC2 instance and a security group which only allows access on port 22 via your IP address

A

Answer: D

A bastion host should always be a small EC2 instance, because there is no requirement for applications to run on it. Also, you should only open port 22 from your IP address and no other IP address. In AWS, a bastion host is kept on a public subnet. Users log on to the bastion host via SSH or RDP and then use that session to manage other hosts in the private subnets. This is a security practice adopted by many organizations to secure the assets in their private subnets.

283
Q

Question 285

You have a web application hosted in AWS on EC2 instances. The application
provides newspaper content to users around the world. Of late, the load on the web
application has increased and is subsequently increasing the response time for the
application for end users. Which of the below services can be used to alleviate this
problem? Choose 2 answers from the options given below

A. Use Cloudfront and use the web application as the origin

B. Use AWS Storage gateways to distribute the content across multiple storage
devices for better read throughput.

C. Use ElastiCache behind the web application.

D. Consider using SQS to process some of the user requests

A

Answer: A, C

The AWS Documentation provides the following information on CloudFront and ElastiCache. Amazon CloudFront is a web service that speeds up distribution of your static and dynamic web content, such as .html, .css, .php, and image files, to your users. CloudFront delivers your content through a worldwide network of data centers called edge locations. When a user requests content that you're serving with CloudFront, the user is routed to the edge location that provides the lowest latency (time delay), so that content is delivered with the best possible performance.

For more information on AWS CloudFront, please visit the below URL: http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html

Amazon ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory data store or cache in the cloud. The service improves the performance of web applications by allowing you to retrieve information from fast, managed, in memory data stores, instead of relying entirely on slower disk-based databases.

For more information on AWS Elastic Cache, please visit the below URL:
https://aws.amazon.com/elasticache/

284
Q

Question 286

Your supervisor asks you to create a highly available website which serves static content from EC2 instances. Which of the following is not a requirement to accomplish this goal? Choose the correct option

A. Multiple Availability Zones

B. Multiple subnets

C. An SQS queue

D. An auto scaling group to recover from EC2 instance failures

A

Answer: C

For highly available websites, yes, Multiple Availability Zones and Multiple subnets are required. A simple highly available architecture consists of an ELB in front of EC2 instances spread across 2 AZs, with each AZ using its own subnet. Auto Scaling is used to add additional EC2 instances for fault tolerance. SQS is not required, because SQS is only used to decouple components in an architecture; it is not necessary for a highly available website.

285
Q

Question 287

What is the maximum object size allowed for Multi-part file upload for S3?

A. 10 TB

B. 5 TB

C. 1TB

D. 5 GB

A

Answer: B

The S3 documentation lists the restrictions for Multi-part file upload for S3;
it clearly shows that the right answer is 5 TB.

For more information on Multi-part file upload for S3 , please visit the url -
http://docs.aws.amazon.com/AmazonS3/latest/dev/qfacts.html
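The multipart limits can be sanity-checked with a little arithmetic. This sketch assumes the documented limits (at most 10,000 parts, minimum part size 5 MiB, maximum object size 5 TiB) and computes the smallest part size that fits a given object into the part budget; the helper name is ours.

```python
import math

# Documented S3 multipart upload limits (assumed here):
MAX_OBJECT = 5 * 1024**4    # 5 TiB maximum object size
MAX_PARTS = 10_000          # at most 10,000 parts per upload
MIN_PART = 5 * 1024**2      # 5 MiB minimum part size (except the last part)

def part_size_for(object_size):
    """Smallest allowed part size that keeps the upload within 10,000 parts."""
    if object_size > MAX_OBJECT:
        raise ValueError("object exceeds the 5 TiB S3 limit")
    return max(MIN_PART, math.ceil(object_size / MAX_PARTS))

# A full 5 TiB object needs parts of at least ~550 MB each.
size_5tib = part_size_for(MAX_OBJECT)
```

Small objects simply use the 5 MiB minimum; only objects above 10,000 × 5 MiB (about 48.8 GiB) force larger parts.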

286
Q

Question 288

Which of the following statements about S3 are true. Please choose 2 options

A. The total volume of data and number of objects you can store are unlimited

B. Individual Amazon S3 objects can range in size from a minimum of 0 bytes to a maximum of 1 terabyte

C. You can use Multi-Object Delete to delete large numbers of objects from Amazon S3

D. You can store only objects of a particular format in S3

A

Answer: A, C

As per the S3 FAQs, Option B is incorrect because the maximum size of an object is 5 TB.

Option D is incorrect because you can virtually store objects of any type

For more information on S3, please visit the URL -
https://aws.amazon.com/s3/faqs/

287
Q

Question 289

What is a document that provides a formal statement of one or more
permissions?

A. Policy

B. Permission

C. Role

D. Resource

A

Answer: A

A policy is a JSON document that specifies what a user can do on AWS. This document consists of: Actions - what actions you will allow; each AWS service has its own set of actions. Resources - which resources you allow the action on. Effect - what the effect will be when the user requests access, either allow or deny. For more information on policies, please visit the url - http://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html
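A minimal policy document with these elements can be sketched as follows; the action (`ec2:DescribeInstances`) and wildcard resource are illustrative choices, not the only possibility.

```python
import json

# Illustrative IAM policy: allow everyone the policy is attached to
# to call ec2:DescribeInstances on any resource.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",                   # allow or deny
            "Action": "ec2:DescribeInstances",   # the permitted API action
            "Resource": "*",                     # which resources it applies to
        }
    ],
}

policy_json = json.dumps(policy, indent=2)
```

The three elements the explanation names (Effect, Action, Resource) map directly onto the statement fields.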

288
Q

Question 290

Which of the following is not required to ensure that you can SSH into a Linux instance hosted in a VPC from the internet?

A. Private IP Address

B. Public IP Address

C. Internet gateway attached to the VPC

D. Elastic IP

A

Answer: A

The AWS Documentation provides the following information: A private IPv4 address is an IP address that's not reachable over the Internet. You can use private IPv4 addresses for communication between instances in the same network (EC2-Classic or a VPC). Hence a private IP address is not what enables SSH access from the internet.

For more information on AWS IP Addressing, please visit the below URL:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-instance-addressing.html

289
Q

Question 291

What are the two permission types used by AWS?

A. Resource-based and Product-based

B. Product-based and Service-based

C. Service-based

D. User-based and Resource-based

A

Answer: D

AWS uses two permission types: user-based permissions, which are attached to IAM users, groups, and roles, and resource-based permissions, which are attached to resources such as S3 buckets. Both are defined via policies, which consist of the following elements. Actions: what actions you will allow; each AWS service has its own set of actions. Resources: which resources you allow the action on. Effect: what the effect will be when the user requests access, either allow or deny.

For more information on policies, please visit the url:
http://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html

290
Q

Question 292

A company has resources hosted both on their on-premise network and in AWS. They want their IT administrators to access resources in both environments using their on-premise credentials, which are stored in Active Directory. Which of the following can be used to fulfill this requirement?

A. Use Web Identity Federation

B. Use SAML Federation

C. Use IAM users

D. Use AWS VPC

A

Answer: B

The AWS Documentation provides the following information on SAML Federation: AWS supports identity federation with SAML 2.0 (Security Assertion Markup Language 2.0), an open standard that many identity providers (IdPs) use. This feature enables federated single sign-on (SSO), so users can log into the AWS Management Console or call the AWS APIs without you having to create an IAM user for everyone in your organization. By using SAML, you can simplify the process of configuring federation with AWS, because you can use the IdP's service instead of writing custom identity proxy code.

For more information on SAML Federation, please visit the below URL:
http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_saml.html

291
Q

Question 293

Amazon RDS DB snapshots and automated backups are stored in

A. Amazon S3

B. Amazon EBS Volume

C. Amazon RDS

D. Amazon EMR

A

Answer: A

Automated backups automatically back up your DB instance during a specific, user-definable backup window. Amazon RDS keeps these backups for a limited period
that you can specify. You can later recover your database to any point in time during
this backup retention period. All of these backups are stored in S3 by default.
Option B is not correct, because EBS volumes are used to store data for EC2 instances. Option C is
not correct because RDS itself is not used to store snapshots. Option D is not correct because EMR
is used for processing large data sets, not for storing backups.

For more information on DB instance backup’s, go to the url
- http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.BackingUpAndRestoringAmazonRDSInstances.html

292
Q

Question 294

Using Amazon CloudWatch’s Free Tier, what is the frequency of metric updates which you receive?

A. 5 minutes

B. 500 milliseconds.

C. 30 seconds

D. 1 minute

A

Answer: A

The AWS free tier gives you access to the basic metrics for CloudWatch, and by default the basic package gives 5-minute aggregation of CloudWatch metrics. If you need a shorter interval, then you need to pay extra.

For more information on the free tier, please feel free to visit the url -
https://aws.amazon.com/free/

293
Q

Question 295

Which option from the below lets you categorize your EC2 resources in different ways, for example, by purpose, owner, or environment?

A. wildcards

B. pointers

C. Tags

D. special filters

A

Answer: C

Please note that this is an important concept if you are pursuing further certifications in AWS. Tags in AWS are used to segregate resources, and can also be used for cost reporting and billing purposes. In the EC2 dashboard, there is a separate section for Tags.

294
Q

Question 296

What acts as a firewall that controls the traffic allowed to reach one or more instances?

A. Security group

B. ACL

C. IAM

D. Private IP Addresses

A

Answer: A

A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. When you launch an instance in a VPC, you can assign the instance to up to five security groups. Security groups act at the instance level. For example, an inbound rule allowing TCP port 22 means users can only SSH into EC2 instances that are attached to that security group.

For more information on Security Groups, please visit the url -
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_SecurityGroups.html
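The inbound SSH rule described above can be sketched as request parameters. This is a hedged example: the group ID and CIDR are hypothetical, and with boto3 the dict would be passed to `ec2.authorize_security_group_ingress(**rule)`.

```python
# Illustrative sketch: an inbound rule allowing SSH (TCP port 22)
# from a single admin IP only.
rule = {
    "GroupId": "sg-0123456789abcdef0",   # hypothetical security group ID
    "IpPermissions": [
        {
            "IpProtocol": "tcp",
            "FromPort": 22,
            "ToPort": 22,
            # Restrict to one address; 0.0.0.0/0 would open SSH to the world.
            "IpRanges": [{"CidrIp": "203.0.113.10/32"}],
        }
    ],
}
```

Because security groups are allow-lists, any traffic not matched by a rule like this is implicitly denied.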

295
Q

Question 297

You design an application that checks for new items in an S3 bucket once per hour. If new items exist, a message is added to an SQS queue. You have several EC2 instances which retrieve messages from the SQS queue, parse the file, and send you an email containing the relevant information from the file. You upload one test file to the bucket, wait a couple hours and find that you have hundreds of emails from the application. What is the most likely cause for this volume of email? Choose the correct answer from the options given below

A. This is expected behavior when using short polling because SQS does not guarantee that there will not be duplicate messages processed

B. You can only have one EC2 instance polling the SQS queue at a time

C. This is expected behavior when using long polling because SQS does not guarantee that there will not be duplicate messages processed

D. Your application does not issue a delete command to the SQS queue after processing the message

A

Answer: D

You need to ensure that after a message is processed in SQS, the message is deleted.

For more information on SQS, please visit the below url
https://aws.amazon.com/sqs/faqs/

296
Q

Question 298

An application requires a minimum of 4 instances to run to ensure that it can cater to its users. You want to ensure fault tolerance and high availability. Which of the following is the best option?

A. Deploy 2 instances in each of 3 Availability Zones, add a load balancer and an Auto Scaling group to launch more instances if required.

B. Deploy 2 instances in each of 2 Availability Zones, add a load balancer and an Auto Scaling group to launch more instances if required.

C. Deploy 4 instances in one Availability Zone, add a load balancer and an Auto Scaling group to launch more instances if required.

D. Deploy 1 instance in each of 3 Availability Zones, add a load balancer and an Auto Scaling group to launch more instances if required.

A

Answer: A

Since a minimum of 4 instances is required, deploying 2 instances in each of 3 AZs gives 6 running instances; even if one AZ goes down, you still have at least 4 instances running. The requirement is to pick the best option: since the question asks for a fault tolerant, highly available system, the 2 extra instances in a third AZ ensure the minimum is maintained even through an AZ failure.

For more information on fault tolerance and high availability, please visit the below URL: https://media.amazonwebservices.com/architecturecenter/AWS_ac_ra_ftha_04.pdf
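The reasoning above is just arithmetic: to keep a minimum through the loss of any one AZ, the remaining AZs must still hold that minimum. This sketch (helper name ours) makes the calculation explicit.

```python
import math

def instances_per_az(minimum, azs):
    """Instances per AZ so that losing one AZ still leaves `minimum` running."""
    if azs < 2:
        raise ValueError("need at least 2 AZs to survive an AZ failure")
    # The surviving (azs - 1) zones must together hold `minimum` instances.
    return math.ceil(minimum / (azs - 1))

three_az = instances_per_az(4, 3)  # 2 per AZ -> 6 total, 4 survive an AZ loss
two_az = instances_per_az(4, 2)    # 4 per AZ -> 8 total would be needed
```

Three AZs at 2 instances each (Option A) is therefore cheaper than the 8 instances two AZs would require for the same guarantee.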

297
Q

Question 299

Which of the following is true when it comes to hosting a database in a VPC using the AWS RDS service?

A. The VPC must have at least one subnet

B. The VPC must have at least one subnet in one Availability Zone

C. Your VPC must have at least one subnet in at least two of the Availability Zones

D. None of the above

A

Answer: C

One of the important aspects of hosting databases in a VPC is the following: your VPC must have at least one subnet in at least two of the Availability Zones in the region where you want to deploy your DB instance. A subnet is a segment of a VPC's IP address range that you can specify and that lets you group instances based on your security and operational needs. A few important points about VPCs: when you create a VPC, it spans all the Availability Zones in the region. After creating a VPC, you can add one or more subnets in each Availability Zone.

For more information on working with RDS instances, please refer to the below link:
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.WorkingWithRDSInstanceinaVPC.html

298
Q

Question 300

There are currently multiple applications hosted in a VPC. During monitoring it has been noticed that multiple port scans are coming in from a specific IP address block. The internal security team has requested that all offending IP addresses be denied for the next 24 hours. Which of the following is the best method to quickly and temporarily deny access from the specified IP addresses?

A. Create an AD policy to modify the Windows Firewall settings on all hosts in the VPC to deny access from the IP Address block.

B. Modify the Network ACLs associated with all public subnets in the VPC to deny access from the IP Address block.

C. Add a rule to all of the VPC Security Groups to deny access from the IP Address block.

D. Modify the Windows Firewall settings on all AMI’s that your organization uses in that VPC to deny access from the IP address block.

A

Answer: B

A network access control list (ACL) is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets. Options A and D are wrong because they are tedious and only work for Windows systems; you need something that will work for Linux systems as well. Option C is inadequate because security groups apply per instance, whereas you need rules that apply to the whole subnet; otherwise you would have to repeat the change for every server.

For more information on Network ACLs, please visit the URL
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_ACLs.html
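As an illustrative sketch (not part of the original answer), the key property that makes a NACL deny rule work here is evaluation order: rules are checked in ascending rule-number order, the first match decides, and an implicit final catch-all denies anything unmatched. The rule numbers and CIDR block below are hypothetical.

```python
import ipaddress

def evaluate_nacl(rules, source_ip):
    """Return 'allow' or 'deny' for source_ip. NACL rules are evaluated
    in ascending rule-number order; the first rule whose CIDR matches
    decides, and the implicit final '*' rule denies anything unmatched."""
    ip = ipaddress.ip_address(source_ip)
    for number, cidr, action in sorted(rules):
        if ip in ipaddress.ip_network(cidr):
            return action
    return "deny"  # the implicit catch-all '*' rule

# Hypothetical rule set: give the deny rule a lower number than the
# broad allow so it is evaluated first.
rules = [
    (100, "0.0.0.0/0", "allow"),     # normal traffic
    (90, "203.0.113.0/24", "deny"),  # offending scanner block
]
print(evaluate_nacl(rules, "203.0.113.25"))  # deny
print(evaluate_nacl(rules, "198.51.100.7"))  # allow
```

This is why a single temporary deny entry in the subnet's NACL is enough, while per-instance security groups (which only allow, never deny) are not.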

299
Q

Question 301

Which service allows one to issue temporary credentials in AWS? Choose one answer from the options below.

A. AWS SQS

B. AWS STS

C. AWS SES

D. None of the above. You need to use a third party software to achieve this.

A

Answer: B

You can use the AWS Security Token Service (AWS STS) to create and provide trusted users with temporary security credentials that can control access to your AWS resources. Temporary security credentials work almost identically to the long-term access key credentials that your IAM users can use. Option A is wrong because SQS is the queuing service provided by AWS. Option C is wrong because SES is the emailing service provided by AWS. Option D is wrong because such a service does exist in AWS.

For more information on STS, please visit the below URL
http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html
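To make the difference from long-term keys concrete, here is a hypothetical stand-in (not the real STS API) for what STS hands back: credentials that carry an expiration and simply stop working once it passes.

```python
from datetime import datetime, timedelta, timezone

class TemporaryCredentials:
    """Illustrative stand-in for what STS returns: credentials
    that are only valid until an expiration timestamp."""
    def __init__(self, duration_seconds=3600):
        self.expiration = (datetime.now(timezone.utc)
                           + timedelta(seconds=duration_seconds))

    def is_valid(self, at=None):
        # Valid strictly before the expiration time.
        at = at or datetime.now(timezone.utc)
        return at < self.expiration

creds = TemporaryCredentials(duration_seconds=900)
print(creds.is_valid())  # True right after issuance
late = datetime.now(timezone.utc) + timedelta(seconds=1000)
print(creds.is_valid(at=late))  # False once the duration has elapsed
```

Real STS calls (for example, requesting session credentials with a 900-second duration) return an access key, secret key, session token, and exactly this kind of expiration.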

300
Q

Question 302

You have two Elastic Compute Cloud (EC2) instances inside a Virtual Private Cloud (VPC) in the same Availability Zone (AZ) but in different subnets. One instance is running a database and the other instance an application that will interface with the database. You want to confirm that they can talk to each other for your application to work properly. Which two things do we need to confirm in the VPC settings so that these EC2 instances can communicate inside the VPC? Choose 2 answers.

A. A network ACL that allows communication between the two subnets.

B. Both instances are the same instance class and using the same Key-pair.

C. That the default route is set to a NAT instance or internet Gateway (IGW) for them to communicate.

D. Security groups are set to allow the application host to talk to the database on the right port/ protocol.

A

Answer: A, D

When you design a web server and a database server, the security groups must be defined so that the web server can talk to the database server. When communicating between subnets, the network ACLs must also be defined to permit the traffic. Option B is wrong because the EC2 instances need not be of the same class or use the same key pair to communicate with each other. Option C is wrong because the NAT and Internet gateway are used for a subnet to communicate with the internet.

For more information on VPC and Subnets, please visit the below URL
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html

301
Q

Question 303

As a solution architect, you have been asked to design a cloud service based on AWS and choose to use RRS on S3 instead of S3 standard storage type. In such a case what type of trade-offs do you have to build your application around?

A. With RRS you have to copy data and extract data which can take up to 3 hours.

B. RRS only has 99.99% availability

C. With RRS, you don’t need to worry since AWS will take care of the durability of RRS.

D. RRS only has 99.99% durability and you have to design automation around replacing lost objects

A

Answer: D

RRS only has 99.99% durability, so there is a chance that data can be lost; you need to ensure you have the right steps in place to replace lost objects. Although RRS does have 99.99% availability (option B), all S3 storage classes offer the same availability, so that is not a trade-off specific to RRS.

302
Q

Question 304

You are running a web application on AWS consisting of the following components: an Elastic Load Balancer (ELB), an Auto Scaling group of EC2 instances running Linux/PHP/Apache, and a Relational Database Service (RDS) MySQL instance. Which security measure falls under AWS's responsibility?

A. Protect the EC2 instances against unsolicited access by enforcing the principle of least-privilege access

B. Protect against IP spoofing or packet sniffing

C. Assure all communication between EC2 instances and ELB is encrypted

D. Install latest security patches on ELB, RDS and EC2 instances

A

Answer: B

As per the shared responsibility model, users are responsible for controlling EC2 security via security groups and network ACLs, while AWS is responsible for protecting the underlying infrastructure, which includes protection against IP spoofing and packet sniffing.

For more information on the Shared Responsibility model, please refer the below URL: https://aws.amazon.com/compliance/shared-responsibility-model/

303
Q

Question 305

What would happen in an RDS (Relational Database Service) Multi-AZ deployment if the primary DB instance fails?

A. The IP of the primary DB instance is switched to the standby DB instance

B. The RDS (Relational Database Service) DB instance reboots

C. A new DB instance is created in the standby availability zone

D. The canonical name record (CNAME) is changed from primary to standby

A

Answer: D

Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database (DB) Instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. In case of an infrastructure failure (for example, instance hardware failure, storage failure, or network disruption), Amazon RDS performs an automatic failover to the standby, so that you can resume database operations as soon as the failover is complete. As per the AWS documentation, the CNAME is changed to point to the standby DB when the primary one fails.

For more information on Multi-AZ RDS, please visit the link —
https://aws.amazon.com/rds/details/multi-az/

304
Q

Question 306

An organization is planning to use AWS for their production roll out. The organization wants to implement automation for deployment such that it will automatically create a LAMP stack, download the latest PHP installable from S3 and setup the ELB. Which of the below mentioned AWS services meets the requirement for making an orderly deployment of the software?

A. AWS Elastic Beanstalk

B. AWS Cloudfront

C. AWS Cloudformation

D. AWS DevOps

A

Answer: A

Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services. We can simply upload code, and Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, and auto scaling to application health monitoring. Meanwhile, we retain full control over the AWS resources used in the application and can access the underlying resources at any time. Launch a LAMP stack with Elastic Beanstalk:
https://aws.amazon.com/getting-started/projects/launch-lamp-web-app/

This could also be done with AWS CloudFormation, but it requires more effort and is less turnkey:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/deploying.applications.html

305
Q

Question 307

In which of the following ways can you manage Lambda functions? Choose the 3 correct answers.

A. Console

B. CLI

C. SDK

D. EC2 Instances

A

Answer: A, B, C

This is given in the AWS documentation:

For more information on AWS Lambda, please visit the link:
https://aws.amazon.com/lambda/faqs/

306
Q

Question 308

What is the maximum execution time for a Lambda function?

A. 3 seconds

B. 300 seconds

C. 24 hours

D. No limit

A

Answer: B

This is given in the AWS documentation. For more information on AWS Lambda, please visit the link: https://aws.amazon.com/lambda/faqs/

307
Q

Question 309

If you want to point a domain name to an AWS VPC elastic load balancer in Route 53, how would you need to configure the record set? Choose the correct answer from the options below

A. Non-Alias with a type “A” record set

B. Alias with a type “AAAA” record set

C. Alias with a type “CNAME” record set

D. Alias with a type “A” record set

A

Answer: D

You need to configure an Alias record for the ELB, and it should be a type "A" record.

You can find details in the below AWS document:
http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-elb-load-balancer.html

For more information on Route 53, please visit the link:
http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-values-alias.html

308
Q

Question 310

Which of the following statements is false with regard to the AWS Simple Queue Service?

A. Standard queues provide at-least-once delivery, which means that each message is delivered at least once

B. Both FIFO queues and Standard queues preserve the order of messages

C. Amazon SQS can help you build a distributed application with decoupled components

D. FIFO queues provide exactly-once processing

A

Answer: B

Only FIFO queues can preserve the order of messages and not standard queues.

For more information on standard queues, please visit the below URL
https://aws.amazon.com/sqs/faqs/

309
Q

Question 311

A company wants to implement a hybrid architecture where it connects the VPCs in its account to its on-premises architecture. Which of the following can be used to create a secure private connection between the company's on-premises architecture and the VPCs hosted in AWS?

A. AWS Direct Connect + VPN

B. Route53

C. ClassicLink

D. AWS Direct Link

A

Answer: A

AWS Direct Connect provides private connectivity between AWS and your data center, office, or co-location environment. It makes it easy to establish a dedicated connection from an on-premises network to Amazon VPC. However, if you want a secure private connection between your on-premises architecture and VPCs hosted in AWS, you can combine an AWS Direct Connect dedicated network connection with the Amazon VPC hardware VPN. AWS Direct Connect along with VPN can provide an IPsec-encrypted private connection that also reduces network costs.

Option A is the correct answer for a secure private connection.

Option B is incorrect: Route 53 is AWS's Domain Name Service.

Option C is incorrect: ClassicLink allows you to link your EC2-Classic instance to a VPC in your account within the same region.

Option D is incorrect: there is no AWS service called Direct Link.

Further information is available on the following white-paper.
https://media.amazonwebservices.com/AWS_Amazon_VPC_Connectivity_Options.pdf

310
Q

Question 312

Which of the following statements are true when it comes to EBS volumes and snapshots? Choose all that apply.

A. You can change the size of an EBS volume.

B. If you have an unencrypted volume, you can still create an encrypted snapshot from it.

C. The volume change size can also happen when it is attached to an instance.

D. The volume change size can only happen if the volume is detached from an instance.

A

Answer: A, C

If your Amazon EBS volume is attached to a current generation EC2 instance type, you can increase its size, change its volume type, or (for an io1 volume) adjust its IOPS performance, all without detaching it.

For more information on changing the volume size, please visit the link:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-expand-volume.html

For more information on changing the EBS encryption, please visit the link
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html

311
Q

Question 313

You have set up a CloudFront distribution but find that instead of each edge location serving up objects that should be cached, your application’s origins are being hit for each request. What could be a possible cause of this behavior? Choose the correct answer from the options below

A. The requested content has never been requested before

B. The objects are 10 GB in size.

C. The cache expiration time is set to a low value

D. You didn’t configure the objects with a X.509 certificate

A

Answer: C

You can control how long your objects stay in a CloudFront cache before CloudFront forwards another request to your origin. Reducing the duration allows you to serve dynamic content. Increasing the duration means your users get better performance because your objects are more likely to be served directly from the edge cache. A longer duration also reduces the load on your origin.

For more information on object expiration, please visit the link
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Expiration.html
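The effect of the cache expiration time can be seen in a toy model (illustrative only, not the CloudFront implementation): with a short TTL every request falls through to the origin, while a long TTL lets the edge serve repeats from cache.

```python
class TtlCache:
    """Toy edge cache: objects expire after ttl seconds, after which
    the next request is forwarded to the origin again."""
    def __init__(self, ttl):
        self.ttl = ttl
        self.store = {}       # key -> (value, fetched_at)
        self.origin_hits = 0

    def get(self, key, now, fetch_from_origin):
        entry = self.store.get(key)
        if entry and now - entry[1] < self.ttl:
            return entry[0]   # served from the edge cache
        self.origin_hits += 1  # expired or missing: go back to origin
        value = fetch_from_origin(key)
        self.store[key] = (value, now)
        return value

origin = lambda key: f"object:{key}"
short = TtlCache(ttl=1)       # low expiration value
long_ = TtlCache(ttl=3600)    # long expiration value
for t in (0, 10, 20):         # three requests, 10 seconds apart
    short.get("logo.png", t, origin)
    long_.get("logo.png", t, origin)
print(short.origin_hits)  # 3: low TTL, origin hit on every request
print(long_.origin_hits)  # 1: long TTL, later requests served from cache
```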

312
Q

Question 314

You are using an EC2 instance that is backed by an S3-based AMI. You are planning on terminating that instance. When the instance is terminated, what happens to the data on the root volume?

A. Data is automatically saved as an EBS snapshot.

B. Data is automatically saved as an EBS volume.

C. Data is unavailable until the instance is restarted.

D. Data is automatically deleted.

A

Answer: D

The AWS documentation mentions the following: the data in an instance store persists only during the lifetime of its associated instance. If an instance reboots (intentionally or unintentionally), data in the instance store persists. However, data in the instance store is lost under the following circumstances: the underlying disk drive fails, the instance stops, or the instance terminates.

For more information on instance store AMIs, please visit the below URL:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html

The AWS docs provide the following details on storage for the root device: all AMIs are categorized as either backed by Amazon EBS or backed by instance store. The former means that the root device for an instance launched from the AMI is an Amazon EBS volume created from an Amazon EBS snapshot. The latter means that the root device for an instance launched from the AMI is an instance store volume created from a template stored in Amazon S3.

For more information, see Amazon EC2 Root Device Volume.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ComponentsAMIs.html#storage-for-the-root-device

313
Q

Question 315

You are setting up Route 53 for your application. You have a set of EC2 instances to which the traffic needs to be distributed. You want a certain percentage of traffic to go to each instance. Which routing policy would you use? Choose an answer from the options given below.

A. Latency

B. Failover

C. Weighted

D. Geolocation

A

Answer: C

Use the weighted routing policy when you have multiple resources that perform the same function (for example, web servers that serve the same website) and you want Amazon Route 53 to route traffic to those resources in proportions that you specify (for example, one quarter to one server and three quarters to the other).

For more information on the routing policy, please visit the link
http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html
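The proportions are simple arithmetic: each record receives weight divided by the sum of all weights. A small sketch (with hypothetical record names) to make the "one quarter / three quarters" example concrete:

```python
def weighted_shares(records):
    """Route 53 weighted routing sends each record a share of
    traffic equal to weight / sum(all weights)."""
    total = sum(weight for _, weight in records)
    return {name: weight / total for name, weight in records}

# Hypothetical record set: weight 1 vs weight 3 gives a
# one-quarter / three-quarters traffic split.
records = [("server-a", 1), ("server-b", 3)]
print(weighted_shares(records))  # {'server-a': 0.25, 'server-b': 0.75}
```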

314
Q

Question 316

An application in AWS is currently running in the Singapore region. You have been asked to implement disaster recovery so that if the application goes down in the Singapore region, it can be started in the Asia region. Your application relies on pre-built AMIs. As part of your disaster recovery strategy, which of the below points should you consider?

A. Nothing, because all AMIs by default are available in any region as long as they are created within the same account

B. Copy the AMI from the Singapore region to the Asia region. Modify the Auto Scaling groups in the backup region to use the new AMI ID in the backup region

C. Modify the image permissions and share the AMI to the Asia region.

D. Modify the image permissions to share the AMI with another account, then set the default region to the backup region

A

Answer: B

If you need an AMI across multiple regions, then you have to copy the AMI across regions. Note that by default, AMIs that you have created will not be available across all regions, so option A is automatically invalid. Next, you can share AMIs with other users, but they will not be available across regions, so options C and D are invalid. To copy AMIs, follow the steps below. Step 1) Create an AMI from your running instance by choosing Image > Create Image.

Step 2) Once the image has been created, go to the AMI section in the EC2 dashboard and click the Copy AMI option.
Step 3) In the next screen, specify where to copy the AMI to.

For the full details on copying AMIs, please visit the link:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/CopyingAMIs.html

315
Q

Question 317

In the basic monitoring package for RDS, Amazon CloudWatch provides the following metrics. Choose three correct options.

A. Database visible metrics such as number of connections

B. Disk OPS metrics

C. Database memory usage

D. Web service visible metrics such as number failed transaction requests

A

Answer: A, B, C

As an RDS instance is completely managed by AWS and the user doesn't have access to operating-system metrics, it is logical for AWS to provide those metrics.

Please refer to AWS documentation
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Monitoring.html

The RDS monitoring console shows metrics like FreeableMemory, FreeStorageSpace, and SwapUsage, which are operating-system-visible metrics.

316
Q

Question 318

What is the maximum size of an EBS Provisioned IOPS SSD volume? Choose the correct option.

A. 2TiB

B. 16TiB

C. 4 GiB

D. 500 GiB

A

Answer: B

The maximum size allowed for an EBS Provisioned IOPS SSD volume is 16,384 GiB, which is 16 TiB; the minimum size is 4 GiB. These volumes are normally used for hosting databases that require a lot of I/O operations; they have better performance and are optimized for such scenarios.

For more information on EBS volume types, please visit the link:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html
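A quick way to remember the bounds is as a range check, with 16 TiB expressed as 16 × 1024 GiB. A minimal sketch (the function name is my own, not an AWS API):

```python
def validate_io1_size(size_gib):
    """Provisioned IOPS SSD (io1) volumes must be between
    4 GiB and 16 TiB (16 * 1024 = 16,384 GiB)."""
    return 4 <= size_gib <= 16 * 1024

print(validate_io1_size(4))      # True  (the minimum)
print(validate_io1_size(16384))  # True  (the maximum, 16 TiB)
print(validate_io1_size(2))      # False (below the minimum)
print(validate_io1_size(20000))  # False (over the limit)
```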

317
Q

Question 319

A company wants to store data that is not frequently accessed. What is the best and most cost-efficient solution that should be considered?

A. Amazon Storage Gateway

B. Amazon Glacier

C. Amazon EBS

D. Amazon S3

A

Answer: B

Since the data is not required to be accessed frequently, it can be stored on Amazon Glacier for cheaper storage. Remember that the standard retrieval time for getting data from Glacier is 3-5 hours.

You can look at the FAQ section of AWS Glacier -
https://aws.amazon.com/glacier/faqs/

318
Q

Question 320

You have an EC2 instance that is transferring data from S3 in the same region. The project sponsor is worried about the cost of the infrastructure. What can you tell him to show that you have a cost-effective solution?

A. You are going to be hosting only 4 instances, so you are minimizing on cost.

B. There is no cost for transferring data from EC2 to S3 if they are in the same region.

C. AWS provides a discount if you transfer data from EC2 to S3 if they are in the same region.

D. Both EC2 and S3 are in the same availability zone, so you can save via consolidated billing.

A

Answer: B

Please note that there is no cost when data is transferred from EC2 to S3 if they are in the same region. This is very important for an AWS Solutions Architect to know.

319
Q

Question 321

Which services allow the customer to retain full administrative privileges of the underlying EC2 instances?

A. Amazon Relational Database Service

B. Amazon Elastic Map Reduce

C. Amazon ElastiCache

D. Amazon DynamoDB

A

Answer: B

In Amazon EMR, you have the ability to work with the underlying instances; the EMR service allows you to associate an EC2 key pair with the launched instances so you can log in to them. This is also given in the AWS documentation.

For more information on access to EMR nodes, please visit the below URL
http://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-plan-access-ssh.html

320
Q

Question 322

Bucket names must be unique across all of S3.

A. True

B. False

A

Answer: A

Bucket names must be unique across all regions. Say you have created a bucket named devtoolslogging in the Singapore region. If you then try to create a bucket with the same name in the Oregon region, you will get an error that the bucket already exists.

321
Q

Question 323

A customer has enabled website hosting on a bucket named “devtoolslogging” in the Singapore region. What website URL is assigned to the bucket?

A. devtoolslogging.s3-website-ap-southeast-1.amazonaws.com

B. s3-website.devtoolslogging.amazonaws.com

C. s3-website.devtoolslogging.website-ap-southeast-1.amazonaws.com

D. devtoolslogging.ap-southeast-1.amazonaws.com

A

Answer: A

You can enable static website hosting for S3 buckets via the Properties option for the bucket. The endpoint of the bucket for static hosting is also shown there.
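The endpoint follows a fixed naming template, which a small sketch makes easy to remember (note as a caveat: some newer regions use a dot rather than a dash before the region, so treat this as the pattern for regions like ap-southeast-1):

```python
def s3_website_endpoint(bucket, region):
    """Static-website endpoints follow the pattern
    <bucket>.s3-website-<region>.amazonaws.com for regions
    such as ap-southeast-1."""
    return f"{bucket}.s3-website-{region}.amazonaws.com"

print(s3_website_endpoint("devtoolslogging", "ap-southeast-1"))
# devtoolslogging.s3-website-ap-southeast-1.amazonaws.com
```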

322
Q

Question 324

As a solutions architect, it is your job to design for high availability and fault tolerance. Company-A is utilizing Amazon S3 to store large amounts of file data. What steps would you take to ensure that if an availability zone was lost due to a natural disaster, your files would still be in place and accessible?

A. Copy the S3 bucket to an EBS optimized backed EC2 instance

B. Amazon S3 is highly available and fault tolerant by design and requires no additional configuration

C. Enable AWS Storage Gateway using gateway-stored setup

D. None of the above

A

Answer: B

Amazon S3 is already highly available and fault tolerant.

This is very clearly mentioned in its FAQ’s -
https://aws.amazon.com/s3/faqs/

323
Q

Question 325

What are the different options available when creating a VPC using the VPC wizard? Please choose all options that apply.

A. VPC with a Primary and Secondary subnet

B. VPC with Public and Private Subnets

C. VPC with Public and Private Subnets and Hardware VPN Access

D. VPC with default settings

A

Answer: B, C

When you launch the VPC wizard, you will get these options in the VPC wizard.

324
Q

Question 326

When an EC2 EBS-backed (EBS root) instance is stopped, what happens to the data on any ephemeral store volumes?

A. Data is automatically saved in an EBS volume.

B. Data is unavailable until the instance is restarted.

C. Data will be deleted and will no longer be accessible.

D. Data is automatically saved as an EBS snapshot.

A

Answer: C

Ephemeral (instance store) storage is temporary storage whose data does not survive a stop or termination. When you stop or terminate an instance, every block of storage in the instance store is reset. Therefore, your data cannot be accessed through the instance store of another instance.

Data on the EBS root volume, by contrast, is lost only on termination, and only if the Delete on Termination flag is checked (checked by default).

Find more details in the AWS documentation here:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html#instance-store-lifetime

325
Q

Question 327

SQS provides a timeout: a period of time during which Amazon SQS prevents other consuming components from receiving and processing a message that is already in flight. What is this time period called?

A. Component Timeout

B. Visibility Timeout

C. Processing Timeout

D. Receiving Timeout

A

Answer: B

Please refer to the AWS SQS FAQ section - https://aws.amazon.com/sqs/faqs/
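A toy model (illustrative only, not the SQS implementation) shows the behavior: once a message is received, it stays invisible to other consumers until the visibility timeout elapses or the consumer deletes it.

```python
class Queue:
    """Toy model of the SQS visibility timeout: a received message
    becomes invisible until its timeout elapses, and reappears if
    the consumer never deletes it."""
    def __init__(self, visibility_timeout):
        self.visibility_timeout = visibility_timeout
        self.messages = {}  # body -> time at which it becomes visible

    def send(self, body):
        self.messages[body] = 0  # visible immediately

    def receive(self, now):
        for body, visible_at in self.messages.items():
            if now >= visible_at:
                # Hide the message for the duration of the timeout.
                self.messages[body] = now + self.visibility_timeout
                return body
        return None  # everything is currently in flight

q = Queue(visibility_timeout=30)
q.send("job-1")
print(q.receive(now=0))   # 'job-1': first consumer gets it
print(q.receive(now=10))  # None: hidden during the visibility timeout
print(q.receive(now=31))  # 'job-1': visible again, it was never deleted
```

This reappearance is also why standard queues are "at-least-once": an unprocessed (undeleted) message is delivered again.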

326
Q

Question 328

You are currently hosting an infrastructure and most of the EC2 instances are near 90-100% utilized. Which type of EC2 instance would you utilize to ensure costs are minimized? Assume that the EC2 instances will be running continuously throughout the year.

A. Reserved instances

B. On-demand instances

C. Spot instances

D. Regular instances

A

Answer: A

When you have instances that will be used continuously throughout the year, the best option is to buy Reserved Instances. By buying Reserved Instances, you are allocated an instance for the entire year (or the duration you specify) at a reduced cost.

To understand more about Reserved Instances, please visit the links:

https://aws.amazon.com/ec2/pricing/reserved-instances/
https://blog.cloudability.com/maximizing-cost-savings-aws-reserved-instances/
https://awsinsider.net/articles/2017/03/21/controlling-aws-costs.aspx
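The cost argument reduces to simple arithmetic: for 24/7 usage, a lower hourly rate plus an upfront fee beats the on-demand rate over a full year. The prices below are hypothetical, for illustration only; real rates vary by instance type, region, and reservation term.

```python
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

def yearly_cost(hourly_rate, upfront=0.0, hours=HOURS_PER_YEAR):
    """Total cost of running one instance for the given hours."""
    return upfront + hourly_rate * hours

# Hypothetical example rates only.
on_demand = yearly_cost(hourly_rate=0.10)
reserved = yearly_cost(hourly_rate=0.04, upfront=150.0)
print(round(on_demand, 2))   # 876.0
print(round(reserved, 2))    # 500.4
print(reserved < on_demand)  # True: reserved wins for continuous usage
```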

327
Q

Question 329

What is the capability provided by AWS to enable fast, easy, and secure transfers of files over long distances between your client and your Amazon S3 bucket?

A. File Transfer

B. HTTP Transfer

C. Transfer Acceleration

D. S3 Acceleration

A

Answer: C

Please refer to the AWS S3 FAQ section -
https://aws.amazon.com/s3/faqs/

328
Q

Question 330

What is one key difference between an Amazon EBS-backed and an instance store-backed instance?

A. Amazon EBS-backed instances can be stopped and restarted.

B. Instance-store backed instances can be stopped and restarted.

C. Auto scaling requires using Amazon EBS-backed instances.

D. Virtual Private Cloud requires EBS backed instances.

A

Answer: A

Amazon EBS-backed instances can be stopped and restarted; instance store-backed instances cannot be stopped, only rebooted or terminated.

Please see the url for the key differences between EBS and instance store volumes
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ComponentsAMIs.html

329
Q

Question 331

You have an application hosted in AWS. The logs from the application are sent to Cloudwatch. The application has recently been encountering some errors. A patch needs to be developed for the error to be rectified. For the moment you need to automate the restart of the server whenever the error occurs. How can you achieve this?

A. Check the CloudWatch logs for the error keywords, create an alarm, and then restart the server

B. Create a CloudWatch metric which looks at the CPU utilization and then restarts the server

C. Create a CloudWatch metric which looks at the memory utilization and then restarts the server

D. Check the CloudWatch logs for the error keywords, then send a notification to SQS to restart the server

A

Answer: A

The AWS documentation mentions the following on CloudWatch Logs: you can use CloudWatch Logs to monitor applications and systems using log data. For example, CloudWatch Logs can track the number of errors that occur in your application logs and send you a notification whenever the rate of errors exceeds a threshold you specify. CloudWatch Logs uses your log data for monitoring, so no code changes are required. For example, you can monitor application logs for specific literal terms (such as "NullReferenceException") or count the number of occurrences of a literal term at a particular position in log data (such as "404" status codes in an Apache access log). When the term you are searching for is found, CloudWatch Logs reports the data to a CloudWatch metric that you specify. Log data is encrypted while in transit and at rest.

For more information on CloudWatch Logs, please visit the below URL:
http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html
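A metric filter on a literal term boils down to counting matching log events; when the resulting metric crosses an alarm threshold, the alarm action (here, a restart) fires. A minimal sketch of the counting step, with made-up log lines:

```python
def count_matches(log_lines, term):
    """Like a CloudWatch Logs metric filter on a literal term:
    count the log events containing that term."""
    return sum(1 for line in log_lines if term in line)

# Hypothetical application log events.
logs = [
    "GET /index.html 200",
    "GET /missing 404",
    "NullReferenceException at handler.py:42",
    "GET /favicon.ico 404",
]
print(count_matches(logs, "404"))                     # 2
print(count_matches(logs, "NullReferenceException"))  # 1
```

An alarm on the resulting metric (for example, count >= 1 in a period) is what triggers the automated server restart.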

330
Q

Question 332

A company wants to utilize AWS storage. For them, low storage cost is paramount, the data is rarely retrieved, and data retrieval times of several hours are acceptable. What is the best storage option to use?

A. Glacier

B. Reduced Redundancy Storage

C. EBS backed storage connected to EC2

D. CloudFront

A

Answer: A

With the above requirements, the best option is to opt for Amazon Glacier.

Please refer to the Glacier FAQ’s
https://aws.amazon.com/glacier/faqs/

331
Q

Question 333

A client application requires operating system privileges on a relational database server. What is an appropriate configuration for a highly available database architecture?

A. Standalone Amazon EC2 instance

B. Amazon RDS in a Multi-AZ configuration

C. Amazon EC2 instances in a replication configuration utilizing a Single Availability Zone

D. Amazon EC2 instances in a replication configuration utilizing two different Availability Zones

A

Answer: D

You cannot access the OS of RDS databases, as RDS is a fully managed service. If a customer wants OS access to their database, for more granular control or for compliance reasons, they can install their database engine on an EC2 instance.

In choice D, the database needs to be installed on EC2 for OS access, with replication across two Availability Zones to support failover.

Please follow below link for reference, which shows steps to install and configure Oracle in EC2 instance
https://oracle-base.com/articles/vm/aws-ec2-installation-of-oracle

Since the client requires operating system privileges, option B is not valid.

Since there is a requirement for high availability, you cannot have just one AZ and one EC2 instance.

Hence D is the right answer. Please refer to the below link showing an example architecture for Oracle database high availability on EC2:
http://docs.aws.amazon.com/quickstart/latest/oracle-database/architecture.html

For more information, please read the below
link: http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Introduction.ReplicationInstance.html

332
Q

Question 334

Which AWS service is used to monitor all API calls to AWS?

A. Amazon SES

B. Amazon CloudTrail

C. Amazon CloudFront

D. Amazon S3

A

Answer: B

Please refer to the product description for AWS CloudTrail at the URL -
https://aws.amazon.com/cloudtrail/

333
Q

Question 335

A company needs to deploy virtual desktops to its customers in a virtual private cloud, leveraging existing security controls. Which set of AWS services and features will meet the company’s requirements?

A. Virtual Private Network connection. AWS Directory Services, and ClassicLink

B. Virtual Private Network connection. AWS Directory Services, and Amazon Workspaces

C. AWS Directory Service, Amazon Workspaces, and AWS Identity and Access Management

D. Amazon Elastic Compute Cloud, and AWS Identity and Access Management

A

Answer: B

Option B is the correct answer because AWS Directory Service is used to authenticate against an existing on-premises AD through VPN, and the Amazon WorkSpaces service is used for virtual desktops.

Option A is incorrect because ClassicLink merely allows us to link an EC2-Classic instance to a VPC in our account within the same region.

Option C is incorrect because AWS Directory Service needs a VPN connection to interact with an on-premises AD directory.

Option D is incorrect because we need WorkSpaces for virtual desktops.

334
Q

Question 336

Which of the following statements are true about Amazon Reduced Redundancy Storage (RRS)? Choose the correct 3 answers from the below options.

A. RRS has the ability to provide eleven nines availability.

B. RRS has the ability to provide 99.99% availability.

C. RRS has the ability to provide 99.99% durability.

D. If there is a requirement to store data that is easily reproducible or durably stored elsewhere, then RRS is the ideal option.

A

Answer: B, C and D.

The durability and availability figures are given on the AWS site for RRS. Reduced Redundancy Storage (RRS) is an Amazon S3 storage option that enables customers to store non-critical, reproducible data at lower levels of redundancy than Amazon S3's standard storage. S3 standard is the most reliable and durable storage option from Amazon, whereas data that is non-critical and easily reproducible if lost can be stored in RRS to reduce your storage cost.

The RRS option stores objects on multiple devices across multiple facilities, providing 400 times the durability of a typical disk drive, but does not replicate objects as many times as standard Amazon S3 storage.

You can read more about RRS in the below link:
https://aws.amazon.com/s3/reduced-redundancy/

335
Q

Question 337

After creating a new IAM user, which of the following must be done before they can successfully make API calls?

A. Add a password to the user.

B. Enable Multi-Factor Authentication for the user.

C. Assign a Password Policy to the user.

D. Create a set of Access Keys for the user.

A

Answer: D

In IAM, when you create a user, you need to create (and download) an Access Key ID and Secret Access Key so that the user can make programmatic calls to AWS.
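Under the hood, the secret access key never travels on the wire; it is used to sign each API request (Signature Version 4). A minimal sketch of the documented signing-key derivation, using an illustrative (example) credential value:

```python
import hashlib
import hmac


def _sign(key: bytes, msg: str) -> bytes:
    """One step of the SigV4 HMAC-SHA256 chain."""
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()


def derive_signing_key(secret_key: str, date_stamp: str, region: str, service: str) -> bytes:
    """Derive the SigV4 signing key: HMAC over date, then region, then service."""
    k_date = _sign(("AWS4" + secret_key).encode("utf-8"), date_stamp)
    k_region = _sign(k_date, region)
    k_service = _sign(k_region, service)
    return _sign(k_service, "aws4_request")
```

The derived key (not the raw secret) is what signs the request string, which is why a user without access keys cannot make API calls at all.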

336
Q

Question 338

Which AWS service provides a fully managed NoSQL database with fast and predictable performance and seamless scalability?

A. AWS RDS

B. DynamoDB

C. Oracle RDS

D. Elastic Map Reduce

A

Answer: B

DynamoDB is a fully managed NoSQL offering provided by AWS. It is now available in most regions for users to consume.

The link provides the full details on the product
http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html

337
Q

Question 339

A company’s application intends to use Auto Scaling and needs to store user state information. Which of the following AWS services provides a shared data store with durability and low latency?

A. AWS ElastiCache MemCached

B. Amazon Simple Storage Service

C. Amazon EC2 instance storage

D. Amazon DynamoDB

A

Answer: D

Amazon DynamoDB is well suited to storing small items of data such as user state information, and the service offers durability and low latency.

See the DynamoDB FAQ for guidance on when to use S3 versus DynamoDB - https://aws.amazon.com/dynamodb/faqs/

338
Q

Question 340

You have a read-intensive application hosted in AWS, currently using the MySQL RDS feature. Which of the following can be used to offload read traffic from the MySQL database?

A. Enable the Multi-AZ on the MySQL RDS

B. Use Cold Storage Volumes for the MySQL RDS

C. Enable Read Replicas and offload the reads to the replicas

D. Use SQS to queue up the reads

A

Answer: C

The AWS documentation mentions the following on Read Replicas: Amazon RDS Read Replicas provide enhanced performance and durability for database (DB) instances. This replication feature makes it easy to elastically scale out beyond the capacity constraints of a single DB Instance for read-heavy database workloads. You can create one or more replicas of a given source DB Instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput. Read replicas can also be promoted when needed to become standalone DB instances.

For more information on Read Replicas, please visit the below URL:
https://aws.amazon.com/rds/details/read-replicas/
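Conceptually, the application then routes writes to the primary endpoint and spreads reads across the replica endpoints. A small illustrative sketch (the endpoint names are hypothetical; a real application would use the DNS endpoints RDS assigns):

```python
import itertools


class ReadReplicaRouter:
    """Route writes to the primary and reads round-robin across replicas."""

    def __init__(self, primary: str, replicas: list[str]):
        self.primary = primary
        # Cycle endlessly through replica endpoints; fall back to the
        # primary if no replicas exist.
        self._replicas = itertools.cycle(replicas) if replicas else None

    def endpoint(self, operation: str) -> str:
        if operation == "write" or self._replicas is None:
            return self.primary
        return next(self._replicas)
```

Promoting a replica to a standalone instance would simply mean pointing `primary` at the promoted endpoint.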

339
Q

Question 341

The Trusted Advisor service provides insight regarding which four categories of an AWS account?

A. Security, fault tolerance, high availability, and connectivity

B. Security, access control, high availability, and performance

C. Performance, cost optimization, security, and fault tolerance

D. Performance, cost optimization, access control, and connectivity

A

Answer: C

For more information on what the Trusted Advisor dashboard offers, please visit:
https://aws.amazon.com/premiumsupport/trustedadvisor/

340
Q

Question 342

When will you incur costs with an Elastic IP address (EIP)?

A. When an EIP is allocated.

B. When it is allocated and associated with a running instance.

C. When it is allocated and associated with a stopped instance.

D. Costs are incurred regardless of whether the EIP is associated with a running instance.

A

Answer: C

The correct answer to this question is option C. Option D is a little tricky and might make you think it is correct even though it is not.

The following AWS documentation shows when costs are not incurred. An Elastic IP address doesn’t incur charges as long as all of the following conditions are true: the Elastic IP address is associated with an Amazon EC2 instance; the instance associated with the Elastic IP address is running; and the instance has only one Elastic IP address attached to it. If you’ve stopped or terminated an EC2 instance with an associated Elastic IP address and you don’t need that Elastic IP address any more, consider disassociating or releasing the Elastic IP address by following the instructions at Working with Elastic IP Addresses. Note: after an Elastic IP address is released, you can’t provision that same Elastic IP address again, though you can provision a different Elastic IP address.

AWS doesn’t want you to waste static public IPs. You will be charged for an Elastic IP:

1. If the EIP is allocated but not associated with any instance.

2. If the EIP is associated with a stopped instance.

Reference link: https://aws.amazon.com/premiumsupport/knowledge-center/elastic-ip-charges/

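The charging conditions above can be summarized in a small predicate (an illustrative sketch, not an official billing calculation):

```python
def eip_incurs_charges(associated: bool, instance_running: bool, eips_on_instance: int = 1) -> bool:
    """An Elastic IP is free only while it is associated with a running
    instance that has exactly one EIP attached; in every other state
    (unassociated, or attached to a stopped instance, or one of several
    EIPs on the same instance) it incurs charges."""
    return not (associated and instance_running and eips_on_instance == 1)
```

This makes the exam distinction explicit: option C (associated with a stopped instance) is billed, while the running-instance case is not.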

341
Q

Question 343

How many availability zones are mapped to a subnet?

A. 1

B. 2

C. Depends on AWS at the time of creating a subnet

D. Depends on the number of instances you are going to host in the subnet.

A

Answer: A

Remember that when a subnet is created, it is always mapped to exactly one Availability Zone. In the VPC dashboard, under the Subnet section, you can click Create Subnet; when you create the subnet, you can attach only one AZ to it.

342
Q

Question 344

A company is building a service in which Amazon EC2 worker instances process an uploaded audio file and generate a text file. You must store both of these files in the same durable storage until the text file is retrieved. You do not know what the storage capacity requirements are. Which storage option is both cost-efficient and scalable?

A. Multiple Amazon EBS volume with snapshots

B. A single Amazon Glacier vault

C. A single Amazon S3 bucket

D. Multiple instance stores

A

Answer: C

For durable, scalable and cost-efficient storage of files where the capacity requirements are unknown, Amazon S3 is the right choice: a bucket scales automatically, and you pay only for what you store.

343
Q

Question 345

A custom script needs to be passed to new Amazon Linux instances created in your Auto Scaling group. Which feature allows you to accomplish this?

A. User data

B. EC2Config service

C. IAM roles

D. AWS Config

A

Answer: A

When you configure an instance during creation, you can add custom scripts to the User data section. In Step 3 of creating an instance, under the Advanced Details section, we can enter custom scripts in the User Data field; instances launched by an Auto Scaling group receive the user data defined in the launch configuration.

344
Q

Question 346

A company is building software on AWS that requires access to various AWS services. Which configuration should be used to ensure that AWS credentials, i.e. Access Keys and Secret Access Keys, are not compromised? (Choose two options)

A. Enable Multi-Factor Authentication for your AWS root account.

B. Assign an IAM role to the Amazon EC2 instance.

C. Store the AWS Access Key ID/Secret Access Key combination in software comments.

D. Assign an IAM user to the Amazon EC2 instance.

A

Answer: A, B

It is best practice to create IAM roles that can be assigned to EC2 instances, and to enable MFA for the root account. This helps ensure the Access Key ID/Secret Access Key combination is not compromised.

345
Q

Question 347

A company has the requirement to store data using AWS storage services. The data is not frequently accessed. If data recovery time is not an issue, which of the below is the best and most cost-efficient solution to fulfill this requirement?

A. S3 Standard

B. S3 Standard - IA (Infrequently Accessed)

C. Glacier

D. Reduced Redundancy Storage

A

Answer: C

Note: the answer could also be B, S3 Standard - IA (Infrequently Accessed). However, since the question states that data recovery time is not an issue, C (Glacier) is the most cost-effective option in this case.

Reference
link: https://aws.amazon.com/products/storage/

346
Q

Question 348

Resources that are created in AWS are identified by a unique identifier, which is known as which of the options given below?

A. Amazon Resource Number

B. Amazon Resource Nametag

C. Amazon Resource Name

D. Amazon Resource Namespace

A

Answer: C

Amazon Resource Names (ARNs) are used to uniquely identify AWS resources.

For more information on ARNs, refer to the link -
http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html
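An ARN has the general form `arn:partition:service:region:account-id:resource`. A small sketch of parsing one; note that some services, such as S3, leave the region and account fields empty:

```python
from typing import NamedTuple


class Arn(NamedTuple):
    partition: str
    service: str
    region: str
    account_id: str
    resource: str


def parse_arn(arn: str) -> Arn:
    """Split an ARN on its first five colons; the resource part may
    itself contain colons or slashes, so it is kept whole."""
    prefix, partition, service, region, account, resource = arn.split(":", 5)
    if prefix != "arn":
        raise ValueError(f"not an ARN: {arn}")
    return Arn(partition, service, region, account, resource)
```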

347
Q

Question 349

When you are using Route 53 for a website hosted in S3, which of the following rules must be adhered to? Choose the correct answer from the options below

A. The S3 bucket name must be the same as the domain name

B. The record set cannot use an alias

C. The record set must be of type “MX”

D. The S3 bucket must be in the same region as the hosted zone

A

Answer: A

This is given in the AWS documentation. For more information on using Route 53 along with S3, please visit the link
http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/RoutingToS3Bucket.html

348
Q

Question 350

What are some of the benefits of using the CloudFormation service? Choose 2 answers from the options given below

A. Can automatically increase instance capacity

B. A storage location for your applications code

C. Version control your infrastructure

D. A great disaster recovery option

A

Answer: C, D

The justification for infrastructure as code is given in the AWS documentation.

For the justification on disaster recovery, please visit the below link
https://aws.amazon.com/blogs/aws/new-whitepaper-use-aws-for-disaster-recovery/

For more information on Cloudformation, please visit the link
https://aws.amazon.com/cloudformation/
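To illustrate the “version control your infrastructure” benefit: a CloudFormation template is just a text file that can be committed to source control and re-deployed in another region for disaster recovery. A minimal, illustrative template (the AMI ID is a placeholder):

```json
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Minimal example: one EC2 instance, version-controlled as code",
  "Resources": {
    "WebServer": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "ImageId": "ami-12345678",
        "InstanceType": "t2.micro"
      }
    }
  }
}
```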

349
Q

Question 351

AWS thrives on the concept of high availability. Which of the options below best describes high availability? Choose the correct answer from the options below

A. Implementing security procedures

B. Implementing multiple AWS services

C. The ability of a system to easily increase in size.

D. A durable system that can operate for long periods of time without failure.

A

Answer: D

High availability is a characteristic of a system, which aims to ensure an agreed level of operational performance, usually uptime, for a higher than normal period.

For more information on high availability, please refer to the following link
https://en.wikipedia.org/wiki/High_availability

350
Q

Question 352

What best describes the “Principle of Least Privilege”? Choose the correct answer from the options given below

A. All users should have the same baseline permissions granted to them to use basic AWS services.

B. Users should be granted permission to access only resources they need to do their assigned job.

C. Users should submit all access requests in writing so that there is a paper trail of who needs access to different AWS resources.

D. Users should always have a little more access granted to them than they need, just in case they end up needing it in the future.

A

Answer: B

The principle means giving a user account only those privileges which are essential to perform its intended function. For example, a user account for the sole purpose of creating backups does not need to install software: hence, it has rights only to run backup and backup-related applications.

For more information on principle of least privilege, please refer to the following link
https://en.wikipedia.org/wiki/Principle_of_least_privilege

351
Q

Question 353

Which of the following best describes the purpose of an Elastic Load Balancer? Choose the correct answer from the options given below

A. To scale more EC2 instances on demand

B. To evenly distribute traffic among multiple EC2 instances located in single or different Availability Zones.

C. To distribute traffic to a second instance once the first instance capacity has reached its limit.

D. To evenly distribute traffic among multiple EC2 instances in the same Availability Zone.

A

Answer: B

Elastic Load Balancing automatically distributes incoming application traffic across multiple Amazon EC2 instances. It enables you to achieve fault tolerance in your applications, seamlessly providing the required amount of load balancing capacity needed to route application traffic. And the ELB is used to distribute traffic between instances in Multiple AZ’s.

For more information on Elastic Load Balancer, please refer to the following link https://aws.amazon.com/elasticloadbalancing/

Some more key points about ELB: an Elastic Load Balancer (ELB) is used for rotating traffic to various EC2 instances located across multiple Availability Zones (AZs). ELB can detect healthy and unhealthy EC2 instances and will not route traffic to the unhealthy ones. If all the instances in one AZ are unhealthy, it will route the traffic to EC2 instances in the other AZs. You can achieve higher levels of fault tolerance for your applications by using Elastic Load Balancing to automatically route traffic across multiple instances and multiple Availability Zones. Elastic Load Balancing ensures that only healthy Amazon EC2 instances receive traffic by detecting unhealthy instances and rerouting traffic across the remaining healthy instances. If all of your EC2 instances in one Availability Zone are unhealthy, and you have set up EC2 instances in multiple Availability Zones, Elastic Load Balancing will route traffic to your healthy EC2 instances in those other zones.

352
Q

Question 354

When you create a default VPC, what are the services you get by default in the VPC? Select 2 options.

A. An Elastic Load Balancer

B. Default subnet in each Availability Zone

C. An Internet Gateway attached to the default VPC

D. A lightweight RDS instance such as SQL Server Express.

A

Answer: B and C.

For the list of default services given for a default VPC and to get more information on what comes as part of a default VPC, follow the
link http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/default-vpc.html

353
Q

Question 355

In Auto Scaling, which of the following best describes the purpose of a scaling policy?
Choose an answer from the options below.

A. A set of CloudWatch metric thresholds that dictate when to add or remove instances from the Auto Scaling group.

B. The IAM access policy granted to an Auto Scaling group.

C. The percentage at which an ELB will send traffic to an instance before it sends traffic to a different instance.

D. An SNS notification alert.

A

Answer: A

You can create a scaling policy that uses CloudWatch alarms to determine when your Auto Scaling group should scale out or scale in. Each CloudWatch alarm watches a single metric and sends messages to Auto Scaling when the metric breaches a threshold that you specify in your policy. You can use alarms to monitor any of the metrics that the services in AWS that you’re using send to CloudWatch, or you can create and monitor your own custom metrics.

For more information on Scaling policies, please refer to the following link
http://docs.aws.amazon.com/autoscaling/latest/userguide/policy_creating.html
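The decision logic that a pair of CloudWatch alarms implements for a scaling policy can be sketched as follows (the threshold values are hypothetical):

```python
def scaling_decision(metric_value: float,
                     scale_out_threshold: float = 70.0,
                     scale_in_threshold: float = 30.0) -> str:
    """Mimic what two CloudWatch alarms do for an Auto Scaling group:
    breach the high threshold -> scale out; breach the low -> scale in;
    otherwise take no action."""
    if metric_value >= scale_out_threshold:
        return "scale_out"
    if metric_value <= scale_in_threshold:
        return "scale_in"
    return "no_action"
```

In the real service the alarm also requires the breach to persist for a configured number of evaluation periods before Auto Scaling acts.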

354
Q

Question 356

A company has a solution hosted in AWS consisting of a set of EC2 instances. They have recently been under attack, and the IT security department has identified that the attacks come from a known set of IP addresses. Which of the following methods can be adopted to help in this situation?

A. Place the EC2 instances into private subnets, and set up an NAT gateway so employees can access them.

B. Remove the IGW from the VPC so that no outside traffic can reach the EC2 instances.

C. Lock down the NACL for the set of IP addresses.

D. Place the EC2 instances into private subnets, and set up a bastion host so employees can access them.

A

Answer: C

The NACLs can be modified to deny only the traffic from the identified set of IP addresses, which is the most secure option here.

For more information on NACL, please refer to the following link
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_ACLs.html
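NACL rules are evaluated in ascending rule-number order, and the first matching rule decides whether traffic is allowed or denied (with an implicit deny if nothing matches). A sketch of that evaluation, using a hypothetical rule set that blocks the attackers’ range:

```python
import ipaddress


def evaluate_nacl(rules, source_ip: str) -> str:
    """Network ACL semantics: rules are (rule_number, cidr, action) tuples,
    evaluated in ascending rule-number order; the first matching rule wins,
    and the implicit '*' rule denies anything unmatched."""
    addr = ipaddress.ip_address(source_ip)
    for _number, cidr, action in sorted(rules):
        if addr in ipaddress.ip_network(cidr):
            return action
    return "DENY"


# Hypothetical NACL: deny the attackers' range, allow everything else.
RULES = [
    (100, "203.0.113.0/24", "DENY"),
    (200, "0.0.0.0/0", "ALLOW"),
]
```

Because the deny rule has the lower number, it matches before the catch-all allow rule, which is exactly how you lock out a known set of source addresses.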

355
Q

Question 357

You have an ELB distributing traffic to a fleet of EC2 instances inside your VPC, evenly spread across two Availability Zones. However, you realize that only half of your instances are actually receiving traffic. What is the most likely cause of this problem? Choose the correct answer from the options given below

A. The ELBs listener is not set to port 80.

B. One or more security groups do not allow HTTP traffic.

C. Cross-zone load balancing has not been enabled.

D. The health check ping port is set to port 80, but should be set to port 22.

A

Answer: C

For environments where clients cache DNS lookups, incoming requests might favor one of the Availability Zones. Using cross-zone load balancing, this imbalance in the request load is spread across all available instances in the region, reducing the impact of misbehaving clients. By default, your Classic Load Balancer distributes incoming requests evenly across its enabled Availability Zones. For example, if you have ten instances in Availability Zone us-west-2a and two instances in us-west-2b, the requests are distributed evenly between the two Availability Zones. As a result, the two instances in us-west-2b serve the same amount of traffic as the ten instances in us-west-2a. To ensure that your load balancer distributes incoming requests evenly across all instances in its enabled Availability Zones, enable cross-zone load balancing.

For more information on ELB cross-zone load balancing, please refer to the following link http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/enable-disable-crosszone-lb.html
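The imbalance described above is easy to see with a little arithmetic. The sketch below models per-instance load with and without cross-zone load balancing, using the 10-instance/2-instance example from the documentation (instance ids are hypothetical):

```python
def per_instance_load(requests: float, zones: dict, cross_zone: bool) -> dict:
    """Model classic ELB request distribution.

    Without cross-zone load balancing, requests are split evenly per
    Availability Zone and then among that zone's instances; with it,
    requests are split evenly across every registered instance.
    `zones` maps an AZ name to a list of instance ids."""
    load = {}
    if cross_zone:
        instances = [inst for ids in zones.values() for inst in ids]
        for inst in instances:
            load[inst] = requests / len(instances)
    else:
        per_zone = requests / len(zones)
        for ids in zones.values():
            for inst in ids:
                load[inst] = per_zone / len(ids)
    return load
```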

356
Q

Question 358

You have an application currently running on five EC2 instances as part of an Auto Scaling group. For the past 30 minutes all five instances have been running at 100% CPU Utilization; however, the Auto Scaling group has not added any more instances to the group. What is the most likely cause?
Choose 2 likely answers from the options given below

A. You already have 20 on-demand instances running.

B. The Auto Scaling group’s MAX size is set at five.

C. The Auto Scaling group’s scale down policy is too high.

D. The Auto Scaling group’s scale up policy has not yet been reached.

A

Answer: A, B

The twenty on-demand instance limit is at the account level, and you might have other applications running EC2 instances elsewhere in your account (perhaps in another region), which may cause the total number to reach the limit. This is provided in the AWS documentation.

For more information on troubleshooting Auto Scaling, please refer to the following link http://docs.aws.amazon.com/autoscaling/latest/userguide/ts-as-capacity.html

357
Q

Question 359

If you need to upload a 600 MB file to S3, which of the following is the best option to use? Choose the correct answer from the options below

A. Single operation upload

B. Snowball

C. AWS Import/Export

D. Multi-part upload

A

Answer: D

The Multipart upload API enables you to upload large objects in parts. You can use this API to upload new large objects or make a copy of an existing object (see Operations on Objects). Multipart uploading is a three-step process: You initiate the upload, you upload the object parts, and after you have uploaded all the parts, you complete the multipart upload. Upon receiving the complete multipart upload request, Amazon S3 constructs the object from the uploaded parts, and you can then access the object just as you would any other object in your bucket.

For more information on Multi-part file upload, please refer to the following link
http://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html
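For a 600 MB object you might choose, say, 8 MiB parts. A sketch of the part-size bookkeeping, using the documented S3 limits (5 MiB minimum part size except for the last part, 10,000 parts maximum):

```python
import math

MIN_PART = 5 * 1024 * 1024   # 5 MiB minimum part size (except the last part)
MAX_PARTS = 10_000           # maximum number of parts per upload


def plan_multipart(total_bytes: int, part_size: int = 8 * 1024 * 1024) -> int:
    """Return how many parts a multipart upload would need for the
    chosen part size, enforcing the S3 limits."""
    if part_size < MIN_PART:
        raise ValueError("part size below the 5 MiB minimum")
    parts = math.ceil(total_bytes / part_size)
    if parts > MAX_PARTS:
        raise ValueError("too many parts; increase the part size")
    return parts
```

After the initiate/upload/complete sequence, S3 assembles the object from these parts, and failed parts can be retried individually, which is why multipart is preferred over a single 600 MB PUT.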

358
Q

Question 360

A company has the requirement to store files in S3. After a period of a month, these files can be archived. The archived files might be required after a period of 3-4 months. Which of the following suits the requirements

A. Use EC2 instances with EBS volumes, one for normal storage and the other for archived storage

B. Use S3 for normal file storage and use lifecycle policies for moving the files to Glacier.

C. Use EC2 instances with EBS volumes and use lifecycle policies for moving the files to Glacier.

D. Use Glacier for normal file storage and use lifecycle policies for moving the files to S3.

A

Answer: B

Lifecycle configuration enables you to specify the lifecycle management of objects in a bucket. The configuration is a set of one or more rules, where each rule defines an action for Amazon S3 to apply to a group of objects. These actions can be classified as follows: Transition actions — In which you define when objects transition to another storage class. For example, you may choose to transition objects to the STANDARD_IA (IA, for infrequent access) storage class 30 days after creation, or archive objects to the GLACIER storage class one year after creation.

For more information on Lifecycle policies, please refer to the following link
http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html

359
Q

Question 361

Your team has an application hosted in Docker containers. You want to port that application onto AWS in the easiest way possible for your development community. Which of the following services can be used to fulfill this requirement?

A. AWS Elastic Load Balancer

B. AWS SNS

C. AWS SQS

D. AWS Elastic Beanstalk

A

Answer: D

The AWS documentation mentions the following: Elastic Beanstalk supports the deployment of web applications from Docker containers. With Docker containers, you can define your own runtime environment. You can choose your own platform, programming language, and any application dependencies (such as package managers or tools) that aren’t supported by other platforms. Docker containers are self-contained and include all the configuration information and software your web application requires to run.

For more information on Elastic beanstalk and docker, please visit the below URL:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker.html

360
Q

Question 362

Which of the following EC2 tools can be used to administer instances without the need to SSH or RDP into the instance?

A. AWS Config

B. AWS CodePipeline

C. Run Command

D. EC2Config

A

Answer: C

You can use Run Command from the Amazon EC2 console to configure instances without having to log in to each instance.

For more information on the Run Command , please visit the below URL:
http://docs.aws.amazon.com/systems-manager/latest/userguide/rc-console.html

361
Q

Question 363

If you wanted to extend your on-premise infrastructure with AWS, which of the below options would help. Choose 2 answers from the options given below

A. Virtual Private Network

B. CloudFront Service

C. Direct Connect

D. Primary Connection

A

Answer: A, C

You can either build a VPN or use a Direct Connect connection.

For more information on VPC to on-premise networks, please refer to the following link
https://aws.amazon.com/blogs/apn/amazon-vpc-for-on-premises-network-engineers-part-one/

362
Q

Question 364

Why does stopping and starting an instance help in fixing a System Status Check error? Choose an answer from the options given below

A. Stopping and starting an instance causes the instance to change the AMI.

B. Stopping and starting an instance causes the instance to be provisioned on different AWS hardware.

C. Stopping and starting an instance reboots the operating system.

D. None of the above

A

Answer: B

Refer to the steps published by AWS support: https://aws.amazon.com/premiumsupport/knowledge-center/system-reachability-check/

For more information on starting and stopping instances, please refer to the following link
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Stop_Start.html

363
Q

Question 365

In consolidated billing, what are the 2 different types of accounts?

A. Paying account and Linked account

B. Parent account and Child account

C. Main account and Sub account.

D. Primary account and Secondary account.

A

Answer: A

You can have a combination of paying accounts and linked accounts. With consolidated billing you have the ability to reduce costs across accounts, which is one of its main advantages. Consolidated billing has the following benefits: One Bill: you get one bill for multiple accounts. Easy Tracking: you can easily track each account’s charges and download the cost data in CSV format. Combined Usage: if you have multiple accounts today, your charges might decrease because AWS combines usage from all accounts in the organization to qualify you for volume pricing discounts.

For information on Consolidated billing, please visit the link:
https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/consolidated-billing.html
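The "Combined Usage" benefit comes from volume pricing tiers. The sketch below uses hypothetical tier prices to show how combining two accounts' usage can cost less than billing each account separately:

```python
def tiered_cost(usage_gb: float, tiers) -> float:
    """Charge usage against successive pricing tiers.

    `tiers` is a list of (tier_size_gb, price_per_gb); a size of None
    means the tier is unlimited. These are illustrative numbers, not
    actual AWS prices."""
    cost, remaining = 0.0, usage_gb
    for size, price in tiers:
        used = remaining if size is None else min(remaining, size)
        cost += used * price
        remaining -= used
        if remaining <= 0:
            break
    return cost


# Hypothetical tiers: first 50,000 GB at $0.023/GB, the rest at $0.022/GB.
TIERS = [(50_000, 0.023), (None, 0.022)]
```

With two accounts each storing 30,000 GB, separate bills keep both entirely in the first tier, while the combined 60,000 GB pushes 10,000 GB into the cheaper tier.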

364
Q

Question 366

What is the term often used to describe a DR scenario in which a minimal version of an environment is always running in the cloud? Choose an answer from the options given below

A. Backup and Restore

B. Pilot Light

C. Warm standby

D. Multi Site

A

Answer: B

This is given in a whitepaper published by AWS

For more information on disaster recovery, please refer to the below link
https://media.amazonwebservices.com/AWS_Disaster_Recovery.pdf

365
Q

Question 367

Which of the following features ensures even distribution of traffic to Amazon EC2 instances in multiple Availability Zones registered with a load balancer?

A. Elastic Load Balancing request routing

B. An Amazon Route 53 weighted routing policy

C. Elastic Load Balancing cross-zone load balancing

D. An Amazon Route 53 latency routing policy

A

Answer: C

To ensure that traffic is evenly distributed, you need to ensure the “Enable Cross-Zone Load balancing option” is chosen. This option comes up when you are creating a classic load balancer in Step 5 of Add EC2 instances.

366
Q

Question 368

Currently you have a VPC with EC2 Security Group and several running EC2 instances. You change the Security Group rules to allow inbound traffic on a new port and protocol, and launch several new instances in the same Security Group. When will the Security Group changes be applied to the EC2 instances? Please choose the correct answer.

A. Immediately to all instances in the security group.

B. Immediately to the new instances only.

C. Immediately to the new instances, but old instances must be stopped and restarted before the new rules apply.

D. To all instances, but it may take several minutes for old instances to see the changes.

A

Answer: A

Any changes you make to security group rules are applied to all instances that are part of that security group. When you add or remove rules, they are automatically and immediately applied to all instances associated with the security group.

For more information, please refer the below URL:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_SecurityGroups.html

367
Q

Question 369

What is the minimum size of an object that can be uploaded to Amazon S3?

A. 1 Megabyte

B. 0 Bytes

C. 1 Byte

D. 5TB

A

Answer: B

The minimum size of an object in S3 is 0 bytes.

You can refer to the S3 FAQs for more information on the allowable storage on S3.
https://aws.amazon.com/s3/faqs/

368
Q

Question 370

A company is trying to reduce their storage costs and want a more cost effective solution than Amazon S3. Secondly they claim that their data store is not frequently accessed. What is the best and cost efficient solution that should be considered?

A. Amazon Storage Gateway

B. Amazon Glacier

C. Amazon EBS

D. Amazon S3

A

Answer: B

Since the data is not required to be accessed frequently, it can be stored on Amazon Glacier for cheaper storage. Remember that the standard retrieval time for data in Glacier is 3-5 hours. The other options are incorrect because they are more expensive than the Amazon Glacier service.

For more information on Glacier please visit the below URL:
https://aws.amazon.com/glacier/faqs/

369
Q

Question 371

A company does not want to manage their databases. Which of the following services are fully managed databases provided by AWS?

A. AWS RDS

B. DynamoDB

C. Oracle RDS

D. Elastic Map Reduce

A

Answer: B

DynamoDB is a fully managed NoSQL offering provided by AWS, now available in most regions. An AWS RDS database is not fully managed, only partially managed: with RDS you still need to specify the server capacity, security groups, and so on. This is a common point of confusion, because many assume RDS is fully managed. Even though the question doesn’t ask about the type of database (NoSQL), of the listed options the fully managed databases are Aurora and DynamoDB, so the correct option in this question is DynamoDB.

The links provide the full details on the product

  1. http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html
  2. https://aws.amazon.com/products/databases/
370
Q

Question 372

Which of the following requires a custom CloudWatch metric to monitor?

A. Memory Utilization of an EC2 instance

B. CPU Utilization of an EC2 instance

C. Disk Reads activity of an EC2 instance

D. Networks packets out of an EC2 instance

A

Answer: A

Memory Utilization is a metric not offered directly by CloudWatch. So when you view the CloudWatch metrics for your EC2 instance, you can see CPU Utilization and Disk Read Operations metrics. You can also see Network statistics for Data transfer, but you will not be able to see Memory Utilization. This will be a custom CloudWatch metric.

For more information on CloudWatch, please refer the below URL:
https://aws.amazon.com/cloudwatch/faqs/
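On Linux, a common approach is to read memory figures from /proc/meminfo and publish the computed utilization as a custom metric with `put-metric-data`. A sketch of just the calculation (the field names follow the /proc/meminfo format; publishing to CloudWatch would require the SDK and credentials):

```python
def memory_utilization_percent(meminfo_text: str) -> float:
    """Parse /proc/meminfo-style text and compute percent memory used,
    the kind of value you would publish as a custom CloudWatch metric."""
    fields = {}
    for line in meminfo_text.splitlines():
        key, _, rest = line.partition(":")
        if rest:
            fields[key.strip()] = int(rest.split()[0])  # value in kB
    total = fields["MemTotal"]
    available = fields["MemAvailable"]
    return 100.0 * (total - available) / total
```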

371
Q

Question 373

Which of the following instance types are available as SSD backed storage? Choose 2 answers from the options below

A. General purpose T2

B. General purpose M3

C. Compute-optimized C4

D. Compute-optimized C3

A

Answer: B, D

Among the options, the General purpose M3 and Compute-optimized C3 instance types come with SSD-backed instance storage; T2 and C4 are EBS-only.

For details for all instance types, please visit the URL:
https://aws.amazon.com/ec2/instance-types/

372
Q

Question 374

There is a requirement to install Perl on a Linux instance when it is launched. Which feature allows you to accomplish this requirement?

A. User Data

B. EC2Config Service

C. IAM Roles

D. AWS Config

A

Answer: A

When you configure an instance during creation, you can add custom scripts to the User data section. In Step 3 of creating an instance, under the Advanced Details section, we can enter custom scripts in the User Data field; for example, a script that installs Perl when the EC2 instance is launched.

For more information on user metadata and user data , please visit the below URL:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html
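When calling the RunInstances API directly, user data must be base64-encoded (the console and most SDK helpers do this for you). A sketch, where the script contents are a hypothetical example for an Amazon Linux AMI:

```python
import base64

# Hypothetical user data: a shell script run at first boot that installs Perl
# (the `yum` package manager and package name assume an Amazon Linux AMI).
USER_DATA = """#!/bin/bash
yum install -y perl
"""


def encode_user_data(script: str) -> str:
    """Base64-encode a user data script as the EC2 API expects."""
    return base64.b64encode(script.encode("utf-8")).decode("ascii")
```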

373
Q

Question 375

An IAM user has been created in AWS. But the user is not able to perform any actions. What is the reason for this?

A. IAM users are created by default with partial permissions

B. IAM users are created by default with full permissions

C. IAM users are created by default with no permissions

A

Answer: C

By default, no permissions are given to a user when they are created. If you inspect a newly created user, you can see that no policies are attached to them.

For more information on IAM users, please visit the below URL:
https://aws.amazon.com/iam/details/manage-users/

374
Q

Question 376

What happens when an instance behind an ELB fails a health check?

A. The instance gets terminated automatically by the ELB.

B. The instance gets quarantined by the ELB for root cause analysis.

C. The instance is replaced automatically by the ELB.

D. The ELB stops sending traffic to the instance that failed its health check

A

Answer: D

To discover the availability of your EC2 instances, a load balancer periodically sends pings, attempts connections, or sends requests to test the EC2 instances. These tests are called health checks. The status of the instances that are healthy at the time of the health check is InService. The status of any instances that are unhealthy at the time of the health check is OutOfService. The load balancer performs health checks on all registered instances, whether the instance is in a healthy state or an unhealthy state. The load balancer routes requests only to the healthy instances. When the load balancer determines that an instance is unhealthy, it stops routing requests to that instance. The load balancer resumes routing requests to the instance when it has been restored to a healthy state. You can see the status of the instance in the Registered Instances section of the load balancer.

For more information on ELB health checks , please visit the below URL: http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-healthchecks.html

375
Q

Question 377

In S3, what is the feature that is available to automatically transfer or archive data to Glacier?

A. Use an EC2 instance and schedule a job to transfer the stale data from their S3 location to Amazon Glacier.

B. Use Life-Cycle Policies

C. Use AWS SQS

D. There is no option, the users will have to download the data and then transfer the data to AWS manually.

A

Answer: B

With Amazon S3 lifecycle policies you can create transition actions that define when objects move to another Amazon S3 storage class. For example, you may choose to transition objects to the STANDARD_IA (IA, for infrequent access) storage class 30 days after creation, or archive objects to the GLACIER storage class one year after creation. Follow the below steps to get this in place:

Step 1) Go to the Lifecycle section of the S3 bucket and click on Add Rule.
Step 2) Choose the objects the rule applies to.
Step 3) Choose the action to perform and then confirm the rule creation on the next screen.

For more information on Lifecycle management, click on the link:
http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html
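The same rule can be expressed programmatically. This is a hedged sketch (the bucket name and day counts are illustrative, and the boto3 call is commented out) of a lifecycle configuration that transitions objects to STANDARD_IA after 30 days and to GLACIER after a year:

```python
# Lifecycle configuration in the shape expected by the S3 API:
# transition to STANDARD_IA at 30 days, archive to GLACIER at 365 days.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "archive-old-objects",     # illustrative rule name
            "Filter": {"Prefix": ""},        # apply to the whole bucket
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 365, "StorageClass": "GLACIER"},
            ],
        }
    ]
}

# import boto3
# s3 = boto3.client("s3")
# s3.put_bucket_lifecycle_configuration(
#     Bucket="my-bucket",   # hypothetical bucket
#     LifecycleConfiguration=lifecycle_configuration)
```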

376
Q

Question 378

Someone has initiated the creation of a snapshot of an EBS volume. One of the applications still needs to use the same EBS volume. Which of the following is possible when it comes to using an EBS volume while the snapshot has been initiated but not completed?

A. Can be used while the snapshot is in progress.

B. Cannot be detached or attached to an EC2 instance until the snapshot completes

C. Can be used in read-only mode while the snapshot is in progress.

D. Cannot be used until the snapshot completes.

A

Answer: A

Snapshots occur asynchronously; the point-in-time snapshot is created immediately, but the status of the snapshot is pending until the snapshot is complete (when all of the modified blocks have been transferred to Amazon S3), which can take several hours for large initial snapshots or subsequent snapshots where many blocks have changed. While it is completing, an in-progress snapshot is not affected by ongoing reads and writes to the volume. You can easily create a snapshot from a volume while the instance is running and the volume is in use. You can do this from the EC2 dashboard.

For more information on EBS snapshots, please visit the link:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSSnapshots.html
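A hedged sketch of the snapshot call (the volume ID is a placeholder and the boto3 call is commented out): the API returns immediately with a "pending" state while modified blocks are copied to S3 in the background, which is why the volume stays usable throughout.

```python
# Parameters for an EBS snapshot request; the volume ID is illustrative.
snapshot_request = {
    "VolumeId": "vol-0123456789abcdef0",
    "Description": "Backup before maintenance",
}

# import boto3
# ec2 = boto3.client("ec2")
# response = ec2.create_snapshot(**snapshot_request)
# The call returns at once; the snapshot completes asynchronously:
# print(response["State"])  # typically "pending" right after the call
```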

377
Q

Question 379

There is a requirement to ensure that an EC2 instance can only be accessed from an IP address of 72.34.51.100. The users should be able to SSH into the instance. Which option will meet the customer requirement?

A. Security Group Inbound Rule: Protocol — TCP. Port Range — 22, Source 72.34.51.100/32

B. Security Group Inbound Rule: Protocol — UDP, Port Range — 22, Source 72.34.51.100/32

C. Network ACL Inbound Rule: Protocol — UDP, Port Range — 22, Source 72.34.51.100/32

D. Network ACL Inbound Rule: Protocol — TCP, Port Range — 22, Source 72.34.51.100/0

A

Answer: A

For SSH access, the protocol has to be TCP, so Options B and C are wrong. Only the IP of the client should be allowed, not every address on the internet as the /0 suffix in Option D implies, so that option is also wrong. A bastion host is a special-purpose computer on a network specifically designed and configured to withstand attacks. The computer generally hosts a single application, for example a proxy server, and all other services are removed or limited to reduce the threat to the computer. In AWS, a bastion host is kept in a public subnet. Users log on to the bastion host via SSH or RDP and then use that session to manage other hosts in the private subnets. This is a security practice adopted by many organizations to secure the assets in their private subnets.

For more information on security groups, please refer the below URL:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html
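The difference between /32 and /0 can be checked with the standard-library ipaddress module; this small sketch shows why the /32 rule admits exactly one source address while /0 admits all of IPv4:

```python
import ipaddress

# /32 matches exactly one host address: the client's IP.
single_host = ipaddress.ip_network("72.34.51.100/32")
print(single_host.num_addresses)  # 1

# /0 masks away every bit, i.e. it matches the entire IPv4 space.
everyone = ipaddress.ip_network("72.34.51.100/0", strict=False)
print(everyone)                   # 0.0.0.0/0
print(everyone.num_addresses)     # 4294967296

print(ipaddress.ip_address("72.34.51.100") in single_host)  # True
print(ipaddress.ip_address("72.34.51.101") in single_host)  # False
```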

378
Q

Question 380

Which of the following statements are true about Amazon Reduced Redundancy Storage (RRS) when it comes to availability?

A. RRS has the ability to provide eleven nines availability.

B. RRS has the ability to provide 99.99% availability.

C. RRS has the ability to provide 99% availability.

D. RRS has the ability to provide 100% durability.

A

Answer: B

As stated on the AWS site, Reduced Redundancy Storage is designed to provide 99.99% availability of objects over a given year (and 99.99% durability, not the eleven nines of standard S3).

For more information on RRS please visit the URL:
https://aws.amazon.com/s3/reduced-redundancy/

379
Q

Question 381

Which AWS service allows one to work with an existing Chef server configuration?

A. AWS OpsWorks

B. AWS Elastic Beanstalk

C. AWS CloudFormation

D. AWS SNS

A

Answer: A

AWS OpsWorks is a configuration management service that helps you configure and operate applications of all shapes and sizes using Chef. You can define the application’s architecture and the specification of each component including package installation, software configuration and resources such as storage. Start from templates for common technologies like application servers and databases or build your own to perform any task that can be scripted. AWS OpsWorks includes automation to scale your application based on time or load and dynamic configuration to orchestrate changes as your environment scales.

For more information on OpsWorks, please visit the links:

https://aws.amazon.com/opsworks/
https://aws.amazon.com/opsworks/chefautomate/

380
Q

Question 382

Which of the below AWS services can be used to deploy infrastructure using stacks and templates?

A. Amazon Simple Workflow Service

B. AWS Elastic Beanstalk

C. AWS CloudFormation

D. AWS OpsWorks

A

Answer: C

AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion. You can use AWS CloudFormation’s sample templates or create your own templates to describe the AWS resources, and any associated dependencies or runtime parameters, required to run your application. You don’t need to figure out the order for provisioning AWS services or the subtleties of making those dependencies work. CloudFormation takes care of this for you. After the AWS resources are deployed, you can modify and update them in a controlled and predictable way, in effect applying version control to your AWS infrastructure the same way you do with your software. You can also visualize templates as diagrams and edit them using a drag-and-drop interface with the AWS CloudFormation Designer.

For more information on Cloudformation, please visit the link:
https://aws.amazon.com/cloudformation/

381
Q

Question 383

Your company currently uses templates to deploy servers in their on-premise infrastructure. They want to have the same template configurations applied when deploying EC2 Instances. Which of the following can be done to ensure that EC2 Instances are deployed as per the template standards defined by the organization?

A. Use the EC2 metadata feature to deploy those features at runtime.

B. Use the AWSConfig service to deploy updates to the EC2 Instances before they are launched.

C. Create pre-built AMI’s with the desired configuration as the organization templates.

D. It is not possible to define templates for EC2 Instances. You need to deploy the changes manually

A

Answer: C

The AWS Documentation mentions the following: An Amazon Machine Image (AMI) provides the information required to launch an instance, which is a virtual server in the cloud. You specify an AMI when you launch an instance, and you can launch as many instances from the AMI as you need. You can also launch instances from as many different AMIs as you need. An AMI includes the following:

- A template for the root volume for the instance (for example, an operating system, an application server, and applications)
- Launch permissions that control which AWS accounts can use the AMI to launch instances
- A block device mapping that specifies the volumes to attach to the instance when it’s launched

For more information on AMI’s, please visit the below URL:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html

382
Q

Question 384

What can be used for EC2 instances in a private subnet to connect to the internet? Choose an answer from the options below.

A. WAF

B. Direct Connect

C. NAT Gateway

D. VPN

A

Answer: C

You can use a Network Address Translation (NAT) gateway to enable instances in a private subnet to connect to the Internet or other AWS services, while preventing the Internet from initiating a connection with those instances.

For more information on NAT Gateways, please visit the URL:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-nat-gateway.html

383
Q

Question 385

Which AWS service allows businesses and web application developers an easy and cost effective way to distribute content with low latency and high data transfer speeds?

A. Amazon SES

B. Amazon Cloudtrail

C. Amazon CloudFront

D. Amazon S3

A

Answer: C

Amazon CloudFront is a web service that gives businesses and web application developers an easy and cost effective way to distribute content with low latency and high data transfer speeds. Like other AWS services, Amazon CloudFront is a self-service, pay-per-use offering, requiring no long term commitments or minimum fees. With CloudFront, your files are delivered to end-users using a global network of edge locations.

For more information on CloudFront, please visit the link:
https://aws.amazon.com/cloudfront/

384
Q

Question 386

You try to connect to a newly created Amazon EC2 instance via SSH using PuTTY and get one of the following error messages

Error: Server refused our key (or)

Error: No supported authentication methods available

What steps should you take to identify the source of the behavior?

Choose 2 answers

A. Verify that your private key (.pem) file has been correctly converted to the format recognized by PuTTY (.ppk).

B. Verify that your IAM user policy has permission to launch Amazon EC2 instances.

C. Verify that you are connecting with the appropriate user name for your AMI.

D. Verify that the Amazon EC2 Instance was launched with the proper IAM role.

A

Answer: A, C

As the AWS documentation explains, these errors typically mean the private key is not in the .ppk format PuTTY expects, or that you are connecting with the wrong user name for the AMI (for example, ec2-user for Amazon Linux).

For more information on the connection errors to EC2 instances, please visit the link: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/TroubleshootingInstancesConnecting.html

385
Q

Question 387

Which AWS service is commonly used as the best solution to store session data for web-based applications?

A. MySQL Installed on two Amazon EC2 Instances in a single Availability Zone

B. Amazon RDS for MySQL with Multi-AZ

C. Amazon ElastiCache

D. Amazon DynamoDB

A

Answer: C

Amazon ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory data store or cache in the cloud. The service improves the performance of web applications by allowing you to retrieve information from fast, managed, in-memory data stores, instead of relying entirely on slower disk-based databases. ElastiCache is a better option when compared to DynamoDB; the main consideration is performance. The AWS docs provide the following details: In order to address scalability and to provide shared data storage for sessions that is accessible from any individual web server, you can abstract the HTTP sessions from the web servers themselves. A common solution for this is to leverage an in-memory key/value store such as Redis or Memcached. While key/value data stores are known to be extremely fast and provide sub-millisecond latency, the added network latency and added cost are the drawbacks. An added benefit of leveraging key/value stores is that they can also be utilized to cache any data, not just HTTP sessions, which can help boost the overall performance of your applications.

For more information on Elastic cache, please visit the link:
https://aws.amazon.com/elasticache/
https://aws.amazon.com/caching/session-management/
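The key/value session-store pattern described above can be sketched as follows; a plain in-memory dict stands in for an ElastiCache Redis or Memcached cluster so the idea can be shown without a running cluster, and all names and the TTL value are illustrative:

```python
import time

class SessionStore:
    """Key/value session store with a TTL, mimicking cache semantics."""

    def __init__(self, ttl_seconds=1800):
        self.ttl = ttl_seconds
        self._data = {}  # session_id -> (expiry_timestamp, session_data)

    def put(self, session_id, session_data):
        self._data[session_id] = (time.time() + self.ttl, session_data)

    def get(self, session_id):
        entry = self._data.get(session_id)
        if entry is None:
            return None
        expiry, session_data = entry
        if time.time() > expiry:  # expired: evict, as a real cache would
            del self._data[session_id]
            return None
        return session_data

store = SessionStore(ttl_seconds=1800)
store.put("sess-42", {"user": "alice", "cart": ["book"]})
print(store.get("sess-42"))  # {'user': 'alice', 'cart': ['book']}
```

With a real ElastiCache Redis endpoint, the put/get pair would map onto the cache client's set-with-expiry and get operations; the web tier stays stateless because any server can look up any session by ID.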

386
Q

Question 388

Your application is receiving very high traffic, so you have enabled Auto Scaling across multiple Availability Zones to meet its needs, but you observe that one of the Availability Zones is not receiving any traffic. What could be wrong here?

A. Autoscaling only works for single availability zone

B. Autoscaling can be enabled for multi AZ only in north Virginia region

C. Availability zone is not added to Elastic load balancer

D. Instances need to be manually added to the availability zone

A

Answer: C

When you add an Availability Zone to your load balancer, Elastic Load Balancing creates a load balancer node in the Availability Zone. Load balancer nodes accept traffic from clients and forward requests to the healthy registered instances in one or more Availability Zones.

For more information on adding AZ’s to ELB, please refer to the below URL: http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/enable-disable-az.html

387
Q

Question 389

Your company currently has an application hosted in their on-premise infrastructure. There is a mandate from management to move the application to the AWS Cloud. As an architect, you want to be cautious about the deployment of the application onto AWS. You have suggested diverting a percentage of user traffic to the new application in AWS during the launch. Once it is confirmed that the cloud-based application works with no issues, a full diversion to the new site can be implemented. Which of the following mechanisms can be used to implement this scenario?

A. Use the Classic Elastic Load balancer to divert and proportion the traffic between the on-premise and AWS hosted application.

B. Use the Application Elastic Load balancer to divert and proportion the traffic between the on-premise and AWS hosted application.

C. Use Route53 with failover routing policy to divert and proportion the traffic between the on-premise and AWS hosted application.

D. Use Route53 with Weighted routing policy to divert and proportion the traffic between the on-premise and AWS hosted application.

A

Answer: D

The Weighted Routing policy is the best option here. You can ensure that the CNAME record for your domain initially routes a lower proportion of traffic to the application hosted in AWS; later on, the percentage can be increased based on the application's performance. The AWS documentation mentions the following on the Route 53 Weighted Routing policy: Weighted routing lets you associate multiple resources with a single domain name (example.com) or subdomain name (acme.example.com) and choose how much traffic is routed to each resource. This can be useful for a variety of purposes, including load balancing and testing new versions of software.

For more information on Weighted Routing policy, please refer to the below URL:
http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html
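The proportioning behaves like a weighted random draw: each record receives weight / sum-of-weights of the requests. The sketch below simulates that split with illustrative endpoint names and weights (90% on-premise, 10% AWS during the trial phase):

```python
import random

# Illustrative Route 53 record weights: each endpoint's share of traffic
# is its weight divided by the sum of all weights.
weights = {"on-premise.example.com": 90, "aws.example.com": 10}

def pick_endpoint(rng):
    total = sum(weights.values())
    roll = rng.uniform(0, total)
    cumulative = 0
    for endpoint, weight in weights.items():
        cumulative += weight
        if roll <= cumulative:
            return endpoint
    return endpoint  # unreachable fallback for float edge cases

rng = random.Random(0)  # fixed seed so the simulation is repeatable
picks = [pick_endpoint(rng) for _ in range(10000)]
aws_share = picks.count("aws.example.com") / len(picks)
print(round(aws_share, 2))  # close to 0.10 over many samples
```

Raising the weight of the AWS record (and lowering the on-premise one) shifts the proportion gradually, which is exactly the phased cut-over the question describes.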

388
Q

Question 390

What step from the below options can be carried out to ensure that after an EBS volume is deleted, a similar volume with the same data can be created at a later stage?

A. Create a copy of the EBS volume (not a snapshot)

B. Store a snapshot of the volume

C. Download the content to an EC2 instance

D. Back up the data in to a physical disk

A

Answer: B

Snapshots occur asynchronously; the point-in-time snapshot is created immediately, but the status of the snapshot is pending until the snapshot is complete (when all of the modified blocks have been transferred to Amazon S3), which can take several hours for large initial snapshots or subsequent snapshots where many blocks have changed. While it is completing, an in-progress snapshot is not affected by ongoing reads and writes to the volume. You can easily create a snapshot from a volume while the instance is running and the volume is in use. You can do this from the EC2 dashboard.

For more information on EBS snapshots, please visit the link:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSSnapshots.html

389
Q

Question 391

Which of the following AWS services can be used to build an application based on a serverless architecture? Choose 3 answers from the options given below.

A. AWS API Gateway

B. AWS Lambda

C. AWS DynamoDB

D. AWS EC2

A

Answer: A, B, C

As the AWS documentation explains, API Gateway, Lambda, and DynamoDB are fully managed and form part of the AWS serverless platform, whereas EC2 instances are servers you must provision and manage yourself.

For more information on serverless platform, please refer to the below URL:
https://aws.amazon.com/serverless/

390
Q

Question 392

In Amazon CloudWatch, which metric should you check to ensure that your DB instance has enough free memory?

A. FreeStorage

B. FreeableMemory

C. FreeStorageVolume

D. FreeDBStorageSpace

A

Answer: B

When you go to the Monitoring tab for your Amazon RDS instance, you can see the CloudWatch metrics, including FreeableMemory, which reports the amount of available RAM on the DB instance.

For more information on Amazon Cloudwatch, please visit the below URL:
https://aws.amazon.com/cloudwatch/
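The same metric can be queried via the CloudWatch API. This is a hedged sketch (the DB instance identifier is a placeholder, and the boto3 call is commented out) showing the parameters for a FreeableMemory query over the last hour:

```python
from datetime import datetime, timedelta

# Parameters for a CloudWatch query against the RDS FreeableMemory metric.
# The DB instance identifier "my-db" is illustrative.
metric_query = {
    "Namespace": "AWS/RDS",
    "MetricName": "FreeableMemory",
    "Dimensions": [{"Name": "DBInstanceIdentifier", "Value": "my-db"}],
    "StartTime": datetime.utcnow() - timedelta(hours=1),
    "EndTime": datetime.utcnow(),
    "Period": 300,                 # 5-minute datapoints
    "Statistics": ["Average"],
}

# import boto3
# cloudwatch = boto3.client("cloudwatch")
# response = cloudwatch.get_metric_statistics(**metric_query)
# for point in response["Datapoints"]:
#     print(point["Timestamp"], point["Average"])  # bytes of free memory
```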

391
Q

Question 393

You have an Autoscaling Group which is launching a set of t2.small instances. You now need to replace those instances with a larger instance type. How would you go about making this change in an ideal manner?

A. Change the Instance type in the current launch configuration to the new instance type.

B. Create another Autoscaling Group and attach the new instance type.

C. Create a new launch configuration with the new instance type and update your Autoscaling Group.

D. Change the Instance type of the Underlying EC2 instance directly.

A

Answer: C

The AWS Documentation mentions A launch configuration is a template that an Auto Scaling group uses to launch EC2 instances. When you create a launch configuration, you specify information for the instances such as the ID of the Amazon Machine Image (AMI), the instance type, a key pair, one or more security groups, and a block device mapping. If you’ve launched an EC2 instance before, you specified the same information in order to launch the instance. When you create an Auto Scaling group, you must specify a launch configuration. You can specify your launch configuration with multiple Auto Scaling groups. However, you can only specify one launch configuration for an Auto Scaling group at a time, and you can’t modify a launch configuration after you’ve created it. Therefore, if you want to change the launch configuration for your Auto Scaling group, you must create a launch configuration and then update your Auto Scaling group with the new launch configuration.

For more information on launch configurations please see the below link:
http://docs.aws.amazon.com/autoscaling/latest/userguide/LaunchConfiguration.html
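Because a launch configuration cannot be modified, the change is a create-then-update sequence. The sketch below is illustrative (the group name, launch configuration name, AMI ID, and instance type are all placeholders, and the boto3 calls are commented out):

```python
# A launch configuration is immutable, so define a new one with the
# larger instance type, then point the Auto Scaling group at it.
new_launch_config = {
    "LaunchConfigurationName": "web-lc-m4-large",  # hypothetical name
    "ImageId": "ami-12345678",                     # placeholder AMI ID
    "InstanceType": "m4.large",                    # the larger type
}

# import boto3
# autoscaling = boto3.client("autoscaling")
# autoscaling.create_launch_configuration(**new_launch_config)
# autoscaling.update_auto_scaling_group(
#     AutoScalingGroupName="web-asg",              # hypothetical group
#     LaunchConfigurationName="web-lc-m4-large",
# )
```

New instances launched by the group after the update use the new type; existing t2.small instances are replaced as they cycle out.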

392
Q

Question 394

Which of the following events would cause Amazon RDS to initiate a failover to the standby replica? Choose 3 answers from the options given below

A. Loss of availability in primary Availability Zone

B. Loss of network connectivity to primary

C. Storage failure on secondary

D. Compute unit failure on primary

A

Answer: A, B, D

Amazon RDS detects and automatically recovers from the most common failure scenarios for Multi-AZ deployments so that you can resume database operations as quickly as possible without administrative intervention. Amazon RDS automatically performs a failover in the event of any of the following:

- Loss of availability in the primary Availability Zone
- Loss of network connectivity to the primary
- Compute unit failure on the primary
- Storage failure on the primary

Note: When operations such as DB Instance scaling or system upgrades like OS patching are initiated for Multi-AZ deployments, for enhanced availability, they are applied first on the standby prior to an automatic failover. As a result, your availability impact is limited only to the time required for automatic failover to complete. Note that Amazon RDS Multi-AZ deployments do not fail over automatically in response to database operations such as long-running queries, deadlocks or database corruption errors.

For more information on read replicas, please visit the below URL:
https://aws.amazon.com/rds/details/read-replicas/

393
Q

Question 395



Which of the following tools is available to send log data from EC2 Instances?

A. CloudWatch Logs Agent

B. CloudWatch Agent

C. Logs Stream

A

Answer: A

The AWS Documentation mentions the following: The CloudWatch Logs agent provides an automated way to send log data to CloudWatch Logs from Amazon EC2 instances. The agent is comprised of the following components:

- A plug-in to the AWS CLI that pushes log data to CloudWatch Logs.
- A script (daemon) that initiates the process to push data to CloudWatch Logs.
- A cron job that ensures that the daemon is always running.

For more information on Cloudwatch logs Agent, please see the below link:
http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AgentReference.html

394
Q

Question 396



You have a business-critical two-tier web app currently deployed in 2 Availability Zones in a single region, using Elastic Load Balancing (ELB) and Auto Scaling. The app depends on synchronous replication at the database layer. The application needs to remain fully available even if one application AZ goes offline and Auto Scaling cannot launch new instances in the remaining AZ. How can the current architecture be enhanced to meet this requirement?

A. Deploy in 2 regions using Weighted Round Robin with AutoScaling minimums set of 50% peak load per Region.

B. Deploy in 3 AZ with Autoscaling minimum set to handle 33 percent peak load per zone.

C. Deploy in 3 AZ with Autoscaling minimum set to handle 50 percent peak load per zone.

D. Deploy in 2 regions using Weighted Round Robin with AutoScaling minimums set of 100% peak load per Region.

A

Answer: C

Since the requirement is that the application should never go down even if an AZ is not available, we need to maintain 100% availability. Options A and D are incorrect because cross-region deployment is not possible for ELB; an ELB can manage traffic within a region, not between regions. Option B is incorrect because if one AZ goes down, we would be operating at only 66% of capacity and not the required 100%.

For more information on Autoscaling please visit the below URL:
https://aws.amazon.com/autoscaling/
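The capacity arithmetic behind Option C can be checked with a short calculation (an illustrative sketch, not AWS tooling): if each AZ is provisioned for a given fraction of peak load, losing one AZ leaves the sum of the remaining fractions.

```python
# Remaining capacity after one AZ fails, as a fraction of peak load,
# when every AZ is provisioned for fraction_per_az of peak load.
def surviving_capacity(num_azs, fraction_per_az):
    return (num_azs - 1) * fraction_per_az

# Option C: 3 AZs at 50% each -> losing one still leaves 100% of peak.
print(surviving_capacity(3, 0.50))  # 1.0
# Option B: 3 AZs at 33% each -> losing one leaves only about 66%.
print(surviving_capacity(3, 0.33))
```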

395
Q

Question 281

Your company has resources set up on the AWS Cloud. Your company is now going through a set of scheduled audits by an external auditing firm. Which of the following services can be utilized to help ensure the right information is present for auditing purposes?

A. AWS CloudTrail

B. AWS VPC

C. AWS EC2

D. AWS Cloudwatch

A

Answer: A

The AWS Documentation mentions the following:

AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. This event history simplifies security analysis, resource change tracking, and troubleshooting.

For more information on CloudTrail, please refer to the below URL:

https://aws.amazon.com/cloudtrail

396
Q

Question 282

Which of the following will incur a cost when working with AWS resources? Choose 2 answers from the options given below.

A. A running EC2 Instance

B. A stopped EC2 Instance

C. EBS Volumes attached to stopped EC2 Instances

D. Using an Amazon VPC

A

Answer: A, C

The AWS Documentation clearly explains the billing of EC2 instances: Billing commences when Amazon EC2 initiates the boot sequence of an AMI instance. Billing ends when the instance terminates, which could occur through a web services command, by running “shutdown -h”, or through instance failure. When you stop an instance, we shut it down and don’t charge hourly usage for a stopped instance, or data transfer fees, but we do charge for the storage of any Amazon EBS volumes.

For more information, please visit the below URL: https://aws.amazon.com/ec2/faqs/

The AWS Documentation clearly states the cost with regard to VPC: there are no additional charges for creating and using the VPC itself.

For more information, please visit the below URL: https://aws.amazon.com/vpc/faqs/