Test - 2 Flashcards
Question 397
A VPC public subnet is one that (Choose one of the correct options below):
A. Has at least one route in its associated routing table that uses an Internet gateway
B. Includes a route in its associated routing table via a Network Address Translation (NAT) instance.
C. Has a Network Access Control List (NACL) permitting outbound traffic to 0.0.0.0/0
D. Has the public Subnet option selected in its configuration
Answer: A
A public subnet has a route in its route table that sends internet-bound traffic to an internet gateway.
For more information on public subnets please visit the below URL:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario1.html
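As an illustration, a minimal boto3 sketch of adding such a route is shown below; the route table and internet gateway IDs are hypothetical placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# A route that sends all internet-bound traffic to an internet gateway
# is what makes the associated subnet a public subnet.
ec2.create_route(
    RouteTableId="rtb-0example",       # hypothetical route table ID
    DestinationCidrBlock="0.0.0.0/0",  # all traffic not matched by more specific routes
    GatewayId="igw-0example",          # hypothetical internet gateway ID
)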
Question 398
What action is required to establish a VPN connection between an on-premises data center and a VPC virtual private gateway?
A. Assign a static internet-routable IP Address to an Amazon VPC customer gateway
B. Modify the main route table to allow traffic to a network address translation instance.
C. Use a dedicated network address translation instance in the public subnet
D. Establish a dedicated networking connection using Direct Connect
Answer: A
When defining a VPN connection between the on-premises network and the VPC, you need to have a customer gateway defined. Since this gateway is accessed over the internet, it needs to have a static internet-routable IP address.
For more information on VPC VPN connections please visit the below URLs:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Introduction.html
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpn-connections.html
Question 399
A startup company hired you to help them build a mobile application that will ultimately store billions of images and videos on S3. The company is lean on funding and wants to minimize operational costs; however, they have an aggressive marketing plan and expect to double their current installation base every six months. Due to the nature of their business, they are expecting sudden, large increases in traffic to and from S3 and need to ensure that it can handle the performance needs of their application. What other information must you gather from this customer in order to determine whether S3 is the right option?
A. You must know how many customers the company has today because this is critical in understanding what their customer base will be in 2 years.
B. You must find out the total number of requests per second at peak usage.
C. You must know the size of the individual objects being written to S3, in order to properly design the key namespace.
D. In order to build the key namespace correctly you must understand the total amount of storage needs for each S3 bucket.
Answer: B
When you define an S3 bucket, billing is based on requests as well as storage.
If you go to the
URL: http://calculator.s3.amazonaws.com/index.html
which is the calculator for S3 costs, you can see that the cost is related to the total number of requests in addition to the storage. In order to estimate the cost of S3 storage, you need to know the number of requests at peak usage.
Question 400
You are configuring a solution which uses EC2 Instances and an Elastic Load Balancer. Which of the following protocols can be used to ensure that traffic is secure from the client machine to the Elastic Load Balancer? Choose 2 answers from the options given below
A. HTTP
B. HTTPS
C. TCP
D. SSL
Answer: B, D
The HTTPS protocol uses the SSL protocol to establish secure connections over the HTTP layer. You can also use the SSL protocol to establish secure connections over the TCP layer.
For more information on ELB Listener configuration please see the below link:
http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-listener-config.html
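For illustration, here is a minimal boto3 sketch that adds both an HTTPS listener and an SSL (secure TCP) listener to a Classic Load Balancer; the load balancer name and certificate ARN are hypothetical.

import boto3

elb = boto3.client("elb", region_name="us-east-1")

elb.create_load_balancer_listeners(
    LoadBalancerName="my-classic-elb",  # hypothetical ELB name
    Listeners=[
        {
            # HTTPS listener: SSL/TLS termination at the ELB over the HTTP layer
            "Protocol": "HTTPS",
            "LoadBalancerPort": 443,
            "InstanceProtocol": "HTTP",
            "InstancePort": 80,
            "SSLCertificateId": "arn:aws:iam::123456789012:server-certificate/my-cert",
        },
        {
            # SSL listener: secure connection over the TCP layer
            "Protocol": "SSL",
            "LoadBalancerPort": 465,
            "InstanceProtocol": "TCP",
            "InstancePort": 465,
            "SSLCertificateId": "arn:aws:iam::123456789012:server-certificate/my-cert",
        },
    ],
)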
Question 401
After creating a new AWS account, you use the API to request 40 on-demand EC2 instances in a single AZ. After 20 successful requests, subsequent requests failed. What could be a reason for this issue, and how can you resolve it?
A. You encountered a soft limit of 20 instances per region. Submit the limit increase form and retry the failed requests once approved.
B. AWS allows you to provision no more than 20 instances per AZ. Select a different AZ and retry the failed request.
C. You need to use VPC in order to provision more than 20 instances in a single AZ. Simply terminate the resources already provisioned and re-launch them all in a VPC.
D. You encountered an API throttling situation and should retry the failed requests using an exponential backoff algorithm.
Answer: A
There is a soft limit of 20 On-Demand instances per region. Since this limit applies per region rather than per Availability Zone,
option B is wrong because the requests will fail even if you try another Availability Zone.
For more information on all service limits please visit the below URL:
https://aws.amazon.com/ec2/faqs/#How_many_instances_can_I_run_in_Amazon_EC2
http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html
Question 402
You have been tasked with creating a VPC network topology for your company. The VPC network must support both internet-facing applications and internally-facing applications accessed only over VPN. Both internet-facing and internally-facing applications must be able to leverage at least 3 AZs for high availability. At a minimum, how many subnets must you create within your VPC to accommodate these requirements?
A. 2
B. 3
C. 4
D. 6
Answer: D
Internet-facing as well as intranet (private) applications must be able to make use of at least three Availability Zones for high availability. So 3 subnets for internet-facing applications and 3 subnets for private applications makes 6 subnets in total.
For more information on VPC and subnets please visit the below URL:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html
Question 403
You receive a Linux Spot Instance at a bid of $0.05/hr. After 30 minutes, the Spot prices increases to $0.06/hr and your Spot Instance is terminated by AWS. What was the total EC2 compute cost of running your Spot Instances?
A. $0.025
B. $0.03
C. $0.05
D. $0.06
Answer: A
From 2nd October 2017, per-second billing came into effect for some EC2 instances and EBS. AWS per-second billing applies to Linux On-Demand, Reserved, and Spot EC2 instances. However, per-second billing is not applicable to Microsoft Windows instances or to all Linux distributions, so some Linux AMIs may still have an hourly charge.
https://aws.amazon.com/blogs/aws/new-per-second-billing-for-ec2-instances-and-ebs-volumes/
With per-second billing in effect, if AWS stops your instance, you will be billed for exactly what you have used. For example, if AWS terminates your Spot Instance after half an hour of use, you only pay for the 30 minutes instead of a full hour: 30 minutes at $0.05/hr comes to $0.025.
For more information on spot instance pricing please visit the below URL:
https://aws.amazon.com/ec2/spot/pricing/
Question 404
Which of the following is a durable key-value store?
A. Amazon Simple Storage Service
B. Amazon Simple Queue Service
C. Amazon Simple Workflow Service
D. Amazon Simple Notification Service
Answer: A
This is clearly given in the AWS documentation.
For more information on S3 please visit the below URLs:
http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingObjects.html
https://aws.amazon.com/s3/details
Question 405
In reviewing the Auto-Scaling events for your application you notice that your application is scaling up and down multiple times in the same hour. What design choice could you make to optimize for costs while preserving elasticity? Select 2 options.
A. Modify the Auto Scaling policy to use scheduled scaling actions
B. Modify the Auto Scaling Group cool down timers
C. Modify the Amazon CloudWatch alarm period that triggers your Auto Scaling scale-down policy.
D. Modify the Auto Scaling group termination policy to terminate the newest instance first.
Answer: B, C
The Auto Scaling cooldown period is a configurable setting for your Auto Scaling group that helps to ensure that Auto Scaling doesn’t launch or terminate additional instances before the previous scaling activity takes effect. After the Auto Scaling group dynamically scales using a simple scaling policy, Auto Scaling waits for the cooldown period to complete before resuming scaling activities. When you manually scale your Auto Scaling group, the default is not to wait for the cooldown period, but you can override the default and honor the cooldown period. Note that if an instance becomes unhealthy, Auto Scaling does not wait for the cooldown period to complete before replacing the unhealthy instance.
For more information on Autoscale cool down timers please visit the URL:
http://docs.aws.amazon.com/autoscaling/latest/userguide/Cooldown.html
You can also modify the CloudWatch triggers to ensure the thresholds are appropriate for the scale down policy.
For more information on Autoscaling user guide please visit the URL:
http://docs.aws.amazon.com/autoscaling/latest/userguide/as-scale-based-on-demand.html
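As a minimal sketch of both recommended changes, the following boto3 calls lengthen the group's default cooldown and widen the scale-down alarm period; the group name, alarm name, and numeric values are hypothetical.

import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Option B: lengthen the cooldown so a scaling activity settles before the next one starts.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="my-asg",  # hypothetical Auto Scaling group
    DefaultCooldown=600,            # seconds
)

# Option C: evaluate the scale-down metric over a longer period to avoid flapping.
cloudwatch.put_metric_alarm(
    AlarmName="scale-down-cpu-low",  # hypothetical alarm
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Statistic="Average",
    Period=600,                      # widened period, in seconds
    EvaluationPeriods=3,
    Threshold=20.0,
    ComparisonOperator="LessThanThreshold",
)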
Question 406
Which route must be added to your routing table in order to allow connections to the internet from your subnet?
A. Destination:0.0.0.0/0–>Target:your internet gateway
B. Destination:192.168.1.257/0–>Target:your internet gateway
C. Destination:0.0.0.0/33–>Target:your virtual private gateway
D. Destination:0.0.0.0/0–> Target:0.0.0.0/24
Answer: A
The question indicates a public subnet. The public subnet has a route table that uses the internet gateway.
For more information on public subnets please visit the below URL:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario1.html
Question 407
You are deploying an application on Amazon EC2 that must call AWS APIs. What method should you use to securely pass credentials to the application?
A. Embed the API credentials into your JAR files.
B. Use the AWS Identity and Access Management (IAM) roles for EC2 instances
C. Store API credentials as an object in S3.
D. Pass API credentials to the instance using instance userdata.
Answer: B
An IAM role is similar to a user, in that it is an AWS identity with permission policies that determine what the identity can and cannot do in AWS. However, instead of being uniquely associated with one person, a role is intended to be assumable by anyone who needs it. Also, a role does not have any credentials (password or access keys) associated with it. Instead, if a user is assigned to a role, access keys are created dynamically and provided to the user.
For more information on IAM role please visit the below URL:
http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html
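A brief sketch of why this is the secure option: with an IAM role attached to the instance, application code needs no embedded keys at all, because an SDK such as boto3 picks up the role's temporary credentials from the instance metadata automatically.

import boto3

# No access keys appear in code or configuration files. When run on an EC2
# instance launched with an IAM role (instance profile), the SDK fetches and
# rotates temporary credentials from the instance metadata service on its own.
s3 = boto3.client("s3", region_name="us-east-1")
print([bucket["Name"] for bucket in s3.list_buckets()["Buckets"]])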
Question 408
What are some of the metrics that are monitored by AWS Lambda? Choose 3 answers from the options given below.
A. Invocations
B. Duration
C. Errors
D. Database Changes
Answer: A, B, C
AWS Lambda automatically monitors functions on your behalf, reporting metrics through Amazon CloudWatch. These metrics include Invocations, Duration, and Errors.
For more information on Lambda metrics please visit the below URL:
https://docs.aws.amazon.com/lambda/latest/dg/monitoring-functions-metrics.html
http://docs.aws.amazon.com/lambda/latest/dg/monitoring-functions-access-metrics.html
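As an illustration, a minimal boto3 sketch that reads one of these metrics back from CloudWatch; the function name is a hypothetical placeholder.

import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/Lambda",
    MetricName="Invocations",  # "Duration" and "Errors" work the same way
    Dimensions=[{"Name": "FunctionName", "Value": "my-function"}],  # hypothetical
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Sum"],
)
print(stats["Datapoints"])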
Question 409
There is a new facility from AWS which allows for fast, easy, and secure transfers of files over long distances between your client and your Amazon S3 bucket. What is this service called?
A. File Transfer
B. HTTP Transfer
C. S3 Transfer Acceleration
D. Kinesis Acceleration
Answer: C
To know more about S3 transfer acceleration, please visit the below URL:
http://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration.html
Question 410
What are the languages currently supported by AWS Lambda? Choose 3 answers from the options given below.
A. Node.js
B. Angular.js
C. Java
D. C#
Answer: A, C, D
AWS Lambda runs your code on a high-availability compute infrastructure and performs all of the administration of the compute resources, including server and operating system maintenance, capacity provisioning and automatic scaling, code monitoring and logging. All you need to do is supply your code in one of the languages that AWS Lambda supports (currently Node.js, Java, C# and Python).
For more information on Lambda please visit the below URL:
http://docs.aws.amazon.com/lambda/latest/dg/welcome.html
Question 411
Your company has an application hosted in AWS which makes use of DynamoDB. There is a requirement from the IT security department to ensure that all source IP addresses which make calls to the DynamoDB tables are recorded. Which of the following services can be used to ensure this requirement is fulfilled?
A. AWS Code Commit
B. AWS Code Pipeline
C. AWS CloudTrail
D. AWS Cloudwatch
Answer: C
The AWS Documentation mentions the following: DynamoDB is integrated with CloudTrail, a service that captures low-level API requests made by or on behalf of DynamoDB in your AWS account and delivers the log files to an Amazon S3 bucket that you specify. CloudTrail captures calls made from the DynamoDB console or from the DynamoDB low-level API. Using the information collected by CloudTrail, you can determine what request was made to DynamoDB, the source IP address from which the request was made, who made the request, when it was made, and so on.
For more information on DynamoDB and Cloudtrail, please refer to the below link:
http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/logging-using-cloudtrail.html
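For illustration, a minimal boto3 sketch that looks up recent DynamoDB API events recorded by CloudTrail and prints the source IP address of each call, assuming CloudTrail is already enabled in the account.

import boto3
import json

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

events = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventSource", "AttributeValue": "dynamodb.amazonaws.com"}
    ]
)
for event in events["Events"]:
    detail = json.loads(event["CloudTrailEvent"])  # full event record as JSON
    print(detail["eventName"], detail.get("sourceIPAddress"))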
Question 412
Which of the following statements is false about Amazon Glacier? Choose one answer from the options given below.
A. It supports archive operations of Upload, Download and Delete
B. The archives are mutable
C. Uploading an archive is a synchronous operation
D. Archives can be as large as 40 TB
Answer: B
This is clearly given in the AWS documentation. A single archive can be as large as 40 terabytes. You can store an unlimited number of archives and an unlimited amount of data in Amazon Glacier. Each archive is assigned a unique archive ID at the time of creation, and the content of the archive is immutable, meaning that after an archive is created it cannot be updated.
For more information on AWS Glacier please visit the below URL:
https://aws.amazon.com/glacier/details/
Question 413
Your company currently has a web application hosted on a single EC2 Instance. The load on the application has increased over time and now the users are complaining of slow response times. Which of the following implementations can help alleviate this issue?
A. Attach an additional EBS Volume to the EC2 Instance and direct the application to make the reads from this new volume.
B. Attach an additional network interface with an Elastic IP so that requests can be made onto multiple IP’s.
C. Launch additional EC2 Instances in a web server farm type configuration and place them behind an Elastic Load Balancer.
D. Launch additional EC2 Instances in a web server farm type configuration and place them behind Route 53.
Answer: C
The AWS documentation mentions the following about the Elastic Load Balancer, which can be used to help in this issue: A load balancer distributes incoming application traffic across multiple EC2 instances in multiple Availability Zones. This increases the fault tolerance of your applications. Elastic Load Balancing detects unhealthy instances and routes traffic only to healthy instances. Your load balancer serves as a single point of contact for clients. This increases the availability of your application. You can add and remove instances from your load balancer as your needs change, without disrupting the overall flow of requests to your application. Elastic Load Balancing scales your load balancer as traffic to your application changes over time. Elastic Load Balancing can scale to the vast majority of workloads automatically.
For more information on the Elastic Load Balancer, please refer to the below link:
http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/introduction.html
Question 414
Which of the following are used to get data records from Amazon Kinesis? Choose an answer from the options below
A. Consumer
B. Stream
C. Producer
D. None of the above
Answer: A
A consumer gets data records from Amazon Kinesis streams. A consumer, known as an Amazon Kinesis Streams application, processes the data records from a stream.
For more information on AWS Kinesis consumers please visit the below URL:
http://docs.aws.amazon.com/streams/latest/dev/amazon-kinesis-consumers.html
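A minimal consumer sketch using boto3, assuming a hypothetical stream name and a single shard: it obtains a shard iterator and then reads a batch of records.

import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

shard_iterator = kinesis.get_shard_iterator(
    StreamName="my-stream",            # hypothetical stream
    ShardId="shardId-000000000000",    # first shard of the stream
    ShardIteratorType="TRIM_HORIZON",  # start from the oldest available record
)["ShardIterator"]

batch = kinesis.get_records(ShardIterator=shard_iterator, Limit=100)
for record in batch["Records"]:
    print(record["Data"])  # raw bytes written by the producer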
Question 415
What is the maximum possible retention period for data in Kinesis Streams? Choose an answer from the options below.
A. 5 days
B. 7 days
C. 10 days
D. 24 hours
Answer: B
Data records are accessible for a default of 24 hours from the time they are added to a stream. This time frame is called the retention period and is configurable in hourly increments from 24 to 168 hours (1 to 7 days).
For more information on AWS Kinesis streams please visit the below URL:
http://docs.aws.amazon.com/streams/latest/dev/amazon-kinesis-consumers.html
Question 416
Which of the following is false when you create an encrypted EBS volume?
A. Data is encrypted at rest inside the volume
B. Data is encrypted when it is moved from one instance to another in the same subnet.
C. Data is encrypted when data is moved between the volume and the instance
D. All snapshots created from the volume are encrypted
Answer: B
The AWS documentation mentions the following about EBS encryption: Amazon EBS encryption offers you a simple encryption solution for your EBS volumes without the need for you to build, maintain, and secure your own key management infrastructure. When you create an encrypted EBS volume and attach it to a supported instance type, the following types of data are encrypted:
- Data at rest inside the volume
- All data moving between the volume and the instance
- All snapshots created from the volume
Data moving from one instance to another is not covered by EBS encryption, which is why option B is false.
For more information on EBS Encryption, please refer to the below link:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html
Question 417
In AWS what constitutes temporary security credentials? Choose 3 answers from the options given below
A. AWS Access Key ID
B. Secret Access Key
C. Security Token
D. SSL Keys
Answer: A, B, C
This is given in the AWS documentation:
For more information on IAM please visit the below URL:
https://aws.amazon.com/iam/faqs/
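To make the three components concrete, here is a minimal boto3 sketch that requests temporary security credentials from STS and prints each part.

import boto3

sts = boto3.client("sts")

creds = sts.get_session_token(DurationSeconds=3600)["Credentials"]
print(creds["AccessKeyId"])      # A. the temporary AWS Access Key ID
print(creds["SecretAccessKey"])  # B. the temporary Secret Access Key
print(creds["SessionToken"])     # C. the security (session) token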
Question 418
Your company has a set of resources hosted in AWS. Your IT supervisor is concerned with the costs being incurred by the current set of AWS resources and wants to monitor the cost usage. Which of the following mechanisms can be used to monitor the costs of the AWS resources and also look at possibilities for cost optimization? Choose 3 answers from the options given below
A. Use the Cost Explorer to see the costs of AWS resources
B. Create budgets in billing section so that budgets are set beforehand
C. Send all logs to Cloudwatch logs and inspect the logs for billing details
D. Consider using the Trusted Advisor
Answer: A, B, D
The AWS Documentation mentions the following
1) For a quick, high-level analysis use Cost Explorer, which is a free tool that you can use to view graphs of your AWS spend data. It includes a variety of filters and preconfigured views, as well as forecasting capabilities. Cost Explorer displays data from the last 13 months, the current month, and the forecasted costs for the next three months, and it updates this data daily.
2) Consider using budgets if you have a defined spending plan for a project or service and you want to track how close your usage and costs are to exceeding your budgeted amount. Budgets use data from Cost Explorer to provide you with a quick way to see your usage-to-date and current estimated charges from AWS. You can also set up notifications that warn you if you exceed or are about to exceed your budgeted amount.
3) Visit the AWS Trusted Advisor console regularly. Trusted Advisor works like a customized cloud expert, analyzing your AWS environment and providing best practice recommendations to help you save money, improve system performance and reliability, and close security gaps.
For more information on cost optimization, please visit the below URL:
https://aws.amazon.com/answers/account-management/cost-optimization-monitor/
Question 419
Who are federated users when it comes to AWS? Choose an answer from the options given below
A. These are IAM users in AWS
B. These are IAM groups in AWS
C. Federated users (external identities) are users you manage outside of AWS in your corporate directory
D. None of the above
Answer: C
This is given in the AWS documentation. For more information on IAM please visit the below URL:
https://aws.amazon.com/iam/faqs/
Question 420
As a solutions architect, you have been asked to decide whether to use Amazon EBS-backed or instance store-backed instances. What is one key difference between an Amazon EBS-backed and an instance store-backed instance that you need to keep in mind?
A. Amazon EBS-backed instances can be stopped and restarted.
B. Instance-store backed instances can be stopped and restarted.
C. Auto scaling requires using Amazon EBS-backed instances.
D. Virtual Private Cloud (VPC) requires EBS backed instances.
Answer: A
Amazon EBS-backed instances can be stopped and restarted.
Please visit the below URL for the key differences between EBS and instance store volumes:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ComponentsAMIs.html
An Amazon EBS-backed instance can be placed in a stopped state, in which the instance is not running but the root volume is persisted in Amazon EBS. An instance store-backed instance cannot be placed in a stopped state; such instances are either running or terminated.
Question 421
Which of the following are not supported in the classic load balancer service provided by AWS? Choose an answer from the options given below.
A. Health Checks
B. Cloudwatch Metrics
C. Host Based Routing
D. Access Logs
Answer: C
This is clearly given in the AWS documentation: For more information on ELB please visit the below URL:
https://aws.amazon.com/elasticloadbalancing/classicloadbalancer/faqs/
Question 422
Your company has an on-premises Active Directory setup in place. The company has extended their footprint on AWS, but still wants the ability to use the on-premises Active Directory for authentication. Which of the following AWS services can be used to ensure that AWS resources such as AWS WorkSpaces can continue to use the existing credentials stored in the on-premises Active Directory?
A. Use the Active Directory service on AWS
B. Use the AWS Simple AD service
C. Use the Active Directory connector service on AWS
D. Use the ClassicLink feature on AWS
Answer: C
The AWS Documentation mentions the following: AD Connector is a directory gateway with which you can redirect directory requests to your on-premises Microsoft Active Directory without caching any information in the cloud. AD Connector comes in two sizes, small and large. A small AD Connector is designed for smaller organizations of up to 500 users. A large AD Connector can support larger organizations of up to 5,000 users.
For more information on the AD connector, please refer to the below URL:
http://docs.aws.amazon.com/directoryservice/latest/admin-guide/directory_ad_connector.html
Question 423
Which DNS record types does Amazon Route 53 support? Select 3 options.
A. A (address record)
B. AAAA (IPv6 address record)
C. TXT (text record)
D. Host Information records (HINFO)
Answer: A, B, C
For more information on Route53, please visit the below URL:
https: //aws.amazon.com/route53/faqs/
Question 424
A user has been created in IAM but the user is still not able to make API calls. After creating a new IAM user which of the following must be done before they can successfully make API calls?
A. Add a password to the user.
B. Enable Multi-Factor Authentication for the user.
C. Assign a Password Policy to the user.
D. Create a set of Access Keys for the user.
Answer: D
In IAM, when you create a user, you need to download the Access Key ID and Secret Access Key so that the user can make API calls to AWS.
For more information on IAM please visit the following URL:
http://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html
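As an illustration, a minimal boto3 sketch that creates the access keys a new user needs before API calls will succeed; the user name is a hypothetical placeholder.

import boto3

iam = boto3.client("iam")

key = iam.create_access_key(UserName="api-user")["AccessKey"]  # hypothetical user
# These two values are what the user signs API requests with.
print(key["AccessKeyId"], key["SecretAccessKey"])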
Question 425
Which of the following is not supported by AWS Import/Export?
A. Import to Amazon S3
B. Export from Amazon S3
C. Import to Amazon EBS
D. Import to Amazon Glacier
E. Export from Amazon Glacier
Answer: E
The AWS documentation mentions the following: AWS Import/Export accelerates transferring data between the AWS cloud and portable storage devices that you mail to us. AWS Import/Export is a good choice if you have 16 terabytes (TB) or less of data to import into Amazon Simple Storage Service or Amazon Elastic Block Store (Amazon EBS). You can also export data from Amazon S3 with AWS Import/Export. Before Amazon Glacier data can be exported, it needs to be restored to Amazon S3 using the S3 lifecycle restore feature.
For more information on AWS Import/Export, please refer to the below links:
https://docs.aws.amazon.com/amazonglacier/latest/dev/uploading-an-archive.html
http://docs.aws.amazon.com/AWSImportExport/latest/DG/whatisdisk.html
Question 426
Which of the following programming languages have an officially supported AWS SDK? Select 2 options.
A. PHP
B. Pascal
C. Java
D. SQL
E. Perl
Answer: A, C
This is as per the AWS documentation. For more information on the AWS toolkits available, please refer to the below URL: https://aws.amazon.com/tools/
Question 427
When using the following AWS services, which should be implemented in multiple Availability Zones for high availability solutions? Choose 2 answers from the options below.
A. Amazon DynamoDB
B. Amazon Elastic Compute Cloud (EC2)
C. Amazon Elastic Load Balancing
D. Amazon Simple Storage Service (S3)
Answer: B, C
The AWS documentation shows how the ELB and EC2 instances are set up for high availability: you have the ELB placed in front of the instances, and the instances are placed in different AZs. For more information on the ELB, please visit the below URL: https://aws.amazon.com/elasticloadbalancing/
Option A is wrong because the service runs across Amazon’s proven, high-availability data centers. The service replicates data across three facilities in an AWS Region to provide fault tolerance in the event of a server failure or Availability Zone outage.
Option D is wrong because Amazon S3 Standard and Standard-IA redundantly store your objects on multiple devices across multiple facilities in an Amazon S3 region. The service is designed to sustain concurrent device failures by quickly detecting and repairing any lost redundancy.
Question 428
Which of the following statements are true with regard to EBS Volumes? Choose 3 correct answers from the options given below
A. EBS Volumes are automatically replicated within that zone to prevent data loss due to failure of any single hardware component
B. EBS Volumes can be attached to any EC2 Instance in any AZ.
C. After you attach a volume, it appears as a native block device similar to a hard drive or other physical device.
D. An EBS volume can be attached to only one instance at a time
Answer: A, C, D
When you create an EBS volume in an Availability Zone, it is automatically replicated within that zone to prevent data loss due to failure of any single hardware component. After you create a volume, you can attach it to any EC2 instance in the same Availability Zone. After you attach a volume, it appears as a native block device similar to a hard drive or other physical device. At that point, the instance can interact with the volume just as it would with a local drive; the instance can format the EBS volume with a file system, such as ext3, and then install applications. An EBS volume can be attached to only one instance at a time within the same Availability Zone. However, multiple volumes can be attached to a single instance.
Option B is invalid because you can attach EBS Volumes to any EC2 instance in the same Availability Zone only
For more information on EBS Volumes, please visit the below URL:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumes.html
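For illustration, a minimal boto3 sketch of attaching a volume to an instance in the same Availability Zone; the volume and instance IDs are hypothetical.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# The volume and instance must be in the same Availability Zone. Once attached,
# the volume appears to the instance as a native block device (e.g. /dev/xvdf).
ec2.attach_volume(
    VolumeId="vol-0example",
    InstanceId="i-0example",
    Device="/dev/sdf",
)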
Question 429
You are planning on hosting a static website on an EC2 Instance. Which of the below aspects can be used to create a highly available environment? Choose 3 answers from the options given below
A. An auto scaling group to recover from EC2 instance failures
B. Elastic Load Balancer
C. An SQS queue
D. Multiple Availability Zones
Answer: A, B, D
The AWS documentation gives an example of a highly available architecture for hosting EC2 Instances. Here you have:
1) An ELB which is placed in front of the users and helps direct the traffic to the EC2 Instances.
2) The EC2 Instances, which are placed as part of an Auto Scaling Group.
3) Multiple subnets which are mapped to multiple Availability Zones.
For a static web site, SQS is not required to build such an environment. If you have a system such as an order processing system, which has that sort of queuing of requests, then that could be a candidate for using SQS queues.
For more information on high availability, please visit the below URL:
https://media.amazonwebservices.com/architecturecenter/AWS_ac_ra_ftha_04.pdf
Question 430
Which of the following services do not natively encrypt data at rest within an AWS region?
(Choose two.)
A. AWS Storage Gateway
B. Amazon DynamoDB
C. Amazon CloudFront
D. Amazon Glacier
E. Amazon Simple Queue Service
Answer: C, E
CloudFront and SQS do not have Encryption at Rest. All remaining options have Encryption at Rest. This is clearly given in the AWS documentation
For information on Amazon Glacier, please refer to the below link:
https://aws.amazon.com/glacier/faqs/
For information on Amazon Storage gateways, please refer to the below link:
https://aws.amazon.com/storagegateway/faqs/
On Feb 8 2018, Amazon announced encryption at rest for DynamoDB. For information on Amazon DynamoDB encryption at rest,
please refer to the below link: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/EncryptionAtRest.html
Question 431
Amazon Redshift uses which block size for its columnar storage?
A. 2KB
B. 8KB
C. 16KB
D. 32KB
E. 1024KB
Answer: E
Columnar storage for database tables is an important factor in optimizing analytic query performance because it drastically reduces the overall disk I/O requirements and reduces the amount of data you need to load from disk. Typical database block sizes range from 2 KB to 32 KB; Amazon Redshift uses a block size of 1 MB (1024 KB), which is more efficient and further reduces the number of I/O requests needed to perform any database loading or other operations that are part of query execution. For more information on Redshift column storage,
please visit the below URL:
http://docs.aws.amazon.com/redshift/latest/dg/c_columnar_storage_disk_mem_mgmnt.html
Question 432
Which procedure for backing up a relational database on EC2 that is using a set of RAIDed EBS volumes for storage minimizes the time during which the database cannot be written to and results in a consistent backup?
A. 1. Detach EBS volumes, 2. Start EBS snapshot of volumes, 3. Re-attach EBS volumes
B. 1. Stop the EC2 Instance. 2. Snapshot the EBS volumes
C. 1. Suspend disk I/O, 2. Create an image of the EC2 Instance, 3. Resume disk I/O
D. 1. Suspend disk I/O, 2. Start EBS snapshot of volumes, 3. Resume disk I/O
E. 1. Suspend disk I/O, 2. Start EBS snapshot of volumes, 3. Wait for snapshots to complete, 4. Resume disk I/O
Answer: E
The AWS Documentation mentions the following when considering snapshots of EBS Volumes in a RAID configuration: When you take a snapshot of an attached Amazon EBS volume that is in use, the snapshot excludes data cached by applications or the operating system. For a single EBS volume, this is often not a problem. However, when cached data is excluded from snapshots of multiple EBS volumes in a RAID array, restoring the volumes from the snapshots can degrade the integrity of the array. When creating snapshots of EBS volumes that are configured in a RAID array, it is critical that there is no data I/O to or from the volumes when the snapshots are created. RAID arrays introduce data interdependencies and a level of complexity not present in a single EBS volume configuration. For more information on this,
please refer to the below link:
https://aws.amazon.com/premiumsupport/knowledge-center/snapshot-ebs-raid-array/
Question 433
For which of the following use cases are Simple Workflow Service (SWF) and Amazon EC2 an appropriate solution? Choose 2 answers
A. Using as an endpoint to collect thousands of data points per hour from a distributed fleet of sensors
B. Managing a multi-step and multi-decision checkout process of an e-commerce website
C. Orchestrating the execution of distributed and auditable business processes
D. Using as an SNS (Simple Notification Service) endpoint to trigger execution of video transcoding jobs
E. Using as a distributed session store for your web application
Answer: B, C
The AWS Documentation mentions the following on the AWS Simple Workflow Service: The Amazon Simple Workflow Service (Amazon SWF) makes it easier to develop asynchronous and distributed applications by providing a programming model and infrastructure for coordinating distributed components and maintaining their execution state in a reliable way. By relying on Amazon SWF, you are freed to focus on building the aspects of your application that differentiate it. For more information on the simple workflow service,
please refer to the below link:
http://docs.aws.amazon.com/amazonswf/latest/developerguide/swf-dg-intro-to-swf.html
Question 434
An EC2 instance passes through several states over the course of its lifecycle. Choose 3 options which are correct states of an instance lifecycle
A. rebooting
B. pending
C. running
D. Shutdown
Answer: A, B, C
The question asks for valid instance lifecycle states. Pending, running, and rebooting are part of the instance lifecycle, whereas "Shutdown" is not a valid state name (the related states are stopping, stopped, shutting-down, and terminated). For more information on instance states,
please visit the URL
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-lifecycle.html
Question 435
Which of the following can be used as an origin server in CloudFront? Choose 3 answers from the options given below.
A. A web server running on EC2
B. A web server running in your own datacenter
C. An RDS instance
D. An Amazon S3 bucket
Answer: A, B, D
Currently CloudFront supports the following types of origins: S3 buckets - when you use Amazon S3 as an origin for your distribution, you place any objects that you want CloudFront to deliver in an Amazon S3 bucket. Custom origins - a custom origin is an HTTP server, for example, a web server. The HTTP server can be an Amazon EC2 instance or an HTTP server that you manage privately. When you use a custom origin, you specify the DNS name of the server, along with the HTTP and HTTPS ports and the protocol that you want CloudFront to use when fetching objects from your origin.
For more information on Cloudfront Distributions, please visit the url
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/DownloadDistS3AndCustomOrigins.html
Question 436
A company has configured and peered two VPCs: VPC-1 and VPC-2. VPC-1 contains only private subnets, and VPC-2 contains only public subnets. The company uses a single AWS Direct Connect connection and private virtual interface to connect their on-premises network with VPC-1. Which two methods increases the fault tolerance of the connection to VPC-1? (Choose two.)
A. Establish a hardware VPN over the internet between VPC-2 and the on-premises network.
B. Establish a hardware VPN over the internet between VPC-1 and the on-premises network.
C. Establish a new AWS Direct Connect connection and private virtual interface in the same region as VPC-2.
D. Establish a new AWS Direct Connect connection and private virtual interface in a different AWS region than VPC-1.
E. Establish a new AWS Direct Connect connection and private virtual interface in the same AWS region as VPC-1
Answer: B, E
Having a VPN Connection is considered as a backup to a Direct Connect connection.
Please find the below article on configuring a VPN connection as a backup:
https://aws.amazon.com/premiumsupport/knowledge-center/configure-vpn-backup-dx/
One can also have another Direct Connect connection, so that if one goes down, the other one would still be active. This needs to be in the same region as VPC-1.
Question 437
By default, what happens to data when an EC2 instance terminates? Select 3 options.
A. For EBS backed AMI, the root EBS volume with the operating system is preserved by default.
B. For EBS backed AMI, any volume attached apart from the OS volume is preserved
C. All the snapshots of the EBS volume with operating system is preserved
D. For S3 backed AMI, all the data in the local (ephemeral) hard drive is deleted
Answer: B, C, D
Option B is correct because when an instance is terminated, the volume will remain unless you specifically delete it. When you create an instance, the root volume does get deleted on termination of the instance. But when you add a new volume, by default the “Delete on termination” flag is unchecked, so unless you check this flag, the volume will remain.
Option C is correct because this is the whole idea of snapshots to remain even if the volume or instance is deleted.
Option D is correct because ephemeral storage is temporary storage by default and gets deleted when the system is terminated.
For more information on EBS volumes, please visit the link -
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumes.html
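The behavior described above can be seen in the launch request itself; a minimal boto3 sketch with a hypothetical AMI ID is shown below.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.run_instances(
    ImageId="ami-0example",  # hypothetical AMI
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    BlockDeviceMappings=[
        # Root volume: DeleteOnTermination defaults to True, so it is removed.
        {"DeviceName": "/dev/xvda", "Ebs": {"DeleteOnTermination": True}},
        # Additional volume: the flag defaults to unchecked, so it is preserved.
        {"DeviceName": "/dev/xvdb", "Ebs": {"VolumeSize": 100, "DeleteOnTermination": False}},
    ],
)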
Question 438
When storing sensitive data on the cloud which of the below options should be carried out on AWS? Choose 3 answers from the options given below.
A. With AWS you do not need to worry about encryption
B. Enable EBS Encryption
C. Encrypt the file system on an EBS volume using Linux tools
D. Enable S3 Encryption
Answer: B, C, D
Amazon EBS encryption offers you a simple encryption solution for your EBS volumes without the need for you to build, maintain, and secure your own key management infrastructure. When you create an encrypted EBS volume and attach it to a supported instance type, the following types of data are encrypted:
- Data at rest inside the volume
- All data moving between the volume and the instance
- All snapshots created from the volume
For more information on EBS Encryption, please refer to the below link
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html
Data protection refers to protecting data while in-transit (as it travels to and from Amazon S3) and at rest (while it is stored on disks in Amazon S3 data centers). You can protect data in transit by using SSL or by using client-side encryption.
For more information on S3 Encryption, please refer to the below link
http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingEncryption.html
Question 439
When using the following AWS services, which should be implemented in multiple Availability Zones for high availability solutions? Choose 2 answers
A. Amazon DynamoDB
B. Amazon Elastic Compute Cloud (EC2)
C. Amazon Elastic Load Balancing
D. Amazon Simple Notification Service (SNS)
E. Amazon Simple Storage Service (S3)
Answer: B, C
It is an architecture sample using Elastic Load Balancing, EC2 and Auto Scaling. Here the web servers are scaled on demand using Auto Scaling. They are then placed behind an ELB which is used to distribute the traffic amongst the instances. Also, the web servers are placed across multiple Availability Zones for fault tolerance.
For more information on best practices for web hosting, please refer to the below URL:
https://d0.awsstatic.com/whitepapers/aws-web-hosting-best-practices.pdf
Question 440
What is the default period for EC2 CloudWatch data with detailed monitoring disabled?
A. One second
B. Five seconds
C. One minute
D. Three minutes
E. Five minutes
Answer: E
In Amazon CloudWatch basic monitoring for EC2 instances, the important metrics are collected at five-minute intervals and stored for two weeks:
- CPU load
- disk I/O
- network I/O
For more information on Amazon Cloudwatch EC2 basic monitoring, please visit
https://aws.amazon.com/blogs/aws/amazon-cloudwatch-basic-monitoring-for-ec2-at-no-charge/
Question 441
You are a solutions architect working for a large digital media company. Your company is migrating their production estate to AWS and you are in the process of setting up access to the AWS console using Identity and Access Management (IAM). You have created 5 users for your system administrators. What further steps do you need to take to enable your system administrators to get access to the AWS console?
A. Generate an Access Key ID & Secret Access Key, and give these to your system administrators.
B. Enable multi-factor authentication on their accounts and define a password policy.
C. Generate a password for each user created and give these passwords to your system administrators.
D. Give the system administrators the secret access key and access key id, and tell them to use these credentials to log in to the AWS console.
Answer: C
In order to allow the users to log into the console, you need to provide a password for the users. For more information on how to allow users to sign into an account,
please refer to the below URL:
http://docs.aws.amazon.com/IAM/latest/UserGuide/getting-started_how-users-sign-in.html
Question 442
Which technique can be used to integrate AWS IAM (Identity and Access Management) with an on-premises LDAP (Lightweight Directory Access Protocol) directory service?
A. Use an IAM policy that references the LDAP account identifiers and the AWS credentials.
B. Use SAML (Security Assertion Markup Language) to enable single sign-on between AWS and LDAP.
C. Use AWS Security Token Service from an identity broker to issue short-lived AWS credentials.
D. Use IAM roles to automatically rotate the IAM credentials when LDAP credentials are updated.
E. Use the LDAP credentials to restrict a group of users from launching specific EC2 instance types.
Answer: C
For more information on AWS identity federation, please refer to the below URL:
https://aws.amazon.com/blogs/aws/aws-identity-and-access-management-now-with-identity-federation/
Question 443
Your Fortune 500 company has undertaken a TCO analysis evaluating the use of Amazon S3 versus acquiring more hardware. The outcome was that all employees would be granted access to use Amazon S3 for storage of their personal documents. Which of the following will you need to consider so you can set up a solution that incorporates single sign-on from your corporate AD or LDAP directory and restricts access for each user to a designated user folder in a bucket? Choose three answers from the options given below
A. Setting up a federation proxy or identity provider
B. Using AWS Security Token Service to generate temporary tokens
C. Tagging each folder in the bucket
D. Configuring IAM role
E. Setting up a matching IAM user for every user in your corporate directory that needs access to a folder in the bucket
Answer: A, B, D
The setup uses the Security Token Service to achieve integration between AWS and an on-premises Active Directory infrastructure. You need to have an identity provider such as Active Directory Federation Services. The Security Token Service is used to generate temporary credentials, which are then mapped to corresponding IAM roles.
For more information please refer to the below link:
http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html
Question 444
Which of the following are true regarding encrypted Amazon Elastic Block Store (EBS) volumes? Choose 2 answers
A. Supported on all Amazon EBS volume types
B. Snapshots are automatically encrypted
C. Available to all instance types
D. Existing volumes can be encrypted
E. Shared volumes can be encrypted
Answer: A, B
Please note the keyword “encrypted” in the question.
Option C is wrong because EBS encryption is available only on supported instance types, not on all instance types.
Option D is wrong because existing volumes cannot be encrypted.
Option E is wrong because Shared volumes cannot be encrypted.
EBS encryption can be applied to all of the EBS volume types.
For more information on EBS volume types, please visit the link:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html
Question 445
For which of the following use cases are Simple Workflow Service (SWF) and Amazon EC2 an appropriate solution? Choose 2 answers
A. Using as an endpoint to collect thousands of data points per hour from a distributed fleet of sensors
B. Managing a multi-step and multi-decision checkout process of an e-commerce website
C. Orchestrating the execution of distributed and auditable business processes
D. Using as an SNS (Simple Notification Service) endpoint to trigger execution of video transcoding jobs
E. Using as a distributed session store for your web application
Answer: B, C
Amazon Simple Workflow Service (SWF) is a web service that makes it easy to coordinate work across distributed application components. Amazon SWF enables applications for a range of use cases, including media processing, web application back-ends, business process workflows, and analytics pipelines, to be designed as a coordination of tasks. Collection of data points is normally done via Amazon Kinesis, so Option A is wrong. In SWF you can create multi-step and multi-decision processes for managing approvals during the workflow process, hence Option B is correct. Since business processes can be orchestrated in SWF, Option C is correct. Video transcoding jobs generally don't need SWF and rely more on SQS, hence Option D is wrong. Option E is wrong because you need to use a caching solution for a distributed session store, not SWF.
For more information on aws SWF - Please visit the URL -
https://aws.amazon.com/swf/faqs/
Question 446
You need to configure an Amazon S3 bucket to serve static assets for your public-facing web application. Which methods ensure that all objects uploaded to the bucket are set to public read? Choose 2 answers
A. Set permissions on the object to public read during upload.
B. Configure the bucket ACL to set all objects to public read.
C. Configure the bucket policy to set all objects to public read.
D. Use AWS Identity and Access Management roles to set the bucket to public read.
E. Amazon S3 objects default to public read, so no action is needed.
Answer: A, C
To set permissions on buckets and objects, you can give permissions to the bucket beforehand or you can set the permissions on an object when it is uploaded to S3. Option B is incorrect because you cannot use the bucket ACL to set all objects to public read. Option D is incorrect because even though you can use IAM to create identities, you cannot use IAM roles to give public read on a bucket. Option E is incorrect because public read is not set by default. To grant public read manually, go to the bucket's Permissions section, click on Add more permissions, choose the Grantee as Everyone, grant the read permissions, and then click on the Save button.
For more information on access control, please visit the link:
http://docs.aws.amazon.com/AmazonS3/latest/dev/s3-access-control.html
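Both correct methods can be expressed in a short boto3 sketch; the bucket name is a hypothetical placeholder.

import boto3
import json

s3 = boto3.client("s3")
bucket = "my-static-assets"  # hypothetical bucket

# Option A: set the ACL on the object at upload time.
s3.put_object(Bucket=bucket, Key="index.html",
              Body=b"<html>...</html>", ACL="public-read")

# Option C: a bucket policy that grants public read on every object.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::" + bucket + "/*",
    }],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))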
Question 447
Which of the following are valid statements about Amazon S3? Choose 2 options.
A. S3 provides read-after-write consistency for any type of PUT or DELETE.
B. Consistency is not guaranteed for any type of PUT or DELETE.
C. A successful response to a PUT request only occurs when a complete object is saved.
D. Partially saved objects are immediately readable with a GET after an overwrite PUT.
E. S3 provides eventual consistency for overwrite PUTS and DELETES.
Answer: C, E
The documentation provides a clear description of the read and write consistency model for objects on S3: a successful response to a PUT request occurs only when a complete object is saved, and S3 provides eventual consistency for overwrite PUTs and DELETEs. Based on this information, Options C and E are the right options.
For more information on S3, please visit the link -
https://aws.amazon.com/s3/faqs/
Question 448
Which of the following are characteristics of a standard reserved instance? Choose 3 answers
A. It can be migrated across Availability Zones
B. It is specific to an Amazon Machine Image (AMI)
C. It can be applied to instances launched by Auto Scaling
D. It is specific to an instance Type
E. It can be used to lower Total Cost of Ownership (TCO) of a system
Answer: A, C, E
Option A is correct because you can migrate reserved instances between AZs.
Please refer to the link below for confirmation of this:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ri-modifying.html
Option D is incorrect because a Reserved Instance is specific to an instance family; however, the instance type within that family can be changed. Also, when you create a reserved instance, you can see the instance type as an option.
Option E is correct, because reserved instances can be used to lower costs. Reserved Instances provide you with a discount on usage of EC2 instances, and a capacity reservation when they are applied to a specific Availability Zone, giving you additional confidence that you will be able to launch the instances you have reserved when you need them.
For more information on reserved instances, please visit the link -
https://aws.amazon.com/ec2/pricing/reserved-instances/
Question 449
If you’re unable to connect via SSH to your EC2 instance, which of the following should you check and possibly correct to restore connectivity?
A. Adjust Security Group to permit egress traffic over TCP port 443 from your IP.
B. Configure the IAM role to permit changes to security group settings.
C. Modify the instance security group to allow ingress of ICMP packets from your IP.
D. Adjust the instance’s Security Group to permit ingress traffic over port 22 from your IP.
E. Apply the most recently released Operating System security patches.
Answer: D
A security group acts as a virtual firewall that controls the traffic for one or more instances. When you launch an instance, you associate one or more security groups with the instance. You add rules to each security group that allow traffic to or from its associated instances. For connecting via SSH on EC2, you need to ensure that port 22 is open on the security group for the EC2 instance.
Option A is wrong because port 443 is for HTTPS, not for SSH.
Option B is wrong because an IAM role is not pertinent to security group settings.
Option C is wrong because SSH uses TCP on port 22, not ICMP.
Option E is wrong because operating system patches do not affect SSH connectivity.
So in your EC2 Dashboard, go to Security groups, choose the relevant security group. Then click on Inbound rules and ensure there is a rule for TCP on port 22.
For more information on EC2 Security groups, please visit the url -
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html
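The same inbound rule can be added programmatically; a minimal boto3 sketch with a hypothetical security group ID and client IP follows.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.authorize_security_group_ingress(
    GroupId="sg-0example",  # hypothetical security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,   # SSH
        "ToPort": 22,
        "IpRanges": [{"CidrIp": "203.0.113.10/32"}],  # your client IP only
    }],
)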
Question 450
An Auto-Scaling group spans 3 AZs and currently has 4 running EC2 instances. When Auto Scaling needs to terminate an EC2 instance by default, Auto Scaling will: Choose 2 answers.
A. Allow at least five minutes for Windows/Linux shutdown scripts to complete, before terminating the instance.
B. Terminate the instance with the least active network connections. If multiple instances meet this criterion, one will be randomly selected.
C. Send a SNS notification, if configured to do so.
D. Terminate an instance in the AZ which currently has 2 running EC2 instances.
E. Randomly select one of the 3 AZs, and then terminate an instance in that AZ.
Answer: C, D
In the above scenario, you would probably have 2 instances running in one AZ and one each in the other AZs. The default termination policy first selects the AZ with the most running instances, hence Option D is correct and Options A, B and E are wrong. Also, Auto Scaling allows for notification via SNS, so if that is enabled, it will send out the notification accordingly.
For more information on Auto scaling Termination, please visit the link:
http://docs.aws.amazon.com/autoscaling/latest/userguide/as-instance-termination.html
Question 451
In order to optimize performance for a compute cluster that requires low inter-node latency, which of the following features should you use?
A. Multiple Availability Zones
B. AWS Direct Connect
C. EC2 Dedicated Instances
D. Placement Groups
E. VPC private subnets
Answer: D
Option A is wrong because multiple AZs are used to distribute your AWS resources and are not related to low-latency clusters. Option B is wrong because Direct Connect is used to connect on-premises data centers to AWS. Option C is wrong because dedicated instances do not guarantee low latency. Option E is wrong because VPC private subnets do not guarantee low latency. A placement group is a logical grouping of instances within a single Availability Zone. Placement groups are recommended for applications that benefit from low network latency, high network throughput, or both.
For more information on placement groups please visit
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
Question 452
A company is preparing to give AWS Management Console access to developers. Company policy mandates identity federation and role-based access control. Roles are currently assigned using groups in the corporate Active Directory. What combination of the following will give developers access to the AWS console? Choose 2 answers
A. AWS Directory Service AD Connector
B. AWS Directory Service Simple AD
C. AWS Identity and Access Management groups
D. AWS identity and Access Management roles
E. AWS identity and Access Management users
Answer: A,D
To enable a trust relationship between your on-premises Active Directory and AWS, you need to create a new IAM role. After that, you need to assign Active Directory users or groups to that IAM role. If roles already exist, you can assign Active Directory users or groups to the existing IAM roles.
Find details below:
https://aws.amazon.com/blogs/security/how-to-connect-your-on-premises-active-directory-to-aws-using-ad-connector/
AWS Directory Service provides multiple ways to use Microsoft Active Directory with other AWS services. You can choose the directory service with the features you need at a cost that fits your budget. Use Simple AD if you need an inexpensive Active Directory-compatible service with the common directory features. Select AWS Directory Service for Microsoft Active Directory (Enterprise Edition) for a feature-rich managed Microsoft Active Directory hosted on the AWS cloud. The third option, AD Connector, lets you simply connect your existing on-premises Active Directory to AWS.
For more information on the Ad Connector, please visit
http://docs.aws.amazon.com/directoryservice/latest/admin-guide/what_is.html
Question 453
Which of the following are true regarding encrypted Amazon Elastic Block Store (EBS) volumes? Choose two answers from the options given below
A. Supported on all Amazon EBS volume types
B. Snapshots are automatically encrypted
C. Available to all instance types
D. Existing volumes can be encrypted
E. Shared volumes can be encrypted
Answer: A, B
The AWS documentation mentions the following about EBS volumes: encryption is available for all volume types, and you can create encrypted General Purpose SSD (gp2), Provisioned IOPS SSD (io1), Throughput Optimized HDD (st1), and Cold HDD (sc1) volumes up to 16 TiB in size. The snapshots of encrypted EBS volumes are automatically encrypted; this is also given in the AWS documentation.
For more information on EBS Volumes , please refer to the below URL:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEBS.html
Question 454
Which of the following are characteristics of Amazon VPC subnets? Choose 2 answers from the options given below
A. Each subnet spans at least 2 Availability Zones to provide a high-availability environment.
B. Each subnet maps to a single Availability Zone.
C. CIDR block mask of /25 is the smallest range supported.
D. By default, all subnets can route between each other, whether they are private or public.
E. Instances in a private subnet can communicate with the Internet only if they have an Elastic IP.
Answer: B, D
A subnet can only map to one Availability Zone, so of options A and B, B is correct. When you create a subnet, the smallest allowable CIDR block is /28, so option C is wrong. Option E is wrong because EC2 instances in a private subnet will not be able to route to the internet even if they have an Elastic IP.
Question 455
Per the AWS Acceptable Use Policy, penetration testing of EC2 instances:
A. May be performed by AWS, and will be performed by AWS upon customer request.
B. May be performed by AWS, and is periodically performed by AWS.
C. Are expressly prohibited under all circumstances.
D. May be performed by the customer on their own instances with prior authorization from AWS.
E. May be performed by the customer on their own instances, only if performed from EC2 instances.
Answer: D
You need to take prior authorization from AWS before performing a penetration test on EC2 Instances.
Please refer to the below url for more details:
https://aws.amazon.com/security/penetration-testing/
Question 456
How can you secure data at rest on an EBS volume?
A. Attach the volume to an instance using EC2’s SSL interface.
B. Write the data randomly instead of sequentially.
C. Encrypt the volume using the S3 server-side encryption service.
D. Create an IAM policy that restricts read and write access to the volume.
E. Use an encrypted file system on top of the EBS volume.
Answer: E
In order to secure data at rest on an EBS volume, you can either encrypt the volume when it is created or layer an encrypted file system on top of an existing volume.
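For illustration, the first alternative (creating the volume encrypted) is a one-call boto3 sketch; the Availability Zone and size below are placeholder values, not from the question:

    import boto3

    ec2 = boto3.client('ec2')

    # Create a 100 GiB gp2 volume that is encrypted at rest.
    # Omitting KmsKeyId uses the account's default EBS encryption key.
    volume = ec2.create_volume(
        AvailabilityZone='us-west-2a',  # placeholder AZ
        Size=100,
        VolumeType='gp2',
        Encrypted=True
    )
    print(volume['VolumeId'])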
Question 457
If you want to launch Amazon Elastic Compute Cloud (EC2) instances and assign each instance a private IP address from a predetermined range, you should: (choose one of the correct answers below)
A. Launch the instance from a private Amazon Machine Image (AMI).
B. Assign a group of sequential Elastic IP address to the instances.
C. Launch the instances in the Amazon Virtual Private Cloud (VPC).
D. Launch the instances in a Placement Group.
E. Use standard EC2 instances since each instance gets a private Domain Name Service (DNS) already.
Answer: C
This is a core reason for using a VPC: you define your own subnets and their CIDR ranges, and an EC2 instance launched into a subnet receives a private IP address from that predetermined range.
For more information on private IP addresses, please refer the below URL:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-instance-addressing.html
Question 458
Which of the following notification endpoints or clients are supported by Amazon Simple Notification Service? Choose 2 answers from the options below.
A. Email
B. CloudFront distribution
C. File Transfer Protocol
D. Short Message Service
E. Simple Network Management Protocol
Answer: A, D
When you create a subscription in SNS, Email and SMS are among the protocols available on the console.
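For illustration, a minimal boto3 sketch of creating both kinds of subscriptions (the topic ARN, email address, and phone number are placeholders):

    import boto3

    sns = boto3.client('sns')
    topic_arn = 'arn:aws:sns:us-east-1:123456789012:my-topic'  # placeholder

    # Email endpoint (the recipient must confirm the subscription).
    sns.subscribe(TopicArn=topic_arn, Protocol='email',
                  Endpoint='user@example.com')

    # SMS endpoint (E.164-formatted phone number).
    sns.subscribe(TopicArn=topic_arn, Protocol='sms',
                  Endpoint='+14155550100')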
Question 459
Which of the following instance types are available as Amazon EBS-backed only? Choose 2 answers from the options below.
A. General purpose T2
B. General purpose M3
C. Compute-optimized C4
D. Compute-optimized C3
E. Storage-optimized I2
Answer: A, C
For details for all instance types, please visit the url -
https://aws.amazon.com/ec2/instance-types/
Question 460
There is an urgent requirement to monitor a few database metrics for a database hosted on AWS and send notifications. Which AWS services can accomplish this requirement? Choose 2 answers from the options given below.
A. Amazon Simple Email Service
B. Amazon CloudWatch
C. Amazon Simple Queue Service (SQS)
D. Amazon Route 53
E. Amazon Simple Notification Service (SNS)
Answer: B, E
Amazon CloudWatch will be used to monitor the IOPS metrics from the RDS instance, and Amazon Simple Notification Service will be used to send the notification if any alarm is triggered.
For more information on Cloudwatch and SNS, please visit the below URLs:
https://aws.amazon.com/cloudwatch/
https://aws.amazon.com/sns/
Question 461
A customer's nightly EMR job processes a single 2-TB data file stored on S3. The EMR job runs on 2 on-demand core nodes and 3 on-demand task nodes. Which of the following may help reduce the EMR job completion time? Choose 2 answers from the options below.
A. Use 3 spot instances rather than 3 on-demand instances for the task nodes.
B. Change the input split size in the MapReduce job configuration
C. Use a bootstrap action to present the S3 bucket as a local filesystem
D. Launch the core nodes and the task nodes with a VPC
E. Adjust the number of simultaneous mapper tasks
Answer: B, E
As per the AWS documentation, if you have too few tasks, then you have nodes sitting idle. You can reduce the input split size in the MapReduce job configuration and increase the number of simultaneous mapper tasks, so that more of the file is processed in parallel.
For more information on EMR tasks please visit the below URL:
http://docs.aws.amazon.com/emr/latest/DeveloperGuide/TaskConfiguration_H1.0.3.html
Question 462
What combination of the following options will protect S3 objects from both accidental deletion and accidental overwriting? Choose 2 answers from the options below
A. Enable S3 versioning on the bucket
B. Access S3 data using only signed URLs
C. Disable S3 delete using an IAM bucket policy
D. Enable S3 RRS
E. Enable MFA protected access
Answer: A, E
This is clearly given in the AWS documentation: versioning preserves prior versions of overwritten or deleted objects, and MFA protected access requires additional authentication before a version can be permanently deleted.
For more information on S3 please visit the below URL:
https://aws.amazon.com/s3/faqs/
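As a minimal boto3 sketch (the bucket name and MFA device are placeholders), enabling both protections could look like the following; note that MFA Delete can only be enabled by the bucket owner's root account via the API or CLI:

    import boto3

    s3 = boto3.client('s3')

    # Enable versioning and MFA Delete together. The MFA argument is the
    # MFA device serial/ARN followed by a current token code.
    s3.put_bucket_versioning(
        Bucket='my-bucket',  # placeholder
        MFA='arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456',
        VersioningConfiguration={
            'Status': 'Enabled',
            'MFADelete': 'Enabled'
        }
    )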
Question 463
You have an application running in us-west-2 that requires 6 EC2 instances running at all times. With 3 AZ available in that region, which of the following deployments provides 100% fault tolerance if any single AZ in us-west-2 becomes unavailable? Choose 2 answers from the options below:
A. us-west-2a with 2 instances, us-west-2b with 2 instances, us-west-2c with 2 instances
B. us-west-2a with 3 instances, us-west-2b with 3 instances, us-west-2c with 0 instances
C. us-west-2a with 4 instances, us-west-2b with 2 instances, us-west-2c with 2 instances
D. us-west-2a with 6 instances, us-west-2b with 6 instances, us-west-2c with 0 instances
E. us-west-2a with 3 instances, us-west-2b with 3 instances, us-west-2c with 3 instances
Answer: D, E
If you read the question carefully, it asks about the scenario where only one AZ becomes unavailable at a time; the requirement is to keep 6 instances running even if any single AZ goes down. Options D and E both guarantee that 6 instances remain running in that case. Option A is invalid because if any one Availability Zone goes down, only 4 running instances remain. Option B is invalid because if either us-west-2a or us-west-2b goes down, fewer than 6 instances remain. Option C is invalid because if us-west-2a goes down, fewer than 6 instances remain.
For more information on building fault tolerant applications in AWS , please refer to the below link
http://media.amazonwebservices.com/AWS_Building_Fault_Tolerant_Applications.pdf
Question 464
You are developing a highly available web application using stateless web servers. Which services are suitable for storing session state data? Choose 3 answers from the options below
A. Amazon RDS
B. Amazon ElastiCache
C. Amazon CloudWatch
D. Elastic Load Balancing (ELB)
E. Amazon DynamoDB
Answer: A, B, E
Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while managing time-consuming database administration tasks, freeing you up to focus on your applications and business. Amazon ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory data store or cache in the cloud. The service improves the performance of web applications by allowing you to retrieve information from fast, managed, in-memory data stores, instead of relying entirely on slower disk-based databases.
Amazon DynamoDB is a fast and flexible NoSQL database service for all applications that need consistent, single-digit millisecond latency at any scale. It is a fully managed cloud database and supports both document and key-value store models. Its flexible data model and reliable performance make it a great fit for mobile, web, gaming, ad tech, IoT, and many other applications.
For more information on Amazon RDS please visit the below URL:
https://aws.amazon.com/rds/
For more information on Amazon ElastiCache please visit the below URL:
https://aws.amazon.com/elasticache/
For more information on Amazon DynamoDB please visit the below URL:
https://aws.amazon.com/dynamodb/
Question 465
You are developing a highly available web application using stateless web servers. Which services are suitable for storing session state data? Choose 3 answers
A. Amazon CloudWatch
B. Amazon Relational Database Service (RDS)
C. Elastic Load Balancing
D. Amazon ElastiCache
E. AWS Storage Gateway
F. Amazon DynamoDB
Answer: B, D, F
Please find the AWS Documentation references for ElastiCache and DynamoDB below. Relational databases have long been used to store session data. Amazon ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory data store or cache in the cloud. The service improves the performance of web applications by allowing you to retrieve information from fast, managed, in-memory data stores, instead of relying entirely on slower disk-based databases.
For more information on ElastiCache, please refer to the below link
https://aws.amazon.com/elasticache/
An example of managing session state via DynamoDB is given below
http://docs.aws.amazon.com/sdk-for-net/v2/developer-guide/dynamodb-session-net-sdk.html
Question 466
A customer needs corporate IT governance and cost oversight of all AWS resources consumed by its divisions. The divisions want to maintain administrative control of the discrete AWS resources they consume and keep those resources separate from the resources of other divisions. Which of the following options, when used together will support the autonomy/control of divisions while enabling corporate IT to maintain governance and cost oversight? Choose two answers from the options given below
A. Use AWS Consolidated Billing and disable AWS root account access for the child accounts.
B. Enable IAM cross-account access for all corporate IT administrators in each child account.
C. Create separate VPCs for each division within the corporate IT AWS account.
D. Use AWS Consolidated Billing by creating AWS Organizations to link the divisions' accounts to a parent corporate account.
E. Write all child AWS CloudTrail and Amazon CloudWatch logs to each child account’s Amazon S3 ‘Log’ bucket.
Answer: B, D
Since the resources need to be separated and a separate governance model is required for each division, it is better to have a separate AWS account for each division. Each division's AWS account can be linked to the parent corporate account for consolidated billing by creating an AWS Organization. The IT administrators can then be granted access via cross-account IAM roles.
For more information on consolidating billing, please visit the below URL:
http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/consolidated-billing.html
Question 467
Which of the following are use cases for Amazon DynamoDB? Choose 3 answers
A. Storing BLOB data.
B. Managing web sessions.
C. Storing JSON documents.
D. Storing metadata for Amazon S3 objects.
E. Running relational joins and complex updates.
F. Storing large amounts of infrequently accessed data
Answer: B, C, D
Amazon DynamoDB stores structured data, indexed by primary key, and allows low-latency read and write access to items ranging from 1 byte up to 400 KB. Amazon S3 stores unstructured blobs and is suited for storing large objects up to 5 TB. DynamoDB is a good choice for storing the metadata for a BLOB, such as name, date created, and owner, while the Binary Large OBject itself would be stored in S3.
For more information on Amazon Dynamo DB, please visit
https://aws.amazon.com/dynamodb/faqs/
Question 468
A company is storing data on Amazon Simple Storage Service (S3). The company’s security policy mandates that data is encrypted at rest. Which of the following methods can achieve this? (Choose three.)
A. Use Amazon S3 server-side encryption with AWS Key Management Service managed keys.
B. Use Amazon S3 server-side encryption with customer-provided keys.
C. Use Amazon S3 server-side encryption with EC2 key pair.
D. Use Amazon S3 bucket policies to restrict access to the data at rest.
E. Encrypt the data on the client-side before ingesting to Amazon S3 using their own master key.
F. Use SSL to encrypt the data while in transit to Amazon S3.
Answer: A, B, E
One can encrypt data in an S3 bucket using both server-side encryption and client-side encryption. The following techniques are available:
- Use Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)
- Use Server-Side Encryption with AWS KMS-Managed Keys (SSE-KMS)
- Use Server-Side Encryption with Customer-Provided Keys (SSE-C)
- Use Client-Side Encryption with an AWS KMS-Managed Customer Master Key (CMK)
- Use Client-Side Encryption Using a Client-Side Master Key
For more information on using encryption, please refer to the below URL:
http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingEncryption.html
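For illustration, a minimal boto3 sketch of the two server-side options named in the answer (the bucket name, keys, and KMS alias are placeholders):

    import boto3
    import os

    s3 = boto3.client('s3')

    # SSE-KMS: S3 encrypts the object with a KMS-managed key.
    s3.put_object(
        Bucket='my-bucket', Key='doc-kms.txt', Body=b'secret data',
        ServerSideEncryption='aws:kms',
        SSEKMSKeyId='alias/my-app-key'  # placeholder alias
    )

    # SSE-C: S3 encrypts with a key the customer supplies on each request.
    customer_key = os.urandom(32)  # 256-bit key managed by the customer
    s3.put_object(
        Bucket='my-bucket', Key='doc-ssec.txt', Body=b'secret data',
        SSECustomerAlgorithm='AES256',
        SSECustomerKey=customer_key
    )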
Question 469
You are using an m1.small EC2 Instance with one 300 GB EBS volume to host a relational database. You determined that write throughput to the database needs to be increased. Which of the following approaches can help achieve this? Choose 2 answers
A. Use an array of EBS volumes.
B. Enable Multi-AZ mode.
C. Place the instance in an Auto Scaling Groups
D. Add an EBS volume and place into RAID 5.
E. Increase the size of the EC2 Instance.
F. Put the database behind an Elastic Load Balancer.
Answer: A, E
The AWS Documentation mentions the following: with Amazon EBS, you can use any of the standard RAID configurations that you can use with a traditional bare-metal server, as long as that particular RAID configuration is supported by the operating system for your instance. This is because all RAID is accomplished at the software level.
For greater I/O performance than you can achieve with a single volume, RAID 0 can stripe multiple volumes together; for on-instance redundancy, RAID 1 can mirror two volumes together.
For more information on RAID configuration, please refer to the below URL:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/raid-config.html
In addition, moving to a larger instance type provides more compute and I/O capacity, which also helps increase write throughput.
For more information on Instance types, please refer to the below URL:
https://aws.amazon.com/ec2/instance-types/
Question 470
You are designing a social media site and are considering how to mitigate distributed denial-of-service (DDoS) attacks. Which of the below are viable mitigation techniques? Choose 3 answers from the options below
A. Add multiple elastic network interfaces (ENIs) to each EC2 instance to increase the network bandwidth.
B. Use dedicated instances to ensure that each instance has the maximum performance possible.
C. Use an Amazon CloudFront distribution for both static and dynamic content.
D. Use an Elastic Load Balancer with Auto Scaling groups at the web and app tiers, restricting direct Internet traffic to the Amazon Relational Database Service (RDS) tier.
E. Add Amazon CloudWatch alarms to look for high Network In and CPU utilization.
F. Create processes and capabilities to quickly add and remove rules to the instance OS firewall.
Answer: C, D, E
The AWS DDoS whitepaper describes the best-practice architecture for mitigating DDoS attacks, which includes CloudFront for content delivery, Elastic Load Balancing with Auto Scaling to absorb traffic spikes, and CloudWatch alarms for early detection.
For best practices against DDoS attacks, please visit the below link
https://d0.awsstatic.com/whitepapers/DDoS_White_Paper_June2015.pdf
Question 471
In AWS, which security aspects are the customer’s responsibility? Choose 4 answers
A. Security Group and ACL (Access Control List) settings
B. Decommissioning storage devices
C. Patch management on the EC2 instance’s operating system
D. Life-cycle management of IAM credentials
E. Controlling physical access to compute resources
F. Encryption of EBS (Elastic Block Storage) volumes
Answer: A, C, D, F
Please review the shared responsibility model published by AWS:
https://aws.amazon.com/compliance/shared-responsibility-model/
Question 472
A Solutions Architect is developing a document sharing application and needs a storage layer. The storage should provide automatic support for versioning so that users can easily roll back to a previous version or recover a deleted account. Which AWS service will meet the requirements?
A. Amazon S3
B. Amazon EBS
C. Amazon EFS
D. Amazon Storage Gateway VTL
Answer: A
Amazon S3 is a perfect storage layer for storing documents and other types of objects. Amazon S3 also offers versioning, which is enabled at the bucket level and can be used to recover prior versions of an object as well as objects that were deleted.
For more information on Amazon S3, please visit the following URL:
https://aws.amazon.com/s3/
Question 473
You have an application running in us-west-2 that requires 6 EC2 Instances running at all times. With 3 Availability Zones in that region (us-west-2a, us-west-2b, us-west-2c), which of the following deployments provides fault tolerance if any Availability Zone in us-west-2 becomes unavailable? Choose 2 answers from the options given below.
A. 2 EC2 Instances in us-west-2a, 2 EC2 Instances in us-west-2b, 2 EC2 Instances in us-west-2c
B. 3 EC2 Instances in us-west-2a, 3 EC2 Instances in us-west-2b, no EC2 Instances in us-west-2c
C. 4 EC2 Instances in us-west-2a, 2 EC2 Instances in us-west-2b, 2 EC2 Instances in us-west-2c
D. 6 EC2 Instances in us-west-2a, 6 EC2 Instances in us-west-2b, no EC2 Instances in us-west-2c
E. 3 EC2 Instances in us-west-2a, 3 EC2 Instances in us-west-2b, 3 EC2 Instances in us-west-2c
Answer: D, E
Option A is incorrect because if one AZ becomes unavailable, then you would only have 4 instances available which does not meet the requirement. Option B is incorrect because if either us-west-2a or us-west-2b becomes unavailable, then you would only have 3 instances available which does not meet the requirement. Option C is incorrect because if us-west-2a becomes unavailable, then you would only have 4 instances available which does not meet the requirement.
For more information on AWS Regions and Availability Zones, please visit the following URL:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.RegionsAndAvailabilityZones.html
Question 474
An application allows a manufacturing site to upload files. Each 3 GB file is then processed to extract metadata, with the processing taking a few seconds per file. The frequency of updates is unpredictable: there may be no updates for hours, then several files uploaded concurrently. Which architecture will address this workload the most cost efficiently?
A. Use a Kinesis data delivery stream to store the file and use Lambda for processing
B. Use an SQS queue to store the file, which is then accessed by a fleet of EC2 Instances.
C. Store the file in an EBS volume which can then be accessed by another EC2 Instance for processing.
D. Store the file in an S3 bucket and use Amazon S3 event notification to invoke a Lambda function to process the file
Answer: D
One can create a Lambda function which can contain the code to process the file. You can then use the Event notification from the S3 bucket to invoke the Lambda function whenever the file is uploaded.
For more information on Amazon S3 event notification, please visit the following URL:
https://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html
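This wiring can be sketched with boto3 as follows (the bucket name and function ARN are placeholders; the Lambda function must separately grant s3.amazonaws.com permission to invoke it):

    import boto3

    s3 = boto3.client('s3')

    # Invoke the Lambda function whenever a new object is created.
    s3.put_bucket_notification_configuration(
        Bucket='upload-bucket',  # placeholder
        NotificationConfiguration={
            'LambdaFunctionConfigurations': [{
                'LambdaFunctionArn':
                    'arn:aws:lambda:us-east-1:123456789012:function:extract-metadata',
                'Events': ['s3:ObjectCreated:*']
            }]
        }
    )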
Question 475
A company is migrating an on-premise 10TB MySQL database to AWS. The company expects the database to quadruple in size and the business requirement is that replica lag must be kept under 100 milliseconds. Which Amazon RDS engine meets these requirements?
A. MySQL
B. Microsoft SQL Server
C. Oracle
D. Amazon Aurora
Answer: D
The AWS Documentation confirms that Amazon Aurora supports the mentioned requirements. Amazon Aurora (Aurora) is a fully managed, MySQL- and PostgreSQL-compatible relational database engine. It combines the speed and reliability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. It delivers up to five times the throughput of MySQL and up to three times the throughput of PostgreSQL without requiring changes to most of your existing applications. All Aurora Replicas return the same data for query results with minimal replica lag, usually much less than 100 milliseconds after the primary instance has written an update.
For more information on AWS Aurora, please visit the following URL:
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Aurora.Overview.html
Question 476
For which of the following workloads should a Solutions Architect consider using Elastic Beanstalk? Choose 2 answers from the options given below.
A. A Web application using Amazon RDS
B. An Enterprise data warehouse
C. A long-running worker process
D. A static Website
E. A management task run once nightly
Answer: A, C
The AWS Documentation clearly mentions that Elastic Beanstalk can be used to create both web server environments and worker environments.
For more information on AWS Elastic beanstalk Web server environments, please visit the following URL: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/concepts-webserver.html
Question 477
An application with a 150 GB relational database runs on an EC2 Instance. The application is used infrequently, with small peaks in the morning and evening. What is the MOST cost effective storage type? Choose the correct answer.
A. Amazon EBS provisioned IOPS SSD
B. Amazon EBS Throughput Optimized HDD
C. Amazon EBS General Purpose SSD
D. Amazon EFS
Answer: C
Since the database is used infrequently rather than throughout the day, and the question asks for the MOST cost-effective storage type, EBS General Purpose SSD should be chosen over EBS Provisioned IOPS SSD.
For more information on AWS EBS Volumes, please visit the following URL:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumes.html
Question 478
An administrator runs a highly available application in AWS. The administrator needs a file storage layer that can be shared between instances and that makes scaling the platform easier. Which AWS service can perform this action?
A. Amazon EBS
B. Amazon EFS
C. Amazon S3
D. Amazon EC2 Instance store
Answer: B
The AWS Documentation mentions the following: Amazon EFS provides scalable file storage for use with Amazon EC2. You can create an EFS file system and configure your instances to mount the file system. You can use an EFS file system as a common data source for workloads and applications running on multiple instances.
For more information on AWS EFS, please visit the following URL:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEFS.html
Question 479
A company runs a service on AWS to provide offsite backups for images on laptops and phones. The solution must support millions of customers with thousands of images per customer. Images will be retrieved infrequently but must be available for retrieval immediately. Which is the MOST cost efficient storage option that meets these requirements?
A. Amazon Glacier with expedited retrievals
B. Amazon S3 Standard Infrequent Access
C. Amazon EFS
D. Amazon S3 Standard
Answer: B
Amazon S3 Standard - Infrequent Access is perfect if you want to store data that is not frequently accessed, and it is much more cost effective than Option D, Amazon S3 Standard. If you choose Amazon Glacier with expedited retrievals, you defeat the whole purpose of the requirement, because expedited retrievals increase cost and still do not provide immediate availability.
For more information on AWS Storage classes, please visit the following URL:
https://aws.amazon.com/s3/storage-classes/
Question 480
A Solutions Architect is designing a solution to store and archive corporate documents and has determined that Amazon Glacier is the right solution. Data must be delivered within 10 minutes of a retrieval request. Which feature in Amazon Glacier can help meet this requirement?
A. Vault Lock
B. Expedited retrieval
C. Bulk retrieval
D. Standard retrieval
Answer: B
The AWS Documentation mentions the following: expedited retrievals allow you to quickly access your data when occasional urgent requests for a subset of archives are required. Expedited retrievals typically complete within 1-5 minutes, which meets the 10-minute requirement.
For more information on AWS Glacier retrievals, please visit the following URL:
https://docs.aws.amazon.com/amazonglacier/latest/dev/downloading-an-archive-two-steps.html
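A hedged boto3 sketch of requesting an expedited retrieval (the vault name and archive ID are placeholders):

    import boto3

    glacier = boto3.client('glacier')

    # Initiate an archive retrieval job using the Expedited tier.
    job = glacier.initiate_job(
        accountId='-',               # '-' means the caller's own account
        vaultName='corporate-docs',  # placeholder
        jobParameters={
            'Type': 'archive-retrieval',
            'ArchiveId': 'EXAMPLE_ARCHIVE_ID',  # placeholder
            'Tier': 'Expedited'
        }
    )
    print(job['jobId'])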
Question 481
A data processing application in AWS must pull data from an Internet service. A Solutions Architect must design a highly available solution to access data without placing bandwidth constraints on the application traffic. Which solution meets these requirements?
A. Launch a NAT gateway and add routes for 0.0.0.0/0
B. Attach a VPC endpoint and add routes for 0.0.0.0/0
C. Attach an Internet gateway and add routes for 0.0.0.0/0
D. Deploy NAT instances in a public subnet and add routes for 0.0.0.0/0
Answer: C
The AWS Documentation mentions the following: an Internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between instances in your VPC and the Internet. It therefore imposes no availability risks or bandwidth constraints on your network traffic.
For more information on the Internet gateway, please visit the following URL:
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Internet_Gateway.html
Question 482
In reviewing the Auto Scaling events for your application, you notice that your application is scaling up and down multiple times in the same hour. What design choices could you make to optimize for cost while preserving elasticity? Choose 2 answers from the options given below.
A. Modify the Auto Scaling group termination policy to terminate the oldest instance first
B. Modify the Auto Scaling group termination policy to terminate the newest instance first
C. Modify the Auto Scaling group cooldown timers
D. Modify the Auto Scaling group to use scheduled scaling actions
E. Modify the CloudWatch alarm period that triggers your Auto Scaling scale-down policy
Answer: C, E
One of the main reasons for this thrashing is that not enough time is being given for each scaling activity to take effect and for the infrastructure to stabilize afterwards. This can be addressed by increasing the Auto Scaling group cooldown timers.
For more information on Autoscaling cool down, please visit the following URL:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/Cooldown.html
Another reason is that the CloudWatch alarm period that triggers the scale-down policy may be too short; lengthening the alarm period prevents a brief dip in load from triggering an unnecessary scale-down.
For more information on Autoscaling dynamic scaling, please visit the following URL:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scale-based-on-demand.html
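A boto3 sketch of both adjustments (the group name, policy ARN, and thresholds are placeholder values chosen for illustration):

    import boto3

    autoscaling = boto3.client('autoscaling')
    cloudwatch = boto3.client('cloudwatch')

    # Give the group more time to stabilize between scaling activities.
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName='web-asg',  # placeholder group name
        DefaultCooldown=600              # seconds
    )

    # Require a longer sustained dip in CPU before scaling down.
    scale_down_policy_arn = 'arn:aws:autoscaling:region:acct:scalingPolicy/placeholder'
    cloudwatch.put_metric_alarm(
        AlarmName='web-asg-scale-down',
        Namespace='AWS/EC2', MetricName='CPUUtilization',
        Dimensions=[{'Name': 'AutoScalingGroupName', 'Value': 'web-asg'}],
        Statistic='Average',
        Period=300, EvaluationPeriods=3,  # 15 minutes below threshold
        Threshold=30.0, ComparisonOperator='LessThanThreshold',
        AlarmActions=[scale_down_policy_arn]
    )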
Question 483
A company hosts a popular web application that connects to an Amazon RDS MySQL DB instance running in a private VPC subnet that was created with default ACL settings. The web servers must be accessible only to customers on an SSL connection, and the database should be accessible only to web servers in a public subnet. Which solutions meet these requirements without impacting other running applications? Select 2 answers from the options given below.
A. Create a network ACL on the web server’s subnets, allow HTTPS port 443 inbound and specify the source as 0.0.0.0/0
B. Create a web server security group that allows HTTPS port 443 inbound traffic from anywhere (0.0.0.0/0) and apply it to the web servers.
C. Create a DB server security group that allows MySQL port 3306 inbound and specify the source as the web server security group
D. Create a network ACL on the DB subnet, allow MySQL port 3306 inbound for web servers and deny all outbound traffic.
E. Create a DB Server security groups that allows the HTTPS port 443 inbound and specify the source as a web server security group
Answer: B, C
This sort of setup is described in the AWS documentation.
1) To ensure that secure traffic can flow into your web servers from anywhere, allow inbound traffic on port 443 in the web server security group.
2) To ensure that traffic can flow from the web servers to the database, allow inbound MySQL traffic on the database security group with the web server security group as the source. The AWS Documentation provides security group rule tables that relate to the same requirements as this question.
For more information on this use case scenario, please visit the following URL:
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario2.html
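A boto3 sketch of the two rules (the security group IDs are placeholders):

    import boto3

    ec2 = boto3.client('ec2')

    # Web tier: allow HTTPS from anywhere.
    ec2.authorize_security_group_ingress(
        GroupId='sg-web111111',  # placeholder web server SG
        IpPermissions=[{
            'IpProtocol': 'tcp', 'FromPort': 443, 'ToPort': 443,
            'IpRanges': [{'CidrIp': '0.0.0.0/0'}]
        }]
    )

    # DB tier: allow MySQL only from the web server security group.
    ec2.authorize_security_group_ingress(
        GroupId='sg-db222222',  # placeholder DB SG
        IpPermissions=[{
            'IpProtocol': 'tcp', 'FromPort': 3306, 'ToPort': 3306,
            'UserIdGroupPairs': [{'GroupId': 'sg-web111111'}]
        }]
    )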
Question 484
An application will read and write objects to an S3 bucket. When the application is fully deployed, the read/write traffic will be very high. How should the architect maximize Amazon S3 performance?
A. Prefix each object name with a random string
B. Use the STANDARD _IA storage class
C. Prefix each object name with the current date
D. Enable versioning on the S3 bucket
Answer: A
If the request rate is high, you can prefix object names with hash keys or random strings. In such a case, the partitions used to store the objects are better distributed, allowing better read/write performance for your objects.
For more information on how to ensure performance in S3, please visit the following URL:
https://docs.aws.amazon.com/AmazonS3/latest/dev/request-rate-perf-considerations.html
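A small illustration of the idea, using a hypothetical helper (not from the original text): derive a short hash from the object name and use it as the key prefix so that writes spread across S3's index partitions:

    import hashlib

    def randomized_key(name):
        # The first 4 hex chars of an MD5 hash give 65,536 possible prefixes.
        prefix = hashlib.md5(name.encode()).hexdigest()[:4]
        return '%s-%s' % (prefix, name)

    print(randomized_key('2017-10-31/photo-0001.jpg'))
    # e.g. 'a1b2-2017-10-31/photo-0001.jpg'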
Question 485
You are deploying an application on Amazon EC2 that must call AWS APIs. Which method of securely passing credentials to the application should you use?
A. Pass API credentials to the instance using instance userdata
B. Store API credentials as an object in Amazon S3
C. Embed the API credentials into your application
D. Assign IAM roles to the EC2 Instances
Answer: D
The AWS Documentation mentions the following: you can use roles to delegate access to users, applications, or services that don't normally have access to your AWS resources. It is not a best practice to embed IAM credentials in any production application; it is always a good practice to use IAM roles instead.
For more information on IAM Roles, please visit the following URL:
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html
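The practical effect, sketched with boto3: code running on an instance launched with an instance profile needs no embedded keys, because the SDK fetches temporary credentials from the instance metadata service automatically:

    import boto3

    # No access keys appear in the code or on disk. On an EC2 instance
    # launched with an IAM role (instance profile), boto3 resolves
    # temporary credentials from instance metadata automatically.
    s3 = boto3.client('s3')
    for bucket in s3.list_buckets()['Buckets']:
        print(bucket['Name'])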
Question 486
A website runs on EC2 Instances behind an ELB Application Load Balancer. The instances run in an AutoScaling Group across multiple Availability Zones. The instances deliver several large files that are stored on a shared Amazon EFS file system. The company needs to avoid serving the files from EC2 Instances every time a user requests these digital assets. What should the company do to improve the user experience of the web site?
A. Move the digital assets to Amazon Glacier
B. Cache static content using Cloudfront
C. Resize the images so that they are smaller
D. Use reserved EC2 Instances
Answer: B
The AWS Documentation mentions the following on the benefits of using CloudFront: Amazon CloudFront is a web service that speeds up distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users. CloudFront delivers your content through a worldwide network of data centers called edge locations. When a user requests content that you're serving with CloudFront, the user is routed to the edge location that provides the lowest latency (time delay), so that content is delivered with the best possible performance. If the content is already in the edge location with the lowest latency, CloudFront delivers it immediately.
For more information on AWS Cloudfront, please visit the following URL:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html
Question 487
A Solutions Architect is designing a highly scalable system to track records. Records must remain available for immediate download for three months and then the records must be deleted. What is the most appropriate decision for this use case?
A. Store the files in Amazon EBS and create a lifecycle policy to remove the files after 3 months.
B. Store the files in Amazon S3 and create a lifecycle policy to remove the files after 3 months.
C. Store the files in Amazon Glacier and create a lifecycle policy to remove the files after 3 months.
D. Store the files in Amazon EFS and create a lifecycle policy to remove the files after 3 months.
Answer: B
Option A is invalid since the records need to be stored in a highly scalable system
Option C is invalid since the records must be available for immediate download
Option D is invalid because it does not have the concept of the lifecycle policy
The AWS Documentation mentions the following on lifecycle policies: lifecycle configuration enables you to specify the lifecycle management of objects in a bucket.
The configuration is a set of one or more rules, where each rule defines an action for Amazon S3 to apply to a group of objects. These actions can be classified as follows:
- Transition actions: In which you define when objects transition to another storage class. For example, you may choose to transition objects to the STANDARD_IA (IA, for infrequent access) storage class 30 days after creation, or archive objects to the GLACIER storage class one year after creation.
- Expiration actions: In which you specify when the objects expire. Then Amazon S3 deletes the expired objects on your behalf.
For more information on AWS S3 Lifecycle policies, please visit the following URL:
https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html
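A boto3 sketch of an expiration-only rule for this use case (the bucket name is a placeholder; three months is approximated as 90 days):

    import boto3

    s3 = boto3.client('s3')

    # Delete objects 90 days after creation.
    s3.put_bucket_lifecycle_configuration(
        Bucket='records-bucket',  # placeholder
        LifecycleConfiguration={
            'Rules': [{
                'ID': 'expire-after-90-days',
                'Filter': {'Prefix': ''},   # apply to every object
                'Status': 'Enabled',
                'Expiration': {'Days': 90}
            }]
        }
    )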
Question 488
A consulting firm repeatedly builds large architectures for its customers using AWS resources from many AWS services, including IAM, Amazon EC2, Amazon RDS, DynamoDB, and Amazon VPC. The consultants have architecture diagrams for each of their architectures and are frustrated that they cannot use them to automatically create their resources. Which service would provide immediate benefits to the organization?
A. AWS Beanstalk
B. AWS Cloudformation
C. AWS CodeBuild
D. AWS CodeDeploy
Answer: B
The AWS Documentation mentions the below on AWS CloudFormation, which addresses the consultants' need to turn their architecture diagrams into repeatable templates. AWS CloudFormation is a service that helps you model and set up your Amazon Web Services resources so that you can spend less time managing those resources and more time focusing on your applications that run in AWS. You create a template that describes all the AWS resources that you want (like Amazon EC2 instances or Amazon RDS DB instances), and AWS CloudFormation takes care of provisioning and configuring those resources for you.
For more information on AWS Cloudformation, please visit the following URL:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html
Question 489
The security policy of an organization requires an application to encrypt data before writing to the disk. Which solution should the organization use to meet this requirement?
A. AWS KMS API
B. AWS Certificate Manager
C. API Gateway with STS
D. IAM Access Key
Answer: A
Option B is incorrect - AWS Certificate Manager can be used to generate SSL certificates that encrypt traffic in transit, but not at rest.
Option C is incorrect - STS with API Gateway is used for issuing temporary credentials for traffic in transit, not for encryption at rest.
Option D is incorrect - IAM access keys are used for programmatic access to AWS APIs, not for encrypting data on disk.
The AWS Documentation mentions the following on AWS KMS: AWS Key Management Service (AWS KMS) is a managed service that makes it easy for you to create and control the encryption keys used to encrypt your data. AWS KMS is integrated with other AWS services including Amazon Elastic Block Store (Amazon EBS), Amazon Simple Storage Service (Amazon S3), Amazon Redshift, Amazon Elastic Transcoder, Amazon WorkMail, Amazon Relational Database Service (Amazon RDS), and others to make it simple to encrypt your data with encryption keys that you manage.
For more information on AWS KMS, please visit the following URL:
https://docs.aws.amazon.com/kms/latest/developerguide/overview.html
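A boto3 sketch of the envelope-encryption pattern an application would typically use with the KMS API before writing data to disk (the key alias is a placeholder):

    import boto3

    kms = boto3.client('kms')

    # Ask KMS for a data key: Plaintext is used locally to encrypt the
    # data, CiphertextBlob is stored alongside it for later decryption.
    data_key = kms.generate_data_key(
        KeyId='alias/app-disk-key',  # placeholder alias
        KeySpec='AES_256'
    )
    plaintext_key = data_key['Plaintext']       # use with a local cipher
    encrypted_key = data_key['CiphertextBlob']  # persist with the data

    # For small payloads (up to 4 KB), KMS can also encrypt directly:
    blob = kms.encrypt(KeyId='alias/app-disk-key', Plaintext=b'secret')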
Question 490
An application currently stores all data on Amazon EBS Volumes. All EBS volumes must be backed up durably across multiple Availability Zones. What is the MOST resilient way to backup the volumes?
A. Take regular EBS snapshots
B. Enable EBS volume encryption
C. Create a script to copy data to an EC2 Instance store
D. Mirror data across 2 EBS volumes
Answer: A
Option B is incorrect because encryption does not improve the durability of EBS volumes.
Option C is incorrect since EC2 Instance stores are not durable.
Option D is incorrect since mirroring data across EBS volumes is inefficient when EBS snapshots are already available.
The AWS Documentation mentions the following on EBS snapshots: you can back up the data on your Amazon EBS volumes to Amazon S3 by taking point-in-time snapshots. Snapshots are incremental backups, which means that only the blocks on the device that have changed after your most recent snapshot are saved. This minimizes the time required to create the snapshot and saves on storage costs by not duplicating data. When you delete a snapshot, only the data unique to that snapshot is removed. Each snapshot contains all of the information needed to restore your data (from the moment when the snapshot was taken) to a new EBS volume.
For more information on AWS EBS Snapshots, please visit the following URL:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSSnapshots.html
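A minimal boto3 sketch (the volume ID is a placeholder); because snapshots are stored in S3, they are durable across all Availability Zones in the region:

    import boto3

    ec2 = boto3.client('ec2')

    snapshot = ec2.create_snapshot(
        VolumeId='vol-0123456789abcdef0',  # placeholder
        Description='Nightly backup'
    )
    print(snapshot['SnapshotId'])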
Question 491
A retailer exports data from its transactional databases daily into an S3 bucket. The retailer’s data warehousing team wants to import that data into an existing Amazon Redshift cluster in their VPC. Corporate security policy mandates that the data can only be transported within a VPC. What combination of the following steps will satisfy the security policy? Choose 2 answers from the options given below
A. Enable Amazon Redshift Enhanced VPC routing
B. Create a cluster security group to allow the Amazon Redshift cluster to access Amazon S3
C. Create a NAT gateway in a public subnet to allow the Amazon Redshift cluster to access Amazon S3.
D. Create and configure an Amazon S3 VPC endpoint.
E. Setup a NAT gateway in a private subnet to allow the Amazon Redshift cluster to Access Amazon S3
Answer: A, D
The security policy mandates that the data be transported only within a VPC, so sending it through a NAT gateway to the public S3 endpoint over the Internet would violate the policy. Amazon Redshift Enhanced VPC Routing forces COPY and UNLOAD traffic between the cluster and Amazon S3 through the VPC, and an Amazon S3 VPC endpoint keeps that traffic on the Amazon network. The AWS Documentation notes that a NAT gateway merely enables instances in a private subnet to connect to the Internet or other AWS services while preventing the Internet from initiating connections to those instances; the traffic still leaves the VPC.
For more information on AWS NAT Gateway, please visit the following URL:
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-nat-gateway.html
A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network.
For more information on AWS VPC endpoints, please visit the following URL:
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-endpoints.html
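A boto3 sketch of both steps (the VPC, route table, region, and cluster identifiers are placeholders):

    import boto3

    ec2 = boto3.client('ec2')
    redshift = boto3.client('redshift')

    # Gateway VPC endpoint so S3 traffic stays on the Amazon network.
    ec2.create_vpc_endpoint(
        VpcId='vpc-11111111',                      # placeholder
        ServiceName='com.amazonaws.us-west-2.s3',  # match your region
        RouteTableIds=['rtb-22222222']             # placeholder
    )

    # Force COPY/UNLOAD traffic through the VPC.
    redshift.modify_cluster(
        ClusterIdentifier='dw-cluster',  # placeholder
        EnhancedVpcRouting=True
    )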
Question 492
A team is building an application that must persist and index JSON files in a highly available data store. Latency of data access must remain consistent despite very high application traffic. Which services should the team choose?
A. Amazon EFS
B. Amazon Redshift
C. DynamoDB
D. AWS Cloudformation
Answer: C
The AWS Documentation mentions the following on DynamoDB: Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. DynamoDB supports storing, querying, and indexing JSON documents, and hence is the perfect data store for the requirement in the question.
For more information on AWS DynamoDB, please visit the following URL:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html
Question 493
An organization hosts a multi-language website on AWS. The website is served using CloudFront. The language is specified in the HTTP request:
http://d11111f8.cloudfront.net/main.html?language=de
http://d11111f8.cloudfront.net/main.html?language=en
http://d11111f8.cloudfront.net/main.html?language=es
How should AWS CloudFront be configured to deliver cached data in the correct language?
A. Forward cookies to the origin
B. Based on query string parameters
C. Cache objects at the origin
D. Serve dynamic content
Answer: B
Since the language is specified in a query string parameter, CloudFront should be configured to forward query strings to the origin and to cache based on the query string parameter.
For more information on configuring CloudFront with query string parameters, please visit the following URL:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/QueryStringParameters.html
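As a sketch, the relevant forwarding portion of a cache behavior is shown below as the configuration dictionary the CloudFront API accepts (a full distribution config has many more required fields; the parameter name 'language' comes from the question):

    # Cache behavior fragment: forward query strings to the origin and
    # vary the cache on the 'language' parameter only.
    cache_behavior_fragment = {
        'ForwardedValues': {
            'QueryString': True,
            'QueryStringCacheKeys': {
                'Quantity': 1,
                'Items': ['language']
            },
            'Cookies': {'Forward': 'none'},
            'Headers': {'Quantity': 0}
        }
    }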
Question 494
A Solutions Architect is designing a web page for event registrations and needs a managed service to send a text message to users every time users sign up for an event. Which AWS Service should the Architect use to achieve this?
A. Amazon STS
B. Amazon SQS
C. AWS Lambda
D. Amazon SNS
Answer: D
The AWS Documentation mentions the following: you can use Amazon SNS to send text messages, or SMS messages, to SMS-enabled devices. You can send a message directly to a phone number, or you can send a message to multiple phone numbers at once by subscribing those phone numbers to a topic and sending your message to the topic.
For more information on configuring SNS and SMS messages, please visit the following URL:
https://docs.aws.amazon.com/sns/latest/dg/SMSMessages.html
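A minimal boto3 sketch of sending a text message directly to a phone number (the number is a placeholder in E.164 format):

    import boto3

    sns = boto3.client('sns')

    sns.publish(
        PhoneNumber='+14155550100',  # placeholder
        Message='Thanks for signing up for the event!'
    )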
Question 495
A Solutions Architect is designing a shared service for hosting containers from several customers on Amazon ECS. These containers will use several AWS services, and a container from one customer must not be able to access data from another customer. Which solution should the architect use to meet these requirements?
A. IAM roles for tasks
B. IAM roles for EC2 Instances
C. IAM Instance profile for EC2 Instances
D. Security Group rules
Answer: A
The AWS Documentation mentions the following: with IAM roles for Amazon ECS tasks, you can specify an IAM role that can be used by the containers in a task. Applications must sign their AWS API requests with AWS credentials, and this feature provides a strategy for managing credentials for your applications to use, similar to the way that Amazon EC2 instance profiles provide credentials to EC2 instances.
For more information on configuring IAM Roles for tasks in ECS, please visit the following URL:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html
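A sketch of attaching a per-customer task role when registering a task definition (the role ARN, image, and names are placeholders):

    import boto3

    ecs = boto3.client('ecs')

    ecs.register_task_definition(
        family='customer-a-app',  # placeholder
        # Containers in this task can only assume customer A's role.
        taskRoleArn='arn:aws:iam::123456789012:role/customer-a-task-role',
        containerDefinitions=[{
            'name': 'app',
            'image': '123456789012.dkr.ecr.us-east-1.amazonaws.com/app:latest',
            'memory': 512
        }]
    )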
Question 496
A company is generating large datasets with millions of rows that must be summarized by column. Existing business intelligence tools will be used to build daily reports. Which storage service meets the requirements?
A. Amazon Redshift
B. Amazon RDS
C. ElastiCache
D. DynamoDB
Answer: A
The AWS Documentation mentions the following: Amazon Redshift is a fully managed, petabyte-scale data warehouse service in the cloud. You can start with just a few hundred gigabytes of data and scale to a petabyte or more. This enables you to use your data to acquire new insights for your business and customers.
For more information on AWS Redshift, please visit the following URL:
https://docs.aws.amazon.com/redshift/latest/mgmt/welcome.html
Question 497
A company is developing a web application that will be hosted in AWS. The application needs a data store for session data. As an AWS Solutions Architect, which of the following would you recommend for this requirement? Choose 2 answers from the options given below.
A. CloudWatch
B. DynamoDB
C. Elastic Load Balancing
D. ElastiCache
E. Storage Gateway
Answer: B, D
DynamoDB and ElastiCache are the perfect options for storing session data. The AWS Documentation mentions the following on these services: Amazon DynamoDB is a fast and flexible NoSQL database service for all applications that need consistent, single-digit millisecond latency at any scale. It is a fully managed cloud database and supports both document and key-value store models. Its flexible data model, reliable performance, and automatic scaling of throughput capacity make it a great fit for mobile, web, gaming, ad tech, IoT, and many other applications.
For more information on AWS DynamoDB, please visit the following URL:
https://aws.amazon.com/dynamodb/
ElastiCache is a web service that makes it easy to set up, manage, and scale a distributed in-memory data store or cache environment in the cloud. It provides a high-performance, scalable, and cost-effective caching solution, while removing the complexity associated with deploying and managing a distributed cache environment.
For more information on AWS Elasticache, please visit the following URL:
https://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/WhatIs.html
Question 498
A company needs to store images that are uploaded by users via a mobile application. There is also a need to ensure that there is a security measure in place to protect against users accidentally deleting images. Which action will protect against unintended user actions?
A. Store data in an EBS volume and create snapshots once a week.
B. Store data in an S3 bucket and enable versioning.
C. Store data in two S3 buckets in different AWS regions.
D. Store data on EC2 instance storage
Answer: B
Amazon S3 offers versioning, which is enabled at the bucket level and can be used to recover prior versions of an object, including objects that were accidentally deleted.
For more information on AWS S3 versioning, please visit the following URL:
https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html
Question 499
An application needs to have a data store hosted in AWS. The following requirements are in place for the data store
a) Ability to have an initial storage of 8 TB
b) The database will grow by 8 GB every day.
c) The ability to have 4 read replicas
Which of the following data stores would you choose for this requirement?
A. DynamoDB
B. Amazon S3
C. Amazon Aurora
D. Amazon Redshift
Answer: C
Amazon Aurora meets all of the requirements. Aurora storage starts small and scales automatically well beyond the initial 8 TB, so it easily absorbs 8 GB of growth per day, and Aurora supports up to 15 low-latency read replicas, which covers the requirement for 4 read replicas. Amazon Redshift is a data warehouse intended for analytics and does not offer read replicas, DynamoDB does not provide read replicas, and Amazon S3 is object storage rather than a database.
For more information on Amazon Aurora, please visit the following URL:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Aurora.Overview.html
Question 500
There is a requirement to host a database on an EC2 Instance. There is a requirement for the EBS volume to support 12,000 IOPS. Which Amazon EBS volume type can meet the performance requirements of this database?
A. EBS Provisioned IOPS SSD
B. EBS Throughput Optimized HDD
C. EBS General Purpose SSD
D. EBS Cold HDD
Answer: A
Since there is a high-performance requirement of 12,000 IOPS, one needs to opt for EBS Provisioned IOPS SSD. The AWS Documentation recommends Provisioned IOPS volumes for critical, I/O-intensive database workloads that need sustained IOPS performance beyond what General Purpose SSD volumes provide.
For more information on AWS EBS Volume types, please visit the following URL:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html
Question 501
Development teams in your organization use S3 buckets to store log files for various applications hosted in development environments in AWS. The developers want to keep the logs for one month for troubleshooting purposes and then purge them. Which feature will enable this requirement?
A. Adding a bucket policy on the S3 bucket.
B. Configuring lifecycle configuration rules on the S3 bucket.
C. Creating an IAM policy for the S3 bucket.
D. Enabling CORS on the S3 bucket.
Answer: B
The AWS Documentation mentions the following on lifecycle policies: lifecycle configuration enables you to specify the lifecycle management of objects in a bucket. The configuration is a set of one or more rules, where each rule defines an action for Amazon S3 to apply to a group of objects. These actions can be classified as follows:
- Transition actions: In which you define when objects transition to another storage class. For example, you may choose to transition objects to the STANDARD_IA (IA, for infrequent access) storage class 30 days after creation, or archive objects to the GLACIER storage class one year after creation.
- Expiration actions: In which you specify when the objects expire. Then Amazon S3 deletes the expired objects on your behalf.
For more information on AWS S3 Lifecycle policies, please visit the following URL:
https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html
Question 502
A legacy application needs a proprietary file system. Which of the following can be used to store data that can be accessed by an EC2 Instance?
A. AWS EBS Volumes
B. AWS S3
C. AWS Glacier
D. AWS EFS
Answer: D
The AWS Documentation mentions the following: Amazon Elastic File System (Amazon EFS) provides simple, scalable file storage for use with Amazon EC2 instances in the AWS Cloud. Amazon EFS is easy to use and offers a simple interface that allows you to create and configure file systems quickly and easily. With Amazon EFS, storage capacity is elastic, growing and shrinking automatically as you add and remove files, so your applications have the storage they need, when they need it. When mounted on Amazon EC2 instances, an Amazon EFS file system provides a standard file system
interface and file system access semantics, allowing you to seamlessly integrate Amazon EFS with your existing applications and tools. Multiple Amazon EC2 instances can access an Amazon EFS file system at the same time, allowing Amazon EFS to provide a common data source for workloads and applications running on more than one Amazon EC2 instance.
For more information on AWS EFS, please visit the following URL:
https://aws.amazon.com/efs/
Question 503
Which of the following can be used to host an application that uses NGINX and can be scaled at any point in time?
A. AWS EC2
B. AWS Elastic Beanstalk
C. AWS SQS
D. AWS ELB
Answer: B
The AWS Documentation lists the platforms available for Elastic Beanstalk web server environments, which show that NGINX-based servers can be provisioned via the Elastic Beanstalk service.
For more information on the supported platforms for AWS Elastic Beanstalk, please visit the following URL:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/concepts.platforms.html
Question 504
There is a requirement to upload a million images to S3. Which of the following can be used to ensure optimal performance?
A. Use a sequential ID for the prefix
B. Use a hexadecimal hash for the prefix
C. Use a hexadecimal hash for the suffix
D. Use a sequential ID for the suffix
Answer: B
This recommendation for increasing performance under a high request rate in S3 is given in the AWS documentation: a hexadecimal hash prefix introduces randomness at the start of the key name, so objects are distributed evenly across S3's index partitions.
For more information on S3 performance considerations, please visit the following URL:
https://docs.aws.amazon.com/AmazonS3/latest/dev/request-rate-perf-considerations.html
Question 505
There is a requirement to record the IP addresses of traffic going to and from resources in a private subnet. Which of the following can be used?
A. Trusted Advisor
B. VPC Flow Logs
C. Use Cloudwatch metrics
D. Use Cloudtrail
Answer: B
The AWS Documentation mentions the following: VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC. Flow log data is stored using Amazon CloudWatch Logs. After you've created a flow log, you can view and retrieve its data in Amazon CloudWatch Logs.
For more information on VPC Flow Logs, please visit the following URL:
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/flow-logs.html
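A boto3 sketch of enabling flow logs on a private subnet and sending them to CloudWatch Logs (the subnet ID, log group, and role ARN are placeholders):

    import boto3

    ec2 = boto3.client('ec2')

    ec2.create_flow_logs(
        ResourceIds=['subnet-0123456789abcdef0'],  # placeholder subnet
        ResourceType='Subnet',
        TrafficType='ALL',  # ACCEPT, REJECT, or ALL
        LogGroupName='private-subnet-flow-logs',
        # Role that lets the flow logs service write to CloudWatch Logs.
        DeliverLogsPermissionArn='arn:aws:iam::123456789012:role/flow-logs-role'
    )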
Question 506
There is a requirement for 500 messages to be sent and processed in order. Which service can be used in this regard?
A. AWS SQS
B. AWS SNS
C. AWS Config
D. AWS ELB
Answer: A
One can use SQS FIFO queues for this purpose. The AWS Documentation mentions the following on SQS FIFO queues: Amazon SQS is a reliable and highly scalable managed message queue service for storing messages in transit between application components. FIFO queues complement the existing Amazon SQS standard queues, which offer high throughput, best-effort ordering, and at-least-once delivery. FIFO queues have essentially the same features as standard queues, but provide the added benefits of supporting ordering and exactly-once processing. FIFO queues provide additional features that help prevent unintentional duplicates from being sent by message producers or from being received by message consumers. Additionally, message groups allow multiple separate ordered message streams within the same queue.
For more information on SQS FIFO Queues, please visit the following URL:
https://aws.amazon.com/about-aws/whats-new/2016/11/amazon-sqs-introduces-fifo-queues-with-exactly-once-processing-and-lower-prices-for-standard-queues/
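A boto3 sketch of creating a FIFO queue and sending ordered messages (the queue and group names are placeholders):

    import boto3

    sqs = boto3.client('sqs')

    # FIFO queue names must end in '.fifo'.
    queue = sqs.create_queue(
        QueueName='orders.fifo',
        Attributes={'FifoQueue': 'true',
                    'ContentBasedDeduplication': 'true'}
    )

    # Messages sharing a MessageGroupId are delivered in order.
    for i in range(500):
        sqs.send_message(
            QueueUrl=queue['QueueUrl'],
            MessageBody='message %d' % i,
            MessageGroupId='order-processing'
        )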
Question 507
There is a requirement for a database for a two-tier application. The data would go through multiple schema changes. The database needs to be durable, and changes to the schema should not result in downtime for the database. Which of the following is the best option for data storage?
A. AWS S3
B. AWS Redshift
C. AWS DynamoDB
D. AWS Aurora
Answer: C
AWS DynamoDB is a schema-less database and hence is ideal if you have multiple schema changes. It is also durable. Option A is incorrect because S3 is an object storage service, not a database. Option B is incorrect because Redshift is a data warehousing solution. Option D is incorrect because Aurora is a relational database that requires a defined schema, making frequent schema changes a poor fit.
For more information on AWS DynamoDB, please visit the following URL:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html
Question 508
A Redshift cluster currently contains 60 TB of data. There is a requirement to put a disaster recovery site in place in a region located 600 km away. Which of the following solutions would help fulfill this requirement?
A. Take a copy of the underlying EBS volumes to S3 and then do cross region replication
B. Enable cross region snapshots for the Redshift Cluster
C. Create a Cloudformation template to restore the Cluster in another region
D. Enable cross availability zone snapshots for the Redshift Cluster
Answer: B
Amazon Redshift supports cross-region snapshots, which enable a cluster's snapshots to be copied to, and the cluster restored in, a different region for disaster recovery.
For more information on managing Redshift snapshots, please visit the following URL:
https://docs.aws.amazon.com/redshift/latest/mgmt/managing-snapshots-console.html
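A boto3 sketch of enabling automated cross-region snapshot copy (the cluster identifier and destination region are placeholders):

    import boto3

    redshift = boto3.client('redshift')

    redshift.enable_snapshot_copy(
        ClusterIdentifier='dw-cluster',  # placeholder
        DestinationRegion='us-east-1',   # DR region 600 km away
        RetentionPeriod=7                # days to keep copied snapshots
    )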
Question 509
A company is using a Redshift cluster for its data warehouse. There is a requirement from the internal IT security team to ensure that data at rest is encrypted for the Redshift database. How can this be achieved?
A. Encrypt the EBS volumes of the underlying EC2 Instances
B. Use AWS KMS Customer Default master key
C. Use SSL/TLS for encrypting the data
D. Use S3 Encryption
Answer: B
The AWS Documentation mentions the following: Amazon Redshift uses a hierarchy of encryption keys to encrypt the database. You can use either AWS Key Management Service (AWS KMS) or a hardware security module (HSM) to manage the top-level encryption keys in this hierarchy. The process that Amazon Redshift uses for encryption differs depending on how you manage keys.
For more information on Redshift encryption, please visit the following URL:
https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-db-encryption.html
Question 510
There is a requirement for block-level storage that can store 500 GB of data. Encryption of the data is also required. Which of the following can be used in such a case?
A. AWS EBS Volumes
B. AWS S3
C. AWS Glacier
D. AWS EFS
Answer: A
When you need block-level storage, you need to consider EBS volumes.
Options B and C are incorrect since they are object-level storage, and option D is incorrect since EFS is file-level storage.
For more information on EBS volumes, please visit the following URL:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumes.html
Question 511
An application requires an EC2 Instance to perform continuous batch processing that needs at least 500 MiB/s of data throughput. Which of the following is the best storage option for this?
A. EBS IOPS
B. EBS SSD
C. EBS Throughput Optimized
D. EBS Cold Storage
Answer: C
When you are considering storage volume types for batch processing activities that need high throughput, consider using the EBS Throughput Optimized volume type.
This is also mentioned in the AWS Documentation For more information on EBS volume types, please visit the following URL:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html
Question 512
An application needs to access data in another AWS account in the same region. Which of the following can be used to ensure the data can be accessed as required?
A. Establish a NAT instance between both accounts
B. Use a VPN between both accounts
C. Use a NAT gateway between both accounts
D. Use VPC Peering between both accounts
Answer: D
Options A and C are incorrect because you normally use these options when you want private resources to access the Internet.
Option B is incorrect since the resources are in the same region, so you don’t need a VPN connection.
The AWS Documentation mentions the following about VPC Peering: A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them privately. Instances in either VPC can communicate with each other as if they are within the same network. You can create a VPC peering connection between your own VPCs, with a VPC in another AWS account, or with a VPC in a different AWS Region.
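As an illustrative sketch, a peering connection between two accounts could be requested with boto3 as below (all VPC and account IDs are placeholder assumptions):

```python
import boto3

ec2 = boto3.client('ec2')

# Request a peering connection to a VPC in another account
peering = ec2.create_vpc_peering_connection(
    VpcId='vpc-11111111',        # requester VPC in this account
    PeerVpcId='vpc-22222222',    # accepter VPC in the other account
    PeerOwnerId='123456789012'   # the other account's ID
)

# The owner of the peer VPC must then accept the request (run in their account):
# ec2.accept_vpc_peering_connection(
#     VpcPeeringConnectionId=peering['VpcPeeringConnection']['VpcPeeringConnectionId'])
```

Note that route tables on both sides still need routes pointing at the peering connection before traffic can flow.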
For more information on VPC Peering, please visit the following URL:
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-peering.html
Question 513
An application currently uses a NAT instance and now wants to use a NAT gateway. Which of the following can be used to accomplish this?
A. Use NAT Instances along with the NAT Gateway
B. Host the NAT Instance in the private subnet
C. Migrate NAT Instance to NAT Gateway and host the NAT Gateway in the public subnet
D. Convert the NAT Instance to a NAT Gateway
Answer: C
One can simply start using the NAT gateway service and stop using the deployed NAT instances. But you need to ensure that the NAT gateway is deployed in the public subnet.
For more information on migrating to a NAT gateway, please visit the following URL:
https://aws.amazon.com/premiumsupport/knowledge-center/migrate-nat-instance-gateway/
Question 514
An application consists of the following architecture.
a. EC2 Instances in multiple AZ’s behind an ELB.
b. The EC2 Instances are launched via an Autoscaling Group
c. There is a NAT instance which is used to ensure that instances can download updates from the internet.
Which of the following is the bottleneck in the architecture?
A. The EC2 Instances
B. The ELB
C. The NAT Instance
D. The Autoscaling Group
Answer: C
Since there is only one NAT instance, this is a bottleneck for the architecture. For high availability, launch NAT instances in multiple Availability Zones and make them part of an Autoscaling Group.
For more information on NAT Instances, please visit the following URL:
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_NAT_Instance.html
Question 515
A company owns an API which currently gets 1000 requests per second. They want to host this using AWS. Which of the following is the most cost-effective solution for this? The API is currently hosted on a t2.xlarge instance.
A. Use API gateway with the backend services as it is.
B. Use the API gateway along with AWS Lambda
C. Use Cloudfront along with the API backend service as it is.
D. Use Elastic Cache along with the API backend service as it is.
Answer: B
Since the company has full ownership of the API, the best solution would be to port the API code into a Lambda function. You save on cost, since with Lambda you don't pay for any infrastructure and only pay for the time the Lambda function runs. You can then use the API gateway along with the AWS Lambda function, which can scale accordingly.
For more information on using API gateway with AWS Lambda, please visit the following URL:
https://docs.aws.amazon.com/apigateway/latest/developerguide/getting-started-with-lambda-integration.html
Question 516
There is a requirement to host a database application which will have a lot of resource-intensive reads and writes. Which of the following is the best storage option to ensure that the data is persistent?
A. EBS IOPS
B. EBS SSD
C. EBS Throughput Optimized
D. EBS Cold Storage
Answer: A
Since there is a high performance requirement with high IOPS needed, one needs to opt for EBS Provisioned IOPS SSD. The AWS Documentation recommends Provisioned IOPS volumes for I/O-intensive database workloads.
For more information on AWS EBS Volume types, please visit the following URL:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html
Question 517
An application sends images to S3. The metadata for these images needs to be saved in persistent storage and needs to be indexed. Which of the following can be used for the underlying storage?
A. AWS Aurora
B. AWS S3
C. AWS DynamoDB
D. AWS RDS
Answer: C
The most efficient storage mechanism for just storing metadata is DynamoDB. DynamoDB is normally used in conjunction with the Simple Storage Service. So after storing the images in S3, you can store the metadata in DynamoDB. You can also create secondary indexes for DynamoDB tables.
For more information on managing indexes in DynamoDB, please visit the following URL:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/SQLtoNoSQL-Indexes.html
Question 518
An application is hosted on EC2 Instances. There is a promotion campaign due to start in 2 weeks for the application. There is a mandate from management to ensure that no performance problems are encountered due to traffic growth during this time. Which of the following must be done to the Autoscaling Group to ensure this requirement can be fulfilled?
A. Configure step scaling for the Autoscaling Group
B. Configure Dynamic scaling for the Autoscaling Group
C. Configure Scheduled scaling for the Autoscaling Group
D. Configure static scaling for the Autoscaling Group
Answer: C
The AWS Documentation mentions the following: Scaling based on a schedule allows you to scale your application in response to predictable load changes. For example, every week the traffic to your web application starts to increase on Wednesday, remains high on Thursday, and starts to decrease on Friday. You can plan your scaling activities based on the predictable traffic patterns of your web application. To configure your Auto Scaling group to scale based on a schedule, you create a scheduled action, which tells Amazon EC2 Auto Scaling to perform a scaling action at specified times.
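A minimal boto3 sketch of a scheduled action (the group name, start time, and capacity numbers are placeholder assumptions, not values from the question):

```python
import boto3
from datetime import datetime

autoscaling = boto3.client('autoscaling')

# Scale out just before the campaign begins
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName='web-asg',            # placeholder group name
    ScheduledActionName='campaign-scale-out',
    StartTime=datetime(2019, 1, 14, 8, 0),     # assumed campaign start
    MinSize=4,
    MaxSize=20,
    DesiredCapacity=8
)
```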
For more information on Autoscaling scheduled scaling, please visit the following URL:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/schedule_time.html
Question 519
Currently a company makes use of EBS snapshots to back up their EBS Volumes. As part of the business continuity requirement, these snapshots need to be made available in another region. How can this be achieved?
A. Directly create the snapshot in the other region
B. Create a snapshot and then copy it to the new region
C. Copy the snapshot to an S3 bucket and then enable cross region replication for the bucket.
D. Copy the EBS Snapshot to an EC2 instance in another region
Answer: B
The AWS Documentation mentions the following: A snapshot is constrained to the region where it was created. After you create a snapshot of an EBS volume, you can use it to create new volumes in the same region. For more information, see Restoring an Amazon EBS Volume from a Snapshot. You can also copy snapshots across regions, making it possible to use multiple regions for geographical expansion, data center migration, and disaster recovery.
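As a sketch, a cross-region copy could look like the snippet below (the snapshot ID and both regions are placeholder assumptions; note the copy is initiated from the destination region):

```python
import boto3

# The copy must be requested from the destination region
ec2_west = boto3.client('ec2', region_name='us-west-2')

copy = ec2_west.copy_snapshot(
    SourceRegion='us-east-1',                   # where the snapshot currently lives
    SourceSnapshotId='snap-0123456789abcdef0',  # placeholder snapshot ID
    Description='DR copy of production volume snapshot'
)
print(copy['SnapshotId'])  # ID of the new snapshot in us-west-2
```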
For more information on EBS Snapshots, please visit the following URL:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSSnapshots.html
Question 520
A company has an application hosted in AWS. This application consists of EC2 Instances which sit behind an ELB. The following are requirements from an administrative perspective:
a) Ensure notifications are sent when the read requests go beyond 1000 requests per minute
b) Ensure notifications are sent when the latency goes beyond 10 seconds
c) Any API activity which calls for sensitive data should be monitored
Which of the following can be used to achieve this requirement? Choose 2 answers from the options given below
A. Use Cloudtrail to monitor the API Activity
B. Use Cloudwatch logs to monitor the API Activity
C. Use Cloudwatch metrics for whatever metrics need to be monitored.
D. Use a custom log software to monitor the latency and read requests to the ELB
Answer: A, C
AWS Cloudtrail can be used to monitor the API calls.
For more information on Cloudtrail, please visit the following URL: https://aws.amazon.com/cloudtrail/
When you use Cloudwatch metrics for an ELB, you get the number of requests and the latency out of the box.
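A sketch of the request-count alarm, assuming a classic ELB (the load balancer name and SNS topic ARN are placeholder assumptions):

```python
import boto3

cloudwatch = boto3.client('cloudwatch')

# Alarm when the classic ELB receives more than 1000 requests in one minute
cloudwatch.put_metric_alarm(
    AlarmName='elb-high-request-count',
    Namespace='AWS/ELB',
    MetricName='RequestCount',
    Dimensions=[{'Name': 'LoadBalancerName', 'Value': 'my-elb'}],  # placeholder
    Statistic='Sum',
    Period=60,
    EvaluationPeriods=1,
    Threshold=1000,
    ComparisonOperator='GreaterThanThreshold',
    AlarmActions=['arn:aws:sns:us-east-1:123456789012:ops-alerts']  # placeholder
)
# A second alarm on MetricName='Latency' with Statistic='Average' and
# Threshold=10 would cover the 10-second latency requirement.
```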
For more information on using Cloudwatch with the ELB, please visit the following URL:
https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-cloudwatch-metrics.html
Question 521
A company has resources hosted in their AWS Account. There is a requirement to
monitor all API activity for all regions. The audit needs to be applied for future regions
as well. Which of the following can be used to fulfil this requirement?
A. Ensure Cloudtrail for each region. Then enable for each future region.
B. Ensure one Cloudtrail trail is enabled for all regions.
C. Create a Cloudtrail for each region. Use Cloudformation to enable the trail for all
future regions.
D. Create a Cloudtrail for each region. Use AWS Config to enable the trail for all future regions.
Answer: B
The AWS Documentation mentions the following: You can now turn on a trail across all regions for your AWS account. CloudTrail will deliver log files from all regions to the Amazon S3 bucket and an optional CloudWatch Logs log group you specified. Additionally, when AWS launches a new region, CloudTrail will create the same trail in the new region. As a result, you will receive log files containing API activity for the new region without taking any action.
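A minimal sketch of creating such a trail with boto3 (the trail name and bucket are placeholder assumptions; the bucket policy must already allow CloudTrail writes):

```python
import boto3

cloudtrail = boto3.client('cloudtrail')

# One trail that records API activity in all current and future regions
cloudtrail.create_trail(
    Name='org-wide-trail',               # placeholder trail name
    S3BucketName='my-cloudtrail-logs',   # placeholder bucket
    IsMultiRegionTrail=True,
    IncludeGlobalServiceEvents=True
)
cloudtrail.start_logging(Name='org-wide-trail')
```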
For more information on this feature, please visit the following URL:
https://aws.amazon.com/about-aws/whats-new/2015/12/turn-on-cloudtrail-across-all-regions-and-support-for-multiple-trails/
Question 522
There is a requirement for an iSCSI device, and the legacy application needs local storage. Which of the following can be used to meet the demands of the application?
A. Configure the Simple storage service
B. Configure Storage gateway cached volume
C. Configure Storage gateway stored volume
D. Configure Amazon Glacier
Answer: C
The AWS Documentation mentions the following: If you need low-latency access to your entire dataset, first configure your on-premises gateway to store all your data locally. Then asynchronously back up point-in-time snapshots of this data to Amazon S3. This configuration provides durable and inexpensive offsite backups that you can recover to your local data center or Amazon EC2. For example, if you need replacement capacity for disaster recovery, you can recover the backups to Amazon EC2.
For more information on the Storage gateway, please visit the following URL: https://docs.aws.amazon.com/storagegateway/latest/userguide/WhatIsStorageGateway.html
Question 523
There is a requirement for EC2 Instances in a private subnet to access an S3 bucket. The traffic should not traverse the internet. Which of the following can be used to fulfill this requirement?
A. VPC endpoint
B. NAT Instance
C. NAT gateway
D. Internet gateway
Answer: A
A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network.
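An illustrative sketch of creating a gateway endpoint for S3 (the VPC ID, route table ID, and region are placeholder assumptions):

```python
import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

# Gateway endpoint for S3, attached to the private subnet's route table
ec2.create_vpc_endpoint(
    VpcEndpointType='Gateway',
    VpcId='vpc-11111111',                      # placeholder VPC
    ServiceName='com.amazonaws.us-east-1.s3',  # S3 service name for the region
    RouteTableIds=['rtb-22222222']             # route table of the private subnet
)
```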
For more information on AWS VPC endpoints, please visit the following URL: https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-endpoints.html
Question 524
There is an application which consists of EC2 Instances behind a classic ELB. An EC2 proxy is used for content management for the backend instances. The application might not be able to scale properly. Which of the following can be used to scale the proxy and backend instances appropriately? Choose 2 answers from the options given below
A. Use Autoscaling for the proxy servers
B. Use Autoscaling for the backend instances
C. Replace the Classic ELB with Application ELB
D. Use Application ELB for both the front end and backend instances
Answer: A, B
As soon as you see a requirement for scaling, think of the Autoscaling service provided by AWS. This can be used to scale both the proxy servers and the backend instances.
For more information on Autoscaling, please visit the following URL:
https://docs.aws.amazon.com/autoscaling/plans/userguide/what-is-aws-auto-scaling.html
Question 525
There is a marketing application hosted in AWS that might get a lot of traffic over the next couple of weeks. Which of the following can be used to reduce the potential disruption to users in case of any issues?
A. Use an ELB to divert traffic to an Infrastructure hosted in another region
B. Use an ELB to divert traffic to an Infrastructure hosted in another AZ
C. Use Cloudformation to create backup resources in another AZ
D. Use Route53 to route to a static web site
Answer: D
In a disaster recovery scenario, the best of the above options is to divert traffic to a static web site. Option A is wrong because an ELB can only balance traffic within one region, not across regions. Options B and C are incorrect because backups across AZs alone are not enough for disaster recovery purposes.
For more information on disaster recovery in AWS, please visit the following URL:
https://aws.amazon.com/disaster-recovery/
Question 526
You have a requirement to host a static web site for a domain called mycompany.com in AWS. You need to ensure that the traffic is scaled properly. How can this be achieved? Choose 2 answers from the options given below
A. Host the static site on an EC2 Instance
B. Use Route53 with static web site in S3
C. Enter the NS records from Route53 in the domain registrar
D. Place the EC2 instance behind the ELB
Answer: B, C
You can host a static web site in S3. You need to ensure that the nameserver
records for the Route53 hosted zone are entered in your domain registrar.
For more information on website hosting in S3, please visit the following URL:
https://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html
Question 527
A database is hosted using the AWS RDS service. The database is getting a lot of database queries and has now become a bottleneck for the associated application. Which of the following can be used to ensure that the database is not a performance bottleneck?
A. Setup a CloudFront distribution in front of the database
B. Setup an ELB in front of the database
C. Setup Elasticache in front of the database
D. Setup SNS in front of the database
Answer: C
ElastiCache is an in-memory solution that can be used in front of a database to cache the common queries issued against the database. This can reduce the overall load on the database.
Option A is incorrect because CloudFront is normally used for content distribution.
Option B is incorrect because an ELB is used to distribute traffic to EC2 Instances and does not cache database queries.
Option D is incorrect because SNS is a simple notification service.
For more information on Elasticache, please visit the following
URL: https://aws.amazon.com/elasticache/
Question 528
A database is being hosted using the AWS RDS service. The database is now going to be made into a production database. There is a requirement for the database to be made highly available. Which of the following can be used to achieve this requirement?
A. Use Multi-AZ for the RDS instance to ensure that a secondary database is created in another region
B. Use the Read Replica feature to create another instance of the DB in another region
C. Use Multi-AZ for the RDS instance to ensure that a secondary database is created in another Availability zone.
D. Use the Read Replica feature to create another instance of the DB in another Availability zone.
Answer: C
Option A is incorrect because the Multi-AZ feature allows for high availability across Availability Zones, not regions. Options B and D are incorrect because Read Replicas can be used to offload database reads; if you want high availability, opt for the Multi-AZ feature. The AWS Documentation mentions the following: Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database (DB) Instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ).
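A sketch of converting an existing instance to Multi-AZ with boto3 (the instance identifier is a placeholder assumption):

```python
import boto3

rds = boto3.client('rds')

# Convert an existing RDS instance to a Multi-AZ deployment
rds.modify_db_instance(
    DBInstanceIdentifier='prod-db',  # placeholder identifier
    MultiAZ=True,
    ApplyImmediately=True  # otherwise applied in the next maintenance window
)
```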
For more information on AWS RDS Multi-AZ, please visit the following URL:
https://aws.amazon.com/rds/details/multi-az/
Question 529
A company wants to host a web application and a database layer in AWS. This will be done with the use of subnets in a VPC. Which of the following is the proper architecture design for supporting the required tiers of the application?
A. Use a public subnet for the web tier and a public subnet for the database layer
B. Use a public subnet for the web tier and a private subnet for the database layer
C. Use a private subnet for the web tier and a private subnet for the database layer
D. Use a private subnet for the web tier and a public subnet for the database layer
Answer: B
The ideal setup is to ensure that the web server is hosted in the public subnet so that it can be accessed by users on the internet. The database server can be hosted in the private subnet.
For more information on public and private subnets in AWS, please visit the following URL:
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario2.html
Question 530
You require the ability to analyze a customer's clickstream data on a website so they can do behavioral analysis. Your customer needs to know what sequence of pages and ads their customers clicked on. This data will be used in real time to modify the page layouts as customers click through the site, to increase stickiness and advertising click-through. Which option meets the requirements for capturing and analyzing this data?
A. Log clicks in weblogs by URL store to Amazon S3, and then analyze with Elastic MapReduce
B. Push web clicks by session to Amazon Kinesis and analyze behavior using Kinesis workers
C. Write click events directly to Amazon Redshift and then analyze with SQL
D. Publish web clicks by session to an Amazon SQS queue. Then send the events to AWS RDS for further processing
Answer: B
The AWS Documentation mentions the following: Amazon Kinesis Data Streams enables you to build custom applications that process or analyze streaming data for specialized needs. Kinesis Data Streams can continuously capture and store terabytes of data per hour from hundreds of thousands of sources such as website clickstreams, financial transactions, social media feeds, IT logs, and location-tracking events.
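A sketch of a producer pushing click events into a stream (the stream name and event fields are placeholder assumptions; partitioning by session keeps a session's clicks ordered within a shard):

```python
import boto3
import json

kinesis = boto3.client('kinesis')

# One click event per record; fields are illustrative only
click = {'session_id': 'abc123', 'page': '/products', 'ad_id': 'ad-42'}

kinesis.put_record(
    StreamName='clickstream',                    # placeholder stream name
    Data=json.dumps(click).encode('utf-8'),
    PartitionKey=click['session_id']             # same session -> same shard
)
```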
For more information on Amazon Kinesis, please visit the following URL:
https://aws.amazon.com/kinesis/data-streams/
Question 531
A company has an infrastructure that consists of machines that send log information every 5 minutes. The number of machines can run into the thousands. There is a requirement to ensure that the data can be analyzed at a later stage. Which of the following would help in fulfilling this requirement?
A. Use Kinesis Firehose with S3 to take the logs and store them in S3 for further processing
B. Launch an Elastic beanstalk application to take the processing job of the logs
C. Launch an EC2 instance with enough EBS volumes to consume the logs which can be used for further processing
D. Use Cloudtrail to store all the logs which can be analyzed at a later stage
Answer: A
The AWS Documentation mentions the following, which perfectly matches this requirement: Amazon Kinesis Data Firehose is the easiest way to load streaming data into data stores and analytics tools. It can capture, transform, and load streaming data into Amazon S3, Amazon Redshift, Amazon Elasticsearch Service, and Splunk, enabling near real-time analytics with existing business intelligence tools and dashboards you're already using today.
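A sketch of what each machine's log shipper could do, assuming a delivery stream already configured to buffer into S3 (the stream name and log fields are placeholder assumptions):

```python
import boto3
import json

firehose = boto3.client('firehose')

# Each machine ships its log lines to the delivery stream;
# Firehose buffers and writes them to S3 for later analysis
log_entry = {'host': 'machine-001', 'level': 'INFO', 'msg': 'heartbeat'}

firehose.put_record(
    DeliveryStreamName='machine-logs-to-s3',  # placeholder stream name
    Record={'Data': (json.dumps(log_entry) + '\n').encode('utf-8')}
)
```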
For more information on Amazon Kinesis firehose, please visit the following URL: https://aws.amazon.com/kinesis/data-firehose/
Question 532
An application hosted in AWS allows users to upload videos to an S3 bucket. There is a requirement for a user to be able to upload videos for one week, based on their profile. How can this be accomplished in the best way possible?
A. Create an IAM bucket policy to provide access for a week’s duration
B. Create a pre-signed URL for each profile which will last for a week’s duration
C. Create an S3 bucket policy to provide access for a week’s duration
D. Create an IAM role to provide access for a week’s duration
Answer: B
Pre-signed URLs are the perfect solution when you want to give temporary access to users for S3 buckets. So whenever a new profile is created, you can create a pre-signed URL that lasts for a week to allow users to upload the required objects.
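A minimal sketch of generating such a URL (the bucket and key are placeholder assumptions; 604800 seconds is exactly one week, which is also the maximum lifetime for a SigV4 pre-signed URL):

```python
import boto3

s3 = boto3.client('s3')

# URL valid for 7 days, allowing an HTTP PUT of the given object
url = s3.generate_presigned_url(
    'put_object',
    Params={'Bucket': 'video-uploads',                  # placeholder bucket
            'Key': 'profiles/user123/video.mp4'},       # placeholder key
    ExpiresIn=604800  # 7 days in seconds
)
print(url)  # hand this URL to the user's client for the upload
```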
For more information on pre-signed URLs, please visit the following URL:
https://docs.aws.amazon.com/AmazonS3/latest/dev/PresignedUrlUploadObject.html
Question 533
A company is planning to use Docker containers and the necessary container orchestration tools for their batch processing requirements. There is a requirement for batch processing of both critical and non-critical data. Which of the following are the best implementation steps for this requirement, to ensure that cost is effectively managed?
A. Use Kubernetes for container orchestration and Reserved instances for all underlying instances
B. Use ECS orchestration and use Reserved instances for all underlying instances
C. Use Docker for container orchestration and a combination of Spot and Reserved instances for the underlying instances
D. Use ECS for container orchestration and a combination of Spot and Reserved instances for the underlying instances
Answer: D
The Elastic Container Service from AWS can be used for container orchestration. Since there are both critical and non-critical workloads, one can use Spot instances for the non-critical workloads to keep costs to a minimum.
For more information on AWS ECS, please visit the following URL:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html
Question 534
A company has a requirement for archival of 6 TB of data. There is an agreement with the stakeholders for an 8-hour retrieval time. Which of the following can be used as the MOST cost-effective storage option?
A. AWS S3 Standard
B. AWS S3 Infrequent Access
C. AWS Glacier
D. AWS EBS Volumes
Answer: C
Amazon Glacier is the perfect solution for this. Since the agreed retrieval timeframe of 8 hours can be met, this is the most cost-effective option.
For more information on AWS Glacier, please visit the following URL:
https://aws.amazon.com/documentation/glacier/
Question 535
A company hosts 5 web servers in AWS. They want to ensure that Route53 randomly provides users with one of the web servers when they request the underlying web application. Which routing policy should be used to fulfil this requirement?
A. Simple
B. Weighted
C. Multivalue Answer
D. Latency
Answer: C
The AWS Documentation mentions the following to support this: If you want to route traffic approximately randomly to multiple resources, such as web servers, you can create one multivalue answer record for each resource and, optionally, associate an Amazon Route 53 health check with each record. For example, suppose you manage an HTTP web service with a dozen web servers that each have their own IP address. No one web server could handle all of the traffic, but if you create a dozen multivalue answer records, Amazon Route 53 responds to DNS queries with up to eight healthy records in response to each DNS query. Amazon Route 53 gives different answers to different DNS resolvers. If a web server becomes unavailable after a resolver caches a response, client software can try another IP address in the response.
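An illustrative sketch of creating multivalue answer records, one per web server (the hosted zone ID, record name, and IP addresses are placeholder assumptions):

```python
import boto3

route53 = boto3.client('route53')

# One multivalue answer A record per web server; SetIdentifier
# distinguishes the otherwise identical records
for i, ip in enumerate(['203.0.113.11', '203.0.113.12', '203.0.113.13']):
    route53.change_resource_record_sets(
        HostedZoneId='Z1EXAMPLE',  # placeholder zone ID
        ChangeBatch={'Changes': [{
            'Action': 'UPSERT',
            'ResourceRecordSet': {
                'Name': 'www.mycompany.com',
                'Type': 'A',
                'TTL': 60,
                'SetIdentifier': f'web-{i}',
                'MultiValueAnswer': True,
                'ResourceRecords': [{'Value': ip}],
            }
        }]}
    )
```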
For more information on this option, please visit the following URL:
https://aws.amazon.com/about-aws/whats-new/2017/06/amazon-route-53-announces-support-for-multivalue-answers-in-response-to-dns-queries/
Question 536
A company has a requirement for a managed database in AWS. There is a requirement that joins need to be performed on the underlying queries. Which of the following can be used as the underlying database?
A. AWS Aurora
B. AWS DynamoDB
C. AWS S3
D. AWS Redshift
Answer: A
In this case AWS Aurora would be the perfect choice.
Option B is incorrect because joins are not supported in DynamoDB.
Option C is incorrect because S3 is an option for object storage, not a database.
Option D is incorrect because Redshift is better suited for data warehousing solutions.
For more information on AWS Aurora please visit the following URL:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Aurora.Overview.html
Question 537
A customer wants to create a set of EBS Volumes in AWS. The customer has a requirement to ensure that the data on the volumes is encrypted at rest. How can this be achieved?
A. Create an SSL certificate and attach it to the EBS Volume
B. Use KMS to generate encryption keys which can be used to encrypt the volume
C. Use Cloudfront in front of the EBS volume to encrypt all requests.
D. Use EBS snapshots to encrypt the requests.
Answer: B
When you create a volume, you have the option to encrypt the volume using keys generated by the Key Management service.
For more information on using KMS, please refer to the below URL:
https://docs.aws.amazon.com/kms/latest/developerguide/services-ebs.html
Question 538
A company has a requirement to store 100 TB of data in AWS. The data will be transferred to AWS using Snowball. The data then needs to reside in a database layer. The database should have the facility to be queried from a business intelligence application. Each item is roughly 500KB in size. Which of the following is the ideal storage mechanism for the underlying data layer?
A. AWS DynamoDB
B. AWS Aurora
C. AWS RDS
D. AWS Redshift
Answer: D
Given the sheer data size, the ideal choice is AWS Redshift. The AWS Documentation mentions the following on AWS Redshift: Amazon Redshift is a fully managed, petabyte-scale data warehouse service in the cloud. You can start with just a few hundred gigabytes of data and scale to a petabyte or more. This enables you to use your data to acquire new insights for your business and customers. The first step to create a data warehouse is to launch a set of nodes, called an Amazon Redshift cluster. After you provision your cluster, you can upload your data set and then perform data analysis queries. Regardless of the size of the data set, Amazon Redshift offers fast query performance using the same SQL-based tools and business intelligence applications that you use today.
For more information on AWS Redshift, please refer to the below URL:
https://docs.aws.amazon.com/redshift/latest/mgmt/welcome.html
Question 539
A company is planning on testing a large set of IoT-enabled devices. These devices will be streaming data every second. A proper service needs to be chosen in AWS which could be used to collect and analyze these streams in real time. Which of the following could be used for this purpose?
A. Use AWS EMR to store and process the streams
B. Use AWS Kinesis streams to process and analyze the data
C. Use AWS SQS to store the data
D. Use SNS to store the data
Answer: B
The AWS Documentation mentions the following on Amazon Kinesis: Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. Amazon Kinesis offers key capabilities to cost-effectively process streaming data at any scale, along with the flexibility to choose the tools that best suit the requirements of your application. With Amazon Kinesis, you can ingest real-time data such as video, audio, application logs, website clickstreams, and IoT telemetry data for machine learning, analytics, and other applications.
For more information on Amazon Kinesis, please refer to the below URL:
https://aws.amazon.com/kinesis/
Question 540
Your company currently has a set of EC2 Instances hosted in AWS. The state of the instances needs to be monitored, and each state change needs to be recorded. Which of the following can help fulfil this requirement? Choose 2 answers from the options given below
A. Use Cloudwatch logs to store the state change of the instances
B. Use Cloudwatch Events to monitor the state change of the events
C. Use SQS to trigger a record to be added to a DynamoDB table.
D. Use AWS Lambda to store a change record in a DynamoDB table.
Answer: B, D
Cloudwatch Events can be used to monitor the state change of EC2 Instances by choosing the appropriate Event Source and Event Type. You can then have an AWS Lambda function as a target, which can be used to store the record in a DynamoDB table.
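A minimal sketch of wiring this up with boto3 (the rule name and Lambda ARN are placeholder assumptions; in practice the function also needs a resource policy allowing events.amazonaws.com to invoke it):

```python
import boto3
import json

events = boto3.client('events')

# Rule that fires on every EC2 instance state change
events.put_rule(
    Name='ec2-state-change',
    EventPattern=json.dumps({
        'source': ['aws.ec2'],
        'detail-type': ['EC2 Instance State-change Notification']
    })
)

# Target a Lambda function that writes the change record to DynamoDB
events.put_targets(
    Rule='ec2-state-change',
    Targets=[{'Id': 'record-to-dynamodb',
              'Arn': 'arn:aws:lambda:us-east-1:123456789012:function:record-state'}]
)
```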
For more information on Cloudwatch events, please refer to the below URL:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/WhatIsCloudWatchEvents.html
Question 541
You have instances hosted in a private subnet in a VPC. There is a need for the instances to download updates from the internet. As an architect, what change can you suggest to the IT operations team that would be MOST efficient and secure?
A. Create a new public subnet and move the instance to that subnet
B. Create a new EC2 Instance to download the updates separately and then push them to the required instance.
C. Use a NAT gateway to allow the instances in the private subnet to download the updates
D. Create a VPC link to the internet to allow the instances in the private subnet to download the updates
Answer: C
The NAT gateway is the ideal option to ensure that instances in the private subnet
have the ability to download updates from the internet.
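A sketch of the setup with boto3 (the subnet and route table IDs are placeholder assumptions; the NAT gateway sits in the public subnet and the private subnet's default route points at it):

```python
import boto3

ec2 = boto3.client('ec2')

# The NAT gateway needs an Elastic IP and lives in a public subnet
eip = ec2.allocate_address(Domain='vpc')
nat = ec2.create_nat_gateway(
    SubnetId='subnet-11111111',          # placeholder public subnet
    AllocationId=eip['AllocationId']
)

# Point the private subnet's default route at the NAT gateway
ec2.create_route(
    RouteTableId='rtb-22222222',         # placeholder private route table
    DestinationCidrBlock='0.0.0.0/0',
    NatGatewayId=nat['NatGateway']['NatGatewayId']
)
```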
For more information on the NAT gateway, please refer to the below URL:
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-nat-gateway.html
Question 542
A company has opted to store their cold data on EBS volumes. To ensure optimal cost, which of the following would be the ideal EBS volume type to host this type of data?
A. General Purpose SSD
B. Provisioned IOPS SSD
C. Throughput Optimized HDD
D. Cold HDD
Answer: D
The AWS Documentation shows that the ideal and most cost-efficient volume type for cold data is Cold HDD.
For more information on EBS volume types, please refer to the below URL:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html
Question 543
A company is planning to have their application hosted in AWS. The application consists of users uploading files and then having a public URL for downloading them at a later stage. Which of the following designs would help fulfil this requirement?
A. Have EBS volumes hosted on EC2 Instances to store the files
B. Use Amazon S3 to host the files
C. Use Amazon Glacier to host the files since this would be the cheapest storage option
D. Use EBS snapshots attached to EC2 Instances to store the files
Answer: B
If you need storage for the Internet, then the AWS Simple Storage Service is the best option. Each uploaded object gets a URL which, with the appropriate permissions, can be used to download the file at a later point in time.
For more information on Amazon S3, please refer to the below URL:
https://aws.amazon.com/s3/
Question 544
You are planning on hosting a web application on AWS. You create an EC2 Instance in a public subnet. This instance needs to connect to an EC2 Instance that will host an Oracle database. Which of the following steps should be followed to ensure a secure setup is in place?
A. Place the EC2 Instance with the Oracle database in the same public subnet as the Web server for faster communication.
B. Place the EC2 Instance with the Oracle database in a separate private subnet
C. Create a database security group and ensure that the web server security group is allowed incoming access
D. Ensure the database security group allows incoming traffic from 0.0.0.0/0
Answer: B, C
The most secure option is to place the database in a private subnet. Also ensure that access to the database is not allowed from all sources, but just from the web servers.
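A sketch of the security group side of this setup (the VPC ID, group names, and web tier security group ID are placeholder assumptions; 1521 is the default Oracle listener port):

```python
import boto3

ec2 = boto3.client('ec2')

# Database security group that only accepts Oracle traffic from the web tier
db_sg = ec2.create_security_group(
    GroupName='db-sg',
    Description='Oracle DB tier',
    VpcId='vpc-11111111'  # placeholder VPC
)

ec2.authorize_security_group_ingress(
    GroupId=db_sg['GroupId'],
    IpPermissions=[{
        'IpProtocol': 'tcp', 'FromPort': 1521, 'ToPort': 1521,
        # reference the web tier's security group instead of 0.0.0.0/0
        'UserIdGroupPairs': [{'GroupId': 'sg-22222222'}]
    }]
)
```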
For more information on this type of setup, please refer to the below URL:
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario2.html
Question 545
An EC2 Instance hosts a Java-based application that accesses a DynamoDB table. This EC2 Instance is currently serving production users. Which of the following is a secure way of ensuring that the EC2 Instance can access the DynamoDB table?
A. Use IAM Roles with permissions to interact with DynamoDB and assign it to the EC2 Instance
B. Use KMS keys with the right permissions to interact with DynamoDB and assign it to the EC2 Instance
C. Use IAM Access Keys with the right permissions to interact with DynamoDB and assign it to the EC2 Instance
D. Use IAM Access Groups with the right permissions to interact with DynamoDB and assign it to the EC2 Instance
Answer: A
To ensure secure access to AWS resources from EC2 Instances, always assign a Role to the EC2 Instance.
For more information on IAM Roles, please refer to the below URL:
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html
Question 546
A company is planning on building and deploying a web application on AWS. They need to have a data store to store session data. Which of the below services can be used to meet this requirement?
A. AWS RDS
B. AWS SQS
C. AWS ELB
D. AWS Elasticache
Answer: D
The AWS Documentation mentions the following: Amazon ElastiCache offers fully managed Redis and Memcached. Seamlessly deploy, operate, and scale popular open source compatible in-memory data stores. Build data-intensive apps or improve the performance of your existing apps by retrieving data from high throughput and low latency in-memory data stores. Amazon ElastiCache is a popular choice for Gaming, Ad-Tech, Financial Services, Healthcare, and IoT apps.
For more information on Elasticache, please refer to the below URL:
https://aws.amazon.com/elasticache/
Question 547
A company has set up an application in AWS that interacts with DynamoDB. There is a requirement that when an item is modified in a DynamoDB table, an entry is immediately made to an associated application. How can this be accomplished? Choose 2 correct answers.
A. Setup Cloudwatch to monitor the DynamoDB table for any changes. Then trigger a Lambda function to send the changes to the application.
B. Setup Cloudwatch logs to monitor the DynamoDB table for any changes. Then trigger AWS SQS to send the changes to the application.
C. Use DynamoDB streams to monitor the changes to the DynamoDB table
D. Use an AWS Lambda function on a scheduled basis to monitor the changes to the DynamoDB table
Answer: C, D
One can use DynamoDB streams to monitor the changes to a DynamoDB table. The AWS Documentation mentions the following: A DynamoDB stream is an ordered flow of information about changes to items in an Amazon DynamoDB table. When you enable a stream on a table, DynamoDB captures information about every modification to data items in the table.
For more information on DynamoDB streams, please refer to the below URL:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.html
If you enable DynamoDB Streams on a table, you can associate the stream ARN with a Lambda function that you write. Immediately after an item in the table is modified, a new record appears in the table's stream. AWS Lambda polls the stream and invokes your Lambda function synchronously when it detects new stream records. Since we have a requirement that when an item is modified in a DynamoDB table, an immediate entry needs to be made to an associated application, a Lambda function is also required.
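A sketch of associating the stream with a function via boto3 (the stream ARN and function name are placeholder assumptions; streams must already be enabled on the table):

```python
import boto3

lambda_client = boto3.client('lambda')

# Wire the table's stream to a Lambda function; Lambda polls the stream
# and invokes the function with batches of new change records
lambda_client.create_event_source_mapping(
    EventSourceArn=('arn:aws:dynamodb:us-east-1:123456789012:'
                    'table/Items/stream/2019-01-01T00:00:00.000'),  # placeholder
    FunctionName='notify-associated-app',                            # placeholder
    StartingPosition='LATEST'
)
```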
For more information on DynamoDB streams Lambda, please refer to the below URL:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.Lambda.html
Question 548
A company currently has an application hosted in their on-premise environment. The application has a combination of web instances and worker instances, with RabbitMQ for messaging purposes. They now want to move this infrastructure to the AWS Cloud. How could they easily start using messaging on the AWS Cloud?
A. Continue using RabbitMQ. Host it on a separate EC2 Instance.
B. Make use of AWS SQS to manage the messages
C. Make use of DynamoDB to store the messages
D. Make use of AWS RDS to store the messages
Answer: B
The ideal option would be to make use of AWS Simple Queue Service to manage the messages between the application components. The AWS SQS service is a highly scalable and durable service.
For more information on Amazon SQS, please refer to the below URL:
https://aws.amazon.com/sqs/
Question 549
An application currently uses AWS RDS MySQL as its data layer. Recently there have been a lot of performance issues with the database. The team is planning to separate the querying part of the application by setting up a separate reporting layer. Which of the following additional steps could also potentially assist in improving the performance of the underlying database?
A. Make use of Multi-AZ to setup a secondary database in another Availability Zone
B. Make use of Multi-AZ to setup a secondary database in another Region
C. Make use of Read Replica’s to setup a secondary read-only database
D. Make use of Read Replica’s to setup a secondary read and write database
Answer: C
The AWS Documentation mentions the following: Amazon RDS Read Replicas provide enhanced performance and durability for database (DB) instances. This feature makes it easy to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads. You can create one or more replicas of a given source DB Instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput.
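A minimal sketch of creating the reporting replica (both instance identifiers are placeholder assumptions):

```python
import boto3

rds = boto3.client('rds')

# Read-only copy of the source instance for the reporting layer
rds.create_db_instance_read_replica(
    DBInstanceIdentifier='mysql-reporting-replica',  # placeholder
    SourceDBInstanceIdentifier='mysql-prod'          # placeholder
)
```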
For more information on Amazon Read Replicas, please refer to the below URL:
https://aws.amazon.com/rds/details/read-replicas/
Question 550
A company is asking their developers to store application logs in an S3 bucket. These logs are only required for a temporary period of time, after which they can be deleted. Which of the following steps can be used to effectively manage this?
A. Create a cron job to detect the stale logs and delete them accordingly.
B. Use a bucket policy to manage the deletion
C. Use an IAM policy to manage the deletion
D. Use S3 lifecycle policies to manage the deletion
Answer: D
The AWS Documentation mentions the following, which can be used to support the requirement: Lifecycle configuration enables you to specify the lifecycle management of objects in a bucket. The configuration is a set of one or more rules, where each rule defines an action for Amazon S3 to apply to a group of objects.
These actions can be classified as follows:
- Transition actions — In which you define when objects transition to another storage class. For example, you may choose to transition objects to the STANDARD_IA (IA, for infrequent access) storage class 30 days after creation, or archive objects to the GLACIER storage class one year after creation.
- Expiration actions — In which you specify when the objects expire. Then Amazon S3 deletes the
expired objects on your behalf.
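A sketch of an expiration rule for the log use case above (the bucket name, prefix, and 30-day retention period are placeholder assumptions):

```python
import boto3

s3 = boto3.client('s3')

# Expire application logs 30 days after creation
s3.put_bucket_lifecycle_configuration(
    Bucket='app-logs-bucket',  # placeholder bucket
    LifecycleConfiguration={'Rules': [{
        'ID': 'expire-old-logs',
        'Filter': {'Prefix': 'logs/'},   # only objects under this prefix
        'Status': 'Enabled',
        'Expiration': {'Days': 30}       # assumed retention period
    }]}
)
```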
For more information on S3 lifecycle policies, please refer to the below URL: https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html
Question 551
An application running on EC2 instances processes sensitive information stored on Amazon S3. The information is accessed over the Internet. The security team is concerned that the Internet connectivity to Amazon S3 is a security risk. Which solution will resolve the security concern?
A. Access the data through an Internet Gateway.
B. Access the data through a VPN connection.
C. Access the data through a NAT Gateway.
D. Access the data through a VPC endpoint for Amazon S3
Answer: D
The AWS Documentation mentions the following: A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network.
For more information on VPC endpoints, please refer to the below URL:
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-endpoints.html
Question 552
You have set up a Redshift cluster in AWS. You are trying to access the Redshift cluster but are not able to do so. What can be done to ensure you can access the Redshift cluster?
A. Ensure the Cluster is created in the right Availability Zone
B. Ensure the Cluster is created in the right Region
C. Change the security groups for the cluster
D. Change the encryption key associated with the cluster
Answer: C
The AWS Documentation mentions the following: When you provision an Amazon Redshift cluster, it is locked down by default so nobody has access to it. To grant other users inbound access to an Amazon Redshift cluster, you associate the cluster with a security group.
For more information on Redshift Security Groups, please refer to the below URL: https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-security-groups.html
Question 553
You have a web application hosted on an EC2 Instance in AWS. The application is now being accessed by users across the globe. The Operations team is getting support requests from users in some parts of the world who are experiencing extreme slowness. What can be done to the architecture to improve the response time for users?
A. Add more EC2 Instances to support the load
B. Change the Instance type to a higher instance type
C. Add Route53 health checks to improve the performance
D. Place the EC2 Instance behind Cloudfront
Answer: D
The AWS Documentation mentions the following: Amazon CloudFront is a web service that speeds up distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users. CloudFront delivers your content through a worldwide network of data centers called edge locations. When a user requests content that you're serving with CloudFront, the user is routed to the edge location that provides the lowest latency (time delay), so that content is delivered with the best possible performance.
For more information on Amazon Cloudfront, please refer to the below URL:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html
Question 554
You currently have a NAT gateway defined for your private instances. You need to make the NAT gateway highly available. How can this be accomplished?
A. Create another NAT gateway and place it behind an ELB
B. Create a NAT gateway in another Availability Zone
C. Create a NAT gateway in another Region
D. Use Autoscaling groups to scale the NAT gateway
Answer: B
The AWS Documentation mentions the following: If you have resources in multiple Availability Zones and they share one NAT gateway, in the event that the NAT gateway's Availability Zone is down, resources in the other Availability Zones lose internet access. To create an Availability Zone-independent architecture, create a NAT gateway in each Availability Zone and configure your routing to ensure that resources use the NAT gateway in the same Availability Zone.
For more information on the NAT gateway, please refer to the below URL:
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-nat-gateway.html