AWS Certified Cloud Practitioner Practice Test 3 (Bonso) Flashcards

1
Q

You are planning to create point-in-time backups of your Amazon EBS volumes. Which of the following are correct statements? (Select TWO.)
A.You can take EBS backups by creating Amazon Machine Images (AMIs)
B.You can create point-in-time backups through EBS snapshots
C.EBS backups are stored durably in Amazon S3
D.Instances will have to be stopped first to start the EBS backup
E.Backing up the same EBS volume will create a new backup of the whole volume

A

B.You can create point-in-time backups through EBS snapshots
C.EBS backups are stored durably in Amazon S3

Explanation:
You can back up the data on your Amazon EBS volumes to Amazon S3 by taking point-in-time snapshots. Snapshots are incremental backups, which means that only the blocks on the device that have changed after your most recent snapshot are saved. This minimizes the time required to create the snapshot and saves on storage costs by not duplicating data. Each snapshot contains all of the information that is needed to restore your data (from the moment when the snapshot was taken) to a new EBS volume.
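The incremental behavior can be illustrated with a short Python sketch. This is purely a simulation of the concept, not an AWS API call: only blocks changed since the previous snapshot consume new storage, yet every snapshot can restore the full volume.

```python
# Illustrative simulation of incremental EBS snapshots (not an AWS API).
def take_snapshot(volume, previous_snapshot=None):
    """Store only blocks changed since the previous snapshot, while still
    recording everything needed to restore the full volume."""
    prev_blocks = previous_snapshot["blocks"] if previous_snapshot else {}
    changed = {i: b for i, b in volume.items() if prev_blocks.get(i) != b}
    # "blocks" holds the full restore data; only "stored" would consume
    # new storage in S3.
    return {"blocks": dict(volume), "stored": changed}

volume = {0: "aaaa", 1: "bbbb", 2: "cccc"}
snap1 = take_snapshot(volume)        # first snapshot: all 3 blocks stored
volume[1] = "BBBB"                   # modify a single block
snap2 = take_snapshot(volume, snap1) # incremental: only block 1 stored

print(len(snap1["stored"]))  # 3
print(len(snap2["stored"]))  # 1
```

Note that `snap2` still contains all the information needed to restore the whole volume, even though only one changed block was saved.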

Hence, the correct answers are:

  • You can create point-in-time backups through EBS snapshots
  • EBS backups are stored durably in Amazon S3

The option that says: You can take EBS backups by creating Amazon Machine Images (AMIs) is incorrect because EBS backup volumes are not called AMIs. An Amazon Machine Image (AMI) is a backup of an EC2 instance along with all of its EBS volumes. Since only a single volume needs to be backed up, you should create EBS Snapshots instead.

The option that says: Backing up the same EBS volume will create a new backup of the whole volume is incorrect because EBS snapshots are incremental, which means that only the latest changes are backed up when you run the backup process. It does not back up the whole volume again.

The option that says: Instances will have to be stopped first to start the EBS backup is incorrect because you can actually run an EBS backup while the EC2 instance is running. This may cause some degradation in EC2 performance, but it should not have a noticeable effect.

2
Q
You noticed that you cannot reach one of your EC2 web servers behind an ELB whenever you enter the DNS name of your load balancer. Which of the following should you first check to gain more insight on the issue?
A.ELB Health Check
B.AWS CloudTrail
C.Amazon CloudWatch
D.AWS Config
A

A.ELB Health Check

Explanation:
Your Application Load Balancer periodically sends requests to its registered targets to test their status. These tests are called health checks.

Each load balancer node routes requests only to the healthy targets in the enabled Availability Zones for the load balancer. Each load balancer node checks the health of each target, using the health check settings for the target groups with which the target is registered. After your target is registered, it must pass one health check to be considered healthy. After each health check is completed, the load balancer node closes the connection that was established for the health check.

If a target group contains only unhealthy registered targets, the load balancer nodes route requests across its unhealthy targets.
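The routing rule described above can be sketched in a few lines of Python. This is a pure simulation of the behavior, not the ELB implementation:

```python
# Illustrative simulation of ELB routing based on health checks
# (not an AWS API): route only to healthy targets, or to all targets
# if none are healthy.
def routable_targets(targets):
    """targets maps a target ID to its health-check result (True/False)."""
    healthy = [t for t, ok in targets.items() if ok]
    return healthy if healthy else list(targets)

print(routable_targets({"i-aaa": True, "i-bbb": False, "i-ccc": True}))
# ['i-aaa', 'i-ccc']
print(routable_targets({"i-aaa": False, "i-bbb": False}))
# ['i-aaa', 'i-bbb']  (all unhealthy, so traffic goes to all of them)
```

This is why an unhealthy instance behind an ELB becomes unreachable through the load balancer's DNS name: the node simply stops routing to it.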

In this scenario, a possible cause why you are not able to connect to an EC2 instance behind an ELB is because the load balancer deems the instance as unhealthy. This is verified by the ELB health checks that you can see in your ELB dashboard, which determines whether an instance is healthy or not. If the instance is unhealthy under the load balancer, then it will not receive any traffic.

Hence, the correct answer is ELB Health Check.

Amazon CloudWatch is incorrect because this is just used to monitor your AWS resources and collect information in the form of logs, metrics, and events. Although this service can prove useful for investigation, it is not the first thing you should check in this scenario.

AWS CloudTrail is incorrect because this simply provides an event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. Although this service can prove useful for investigation, it is not the first thing you should check in this scenario.

AWS Config is incorrect because it just continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations. This service will not help you very much in your investigation of the issue.

3
Q
Which among the options below is tied to a specific Availability Zone where it was launched?
A.Amazon EFS
B.EBS Volume
C.Amazon EC2 Key Pairs
D.Amazon Machine Image
A

B.EBS Volume

Explanation:
An Amazon EBS volume is tied to its Availability Zone and can be attached only to instances in the same Availability Zone. Your EC2 instance is also tied to the Availability Zones where it was launched. However, its EC2 instance ID is tied to the AWS region.

An AWS resource can be a Global, Regional or a Zonal service. A Global service means that it covers all of the AWS Regions across the globe, while a regional service means that a resource is only applicable to one specific region at a time. A regional service may or may not have the ability to replicate the same resource to another region. Lastly, a Zonal service can only exist in one Availability Zone.

You don’t need to memorize the scope of every AWS service as long as you know the pattern. Only a handful of services are global, such as IAM, STS, Route 53, CloudFront, and WAF. Examples of Zonal services are EC2 instances and EBS volumes, which are tied to the Availability Zone where they were launched. Take note that although EBS volumes are a zonal service, EBS snapshots are regional since they are not tied to a specific Availability Zone. The rest of the services are regional in scope.
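The pattern above can be captured in a small lookup table. The list is illustrative, not exhaustive or official; the default follows the rule that most services are regional:

```python
# Illustrative mapping of a few AWS services to their scope,
# following the global/regional/zonal pattern described above.
SERVICE_SCOPE = {
    "IAM": "global",
    "STS": "global",
    "Route 53": "global",
    "CloudFront": "global",
    "WAF": "global",
    "EC2 instance": "zonal",
    "EBS volume": "zonal",
    "EBS snapshot": "regional",
    "EC2 key pair": "regional",
    "EFS": "regional",
    "AMI": "regional",
}

def scope_of(service):
    # Most AWS services are regional, so default to that.
    return SERVICE_SCOPE.get(service, "regional")

print(scope_of("EBS volume"))    # zonal
print(scope_of("EBS snapshot"))  # regional
print(scope_of("IAM"))           # global
```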

Hence, the correct answer is: EBS volume.

Amazon EC2 key pairs, Amazon EFS, and Amazon Machine Images are all incorrect because these are regional resources.

4
Q
Which service in AWS is best used for data analytics and data warehousing?
A.Amazon Aurora
B.Amazon Redshift
C.Amazon DynamoDB
D.Amazon S3
A

B.Amazon Redshift

Explanation:
Amazon Redshift is a fast, fully managed data warehouse that makes it simple and cost-effective to analyze all your data using standard SQL and your existing Business Intelligence (BI) tools. It allows you to run complex analytic queries against petabytes of structured data, using sophisticated query optimization, columnar storage on high-performance local disks, and massively parallel query execution.

Traditional data warehouses require significant time and resources to administer, especially for large datasets. In addition, the financial cost associated with building, maintaining, and growing self-managed, on-premises data warehouses is very high. As your data grows, you have to constantly trade off what data to load into your data warehouse and what data to archive in storage so you can manage costs, keep ETL complexity low, and deliver good performance.

Amazon Redshift not only significantly lowers the cost and operational overhead of a data warehouse, but with Redshift Spectrum, also makes it easy to analyze large amounts of data in its native format without requiring you to load the data.

Hence, the correct answer is Amazon Redshift.

Amazon Aurora is not the optimal choice for data warehousing since it is a row-based relational database designed for transactional (OLTP) workloads, not for the columnar, massively parallel analytic queries that Amazon Redshift is built for.

Amazon DynamoDB and Amazon S3 are incorrect because these are not the best choices for business analytics databases due to the complex queries that are being performed on large sets of data. You need a storage type that is designed for this purpose, such as Redshift.

5
Q

Which of the following should you set up in order to connect your AWS VPC network to your local network via an IPsec tunnel?
A.A NAT Gateway in your private subnet connected to your on-premises network
B.An on-premises NAT gateway device connected to your VPC’s Internet Gateway
C.A VPN Gateway in your VPC connected to the Customer Gateway in your on-premises network
D.VPC Peering connection between your on-premises network and VPC

A

C.A VPN Gateway in your VPC connected to the Customer Gateway in your on-premises network

Explanation:
An Amazon VPC VPN connection links your data center (or network) to your Amazon Virtual Private Cloud (VPC). A customer gateway is the anchor on your side of that connection. It can be a physical or software appliance. The anchor on the AWS side of the VPN connection is called a virtual private gateway.

By default, instances that you launch into an Amazon VPC can’t communicate with your own (remote) network. You can enable access to your remote network from your VPC by attaching a virtual private gateway to the VPC, creating a custom route table, updating your security group rules, creating an AWS Site-to-Site VPN (Site-to-Site VPN) connection, and configuring routing to pass traffic through the connection. Site-to-Site VPN supports Internet Protocol security (IPsec) VPN connections.

Hence, the correct answer is A VPN gateway in your VPC connected to the Customer Gateway in your on-premises network.

The option that says: VPC Peering connection between your on-premises network and VPC is incorrect because VPC Peering connects two different VPCs for inter-VPC communication. It does not connect your local network via IPsec VPN.

The option that says: A NAT gateway in your private subnet connected to your on-premises network is incorrect because a NAT Gateway is primarily used to allow EC2 instances launched in your private subnet to be able to connect to the public Internet, but disallows external servers to establish Internet connection to the VPC.

The option that says: An on-premises NAT gateway device connected to your VPC’s Internet Gateway is incorrect because as mentioned above, a NAT Gateway is not a suitable service/network device to be used here.

6
Q
You are having a hard time tracking the different background jobs with parallel or sequential steps across your distributed application system. What AWS service will simplify this task for you?
A.AWS Lambda
B.Amazon SQS
C.Amazon Simple Workflow
D.AWS CodePipeline
A

C.Amazon Simple Workflow

Explanation:
With Amazon Simple Workflow Service (SWF), you can create workflow tasks that will simplify and automate operations for you. You can configure automatic retries so that SWF retries failed activity or workflow tasks under the conditions you specify. If your app’s steps take more than 500 milliseconds to complete, you need to track the state of processing, and you need to recover or retry if a task fails, Amazon SWF can help you.

Amazon SWF promotes a separation between the control flow of your background job’s stepwise logic and the actual units of work that contain your unique business logic. This allows you to separately manage, maintain, and scale “state machinery” of your application from the core business logic that differentiates it. As your business requirements change, you can easily change application logic without having to worry about the underlying state machinery, task dispatch, and flow control.

Hence, the correct answer is Amazon SWF.

Amazon SQS is incorrect because this is just a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. It does not solve the problem stated in this scenario.

AWS Lambda is incorrect because this one is basically a serverless compute service in AWS and not an orchestration service. You cannot efficiently track processes using AWS Lambda, so it’s better to use AWS SWF instead.

AWS CodePipeline is incorrect because this is simply a fully managed continuous delivery service that helps you automate your release pipelines. It is a CI/CD tool with which you can deliver application updates faster and more reliably. However, CodePipeline is not the right service for tracking and coordinating work across distributed application components.

7
Q
Which of the following is the most cost-effective service to use if you want to coordinate multiple AWS services into serverless workflows?
A.AWS Batch
B.AWS Lambda
C.AWS Step Functions
D.Amazon SWF
A

C.AWS Step Functions

Explanation:
AWS Step Functions provides serverless orchestration for modern applications. Orchestration centrally manages a workflow by breaking it into multiple steps, adding flow logic, and tracking the inputs and outputs between the steps. As your applications execute, Step Functions maintains application state, tracking exactly which workflow step your application is in, and stores an event log of data that is passed between application components. That means that if networks fail or components hang, your application can pick up right where it left off.

Application development is faster and more intuitive with Step Functions because you can define and manage the workflow of your application independently from its business logic. Making changes to one does not affect the other. You can easily update and modify workflows in one place, without having to struggle with managing, monitoring and maintaining multiple point-to-point integrations. Step Functions frees your functions and containers from excess code, so your applications are faster to write, more resilient, and easier to maintain.
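What an orchestrator does at its core can be sketched in a few lines of Python. This is a pure simulation of the idea of running steps in order, passing state between them, and keeping an event log, not the Step Functions API:

```python
# Illustrative sketch of orchestration: run steps in sequence, pass each
# step's output to the next, and log the data flowing between steps.
def run_workflow(steps, initial_input):
    state, event_log = initial_input, []
    for name, fn in steps:
        event_log.append({"step": name, "input": state})
        state = fn(state)                 # the step's output becomes
        event_log[-1]["output"] = state   # the next step's input
    return state, event_log

steps = [
    ("double", lambda x: x * 2),
    ("add_one", lambda x: x + 1),
]
result, log = run_workflow(steps, 5)
print(result)                     # 11
print([e["step"] for e in log])   # ['double', 'add_one']
```

Because the event log records every input and output, a failed run could be resumed from the last completed step, which mirrors the "pick up right where it left off" behavior described above.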

Hence, the correct answer is AWS Step Functions.

Amazon SWF is incorrect because it is just a fully-managed state tracker and task coordinator service. It does not provide serverless orchestration to multiple AWS resources.

AWS Lambda is incorrect because although this service is used for serverless computing, it does not provide a direct way to coordinate multiple AWS services into serverless workflows.

AWS Batch is incorrect because this is primarily used to efficiently run hundreds of thousands of batch computing jobs in AWS.

8
Q

Which of the following is true if you store your data in AWS?
A.You are the owner of the data you store in AWS
B.Encryption is required for all data at rest and in transit
C.AWS has the right to review any data stored for potential threats
D.All data are stored durably and redundantly in different AZs

A

A.You are the owner of the data you store in AWS

Explanation:
AWS knows that their customers care deeply about privacy and data security. That’s why AWS gives you ownership and control over your content through simple, powerful tools that allow you to determine where your content will be stored, secure your content in transit and at rest, and manage your access to AWS services and resources for your users. They also implement responsible and sophisticated technical and physical controls that are designed to prevent unauthorized access to or disclosure of your content.

Hence, the correct answer is: You are the owner of the data you store in AWS.

The option that says: Encryption is required for all data at rest and in transit is incorrect since there is no such requirement imposed by AWS. Customers are free to encrypt data they wish to encrypt.

The option that says: All data are stored durably and redundantly in different AZs is incorrect since not all data receive such treatment. It is up to the customer to decide how they want their data stored in AWS.

The option that says: AWS has the right to review any data stored for potential threats is incorrect since the customer owns their data. AWS cannot simply access customer data without permission.

9
Q
Which AWS support plan includes a Concierge Support Team which will assist you with your billing and account inquiries, and work with you to implement billing and account best practices?
A.Developer support plan
B.Business support plan
C.Enterprise support plan
D.Basic support plan
A

C.Enterprise support plan

Explanation:
The Concierge Support Team is composed of AWS billing and account experts that specialize in working with enterprise accounts. They will quickly and efficiently assist you with your billing and account inquiries, and work with you to implement billing and account best practices so that you can focus on what matters: running your business.

AWS Support offers a range of plans that provide access to tools and expertise that support the success and operational health of your AWS solutions. All support plans provide 24x7 access to customer service, AWS documentation, whitepapers, and support forums. For technical support and more resources to plan, deploy, and improve your AWS environment, you can select a support plan that best aligns with your AWS use case.

AWS Support offers four support plans: Basic, Developer, Business, and Enterprise. The Basic plan is free of charge and offers support for account and billing questions and service limit increases. The other plans offer an unlimited number of technical support cases with pay-by-the-month pricing and no long-term contracts, providing the level of support that meets your needs.

All AWS customers automatically have around-the-clock access to these features of the Basic support plan:

  • Customer Service: one-on-one responses to account and billing questions
  • Support forums
  • Service health checks
  • Documentation, whitepapers, and best-practice guides

Customers with an Enterprise support plan are eligible for these additional features, which are not included in the Developer or Business plans:

  • Application architecture guidance: consultative partnership supporting specific use cases and applications.
  • Infrastructure event management: short-term engagement with AWS Support to get a deep understanding of your use case—and after analysis, provide architectural and scaling guidance for an event.
  • Technical account manager
  • White-glove case routing
  • Management business reviews
  • Concierge Support Team

Hence, the correct answer is the Enterprise support plan.

All other options are incorrect because the Basic, Developer and Business support plans do not include a Concierge Support Team.

10
Q
You wish to host a static website of your own in AWS at a low cost. Which service should be used for this purpose?
A.Amazon S3 Standard
B.Amazon S3 Infrequent Access
C.Amazon EC2
D.Amazon Elastic Load Balancer
A

A.Amazon S3 Standard

Explanation:
You can host a static website on Amazon Simple Storage Service (Amazon S3). On a static website, individual webpages include static content. To host a static website, you configure an Amazon S3 bucket for website hosting, and then upload your website content to the bucket. This bucket must have public read access.

Amazon S3 Standard only charges you a small amount per month, which is perfect for this scenario’s needs. Hence, the correct answer is Amazon S3 Standard.

Amazon S3 Infrequent Access is incorrect because this is not the best storage choice for a static website. Since this is a website, you expect multiple visitors a day who will also be accessing your objects at a frequent rate. S3 Infrequent Access GET requests cost more than the S3 Standard storage class.

Amazon EC2 is incorrect because using this will not be as cost-effective as using Amazon S3 Standard for static website hosting. This is because there are other costs to consider when using EC2 instances, such as EBS volumes.

Amazon Elastic Load Balancer is incorrect because a standalone load balancer cannot host content. You require a source for your content, such as an EC2 instance or Lambda function, and use this source in tandem with an ELB. The purpose of an ELB is to redirect incoming traffic to only your healthy instances to increase the availability of your website.

11
Q
You have a large number of log files that will be archived in AWS for a long time and should have a retrieval time of 12 hours or less. Which service is the most cost-effective storage class for this purpose?
A.Amazon S3 Standard-IA
B.Amazon EBS Cold HDD
C.Amazon S3 Glacier Deep Archive
D.Amazon S3 Glacier
A

C.Amazon S3 Glacier Deep Archive

Explanation:
S3 Glacier Deep Archive is Amazon S3’s lowest-cost storage class and supports long-term retention and digital preservation for data that may be accessed once or twice in a year. It is designed for customers — particularly those in highly-regulated industries, such as the Financial Services, Healthcare, and Public Sectors — that retain data sets for 7-10 years or longer to meet regulatory compliance requirements.

S3 Glacier Deep Archive can also be used for backup and disaster recovery use cases, and is a cost-effective and easy-to-manage alternative to magnetic tape systems, whether they are on-premises libraries or off-premises services. S3 Glacier Deep Archive complements Amazon S3 Glacier, which is ideal for archives where data is regularly retrieved and some of the data may be needed in minutes.

All objects stored in S3 Glacier Deep Archive are replicated and stored across at least three geographically-dispersed Availability Zones, designed for 99.999999999% (11 9's) durability, and can be restored within 12 hours.
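The selection logic in this question can be sketched as picking the cheapest class whose worst-case retrieval time still meets the requirement. The classes below are listed cheapest-first, and the hour figures follow the standard retrieval tiers described above; treat them as illustrative:

```python
# Pick the cheapest S3 storage class whose worst-case (standard-tier)
# retrieval time fits the requirement. Illustrative sketch, cheapest-first.
CLASSES = [
    ("S3 Glacier Deep Archive", 12),  # standard retrieval within 12 hours
    ("S3 Glacier", 5),                # standard retrieval in 3-5 hours
    ("S3 Standard-IA", 0),            # milliseconds, but costs more
]

def cheapest_class(max_retrieval_hours):
    for name, hours in CLASSES:
        if hours <= max_retrieval_hours:
            return name
    return "S3 Standard"

print(cheapest_class(12))  # S3 Glacier Deep Archive
print(cheapest_class(5))   # S3 Glacier
print(cheapest_class(1))   # S3 Standard-IA
```

With a 12-hour retrieval requirement, Deep Archive is the first (cheapest) class that qualifies, which is exactly the reasoning behind the answer.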

Hence, the correct answer is Amazon S3 Glacier Deep Archive.

Amazon S3 Standard-IA is incorrect because this costs more than Glacier and Glacier Deep Archive. This storage type takes into consideration that you will still need to retrieve your objects in a timely manner, although infrequently.

Amazon S3 Glacier is incorrect because the scenario requires retrieval within 12 hours, and Glacier Deep Archive is a more cost-effective option than the Glacier class while still meeting that retrieval timeframe.

Amazon EBS Cold HDD is incorrect because this is not the best nor the cheapest choice for archival. You use Cold HDD if you have infrequent workloads that require consistent throughput. EBS volumes need to be used along with EC2 instances for you to have access to the files stored in it.

12
Q

Which of the following is the responsibility of the customer in the AWS cloud? (Select TWO.)
A.Ensuring that AWS services comply with the standards required of them
B.Managing users in their AWS account
C.Managing data stored in the AWS Resources
D.Upgrading chipsets to the latest commercially available product
E.Disposal of disk drives

A

B.Managing users in their AWS account
C.Managing data stored in the AWS Resources

Explanation:
Deploying workloads on Amazon Web Services (AWS) helps streamline time-to-market, increase business efficiency, and enhance user performance for many organizations. But as you capitalize on this strategy, it is important to understand your role in securing your AWS environment. Based on the AWS Shared Responsibility Model, AWS provides a data center and network architecture built to meet the requirements of the most security-sensitive organizations, while you are responsible for securing the workloads and data you deploy on top of this infrastructure, including network traffic from your remote networks.

This customer/AWS shared responsibility model also extends to IT controls. Just as the responsibility to operate the IT environment is shared between AWS and its customers, so is the management, operation and verification of IT controls shared. AWS can help relieve customer burden of operating controls by managing those controls associated with the physical infrastructure deployed in the AWS environment that may previously have been managed by the customer. As every customer is deployed differently in AWS, customers can take advantage of shifting management of certain IT controls to AWS which results in a (new) distributed control environment.

Customers can then use the AWS control and compliance documentation available to them to perform their control evaluation and verification procedures as required. Below are examples of controls that are managed by AWS, AWS Customers and/or both.

Inherited Controls: Controls which a customer fully inherits from AWS.

  • Physical and Environmental controls

Shared Controls: Controls which apply to both the infrastructure layer and customer layers, but in completely separate contexts or perspectives. In a shared control, AWS provides the requirements for the infrastructure and the customer must provide their own control implementation within their use of AWS services.

Examples include:

  • Patch Management: AWS is responsible for patching and fixing flaws within the infrastructure, but customers are responsible for patching their guest OS and applications.
  • Configuration Management: AWS maintains the configuration of its infrastructure devices, but a customer is responsible for configuring their own guest operating systems, databases, and applications.
  • Awareness & Training: AWS trains AWS employees, but a customer must train their own employees.

Customer Specific: Controls which are solely the responsibility of the customer based on the application they are deploying within AWS services.

Examples include:

  • Service and Communications Protection or Zone Security which may require a customer to route or zone data within specific security environments.

Hence, the correct answers are: managing users in their AWS account and managing data stored in the AWS resources.

The following options are incorrect because these tasks fall under the responsibilities of AWS:

  • Disposal of disk drives
  • Ensuring that AWS services comply with the standards required of them
  • Upgrading chipsets to the latest commercially available product
13
Q
A customer wants to send push notifications to her mobile users whenever a new patch comes out. What service is the most appropriate for this task?
A.SES
B.SNS
C.SQS
D.SMS
A

B.SNS

Explanation:
Amazon Simple Notification Service (SNS) is a highly available, durable, secure, fully managed pub/sub messaging service that enables you to decouple microservices, distributed systems, and serverless applications. Amazon SNS provides topics for high-throughput, push-based, many-to-many messaging. Using Amazon SNS topics, your publisher systems can fan out messages to a large number of subscriber endpoints for parallel processing, including Amazon SQS queues, AWS Lambda functions, and HTTP/S webhooks. Additionally, SNS can be used to fan out notifications to end users using mobile push, SMS, and email.

Amazon SNS uses cross availability zone message storage to provide high message durability. Running within Amazon’s proven network infrastructure and datacenters, Amazon SNS topics are available whenever your applications need them. All messages published to Amazon SNS are stored redundantly across multiple geographically separated servers and data centers. Amazon SNS reliably delivers messages to all valid AWS endpoints, such as Amazon SQS queues and AWS Lambda functions.

Amazon SNS enables message filtering and fanout to a large number of subscribers, including serverless functions, queues, and distributed systems. Additionally, Amazon SNS fans out notifications to end users via mobile push messages, SMS, and email.
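The fan-out pattern can be sketched in Python. This is a pure simulation of a pub/sub topic pushing one published message to every subscribed endpoint, not the SNS API:

```python
# Illustrative pub/sub fan-out, mimicking how an SNS topic pushes one
# published message to every subscribed endpoint (pure simulation).
class Topic:
    def __init__(self):
        self.subscribers = []

    def subscribe(self, endpoint):
        self.subscribers.append(endpoint)

    def publish(self, message):
        # Push the same message to every subscriber; no polling involved.
        for endpoint in self.subscribers:
            endpoint(message)

received = []
topic = Topic()
topic.subscribe(lambda m: received.append(("mobile-push", m)))
topic.subscribe(lambda m: received.append(("sms", m)))
topic.subscribe(lambda m: received.append(("email", m)))
topic.publish("New patch released")
print(received)
```

The key contrast with SQS is that the topic pushes to subscribers; a queue would sit idle until a consumer polled it.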

Hence, SNS is the correct answer.

Amazon SES is incorrect since this is an emailing service. It cannot send push notifications to mobile users.

Amazon SQS is incorrect since this is a pull-based (polling) message queuing service; consumers must poll the queue, so it cannot push notifications to mobile users.

AWS Server Migration Service (SMS) is incorrect because this is simply an agentless service that makes it easier and faster for you to migrate thousands of on-premises workloads to AWS.

14
Q
Which AWS service allows you to check if your running resources conform with the AWS best practices?
A.AWS Trusted Advisor
B.AWS IAM
C.Amazon CloudWatch
D.AWS Config
A

A.AWS Trusted Advisor

Explanation:
AWS Trusted Advisor is an application that draws upon best practices learned from AWS’ aggregated operational history of serving hundreds of thousands of AWS customers. Trusted Advisor inspects your AWS environment and makes recommendations for saving money, improving system performance, or closing security gaps.

Trusted Advisor includes an ever-expanding list of checks in the following five categories:

Cost Optimization – recommendations that can potentially save you money by highlighting unused resources and opportunities to reduce your bill.

Security – identification of security settings that could make your AWS solution less secure.

Fault Tolerance – recommendations that help increase the resiliency of your AWS solution by highlighting redundancy shortfalls, current service limits, and over-utilized resources.

Performance – recommendations that can help to improve the speed and responsiveness of your applications.

Service Limits – recommendations that will tell you when service usage is more than 80% of the service limit.

Hence, the correct answer is AWS Trusted Advisor.

Amazon CloudWatch is incorrect because this simply provides you with data and actionable insights to monitor your applications, respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health. CloudWatch collects monitoring and operational data in the form of logs, metrics, and events, providing you with a unified view of AWS resources, applications, and services that run on AWS and on-premises servers. It does not check whether your resources conform with the best practices.

AWS Config is incorrect because this is just a compliance monitoring tool. It makes sure your resources are in line with your set guidelines and requirements.

AWS IAM is incorrect because this is simply a security service used to manage your accounts and user permissions. It does not check whether your resources conform to the best practices.

15
Q

Which of the following security group rules are valid? (Select TWO.)
A.Inbound RDP rule with an address range as a source
B.Inbound HTTP rule with security group ID as a source
C.Outbound HTTPS rule with hostname as destination
D.Outbound MYSQL rule with IP address as source
E.Inbound TCP rule with instance ID as source

A

A.Inbound RDP rule with an address range as a source
B.Inbound HTTP rule with security group ID as a source

Explanation:
Security groups accept IP address, IP address range, and security group ID as either source or destination of inbound or outbound rules.
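That acceptance rule can be sketched as a small validator. This is a simplified illustration, not the real EC2 validation logic:

```python
# Illustrative validator for security group rule sources/destinations:
# an IP address, a CIDR range, or a security group ID is valid;
# instance IDs and hostnames are not. (Simplified sketch.)
import ipaddress

def is_valid_sg_source(value):
    if value.startswith("sg-"):  # security group ID
        return True
    try:
        ipaddress.ip_network(value, strict=False)  # IP address or CIDR range
        return True
    except ValueError:
        return False

print(is_valid_sg_source("203.0.113.0/24"))  # True  (address range)
print(is_valid_sg_source("sg-0abc1234"))     # True  (security group ID)
print(is_valid_sg_source("i-0abc1234"))      # False (instance ID)
print(is_valid_sg_source("example.com"))     # False (hostname)
```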

Hence, the correct answers are: Inbound HTTP rule with security group ID as source and Inbound RDP rule with an address range as source.

Inbound TCP rule with instance ID as source and Outbound HTTPS rule with hostname as destination are both incorrect because Instance IDs or hostnames are not valid values.

Outbound MYSQL rule with IP address as source is incorrect because outbound rules specify a destination, not a source. Since it is an outbound rule, you should set the allowed destination instead.

16
Q
Which of the following services allow you to mask downtime of your application by rerouting your traffic to healthy instances? (Select TWO.)
A.AWS ELB
B.VPC Route tables
C.Amazon CloudFront
D.AWS EC2 Auto Scaling
E.Amazon Route 53
A

A.AWS ELB
E.Amazon Route 53

Explanation:
AWS ELB and Amazon Route 53 help mask downtime by redirecting traffic to your healthy instances and allowing failover to your secondary systems. This is achieved through a combination of different health checks, routing policies, and failover policies.

Amazon CloudFront, AWS EC2 Auto Scaling, and VPC Route Tables do not help mask downtime. Amazon CloudFront boosts your content delivery speeds through the help of edge locations found globally. EC2 Auto Scaling allows you to automatically scale the number of running instances required to handle your workloads. VPC Route Tables dictate how traffic flows inside your VPC.

17
Q

How can your RDS production instances be more cost-effective when they will be used for a long period of time?
A.You can stop your RDS instances when idle to prevent AWS from charging you during this time
B.You can easily backup, terminate and restore RDS instances when you need them
C.You can avail of reserved instances to get discounts on your instance costs
D.AWS does not charge you when your RDS is idle

A

C.You can avail of reserved instances to get discounts on your instance costs

Explanation:
Amazon RDS Reserved Instances give you the option to reserve a DB instance for a one or three year term and in turn receive a significant discount compared to the On-Demand Instance pricing for the DB instance.

You can choose between three payment options when you purchase a Reserved Instance. With the All Upfront option, you pay for the entire Reserved Instance with one upfront payment. This option provides you with the largest discount compared to On-Demand Instance pricing. With the Partial Upfront option, you make a low upfront payment and are then charged a discounted hourly rate for the instance for the duration of the Reserved Instance term. The No Upfront option does not require any upfront payment and provides a discounted hourly rate for the duration of the term.
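The effect of the three payment options can be illustrated with toy arithmetic (all prices below are hypothetical, not actual RDS rates):

```python
HOURS_PER_YEAR = 8760

def effective_hourly(upfront: float, hourly: float, years: int = 1) -> float:
    """Blend an upfront payment into an effective hourly rate
    over the full reservation term (hypothetical prices)."""
    total = upfront + hourly * HOURS_PER_YEAR * years
    return total / (HOURS_PER_YEAR * years)

on_demand   = 0.100                                    # $/hour, hypothetical
no_upfront  = effective_hourly(upfront=0,   hourly=0.070)
partial     = effective_hourly(upfront=300, hourly=0.035)
all_upfront = effective_hourly(upfront=550, hourly=0.0)

# The bigger the upfront share, the lower the effective rate:
assert all_upfront < partial < no_upfront < on_demand
```

The larger the upfront share, the lower the blended hourly cost over the term, which matches the discount ordering described above.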

All Reserved Instance types are available for Aurora, MySQL, MariaDB, PostgreSQL, Oracle and SQL Server database engines.

Hence, the correct answer is: You can avail of reserved instances to get discounts on your instance costs.

The option that says: You can stop your RDS instances when idle to prevent AWS from charging you during this time is not the best way to save money, as it entails more operational effort than necessary. It is still better to opt for Reserved Instances for your RDS database cluster instead.

The option that says: You can easily backup, terminate, and restore RDS instances when you need them is not the best solution. There is too much effort involved.

The option that says: AWS does not charge you when your RDS is idle is incorrect. Idle time or not, once your RDS instance is running, AWS charges you for it.


18
Q
What is the minimum support plan that will provide you access to all Trusted Advisor Checks?
A.Enterprise
B.Basic
C.Developer
D.Business
A

D.Business

Explanation:
AWS Support offers a range of plans that provide access to tools and expertise that support the success and operational health of your AWS solutions. All support plans provide 24x7 access to customer service, AWS documentation, whitepapers, and support forums. For technical support and more resources to plan, deploy, and improve your AWS environment, you can select a support plan that best aligns with your AWS use case.

It is important to know what services each support plan offers and how they will matter to your company. Both the Business and Enterprise support plans provide the full set of Trusted Advisor checks. The Developer plan only allows you access to the 7 core Trusted Advisor checks.

Since the question is asking for the minimum support plan available, the Business support plan is the correct answer.

Basic and Developer support plans are incorrect because these will only allow you access to the 7 core Trusted Advisor checks.

Enterprise support plan is incorrect because although it provides the full set of trusted advisor checks, it costs a lot more than the Business support plan.

19
Q
Which of the following services will be able to reroute traffic to your secondary EC2 instances in another region during disaster recovery?
A.VPC Peering
B.Amazon VPC
C.Amazon Route 53
D.AWS ELB
A

C.Amazon Route 53

Explanation:
Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service. It is designed to give developers and businesses an extremely reliable and cost-effective way to route end users to Internet applications by translating names like www.i-love-boracay.com into the numeric IP addresses like 192.0.2.1 that computers use to connect to each other. Amazon Route 53 is fully compliant with IPv6 as well.

You can use Route 53 to check the health of your resources and only return healthy resources in response to DNS queries. There are three types of DNS failover configurations:

  1. Active-passive: Route 53 actively returns a primary resource. In case of failure, Route 53 returns the backup resource. Configured using a failover policy.
  2. Active-active: Route 53 actively returns more than one resource. In case of failure, Route 53 fails back to the healthy resource. Configured using any routing policy besides failover.
  3. Combination: Multiple routing policies (such as latency-based, weighted, etc.) are combined into a tree to configure more complex DNS failover.
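The active-passive case can be sketched as a tiny decision function (a local analogy only, not how Route 53 is actually configured):

```python
def resolve(records, health):
    """Toy active-passive failover: answer with the primary record
    while it is healthy, otherwise fail over to the secondary."""
    primary, secondary = records["PRIMARY"], records["SECONDARY"]
    return primary if health.get(primary, False) else secondary

# Hypothetical record values standing in for failover record sets.
records = {"PRIMARY": "10.0.1.10", "SECONDARY": "10.0.2.10"}

print(resolve(records, {"10.0.1.10": True}))    # primary answers
print(resolve(records, {"10.0.1.10": False}))   # failover to secondary
```

In real Route 53, the health check and the failover routing policy do this evaluation for you on every DNS query.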

Hence, the correct answer is: Amazon Route 53.

Both Amazon VPC and VPC Peering are incorrect because they don’t do any kind of failover rerouting procedure during a disaster.

AWS ELB is incorrect. If the ELB itself were to fail then no traffic would reach your servers at all. In the event of a regional outage, this can likely happen since ELB is a regional service, unlike Route 53 which is considered a global service. You would have to then rely on Route 53 to redirect traffic to your secondary application hosted in another region.

20
Q

What service should you use in order to add user sign-up, sign-in, and access control to your mobile app with a feature that supports sign-in with social identity providers such as Facebook, Google, and Amazon, and enterprise identity providers via SAML 2.0?
A.AWS Directory Service
B.AWS Single Sign-On (SSO)
C.AWS Identity and Access Management (IAM)
D.Amazon Cognito

A

D.Amazon Cognito

Explanation:
Amazon Cognito lets you add user sign-up, sign-in, and access control to your web and mobile apps quickly and easily. Amazon Cognito scales to millions of users and supports sign-in with social identity providers, such as Facebook, Google, and Amazon, and enterprise identity providers via SAML 2.0.

Amazon Cognito provides solutions to control access to backend resources from your app. You can define roles and map users to different roles so your app can access only the resources that are authorized for each user.

With a built-in UI and easy configuration for federating identity providers, you can integrate Amazon Cognito to add user sign-in, sign-up, and access control to your app in minutes. You can customize the UI to put your company branding front and center for all user interactions.

Hence, the correct answer is Amazon Cognito.

AWS Single Sign-On (SSO) is incorrect because this is just a cloud SSO service that makes it easy to centrally manage SSO access to multiple AWS accounts and business applications. It doesn’t provide an easy way to add user sign-in, sign-up, and access control to your mobile app, unlike Amazon Cognito.

AWS Identity and Access Management (IAM) is incorrect because this just enables you to manage access to AWS services and resources securely.

AWS Directory Service is incorrect because it only enables your directory-aware workloads and AWS resources to use managed Active Directory in the AWS Cloud.

21
Q
Which of the following infrastructure correlates to a VPC’s subnet?
A.Region
B.Server
C.Edge Location
D.Availability Zone
A

D.Availability Zone

Explanation:
A VPC spans all the Availability Zones in the region. After creating a VPC, you can add one or more subnets in each Availability Zone. Each subnet must reside entirely within one Availability Zone and cannot span zones. Availability Zones are distinct locations that are engineered to be isolated from failures in other Availability Zones. By launching instances in separate Availability Zones, you can protect your applications from the failure of a single location.

If a subnet’s traffic is routed to an Internet gateway, the subnet is known as a public subnet. If you want your instance in a public subnet to communicate with the Internet over IPv4, it must have a public IPv4 address or an Elastic IP address (IPv4).

Hence, the correct answer is Availability zone.

Both AWS Regions and Edge locations are incorrect because subnets are not directly correlated with these two.

Server is incorrect because subnets are part of a VPC within the AWS global network. Subnets are not bound by servers in any way.

22
Q

What is the best way to keep track of all activities made in your AWS account?
A.Create a multi-region trail in AWS CloudTrail
B.Set up MFA logging to know who is currently in your environment
C.Use Amazon CloudWatch Logs to log all activities
D.Use LDAP authentication on your AWS account

A

A.Create a multi-region trail in AWS CloudTrail

Explanation

AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. Creating a multi-region trail will allow you to keep your activity records in an S3 bucket and prevent them from getting overwritten automatically.
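The kind of question a trail's stored records let you answer can be illustrated with plain Python over simplified, made-up event records (the real CloudTrail event schema is much richer):

```python
# Simplified CloudTrail-style event records (shape and values illustrative).
events = [
    {"EventName": "RunInstances", "Username": "alice", "Region": "us-east-1"},
    {"EventName": "DeleteBucket", "Username": "bob",   "Region": "eu-west-1"},
    {"EventName": "ConsoleLogin", "Username": "alice", "Region": "us-east-1"},
]

def activity_of(events, user):
    """All recorded actions taken by one user, across regions --
    the kind of audit question a multi-region trail answers."""
    return [e["EventName"] for e in events if e["Username"] == user]

print(activity_of(events, "alice"))   # ['RunInstances', 'ConsoleLogin']
```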

Hence, the correct answer is: Create a multi-region trail in AWS CloudTrail.

Using Amazon CloudWatch Logs is incorrect since this service is not related to user actions in your account. CloudWatch Logs enables you to centralize the logs from all of your systems, applications, and AWS services that you use in a single, highly scalable service.

Setting up MFA is incorrect because it will not tell you exactly who performed what in your AWS account.

Using LDAP authentication on your AWS account is incorrect because not all companies support it. Access logging can be done on the company's side; however, this cannot capture the actions performed within the AWS account.

23
Q

In which of the following situations should you use Amazon SQS? (Select TWO.)
A.If you need to submit push notifications to your event subscribers
B.When your application requires the use of industry-standard messaging protocols for message delivery
C.When you have to automate certain tasks in your workflow
D.If you need to decouple certain parts of your system for better fault tolerance
E.If you require durable storage for your application events or messages

A

D.If you need to decouple certain parts of your system for better fault tolerance
E.If you require durable storage for your application events or messages

Explanation

Use Amazon SQS to transmit any volume of data, at any level of throughput, without losing messages or requiring other services to be available. SQS lets you decouple application components so that they run and fail independently, increasing the overall fault tolerance of the system. Multiple copies of every message are stored redundantly across multiple availability zones so that they are available whenever needed.
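The decoupling idea can be sketched locally with Python's standard `queue` module standing in for SQS (an analogy, not the SQS API): the producer can send messages even when no consumer is running yet, and each side fails independently.

```python
import queue
import threading

# A local queue standing in for SQS: the producer does not need
# the consumer to be available at send time -- messages wait
# (here: in memory; in SQS: redundantly across AZs) until polled.
q = queue.Queue()

def producer():
    for i in range(3):
        q.put(f"order-{i}")        # analogous to SendMessage

def consumer(out):
    for _ in range(3):
        out.append(q.get())        # analogous to ReceiveMessage
        q.task_done()              # analogous to DeleteMessage

received = []
producer()                          # runs fine with no consumer started yet
t = threading.Thread(target=consumer, args=(received,))
t.start()
t.join()
print(received)                     # ['order-0', 'order-1', 'order-2']
```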

If you need to submit push notifications, you should use Amazon SNS instead.

If you need to automate certain workflows in AWS, you should use Amazon SWF.

If your messaging service requires the use of certain protocols, try using Amazon MQ.

24
Q
Which of the following services should you use to deploy and easily rollback a web application from your Git repository to your on-premises server?
A.AWS Systems Manager
B.AWS Elastic Beanstalk
C.AWS OpsWorks
D.AWS CloudFormation
A

C.AWS OpsWorks

Explanation

Enterprise environments are often a mix of cloud, on-premises data centers, and edge locations. Hybrid cloud architectures help organizations integrate their on-premises and cloud operations to support a broad spectrum of use cases using a common set of cloud services, tools, and APIs across on-premises and cloud environments.

Customers can seamlessly integrate their on-premises and cloud storage, networking, identity management, and security policies to enable use cases such as data center extension to the cloud, backup, and disaster recovery to the cloud, and hybrid data processing.

AWS offers services that integrate application deployment and management across on-premises and cloud environments for a robust hybrid architecture. Below are the following services that you can use to manage or deploy applications to your servers running on-premises:

OpsWorks – AWS OpsWorks is a configuration management service that helps customers configure and operate applications, both on-premises and in the AWS Cloud, using Chef and Puppet.

CodeDeploy – AWS CodeDeploy automates code deployments to any instance, including Amazon EC2 instances and instances running on-premises. AWS CodeDeploy makes it easier to rapidly release new features, avoids downtime during application deployment, and handles the complexity of updating applications.

Hence, the correct answer in this scenario is AWS OpsWorks.

Both AWS CloudFormation and AWS Elastic Beanstalk are incorrect because these services can only deploy applications to your AWS resources and not to the servers located in your on-premises data center.

AWS Systems Manager is incorrect because although you can remotely operate and deploy packages/scripts to your on-premises servers with this one, this service is still not suitable to be used for deploying your web application. It also doesn’t have a feature to easily rollback your deployments unlike OpsWorks. This service is primarily used to automate maintenance and deployment tasks on Amazon EC2 and on-premises instances, or automatically apply patches, updates, and configuration changes across any resource group.

25
Q

You have multiple AWS accounts in your organization. How can you take advantage of this to save on your monthly costs?
A.Use AWS Organizations and enable consolidated billing
B.Migrate all resources to a central account to take advantage of volume discounts
C.Set a budget to each account to minimize spending
D.Create an IAM Group containing all the AWS accounts and enable consolidated billing

A

A.Use AWS Organizations and enable consolidated billing

Explanation:
AWS Organizations is an account management service that enables you to consolidate multiple AWS accounts into an organization that you create and centrally manage. AWS Organizations includes account management and consolidated billing capabilities that enable you to better meet the budgetary, security, and compliance needs of your business. As an administrator of an organization, you can create accounts in your organization and invite existing accounts to join the organization.

You can use the consolidated billing feature in AWS Organizations to consolidate billing and payment for multiple AWS accounts. You can combine the usage across all accounts in the organization to share the volume pricing discounts and Reserved Instance discounts. This can result in a lower charge for your project, department, or company than with individual standalone accounts.
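The volume-discount effect can be shown with toy tiered pricing (the rates and tier boundary below are illustrative, not actual AWS prices): aggregating usage across accounts reaches the cheaper tier sooner.

```python
def tiered_cost(gb: float) -> float:
    """Hypothetical tiered storage price: $0.023/GB for the
    first 50 TB, $0.021/GB beyond that (illustrative numbers)."""
    first_tier = min(gb, 50_000)
    rest = max(gb - 50_000, 0)
    return first_tier * 0.023 + rest * 0.021

accounts = [30_000, 30_000]      # GB used by two separate accounts

separate = sum(tiered_cost(gb) for gb in accounts)   # billed standalone
combined = tiered_cost(sum(accounts))                # consolidated billing

assert combined < separate   # aggregated usage crosses into the cheaper tier
print(separate, combined)    # 1380.0 1360.0
```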

Hence, the correct answer is: Use AWS Organizations and enable consolidated billing.

The option that says: Create an IAM Group containing all the AWS accounts and enable consolidated billing is incorrect because IAM Groups are used for IAM Users. It is not intended to be used with other AWS accounts.

The option that says: Migrate all resources to a central account to take advantage of volume discounts is incorrect because migrating your applications would take too much time, effort and resources to pull off. You can easily combine your AWS accounts through AWS Organizations, and lessen the work that needed to be done.

The option that says: Set a budget for each account to minimize spending is incorrect because although setting a maximum budget for each account is an acceptable practice, you still cannot gain significant cost reductions in your billing. Better to use AWS Organizations and enable Consolidated Billing instead to take advantage of volume discounts.

26
Q

Which of the following statements is true for AWS CloudTrail?
A.CloudTrail is disabled by default for newly created AWS Accounts
B.CloudTrail is able to capture application error logs from EC2 Instances
C.CloudTrail charges you for every management event trail created
D.When you create a trail in the AWS Management Console, the trail applies to all AWS Regions by default

A

D.When you create a trail in the AWS Management Console, the trail applies to all AWS Regions by default

Explanation:
AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command-line tools, and other AWS services. This event history simplifies security analysis, resource change tracking, and troubleshooting.

With AWS CloudTrail, simplify your compliance audits by automatically recording and storing event logs for actions made within your AWS account. Integration with Amazon CloudWatch Logs provides a convenient way to search through log data, identify out-of-compliance events, accelerate incident investigations, and expedite responses to auditor requests.

Hence, the correct answer is: When you create a trail in the AWS Management Console, the trail applies to all AWS Regions by default. Alternatively, you can also restrict the trail to a specific Region if you wish.

The option that says: CloudTrail is disabled by default for newly created AWS accounts is incorrect because AWS CloudTrail is now enabled by default for ALL CUSTOMERS and will provide visibility into the past 90 days of account activity without the need for you to configure a trail in the service to get started.

The option that says: CloudTrail is able to capture application error logs from EC2 instances is incorrect because CloudTrail does not capture error logs from your EC2 instances. You may instead use CloudWatch Logs for this purpose.

The option that says: CloudTrail charges you for every management event trail created is incorrect because CloudTrail does not charge you for your first management trail, only for the additional management trails you create after the first one.

27
Q
Which service in AWS supports various business intelligence tools such as Apache Spark so that you may perform data transformation workloads (ETL) and analytics at a low cost?
A.Amazon EMR
B.Amazon RDS
C.AWS Elasticsearch
D.Amazon Redshift
A

A.Amazon EMR

Explanation:
Amazon EMR is a web service that enables businesses, researchers, data analysts, and developers to easily and cost-effectively process vast amounts of data. It utilizes a hosted Apache Hadoop framework running on the web-scale infrastructure of Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Simple Storage Service (Amazon S3). Amazon EMR lets you focus on crunching or analyzing your data without having to worry about time-consuming set-up, management or tuning of Hadoop clusters or the compute capacity upon which they sit.

Customers across many industry verticals use EMR to securely and reliably handle broad sets of big data use cases, including machine learning, data transformations (ETL), financial and scientific simulation, bioinformatics, log analysis, and deep learning. EMR gives teams the flexibility to run use cases on single-purpose, short-lived clusters that automatically scale to meet demand, or on long-running, highly available clusters using the new multi-master deployment mode.

Hence, the correct answer is Amazon EMR.

Amazon Elasticsearch, Amazon Redshift, and Amazon RDS are incorrect because these are database and search services. They do not natively run big data frameworks such as Apache Spark, and you cannot perform ETL workloads using these services alone.

28
Q

Which of the following are advantages of Cloud Computing? (Select TWO.)
A. Increase speed and agility
B.Trade capital expense for variable expense
C.Trade variable expense for capital expense
D.Achieve lower economies of scale
E.Massive discounts for your computers, network devices and other equipment from Amazon Online shopping website

A

A. Increase speed and agility
B.Trade capital expense for variable expense

Explanation:
Cloud computing is the on-demand delivery of compute power, database, storage, applications, and other IT resources via the internet with pay-as-you-go pricing.

Whether you are using it to run applications that share photos to millions of mobile users or to support business critical operations, a cloud services platform provides rapid access to flexible and low cost IT resources. With cloud computing, you don’t need to make large upfront investments in hardware and spend a lot of time on the heavy lifting of managing that hardware. Instead, you can provision exactly the right type and size of computing resources you need to power your newest idea or operate your IT department. You can access as many resources as you need, almost instantly, and only pay for what you use.

There are six advantages of using Cloud Computing:

  1. Trade capital expense for variable expense

– Instead of having to invest heavily in data centers and servers before you know how you’re going to use them, you can pay only when you consume computing resources, and pay only for how much you consume.

  2. Benefit from massive economies of scale

– By using cloud computing, you can achieve a lower variable cost than you can get on your own. Because usage from hundreds of thousands of customers is aggregated in the cloud, providers such as AWS can achieve higher economies of scale, which translates into lower pay-as-you-go prices.

  3. Stop guessing capacity

– Eliminate guessing on your infrastructure capacity needs. When you make a capacity decision prior to deploying an application, you often end up either sitting on expensive idle resources or dealing with limited capacity. With cloud computing, these problems go away. You can access as much or as little capacity as you need, and scale up and down as required with only a few minutes’ notice.

  4. Increase speed and agility

– In a cloud computing environment, new IT resources are only a click away, which means that you reduce the time to make those resources available to your developers from weeks to just minutes. This results in a dramatic increase in agility for the organization, since the cost and time it takes to experiment and develop is significantly lower.

  5. Stop spending money running and maintaining data centers

– Focus on projects that differentiate your business, not the infrastructure. Cloud computing lets you focus on your own customers, rather than on the heavy lifting of racking, stacking, and powering servers.

  6. Go global in minutes

– Easily deploy your application in multiple regions around the world with just a few clicks. This means you can provide lower latency and a better experience for your customers at minimal cost.
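The first advantage, trading capital expense for variable expense, can be illustrated with toy arithmetic (all figures are hypothetical):

```python
# Capital expense: buy enough servers upfront to handle peak load,
# even though most of that capacity sits idle.
capex = 4 * 10_000                      # 4 servers at $10,000 each (hypothetical)

# Variable expense: pay an hourly rate only for the capacity
# actually running, which on average is far less than peak.
avg_instances = 2                       # average actual need
variable = avg_instances * 8760 * 0.40  # 2 instances, 1 year, $0.40/hour

assert variable < capex
print(capex, variable)   # 40000 7008.0
```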

Hence, the correct answers are Trade capital expense for variable expense and Increase speed and agility.

The option that says: Trade variable expense for capital expense is incorrect because it should be the other way around. One of the advantages of Cloud Computing is the opportunity to trade capital expense for variable expense.

The option that says: Achieve lower economies of scale is incorrect because this is actually the opposite. By using cloud computing, you can achieve a lower variable cost than you can get on your own through the massive economies of scale.

The option that says: Massive discounts for your computers, network devices and other equipment from Amazon online shopping website is incorrect because the Amazon.com platform is different from its Amazon Web Services cloud computing division. Discounts from purchasing computer hardware is not an advantage of cloud computing.

29
Q
Which of the following services allows you to store Docker images and orchestrate Docker containers in a simple and cost-effective manner? (Select TWO.)
A.AWS Lambda
B.AWS CodeCommit
C.Amazon ECR
D.Amazon ECS
E.AWS Batch
A

C.Amazon ECR
D.Amazon ECS

Explanation:
Amazon Elastic Container Service (Amazon ECS) is a highly scalable, high-performance container orchestration service that supports Docker containers and allows you to easily run and scale containerized applications on AWS.

Amazon Elastic Container Registry (ECR) is a fully-managed Docker container registry that makes it easy for developers to store, manage, and deploy Docker container images. Amazon ECR is integrated with Amazon Elastic Container Service (ECS), simplifying your development to production workflow.

AWS Lambda is a serverless compute service of AWS. It does not handle containers.

AWS CodeCommit is just a fully-managed source control service that hosts secure Git-based repositories. It makes it easy for teams to collaborate on code in a secure and highly scalable ecosystem.

AWS Batch is a compute service that is used to run hundreds of thousands of batch computing jobs on AWS. It is not used for Docker container orchestration or Docker image repository.

30
Q
What feature will allow you to label and sort your EC2 instances according to their deployment stage (development, staging, production)?
A.Instance userdata
B.Instance type
C.Instance metadata
D.Instance tags
A

D.Instance tags

Explanation:
Amazon EC2 provides different resources that you can create and use. Some of these resources include images, instances, volumes, and snapshots. When you create a resource, AWS assigns the resource a unique resource ID. Some resources can be tagged with values that you define, to help you organize and identify them.

To help you manage your instances, images, and other Amazon EC2 resources, you can optionally assign your own metadata to each resource in the form of tags. Tags enable you to categorize your AWS resources in different ways, for example, by purpose, owner, or environment. This is useful when you have many resources of the same type—you can quickly identify a specific resource based on the tags you’ve assigned to it.
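As a sketch of how tags enable sorting by deployment stage, here is plain Python over simplified instance records (the shape loosely mirrors what the EC2 API returns, with tags flattened to a dict for brevity):

```python
# Sample instances with an Environment tag (values illustrative).
instances = [
    {"InstanceId": "i-aaa", "Tags": {"Environment": "production"}},
    {"InstanceId": "i-bbb", "Tags": {"Environment": "staging"}},
    {"InstanceId": "i-ccc", "Tags": {"Environment": "production"}},
]

def by_tag(instances, key, value):
    """Return the IDs of instances whose tag `key` equals `value`."""
    return [i["InstanceId"] for i in instances
            if i["Tags"].get(key) == value]

print(by_tag(instances, "Environment", "production"))  # ['i-aaa', 'i-ccc']
```

The EC2 console and CLI support the same idea natively through tag filters.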

Hence, the correct answer is Instance tags.

Instance metadata is incorrect because it just details the server and network information of the instance itself. This is automatically generated for you by AWS when you launch an instance. To add metadata of your own, use tags.

Instance user data is incorrect because this is just a custom script that you prepare if you want your instance to be initialized every time it is launched. You cannot add metadata such as environment tags here.

Instance type is incorrect because this is not modifiable and cannot contain user-provided metadata. These are already fixed values provided to you by AWS.

31
Q
Which of the following is a continuous delivery service that you should use to automate your release pipelines for fast and reliable application and infrastructure updates?
A.AWS CodeCommit
B.Amazon Data Pipeline
C.AWS CodePipeline
D.AWS CodeDeploy
A

C.AWS CodePipeline

Explanation:
CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates. CodePipeline automates the build, test, and deploy phases of your release process every time there is a code change, based on the release model you define. This enables you to rapidly and reliably deliver features and updates. You can easily integrate CodePipeline with third-party services such as GitHub or with your own custom plugin. With AWS CodePipeline, you only pay for what you use. There are no upfront fees or long-term commitments.

AWS CodePipeline automates your software release process, allowing you to rapidly release new features to your users. With CodePipeline, you can quickly iterate on feedback and get new features to your users faster.

Automating your build, test, and release process allows you to quickly and easily test each code change and catch bugs while they are small and simple to fix. You can ensure the quality of your application or infrastructure code by running each change through your staging and release process.

Hence, the correct answer is AWS CodePipeline.

AWS CodeDeploy is incorrect because this is primarily used to automate code deployments to any instance, including EC2 instances and instances running on-premises.

Amazon Data Pipeline is incorrect because this service is primarily used for data workflow orchestration which helps you reliably process and move data between different AWS compute and storage services, as well as on-premises data sources, at specified intervals. This has nothing to do with application release pipelines which is what AWS CodePipeline handles.

AWS CodeCommit is incorrect because this is just a fully-managed source control service that makes it easy for companies to host secure and highly scalable private Git repositories.

32
Q
Which of the following is a serverless compute service of AWS?
A.Amazon DynamoDB
B.AWS Lambda
C.Amazon Aurora
D.Amazon Athena
A

B.AWS Lambda

Explanation:
AWS Lambda is a serverless compute service that lets you run code without provisioning or managing servers. You pay only for the compute time you consume - there is no charge when your code is not running. With Lambda, you can run code for virtually any type of application or backend service - all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale your code with high availability.

AWS Lambda automatically scales your application by running code in response to each trigger. Your code runs in parallel and processes each trigger individually, scaling precisely with the size of the workload. With AWS Lambda, you are charged for every 100ms your code executes and the number of times your code is triggered.
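A minimal handler sketch is below; the `(event, context)` signature is Lambda's actual Python convention, while the event field and return shape here are made-up examples:

```python
def handler(event, context):
    """Minimal Lambda handler sketch. Lambda invokes this function
    once per trigger with the event payload; provisioning, scaling,
    and server management all happen outside your code."""
    name = event.get("name", "world")   # hypothetical event field
    return {"statusCode": 200, "body": f"Hello, {name}!"}

# Invoked locally the same way Lambda would call it:
print(handler({"name": "Tutorials Dojo"}, None))
```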

Hence, the correct answer is AWS Lambda.

Amazon DynamoDB is incorrect since this is a fast, scalable NoSQL database service of AWS.

Amazon Aurora is also incorrect since this is primarily used as a database service and not a compute service. Although it has a serverless feature (Aurora Serverless), it is still used as a database and not for data processing.

Amazon Athena is incorrect since this service is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL.

33
Q

AWS Identity and Access Management (IAM) enables you to manage access to AWS services and resources securely. Which of the following best describes what an account alias is in IAM?
A.A substitute for an account ID in the web address for your account
B.The name AWS assigns to your account
C.Your IAM root username
D.The numerical value of your account ID

A

A.A substitute for an account ID in the web address for your account

Explanation:
An account alias substitutes for an account ID in the web address for your account. You can create and manage an account alias from the AWS Management Console, AWS CLI, or AWS API. You use an account alias when you prefer to use a more user-friendly account name rather than the long string of numbers.

For example, you have the following account ID: 403299380220

By default, your sign-in page URL is this:

https://403299380220.signin.aws.amazon.com/console

If you create an AWS account alias for your AWS account ID, say tutorialsdojo, your sign-in page URL looks like the following example:

https://tutorialsdojo.signin.aws.amazon.com/console
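
The substitution above follows a fixed URL pattern, which can be sketched as a one-line helper (the URL template comes from the examples above; the helper name is illustrative):

```python
def signin_url(account_id_or_alias):
    """Build the IAM sign-in page URL for a numeric account ID or an
    account alias — the alias simply takes the ID's place in the host."""
    return f"https://{account_id_or_alias}.signin.aws.amazon.com/console"

# By default the numeric account ID is used; an alias replaces it:
default_url = signin_url("403299380220")
alias_url = signin_url("tutorialsdojo")
```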

Hence, the correct answer is: A substitute for an account ID in the web address for your account.

All other options are incorrect regarding what account alias is.

34
Q
Where can you track the costs you’ve incurred so far in your AWS account with a graphical visualization?
A.AWS Cost Explorer
B.AWS Budgets
C.AWS Consolidated Billing
D.AWS Cost and Usage Reports
A

A.AWS Cost Explorer

Explanation:
The AWS Cost Explorer service has an easy-to-use interface that lets you visualize, understand, and manage your AWS costs and usage over time. Get started quickly by creating custom reports (including charts and tabular data) that analyze cost and usage data, both at a high level and for highly-specific requests. Using AWS Cost Explorer, you can dive deeper into your cost and usage data to identify trends, pinpoint cost drivers, and detect anomalies.

When you first sign up for Cost Explorer, AWS prepares the data about your costs for the current month and the last three months, and then calculates the forecast for the next three months. The current month’s data is available for viewing in about 24 hours. The rest of your data takes a few days longer. Cost Explorer updates your cost data at least once every 24 hours. After you sign up, Cost Explorer can display up to 12 months of historical data (if you have that much), the current month, and the forecasted costs for the next three months. The first time that you use Cost Explorer, Cost Explorer walks you through the main parts of the console with an explanation for each section. You can trigger this walkthrough at a later time as well.

Hence, AWS Cost Explorer is the correct answer.

AWS Budgets is incorrect because this service will just give you the ability to set custom budgets that alert you when your costs or usage exceed (or are forecasted to exceed) your budgeted amount. You can also use AWS Budgets to set reservation utilization or coverage targets and receive alerts when your utilization drops below the threshold you define.

AWS Cost & Usage Report is incorrect because this simply lists AWS usage for each service category used by an account and its IAM users in hourly or daily line items, as well as any tags that you have activated for cost allocation purposes. You can also customize the AWS Cost & Usage Report to aggregate your usage data to the daily or hourly level.

AWS Consolidated Billing is incorrect because this is merely a feature in AWS Organizations to consolidate billing and payment for multiple AWS accounts or multiple Amazon Internet Services Pvt. Ltd accounts. Every organization in AWS Organizations has a master account that pays the charges of all the member accounts. You can combine the usage across all accounts in the organization to share the volume pricing discounts and Reserved Instance discounts.

35
Q
Which of the following payment options can you choose when you purchase a Standard or Convertible Reserved Instance? (Select TWO.)
A.All Upfront Payment
B.Partial upfront payment
C.Deferred payment
D.Reserved payment
E.Bill-Me-Later payment
A

A.All Upfront Payment
B.Partial upfront payment

Explanation:
Reserved Instances provide you with a significant discount (up to 75%) compared to On-Demand instance pricing. In addition, when Reserved Instances are assigned to a specific Availability Zone, they provide a capacity reservation, giving you additional confidence in your ability to launch instances when you need them.

You can choose between three payment options when you purchase a Standard or Convertible Reserved Instance:

All Upfront option: You pay for the entire Reserved Instance term with one upfront payment. This option provides you with the largest discount compared to On-Demand instance pricing.

Partial Upfront option: You make a low upfront payment and are then charged a discounted hourly rate for the instance for the duration of the Reserved Instance term.

No Upfront option: Does not require any upfront payment and provides a discounted hourly rate for the duration of the term.

Comparing sample prices for a Standard RI and a Convertible RI across the various payment options for 1-year and 3-year terms reveals a consistent pattern.

As a general rule, a Standard RI provides more savings than a Convertible RI, which makes the former the more cost-effective option. The All Upfront option provides the largest discount compared with the other payment types, and opting for a longer compute reservation, such as the 3-year term, gives a greater discount than a shorter, renewable 1-year term.
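
The pattern above — more paid upfront means a lower total cost over the term — can be sketched with hypothetical prices (these numbers are made up for illustration; they are not real AWS rates):

```python
HOURS_PER_YEAR = 8760

def total_cost(upfront, hourly_rate, term_years):
    """Total cost over the RI term = upfront payment + hourly charges."""
    return upfront + hourly_rate * HOURS_PER_YEAR * term_years

# Hypothetical 1-year Standard RI prices for a single instance:
all_upfront     = total_cost(upfront=500, hourly_rate=0.000, term_years=1)
partial_upfront = total_cost(upfront=260, hourly_rate=0.030, term_years=1)
no_upfront      = total_cost(upfront=0,   hourly_rate=0.065, term_years=1)
```

With these sample rates, All Upfront comes out cheapest and No Upfront most expensive, which mirrors the discount ordering described above.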

Hence, the correct answers are: All Upfront payment and Partial upfront payment.

Deferred payment, Reserved payment and Bill-Me-Later payment are all incorrect because these pricing constructs are not supported in Reserved Instances.

36
Q
Which type of EC2 instance is the most suitable and cost-effective if the customer will be running mission-critical workloads continuously for a whole year?
A.Spot
B.Dedicated
C.Reserved
D.On-Demand
A

C.Reserved

Explanation:
Reserved Instances provide you with a significant discount compared to On-Demand Instance pricing. RIs can provide a capacity reservation, offering additional confidence in your ability to launch the number of instances you have reserved when you need them. You can purchase a Reserved Instance for a one-year or three-year commitment, with the three-year commitment offering a bigger discount.

With Reserved Instances (RIs), you can choose the type that best fits your application needs such as Standard RIs, Convertible RIs, and Scheduled RIs.

Reserved Instances are recommended for:

  • Applications with steady state usage
  • Applications that may require reserved capacity
  • Customers that can commit to using EC2 over a 1 or 3 year term to reduce their total computing costs

Hence, the correct answer is Reserved Instances.

Both On-Demand and Dedicated Instance Types are incorrect since they are more expensive than Reserved Instances if billed over a whole year or longer.

Spot Instance is incorrect since this is not optimal for servers that need to be continuously running.

37
Q
Which of the following AWS services are used to secure your VPC network? (Select TWO.)
A.Network ACL
B.CloudFront
C.Security group
D.Application load balancer
E.IAM
A

A.Network ACL
C.Security group

Explanation:
A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. Security groups act at the instance level, not the subnet level. Therefore, each instance in a subnet in your VPC could be assigned to a different set of security groups. For each security group, you add rules that control the inbound traffic to instances, and a separate set of rules that control the outbound traffic.

A network access control list (ACL) is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets. You might set up network ACLs with rules similar to your security groups in order to add an additional layer of security to your VPC.
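
The two firewalls evaluate traffic differently: a network ACL checks its numbered rules in ascending order and the first match decides, with an implicit deny if nothing matches, while security groups are allow-only and stateful. The following sketch models only the NACL ordering behavior, with simplified rule fields for illustration:

```python
def evaluate_nacl(rules, port):
    """Return 'ALLOW' or 'DENY' for traffic on the given port, checking
    rules in ascending rule-number order; first match wins."""
    for rule in sorted(rules, key=lambda r: r["number"]):
        lo, hi = rule["port_range"]
        if lo <= port <= hi:
            return rule["action"]
    return "DENY"  # implicit deny when no rule matches

# Allow HTTPS, explicitly deny everything else:
rules = [
    {"number": 100, "port_range": (443, 443), "action": "ALLOW"},
    {"number": 200, "port_range": (0, 65535), "action": "DENY"},
]
```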

All other options (Application Load Balancer, Amazon CloudFront, and AWS IAM) are incorrect because these services do not safeguard your VPC traffic. They are different products that serve different functions.

38
Q
What service allows you to create alarms that notify you when EC2 CPU Utilization thresholds are breached?
A.AWS Auto Scaling
B.AWS Config
C.Amazon CloudWatch
D.Amazon SNS
A

C.Amazon CloudWatch

Explanation:
Amazon CloudWatch monitors your Amazon Web Services (AWS) resources and the applications you run on AWS in real time. You can use CloudWatch to collect and track metrics, which are variables you can measure for your resources and applications.

The CloudWatch home page automatically displays metrics about every AWS service you use. You can additionally create custom dashboards to display metrics about your custom applications, and display custom collections of metrics that you choose.

You can create alarms which watch metrics and send notifications or automatically make changes to the resources you are monitoring when a threshold is breached. For example, you can monitor the CPU usage and disk reads and writes of your Amazon EC2 instances and then use this data to determine whether you should launch additional instances to handle increased load. You can also use this data to stop under-used instances to save money.

You can create a CloudWatch alarm that watches a single CloudWatch metric or the result of a math expression based on CloudWatch metrics. The alarm performs one or more actions based on the value of the metric or expression relative to a threshold over a number of time periods. The action can be an Amazon EC2 action, an Amazon EC2 Auto Scaling action, or a notification sent to an Amazon SNS topic.
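
The alarm logic described above — a metric breaching its threshold over a number of evaluation periods — can be sketched conceptually. This mimics CloudWatch's behavior in simplified form; it is not the actual service:

```python
def alarm_state(datapoints, threshold, evaluation_periods):
    """Return 'ALARM' if the most recent `evaluation_periods` datapoints
    all exceed `threshold`; otherwise 'OK'."""
    if len(datapoints) < evaluation_periods:
        return "OK"  # not enough data yet to evaluate
    recent = datapoints[-evaluation_periods:]
    return "ALARM" if all(d > threshold for d in recent) else "OK"

# CPU utilization samples (%); alarm at >80% for 3 consecutive periods:
cpu = [55, 72, 85, 88, 91]
state = alarm_state(cpu, threshold=80, evaluation_periods=3)
```

In CloudWatch, reaching the ALARM state is what triggers the configured action — an EC2 action, an Auto Scaling action, or a notification to an SNS topic.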

Hence, the correct answer is Amazon CloudWatch.

Amazon SNS is incorrect because this service alone cannot capture metrics inputs from EC2 and create alarms out of it. It relies on another service such as CloudWatch Alarms to forward the actual alarm so that it can notify its subscribers of the issue.

AWS Config is incorrect because this is just a compliance checking tool and does not monitor resource metrics.

AWS Auto Scaling is incorrect because its primary function is to help you maintain application availability by automatically adding or removing EC2 instances according to conditions you define. It cannot, by itself, create alarms that notify you when thresholds are breached. You should integrate Auto Scaling with CloudWatch alarms to create a highly efficient, scalable system.

39
Q
You are planning to deploy a video streaming application with frequently accessed, throughput-intensive workloads to your EC2 instance, which requires fast, consistent throughput. What EBS volume type should you use to maximize performance while keeping costs low?
A.Throughput Optimized HDD
B.General Purpose SSD
C.Cold HDD
D.Provisioned IOPS SSD
A

A.Throughput Optimized HDD

Explanation:
In the exam, always consider the difference between SSD and HDD volume types. This will allow you to easily eliminate specific EBS types among the options that are not SSD or not HDD, depending on whether the question asks for a storage type that handles small, random I/O operations or large, sequential I/O operations.

Throughput Optimized HDD (st1) volumes provide low-cost magnetic storage that defines performance in terms of throughput rather than IOPS. This volume type is a good fit for large, sequential workloads such as Amazon EMR, ETL, data warehouses, and log processing. Bootable st1 volumes are not supported. Throughput Optimized HDD (st1) volumes, though similar to Cold HDD (sc1) volumes, are designed to support frequently accessed data.
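
The selection logic above can be sketched as a small decision helper. The mapping mirrors the explanation (HDD for large sequential throughput, st1 for frequent access, sc1 for infrequent, SSD for small random I/O); it is illustrative, not an official AWS decision tree:

```python
def pick_ebs_volume(io_pattern, frequently_accessed=True):
    """io_pattern: 'large_sequential' (throughput-bound) or
    'small_random' (IOPS-bound)."""
    if io_pattern == "large_sequential":
        # Throughput-bound workloads (streaming, EMR, ETL, logs) -> HDD;
        # st1 for frequently accessed data, sc1 for cold data.
        return "st1" if frequently_accessed else "sc1"
    # Small, random I/O favors SSD-backed volumes (gp2/gp3; io1 when
    # sustained high IOPS is required).
    return "gp2"
```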

Hence, the correct answer is Throughput Optimized HDD.

Provisioned IOPS SSD is incorrect because this is not the most cost-effective EBS type and is primarily used for critical business applications that require sustained IOPS performance.

General Purpose SSD is incorrect because although this volume balances price and performance for a wide variety of workloads, it is not suitable for frequently accessed, throughput-intensive workloads. Throughput Optimized HDD is a more suitable option to use than General Purpose SSD.

Cold HDD is incorrect because although it provides the lowest cost among the options, it is more suitable for less frequently accessed workloads.

40
Q
________ is an online software store that helps customers find, buy, and immediately start using the software and services that run on AWS.
A.Amazon.com
B.AWS Marketplace
C.AWS IQ
D.Reserved Instance Marketplace
A

B.AWS Marketplace

Explanation

The AWS Marketplace enables qualified partners to market and sell their software to AWS Customers. AWS Marketplace is an online software store that helps customers find, buy, and immediately start using the software and services that run on AWS.

AWS Marketplace is designed for Independent Software Vendors (ISVs), Value-Added Resellers (VARs), and Systems Integrators (SIs) who have software products they want to offer to customers in the cloud. Partners use AWS Marketplace to be up and running in days and offer their software products to customers around the world.

Hence, the correct answer is AWS Marketplace.

AWS IQ is incorrect because this just enables customers to quickly find, engage, and pay AWS Certified third-party experts for on-demand project work.

Reserved Instance Marketplace is incorrect because this is just a platform that supports the sale of third-party and AWS customers’ unused Standard Reserved Instances, which vary in term lengths and pricing options. For example, you may want to sell Reserved Instances after moving instances to a new AWS Region, changing to a new instance type, ending projects before the term expiration, when your business needs change, or if you have unneeded capacity.

Amazon.com is incorrect because this is just an online shopping platform and is not related to Amazon Web Services. The AWS Marketplace is the right website to look for software and services that run on AWS.

41
Q
What service acts as a firewall for your EC2 instances?
A.Security Group
B.Elastic Network Interface
C.VPC
D.Network ACL
A

A.Security Group

Explanation:
A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. When you launch an instance in a VPC, you can assign up to five security groups to the instance. Security groups act at the instance level, not the subnet level. Therefore, each instance in a subnet in your VPC could be assigned to a different set of security groups. If you don’t specify a particular group at launch time, the instance is automatically assigned to the default security group for the VPC.

For each security group, you add rules that control the inbound traffic to instances, and a separate set of rules that control the outbound traffic. This section describes the basic things you need to know about security groups for your VPC and their rules.

Hence, the correct answer is Security Group.

Network ACL is incorrect because this is primarily used to protect your subnets by controlling inbound and outbound traffic. They are not used to secure resources at an instance level.

Elastic Network Interface is incorrect because this is just a logical networking component in a VPC that represents a virtual network card. It does not serve as a virtual firewall for your instances.

Amazon VPC is incorrect because this simply lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define. A VPC does not serve as a virtual firewall for your instances.

42
Q
Which of the following services allows you to purchase Reserved Instances? (Select TWO.)
A.AWS Elastic Beanstalk
B.Amazon RDS
C.AWS Batch
D.Amazon EC2
E.Amazon EKS
A

B.Amazon RDS
D.Amazon EC2

Explanation:
Reserved Instances provide you with a significant discount (up to 75%) compared to On-Demand instance pricing. In addition, when Reserved Instances are assigned to a specific Availability Zone, they provide a capacity reservation, giving you additional confidence in your ability to launch instances when you need them.

Standard Reserved Instances (RI) provide you with a significant discount (up to 75%) compared to On-Demand instance pricing and can be purchased for a 1-year or 3-year term. The average discount off On-Demand instances varies based on your term and chosen payment options (up to 40% for 1-year and 60% for a 3-year term). Customers have the flexibility to change the Availability Zone, the instance size, and networking type of their Standard Reserved Instances.

Convertible Reserved Instances (RI) provide you with a significant discount (up to 54%) compared to On-Demand Instances and can be purchased for a 1-year or 3-year term. Purchase Convertible Reserved Instances if you need additional flexibility, such as the ability to use different instance families, operating systems, or tenancies over the Reserved Instance term.

For stable applications, your organization can achieve significant cost savings by using Amazon EC2 Reserved Instances. Amazon EC2 Reserved Instances enable you to commit to usage parameters at the time of purchase to achieve a lower hourly rate. Reservation models are also available for Amazon Relational Database Service (Amazon RDS), Amazon ElastiCache, Amazon Redshift, and Amazon DynamoDB.

Hence, the correct answers are Amazon EC2 and RDS.

All other options are incorrect because AWS Batch, AWS Elastic Beanstalk and Amazon EKS do not have reservation models of their own.

43
Q

Your web servers are showing relatively poor performance in delivering content. How can you improve their performance and cost efficiency?
A.Request AWS to use faster network cables for your server
B.Apply a caching mechanism that stores frequently accessed content
C.Use HTTP protocol instead to decrease the time consumed in decrypting content
D.Run more web servers to distribute the workload

A

B.Apply a caching mechanism that stores frequently accessed content

Explanation:
Caching is a technique that stores previously calculated data for future use. This
technique is used to improve application performance and increase the cost-efficiency
of an implementation. It can be applied at multiple layers of an IT architecture.

Running more web servers is incorrect since this is not the most cost-efficient method.

Using HTTP protocol is usually not desirable since this weakens the security of your traffic.

AWS uses the best network cables they can get from vendors. Improving their physical infrastructure cannot be easily requested, so it is better to introduce improvements from your side instead.

44
Q
A customer needs to identify the IAM user who terminated their production EC2 instance in AWS. Which service should they use in this situation?
A.Amazon CloudWatch
B.AWS CloudTrail
C.AWS Systems Manager
D.Amazon AppStream 2.0
A

B.AWS CloudTrail

Explanation:
AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command-line tools, and other AWS services. This event history simplifies security analysis, resource change tracking, and troubleshooting.

With AWS CloudTrail, you can simplify your compliance audits by automatically recording and storing event logs for actions made within your AWS account. Integration with Amazon CloudWatch Logs provides a convenient way to search through log data, identify out-of-compliance events, accelerate incident investigations, and expedite responses to auditor requests.

It also increases visibility into your user and resource activity by recording AWS Management Console actions and API calls. You can identify which users and accounts called AWS, the source IP address from which the calls were made, and when the calls occurred.
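
Answering "who terminated the instance?" comes down to filtering event records by event name and reading the user identity and source IP, as described above. The sketch below uses simplified, made-up records whose field names loosely mirror CloudTrail's event structure:

```python
def who_called(events, event_name):
    """Return (user, source IP) pairs for events matching an API call."""
    return [(e["userIdentity"]["userName"], e["sourceIPAddress"])
            for e in events if e["eventName"] == event_name]

# Sample (fabricated) event records for illustration:
events = [
    {"eventName": "StartInstances",
     "userIdentity": {"userName": "alice"},
     "sourceIPAddress": "203.0.113.5"},
    {"eventName": "TerminateInstances",
     "userIdentity": {"userName": "bob"},
     "sourceIPAddress": "198.51.100.7"},
]
culprits = who_called(events, "TerminateInstances")
```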

Hence, the correct answer in this scenario is AWS CloudTrail.

Amazon CloudWatch is incorrect because this service is primarily used to collect monitoring and operational data in the form of logs, metrics, and events, providing you with a unified view of AWS resources, applications, and services that run on AWS and on-premises servers.

Amazon AppStream 2.0 is incorrect because this is just a fully managed application streaming service which you can use to centrally manage your desktop applications.

AWS Systems Manager is incorrect because this simply provides a unified user interface so you can view operational data from multiple AWS services and allows you to automate operational tasks across your AWS resources.

45
Q
Which service in AWS allows you to host your own Puppet Enterprise infrastructure?
A.AWS OpsWorks
B.AWS Service Catalog
C.AWS CloudFormation
D.AWS Elastic Beanstalk
A

A.AWS OpsWorks

Explanation:
AWS OpsWorks for Puppet Enterprise is a configuration management service that provides managed instances of Puppet. Puppet is an automation platform that allows you to use code to automate the configurations of your servers. OpsWorks lets you use Puppet to automate how servers are configured, deployed, and managed across your Amazon EC2 instances or on-premises compute environments.

Hence, the correct answer is: AWS OpsWorks.

AWS CloudFormation allows you to set up your infrastructure using JSON or YAML code. You can easily provision and configure the resources you need in your AWS environment. CloudFormation does not use Puppet, nor is it a platform used for web servers configuration management.

AWS Elastic Beanstalk allows you to simply upload your application while the service automatically handles the deployment, from capacity provisioning, load balancing, auto-scaling to application health monitoring, for you. Elastic Beanstalk does not use Puppet, nor is it a platform used for web servers configuration management.

AWS Service Catalog allows you to centrally manage commonly deployed IT services, and helps you achieve consistent governance and meet your compliance requirements, while enabling users to quickly deploy only the approved IT services they need. Service Catalog does not use Puppet, nor is it a platform used for web servers configuration management.

46
Q
Which of the following is needed to retrieve a list of your EC2 instances using the AWS CLI?
A.MFA
B.Username and password
C.SSH keys
D.Access Keys
A

D.Access Keys

Explanation:
The AWS Access Key ID and AWS Secret Access Key are your AWS credentials. They are associated with an AWS Identity and Access Management (IAM) user or role that determines what permissions you have.

Access keys are long-term credentials for an IAM user or the AWS account root user. You can use access keys to sign programmatic requests to the AWS CLI or AWS API (directly or using the AWS SDK). If you don’t have access keys, you can create them from the AWS Management Console. As a best practice, do not use the AWS account root user access keys for any task where it’s not required. Instead, create a new administrator IAM user with access keys for yourself.

Like a user name and password, you must use both the access key ID and secret access key together to authenticate your requests. You can use this on your AWS CLI to access your AWS resources.
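
The division of labor between the two halves of the key pair can be illustrated with a greatly simplified signing sketch: the access key ID identifies the caller (and travels with the request), while the secret access key never leaves your machine and instead proves itself through a signature. Real AWS requests use Signature Version 4, which is considerably more involved than this; the function and strings below are illustrative only:

```python
import hashlib
import hmac

def sign_request(access_key_id, secret_access_key, request_string):
    """Attach the key ID (public identifier) and an HMAC-SHA256 signature
    (proof of holding the secret) to a request. Not actual SigV4."""
    signature = hmac.new(secret_access_key.encode(),
                         request_string.encode(),
                         hashlib.sha256).hexdigest()
    return {"AccessKeyId": access_key_id, "Signature": signature}

# Placeholder credentials, for illustration only:
signed = sign_request("AKIAEXAMPLE", "EXAMPLESECRETKEY",
                      "GET ec2 DescribeInstances")
```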

Hence, the correct answer is Access keys.

Username and password is incorrect because it is stated in the question that the action is done through the AWS CLI.

MFA is incorrect because multi-factor authentication is not required when using the AWS CLI.

SSH keys is incorrect because this is primarily used to authenticate your SSH connection to an EC2 instance. For this question, these keys are not needed by the AWS CLI.

47
Q

Which of the following practices demonstrate operational excellence in AWS cloud? (Select TWO.)
A.Deploy small, incremental changes to your production servers using AWS CodeDeploy
B.Use serverless applications such as AWS Lambda
C.Monitor EC2 metric consumption and adjust the instance type accordingly
D.Perform monthly game days on your AWS environment
E.Launching your infrastructure manually via the Console

A

A.Deploy small, incremental changes to your production servers using AWS CodeDeploy

D.Perform monthly game days on your AWS environment

Explanation:
The Well-Architected Framework has been developed to help cloud architects build secure, high-performing, resilient, and efficient infrastructure for their applications. This is based on five pillars namely:

  1. Operational Excellence
  2. Security
  3. Reliability
  4. Performance Efficiency
  5. Cost Optimization

This Framework provides a consistent approach for customers and partners to evaluate architectures, and implement designs that will scale over time.

The AWS Well-Architected Framework helps you understand the pros and cons of decisions you make while building systems on AWS. By using this Framework, you will learn architectural best practices for designing and operating reliable, secure, efficient, and cost-effective systems in the cloud. It provides a way for you to consistently measure your architectures against best practices and identify areas for improvement. The process for reviewing an architecture is a constructive conversation about architectural decisions and is not an audit mechanism. Having well-architected systems greatly increases the likelihood of business success.

The operational excellence pillar focuses on running and monitoring systems to deliver business value, and continually improving processes and procedures. Key topics include managing and automating changes, responding to events, and defining standards to successfully manage daily operations.

Using tools such as AWS CodeDeploy to deploy small, incremental changes to your application ensures that you do not introduce drastic updates that may affect your application entirely. Performing monthly game days allows you to test your environment for different failure scenarios so you can quickly plan out ways to remediate them.

Hence, the correct answers are: Deploy small, incremental changes to your production servers using AWS CodeDeploy and Perform monthly game days on your AWS environment.

Launching your infrastructure manually via the console is incorrect because this is not a notable best practice under operational excellence. In the cloud, it is preferred to automate majority of the tasks to achieve a predictable and constant result.

Using serverless applications such as AWS Lambda is incorrect because this is more of a design principle that focuses on performance efficiency and not operational excellence. Serverless is a very useful tool that steers away from traditional server management and lets you focus more on your applications and services.

Monitoring EC2 consumption and adjusting your instance type accordingly is incorrect because this is more related to the performance efficiency pillar. Underprovisioned instances need to be scaled up to deliver better performance. Overprovisioned instances need to be scaled down to save on costs.

48
Q
You have a fleet of on-premises servers that require a centralized scalable and durable file storage. It should be able to support massive parallel access. Which of the following is the most appropriate service to use?
A.Amazon EFS
B.AWS Storage Gateway - File Gateway
C.Amazon S3
D.Amazon Redshift
A

A.Amazon EFS

Explanation:
Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources. It is built to scale on-demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth.

Amazon EFS offers two storage classes: the Standard storage class, and the Infrequent Access storage class (EFS IA). EFS IA provides price/performance that’s cost-optimized for files not accessed every day. By simply enabling EFS Lifecycle Management on your file system, files not accessed according to the lifecycle policy you choose will be automatically and transparently moved into EFS IA.

Amazon EFS provides secure access for thousands of connections for Amazon EC2 instances and on-premises servers simultaneously using a traditional file permissions model, file locking capabilities, and hierarchical directory structure via the NFSv4 protocol. Amazon EC2 instances can access your file system across AZs, regions, and VPCs, while on-premises servers can access using AWS Direct Connect or AWS VPN.

Amazon EFS is designed to provide massively parallel shared access to thousands of Amazon EC2 instances, enabling your applications to achieve high levels of aggregate throughput and IOPS with consistent low latencies.

Hence, Amazon EFS is the correct answer.

Amazon S3 is incorrect. First, it is meant specifically for object storage, and second, EFS serves a fleet of servers better than S3 as a file storage system.

AWS Storage Gateway is incorrect because this service simply provides a file interface into Amazon Simple Storage Service (Amazon S3) and is a combination of a storage service and a virtual software appliance. It is meant for connecting software hosted in your on-premises data center to S3, not for providing massively parallel shared file access to a large fleet of servers.

Amazon Redshift is incorrect because this is a data warehousing service offered by AWS. It cannot be used for file storage.

49
Q
Which service in AWS protects your resources from common DDoS attacks in a proactive manner?
A.Amazon Inspector
B.Security Groups
C.AWS WAF
D.AWS Shield
A

D.AWS Shield

Explanation:
AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards applications running on AWS. AWS Shield provides always-on detection and automatic inline mitigations that minimize application downtime and latency, so there is no need to engage AWS Support to benefit from DDoS protection. There are two tiers of AWS Shield - Standard and Advanced.

All AWS customers benefit from the automatic protections of AWS Shield Standard, at no additional charge. AWS Shield Standard defends against most common, frequently occurring network and transport layer DDoS attacks that target your web site or applications. When you use AWS Shield Standard with Amazon CloudFront and Amazon Route 53, you receive comprehensive availability protection against all known infrastructure (Layer 3 and 4) attacks.

Hence, the correct answer is: AWS Shield.

Amazon Inspector is incorrect as this is just an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. Amazon Inspector automatically assesses applications for exposure, vulnerabilities, and deviations from best practices. After performing an assessment, Amazon Inspector produces a detailed list of security findings prioritized by level of severity.

Security groups is incorrect because although it protects your instance-level traffic by acting as virtual firewalls for your instances, it is not enough to protect you from DDoS attacks. It controls which inbound and outbound traffic is allowed via security group rules. The traffic controls are, however, added via whitelisting, so you have a reactive answer to DDoS. AWS Shield offers a proactive solution since you do not need to specify which types of traffic are considered malicious.

AWS WAF is incorrect since this is simply a web application firewall that helps protect your web applications from common web exploits such as XSS and SQL injection. You use AWS WAF to create custom rules that block common attack patterns and rules that are designed for your specific application.

50
Q
A company is planning to deploy their high-frequency trading (HFT) application, which will store constantly changing financial data in AWS and requires low-latency access. Which AWS services below should you use? (Select TWO.)
A.Amazon EFS
B.AWS Snowball
C.Amazon RDS
D.Amazon S3
E.Amazon Glacier
A

A.Amazon EFS
C.Amazon RDS

Explanation:
Amazon Web Services (AWS) provides low-cost data storage with high durability and availability. AWS offers storage choices for backup, archiving, and disaster recovery use cases and provides block, file, and object storage.

Amazon Elastic File System (Amazon EFS) provides simple, scalable file storage for use with Amazon EC2. With Amazon EFS, storage capacity is elastic, growing and shrinking automatically as you add and remove files, so your applications have the storage they need, when they need it. Amazon EFS has a simple web services interface that allows you to create and configure file systems quickly and easily.

Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while automating time-consuming administration tasks such as hardware provisioning, database setup, patching and backups. It frees you to focus on your applications so you can give them the fast performance, high availability, security and compatibility they need.

Amazon Simple Storage Service (Amazon S3) provides developers and IT teams secure, durable, highly-scalable object storage at a very low cost. You can store and retrieve any amount of data, at any time, from anywhere on the web through a simple web service interface. You can write, read, and delete objects containing from zero to 5 TB of data. Amazon S3 is highly scalable, allowing concurrent read or write access to data by many separate clients or application threads.

Amazon S3 doesn’t suit all storage situations. The table above presents some storage needs for which you should consider other AWS storage options. If you are storing data that must be updated very frequently, consider using other services that take read and write latencies into account, such as Amazon EBS volumes, Amazon RDS, Amazon DynamoDB, Amazon EFS, or relational databases running on Amazon EC2.

Read and write latencies can occur if your storage service is not placed within your VPC or in the same Availability Zone of your EC2 instance. This means that it will take some time for your data to be sent over from your server to your data storage. Amazon EBS provides the lowest latency access to data from a single EC2 instance. This is because the EBS volume is directly attached to the EC2 instance and is also located at the same Availability Zone.

Hence, the correct answers are Amazon EFS and Amazon RDS.

Amazon Glacier is incorrect because this is just a low-cost storage service that provides highly secure, durable, and flexible storage for data archiving and online backup. It is not suitable for applications with rapidly changing data.

AWS Snowball is incorrect because this is primarily used for data migration where you need to move large amounts of data into and out of AWS.

Amazon S3 is incorrect because although you can technically use this service as a data storage for rapidly changing data, this entails high latency since S3 is located outside of your VPC.

51
Q

What is the main benefit you receive when moving to serverless from non-serverless compute services?
A.Serverless services are highly available, so you don't have to worry about downtime
B.You can get overall cheaper costs compared to using non-serverless services
C.Security is fully managed for you by the service provider
D.Serverless removes management overhead so you can focus on your applications instead

A

D.Serverless removes management overhead so you can focus on your applications instead

Explanation:
Serverless is the native architecture of the cloud that enables you to shift more of your operational responsibilities to AWS, increasing your agility and innovation. Serverless allows you to build and run applications and services without thinking about servers. It eliminates infrastructure management tasks such as server or cluster provisioning, patching, operating system maintenance, and capacity provisioning.

Building serverless applications means that your developers can focus on their core product instead of worrying about managing and operating servers or runtimes, either in the cloud or on-premises. This reduced overhead lets developers reclaim time and energy that can be spent on developing great products which scale and that are reliable.

Using serverless does not always guarantee cheaper costs. There might be instances where serverless can be more expensive for your setup, such as workloads with long processing durations and high memory usage.

Although on most occasions serverless does not experience downtime, you should still create failover solutions for your critical applications. Serverless, essentially, still uses servers in the background; the benefit you get is that you won’t have to worry about server management anymore. So it is a good practice to always have a fail-safe plan.

Security is not fully managed for you when using serverless. Always go back to the AWS shared responsibility model to know which security responsibilities are yours and which ones are AWS’.

52
Q
Which AWS service lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources?
A.Internet Gateway
B.Amazon VPC
C.AWS Elastic Beanstalk
D.Amazon EC2
A

B.Amazon VPC

Explanation:
Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define. You have complete control over your virtual networking environment, including selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways.

You can launch your AWS resources, such as Amazon EC2 instances, into your VPC. You can specify an IP address range for the VPC, add subnets, associate security groups, and configure route tables.

A subnet is a range of IP addresses in your VPC. You can launch AWS resources into a specified subnet. Use a public subnet for resources that must be connected to the internet, and a private subnet for resources that won’t be connected to the internet. To protect the AWS resources in each subnet, you can use multiple layers of security, including security groups and network access control lists (ACL).
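
To illustrate how subnet ranges relate to the VPC's CIDR block, here is a small sketch using Python's standard ipaddress module; the CIDR values are illustrative, not tied to any real VPC:

```python
import ipaddress

# Hypothetical VPC CIDR block (illustrative values only)
vpc_cidr = ipaddress.ip_network("10.0.0.0/16")

# Carve two /24 subnets out of the VPC range: one "public", one "private"
public_subnet = ipaddress.ip_network("10.0.1.0/24")
private_subnet = ipaddress.ip_network("10.0.2.0/24")

# Both subnets must fall inside the VPC's address range...
assert public_subnet.subnet_of(vpc_cidr)
assert private_subnet.subnet_of(vpc_cidr)

# ...and subnets within one VPC must not overlap each other
assert not public_subnet.overlaps(private_subnet)

print(public_subnet.num_addresses)  # 256 addresses in a /24
```

The "public" versus "private" distinction is then purely a routing choice: a subnet whose route table points at an internet gateway is public; one without such a route is private.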

Hence, the correct answer is Amazon VPC.

Internet gateway is incorrect because this is just a VPC component that enables your servers in the public subnet to communicate with the public Internet. It does not isolate a virtual network space for you to launch resources in.

Amazon EC2 is incorrect because this is simply a compute service where you can host your applications and other databases; essentially, a server. It does not isolate a virtual network space for you to launch resources in.

AWS Elastic Beanstalk is incorrect because this service is just a platform where you can easily deploy applications and the service handles the infrastructure provisioning process for you. It does not isolate a virtual network space for you to launch resources in.

53
Q
Which of the following provides you access to Reserved Instance (RI) purchase recommendations based on your past usage and indicates potential opportunities for savings as compared to On-Demand usage?
A.AWS Cost and Usage Report
B.AWS Billing Dashboard
C.AWS Cost Explorer
D.AWS Budgets
A

C.AWS Cost Explorer

Explanation:
If you enable Cost Explorer, you automatically get Amazon EC2, Amazon RDS, ElastiCache, Amazon ES, and Amazon Redshift Reserved Instance (RI) purchase recommendations that could help you reduce your costs. RIs provide a discounted hourly rate (up to 75%) compared to On-Demand pricing.

Cost Explorer generates your RI recommendations using the following process:

  • Identifies your On-Demand Instance usage for a service during a specific time period
  • Collects your usage into categories that are eligible for an RI
  • Simulates every combination of RIs in each category of usage
  • Identifies the best number of each type of RI to purchase to maximize your estimated savings

For example, Cost Explorer automatically aggregates your Amazon EC2 Linux, shared tenancy, and c4 family usage in the US West (Oregon) Region and recommends that you buy size-flexible regional RIs to apply to the c4 family usage. Cost Explorer recommends the smallest size instance in an instance family. This makes it easier to purchase a size-flexible RI. Cost Explorer also shows the equal number of normalized units so that you can purchase any instance size that you want. For this example, your RI recommendation would be for c4.large because that is the smallest size instance in the c4 instance family.

Cost Explorer recommendations are based on a single account or organization usage of the past seven, 30, or 60 days. Cost Explorer ignores usage that is already covered by an RI. Amazon EC2, ElastiCache, Amazon ES, and Amazon Redshift recommendations are for RIs scoped to Region, not Availability Zones, and your estimated savings reflects the application of those RIs to your usage. Amazon RDS recommendations are scoped to either Single-AZ or Multi-AZ RIs. Cost Explorer updates your recommendations at least once every 24 hours.
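
The savings comparison behind such a recommendation is simple arithmetic. The sketch below uses hypothetical hourly rates (not real AWS pricing) to estimate the discount an RI provides over On-Demand usage:

```python
# Back-of-envelope RI savings estimate. The rates below are illustrative
# placeholders, not actual AWS pricing.
on_demand_rate = 0.10     # $ per instance-hour, On-Demand
ri_effective_rate = 0.06  # $ per instance-hour, 1-year RI (hypothetical)
hours_per_month = 730
instances = 4

on_demand_cost = on_demand_rate * hours_per_month * instances
ri_cost = ri_effective_rate * hours_per_month * instances

# Percentage saved by covering this steady usage with RIs
savings_pct = 100 * (on_demand_cost - ri_cost) / on_demand_cost
print(f"{savings_pct:.0f}%")  # 40% with these illustrative rates
```

Cost Explorer performs a far more detailed version of this calculation across every eligible instance-size combination, but the underlying comparison is the same: committed-rate cost versus On-Demand cost for your observed usage.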

Hence, the correct answer is AWS Cost Explorer.

AWS Billing Dashboard, AWS Budgets, and the AWS Cost and Usage Report are all incorrect since these tools do not provide Reserved Instance (RI) purchase recommendations, unlike AWS Cost Explorer.

54
Q
The use of multi-threading in your Amazon S3 requests via the Multipart Upload API is an example of which AWS cloud best practice?
A.Decouple your components
B.Allow for evolutionary architectures
C.Implement elasticity
D.Think parallel
A

D.Think parallel

Explanation:
There are various best practices that you can follow which can help you build an application in the cloud. The notable ones are:

  1. Design for failure
  2. Decouple your components
  3. Implement elasticity
  4. Think parallel

As the name implies, the Think Parallel best practice accentuates the use of parallelization when designing architectures in the AWS cloud. It is advisable to not only implement parallelization wherever possible but also automate it, because the cloud allows you to create a repeatable process very easily.

When it comes to accessing (retrieving and storing) data, the cloud is designed to handle massively parallel operations. In order to achieve maximum performance and throughput, you should leverage request parallelization. Multi-threading your requests by using multiple concurrent threads will store or fetch the data faster than requesting it sequentially. Hence, wherever possible, the processes of a cloud application should be made thread-safe through a share-nothing philosophy and leverage multi-threading.
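
As a sketch of this principle, the snippet below uses Python's concurrent.futures to issue requests concurrently rather than sequentially. The fetch_part function is a hypothetical stand-in; a real version would call the S3 API to fetch one object or one byte range per thread:

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for an S3 GET request (hypothetical helper). In real code this
# would fetch one object or one byte range via the AWS SDK.
def fetch_part(part_number: int) -> bytes:
    return b"x" * 1024  # pretend each part is 1 KiB of data

# Issue the 16 requests across 8 concurrent threads instead of one at a time
with ThreadPoolExecutor(max_workers=8) as pool:
    parts = list(pool.map(fetch_part, range(16)))

data = b"".join(parts)
print(len(data))  # 16 parts x 1024 bytes = 16384
```

Because each request mostly waits on the network, concurrent threads overlap that waiting time, which is exactly the speedup the Multipart Upload API exploits.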

When it comes to processing or executing requests in the cloud, it becomes even more important to leverage parallelization. A general best practice, in the case of a web application, is to distribute the incoming requests across multiple asynchronous web servers using a load balancer. In the case of a batch processing application, a master node can spawn multiple worker nodes that process tasks in parallel (as in distributed processing frameworks like Hadoop).

The AWS specific tactics for parallelization are:

  1. Multi-thread your Amazon S3 requests as detailed in Best practices paper
  2. Multi-thread your Amazon SimpleDB GET and BATCHPUT requests
  3. Create a JobFlow using the Amazon Elastic MapReduce Service for each of your daily batch processes (indexing, log analysis etc.) which will compute the job in parallel and save time.
  4. Use the Elastic Load Balancing service and spread your load across multiple web app servers dynamically

Hence, the correct answer is: Think Parallel.

Decouple your components is incorrect because this principle simply reinforces the Service-Oriented Architecture (SOA) design principle that the more loosely coupled the components of the system, the bigger and better it scales. This can be implemented by using Amazon SQS to isolate components and act as a buffer between them.

Implement elasticity is incorrect because this principle is primarily implemented by automating your deployment process and streamlining the configuration and build process of your architecture. This ensures that the system can scale without any human intervention.

Allow for evolutionary architectures is incorrect because this is actually one of the general design principles of the AWS Well-Architected Framework and not particularly a type of cloud best practice.

55
Q
Which IAM service is responsible for enforcing privileges and access controls in your AWS environment?
A.IAM Policy
B.IAM User
C.IAM Role
D.IAM Group
A

A.IAM Policy

Explanation:
You manage access in AWS by creating policies and attaching them to IAM identities (users, groups of users, or roles) or AWS resources. A policy is an object in AWS that, when associated with an identity or resource, defines their permissions. AWS evaluates these policies when a principal uses an IAM entity (user or role) to make a request. Permissions in the policies determine whether the request is allowed or denied.

IAM Users, Groups and Roles are entities to which you attach IAM Policies to provide the necessary permissions for their function.
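
As a minimal sketch of what such a policy document looks like, the snippet below parses an identity-based policy granting read-only access to one S3 bucket's objects (the bucket name is illustrative):

```python
import json

# A minimal identity-based IAM policy document. The bucket name is an
# illustrative placeholder, not a real resource.
policy_document = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}
""")

# Permissions come only from explicit Allow statements; any action not
# allowed by some attached policy is implicitly denied.
statement = policy_document["Statement"][0]
print(statement["Effect"])  # Allow
```

Attaching this document to a user, group, or role is what actually enforces the access control: the entity defines who, the policy defines what they may do.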

56
Q
You are tasked to simplify the migration of your databases from your on-premises server to AWS. Which service fits best for this purpose?
A.AWS Snowball
B.AWS Database Migration Service
C.AWS Server Migration Service
D.AWS Snowmobile
A

B.AWS Database Migration Service

Explanation:
AWS Database Migration Service helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. The AWS Database Migration Service can migrate your data to and from most widely used commercial and open-source databases.

In homogeneous database migrations, the source and target database engines are the same or are compatible like Oracle to Amazon RDS for Oracle, MySQL to Amazon Aurora, MySQL to Amazon RDS for MySQL, or Microsoft SQL Server to Amazon RDS for SQL Server. Since the schema structure, data types, and database code are compatible between the source and target databases, this kind of migration is a one-step process.

You create a migration task with connections to the source and target databases, then start the migration with the click of a button. AWS Database Migration Service takes care of the rest. The source database can be located in your own premises outside of AWS, running on an Amazon EC2 instance, or it can be an Amazon RDS database. The target can be a database in Amazon EC2 or Amazon RDS.

Hence, the correct answer is AWS Database Migration Service.

Server Migration Service is incorrect because, evident in the name, this tool is used to easily migrate server workloads such as VMs and containers to AWS. This tool is not used to facilitate database migration.

AWS Snowball and AWS Snowmobile are both incorrect because these are just migration services that allow you to transfer large amounts of data from your on-premises data center to Amazon S3. These tools are not meant to transfer databases.

57
Q
________ is AWS’s digital user engagement service that enables AWS customers to effectively communicate with their end users and measure user engagement across multiple channels including email, Text Messaging (SMS) and Mobile Push Notifications.
A.Amazon Simple Notification Service
B.Amazon SNS Mobile Push
C.Amazon Simple Email Service
D.Amazon Pinpoint
A

D.Amazon Pinpoint

Explanation:
Amazon Pinpoint is AWS’s Digital User Engagement Service that enables AWS customers to effectively communicate with their end-users and measure user engagement across multiple channels including email, Text Messaging (SMS) and Mobile Push Notifications.

Amazon Pinpoint also provides tools that enable audience management and segmentation, campaign management, scheduling, template management, A/B testing, analytics and data integration. It captures data to track deliverability as well as usage and messaging analytics covering a range of dimensions including user, channels and custom attributes.

Amazon Pinpoint is built on a service-based architecture. Developers can extend their applications and backend services in various ways, including: sending messages directly from their applications via the Amazon Pinpoint channels (Email, SMS, and Mobile Push), accessing segmentation data to alter application behavior for segments of users, creating and running campaigns directly from their applications, and accessing deliverability and analytics data to improve the user engagement of their applications. The system empowers customers to send the right message, to the right audience, at the right time, and on the most effective channel.

Hence, the correct answer is: Amazon Pinpoint.

Amazon Simple Notification Service is incorrect because this is just a fully managed pub/sub messaging service that enables you to decouple microservices, distributed systems, and serverless applications.

Amazon Simple Email Service is incorrect because this is merely a cloud-based email sending service designed to help digital marketers and application developers send marketing, notification, and transactional emails.

Amazon SNS Mobile Push is incorrect because this is just a feature of Amazon SNS that pushes notification messages to both mobile devices and desktops. This is not considered as a fully-fledged digital user engagement service, unlike Amazon Pinpoint.

58
Q
Which type of Elastic Load Balancer allows you to forward the incoming request to a target group with a Lambda function as a target?
A.Classic Load Balancer
B.None of the Above
C.Application Load Balancer
D.Network Load Balancer
A

C.Application Load Balancer

Explanation:
Elastic Load Balancing automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, IP addresses, and Lambda functions. It can handle the varying load of your application traffic in a single Availability Zone or across multiple Availability Zones.

Elastic Load Balancing offers three types of load balancers that all feature the high availability, automatic scaling, and robust security necessary to make your applications fault-tolerant. They are:

Application Load Balancer - This is best suited for load balancing of HTTP and HTTPS traffic and provides advanced request routing targeted at the delivery of modern application architectures, including microservices and containers. Operating at the individual request level (Layer 7), Application Load Balancer routes traffic to targets within Amazon Virtual Private Cloud (Amazon VPC) based on the content of the request.

Network Load Balancer - This is best suited for load balancing of Transmission Control Protocol (TCP), User Datagram Protocol (UDP) and Transport Layer Security (TLS) traffic where extreme performance is required. Operating at the connection level (Layer 4), Network Load Balancer routes traffic to targets within Amazon Virtual Private Cloud (Amazon VPC) and is capable of handling millions of requests per second while maintaining ultra-low latencies. Network Load Balancer is also optimized to handle sudden and volatile traffic patterns.

Classic Load Balancer - This provides basic load balancing across multiple Amazon EC2 instances and operates at both the request level and connection level. Classic Load Balancer is intended for applications that were built within the EC2-Classic network.

With your Application Load Balancer, you can register your Lambda functions as targets and configure a listener rule to forward requests to the target group for your Lambda function. When the load balancer forwards the request to a target group with a Lambda function as a target, it invokes your Lambda function and passes the content of the request to the Lambda function, in JSON format.
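
As a rough sketch of that contract, the hypothetical handler below returns the dictionary shape (statusCode, headers, body) that the Application Load Balancer converts back into an HTTP response; the event shown is a trimmed-down illustration of the JSON the ALB passes in:

```python
import json

# Hypothetical Lambda handler registered behind an ALB target group. The ALB
# delivers the HTTP request as a JSON event and expects a dict with
# statusCode, headers, and body in return.
def handler(event, context):
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello {name}"}),
    }

# Simulate the ALB invoking the function with a trimmed-down event
event = {"httpMethod": "GET", "path": "/", "queryStringParameters": {"name": "alb"}}
response = handler(event, None)
print(response["statusCode"])  # 200
```

Running the handler locally like this also shows why ALB-to-Lambda integration needs no web server at all: the load balancer itself translates between HTTP and these JSON structures.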

Hence, the correct type of elastic load balancer to use is the Application Load Balancer.

Both Network Load Balancer and Classic Load Balancer are incorrect because they don’t support Lambda functions as targets.

The option that says: None of the above is incorrect because you can use an Application Load Balancer to route incoming traffic to your Lambda functions as a target.

59
Q

What is a primary reason why you should be using an elastic load balancer?
A.ELBs help you scale easily without manual intervention
B.ELBs boost your website's overall performance
C.ELBs help tighten security through the use of security groups
D.ELBs provide elasticity by directing traffic to a minimum number of instances required to handle the traffic load

A

D.ELBs provide elasticity by directing traffic to a minimum number of instances required to handle the traffic load

Explanation:
You use Elastic Load Balancing to automatically distribute incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, IP addresses, and Lambda functions. It can handle the varying load of your application traffic in a single Availability Zone or across multiple Availability Zones. Using a load balancer also increases the availability and fault tolerance of your applications. You can add and remove compute resources from your load balancer as your needs change, without disrupting the overall flow of requests to your applications.

For automatic scaling of your compute capacity, you need another service called AWS Auto Scaling to go with your load balancers. Auto-scaling handles the scaling of capacity for you so that your instances are not being overwhelmed.

Although ELBs do add security for your instances, it is not solely because of security groups. Security groups can be used directly with EC2 instances, so this statement is not the best answer for the scenario.

ELBs do not boost website performance. This is usually done by another AWS service known as Amazon CloudFront. ELBs redirect traffic to healthy instances in a controlled manner, providing you the elasticity and fault tolerance your applications need.

60
Q
Which of the following AWS well-architected pillars discusses the use of the right computing resources to meet demand levels even as the demand changes and technologies evolve?
A.Cost optimization
B.Reliability
C.Performance Efficiency
D.Operational Excellence
A

C.Performance Efficiency

Explanation:
The Well-Architected Framework has been developed to help cloud architects build secure, high-performing, resilient, and efficient infrastructure for their applications. This is based on five pillars namely:

  1. Operational Excellence
  2. Security
  3. Reliability
  4. Performance Efficiency
  5. Cost Optimization

The performance efficiency pillar focuses on using IT and computing resources efficiently. It focuses on the ability to use computing resources efficiently to meet system requirements, and to maintain that efficiency as demand changes and technologies evolve.

Hence, the correct answer is Performance efficiency.

Operational Excellence is incorrect because this pillar focuses on running and monitoring systems to deliver business value and continually improving processes and procedures.

Reliability is incorrect because this pillar focuses on the ability to prevent and quickly recover from failures to meet business and customer demand.

Cost optimization is incorrect because this pillar focuses on avoiding unneeded costs by choosing the right services for the job and by right-sizing them.

61
Q
Which AWS service lets you provision either Windows or Linux desktops in just a few minutes and can scale easily to provide thousands of desktops to workers?
A.AWS Systems Manager
B.AWS Organizations
C.AWS Cloud9
D.Amazon Workspaces
A

D.Amazon Workspaces

Explanation:
Amazon WorkSpaces is a managed, secure Desktop-as-a-Service (DaaS) solution where you provision either Windows or Linux desktops in just a few minutes and quickly scale to provide thousands of desktops to workers across the globe.

For both Windows and Amazon Linux WorkSpaces, each WorkSpace is associated with a virtual private cloud (VPC), and a directory to store and manage information for your WorkSpaces and users. Directories are managed through the AWS Directory Service, which offers the following options: Simple AD, AD Connector, or AWS Directory Service for Microsoft Active Directory, also known as AWS Managed Microsoft AD.

Hence, the correct answer is Amazon WorkSpaces.

AWS Organizations is incorrect because this just helps you to centrally govern your environment as you grow and scale your workloads on AWS. You cannot launch virtual desktops using AWS Organizations.

AWS Systems Manager is incorrect because this just provides a unified user interface so you can view operational data from multiple AWS services and allows you to automate operational tasks across your AWS resources. You can perform actions such as automation, run specific commands to your EC2 instances, apply patch management, etc.

AWS Cloud9 is incorrect because this is simply a cloud-based integrated development environment (IDE) that lets you write, run, and debug your code with just a browser. It includes a code editor, debugger, and terminal.

62
Q
A customer wants to run their application in the cloud via EC2 instances. Their code is currently stored in GitHub. Which tool could they use to launch their code onto EC2 instances?
A.AWS CloudFormation
B.AWS CodeCommit
C.Amazon Kinesis
D.AWS CodeDeploy
A

D.AWS CodeDeploy

Explanation:
CodeDeploy is a deployment service that automates application deployments to Amazon EC2 instances, on-premises instances, serverless Lambda functions, or Amazon ECS services. CodeDeploy can deploy application content that runs on a server and is stored in Amazon S3 buckets, GitHub repositories, or Bitbucket repositories. CodeDeploy can also deploy a serverless Lambda function. You do not need to make changes to your existing code before you can use CodeDeploy.
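
CodeDeploy reads its deployment instructions from an AppSpec file at the root of the application bundle. A minimal appspec.yml for an EC2/on-premises deployment might look like the sketch below; the destination path and hook script names are illustrative placeholders, not values from the question:

```yaml
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/myapp   # illustrative install path
hooks:
  AfterInstall:
    - location: scripts/install_dependencies.sh  # hypothetical script
      timeout: 300
      runas: root
  ApplicationStart:
    - location: scripts/start_server.sh          # hypothetical script
      timeout: 300
```

The files section tells CodeDeploy where to place the revision pulled from GitHub, and the lifecycle hooks run your own scripts at defined points in the deployment.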

When you deploy to an AWS Lambda compute platform, the deployment configuration specifies the way traffic is shifted to the new Lambda function versions in your application.

Hence, the correct answer is AWS CodeDeploy.

AWS CodeCommit is incorrect since it is a fully-managed source control service that hosts secure Git-based repositories. It does not deploy code to instances.

AWS CloudFormation is incorrect because it turns your infrastructure into code; this service does not deploy applications.

Amazon Kinesis is incorrect because this is just a data streaming service in AWS. This service does not deploy applications.

63
Q

Which of the following is true about the enhanced technical support response times of the Enterprise support plan in AWS? (Select TWO.)
A.Provides a 15-minute response time support if your business-critical system goes down
B.Provides a 12-hour response time support for general guidance
C.Provides a 2-hour response time support if your production system is impaired
D.Provides a 48-hour response time support for general guidance
E.Provides a 1-hour response time support if your production system goes down

A

E.Provides a 1-hour response time support if your production system goes down

Explanation:
AWS Enterprise Support provides you with concierge-like service where the main focus is helping you achieve your outcomes and find success in the cloud.

With Enterprise Support, you get 24x7 technical support from high-quality engineers, tools and technology to automatically manage health of your environment, consultative architectural guidance delivered in the context of your applications and use-cases, and a designated Technical Account Manager (TAM) to coordinate access to proactive / preventative programs and AWS subject matter experts.

The AWS Enterprise support plan has enhanced technical support which provides 24x7 access to Cloud Support Engineers via phone, chat, and email. You can have an unlimited number of contacts that can open an unlimited amount of cases.

Based on the above table, the correct answers are: provides a 15-minute response time support if your business-critical system goes down and provides a 1-hour response time support if your production system goes down.

The option that says: provides a 2-hour response time support if your production system is impaired is incorrect because the Enterprise support plan actually provides a 4-hour response time if your production system is impaired.

The option that says: provides a 12-hour response time support for general guidance is incorrect because this support plan actually provides a 24-hour response time and not within 12 hours.

The option that says: provides a 48-hour response time support for general guidance is incorrect because, just as mentioned above, the Enterprise Support Plan has a 24-hour response time.

64
Q
Which service will allow you to quickly deploy your application into the AWS Cloud without having to build or launch the individual resources yourself?
A.AWS Elastic Beanstalk
B.Amazon EBS
C.Amazon ECS
D.Amazon EKS
A

A.AWS Elastic Beanstalk

Explanation:
AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS.

You can simply upload your code and Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, auto-scaling to application health monitoring. At the same time, you retain full control over the AWS resources powering your application and can access the underlying resources at any time.

Elastic Beanstalk is the fastest and simplest way to deploy your application on AWS. You simply use the AWS Management Console, a Git repository, or an integrated development environment (IDE) such as Eclipse or Visual Studio to upload your application, and Elastic Beanstalk automatically handles the deployment details of capacity provisioning, load balancing, auto-scaling, and application health monitoring. Within minutes, your application will be ready to use without any infrastructure or resource configuration work on your part.

Hence, the correct answer is AWS Elastic Beanstalk.

Amazon EBS is incorrect since this is a storage volume for EC2 instances.

Amazon ECS is incorrect since this is a compute service for Docker containerized applications.

Amazon EKS is incorrect since this is the Kubernetes solution of AWS.

65
Q

How can you easily and securely copy your infrastructure to another AWS Region?
A.Take an EBS snapshot on all your storage devices and copy them to the new region
B.Enable RDS multi-AZ to have a similar database instance running in the new region
C.Create a golden AMI which you can use to redeploy your instances to the new region
D.Create a CloudFormation template and deploy it in the new region

A

D.Create a CloudFormation template and deploy it in the new region

Explanation:
AWS CloudFormation provides a common language for you to describe and provision all the infrastructure resources in your cloud environment. CloudFormation allows you to use programming languages or a simple text file to model and provision, in an automated and secure manner, all the resources needed for your applications across all regions and accounts. By turning your infrastructure into code, you can deploy the same templates in your other regions.

AWS CloudFormation allows you to model your entire infrastructure with either a text file or programming languages. This provides a single source of truth for your AWS resources and helps you to standardize infrastructure components used across your organization, enabling configuration compliance and faster troubleshooting.
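
As a minimal illustration, the template below (parsed here with Python's json module; the resource name and values are illustrative) declares a single EC2 instance and parameterizes the AMI ID. Since AMI IDs differ per region, parameterizing them lets the same template be deployed unchanged into another region:

```python
import json

# Minimal CloudFormation template (illustrative resource names and values).
# The AMI ID is a Parameter because AMI IDs are region-specific: supplying
# the target region's AMI ID is all that changes between deployments.
template = json.loads("""
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Parameters": {
    "AmiId": {"Type": "AWS::EC2::Image::Id"}
  },
  "Resources": {
    "WebServer": {
      "Type": "AWS::EC2::Instance",
      "Properties": {"InstanceType": "t3.micro", "ImageId": {"Ref": "AmiId"}}
    }
  }
}
""")

print(template["Resources"]["WebServer"]["Type"])  # AWS::EC2::Instance
```

Creating a stack from this template in each target region reproduces the declared resources there, which is what makes CloudFormation the secure, repeatable way to copy an environment across regions.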

Hence, the correct answer is AWS CloudFormation.

Taking EBS snapshots is incorrect since EBS contains EC2 data, not your whole infrastructure.

Creating a golden AMI is incorrect since you only create a copy of your EC2 instances. You might have other resources in your environment that have to be transferred too.

Enabling multi-AZ RDS is incorrect since doing this does not create a copy of your infrastructure in another region.