AWS Certified Cloud Practitioner Practice Test 4 (Bosos) Flashcards

1
Q

How is expense shifted when moving from traditional servers to the Cloud?
A.Operational expense is traded for variable expense
B.Variable expense is traded for capital expense
C.Capital expense is traded for variable expense
D.Capital expense is traded for operational expense

A

C.Capital expense is traded for variable expense

Explanation:
Cloud computing is the on-demand delivery of compute power, database storage, applications, and other IT resources through a cloud services platform via the Internet with pay-as-you-go pricing. Whether you are running applications that share photos to millions of mobile users or you’re supporting the critical operations of your business, a cloud services platform provides rapid access to flexible and low-cost IT resources. With cloud computing, you don’t need to make large upfront investments in hardware and spend a lot of time on the heavy lifting of managing that hardware. Instead, you can provision exactly the right type and size of computing resources you need to power your newest bright idea or operate your IT department. You can access as many resources as you need, almost instantly, and only pay for what you use.

One of the advantages of cloud computing is that instead of having to invest heavily in data centers and servers before you know how you’re going to use them, you can pay only when you consume computing resources, and pay only for how much you consume.

Therefore, the correct answer is: Capital expense is traded for variable expense.

The option that says: Capital expense is traded for operational expense is incorrect because capital expense is actually not traded for operational expense, since you still handle operations.

The option that says: Variable expense is traded for capital expense is incorrect because it should be the other way around.

The option that says: Operational expense is traded for variable expense is incorrect because you do not trade operational expense for variable expense since you still handle the operations of your company.

2
Q
Due to a high number of visitors, many customers are timing out from your website, which runs in an Auto Scaling group of EC2 instances behind an ELB. Upon checking, you find that the Auto Scaling group has stopped adding new instances. Which of the following Trusted Advisor categories will give you more insight into this issue? (Select TWO.)
A.Performance
B.Cost Optimization
C.Security
D.Fault Tolerance
E. Service Limits
A

A.Performance
E.Service Limits

Explanation:
AWS Trusted Advisor is an online tool that provides you real-time guidance to help you provision your resources following AWS best practices. It inspects your AWS environment and makes recommendations for saving money, improving system performance and reliability, or closing security gaps.

Whether establishing new workflows, developing applications, or as part of ongoing improvement, take advantage of the recommendations provided by Trusted Advisor on a regular basis to help keep your solutions provisioned optimally.

Trusted Advisor includes an ever-expanding list of checks in the following five categories:

Cost Optimization – recommendations that can potentially save you money by highlighting unused resources and opportunities to reduce your bill.

Security – identification of security settings that could make your AWS solution less secure.

Fault Tolerance – recommendations that help increase the resiliency of your AWS solution by highlighting redundancy shortfalls, current service limits, and over-utilized resources.

Performance – recommendations that can help to improve the speed and responsiveness of your applications.

Service Limits – recommendations that will tell you when service usage is more than 80% of the service limit.

Since your instances are experiencing high usage, it is best to run the performance checks and see if they are underprovisioned. And since the Auto Scaling group has stopped adding new instances, you may have hit the limit on the number of instances for the region, which the service limits checks will reveal.

Hence, the correct answers are Performance and Service Limits.

Cost optimization, Security, and Fault Tolerance checks will not help you determine and address the issue.
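
As a practical illustration, Trusted Advisor checks can also be queried programmatically through the AWS Support API (available on the Business and Enterprise support plans). The following is a minimal boto3 sketch, not an official recipe; the filtering by category is illustrative.

    import boto3

    # The Support API is only available to Business/Enterprise support
    # customers and must be called in the us-east-1 region.
    support = boto3.client("support", region_name="us-east-1")

    # List all Trusted Advisor checks and keep the Service Limits category.
    checks = support.describe_trusted_advisor_checks(language="en")["checks"]
    limit_checks = [c for c in checks if c["category"] == "service_limits"]

    for check in limit_checks:
        result = support.describe_trusted_advisor_check_result(checkId=check["id"])
        print(check["name"], "->", result["result"]["status"])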

3
Q
If you are tasked to create a lightweight WordPress site in AWS without having to install the package on your own, which PaaS solution in AWS will allow you to do this easily?
A.AWS Elastic Beanstalk
B.Amazon Lightsail
C.Amazon Elastic Kubernetes Service
D.AWS CloudFormation
A

B.Amazon Lightsail

Explanation:
Amazon Lightsail is a PaaS solution for users who need a simple virtual private server (VPS) solution. Lightsail provides developers compute, storage, and networking capacity and capabilities to deploy and manage websites and web applications in the cloud. Lightsail includes everything you need to launch your project quickly – a virtual machine, SSD-based storage, data transfer, DNS management, and a static IP – for a low, predictable monthly price.

Lightsail offers a range of operating system and application templates that are automatically installed when you create a new Lightsail instance. Application templates include WordPress, Drupal, Joomla!, Magento, Redmine, LAMP, Nginx (LEMP), MEAN, Node.js, and more.
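
For illustration, a Lightsail instance preloaded with the WordPress blueprint can be launched with a single API call. This is a minimal boto3 sketch; the instance name, Availability Zone, and bundle size are assumptions you would adjust.

    import boto3

    lightsail = boto3.client("lightsail")

    # Launch a Lightsail VPS with the WordPress application template preinstalled.
    lightsail.create_instances(
        instanceNames=["my-wordpress-site"],   # hypothetical instance name
        availabilityZone="us-east-1a",
        blueprintId="wordpress",               # application template to install
        bundleId="nano_2_0",                   # assumed small fixed-price bundle
    )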

AWS Elastic Beanstalk is another PaaS solution from AWS. It is not the correct answer, however, because you would have to deploy and set up your own WordPress application first. Elastic Beanstalk does not offer a simple way to spin up a functional WordPress website in minutes, unlike Lightsail.

AWS CloudFormation is not PaaS, and is not used to quickly deploy applications. Rather, it allows you to automate infrastructure deployments through the use of CloudFormation templates.

Amazon EKS is a service for managing containerized applications using Kubernetes orchestration. This is not the best service to use if you only wish to run a quick setup of a WordPress website.

4
Q

What is a good disaster recovery precaution if you are launching a dynamic web application with mission-critical workloads that need to be available all the time?
A.Always keep backup data stored in two different S3 buckets
B.Launch applications in two different AWS Regions to prevent downtime during regional outages
C.Run applications in the cloud but keep all data locally
D.Launch applications in two different AZs to prevent downtime during regional outages

A

B.Launch applications in two different AWS Regions to prevent downtime during regional outages

Explanation:
Business continuity depends on the efficient, uninterrupted flow of data across an organization – from operations to sales to fulfillment. Even a brief lapse in workload continuity can mean thousands of lost sales opportunities, disruption to production, and compromised customer trust. The causes of those lapses can range from natural disasters to mechanical failure or human error. You need a proactive cloud disaster recovery strategy that will help you stay up and running in the event that your physical infrastructure is unavailable for any length of time.

An Auto Scaling group can contain Amazon EC2 instances from multiple Availability Zones within the same Region, and by deploying your instances in different AZs you minimize the impact when a single AZ goes down; the ELB redirects traffic from the impacted zone to the healthy instances. A regional outage, however, takes down every AZ in that Region, so mission-critical workloads that must be available all the time should also run in a second AWS Region.

Hence, the correct answer is Launch applications in two different AWS Regions to prevent downtime during regional outages.

The option that says: Launch applications in two different AZs to prevent downtime during regional outages is incorrect since a regional outage affects all the AZs in that Region. Spreading across AZs within a single Region therefore cannot protect you from a regional outage.

The option that says: Always keep backup data stored in two different S3 buckets is incorrect since S3 boasts high durability and availability. There is no need to store backup data in two separate S3 buckets for the sake of disaster recovery.

The option that says: Run applications in the cloud but keep all data locally is incorrect since this is not an optimal disaster recovery strategy among the given choices. Keeping your data locally or in your on-premises data center is susceptible to data loss as well.

5
Q
In implementing continuous integration and continuous delivery (CI/CD) in your cloud architecture, which service will make it easy for you to set up your entire development and continuous delivery toolchain for coding, building, testing, and deploying your application code?
A.AWS CodePipeline
B.AWS CodeStar
C.AWS CodeBuild 
D.AWS CodeCommit
A

B.AWS CodeStar

Explanation:
AWS CodeStar enables you to quickly develop, build, and deploy applications on AWS. AWS CodeStar provides a unified user interface, enabling you to easily manage your software development activities in one place. With AWS CodeStar, you can set up your entire continuous delivery toolchain in minutes, allowing you to start releasing code faster. AWS CodeStar makes it easy for your whole team to work together securely, allowing you to easily manage access and add owners, contributors, and viewers to your projects. Each AWS CodeStar project comes with a project management dashboard, including an integrated issue tracking capability powered by Atlassian JIRA Software. With the AWS CodeStar project dashboard, you can easily track progress across your entire software development process, from your backlog of work items to teams’ recent code deployments.

AWS CodeStar makes it easy for you to set up your entire development and continuous delivery toolchain for coding, building, testing, and deploying your application code. To start a project, you can choose from a variety of AWS CodeStar templates for Amazon EC2, AWS Lambda, and AWS Elastic Beanstalk. You have the option to choose AWS CodeCommit or GitHub to use as your project’s source control. You also have the option to edit your source code using one of several options including AWS Cloud9, Microsoft Visual Studio, or Eclipse. After you make your selections the underlying AWS services are provisioned in minutes, allowing you to quickly start coding and deploying your applications.

Hence, the correct answer is AWS CodeStar.

AWS CodeBuild is incorrect because this is just a fully managed build service that compiles source code, runs tests, and produces software packages that are ready to deploy.

AWS CodeCommit is incorrect because this is simply a fully-managed source control service that makes it easy for companies to host secure and highly scalable private Git repositories.

AWS CodePipeline is incorrect because this basically helps you automate your release pipelines for fast and reliable application and infrastructure updates. It doesn’t provide an entire development and continuous delivery toolchain for coding, building, testing, and deploying your application code, unlike AWS CodeStar.

6
Q

Which of the following is true regarding the AWS Cost and Usage report? (Select TWO.)
A.Allows you to load your cost and usage information into Amazon Athena, Amazon Redshift, and Amazon QuickSight
B.Provides you a dashboard that lets you view the status of your month-to-date AWS expenditure and provides access to a number of other cost management products that can help you dig deeper into your AWS costs and usage
C.Lets you set custom cost and usage budgets that alert you when those thresholds are exceeded
D.Helps you visualize, understand, and manage your AWS costs and usage over time via an intuitive interface that enables you to quickly create custom reports
E.Provides you with granular data about your AWS costs and usage

A

A.Allows you to load your cost and usage information into Amazon Athena, Amazon Redshift, and Amazon QuickSight
E.Provides you with granular data about your AWS costs and usage

Explanation:
The Cost and Usage Report is your one-stop shop for accessing the most granular data about your AWS costs and usage. You can also load your cost and usage information into Amazon Athena, Amazon Redshift, Amazon QuickSight, or a tool of your choice.

It lists AWS usage for each service category used by an account and its IAM users in hourly or daily line items, as well as any tags that you have activated for cost allocation purposes. You can also customize the AWS Cost & Usage Report to aggregate your usage data to the daily or hourly level.

With the AWS Cost & Usage Report, you can do the following:

Access comprehensive AWS cost and usage information

  • The AWS Cost & Usage Report gives you the ability to delve deeply into your AWS cost and usage data, understand how you are using your AWS implementation, and identify opportunities for optimization.

Track your Amazon EC2 Reserved Instance (RI) usage

  • Each line item of usage that receives an RI discount contains information about where the discount was allocated. This makes it easier to trace which instances are benefitting from specific reservations.

Leverage strategic data integrations

  • Using the Amazon Athena data integration feature, you can quickly query your cost and usage information using standard SQL queries. You can also upload your data directly into Amazon Redshift or Amazon QuickSight.

One of the core benefits of the AWS Cost & Usage Report is the wealth of RI-related data that is made available to you. It can be customized to collect cost and usage data at the daily and monthly levels of detail and is updated at least once per day. Each line item of usage that receives an RI discount contains information about where the discount came from. This makes it easier to trace which instances are benefitting from specific reservations. If desired, the AWS Cost & Usage Report can even be ingested directly into Amazon Athena, Amazon QuickSight, or your Amazon Redshift cluster.
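
To illustrate the Athena integration, the query below sums monthly unblended cost per service from a CUR table. This is a sketch: the database and table names (cur_db.cost_and_usage), the S3 output location, and the partition value follow the common CUR-on-Athena setup but are assumptions you would replace with your own.

    import boto3

    athena = boto3.client("athena")

    # Standard SQL against the CUR table that AWS delivers to S3 (table and
    # column names assume the common CUR schema; adjust to your own setup).
    query = """
        SELECT line_item_product_code,
               SUM(line_item_unblended_cost) AS monthly_cost
        FROM cur_db.cost_and_usage
        WHERE month = '4'
        GROUP BY line_item_product_code
        ORDER BY monthly_cost DESC
    """

    athena.start_query_execution(
        QueryString=query,
        QueryExecutionContext={"Database": "cur_db"},
        ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},  # hypothetical bucket
    )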

Hence, the correct answers in this scenario are the following:

  • Provides you with granular data about your AWS costs and usage
  • Allows you to load your cost and usage information into Amazon Athena, Amazon Redshift, and Amazon QuickSight

The option that says: Lets you set custom cost and usage budgets that alert you when those thresholds are exceeded is incorrect because this refers to AWS Budgets.

The option that says: Helps you visualize, understand, and manage your AWS costs and usage over time via an intuitive interface that enables you to quickly create custom reports is incorrect because this refers to AWS Cost Explorer.

The option that says: Provides you a dashboard that lets you view the status of your month-to-date AWS expenditure and provides access to a number of other cost management products that can help you dig deeper into your AWS costs and usage is incorrect because this refers to the AWS Cost Management dashboard.

7
Q

Which of the following does the Enterprise support plan provide to customers in AWS? (Select TWO.)
A.Proactive Technical Account Management
B.15-Minute response time support if your production system goes down
C.Limited access to the 7 Core Trusted Advisor checks
D.Access to online self-paced labs
E.5-minute response time support if your business-critical system goes down

A

A.Proactive Technical Account Management
D.Access to online self-paced labs

Explanation:
AWS Enterprise Support provides you with concierge-like service where the main focus is helping you achieve your outcomes and find success in the cloud.

With Enterprise Support, you get 24x7 technical support from high-quality engineers, tools and technology to automatically manage health of your environment, consultative architectural guidance delivered in the context of your applications and use-cases, and a designated Technical Account Manager (TAM) to coordinate access to proactive / preventative programs and AWS subject matter experts.

In addition to what is available with Basic Support, Enterprise Support provides:

AWS Trusted Advisor - Access to the full set of Trusted Advisor checks and guidance to provision your resources following best practices to help reduce costs, increase performance and fault tolerance, and improve security.

AWS Personal Health Dashboard - A personalized view of the health of AWS services, and alerts when your resources are impacted. Also includes the Health API for integration with your existing management systems.

AWS Support API - Programmatic access to AWS Support Center features to create, manage, and close your support cases, and operationally manage your Trusted Advisor check requests and status.

Proactive Technical Account Management - A Technical Account Manager (TAM) is your designated technical point of contact who provides advocacy and guidance to help plan and build solutions using best practices, coordinate access to subject matter experts and product teams, and proactively keep your AWS environment operationally healthy.

Architecture Support – Contextual guidance on how services fit together to meet your specific use-case, workload, or application.

Third-Party Software Support - Guidance, configuration, and troubleshooting of AWS interoperability with many common operating systems, platforms, and application stack components.

Proactive Support Programs – Included access to Well-Architected Reviews, Operations Reviews, and Infrastructure Event Management.

Support Concierge - the Concierge Team are AWS billing and account experts that specialize in working with enterprise accounts. They will quickly and efficiently assist you with your billing and account inquiries, and work with you to implement billing and account best practices so that you can focus on what matters: running your business.

Enhanced Technical Support – 24x7 access to Cloud Support Engineers via phone, chat, and email. You can have an unlimited number of contacts that can open an unlimited amount of cases. Response times are as follows:

  • General Guidance - < 24 hours
  • System Impaired - < 12 hours
  • Production System Impaired - < 4 hours
  • Production System Down - < 1 hour
  • Business Critical System Down - < 15 minutes

Hence, the correct answers are Proactive Technical Account Management and Access to online self-paced labs.

All other options are incorrect as per the information shared above.

The option that says: Limited access to the 7 Core Trusted Advisor checks is incorrect because the Enterprise support plan has access to the full set of Trusted Advisor checks.

The option that says: 15-minute response time support if your production system goes down is incorrect because the Enterprise support plan actually provides a 1-hour response time for this kind of incident.

The option that says: 5-minute response time support if your business-critical system goes down is incorrect because the Enterprise support plan actually provides a 15-minute response time for this type of outage.

8
Q
What infrastructure does Amazon CloudFront utilize to speed up content delivery to customers?
A.Direct Connect
B.Edge Locations
C.Fiber optics cables
D.Availability Zones
A

B.Edge Locations

Explanation:
Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency, high transfer speeds, all within a developer-friendly environment. CloudFront is integrated with AWS – both physical locations that are directly connected to the AWS global infrastructure, as well as other AWS services. CloudFront works seamlessly with services including AWS Shield for DDoS mitigation, Amazon S3, Elastic Load Balancing or Amazon EC2 as origins for your applications, and Lambda@Edge to run custom code closer to customers’ users and to customize the user experience. Lastly, if you use AWS origins such as Amazon S3, Amazon EC2 or Elastic Load Balancing, you don’t pay for any data transferred between these services and CloudFront.

You can get started with the Content Delivery Network in minutes, using the same AWS tools that you’re already familiar with: APIs, AWS Management Console, AWS CloudFormation, CLIs, and SDKs. Amazon’s CDN offers a simple, pay-as-you-go pricing model with no upfront fees or required long-term contracts, and support for the CDN is included in your existing AWS Support subscription.

Amazon CloudFront uses a global network of Edge Locations to speed up content delivery to customers. Edge Locations cache copies of your content on their local storage, closer to your viewers than the origin Region.

Hence, the correct answer is Edge Locations.

AWS Direct Connect is a cloud service solution that makes it easy to establish a dedicated network connection from your premises to AWS. Using AWS Direct Connect, you can establish private connectivity between AWS and your datacenter, office, or colocation environment. Since this is a private dedicated line, it would be impractical for CloudFront to use this.

Availability Zones are the isolated locations within a Region where AWS data centers are housed. They are not the infrastructure CloudFront uses to speed up content delivery.

Although AWS does use fiber optic cables for its network links, this is not the specific infrastructure component the question is asking about.

9
Q
A customer has a popular website which has millions of viewers from all over the world and has read-heavy database workloads. Which of the following is the best option to use to increase the read throughput on their database?
A.Enable Multi-AZ deployments
B.Enable Amazon RDS Read Replicas
C.Use SQS to queue up the requests
D.Enable Amazon RDS Standby Replicas
A

B.Enable Amazon RDS Read Replicas

Explanation:
Amazon RDS Read Replicas provide enhanced performance and durability for database (DB) instances. This feature makes it easy to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads. You can create one or more replicas of a given source DB Instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput. Read replicas can also be promoted when needed to become standalone DB instances. Read replicas are available in Amazon RDS for MySQL, MariaDB, Oracle and PostgreSQL as well as Amazon Aurora.
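
A read replica can be created from an existing source instance with a single API call, after which the application directs read queries at the replica's endpoint. A minimal boto3 sketch, with hypothetical instance identifiers:

    import boto3

    rds = boto3.client("rds")

    # Create a read replica from an existing source DB instance; read traffic
    # can then be pointed at the replica's endpoint to raise aggregate read
    # throughput.
    rds.create_db_instance_read_replica(
        DBInstanceIdentifier="mydb-replica-1",      # hypothetical replica name
        SourceDBInstanceIdentifier="mydb-primary",  # hypothetical source instance
    )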

Hence, the correct answer is: Enable Amazon RDS Read Replicas.

The option that says: Enable Multi-AZ deployments is incorrect because the Multi-AZ deployments feature is mainly used to achieve high availability and failover support for your database.

The option that says: Enable Amazon RDS Standby Replicas is incorrect because a Standby replica is used in Multi-AZ deployments and hence, it is not a solution to reduce read-heavy database workloads.

The option that says: Use SQS to queue up the requests is incorrect because although an SQS queue can effectively manage the requests, it won’t be able to entirely improve the read-throughput of the database by itself.

10
Q
Which compliance requirement has AWS achieved that allows handling of medical information?
A.SOC 2
B.PCI DSS
C.HIPAA
D.SOC 1
A

C.HIPAA

Explanation:
HIPAA (Health Insurance Portability and Accountability Act of 1996) is United States legislation that provides data privacy and security provisions for safeguarding medical information. AWS enables covered entities and their business associates subject to the HIPAA to use the secure AWS environment to process, maintain, and store protected health information.

Hence, HIPAA is the correct choice for this scenario.

Payment Card Industry Data Security Standard (PCI DSS) is incorrect because this is a set of security standards designed to ensure that ALL companies that accept, process, store, or transmit credit card information maintain a secure environment. This compliance does not concern medical information.

SOC 1 Report (System and Organization Controls Report) is incorrect because this is a report on Controls at a Service Organization which are relevant to user entities’ internal control over financial reporting. This compliance does not concern medical information.

SOC 2 is incorrect because this is focused more on making sure that systems are set up so they assure security, availability, processing integrity, confidentiality, and privacy of customer data. This compliance does not concern medical information.

11
Q

You have an Amazon Linux EC2 instance running for an hour and thirty minutes. How will AWS bill you in terms of usage?
A.You will be billed for an hour and thirty minutes according to the per-second billing rule
B.You will only be billed for an hour according to the hourly billing rule
C.You will be billed for an hour and twenty-nine minutes according to the per second billing rule
D.You will be billed for one hour and thirty minutes according to the hourly billing rule

A

A.You will be billed for an hour and thirty minutes according to the per-second billing rule

Explanation:
Amazon Linux instances are billed in per-second durations. With per-second billing, you pay for only what you use. It takes the cost of unused minutes and seconds in an hour off of the bill, so you can focus on improving your applications instead of maximizing usage to the hour. Workloads that run for irregular periods of time, such as dev/testing, data processing, analytics, batch processing, and gaming applications, benefit especially.

EC2 usage is billed on one-second increments, with a minimum of 60 seconds. Similarly, provisioned storage for EBS volumes will be billed per-second increments, with a 60-second minimum. It is in your best interest to understand the billing structure of your commonly used service to avoid budgeting concerns.
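
As a worked example (using an illustrative On-Demand rate, not a quoted price), an hour and thirty minutes is 5,400 seconds, and the charge is simply the hourly rate prorated per second:

    # Illustrative per-second billing arithmetic (the hourly rate is hypothetical).
    hourly_rate = 0.0104                     # e.g. a small Linux instance, USD/hour
    seconds_used = 90 * 60                   # 1 hour 30 minutes = 5,400 seconds
    billed_seconds = max(seconds_used, 60)   # 60-second minimum per the billing rule

    charge = hourly_rate / 3600 * billed_seconds
    print(f"${charge:.4f}")                  # ~ $0.0156 for the full 5,400 seconds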

Hence, the correct answer is: You will be billed for an hour and thirty minutes according to the per-second billing rule.

All other options are incorrect because these are false statements about EC2 Usage pricing.

Note that not every instance is billed per second: Windows instances and certain commercial Linux distributions are billed in one-hour increments, in which case even a fraction of an hour of usage is charged as a full hour.

12
Q

A company is planning to use AWS Cloud to augment the resources of their on-premises data center to better serve their customers around the world. How does a company benefit from using AWS?
A.Benefit from massive discounts from the Amazon.com shopping website
B.Benefit from massive economies of scale
C.Replace high variable costs with upfront capital infrastructure expenses
D.Replace low variable costs with upfront capital infrastructure expenses

A

B.Benefit from massive economies of scale

Explanation:
In 2006, Amazon Web Services (AWS) began offering IT infrastructure services to businesses as web services—now commonly known as cloud computing. One of the key benefits of cloud computing is the opportunity to replace upfront capital infrastructure expenses with low variable costs that scale with your business. With the cloud, businesses no longer need to plan for and procure servers and other IT infrastructure weeks or months in advance. Instead, they can instantly spin up hundreds or thousands of servers in minutes and deliver results faster.

Whether you are using it to run applications that share photos to millions of mobile users or to support business critical operations, a cloud services platform provides rapid access to flexible and low cost IT resources. With cloud computing, you don’t need to make large upfront investments in hardware and spend a lot of time on the heavy lifting of managing that hardware. Instead, you can provision exactly the right type and size of computing resources you need to power your newest idea or operate your IT department. You can access as many resources as you need, almost instantly, and only pay for what you use.

There are six advantages of using Cloud Computing:

  1. Trade capital expense for variable expense

– Instead of having to invest heavily in data centers and servers before you know how you’re going to use them, you can pay only when you consume computing resources, and pay only for how much you consume.

  2. Benefit from massive economies of scale

– By using cloud computing, you can achieve a lower variable cost than you can get on your own. Because usage from hundreds of thousands of customers is aggregated in the cloud, providers such as AWS can achieve higher economies of scale, which translates into lower pay as-you-go prices.

  3. Stop guessing capacity

– Eliminate guessing on your infrastructure capacity needs. When you make a capacity decision prior to deploying an application, you often end up either sitting on expensive idle resources or dealing with limited capacity. With cloud computing, these problems go away. You can access as much or as little capacity as you need, and scale up and down as required with only a few minutes’ notice.

  4. Increase speed and agility

– In a cloud computing environment, new IT resources are only a click away, which means that you reduce the time to make those resources available to your developers from weeks to just minutes. This results in a dramatic increase in agility for the organization, since the cost and time it takes to experiment and develop is significantly lower.

  5. Stop spending money running and maintaining data centers

– Focus on projects that differentiate your business, not the infrastructure. Cloud computing lets you focus on your own customers, rather than on the heavy lifting of racking, stacking, and powering servers.

  6. Go global in minutes

– Easily deploy your application in multiple regions around the world with just a few clicks. This means you can provide lower latency and a better experience for your customers at minimal cost.

Hence, the correct answer is Benefit from massive economies of scale.

The option that says: Benefit from massive discounts from the Amazon.com shopping website is incorrect because the Amazon.com platform is different from its Amazon Web Services cloud computing division. This statement is actually not related to Cloud Computing.

The option that says: Replace low variable costs with upfront capital infrastructure expenses is incorrect as it should be the other way around. Using AWS allows you to replace upfront capital infrastructure expenses with low variable costs that scale with your business.

The option that says: Replace high variable costs with upfront capital infrastructure expenses is incorrect because you are actually replacing upfront capital infrastructure expenses and not high variable costs. Take note that by using AWS, you can actually have low (not high) variable costs.

13
Q

Which of the following is one example of a security best practice in AWS?
A.Enable MFA only on the root account
B.Grant Most Privilege
C.Create an IAM user with admin privileges instead of using root
D.Always manually define permissions to each and every individual IAM user

A

C.Create an IAM user with admin privileges instead of using root

Explanation:
You use an access key (an access key ID and secret access key) to make programmatic requests to AWS. However, do not use your AWS account root user access key. The access key for your AWS account root user gives full access to all your resources for all AWS services, including your billing information. You cannot reduce the permissions associated with your AWS account root user access key. Therefore, protect your root user access key like you would your credit card numbers or any other sensitive secret.

As indicated in the Security Best Practices whitepaper, you should not use your root account to administer your account. You should instead create an IAM user with administration privileges that will be used as the administrator of your AWS account.
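
As a sketch of this best practice, the administrator IAM user can be created and given the AWS-managed AdministratorAccess policy, after which the root credentials can be locked away. The user name below is a hypothetical example:

    import boto3

    iam = boto3.client("iam")

    # Create an IAM user for day-to-day administration instead of using root.
    iam.create_user(UserName="admin-user")  # hypothetical user name

    # Attach the AWS-managed administrator policy to that user.
    iam.attach_user_policy(
        UserName="admin-user",
        PolicyArn="arn:aws:iam::aws:policy/AdministratorAccess",
    )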

Hence, the correct answer is Create an IAM user with admin privileges instead of using root.

The remaining choices are not part of the security best practices.

The option that says: Enable MFA only on the root account is incorrect because MFA should be enabled as much as possible for every account to prevent others from maliciously logging in to your account.

The option that says: Grant Most Privilege is incorrect because when you create IAM policies, you should follow the standard security advice of granting least privilege, or granting only the permissions required to perform a task.

The option that says: Always manually define permissions to each and every individual IAM user is incorrect because you should use IAM Groups to make it easier to manage the permissions for multiple users.

14
Q
Which of the following AWS services are not considered to be region-specific services? (Select TWO.) 
A.Amazon VPC
B.AWS WAF
C.AWS Lambda
D.Amazon Route 53
E.AWS CloudTrail
A

B.AWS WAF
D.Amazon Route 53

Explanation:

An AWS resource can be a Global, Regional, or Zonal service. A Global service covers all of the AWS Regions across the globe, while a Regional service applies to one specific Region at a time. A Regional service may or may not have the ability to replicate the same resource to another Region. Lastly, a Zonal service can only exist in one Availability Zone.

You don't need to memorize the scope of all of the AWS services as long as you know the pattern. There are actually only a handful of services that are considered global, such as IAM, STS, Route 53, CloudFront, and WAF. For Zonal services, the examples are EC2 instances and EBS volumes, which are tied to the Availability Zone where they were launched. Take note that although EBS volumes are considered a zonal service, EBS snapshots are considered regional since they are not tied to a specific Availability Zone. The rest of the services are regional in scope.

Amazon Route 53 and AWS WAF are both global services, in that they are not dependent on the Region in which they were launched. This can be verified by signing in to the AWS Management Console and checking the Region selector, which presents these services as global.

Hence, the correct answers are Amazon Route 53 and AWS WAF.

AWS Lambda, AWS CloudTrail, and Amazon VPC are all incorrect because these are region-specific services. You can only find your resources in the regions in which you created them. This means that the location you select matters when creating these resources.

15
Q
You are in need of a database that is capable of self-healing and has a high throughput. Which of the following services fits these criteria?
A.Amazon Aurora
B.Amazon Redshift
C.Amazon DynamoDB
D.Amazon RDS
A

A.Amazon Aurora

Explanation:
Amazon Aurora is a relational database engine that combines the speed and reliability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. It is designed to transparently handle the loss of up to two copies of data without affecting database write availability and up to three copies without affecting read availability. Amazon Aurora storage is also self-healing. Data blocks and disks are continuously scanned for errors and repaired automatically.

Amazon Aurora is fully managed by Amazon Relational Database Service (RDS), which automates time-consuming administration tasks like hardware provisioning, database setup, patching, and backups.

It also features a distributed, fault-tolerant, self-healing storage system that auto-scales up to 64TB per database instance. It delivers high performance and availability with up to 15 low-latency read replicas, point-in-time recovery, continuous backup to Amazon S3, and replication across three Availability Zones (AZs).
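
For illustration, an Aurora cluster is provisioned through the regular RDS API. A minimal boto3 sketch with hypothetical identifiers and credentials:

    import boto3

    rds = boto3.client("rds")

    # Create an Aurora MySQL-compatible cluster; its storage is replicated
    # across three AZs and self-heals underneath the cluster.
    rds.create_db_cluster(
        DBClusterIdentifier="my-aurora-cluster",   # hypothetical cluster name
        Engine="aurora-mysql",
        MasterUsername="admin",                    # hypothetical credentials
        MasterUserPassword="replace-with-a-secret",
    )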

Hence, the correct answer is Amazon Aurora.

Amazon RDS is incorrect because although it is similar to Amazon Aurora in that both are SQL database services, RDS does not have self-healing capabilities.

Amazon DynamoDB and Amazon Redshift are both incorrect since these services are not self-healing databases. Amazon Redshift is also more of a data warehouse than a database.

16
Q
Which AWS service will allow you to serve your dynamic web content to users globally?
A.AWS Elastic Load Balancer
B.Amazon S3
C.Amazon CloudFront
D.Amazon Route 53
A

C.Amazon CloudFront

Explanation:
Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency, high transfer speeds, all within a developer-friendly environment. CloudFront is integrated with AWS – both physical locations that are directly connected to the AWS global infrastructure, as well as other AWS services. CloudFront works seamlessly with services including AWS Shield for DDoS mitigation, Amazon S3, Elastic Load Balancing or Amazon EC2 as origins for your applications, and Lambda@Edge to run custom code closer to customers’ users and to customize the user experience. Lastly, if you use AWS origins such as Amazon S3, Amazon EC2 or Elastic Load Balancing, you don’t pay for any data transferred between these services and CloudFront.

You can get started with the Content Delivery Network in minutes, using the same AWS tools that you’re already familiar with: APIs, AWS Management Console, AWS CloudFormation, CLIs, and SDKs. Amazon’s CDN offers a simple, pay-as-you-go pricing model with no upfront fees or required long-term contracts, and support for the CDN is included in your existing AWS Support subscription.

Hence, the correct answer is Amazon CloudFront.

Amazon S3 is incorrect because this can only directly serve static objects.

Amazon Route 53 is incorrect because this is just a DNS service offered by AWS and hence, it is not a suitable service to use for this scenario.

AWS Elastic Load Balancer is incorrect because this just automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, IP addresses, and Lambda functions. It cannot serve content to users globally without a source behind it.

17
Q
What cloud computing model deals with services such as EC2 instances?
A.PaaS
B.IaaS
C.SaaS
D.DBaaS
A

B.IaaS

Explanation:
There are three main models for cloud computing. Each model represents a different part of the cloud computing stack, they are:

  • Infrastructure as a Service (IaaS)
  • Platform as a Service (PaaS)
  • Software as a Service (SaaS)

Infrastructure as a Service, sometimes abbreviated as IaaS, contains the basic building blocks for cloud IT and typically provides access to networking features, computers (virtual or on dedicated hardware), and data storage space. Amazon EC2 is considered IaaS because you have total control over what is done within the instances. You are borrowing the server infrastructure of AWS to fulfill your business needs, and you are charged at a rate for this service.
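
To make the IaaS point concrete: with EC2 you request raw compute yourself and are responsible for everything that runs on it. A minimal boto3 sketch (the AMI ID is a placeholder, not a real image):

    import boto3

    ec2 = boto3.client("ec2")

    # IaaS in practice: you pick the machine image and instance size yourself
    # and manage everything inside the instance from the OS up.
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
    )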

PaaS, or platform as a service, is incorrect. PaaS services include AWS Elastic Beanstalk, which provides you a platform to launch your applications, while the service prepares all the necessary infrastructure to run your application.

SaaS, or software as a service, is incorrect. Software as a Service provides you with a completed product that is run and managed by the service provider. With a SaaS offering, you do not have to think about how the service is maintained or how the underlying infrastructure is managed; you only need to think about how you will use that particular piece of software.

DBaaS, or database as a service, is incorrect. Amazon EC2 is not a managed database service.

18
Q

A MariaDB RDS database is known to have high memory consumption during peak hours which deteriorates the overall performance of your application. What cost-effective change can you introduce to resolve this issue if the database is handling write-intensive operations?
A.Switch to DynamoDB to enjoy millisecond latency
B.Add read replicas to lessen the burden of read operations
C.Distribute the workload by running multiple instances
D.Scale the instance vertically to a higher memory capacity

A

D.Scale the instance vertically to a higher memory capacity

Explanation:
Systems that are expected to grow over time need to be built on top of a scalable architecture. Such an architecture can support growth in users, traffic, or data size with no drop in performance. It should provide that scale in a linear manner, where adding extra resources results in at least a proportional increase in the ability to serve additional load. Growth should introduce economies of scale, and cost should follow the same dimension that generates business value out of that system. While cloud computing provides virtually unlimited on-demand capacity, your design needs to be able to take advantage of those resources seamlessly.

There are generally two ways to scale an IT architecture: vertically and horizontally.

Vertical Scaling

  • Scaling vertically takes place through an increase in the specifications of an individual resource, such as upgrading a server with a larger hard drive or a faster CPU. With Amazon EC2, you can stop an instance and resize it to an instance type that has more RAM, CPU, I/O, or networking capabilities. This way of scaling can eventually reach a limit, and it is not always a cost-efficient or highly available approach. However, it is very easy to implement and can be sufficient for many use cases especially in the short term.

Horizontal Scaling

  • Scaling horizontally takes place through an increase in the number of resources, such as adding more hard drives to a storage array or adding more servers to support an application. This is a great way to build internet-scale applications that leverage the elasticity of cloud computing. Take note that not all architectures are designed to distribute their workload to multiple resources.

In this scenario, it is better to scale vertically than horizontally since this is a database. Scaling vertically will give the database more capacity to perform its job and avoid high memory consumption.
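
Vertically scaling an RDS instance is a single modification call that moves the database to a larger instance class (a brief restart applies the change). A boto3 sketch with a hypothetical identifier and target class:

    import boto3

    rds = boto3.client("rds")

    # Scale the DB instance vertically by moving it to a class with more memory.
    rds.modify_db_instance(
        DBInstanceIdentifier="my-mariadb",  # hypothetical instance name
        DBInstanceClass="db.r5.xlarge",     # memory-optimized target class
        ApplyImmediately=True,              # apply now, not at the next window
    )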

Hence, the correct answer is Scale the instance vertically to a higher memory capacity.

The option that says: Distribute the workload by running multiple instances is incorrect because adding more instances is not the best choice for databases unless you have a proper orchestrator. It is better to modify the database to a higher instance type instead.

The option that says: Add read replicas to lessen the burden of read operations is incorrect because although read replicas can reduce the load on your source DB instance by routing read queries from your applications to them, the scenario mentioned that the database is handling write-intensive operations. Adding read replicas will do little for this scenario's issue, and it might be more costly as well.

The option that says: Switch to DynamoDB to enjoy millisecond latency is incorrect because although switching to DynamoDB is sometimes a valid choice of action, it is still too much work to refactor your SQL database to a NoSQL database. Migration can affect your costs too.

19
Q
What tool can assist you in estimating your monthly AWS bill?
A.AWS TCO Calculator
B.AWS Pricing Calculator
C.AWS Free Monthly Calculator
D.AWS IAM
A

B.AWS Pricing Calculator

Explanation:
To estimate a bill, use the AWS Pricing Calculator. You can enter your planned resources by service, and the Pricing Calculator provides an estimated cost per month. The AWS Pricing Calculator is an easy-to-use online tool that enables you to estimate the monthly cost of AWS services for your use case based on your expected usage. It is continuously updated with the latest pricing for all AWS services in all Regions.

The AWS Pricing Calculator helps customers and prospects estimate their monthly AWS bill more efficiently. Using this tool, they can add, modify and remove services from their ‘bill’ and it will recalculate their estimated monthly charges automatically. The calculator also shows common customer samples and their usage, such as Disaster Recovery and Backup or Web Application.

Hence, the correct answer is AWS Pricing Calculator.

AWS TCO Calculator is incorrect because this is just used to estimate your cost savings when using AWS rather than traditional data centers. This is not the correct calculator to use for estimating your monthly AWS costs.

AWS Free Monthly Calculator is incorrect since no such service exists in AWS.

AWS IAM is incorrect because this is primarily an account management service, and is not used for any billing purposes.

20
Q
In the event of a disaster or an outage, AWS recommends that you spread your servers and databases across at least two different _____?
A.AWS Accounts
B.Availability Zones
C.Regions
D.VPCs
A

B.Availability Zones

Explanation:
To create a highly available system, AWS recommends that you launch your applications in more than one availability zone. This is to ensure that there is service continuity in case an AZ is struck by disaster or experiences an outage. Each AWS Region provides full redundancy and connectivity to the network. AWS Regions consist of multiple Availability Zones (AZ), each of which is a fully isolated partition of the AWS infrastructure that consists of discrete data centers, each with redundant power, networking, and connectivity, and each housed in separate facilities.

Hence, the correct answer is Availability Zones.

Creating a highly available infrastructure boosts the reliability of your services. This is heavily emphasized by AWS in their Well-Architected Framework whitepaper. There are multiple services and features provided by AWS to help you construct a highly available environment for your business needs.

At a minimum, your servers and databases should be running in more than one AZ. You can also launch them in a separate region if you are worried about regional outages.
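
As a sketch of the multi-AZ recommendation, an Auto Scaling group can be told to spread its instances across two Availability Zones. The group name and AZs are hypothetical, and a launch template named web-template is assumed to already exist:

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Spread instances across two AZs so an outage in one zone
    # does not take the whole fleet down.
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="web-asg",                         # hypothetical name
        LaunchTemplate={"LaunchTemplateName": "web-template"},  # assumed to exist
        MinSize=2,
        MaxSize=4,
        AvailabilityZones=["us-east-1a", "us-east-1b"],
    )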

Creating another VPC is not exactly the solution you’re looking for in this scenario. It helps if your additional VPC has subnets in other AZs or is running in a different region.

Creating a new account is unnecessary.

21
Q
What is the lowest support plan that allows an unlimited number of technical support cases to be opened?
A.Basic
B.Business
C.Enterprise
D.Developer
A

D.Developer

Explanation:
AWS Support offers a range of plans that provide access to tools and expertise that support the success and operational health of your AWS solutions. All support plans provide 24x7 access to customer service, AWS documentation, whitepapers, and support forums. For technical support and more resources to plan, deploy, and improve your AWS environment, you can select a support plan that best aligns with your AWS use case.

AWS Support offers four support plans: Basic, Developer, Business, and Enterprise. The Basic plan is free of charge and offers support for account and billing questions and service limit increases. The other plans offer an unlimited number of technical support cases with pay-by-the-month pricing and no long-term contracts, providing the level of support that meets your needs.

All AWS customers automatically have around-the-clock access to these features of the Basic support plan:

  • Customer Service: one-on-one responses to account and billing questions
  • Support forums
  • Service health checks
  • Documentation, whitepapers, and best-practice guides

The cheapest support plan that offers technical support with an unlimited number of cases that can be opened is the Developer support plan. Additionally, it provides you access to the 7 core Trusted Advisor checks and the Personal Health Dashboard, where you get a personalized view of the health of AWS services and alerts when your resources are impacted.

Hence, the correct answer is Developer.

Basic is incorrect because this support plan does not offer Technical Support cases.

Both Business and Enterprise are incorrect because these support plans are more expensive than the Developer plan.

22
Q

A number of servers in your on-premises data center have been collecting dust over the past few years. What is the benefit of moving to the Cloud in this case?
A.Physical servers are managed and maintained by AWS for you
B.The ability to provision resources only when you need them
C.AWS has automated services for you
D.The ability to pay for only what you use

A

B.The ability to provision resources only when you need them

Explanation:
In 2006, Amazon Web Services (AWS) began offering IT infrastructure services to businesses as web services—now commonly known as cloud computing. One of the key benefits of cloud computing is the opportunity to replace upfront capital infrastructure expenses with low variable costs that scale with your business. With the cloud, businesses no longer need to plan for and procure servers and other IT infrastructure weeks or months in advance. Instead, they can instantly spin up hundreds or thousands of servers in minutes and deliver results faster.

Whether you are using it to run applications that share photos to millions of mobile users or to support business-critical operations, a cloud services platform provides rapid access to flexible and low cost IT resources. With cloud computing, you don’t need to make large upfront investments in hardware and spend a lot of time on the heavy lifting of managing that hardware. Instead, you can provision exactly the right type and size of computing resources you need to power your newest idea or operate your IT department. You can access as many resources as you need, almost instantly, and only pay for what you use.

One of the best practices to perform in the Cloud is to have disposable resources instead of fixed servers. Therefore, you can easily provision resources when you need them and take them down when you don’t in the Cloud, and this is the advantage you receive compared to having servers on-premises.

Physical servers are indeed managed and maintained by AWS for you. However, this is not the main advantage of moving to the cloud in this scenario, where unused servers sit in your data center collecting dust.

The ability to pay for only what you use is an incorrect option since the scenario is not about cost issues.

The option that says: AWS has automated services for you is incorrect since this option is not related to the concerns of the given scenario.

23
Q

What are the benefits of using Amazon DynamoDB as your database? (Select TWO.)
A.DynamoDB offers 11 9’s in terms of durability, according to the SLA
B.You can perform very complex queries and joins without deterioration in performance
C.DynamoDB is self-healing, which means your data is scanned for errors and is repaired continuously
D.Database size scales automatically so you won't have to worry about capacity
E.You can store different kinds of unstructured data that would normally not be suitable for relational databases

A

D.Database size scales automatically so you won't have to worry about capacity
E.You can store different kinds of unstructured data that would normally not be suitable for relational databases

Explanation:
Amazon DynamoDB is a fast, scalable NoSQL managed database service. You can build applications with virtually unlimited throughput and storage. You can use DynamoDB to store structured documents, JSON data, key-value pairs and many more items that you would not be able to store on a relational database.

DynamoDB supports some of the world’s largest scale applications by providing consistent, single-digit millisecond response times at any scale. You can build applications with virtually unlimited throughput and storage. DynamoDB global tables replicate your data across multiple AWS Regions to give you fast, local access to data for your globally distributed applications. For use cases that require even faster access with microsecond latency, DynamoDB Accelerator (DAX) provides a fully managed in-memory cache.
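
To illustrate the flexible schema, two items in the same table can carry completely different attributes. A minimal boto3 sketch; the table name is hypothetical, and the table is assumed to already exist with "id" as its partition key:

    import boto3

    table = boto3.resource("dynamodb").Table("items")  # hypothetical existing table

    # Items in the same table need not share a schema beyond the key.
    table.put_item(Item={"id": "user-1", "name": "Alice", "tags": ["admin", "beta"]})
    table.put_item(Item={"id": "doc-7", "body": {"title": "Notes", "words": 1200}})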

Hence, the correct answers are:

  • Database size scales automatically so you won’t have to worry about capacity
  • You can store different kinds of unstructured data that would normally not be suitable for relational databases

The option that says: DynamoDB is self-healing, which means your data is scanned for errors and is repaired continuously is incorrect because DynamoDB is not a self-healing database and it doesn’t automatically scan or repair your data. An example of a self-healing database in AWS is Amazon Aurora in which the concept of “self-healing” relates to the underlying infrastructure.

The option that says: DynamoDB offers 11 9’s in terms of durability, according to the SLA is incorrect because it does not offer 11 9’s in durability, unlike Amazon S3.

The option that says: You can perform very complex queries and joins without deterioration in performance is incorrect because usually, with NoSQL databases, you would not perform complex queries due to the unstructured formatting of your data. Complex queries are better performed in SQL databases due to predictable structuring.

24
Q

You believe that your AWS root account may have been used without you knowing. What can you do to tighten the security on the root account?
A.Configure MFA for your root account
B.Disable root account credentials and use an IAM user with admin privileges
C.Change the root account password often
D.Enable CloudTrail to monitor for suspicious logins

A

A.Configure MFA for your root account

Explanation:
Multi-Factor Authentication (MFA) is a simple best practice that adds an extra layer of protection on top of your user name and password. It is actually one of the first few steps that you should perform after logging in to your newly created AWS account. With MFA enabled, when a user signs in to an AWS website, they will be prompted for their user name and password, as well as for an authentication response from their AWS MFA device. Taken together, these multiple factors provide increased security for your AWS account settings and resources.

You can enable MFA for your AWS account and for individual IAM users you have created under your account. MFA can be also be used to control access to AWS service APIs. After you’ve obtained a supported hardware or a virtual MFA device, AWS does not charge any additional fees for using MFA.
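
Root MFA itself is enabled interactively through the console's security credentials page, but for IAM users the same protection can be scripted. A boto3 sketch with hypothetical names and placeholder codes (the two codes come from the MFA device):

    import boto3

    iam = boto3.client("iam")

    # Create a virtual MFA device, then bind it to an IAM user with two
    # consecutive codes read from the authenticator app.
    mfa = iam.create_virtual_mfa_device(VirtualMFADeviceName="admin-mfa")

    iam.enable_mfa_device(
        UserName="admin-user",                # hypothetical user
        SerialNumber=mfa["VirtualMFADevice"]["SerialNumber"],
        AuthenticationCode1="123456",         # placeholder codes
        AuthenticationCode2="654321",
    )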

Hence, the correct answer is: Configure MFA for your root account.

The option that says: Disable root account credentials and use an IAM user with admin privileges instead is incorrect because there is no actual way to disable root account credentials in AWS.

The option that says: Change the root account password often is incorrect because although changing the password often is a good practice for any kind of account that has a password, this is still not the best option for this scenario. It is better to add an additional layer of protection to your root account via MFA instead.

The option that says: Enable CloudTrail to monitor for suspicious logins is incorrect because enabling CloudTrail will just give you information on the login events that occurred, but it does not provide any further protection beyond that.

25
Q

What is the right arrangement of the AWS Global Infrastructure components according to their geographical coverage area size, in descending order?
A.Regions, Availability Zones, Edge Locations
B.Edge Locations, Availability Zones, Regions
C.Regions, Edge Locations, Availability Zones
D.Availability Zones, Edge Locations, Regions

A

A.Regions, Availability Zones, Edge Locations

Explanation:
The AWS Global Infrastructure delivers a cloud infrastructure companies can depend on—no matter their size, changing needs, or challenges. The AWS Global Infrastructure is designed and built to deliver the most flexible, reliable, scalable, and secure cloud computing environment with the highest quality global network performance available today. Every component of the AWS infrastructure is designed and built for redundancy and reliability, from regions to networking links to load balancers to routers and firmware.

AWS provides a more extensive global footprint than any other cloud provider, and it opens up new Regions faster than other providers. To support its global footprint and ensure customers are served across the world, AWS maintains multiple geographic regions, including Regions in North America, South America, Europe, Asia Pacific, and the Middle East.

Each AWS Region provides full redundancy and connectivity to the network. Unlike other cloud providers, which often define a region as a single data center, an AWS Region consists of multiple Availability Zones, each of which is a fully isolated partition of the AWS infrastructure made up of discrete data centers, each with redundant power, networking, and connectivity, and each housed in separate facilities.

An Availability Zone gives customers the ability to operate production applications and databases that are more highly available, fault-tolerant, and scalable than would be possible from a single data center. All AZs are interconnected with high-bandwidth, low-latency networking, over fully redundant, dedicated metro fiber providing high-throughput, low-latency networking between AZs. The network performance is sufficient to accomplish synchronous replication between AZs.

Hence, the correct answer is: Regions, Availability Zones, Edge Locations.

All other options are incorrect due to their wrong order.

26
Q
What AWS service can monitor the compliance status of your AWS resources against a set of compliance guidelines?
A.Amazon CloudWatch
B.AWS IAM
C.AWS Artifact
D.AWS Config
A

D.AWS Config

Explanation:
AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. Config continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations.

With AWS Config, you are able to continuously monitor and record configuration changes of your AWS resources. Config also enables you to inventory your AWS resources, the configurations of your AWS resources, as well as software configurations within EC2 instances at any point in time. Once a change from a previous state is detected, an Amazon Simple Notification Service (SNS) notification can be delivered for you to review and take action.
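
As a minimal sketch of how you might check compliance programmatically, the following boto3 call lists each Config rule with its current compliance state (assuming AWS Config is already recording in the account):

```python
import boto3

config = boto3.client("config")

# List each AWS Config rule along with its current compliance state.
response = config.describe_compliance_by_config_rule()
for rule in response["ComplianceByConfigRules"]:
    print(rule["ConfigRuleName"], "->", rule["Compliance"]["ComplianceType"])
```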

Hence, the correct answer is AWS Config.

Amazon CloudWatch is incorrect because this service is just a monitoring tool for your AWS resources. CloudWatch provides you with data and actionable insights to monitor your applications, respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health. It does not check for resource compliance unlike AWS Config.

AWS Artifact is incorrect because this simply acts as a central resource for compliance-related information. It provides on-demand access to AWS’ security and compliance reports and select online agreements. It does not, however, monitor the compliance status of your AWS resources and alert you when there are deviations from your set guidelines.

AWS IAM is incorrect since this is a security service for user and access management in your AWS account; it does not monitor resource compliance.

27
Q

An employee is asking for access to your S3 buckets. What level of access should you provide to him?
A.Give him S3 full access
B.Ask what type of access he requires and only provide him those permissions
C.Give him administrator access levels
D.Give him read-only access

A

B.Ask what type of access he requires and only provide him those permissions

Explanation:
When you create IAM policies, follow the standard security advice of granting least privilege, or granting only the permissions required to perform a task. Determine what users (and roles) need to do and then craft policies that allow them to perform only those tasks.

It is not a good practice to give users admin or full access privileges, since this opens the door to malicious or unintended actions. It is also not ideal to give users less than the minimum access they need, since this might hinder their work.
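
As an illustration of least privilege, the sketch below grants an employee only scoped S3 permissions on a hypothetical bucket instead of full or admin access (user and bucket names are placeholders):

```python
import json
import boto3

iam = boto3.client("iam")

# Allow object reads/writes in one bucket, plus listing that bucket -- nothing more.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::example-team-bucket/*",
        },
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::example-team-bucket",
        },
    ],
}

iam.put_user_policy(
    UserName="employee-name",        # placeholder user
    PolicyName="S3LeastPrivilege",
    PolicyDocument=json.dumps(policy),
)
```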

28
Q

Which of the following is not a standard design principle when designing systems in AWS?
A.Servers, not services
B.Loose coupling
C.Design for failure
D.Disposable resources instead of fixed servers

A

A.Servers, not services

Explanation:
There are various best practices that you can follow which can help you build an application in the cloud. The notable ones are:

  1. Design for failure
  2. Decouple your components
  3. Implement elasticity
  4. Think parallel

By focusing on concepts and best practices - like designing for failure, decoupling the application components, understanding and implementing elasticity, combining it with parallelization, and integrating security in every aspect of the application architecture - cloud architects can understand the design considerations necessary for building highly scalable cloud applications.

One of the advantages of using the Cloud is that it allows you to focus on services rather than servers or hardware. Since these resources can now be easily provisioned in the Cloud, you can direct your attention to more critical business areas such as your services and products.

Hence, Servers, not services is the correct answer for this question since this is not considered one of the AWS Cloud design principles.

All other options (Loose coupling, Design for failure, and Disposable resources instead of fixed servers) are incorrect since these are all good design principles to follow when designing systems in AWS.

29
Q

Which of the following is an example of having a highly available application in AWS?
A.Running spot instances for your EC2 workloads
B.Using SQS to decouple messages between a sender and a receiver
C.Running your RDS instance with multi-AZ enabled
D.Running CloudFront for the static website in your S3 bucket

A

C.Running your RDS instance with multi-AZ enabled

Explanation:
Amazon RDS Multi-AZ deployments provide enhanced availability and durability for database instances. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. In case of an infrastructure failure, Amazon RDS performs an automatic failover to the standby, so that you can resume database operations as soon as the failover is complete.

In a Multi-AZ deployment, Amazon RDS automatically provisions and maintains a synchronous standby replica in a different Availability Zone. The primary DB instance is synchronously replicated across Availability Zones to a standby replica to provide data redundancy, eliminate I/O freezes, and minimize latency spikes during system backups. Running a DB instance with high availability can enhance availability during planned system maintenance, and help protect your databases against DB instance failure and Availability Zone disruption.
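
As a minimal sketch, enabling Multi-AZ is a single flag when provisioning the instance; all identifiers and credentials below are placeholders:

```python
import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="example-db",   # placeholder identifier
    Engine="mysql",
    DBInstanceClass="db.t3.micro",
    AllocatedStorage=20,                 # GiB
    MasterUsername="admin",
    MasterUserPassword="change-me-123",  # placeholder; keep real secrets out of code
    MultiAZ=True,                        # provision a synchronous standby in another AZ
)
```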

Hence, the correct answer is: Running your RDS instance with multi-AZ enabled.

Running spot instances for your EC2 workloads is incorrect because Spot instances can be terminated anytime without warning, which makes them unsuitable for workloads that need high availability.

Using SQS to decouple messages is incorrect because this is just an example of decoupling systems to make them more fault-tolerant. It also provides better durability for your messages since this is an inherent SQS characteristic. This option does not, by itself, demonstrate high availability.

Running CloudFront for the static website in your S3 bucket is incorrect because this just makes your content globally available while enhancing delivery speeds. This option does not, by itself, demonstrate high availability.

30
Q

In which of the following cases is it better to use IAM roles rather than IAM users? (Select TWO.)
A.When you want to provide AWS services permissions to do certain actions
B.When you have outside entities that need to perform specific actions in your AWS account
C.When you need a GUI to interact with your AWS environment
D.If you have employees who will constantly need access to your AWS resources
E.When you need an administrator to handle the AWS account for you

A

A.When you want to provide AWS services permissions to do certain actions
B.When you have outside entities that need to perform specific actions in your AWS account

Explanation:
You can use roles to delegate access to users, applications, or services that don’t normally have access to your AWS resources. For example, you might want to grant users in your AWS account access to resources they don’t usually have, or grant users in one AWS account access to resources in another account. Sometimes you want to give AWS access to users who already have identities defined outside of AWS, such as in your corporate directory.

If you need constant access to your AWS account, or a GUI such as the AWS Management Console, it is better to use IAM users instead since they provide long-term credentials for logging in to your account. Entities such as account handlers and administrators should likewise have their own IAM users so they can work efficiently.
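
As a minimal sketch of role-based delegation, an outside entity assumes a role via AWS STS and receives temporary credentials instead of long-term user keys (the role ARN and session name are placeholders):

```python
import boto3

sts = boto3.client("sts")

# Exchange the caller's identity for temporary credentials scoped to the role.
resp = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/ExampleAccessRole",
    RoleSessionName="external-audit-session",
)

creds = resp["Credentials"]  # temporary AccessKeyId / SecretAccessKey / SessionToken
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(s3.list_buckets()["Buckets"])
```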

Hence, the correct answers are:

  • When you want to provide AWS services permissions to do certain actions
  • When you have outside entities that need to perform specific actions in your AWS account.

All the other options are incorrect since these are more suitable for IAM users:

  • If you have employees who will constantly need access to your AWS resources.
  • When you need a GUI to interact with your AWS environment.
  • When you need an administrator to handle the AWS account for you.
31
Q

Which of the following does AWS automatically handle for you? (Select TWO.)
A.Introduce updates and patches to EC2 guest operating systems
B.Introduce updates and patches to EC2 hypervisors
C.Secure AWS data centers from environmental hazards
D.Make sure your data is safely kept and replicated between AZs
E.Automatically encrypt data passing through the AWS Global Network

A

B.Introduce updates and patches to EC2 hypervisors
C.Secure AWS data centers from environmental hazards

Explanation:
AWS shoulders the responsibility for the security of the cloud. This includes protecting data centers from environmental hazards and ensuring that the latest security patches have been applied to their hardware.

Patching guest operating systems and managing customer data in and out of AWS are the responsibility of the customer.

These responsibilities are defined in the AWS Shared Responsibility Model.

32
Q
Which of the following controls does the customer fully inherit from AWS based on the Shared Responsibility Model?
A.Configuration Management
B.Patch Management
C.Physical and Environmental Controls
D.Awareness and Training
A

C.Physical and Environmental Controls

Explanation:
AWS handles all the physical and environmental controls on their data centers, network cables, servers, and other equipment. These controls are fully inherited by the customer.

Awareness and training on how to use the AWS cloud are handled by the customer. AWS trains AWS employees, but a customer must train their own employees.

On the other hand, configuration management and patch management are shared between AWS and the customer. AWS configures and patches its infrastructure and managed services, such as Amazon RDS, while the customer is responsible for patching guest operating systems and client-side applications.

33
Q
Which AWS well-architected pillar stresses the importance of selecting the most appropriate and right number of resource types for your requirements?
A.Operational Excellence
B.Cost optimization
C.Performance Efficiency
D.Reliability
A

B.Cost optimization

Explanation:
The Well-Architected Framework has been developed to help cloud architects build secure, high-performing, resilient, and efficient infrastructure for their applications. This is based on five pillars namely:

  1. Operational Excellence
  2. Security
  3. Reliability
  4. Performance Efficiency
  5. Cost Optimization

Cost Optimization focuses on avoiding un-needed costs. Key topics include understanding and controlling where money is being spent, selecting the most appropriate and right number of resource types, analyzing spend over time, and scaling to meet business needs without overspending.

Hence, the correct answer is Cost optimization.

Performance efficiency is incorrect because this pillar focuses on using IT and computing resources efficiently. Key topics include selecting the right resource types and sizes based on workload requirements, monitoring performance, and making informed decisions to maintain efficiency as business needs evolve.

Operational Excellence is incorrect because this pillar focuses on running and monitoring systems to deliver business value, and continually improving processes and procedures.

Reliability is incorrect because this pillar focuses on the ability to prevent and quickly recover from failures to meet business and customer demand.

34
Q
Which service offers volume discounts when you enable Consolidated Billing?
A.Amazon S3
B.AWS CloudTrail
C.Amazon SNS
D.Amazon CloudFront
A

A.Amazon S3

Explanation:
For billing purposes, AWS treats all the accounts in the organization as if they were one account. Some services, such as Amazon EC2 and Amazon S3, have volume pricing tiers across certain usage dimensions that give you lower prices the more you use the service. With consolidated billing, AWS combines the usage from all accounts to determine which volume pricing tiers to apply, giving you a lower overall price whenever possible. AWS then allocates each linked account a portion of the overall volume discount based on the account’s usage.

AWS CloudTrail, Amazon CloudFront, and Amazon SNS are not supported under the Consolidated Billing volume discount feature since they do not have volume pricing tiers of their own in the first place.
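
To make the tiering effect concrete, here is a purely illustrative calculation with hypothetical two-tier prices (not actual AWS rates) showing how aggregating two accounts’ usage pushes part of it into a cheaper tier:

```python
TIER1_LIMIT_TB = 50   # first 50 TB billed at the tier-1 rate (hypothetical)
TIER1_PRICE = 0.023   # $/GB-month (hypothetical)
TIER2_PRICE = 0.022   # $/GB-month (hypothetical)

def monthly_cost(tb):
    tier1 = min(tb, TIER1_LIMIT_TB) * 1024 * TIER1_PRICE
    tier2 = max(tb - TIER1_LIMIT_TB, 0) * 1024 * TIER2_PRICE
    return tier1 + tier2

# Two linked accounts storing 30 TB each.
separate = monthly_cost(30) + monthly_cost(30)   # billed individually
combined = monthly_cost(60)                      # usage aggregated under consolidated billing
print(f"separate: ${separate:,.2f}  combined: ${combined:,.2f}")
# Combining pushes 10 TB into the cheaper tier, lowering the overall bill.
```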

35
Q
________ has been developed to help cloud architects build secure, high-performing, resilient, and efficient infrastructure for their applications based on five pillars: operational excellence, security, reliability, performance efficiency, and cost optimization.
A.AWS Cloud Adoption Framework
B.Well-Architected Framework
C.AWS Serverless Application Model
D.Scrum Development Process
A

B.Well-Architected Framework

Explanation:
The Well-Architected Framework has been developed to help cloud architects build secure, high-performing, resilient, and efficient infrastructure for their applications. This is based on five pillars namely:

  1. Operational Excellence
  2. Security
  3. Reliability
  4. Performance Efficiency
  5. Cost Optimization

This Framework provides a consistent approach for customers and partners to evaluate architectures, and implement designs that will scale over time.

The AWS Well-Architected Framework helps you understand the pros and cons of decisions you make while building systems on AWS. By using this Framework, you will learn architectural best practices for designing and operating reliable, secure, efficient, and cost-effective systems in the cloud. It provides a way for you to consistently measure your architectures against best practices and identify areas for improvement. The process for reviewing an architecture is a constructive conversation about architectural decisions and is not an audit mechanism. Having well-architected systems greatly increases the likelihood of business success.

AWS Solutions Architects have years of experience architecting solutions across a wide variety of business verticals and use cases, and have helped design and review thousands of customers’ architectures on AWS. From this experience, AWS has identified best practices and core strategies for architecting systems in the cloud that you can also implement.

You can also use the AWS Well-Architected Tool, which helps you review the state of your workloads and compare them to the latest AWS architectural best practices. The tool is based on the AWS Well-Architected Framework, developed to help cloud architects build secure, high-performing, resilient, and efficient application infrastructure.

Hence, the correct answer is Well-Architected Framework.

Scrum Development Process is incorrect because this is based on agile methodology and is not exclusive to the AWS cloud computing model.

AWS Serverless Application Model is incorrect because this is just an open-source framework that you can use to build serverless applications on AWS.

AWS Cloud Adoption Framework is incorrect because this is primarily used to help organizations design and travel an accelerated path to successful cloud adoption. AWS Professional Services created the AWS Cloud Adoption Framework (AWS CAF) which can help you build a comprehensive approach to cloud computing across your organization, and throughout your IT lifecycle.

36
Q

A customer is using your S3-based service to store sprites of game characters. When players retrieve these sprites, they are temporarily stored on the player’s computer. If the S3 bucket is using a standard storage class, how can you further optimize cost?
A.Add a lifecycle policy to move sprites to S3 Standard-Infrequent Access after the customer uploads them
B.Have the customer directly upload the sprites to S3 Standard-Infrequent Access
C.Add a lifecycle policy to move sprites to S3 Glacier after the customer uploads them
D.Have the customer compress the sprites to reduce storage consumption

A

B.Have the customer directly upload the sprites to S3 Standard-Infrequent Access

Explanation:
Amazon S3 offers a range of storage classes designed for different use cases. These include S3 Standard for general-purpose storage of frequently accessed data; S3 Intelligent-Tiering for data with unknown or changing access patterns; S3 Standard-Infrequent Access (S3 Standard-IA) and S3 One Zone-Infrequent Access (S3 One Zone-IA) for long-lived, but less frequently accessed data; and Amazon S3 Glacier (S3 Glacier) and Amazon S3 Glacier Deep Archive (S3 Glacier Deep Archive) for long-term archive and digital preservation. Amazon S3 also offers capabilities to manage your data throughout its lifecycle. Once an S3 Lifecycle policy is set, your data will automatically transfer to a different storage class without any changes to your application.

S3 Standard-IA is for data that is accessed less frequently, but requires rapid access when needed. S3 Standard-IA offers high durability, high throughput, and low latency of S3 Standard, with a low per GB storage price and per GB retrieval fee.

Since sprites are rarely accessed (and after retrieval they are stored on the player’s computer), the S3 Standard-IA storage class is the best storage type to use.
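
A minimal sketch of the direct-upload approach: the storage class is simply specified at write time, so no lifecycle transition is needed (bucket and key are placeholders):

```python
import boto3

s3 = boto3.client("s3")

with open("hero.png", "rb") as sprite:
    s3.put_object(
        Bucket="example-game-assets",   # placeholder bucket
        Key="sprites/hero.png",
        Body=sprite,
        StorageClass="STANDARD_IA",     # stored as Standard-IA from the start
    )
```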

Using S3 Glacier is incorrect since this storage class is more suited for archival. It takes a long time to retrieve objects from this storage type, and retrievals can be costly as well.

Using lifecycle policies to move objects to Standard-IA is not necessary since you can store the objects directly to S3 Standard-IA.

Compressing the objects is not the best solution for this scenario since you will still be charged at the S3 Standard storage rate.

37
Q
Which of the following RDS engines allows you to bring your own license (BYOL)?
A.PostgreSQL
B.MS SQL
C.MySQL
D.Oracle
A

D.Oracle

Explanation:
You can run Amazon RDS for Oracle under two different licensing models – “License Included” and “Bring-Your-Own-License (BYOL)”. The “BYOL” model is designed for customers who prefer to use existing Oracle database licenses or purchase new licenses directly from Oracle.

Other database engines do not currently support the BYOL model.

38
Q

Which cloud computing advantage describes how you can easily deploy your application in multiple AWS regions with just a few clicks?
A.Stop spending money running and maintaining data centers
B.Go global in minutes
C.Benefit from massive economies of scale
D.Stop guessing capacity

A

B.Go global in minutes

Explanation:
Going global in minutes is a cloud advantage that discusses how you can easily deploy your application in multiple regions around the world with just a few clicks. This means you can provide lower latency and a better experience for your customers at minimal cost.

Benefit from massive economies of scale is incorrect. This advantage discusses how you can achieve a lower variable cost with cloud computing than you can get on your own. Because usage from hundreds of thousands of customers is aggregated in the cloud, AWS can achieve higher economies of scale, which translates into lower pay as-you-go prices.

Stop guessing capacity is incorrect. This advantage discusses how the cloud allows you to provision resources with no commitments. You can easily spin up and tear down servers and storage devices as needed. You also benefit from the scalability of the cloud, which reduces your overall cost compared to leaving your physical servers idle or running with limited capacity.

Stop spending money running and maintaining data centers is incorrect. This advantage discusses more on how you can reduce costs and management overhead by using fully managed, low cost services in the AWS cloud. Furthermore, it allows you to focus more on your products and services, rather than worry about all the heavy lifting jobs.

39
Q
Where is the best place to archive logs and other infrequently accessed files for a long time with no strict retrieval time?
A.Amazon S3 Standard-IA
B.Amazon Storage Gateway
C.Amazon Glacier
D.Amazon EBS
A

C.Amazon Glacier

Explanation:
Amazon Glacier is a secure, durable, and extremely low-cost Amazon S3 cloud storage class for data archiving and long-term backup. This makes it feasible to retain all the data you want for use cases like data lakes, analytics, IoT, machine learning, compliance, and media asset archiving.

The Amazon Glacier storage class provides three retrieval options to fit your use case. Expedited retrievals typically return data in 1-5 minutes, and are great for Active Archive use cases. Standard retrievals typically complete between 3-5 hours and work well for less time-sensitive needs like backup data, media editing, or long-term analytics. Bulk retrievals are the lowest-cost retrieval option, returning large amounts of data within 5-12 hours. The Amazon S3 Glacier Deep Archive storage class provides two retrieval options ranging from 12-48 hours.

Amazon Glacier and S3 Glacier Deep Archive are designed to be the lowest-cost Amazon S3 storage classes, allowing you to archive large amounts of data at a very low cost. This makes it feasible to retain all the data you want for use cases like data lakes, analytics, IoT, machine learning, compliance, and media asset archiving. You pay only for what you need, with no minimum commitments or up-front fees.
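
As a minimal sketch, retrieving an archived object means initiating a restore and choosing a retrieval tier; bucket and key below are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Request a Bulk retrieval (the lowest-cost tier) and keep the temporary
# restored copy available for 7 days.
s3.restore_object(
    Bucket="example-log-archive",
    Key="logs/2020/app.log.gz",
    RestoreRequest={
        "Days": 7,
        "GlacierJobParameters": {"Tier": "Bulk"},  # or "Standard" / "Expedited"
    },
)
```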

Hence, the correct answer is Amazon Glacier.

Amazon S3 Standard-IA is incorrect because although this storage class is good for storing infrequently accessed files, S3 Glacier is a much better choice if the archived files will be kept for a long period of time without a strict retrieval time.

Amazon EBS is incorrect because it is not the best choice for archiving objects. EBS serves as volumes for EC2 instances so you need to attach them first to EC2 to access the files within.

Amazon Storage Gateway is incorrect because this is not the best storage option for archived files. In terms of cost and function, Glacier is the better service to use. AWS Storage Gateway is a hybrid cloud storage service that gives you on-premises access to virtually unlimited cloud storage.

40
Q
Which of the following services should you provision if your local data center requires additional storage space without having to migrate data?
A.AWS Snowball Edge
B.AWS Data Pipeline
C.AWS Direct Connect
D.AWS Storage Gateway
A

D.AWS Storage Gateway

Explanation:
AWS Storage Gateway is a hybrid cloud storage service that gives you on-premises access to virtually unlimited cloud storage. Your applications connect to the service through a virtual machine or hardware gateway appliance using standard storage protocols, such as NFS, SMB, and iSCSI. The gateway connects to AWS storage services, such as Amazon S3, Amazon S3 Glacier, Amazon S3 Glacier Deep Archive, Amazon EBS, and AWS Backup, providing storage for files, volumes, snapshots, and virtual tapes in AWS.

Hybrid cloud storage means your data can be used on-premises and stored durably in AWS Cloud storage services, including Amazon S3, Amazon S3 Glacier, Amazon S3 Glacier Deep Archive, and Amazon EBS. Once data is moved to AWS, you can apply AWS compute, machine learning, and big data analytics services to it. Additionally, you can leverage the full AWS portfolio of security and management services including AWS Backup, AWS KMS, AWS Identity and Access Management (IAM), SNS workflows, Amazon CloudWatch and AWS CloudTrail.

Hence, the correct answer is: AWS Storage Gateway.

AWS Direct Connect is incorrect because this is primarily used in order for you to establish private connectivity between AWS and your datacenter, office, or colocation environment. Although it provides a more consistent network connection to AWS, it does not extend your storage capability, unlike Storage Gateway.

AWS Snowball Edge is incorrect because this is just an edge computing and data transfer device provided by the AWS Snowball service. It has on-board storage and compute power that provides select AWS services for use in edge locations. Since the scenario stated that no migration will be done, this is not the best solution to go for.

AWS Data Pipeline is incorrect because this is simply a data workflow orchestration service that helps you reliably process and move data between different AWS compute and storage services, as well as on-premises data sources, at specified intervals. This service is useful for handling data transfers and transformations, but it does not directly extend your storage space, unlike AWS Storage Gateway.

41
Q

Availability Zones are physically separated by a meaningful distance from any other AZ, although all are within 100 km or 60 miles of each other. What is the primary reason why Availability Zones are set up the way they are now?
A.To keep them far apart from each other in case of a disaster
B.To maximize area coverage in a Region
C.To achieve better network connectivity to users in the location
D.Price of the land is cheaper in those locations

A

A.To keep them far apart from each other in case of a disaster

Explanation:
AWS has identified critical system components required to maintain the availability of their system and recover service in the event of outage. Critical system components are backed up across multiple, isolated locations known as Availability Zones. Each Availability Zone is engineered to operate independently with high reliability.

Availability Zones are connected to enable you to easily architect applications that automatically fail-over between Availability Zones without interruption. All AZs are interconnected with high-bandwidth, low-latency networking, over fully redundant, dedicated metro fiber providing high-throughput, low-latency networking between AZs.

Availability Zones (AZs) give customers the ability to operate production applications and databases that are more highly available, fault-tolerant, and scalable than would be possible from a single data center. AWS maintains 69 AZs around the world and continues to add more at a fast pace. Each AZ can comprise multiple data centers (typically 3) and, at full scale, can house hundreds of thousands of servers. AZs are fully isolated partitions of the AWS Global Infrastructure. With their own power infrastructure, the AZs are physically separated by a meaningful distance, many kilometers, from any other AZ, although all are within 100 km (60 miles) of each other.

All AZs are interconnected with high-bandwidth, low-latency networking, over fully redundant, dedicated metro fiber providing high-throughput, low-latency networking between AZs. The network performance is sufficient to accomplish synchronous replication between AZs. AWS Availability Zones are also powerful tools for helping build highly available applications. AZs make partitioning applications about as easy as it can be. If an application is partitioned across AZs, companies are better isolated and protected from issues such as lightning strikes, tornadoes, earthquakes and more.

Hence, the correct answer is: To keep them far apart from each other in case of a disaster.

All other options are incorrect since they are false statements about Availability Zones.

42
Q

You have a customized EC2 instance running your latest web application. How can you create an exact copy of this instance in another region?
A.Create backups of all your EBS volumes and copy them to another region
B.There is no way to do this in AWS. You will have to perform the transfer manually
C.Create a golden AMI of the instance and copy it to the other region
D.Create a load balancer with an auto scaling group that is linked between your two regions. Scale up to have another instance running in the other region

A

C.Create a golden AMI of the instance and copy it to the other region

Explanation:
AMIs provide the information required to launch an Amazon EC2 instance, which is a virtual server in the AWS Cloud. A golden AMI is an AMI that contains the latest security patches, software, configuration, and software agents that you need to install for logging, security maintenance, and performance monitoring.

An AMI includes the following:

One or more EBS snapshots, or, for instance-store-backed AMIs, a template for the root volume of the instance (for example, an operating system, an application server, and applications).

Launch permissions that control which AWS accounts can use the AMI to launch instances.

A block device mapping that specifies the volumes to attach to the instance when it’s launched.

Hence, the correct answer is: Create a golden AMI of the instance and copy it to the other Region.
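
A minimal sketch of the two steps, assuming placeholder instance IDs, names, and regions:

```python
import boto3

# Step 1: create a golden AMI from the customized instance in its home region.
ec2_src = boto3.client("ec2", region_name="us-east-1")
image = ec2_src.create_image(
    InstanceId="i-0123456789abcdef0",   # placeholder instance
    Name="webapp-golden-ami",
    NoReboot=True,                      # avoid rebooting the instance during imaging
)

# Step 2: copy the AMI; copy_image is called from the destination region.
ec2_dst = boto3.client("ec2", region_name="us-west-2")
ec2_dst.copy_image(
    SourceImageId=image["ImageId"],
    SourceRegion="us-east-1",
    Name="webapp-golden-ami-copy",
)
```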

Taking EBS snapshots is not the most efficient way to create a copy of your instance in another region. It does not capture your instance configurations, unlike an AMI.

Elastic load balancers and auto-scaling groups cannot span different regions; they can only span Availability Zones.

43
Q

What does AWS do when a storage device reaches the end of its lifespan?
A.AWS simply wipes the device and disposes of it
B.AWS wipes the drives and sends them back to the manufacturer to procure a new one
C.AWS archives the device in case customers request their data again
D.AWS follows a strict decommissioning process as described in compliance procedures

A

D.AWS follows a strict decommissioning process as described in compliance procedures

Explanation:
Media storage devices used to store customer data are classified by AWS as Critical and treated accordingly, as high impact, throughout their life cycles. AWS has exacting standards on how to install, service, and eventually destroy the devices when they are no longer useful.

When a storage device has reached the end of its useful life, AWS decommissions media using techniques detailed in NIST 800-88. Media that stored customer data is not removed from AWS control until it has been securely decommissioned.

Hence, the correct answer is: AWS follows a strict decommissioning process as described in compliance procedures.

AWS makes sure that the devices are properly handled according to these procedures. They do not send devices back to manufacturers or archive them beyond their lifespan.

44
Q

When is using Amazon RDS a better choice than using a local database?
A.When you want to offload administration responsibilities from yourself
B.When you need full control of your SQL database
C.When you need a free Enterprise license for your Enterprise databases
D.When you need a fast network connection to your local web servers

A

A.When you want to offload administration responsibilities from yourself

Explanation:
Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while automating time-consuming administration tasks such as hardware provisioning, database setup, patching and backups. It frees you to focus on your applications so you can give them the fast performance, high availability, security and compatibility they need.

Hence, the correct answer is: When you want to offload administration responsibilities from yourself.

If you need to have full control of your database servers then Amazon RDS is not for you. You should instead use Amazon EC2 or stick to your on-premises databases.

Using Amazon RDS does not guarantee a faster network connection to your local web servers.

Licenses do not come free with your databases in Amazon RDS. If you decide to use a database with licensing included, then you are charged at a higher rate for the license package. You also have the option to bring your own license if you are using an Oracle database.

45
Q

Which payment plan will give you the largest discount when purchasing EC2 reserved instances?
A.All upfront payment for a 3-year term purchase
B.All upfront payment for a 1-year term purchase
C.Partial upfront payment for a 1-year term purchase
D.Partial upfront payment for a 3-year term purchase

A

A.All upfront payment for a 3-year term purchase

Explanation:
Reserved Instances provide you with a significant discount (up to 75%) compared to On-Demand instance pricing. In addition, when Reserved Instances are assigned to a specific Availability Zone, they provide a capacity reservation, giving you additional confidence in your ability to launch instances when you need them.

You can choose between three payment options when you purchase a Standard or Convertible Reserved Instance:

All Upfront option: You pay for the entire Reserved Instance term with one upfront payment. This option provides you with the largest discount compared to On-Demand instance pricing.

Partial Upfront option: You make a low upfront payment and are then charged a discounted hourly rate for the instance for the duration of the Reserved Instance term.

No Upfront option: Does not require any upfront payment and provides a discounted hourly rate for the duration of the term.

Here’s a sample calculation to see the price difference between the various payment options for 1-year and 3-year terms.
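
This is purely illustrative, with hypothetical prices rather than actual AWS rates; it spreads each option’s total cost over every hour of its term to get an effective hourly rate:

```python
HOURS_PER_YEAR = 8760

def effective_hourly(upfront, hourly, years):
    """Total cost of the term spread over every hour of the term."""
    total = upfront + hourly * HOURS_PER_YEAR * years
    return total / (HOURS_PER_YEAR * years)

options = {  # (upfront $, hourly $, term years) -- all hypothetical
    "On-Demand":           effective_hourly(0,    0.100, 1),
    "1yr Partial Upfront": effective_hourly(300,  0.035, 1),
    "1yr All Upfront":     effective_hourly(580,  0.000, 1),
    "3yr Partial Upfront": effective_hourly(600,  0.022, 3),
    "3yr All Upfront":     effective_hourly(1150, 0.000, 3),
}
for name, rate in sorted(options.items(), key=lambda kv: kv[1]):
    print(f"{name:>20}: ${rate:.4f}/hr")
# With these made-up numbers, 3yr All Upfront yields the lowest effective rate.
```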

As a general rule, Standard RI provides more savings than Convertible RI, which means that the former is the cost-effective option. The All Upfront option provides you with the largest discount compared with the other types. Opting for a longer compute reservation, such as the 3-year term, gives us a greater discount as opposed to a shorter 1-year renewable term.

Hence, the correct answer is: All upfront payment for a 3-year term purchase.

46
Q
Where can the customer view his Reserved Instance usage for the past month?
A.Amazon S3
B.AWS Organizations
C.AWS Billing Console
D.Amazon EC2
A

C.AWS Billing Console

Explanation:
AWS provides a free reporting tool called Cost Explorer that enables you to analyze the cost and usage of your EC2 instances and the usage of your Reserved Instances. The Cost Explorer tool is found under the AWS Billing Console.

Cost Explorer is a tool that enables you to view and analyze your costs and usage. You can explore your usage and costs using the main graph, the Cost Explorer cost and usage reports, or the Cost Explorer RI reports. You can view data for up to the last 13 months, forecast how much you’re likely to spend for the next three months, and get recommendations for what Reserved Instances to purchase. You can use Cost Explorer to identify areas that need further inquiry and see trends that you can use to understand your costs.

A forecast is a prediction of how much you will use AWS services over the forecast time period that you selected, based on your past usage. Forecasting provides an estimate of what your AWS bill will be and enables you to use alarms and budgets for amounts that you’re predicted to use. Because forecasts are predictions, the forecasted billing amounts are estimated and might differ from your actual charges for each statement period.
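
As a minimal sketch, the same RI usage data shown in the console can be pulled from the Cost Explorer API; the date range below is a placeholder month:

```python
import boto3

ce = boto3.client("ce")  # Cost Explorer

resp = ce.get_reservation_utilization(
    TimePeriod={"Start": "2023-01-01", "End": "2023-02-01"},  # placeholder month
)
print(resp["Total"]["UtilizationPercentage"])
```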

Hence, the correct answer is: AWS Billing Console.

Amazon S3 is incorrect because although billing reports are technically stored in an S3 bucket, these reports are not directly accessible by the customer from the service. You should view them in AWS Cost Explorer on your AWS Billing Console instead.

Amazon EC2 is incorrect because this just allows you to purchase Reserved Instances to obtain discounts in pricing. However, to view RI usage, you should visit the Cost Explorer tool instead.

AWS Organizations service is incorrect because this simply helps you configure AWS services and share resources across accounts in your organization. You cannot view your RI usage under this service.

47
Q

Your local servers are in need of an upgrade to serve your consumers better. However, locally stored data cannot be migrated due to compliance requirements. Which model should you adopt for this scenario?
A.Do not perform any migration for now to save up some capital for a large scale migration in the future
B.Utilize AWS web servers but keep your data locally. Set up a hybrid model in which the web servers in AWS will communicate with your local data store
C.Dispose all of your servers and migrate your data to AWS since it will be cheaper and more durable to store data in the cloud
D.Slowly migrate your web servers and data into AWS while keeping your budget in check

A

B.Utilize AWS web servers but keep your data locally. Set up a hybrid model in which the web servers in AWS will communicate with your local data store

Explanation:
Enterprise environments are often a mix of cloud, on-premises data centers, and edge locations. Hybrid cloud architectures help organizations integrate their on-premises and cloud operations to support a broad spectrum of use cases using a common set of cloud services, tools, and APIs across on-premises and cloud environments.

Customers can seamlessly integrate their on-premises and cloud storage, networking, identity management, and security policies to enable use cases such as data center extension to the cloud, backup, and disaster recovery to the cloud, and hybrid data processing.

The best approach for this scenario is to set up a hybrid environment, where you will use EC2 instances for your servers while keeping data stored locally. By applying this type of setup, you can utilize faster servers at a low cost while still keeping your data in compliance.

Hence, the correct answer is: Utilize AWS web servers but keep your data locally. Set up a hybrid model in which the web servers in AWS will communicate with your local data store.

The option that says: Dispose all of your servers and migrate your data to AWS since it will be cheaper and more durable to store data in the cloud is incorrect because this means that you are going fully to the cloud. This is not the best approach since you have a compliance requirement that needs to be met. You should set up a hybrid cloud environment instead to make good use of the AWS Cloud while still fulfilling requirements.

The option that says: Slowly migrate your web servers and data into AWS while keeping your budget in check is incorrect because although the migration is incremental, this solution still doesn’t provide a way for your on-premises data center and your VPC to coexist.

The option that says: Do not perform any migration for now to save up some capital for a large scale migration in the future is incorrect because not migrating as soon as possible can affect the quality of service. Amazon EC2 instances are already offered at a low cost. Migration can be simplified using AWS tools as well.

48
Q

What is the primary benefit of creating a scalable system?
A.Application resources will cost less than usual
B.Applications are able to failover gracefully
C.Applications are not easily targeted by malicious attacks
D.Applications are able to handle varying loads of work

A

D.Applications are able to handle varying loads of work

Explanation:
Cloud computing is the on-demand delivery of compute power, database, storage, applications, and other IT resources via the internet with pay-as-you-go pricing.

Whether you are using it to run applications that share photos to millions of mobile users or to support business-critical operations, a cloud services platform provides rapid access to flexible and low cost IT resources. With cloud computing, you don’t need to make large upfront investments in hardware and spend a lot of time on the heavy lifting of managing that hardware. Instead, you can provision exactly the right type and size of computing resources you need to power your newest idea or operate your IT department. You can access as many resources as you need, almost instantly, and only pay for what you use.

Having a scalable system means that it can adapt to varying workloads as needed. This also enables you to save on costs since resources will only be created when they are needed.

Hence, the correct answer is: Applications are able to handle varying loads of work.

The option that says: Application resources will cost less than usual is incorrect since AWS will still charge you at the same rates, even if you have a scalable system. The cost benefit you get shows up in the total monthly bill.

The option that says: Applications are able to failover gracefully is incorrect since this is the primary benefit of having a fault-tolerant system.

The option that says: Applications are not easily targeted by malicious attacks is incorrect since this benefit is related more to security.

49
Q
You have a mission-critical server that is currently down in AWS, and you need to bring it back up within 15 minutes for service continuity. Which of the following support plans will allow you to contact technical support to immediately resolve this issue?
A.Business
B.Basic
C.Enterprise
D.Developer
A

C.Enterprise

Explanation:
AWS Support offers a range of plans that provide access to tools and expertise that support the success and operational health of your AWS solutions. All support plans provide 24x7 access to customer service, AWS documentation, whitepapers, and support forums. For technical support and more resources to plan, deploy, and improve your AWS environment, you can select a support plan that best aligns with your AWS use case.

AWS Enterprise support plan offers:

  • less than 24 hours response time for general guidance support cases,
  • less than 12 hours response time for impaired system support cases,
  • less than 4 hours response time for impaired production system support cases,
  • less than 1 hour response time for downed production system support cases,
  • and less than 15 minutes response time for downed business-critical system support cases

Hence, the correct support plan is Enterprise.

All other options are incorrect because these support plans do not offer immediate responses for critical system downtimes. Be sure to compare which is the most suitable support plan for your company’s needs.

50
Q
What type of EBS volume is recommended for most workloads and is also usable as a boot volume?
A.Provisioned IOPS SSD
B.Cold HDD
C.General Purpose SSD
D.Throughput Optimized HDD
A

C.General Purpose SSD

Explanation:
Amazon EBS provides the following volume types, which differ in performance characteristics and price, so that you can tailor your storage performance and cost to the needs of your applications:

General Purpose SSD - Recommended for most workloads; Can be used as system boot volumes; Best for development and test environments

Provisioned IOPS SSD - Meant for critical business applications that require sustained IOPS performance; Best used for large database workloads

Throughput Optimized HDD - Meant for streaming workloads requiring consistent, fast throughput at a low price, big data, data warehouses, and log processing. It cannot be a boot volume

Cold HDD - Meant for throughput-oriented storage for large volumes of data that is infrequently accessed, or scenarios where the lowest storage cost is important. It cannot be a boot volume

General Purpose SSD (gp2) volumes offer cost-effective storage that is ideal for a broad range of workloads. These volumes deliver single-digit millisecond latencies and the ability to burst to 3,000 IOPS for extended periods of time. Between a minimum of 100 IOPS (at 33.33 GiB and below) and a maximum of 10,000 IOPS (at 3,334 GiB and above), baseline performance scales linearly at 3 IOPS per GiB of volume size. AWS designs gp2 volumes to deliver the provisioned performance 99% of the time. A gp2 volume can range in size from 1 GiB to 16 TiB.
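
The baseline figures quoted above reduce to a simple formula, sketched here:

```python
# Baseline IOPS for a gp2 volume per the figures above:
# 3 IOPS per GiB, floored at 100 IOPS and capped at 10,000 IOPS.
def gp2_baseline_iops(size_gib: int) -> int:
    return min(max(3 * size_gib, 100), 10_000)

for size in (20, 100, 1000, 3334):
    print(f"{size:>5} GiB -> {gp2_baseline_iops(size):>6} baseline IOPS")
# 20 GiB -> 100 (floor), 1000 GiB -> 3000, 3334 GiB -> 10000 (cap)
```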

Hence, the correct answer is General Purpose SSD.

Provisioned IOPS SSD is incorrect because this is primarily used for critical business applications that require sustained IOPS performance.

Throughput Optimized HDD is incorrect because this is mainly used for frequently accessed, throughput-intensive workloads. Although it is a low-cost HDD volume, it cannot be used as a system boot volume.

Cold HDD is incorrect because although it provides a lower cost HDD volume compared to General Purpose SSD, it cannot be used as a system boot volume.

51
Q
Which of the following do you need to programmatically interact with your AWS environment? (Select TWO.)
A.AWS Lambda
B.AWS SDK
C.Account username and password
D.Access Keys
E.AWS Management Console
A

B.AWS SDK
D.Access Keys

Explanation:
You can use AWS SDKs to programmatically interact with your AWS resources. Using access keys, which are unique identifiers for your IAM user, you can connect to your resources in a secure manner.

The AWS Access Key ID and AWS Secret Access Key are your AWS credentials. They are associated with an AWS Identity and Access Management (IAM) user or role that determines what permissions you have.

Access keys are long-term credentials for an IAM user or the AWS account root user. You can use access keys to sign programmatic requests to the AWS CLI or AWS API (directly or using the AWS SDK). If you don’t have access keys, you can create them from the AWS Management Console. As a best practice, do not use the AWS account root user access keys for any task where it’s not required. Instead, create a new administrator IAM user with access keys for yourself.
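
A minimal sketch of programmatic access with an access key pair (the key values are placeholders; in practice, prefer credentials from the environment, a shared credentials file, or an IAM role over hard-coded keys):

```python
import boto3

s3 = boto3.client(
    "s3",
    aws_access_key_id="AKIAEXAMPLEKEYID",            # placeholder
    aws_secret_access_key="exampleSecretAccessKey",  # placeholder
    region_name="us-east-1",
)

# A simple signed API call made with the access keys above.
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])
```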

Hence, the correct answers are AWS SDK and Access keys.

AWS Management Console is incorrect because when you are programmatically interacting with AWS, you use APIs (and not your web browser) to send and receive messages.

Account username and password is incorrect because you only need your credentials if you are accessing the AWS Management Console on your web browser.

AWS Lambda is incorrect because although you can control your other AWS resources using Lambda via their respective APIs, you still need to set up the required IAM role in order for your function to work.

52
Q

As an AWS customer, what offering do you naturally inherit from AWS after you sign up?
A.All the best practices of AWS policies, architecture and operational processes built to satisfy your requirements
B.All the data you store in and retrieve from AWS
C.All the responsibilities in enforcing security and compliance policies of your organization
D.All the hardware and software that you provision

A

A.All the best practices of AWS policies, architecture and operational processes built to satisfy your requirements

Explanation:
Security at AWS is top priority. Today, AWS protects millions of active customers around the world, from large enterprises and government organizations, to start-ups and non-profits. AWS customers inherit all of the benefits of their security controls, including best practices for security policies, architecture, and operational processes validated against external assurance frameworks.

You do not inherit software and hardware from AWS. AWS retains full control of their physical infrastructure.

All the data you store and retrieve are yours. AWS does not take ownership of your data to begin with, so there is nothing to inherit for this section.

All the responsibilities in enforcing security and compliance policies of your organization are your responsibilities. AWS handles security of the cloud, while the customer handles security in the cloud.

53
Q

What is an advantage of cloud computing when it comes to equipment expenditures?
A.AWS makes sure that physical devices are continuously secured and monitored
B.AWS offers tools and services that help you monitor your budget consumption in the cloud
C.You can easily scale and manage the number of resources running in your cloud environment
D.AWS introduces cost reductions each year in their services

A

D.AWS introduces cost reductions each year in their services

Explanation:
AWS does their best to reduce the cost of their operations and infrastructures each year. This reduction in cost translates to the customer such that the customer also receives lower prices for using AWS resources. Research also shows that it is almost always cheaper to run your workloads in the cloud because of the flexibility of the cloud and pricing discounts, than to run them locally. It is always one of the priorities of AWS to remain cost-effective in the market while offering quality services to its customers.

The other options do not specifically tackle expenditures and are incorrect.

54
Q
A customer is choosing the best AWS support plan which includes a dedicated Technical Account Manager. Which of the following should they choose?
A.Basic
B.Enterprise
C.Developer
D.Business
A

B.Enterprise

Explanation:
AWS Support offers a range of plans that provide access to tools and expertise that support the success and operational health of your AWS solutions. All support plans provide 24x7 access to customer service, AWS documentation, whitepapers, and support forums. For technical support and more resources to plan, deploy, and improve your AWS environment, you can select a support plan that best aligns with your AWS use case.

AWS Support offers four support plans: Basic, Developer, Business, and Enterprise. The Basic plan is free of charge and offers support for account and billing questions and service limit increases. The other plans offer an unlimited number of technical support cases with pay-by-the-month pricing and no long-term contracts, providing the level of support that meets your needs.

All AWS customers automatically have around-the-clock access to these features of the Basic support plan:

  • Customer Service: one-on-one responses to account and billing questions
  • Support forums
  • Service health checks
  • Documentation, whitepapers, and best-practice guides

Customers with an Enterprise support plan have access to these additional features, which the Developer and Business plans do not include:

  • Application architecture guidance: consultative partnership supporting specific use cases and applications.
  • Infrastructure event management: short-term engagement with AWS Support to get a deep understanding of your use case—and after analysis, provide architectural and scaling guidance for an event.
  • Technical account manager
  • White-glove case routing
  • Management business reviews
  • Concierge Support Team

Hence, the correct answer is the Enterprise support plan.

All other options are incorrect because the Basic, Developer and Business support plans do not include a Technical Account Manager.

55
Q
Which AWS storage service offers faster disk read and write performance and provides temporary block-level storage for your instance?
A.EBS Throughput Optimized HDD
B.EBS Provisioned IOPS SSD
C.Instance Store
D.EFS
A

C.Instance Store

Explanation:
An instance store provides temporary block-level storage for your instance. This storage is located on disks that are physically attached to the host computer. Instance store is ideal for temporary storage of information that changes frequently, such as buffers, caches, scratch data, and other temporary content, or for data that is replicated across a fleet of instances, such as a load-balanced pool of web servers.

An instance store consists of one or more instance store volumes exposed as block devices. The size of an instance store, as well as the number of devices available, varies by instance type.

The virtual devices for instance store volumes are ephemeral[0-23]. Instance types that support one instance store volume have ephemeral0. Instance types that support two instance store volumes have ephemeral0 and ephemeral1, and so on.

Since the disk is physically attached to the instance, disk read and writes are faster than EBS volumes or EFS systems which are virtually attached.

Hence, Instance store is the correct answer to this question.

All other options (EBS Provisioned IOPS SSD, EFS and EBS Throughput Optimized HDD) are incorrect.

56
Q
Auto-Scaling based on demand is an implementation of which AWS cloud best practice?
A.Implement elasticity
B.Design for failure
C.Drive architectures using data
D.Design for perfection
A

A.Implement elasticity

Explanation:
There are various best practices that you can follow which can help you build an application in the cloud. The notable ones are:

  1. Design for failure
  2. Decouple your components
  3. Implement elasticity
  4. Think parallel

The cloud brings a new concept of elasticity in your applications. Elasticity can be implemented in three ways:

  1. Proactive Cyclic Scaling: Periodic scaling that occurs at a fixed interval (daily, weekly, monthly, quarterly)
  2. Proactive Event-based Scaling: Scaling just when you are expecting a big surge of traffic requests due to a scheduled business event (new product launch, marketing campaigns)
  3. Auto-scaling based on demand. By using a monitoring service, your system can send triggers to take appropriate actions so that it scales up or down based on metrics (utilization of the servers or network i/o, for instance)

To implement “Elasticity”, one has to first automate the deployment process and streamline the configuration and build process. This will ensure that the system can scale without any human intervention. This will result in immediate cost benefits as the overall utilization is increased by ensuring your resources are closely aligned with demand rather than potentially running servers that are under-utilized.

One of the most important benefits of using a cloud environment is the ability to use the cloud’s APIs to automate your deployment process. It is recommended that you take the time to create an automated deployment process early on during the migration process and not wait until the end. Creating an automated and repeatable deployment process will help reduce errors and facilitate an efficient and scalable update process.

Some of the AWS specific tactics to automate your infrastructure are:

  1. Define Auto-scaling groups for different clusters using the Amazon Auto-scaling feature in Amazon EC2.
  2. Monitor your system metrics (CPU, Memory, Disk I/O, Network I/O) using Amazon CloudWatch and take appropriate actions (launching new AMIs dynamically using the Auto-scaling service) or send notifications.
  3. Store and retrieve machine configuration information dynamically: Utilize Amazon SimpleDB to fetch config data during boot-time of an instance (e.g., database connection strings). SimpleDB may also be used to store information about an instance such as its IP address, machine name, and role.
  4. Design a build process such that it dumps the latest builds to a bucket in Amazon S3; download the latest version of an application during system startup (see the sketch after this list).
  5. Reduce bundling and launch time by booting from Amazon EBS volumes and attaching multiple Amazon EBS volumes to an instance. Create snapshots of common volumes and share snapshots among accounts wherever appropriate.
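
As a minimal sketch of tactic 4 (boto3; the bucket name, prefix, and local path are hypothetical), a startup script could fetch the most recent build like this:

  import boto3

  s3 = boto3.client("s3")

  # List the build artifacts and pick the most recently modified one.
  objects = s3.list_objects_v2(Bucket="my-build-bucket", Prefix="builds/")
  latest = max(objects["Contents"], key=lambda obj: obj["LastModified"])

  # Download it for the rest of the startup script to unpack and run.
  s3.download_file("my-build-bucket", latest["Key"], "/opt/app/latest-build.zip")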

Hence, the correct answer is: Implement elasticity.

Driving architectures using data is incorrect because this is one of the general design principles of the AWS Well-Architected Framework, not one of the cloud application best practices listed above.

Design for perfection is incorrect because it is not a recognized best practice at all. The implementation of Auto-Scaling based on demand is an example of the Implement elasticity best practice.

Design for failure is incorrect because it only encourages you to be a pessimist when designing architectures in the cloud; assume things will fail. In other words, you should always design, implement, and deploy for automated recovery from failure.

57
Q
A customer wants to further secure his network beyond security groups and network access control lists. Which of the services below can be used to provide the additional security features? (Select TWO.)
A.AWS Key Management Service
B.AWS Single Sign-On
C.Amazon GuardDuty
D.Amazon SQS
E.AWS WAF
A

C.Amazon GuardDuty
E.AWS WAF

Explanation:
Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect your AWS accounts and workloads. GuardDuty analyzes tens of billions of events across multiple AWS data sources, such as AWS CloudTrail, Amazon VPC Flow Logs, and DNS logs. You obtain an intelligent and cost-effective option for continuous threat detection in the AWS Cloud.

AWS WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources. You can use AWS WAF to create custom rules that block common attack patterns, such as SQL injection or cross-site scripting, and rules that are designed for your specific application.
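
As a minimal sketch (boto3), enabling GuardDuty for the current account and Region is a single call; WAF rules involve more setup and are omitted here:

  import boto3

  guardduty = boto3.client("guardduty")

  # Create a detector, which turns on GuardDuty threat detection for this
  # account in the current Region.
  detector = guardduty.create_detector(Enable=True)
  print(detector["DetectorId"])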

Amazon SQS is not a security service. This is a messaging service that allows you to decouple applications and provides more durability for your messages.

AWS Single Sign-On allows you to centrally manage SSO access to multiple AWS accounts and business applications. SSO does not protect your network from potential security threats, but it does provide additional access security for your AWS account.

AWS KMS or Key Management Service is your central repository for encryption keys in your account. It is not used to protect your network from potential security threats. KMS is useful if you have data that you need to encrypt, and you want a central location where you can manage your keys.

58
Q
Which of the following is an example of IaaS in AWS?
A.AWS CloudFormation
B.AWS IAM
C.AWS Elastic Beanstalk
D.Amazon EC2
A

D.Amazon EC2

Explanation:
There are three main models for cloud computing. Each model represents a different part of the cloud computing stack, they are:

  • Infrastructure as a Service (IaaS)
  • Platform as a Service (PaaS)
  • Software as a Service (SaaS)

Infrastructure as a Service, sometimes abbreviated as IaaS, contains the basic building blocks for cloud IT and typically provides access to networking features, computers (virtual or on dedicated hardware), and data storage space. Infrastructure as a Service provides you with the highest level of flexibility and management control over your IT resources and is most similar to existing IT resources that many IT departments and developers are familiar with today. Amazon EC2 is one of the IaaS solutions offered by AWS.
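
To make that distinction concrete, here is a minimal sketch (boto3; all IDs are placeholders) in which the customer, not AWS, chooses the network placement, the hardware class, and the storage:

  import boto3

  ec2 = boto3.client("ec2")

  # With IaaS you pick the building blocks yourself: the network (subnet and
  # security group), the compute (instance type), and the storage (EBS volume).
  ec2.run_instances(
      ImageId="ami-0123456789abcdef0",            # placeholder AMI ID
      InstanceType="t3.micro",                    # your choice of hardware class
      SubnetId="subnet-0123456789abcdef0",        # your choice of network placement
      SecurityGroupIds=["sg-0123456789abcdef0"],  # your firewall rules
      BlockDeviceMappings=[
          {"DeviceName": "/dev/xvda", "Ebs": {"VolumeSize": 20, "VolumeType": "gp3"}},
      ],
      MinCount=1,
      MaxCount=1,
  )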

Hence, the correct answer is Amazon EC2.

AWS CloudFormation is incorrect because this is a service that enables you to turn your infrastructure into code, making it more reusable and modifiable. This service is categorized under automation, not IaaS.

AWS Elastic Beanstalk is incorrect because this is a PaaS solution, not IaaS. It is PaaS since it gives you a platform where you can simply deploy your code, and AWS handles all the necessary infrastructure provisioning for you.

AWS IAM is incorrect because this is a security service used for identity and access management in AWS. It is not IaaS since you are not provisioning any cloud infrastructure with IAM.

59
Q
Which of the following is not a part of the AWS Global Infrastructure?
A.Regions
B.Availability Zones
C.Placement Groups
D.Edge Locations
A

C.Placement Groups

Explanation:
The AWS Global Infrastructure delivers a cloud infrastructure companies can depend on—no matter their size, changing needs, or challenges. The AWS Global Infrastructure is designed and built to deliver the most flexible, reliable, scalable, and secure cloud computing environment with the highest quality global network performance available today. Every component of the AWS infrastructure is designed and built for redundancy and reliability, from regions to networking links to load balancers to routers and firmware.

AWS provides a more extensive global footprint than any other cloud provider, and it opens up new Regions faster than other providers. To support its global footprint and ensure customers are served across the world, AWS maintains multiple geographic regions, including Regions in North America, South America, Europe, Asia Pacific, and the Middle East.

Each AWS Region provides full redundancy and connectivity to the network. Unlike other cloud providers, which often define a region as a single data center, an AWS Region consists of multiple Availability Zones, each of which is a fully isolated partition of the AWS infrastructure made up of discrete data centers, each with redundant power, networking, and connectivity, and each housed in separate facilities.

An Availability Zone gives customers the ability to operate production applications and databases that are more highly available, fault-tolerant, and scalable than would be possible from a single data center. All AZs are interconnected with high-bandwidth, low-latency networking, over fully redundant, dedicated metro fiber providing high-throughput, low-latency networking between AZs. The network performance is sufficient to accomplish synchronous replication between AZs.

The AWS Global Infrastructure consists of Regions, Availability Zones, and Edge Locations.
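
As a minimal sketch (boto3), the first two of those components can be enumerated through the EC2 API; edge locations belong to services such as CloudFront and are not listed this way:

  import boto3

  ec2 = boto3.client("ec2")

  # Regions visible to the account.
  for region in ec2.describe_regions()["Regions"]:
      print(region["RegionName"])

  # Availability Zones within the client's current Region.
  for zone in ec2.describe_availability_zones()["AvailabilityZones"]:
      print(zone["ZoneName"], zone["State"])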

A placement group, on the other hand, is just a logical grouping of instances within a single Availability Zone that benefits from low network latency and high network throughput. It is an EC2 feature, not a component of the AWS Global Infrastructure.

60
Q
Which of the following purchase options offers the most significant discount compared to On-Demand instance pricing to process steady-state workloads that will continuously be running for a year?
A.Dedicated Instance
B.Scheduled Reserved Instance
C.Standard Reserved Instance
D.Convertible Reserved Instance
A

C.Standard Reserved Instance

Explanation:
Reserved Instances provide you with a significant discount (up to 75%) compared to On-Demand instance pricing. In addition, when Reserved Instances are assigned to a specific Availability Zone, they provide a capacity reservation, giving you additional confidence in your ability to launch instances when you need them.

Standard Reserved Instances provide you with a significant discount (up to 75%) compared to On-Demand instance pricing and can be purchased for a 1-year or 3-year term. The average discount off On-Demand instances varies based on your term and chosen payment options (up to 40% for 1-year and 60% for a 3-year term). Customers have the flexibility to change the Availability Zone, the instance size, and networking type of their Standard Reserved Instances.

Convertible Reserved Instances provide you with a significant discount (up to 54%) compared to On-Demand Instances and can be purchased for a 1-year or 3-year term. Purchase Convertible Reserved Instances if you need additional flexibility, such as the ability to use different instance families, operating systems, or tenancies over the Reserved Instance term.

Here’s a sample calculation to see the price difference between a Standard RI and a Convertible RI for 1-year and 3-year terms.
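
For illustration only, a rough sketch (Python) using the "up to" discount figures quoted above, applied to a hypothetical $0.10/hour On-Demand rate; actual discounts vary by instance type, Region, and payment option:

  on_demand_hourly = 0.10  # hypothetical On-Demand rate
  hours_per_year = 24 * 365
  on_demand_annual = on_demand_hourly * hours_per_year  # $876.00

  # "Up to" discounts quoted above; real figures vary per instance type.
  discounts = {
      "Standard RI, 1-year": 0.40,
      "Standard RI, 3-year": 0.60,
      "Convertible RI, 3-year": 0.54,
  }

  print(f"On-Demand: ${on_demand_annual:,.2f}/year")
  for option, discount in discounts.items():
      print(f"{option}: ${on_demand_annual * (1 - discount):,.2f}/year")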

As a general rule, a Standard RI provides more savings than a Convertible RI, which makes the former the more cost-effective option. The All Upfront payment option provides the largest discount compared with the other payment types. Opting for a longer compute reservation, such as a 3-year term, gives a greater discount than a shorter 1-year renewable term.

Scheduled RIs are available to launch within the time windows you reserve. This option allows you to match your capacity reservation to a predictable recurring schedule that only requires a fraction of a day, a week, or a month. However, these are not suitable if you have a steady-state workload running continuously for 1 to 3 years.

Hence, the correct answer is Standard Reserved Instance.

Scheduled Reserved Instance is incorrect because this type of instance pricing option is not suitable if you have a steady-state workload running continuously for 1 to 3 years.

Convertible Reserved Instance is incorrect because although it is suitable to process steady-state workloads, this is actually more expensive compared with Standard Reserved Instance.

Dedicated Instance is incorrect because this is actually more expensive than Reserved Instances. With a Dedicated Instance, you can pay for instances that run on single-tenant hardware by the hour.

61
Q

A customer has recently experienced an SQL injection attack on their web application’s database hosted in EC2. They submitted a complaint ticket to AWS. What should be the response from AWS?
A.AWS should reiterate that the customer is responsible for the security of their application in the Cloud
B.AWS and the customer should contact a third party auditor to verify the incident
C.AWS should not be liable for the damages since the customer should have properly patched their EC2 instance
D.AWS should secure their infrastructure better to reduce these kinds of incidents

A

A.AWS should reiterate that the customer is responsible for the security of their application in the Cloud

Explanation:
It is the customer’s responsibility to secure their applications and data in the cloud using the various available security services. Customers can use different network and firewall configurations such as Security Groups, Network ACLs, AWS WAF, AWS Shield Advanced, client-side/server-side encryption, and many others.

Under the AWS Shared Responsibility Model, AWS is responsible for the security of the cloud, while the customer is responsible for security in the cloud, which includes protecting applications against attacks such as SQL injection. Hence, the correct answer in this scenario is: AWS should reiterate that the customer is responsible for the security of their applications in the Cloud.

The option that says: AWS should secure their infrastructure better to reduce these kinds of incidents is an incorrect choice since the customer is at fault for not securing their applications against the SQL injection attack. AWS continuously secures and updates their infrastructure, but this does not include customer applications.

The option that says: AWS and the customer should contact a third-party auditor to verify the incident is incorrect because this is not the responsibility of AWS. The customer can do this for an audit of their own system if they wish to.

The option that says: AWS should not be liable for the damages since the customer should have properly patched the EC2 instance is incorrect because although it is true that AWS is not responsible for the damages incurred, it is not the best answer among the options. AWS should reiterate to the customer how the responsibilities in the Cloud are shared between them.

62
Q
Which AWS service allows me to patch my Windows EC2 instances without having to RDP into them?
A.AWS Systems Manager
B.AWS CloudFormation
C.AWS CodeDeploy
D.Amazon Simple Workflow
A

A.AWS Systems Manager

Explanation:
AWS Systems Manager Patch Manager automates the process of patching managed instances with both security-related and other types of updates. You can use Patch Manager to apply patches for both operating systems and applications. Patch Manager uses patch baselines, which include rules for auto-approving patches within days of their release, as well as a list of approved and rejected patches.
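
As a minimal sketch (boto3; the instance ID is a placeholder and the instance must already be an SSM managed instance), a patch run can be triggered without any RDP session:

  import boto3

  ssm = boto3.client("ssm")

  # Run the built-in patching document on a managed Windows instance.
  # Operation "Scan" only reports missing patches; "Install" applies them.
  command = ssm.send_command(
      InstanceIds=["i-0123456789abcdef0"],  # placeholder managed instance ID
      DocumentName="AWS-RunPatchBaseline",
      Parameters={"Operation": ["Install"]},
  )
  print(command["Command"]["CommandId"])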

AWS CodeDeploy is a deployment service that automates application deployments to Amazon EC2 instances, on-premises instances, serverless Lambda functions, or Amazon ECS services. It deploys application revisions, not operating system patches, so it cannot be used to patch Windows instances.

AWS CloudFormation allows for infrastructure as code. Instead of building your infrastructure manually, you can automate the process in a controlled and predictable manner. It also allows you to reuse your infrastructure and launch it in other AWS regions.

Amazon SWF helps developers build, run, and scale background jobs that have parallel or sequential steps. You can think of Amazon SWF as a fully-managed state tracker and task coordinator in the Cloud. This service does not automate nor handle Windows patching for you.

63
Q

Which of the following actions does not affect costs when using Amazon S3?
A.Making GET requests to your S3 Objects
B.Data transfer costs for uploading objects into your S3 bucket
C.Choosing S3 Standard IA rather than One Zone IA
D.Moving objects out of your S3 bucket to another bucket

A

B.Data transfer costs for uploading objects into your S3 bucket

Explanation:
With Amazon S3, you pay only for the storage you use, with no minimum fee. Prices are based on the location of your Amazon S3 bucket. When you begin to estimate the cost of Amazon S3, consider the following:

Storage - Costs vary with number and size of objects stored in your Amazon S3 buckets as well as type of storage.

Requests - The number and type of requests. GET requests incur charges at different rates than other requests, such as PUT and COPY requests.

Data transfer - The amount of data transferred out of the Amazon S3 region.

Given these pricing parameters, data transferred into Amazon S3 is not charged by AWS. The action of uploading objects, therefore, does not incur data transfer costs.

64
Q
Which among the following is the most cost-effective and scalable choice for object storage?
A.Amazon EBS
B.Amazon EFS
C.AWS Storage Gateway
D.Amazon S3
A

D.Amazon S3

Explanation:
Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance. It is the most cost-effective choice for storing objects since this is its primary purpose. Another advantage you receive from Amazon S3 is volume discounts.

Amazon S3 provides a simple web service interface that you can use to store and retrieve any amount of data, at any time, from anywhere on the web. Using this web service, you can easily build applications that make use of Internet storage. Since Amazon S3 is highly scalable and you only pay for what you use, you can start small and grow your application as you wish, with no compromise on performance or reliability.
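
As a minimal sketch (boto3; the bucket name is a placeholder and must already exist), storing and retrieving an object takes just two calls:

  import boto3

  s3 = boto3.client("s3")

  # Store an object, then read it back.
  s3.put_object(Bucket="my-example-bucket", Key="hello.txt", Body=b"Hello, S3!")

  obj = s3.get_object(Bucket="my-example-bucket", Key="hello.txt")
  print(obj["Body"].read().decode())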

Amazon EFS is incorrect because this is an NFS file storage solution used together with EC2 instances or other virtual servers, not an object store. It is also not a cheap service given its pricing mechanics.

Amazon EBS is incorrect because this is just a block-level volume storage solution for EC2 instances. Although you could store files on EBS volumes, the volumes need to be attached to instances for you to store and retrieve data, which can add unnecessary charges to your bill.

AWS Storage Gateway is incorrect because this is just a hybrid cloud storage service that gives you on-premises access to virtually unlimited cloud storage. It essentially connects your local data storage to Amazon S3 and hence, this is not the correct choice.

65
Q

An engineer is having trouble migrating and configuring a licensed application she uses onto her EC2 instances. What alternative does she have to quickly get her application up and running in AWS?
A.Try to look for an AMI in the AWS Marketplace that provides a similar setup to her application stack
B.She can simply use AWS Application Discovery Service to create an exact copy of the application in EC2
C.Create a Docker image of the application and launch Docker in the EC2 instances
D.Set up a VPN connection from her local network to her AWS VPC, which essentially means that her work is now running in the Cloud

A

A.Try to look for an AMI in the AWS Marketplace that provides a similar setup to her application stack

Explanation:
AWS Marketplace is a curated digital catalog that makes it easy for customers to find, buy, deploy, and manage the third-party software and services they need to build solutions and run their businesses. AWS Marketplace includes thousands of software listings from popular categories such as security, networking, storage, machine learning, business intelligence, database, and DevOps. It also simplifies software licensing and procurement with flexible pricing options and multiple deployment methods.

If you need a specific stack or business solution for your application, then there is a good chance that someone already offers it in the AWS Marketplace. This migration strategy is known as Repurchasing.
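
As a minimal sketch (boto3; the name filter is a hypothetical search term), Marketplace AMIs can also be searched programmatically:

  import boto3

  ec2 = boto3.client("ec2")

  # List Marketplace AMIs whose name matches the product being sought.
  images = ec2.describe_images(
      Owners=["aws-marketplace"],
      Filters=[{"Name": "name", "Values": ["*wordpress*"]}],  # hypothetical term
  )
  for image in images["Images"][:5]:
      print(image["ImageId"], image.get("Name", ""))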

Creating a Docker image is not the best way to go if you are already having trouble with configuration and initialization, and the licensing may also affect how quickly you can migrate the stack onto AWS. Searching the AWS Marketplace for a suitable AMI might relieve the customer of her troubles in the migration process.

The AWS Application Discovery Service does not automatically migrate your applications onto AWS for you.

Setting up a VPN just to connect to the AWS network is not the best solution for this scenario. There might be reasons why the customer wants to migrate her applications onto Amazon EC2, such as reduced costs, higher availability, etc.