AWS Certified Cloud Practitioner Practice Test 4 (Bosos) Flashcards
How is expense shifted when moving from traditional servers to the Cloud?
A.Operational expense is traded for variable expense
B.Variable expense is traded for capital expense
C.Capital expense is traded for variable expense
D.Capital expense is traded for operational expense
C.Capital expense is traded for variable expense
Explanation:
Cloud computing is the on-demand delivery of compute power, database storage, applications, and other IT resources through a cloud services platform via the Internet with pay-as-you-go pricing. Whether you are running applications that share photos to millions of mobile users or you’re supporting the critical operations of your business, a cloud services platform provides rapid access to flexible and low-cost IT resources. With cloud computing, you don’t need to make large upfront investments in hardware and spend a lot of time on the heavy lifting of managing that hardware. Instead, you can provision exactly the right type and size of computing resources you need to power your newest bright idea or operate your IT department. You can access as many resources as you need, almost instantly, and only pay for what you use.
One of the advantages of cloud computing is that instead of having to invest heavily in data centers and servers before you know how you’re going to use them, you can pay only when you consume computing resources, and pay only for how much you consume.
Therefore, the option that says: Capital expense is traded for variable expense is the correct answer.
The option that says: Capital expense is traded for operational expense is incorrect because capital expense is actually not traded for operational expense, since you still handle operations.
The option that says: Variable expense is traded for capital expense is incorrect because it should be the other way around.
The option that says: Operational expense is traded for variable expense is incorrect because you do not trade operational expense for variable expense since you still handle the operations of your company.
Due to a high number of visitors, many customers are timing out from your website, which is running in an Auto Scaling group of EC2 instances behind an ELB. Upon checking, the Auto Scaling group has stopped adding new instances to your group. Which of the following Trusted Advisor categories will give you more insight on this issue? (Select TWO.)
A.Performance
B.Cost Optimization
C.Security
D.Fault Tolerance
E.Service Limits
A.Performance
E.Service Limits
Explanation:
AWS Trusted Advisor is an online tool that provides you real-time guidance to help you provision your resources following AWS best practices. It inspects your AWS environment and makes recommendations for saving money, improving system performance and reliability, or closing security gaps.
Whether establishing new workflows, developing applications, or as part of ongoing improvement, take advantage of the recommendations provided by Trusted Advisor on a regular basis to help keep your solutions provisioned optimally.
Trusted Advisor includes an ever-expanding list of checks in the following five categories:
Cost Optimization – recommendations that can potentially save you money by highlighting unused resources and opportunities to reduce your bill.
Security – identification of security settings that could make your AWS solution less secure.
Fault Tolerance – recommendations that help increase the resiliency of your AWS solution by highlighting redundancy shortfalls, current service limits, and over-utilized resources.
Performance – recommendations that can help to improve the speed and responsiveness of your applications.
Service Limits – recommendations that will tell you when service usage is more than 80% of the service limit.
Since your instances are experiencing high usage, it is best to run the Performance checks and see if they are underprovisioned. And since the Auto Scaling group has stopped adding new instances, you might have hit your limit on the number of instances for the Region, which the Service Limits checks would surface.
Hence, the correct answers are Performance and Service Limits.
Cost optimization, Security, and Fault Tolerance checks will not help you determine and address the issue.
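The 80%-of-limit rule that the Service Limits category applies can be sketched as a small function. This is an illustrative helper for studying the concept, not an AWS API; the function name, quota names, and numbers are made up:

```python
# Sketch of the 80% threshold rule used by Trusted Advisor's Service Limits
# category. The function and the example quotas are illustrative only.
def check_service_limits(usage, limits, threshold=0.8):
    """Return the services whose usage meets or exceeds `threshold` of their limit."""
    flagged = {}
    for service, used in usage.items():
        limit = limits[service]
        if used >= threshold * limit:
            flagged[service] = f"{used}/{limit} ({used / limit:.0%})"
    return flagged

# Example: a nearly exhausted EC2 instance quota would explain why the
# Auto Scaling group in the scenario stopped launching new instances.
usage = {"ec2-on-demand-instances": 19, "ebs-volumes": 40}
limits = {"ec2-on-demand-instances": 20, "ebs-volumes": 5000}
print(check_service_limits(usage, limits))
# → {'ec2-on-demand-instances': '19/20 (95%)'}
```

A check like this is why the Service Limits category, alongside Performance, points directly at the scenario's root cause.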
If you are tasked to create a lightweight WordPress site in AWS without having to install the package on your own, which PaaS solution in AWS will allow you to do this easily?
A.AWS Elastic Beanstalk
B.Amazon Lightsail
C.Amazon Elastic Kubernetes Service
D.AWS CloudFormation
B.Amazon Lightsail
Explanation:
Amazon Lightsail is a PaaS solution for users who need a simple virtual private server (VPS) solution. Lightsail provides developers compute, storage, and networking capacity and capabilities to deploy and manage websites and web applications in the cloud. Lightsail includes everything you need to launch your project quickly – a virtual machine, SSD-based storage, data transfer, DNS management, and a static IP – for a low, predictable monthly price.
Lightsail offers a range of operating system and application templates that are automatically installed when you create a new Lightsail instance. Application templates include WordPress, Drupal, Joomla!, Magento, Redmine, LAMP, Nginx (LEMP), MEAN, Node.js, and more.
AWS Elastic Beanstalk is another PaaS solution from AWS. This is not the correct answer, however, because you would have to deploy and set up your own WordPress application first. Elastic Beanstalk does not offer a simple solution to quickly set up a functional WordPress website in minutes, unlike Lightsail.
AWS CloudFormation is not PaaS, and is not used to quickly deploy applications. Rather, it allows you to automate infrastructure deployments through the use of CloudFormation templates.
Amazon EKS is a service for managing containerized applications using Kubernetes orchestration. This is not the best service to use if you only wish to run a quick setup of a WordPress website.
What is a good disaster recovery precaution if you are launching a dynamic web application with mission-critical workloads that need to be available all the time?
A.Always keep backup data stored in two different S3 buckets
B.Launch applications in two different AWS Regions to prevent downtime during regional outages
C.Run applications in the cloud but keep all data locally
D.Launch applications in two different AZs to prevent downtime during regional outages
B.Launch applications in two different AWS Regions to prevent downtime during regional outages
Explanation:
Business continuity depends on the efficient, uninterrupted flow of data across an organization – from operations to sales to fulfillment. Even a brief lapse in workload continuity can mean thousands of lost sales opportunities, disruption to production, and compromised customer trust. The causes of those lapses can range from natural disasters to mechanical failure or human error. You need a proactive cloud disaster recovery strategy that will help you stay up and running in the event that your physical infrastructure is unavailable for any length of time.
An Auto Scaling group can contain Amazon EC2 instances from multiple Availability Zones within the same Region. By deploying your instances in different AZs, you minimize the impact to your infrastructure when an AZ goes down, and the ELB redirects traffic from the impacted zone to the healthy instances. However, a Multi-AZ architecture still lives within a single Region. For mission-critical workloads that must stay available through a regional outage, you need to deploy the application in at least two AWS Regions and fail traffic over to the healthy Region, for example with Amazon Route 53 DNS failover.
Hence, the correct answer is Launch applications in two different AWS Regions to prevent downtime during regional outages.
The option that says: Launch applications in two different AZs to prevent downtime during regional outages is incorrect since a regional outage affects all of the Availability Zones within that Region; a Multi-AZ deployment alone cannot protect you from it. An Auto Scaling group can only span AZs within a single Region, not across Regions.
The option that says: Always keep backup data stored in two different S3 buckets is incorrect since S3 boasts high durability and availability. There is no need to store backup data in two separate S3 buckets for the sake of disaster recovery.
The option that says: Run applications in the cloud but keep all data locally is incorrect since this is not an optimal disaster recovery strategy among the given choices. Keeping your data locally or in your on-premises data center is susceptible to data loss as well.
In implementing continuous integration and continuous delivery (CI/CD) in your cloud architecture, which service will make it easy for you to set up your entire development and continuous delivery toolchain for coding, building, testing, and deploying your application code?
A.AWS CodePipeline
B.AWS CodeStar
C.AWS CodeBuild
D.AWS CodeCommit
B.AWS CodeStar
Explanation:
AWS CodeStar enables you to quickly develop, build, and deploy applications on AWS. AWS CodeStar provides a unified user interface, enabling you to easily manage your software development activities in one place. With AWS CodeStar, you can set up your entire continuous delivery toolchain in minutes, allowing you to start releasing code faster. AWS CodeStar makes it easy for your whole team to work together securely, allowing you to easily manage access and add owners, contributors, and viewers to your projects. Each AWS CodeStar project comes with a project management dashboard, including an integrated issue tracking capability powered by Atlassian JIRA Software. With the AWS CodeStar project dashboard, you can easily track progress across your entire software development process, from your backlog of work items to teams’ recent code deployments.
AWS CodeStar makes it easy for you to set up your entire development and continuous delivery toolchain for coding, building, testing, and deploying your application code. To start a project, you can choose from a variety of AWS CodeStar templates for Amazon EC2, AWS Lambda, and AWS Elastic Beanstalk. You have the option to choose AWS CodeCommit or GitHub to use as your project’s source control. You also have the option to edit your source code using one of several options including AWS Cloud9, Microsoft Visual Studio, or Eclipse. After you make your selections the underlying AWS services are provisioned in minutes, allowing you to quickly start coding and deploying your applications.
Hence, the correct answer is AWS CodeStar.
AWS CodeBuild is incorrect because this is just a fully managed build service that compiles source code, runs tests, and produces software packages that are ready to deploy.
AWS CodeCommit is incorrect because this is simply a fully-managed source control service that makes it easy for companies to host secure and highly scalable private Git repositories.
AWS CodePipeline is incorrect because this basically helps you automate your release pipelines for fast and reliable application and infrastructure updates. It doesn’t provide an entire development and continuous delivery toolchain for coding, building, testing, and deploying your application code, unlike AWS CodeStar.
Which of the following is true regarding the AWS Cost and Usage report? (Select TWO.)
A.Allows you to load your cost and usage information into Amazon Athena, Amazon Redshift, and Amazon QuickSight
B.Provides you a dashboard that lets you view the status of your month-to-date AWS expenditure and provides access to a number of other cost management products that can help you dig deeper into your AWS costs and usage
C.Lets you set custom cost and usage budgets that alert you when those thresholds are exceeded
D.Helps you visualize, understand, and manage your AWS costs and usage over time via an intuitive interface that enables you to quickly create custom reports
E.Provides you with granular data about your AWS costs and usage
A.Allows you to load your cost and usage information into Amazon Athena, Amazon Redshift, and Amazon QuickSight
E.Provides you with granular data about your AWS costs and usage
Explanation:
The Cost and Usage Report is your one-stop shop for accessing the most granular data about your AWS costs and usage. You can also load your cost and usage information into Amazon Athena, Amazon Redshift, Amazon QuickSight, or a tool of your choice.
It lists AWS usage for each service category used by an account and its IAM users in hourly or daily line items, as well as any tags that you have activated for cost allocation purposes. You can also customize the AWS Cost & Usage Report to aggregate your usage data to the daily or hourly level.
With the AWS Cost & Usage Report, you can do the following:
Access comprehensive AWS cost and usage information
- The AWS Cost & Usage Report gives you the ability to delve deeply into your AWS cost and usage data, understand how you are using your AWS implementation, and identify opportunities for optimization.
Track your Amazon EC2 Reserved Instance (RI) usage
- Each line item of usage that receives an RI discount contains information about where the discount was allocated. This makes it easier to trace which instances are benefitting from specific reservations.
Leverage strategic data integrations
- Using the Amazon Athena data integration feature, you can quickly query your cost and usage information using standard SQL queries. You can also upload your data directly into Amazon Redshift or Amazon QuickSight.
One of the core benefits of the AWS Cost & Usage Report is the wealth of RI-related data that is made available to you. It can be customized to collect cost and usage data at the daily and monthly levels of detail and is updated at least once per day. Each line item of usage that receives an RI discount contains information about where the discount came from. This makes it easier to trace which instances are benefitting from specific reservations. If desired, the AWS Cost & Usage Report can even be ingested directly into Amazon Athena, Amazon QuickSight, or your Amazon Redshift cluster.
Hence, the correct answers in this scenario are the following:
- Provides you with granular data about your AWS costs and usage
- Allows you to load your cost and usage information into Amazon Athena, Amazon Redshift, and Amazon QuickSight
The option that says: Lets you set custom cost and usage budgets that alert you when those thresholds are exceeded is incorrect because this refers to AWS Budgets.
The option that says: Helps you visualize, understand, and manage your AWS costs and usage over time via an intuitive interface that enables you to quickly create custom reports is incorrect because this refers to AWS Cost Explorer.
The option that says: Provides you a dashboard that lets you view the status of your month-to-date AWS expenditure and provides access to a number of other cost management products that can help you dig deeper into your AWS costs and usage is incorrect because this refers to the AWS Cost Management dashboard.
Which of the following does the Enterprise support plan provide to customers in AWS? (Select TWO.)
A.Proactive Technical Account Management
B.15-Minute response time support if your production system goes down
C.Limited access to the 7 Core Trusted Advisor checks
D.Access to online self-paced labs
E.5-minute response time support if your business-critical system goes down
A.Proactive Technical Account Management
D.Access to online self-paced labs
Explanation:
AWS Enterprise Support provides you with concierge-like service where the main focus is helping you achieve your outcomes and find success in the cloud.
With Enterprise Support, you get 24x7 technical support from high-quality engineers, tools and technology to automatically manage health of your environment, consultative architectural guidance delivered in the context of your applications and use-cases, and a designated Technical Account Manager (TAM) to coordinate access to proactive / preventative programs and AWS subject matter experts.
In addition to what is available with Basic Support, Enterprise Support provides:
AWS Trusted Advisor - Access to the full set of Trusted Advisor checks and guidance to provision your resources following best practices to help reduce costs, increase performance and fault tolerance, and improve security.
AWS Personal Health Dashboard - A personalized view of the health of AWS services, and alerts when your resources are impacted. Also includes the Health API for integration with your existing management systems.
AWS Support API - Programmatic access to AWS Support Center features to create, manage, and close your support cases, and operationally manage your Trusted Advisor check requests and status.
Proactive Technical Account Management - A Technical Account Manager (TAM) is your designated technical point of contact who provides advocacy and guidance to help plan and build solutions using best practices, coordinate access to subject matter experts and product teams, and proactively keep your AWS environment operationally healthy.
Architecture Support – Contextual guidance on how services fit together to meet your specific use-case, workload, or application.
Third-Party Software Support - Guidance, configuration, and troubleshooting of AWS interoperability with many common operating systems, platforms, and application stack components.
Proactive Support Programs – Included access to Well-Architected Reviews, Operations Reviews, and Infrastructure Event Management.
Support Concierge - the Concierge Team are AWS billing and account experts that specialize in working with enterprise accounts. They will quickly and efficiently assist you with your billing and account inquiries, and work with you to implement billing and account best practices so that you can focus on what matters: running your business.
Enhanced Technical Support – 24x7 access to Cloud Support Engineers via phone, chat, and email. You can have an unlimited number of contacts that can open an unlimited amount of cases. Response times are as follows:
General Guidance - < 24 hours
System Impaired - < 12 hours
Production System Impaired - < 4 hours
Production System Down - < 1 hour
Business Critical System Down - < 15 minutes
Hence, the correct answers are Proactive Technical Account Management and Access to online self-paced labs.
All other options are incorrect as per the information shared above.
The option that says: Limited access to the 7 Core Trusted Advisor checks is incorrect because the Enterprise support plan has access to the full set of Trusted Advisor checks.
The option that says: 15-minute response time support if your production system goes down is incorrect because the Enterprise support plan actually provides a 1-hour response time for this kind of incident.
The option that says: 5-minute response time support if your business-critical system goes down is incorrect because the Enterprise support plan actually provides a 15-minute response time for this type of outage.
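The Enterprise response-time tiers listed above can be captured in a small lookup table, which makes the distinction between "production system down" and "business-critical system down" easy to self-test. This is a study aid with an illustrative structure, not anything AWS provides:

```python
# Enterprise support plan first-response targets, in minutes, transcribed
# from the tiers above. Illustrative helper, not an AWS API.
ENTERPRISE_RESPONSE_MINUTES = {
    "general guidance": 24 * 60,
    "system impaired": 12 * 60,
    "production system impaired": 4 * 60,
    "production system down": 60,
    "business critical system down": 15,
}

def response_target(severity):
    """Look up the maximum first-response time (in minutes) for a case severity."""
    return ENTERPRISE_RESPONSE_MINUTES[severity.lower()]

print(response_target("Business Critical System Down"))  # → 15
print(response_target("Production System Down"))         # → 60
```

The lookup makes the wrong options obvious: a down production system gets a 1-hour (not 15-minute) target, and a down business-critical system gets 15 minutes (not 5).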
What infrastructure does Amazon CloudFront utilize to speed up content delivery to customers?
A.Direct Connect
B.Edge Locations
C.Fiber optic cables
D.Availability Zones
B.Edge Locations
Explanation:
Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency, high transfer speeds, all within a developer-friendly environment. CloudFront is integrated with AWS – both physical locations that are directly connected to the AWS global infrastructure, as well as other AWS services. CloudFront works seamlessly with services including AWS Shield for DDoS mitigation, Amazon S3, Elastic Load Balancing or Amazon EC2 as origins for your applications, and Lambda@Edge to run custom code closer to customers’ users and to customize the user experience. Lastly, if you use AWS origins such as Amazon S3, Amazon EC2 or Elastic Load Balancing, you don’t pay for any data transferred between these services and CloudFront.
You can get started with the Content Delivery Network in minutes, using the same AWS tools that you’re already familiar with: APIs, AWS Management Console, AWS CloudFormation, CLIs, and SDKs. Amazon’s CDN offers a simple, pay-as-you-go pricing model with no upfront fees or required long-term contracts, and support for the CDN is included in your existing AWS Support subscription.
Amazon CloudFront uses a global network of edge locations, deployed in cities around the world rather than tied to specific Regions, to speed up content delivery to customers. Edge locations cache copies of your content on their local storage, closer to your viewers.
Hence, the correct answer is Edge Locations.
AWS Direct Connect is a cloud service solution that makes it easy to establish a dedicated network connection from your premises to AWS. Using AWS Direct Connect, you can establish private connectivity between AWS and your datacenter, office, or colocation environment. Since this is a private dedicated line, it would be impractical for CloudFront to use this.
Availability Zones are the locations where different data centers are located. These are not the main infrastructure CloudFront uses to speed up content delivery.
Although AWS does use fiber optic cables for its network lines, this is not the infrastructure the scenario is asking about.
A customer has a popular website which has millions of viewers from all over the world and read-heavy database workloads. Which of the following is the best option to use to increase the read throughput on their database?
A.Enable Multi-AZ deployments
B.Enable Amazon RDS Read Replicas
C.Use SQS to queue up the requests
D.Enable Amazon RDS Standby Replicas
B.Enable Amazon RDS Read Replicas
Explanation:
Amazon RDS Read Replicas provide enhanced performance and durability for database (DB) instances. This feature makes it easy to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads. You can create one or more replicas of a given source DB Instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput. Read replicas can also be promoted when needed to become standalone DB instances. Read replicas are available in Amazon RDS for MySQL, MariaDB, Oracle and PostgreSQL as well as Amazon Aurora.
Hence, the correct answer is: Enable Amazon RDS Read Replicas.
The option that says: Enable Multi-AZ deployments is incorrect because the Multi-AZ deployments feature is mainly used to achieve high availability and failover support for your database.
The option that says: Enable Amazon RDS Standby Replicas is incorrect because a Standby replica is used in Multi-AZ deployments and hence, it is not a solution to reduce read-heavy database workloads.
The option that says: Use SQS to queue up the requests is incorrect because although an SQS queue can effectively manage the requests, it won’t be able to entirely improve the read-throughput of the database by itself.
Which compliance requirement has AWS achieved that allows handling of medical information?
A.SOC 2
B.PCI DSS
C.HIPAA
D.SOC 1
C.HIPAA
Explanation:
HIPAA (Health Insurance Portability and Accountability Act of 1996) is United States legislation that provides data privacy and security provisions for safeguarding medical information. AWS enables covered entities and their business associates subject to the HIPAA to use the secure AWS environment to process, maintain, and store protected health information.
Hence, HIPAA is the correct choice for this scenario.
Payment Card Industry Data Security Standard (PCI DSS) is incorrect because this is a set of security standards designed to ensure that ALL companies that accept, process, store, or transmit credit card information maintain a secure environment. This compliance does not concern medical information.
SOC 1 Report (System and Organization Controls Report) is incorrect because this is a report on Controls at a Service Organization which are relevant to user entities’ internal control over financial reporting. This compliance does not concern medical information.
SOC 2 is incorrect because this is focused more on making sure that systems are set up so they assure security, availability, processing integrity, confidentiality, and privacy of customer data. This compliance does not concern medical information.
You have an Amazon Linux EC2 instance running for an hour and thirty minutes. How will AWS bill you in terms of usage?
A.You will be billed for an hour and thirty minutes according to the per-second billing rule
B.You will only be billed for an hour according to the hourly billing rule
C.You will be billed for an hour and twenty-nine minutes according to the per-second billing rule
D.You will be billed for one hour and thirty minutes according to the hourly billing rule
A.You will be billed for an hour and thirty minutes according to the per-second billing rule
Explanation:
Amazon Linux instances are now billed in per-second durations. With per-second billing, you pay for only what you use. It takes the cost of unused minutes and seconds in an hour off of the bill, so you can focus on improving your applications instead of maximizing usage to the hour. Workloads that run for irregular periods of time, such as dev/test, data processing, analytics, batch processing, and gaming applications, especially stand to benefit.
EC2 usage is billed in one-second increments, with a minimum of 60 seconds. Similarly, provisioned storage for EBS volumes is billed in per-second increments, with a 60-second minimum. It is in your best interest to understand the billing structure of your commonly used services to avoid budgeting surprises.
Hence, the correct answer is: You will be billed for an hour and thirty minutes according to the per-second billing rule.
All other options are incorrect because these are false statements about EC2 Usage pricing.
Note that not all instances are billed per second: Windows instances and some commercial Linux distributions from the AWS Marketplace are billed per hour. In that case, even a fraction of an hour of usage is charged as a full hour, so an hour and thirty minutes of usage would be billed as two full hours.
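The per-second rule with its 60-second minimum can be verified with a small calculation. The $0.10/hour rate below is a hypothetical example, not an actual AWS price:

```python
# Per-second EC2 billing with a 60-second minimum, as described above.
# The $0.10/hour rate is a made-up example, not a real AWS price.
def ec2_linux_cost(seconds, hourly_rate=0.10):
    """Cost of an Amazon Linux instance billed per second (60-second minimum)."""
    billable = max(seconds, 60)           # minimum charge is 60 seconds
    return billable * hourly_rate / 3600  # convert the hourly rate to per-second

# An hour and thirty minutes (5,400 seconds) is billed for exactly 90 minutes:
print(round(ec2_linux_cost(90 * 60), 4))  # → 0.15
# A 10-second run is still billed for the full 60-second minimum:
print(ec2_linux_cost(10) == ec2_linux_cost(60))  # → True
```

Under this model, the 90-minute instance in the question costs exactly 1.5 times the hourly rate, which is why the per-second answer is correct.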
A company is planning to use AWS Cloud to augment the resources of their on-premises data center to better serve their customers around the world. How does a company benefit from using AWS?
A.Benefit from massive discounts from the Amazon.com shopping website
B.Benefit from massive economies of scale
C.Replace high variable costs with upfront capital infrastructure expenses
D.Replace low variable costs with upfront capital infrastructure expenses
B.Benefit from massive economies of scale
Explanation:
In 2006, Amazon Web Services (AWS) began offering IT infrastructure services to businesses as web services—now commonly known as cloud computing. One of the key benefits of cloud computing is the opportunity to replace upfront capital infrastructure expenses with low variable costs that scale with your business. With the cloud, businesses no longer need to plan for and procure servers and other IT infrastructure weeks or months in advance. Instead, they can instantly spin up hundreds or thousands of servers in minutes and deliver results faster.
Whether you are using it to run applications that share photos to millions of mobile users or to support business critical operations, a cloud services platform provides rapid access to flexible and low cost IT resources. With cloud computing, you don’t need to make large upfront investments in hardware and spend a lot of time on the heavy lifting of managing that hardware. Instead, you can provision exactly the right type and size of computing resources you need to power your newest idea or operate your IT department. You can access as many resources as you need, almost instantly, and only pay for what you use.
There are six advantages of using Cloud Computing:
- Trade capital expense for variable expense
– Instead of having to invest heavily in data centers and servers before you know how you’re going to use them, you can pay only when you consume computing resources, and pay only for how much you consume.
- Benefit from massive economies of scale
– By using cloud computing, you can achieve a lower variable cost than you can get on your own. Because usage from hundreds of thousands of customers is aggregated in the cloud, providers such as AWS can achieve higher economies of scale, which translates into lower pay-as-you-go prices.
- Stop guessing capacity
– Eliminate guessing on your infrastructure capacity needs. When you make a capacity decision prior to deploying an application, you often end up either sitting on expensive idle resources or dealing with limited capacity. With cloud computing, these problems go away. You can access as much or as little capacity as you need, and scale up and down as required with only a few minutes’ notice.
- Increase speed and agility
– In a cloud computing environment, new IT resources are only a click away, which means that you reduce the time to make those resources available to your developers from weeks to just minutes. This results in a dramatic increase in agility for the organization, since the cost and time it takes to experiment and develop is significantly lower.
- Stop spending money running and maintaining data centers
– Focus on projects that differentiate your business, not the infrastructure. Cloud computing lets you focus on your own customers, rather than on the heavy lifting of racking, stacking, and powering servers.
- Go global in minutes
– Easily deploy your application in multiple regions around the world with just a few clicks. This means you can provide lower latency and a better experience for your customers at minimal cost.
Hence, the correct answer is Benefit from massive economies of scale.
The option that says: Benefit from massive discounts from the Amazon.com shopping website is incorrect because the Amazon.com platform is different from its Amazon Web Services cloud computing division. This statement is actually not related to Cloud Computing.
The option that says: Replace low variable costs with upfront capital infrastructure expenses is incorrect as it should be the other way around. Using AWS allows you to replace upfront capital infrastructure expenses with low variable costs that scale with your business.
The option that says: Replace high variable costs with upfront capital infrastructure expenses is incorrect because you are actually replacing upfront capital infrastructure expenses and not high variable costs. Take note that by using AWS, you can actually have low (not high) variable costs.
Which of the following is one example of a security best practice in AWS?
A.Enable MFA only on the root account
B.Grant Most Privilege
C.Create an IAM user with admin privileges instead of using root
D.Always manually define permissions to each and every individual IAM user
C.Create an IAM user with admin privileges instead of using root
Explanation:
You use an access key (an access key ID and secret access key) to make programmatic requests to AWS. However, do not use your AWS account root user access key. The access key for your AWS account root user gives full access to all your resources for all AWS services, including your billing information. You cannot reduce the permissions associated with your AWS account root user access key. Therefore, protect your root user access key like you would your credit card numbers or any other sensitive secret.
As indicated in the Security Best Practices whitepaper, you should not use your root account to administer your account. You should instead create an IAM user with administration privileges that will be used as the administrator of your AWS account.
Hence, the correct answer is Create an IAM user with admin privileges instead of using root.
The remaining choices are not part of the security best practices.
The option that says: Enable MFA only on the root account is incorrect because MFA should be enabled as much as possible for every account to prevent others from maliciously logging in to your account.
The option that says: Grant Most Privilege is incorrect because when you create IAM policies, you should follow the standard security advice of granting least privilege, or granting only the permissions required to perform a task.
The option that says: Always manually define permissions to each and every individual IAM user is incorrect because you should use IAM Groups to make it easier to manage the permissions for multiple users.
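The group-based approach above can be sketched in plain Python. This is an illustration of the idea only, not an AWS API call; the group, user, and policy names are made up:

```python
# Sketch: why groups beat per-user permissions. The group is the single
# place where access is managed (all names here are hypothetical).
group_policies = {"Developers": ["AmazonS3ReadOnlyAccess", "AmazonEC2ReadOnlyAccess"]}
user_groups = {"alice": ["Developers"], "bob": ["Developers"]}

def effective_policies(user):
    """Union of the managed policies granted via the user's groups."""
    return sorted({p for g in user_groups[user] for p in group_policies[g]})

# Changing the group once updates every member's effective permissions.
group_policies["Developers"].append("AmazonDynamoDBReadOnlyAccess")
print(effective_policies("alice"))
```

With per-user permissions, the same change would have to be repeated for every user, which is exactly the maintenance burden groups avoid.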
Which of the following AWS services are not considered to be region-specific services? (Select TWO.)
A.Amazon VPC
B.AWS WAF
C.AWS Lambda
D.Amazon Route 53
E.AWS CloudTrail
B.AWS WAF
D.Amazon Route 53
Explanation:
An AWS resource can be a Global, Regional, or Zonal service. A Global service means that it covers all of the AWS Regions across the globe, while a Regional service means that a resource is only applicable to one specific Region at a time. A Regional service may or may not have the ability to replicate the same resource to another Region. Lastly, a Zonal service can only exist in one Availability Zone.
You don’t need to memorize the scope of all of the AWS services as long as you know the pattern. There are actually only a handful of services that are considered global, such as IAM, STS, Route 53, CloudFront, and WAF. For zonal services, the examples are EC2 instances and EBS volumes, which are tied to the Availability Zone where they were launched. Take note that although EBS volumes are considered a zonal service, EBS snapshots are considered regional since they are not tied to a specific Availability Zone. The rest of the services are regional in scope.
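The pattern above can be written down as a small lookup. This is a memorization aid, not an exhaustive or official classification:

```python
# Rough scope map following the pattern above: a handful of global
# services, a few zonal resources, and everything else regional.
GLOBAL = {"IAM", "STS", "Route 53", "CloudFront", "WAF"}
ZONAL = {"EC2 instance", "EBS volume"}

def scope(service):
    if service in GLOBAL:
        return "global"
    if service in ZONAL:
        return "zonal"
    return "regional"  # the default per the pattern, e.g. Lambda, VPC, CloudTrail

print(scope("WAF"), scope("Lambda"), scope("EBS volume"))
```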
Amazon Route 53 and AWS WAF are both global services, so they are not dependent on the Region in which they were launched. This can be verified by signing in to the AWS Console and viewing the area where the Region selector is typically located.
Hence, the correct answers are Amazon Route 53 and AWS WAF.
AWS Lambda, AWS CloudTrail, and Amazon VPC are all incorrect because these are region-specific services. You can only find your resources in the regions in which you created them. This means that the location you select matters when creating these resources.
You are in need of a database that is capable of self-healing and has a high throughput. Which of the following services fits these criteria?
A.Amazon Aurora
B.Amazon Redshift
C.Amazon DynamoDB
D.Amazon RDS
A.Amazon Aurora
Explanation:
Amazon Aurora is a relational database engine that combines the speed and reliability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. It is designed to transparently handle the loss of up to two copies of data without affecting database write availability and up to three copies without affecting read availability. Amazon Aurora storage is also self-healing. Data blocks and disks are continuously scanned for errors and repaired automatically.
Amazon Aurora is fully managed by Amazon Relational Database Service (RDS), which automates time-consuming administration tasks like hardware provisioning, database setup, patching, and backups.
It also features a distributed, fault-tolerant, self-healing storage system that auto-scales up to 64TB per database instance. It delivers high performance and availability with up to 15 low-latency read replicas, point-in-time recovery, continuous backup to Amazon S3, and replication across three Availability Zones (AZs).
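The fault-tolerance numbers above follow from Aurora keeping six copies of each data block across three AZs, with a 4-of-6 quorum for writes and a 3-of-6 quorum for reads. A quick illustration of the arithmetic:

```python
# Illustration of Aurora's storage fault tolerance: six copies per data
# block across three AZs; writes need a 4/6 quorum, reads a 3/6 quorum.
TOTAL_COPIES, WRITE_QUORUM, READ_QUORUM = 6, 4, 3

def availability(copies_lost):
    healthy = TOTAL_COPIES - copies_lost
    return {"writes": healthy >= WRITE_QUORUM, "reads": healthy >= READ_QUORUM}

# Losing two copies keeps writes available; losing three still allows reads.
print(availability(2), availability(3))
```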
Hence, the correct answer is Amazon Aurora.
Amazon RDS is incorrect because although it is similar to Amazon Aurora in that they are both SQL databases, RDS does not have self-healing capabilities.
Amazon DynamoDB and Amazon Redshift are both incorrect since these services are not self-healing databases. Amazon Redshift is considered more as a data warehouse rather than a database.
Which AWS service will allow you to serve your dynamic web content to users globally?
A.AWS Elastic Load Balancer
B.Amazon S3
C.Amazon CloudFront
D.Amazon Route 53
C.Amazon CloudFront
Explanation:
Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds, all within a developer-friendly environment. CloudFront is integrated with AWS – both physical locations that are directly connected to the AWS global infrastructure, as well as other AWS services. CloudFront works seamlessly with services including AWS Shield for DDoS mitigation, Amazon S3, Elastic Load Balancing or Amazon EC2 as origins for your applications, and Lambda@Edge to run custom code closer to customers’ users and to customize the user experience. Lastly, if you use AWS origins such as Amazon S3, Amazon EC2 or Elastic Load Balancing, you don’t pay for any data transferred between these services and CloudFront.
You can get started with the Content Delivery Network in minutes, using the same AWS tools that you’re already familiar with: APIs, AWS Management Console, AWS CloudFormation, CLIs, and SDKs. Amazon’s CDN offers a simple, pay-as-you-go pricing model with no upfront fees or required long-term contracts, and support for the CDN is included in your existing AWS Support subscription.
Hence, the correct answer is Amazon CloudFront.
Amazon S3 is incorrect because this can only directly serve static objects, not dynamic content.
Amazon Route 53 is incorrect because this is just a DNS service offered by AWS and hence, it is not a suitable service to use for this scenario.
AWS Elastic Load Balancer is incorrect because this just automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, IP addresses, and Lambda functions. It cannot serve content to users globally without a source behind it.
What cloud computing model deals with services such as EC2 instances?
A.PaaS
B.IaaS
C.SaaS
D.DBaaS
B.IaaS
Explanation:
There are three main models for cloud computing. Each model represents a different part of the cloud computing stack:
- Infrastructure as a Service (IaaS)
- Platform as a Service (PaaS)
- Software as a Service (SaaS)
Infrastructure as a Service, sometimes abbreviated as IaaS, contains the basic building blocks for cloud IT and typically provides access to networking features, computers (virtual or on dedicated hardware), and data storage space. Amazon EC2 is considered IaaS because you have total control over what is done within the instances. You are borrowing the server infrastructure of AWS to fulfill your business needs, and you are charged at a rate for this service.
PaaS, or platform as a service, is incorrect. PaaS services include AWS Elastic Beanstalk, which provides you a platform to launch your applications, while the service prepares all the necessary infrastructure to run your application.
SaaS, or software as a service, is incorrect. Software as a Service provides you with a completed product that is run and managed by the service provider. With a SaaS offering, you do not have to think about how the service is maintained or how the underlying infrastructure is managed; you only need to think about how you will use that particular piece of software.
DBaaS, or database as a service, is incorrect. Amazon EC2 is not a managed database service.
A MariaDB RDS database is known to have high memory consumption during peak hours which deteriorates the overall performance of your application. What cost-effective change can you introduce to resolve this issue if the database is handling write-intensive operations?
A.Switch to DynamoDB to enjoy millisecond latency
B.Add read replicas to lessen the burden of read operations
C.Distribute the workload by running multiple instances
D.Scale the instance vertically to a higher memory capacity
D.Scale the instance vertically to a higher memory capacity
Explanation:
Systems that are expected to grow over time need to be built on top of a scalable architecture. Such an architecture can support growth in users, traffic, or data size with no drop in performance. It should provide that scale in a linear manner, where adding extra resources results in at least a proportional increase in the ability to serve additional load. Growth should introduce economies of scale, and cost should follow the same dimension that generates business value out of that system. While cloud computing provides virtually unlimited on-demand capacity, your design needs to be able to take advantage of those resources seamlessly.
There are generally two ways to scale an IT architecture: vertically and horizontally.
Vertical Scaling
- Scaling vertically takes place through an increase in the specifications of an individual resource, such as upgrading a server with a larger hard drive or a faster CPU. With Amazon EC2, you can stop an instance and resize it to an instance type that has more RAM, CPU, I/O, or networking capabilities. This way of scaling can eventually reach a limit, and it is not always a cost-efficient or highly available approach. However, it is very easy to implement and can be sufficient for many use cases especially in the short term.
Horizontal Scaling
- Scaling horizontally takes place through an increase in the number of resources, such as adding more hard drives to a storage array or adding more servers to support an application. This is a great way to build internet-scale applications that leverage the elasticity of cloud computing. Take note that not all architectures are designed to distribute their workload to multiple resources.
In this scenario, it is better to scale vertically than horizontally since this is a database. Scaling vertically will give the database more capacity to perform its job and avoid high memory consumption.
Hence, the correct answer is Scale the instance vertically to a higher memory capacity.
The option that says: Distribute the workload by running multiple instances is incorrect because adding more instances is not the best choice for databases unless you have a proper orchestrator. It is better to modify the database to a higher instance type instead.
The option that says: Add read replicas to lessen the burden of read operations is incorrect because although you can reduce the load on your source DB instance by routing read queries from your applications to Read Replicas, the scenario mentioned that the database is handling write-intensive operations. This means that adding read replicas will do little for this scenario’s issue, and it might be more costly as well.
The option that says: Switch to DynamoDB to enjoy millisecond latency is incorrect because although switching to DynamoDB is sometimes a valid choice of action, it is still too much work to refactor your SQL database to a NoSQL database. Migration can affect your costs too.
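The vertical-scaling change recommended above is typically performed with the RDS `ModifyDBInstance` API. A minimal parameter sketch follows; the instance identifier and target class are hypothetical examples, and in a real session these parameters would be passed to `boto3.client("rds").modify_db_instance(**params)`:

```python
# Parameter sketch for vertically scaling an RDS instance to a larger,
# memory-optimized class. Identifier and class name are assumptions.
params = {
    "DBInstanceIdentifier": "mariadb-prod",  # hypothetical instance name
    "DBInstanceClass": "db.r5.xlarge",       # memory-optimized target class
    "ApplyImmediately": False,  # defer to the next maintenance window
}
# Real call: boto3.client("rds").modify_db_instance(**params)
print(params["DBInstanceClass"])
```

Setting `ApplyImmediately` to `False` avoids an unplanned outage during peak hours, since the class change requires a restart of the instance.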
What tool can assist you in estimating your monthly AWS bill?
A.AWS TCO Calculator
B.AWS Pricing Calculator
C.AWS Free Monthly Calculator
D.AWS IAM
B.AWS Pricing Calculator
Explanation:
To estimate a bill, use the AWS Pricing Calculator. You can enter your planned resources by service, and the Pricing Calculator provides an estimated cost per month. The AWS Pricing Calculator is an easy-to-use online tool that enables you to estimate the monthly cost of AWS services for your use case based on your expected usage. It is continuously updated with the latest pricing for all AWS services in all Regions.
The AWS Pricing Calculator helps customers and prospects estimate their monthly AWS bill more efficiently. Using this tool, they can add, modify and remove services from their ‘bill’ and it will recalculate their estimated monthly charges automatically. The calculator also shows common customer samples and their usage, such as Disaster Recovery and Backup or Web Application.
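At its core, the estimate the calculator produces is just expected usage multiplied by per-unit rates, summed across services. A toy version in Python (the rates below are illustrative assumptions, not current AWS pricing):

```python
# Toy pricing estimate: sum per-service usage times a unit rate.
# All rates are made-up placeholders, not real AWS prices.
usage = {
    "ec2_hours": 730,        # e.g. one instance running the whole month
    "s3_gb_month": 50,
    "data_transfer_gb": 100,
}
rates = {"ec2_hours": 0.0416, "s3_gb_month": 0.023, "data_transfer_gb": 0.09}

monthly_estimate = sum(usage[k] * rates[k] for k in usage)
print(f"${monthly_estimate:.2f}")  # prints $40.52 with these sample numbers
```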
Hence, the correct answer is AWS Pricing Calculator.
AWS TCO Calculator is incorrect because this is just used to estimate your cost savings when using AWS rather than traditional data centers. This is not the correct calculator to use for estimating your monthly AWS costs.
AWS Free Monthly Calculator is incorrect since no such service exists in AWS.
AWS IAM is incorrect because this is primarily an account management service, and is not used for any billing purposes.
In the event of a disaster or an outage, AWS recommends that you spread your servers and databases across at least two different _____?
A.AWS Accounts
B.Availability Zones
C.Regions
D.VPCs
B.Availability Zones
Explanation:
To create a highly available system, AWS recommends that you launch your applications in more than one Availability Zone. This is to ensure that there is service continuity in case an AZ is struck by disaster or experiences an outage. Each AWS Region provides full redundancy and connectivity to the network. AWS Regions consist of multiple Availability Zones (AZs), each of which is a fully isolated partition of the AWS infrastructure that consists of discrete data centers, each with redundant power, networking, and connectivity, and each housed in separate facilities.
Hence, the correct answer is Availability Zones.
Creating a highly available infrastructure boosts the reliability of your services. This is heavily emphasized by AWS in their Well-Architected Framework whitepaper. There are multiple services and features provided by AWS to help you construct a highly available environment for your business needs.
At a minimum, your servers and databases should be running in more than one AZ. You can also launch them in a separate region if you are worried about regional outages.
Creating another VPC is not exactly the solution you’re looking for in this scenario. It helps if your additional VPC has subnets in other AZs or is running in a different region.
Creating a new account is unnecessary.
What is the lowest support plan that allows an unlimited number of technical support cases to be opened?
A.Basic
B.Business
C.Enterprise
D.Developer
D.Developer
Explanation:
AWS Support offers a range of plans that provide access to tools and expertise that support the success and operational health of your AWS solutions. All support plans provide 24x7 access to customer service, AWS documentation, whitepapers, and support forums. For technical support and more resources to plan, deploy, and improve your AWS environment, you can select a support plan that best aligns with your AWS use case.
AWS Support offers four support plans: Basic, Developer, Business, and Enterprise. The Basic plan is free of charge and offers support for account and billing questions and service limit increases. The other plans offer an unlimited number of technical support cases with pay-by-the-month pricing and no long-term contracts, providing the level of support that meets your needs.
All AWS customers automatically have around-the-clock access to these features of the Basic support plan:
- Customer Service: one-on-one responses to account and billing questions
- Support forums
- Service health checks
- Documentation, whitepapers, and best-practice guides
The cheapest support plan that offers an unlimited number of technical support cases is the Developer support plan. Additionally, it provides you access to the 7 core Trusted Advisor checks and the Personal Health Dashboard, where you get a personalized view of the health of AWS services and alerts when your resources are impacted.
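The plan comparison above can be expressed as a small lookup: order the plans by price and pick the cheapest one that includes technical support cases. (The ordering reflects relative price tiers, not dollar amounts.)

```python
# Support plans from cheapest to most expensive, and whether each
# includes technical support cases.
plans = ["Basic", "Developer", "Business", "Enterprise"]
technical_support = {"Basic": False, "Developer": True,
                     "Business": True, "Enterprise": True}

cheapest_with_support = next(p for p in plans if technical_support[p])
print(cheapest_with_support)  # prints Developer
```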
Hence, the correct answer is Developer.
Basic is incorrect because this support plan does not offer Technical Support cases.
Both Business and Enterprise are incorrect because these support plans are more expensive than the Developer plan.
A number of servers in your on-premises data center have been collecting dust over the past few years. What is the benefit of moving to the Cloud in this case?
A.Physical servers are managed and maintained by AWS for you
B.The ability to provision resources only when you need them
C.AWS has automated services for you
D.The ability to pay for only what you use
B.The ability to provision resources only when you need them
Explanation:
In 2006, Amazon Web Services (AWS) began offering IT infrastructure services to businesses as web services—now commonly known as cloud computing. One of the key benefits of cloud computing is the opportunity to replace upfront capital infrastructure expenses with low variable costs that scale with your business. With the cloud, businesses no longer need to plan for and procure servers and other IT infrastructure weeks or months in advance. Instead, they can instantly spin up hundreds or thousands of servers in minutes and deliver results faster.
Whether you are using it to run applications that share photos to millions of mobile users or to support business-critical operations, a cloud services platform provides rapid access to flexible and low cost IT resources. With cloud computing, you don’t need to make large upfront investments in hardware and spend a lot of time on the heavy lifting of managing that hardware. Instead, you can provision exactly the right type and size of computing resources you need to power your newest idea or operate your IT department. You can access as many resources as you need, almost instantly, and only pay for what you use.
One of the best practices in the Cloud is to use disposable resources instead of fixed servers. In the Cloud, you can easily provision resources when you need them and take them down when you don’t, and this is the advantage you gain over keeping servers on-premises.
Physical servers are indeed managed and maintained by AWS for you. However, this is not the main compelling reason to shift to the cloud when your unused servers are sitting in your data center collecting dust.
The ability to pay for only what you use is an incorrect option since the scenario is not discussing cost issues.
The option that says: AWS has automated services for you is incorrect since this option is not related to the concerns of the given scenario.
What are the benefits of using Amazon DynamoDB as your database? (Select TWO.)
A.DynamoDB offers 11 9’s in terms of durability, according to the SLA
B.You can perform very complex queries and joins without deterioration in performance
C.DynamoDB is self-healing, which means your data is scanned for errors and is repaired continuously
D.Database size scales automatically so you won’t have to worry about capacity
E.You can store different kinds of unstructured data that would normally not be suitable for relational databases
D.Database size scales automatically so you won’t have to worry about capacity
E.You can store different kinds of unstructured data that would normally not be suitable for relational databases
Explanation:
Amazon DynamoDB is a fast, scalable NoSQL managed database service. You can build applications with virtually unlimited throughput and storage. You can use DynamoDB to store structured documents, JSON data, key-value pairs, and many other items that you would not be able to store in a relational database.
DynamoDB supports some of the world’s largest scale applications by providing consistent, single-digit millisecond response times at any scale. You can build applications with virtually unlimited throughput and storage. DynamoDB global tables replicate your data across multiple AWS Regions to give you fast, local access to data for your globally distributed applications. For use cases that require even faster access with microsecond latency, DynamoDB Accelerator (DAX) provides a fully managed in-memory cache.
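The schemaless flexibility described above is easy to see in data form: items in one DynamoDB table only need to share the key attributes, and everything else can vary per item. The table layout and attribute names below are made up for illustration:

```python
# Three items that could live in the same DynamoDB table: beyond the
# partition key "pk", each has its own shape (names are hypothetical).
items = [
    {"pk": "user#1", "name": "Ana", "tags": ["admin", "beta"]},
    {"pk": "doc#7", "body": {"title": "Notes", "words": 1200}},  # nested JSON
    {"pk": "kv#9", "value": "42"},  # plain key-value pair
]
# Only the partition key is common to every item.
shared = set(items[0]) & set(items[1]) & set(items[2])
print(shared)
```

A relational table, by contrast, would force all three rows into one fixed column set.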
Hence, the correct answers are:
- Database size scales automatically so you won’t have to worry about capacity
- You can store different kinds of unstructured data that would normally not be suitable for relational databases
The option that says: DynamoDB is self-healing, which means your data is scanned for errors and is repaired continuously is incorrect because DynamoDB is not a self-healing database and it doesn’t automatically scan or repair your data. An example of a self-healing database in AWS is Amazon Aurora in which the concept of “self-healing” relates to the underlying infrastructure.
The option that says: DynamoDB offers 11 9’s in terms of durability, according to the SLA is incorrect because it does not offer 11 9’s in durability, unlike Amazon S3.
The option that says: You can perform very complex queries and joins without deterioration in performance is incorrect because usually, with NoSQL databases, you would not perform complex queries due to the unstructured formatting of your data. Complex queries are better performed in SQL databases due to predictable structuring.
You believe that your AWS root account may have been used without you knowing. What can you do to tighten the security on the root account?
A.Configure MFA for your root account
B.Disable root account credentials and use an IAM user with admin privileges
C.Change the root account password often
D.Enable CloudTrail to monitor for suspicious logins
A.Configure MFA for your root account
Explanation:
Multi-Factor Authentication (MFA) is a simple best practice that adds an extra layer of protection on top of your user name and password. It is actually one of the first few steps that you should perform after logging in to your newly created AWS account. With MFA enabled, when a user signs in to an AWS website, they will be prompted for their user name and password, as well as for an authentication response from their AWS MFA device. Taken together, these multiple factors provide increased security for your AWS account settings and resources.
You can enable MFA for your AWS account and for individual IAM users you have created under your account. MFA can also be used to control access to AWS service APIs. After you’ve obtained a supported hardware or virtual MFA device, AWS does not charge any additional fees for using MFA.
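For IAM users, attaching a virtual MFA device is done with the IAM `EnableMFADevice` API. A parameter sketch follows; the user name, account ID, and codes are hypothetical, and the two consecutive codes would come from the authenticator app:

```python
# Parameter sketch for boto3.client("iam").enable_mfa_device(**params).
# All concrete values below are made-up examples.
params = {
    "UserName": "admin-user",
    "SerialNumber": "arn:aws:iam::123456789012:mfa/admin-user",
    "AuthenticationCode1": "123456",  # first code from the MFA app
    "AuthenticationCode2": "654321",  # the next consecutive code
}
# Real call: boto3.client("iam").enable_mfa_device(**params)
print(sorted(params))
```

Note that MFA for the root account itself is enabled through the AWS Management Console security credentials page rather than through an IAM user API call.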
Hence, the correct answer is: Configure MFA for your root account.
The option that says: Disable root account credentials and use an IAM user with admin privileges is incorrect because there is no actual way to disable root account credentials in AWS.
The option that says: Change the root account password often is incorrect because although changing the password often is a good practice for any kind of account that has a password, this is still not the best option for this scenario. It is better to add an additional layer of protection to your root account via MFA instead.
The option that says: Enable CloudTrail to monitor for suspicious logins is incorrect because enabling CloudTrail will just give you information on the login event that occurred, but does not provide any further protection beyond that.
What is the right arrangement of the AWS Global Infrastructure components according to their geographical coverage area size, in descending order?
A.Regions, Availability Zones, Edge Locations
B.Edge Locations, Availability Zones, Regions
C.Regions, Edge Locations, Availability Zones
D.Availability Zones, Edge Locations, Regions
A.Regions, Availability Zones, Edge Locations
Explanation:
The AWS Global Infrastructure delivers a cloud infrastructure companies can depend on—no matter their size, changing needs, or challenges. The AWS Global Infrastructure is designed and built to deliver the most flexible, reliable, scalable, and secure cloud computing environment with the highest quality global network performance available today. Every component of the AWS infrastructure is designed and built for redundancy and reliability, from regions to networking links to load balancers to routers and firmware.
AWS provides a more extensive global footprint than any other cloud provider, and it opens up new Regions faster than other providers. To support its global footprint and ensure customers are served across the world, AWS maintains multiple geographic regions, including Regions in North America, South America, Europe, Asia Pacific, and the Middle East.
Each AWS Region provides full redundancy and connectivity to the network. Unlike other cloud providers, who often define a region as a single data center, AWS Regions consist of multiple Availability Zones, each of which is a fully isolated partition of the AWS infrastructure that consists of discrete data centers, each with redundant power, networking, and connectivity, and each housed in separate facilities.
An Availability Zone gives customers the ability to operate production applications and databases that are more highly available, fault-tolerant, and scalable than would be possible from a single data center. All AZs are interconnected with high-bandwidth, low-latency networking over fully redundant, dedicated metro fiber. The network performance is sufficient to accomplish synchronous replication between AZs.
Hence, the correct answer is: Regions, Availability Zones, Edge Locations.
All other options are incorrect due to their wrong order.
What AWS service can monitor the compliance status of your AWS resources against a set of compliance guidelines?
A.Amazon CloudWatch
B.AWS IAM
C.AWS Artifact
D.AWS Config
D.AWS Config
Explanation:
AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. Config continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations.
With AWS Config, you are able to continuously monitor and record configuration changes of your AWS resources. Config also enables you to inventory your AWS resources, the configurations of your AWS resources, as well as software configurations within EC2 instances at any point in time. Once a change from a previous state is detected, an Amazon Simple Notification Service (SNS) notification can be delivered for you to review and take action.
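Evaluations like these are driven by Config rules, many of which AWS manages for you. A parameter sketch for deploying one such rule follows; the rule name is an assumption, while `S3_BUCKET_PUBLIC_READ_PROHIBITED` is an AWS-managed rule identifier. In a real session this structure would be passed to `boto3.client("config").put_config_rule(ConfigRule=rule)`:

```python
# Sketch of an AWS Config rule that flags publicly readable S3 buckets.
# "no-public-s3-read" is our hypothetical name for the rule.
rule = {
    "ConfigRuleName": "no-public-s3-read",
    "Source": {
        "Owner": "AWS",  # AWS-managed rule, not a custom Lambda
        "SourceIdentifier": "S3_BUCKET_PUBLIC_READ_PROHIBITED",
    },
}
# Real call: boto3.client("config").put_config_rule(ConfigRule=rule)
print(rule["Source"]["SourceIdentifier"])
```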
Hence, the correct answer is AWS Config.
Amazon CloudWatch is incorrect because this service is just a monitoring tool for your AWS resources. CloudWatch provides you with data and actionable insights to monitor your applications, respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health. It does not check for resource compliance unlike AWS Config.
AWS Artifact is incorrect because this simply acts as a central resource for compliance-related information. It provides on-demand access to AWS’ security and compliance reports and select online agreements. It does not, however, monitor the compliance status of your AWS resources and alert you when there are deviations from your set guidelines.
AWS IAM is incorrect since this is a service for user and access management in your AWS account, not for monitoring resource compliance.