AWS Cloud Practitioner Practice Exam Questions Flashcards

1
Q

Which AWS Service can be used to mitigate a Distributed Denial of Service (DDoS) attack?
a. Amazon CloudWatch
b. AWS Systems Manager
c. AWS Shield
d. AWS Key Management Service (AWS KMS)

A

c. AWS Shield

AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards applications running on AWS. AWS Shield provides always-on detection and automatic inline mitigations that minimize application downtime and latency, so there is no need to engage AWS Support to benefit from DDoS protection. There are two tiers of AWS Shield - Standard and Advanced.

All AWS customers benefit from the automatic protections of AWS Shield Standard, at no additional charge. AWS Shield Standard defends against most common, frequently occurring network and transport layer DDoS attacks that target your web site or applications. When you use AWS Shield Standard with Amazon CloudFront and Amazon Route 53, you receive comprehensive availability protection against all known infrastructure (Layer 3 and 4) attacks.

For higher levels of protection against attacks targeting your applications running on Amazon Elastic Compute Cloud (EC2), Elastic Load Balancing (ELB), Amazon CloudFront, AWS Global Accelerator and Amazon Route 53 resources, you can subscribe to AWS Shield Advanced. In addition to the network and transport layer protections that come with Standard, AWS Shield Advanced provides additional detection and mitigation against large and sophisticated DDoS attacks, near real-time visibility into attacks, and integration with AWS WAF, a web application firewall.

2
Q

Which of the following statements are true about the AWS Shared Responsibility Model? (Select two)

a. AWS maintains the configuration of its infrastructure devices and is responsible for configuring the guest operating systems, databases, and applications
b. Amazon Elastic Compute Cloud (Amazon EC2) is categorized as Infrastructure as a Service (IaaS) and hence AWS will perform all of the necessary security configuration and management tasks
c. AWS trains AWS employees, but a customer must train their own employees
d. For abstracted services, such as Amazon S3, AWS operates the infrastructure layer, the operating system, platforms, encryption options, and appropriate permissions for accessing the S3 resources
e. AWS is responsible for patching and fixing flaws within the infrastructure, but customers are responsible for patching their guest operating system and applications

A

c. AWS trains AWS employees, but a customer must train their own employees
e. AWS is responsible for patching and fixing flaws within the infrastructure, but customers are responsible for patching their guest operating system and applications

“Security of the Cloud” is the responsibility of AWS - AWS is responsible for protecting the infrastructure that runs all of the services offered in the AWS Cloud. This infrastructure is composed of the hardware, software, networking, and facilities that run AWS Cloud services. As part of Patch Management, a Shared Control responsibility of the AWS Shared Responsibility Model, AWS is responsible for patching and fixing flaws within the infrastructure, but customers are responsible for patching their guest operating system and applications.

“Security in the Cloud” is the responsibility of the customer. Customer responsibility will be determined by the AWS Cloud services that a customer selects. This determines the amount of configuration work the customer must perform as part of their security responsibilities.

As part of Awareness & Training, a Shared Control responsibility of the AWS Shared Responsibility Model, AWS trains AWS employees, but a customer must train their own employees.

AWS Shared Responsibility Model: https://aws.amazon.com/compliance/shared-responsibility-model/

Incorrect options:

AWS maintains the configuration of its infrastructure devices and is responsible for configuring the guest operating systems, databases, and applications - As part of Configuration Management, a Shared Control responsibility of the AWS Shared Responsibility Model, AWS maintains the configuration of its infrastructure devices, but a customer is responsible for configuring their own guest operating systems, databases, and applications.

Amazon Elastic Compute Cloud (Amazon EC2) is categorized as Infrastructure as a Service (IaaS) and hence AWS will perform all of the necessary security configuration and management tasks - A service such as Amazon Elastic Compute Cloud (Amazon EC2) is categorized as Infrastructure as a Service (IaaS) and, as such, requires the customer to perform all of the necessary security configuration and management tasks. Customers that deploy an Amazon EC2 instance are responsible for the management of the guest operating system (including updates and security patches), any application software or utilities installed by the customer on the instances, and the configuration of the AWS-provided firewall (called a security group) on each instance.

For abstracted services, such as Amazon S3, AWS operates the infrastructure layer, the operating system, platforms, encryption options, and appropriate permissions for accessing the S3 resources - For abstracted services, such as Amazon S3 and Amazon DynamoDB, AWS operates the infrastructure layer, the operating system, and platforms, and customers access the endpoints to store and retrieve data. Customers are responsible for managing their data (including encryption options), classifying their assets, and using IAM tools to apply the appropriate permissions.

3
Q

Which of the following AWS authentication mechanisms supports an AWS Multi-Factor Authentication (AWS MFA) device that you can plug into a USB port on your computer?

a. SMS text message-based Multi-Factor Authentication (AWS MFA)
b. Hardware Multi-Factor Authentication (AWS MFA) device
c. U2F security key
d. Virtual Multi-Factor Authentication (AWS MFA) device

A

c. U2F security key

Universal 2nd Factor (U2F) Security Key is a device that you can plug into a USB port on your computer. U2F is an open authentication standard hosted by the FIDO Alliance. When you enable a U2F security key, you sign in by entering your credentials and then tapping the device instead of manually entering a code.

How to enable the U2F Security Key for your own IAM user: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_enable_u2f.html

Incorrect options:

Virtual Multi-Factor Authentication (AWS MFA) device - This is a software app that runs on a phone or other device and emulates a physical device. The device generates a six-digit numeric code based upon a time-synchronized one-time password algorithm. The user must type a valid code from the device on a second webpage during sign-in. Each virtual MFA device assigned to a user must be unique.

Hardware Multi-Factor Authentication (AWS MFA) device - This is a hardware device that generates a six-digit numeric code based upon a time-synchronized one-time password algorithm. The user must type a valid code from the device on a second webpage during sign-in. Each MFA device assigned to a user must be unique. A user cannot type a code from another user’s device to be authenticated.

SMS text message-based Multi-Factor Authentication (AWS MFA) - This is a type of MFA in which the IAM user settings include the phone number of the user’s SMS-compatible mobile device. When the user signs in, AWS sends a six-digit numeric code by SMS text message to the user’s mobile device. The user is required to type that code on a second webpage during sign-in.

4
Q

A company would like to optimize Amazon Elastic Compute Cloud (Amazon EC2) costs. Which of the following actions can help with this task? (Select TWO)

a. Build its own servers
b. Vertically scale the EC2 instances
c. Set up Auto Scaling groups to align the number of instances with the demand
d. Purchase Amazon EC2 Reserved instances (RIs)
e. Opt for a higher AWS Support plan

A

c. Set up Auto Scaling groups to align the number of instances with the demand
d. Purchase Amazon EC2 Reserved instances (RIs)

An Auto Scaling group contains a collection of Amazon EC2 instances that are treated as a logical grouping for automatic scaling and management. You can adjust its size to meet demand, either manually or by using automatic scaling.

AWS Auto Scaling can help you optimize your utilization and cost efficiencies when consuming AWS services so you only pay for the resources you need.

How AWS Auto Scaling works: https://aws.amazon.com/autoscaling/
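
For illustration, a target tracking scaling policy is one way an Auto Scaling group keeps the number of instances aligned with demand. A minimal boto3 (Python) sketch; the group name "web-asg" and the 50% CPU target are hypothetical:

import boto3

autoscaling = boto3.client("autoscaling")

# Target tracking: the group adds or removes instances to hold average CPU near 50%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",  # hypothetical group name
    PolicyName="keep-cpu-at-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)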

Amazon EC2 Reserved Instances (RI) provide a significant discount (up to 72%) compared to On-Demand pricing and provide a capacity reservation when used in a specific Availability Zone (AZ).

EC2 Pricing Options Overview: https://aws.amazon.com/ec2/pricing/

Incorrect options:

Vertically scale the EC2 instances - Vertically scaling EC2 instances (increasing a single machine’s performance by adding CPUs, memory, and storage) has hard limits and is far more expensive than scaling horizontally (adding more machines to the system).

Opt for a higher AWS Support plan - A higher AWS Support plan does not help reduce EC2 costs.

Build its own servers - Building your own servers is more expensive than using EC2 instances in the cloud; you’re more likely to spend money than to save it.

5
Q

Which of the following is the correct statement regarding the AWS Storage services?

a. Amazon Simple Storage Service (Amazon S3) is file-based storage, Amazon Elastic Block Store (Amazon EBS) is block-based storage, and Amazon Elastic File System (Amazon EFS) is object-based storage
b. Amazon Simple Storage Service (Amazon S3) is block-based storage, Amazon Elastic Block Store (Amazon EBS) is object-based storage, and Amazon Elastic File System (Amazon EFS) is file-based storage
c. Amazon Simple Storage Service (Amazon S3) is object-based storage, Amazon Elastic Block Store (Amazon EBS) is file-based storage, and Amazon Elastic File System (Amazon EFS) is block-based storage
d. Amazon Simple Storage Service (Amazon S3) is object-based storage, Amazon Elastic Block Store (Amazon EBS) is block-based storage, and Amazon Elastic File System (Amazon EFS) is file-based storage

A

d. Amazon Simple Storage Service (Amazon S3) is object-based storage, Amazon Elastic Block Store (Amazon EBS) is block-based storage, and Amazon Elastic File System (Amazon EFS) is file-based storage

Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed, elastic NFS file system.

Amazon Elastic Block Store (Amazon EBS) is an easy-to-use, high-performance block storage service designed for use with Amazon Elastic Compute Cloud (Amazon EC2) for both throughput and transaction-intensive workloads at any scale.

Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance.

6
Q

Which of the following is the least-effort way to encrypt data for AWS services only in your AWS account using AWS Key Management Service (AWS KMS)?

a. Use AWS KMS APIs to encrypt data within your own application by using the AWS Encryption SDK
b. Use AWS managed master keys that are automatically created in your account for each service
c. Use AWS owned CMK in the service you wish to use encryption
d. Create your own customer managed keys (CMKs) in AWS KMS

A

b. Use AWS managed master keys that are automatically created in your account for each service

AWS KMS keys (KMS keys) are the primary resource in AWS KMS. You can use a KMS key to encrypt, decrypt, and re-encrypt data. It can also generate data keys that you can use outside of AWS KMS. AWS KMS is replacing the term customer master key (CMK) with AWS KMS key and KMS key.

AWS managed CMKs are CMKs in your account that are created, managed, and used on your behalf by an AWS service that is integrated with AWS KMS. Some AWS services support only an AWS managed CMK. Others use an AWS owned CMK or offer you a choice of CMKs. An AWS managed CMK can be used only within your own AWS account.

You can view the AWS managed CMKs in your account, view their key policies, and audit their use in AWS CloudTrail logs. However, you cannot manage these CMKs, rotate them, or change their key policies. And, you cannot use AWS managed CMKs in cryptographic operations directly; the service that creates them uses them on your behalf.

AWS managed CMKs appear on the AWS managed keys page of the AWS Management Console for AWS KMS. You can also identify most AWS managed CMKs by their aliases, which have the format aws/service-name, such as aws/redshift.

You do not pay a monthly fee for AWS managed CMKs. They can be subject to fees for use in excess of the free tier, but some AWS services cover these costs for you.
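
To see which AWS managed keys already exist in your account, you can list the aliases in the aws/ namespace. A minimal boto3 (Python) sketch, assuming nothing beyond valid credentials:

import boto3

kms = boto3.client("kms")

# AWS managed keys carry aliases of the form alias/aws/<service-name>, e.g. alias/aws/redshift.
for alias in kms.list_aliases()["Aliases"]:
    if alias["AliasName"].startswith("alias/aws/"):
        print(alias["AliasName"])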

Incorrect options:

Create your own customer managed keys (CMKs) in AWS KMS - The AWS KMS keys that you create are customer managed keys. Customer managed keys are KMS keys in your AWS account that you create, own, and manage. You have full control over these KMS keys, including establishing and maintaining their key policies, IAM policies, and grants, enabling and disabling them, rotating their cryptographic material, adding tags, creating aliases that refer to the KMS keys, and scheduling the KMS keys for deletion.

Customer managed keys (CMKs) incur a monthly fee and a fee for use in excess of the free tier. They are counted against the AWS KMS quotas for your account.

Use AWS KMS APIs to encrypt data within your own application by using the AWS Encryption SDK - AWS KMS APIs can also be accessed directly through the AWS KMS Command Line Interface or AWS SDK for programmatic access. AWS KMS APIs can also be used indirectly to encrypt data within your own applications by using the AWS Encryption SDK. This requires code changes and is not the easiest way to achieve encryption.

Use AWS owned CMK in the service you wish to use encryption - AWS owned CMKs are a collection of CMKs that an AWS service owns and manages for use in multiple AWS accounts. Although AWS owned CMKs are not in your AWS account, an AWS service can use its AWS owned CMKs to protect the resources in your account. AWS owned CMK can be used for multiple AWS accounts.

You do not need to create or manage the AWS owned CMKs. However, you cannot view, use, track, or audit them. You are not charged a monthly fee or usage fee for AWS owned CMKs and they do not count against the AWS KMS quotas for your account.

7
Q

A company wants to have control over creating and using its own keys for encryption on AWS services. Which of the following can be used for this use-case?
a. customer managed key (CMK)
b. AWS Secrets Manager
c. AWS owned key
d. AWS managed key

A

a. customer managed key (CMK)

An AWS KMS key is a logical representation of a cryptographic key. A KMS key contains metadata, such as the key ID, key spec, key usage, creation date, description, and key state. Most importantly, it contains a reference to the key material that is used when you perform cryptographic operations with the KMS key.

The KMS keys that you create are customer managed keys. Customer managed keys are KMS keys in your AWS account that you create, own, and manage. You have full control over these KMS keys, including establishing and maintaining their key policies, IAM policies, and grants, enabling and disabling them, rotating their cryptographic material, adding tags, creating aliases that refer to the KMS keys, and scheduling the KMS keys for deletion.
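
A minimal boto3 (Python) sketch of creating and using a customer managed key; the description and payload are hypothetical:

import boto3

kms = boto3.client("kms")

# Create a customer managed key that you own and control.
key_id = kms.create_key(Description="app-level encryption key")["KeyMetadata"]["KeyId"]

# Encrypt and decrypt a small payload with that key.
ciphertext = kms.encrypt(KeyId=key_id, Plaintext=b"my secret")["CiphertextBlob"]
plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]
assert plaintext == b"my secret"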

8
Q

Which of the following AWS services have data encryption automatically enabled? (Select two)

a. Amazon Elastic File System (Amazon EFS)
b. Amazon Redshift
c. Amazon Simple Storage Service (Amazon S3)
d. Amazon Elastic Block Store (Amazon EBS)
e. AWS Storage Gateway

A

c. Amazon Simple Storage Service (Amazon S3)
e. AWS Storage Gateway

All Amazon S3 buckets have encryption configured by default, and objects are automatically encrypted by using server-side encryption with Amazon S3 managed keys (SSE-S3). This encryption setting applies to all objects in your Amazon S3 buckets.

AWS Storage Gateway is a hybrid cloud storage service that gives you on-premises access to virtually unlimited cloud storage. All data transferred between the gateway and AWS storage is encrypted using SSL (for all three types of gateways - File, Volume and Tape Gateways).
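
You can verify a bucket's default encryption setting programmatically. A minimal boto3 (Python) sketch with a hypothetical bucket name:

import boto3

s3 = boto3.client("s3")

# New buckets report SSE-S3 (AES256) as the default server-side encryption rule.
config = s3.get_bucket_encryption(Bucket="my-example-bucket")  # hypothetical bucket
for rule in config["ServerSideEncryptionConfiguration"]["Rules"]:
    print(rule["ApplyServerSideEncryptionByDefault"]["SSEAlgorithm"])  # e.g. "AES256"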

9
Q

Which of the following is available across all AWS Support plans?

a. Third-Party Software Support
b. Enhanced Technical Support with unlimited cases and unlimited contacts
c. Full set of AWS Trusted Advisor best practice checks
d. AWS Health Dashboard – Your account health

A

d. AWS Health Dashboard – Your account health

The full set of AWS Trusted Advisor best practice checks, Enhanced Technical Support with unlimited cases and unlimited contacts, and Third-Party Software Support are available only with the Business and Enterprise Support plans.

AWS Health Dashboard – Your account health is available on all Support plans. It provides alerts and remediation guidance when AWS is experiencing events that may impact you.

With AWS Health Dashboard – Your account health, alerts are triggered by changes in the health of your AWS resources, giving you event visibility and guidance to help quickly diagnose and resolve issues.

You can check https://health.aws.amazon.com/health/home for current status information.

Exam Alert:

Please review the differences between the AWS Developer Support, AWS Business Support, AWS Enterprise On-Ramp Support and AWS Enterprise Support plans as you can expect at least a couple of questions on the exam: https://aws.amazon.com/premiumsupport/plans/

10
Q

Which of the following options is NOT a feature of Amazon Inspector?

a. Analyze against unintended network accessibility
b. Inspect running operating systems (OS) against known vulnerabilities
c. Automate security assessments
d. Track configuration changes

A

d. Track configuration changes

Tracking configuration changes is a feature of AWS Config.

AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. Config continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations.

How AWS Config works: https://aws.amazon.com/config/
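
For example, once AWS Config is recording, you can pull a resource's configuration change history. A minimal boto3 (Python) sketch; the instance ID is hypothetical:

import boto3

config = boto3.client("config")

# Each configuration item is a point-in-time snapshot of the resource's configuration.
history = config.get_resource_config_history(
    resourceType="AWS::EC2::Instance",
    resourceId="i-0123456789abcdef0",  # hypothetical instance ID
    limit=5,
)
for item in history["configurationItems"]:
    print(item["configurationItemCaptureTime"], item["configurationItemStatus"])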

Incorrect options:

Automate security assessments

Analyze against unintended network accessibility

Inspect running operating systems (OS) against known vulnerabilities

These options are all features of Amazon Inspector.

Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. Amazon Inspector automatically assesses applications for exposure, vulnerabilities, and deviations from best practices.

Amazon Inspector security assessments help you check for unintended network accessibility of your Amazon EC2 instances and for vulnerabilities on those EC2 instances.

Amazon Inspector also offers predefined software called an agent that you can optionally install in the operating system of the EC2 instances that you want to assess. The agent monitors the behavior of the EC2 instances, including network, file system, and process activity. It also collects a wide set of behavior and configuration data (telemetry).

11
Q

Which AWS service is used to store and commit code privately and also offer features for version control?

a. AWS CodeStar
b. AWS CodeBuild
c. AWS CodePipeline
d. AWS CodeCommit

A

d. AWS CodeCommit

AWS CodeCommit is a fully-managed source control service that hosts secure Git-based repositories. It makes it easy for teams to collaborate on code in a secure and highly scalable ecosystem. CodeCommit eliminates the need to operate your own source control system or worry about scaling its infrastructure. You can use CodeCommit to securely store anything from source code to binaries, and it works seamlessly with your existing Git tools.

AWS CodeCommit eliminates the need to host, maintain, back up, and scale your own source control servers. The service automatically scales to meet the growing needs of your project. AWS CodeCommit automatically encrypts your files in transit and at rest. AWS CodeCommit is integrated with AWS Identity and Access Management (AWS IAM) allowing you to customize user-specific access to your repositories.

AWS CodeCommit supports all Git commands and works with your existing Git tools. You can keep using your preferred development environment plugins, continuous integration/continuous delivery systems, and graphical clients with CodeCommit.
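
A minimal boto3 (Python) sketch of creating a repository and printing its HTTPS clone URL; the repository name is hypothetical:

import boto3

codecommit = boto3.client("codecommit")

repo = codecommit.create_repository(
    repositoryName="my-app",  # hypothetical repository name
    repositoryDescription="Application source code",
)
# Use the returned URL with your existing Git tools: git clone <url>
print(repo["repositoryMetadata"]["cloneUrlHttp"])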

Incorrect options:

AWS CodePipeline - AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates. CodePipeline automates the build, test, and deploy phases of your release process every time there is a code change, based on the release model you define. This enables you to rapidly and reliably deliver features and updates.

AWS CodeBuild - AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages that are ready to deploy. With CodeBuild, you don’t need to provision, manage, and scale your own build servers. CodeBuild scales continuously and processes multiple builds concurrently, so your builds are not left waiting in a queue. You can get started quickly by using prepackaged build environments, or you can create custom build environments that use your own build tools. With CodeBuild, you are charged by the minute for the compute resources you use.

AWS CodeStar - AWS CodeStar is a cloud‑based development service that provides the tools you need to quickly develop, build, and deploy applications on AWS. With AWS CodeStar, you can set up your entire continuous delivery toolchain in minutes, allowing you to start releasing code faster. AWS CodeStar makes it easy for your whole team to work together securely, with built-in role-based policies that allow you to easily manage access and add owners, contributors, and viewers to your projects.

Each CodeStar project includes development tools, including AWS CodePipeline, AWS CodeCommit, AWS CodeBuild and AWS CodeDeploy, that can be used on their own and with existing AWS applications.

12
Q

Which of the following options are the benefits of using AWS Elastic Load Balancing (ELB)? (Select TWO)

a. Storage
b. High availability
c. Less costly
d. Fault tolerance
e. Agility

A

b. High availability
d. Fault tolerance

Elastic Load Balancing (ELB) automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, and IP addresses. It can handle the varying load of your application traffic in a single Availability Zone (AZ) or across multiple Availability Zones (AZs).

Elastic Load Balancing (ELB) offers three types of load balancers that all feature the high availability, automatic scaling, and robust security necessary to make your applications fault-tolerant: Application Load Balancer (best suited for HTTP and HTTPS traffic), Network Load Balancer (best suited for TCP traffic), and Classic Load Balancer.

Incorrect options:

Agility - Agility refers to new IT resources being only a click away, which means that you reduce the time to make those resources available to your developers from weeks to just minutes. AWS Elastic Load Balancing (ELB) does not help with agility.

Less costly - AWS Elastic Load Balancing (ELB) does not help with reducing costs.

Storage - AWS Elastic Load Balancing (ELB) does not offer storage benefits. It is not a storage-related service.

13
Q

A company needs to keep sensitive data in its own data center due to compliance but would still like to deploy resources using AWS. Which Cloud deployment model does this refer to?

a. Public Cloud
b. Hybrid Cloud
c. Private Cloud
d. On-premises

A

b. Hybrid Cloud

A hybrid deployment is a way to connect infrastructure and applications between cloud-based resources and existing resources that are not located in the cloud. The most common method of hybrid deployment is between the cloud and existing on-premises infrastructure to extend, and grow, an organization’s infrastructure into the cloud while connecting cloud resources to the internal system.

Overview of Cloud Computing Deployment Models: https://aws.amazon.com/types-of-cloud-computing/

Incorrect options:

Public Cloud - A public cloud-based application is fully deployed in the cloud and all parts of the application run in the cloud. Applications in the cloud have either been created in the cloud or have been migrated from an existing infrastructure to take advantage of the benefits of cloud computing.

Private Cloud - Unlike a public cloud, a private cloud enables a business to use IT services that are provisioned and customized to its precise needs, delivered securely and reliably over a private IT infrastructure.

On-premises - This is not a cloud deployment model. When an enterprise opts for on-premises infrastructure, it needs to create, upgrade, and scale that infrastructure by investing in sophisticated hardware, compatible software, and robust services. The business also needs dedicated IT staff to maintain, scale, and manage the infrastructure continuously.

14
Q

A company has defined a baseline that mentions the number of AWS resources to be used for different stages of application testing. However, the company realized that employees are not adhering to the guidelines and provisioning additional resources via API calls, resulting in higher testing costs.

Which AWS service will help the company raise alarms whenever the baseline resource numbers are crossed?

a. Amazon Detective
b. AWS CloudTrail Insights
c. AWS X-Ray
d. AWS Config

A

b. AWS CloudTrail Insights

AWS CloudTrail Insights helps AWS users identify and respond to unusual activity associated with write API calls by continuously analyzing CloudTrail management events.

Insights events are logged when AWS CloudTrail detects unusual write management API activity in your account. If you have CloudTrail Insights enabled, and CloudTrail detects unusual activity, Insights events are delivered to the destination S3 bucket for your trail. You can also see the type of insight and the incident time period when you view Insights events on the CloudTrail console. Unlike other types of events captured in a CloudTrail trail, Insights events are logged only when CloudTrail detects changes in your account’s API usage that differ significantly from the account’s typical usage patterns.

AWS CloudTrail Insights can help you detect unusual API activity in your AWS account by raising Insights events. CloudTrail Insights measures your normal patterns of API call volume, also called the baseline, and generates Insights events when the volume is outside normal patterns.

AWS CloudTrail Insights continuously monitors CloudTrail write management events, and uses mathematical models to determine the normal levels of API and service event activity for an account. CloudTrail Insights identifies behavior that is outside normal patterns, generates Insights events, and delivers those events to a /CloudTrail-Insight folder in the chosen destination S3 bucket for your trail. You can also access and view Insights events in the AWS Management Console for CloudTrail.

Identify and Respond to Unusual API Activity using AWS CloudTrail Insights: https://aws.amazon.com/blogs/aws/announcing-cloudtrail-insights-identify-and-respond-to-unusual-api-activity/
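
CloudTrail Insights is enabled per trail. A minimal boto3 (Python) sketch, assuming a hypothetical existing trail named "management-events":

import boto3

cloudtrail = boto3.client("cloudtrail")

# Monitor for unusual API call volumes and unusual API error rates.
cloudtrail.put_insight_selectors(
    TrailName="management-events",  # hypothetical trail name
    InsightSelectors=[
        {"InsightType": "ApiCallRateInsight"},
        {"InsightType": "ApiErrorRateInsight"},
    ],
)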

Incorrect options:

AWS X-Ray - AWS X-Ray helps developers analyze and debug production, distributed applications, such as those built using a microservices architecture. With X-Ray, you can understand how your application and its underlying services are performing to identify and troubleshoot the root cause of performance issues and errors. X-Ray provides an end-to-end view of requests as they travel through your application, and shows a map of your application’s underlying components. X-Ray is not for tracking user actions when interacting with the AWS systems.

Amazon Detective - Amazon Detective simplifies the process of investigating security findings and identifying the root cause. Amazon Detective analyzes trillions of events from multiple data sources such as VPC Flow Logs, AWS CloudTrail logs, and Amazon GuardDuty findings and automatically creates a graph model that provides you with a unified, interactive view of your resources, users, and the interactions between them over time.

AWS Config - AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. Config continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations. With Config, you can review changes in configurations and relationships between AWS resources, dive into detailed resource configuration histories, and determine your overall compliance against the configurations specified in your internal guidelines. This enables you to simplify compliance auditing, security analysis, change management, and operational troubleshooting.

15
Q

Which of the following are the best practices when using AWS Organizations? (Select TWO)

a. Restrict account privileges using Service Control Policies (SCP)
b. Never use tags for billing
c. Create AWS accounts per department
d. Disable AWS CloudTrail on several accounts
e. Do not use AWS Organizations to automate AWS account creation

A

a. Restrict account privileges using Service Control Policies (SCP)
c. Create AWS accounts per department

AWS Organizations helps you centrally govern your environment as you grow and scale your workloads on AWS. Whether you are a growing startup or a large enterprise, AWS Organizations helps you centrally manage billing; control access, compliance, and security; and share resources across your AWS accounts.

Using AWS Organizations, you can automate account creation, create groups of accounts to reflect your business needs, and apply policies for these groups for governance. You can also simplify billing by setting up a single payment method for all of your AWS accounts. Through integrations with other AWS services, you can use AWS Organizations to define central configurations and resource sharing across accounts in your organization. AWS Organizations is available to all AWS customers at no additional charge.

You should create accounts per department based on regulatory restrictions (using Service Control Policies (SCP)) for better resource isolation, and to have separate per-account service limits.

AWS Organizations allows you to restrict what services and actions are allowed in your accounts. You can use the Service Control Policies (SCP) to apply permission guardrails on AWS Identity and Access Management (IAM) users and roles.
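
As a sketch of the SCP guardrail idea, the following boto3 (Python) snippet creates a deny-only policy and attaches it to an organizational unit; the policy content and OU ID are hypothetical:

import json
import boto3

orgs = boto3.client("organizations")

# SCPs never grant permissions; they cap the permissions available to member accounts.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Deny", "Action": "cloudtrail:StopLogging", "Resource": "*"}
    ],
}

policy = orgs.create_policy(
    Name="DenyStoppingCloudTrail",
    Description="Keep AWS CloudTrail enabled in all member accounts",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)
orgs.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-exam-ple12345",  # hypothetical OU ID
)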

Incorrect options:

Never use tags for billing - You should use tags standards to categorize AWS resources for billing purposes.

Disable AWS CloudTrail on several accounts - You should enable AWS CloudTrail to monitor activity on all accounts for governance, compliance, risk, and auditing purposes.

Do not use AWS Organizations to automate AWS account creation - AWS Organizations helps you simplify IT operations by automating AWS account creation and management. The AWS Organizations APIs enable you to create new accounts programmatically and to add new accounts to a group. The policies attached to the group are automatically applied to the new account.

16
Q

Which of the following is the MOST cost-effective Amazon Elastic Compute Cloud (Amazon EC2) instance purchasing option for short-term, spiky and critical workloads on AWS Cloud?

a. Spot Instance
b. Dedicated Host
c. On-Demand Instance
d. Reserved Instance (RI)

A

c. On-Demand Instance

An On-Demand Instance is an instance that you pay for by the second, with no long-term commitment and no upfront payment. You have full control over its lifecycle: you decide when to launch, stop, hibernate, start, reboot, or terminate it. The price per second for running an On-Demand Instance is fixed, and On-Demand Instances cannot be interrupted. This makes On-Demand Instances the best fit for short-term, spiky, and critical workloads.

Amazon EC2 Pricing Options Overview: https://aws.amazon.com/ec2/pricing/

Incorrect options:

Spot Instance - A Spot Instance is an unused EC2 instance that is available for less than the On-Demand price. Because Spot Instances enable you to request unused EC2 instances at steep discounts (up to 90%), you can lower your Amazon EC2 costs significantly. Spot Instances are well-suited for data analysis, batch jobs, background processing, and other flexible tasks that can be interrupted. These can be terminated at short notice, so these are not suitable for critical workloads that need to run at a specific point in time.

Reserved Instance (RI) - Reserved Instances (RI) provide you with significant savings (up to 75%) on your Amazon EC2 costs compared to On-Demand Instance pricing. Reserved Instances (RI) are not physical instances, but rather a billing discount applied to the use of On-Demand Instances in your account. You can purchase a Reserved Instance (RI) for a one-year or three-year commitment, with the three-year commitment offering a bigger discount. Reserved instances (RI) cannot be interrupted. Reserved instances (RI) are not the right choice for short-term workloads.

Dedicated Host - Amazon EC2 Dedicated Hosts allow you to use your eligible software licenses from vendors such as Microsoft and Oracle on Amazon EC2, giving you the flexibility and cost-effectiveness of using your own licenses with the resiliency, simplicity, and elasticity of AWS. An Amazon EC2 Dedicated Host is a physical server fully dedicated to your use, which can help address corporate compliance requirements. Dedicated Hosts are not cost-efficient compared to On-Demand Instances, so this option is not correct.

17
Q

An e-commerce company wants to review the Payment Card Industry (PCI) reports on AWS Cloud. Which AWS resource can be used to address this use-case?

a. AWS Trusted Advisor
b. AWS Secrets Manager
c. AWS Artifact
d. AWS Cost & Usage Report (AWS CUR)

A

c. AWS Artifact

AWS Artifact is your central resource for compliance-related information. It provides on-demand access to AWS security and compliance reports, such as Payment Card Industry (PCI) and Service Organization Control (SOC) reports, as well as select online agreements.

18
Q

A retail company has multiple AWS accounts for each of its departments. Which of the following AWS services can be used to set up consolidated billing and a single payment method for these AWS accounts?

a. AWS Organizations
b. AWS Cost Explorer
c. AWS Budgets
d. AWS Secrets Manager

A

a. AWS Organizations

AWS Organizations helps you to centrally manage billing; control access, compliance, and security; and share resources across your AWS accounts. Using AWS Organizations, you can automate account creation, create groups of accounts to reflect your business needs, and apply policies for these groups for governance. You can also simplify billing by setting up a single payment method for all of your AWS accounts. AWS Organizations is available to all AWS customers at no additional charge.

Key Features of AWS Organizations: https://aws.amazon.com/organizations/
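
A minimal boto3 (Python) sketch: create an organization with all features enabled (which includes consolidated billing) and list the member accounts that roll up to the single payer:

import boto3

orgs = boto3.client("organizations")

# "ALL" enables consolidated billing plus advanced features such as SCPs.
orgs.create_organization(FeatureSet="ALL")

# Every account listed here is billed through the management account.
for page in orgs.get_paginator("list_accounts").paginate():
    for account in page["Accounts"]:
        print(account["Id"], account["Name"], account["Status"])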

19
Q

Which AWS service will help you deploy application code automatically to an Amazon Elastic Compute Cloud (Amazon EC2) instance?

a. AWS CloudFormation
b. AWS CodeBuild
c. AWS CodeDeploy
d. AWS Elastic Beanstalk

A

c. AWS CodeDeploy

AWS CodeDeploy is a service that automates application deployments to a variety of compute services including Amazon EC2, AWS Fargate, AWS Lambda, and on-premises instances. CodeDeploy fully automates your application deployments eliminating the need for manual operations. CodeDeploy protects your application from downtime during deployments through rolling updates and deployment health tracking.
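
A minimal boto3 (Python) sketch of starting a deployment to an EC2 deployment group; the application, deployment group, bucket, and object key names are hypothetical:

import boto3

codedeploy = boto3.client("codedeploy")

# Deploy a revision (a zip bundle stored in S3) to the instances in the group.
deployment = codedeploy.create_deployment(
    applicationName="my-app",           # hypothetical application
    deploymentGroupName="my-app-prod",  # hypothetical deployment group
    revision={
        "revisionType": "S3",
        "s3Location": {
            "bucket": "my-artifacts-bucket",  # hypothetical bucket
            "key": "my-app/build-42.zip",     # hypothetical object key
            "bundleType": "zip",
        },
    },
)
print(deployment["deploymentId"])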

20
Q

Per the AWS Shared Responsibility Model, management of which of the following AWS services is the responsibility of the customer?

a. Amazon Elastic Compute Cloud (Amazon EC2)
b. AWS Elastic Beanstalk
c. Amazon DynamoDB
d. Amazon Simple Storage Service (Amazon S3)

A

a. Amazon Elastic Compute Cloud (Amazon EC2)

Security and Compliance is a shared responsibility between AWS and the customer. This shared model can help relieve the customer’s operational burden as AWS operates, manages, and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates.

“Security of the Cloud” is the responsibility of AWS - AWS is responsible for protecting the infrastructure that runs all of the services offered in the AWS Cloud. This infrastructure is composed of the hardware, software, networking, and facilities that run AWS Cloud services.

“Security in the Cloud” is the responsibility of the customer. Customer responsibility will be determined by the AWS Cloud services that a customer selects. This determines the amount of configuration work the customer must perform as part of their security responsibilities. For example, a service such as Amazon Elastic Compute Cloud (Amazon EC2) is categorized as Infrastructure as a Service (IaaS) and, as such, requires the customer to perform all of the necessary security configuration and management tasks. Customers that deploy an Amazon EC2 instance are responsible for the management of the guest operating system (including updates and security patches), any application software or utilities installed by the customer on the instances, and the configuration of the AWS-provided firewall (called a security group) on each instance.

Incorrect options:

Amazon Simple Storage Service (Amazon S3)
Amazon DynamoDB
AWS Elastic Beanstalk

For abstracted services, such as Amazon S3, Amazon DynamoDB and for managed services such as AWS Elastic Beanstalk, AWS operates the infrastructure layer, the operating system, and platforms, and customers access the endpoints to store and retrieve data. Customers are responsible for managing their data (including encryption options), classifying their assets, and using IAM tools to apply the appropriate permissions.

21
Q

A company wants to identify the optimal AWS resource configuration for its workloads so that the company can reduce costs and increase workload performance. Which of the following services can be used to meet this requirement?
a. AWS Cost Explorer
b. AWS Systems Manager
c. AWS Budgets
d. AWS Compute Optimizer

A

d. AWS Compute Optimizer

AWS Compute Optimizer recommends optimal AWS resources for your workloads to reduce costs and improve performance by using machine learning to analyze historical utilization metrics. Over-provisioning resources can lead to unnecessary infrastructure costs, and under-provisioning resources can lead to poor application performance. Compute Optimizer helps you choose optimal configurations for three types of AWS resources: Amazon EC2 instances, Amazon EBS volumes, and AWS Lambda functions, based on your utilization data.

Compute Optimizer recommends up to 3 options from 140+ EC2 instance types, as well as a wide range of EBS volume and Lambda function configuration options, to right-size your workloads. Compute Optimizer also projects what the CPU utilization, memory utilization, and run time of your workload would have been on recommended AWS resource options. This helps you understand how your workload would have performed on the recommended options before implementing the recommendations.

How Compute Optimizer works: https://aws.amazon.com/compute-optimizer/
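
Once an account has opted in to Compute Optimizer, its recommendations can also be read programmatically. A minimal boto3 (Python) sketch:

import boto3

optimizer = boto3.client("compute-optimizer")

# Each recommendation carries a finding (e.g. OVER_PROVISIONED) and ranked instance-type options.
for rec in optimizer.get_ec2_instance_recommendations()["instanceRecommendations"]:
    best_option = rec["recommendationOptions"][0]["instanceType"]
    print(rec["currentInstanceType"], rec["finding"], "->", best_option)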

Incorrect options:

AWS Cost Explorer - AWS Cost Explorer has an easy-to-use interface that lets you visualize, understand, and manage your AWS costs and usage over time. Cost Explorer Resource Rightsizing Recommendations and Compute Optimizer use the same recommendation engine. The Compute Optimizer recommendation engine delivers recommendations to help customers identify optimal EC2 instance types for their workloads. The Cost Explorer console and API surface a subset of these recommendations that may lead to cost savings, and augment them with customer-specific cost and savings information (e.g. billing information, available credits, RI, and Savings Plans) to help Cost Management owners quickly identify savings opportunities through infrastructure rightsizing. The Compute Optimizer console and its API deliver all recommendations regardless of the cost implications.

22
Q

A company would like to move its infrastructure to AWS Cloud. Which of the following should be included in the Total Cost of Ownership (TCO) estimate? (Select TWO)

a. Power/Cooling
b. Application advertising
c. Electronic equipment at office
d. Number of end-users
e. Server administration

A

a. Power/Cooling
e. Server administration

AWS Pricing Calculator lets you explore AWS services and create an estimate for the cost of your use cases on AWS. You can model your solutions before building them, explore the price points and calculations behind your estimate, and find the available instance types and contract terms that meet your needs. This enables you to make informed decisions about using AWS. You can plan your AWS costs and usage or price out by setting up a new set of instances and services. AWS Pricing Calculator can be accessed at https://calculator.aws/#/.

AWS Pricing Calculator compares the cost of your applications in an on-premises or traditional hosting environment to AWS: server, storage, network, and IT labor. Therefore, you need to include every element relevant to these points of comparison.

Server administration is included in the IT labor costs.

Power/Cooling are included in the server, storage, and network cost.

Incorrect options:

Application advertising - The application advertising is not relevant for a Total Cost of Ownership (TCO) estimate.

Number of end-users - The number of end-users is not relevant for a Total Cost of Ownership (TCO) estimate.

Electronic equipment at office - The electronic equipment at the office is not relevant for a Total Cost of Ownership (TCO) estimate.

23
Q

Which of the following statements are true about Cost Allocation Tags in AWS Billing? (Select two)

a. Tags help in organizing resources and are a mandatory configuration item to run reports
b. Only user-defined tags need to be activated before they can appear in Cost Explorer or on a cost allocation report
c. For each resource, each tag key must be unique, but can have multiple values
d. For each resource, each tag key must be unique, and each tag key can have only one value
e. You must activate both AWS generated tags and user-defined tags separately before they can appear in Cost Explorer or on a cost allocation report

A

d. For each resource, each tag key must be unique, and each tag key can have only one value
e. You must activate both AWS generated tags and user-defined tags separately before they can appear in Cost Explorer or on a cost allocation report

A Cost Allocation Tag is a label that you or AWS assigns to an AWS resource. Each tag consists of a key and a value. For each resource, each tag key must be unique, and each tag key can have only one value. You can use tags to organize your resources, and cost allocation tags to track your AWS costs on a detailed level.

AWS provides two types of cost allocation tags: AWS generated tags and user-defined tags. AWS defines, creates, and applies the AWS generated tags for you, and you define, create, and apply the user-defined tags. You must activate both types of tags separately before they can appear in Cost Explorer or on a cost allocation report.

AWS Cost Allocation Tags Overview: https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html
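
A minimal boto3 (Python) sketch, assuming a recent boto3 that includes the Cost Explorer UpdateCostAllocationTagsStatus API: tag a resource, then activate the user-defined tag key so it shows up in cost reports. The instance ID and tag key are hypothetical:

import boto3

ec2 = boto3.client("ec2")
ce = boto3.client("ce")

# Apply a user-defined tag (one value per key, per resource).
ec2.create_tags(
    Resources=["i-0123456789abcdef0"],  # hypothetical instance ID
    Tags=[{"Key": "CostCenter", "Value": "mobile-qa"}],
)

# Activate the tag key so it appears in Cost Explorer and cost allocation reports.
ce.update_cost_allocation_tags_status(
    CostAllocationTagsStatus=[{"TagKey": "CostCenter", "Status": "Active"}]
)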

Incorrect options:

Tags help in organizing resources and are a mandatory configuration item to run reports - Tags do help organize resources according to an organization’s requirements; they are not mandatory, though.

For each resource, each tag key must be unique, but can have multiple values - For each resource, each tag key must be unique, and each tag key can have only one value.

Only user-defined tags need to be activated before they can appear in Cost Explorer or on a cost allocation report - As explained above, both kinds of tags (user-defined and AWS generated) need to be activated separately before they can appear in report generation.

24
Q

A multi-national corporation wants to get expert professional advice on migrating to AWS and managing their applications on AWS Cloud. Which of the following entities would you recommend for this engagement?
a. Concierge Support Team
b. AWS Trusted Advisor
c. APN Consulting Partner
d. APN Technology Partner

A

c. APN Consulting Partner

The AWS Partner Network (APN) is the global partner program for technology and consulting businesses that leverage Amazon Web Services to build solutions and services for customers.

APN Consulting Partners are professional services firms that help customers of all types and sizes design, architect, build, migrate, and manage their workloads and applications on AWS, accelerating their migration to AWS cloud.

APN Partner Types Overview: https://aws.amazon.com/partners/

25
Q

Which AWS tool can provide best practice recommendations for performance, service limits, and cost optimization?

a. AWS Health Dashboard - Service health
b. AWS Trusted Advisor
c. Amazon Inspector
d. Amazon CloudWatch

A

b. AWS Trusted Advisor

AWS Trusted Advisor is an online resource to help you reduce cost, increase performance, and improve security by optimizing your AWS environment. AWS Trusted Advisor provides real-time guidance to help you provision your resources following AWS best practices.

How AWS Trusted Advisor works: https://aws.amazon.com/premiumsupport/technology/trusted-advisor/
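
Trusted Advisor checks can also be read through the AWS Support API (available only in us-east-1, and the full set of checks requires a Business, Enterprise On-Ramp, or Enterprise plan). A minimal boto3 (Python) sketch:

import boto3

# The AWS Support API endpoint lives only in the us-east-1 region.
support = boto3.client("support", region_name="us-east-1")

for check in support.describe_trusted_advisor_checks(language="en")["checks"]:
    print(check["category"], "-", check["name"])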

Incorrect options:

Amazon Inspector - Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on your Amazon EC2 instances. Amazon Inspector automatically assesses applications for exposure, vulnerabilities, and deviations from best practices. Nevertheless, it does not provide best practice recommendations.

AWS Health Dashboard - Service health - AWS Health Dashboard - Service health publishes up-to-the-minute information on the status and availability of all AWS services, in tabular form, for all Regions in which AWS operates. It does not provide best practice recommendations.

Amazon CloudWatch - Amazon CloudWatch is a monitoring and observability service built for DevOps engineers, developers, site reliability engineers (SREs), and IT managers. Amazon CloudWatch provides data and actionable insights to monitor applications, respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health. This is an excellent service for building Resilient systems. Think resource performance monitoring, events, and alerts; think Amazon CloudWatch. Amazon CloudWatch does not provide best practice recommendations.

26
Q

Access Key ID and Secret Access Key are tied to which of the following AWS Identity and Access Management (AWS IAM) entities?

a. IAM Role
b. IAM Policy
c. IAM User
d. IAM User Group

A

c. IAM User

Access keys are long-term credentials for an IAM user or the AWS account root user. You can use access keys to sign programmatic requests to the AWS CLI or AWS API (directly or using the AWS SDK). Access keys consist of two parts: an access key ID (for example, AKIAIOSFODNN7EXAMPLE) and a secret access key (for example, wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY). Like a user name and password, you must use both the access key ID and secret access key together to authenticate your requests. Access keys are secret, just like a password; you should never share them.
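
A minimal boto3 (Python) sketch of issuing an access key for an IAM user; the user name is hypothetical:

import boto3

iam = boto3.client("iam")

key = iam.create_access_key(UserName="alice")["AccessKey"]  # hypothetical user

# The secret access key is returned only at creation time and cannot be retrieved later,
# so store it securely (and never share it).
print(key["AccessKeyId"])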

Note to self: Think Github

27
Q

The QA team at a company wants a tool/service that can provide access to different mobile devices with variations in firmware and Operating System versions.

Which AWS service can address this use case?

a. AWS Device Farm
b. AWS Elastic Beanstalk
c. AWS Mobile Farm
d. AWS CodePipeline

A

a. AWS Device Farm

AWS Device Farm is an application testing service that lets you improve the quality of your web and mobile apps by testing them across an extensive range of desktop browsers and real mobile devices, without having to provision and manage any testing infrastructure. The service enables you to run your tests concurrently on multiple desktop browsers or real devices to speed up the execution of your test suite, and generates videos and logs to help you quickly identify issues with your app.

Device Farm is designed for developers, QA teams, and customer support representatives who are building, testing, and supporting mobile apps to increase the quality of their apps. Application quality is increasingly important, and also getting complex due to the number of device models, variations in firmware and OS versions, carrier and manufacturer customizations, and dependencies on remote services and other apps. AWS Device Farm accelerates the development process by executing tests on multiple devices, giving developers, QA and support professionals the ability to perform automated tests and manual tasks like reproducing customer issues, exploratory testing of new functionality, and executing manual test plans. AWS Device Farm also offers significant savings by eliminating the need for internal device labs, lab managers, and automation infrastructure development.

How AWS Device Farm works: https://aws.amazon.com/device-farm/

28
Q

Which of the following is a serverless AWS service?
a. AWS Lambda
b. AWS Elastic Beanstalk
c. Amazon Elastic Compute Cloud (Amazon EC2)
d. Amazon EMR

A

a. AWS Lambda

Remember: C1 calls serverless Lambda

AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume. With Lambda, you can run code for virtually any type of application or backend service - all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale your code with high availability.
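
For illustration, an entire deployable Lambda function can be a single Python handler; the event shape here is hypothetical:

# handler.py - Lambda invokes lambda_handler(event, context) on each request;
# there are no servers for you to provision or manage.
def lambda_handler(event, context):
    name = event.get("name", "world")  # hypothetical event field
    return {"statusCode": 200, "body": f"Hello, {name}!"}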

29
Q

As a Cloud Practitioner, which Amazon Simple Storage Service (Amazon S3) storage class would you recommend for data archival?

a. Amazon S3 Standard
b. Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)
c. Amazon S3 Intelligent-Tiering
d. Amazon S3 Glacier Flexible Retrieval

A

d. Amazon S3 Glacier Flexible Retrieval

Amazon S3 Glacier Flexible Retrieval is a low-cost Amazon S3 storage class purpose-built for data archiving. The other options, including Amazon S3 Intelligent-Tiering, are designed for more actively accessed data and are not suitable for data archival.

30
Q

Which of the following AWS services support VPC Endpoint Gateway for a private connection from a VPC? (Select two)
a. Amazon Elastic Compute Cloud (Amazon EC2)
b. Amazon Simple Storage Service (Amazon S3)
c. Amazon DynamoDB
d. Amazon Simple Notification Service (SNS)
e. Amazon Simple Queue Service (SQS)

A

b. Amazon Simple Storage Service (Amazon S3)
c. Amazon DynamoDB

A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by AWS PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network.

There are two types of VPC endpoints: interface endpoints and gateway endpoints.

An interface endpoint is an elastic network interface with a private IP address from the IP address range of your subnet that serves as an entry point for traffic destined to a supported service. Interface endpoints are powered by AWS PrivateLink, a technology that enables you to privately access services by using private IP addresses.

A gateway endpoint is a gateway that you specify as a target for a route in your route table for traffic destined to a supported AWS service (see the sketch after this list). The following AWS services are supported:

Amazon Simple Storage Service (Amazon S3)

Amazon DynamoDB
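
A minimal boto3 (Python) sketch of creating a gateway endpoint for Amazon S3; the VPC ID, route table ID, and Region are hypothetical:

import boto3

ec2 = boto3.client("ec2")

# The gateway endpoint becomes a route target; no internet gateway or NAT is needed.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",             # hypothetical VPC
    ServiceName="com.amazonaws.us-east-1.s3",  # assumes the us-east-1 Region
    RouteTableIds=["rtb-0123456789abcdef0"],   # hypothetical route table
)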

Exam Alert:

You may see a question around this concept in the exam. Just remember that only Amazon S3 and Amazon DynamoDB support VPC gateway endpoints. All other services that support VPC Endpoints use a VPC interface endpoint (note that Amazon S3 supports the VPC interface endpoint as well).

31
Q

Which feature of AWS Cloud offers the ability to innovate faster and rapidly develop, test and launch software applications?

a. Ability to deploy globally in minutes
b. Cost savings
c. Elasticity
d. Agility

A

d. Agility

Cloud computing is the on-demand delivery of IT resources over the Internet with pay-as-you-go pricing. Instead of buying, owning, and maintaining physical data centers and servers, you can access technology services, such as computing power, storage, and databases, on an as-needed basis from a cloud provider like Amazon Web Services (AWS).

Agility - Agility refers to the ability of the cloud to give you easy access to a broad range of technologies so that you can innovate faster and build nearly anything that you can imagine. You can quickly spin up resources as you need them – from infrastructure services, such as compute, storage, and databases, to Internet of Things, machine learning, data lakes and analytics, and much more.

Incorrect options:

Elasticity - With cloud computing elasticity, you don’t have to over-provision resources upfront to handle peak levels of business activity in the future. Instead, you provision the number of resources that you actually need. You can scale these resources up or down instantly to grow and shrink capacity as your business needs change.

Cost savings - The cloud allows you to trade capital expenses (such as data centers and physical servers) for variable expenses, and only pay for IT as you consume it. Plus, the variable expenses are much lower than what you would pay to do it yourself because of the economies of scale.

Ability to deploy globally in minutes - With the cloud, you can expand to new geographic regions and deploy globally in minutes. For example, AWS has infrastructure all over the world, so you can deploy your application in multiple physical locations with just a few clicks. Putting applications in closer proximity to end users reduces latency and improves their experience.

Exam Alert:

Please review the benefits of Cloud Computing: https://aws.amazon.com/what-is-cloud-computing/

32
Q

Which of the following Amazon Simple Storage Service (Amazon S3) storage classes has NO constraint of a minimum storage duration charge for objects?

a. Amazon S3 Standard-Infrequent Access (S3 Standard-IA)
b. Amazon S3 Glacier Flexible Retrieval
c. Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)
d. Amazon S3 Standard

A

d. Amazon S3 Standard

Amazon S3 Standard offers high durability, availability, and performance object storage for frequently accessed data. Amazon S3 Standard offers low latency and high throughput performance. It is designed for durability of 99.999999999% of objects across multiple Availability Zones (AZs). Amazon S3 Standard has no constraint of a minimum storage duration for objects.

Please review this illustration of the Amazon S3 storage classes' retrieval fees. You don't need to memorize the actual numbers; just remember that Amazon S3 Standard and Amazon S3 Intelligent-Tiering do not charge any retrieval fee: https://aws.amazon.com/s3/storage-classes/

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
33
Q

Which of the following is a part of the AWS Global Infrastructure?

a. Virtual Private Cloud (VPC)
b. Subnet
c. Virtual Private Network (VPN)
d. AWS Region

A

d. AWS Region

An AWS Region is a physical location around the world where AWS builds its data centers. Each group of logical data centers is called an Availability Zone (AZ). Each AWS Region consists of multiple, isolated, and physically separate AZs within a geographic area.

Please see this illustration for AWS regions in the US: https://aws.amazon.com/about-aws/global-infrastructure/regions_az/

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
34
Q

Which security control tool can be used to deny traffic from a specific IP address?

a. VPC Flow Logs
b. Amazon GuardDuty
c. Security Group
d. Network Access Control List (network ACL)

A

d. Network Access Control List (network ACL)

A Network Access Control List (network ACL) is an optional layer of security for your virtual private cloud (VPC) that acts as a firewall for controlling traffic in and out of one or more subnets (i.e. it works at the subnet level). A network ACL has separate inbound and outbound rules, and each rule can either allow or deny traffic.
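
For example, a deny rule for a single IP address could be added with boto3 as in this sketch (the network ACL ID and the address are placeholders):

    import boto3

    ec2 = boto3.client("ec2")

    # Network ACL rules are evaluated in order of rule number; this deny
    # rule blocks all inbound traffic from one IP address.
    ec2.create_network_acl_entry(
        NetworkAclId="acl-0123456789abcdef0",  # placeholder network ACL ID
        RuleNumber=90,                # evaluated before higher-numbered rules
        Protocol="-1",                # -1 means all protocols
        RuleAction="deny",
        Egress=False,                 # inbound rule
        CidrBlock="203.0.113.24/32",  # the single IP address to block
    )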

Incorrect options:

Security Group - A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. Security groups act at the instance level, not at the subnet level. You can specify allow rules, but not deny rules. You can specify separate rules for inbound and outbound traffic.

VPC Flow Logs - VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC. Flow log data can be published to Amazon CloudWatch Logs or Amazon Simple Storage Service (Amazon S3). After you’ve created a flow log, you can retrieve and view its data in the chosen destination. However, it cannot deny traffic from a specific IP address.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
35
Q

A photo sharing web application wants to store thumbnails of user-uploaded images on Amazon Simple Storage Service (Amazon S3). The thumbnails are rarely used but need to be immediately accessible from the web application. The thumbnails can be regenerated easily if they are lost. Which is the most cost-effective way to store these thumbnails on Amazon Simple Storage Service (Amazon S3)?

a. Use Amazon S3 Standard to store the thumbnails
b. Use Amazon S3 Glacier Flexible Retrieval to store the thumbnails
c. Use Amazon S3 Standard-Infrequent Access (S3 Standard-IA) to store the thumbnails
d. Use Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) to store the thumbnails

A

d. Use Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) to store the thumbnails

Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) is for data that is accessed less frequently but requires rapid access when needed. Unlike other S3 storage classes, which store data in a minimum of three Availability Zones (AZs), Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) stores data in a single Availability Zone (AZ) and costs 20% less than Amazon S3 Standard-Infrequent Access (S3 Standard-IA). Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) offers the same high durability, high throughput, and low latency as S3 Standard, with a low per-GB storage price and per-GB retrieval fee. Although Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) offers less availability than S3 Standard, that's not an issue for the given use-case since the thumbnails can be regenerated easily.

As the thumbnails are rarely used but need to be rapidly accessed when required, Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) is the best choice for this use-case.
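
A minimal boto3 sketch of uploading a thumbnail directly into the S3 One Zone-IA storage class (the bucket, key, and file names are placeholders):

    import boto3

    s3 = boto3.client("s3")

    # StorageClass selects the storage class per object at upload time.
    s3.put_object(
        Bucket="my-photo-app-thumbnails",    # placeholder bucket name
        Key="thumbnails/user123/img42.png",  # placeholder object key
        Body=open("img42-thumb.png", "rb"),
        StorageClass="ONEZONE_IA",
    )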

Exam Alert:

Please review this detailed comparison on S3 Storage Classes as you can expect a few questions on this aspect of S3: https://aws.amazon.com/s3/storage-classes/

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
36
Q

Which of the following criteria are used to calculate the charge for Amazon EBS Volumes? (Select Two)

a. Volume type
b. Provisioned IOPS
c. The Amazon EC2 instance type the Amazon EBS Elastic volume is attached to
d. Data type
e. Data transfer IN

A

a. Volume type
b. Provisioned IOPS

IOPS = input/output operations per second

Amazon Elastic Block Store (Amazon EBS) provides persistent block storage volumes for use with Amazon EC2 instances in the AWS Cloud. Each Amazon EBS volume is automatically replicated within its Availability Zone (AZ) to protect you from component failure, offering high availability and durability. Amazon EBS volumes offer the consistent and low-latency performance needed to run your workloads. With Amazon EBS, you can scale your usage up or down within minutes—all while paying a low price for only what you provision.

The fundamental charges for EBS volumes are: the volume type (based on performance), the storage provisioned in GB per month, the number of IOPS provisioned per month, the storage consumed by snapshots, and outbound data transfer.
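
A boto3 sketch showing the two billed dimensions from this question, volume type and provisioned IOPS, being chosen at creation time (the Availability Zone is a placeholder):

    import boto3

    ec2 = boto3.client("ec2")

    # You are billed for the volume type, the GB provisioned, and (for
    # io1/io2 volumes) the IOPS provisioned -- not for the instance type
    # the volume is later attached to.
    ec2.create_volume(
        AvailabilityZone="us-east-1a",  # placeholder AZ
        Size=100,            # GB provisioned
        VolumeType="io1",    # volume type (based on performance)
        Iops=3000,           # provisioned IOPS
    )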

Incorrect options:

Data transfer IN - Data transfer-in is always free, including for Amazon EBS Elastic Volumes.

The Amazon EC2 instance type the Amazon EBS Elastic volume is attached to - The Amazon EC2 instance type the Amazon EBS volume is attached to does not influence the EBS volume pricing.

Data type - The type of data stored on EBS volumes does not influence the price.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
37
Q

As per the AWS Shared Responsibility Model, which of the following is a responsibility of AWS from a security and compliance point of view?

a. Edge Location Management
b. Server-side Encryption (SSE)
c. Identity and Access Management
d. Customer Data

A

a. Edge Location Management

Security and Compliance is a shared responsibility between AWS and the customer. The AWS Shared Responsibility Model can help relieve the customer’s operational burden as AWS operates, manages, and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates.

AWS is responsible for security “of” the cloud. This covers their global infrastructure elements including Regions, Availability Zones (AZ), and Edge Locations.

Incorrect options:

Customer Data

Identity and Access Management

Server-side Encryption (SSE)

The customer is responsible for security “in” the cloud. Customers are responsible for managing their data including encryption options and using Identity and Access Management tools for implementing appropriate access control policies as per their organization requirements. For abstracted services, such as Amazon S3 and Amazon DynamoDB, AWS operates the infrastructure layer, the operating system, and platforms, and customers access the endpoints to store and retrieve data. Therefore, these three options fall under the responsibility of the customer according to the AWS shared responsibility model.

Exam Alert:

Please review the AWS Shared Responsibility Model in detail as you can expect multiple questions on this topic in the exam: https://aws.amazon.com/compliance/shared-responsibility-model/

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
38
Q

A company would like to audit requests made to an Amazon Simple Storage Service (Amazon S3) bucket. As a Cloud Practitioner, which Amazon Simple Storage Service (Amazon S3) feature would you recommend addressing this use-case?

a. S3 Versioning
b. S3 cross-region replication (S3 CRR)
c. Amazon S3 Bucket Policies
d. Amazon Simple Storage Service (Amazon S3) Access Logs

A

d. Amazon Simple Storage Service (Amazon S3) Access Logs

Server access logging provides detailed records for the requests that are made to an Amazon S3 bucket. Server access logs are useful for many applications. For example, access log information can be useful in security and access audits.

It can also help you learn about your customer base and understand your Amazon S3 bill.
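
For illustration, server access logging is enabled per bucket; a boto3 sketch with placeholder bucket names (the target bucket must already grant the appropriate log-delivery permissions):

    import boto3

    s3 = boto3.client("s3")

    # Enable server access logging on a source bucket; request records are
    # delivered as log objects to the target bucket under the prefix.
    s3.put_bucket_logging(
        Bucket="my-media-bucket",  # bucket to audit (placeholder)
        BucketLoggingStatus={
            "LoggingEnabled": {
                "TargetBucket": "my-access-logs-bucket",  # placeholder
                "TargetPrefix": "media-bucket-logs/",
            }
        },
    )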

Incorrect options:

S3 cross-region replication (S3 CRR) - S3 cross-region replication (S3 CRR) enables automatic, asynchronous copying of objects across Amazon S3 buckets. Cross-Region replication (CRR) is used to copy objects across Amazon S3 buckets in different AWS Regions. It does not help with auditing requests made to your bucket.

Amazon S3 Bucket Policies - Amazon S3 Bucket policy is a resource-based AWS Identity and Access Management (IAM) policy. You add a bucket policy to a bucket to grant other AWS accounts or IAM users access permissions for the bucket and the objects in it. Object permissions apply only to the objects that the bucket owner creates. It does not help with auditing requests made to your bucket.

S3 Versioning - Versioning is a means of keeping multiple variants of an object in the same bucket. You can use versioning to preserve, retrieve, and restore every version of every object stored in your Amazon S3 bucket. With versioning, you can easily recover from both unintended user actions and application failures. It does not help with auditing requests made to your bucket.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
39
Q

Which of the following are the security best practices suggested by AWS for Identity and Access Management (IAM)? (Select two)

a. Do not change passwords and access keys once created. This results in failure of connectivity in the application logic
b. Enable AWS Multi-Factor Authentication (AWS MFA) on your AWS root user account. MFA helps give root access to multiple users without actually sharing the root user login credentials
c. Share your AWS account root user credentials only if absolutely necessary for performing an important billing operation
d. When you create IAM policies, grant the least privileges required to perform a task
e. Do not share security credentials between accounts, use IAM roles instead

A

d. When you create IAM policies, grant the least privileges required to perform a task
e. Do not share security credentials between accounts, use IAM roles instead

When you create IAM policies, follow the standard security advice of granting the least privileges, or granting only the permissions required to perform a task. Determine what users (and roles) need to do and then craft policies that allow them to perform only those tasks.

Start with a minimum set of permissions and grant additional permissions as necessary. Doing so is more secure than starting with permissions that are too lenient and then trying to tighten them later.

Don’t share security credentials between accounts to allow users from another AWS account to access resources in your AWS account. Instead, use IAM roles. You can define a role that specifies what permissions the IAM users in the other account are allowed. You can also designate which AWS accounts have the IAM users that are allowed to assume the role.
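
As a sketch of granting least privilege with boto3, the policy below allows only s3:GetObject on a single placeholder bucket; the policy name is also a placeholder:

    import json
    import boto3

    iam = boto3.client("iam")

    # Least privilege: only s3:GetObject, and only on one bucket's objects.
    policy_document = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": "s3:GetObject",
                "Resource": "arn:aws:s3:::my-reports-bucket/*",  # placeholder
            }
        ],
    }

    iam.create_policy(
        PolicyName="ReadReportsBucketOnly",  # placeholder policy name
        PolicyDocument=json.dumps(policy_document),
    )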

Incorrect options:

Share your AWS account root user credentials only if absolutely necessary for performing an important billing operation - Never share your AWS account root user password or access keys with anyone. Don’t use your AWS account root user credentials to access AWS, and don’t give your credentials to anyone else. Instead, create individual users for anyone who needs access to your AWS account. Create an IAM user for yourself as well, give that user administrative permissions, and use that IAM user for all your work.

Enable AWS Multi-Factor Authentication (AWS MFA) on your AWS root user account. MFA helps give root access to multiple users without actually sharing the root user login credentials - The given option just acts as a distractor. For extra security, AWS recommends that you use multi-factor authentication (MFA) for the root user in your account. With MFA, users have a device that generates a response to an authentication challenge. Both the user’s credentials and the device-generated response are required to complete the sign-in process. If a user’s password or access keys are compromised, your account resources are still secure because of the additional authentication requirement.

Do not change passwords and access keys once created. This results in failure of connectivity in the application logic - The given option just acts as a distractor. You should change your own passwords and access keys regularly, and make sure that all IAM users in your account do as well. That way, if a password or access key is compromised without your knowledge, you limit how long the credentials can be used to access your resources. You can apply a custom password policy to your account to require all your IAM users to rotate their AWS Management Console passwords. You can also choose how often they must do so.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
40
Q

Which of the following AWS services is delivered globally rather than regionally?

a. AWS Snowmobile
b. Amazon Elastic File System (Amazon EFS)
c. Amazon Simple Storage Service (Amazon S3) buckets
d. Amazon WorkSpaces

A

d. Amazon WorkSpaces

AWS offers a broad set of global cloud-based products including compute, storage, database, analytics, networking, machine learning and AI, mobile, developer tools, IoT, security, enterprise applications, and much more.

Due to the nature of the service, some AWS services are delivered globally rather than regionally, such as Amazon Route 53, Amazon Chime, Amazon WorkDocs, Amazon WorkMail, Amazon WorkSpaces, Amazon WorkLink.

Amazon WorkSpaces is a managed, secure Desktop-as-a-Service (DaaS) solution. You can use Amazon WorkSpaces to provision either Windows or Linux desktops in just a few minutes and quickly scale to provide thousands of desktops to workers across the globe.

Incorrect options:

Amazon Simple Storage Service (Amazon S3) buckets - You specify an AWS Region when you create your Amazon S3 bucket and hence the S3 buckets are region-specific. For S3 on AWS Outposts, your data is stored in your Outpost on-premises environment, unless you manually choose to transfer it to an AWS Region.

Amazon Elastic File System (Amazon EFS) - Amazon Elastic File System (Amazon EFS) is an AWS Region-based service. You can use AWS DataSync to copy files between different AWS Regions.

AWS Snowmobile - AWS Snowmobile can be made available for use with AWS services in specific AWS regions and hence is a region-specific service. Once all the data is copied into Snowmobile, Snowmobile will be returned to your designated AWS region where your data will be uploaded into the AWS storage services you have selected, such as S3 or Glacier.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
41
Q

An e-learning company wants to build a knowledge graph by leveraging a fully managed database. Which of the following is the best fit for this requirement?

a. Amazon Relational Database Service (Amazon RDS)
b. Amazon DynamoDB
c. Amazon DocumentDB
d. Amazon Neptune

A

d. Amazon Neptune

Amazon Neptune is a fast, reliable, fully managed graph database service that makes it easy to build and run applications that work with highly connected datasets. The core of Amazon Neptune is a purpose-built, high-performance graph database engine optimized for storing billions of relationships and querying the graph with millisecond latency. Amazon Neptune is tailor-made for use cases like knowledge graphs, identity graphs, fraud detection, recommendation engines, social networking, life sciences, and so on.

Amazon Neptune supports popular graph models Property Graph and W3C’s RDF, and their respective query languages Apache TinkerPop Gremlin and SPARQL, allowing you to easily build queries that efficiently navigate highly connected datasets. Neptune powers graph use cases such as recommendation engines, fraud detection, knowledge graphs, drug discovery, and network security.

Amazon Neptune is highly available, with read-replicas, point-in-time recovery, continuous backup to Amazon S3, and replication across Availability Zones. Neptune is secure with support for HTTPS encrypted client connections and encryption at rest. Neptune is fully managed, so you no longer need to worry about database management tasks such as hardware provisioning, software patching, setup, configuration, or backups.

Fraud Detection with Amazon Neptune: https://aws.amazon.com/neptune/
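
As an illustration only, an application could query the knowledge graph with the open-source gremlinpython client, since Neptune exposes a standard Gremlin WebSocket endpoint; the endpoint, labels, and properties below are hypothetical:

    # pip install gremlinpython
    from gremlin_python.driver import client

    # Hypothetical Neptune cluster endpoint.
    gremlin = client.Client(
        "wss://my-neptune-cluster.cluster-abc.us-east-1.neptune.amazonaws.com:8182/gremlin",
        "g",
    )

    # Find courses that cover a given topic in the knowledge graph
    # (vertex/edge labels are made up for this sketch).
    results = gremlin.submit(
        "g.V().has('topic', 'name', 'linear algebra').in('covers').values('title')"
    ).all().result()
    print(results)

    gremlin.close()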

Incorrect options:

Amazon DocumentDB - Amazon DocumentDB (with MongoDB compatibility) is a fast, scalable, highly available, and fully managed document database service that supports MongoDB workloads. As a document database, Amazon DocumentDB makes it easy to store, query, and index JSON data.

Amazon DocumentDB is a non-relational database service designed from the ground-up to give you the performance, scalability, and availability you need when operating mission-critical MongoDB workloads at scale. In Amazon DocumentDB, the storage and compute are decoupled, allowing each to scale independently, and you can increase the read capacity to millions of requests per second by adding up to 15 low latency read replicas in minutes, regardless of the size of your data.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
42
Q

Which AWS service can inspect the web requests forwarded to an Amazon CloudFront distribution fronting any HTTP web server?

a. Elastic Load Balancing (ELB)
b. Amazon Inspector
c. Amazon GuardDuty
d. AWS Web Application Firewall (AWS WAF)

A

d. AWS Web Application Firewall (AWS WAF)

AWS Web Application Firewall (AWS WAF) is a web application firewall that helps protect web applications from attacks by allowing you to configure rules that allow, block, or monitor (count) web requests based on conditions that you define. These conditions include IP addresses, HTTP headers, HTTP body, URI strings, SQL injection, and cross-site scripting (XSS).

AWS WAF is a web application firewall that lets you monitor the HTTP and HTTPS requests that are forwarded to Amazon CloudFront and lets you control access to your content.

When you use the AWS web application firewall (AWS WAF) on Amazon CloudFront, your rules run in all AWS Edge Locations, located around the world close to your end-users. This means security doesn’t come at the expense of performance. Blocked requests are stopped before they reach your web servers.

How AWS WAF works: https://aws.amazon.com/waf/

Incorrect options:

Amazon GuardDuty - Amazon GuardDuty is a threat detection service that continuously monitors for malicious or unauthorized behavior to help you protect your AWS accounts and workloads. It does not inspect Amazon CloudFront distributions.

Amazon Inspector - Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. Amazon Inspector automatically assesses applications for exposure, vulnerabilities, and deviations from best practices. Amazon Inspector security assessments help you check for unintended network accessibility of your Amazon EC2 instances and for vulnerabilities on those EC2 instances.

Elastic Load Balancing (ELB) - Elastic Load Balancing (ELB) automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, and IP addresses. It does not inspect Amazon CloudFront distributions.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
43
Q

The DevOps team at an e-commerce company is trying to debug performance issues for its serverless application built using a microservices architecture. As a Cloud Practitioner, which AWS service would you recommend addressing this use-case?
a. AWS Trusted Advisor
b. AWS X-Ray
c. Amazon Pinpoint
d. AWS CloudFormation

A

b. AWS X-Ray

You can use AWS X-Ray to analyze and debug serverless and distributed applications such as those built using a microservices architecture. With X-Ray, you can understand how your application and its underlying services are performing to identify and troubleshoot the root cause of performance issues and errors.
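
As a sketch (not from the original explanation), a Python microservice can be instrumented with the AWS X-Ray SDK so that downstream AWS SDK calls appear as subsegments in the service map; the bucket and segment names are placeholders:

    # pip install aws-xray-sdk
    import boto3
    from aws_xray_sdk.core import patch_all, xray_recorder

    # Patch supported libraries (including boto3) so their calls are traced.
    patch_all()

    # Outside AWS Lambda you must open a segment yourself; in Lambda,
    # the service creates the segment for you.
    xray_recorder.begin_segment("order-service")

    @xray_recorder.capture("fetch_orders")  # records this call as a subsegment
    def fetch_orders():
        s3 = boto3.client("s3")
        return s3.list_objects_v2(Bucket="my-orders-bucket")  # placeholder

    fetch_orders()
    xray_recorder.end_segment()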

Incorrect options:

AWS Trusted Advisor - AWS Trusted Advisor is an online tool that provides real-time guidance to help you provision your resources following AWS best practices on cost optimization, security, fault tolerance, service limits, and performance improvement. Whether establishing new workflows, developing applications, or as part of ongoing improvement, recommendations provided by Trusted Advisor regularly help keep your solutions provisioned optimally. Trusted Advisor cannot be used to debug performance issues for a serverless application built using a microservices architecture.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
44
Q

Which of the following services/tools offers a user-friendly graphical user interface to manage AWS Snowball devices without a need for command-line interface or REST APIs?

a. AWS OpsWorks
b. AWS Transfer Family
c. AWS OpsHub
d. AppStream 2.0

A

c. AWS OpsHub

AWS OpsHub is a graphical user interface you can use to manage your AWS Snowball devices, enabling you to rapidly deploy edge computing workloads and simplify data migration to the cloud. With just a few clicks in AWS OpsHub, you have the full functionality of the Snowball devices at your fingertips; you can unlock and configure devices, drag-and-drop data to devices, launch applications, and monitor device metrics.

Previously, customers operated Snowball devices by either entering commands into a command-line interface or by using REST APIs. Now with AWS OpsHub, you have an easier way to deploy and manage even large fleets of Snowball devices, all while operating without an internet connection.

AWS OpsHub takes all the existing operations available in the Snowball API and presents them as a simple graphical user interface. This interface helps you quickly and easily migrate data to the AWS Cloud and deploy edge computing applications on Snow Family Devices.

AWS OpsHub provides a unified view of AWS services that are running on Snow Family Devices and automates operational tasks through AWS Systems Manager. With AWS OpsHub, users with different levels of technical expertise can easily manage a large number of Snow Family Devices. With just a few clicks, you can unlock devices, transfer files, manage Amazon EC2 instances, and monitor device metrics.

When your Snow device arrives at your site, you download, install, and launch the AWS OpsHub application on a client machine, such as a laptop. After installation, you can unlock the device and start managing it and using supported AWS services locally. AWS OpsHub provides a dashboard that summarizes key metrics such as storage capacity and active instances on your device. It also provides a selection of the AWS services that are supported on the Snow Family Devices. Within minutes, you can begin transferring files to the device.

Incorrect options:

AppStream 2.0 - Amazon AppStream 2.0 is a fully managed non-persistent application and desktop streaming service. You centrally manage your desktop applications on AppStream 2.0 and securely deliver them to any computer. You can easily scale to any number of users across the globe without acquiring, provisioning, and operating hardware or infrastructure. AppStream 2.0 is built on AWS, so you benefit from a data center and network architecture designed for the most security-sensitive organizations. This is not a tool for AWS Snowball devices.

AWS OpsWorks - AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers. OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed, and managed across your Amazon EC2 instances or on-premises compute environments.

AWS Transfer Family - The AWS Transfer Family is the aggregated name of AWS Transfer for SFTP, AWS Transfer for FTPS, and AWS Transfer for FTP. The AWS Transfer Family offers fully managed support for the transfer of files over SFTP, FTPS, and FTP directly into and out of Amazon S3 or Amazon EFS.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
45
Q

A leading research firm needs to access information available in old patents and documents (such as PDFs, Text Files, Word documents, etc) present in its huge knowledge base. The firm is looking for a powerful search tool that can dig into these knowledge resources and return the most relevant files/documents. Which of the following is the correct service to address this requirement?

a. Amazon Personalize
b. Amazon Lex
c. Amazon Kendra
d. Amazon Comprehend

A

c. Amazon Kendra

Amazon Kendra is an intelligent search service powered by machine learning. Kendra reimagines enterprise search for your websites and applications so your employees and customers can easily find the content they are looking for, even when it’s scattered across multiple locations and content repositories within your organization.

Using Amazon Kendra, you can stop searching through troves of unstructured data and discover the right answers to your questions, when you need them. Amazon Kendra is a fully managed service, so there are no servers to provision, and no machine learning models to build, train, or deploy. Amazon Kendra supports unstructured and semi-structured data in .html, MS Office (.doc, .ppt), PDF, and text formats.

Unlike conventional search technology, natural language search capabilities return the answers you’re looking for quickly and accurately, no matter where the information lives within your organization.

Amazon Kendra’s deep learning models come pre-trained across 14 industry domains, allowing it to extract more accurate answers across a wide range of business use cases from the get-go. You can also fine-tune search results by manually adjusting the importance of data sources, authors, freshness, or using custom tags.
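
Once the knowledge base is indexed, querying it is a single API call; a boto3 sketch with a placeholder index ID and question:

    import boto3

    kendra = boto3.client("kendra")

    # Ask a natural-language question against an existing Kendra index.
    response = kendra.query(
        IndexId="11111111-2222-3333-4444-555555555555",  # placeholder index ID
        QueryText="Which patents cover solid-state battery electrolytes?",
    )

    for item in response["ResultItems"]:
        print(item["Type"], item.get("DocumentTitle", {}).get("Text"))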

Incorrect options:

Amazon Personalize - Amazon Personalize enables developers to build applications with the same machine learning (ML) technology used by Amazon.com for real-time personalized recommendations. Amazon Personalize makes it easy for developers to build applications capable of delivering a wide array of personalization experiences, including specific product recommendations, personalized product re-ranking, and customized direct marketing.

Amazon Comprehend - Amazon Comprehend is a natural-language processing (NLP) service that uses machine learning to uncover information in unstructured data. Instead of combing through documents, the process is simplified and unseen information is easier to understand.

Amazon Kendra provides ML-powered search capabilities for all unstructured data customers store in AWS. Amazon Kendra offers easy-to-use native connectors to popular AWS repository types such as S3 and RDS databases. Other AI services such as Amazon Comprehend, Amazon Transcribe, and Amazon Comprehend Medical can be used to pre-process documents, generate searchable text, extract entities, and enrich their metadata for more specialized search experiences.

Amazon Lex - Amazon Lex is a service for building conversational interfaces into any application using voice and text. Amazon Lex provides the advanced deep learning functionalities of automatic speech recognition (ASR) for converting speech to text, and natural language understanding (NLU) to recognize the intent of the text, to enable you to build applications with highly engaging user experiences and lifelike conversational interactions.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
46
Q

Which AWS service can be used to subscribe to an RSS feed to be notified of the status of all AWS service interruptions?

a. AWS Health Dashboard - Service Health
b. Amazon Simple Notification Service (Amazon SNS)
c. AWS Health Dashboard - Your Account Health
d. AWS Lambda

A

a. AWS Health Dashboard - Service Health

The AWS Health Dashboard – Service health is the single place to learn about the availability and operations of AWS services. You can view the overall status of AWS services, and you can sign in to view personalized communications about your particular AWS account or organization.

You can check on this page https://health.aws.amazon.com/health/status to get current status information.

The AWS Health Dashboard – Service health lets you subscribe to an RSS feed to be notified of interruptions to each service.

Incorrect options:

Amazon Simple Notification Service (Amazon SNS) - Amazon Simple Notification Service (Amazon SNS) is a highly available, durable, secure, fully managed pub/sub messaging service that enables you to decouple microservices, distributed systems, and serverless applications. It can be used to deliver notifications, but it does not provide the current services’ status.

AWS Health Dashboard - Your Account Health - Your AWS Health Dashboard – Your Account Health provides alerts and remediation guidance when AWS is experiencing events that may impact you.

AWS Lambda - AWS Lambda lets you run code without provisioning or managing servers. It does not provide all AWS services’ status.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
47
Q

An e-commerce company would like to receive alerts when the Amazon EC2 Reserved Instances (RI) utilization drops below a certain threshold. Which AWS service can be used to address this use-case?

a. AWS Trusted Advisor
b. AWS Systems Manager
c. AWS Budgets
d. AWS Cost Explorer

A

c. AWS Budgets

AWS Budgets gives you the ability to set custom budgets that alert you when your costs or usage exceed (or are forecasted to exceed) your budgeted amount. You can also use AWS Budgets to set reservation utilization or coverage targets and receive alerts when your utilization drops below the threshold you define. You can define a utilization threshold and receive alerts when your Reserved Instances (RI) usage falls below that threshold. This lets you see if your Reserved Instances (RI) are unused or under-utilized. Reservation alerts are supported for Amazon EC2, Amazon RDS, Amazon Redshift, Amazon ElastiCache, and Amazon Elasticsearch reservations.

AWS Budgets Overview: https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/budgets-managing-costs.html
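
A hedged boto3 sketch of such an alert, assuming the RI_UTILIZATION budget type with placeholder account ID, service filter, and email address:

    import boto3

    budgets = boto3.client("budgets")

    # An RI utilization budget; the budget limit for utilization budgets
    # defaults to 100 (percent), and the notification fires when actual
    # utilization drops below the 80% threshold.
    budgets.create_budget(
        AccountId="123456789012",  # placeholder account ID
        Budget={
            "BudgetName": "ec2-ri-utilization",
            "BudgetType": "RI_UTILIZATION",
            "TimeUnit": "MONTHLY",
            "CostFilters": {"Service": ["Amazon Elastic Compute Cloud - Compute"]},
        },
        NotificationsWithSubscribers=[
            {
                "Notification": {
                    "NotificationType": "ACTUAL",
                    "ComparisonOperator": "LESS_THAN",
                    "Threshold": 80.0,
                },
                "Subscribers": [
                    {"SubscriptionType": "EMAIL", "Address": "ops@example.com"}
                ],
            }
        ],
    )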

Incorrect options:

AWS Trusted Advisor - AWS Trusted Advisor is an online tool that provides real-time guidance to help provision your resources following AWS best practices. Whether establishing new workflows, developing applications, or as part of ongoing improvement, recommendations provided by AWS Trusted Advisor regularly help keep your solutions provisioned optimally. AWS Trusted Advisor analyzes your AWS environment and provides best practice recommendations in five categories: Cost Optimization, Performance, Security, Fault Tolerance, Service Limits.

AWS Cost Explorer - AWS Cost Explorer has an easy-to-use interface that lets you visualize, understand, and manage your AWS costs and usage over time. AWS Cost Explorer includes a default report that helps you visualize the costs and usage associated with your top five cost-accruing AWS services, and gives you a detailed breakdown of all services in the table view. The reports let you adjust the time range to view historical data going back up to twelve months to gain an understanding of your cost trends. You can use AWS Cost Explorer to generate the RI utilization and RI coverage reports, but you cannot use it to generate alerts for under-utilized Reserved Instances (RI).

AWS Systems Manager - AWS Systems Manager gives you visibility and control of your infrastructure on AWS. AWS Systems Manager provides a unified user interface so you can view operational data from multiple AWS services and allows you to automate operational tasks such as running commands, managing patches, and configuring servers across AWS Cloud as well as on-premises infrastructure.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
48
Q

Which of the following AWS services specialize in data migration from on-premises to AWS Cloud? (Select two)

a. AWS Transit Gateway
b. AWS Direct Connect
c. AWS Snowball
d. AWS Site-to-Site VPN
e. AWS Database Migration Service (AWS DMS)

A

c. AWS Snowball
e. AWS Database Migration Service (AWS DMS)

AWS Snowball is a data transport solution that accelerates moving terabytes to petabytes of data into and out of AWS services using storage devices designed to be secure for physical transport.

AWS Database Migration Service (AWS DMS) helps you migrate databases from on-premises to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. The AWS Database Migration Service (AWS DMS) can migrate your data to and from the most widely used commercial and open-source databases.

You can do both homogeneous and heterogeneous database migration using AWS Database Migration Service (AWS DMS): https://aws.amazon.com/dms/

Incorrect options:

AWS Site-to-Site VPN - AWS Site-to-Site VPN creates a secure connection between your data center or branch office and your AWS cloud resources. This connection goes over the public internet. AWS Site-to-Site VPN is a connectivity service; it does not specialize in data migration.

AWS Direct Connect - AWS Direct Connect creates a dedicated private connection from a remote network to your VPC. This is a private connection and does not use the public internet. It takes at least a month to establish this connection. Direct Connect is a connectivity service; it does not specialize in data migration.

AWS Transit Gateway - AWS Transit Gateway connects VPCs and on-premises networks through a central hub. This simplifies your network and puts an end to complex peering relationships. It acts as a cloud router – each new connection is only made once. As you expand globally, inter-Region peering connects AWS Transit Gateways using the AWS global network. Your data is automatically encrypted and never travels over the public internet. Transit Gateway is a connectivity service and it does not specialize in data migration.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
49
Q

A company stores all its media files in Amazon Simple Storage Service (Amazon S3) which is accessed by an application hosted on Amazon EC2 instances. The company wants to convert these media files into formats that users can playback on mobile devices.

Which AWS service/tool helps you achieve this requirement?

a. Amazon Comprehend
b. Amazon Elastic Transcoder
c. Amazon Transcribe
d. AWS Glue

A

b. Amazon Elastic Transcoder

Amazon Elastic Transcoder lets you convert media files that you have stored in Amazon Simple Storage Service (Amazon S3) into media files in the formats required by consumer playback devices. For example, you can convert large, high-quality digital media files into formats that users can playback on mobile devices, tablets, web browsers, and connected televisions.

Amazon Elastic Transcoder manages all aspects of the media transcoding process for you transparently and automatically. There’s no need to administer software, scale hardware, tune performance, or otherwise manage transcoding infrastructure. You simply create a transcoding “job” specifying the location of your source media file and how you want it transcoded. Amazon Elastic Transcoder also provides transcoding presets for popular output formats, which means that you don’t need to guess about which settings work best on particular devices. All these features are available via service API, AWS SDKs and the AWS Management Console.
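
As a sketch of "create a job specifying the source file and how to transcode it" via boto3; the pipeline ID, object keys, and preset ID are placeholders (Elastic Transcoder ships system presets for common device formats):

    import boto3

    transcoder = boto3.client("elastictranscoder")

    # Create a transcoding job in an existing pipeline.
    transcoder.create_job(
        PipelineId="1111111111111-abcde1",        # placeholder pipeline ID
        Input={"Key": "uploads/movie-trailer.mov"},
        Outputs=[
            {
                "Key": "mobile/movie-trailer.mp4",
                "PresetId": "1351620000001-000010",  # a system preset (e.g. generic 720p)
            }
        ],
    )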

Incorrect options:

Amazon Transcribe - Amazon Transcribe makes it easy for developers to add speech to text capabilities to their applications. Audio data is virtually impossible for computers to search and analyze. Therefore, recorded speech needs to be converted to text before it can be used in applications.

Amazon Transcribe uses a deep learning process called automatic speech recognition (ASR) to convert speech to text quickly and accurately. Amazon Transcribe can be used to transcribe customer service calls, automate subtitling, and generate metadata for media assets to create a fully searchable archive.

Amazon Comprehend - Amazon Comprehend is a natural language processing (NLP) service that uses machine learning to find insights and relationships in a text. Amazon Comprehend uses machine learning to help you uncover the insights and relationships in your unstructured data. The service identifies the language of the text; extracts key phrases, places, people, brands, or events; understands how positive or negative the text is; analyzes text using tokenization and parts of speech, and automatically organizes a collection of text files by topic.

AWS Glue - AWS Glue is a serverless data integration service that makes it easy to discover, prepare, and combine data for analytics, machine learning, and application development. AWS Glue provides all of the capabilities needed for data integration, so you can start analyzing your data and putting it to use in minutes instead of months. You should use AWS Glue to discover properties of the data you own, transform it, and prepare it for analytics. Glue can automatically discover both structured and semi-structured data stored in your data lake on Amazon S3, data warehouse in Amazon Redshift, and various databases running on AWS.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
50
Q

An e-commerce company would like to build a chatbot for its customer service using Natural Language Understanding (NLU). As a Cloud Practitioner, which AWS service would you use?

a. Amazon SageMaker
b. Amazon Lex
c. Amazon Rekognition
d. Amazon Comprehend

A

b. Amazon Lex

Amazon Lex is a service for building conversational interfaces using voice and text. Powered by the same conversational engine as Amazon Alexa, Amazon Lex provides high-quality speech recognition and language understanding capabilities, enabling the addition of sophisticated, natural language ‘chatbots’ to new and existing applications.

Amazon Lex Use Cases: https://aws.amazon.com/lex/
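
For illustration, sending one turn of user text to an already-deployed Lex V2 bot via boto3 (the bot ID, alias ID, and session ID are placeholders):

    import boto3

    lex = boto3.client("lexv2-runtime")

    # Send one turn of user text to a deployed bot.
    response = lex.recognize_text(
        botId="ABCDEFGHIJ",       # placeholder bot ID
        botAliasId="TSTALIASID",  # placeholder alias ID
        localeId="en_US",
        sessionId="customer-42",  # tracks the conversation per user
        text="Where is my order?",
    )

    for message in response.get("messages", []):
        print(message["content"])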

Incorrect options:

Amazon Rekognition - With Amazon Rekognition, you can identify objects, people, text, scenes, and activities in images and videos and also detect any inappropriate content. Amazon Rekognition also provides highly accurate facial analysis and facial search capabilities that you can use to detect, analyze, and compare faces for a wide variety of user verification, people counting, and public safety use cases.

Amazon SageMaker - Amazon SageMaker is a fully-managed platform that enables developers and data scientists to quickly and easily build, train, and deploy machine learning models at any scale. Amazon SageMaker removes all the barriers that typically slow down developers who want to use machine learning.

Amazon Comprehend - Amazon Comprehend is a natural language processing (NLP) service that uses machine learning to find meaning and insights in text. Natural Language Processing (NLP) is a way for computers to analyze, understand, and derive meaning from textual information in a smart and useful way. By utilizing Natural Language Processing (NLP), you can extract important phrases, sentiment, syntax, key entities such as brand, date, location, person, etc., and the language of the text.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
51
Q

Which of the following AWS Support plans provide access to only core checks from the AWS Trusted Advisor Best Practice Checks? (Select two)
a. AWS Developer Support
b. AWS Basic Support
c. AWS Enterprise Support
d. AWS Business Support
e. AWS Enterprise On-Ramp Support

A

a. AWS Developer Support

b. AWS Basic Support

AWS Basic Support and AWS Developer Support provide access only to the core checks from the AWS Trusted Advisor Best Practice Checks. The AWS Business, Enterprise On-Ramp, and Enterprise Support plans provide access to the full set of Trusted Advisor checks.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
52
Q

How is Amazon EC2 different from traditional hosting systems? (Select two)

a. Amazon EC2 caters more towards groups of users with similar system requirements so that the server resources are shared across multiple users and the cost is reduced
b. With Amazon EC2, users risk overbuying resources
c. Amazon EC2 can scale with changing computing requirements
d. With Amazon EC2, developers can launch and terminate the instances anytime they need to
e. Amazon EC2 provides a pre-configured instance for a fixed monthly cost

A

c. Amazon EC2 can scale with changing computing requirements
d. With Amazon EC2, developers can launch and terminate the instances anytime they need to

Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud with support for per-second billing. It is the easiest way to provision servers on AWS Cloud and access the underlying OS.

Amazon EC2 differs fundamentally from traditional on-premises hosting systems in the flexibility, control, and significant cost savings it offers developers, allowing them to treat an Amazon EC2 instance as their own customized server backed by the robust infrastructure of the AWS Cloud.

When computing requirements unexpectedly change, Amazon EC2 can be scaled to match the requirements. Developers can control how many EC2 instances are in use at any given point in time.

Using Amazon EC2, developers can choose not only to launch, terminate, start or shut down instances at any time, but they can also completely customize the configuration of their instances to suit their needs.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
53
Q

A Cloud Practitioner wants to use CIDR block notation when providing an IP address range. Which of the following AWS network services/utilities allow this feature? (Select two)

a. Network access control list (network ACL)
b. Security group
c. AWS Cost Explorer
d. Amazon Simple Storage Service (Amazon S3)
e. AWS Lambda

A

a. Network access control list (network ACL)
b. Security group

Both security group rules and network ACL rules specify the IP address ranges they apply to in CIDR block notation (for example, 10.0.0.0/16).
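
For example, adding a security group rule whose source range is given in CIDR notation, via boto3 (the group ID is a placeholder):

    import boto3

    ec2 = boto3.client("ec2")

    # Allow inbound SSH from one office network, expressed as a CIDR block.
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",  # placeholder security group ID
        IpPermissions=[
            {
                "IpProtocol": "tcp",
                "FromPort": 22,
                "ToPort": 22,
                "IpRanges": [{"CidrIp": "203.0.113.0/24"}],  # CIDR notation
            }
        ],
    )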

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
54
Q

An organization has a complex IT architecture involving a lot of system dependencies and it wants to track the history of changes to each resource. Which AWS service will help the organization track the history of configuration changes for all the resources?

a. AWS Service Catalog
b. AWS CloudFormation
c. AWS CloudTrail
d. AWS Config

A

d. AWS Config

AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. Config continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations. Think resource-specific history, audit, and compliance; think Config.

With AWS Config, you can do the following:

1. Evaluate your AWS resource configurations for desired settings.
2. Get a snapshot of the current configurations of the supported resources that are associated with your AWS account.
3. Retrieve configurations of one or more resources that exist in your account.
4. Retrieve historical configurations of one or more resources.
5. Receive a notification whenever a resource is created, modified, or deleted.
6. View relationships between resources. For example, you might want to find all resources that use a particular security group.
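
Retrieving the configuration history of a single resource (point 4 above) is one boto3 call; the resource ID below is a placeholder:

    import boto3

    config = boto3.client("config")

    # Retrieve the recorded configuration history of one resource,
    # e.g. a security group.
    history = config.get_resource_config_history(
        resourceType="AWS::EC2::SecurityGroup",
        resourceId="sg-0123456789abcdef0",  # placeholder resource ID
    )

    for item in history["configurationItems"]:
        print(item["configurationItemCaptureTime"], item["configurationItemStatus"])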

Incorrect options:

AWS CloudTrail - AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides the event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command-line tools, and other AWS services. Think account-specific activity and audit; think CloudTrail. You cannot use CloudTrail to track changes to each resource on AWS.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
55
Q

Which of the following statements are CORRECT regarding AWS Global Accelerator? (Select two)

a. AWS Global Accelerator provides static IP addresses that act as a fixed entry point to your applications
b. AWS Global Accelerator uses the AWS global network and its edge locations. But the edge locations used by Global Accelerator are different from Amazon CloudFront edge locations
c. AWS Global Accelerator can be used to host static websites
d. AWS Global Accelerator is a good fit for non-HTTP use cases
e. AWS Global Accelerator cannot be configured with an Elastic Load Balancer (ELB)

A

a. AWS Global Accelerator provides static IP addresses that act as a fixed entry point to your applications
d. AWS Global Accelerator is a good fit for non-HTTP use cases

AWS Global Accelerator is a networking service that helps you improve the availability and performance of the applications that you offer to your global users. Global Accelerator improves performance for a wide range of applications over TCP or UDP by proxying packets at the edge to applications running in one or more AWS Regions.

How AWS Global Accelerator Works: https://aws.amazon.com/global-accelerator/

AWS Global Accelerator is a good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT), or Voice over IP, as well as for HTTP use cases that specifically require static IP addresses or deterministic, fast regional failover.

It provides static IP addresses that provide a fixed entry point to your applications and eliminate the complexity of managing specific IP addresses for different AWS Regions and Availability Zones (AZs).

Incorrect options:

AWS Global Accelerator uses the AWS global network and its edge locations. But the edge locations used by Global Accelerator are different from Amazon CloudFront edge locations - AWS Global Accelerator and Amazon CloudFront use the same edge locations.

AWS Global Accelerator cannot be configured with an Elastic Load Balancer (ELB) - A regional ELB load balancer is an ideal target for AWS Global Accelerator. AWS Global Accelerator complements ELB by extending these capabilities beyond a single AWS Region, allowing you to provide a global interface for your applications in any number of Regions.

AWS Global Accelerator can be used to host static websites - AWS Global Accelerator cannot host websites; it is Amazon S3 that can host static websites, so this option is incorrect.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
56
Q

A media company wants to enable customized content suggestions for the users of its movie streaming platform. Which AWS service can provide these personalized recommendations based on historic data?

a. Amazon Comprehend
b. Amazon Personalize
c. Amazon SageMaker
d. Amazon Customize

A

b. Amazon Personalize

Amazon Personalize enables developers to build applications with the same machine learning (ML) technology used by Amazon.com for real-time personalized recommendations. Amazon Personalize can be used to personalize the end-user experience over any digital channel. Examples include product recommendations for e-commerce, news articles and content recommendation for publishing, media, and social networks, hotel recommendations for travel websites, credit card recommendations for banks, and match recommendations for dating sites. These recommendations and personalized experiences can be delivered over websites, mobile apps, or email/messaging. Amazon Personalize can also be used to customize the user experience when user interaction is over a physical channel, e.g., a meal delivery company could personalize weekly meals to users in a subscription plan.

Amazon Personalize supports the following key use cases:

  1. Personalized recommendations
  2. Similar items
  3. Personalized reranking i.e. rerank a list of items for a user
  4. Personalized promotions/notifications
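
As an illustration, fetching real-time recommendations from an already-deployed Personalize campaign is a single boto3 call (the campaign ARN and user ID are placeholders):

    import boto3

    personalize = boto3.client("personalize-runtime")

    # Fetch real-time movie recommendations for one user from a
    # deployed campaign.
    response = personalize.get_recommendations(
        campaignArn="arn:aws:personalize:us-east-1:123456789012:campaign/movies",
        userId="user-42",
        numResults=10,
    )

    for item in response["itemList"]:
        print(item["itemId"])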

Incorrect options:

Amazon SageMaker - Amazon SageMaker is a fully managed service that provides every developer and data scientist with the ability to build, train, and deploy machine learning (ML) models quickly. Amazon SageMaker removes the heavy lifting from each step of the machine learning process to make it easier to develop high-quality models.

Amazon Customize - There is no such service as Amazon Customize. This option has been added as a distractor.

Amazon Comprehend - Amazon Comprehend is a natural-language processing (NLP) service that uses machine learning to uncover information in unstructured data. Instead of combing through documents, the process is simplified and unseen information is easier to understand.

The service can identify critical elements in data, including references to language, people, and places, and the text files can be categorized by relevant topics. In real-time, you can automatically and accurately detect customer sentiment in your content.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
57
Q

A research lab wants to optimize the caching capabilities for its scientific computations application running on Amazon Elastic Compute Cloud (Amazon EC2) instances. Which Amazon Elastic Compute Cloud (Amazon EC2) storage option is best suited for this use-case?

a. Instance Store
b. Amazon Elastic Block Store (Amazon EBS)
c. Amazon Elastic File System (Amazon EFS)
d. Amazon Simple Storage Service (Amazon S3)

A

a. Instance Store

An Instance Store provides temporary block-level storage for your Amazon EC2 instance. This storage is located on disks that are physically attached to the host computer. Instance store is ideal for the temporary storage of information that changes frequently, such as buffers, caches, scratch data, and other temporary content, or for data that is replicated across a fleet of instances, such as a load-balanced pool of web servers. Instance store storage is temporary; data is lost if the instance experiences a failure or is terminated.

Instance Store Overview: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
58
Q

A production company with predictable usage would like to reduce the cost of its Amazon Elastic Compute Cloud (Amazon EC2) instances by using reserved instances (RI). Which of the following length terms are available for Amazon EC2 reserved instances (RI)? (Select Two)

a. 2 years
b. 6 months
c. 3 years
d. 1 year
e. 5 years

A

c. 3 years
d. 1 year

Amazon EC2 Reserved Instances (RI) are offered with 1-year and 3-year terms; the 3-year term provides the larger discount over On-Demand pricing.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
59
Q

Which of the following is a container service of AWS?

a. AWS Elastic Beanstalk
b. AWS Fargate
c. Amazon Simple Notification Service (Amazon SNS)
d. Amazon SageMaker

A

b. AWS Fargate

AWS Fargate is a serverless compute engine for containers that works with both Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS). Fargate makes it easy for you to focus on building your applications. Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design.

How AWS Fargate Works: https://aws.amazon.com/fargate/
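
A boto3 sketch of launching a container on Fargate; note there are no servers to pick, only a task definition and networking (the cluster, task definition, and subnet are placeholders):

    import boto3

    ecs = boto3.client("ecs")

    # Run a container task on Fargate: no EC2 instances to manage.
    ecs.run_task(
        cluster="my-cluster",           # placeholder cluster name
        launchType="FARGATE",
        taskDefinition="my-web-app:1",  # family:revision (placeholder)
        count=1,
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"],  # placeholder
                "assignPublicIp": "ENABLED",
            }
        },
    )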

Incorrect options:

AWS Elastic Beanstalk - AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services. You simply upload your code and Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, auto-scaling to application health monitoring. Beanstalk provisions servers so it is not a serverless service.

Amazon Simple Notification Service (Amazon SNS) - Amazon Simple Notification Service (Amazon SNS) is a highly available, durable, secure, fully managed pub/sub messaging service that enables you to decouple microservices, distributed systems, and serverless applications.

Amazon SageMaker - Amazon SageMaker is a fully managed service that provides every developer and data scientist with the ability to build, train, and deploy machine learning (ML) models quickly. SageMaker removes the heavy lifting from each step of the machine learning process to make it easier to develop high-quality models.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
60
Q

Under the AWS Shared Responsibility Model, which of the following is a shared responsibility of both AWS and the customer?
a. Infrastructure maintenance of Amazon Simple Storage Service (Amazon S3) storage servers
b. Availability Zone (AZ) infrastructure maintenance
c. Guarantee data separation among various AWS customers
d. Configuration Management

A

d. Configuration Management

tl;dr: AWS implements the configurations they support, and the customer sets the configurations they want.

Security and Compliance is a shared responsibility between AWS and the customer. This shared model can help relieve the customer’s operational burden as AWS operates, manages and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates.

Controls that apply to both the infrastructure layer and customer layers, but in completely separate contexts or perspectives are called shared controls. In a shared control, AWS provides the requirements for the infrastructure and the customer must provide their own control implementation within their use of AWS services. Configuration Management forms a part of shared controls - AWS maintains the configuration of its infrastructure devices, but a customer is responsible for configuring their own guest operating systems, databases, and applications.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
61
Q

An organization maintains separate Amazon Virtual Private Clouds (Amazon VPC) for each of its departments. With expanding business, the organization now wants to connect all Amazon Virtual Private Clouds (Amazon VPC) for better departmental collaboration. Which AWS service will help the organization tackle the issue effectively?

a. VPC peering connection
b. AWS Site-to-Site VPN
c. AWS Direct Connect
d. AWS Transit Gateway

A

d. AWS Transit Gateway

AWS Transit Gateway connects Amazon Virtual Private Clouds (Amazon VPC) and on-premises networks through a central hub. This simplifies your network and puts an end to complex peering relationships. It acts as a cloud router – each new connection is only made once. As you expand globally, inter-Region peering connects AWS Transit Gateways using the AWS global network. Your data is automatically encrypted and never travels over the public internet.

How AWS Transit Gateway can simplify your network: https://aws.amazon.com/transit-gateway/
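
A minimal boto3 sketch of the hub-and-spoke setup, assuming placeholder VPC and subnet IDs:

    import boto3

    ec2 = boto3.client("ec2")

    # Create the central hub once...
    tgw = ec2.create_transit_gateway(Description="departmental hub")
    tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

    # ...then attach each departmental VPC to it; no pairwise
    # peering mesh is required.
    ec2.create_transit_gateway_vpc_attachment(
        TransitGatewayId=tgw_id,
        VpcId="vpc-0123456789abcdef0",          # placeholder VPC ID
        SubnetIds=["subnet-0123456789abcdef0"], # placeholder subnet ID
    )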

Incorrect options:

VPC peering connection - A VPC peering connection is a networking connection between two Amazon Virtual Private Clouds (Amazon VPC) that enables you to route traffic between them privately. VPC peering connection is not transitive, a separate VPC peering connection has to be made between two VPCs that need to talk to each other. With growing VPCs, this gets difficult to manage.

Transitive VPC peering connection is not allowed: https://docs.aws.amazon.com/vpc/latest/peering/invalid-peering-configurations.html

AWS Direct Connect - AWS Direct Connect creates a dedicated private connection from a remote network to your VPC. This is a private connection and does not use the public internet. It takes at least a month to establish this connection. AWS Direct Connect cannot be used to interconnect VPCs.

AWS Site-to-Site VPN - AWS Site-to-Site VPN creates a secure connection between your data center or branch office and your AWS cloud resources. This connection goes over the public internet. AWS Site-to-Site VPN cannot be used to interconnect VPCs.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
62
Q

A company runs an application on a fleet of EC2 instances. The company wants to automate the traditional maintenance job of running timely assessments and checking for OS vulnerabilities. As a Cloud Practitioner, which service will you suggest for this use case?
a. Amazon GuardDuty
b. AWS Shield
c. Amazon Inspector
d. Amazon Macie

A

c. Amazon Inspector

Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on your Amazon EC2 instances. Amazon Inspector automatically assesses applications for exposure, vulnerabilities, and deviations from best practices. After performing an assessment, Amazon Inspector produces a detailed list of security findings prioritized by level of severity. These findings can be reviewed directly or as part of detailed assessment reports which are available via the Amazon Inspector console or API.
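
As a hedged illustration of that workflow, here is a minimal boto3 sketch that retrieves the prioritized findings of a completed Inspector Classic assessment run; the run ARN is a placeholder:

    import boto3

    inspector = boto3.client("inspector", region_name="us-east-1")

    # Placeholder ARN of an assessment run that has already completed.
    run_arn = ("arn:aws:inspector:us-east-1:111122223333:"
               "target/0-aaaa/template/0-bbbb/run/0-cccc")

    # List the findings produced by the run, then fetch details for a
    # small batch (DescribeFindings takes a limited number of ARNs per call).
    finding_arns = inspector.list_findings(assessmentRunArns=[run_arn])["findingArns"]
    for finding in inspector.describe_findings(findingArns=finding_arns[:10])["findings"]:
        print(finding["severity"], finding["title"])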

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
63
Q

The development team at a company manages 300 microservices and is now trying to automate code reviews to improve code quality. Which tool/service is the right fit for this requirement?

a. AWS CodeBuild
b. AWS X-Ray
c. AWS Trusted Advisor
d. Amazon CodeGuru

A

d. Amazon CodeGuru

Amazon CodeGuru is a developer tool that provides intelligent recommendations to improve code quality and identify an application’s most expensive lines of code. Integrate Amazon CodeGuru into your existing software development workflow to automate code reviews during application development, continuously monitor application performance in production, provide recommendations and visual clues for improving code quality and application performance, and reduce overall cost.

Amazon CodeGuru Reviewer uses machine learning and automated reasoning to identify critical issues, security vulnerabilities, and hard-to-find bugs during application development and provides recommendations to improve code quality.

Amazon CodeGuru Profiler pinpoints an application’s most expensive lines of code by helping developers understand the runtime behavior of their applications, identify and remove code inefficiencies, improve performance, and significantly decrease compute costs.

How Amazon CodeGuru works: https://aws.amazon.com/codeguru/
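
For illustration only, a minimal boto3 sketch of associating a repository with CodeGuru Reviewer so it can comment on future pull requests; the repository name is a hypothetical placeholder:

    import boto3

    reviewer = boto3.client("codeguru-reviewer", region_name="us-east-1")

    # Associate a CodeCommit repository (name is a placeholder) so CodeGuru
    # Reviewer can automatically review future pull requests.
    resp = reviewer.associate_repository(
        Repository={"CodeCommit": {"Name": "my-microservice"}}
    )
    print(resp["RepositoryAssociation"]["State"])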

Incorrect options:

AWS X-Ray - AWS X-Ray helps developers analyze and debug production and distributed applications, such as those built using a microservices architecture. With AWS X-Ray, you can understand how your application and its underlying services are performing to identify and troubleshoot the root cause of performance issues and errors. AWS X-Ray provides an end-to-end view of requests as they travel through your application, and shows a map of your application’s underlying components.

AWS CodeBuild - AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages that are ready to deploy. With AWS CodeBuild, you don’t need to provision, manage, and scale your own build servers. AWS CodeBuild scales continuously and processes multiple builds concurrently, so your builds are not left waiting in a queue.

AWS Trusted Advisor - AWS Trusted Advisor provides recommendations that help you follow AWS best practices. AWS Trusted Advisor evaluates your account by using checks. These checks identify ways to optimize your AWS infrastructure, improve security and performance, reduce costs, and monitor service quotas.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
64
Q

What are the fundamental drivers of cost with AWS Cloud?

a. Compute, Storage and Inbound Data Transfer
b. Compute, Databases and Inbound Data Transfer
c. Compute, Databases and Outbound Data Transfer
d. Compute, Storage and Outbound Data Transfer

A

d. Compute, Storage and Outbound Data Transfer

There are three fundamental drivers of cost with AWS: compute, storage, and outbound data transfer. In most cases, there is no charge for inbound data transfer or data transfer between other AWS services within the same region. Outbound data transfer is aggregated across services and then charged at the outbound data transfer rate.

AWS Cloud Pricing Fundamentals: https://d0.awsstatic.com/whitepapers/aws_pricing_overview.pdf
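
To make the three cost drivers concrete, here is a back-of-the-envelope estimate in Python; every rate below is an illustrative assumption, not current AWS pricing:

    # Illustrative monthly estimate; the rates are assumptions for the
    # sake of the arithmetic, not actual AWS prices.
    compute_hours = 730        # one instance running a full month
    compute_rate  = 0.10       # assumed $/hour
    storage_gb    = 500        # data stored
    storage_rate  = 0.023      # assumed $/GB-month
    egress_gb     = 200        # outbound data transfer (inbound is free)
    egress_rate   = 0.09       # assumed $/GB out to the internet

    total = (compute_hours * compute_rate
             + storage_gb * storage_rate
             + egress_gb * egress_rate)
    print(f"Estimated monthly bill: ${total:.2f}")  # $102.50 with these rates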

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
65
Q

Which of the following AWS services are offered free of cost? (Select two)

a. AWS Elastic Beanstalk
b. Amazon EC2 Spot Instances
c. AWS Auto Scaling
d. An Elastic IP address, which is chargeable as long as it is associated with an EC2 instance
e. Amazon CloudWatch facilitated detailed monitoring of EC2 instances

A

a. AWS Elastic Beanstalk
c. AWS Auto Scaling

There is no additional charge for AWS Elastic Beanstalk. You pay for AWS resources (e.g. EC2 instances or S3 buckets) you create to store and run your application. You only pay for what you use, as you use it; there are no minimum fees and no upfront commitments.

There is no additional charge for AWS Auto Scaling. You pay only for the AWS resources needed to run your applications and Amazon CloudWatch monitoring fees.

Incorrect options:

Amazon EC2 Spot Instances - Amazon EC2 Spot Instances let you take advantage of unused EC2 capacity in the AWS cloud. Spot Instances are available at up to a 90% discount compared to On-Demand prices. Spot Instances are, however, not free.

Amazon CloudWatch facilitated detailed monitoring of EC2 instances - If you enable detailed monitoring, you are charged per metric that is sent to CloudWatch. You are not charged for data storage. With detailed monitoring, data is available in 1-minute periods, as opposed to the 5-minute periods of basic monitoring, which is available at no charge.

An Elastic IP address, which is chargeable as long as it is associated with an EC2 instance - An Elastic IP address doesn’t incur charges as long as all the following conditions are true: the Elastic IP address is associated with an EC2 instance; the instance associated with the Elastic IP address is running; the instance has only one Elastic IP address attached to it; and the Elastic IP address is associated with an attached network interface, such as a Network Load Balancer or NAT gateway.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
66
Q

Which AWS service can be used to store, manage, and deploy Docker container images?

a. AWS Lambda
b. Amazon Elastic Compute Cloud (Amazon EC2)
c. Amazon Elastic Container Registry (Amazon ECR)
d. Amazon Elastic Container Service (Amazon ECS)

A

c. Amazon Elastic Container Registry (Amazon ECR)

Amazon Elastic Container Registry (Amazon ECR) can be used to store, manage, and deploy Docker container images. Amazon Elastic Container Registry (Amazon ECR) eliminates the need to operate your own container repositories. You can then pull your Docker images from Amazon Elastic Container Registry (Amazon ECR) and run those on Amazon Elastic Container Service (Amazon ECS).

See this schematic diagram to understand how Amazon Elastic Container Registry (Amazon ECR) works: https://aws.amazon.com/ecr/
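
For illustration, a minimal boto3 sketch of creating an ECR repository; the repository name is a placeholder, and the tag/push steps use the standard Docker CLI:

    import boto3

    ecr = boto3.client("ecr", region_name="us-east-1")

    # Create a repository to hold the application's Docker images
    # (the name is a placeholder).
    repo = ecr.create_repository(repositoryName="my-app")
    print(repo["repository"]["repositoryUri"])

    # Images are then tagged and pushed with the regular Docker CLI, e.g.:
    #   docker tag my-app:latest <repositoryUri>:latest
    #   docker push <repositoryUri>:latest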

Incorrect options:

Amazon Elastic Container Service (Amazon ECS) - Amazon Elastic Container Service (Amazon ECS) is a highly scalable, fast, container management service that makes it easy to run, stop, and manage Docker containers on a cluster. You cannot use Amazon Elastic Container Service (Amazon ECS) to store and deploy Docker container images.

See this schematic diagram to understand how Amazon Elastic Container Service (Amazon ECS) works: https://aws.amazon.com/ecs/

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
67
Q

A data science team would like to build Machine Learning models for its projects. Which AWS service can it use?

a. Amazon Connect
b. Amazon Polly
c. Amazon Comprehend
d. Amazon SageMaker

A

d. Amazon SageMaker

Amazon SageMaker is a fully-managed platform that enables developers and data scientists to quickly and easily build, train, and deploy machine learning models at any scale. Amazon SageMaker removes all the barriers that typically slow down developers who want to use machine learning.

Incorrect options:

Amazon Polly - You can use Amazon Polly to turn text into lifelike speech thereby allowing you to create applications that talk. Polly’s Text-to-Speech (TTS) service uses advanced deep learning technologies to synthesize natural sounding human speech.

Amazon Comprehend - Amazon Comprehend is a natural language processing (NLP) service that uses machine learning to find meaning and insights in text. Natural Language Processing (NLP) is a way for computers to analyze, understand, and derive meaning from textual information in a smart and useful way. By utilizing natural language processing (NLP), you can extract important phrases, sentiment, syntax, key entities such as brand, date, location, person, etc., and the language of the text.

Amazon Connect - Amazon Connect is an omnichannel cloud contact center. You can set up a contact center in a few steps, add agents who are located anywhere, and start engaging with your customers. You can create personalized experiences for your customers using omnichannel communications. Amazon Connect is an open platform that you can integrate with other enterprise applications.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
68
Q

Which pillar of AWS Well-Architected Framework focuses on using IT and computing resources efficiently, while considering the right resource types and sizes based on workload requirements?

a. Performance Efficiency Pillar
b. Reliability Pillar
c. Operational Excellence Pillar
d. Cost Optimization Pillar

A

a. Performance Efficiency Pillar

The performance efficiency pillar focuses on using IT and computing resources efficiently. Key topics include selecting the right resource types and sizes based on workload requirements, monitoring performance, and making informed decisions to maintain efficiency as business needs evolve.

Performance Efficiency uses the following design principles to help achieve and maintain efficient workloads in the cloud: Democratize advanced technologies, Go global in minutes, Use serverless architectures, Experiment more often and Consider mechanical sympathy.

More information on the Design principles of the Performance Efficiency pillar: https://d1.awsstatic.com/whitepapers/architecture/AWS-Performance-Efficiency-Pillar.pdf

Incorrect options:

Operational Excellence Pillar - The operational excellence pillar focuses on running and monitoring systems to deliver business value, and continually improving processes and procedures. Key topics include automating changes, responding to events, and defining standards to manage daily operations.

Cost Optimization Pillar - The cost optimization pillar focuses on avoiding unnecessary costs. Key topics include understanding and controlling where the money is being spent, selecting the most appropriate and right number of resource types, analyzing spend over time, and scaling to meet business needs without overspending.

Reliability Pillar - The reliability pillar focuses on ensuring a workload performs its intended function correctly and consistently when it’s expected to. A resilient workload quickly recovers from failures to meet business and customer demand. Key topics include distributed system design, recovery planning, and how to handle change.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
69
Q

An e-commerce company uses AWS Cloud and would like to receive separate invoices for development and production environments. As a Cloud Practitioner, which of the following solutions would you recommend for this use-case?

a. Create separate AWS accounts for development and production environments to receive separate invoices
b. Use AWS Cost Explorer to create separate invoices for development and production environments
c. Use AWS Organizations to create separate invoices for development and production environments
d. Tag all resources in the AWS account as either development or production. Then use the tags to create separate invoices

A

a. Create separate AWS accounts for development and production environments to receive separate invoices

Every AWS account receives its own invoice at the end of the month. You can get separate invoices for development and production environments by setting up separate AWS accounts for each environment.

Incorrect options:

Use AWS Organizations to create separate invoices for development and production environments - AWS Organizations helps you to centrally manage billing; control access, compliance, and security; and share resources across your AWS accounts. Using AWS Organizations, you can automate account creation, create groups of accounts to reflect your business needs, and apply policies for these groups for governance. You can also simplify billing by setting up a single payment method for all of your AWS accounts. AWS Organizations is available to all AWS customers at no additional charge.

AWS Organizations cannot create separate invoices for development and production environments, rather, AWS Organizations helps you to centrally manage billing.

Tag all resources in the AWS account as either development or production. Then use the tags to create separate invoices - You cannot create separate invoices based on tags.

Use AWS Cost Explorer to create separate invoices for development and production environments - AWS Cost Explorer lets you explore your AWS costs and usage at both a high level and a detailed level of analysis, empowering you to dive deeper using several filtering dimensions (e.g., AWS Service, Region, Linked Account). AWS Cost Explorer cannot create separate invoices for development and production environments.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
70
Q

A company has a static website hosted on an Amazon Simple Storage Service (Amazon S3) bucket in an AWS Region in Asia. Although most of its users are in Asia, now it wants to drive growth globally. How can it improve the global performance of its static website?

a. Use Amazon CloudFront to improve the performance of your website
b. Use AWS Web Application Firewall (AWS WAF) to improve the performance of your website
c. Use Amazon S3 Transfer Acceleration (Amazon S3TA) to improve the performance of your website
d. Use Amazon CloudFormation to improve the performance of your website

A

a. Use Amazon CloudFront to improve the performance of your website

You can use Amazon CloudFront to improve the performance of your website. Amazon CloudFront makes your website files (such as HTML, images, and video) available from data centers around the world (called edge locations). When a visitor requests a file from your website, Amazon CloudFront automatically redirects the request to a copy of the file at the nearest edge location. This results in faster download times than if the visitor had requested the content from a data center that is located farther away.

Incorrect options:

Use Amazon CloudFormation to improve the performance of your website - AWS CloudFormation allows you to use programming languages or a simple text file to model and provision, in an automated and secure manner, all the resources needed for your applications across all regions and accounts. CloudFormation cannot be used to improve the performance of a static website.

Use AWS Web Application Firewall (AWS WAF) to improve the performance of your website - By using AWS Web Application Firewall (AWS WAF), you can configure web access control lists (Web ACLs) on your CloudFront distributions or Application Load Balancers to filter and block requests based on request signatures. Besides, by using AWS WAF’s rate-based rules, you can automatically block the IP addresses of bad actors when requests matching a rule exceed a threshold that you define. WAF cannot be used to improve the performance of a static website.

Use Amazon S3 Transfer Acceleration (Amazon S3TA) to improve the performance of your website - Amazon S3 Transfer Acceleration (Amazon S3TA) enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket. Amazon S3 Transfer Acceleration (Amazon S3TA) takes advantage of Amazon CloudFront’s globally distributed edge locations. As the data arrives at an edge location, data is routed to Amazon S3 over an optimized network path. Amazon S3 Transfer Acceleration (Amazon S3TA) cannot be used to improve the performance of a static website.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
71
Q

Which of the following is a repository service that helps in maintaining application dependencies via integration with commonly used package managers and build tools like Maven, Gradle, npm, etc?

a. AWS CodeStar
b. AWS CodeArtifact
c. AWS CodeBuild
d. AWS CodeCommit

A

b. AWS CodeArtifact

AWS CodeArtifact is a fully managed artifact repository service that makes it easy for organizations of any size to securely store, publish, and share software packages used in their software development process. CodeArtifact can be configured to automatically fetch software packages and dependencies from public artifact repositories so developers have access to the latest versions. CodeArtifact works with commonly used package managers and build tools like Maven, Gradle, npm, yarn, twine, pip, and NuGet, making it easy to integrate into existing development workflows.

Development teams often rely on both open-source software packages and those packages built within their organization. IT leaders need to be able to control access to and validate the safety of these software packages. Teams need a way to find up-to-date packages that have been approved for use by their IT leaders. To address these challenges, IT leaders turn to central artifact repository services to store and share packages. However, existing solutions often require teams to purchase licenses for software solutions that are complex to set up, scale, and operate.

AWS CodeArtifact is a pay-as-you-go artifact repository service that scales based on the needs of the organization. With CodeArtifact there is no software to update or servers to manage. In just a few clicks, IT leaders can set up central repositories that make it easy for development teams to find and use the software packages they need. IT leaders can also approve packages and control distribution across the organization, ensuring development teams consume software packages that are safe for use.
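
A minimal boto3 sketch of that setup, for illustration only; the domain and repository names are placeholders:

    import boto3

    ca = boto3.client("codeartifact", region_name="us-east-1")

    # Domain and repository names are placeholders.
    ca.create_domain(domain="my-org")
    ca.create_repository(domain="my-org", repository="shared-packages")

    # Package managers (npm, pip, Maven, ...) authenticate with a
    # short-lived token rather than long-term credentials.
    token = ca.get_authorization_token(domain="my-org")["authorizationToken"]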

Incorrect options:

AWS CodeCommit - AWS CodeCommit is a fully-managed source control service that hosts secure Git-based repositories. It makes it easy for teams to collaborate on code in a secure and highly scalable ecosystem. CodeCommit eliminates the need to operate your own source control system or worry about scaling its infrastructure. You can use CodeCommit to securely store anything from source code to binaries, and it works seamlessly with your existing Git tools.

AWS CodeBuild - AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages that are ready to deploy. With CodeBuild, you don’t need to provision, manage, and scale your own build servers. CodeBuild scales continuously and processes multiple builds concurrently, so your builds are not left waiting in a queue. You can get started quickly by using prepackaged build environments, or you can create custom build environments that use your own build tools.

AWS CodeStar - AWS CodeStar is a cloud‑based development service that provides the tools you need to quickly develop, build, and deploy applications on AWS. With AWS CodeStar, you can set up your entire continuous delivery toolchain in minutes, allowing you to start releasing code faster. AWS CodeStar makes it easy for your whole team to work together securely, with built-in role-based policies that allow you to easily manage access and add owners, contributors, and viewers to your projects.

Each CodeStar project includes development tools, including AWS CodePipeline, AWS CodeCommit, AWS CodeBuild, and AWS CodeDeploy, that can be used on their own and with existing AWS applications.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
72
Q

A customer is running a comparative study of pricing models of Amazon EFS and Amazon Elastic Block Store (Amazon EBS) that are used with the Amazon EC2 instances that host the application. Which of the following statements are correct regarding this use-case? (Select two)

a. Amazon Elastic Compute Cloud (Amazon EC2) data transfer charges will apply for all Amazon Elastic Block Store (Amazon EBS) direct APIs for Snapshots
b. Amazon Elastic Block Store (Amazon EBS) Snapshot storage pricing is based on the amount of space your data consumes in Amazon Elastic Block Store (Amazon EBS)
c. You will pay a fee each time you read from or write data stored on the Amazon Elastic File System (Amazon EFS) - Infrequent Access storage class
d. Amazon Elastic Block Store (Amazon EBS) Snapshots are stored incrementally, which means you are billed only for the changed blocks stored
e. With AWS Backup, you pay only for the amount of Amazon Elastic File System (Amazon EFS) backup storage you use in a month, you need not pay for restoring this data

A

c. You will pay a fee each time you read from or write data stored on the Amazon Elastic File System (Amazon EFS) - Infrequent Access storage class
d. Amazon Elastic Block Store (Amazon EBS) Snapshots are stored incrementally, which means you are billed only for the changed blocks stored

Amazon Elastic File System (Amazon EFS) - Infrequent Access storage class is cost-optimized for files accessed less frequently. Data stored on the Amazon Elastic File System (Amazon EFS) - Infrequent Access storage class costs less than Standard and you will pay a fee each time you read from or write to a file.

Amazon EBS Snapshots are a point-in-time copy of your block data. For the first snapshot of a volume, Amazon EBS saves a full copy of your data to Amazon S3. Amazon EBS Snapshots are stored incrementally, which means you are billed only for the changed blocks stored.
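
For illustration, a minimal boto3 sketch of taking such a snapshot; the volume ID is a placeholder:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # The first snapshot stores a full copy in S3; later snapshots of the
    # same volume store (and bill) only the blocks changed since the last one.
    snap = ec2.create_snapshot(
        VolumeId="vol-0123456789abcdef0",  # placeholder volume ID
        Description="Nightly backup",
    )
    print(snap["SnapshotId"], snap["State"])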

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
73
Q

Which AWS service can be used to execute code triggered by new files being uploaded to Amazon Simple Storage Service (Amazon S3)?

a. Amazon Simple Queue Service (Amazon SQS)
b. Amazon Elastic Container Service (Amazon ECS)
c. Amazon Elastic Compute Cloud (Amazon EC2)
d. AWS Lambda

A

d. AWS Lambda

AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume. With AWS Lambda, you can run code for virtually any type of application or backend service - all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale your code with high availability.

You can use Amazon S3 to trigger AWS Lambda to process data immediately after an upload. For example, you can use AWS Lambda to thumbnail images, transcode videos, index files, process logs, validate content, and aggregate and filter data in real-time.

How AWS Lambda executes code in response to a trigger from S3: https://aws.amazon.com/lambda/
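
A minimal sketch of what such a Python Lambda handler might look like; the event shape follows the standard S3 notification format, and the actual processing is left as a stub:

    import urllib.parse

    def lambda_handler(event, context):
        # Each record describes one object-created event delivered by S3.
        for record in event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
            print(f"New upload: s3://{bucket}/{key}")
            # ... process the file here (thumbnail, transcode, index, etc.)
        return {"statusCode": 200}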

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
74
Q

Which of the following AWS services are always free to use (Select two)?

a. Amazon Simple Storage Service (Amazon S3)
b. Amazon DynamoDB
c. Amazon Elastic Compute Cloud (Amazon EC2)
d. AWS Identity and Access Management (AWS IAM)
e. AWS Auto Scaling

A

d. AWS Identity and Access Management (AWS IAM)
e. AWS Auto Scaling

AWS Identity and Access Management (AWS IAM) - AWS Identity and Access Management (AWS IAM) enables you to manage access to AWS services and resources securely. Using IAM, you can create and manage AWS users and groups, and use permissions to allow and deny their access to AWS resources. IAM is a feature of your AWS account offered at no additional charge.

AWS Auto Scaling - AWS Auto Scaling monitors your applications and automatically adjusts the capacity to maintain steady, predictable performance at the lowest possible cost. Using AWS Auto Scaling, it’s easy to setup application scaling for multiple resources across multiple services in minutes. AWS Auto Scaling is available at no additional charge. You pay only for the AWS resources needed to run your applications and Amazon CloudWatch monitoring fees.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
75
Q

Amazon CloudWatch billing metric data is stored in which AWS Region?

a. In the AWS Region where the AWS resource is provisioned
b. US East (N. Virginia) - us-east-1
c. US West (N. California) - us-west-1
d. In the AWS Region where the AWS account is created

A

b. US East (N. Virginia) - us-east-1

You can monitor your estimated AWS charges by using Amazon CloudWatch. Billing metric data is stored in the US East (N. Virginia) Region and represents worldwide charges. This data includes the estimated charges for every service in AWS that you use, in addition to the estimated overall total of your AWS charges.
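
For illustration, a minimal boto3 sketch of reading this billing metric; it assumes billing alerts are enabled on the account, and the client is pointed at us-east-1 because that is where the metric lives:

    import boto3
    from datetime import datetime, timedelta

    # Billing metrics live only in us-east-1, regardless of where your
    # resources run; this also assumes billing alerts are enabled.
    cw = boto3.client("cloudwatch", region_name="us-east-1")

    stats = cw.get_metric_statistics(
        Namespace="AWS/Billing",
        MetricName="EstimatedCharges",
        Dimensions=[{"Name": "Currency", "Value": "USD"}],
        StartTime=datetime.utcnow() - timedelta(days=1),
        EndTime=datetime.utcnow(),
        Period=86400,                # one datapoint per day
        Statistics=["Maximum"],
    )
    for point in stats["Datapoints"]:
        print(point["Timestamp"], point["Maximum"])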

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
76
Q

An IT company has deployed a static website on Amazon Simple Storage Service (Amazon S3), but the website is still inaccessible. As a Cloud Practitioner, which of the following solutions would you suggest to address this issue?

a. Enable Amazon S3 versioning
b. Enable Amazon S3 replication
c. Disable Amazon S3 encryption
d. Fix the Amazon S3 bucket policy

A

d. Fix the Amazon S3 bucket policy

To host a static website on Amazon S3, you configure an Amazon S3 bucket for website hosting and then upload your website content to the bucket. When you configure a bucket as a static website, you must enable website hosting, set permissions, and create and add an index document.

Hosting a static website on Amazon S3: https://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html

If you want to configure an existing bucket as a static website that has public access, you must edit block public access settings for that bucket. You may also have to edit your account-level block public access settings. Amazon S3 applies the most restrictive combination of the bucket-level and account-level block public access settings.

Here is how you can edit Public Access settings for S3 buckets: https://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteAccessPermissionsReqd.html
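
As an illustration of "fixing the bucket policy", a minimal boto3 sketch that applies a typical public-read policy; the bucket name is a placeholder, and the block public access settings must also permit public policies:

    import json
    import boto3

    s3 = boto3.client("s3")
    bucket = "my-static-site"  # placeholder bucket name

    # A typical bucket policy granting public read access to the site's
    # objects; block public access settings must also allow this.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }],
    }
    s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))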

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
77
Q

A research group wants to use EC2 instances to run a scientific computation application that has a fault tolerant architecture. The application needs high-performance hardware disks that provide fast I/O performance. As a Cloud Practitioner, which of the following storage options would you recommend as the MOST cost-effective solution?
a. Amazon Simple Storage Service (Amazon S3)
b. Instance Store
c. Amazon Elastic Block Store (EBS)
d. Amazon Elastic File System (Amazon EFS)

A

b. Instance Store

An instance store provides temporary block-level storage for your instance. This storage is located on disks that are physically attached to the host computer. This is a good option when you need storage with very low latency but don’t need the data to persist when the instance terminates, or when you can take advantage of fault-tolerant architectures. For this use-case, the computation application itself has a fault-tolerant architecture, so it can automatically handle any failures of Instance Store volumes.

Since Instance Store volumes are included in the instance’s usage cost, this is the most cost-effective option.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
78
Q

AWS Shield Advanced provides expanded DDoS attack protection for web applications running on which of the following resources? (Select two)
a. AWS Global Accelerator
b. Amazon Route 53
c. AWS Elastic Beanstalk
d. AWS CloudFormation
e. Amazon API Gateway

A

a. AWS Global Accelerator
b. Amazon Route 53

AWS Shield Standard is activated for all AWS customers, by default. For higher levels of protection against attacks, you can subscribe to AWS Shield Advanced. With Shield Advanced, you also have exclusive access to advanced, real-time metrics and reports for extensive visibility into attacks on your AWS resources. With the assistance of the DDoS response team (DRT), AWS Shield Advanced includes intelligent DDoS attack detection and mitigation not only for network layer (layer 3) and transport layer (layer 4) attacks but also for application layer (layer 7) attacks.

AWS Shield Advanced provides expanded DDoS attack protection for web applications running on the following resources: Amazon Elastic Compute Cloud, Elastic Load Balancing (ELB), Amazon CloudFront, Amazon Route 53, AWS Global Accelerator.

It probably helps to remember the five resources above.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
79
Q

Which of the following are recommended best practices for AWS Identity and Access Management (AWS IAM) service? (Select two)

a. Create a minimum number of accounts and share these account credentials among employees
b. Rotate credentials regularly
c. Grant maximum privileges to avoid assigning privileges again
d. Enable multi-factor authentication (MFA) for all users
e. Share AWS account root user access keys with other administrators

A

b. Rotate credentials regularly
d. Enable multi-factor authentication (MFA) for all users

AWS IAM security best practices: https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
80
Q

Which AWS service will you use to privately connect your virtual private cloud (VPC) to Amazon Simple Storage Service (Amazon S3)?

a. VPC Endpoint
b. AWS Direct Connect
c. AWS Transit Gateway
d. Amazon API Gateway

A

a. VPC Endpoint

A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by AWS PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network.

There are two types of VPC endpoints: interface endpoints and gateway endpoints.

An interface endpoint is an elastic network interface with a private IP address from the IP address range of your subnet that serves as an entry point for traffic destined to a supported service. Interface endpoints are powered by AWS PrivateLink, a technology that enables you to privately access services by using private IP addresses.

A gateway endpoint is a gateway that you specify as a target for a route in your route table for traffic destined to a supported AWS service. The following AWS services are supported:

Amazon Simple Storage Service (Amazon S3)

Amazon DynamoDB

Exam Alert:

You may see a question around this concept in the exam. Just remember that only Amazon S3 and Amazon DynamoDB support VPC gateway endpoint. All other services that support VPC Endpoints use a VPC interface endpoint (note that Amazon S3 supports the VPC interface endpoint as well).
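
For illustration, a minimal boto3 sketch of creating a gateway endpoint for S3; the VPC and route table IDs are placeholders:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Gateway endpoint for S3: traffic to S3 is routed privately via
    # entries added to the given route table (IDs are placeholders).
    ec2.create_vpc_endpoint(
        VpcEndpointType="Gateway",
        VpcId="vpc-0123456789abcdef0",
        ServiceName="com.amazonaws.us-east-1.s3",
        RouteTableIds=["rtb-0123456789abcdef0"],
    )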

Incorrect options:

AWS Direct Connect - AWS Direct Connect is a cloud service solution that makes it easy to establish a dedicated network connection from your premises to AWS. You can use AWS Direct Connect to establish a private virtual interface from your on-premises network directly to your Amazon VPC. This private connection typically takes at least one month to establish.

AWS Transit Gateway - AWS Transit Gateway connects VPCs and on-premises networks through a central hub. This simplifies your network and puts an end to complex peering relationships. It acts as a cloud router – each new connection is only made once. This service is helpful in reducing the complex topology of VPC peering when a lot of systems are involved.

Amazon API Gateway - Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the “front door” for applications to access data, business logic, or functionality from your backend services.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
81
Q

A company wants a fully managed, flexible, and scalable file storage system, with low latency access, for its Windows-based applications. Which AWS service is the right choice for the company?

a. Amazon FSx for Lustre
b. Amazon Elastic File System (Amazon EFS)
c. Amazon Elastic Block Storage (Amazon EBS)
d. Amazon FSx for Windows File Server

A

d. Amazon FSx for Windows File Server

Amazon FSx for Windows File Server provides fully managed, highly reliable, and scalable file storage that is accessible over the industry-standard Server Message Block (SMB) protocol. It is built on Windows Server, delivering a wide range of administrative features such as user quotas, end-user file restore, and Microsoft Active Directory (AD) integration.

To support a wide spectrum of workloads, Amazon FSx provides high levels of throughput, IOPS and consistent sub-millisecond latencies. Amazon FSx is accessible from Windows, Linux, and macOS compute instances and devices.

For Windows-based applications, Amazon FSx provides fully managed Windows file servers with features and performance optimized for “lift-and-shift” business-critical application workloads including home directories (user shares), media workflows, and ERP applications. It is accessible from Windows and Linux instances via the SMB protocol.

Incorrect options:

Amazon FSx for Lustre - For compute-intensive and fast processing workloads, like high-performance computing (HPC), machine learning, EDA, and media processing, Amazon FSx for Lustre, provides a file system that’s optimized for performance, with input and output stored on Amazon S3. Amazon FSx for Lustre is only compatible with Linux.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
82
Q

Which of the following AWS services will help provision a logically isolated network for your AWS resources?

a. AWS PrivateLink
b. Amazon Route 53
c. AWS Firewall Manager
d. Amazon Virtual Private Cloud (Amazon VPC)

A

d. Amazon Virtual Private Cloud (Amazon VPC)

Amazon Virtual Private Cloud (Amazon VPC) is a service that lets you launch AWS resources in a logically isolated virtual network that you define. You have complete control over your virtual networking environment, including the selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways. You can use both IPv4 and IPv6 for most resources in your virtual private cloud, helping to ensure secure and easy access to resources and applications.

As one of AWS’s foundational services, Amazon VPC makes it easy to customize your VPC’s network configuration. You can create a public-facing subnet for your web servers that have access to the internet. It also lets you place your backend systems, such as databases or application servers, in a private-facing subnet with no internet access. Amazon VPC lets you use multiple layers of security, including security groups and network access control lists, to help control access to Amazon EC2 instances in each subnet.

Incorrect options:

AWS PrivateLink - AWS PrivateLink provides private connectivity between Amazon VPCs and services hosted on AWS or on-premises, securely on the Amazon network. By providing a private endpoint to access your services, AWS PrivateLink ensures your traffic is not exposed to the public internet.

Amazon Route 53 - Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service. It is designed to give developers and businesses an extremely reliable and cost-effective way to route end users to Internet applications by translating names like www.example.com into the numeric IP addresses like 192.0.2.1 that computers use to connect to each other. Amazon Route 53 is fully compliant with IPv6 as well.

AWS Firewall Manager - AWS Firewall Manager is a security management service that allows you to centrally configure and manage firewall rules across your accounts and applications in AWS Organizations. As new applications are created, Firewall Manager makes it easy to bring new applications and resources into compliance by enforcing a common set of security rules.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
83
Q

Which AWS service can be used to send, store, and receive messages between software components at any volume to decouple application tiers?

a. Amazon Simple Queue Service (Amazon SQS)
b. AWS Organizations
c. Amazon Simple Notification Service (Amazon SNS)
d. AWS Elastic Beanstalk

A

a. Amazon Simple Queue Service (Amazon SQS)

Amazon Simple Queue Service (Amazon SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. Amazon SQS eliminates the complexity and overhead associated with managing and operating message-oriented middleware and empowers developers to focus on differentiating work.

Using Amazon Simple Queue Service (Amazon SQS), you can send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available.
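
A minimal boto3 sketch of the send/store/receive cycle, for illustration; the queue name is a placeholder:

    import boto3

    sqs = boto3.client("sqs", region_name="us-east-1")

    # Queue name is a placeholder; producer and consumer only share the
    # queue, not direct knowledge of each other.
    queue_url = sqs.create_queue(QueueName="orders")["QueueUrl"]

    sqs.send_message(QueueUrl=queue_url, MessageBody="order-42 created")

    for msg in sqs.receive_message(QueueUrl=queue_url,
                                   WaitTimeSeconds=10).get("Messages", []):
        print(msg["Body"])
        sqs.delete_message(QueueUrl=queue_url,
                           ReceiptHandle=msg["ReceiptHandle"])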

Incorrect options:

AWS Elastic Beanstalk - AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services. You can simply upload your code, and AWS Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, and auto scaling to application health monitoring. It is not used to send, store, and receive messages between software components.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
84
Q

Which of the following is the best practice for application architecture on AWS Cloud?

a. Build loosely coupled components
b. Build monolithic applications
c. Use synchronous communication between components
d. Build tightly coupled components

A

a. Build loosely coupled components

AWS Cloud recommends microservices as an architectural and organizational approach to software development where software is composed of small independent services that communicate over well-defined APIs. These services are owned by small, self-contained teams.

Microservices architectures make applications easier to scale and faster to develop, enabling innovation and accelerating time-to-market for new features. Each service can be considered as a loosely coupled component of a bigger system. You can use services like SNS or SQS to decouple and scale microservices.

Microservices Overview: https://aws.amazon.com/blogs/compute/understanding-asynchronous-messaging-for-microservices/

Incorrect options:

Build tightly coupled components
Build monolithic applications

With monolithic architectures, all processes are tightly coupled and run as a single service. This means that if one process of the application experiences a spike in demand, the entire architecture must be scaled. Monolithic architectures add risk for application availability because many dependent and tightly coupled processes increase the impact of a single process failure. So both these options are incorrect.

Use synchronous communication between components - Synchronous communication between applications can be problematic if there are sudden spikes of traffic. You should use SNS or SQS to decouple your application components.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
85
Q

Which of the following are NoSQL database services from AWS? (Select two)

a. Amazon DocumentDB
b. AWS Storage Gateway
c. Amazon Relational Database Service (Amazon RDS)
d. Amazon Aurora
e. Amazon Neptune

A

a. Amazon DocumentDB
e. Amazon Neptune

A graph database’s purpose is to make it easy to build and run applications that work with highly connected datasets. Typical use cases for a graph database include social networking, recommendation engines, fraud detection, and knowledge graphs. Amazon Neptune is a fully-managed graph database service and is also considered a type of NoSQL database.

In application code, data is often represented as an object or JSON-like document because it is an efficient and intuitive data model for developers. Document databases make it easier for developers to store and query data in a database by using the same document model format that they use in their application code. Amazon DocumentDB (with MongoDB compatibility) and MongoDB are popular document databases that provide powerful and intuitive APIs for flexible and iterative development.

Incorrect options:

Amazon Relational Database Service (Amazon RDS) - Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while automating time-consuming administration tasks such as hardware provisioning, database setup, patching and backups. It frees you to focus on your applications so you can give them the fast performance, high availability, security and compatibility they need.

Amazon Aurora - Amazon Aurora is a MySQL and PostgreSQL-compatible relational database built for the cloud, that combines the performance and availability of traditional enterprise databases with the simplicity and cost-effectiveness of open source databases. Amazon Aurora is fully managed by Amazon Relational Database Service (RDS), which automates time-consuming administration tasks like hardware provisioning, database setup, patching, and backups.

AWS Storage Gateway - AWS Storage Gateway is a hybrid cloud storage service that gives you on-premises access to virtually unlimited cloud storage. Customers use Storage Gateway to simplify storage management and reduce costs for key hybrid cloud storage use cases. These include moving backups to the cloud, using on-premises file shares backed by cloud storage, and providing low latency access to data in AWS for on-premises applications. It is not a database service.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
86
Q

Which of the following is a recommended way to provide programmatic access to AWS resources?
a. Use AWS Multi-Factor Authentication (AWS MFA) to access AWS resources programmatically
b. Use Access Key ID and Secret Access Key to access AWS resources programmatically
c. Use IAM user group to access AWS resources programmatically
d. Create a new IAM user and share the username and password

A

b. Use Access Key ID and Secret Access Key to access AWS resources programmatically

Access keys are long-term credentials for an IAM user or the AWS account root user. You can use access keys to sign programmatic requests to the AWS CLI or AWS API (directly or using the AWS SDK). Access keys consist of two parts: an access key ID and a secret access key. Like a user name and password, you must use both the access key ID and secret access key together to authenticate your requests. When you create an access key pair, save the access key ID and secret access key in a secure location. The secret access key is available only at the time you create it. If you lose your secret access key, you must delete the access key and create a new one.
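
For illustration, a minimal boto3 sketch of programmatic access with an access key pair; the keys shown are the documentation-style example values, and in practice credentials should come from the environment, a shared credentials file, or an IAM role rather than source code:

    import boto3

    # Documentation-style placeholder keys, never real values; in practice
    # boto3 reads credentials from the environment, ~/.aws/credentials,
    # or an IAM role rather than from source code.
    session = boto3.Session(
        aws_access_key_id="AKIAIOSFODNN7EXAMPLE",
        aws_secret_access_key="wJalrXUtnFXEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
    )
    s3 = session.client("s3")
    print([b["Name"] for b in s3.list_buckets()["Buckets"]])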

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
87
Q

Which free tool helps to review the state of your workloads and compares them to the latest AWS architectural best practices after you have answered a series of questions about your workload?

a. AWS Trusted Advisor
b. AWS Well-Architected Framework
c. AWS Technical Account Manager (TAM)
d. AWS Well-Architected Tool

A

d. AWS Well-Architected Tool

The AWS Well-Architected Tool helps you review the state of your workloads and compares them to the latest AWS architectural best practices. The tool is based on the AWS Well-Architected Framework, developed to help cloud architects build secure, high-performing, resilient, and efficient application infrastructure.

To use this free tool, available in the AWS Management Console, just define your workload and answer a set of questions regarding operational excellence, security, reliability, performance efficiency, and cost optimization. The AWS Well-Architected Tool then provides a plan on how to architect for the cloud using established best practices.

The AWS Well-Architected Tool gives you access to knowledge and best practices used by AWS architects, whenever you need it. You answer a series of questions about your workload, and the tool delivers an action plan with step-by-step guidance on how to build better workloads for the cloud.

How AWS Well-Architected Tool works: https://aws.amazon.com/well-architected-tool/

Incorrect options:

AWS Well-Architected Framework - AWS Well-Architected Framework helps cloud architects build secure, high-performing, resilient, and efficient infrastructure for their applications and workloads. Based on six pillars — operational excellence, security, reliability, performance efficiency, cost optimization and sustainability — AWS Well-Architected provides a consistent approach for customers and partners to evaluate architectures, and implement designs that can scale over time. This is a framework based on which Well-Architected Tool and AWS Trusted Advisor offer guidance, suggestions and improvements.

AWS Trusted Advisor - AWS Trusted Advisor is an online tool that provides you real-time guidance to help you provision your resources following AWS best practices. Trusted Advisor checks help optimize your AWS infrastructure, increase security and performance, reduce your overall costs, and monitor service limits. Whether establishing new workflows, developing applications, or as part of ongoing improvement, take advantage of the recommendations provided by Trusted Advisor on a regular basis to help keep your solutions provisioned optimally.

The AWS Trusted Advisor checks you receive depend on your support plan; both the Basic and Developer support plans have access to the 7 core Trusted Advisor checks. Unlike documentation-based guidance (like the AWS Well-Architected Tool), Trusted Advisor provides recommendations against AWS Well-Architected Framework best practices and can track them against your current AWS architecture.

AWS Technical Account Manager (TAM) - With AWS Enterprise Support, you get 24x7 technical support from high-quality engineers, tools, and technology to automatically manage the health of your environment, consultative architectural guidance delivered in the context of your applications and use-cases, and a designated Technical Account Manager (TAM).

A Technical Account Manager (TAM) is your designated technical point of contact who helps you onboard, provides advocacy and guidance to help plan and build solutions using best practices, coordinates access to subject matter experts, assists with case management, presents insights and recommendations on your AWS spend, workload optimization, and event management, and proactively keeps your AWS environment healthy.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
88
Q

Which of the following use cases is best suited for Amazon EFS Standard-Infrequent Access (EFS Standard-IA) storage class?

a. Use as boot volume for highly available Amazon Elastic Compute Cloud (Amazon EC2) instances
b. Storing data in a single AWS Availability Zone (AZ)
c. Storing files in an accessible location to satisfy audit requirements
d. Object storage for workloads that need sub-second latency speeds for accessing the data

A

c. Storing files in an accessible location to satisfy audit requirements

The Amazon EFS Standard-Infrequent Access (EFS Standard-IA) storage class reduces storage costs for files that are not accessed every day. It does this without sacrificing the high availability, high durability, elasticity, and POSIX file system access that Amazon Elastic File System (Amazon EFS) provides.

AWS recommends Amazon EFS Standard-Infrequent Access (EFS Standard-IA) storage class if you need your full dataset to be readily accessible and want to automatically save on storage costs for files that are less frequently accessed. Examples include keeping files accessible to satisfy audit requirements, performing historical analysis, or performing backup and recovery. Amazon EFS Standard-Infrequent Access (EFS Standard-IA) storage is compatible with all Amazon EFS features, and is available in all AWS Regions where Amazon EFS is available.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
89
Q

Which AWS service would you use to create a logically isolated section of the AWS Cloud where you can launch AWS resources in your virtual network?

a. Virtual Private Network (VPN)
b. Subnet
c. Network Access Control List (Network ACL)
d. Virtual Private Cloud (VPC)

A

d. Virtual Private Cloud (VPC)

Amazon Virtual Private Cloud (Amazon VPC) is a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define. You have complete control over your virtual networking environment, including the selection of your IP address range, creation of subnets, and configuration of route tables and network gateways. You can easily customize the network configuration of your Amazon VPC using public and private subnets.

Incorrect options:

Virtual Private Network (VPN) - AWS Virtual Private Network (AWS VPN) lets you establish a secure and private encrypted tunnel from your on-premises network to the AWS global network. AWS VPN comprises two services: AWS Site-to-Site VPN and AWS Client VPN. You cannot use VPN to create a logically isolated section of the AWS Cloud.

Subnet - A subnet is a range of IP addresses within your VPC. A subnet is not an AWS service, so this option is ruled out.

Network Access Control List (Network ACL) - A network access control list (network ACL) is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets. A network access control list (network ACL) is not an AWS service, so this option is ruled out.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
90
Q

A financial services company wants to migrate from its on-premises data center to AWS Cloud. As a Cloud Practitioner, which AWS service would you recommend so that the company can compare the cost of running their IT infrastructure on-premises vs AWS Cloud?

a. AWS Cost Explorer
b. AWS Budgets
c. AWS Pricing Calculator
d. AWS Trusted Advisor

A

c. AWS Pricing Calculator

AWS Pricing Calculator lets you explore AWS services and create an estimate for the cost of your use cases on AWS. You can model your solutions before building them, explore the price points and calculations behind your estimate, and find the available instance types and contract terms that meet your needs. This enables you to make informed decisions about using AWS. You can plan your AWS costs and usage or price out setting up a new set of instances and services. AWS Pricing Calculator can be accessed at https://calculator.aws/#/.

AWS also offers a complimentary service called Migration Evaluator (formerly TSO Logic) to create data-driven business cases for AWS Cloud planning and migration.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
91
Q

What are the advantages that AWS Cloud offers over a traditional on-premises IT infrastructure? (Select two)
a. Trade capital expense for variable expense
b. Make a capacity decision before deploying an application, to reduce costs
c. Increase speed and agility by keeping servers and other required resources ready before time in your data centers
d. Provide lower latency to applications by maintaining servers on-premises
e. Eliminate guessing on your infrastructure capacity needs

A

a. Trade capital expense for variable expense

e. Eliminate guessing on your infrastructure capacity needs

Trade capital expense for variable expense

In a traditional on-premises environment, you have to invest heavily in data centers and servers before you know how you’re going to use them. With Cloud Computing, you can pay only when you consume computing resources, and pay only for how much you consume.

Eliminate guessing on your infrastructure capacity needs

When you make a capacity decision before deploying an application, you often end up either sitting on expensive idle resources or dealing with limited capacity. With Cloud Computing, these problems go away. You can access as much or as little capacity as you need, and scale up and down as required with only a few minutes’ notice. You can stop guessing capacity.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
92
Q

An organization would like to copy data across different Availability Zones (AZs) using Amazon EBS snapshots. Where are Amazon EBS snapshots stored in the AWS Cloud?

a. Amazon Simple Storage Service (Amazon S3)
b. Amazon Elastic Compute Cloud (Amazon EC2)
c. Amazon Relational Database Service (Amazon RDS)
d. Amazon Elastic File System (Amazon EFS)

A

a. Amazon Simple Storage Service (Amazon S3)

You can create a point-in-time snapshot of an Amazon EBS Elastic Volume and use it as a baseline for new volumes or data backup. If you make periodic snapshots of a volume, the snapshots are incremental—the new snapshot saves only the blocks that have changed since your last snapshot.

You can back up the data on your Amazon EBS Elastic Volumes to Amazon Simple Storage Service (Amazon S3) by taking point-in-time snapshots.
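
For illustration, a minimal boto3 sketch of the copy-across-AZs workflow the question describes; all IDs and the target AZ are placeholders:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Snapshot a volume in one AZ, then restore it in another; the snapshot
    # itself lives in S3, so it is not tied to any single AZ.
    snap = ec2.create_snapshot(VolumeId="vol-0123456789abcdef0")
    ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

    ec2.create_volume(
        SnapshotId=snap["SnapshotId"],
        AvailabilityZone="us-east-1b",  # a different AZ from the source volume
    )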

Incorrect options:

Amazon Elastic Compute Cloud (Amazon EC2) - Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale computing easier for developers. Amazon EBS snapshots cannot be stored on Amazon EC2.

Amazon Relational Database Service (Amazon RDS) - Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. Amazon EBS snapshots cannot be stored on Amazon RDS.

Amazon Elastic File System (Amazon EFS) - Amazon Elastic File System (Amazon EFS) provides a simple, scalable, elastic file system for Linux-based workloads for use with AWS Cloud services and on-premises resources. Amazon EBS snapshots cannot be stored on Amazon EFS.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
93
Q

An organization deploys its IT infrastructure in a combination of its on-premises data center along with AWS Cloud. How would you categorize this deployment model?

a. Hybrid deployment
b. Mixed deployment
c. Private deployment
d. Cloud deployment

A

a. Hybrid deployment

A hybrid deployment is a way to connect your on-premises infrastructure to the cloud. The most common method of hybrid deployment is between the cloud and existing on-premises infrastructure to extend an organization’s infrastructure into the cloud while connecting cloud resources to internal systems.

Overview of Cloud Computing Deployment Models: https://aws.amazon.com/types-of-cloud-computing/

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
94
Q

AWS Organizations provides which of the following benefits? (Select two)

a. Share the reserved Amazon EC2 instances amongst the member AWS accounts
b. Deploy patches on Amazon EC2 instances across the member AWS accounts
c. Volume discounts for Amazon EC2 and Amazon S3 aggregated across the member AWS accounts
d. Provision Amazon EC2 Spot instances across the member AWS accounts
e. Check vulnerabilities on Amazon EC2 instances across the member AWS accounts

A

a. Share the reserved Amazon EC2 instances amongst the member AWS accounts
c. Volume discounts for Amazon EC2 and Amazon S3 aggregated across the member AWS accounts

AWS Organizations helps you to centrally manage billing; control access, compliance, and security; and share resources such as reserved Amazon EC2 instances across your AWS accounts.

Using AWS Organizations, you can automate account creation, create groups of accounts to reflect your business needs, and apply policies for these groups for governance. You can also simplify billing by setting up a single payment method for all of your AWS accounts. AWS Organizations is available to all AWS customers at no additional charge.

You can use AWS Organizations to set up a single payment method for all the AWS accounts in your organization through consolidated billing. With consolidated billing, you can see a combined view of charges incurred by all your accounts, as well as take advantage of pricing benefits from aggregated usage, such as volume discounts for Amazon EC2 and Amazon S3.

Key benefits of AWS Organizations: https://aws.amazon.com/organizations/
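
For a rough illustration of the account-management side, the boto3 sketch below creates an organization with all features enabled (which includes consolidated billing) and requests a new member account; the e-mail address and account name are hypothetical placeholders.

import boto3

org = boto3.client("organizations")

# The calling account becomes the management account;
# FeatureSet="ALL" enables consolidated billing plus features such as SCPs
org.create_organization(FeatureSet="ALL")

# Request creation of a new member account (an asynchronous operation)
status = org.create_account(
    Email="finance-team@example.com",  # placeholder e-mail
    AccountName="finance-prod",        # placeholder account name
)["CreateAccountStatus"]

# Poll the request until the member account is ready
status = org.describe_create_account_status(
    CreateAccountRequestId=status["Id"]
)["CreateAccountStatus"]
print(status["State"])  # IN_PROGRESS | SUCCEEDED | FAILED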

95
Q

Which Amazon Simple Storage Service (Amazon S3) storage class offers the lowest availability?

a. Amazon S3 Glacier Flexible Retrieval
b. Amazon S3 Intelligent-Tiering
c. Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)
d. Amazon S3 Standard

A

c. Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)

Just know that Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) is designed for 99.5% availability, while the other listed storage classes are designed for either 99.9% or 99.99% availability.
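
The storage class is simply a per-object attribute set at upload time; a minimal boto3 sketch (bucket and key names are hypothetical placeholders):

import boto3

s3 = boto3.client("s3")

# Store an object in the lower-availability (99.5%), single-AZ storage class
s3.put_object(
    Bucket="my-backup-bucket",   # placeholder bucket
    Key="reports/2024-q1.csv",   # placeholder key
    Body=b"col1,col2\n1,2\n",
    StorageClass="ONEZONE_IA",   # S3 One Zone-Infrequent Access
)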

96
Q

Which of the following data sources are used by Amazon Detective to analyze events and identify potential security issues?

a. Amazon CloudWatch Logs, Amazon VPC Flow Logs and Amazon GuardDuty findings
b. Amazon CloudWatch Logs, AWS CloudTrail logs and Amazon Simple Storage Service (Amazon S3) Access Logs
c. AWS CloudTrail logs, Amazon VPC Flow Logs, and Amazon GuardDuty findings
d. Amazon CloudWatch Logs, AWS CloudTrail logs and Amazon Inspector logs

A

c. AWS CloudTrail logs, Amazon VPC Flow Logs, and Amazon GuardDuty findings

Amazon Detective can analyze trillions of events from multiple data sources such as Virtual Private Cloud (VPC) Flow Logs, AWS CloudTrail, and Amazon GuardDuty, and automatically creates a unified, interactive view of your resources, users, and the interactions between them over time.

Amazon Detective conforms to the AWS shared responsibility model, which includes regulations and guidelines for data protection. Once enabled, Amazon Detective will process data from AWS CloudTrail logs, VPC Flow Logs, and Amazon GuardDuty findings for any accounts where it has been turned on.

Amazon Detective requires that you have Amazon GuardDuty enabled on your accounts for at least 48 hours before you enable Detective on those accounts. However, you can use Detective to investigate more than just your GuardDuty findings. Amazon Detective provides detailed summaries, analyses, and visualizations of the behaviors and interactions amongst your AWS accounts, EC2 instances, AWS users, roles, and IP addresses. This information can be very useful in understanding security issues or operational account activity.

How Amazon Detective Works: https://aws.amazon.com/detective/

97
Q

A financial services company must meet compliance requirements that mandate storing multiple copies of data in geographically distant locations. As the company uses Amazon Simple Storage Service (Amazon S3) as its main storage service, which of the following represents the MOST resource-efficient solution for this use-case?

a. Use S3 cross-region replication (S3 CRR) to replicate data between distant AWS Regions
b. For every new object, trigger an AWS Lambda function to write data into a bucket in another AWS Region
c. Use S3 same-region replication (S3 SRR) to replicate data between distant AWS Regions
d. Run a daily job on an Amazon Elastic Compute Cloud (Amazon EC2) instance to copy objects into another Region

A

a. Use S3 cross-region replication (S3 CRR) to replicate data between distant AWS Regions

Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance.

Replication enables automatic, asynchronous copying of objects across Amazon S3 buckets. Buckets that are configured for object replication can be owned by the same AWS account or by different accounts. You can copy objects between different AWS Regions or within the same Region.

Although Amazon S3 stores your data across multiple geographically distant Availability Zones by default, compliance requirements might dictate that you store data at even greater distances. S3 cross-region replication (S3 CRR) allows you to replicate data between distant AWS Regions to satisfy these requirements.
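
A minimal boto3 sketch of enabling S3 CRR, assuming both buckets already exist in different Regions; versioning is required on the source and destination, and the replication role ARN is a hypothetical placeholder. For simplicity a single client is used for both buckets.

import boto3

s3 = boto3.client("s3")

# Replication requires versioning on both the source and destination buckets
for bucket in ("my-source-bucket", "my-dest-bucket"):  # placeholder names
    s3.put_bucket_versioning(
        Bucket=bucket,
        VersioningConfiguration={"Status": "Enabled"},
    )

# Replicate every new object to the bucket in the distant Region
s3.put_bucket_replication(
    Bucket="my-source-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111122223333:role/s3-crr-role",  # placeholder role
        "Rules": [{
            "ID": "compliance-crr",
            "Priority": 1,
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # empty prefix = replicate all objects
            "Destination": {"Bucket": "arn:aws:s3:::my-dest-bucket"},
            "DeleteMarkerReplication": {"Status": "Disabled"},
        }],
    },
)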

Exam Alert:

Please review the differences between S3 same-region replication (S3 SRR) and S3 cross-region replication (S3 CRR): https://docs.aws.amazon.com/AmazonS3/latest/dev/replication.html

Incorrect options:

For every new object, trigger an AWS Lambda function to write data into a bucket in another AWS Region - Although this solution is feasible, it is not resource-efficient: an AWS Lambda function would be used to do something that S3 cross-region replication (S3 CRR) achieves off-the-shelf.

98
Q

The AWS Well-Architected Framework provides guidance on building cloud based applications using AWS best practices. Which of the following options are the pillars mentioned in the AWS Well-Architected Framework? (Select two)

a. Reliability
b. Availability
c. Cost Optimization
d. Scalability
e. Elasticity

A

a. Reliability
c. Cost Optimization

The AWS Well-Architected Framework provides guidance on building secure, high-performing, resilient, and efficient infrastructure for cloud based applications. Based on six pillars — operational excellence, security, reliability, performance efficiency, cost optimization and sustainability — the Framework provides a consistent approach for customers and partners to evaluate architectures, and implement designs that will scale over time.

Incorrect options:

Elasticity - Elasticity is the ability to acquire resources as you need them and release resources when you no longer need them. In the cloud, you want to do this automatically.

Availability - A system that is available is capable of delivering the designed functionality at a given point in time. Highly available systems are those that can withstand some measure of degradation while still remaining available.

Scalability - A measurement of a system’s ability to grow to accommodate an increase in demand.

These three options are not part of the AWS Well-Architected Framework.

99
Q

Which of the following statements are correct regarding Amazon API Gateway? (Select two)

a. If an API response is served by the cached data, it is not considered an API call for billing purposes
b. Amazon API Gateway can call an AWS Lambda function to create the front door of a serverless application
c. Amazon API Gateway does not yet support API result caching
d. API Gateway can be configured to send data directly to Amazon Kinesis Data Stream
e. Amazon API Gateway creates RESTful APIs, Storage Gateway creates WebSocket APIs

A

b. Amazon API Gateway can call an AWS Lambda function to create the front door of a serverless application
d. API Gateway can be configured to send data directly to Amazon Kinesis Data Stream

Amazon API Gateway is an AWS service for creating, publishing, maintaining, monitoring, and securing REST, HTTP, and WebSocket APIs at any scale. API developers can create APIs that access AWS or other web services, as well as data stored in the AWS Cloud.

API Gateway acts as a “front door” for applications to access data, business logic, or functionality from your backend services, such as workloads running on Amazon Elastic Compute Cloud (Amazon EC2), code running on AWS Lambda, any web application, or real-time communication applications.

Amazon API Gateway can execute AWS Lambda functions in your account, start AWS Step Functions state machines, or call HTTP endpoints hosted on AWS Elastic Beanstalk, Amazon EC2, and also non-AWS-hosted HTTP-based operations that are accessible via the public internet. API Gateway also allows you to specify a mapping template to generate static content to be returned, helping you mock your APIs before the backend is ready. You can also integrate API Gateway with other AWS services directly – for example, you could expose an API method in API Gateway that sends data directly to Amazon Kinesis.

How Amazon API Gateway Works: https://aws.amazon.com/api-gateway/
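
As a quick illustration of the "front door to Lambda" pattern, the boto3 sketch below quick-creates an HTTP API whose default route proxies to a Lambda function; the function name and ARN are hypothetical placeholders, and the function must also grant API Gateway permission to invoke it.

import boto3

apigw = boto3.client("apigatewayv2")
lam = boto3.client("lambda")

function_arn = "arn:aws:lambda:us-east-1:111122223333:function:my-backend"  # placeholder

# Quick-create an HTTP API with a default Lambda proxy integration
api = apigw.create_api(
    Name="orders-api",
    ProtocolType="HTTP",
    Target=function_arn,
)

# Allow API Gateway to invoke the backing Lambda function
lam.add_permission(
    FunctionName="my-backend",          # placeholder function name
    StatementId="apigw-invoke",
    Action="lambda:InvokeFunction",
    Principal="apigateway.amazonaws.com",
)

print(api["ApiEndpoint"])  # public HTTPS endpoint of the new API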

Incorrect options:

Amazon API Gateway creates RESTful APIs, Storage Gateway creates WebSocket APIs - Amazon API Gateway is an AWS service for creating, publishing, maintaining, monitoring, and securing REST, HTTP, and WebSocket APIs. AWS Storage Gateway is a hybrid storage solution offered by AWS.

Amazon API Gateway does not yet support API result caching - API Gateway supports result caching. You can add caching to API calls by provisioning an API Gateway cache and specifying its size in gigabytes.

If an API response is served by the cached data, it is not considered an API call for billing purposes - API calls are counted equally for billing purposes whether the response is handled by your backend operations or by the Amazon API Gateway caching operation.

100
Q

Reserved Instance (RI) pricing is available for which of the following AWS services? (Select two)

a. AWS Identity and Access Management (AWS IAM)
b. Amazon Relational Database Service (Amazon RDS)
c. Amazon Elastic Compute Cloud (Amazon EC2)
d. Amazon CloudFront
e. Amazon Simple Storage Service (Amazon S3)

A

b. Amazon Relational Database Service (Amazon RDS)
c. Amazon Elastic Compute Cloud (Amazon EC2)

A Reserved Instance (RI) is a reservation that provides a discounted hourly rate in exchange for an upfront fee and term contract. Services such as Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Relational Database Service (Amazon RDS) use this approach to sell reserved capacity for hourly use of Reserved Instances (RI). It is not a virtual machine. It is a commitment to pay in advance for specific Amazon EC2 or Amazon RDS instances.

101
Q

Which AWS service allows you to quickly and easily add user sign-up, sign-in, and access control to web and mobile applications?

a. AWS Identity and Access Management (AWS IAM)
b. AWS IAM Identity Center
c. Amazon Cognito
d. AWS Organizations

A

c. Amazon Cognito

Amazon Cognito lets you add user sign-up, sign-in, and access control to your web and mobile apps quickly and easily. With Amazon Cognito, you also have the option to authenticate users through social identity providers such as Facebook, Twitter, or Amazon, with SAML identity solutions, or by using your own identity system.

Incorrect options:

AWS IAM Identity Center - AWS IAM Identity Center is the successor to AWS Single Sign-On. It is built on top of AWS Identity and Access Management (IAM) to simplify access management to multiple AWS accounts, AWS applications, and other SAML-enabled cloud applications. In IAM Identity Center, you create, or connect, your workforce users for use across AWS. You can choose to manage access just to your AWS accounts, just to your cloud applications, or to both.

102
Q

A fleet of Amazon EC2 instances spread across different Availability Zones (AZ) needs to access, edit and share file-based data stored centrally on a system. As a Cloud Practitioner, which AWS service would you recommend for this use-case?

a. Amazon Simple Storage Service (Amazon S3)
b. EC2 Instance Store
c. Amazon Elastic Block Store (Amazon EBS)
d. Amazon Elastic File System (Amazon EFS)

A

d. Amazon Elastic File System (Amazon EFS)

Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed, elastic NFS file system. It is built to scale on-demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth. Amazon EFS is designed to provide massively parallel shared access to thousands of Amazon EC2 instances, enabling your applications to achieve high levels of aggregate throughput and IOPS with consistent low latencies.

How Amazon Elastic File System (Amazon EFS) Works: https://aws.amazon.com/efs/

103
Q

Which of the following entities are part of an Amazon Virtual Private Cloud (Amazon VPC) in the AWS Cloud? (Select two)

a. Object
b. API Gateway
c. Internet Gateway
d. AWS Storage Gateway
e. Subnet

A

c. Internet Gateway
e. Subnet

Amazon Virtual Private Cloud (Amazon VPC) enables you to launch AWS resources into a virtual network that you’ve defined.

The following are the key concepts for VPCs:

Virtual private cloud (VPC) — A virtual network dedicated to your AWS account.

Subnet — A range of IP addresses in your VPC.

Route table — A set of rules, called routes, that are used to determine where network traffic is directed.

Internet Gateway — A gateway that you attach to your VPC to enable communication between resources in your VPC and the internet.

VPC endpoint — Enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection.
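
To make the relationship between these entities concrete, here is a hedged boto3 sketch that creates a VPC, carves out a subnet, and attaches an internet gateway with a default route; all CIDR ranges are hypothetical placeholders.

import boto3

ec2 = boto3.client("ec2")

# A VPC is a virtual network dedicated to your account
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

# A subnet is a range of IP addresses inside the VPC
subnet_id = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")["Subnet"]["SubnetId"]

# An internet gateway enables communication between the VPC and the internet
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

# A route table rule sends internet-bound traffic to the gateway
rtb_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rtb_id, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id)
ec2.associate_route_table(RouteTableId=rtb_id, SubnetId=subnet_id)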

Incorrect options:

AWS Storage Gateway - AWS Storage Gateway is a hybrid cloud storage service that gives you on-premises access to virtually unlimited cloud storage. Customers use Storage Gateway to simplify storage management and reduce costs for key hybrid cloud storage use cases. Storage Gateway is not part of VPC.

API Gateway - Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the “front door” for applications to access data, business logic, or functionality from your backend services. API Gateway is not part of a VPC.

Object - Buckets and objects are part of Amazon S3. Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance.

104
Q

Which of the following are serverless computing services offered by AWS? (Select two)

a. AWS Fargate
b. Amazon Elastic Compute Cloud (Amazon EC2)
c. Amazon Lightsail
d. AWS Elastic Beanstalk
e. AWS Lambda

A

a. AWS Fargate
e. AWS Lambda

Serverless is the native architecture of the cloud that enables you to shift more of your operational responsibilities to AWS, increasing your agility and innovation. Serverless allows you to build and run applications and services without thinking about servers. It eliminates infrastructure management tasks such as server or cluster provisioning, patching, operating system maintenance, and capacity provisioning.

The AWS serverless platform overview: https://aws.amazon.com/serverless/

With AWS Lambda, you can run code for virtually any type of application or backend service - all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale your code with high availability. You can set up your code to automatically trigger from other AWS services or call it directly from any web or mobile app.

AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume - there is no charge when your code is not running.
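
To give a sense of how little "just upload your code" involves, a complete Lambda function can be as small as the Python sketch below (the event shape is a simple illustrative assumption; the same response shape also works with an API Gateway proxy integration).

import json

def lambda_handler(event, context):
    # AWS invokes this entry point; you never provision or patch a server
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }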

AWS Fargate is a serverless compute engine for containers that works with both Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS). Fargate makes it easy for you to focus on building your applications. Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design.

AWS Fargate is a purpose-built serverless compute engine for containers. Fargate scales and manages the infrastructure required to run your containers.

Incorrect options:

Amazon Elastic Compute Cloud (Amazon EC2) - Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud with support for per-second billing. It is the easiest way to provision servers on AWS Cloud and access the underlying OS. Amazon EC2 reduces the time required to obtain and boot new server instances to minutes, allowing you to quickly scale capacity, both up and down, as your computing requirements change.

AWS Elastic Beanstalk - AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services. You simply upload your code and Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, auto-scaling to application health monitoring. Beanstalk provisions servers so it is not a serverless service.

Amazon Lightsail - Amazon Lightsail is an easy-to-use cloud platform that offers you everything needed to build an application or website, plus a cost-effective, monthly plan. Lightsail offers several preconfigured, one-click-to-launch operating systems, development stacks, and web applications, including Linux, Windows OS, and WordPress.

105
Q

Which AWS Support plan provides architectural guidance contextual to your specific use-cases?
a. AWS Enterprise On-Ramp Support
b. AWS Developer Support
c. AWS Enterprise Support
d. AWS Business Support

A

d. AWS Business Support

You should use AWS Business Support if you have production workloads on AWS and want 24x7 phone, email and chat access to technical support and architectural guidance in the context of your specific use-cases. You get full access to AWS Trusted Advisor Best Practice Checks. You also get access to Infrastructure Event Management for an additional fee.

AWS Enterprise Support - AWS Enterprise Support provides customers with concierge-like service where the main focus is helping the customer achieve their outcomes and find success in the cloud. With Enterprise Support, you get 24x7 technical support from high-quality engineers, tools and technology to automatically manage the health of your environment, consultative review and guidance based on your applications, and a designated Technical Account Manager (TAM) to coordinate access to proactive/preventative programs and AWS subject matter experts. This plan supports architectural guidance contextual to your application.

The key distinction to remember: AWS Business Support provides architectural guidance contextual to your use-cases, whereas AWS Enterprise Support adds consultative review and architectural guidance contextual to your specific applications, along with a designated Technical Account Manager (TAM).

106
Q

Which entity ensures that your application on Amazon Elastic Compute Cloud (Amazon EC2) always has the right amount of capacity to handle the current traffic demand?

a. Application Load Balancer
b. Multi-AZ deployment
c. Amazon EC2 Auto Scaling
d. Network Load Balancer

A

c. Amazon EC2 Auto Scaling

Amazon EC2 Auto Scaling helps you ensure that you have the correct number of Amazon EC2 instances available to handle the load for your application. You create collections of Amazon EC2 instances, called Auto Scaling groups. You can specify the minimum number of instances in each Auto Scaling group, and Amazon EC2 Auto Scaling ensures that your group never goes below this size.

Amazon EC2 Auto Scaling Overview: https://docs.aws.amazon.com/autoscaling/ec2/userguide/what-is-amazon-ec2-auto-scaling.html
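
A hedged boto3 sketch of an Auto Scaling group with a target-tracking policy; the launch template name and subnet IDs are hypothetical placeholders.

import boto3

asg = boto3.client("autoscaling")

# The group keeps between 2 and 10 instances alive across two subnets
asg.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchTemplate={"LaunchTemplateName": "web-template", "Version": "$Latest"},  # placeholder
    MinSize=2,
    MaxSize=10,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-0abc,subnet-0def",  # placeholder subnet IDs
)

# Target tracking: add or remove instances to hold average CPU near 50%
asg.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="cpu-target-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 50.0,
    },
)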

Incorrect options:

Multi-AZ deployment - With Availability Zones (AZ), you can design and operate applications and databases that automatically fail over between zones without interruption. Multi-AZ deployment of Amazon EC2 instances provides high availability, but it does not help in scaling resources.

Network Load Balancer - Network Load Balancer is best suited for load balancing of Transmission Control Protocol (TCP), User Datagram Protocol (UDP) and Transport Layer Security (TLS) traffic where extreme performance is required. It distributes traffic; it does not scale resources.

Application Load Balancer - An Application Load Balancer serves as the single point of contact for clients. The load balancer distributes incoming application traffic across multiple targets, such as EC2 instances, in multiple Availability Zones. It distributes traffic; it does not scale resources.

107
Q

Which AWS service can be used to provision resources to run big data workloads on Hadoop clusters?

a. AWS Step Functions
b. AWS Batch
c. Amazon Elastic Compute Cloud (Amazon EC2)
d. Amazon EMR

A

d. Amazon EMR

Amazon EMR is the industry-leading cloud big data platform for processing vast amounts of data using open source tools such as Hadoop, Apache Spark, Apache Hive, Apache HBase, Apache Flink, Apache Hudi, and Presto. Amazon EMR can be used to provision resources to run big data workloads on Hadoop clusters.
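
A rough boto3 sketch of provisioning a small Hadoop/Spark cluster; the release label, instance types, log bucket, and default roles shown are assumptions you would adjust for your own account.

import boto3

emr = boto3.client("emr")

response = emr.run_job_flow(
    Name="hadoop-analytics",
    ReleaseLabel="emr-6.15.0",                  # assumed release label
    Applications=[{"Name": "Hadoop"}, {"Name": "Spark"}],
    Instances={
        "MasterInstanceType": "m5.xlarge",
        "SlaveInstanceType": "m5.xlarge",
        "InstanceCount": 3,                      # 1 primary + 2 core nodes
        "KeepJobFlowAliveWhenNoSteps": True,
    },
    LogUri="s3://my-emr-logs/",                  # placeholder log bucket
    JobFlowRole="EMR_EC2_DefaultRole",           # assumed default roles
    ServiceRole="EMR_DefaultRole",
)
print(response["JobFlowId"])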

Incorrect options:

AWS Step Functions - AWS Step Functions service lets you coordinate multiple AWS services into serverless workflows. You can design and run workflows that stitch together services such as AWS Lambda, AWS Glue and Amazon SageMaker.

AWS Step Functions Overview: https://aws.amazon.com/step-functions/

AWS Batch - You can use AWS Batch to plan, schedule and execute your batch computing workloads across the full range of AWS compute services. AWS Batch dynamically provisions the optimal quantity and type of compute resources (e.g., CPU or memory optimized instances) based on the volume and specific resource requirements of the batch jobs submitted. AWS Batch provisions compute resources and optimizes the job distribution based on the volume and resource requirements of the submitted batch jobs.

Please review the common use-cases for AWS Batch: https://aws.amazon.com/batch/

Exam Alert:

Understand the difference between AWS Step Functions and AWS Batch. You may get questions to choose one over the other. AWS Batch runs batch computing workloads by provisioning the compute resources. AWS Step Functions does not provision any resources. AWS Step Functions only orchestrates AWS services required for a given workflow. You cannot use AWS Step Functions to plan, schedule and execute your batch computing workloads by provisioning underlying resources.

108
Q

The engineering team at an IT company wants to monitor the CPU utilization for its fleet of Amazon Elastic Compute Cloud (Amazon EC2) instances and send an email to the administrator if the utilization exceeds 80%. As a Cloud Practitioner, which AWS services would you recommend to build this solution? (Select two)

a. AWS CloudTrail
b. AWS Lambda
c. Amazon CloudWatch
d. Amazon Simple Queue Service (SQS)
e. Amazon Simple Notification Service (SNS)

A

c. Amazon CloudWatch
e. Amazon Simple Notification Service (SNS)

Amazon CloudWatch - Amazon CloudWatch is a monitoring and observability service built for DevOps engineers, developers, site reliability engineers (SREs), and IT managers. CloudWatch provides data and actionable insights to monitor applications, respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health. You can create a CloudWatch alarm that sends an email message using Amazon SNS when the alarm changes state from OK to ALARM. The alarm changes to the ALARM state when the average CPU use of an EC2 instance exceeds a specified threshold for consecutive specified periods.

Amazon Simple Notification Service (SNS) - Amazon Simple Notification Service (SNS) is a highly available, durable, secure, fully managed pub/sub messaging service that enables you to decouple microservices, distributed systems, and serverless applications.

How SNS Works: https://aws.amazon.com/sns/
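
Putting the two services together, a hedged boto3 sketch: create a topic, subscribe the administrator's e-mail, and raise an alarm when average CPU exceeds 80%; the instance ID and e-mail address are hypothetical placeholders.

import boto3

sns = boto3.client("sns")
cloudwatch = boto3.client("cloudwatch")

# SNS topic with an e-mail subscription (the admin must confirm the e-mail)
topic_arn = sns.create_topic(Name="ec2-cpu-alerts")["TopicArn"]
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="admin@example.com")

# Alarm fires when average CPU > 80% for two consecutive 5-minute periods
cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-alarm",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[topic_arn],  # notify the SNS topic when in ALARM state
)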

109
Q

A developer has written a simple web application in PHP and wants to just upload the code to AWS Cloud and have AWS handle the deployment automatically, while still retaining access to the underlying operating system for further enhancements. As a Cloud Practitioner, which of the following AWS services would you recommend for this use-case?

a. AWS Elastic Beanstalk
b. Amazon Elastic Compute Cloud (Amazon EC2)
c. Amazon Elastic Container Service (Amazon ECS)
d. AWS CloudFormation

A

a. AWS Elastic Beanstalk

AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS. Simply upload your code and AWS Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, auto-scaling to application health monitoring. At the same time, you retain full control over the AWS resources powering your application and can access the underlying resources at any time. There is no additional charge for AWS Elastic Beanstalk - you pay only for the AWS resources needed to store and run your applications.

Key Benefits of AWS Elastic Beanstalk: https://aws.amazon.com/elasticbeanstalk/
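
A hedged boto3 sketch of the "just upload your code" flow; the S3 bundle location is a hypothetical placeholder, and exact PHP solution stack names change over time, so the sketch picks one from list_available_solution_stacks rather than hard-coding it.

import boto3

eb = boto3.client("elasticbeanstalk")

eb.create_application(ApplicationName="php-app")

# Register the uploaded code bundle (a zip previously copied to S3)
eb.create_application_version(
    ApplicationName="php-app",
    VersionLabel="v1",
    SourceBundle={"S3Bucket": "my-deploy-bucket", "S3Key": "php-app-v1.zip"},  # placeholder
)

# Pick a currently available PHP platform
stack = next(s for s in eb.list_available_solution_stacks()["SolutionStacks"] if "PHP" in s)

# Beanstalk now provisions EC2, load balancing, Auto Scaling, and monitoring for you
eb.create_environment(
    ApplicationName="php-app",
    EnvironmentName="php-app-prod",
    VersionLabel="v1",
    SolutionStackName=stack,
)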

Incorrect options:

AWS CloudFormation - AWS CloudFormation allows you to use programming languages or a simple text file (in YAML or JSON format) to model and provision, in an automated and secure manner, all the resources needed for your applications across all Regions and accounts. Think infrastructure as code; think CloudFormation. This is very different from Beanstalk where you just upload your application code and Beanstalk automatically figures out what resources are required to deploy that application. In AWS CloudFormation, you have to explicitly specify which resources you want to provision.

110
Q

Which of the following statements is the MOST accurate when describing AWS Elastic Beanstalk?

a. It is an Infrastructure as Code (IaC) that allows you to model and provision resources needed for an application
b. It is an Infrastructure as a Service (IaaS) that allows you to deploy and scale web applications and services
c. It is a Platform as a Service (PaaS) that allows you to model and provision resources needed for an application
d. It is a Platform as a Service (PaaS) that allows you to deploy and scale web applications and services

A

d. It is a Platform as a Service (PaaS) that allows you to deploy and scale web applications and services

AWS Elastic Beanstalk makes it even easier for developers to quickly deploy and manage applications in the AWS Cloud. Developers simply upload their applications, and AWS Elastic Beanstalk automatically handles the deployment details of capacity provisioning, load balancing, auto-scaling, and application health monitoring.

It is a Platform as a Service (PaaS) as you only manage the applications and the data.

Please review this overview of the types of Cloud Computing: https://aws.amazon.com/types-of-cloud-computing/

Incorrect options:

It is an Infrastructure as Code (IaC) that allows you to model and provision resources needed for an application - This is the definition of AWS CloudFormation. AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion. You can use the AWS CloudFormation sample templates or create your own templates to describe your AWS resources, and any associated dependencies or runtime parameters, required to run your application.

It is a Platform as a Service (PaaS) that allows you to model and provision resources needed for an application - AWS Elastic Beanstalk is a Platform as a Service (PaaS). However, the service that allows you to model and provision resources needed for an application is AWS CloudFormation.

It is an Infrastructure as a Service (IaaS) that allows you to deploy and scale web applications and services - AWS Elastic Beanstalk allows you to deploy and scale web applications and services, but it is not an Infrastructure as a Service (IaaS). With AWS Elastic Beanstalk, you do not manage the runtime, the middleware, and the operating system.

111
Q

Which of the following AWS services are global in scope? (Select two)

a. Amazon CloudFront
b. Amazon Relational Database Service (Amazon RDS)
c. Amazon Simple Storage Service (Amazon S3)
d. Amazon Elastic Compute Cloud (Amazon EC2)
e. AWS Identity and Access Management (AWS IAM)

A

a. Amazon CloudFront
e. AWS Identity and Access Management (AWS IAM)

Most of the services that AWS offers are Region specific. But a few services, by definition, need to be global in scope because of the underlying service they offer. AWS Identity and Access Management (AWS IAM), Amazon CloudFront, Amazon Route 53 and AWS Web Application Firewall (AWS WAF) are some of the global services.

AWS Identity and Access Management (AWS IAM) enables you to manage access to AWS services and resources securely. Using AWS Identity and Access Management (AWS IAM), you can create and manage IAM users and IAM user-groups, and use permissions to allow and deny their access to AWS resources.

Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency, high transfer speeds, all within a developer-friendly environment.

112
Q

An organization maintains a separate Virtual Private Cloud (VPC) for each of its business units. Two units need to privately share data. Which is the most optimal way of privately sharing data between the two VPCs?

a. VPC peering connection
b. AWS Direct Connect
c. VPC Endpoint
d. AWS Site-to-Site VPN

A

a. VPC peering connection

A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them privately. Instances in either VPC can communicate with each other as if they are within the same network. You can create a VPC peering connection between your VPCs, with a VPC in another AWS account, or with a VPC in a different AWS Region.

VPC Peering Overview: https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html
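
A minimal boto3 sketch of peering two VPCs in the same account and adding the return routes; all IDs and CIDR blocks are hypothetical placeholders.

import boto3

ec2 = boto3.client("ec2")

# Request and accept the peering connection (same account, same Region here)
pcx_id = ec2.create_vpc_peering_connection(
    VpcId="vpc-aaaa1111",      # business unit A (placeholder)
    PeerVpcId="vpc-bbbb2222",  # business unit B (placeholder)
)["VpcPeeringConnection"]["VpcPeeringConnectionId"]
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Each side needs a route to the other VPC's CIDR via the peering connection
ec2.create_route(RouteTableId="rtb-aaaa1111", DestinationCidrBlock="10.1.0.0/16",
                 VpcPeeringConnectionId=pcx_id)
ec2.create_route(RouteTableId="rtb-bbbb2222", DestinationCidrBlock="10.0.0.0/16",
                 VpcPeeringConnectionId=pcx_id)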

Incorrect options:

AWS Site-to-Site VPN - AWS Site-to-Site VPN creates a secure connection between your data center or branch office and your AWS cloud resources. This connection goes over the public internet. Site to Site VPN cannot be used to interconnect VPCs.

AWS Direct Connect - AWS Direct Connect creates a dedicated private connection from a remote network to your VPC. This is a private connection and does not use the public internet. It typically takes at least a month to establish this connection. AWS Direct Connect cannot be used to interconnect VPCs.

VPC Endpoint - A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by AWS PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. You cannot connect two VPCs using a VPC endpoint.

113
Q

Which of the following points have to be considered when choosing an AWS Region for a service? (Select two)

a. Compliance and Data Residency guidelines of the AWS Region should match your business requirements
b. AWS Region chosen should be geographically closer to the user base that utilizes the hosted AWS services
c. The AWS Region should have 5G networks, to seamlessly access the breadth of AWS services in the region
d. The AWS Region chosen should have all its Availability Zones (AZ) within 100 Kms radius, to keep latency low for hosted applications
e. The AWS Region with high availability index should be considered for your business

A

a. Compliance and Data Residency guidelines of the AWS Region should match your business requirements
b. AWS Region chosen should be geographically closer to the user base that utilizes the hosted AWS services

If you have data residency requirements, you can choose the AWS Region that is in close proximity to your desired location. You retain complete control and ownership over the region in which your data is physically located, making it easy to meet regional compliance and data residency requirements.

When deploying your applications and workloads to the cloud, you have the flexibility in selecting a technology infrastructure and AWS Region that is closest to your primary target of users.

Incorrect options:

The AWS Region with high availability index should be considered for your business - AWS delivers the highest network availability of any cloud provider. Each Region is fully isolated and composed of multiple Availability Zones (AZs), which are fully isolated partitions of the AWS infrastructure. All AWS Regions are designed to be highly available.

The AWS Region should have 5G networks, to seamlessly access the breadth of AWS services in the region - AWS Local Zones and AWS Wavelength, with telco providers, provide performance for applications that require single-digit millisecond latencies by delivering AWS infrastructure and services closer to end-users and 5G connected devices. But, having a 5G network is not a factor for a customer to decide on an AWS Region.

The AWS Region chosen should have all its Availability Zones (AZ) within 100 Kms radius, to keep latency low for hosted applications - An Availability Zone (AZ) is one or more discrete data centers with redundant power, networking, and connectivity in an AWS Region. AZs are physically separated by a meaningful distance, many kilometers, from any other Availability Zone (AZ), although all are within 100 km (60 miles) of each other. This applies to all Availability Zones (AZ) and hence is not a criterion for choosing an AWS Region.

114
Q

The DevOps team at an IT company is moving 500 GB of data from an EC2 instance to an S3 bucket in the same region. Which of the following scenario captures the correct charges for this data transfer?
a. The company would only be charged for the inbound data transfer into the S3 bucket
b. The company would only be charged for the outbound data transfer from EC2 instance
c. The company would not be charged for this data transfer
d. The company would be charged for both the outbound data transfer from EC2 instance as well as the inbound data transfer into the S3 bucket

A

c. The company would not be charged for this data transfer

There are three fundamental drivers of cost with AWS: compute, storage, and outbound data transfer. In most cases, there is no charge for inbound data transfer or data transfer between other AWS services within the same region. Outbound data transfer is aggregated across services and then charged at the outbound data transfer rate.

Per AWS pricing, data transfer between S3 and EC2 instances within the same region is not charged, so there would be no data transfer charge for moving 500 GB of data from an EC2 instance to an S3 bucket in the same region.

115
Q

A Security Group has been changed in an AWS account and the manager of the account has asked you to find out the details of the user who changed it. As a Cloud Practitioner, which AWS service will you use to fetch the necessary information?

a. AWS X-Ray
b. AWS Trusted Advisor
c. AWS CloudTrail
d. Amazon Inspector

A

c. AWS CloudTrail

AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor and retain account activity related to actions across your AWS infrastructure. CloudTrail provides the event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command-line tools, and other AWS services. This event history simplifies security analysis, resource change tracking, and troubleshooting. In addition, you can use AWS CloudTrail to detect unusual activity in your AWS accounts. These capabilities help simplify operational analysis and troubleshooting.

AWS CloudTrail provides visibility into user activity by recording actions taken on your account. CloudTrail records important information about each action, including who made the request, the services used, the actions performed, parameters for the actions, and the response elements returned by the AWS service. This information helps you to track changes made to your AWS resources and to troubleshoot operational issues. CloudTrail makes it easier to ensure compliance with internal policies and regulatory standards.

How AWS CloudTrail Works: https://aws.amazon.com/cloudtrail/
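
To answer the manager's question concretely, a hedged boto3 sketch that searches the last 7 days of CloudTrail event history for security-group changes and prints who made them:

import boto3
from datetime import datetime, timedelta

cloudtrail = boto3.client("cloudtrail")

# Search recent event history for security-group API activity
events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "ResourceType",
                       "AttributeValue": "AWS::EC2::SecurityGroup"}],
    StartTime=datetime.utcnow() - timedelta(days=7),
    EndTime=datetime.utcnow(),
)["Events"]

for event in events:
    # Username identifies the IAM principal that made the change
    print(event["EventTime"], event["EventName"], event.get("Username"))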

Incorrect options:

AWS X-Ray - AWS X-Ray helps developers analyze and debug production, distributed applications, such as those built using a microservices architecture. With X-Ray, you can understand how your application and its underlying services are performing to identify and troubleshoot the root cause of performance issues and errors. X-Ray provides an end-to-end view of requests as they travel through your application, and shows a map of your application’s underlying components. X-Ray is not for tracking user actions when interacting with the AWS systems.

Amazon Inspector - Amazon Inspector is an automated security assessment service that helps you test the network accessibility of your Amazon EC2 instances and the security state of your applications running on the instances. Amazon Inspector allows you to automate security vulnerability assessments throughout your development and deployment pipeline or against static production systems. This allows you to make security testing a more regular occurrence as part of the development and IT operations.

AWS Trusted Advisor - AWS Trusted Advisor is an online tool that provides you real-time guidance to help you provision your resources following AWS best practices. Trusted Advisor checks help optimize your AWS infrastructure, increase security and performance, reduce your overall costs, and monitor service limits. Whether establishing new workflows, developing applications, or as part of ongoing improvement, take advantage of the recommendations provided by Trusted Advisor on a regular basis to help keep your solutions provisioned optimally.

116
Q

Which of the following entities should be used for an Amazon Elastic Compute Cloud (Amazon EC2) Instance to access a DynamoDB table?

a. AWS Key Management Service (KMS)
b. IAM role
c. AWS IAM user access keys
d. Amazon Cognito

A

b. IAM role

An IAM Role is an IAM identity that you can create in your account that has specific permissions. An IAM role is similar to an IAM user in that it is an AWS identity with permissions policies that determine what the identity can and cannot do in AWS. When you assume a role, it provides you with temporary security credentials for your role session.
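
A hedged boto3 sketch of wiring this up: a role that the EC2 service can assume, an AWS managed DynamoDB policy, and an instance profile to attach the role to an instance; the role and profile names are hypothetical placeholders.

import boto3, json

iam = boto3.client("iam")

# Trust policy: only the EC2 service may assume this role
trust = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}
iam.create_role(RoleName="ec2-dynamodb-role",
                AssumeRolePolicyDocument=json.dumps(trust))

# Grant DynamoDB read access via an AWS managed policy
iam.attach_role_policy(
    RoleName="ec2-dynamodb-role",
    PolicyArn="arn:aws:iam::aws:policy/AmazonDynamoDBReadOnlyAccess",
)

# An instance profile is the container that attaches the role to an EC2 instance
iam.create_instance_profile(InstanceProfileName="ec2-dynamodb-profile")
iam.add_role_to_instance_profile(InstanceProfileName="ec2-dynamodb-profile",
                                 RoleName="ec2-dynamodb-role")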

Incorrect options:

AWS IAM user access keys - Access keys are long-term credentials for an IAM user or the AWS account root user. You can use access keys to sign programmatic requests to the AWS CLI or AWS API (directly or using the AWS SDK). Access keys consist of two parts: an access key ID and a secret access key. Like a user name and password, you must use both the access key ID and secret access key together to authenticate your requests. As a best practice, AWS suggests the use of temporary security credentials (IAM roles) instead of access keys.

Amazon Cognito - Amazon Cognito lets you add user sign-up, sign-in, and access control to your web and mobile apps quickly and easily. Amazon Cognito scales to millions of users and supports sign-in with social identity providers, such as Facebook, Google, and Amazon, and enterprise identity providers via SAML 2.0. Amazon Cognito cannot be used to facilitate an Amazon EC2 Instance to access a DynamoDB table.

AWS Key Management Service (KMS) - AWS Key Management Service (KMS) makes it easy for you to create and manage cryptographic keys and control their use across a wide range of AWS services and in your applications. AWS KMS is a secure and resilient service that uses hardware security modules that have been validated under FIPS 140-2, or are in the process of being validated, to protect your keys. AWS KMS cannot be used to facilitate an Amazon EC2 Instance to access a DynamoDB table.

117
Q

Which of the following is a benefit of using AWS managed services such as Amazon Relational Database Service (Amazon RDS)?
a. The customer needs to manage database backups
b. The customer needs to patch the underlying OS
c. The performance of AWS managed Amazon Relational Database Service (Amazon RDS) instance is better than a customer-managed database instance
d. There is no need to optimize database instance type and size

A

c. The performance of AWS managed Amazon Relational Database Service (Amazon RDS) instance is better than a customer-managed database instance

Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while automating time-consuming administration tasks such as hardware provisioning, database setup, patching, and backups.

Amazon RDS provides a selection of instance types optimized to fit different relational database use cases. Instance types comprise varying combinations of CPU, memory, storage, and networking capacity and give you the flexibility to choose the appropriate mix of resources for your database to optimize the database for your use-case by selecting the correct instance type and size.

Because Amazon RDS instances are optimized for memory, performance, or I/O, the performance of an AWS-managed Amazon Relational Database Service (Amazon RDS) instance is better than that of a customer-managed database instance.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
118
Q

Which of the following capabilities does Amazon Rekognition provide as a ready-to-use feature?

a. Convert images into greyscale
b. Identify objects in a photo
c. Human pose detection
d. Resize images quickly

A

b. Identify objects in a photo

Amazon Rekognition makes it easy to add image and video analysis to your applications. Out of the box, it can identify objects, people, text, scenes, and activities in images and videos, with no machine-learning expertise required.
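
A minimal boto3 sketch of object identification with detect_labels; the bucket and image key are hypothetical placeholders.

import boto3

rekognition = boto3.client("rekognition")

# Identify objects and scenes in a photo stored in S3
response = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "my-photo-bucket", "Name": "beach.jpg"}},  # placeholders
    MaxLabels=10,
    MinConfidence=80,
)

for label in response["Labels"]:
    print(f'{label["Name"]}: {label["Confidence"]:.1f}%')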

119
Q

Which AWS services can be used to decouple components of a microservices based application on AWS Cloud? (Select two)
a. Amazon Simple Queue Service (SQS)
b. AWS Lambda
c. AWS Step Functions
d. Amazon Elastic Compute Cloud (Amazon EC2)
e. Amazon Simple Notification Service (SNS)

A

a. Amazon Simple Queue Service (SQS)

e. Amazon Simple Notification Service (SNS)

Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. Using SQS, you can send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available.

Amazon Simple Notification Service (SNS) is a highly available, durable, secure, fully managed pub/sub messaging service that enables you to decouple microservices, distributed systems, and serverless applications. Using Amazon SNS topics, your publisher systems can fan-out messages to a large number of subscriber endpoints for parallel processing, including Amazon SQS queues, AWS Lambda functions, and HTTP/S webhooks. Additionally, SNS can be used to fan out notifications to end users using mobile push, SMS, and email.

Therefore, both SNS and SQS can be used to decouple components of a microservices-based application.
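
A hedged boto3 sketch of the SQS side of decoupling: the producer sends a message and a separate consumer long-polls for it; the queue name and message body are hypothetical placeholders.

import boto3

sqs = boto3.client("sqs")

queue_url = sqs.create_queue(QueueName="orders")["QueueUrl"]  # placeholder name

# Producer microservice: fire and forget
sqs.send_message(QueueUrl=queue_url, MessageBody='{"order_id": 42}')

# Consumer microservice: long-poll, process, then delete
messages = sqs.receive_message(
    QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=10
).get("Messages", [])

for msg in messages:
    print("processing:", msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])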

120
Q

A company’s flagship application runs on a fleet of Amazon Elastic Compute Cloud (Amazon EC2) instances. As per the new policies, the system administrators are looking for the best way to provide secure shell access to Amazon Elastic Compute Cloud (Amazon EC2) instances without opening new ports or using public IP addresses.

Which tool/service will help you achieve this requirement?

a. Amazon Elastic Compute Cloud (Amazon EC2) Instance Connect
b. Amazon Inspector
c. Amazon Route 53
d. AWS Systems Manager Session Manager

A

d. AWS Systems Manager Session Manager

AWS Systems Manager Session Manager is a fully-managed service that provides you with an interactive browser-based shell and CLI experience. It helps provide secure and auditable instance management without the need to open inbound ports, maintain bastion hosts, and manage SSH keys. AWS Systems Manager Session Manager helps to enable compliance with corporate policies that require controlled access to instances, increase security and auditability of access to the instances while providing simplicity and cross-platform instance access to end-users.

Incorrect options:

Amazon Elastic Compute Cloud (Amazon EC2) Instance Connect - Amazon Elastic Compute Cloud (Amazon EC2) Instance Connect provides a simple and secure way to connect to your Linux instances using Secure Shell (SSH). With EC2 Instance Connect, you use AWS Identity and Access Management (IAM) policies and principals to control SSH access to your instances, removing the need to share and manage SSH keys. EC2 Instance Connect will need port 22 to be open for traffic. Therefore, not the correct option here.

121
Q

A multi-national organization has a separate virtual private cloud (VPC) for each of its business units on the AWS Cloud. The organization also wants to connect its on-premises data center with the different virtual private clouds (VPC) for better organization-wide collaboration. Which AWS services can be combined to build the MOST efficient solution for this use-case? (Select two)

a. AWS Direct Connect
b. AWS Storage Gateway
c. AWS Transit Gateway
d. Internet Gateway
e. VPC peering connection

A

a. AWS Direct Connect
c. AWS Transit Gateway

AWS Transit Gateway connects virtual private clouds (VPC) and on-premises networks through a central hub. This simplifies your network and puts an end to complex peering relationships. It acts as a cloud router – each new connection is only made once. As you expand globally, inter-Region peering connects AWS Transit Gateways using the AWS global network. Your data is automatically encrypted and never travels over the public internet.

How AWS Transit Gateway can simplify your network: https://aws.amazon.com/transit-gateway/
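
A rough boto3 sketch of the hub-and-spoke setup: create the transit gateway and attach each business unit's VPC to it (the Direct Connect side is configured separately); all IDs are hypothetical placeholders.

import boto3

ec2 = boto3.client("ec2")

# The central hub that replaces pairwise VPC peering
tgw_id = ec2.create_transit_gateway(
    Description="org-wide hub"
)["TransitGateway"]["TransitGatewayId"]

# Attach each business unit's VPC to the hub
for vpc_id, subnet_id in [("vpc-aaaa1111", "subnet-aaaa1111"),
                          ("vpc-bbbb2222", "subnet-bbbb2222")]:  # placeholders
    ec2.create_transit_gateway_vpc_attachment(
        TransitGatewayId=tgw_id,
        VpcId=vpc_id,
        SubnetIds=[subnet_id],
    )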

Incorrect options:

VPC peering connection - A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them privately. VPC peering connection is not transitive, a separate VPC peering connection has to be made between two VPCs that need to talk to each other. With a growing number of virtual private clouds (VPC), this gets difficult to manage.

Transitive VPC peering connection is not allowed: https://docs.aws.amazon.com/vpc/latest/peering/invalid-peering-configurations.html

Internet Gateway - An internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between instances in your VPC and the internet. It, therefore, imposes no availability risks or bandwidth constraints on your network traffic. You cannot use the internet gateway to connect your on-premises data center with multiple VPCs within your AWS network.

AWS Storage Gateway - AWS Storage Gateway is a hybrid cloud storage service that gives you on-premises access to virtually unlimited cloud storage. All data transferred between the gateway and AWS storage is encrypted using SSL (for all three types of gateways - File, Volume and Tape Gateways). You cannot use AWS Storage Gateway to connect your on-premises data center with multiple VPCs within your AWS network.

122
Q

Which tool/service will help you get a forecast of your spending for the next 12 months?

a. Consolidated Billing of AWS Organizations
b. AWS Cost Explorer
c. AWS Marketplace
d. AWS Pricing Calculator

A

b. AWS Cost Explorer

AWS Cost Explorer lets you explore your AWS costs and usage at both a high level and a detailed level of analysis, and empowers you to dive deeper using a number of filtering dimensions (e.g., AWS Service, Region, Member Account, etc.). AWS Cost Explorer also gives you access to a set of default reports to help you get started, while also allowing you to create custom reports from scratch.

You can explore your usage and costs using the main graph, the Cost Explorer cost, and usage reports, or the Cost Explorer RI report. You can view data for up to the last 12 months, forecast how much you’re likely to spend for the next 12 months, and get recommendations for what Reserved Instances to purchase. You can use Cost Explorer to identify areas that need further inquiry and see trends that you can use to understand your costs.

You can view your costs and usage using the Cost Explorer user interface free of charge. You can also access your data programmatically using the Cost Explorer API.

When you first sign up for Cost Explorer, AWS prepares the data about your costs for the current month and the last 12 months and then calculates the forecast for the next 12 months. The current month’s data is available for viewing in about 24 hours. The rest of your data takes a few days longer. Cost Explorer updates your cost data at least once every 24 hours. After you sign up, Cost Explorer can display up to 12 months of historical data (if you have that much), the current month, and the forecasted costs for the next 12 months.
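
Programmatically, the same 12-month forecast is available through the Cost Explorer API; a hedged boto3 sketch (the date window shown is an illustrative placeholder).

import boto3

ce = boto3.client("ce")

# Forecast the next 12 months of spend (dates are illustrative placeholders)
forecast = ce.get_cost_forecast(
    TimePeriod={"Start": "2025-01-01", "End": "2026-01-01"},
    Metric="UNBLENDED_COST",
    Granularity="MONTHLY",
)
print(forecast["Total"]["Amount"], forecast["Total"]["Unit"])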

Incorrect options:

Consolidated Billing of AWS Organizations - AWS products and services are designed to accommodate every size of the company, from small start-ups to enterprises. If your company is large or likely to grow, you might want to set up multiple AWS accounts that reflect your company’s structure. If you create multiple accounts, you can use the Consolidated Billing feature of AWS Organizations to combine all member accounts under a management account and receive a single bill.

AWS Pricing Calculator - AWS Pricing Calculator lets you explore AWS services and create an estimate for the cost of your use cases on AWS. You can model your solutions before building them, explore the price points and calculations behind your estimate, and find the available instance types and contract terms that meet your needs. This enables you to make informed decisions about using AWS. You cannot use this service to get a forecast of your spending for the next 12 months.

AWS Marketplace - AWS Marketplace is a digital catalog with thousands of software listings from independent software vendors that make it easy to find, test, buy, and deploy software that runs on AWS.

123
Q

Which Amazon Elastic Compute Cloud (Amazon EC2) Auto Scaling feature can help with fault tolerance?

a. Replacing unhealthy Amazon EC2 instances
b. Lower cost by adjusting the number of Amazon EC2 instances
c. Having the right amount of computing capacity
d. Distributing load to Amazon EC2 instances

A

a. Replacing unhealthy Amazon EC2 instances

Amazon EC2 Auto Scaling helps you maintain application availability and allows you to automatically add or remove Amazon EC2 instances according to the conditions you define. You can use the fleet management features of Amazon EC2 Auto Scaling to maintain the health and availability of your fleet. You can also use the dynamic and predictive scaling features of Amazon EC2 Auto Scaling to add or remove EC2 instances.

Amazon EC2 Auto Scaling can detect when an instance is unhealthy, terminate it, and replace it with a new one.

Fault tolerance is the ability to withstand subsystem failure and maintain availability (doing the right thing within an established SLA).

Incorrect options:

Distributing load to Amazon EC2 instances - Even though this helps with fault tolerance and is often used with Amazon EC2 Auto Scaling, it is a feature of Elastic Load Balancing (ELB), not of Amazon EC2 Auto Scaling. Elastic Load Balancing automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, IP addresses, and Lambda functions. It can handle the varying load of your application traffic in a single Availability Zone (AZ) or across multiple Availability Zones (AZs).

124
Q

A company wants to establish a private, dedicated connection between AWS and its on-premises data center. Which AWS service is the right choice for this requirement?

a. Amazon API Gateway
b. Amazon CloudFront
c. AWS Direct Connect
d. AWS Site-to-Site VPN

A

c. AWS Direct Connect

AWS Direct Connect is a cloud service solution that makes it easy to establish a dedicated network connection from your premises to AWS. Using AWS Direct Connect, you can establish private connectivity between AWS and your datacenter, office, or colocation environment, which in many cases can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than Internet-based connections. AWS Direct Connect does not encrypt your traffic that is in transit.

AWS Direct Connect lets you establish a dedicated network connection between your network and one of the AWS Direct Connect locations. Using industry-standard 802.1q VLANs, this dedicated connection can be partitioned into multiple virtual interfaces. This allows you to use the same connection to access public resources such as objects stored in Amazon S3 using public IP address space, and private resources such as Amazon EC2 instances running within an Amazon Virtual Private Cloud (VPC) using private IP space while maintaining network separation between the public and private environments. Virtual interfaces can be reconfigured at any time to meet your changing needs.

Incorrect options:

AWS Site-to-Site VPN - AWS virtual private network (VPN) solutions establish secure connections between your on-premises networks, remote offices, client devices, and the AWS global network. AWS VPN is comprised of two services: AWS Site-to-Site VPN and AWS Client VPN. AWS Site-to-Site VPN creates encrypted tunnels between your network and your Amazon Virtual Private Clouds or AWS Transit Gateway(s).

Amazon CloudFront - Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency, high transfer speeds, all within a developer-friendly environment. Amazon CloudFront offers the most advanced security capabilities, including field-level encryption and HTTPS support, seamlessly integrated with AWS Shield, AWS web application firewall (AWS WAF), and Amazon Route 53 to protect against multiple types of attacks including network and application layer DDoS attacks.

Amazon API Gateway - Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the “front door” for applications to access data, business logic, or functionality from your backend services. Using API Gateway, you can create RESTful APIs and WebSocket APIs that enable real-time two-way communication applications. API Gateway supports containerized and serverless workloads, as well as web applications.

125
Q

AWS Web Application Firewall (WAF) offers protection from common web exploits at which layer?
a. Layer 3
b. Layer 4 and 7
c. Layer 4
d. Layer 7

A

d. Layer 7

AWS WAF is a web application firewall that lets you monitor the HTTP and HTTPS requests that are forwarded to an Amazon API Gateway API, Amazon CloudFront or an Application Load Balancer. HTTP and HTTPS requests are part of the Application layer, which is layer 7.

Incorrect options:

Layer 3 - Layer 3 is the Network layer and this layer decides which physical path data will take when it moves on the network. AWS Shield offers protection at this layer. WAF does not offer protection at this layer.

Layer 4 - Layer 4 is the Transport layer and this layer data transmission occurs using TCP or UDP protocols. AWS Shield offers protection at this layer. WAF does not offer protection at this layer.

126
Q

Which of the following AWS services are regional in scope? (Select two)

a. Amazon CloudFront
b. AWS Web Application Firewall (AWS WAF)
c. AWS Identity and Access Management (AWS IAM)
d. AWS Lambda
e. Amazon Rekognition

A

d. AWS Lambda
e. Amazon Rekognition

Most of the services that AWS offers are Region-specific. But a few services, by definition, need to be global in scope because of the underlying service they offer. AWS Identity and Access Management (AWS IAM), Amazon CloudFront, Amazon Route 53 and AWS Web Application Firewall (AWS WAF) are some of the global services.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
127
Q

An e-commerce company has migrated its IT infrastructure from the on-premises data center to AWS Cloud. Which of the following costs is the company responsible for?

a. Costs for hardware infrastructure on AWS Cloud
b. AWS Data Center physical security costs
c. Costs for powering servers on AWS Cloud
d. Application software license costs

A

d. Application software license costs

Cloud computing is the on-demand delivery of compute power, database storage, applications, and other IT resources through a cloud services platform via the Internet with pay-as-you-go pricing. With cloud computing, you don’t need to make large upfront investments in hardware and spend a lot of time on the heavy lifting of managing that hardware. Therefore, all costs for hardware infrastructure, powering servers and physical security for the Data Center fall under the ambit of AWS.

The customer needs to take care of software licensing costs and human resources costs.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
128
Q

An IT company is on a cost-optimization spree and wants to identify all Amazon Elastic Compute Cloud (Amazon EC2) instances that are under-utilized. Which AWS services can be used off-the-shelf to address this use-case without needing any manual configurations? (Select two)

a. AWS Cost Explorer
b. AWS Trusted Advisor
c. AWS Budgets
d. AWS Cost & Usage Report (AWS CUR)
e. Amazon CloudWatch

A

a. AWS Cost Explorer
b. AWS Trusted Advisor

AWS Trusted Advisor is an online tool that provides real-time guidance to help provision your resources following AWS best practices. Whether establishing new workflows, developing applications, or as part of ongoing improvement, recommendations provided by Trusted Advisor regularly help keep your solutions provisioned optimally. AWS Trusted Advisor analyzes your AWS environment and provides best practice recommendations in five categories: Cost Optimization, Performance, Security, Fault Tolerance, Service Limits.

AWS Trusted Advisor checks the Amazon Elastic Compute Cloud (Amazon EC2) instances that were running at any time during the last 14 days and alerts you if the daily CPU utilization was 10% or less and network I/O was 5 MB or less on 4 or more days.

How AWS Trusted Advisor Works: https://aws.amazon.com/premiumsupport/technology/trusted-advisor/

AWS Cost Explorer has an easy-to-use interface that lets you visualize, understand, and manage your AWS costs and usage over time. AWS Cost Explorer includes a default report that helps you visualize the costs and usage associated with your top five cost-accruing AWS services, and gives you a detailed breakdown of all services in the table view. The reports let you adjust the time range to view historical data going back up to twelve months to gain an understanding of your cost trends.

The rightsizing recommendations feature in AWS Cost Explorer helps you identify cost-saving opportunities by downsizing or terminating Amazon EC2 instances. You can see all of your underutilized Amazon EC2 instances across member accounts in a single view to immediately identify how much you can save.
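As a minimal sketch of how these recommendations can be retrieved programmatically with the AWS SDK for Python (boto3):

import boto3

# The Cost Explorer API is served from the us-east-1 endpoint.
ce = boto3.client("ce", region_name="us-east-1")

response = ce.get_rightsizing_recommendation(Service="AmazonEC2")
for rec in response["RightsizingRecommendations"]:
    # RightsizingType is MODIFY (downsize) or TERMINATE
    print(rec["RightsizingType"], rec["CurrentInstance"]["ResourceId"])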

Incorrect options:

AWS Cost & Usage Report (AWS CUR) - The AWS Cost & Usage Report (AWS CUR) contains the most comprehensive set of cost and usage data available. You can use AWS Cost & Usage Report (AWS CUR) to publish your AWS billing reports to an Amazon Simple Storage Service (Amazon S3) bucket that you own. You can receive reports that break down your costs by the hour or month, by product or product resource, or by tags that you define yourself. AWS Cost & Usage Report (AWS CUR) cannot be used to identify under-utilized Amazon EC2 instances.

Amazon CloudWatch - Amazon CloudWatch can be used to create an alarm to monitor your estimated charges. When you enable the monitoring of estimated charges for your AWS account, the estimated charges are calculated and sent several times daily to CloudWatch as metric data. You can choose to receive alerts by email when charges have exceeded a certain threshold. Think resource performance monitoring, events, and alerts; think CloudWatch. Amazon CloudWatch cannot be used to identify under-utilized Amazon EC2 instances without manually configuring an alarm with the appropriate threshold to track Amazon EC2 utilization, so this option is incorrect.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
129
Q

AWS Web Application Firewall (AWS WAF) can be deployed on which of the following services?

a. Amazon CloudFront, Application Load Balancer, Amazon API Gateway, AWS AppSync
b. Application Load Balancer, Amazon Elastic Compute Cloud (Amazon EC2), Amazon API Gateway
c. AWS AppSync, Amazon CloudFront, Application Load Balancer, Amazon Elastic Compute Cloud (Amazon EC2)
d. Amazon CloudFront, Amazon Elastic Compute Cloud (Amazon EC2), Amazon API Gateway, Application Load Balancer

A

a. Amazon CloudFront, Application Load Balancer, Amazon API Gateway, AWS AppSync

AWS Web Application Firewall (AWS WAF) is a web application firewall that helps protect your web applications or APIs against common web exploits that may affect availability, compromise security, or consume excessive resources. AWS WAF gives you control over how traffic reaches your applications by enabling you to create security rules that block common attack patterns, such as SQL injection or cross-site scripting, and rules that filter out specific traffic patterns you define.

You can deploy AWS WAF on Amazon CloudFront as part of your CDN solution, the Application Load Balancer that fronts your web servers or origin servers running on EC2, Amazon API Gateway for your REST APIs, or AWS AppSync for your GraphQL APIs.

AWS WAF is tightly integrated with Amazon CloudFront, the Application Load Balancer, Amazon API Gateway, and AWS AppSync – services that AWS customers commonly use to deliver content for their websites and applications. When you use AWS WAF on Amazon CloudFront, your rules run in all AWS Edge Locations, located around the world close to your end-users. This means security doesn’t come at the expense of performance. Blocked requests are stopped before they reach your web servers. When you use AWS WAF on regional services, such as Application Load Balancer, Amazon API Gateway, and AWS AppSync, your rules run in the region and can be used to protect internet-facing resources as well as internal resources.

How AWS WAF Works: https://aws.amazon.com/waf/
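For regional resources, the attachment itself is a single WAFv2 API call. A minimal boto3 sketch, where both ARNs are hypothetical placeholders (for Amazon CloudFront you instead reference the web ACL in the distribution configuration):

import boto3

# The client must be in the same Region as the regional resource.
wafv2 = boto3.client("wafv2", region_name="us-east-1")

# Attach an existing web ACL to an Application Load Balancer.
wafv2.associate_web_acl(
    WebACLArn="arn:aws:wafv2:us-east-1:111122223333:regional/webacl/my-acl/EXAMPLE",
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/my-alb/0123456789abcdef",
)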

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
130
Q

Which AWS serverless service allows you to prepare data for analytics?

a. Amazon EMR
b. Amazon Redshift
c. AWS Glue
d. Amazon Athena

A

c. AWS Glue

AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for analytics. AWS Glue job is meant to be used for batch ETL data processing.

How AWS Glue works: https://aws.amazon.com/glue/
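As a minimal sketch of a Glue ETL job script (this runs inside a Glue job, not on a local machine; the catalog database, table, and S3 path are hypothetical placeholders):

from awsglue.context import GlueContext
from awsglue.transforms import ApplyMapping
from pyspark.context import SparkContext

glue_context = GlueContext(SparkContext.getOrCreate())

# Read raw records that a Glue crawler has already cataloged.
source = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders"
)

# Prepare the data: keep two columns and cast the amount to a number.
mapped = ApplyMapping.apply(
    frame=source,
    mappings=[
        ("order_id", "string", "order_id", "string"),
        ("amount", "string", "amount", "double"),
    ],
)

# Write analytics-ready Parquet back to S3.
glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://my-analytics-bucket/orders/"},
    format="parquet",
)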

Incorrect options:

Amazon Athena - Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon Simple Storage Service (Amazon S3) using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run. Amazon Athena is used for analytics and not to prepare data for analytics.

Amazon Redshift - Amazon Redshift is a fast and scalable data warehouse that makes it simple and cost-effective to analyze all your data across your data warehouse and data lake. Redshift is used for analytics and not to prepare data for analytics.

Amazon EMR - Amazon EMR provides a managed Hadoop framework that makes it easy, fast, and cost-effective to process vast amounts of data across dynamically scalable Amazon EC2 instances. EMR is used for analytics and not to prepare data for analytics.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
131
Q

Which AWS services can be used to facilitate organizational change management, part of the Reliability pillar of AWS Well-Architected Framework? (Select three)
a. Amazon Inspector
b. AWS CloudTrail
c. Amazon CloudWatch
d. Amazon GuardDuty
e. AWS Trusted Advisor
f. AWS Config

A

b. AWS CloudTrail
c. Amazon CloudWatch
f. AWS Config

There are three best practice areas for Reliability in the cloud - Foundations, Change Management, Failure Management. Being aware of how change affects a system (change management) allows you to plan proactively, and monitoring allows you to quickly identify trends that could lead to capacity issues or SLA breaches.

AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. Config continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations.

AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command-line tools, and other AWS services.

Amazon CloudWatch is a monitoring and observability service built for DevOps engineers, developers, site reliability engineers (SREs), and IT managers. CloudWatch provides data and actionable insights to monitor applications, respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
132
Q

Which of the following Amazon Simple Storage Service (Amazon S3) storage classes do not charge any data retrieval fee? (Select two)

a. Amazon S3 Standard-Infrequent Access (S3 Standard-IA)
b. Amazon S3 Intelligent-Tiering
c. Amazon S3 Standard
d. Amazon S3 Glacier Flexible Retrieval
e. Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)

A

b. Amazon S3 Intelligent-Tiering
c. Amazon S3 Standard

Amazon S3 Standard offers high durability, availability, and performance object storage for frequently accessed data. Amazon S3 Standard offers low latency and high throughput performance. It is designed for 99.999999999% durability of objects across multiple Availability Zones (AZs). Amazon S3 Standard does not charge any data retrieval fee.

The Amazon S3 Intelligent-Tiering storage class is designed to optimize costs by automatically moving data to the most cost-effective access tier, without performance impact or operational overhead. It works by storing objects in two access tiers: one tier that is optimized for frequent access and another lower-cost tier that is optimized for infrequent access. Amazon S3 Intelligent-Tiering does not charge any data retrieval fee.

Please review this illustration for the Amazon S3 Storage Classes retrieval fee. You don’t need to memorize the actual numbers, just remember that Amazon S3 Standard and Amazon S3 Intelligent-Tiering do not charge any retrieval fee: https://aws.amazon.com/s3/storage-classes/
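The storage class is simply a parameter when you write an object. A minimal boto3 sketch (the bucket and key names are hypothetical placeholders):

import boto3

s3 = boto3.client("s3")

# Store the object directly in Intelligent-Tiering; reading it back later
# incurs no retrieval fee (unlike STANDARD_IA or ONEZONE_IA).
s3.put_object(
    Bucket="my-example-bucket",
    Key="reports/summary.csv",
    Body=b"col1,col2\n1,2\n",
    StorageClass="INTELLIGENT_TIERING",
)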

Incorrect options:

Amazon S3 Glacier Flexible Retrieval - Amazon S3 Glacier Flexible Retrieval is a secure, durable, and extremely low-cost Amazon S3 cloud storage class for data archiving and long-term backup. It is designed to deliver 99.999999999% durability, and provide comprehensive security and compliance capabilities that can help meet even the most stringent regulatory requirements. Amazon S3 Glacier Flexible Retrieval has a data retrieval fee.

Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) - Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) is for data that is accessed less frequently, but requires rapid access when needed. Unlike other S3 Storage Classes which store data in a minimum of three Availability Zones (AZ), Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) stores data in a single Availability Zone (AZ). It is not suitable for data archival. Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) has a data retrieval fee.

Amazon S3 Standard-Infrequent Access (S3 Standard-IA) - Amazon S3 Standard-Infrequent Access (S3 Standard-IA) is for data that is accessed less frequently, but requires rapid access when needed. Amazon S3 Standard-Infrequent Access (S3 Standard-IA) offers high durability, high throughput, and low latency of S3 Standard, with a low per GB storage price and per GB retrieval fee. This combination of low cost and high performance makes Amazon S3 Standard-Infrequent Access (S3 Standard-IA) ideal for long-term storage, backups, and as a data store for disaster recovery files. Amazon S3 Standard-Infrequent Access (S3 Standard-IA) has a data retrieval fee.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
133
Q

A developer would like to automate operations on his on-premises environment using Chef and Puppet. Which AWS service can help with this task?

a. AWS OpsWorks
b. AWS Batch
c. AWS CloudFormation
d. AWS CodeDeploy

A

a. AWS OpsWorks

AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers. AWS OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed, and managed across your Amazon Elastic Compute Cloud (Amazon EC2) instances or on-premises compute environments.

Incorrect options:

AWS CloudFormation - AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion. It does not use Chef and Puppet and is more focused on what and how AWS resources are procured.

AWS CodeDeploy - AWS CodeDeploy is a service that automates code deployments to any instance, including Amazon Elastic Compute Cloud (Amazon EC2) instances and instances running on-premises. It does not use Chef and Puppet, and does not deal with infrastructure configuration and orchestration.

AWS Batch - AWS Batch enables developers, scientists, and engineers to easily and efficiently run hundreds of thousands of batch computing jobs on AWS. It is not used to automate operations on on-premises environments using Chef and Puppet.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
134
Q

Which of the following services are provided by Amazon Route 53? (Select Two)

a. Domain registration
b. IP routing
c. Load balancing
d. Health checks and monitoring
e. Transfer acceleration

A

a. Domain registration
d. Health checks and monitoring

Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service. It is designed to give developers and businesses an extremely reliable and cost-effective way to route end users to Internet applications by translating names like www.example.com into the numeric IP addresses like 192.0.2.1 that computers use to connect to each other.

Amazon Route 53 offers domain name registration services, where you can search for and register available domain names or transfer in existing domain names to be managed by Route 53.

Amazon Route 53 can monitor the health and performance of your application as well as your web servers and other resources.
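Both capabilities are plain API calls. A minimal boto3 sketch of a health check (the domain and path are hypothetical placeholders):

import uuid
import boto3

route53 = boto3.client("route53")

check = route53.create_health_check(
    CallerReference=str(uuid.uuid4()),  # idempotency token
    HealthCheckConfig={
        "Type": "HTTPS",
        "FullyQualifiedDomainName": "www.example.com",
        "Port": 443,
        "ResourcePath": "/health",
        "RequestInterval": 30,  # seconds between checks
        "FailureThreshold": 3,  # consecutive failures before "unhealthy"
    },
)
print(check["HealthCheck"]["Id"])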

Incorrect options:

IP routing - Despite its name, Amazon Route 53 does not offer IP routing. However, it can route traffic based on multiple criteria, such as endpoint health, geographic location, and latency, using routing policies.

Load balancing - It is a feature of Elastic Load Balancing (ELB) and not Amazon Route 53. Elastic Load Balancing (ELB) automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, IP addresses, and Lambda functions. It can handle the varying load of your application traffic in a single Availability Zone (AZ) or across multiple Availability Zones (AZs).

Transfer acceleration - Transfer acceleration is a feature of Amazon Simple Storage Service (Amazon S3). Amazon S3 Transfer Acceleration (Amazon S3TA) can speed up content transfers to and from Amazon S3 by as much as 50-500% for long-distance transfer of larger objects.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
135
Q

An engineering team is new to the AWS Cloud and it would like to launch a dev/test environment with low monthly pricing. Which AWS service can address this use case?

a. AWS CloudFormation
b. Amazon Elastic Container Service (Amazon ECS)
c. Amazon Elastic Compute Cloud (Amazon EC2)
d. Amazon Lightsail

A

d. Amazon Lightsail

Amazon Lightsail is designed to be the easiest way to launch and manage a virtual private server (VPS) with AWS. Amazon Lightsail plans include everything you need to jumpstart your project – a virtual machine, SSD-based storage, data transfer, Domain Name System (DNS) management, and a static IP address – for a low, predictable price.

It is great for people with little cloud experience, letting them quickly launch a popular IT solution that is ready to use immediately.

Incorrect options:

AWS CloudFormation - AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion. Using AWS CloudFormation requires experience as resources are deployed within a virtual private cloud (VPC).

Amazon Elastic Compute Cloud (Amazon EC2) - Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale computing easier for developers. Deploying a dev/test environment with Amazon EC2 requires experience as instances are deployed within a virtual private cloud (VPC).

Amazon Elastic Container Service (Amazon ECS) - Amazon Elastic Container Service (Amazon ECS) is a highly scalable, high-performance container orchestration service that supports Docker containers and allows you to easily run and scale containerized applications on AWS. Amazon ECS eliminates the need for you to install and operate your own container orchestration software, manage and scale a cluster of virtual machines, or schedule containers on those virtual machines. Using Amazon ECS requires experience.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
136
Q

A company is looking at real-time processing of streaming big data for their ad-tech platform. Which of the following AWS services is the right choice for this requirement?

a. Amazon Redshift
b. Amazon EMR
c. Amazon Simple Queue Service (Amazon SQS)
d. Amazon Kinesis Data Streams

A

d. Amazon Kinesis Data Streams

Amazon Kinesis Data Streams enables you to build custom applications that process or analyze streaming data for specialized needs. You can continuously add various types of data such as clickstreams, application logs, and social media to an Amazon Kinesis data stream from hundreds of thousands of sources. Within seconds, the data will be available for your Amazon Kinesis Applications to read and process from the stream.

Amazon Kinesis Data Streams is useful to rapidly move the data off data producers and then continuously process the data, be it to transform the data before emitting it to a data store, run real-time metrics and analytics, or derive more complex data streams for further processing. The following are typical scenarios for using Amazon Kinesis Data Streams: accelerated log and data feed intake, real-time metrics and reporting, real-time data analytics, complex stream processing.

How Amazon Kinesis Data Streams Work: https://aws.amazon.com/kinesis/data-streams/
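Producers push records into the stream with a partition key that determines the shard. A minimal boto3 sketch (the stream name is a hypothetical placeholder):

import json
import boto3

kinesis = boto3.client("kinesis")

# Send one clickstream event; consumers can read it within seconds.
kinesis.put_record(
    StreamName="ad-clickstream",
    Data=json.dumps({"ad_id": "a-123", "event": "click"}).encode("utf-8"),
    PartitionKey="a-123",  # routes the record to a shard
)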

Incorrect options:

Amazon Simple Queue Service (Amazon SQS) - Amazon Simple Queue Service (Amazon SQS) offers a reliable, highly scalable hosted queue for storing messages as they travel between computers. Amazon SQS lets you easily move data between distributed application components and helps you build applications in which messages are processed independently (with message-level ack/fail semantics), such as automated workflows.

Amazon Redshift - With Amazon Redshift, you can query and combine exabytes of structured and semi-structured data across your data warehouse, operational database, and data lake using standard SQL. Redshift lets you easily save the results of your queries back to your S3 data lake using open formats, like Apache Parquet, so that you can do additional analytics from other analytics services like Amazon EMR, Amazon Athena, and Amazon SageMaker. Redshift is a data warehousing solution and not a real-time streaming service.

Amazon EMR - Amazon EMR makes it easy to set up, operate, and scale your big data environments by automating time-consuming tasks like provisioning capacity and tuning clusters. EMR is not suitable as a real-time streaming service.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
137
Q

The AWS Cloud Adoption Framework (AWS CAF) recommends four iterative and incremental cloud transformation phases. Which cloud transformation journey phase of the AWS Cloud Adoption Framework (AWS CAF) focuses on demonstrating how the cloud will help accelerate your business outcomes?

a. Launch
b. Envision
c. Align
d. Scale

A

b. Envision

The Envision phase of the AWS Cloud Adoption Framework (AWS CAF) focuses on demonstrating how the cloud will help accelerate your business outcomes.

Incorrect options:

Align - The Align phase of the AWS Cloud Adoption Framework (AWS CAF) focuses on identifying capability gaps across the six AWS CAF perspectives, identifying cross-organizational dependencies, and surfacing stakeholder concerns and challenges.

Launch - The Launch phase of the AWS Cloud Adoption Framework (AWS CAF) focuses on delivering pilot initiatives in production and on demonstrating incremental business value.

Scale - The Scale phase of the AWS Cloud Adoption Framework (AWS CAF) focuses on expanding production pilots and business value to desired scale and ensuring that the business benefits associated with your cloud investments are realized and sustained.

Reference:

https://d1.awsstatic.com/whitepapers/aws-caf-ebook.pdf

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
138
Q

Which of the following statements are true about AWS Lambda? (Select two)

a. AWS Lambda lets you run code without provisioning or managing servers
b. You pay for the compute time you consume for AWS Lambda
c. AWS Lambda allows you to orchestrate and manage Docker containers to facilitate complex containerized applications on AWS
d. AWS Lambda allows you to install databases on the underlying serverless Operating System
e. AWS Lambda provides access to the underlying operating system to control its behavior through code

A

a. AWS Lambda lets you run code without provisioning or managing servers
b. You pay for the compute time you consume for AWS Lambda

AWS Lambda is a compute service that lets you run code without provisioning or managing servers. AWS Lambda executes your code only when needed and scales automatically, from a few requests per day to thousands per second. You pay for the compute time and the number of requests for your Lambda function - there is no charge when your code is not running. With AWS Lambda, you can run code for virtually any type of application or backend service - all with zero administration. AWS Lambda runs your code on a high-availability compute infrastructure and performs all of the administration of the compute resources, including server and operating system maintenance, capacity provisioning and automatic scaling, code monitoring and logging.

How AWS Lambda Works: https://aws.amazon.com/lambda/
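A complete Python Lambda function is just a handler; AWS invokes it on demand, so there is no server to manage. The event shape below is only an example:

import json

def lambda_handler(event, context):
    # 'event' carries the invocation payload; 'context' holds runtime metadata.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }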

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
139
Q

A company based in Sydney hosts its application on an Amazon Elastic Compute Cloud (Amazon EC2) instance in ap-southeast-2. They would like to deploy the same Amazon EC2 instances in eu-south-1. Which of the following AWS entities can address this use case?

a. Amazon EBS Elastic Volume snapshots
b. Amazon Machine Image (AMI)
c. Elastic Load Balancing (ELB)
d. AWS Lambda

A

b. Amazon Machine Image (AMI)

An Amazon Machine Image (AMI) provides the information required to launch an instance. You must specify an Amazon Machine Image (AMI) when you launch an instance. You can launch multiple instances from a single Amazon Machine Image (AMI) when you need multiple instances with the same configuration.

How to use an Amazon Machine Image (AMI): https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html
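For this use case, the AMI is copied into the destination Region and then used to launch instances there. A minimal boto3 sketch (the AMI ID is a hypothetical placeholder):

import boto3

# Work in the destination Region (Milan) and pull the AMI from Sydney.
ec2 = boto3.client("ec2", region_name="eu-south-1")

copy = ec2.copy_image(
    Name="my-app-ami-copy",
    SourceImageId="ami-0abcdef1234567890",
    SourceRegion="ap-southeast-2",
)

# Wait until the copied AMI is available, then launch an identical instance.
ec2.get_waiter("image_available").wait(ImageIds=[copy["ImageId"]])
ec2.run_instances(
    ImageId=copy["ImageId"],
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)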

Incorrect options:

Elastic Load Balancing (ELB) - Elastic Load Balancing (ELB) automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, IP addresses, and Lambda functions. It can handle the varying load of your application traffic in a single Availability Zone (AZ) or across multiple Availability Zones (AZs). It cannot be used to deploy identical EC2 instances in another Region.

AWS Lambda - AWS Lambda lets you run code without provisioning or managing servers. It cannot be used to deploy identical EC2 instances in another Region.

Amazon EBS Elastic Volume snapshots - An Amazon EBS snapshot is a point-in-time copy of your Amazon EBS volume. EBS snapshots are one of the components of an AMI, but EBS snapshots alone cannot be used to launch identical EC2 instances in another Region.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
140
Q

A research group wants to provision an Amazon Elastic Compute Cloud (Amazon EC2) instance for a flexible application that can be interrupted. As a Cloud Practitioner, which of the following would you recommend as the MOST cost-optimal option?

a. Dedicated Host
b. Spot Instance
c. On-Demand Instance
d. Reserved Instance (RI)

A

b. Spot Instance

A Spot Instance is an unused EC2 instance that is available for less than the On-Demand price. Because Spot Instances enable you to request unused EC2 instances at steep discounts (up to 90%), you can lower your Amazon EC2 costs significantly. Spot Instances are well-suited for data analysis, batch jobs, background processing, and other flexible tasks that can be interrupted. These can be terminated at short notice, so these are not suitable for critical workloads that need to run at a specific point in time.

Amazon EC2 Pricing Options Overview: https://aws.amazon.com/ec2/pricing/
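A Spot Instance can be requested directly through the RunInstances API. A minimal boto3 sketch (the AMI ID is a hypothetical placeholder):

import boto3

ec2 = boto3.client("ec2")

# Request Spot capacity for an interruption-tolerant job.
ec2.run_instances(
    ImageId="ami-0abcdef1234567890",
    InstanceType="c5.large",
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {
            "SpotInstanceType": "one-time",
            # EC2 can reclaim the capacity with a two-minute warning.
            "InstanceInterruptionBehavior": "terminate",
        },
    },
)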

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
141
Q

A company would like to create a private, high bandwidth network connection between its on-premises data centers and AWS Cloud. As a Cloud Practitioner, which of the following options would you recommend?

a. VPC Endpoints
b. VPC peering connection
c. AWS Direct Connect
d. AWS Site-to-Site VPN

A

c. AWS Direct Connect

AWS Direct Connect is a cloud service solution that makes it easy to establish a dedicated network connection from your premises to AWS. Using AWS Direct Connect, you can establish private connectivity between AWS and your data center, office, or colocation environment, which in many cases can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than Internet-based connections.

How AWS Direct Connect works: https://aws.amazon.com/directconnect/

Incorrect options:

AWS Site-to-Site VPN - By default, instances that you launch into an Amazon VPC can’t communicate with your own (remote) network. You can enable access to your remote network from your VPC by creating an AWS Site-to-Site VPN (Site-to-Site VPN) connection and configuring routing to pass traffic through the connection. It uses the public internet and is therefore not suited for this use case.

VPC Endpoints - A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by AWS PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. It does not connect your on-premises data centers and AWS Cloud.

VPC peering connection - A VPC peering connection is a networking connection between two virtual private clouds (VPCs) that enables you to route traffic between them using private IPv4 addresses or IPv6 addresses. It is used to connect virtual private clouds (VPCs) together, and not on-premises data centers and AWS Cloud.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
142
Q

AWS Lambda pricing is based on which of the following criteria? (Select two)

a. The size of the deployment package for the AWS Lambda function
b. The language runtime of the AWS Lambda function
c. The time it takes for the AWS Lambda function to execute
d. The number of lines of code for the AWS Lambda function
e. Number of requests for the AWS Lambda function

A

c. The time it takes for the AWS Lambda function to execute
e. Number of requests for the AWS Lambda function

AWS Lambda lets you run code without provisioning or managing servers. With AWS Lambda, you can run code for virtually any type of application or backend service - all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale your code with high availability.

With AWS Lambda, you pay only for what you use. You are charged based on the number of requests for your functions and the duration, the time it takes for your code to execute. AWS Lambda counts a request each time it starts executing in response to an event notification or invoke call, including test invokes from the console. Duration is calculated from the time your code begins executing until it returns or otherwise terminates, rounded up to the nearest 1 ms.
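A back-of-the-envelope example makes the two pricing dimensions concrete. The prices below are illustrative list prices and ignore the free tier; check the current pricing page before relying on them:

requests = 5_000_000                # invocations per month
avg_duration_s = 0.120              # average execution time in seconds
memory_gb = 0.5                     # 512 MB of configured memory

price_per_million_requests = 0.20   # USD, illustrative
price_per_gb_second = 0.0000166667  # USD, illustrative

request_charge = requests / 1_000_000 * price_per_million_requests
compute_charge = requests * avg_duration_s * memory_gb * price_per_gb_second

print(f"Requests: ${request_charge:.2f}, Compute: ${compute_charge:.2f}")
# Requests: $1.00, Compute: $5.00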

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
143
Q

Which AWS entity enables you to privately connect your Amazon Virtual Private Cloud (Amazon VPC) to an Amazon Simple Queue Service (Amazon SQS) queue?

a. Internet Gateway
b. AWS Direct Connect
c. VPC Gateway Endpoint
d. VPC Interface Endpoint

A

d. VPC Interface Endpoint

An interface endpoint is an elastic network interface (ENI) with a private IP address from the IP address range of your subnet that serves as an entry point for traffic destined to a supported service. Interface endpoints are powered by AWS PrivateLink, a technology that enables you to privately access services by using private IP addresses. AWS PrivateLink restricts all network traffic between your VPC and services to the Amazon network. You do not need an internet gateway, a Network Address Translation (NAT) device, or a virtual private gateway.
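Creating an interface endpoint for Amazon SQS is a single API call. A minimal boto3 sketch (the VPC, subnet, and security group IDs are hypothetical placeholders):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# After this, SQS traffic from the VPC stays on the Amazon network.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0example",
    ServiceName="com.amazonaws.us-east-1.sqs",
    SubnetIds=["subnet-0example"],
    SecurityGroupIds=["sg-0example"],
    PrivateDnsEnabled=True,  # resolve the usual SQS hostname to private IPs
)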

Exam Alert:

You may see a question around this concept in the exam. Just remember that only Amazon S3 and Amazon DynamoDB support VPC gateway endpoints. All other services that support VPC endpoints use a VPC interface endpoint (note that Amazon S3 supports the VPC interface endpoint as well).

Incorrect options:

VPC Gateway Endpoint - A Gateway Endpoint is a gateway that you specify as a target for a route in your route table for traffic destined to a supported AWS service. The following AWS services are supported: Amazon S3, DynamoDB. You cannot use VPC Gateway Endpoint to privately connect your VPC to an Amazon SQS queue.

AWS Direct Connect - AWS Direct Connect is a cloud service solution that makes it easy to establish a dedicated network connection from your premises to AWS. You can use AWS Direct Connect to establish a private virtual interface from your on-premises network directly to your Amazon VPC. Provisioning this private connection typically takes a month or more. You cannot use AWS Direct Connect to privately connect your VPC to an Amazon SQS queue.

Internet Gateway - An Internet Gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between your VPC and the internet. An internet gateway serves two purposes: to provide a target in your VPC route tables for internet-routable traffic, and to perform network address translation (NAT) for instances that have been assigned public IPv4 addresses. You cannot use an Internet Gateway to privately connect your VPC to an Amazon SQS queue.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
144
Q

Which of the following are benefits of the AWS Web Application Firewall (AWS WAF)? (Select two)

a. AWS Web Application Firewall (AWS WAF) offers protection against all known infrastructure (Layer 3 and 4) attacks
b. AWS Web Application Firewall (AWS WAF) can block all requests except the ones that you allow
c. AWS Web Application Firewall (AWS WAF) can check for the presence of SQL code that is likely to be malicious (known as SQL injection)
d. AWS Web Application Firewall (AWS WAF) offers dedicated support from the DDoS Response Team (DRT) and advanced reporting
e. AWS Web Application Firewall (AWS WAF) lets you monitor the HTTP and HTTPS requests that are forwarded to Amazon Route 53

A

b. AWS Web Application Firewall (AWS WAF) can block all requests except the ones that you allow
c. AWS Web Application Firewall (AWS WAF) can check for the presence of SQL code that is likely to be malicious (known as SQL injection)

AWS Web Application Firewall (AWS WAF) can block all requests except the ones that you allow. This is useful when you want to serve content for a restricted website whose users are readily identifiable by properties in web requests, such as the IP addresses that they use to browse to the website.

AWS Web Application Firewall (AWS WAF) offers additional protection against web attacks using conditions that you specify. You can define conditions by using characteristics of web requests such as - IP addresses that requests originate from, presence of a script that is likely to be malicious (known as cross-site scripting), presence of SQL code that is likely to be malicious (known as SQL injection) and many more.

AWS Web Application Firewall (AWS WAF) is a web application firewall that helps protect your web applications or APIs against common web exploits that may affect availability, compromise security, or consume excessive resources. AWS Web Application Firewall (AWS WAF) gives you control over how traffic reaches your applications by enabling you to create security rules that block common attack patterns such as SQL injection or cross-site scripting. You can also use rate-based rules to mitigate the Web layer DDoS attack.

How AWS Web Application Firewall (AWS WAF) Works: https://aws.amazon.com/waf/
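As a minimal WAFv2 (boto3) sketch of a web ACL that blocks SQL-injection-like query strings and allows everything else; the names and metric labels are hypothetical placeholders, and the default action can be inverted to block all requests except the ones you allow:

import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

wafv2.create_web_acl(
    Name="block-sqli",
    Scope="REGIONAL",             # use "CLOUDFRONT" for CloudFront distributions
    DefaultAction={"Allow": {}},  # swap to {"Block": {}} for an allow-list ACL
    Rules=[{
        "Name": "sqli-query-string",
        "Priority": 0,
        "Statement": {
            "SqliMatchStatement": {
                "FieldToMatch": {"QueryString": {}},
                "TextTransformations": [{"Priority": 0, "Type": "URL_DECODE"}],
            }
        },
        "Action": {"Block": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "sqli-query-string",
        },
    }],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "block-sqli",
    },
)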

Incorrect options:

AWS Web Application Firewall (AWS WAF) offers protection against all known infrastructure (Layer 3 and 4) attacks - AWS Web Application Firewall (AWS WAF) lets you monitor the HTTP and HTTPS requests to your application; it works only at the application layer (Layer 7).

AWS Web Application Firewall (AWS WAF) offers dedicated support from the DDoS Response Team (DRT) and advanced reporting - An AWS Shield Advanced customer can contact the 24x7 DDoS Response Team (DRT) for assistance during a DDoS attack; this is a feature of AWS Shield Advanced, not of AWS Web Application Firewall (AWS WAF).

AWS Web Application Firewall (AWS WAF) lets you monitor the HTTP and HTTPS requests that are forwarded to Amazon Route 53 - AWS Web Application Firewall (AWS WAF) is a web application firewall that lets you monitor the HTTP and HTTPS requests that are forwarded to an Amazon API Gateway API, Amazon CloudFront or an Application Load Balancer. It does not cover Amazon Route 53, which is a Domain Name System (DNS) web service.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
145
Q

A startup wants to set up its IT infrastructure on AWS Cloud. The CTO would like to get an estimate of the monthly AWS bill based on the AWS services that the startup wants to use. As a Cloud Practitioner, which AWS service would you suggest for this use-case?
a. AWS Budgets
b. AWS Cost & Usage Report (AWS CUR)
c. AWS Cost Explorer
d. AWS Pricing Calculator

A

d. AWS Pricing Calculator

AWS Pricing Calculator lets you explore AWS services and create a cost estimate for your use cases before you build them, which makes it the right tool for estimating the monthly AWS bill. By contrast, AWS Cost Explorer is a breakdown of money already spent: it helps you visualize and analyze your historical costs and usage.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
146
Q

Which of the following are common stakeholder roles for the AWS Cloud Adoption Framework (AWS CAF) platform perspective? (Select two)
a. Chief Data Officer (CDO)
b. Chief Product Officer (CPO)
c. Engineer
d. Chief Technology Officer (CTO)
e. Chief Information Officer (CIO)

A

c. Engineer
d. Chief Technology Officer (CTO)

The platform perspective of the AWS Cloud Adoption Framework (AWS CAF) focuses on building an enterprise-grade, scalable cloud platform, so its common stakeholder roles are the Chief Technology Officer (CTO), technology leaders, architects, and engineers.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
147
Q

Which of the following statements is correct regarding the Amazon Elastic File System (Amazon EFS) storage service?

a. EC2 instances can access files on an Amazon Elastic File System (Amazon EFS) file system across many Availability Zones (AZ) but not across VPCs and Regions
b. EC2 instances can access files on an Amazon Elastic File System (Amazon EFS) file system across many Availability Zones (AZ) and VPCs but not across Regions
c. EC2 instances can access files on an Amazon Elastic File System (Amazon EFS) file system only in one Availability Zone (AZ)
d. EC2 instances can access files on an Amazon Elastic File System (Amazon EFS) file system across many Availability Zones (AZ), Regions and VPCs

A

d. EC2 instances can access files on an Amazon Elastic File System (Amazon EFS) file system across many Availability Zones (AZ), Regions and VPCs

Amazon EFS is a regional service storing data within and across multiple Availability Zones (AZs) for high availability and durability. Amazon EC2 instances can access your file system across AZs, Regions, and VPCs, while on-premises servers can access it using AWS Direct Connect or AWS VPN.

Amazon EFS Overview: https://aws.amazon.com/efs/

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
148
Q

Which of the following AWS Identity and Access Management (AWS IAM) Security Tools allows you to review permissions granted to an IAM user?

a. IAM credentials report
b. AWS Identity and Access Management (IAM) access advisor
c. IAM policy
d. Multi-Factor Authentication (MFA)

A

b. AWS Identity and Access Management (IAM) access advisor

AWS Identity and Access Management (IAM) access advisor shows the service permissions granted to a user and when those services were last accessed. You can use this information to identify unnecessary permissions and revise your IAM policies accordingly.
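Access advisor data is also available programmatically. A minimal boto3 sketch (the user ARN is a hypothetical placeholder; the report is generated asynchronously, so the job is polled until it finishes):

import time
import boto3

iam = boto3.client("iam")

job = iam.generate_service_last_accessed_details(
    Arn="arn:aws:iam::111122223333:user/alice"
)

while True:
    details = iam.get_service_last_accessed_details(JobId=job["JobId"])
    if details["JobStatus"] != "IN_PROGRESS":
        break
    time.sleep(2)

for svc in details["ServicesLastAccessed"]:
    print(svc["ServiceName"], svc.get("LastAuthenticated", "never used"))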

Incorrect options:

IAM credentials report - You can generate and download a credential report that lists all IAM users in your account and the status of their various credentials, including passwords, access keys, and multi-factor authentication (MFA) devices. It is not used to review permissions granted to an IAM user.

IAM policy - IAM policies define permissions for an action regardless of the method that you use to perform the operation.

Multi-Factor Authentication (MFA) - Multi-Factor Authentication (MFA) is a simple best practice that adds an extra layer of protection on top of your user name and password. With Multi-Factor Authentication (MFA) enabled, when a user signs in to the AWS Management Console, they will be prompted for their user name and password (the first factor—what they know), as well as for an authentication code from their AWS MFA device (the second factor—what they have). Taken together, these multiple factors provide increased security for your AWS account settings and resources. It cannot be used to review permissions granted.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
149
Q

Which types of monitoring can be provided by Amazon CloudWatch? (Select two)

a. Resource utilization
b. API access
c. Account management
d. Application performance
e. Performance and availability of AWS services

A

a. Resource utilization
d. Application performance

Amazon CloudWatch is a monitoring service for AWS cloud resources and the applications you run on AWS. You can use Amazon CloudWatch to collect and track metrics, collect and monitor log files, and set alarms. Amazon CloudWatch can monitor AWS resources such as Amazon Elastic Compute Cloud (Amazon EC2) instances, Amazon DynamoDB tables, and Amazon Relational Database Service (Amazon RDS) DB instances, as well as custom metrics generated by your applications and services, and any log files your applications generate.

You can use Amazon CloudWatch to gain system-wide visibility into resource utilization, application performance, and operational health. You can use these insights to react and keep your application running smoothly.

How Amazon CloudWatch works: https://aws.amazon.com/cloudwatch/
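Resource utilization monitoring usually ends in an alarm. A minimal boto3 sketch that alarms when one instance's average CPU stays above 80% for two five-minute periods (the instance ID is a hypothetical placeholder):

import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-i-0example",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0example"}],
    Statistic="Average",
    Period=300,           # five-minute data points
    EvaluationPeriods=2,  # two consecutive breaching periods
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
)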

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
150
Q

AWS Shield Advanced provides expanded DDoS attack protection for web applications running on which of the following resources? (Select two)

a. Amazon Simple Storage Service (Amazon S3)
b. Amazon Elastic Compute Cloud (Amazon EC2)
c. Amazon CloudFront
d. AWS Identity and Access Management (AWS IAM)
e. AWS Elastic Beanstalk

A

b. Amazon Elastic Compute Cloud (Amazon EC2)
c. Amazon CloudFront

AWS Shield Standard is activated for all AWS customers, by default. For higher levels of protection against attacks, you can subscribe to AWS Shield Advanced. With Shield Advanced, you also have exclusive access to advanced, real-time metrics and reports for extensive visibility into attacks on your AWS resources. With the assistance of the DDoS Response Team (DRT), AWS Shield Advanced includes intelligent DDoS attack detection and mitigation not only for network layer (Layer 3) and transport layer (Layer 4) attacks but also for application layer (Layer 7) attacks.

AWS Shield Advanced provides expanded DDoS attack protection for web applications running on the following resources: Amazon Elastic Compute Cloud, Elastic Load Balancing (ELB), Amazon CloudFront, Amazon Route 53, AWS Global Accelerator.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
151
Q

Which AWS service enables users to find, buy, and immediately start using software solutions in their AWS environment?

a. AWS Config
b. AWS Systems Manager
c. AWS OpsWorks
d. AWS Marketplace

A

d. AWS Marketplace

AWS Marketplace is a digital catalog with thousands of software listings from independent software vendors that make it easy to find, test, buy, and deploy software that runs on AWS. AWS Marketplace includes thousands of software listings from popular categories such as security, networking, storage, machine learning, IoT, business intelligence, database, and DevOps. You can use AWS Marketplace as a buyer (subscriber) or as a seller (provider), or both. Anyone with an AWS account can use AWS Marketplace as a consumer and can register to become a seller.

Incorrect options:

AWS Config - AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. Config continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations. Think resource-specific history, audit, and compliance; think Config.

AWS OpsWorks - AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed and managed across your Amazon EC2 instances or on-premises compute environments.

AWS Systems Manager - AWS Systems Manager gives you visibility and control of your infrastructure on AWS. Systems Manager provides a unified user interface so you can view operational data from multiple AWS services and allows you to automate operational tasks across your AWS resources. With Systems Manager, you can group resources, like Amazon EC2 instances, Amazon S3 buckets, or Amazon RDS instances, by application, view operational data for monitoring and troubleshooting, and take action on your groups of resources.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
152
Q

Which AWS service helps with global application availability and performance using the AWS global network?

a. Elastic Load Balancing (ELB)
b. Amazon CloudFront
c. Amazon Route 53
d. AWS Global Accelerator

A

d. AWS Global Accelerator

AWS Global Accelerator is a service that improves the availability and performance of your applications with local or global users. It provides static IP addresses that act as a fixed entry point to your application endpoints in a single or multiple AWS Regions, such as your Application Load Balancers, Network Load Balancers, or Amazon EC2 instances. AWS Global Accelerator uses the AWS global network to optimize the path from your users to your applications, improving the performance of your traffic by as much as 60%.

AWS Global Accelerator improves performance for a wide range of applications over TCP or UDP by proxying packets at the edge to applications running in one or more AWS Regions. AWS Global Accelerator is a good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT), or Voice over IP, as well as for HTTP use cases that specifically require static IP addresses or deterministic, fast regional failover.

How AWS Global Accelerator Works: https://aws.amazon.com/global-accelerator/
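As a minimal boto3 sketch of creating an accelerator and reading back its static anycast IPs (the name is a hypothetical placeholder; the Global Accelerator API is served from the us-west-2 endpoint even though the accelerator is a global resource):

import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(
    Name="my-game-accelerator",
    IpAddressType="IPV4",
    Enabled=True,
)

# The static IPs that act as the fixed entry point for your users.
print(accelerator["Accelerator"]["IpSets"][0]["IpAddresses"])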

Exam Alert:

Please review the differences between Amazon CloudFront and AWS Global Accelerator: https://aws.amazon.com/global-accelerator/faqs/

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
153
Q

Which AWS service can be used for online analytical processing?

a. Amazon DynamoDB
b. Amazon Redshift
c. Amazon Relational Database Service (Amazon RDS)
d. Amazon ElastiCache

A

b. Amazon Redshift

Amazon Redshift is a fast, fully managed cloud data warehouse that makes it simple and cost-effective to analyze all your data using standard SQL and your existing Business Intelligence (BI) tools. It allows you to run complex analytic queries against terabytes to petabytes of structured data, using sophisticated query optimization, columnar storage on high-performance storage, and massively parallel query execution.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
154
Q

According to the AWS Shared Responsibility Model, which of the following are the responsibilities of the customer? (Select two)

a. Operating system patches and updates of an Amazon Elastic Compute Cloud (Amazon EC2) instance
b. Compliance validation of Cloud infrastructure
c. AWS Global Network Security
d. Ensuring AWS employees cannot access customer data
e. Enabling data encryption of data stored in Amazon Simple Storage Service (Amazon S3) buckets

A

a. Operating system patches and updates of an Amazon Elastic Compute Cloud (Amazon EC2) instance
e. Enabling data encryption of data stored in Amazon Simple Storage Service (Amazon S3) buckets

Under the AWS Shared Responsibility Model, AWS is responsible for protecting the infrastructure that runs all of the services offered in the AWS Cloud. This infrastructure is composed of the hardware, software, networking, and facilities that run AWS Cloud services. Customer’s responsibility is determined by the AWS Cloud services that a customer selects.

Security and Compliance is a shared responsibility between AWS and the customer. This shared model can help relieve the customer’s operational burden as AWS operates, manages and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates.

Security “in” the cloud is the responsibility of the customer. A service such as Amazon Elastic Compute Cloud (Amazon EC2) is categorized as Infrastructure as a Service (IaaS) and, as such, requires the customer to perform all of the necessary security configuration and management tasks.

In the AWS Shared Responsibility Model, customers are responsible for managing their data (including encryption options), classifying their assets, and using IAM tools to apply the appropriate permissions.

AWS Shared Responsibility Model Overview: https://aws.amazon.com/compliance/shared-responsibility-model/

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
155
Q

According to the AWS Cloud Adoption Framework (AWS CAF), what are two tasks that a company should perform when planning to migrate to the AWS Cloud and aiming to become more responsive to customer inquiries and feedback as part of their organizational transformation? (Select two)
a. Leverage legacy infrastructure for cost efficiencies
b. Leverage agile methods to rapidly iterate and evolve
c. Organize your teams around bureaucratic design principles
d. Organize your teams around products and value streams
e. Create new analytical insights with existing products and services

A

b. Leverage agile methods to rapidly iterate and evolve

d. Organize your teams around products and value streams

The AWS Cloud Adoption Framework (AWS CAF) leverages AWS experience and best practices to help you digitally transform and accelerate your business outcomes through innovative use of AWS. AWS CAF identifies specific organizational capabilities that underpin successful cloud transformations. These capabilities provide best practice guidance that helps you improve your cloud readiness. AWS CAF groups its capabilities in six perspectives: Business, People, Governance, Platform, Security, and Operations.

Using the AWS Cloud Adoption Framework (AWS CAF), you can reimagine how your business and technology teams create customer value and meet your strategic intent. Organizing your teams around products and value streams while leveraging agile methods to rapidly iterate and evolve will help you become more responsive and customer centric.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
156
Q

According to the AWS Shared Responsibility Model, which of the following are the responsibilities of AWS? (Select two)

a. Installing security patches of the guest operating system (OS)
b. Configuring IAM Roles
c. Network operability
d. Data center security
e. Encrypting application data

A

c. Network operability
d. Data center security

AWS responsibility “Security OF the Cloud” - AWS is responsible for protecting the infrastructure that runs all of the services offered in the AWS Cloud. This infrastructure is composed of the hardware, software, networking, and facilities that run AWS Cloud services.

Exam Alert:

Please review the AWS Shared Responsibility Model in detail as you can expect multiple questions on the shared responsibility model in the exam: https://aws.amazon.com/compliance/shared-responsibility-model/

Incorrect options:

Installing security patches of the guest operating system (OS) - The customers are responsible for patching their guest operating system.

Please review the IT controls under the AWS Shared Responsibility Model: https://aws.amazon.com/compliance/shared-responsibility-model/

Encrypting application data - The customers are responsible for encrypting application data.

Configuring IAM Roles - The customers are responsible for configuring IAM Roles.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
157
Q

According to the AWS Shared Responsibility Model, which of the following are responsibilities of the customer for AWS Identity and Access Management (AWS IAM)? (Select two)

a. Enable multi-factor authentication (MFA) on all accounts
b. Configuration and vulnerability analysis for the underlying software infrastructure
c. Analyze user access patterns and review AWS Identity and Access Management (AWS IAM) permissions
d. Compliance validation for the underlying software infrastructure
e. Manage global network security infrastructure

A

a. Enable multi-factor authentication (MFA) on all accounts
c. Analyze user access patterns and review AWS Identity and Access Management (AWS IAM) permissions

Security and Compliance is a shared responsibility between AWS and the customer. This shared model can help relieve the customer’s operational burden as AWS operates, manages and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates.

Under the AWS Shared Responsibility Model, customers are responsible for enabling multi-factor authentication (MFA) on all accounts, analyzing access patterns and reviewing permissions for AWS Identity and Access Management (AWS IAM) entities.

AWS Shared Responsibility Model Overview: https://aws.amazon.com/compliance/shared-responsibility-model/

158
Q

A gaming company is looking at a technology/service that can deliver a consistent low-latency gameplay to ensure a great user experience for end-users in various locations.

Which AWS technology/service will provide the necessary low-latency access to the end-users?

a. AWS Direct Connect
b. AWS Wavelength
c. AWS Edge Locations
d. AWS Local Zones

A

d. AWS Local Zones

AWS Local Zones allow you to use select AWS services, like compute and storage services, closer to more end-users, providing them very low latency access to the applications running locally. AWS Local Zones are also connected to the parent region via Amazon’s redundant and very high bandwidth private network, giving applications running in AWS Local Zones fast, secure, and seamless access to the rest of AWS services.

You should use AWS Local Zones to deploy workloads closer to your end-users for low-latency requirements. AWS Local Zones have their own connection to the internet and support AWS Direct Connect, so resources created in the Local Zone can serve local end-users with very low-latency communications.

Various AWS services such as Amazon Elastic Compute Cloud (EC2), Amazon Virtual Private Cloud (VPC), Amazon Elastic Block Store (EBS), Amazon FSx, Amazon Elastic Load Balancing, Amazon EMR, Amazon ElastiCache, and Amazon Relational Database Service (RDS) are available locally in the AWS Local Zones. You can also use services that orchestrate or work with local services such as Amazon EC2 Auto Scaling, Amazon EKS clusters, Amazon ECS clusters, Amazon EC2 Systems Manager, Amazon CloudWatch, AWS CloudTrail, and AWS CloudFormation. AWS Local Zones also provide a high-bandwidth, secure connection to the AWS Region, allowing you to seamlessly connect to the full range of services in the AWS Region through the same APIs and toolsets.
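Local Zones are opted into per zone group before use. A minimal boto3 sketch using the Los Angeles zone group as an example:

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Opt the account into a Local Zone group (example group name).
ec2.modify_availability_zone_group(
    GroupName="us-west-2-lax-1",
    OptInStatus="opted-in",
)

# List the Local Zones now visible in the parent Region.
zones = ec2.describe_availability_zones(
    AllAvailabilityZones=True,
    Filters=[{"Name": "zone-type", "Values": ["local-zone"]}],
)
for zone in zones["AvailabilityZones"]:
    print(zone["ZoneName"], zone["OptInStatus"])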

Incorrect options:

AWS Edge Locations - An AWS Edge Location is a site that CloudFront uses to cache copies of the content for faster delivery to users at any location.

AWS Wavelength - AWS Wavelength extends the AWS cloud to a global network of 5G edge locations to enable developers to innovate and build a whole new class of applications that require ultra-low latency. Wavelength Zones provide a high-bandwidth, secure connection to the parent AWS Region, allowing developers to seamlessly connect to the full range of services in the AWS Region through the same APIs and toolsets.

159
Q

An e-commerce company needs to generate custom reports and graphs every week for analyzing the product sales data. The company is looking at a tool/service that will help them analyze this data using interactive dashboards with minimal effort. The dashboards also need to be accessible from any device.

Which AWS tool/service will you recommend for this use-case?

a. AWS Glue
b. Amazon QuickSight
c. Amazon SageMaker
d. Amazon Athena

A

b. Amazon QuickSight

Amazon QuickSight is a scalable, serverless, embeddable, machine learning-powered business intelligence (BI) service built for the cloud. QuickSight lets you easily create and publish interactive BI dashboards that include Machine Learning-powered insights. QuickSight dashboards can be accessed from any device, and seamlessly embedded into your applications, portals, and websites.

With Amazon QuickSight, you can quickly embed interactive dashboards into your applications, websites, and portals. QuickSight provides a rich set of APIs and SDKs that allow you to easily customize the look and feel of the dashboards to match applications. With Amazon QuickSight, you can manage your dashboard versions, grant dashboard authoring privileges, and share usage reports with your end-customers. If your application is used by customers that belong to different teams or organizations, QuickSight ensures that their data is always siloed and secure.

Amazon QuickSight has a serverless architecture that automatically scales to tens of thousands of users without the need to set up, configure, or manage your own servers. It also ensures that your users don’t have to deal with slow dashboards during peak hours when multiple BI users are accessing the same dashboards or datasets. And with pay-per-session pricing, you only pay when your users access the dashboards or reports, which makes it cost-effective for deployments with lots of users. There are no upfront costs or annual commitments for using QuickSight.
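As a hedged illustration of the embedding workflow, this boto3 sketch generates an embeddable dashboard URL for a registered QuickSight user; the account ID, user ARN, and dashboard ID are placeholders:

import boto3

quicksight = boto3.client("quicksight")

response = quicksight.generate_embed_url_for_registered_user(
    AwsAccountId="111122223333",  # placeholder account ID
    UserArn="arn:aws:quicksight:us-east-1:111122223333:user/default/analyst",
    ExperienceConfiguration={
        "Dashboard": {"InitialDashboardId": "sales-dashboard-id"}  # placeholder
    },
)

print(response["EmbedUrl"])  # embed this URL in an application, portal, or website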

How Amazon QuickSight Works: https://aws.amazon.com/quicksight/

Connecting QuickSight to your Data Lakes (e.g. Amazon S3): https://aws.amazon.com/quicksight/

Incorrect options:

AWS Glue - AWS Glue is a serverless data integration service that makes it easy to discover, prepare, and combine data for analytics, machine learning, and application development. AWS Glue provides all of the capabilities needed for data integration, so you can start analyzing your data and putting it to use in minutes instead of months. You should use AWS Glue to discover properties of the data you own, transform it, and prepare it for analytics. Glue can automatically discover both structured and semi-structured data stored in your data lake on Amazon S3, data warehouse in Amazon Redshift, and various databases running on AWS.

Amazon SageMaker - Amazon SageMaker is a fully managed service that provides every developer and data scientist with the ability to build, train, and deploy machine learning (ML) models quickly. SageMaker removes the heavy lifting from each step of the machine learning process to make it easier to develop high-quality models. Amazon SageMaker ensures that ML model artifacts and other system artifacts are encrypted in transit and at rest.

Amazon Athena - Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run. Athena is easy to use. Simply point to your data in Amazon S3, define the schema, and start querying using standard SQL. Most results are delivered within seconds. With Athena, there’s no need for complex ETL jobs to prepare your data for analysis. This makes it easy for anyone with SQL skills to quickly analyze large-scale datasets.

Athena is integrated out-of-the-box with AWS Glue Data Catalog, allowing you to create a unified metadata repository across various services, crawl data sources to discover schemas, and populate your Catalog with new and modified table and partition definitions, and maintain schema versioning.

As discussed in the example above, Athena can be used to analyze data, while QuickSight can be used to visualize this data via advanced interactive dashboards.
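For the Athena half of that pattern, here is a minimal boto3 sketch of running an ad-hoc SQL query over data in S3; the database, table, and output bucket names are assumptions:

import boto3

athena = boto3.client("athena")

response = athena.start_query_execution(
    QueryString="SELECT product, SUM(amount) AS revenue FROM sales GROUP BY product",
    QueryExecutionContext={"Database": "ecommerce"},  # placeholder Glue database
    ResultConfiguration={"OutputLocation": "s3://my-athena-results-bucket/"},
)

print(response["QueryExecutionId"])  # poll get_query_execution for status and results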

160
Q

A web application stores all of its data on Amazon S3 buckets. A client has mandated that data be encrypted before sending it to Amazon S3.

Which of the following is the right technique for encrypting data as needed by the customer?
a. Enable client-side encryption using the AWS Encryption SDK
b. Encryption is enabled by default for all the objects written to Amazon S3. Additional configuration is not required
c. Enable server-side encryption with Amazon S3 Managed Keys (SSE-S3)
d. Enable server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS)

A

a. Enable client-side encryption using the AWS Encryption SDK

The act of encrypting data before sending it to Amazon S3 is termed client-side encryption. The AWS Encryption SDK is a client-side encryption library that is separate from the language-specific SDKs. You can use this encryption library to more easily implement encryption best practices in Amazon S3. Unlike the Amazon S3 encryption clients in the language-specific AWS SDKs, the AWS Encryption SDK is not tied to Amazon S3 and can be used to encrypt or decrypt data to be stored anywhere.
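A minimal sketch of this pattern in Python, assuming the aws-encryption-sdk package (v2 or later) and a KMS key you own; the key ARN and bucket name are placeholders:

import boto3
import aws_encryption_sdk
from aws_encryption_sdk import CommitmentPolicy

client = aws_encryption_sdk.EncryptionSDKClient(
    commitment_policy=CommitmentPolicy.REQUIRE_ENCRYPT_REQUIRE_DECRYPT
)
key_provider = aws_encryption_sdk.StrictAwsKmsMasterKeyProvider(
    key_ids=["arn:aws:kms:us-east-1:111122223333:key/placeholder-key-id"]
)

# Encrypt locally, before the data ever leaves the client
ciphertext, _header = client.encrypt(
    source=b"sensitive payload", key_provider=key_provider
)

# Only the already-encrypted bytes are sent to S3
boto3.client("s3").put_object(
    Bucket="my-app-data-bucket", Key="data.enc", Body=ciphertext
)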

161
Q

An IT company would like to move its IT resources (including any data and applications) from an AWS Region in the US to another AWS Region in Europe. Which of the following represents the correct solution for this use-case?

a. The company should just start creating new resources in the destination AWS Region and then migrate the relevant data and applications into this new AWS Region
b. The company should use AWS CloudFormation to move the resources (including any data and applications) from source AWS Region to destination AWS Region
c. The company should use AWS Database Migration Service (AWS DMS) to move the resources (including any data and applications) from source AWS Region to destination AWS Region
d. The company should raise a ticket with AWS Support for this resource migration

A

a. The company should just start creating new resources in the destination AWS Region and then migrate the relevant data and applications into this new AWS Region

The company needs to create resources in the new AWS Region and then move the relevant data and applications into the new AWS Region. There is no off-the-shelf solution or service that the company can use to facilitate this transition.
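As one concrete illustration of this do-it-yourself migration, you could copy an AMI from the US Region into the European Region and launch new instances from it there (the IDs below are placeholders):

import boto3

# Create the client in the destination Region and pull the image across
ec2_eu = boto3.client("ec2", region_name="eu-west-1")

copy = ec2_eu.copy_image(
    Name="app-server-migrated",
    SourceImageId="ami-0abcdef1234567890",  # placeholder AMI in the US Region
    SourceRegion="us-east-1",
)

print(copy["ImageId"])  # launch new EC2 instances in eu-west-1 from this AMI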

162
Q

Which policy describes prohibited uses of the web services offered by Amazon Web Services?

a. AWS Trusted Advisor
b. AWS Acceptable Use Policy
c. AWS Fair Use Policy
d. AWS Applicable Use Policy

A

b. AWS Acceptable Use Policy

The Acceptable Use Policy describes prohibited uses of the web services offered by Amazon Web Services, Inc. and its affiliates (the “Services”) and the website located at http://aws.amazon.com (the “AWS Site”). This policy is present at https://aws.amazon.com/aup/ and is updated on a need basis by AWS.

163
Q

Which member of the AWS Snow Family is used by edge computing applications for IoT use cases, facilitating the collection and processing of data to gain immediate insights before transferring the data to AWS?

a. AWS Snowball Edge Storage Optimized
b. AWS Snowmobile
c. AWS Snowcone
d. AWS Snowposts

A

c. AWS Snowcone

AWS Snowcone is the smallest member of the AWS Snow Family of edge computing, edge storage, and data transfer devices, weighing in at 4.5 pounds (2.1 kg) with 8 terabytes of usable storage. Snowcone is ruggedized, secure, and purpose-built for use outside of a traditional data center. Its small form factor makes it a perfect fit for tight spaces or where portability is a necessity and network connectivity is unreliable. You can use Snowcone in backpacks for first responders, or for IoT, vehicular, and drone use cases. You can execute compute applications at the edge, and you can ship the device with data to AWS for offline data transfer, or you can transfer data online with AWS DataSync from edge locations.

Like AWS Snowball, Snowcone has multiple layers of security and encryption. You can use either of these services to run edge computing workloads, or to collect, process, and transfer data to AWS. Snowcone is designed for data migration needs up to 8 terabytes per device and from space-constrained environments where AWS Snowball devices will not fit.

How AWS Snowcone works: https://aws.amazon.com/snowcone/

Feature comparison in members of Snow Family: https://aws.amazon.com/snow/#Feature_comparison

Incorrect options:

AWS Snowball Edge Storage Optimized - AWS Snowball, a part of the AWS Snow Family, is an edge computing, data migration, and edge storage device that comes in two options. Snowball Edge Storage Optimized devices provide both block storage and Amazon S3-compatible object storage, and 40 vCPUs. They are well suited for local storage and large scale data transfer.

AWS Snowball Edge Compute Optimized devices provide 52 vCPUs, block and object storage, and an optional GPU for use cases like advanced machine learning and full-motion video analysis in disconnected environments. You can use these devices for data collection, machine learning and processing, and storage in environments with intermittent connectivity or in extremely remote locations before shipping them back to AWS.

AWS Snowposts - This is a made-up option, used only as a distractor.

AWS Snowmobile - AWS Snowmobile is an exabyte-scale data transfer service used to move extremely large amounts of data to AWS. You can transfer up to 100PB per Snowmobile, a 45-foot-long ruggedized shipping container pulled by a semi-trailer truck. Snowmobile makes it easy to move massive volumes of data to the cloud, including video libraries, image repositories, or even a complete data center migration. Transferring data with Snowmobile is secure, fast, and cost-effective.

164
Q

Which AWS service can be used to automate code deployment to Amazon Elastic Compute Cloud (Amazon EC2) instances as well as on-premises instances?

a. AWS CodePipeline
b. AWS CloudFormation
c. AWS CodeCommit
d. AWS CodeDeploy

A

d. AWS CodeDeploy

AWS CodeDeploy is a service that automates code deployments to any instance, including Amazon EC2 instances and instances running on-premises. AWS CodeDeploy makes it easier for you to rapidly release new features, helps you avoid downtime during deployment, and handles the complexity of updating your applications. You can use AWS CodeDeploy to automate deployments, eliminating the need for error-prone manual operations, and the service scales with your infrastructure so you can easily deploy to one instance or thousands.
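For illustration, a minimal boto3 sketch that kicks off a CodeDeploy deployment from an application bundle in S3; the application, deployment group, and bucket names are assumptions (the deployment group determines which EC2 and on-premises instances receive the revision):

import boto3

codedeploy = boto3.client("codedeploy")

deployment = codedeploy.create_deployment(
    applicationName="web-app",           # placeholder application
    deploymentGroupName="prod-fleet",    # targets EC2/on-premises instances
    revision={
        "revisionType": "S3",
        "s3Location": {
            "bucket": "deploy-artifacts-bucket",
            "key": "web-app/v42.zip",
            "bundleType": "zip",
        },
    },
)

print(deployment["deploymentId"])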

Incorrect options:

AWS CodeCommit - AWS CodeCommit is a fully-managed source control service that hosts secure Git-based repositories. It makes it easy for teams to collaborate on code in a secure and highly scalable ecosystem. CodeCommit eliminates the need to operate your own source control system or worry about scaling its infrastructure. It cannot be used to automate code deployment.

AWS CloudFormation - AWS CloudFormation allows you to use programming languages or a simple text file to model and provision, in an automated and secure manner, all the resources needed for your applications across all regions and accounts. It cannot be used to automate code deployment.

AWS CodePipeline - AWS CodePipeline is a continuous delivery service that enables you to model, visualize, and automate the steps required to release your software. With AWS CodePipeline, you model the full release process for building your code, deploying to pre-production environments, testing your application and releasing it to production.

AWS CodePipeline integrates with AWS services such as AWS CodeCommit, Amazon S3, AWS CodeBuild, AWS CodeDeploy, AWS Elastic Beanstalk, AWS CloudFormation, AWS OpsWorks, Amazon ECS, and AWS Lambda. To further elucidate, CodePipeline cannot by itself deploy the code, it can integrate with CodeDeploy for the actual deployment.

165
Q

An IT company wants to run a log backup process every Monday at 2 AM. The usual runtime of the process is 5 minutes. As a Cloud Practitioner, which AWS services would you recommend to build a serverless solution for this use-case? (Select two)

a. AWS Step Functions
b. AWS Lambda
c. Amazon Elastic Compute Cloud (Amazon EC2)
d. Amazon EventBridge
e. AWS Systems Manager

A

b. AWS Lambda
d. Amazon EventBridge

Amazon EventBridge - Amazon EventBridge is a service that provides real-time access to changes in data in AWS services, your own applications, and software as a service (SaaS) applications without writing code. Amazon EventBridge Scheduler is a serverless task scheduler that simplifies creating, executing, and managing millions of schedules across AWS services without provisioning or managing underlying infrastructure.

Amazon EventBridge Scheduler: https://aws.amazon.com/eventbridge/

AWS Lambda - AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume. A Lambda function has a maximum execution time of 15 minutes, so it can comfortably run this 5-minute log backup process.

To build the solution for the given use-case, you can leverage Amazon EventBridge Scheduler to trigger the backup process on a schedule. You can then set the Lambda function as the target for this schedule.
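A minimal boto3 sketch of that wiring, assuming the Lambda function and an IAM role that allows EventBridge Scheduler to invoke it already exist (both ARNs are placeholders):

import boto3

scheduler = boto3.client("scheduler")

scheduler.create_schedule(
    Name="weekly-log-backup",
    ScheduleExpression="cron(0 2 ? * MON *)",  # every Monday at 2 AM
    FlexibleTimeWindow={"Mode": "OFF"},
    Target={
        "Arn": "arn:aws:lambda:us-east-1:111122223333:function:log-backup",
        "RoleArn": "arn:aws:iam::111122223333:role/scheduler-invoke-role",
    },
)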

166
Q

Which of the following is correct regarding the Amazon Relational Database Service (Amazon RDS) service?

a. You can use both read replicas and multi-AZ deployment for disaster recovery
b. You can use read replicas for disaster recovery only and multi-AZ deployment for improved read performance only
c. You can use read replicas for improved read performance only and multi-AZ deployment for disaster recovery only
d. You can use both read replicas and multi-AZ deployment having single standby for improved read performance

A

a. You can use both read replicas and multi-AZ deployment for disaster recovery

Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. Read replicas allow you to create read-only copies that are synchronized with your master database. Read replicas are used for improved read performance. You can also place your read replica in a different AWS Region closer to your users for better performance. Using a cross-Region read replica can also help ensure that you get back up and running if you experience a regional availability issue in case of a disaster. Read replicas are an example of horizontal scaling of resources.

Read Replica Overview: https://aws.amazon.com/rds/features/multi-az/

Amazon RDS Multi-AZ deployments provide enhanced availability and durability for RDS database (DB) instances, making them a natural fit for production database workloads. When you provision an Amazon RDS Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Amazon RDS performs an automatic failover to the standby so that you can resume database operations as soon as the failover is complete. Since the endpoint for your DB Instance remains the same after a failover, your application can resume database operation without the need for manual administrative intervention. Think of multi-AZ deployment as enhancing the availability and reliability of your system. For a disaster event based on disruption or loss of one physical data center, implementing a highly available workload in multiple Availability Zones within a single AWS Region helps mitigate against natural and technical disasters. So, RDS Multi-AZ can be used to address DR requirements.

In addition, Amazon RDS Multi-AZ with two readable standbys maximizes read performance and scalability.
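For illustration, a boto3 sketch showing both levers side by side; the instance identifiers are placeholders:

import boto3

rds = boto3.client("rds")

# Scale reads: add a read-only copy (optionally in another Region)
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="orders-db-replica-1",
    SourceDBInstanceIdentifier="orders-db",
)

# Improve availability/DR: convert the primary to a Multi-AZ deployment
rds.modify_db_instance(
    DBInstanceIdentifier="orders-db",
    MultiAZ=True,
    ApplyImmediately=True,
)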

To understand the Amazon Relational Database Service (Amazon RDS) disaster recovery capabilities in more detail, you can refer to this excellent AWS blog: https://aws.amazon.com/blogs/database/implementing-a-disaster-recovery-strategy-with-amazon-rds/

167
Q

Which of the following statements are correct regarding the health monitoring and reporting capabilities supported by AWS Elastic Beanstalk? (Select two)

a. In a single instance environment, AWS Elastic Beanstalk determines the instance’s health by monitoring the Elastic Load Balancing (ELB) health settings
b. The AWS Elastic Beanstalk health monitoring can determine that the environment’s Auto Scaling group is available and has a minimum of at least one instance
c. The basic health reporting system that provides information about the health of instances in an AWS Elastic Beanstalk environment does not use health checks performed by Elastic Load Balancing (ELB)
d. With basic health reporting, the AWS Elastic Beanstalk service does not publish any metrics to Amazon CloudWatch
e. AWS Elastic Beanstalk provides only a basic health reporting system; combined with Elastic Load Balancing (ELB), they provide advanced health check features

A

b. The AWS Elastic Beanstalk health monitoring can determine that the environment’s Auto Scaling group is available and has a minimum of at least one instance
d. With basic health reporting, the AWS Elastic Beanstalk service does not publish any metrics to Amazon CloudWatch

In addition to Elastic Load Balancing health checks, AWS Elastic Beanstalk monitors resources in your environment and changes health status to red if they fail to deploy, are not configured correctly, or become unavailable. These checks confirm that:
1. The environment’s Auto Scaling group is available and has a minimum of at least one instance.
2. The environment’s security group is available and is configured to allow incoming traffic on port 80.
3. The environment CNAME exists and is pointing to the right load balancer.
4. In a worker environment, the Amazon Simple Queue Service (Amazon SQS) queue is being polled at least once every three minutes.

With basic health reporting, the AWS Elastic Beanstalk service does not publish any metrics to Amazon CloudWatch. The CloudWatch metrics used to produce graphs on the Monitoring page of the environment console are published by the resources in your environment.
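For illustration, a boto3 sketch that reads the basic health color of an environment (the environment name is a placeholder):

import boto3

eb = boto3.client("elasticbeanstalk")

envs = eb.describe_environments(EnvironmentNames=["my-web-env"])
for env in envs["Environments"]:
    # Basic health is reported as Green / Yellow / Red / Grey
    print(env["EnvironmentName"], env["Health"])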

Incorrect options:

AWS Elastic Beanstalk provides only a basic health reporting system; combined with Elastic Load Balancing (ELB), they provide advanced health check features - This option has been added as a distractor.

In a single instance environment, AWS Elastic Beanstalk determines the instance’s health by monitoring the Elastic Load Balancing (ELB) health settings - In a single instance or worker tier environment, AWS Elastic Beanstalk determines the instance’s health by monitoring its Amazon EC2 instance status. Elastic Load Balancing health settings, including HTTP health check URLs cannot be used in these environment types.

The basic health reporting system that provides information about the health of instances in an AWS Elastic Beanstalk environment does not use health checks performed by Elastic Load Balancing (ELB) - The basic health reporting system provides information about the health of instances in an AWS Elastic Beanstalk environment based on health checks performed by Elastic Load Balancing for load-balanced environments, or Amazon Elastic Compute Cloud (Amazon EC2) for single-instance environments.

168
Q

As per the AWS Shared Responsibility Model, Security and Compliance is a shared responsibility between AWS and the customer. Which of the following security services/utilities falls under the purview of AWS under the AWS Shared Responsibility Model?

a. AWS Shield Standard
b. AWS Shield Advanced
c. Security group
d. AWS Web Application Firewall (AWS WAF)

A

a. AWS Shield Standard

AWS Shield is a managed service that protects against Distributed Denial of Service (DDoS) attacks for applications running on AWS. AWS Shield Standard is enabled for all AWS customers at no additional cost. AWS Shield Standard automatically protects your web applications running on AWS against the most common, frequently occurring DDoS attacks. You can get the full benefits of AWS Shield Standard by following the best practices of DDoS resiliency on AWS. Because AWS Shield Standard is automatically activated for all AWS customers with no options for customization, AWS manages the maintenance and configuration of this service. Hence this service falls under the purview of AWS.

Incorrect options:

AWS Web Application Firewall (AWS WAF) - AWS Web Application Firewall (AWS WAF) is a web application firewall that lets you monitor the HTTP and HTTPS requests that are forwarded to an Amazon API Gateway API, Amazon CloudFront or an Application Load Balancer. AWS Web Application Firewall (AWS WAF) also lets you control access to your content. AWS Web Application Firewall (AWS WAF) has to be enabled by the customer and comes under the customer’s responsibility.

AWS Shield Advanced - For higher levels of protection against attacks, you can subscribe to AWS Shield Advanced. As an AWS Shield Advanced customer, you can contact a 24x7 DDoS response team (DRT) for assistance during a DDoS attack. You also have exclusive access to advanced, real-time metrics and reports for extensive visibility into attacks on your AWS resources. Customers need to subscribe to AWS Shield Advanced and need to pay for this service. It falls under customer responsibility per the AWS Shared Responsibility Model.

Security group - A Security group acts as a virtual firewall for the Amazon EC2 instance to control incoming and outgoing traffic. Inbound rules control the incoming traffic to your instance, and outbound rules control the outgoing traffic from your instance. Security groups are the responsibility of the customer.

169
Q

Due to regulatory guidelines, a company needs to encrypt data as it passes through the different layers of its AWS architecture. The company is reviewing the capabilities of the various AWS services and their encryption options.

Which of the below services are encrypted by default and need no user intervention to enable encryption?

a. AWS CloudTrail Logs, Amazon S3 Glacier, AWS Storage Gateway
b. AWS Organizations, Amazon EC2, AWS CloudTrail Logs
c. Amazon CloudWatch logs, Application Load Balancer (ALB), Amazon S3 Glacier
d. AWS Storage Gateway, Application Load Balancer (ALB), Amazon CloudFront

A

a. AWS CloudTrail Logs, Amazon S3 Glacier, AWS Storage Gateway

By default, all data stored by AWS Storage Gateway in S3 is encrypted server-side with Amazon S3-Managed Encryption Keys (SSE-S3). Also, you can optionally configure different gateway types to encrypt stored data with AWS Key Management Service (KMS) via the Storage Gateway API.

Data at rest stored in S3 Glacier is automatically server-side encrypted using 256-bit Advanced Encryption Standard (AES-256) with keys maintained by AWS. If you prefer to manage your own keys, you can also use client-side encryption before storing data in S3 Glacier.

By default, the log files delivered by AWS CloudTrail to your bucket are encrypted by Amazon server-side encryption with Amazon S3-managed encryption keys (SSE-S3). To provide a security layer that is directly manageable, you can instead use server-side encryption with AWS KMS–managed keys (SSE-KMS) for your CloudTrail log files. To use SSE-KMS with CloudTrail, you create and manage a KMS key, also known as a customer master key (CMK).

Incorrect options:

Amazon CloudWatch logs, Application Load Balancer (ALB), Amazon S3 Glacier - Encryption at rest and encryption in transit are configurable features in Application Load Balancer.

AWS Organizations, Amazon EC2, AWS CloudTrail Logs - AWS Organizations helps you centrally manage and govern your environment as you grow and scale your AWS resources.

Instance storage provides temporary block-level storage for Amazon EC2 instances. This storage is located on disks attached physically to a host computer. By default, files stored on these disks are not encrypted. Amazon Elastic Block Store (Amazon EBS) volumes attached to EC2 instances are not encrypted by default either.

AWS Storage Gateway, Application Load Balancer (ALB), Amazon CloudFront - Amazon CloudFront does not encrypt data by default. But, encryption can be enabled if needed, by configuring encryption in transit and encryption at rest, for your distributions.

170
Q

Which of the following is an INCORRECT statement about Scaling, a design principle of the Reliability pillar of the AWS Well-Architected Framework?
a. A scale out operation implies you scale by adding more instances to your existing pool of resources
b. Fault tolerance is achieved by a scale out operation
c. A scale up operation implies you scale by adding more power (CPU, RAM) to your existing machine/node
d. Fault tolerance is achieved by a scale up operation

A

d. Fault tolerance is achieved by a scale up operation

A scale up operation is constrained to be running its processes on only one computer. In such systems, the only way to increase performance is to add more resources into one computer in the form of faster CPUs, memory or storage. Fault tolerance is not possible for such scaling operations since a single instance is prone to failure.

171
Q

A research lab needs to be notified in case of a configuration change for security and compliance reasons. Which AWS service can assist with this task?

a. AWS Secrets Manager
b. AWS Config
c. Amazon Inspector
d. AWS Trusted Advisor

A

b. AWS Config

AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. AWS Config continuously monitors and records your AWS resource configurations, and it can stream configuration changes and notifications to an Amazon SNS topic, so the research lab can be notified whenever a configuration change occurs.
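A minimal boto3 sketch of pointing the AWS Config delivery channel at an SNS topic so that configuration changes generate notifications; it assumes a configuration recorder is already set up, and the bucket and topic names are placeholders:

import boto3

config = boto3.client("config")

config.put_delivery_channel(
    DeliveryChannel={
        "name": "default",
        "s3BucketName": "config-history-bucket",  # placeholder bucket
        "snsTopicARN": "arn:aws:sns:us-east-1:111122223333:config-changes",
    }
)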

172
Q

Which of the following AWS services can be used to generate, use, and manage encryption keys on the AWS Cloud?

a. AWS Secrets Manager
b. Amazon Inspector
c. Amazon GuardDuty
d. AWS CloudHSM

A

d. AWS CloudHSM

The AWS CloudHSM service helps you meet corporate, contractual, and regulatory compliance requirements for data security by using dedicated Hardware Security Module (HSM) instances within the AWS cloud.

AWS CloudHSM allows you to securely generate, store, and manage cryptographic keys used for data encryption in a way that keys are accessible only to you.

How AWS CloudHSM works: https://aws.amazon.com/cloudhsm/

Incorrect options:

Amazon Inspector - Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. Amazon Inspector automatically assesses applications for exposure, vulnerabilities, and deviations from best practices. It cannot be used to generate, use, and manage encryption keys.

Amazon GuardDuty - Amazon GuardDuty is a threat detection service that continuously monitors for malicious or unauthorized behavior to help you protect your AWS accounts and workloads. It cannot be used to generate, use, and manage encryption keys.

AWS Secrets Manager - AWS Secrets Manager helps you protect secrets needed to access your applications, services, and IT resources. The service enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. Secrets Manager relies on AWS KMS to encrypt the secrets it stores; it is not itself a service for generating and managing encryption keys.

173
Q

Which AWS Route 53 routing policy would you use to route traffic to multiple resources and also choose how much traffic is routed to each resource?
a. Simple routing
b. Latency-based routing
c. Weighted routing
d. Failover routing

A

c. Weighted routing

Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service. It is designed to give developers and businesses an extremely reliable and cost-effective way to route end users to Internet applications by translating names like www.example.com into the numeric IP addresses like 192.0.2.1 that computers use to connect to each other.

Weighted routing lets you associate multiple resources with a single domain name (example.com) or subdomain name (acme.example.com) and choose how much traffic is routed to each resource. This can be useful for a variety of purposes, including load balancing and testing new versions of software. To configure weighted routing, you create records that have the same name and type for each of your resources. You assign each record a relative weight that corresponds with how much traffic you want to send to each resource. Amazon Route 53 sends traffic to a resource based on the weight that you assign to the record as a proportion of the total weight for all records in the group.
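For illustration, a boto3 sketch creating two weighted records that send roughly 90% of traffic to a "blue" endpoint and 10% to a "green" one; the hosted zone ID and IP addresses are placeholders:

import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0PLACEHOLDER",
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "A",
                    "SetIdentifier": "blue",
                    "Weight": 90,
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "192.0.2.10"}],
                },
            },
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "A",
                    "SetIdentifier": "green",
                    "Weight": 10,
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "192.0.2.20"}],
                },
            },
        ]
    },
)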

Route 53 Routing Policy Overview: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html

174
Q

An online gaming company wants to block users from certain geographies from accessing its content. Which AWS service can be used to accomplish this task?

a. Amazon CloudWatch
b. AWS Web Application Firewall (AWS WAF)
c. Security group
d. AWS Shield

A

b. AWS Web Application Firewall (AWS WAF)

AWS Web Application Firewall (AWS WAF) is a web application firewall that helps protect web applications from attacks by allowing you to configure rules that allow, block, or monitor (count) web requests based on conditions that you define. These conditions include IP addresses, HTTP headers, HTTP body, URI strings, SQL injection, and cross-site scripting. You can use a geographic (geo) match rule to block requests originating from specific countries. The accuracy of the IP address to country lookup database that powers this feature varies by Region. Based on recent tests, AWS mentions that the overall accuracy for the IP address to country mapping is 99.8%.
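A hedged boto3 sketch of a web ACL with a geo match rule (the country codes and names are placeholders; a CLOUDFRONT-scoped ACL must be created through the us-east-1 endpoint):

import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

wafv2.create_web_acl(
    Name="geo-block-acl",
    Scope="CLOUDFRONT",  # use "REGIONAL" for ALB or API Gateway
    DefaultAction={"Allow": {}},
    Rules=[
        {
            "Name": "block-selected-countries",
            "Priority": 0,
            "Statement": {"GeoMatchStatement": {"CountryCodes": ["CU", "KP"]}},
            "Action": {"Block": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "GeoBlockRule",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "GeoBlockAcl",
    },
)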

How AWS Web Application Firewall (AWS WAF) Works: https://aws.amazon.com/waf/

Incorrect options:

Security group - A security group acts as a virtual firewall for your EC2 instances to control incoming and outgoing traffic. Inbound rules control the incoming traffic to your instance, and outbound rules control the outgoing traffic from your instance. Security groups only have “allow” rules. You cannot use the security groups to block users from certain geographies.

175
Q

Which of the following is the best way to protect your data from accidental deletion on Amazon S3?

a. Amazon S3 lifecycle configuration
b. Amazon S3 storage classes
c. Amazon S3 Versioning
d. Amazon S3 Transfer Acceleration (Amazon S3TA)

A

c. Amazon S3 Versioning

Versioning is a means of keeping multiple variants of an object in the same bucket. You can use versioning to preserve, retrieve, and restore every version of every object stored in your Amazon S3 bucket. With versioning, you can easily recover from both unintended user actions and application failures.

Versioning-enabled buckets enable you to recover objects from accidental deletion or overwrite. For example, if you delete an object, instead of removing it permanently, Amazon S3 inserts a delete marker, which becomes the current object version.
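For illustration, a boto3 sketch that enables versioning and then lists versions (including delete markers) for recovery; the bucket and key names are placeholders:

import boto3

s3 = boto3.client("s3")

# Once enabled, deletes create delete markers instead of destroying data
s3.put_bucket_versioning(
    Bucket="my-data-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)

# Inspect all versions of an object to recover an earlier one
versions = s3.list_object_versions(Bucket="my-data-bucket", Prefix="report.csv")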

Amazon S3 Versioning Overview: https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectVersioning.html

Incorrect options:

Amazon S3 lifecycle configuration - To manage your Amazon S3 objects so that they are stored cost-effectively throughout their lifecycle, configure their Amazon S3 Lifecycle. With Amazon S3 Lifecycle configuration rules, you can tell Amazon S3 to transition objects to less expensive storage classes, or archive or delete them. Amazon S3 Lifecycle configuration does the heavy lifting of moving your data into cost-effective storage classes without user intervention. Amazon S3 Lifecycle configuration is not meant to protect from accidental deletion of data.

Amazon S3 storage classes - Amazon S3 offers a range of storage classes designed for different use cases. These include S3 Standard for general-purpose storage of frequently accessed data; S3 Intelligent-Tiering for data with unknown or changing access patterns; S3 Standard-Infrequent Access (S3 Standard-IA) and S3 One Zone-Infrequent Access (S3 One Zone-IA) for long-lived, but less frequently accessed data; and Amazon S3 Glacier (S3 Glacier) and Amazon S3 Glacier Deep Archive (S3 Glacier Deep Archive) for long-term archive and digital preservation. Storage classes are for different storage pattern needs that customers have, and not a data protection mechanism for S3.

Amazon S3 Transfer Acceleration (Amazon S3TA) - Amazon S3 Transfer Acceleration (Amazon S3TA) enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket. Amazon S3 Transfer Acceleration (Amazon S3TA) takes advantage of Amazon CloudFront’s globally distributed edge locations. As the data arrives at an edge location, data is routed to Amazon S3 over an optimized network path. Amazon S3 Transfer Acceleration (Amazon S3TA) cannot be used to protect from accidental deletion of data.

176
Q

Which AWS service would you use to send alerts when the costs for your AWS account exceed your budgeted amount?

a. AWS Organizations
b. AWS Budgets
c. AWS Cost Explorer
d. AWS Pricing Calculator

A

b. AWS Budgets

AWS Budgets lets you set custom budgets to track your costs and usage, and it sends alerts (for example, via email or Amazon SNS) when your actual or forecasted costs exceed your budgeted threshold. By contrast, AWS Cost Explorer is used to visualize and analyze costs over time, and the AWS Pricing Calculator is used to estimate costs before deployment; neither sends budget alerts.
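For illustration, a boto3 sketch that creates a monthly cost budget with an email alert at 80% of the budgeted amount; the account ID, amount, and address are placeholders:

import boto3

budgets = boto3.client("budgets")

budgets.create_budget(
    AccountId="111122223333",
    Budget={
        "BudgetName": "monthly-cost-budget",
        "BudgetLimit": {"Amount": "500", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[
        {
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80.0,  # percent of the budgeted amount
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {"SubscriptionType": "EMAIL", "Address": "finance@example.com"}
            ],
        }
    ],
)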

177
Q

A team lead is reviewing the AWS services that can be used in the development workflow for his company. Which of the following statements are correct regarding the capabilities of these AWS services? (Select three)

a. Each AWS CodeStar project includes development tools, including AWS CodePipeline, AWS CodeCommit, AWS CodeBuild, and AWS CodeDeploy, that can be used on their own and with existing AWS applications
b. AWS CodePipeline uses Amazon CloudWatch Events to detect changes in CodeCommit repositories used as a source for a pipeline
c. AWS CodeStar is a cloud-based integrated development environment that lets you write, run, and debug your code with just a browser
d. AWS CodeCommit allows you to run builds and tests as part of your AWS CodePipeline
e. AWS CodeBuild is directly integrated with both AWS CodePipeline and AWS CodeCommit
f. You can use AWS CodeStar and AWS Cloud9 to develop, build, and deploy a serverless web application

A

a. Each AWS CodeStar project includes development tools, including AWS CodePipeline, AWS CodeCommit, AWS CodeBuild, and AWS CodeDeploy, that can be used on their own and with existing AWS applications
b. AWS CodePipeline uses Amazon CloudWatch Events to detect changes in CodeCommit repositories used as a source for a pipeline
f. You can use AWS CodeStar and AWS Cloud9 to develop, build, and deploy a serverless web application

AWS CodeStar accelerates software release with the help of AWS CodePipeline, a continuous integration and continuous delivery (CI/CD) service. Each project comes pre-configured with an automated pipeline that continuously builds, tests, and deploys your code with each commit. AWS CodeStar integrates with AWS CodeDeploy and AWS CloudFormation so that you can easily update your application code and deploy to Amazon EC2 and AWS Lambda.

AWS CodePipeline uses Amazon CloudWatch Events to detect changes in CodeCommit repositories used as a source for a pipeline. Each source action has a corresponding event rule. This event rule starts your pipeline when a change occurs in the repository.

AWS CodeStar is a cloud-based development service that provides the tools you need to quickly develop, build, and deploy applications on AWS. AWS Cloud9 is a cloud-based integrated development environment (IDE) that lets you write, run, and debug your code with just a browser. Together, the two services can be used to build serverless applications in very little time.

Incorrect options:

AWS CodeBuild is directly integrated with both AWS CodePipeline and AWS CodeCommit - AWS CodeCommit can trigger a Lambda function that in turn invokes a CodeBuild job; therefore, CodeBuild has an indirect integration with CodeCommit. However, AWS CodePipeline is directly integrated with both AWS CodeBuild and AWS CodeCommit, because CodePipeline can use source action integrations with CodeCommit and build action integrations with CodeBuild.

AWS CodeCommit allows you to run builds and tests as part of your AWS CodePipeline - AWS CodeCommit is a version control service that you can use to privately store and manage assets (such as documents, source code, and binary files) in the cloud. AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages that are ready to deploy. AWS CodeBuild allows you to run builds and tests as part of your pipeline.

AWS CodeStar is a cloud-based integrated development environment that lets you write, run, and debug your code with just a browser - AWS CodeStar is a cloud-based development service that provides the tools you need to quickly develop, build, and deploy applications on AWS. AWS Cloud9 is a cloud-based integrated development environment (IDE) that lets you write, run, and debug your code with just a browser.

Here is an example to explain the collaboration between these services - You can use AWS CodeStar to build a new AWS Lambda based Node.js serverless web application. You will use AWS CodeStar to set up a continuous delivery mechanism using AWS CodeCommit for source control and AWS CodePipeline to automate your release process. You can then change some code in the Node.js project using Cloud9 and commit the change to trigger your continuous pipeline and redeploy your project.

178
Q

A weather-tracking application is built using Amazon DynamoDB. The performance of the application has been consistently good. But lately, the team has realized that during holidays and travel seasons, the load on the application is high and the read requests consume most of the database resources, thereby drastically increasing the overall application latency.

Which feature/service will help resolve this issue?

a. Amazon DynamoDB Regulator
b. Amazon ElastiCache
c. Amazon CloudFront
d. Amazon DynamoDB Accelerator

A

d. Amazon DynamoDB Accelerator

Amazon DynamoDB is designed for scale and performance. In most cases, the DynamoDB response times can be measured in single-digit milliseconds. However, there are certain use cases that require response times in microseconds. For these use cases, Amazon DynamoDB Accelerator (DAX) delivers fast response times for accessing eventually consistent data.

DAX is a DynamoDB-compatible caching service that enables you to benefit from fast in-memory performance for demanding applications. DAX addresses three core scenarios:
1. As an in-memory cache, DAX reduces the response times of eventually consistent read workloads by an order of magnitude from single-digit milliseconds to microseconds.
2. DAX reduces operational and application complexity by providing a managed service that is API-compatible with DynamoDB. Therefore, it requires only minimal functional changes to use with an existing application.
3. For read-heavy or bursty workloads, DAX provides increased throughput and potential operational cost savings by reducing the need to overprovision read capacity units. This is especially beneficial for applications that require repeated reads for individual keys.

How Amazon DynamoDB Accelerator (DAX) Works: https://aws.amazon.com/dynamodb/dax/
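A hedged sketch of reading through DAX from Python, assuming the amazon-dax-client package; the cluster endpoint, table, and key names are placeholders:

import botocore.session
from amazondax import AmazonDaxClient  # pip install amazon-dax-client

session = botocore.session.get_session()

dax = AmazonDaxClient(
    session,
    region_name="us-east-1",
    endpoints=["weather-dax.abc123.dax-clusters.us-east-1.amazonaws.com:8111"],
)

# Same call shape as the low-level DynamoDB client, but reads hit the cache
item = dax.get_item(
    TableName="WeatherReadings",
    Key={"StationId": {"S": "station-42"}},
)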

Incorrect options:

Amazon DynamoDB Regulator - This is a made-up option, used only as a distractor.

Amazon ElastiCache - Amazon ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory data store and cache in the cloud. The service improves the performance of web applications by allowing you to retrieve information from fast, managed, in-memory data stores, instead of relying entirely on slower disk-based databases. Amazon ElastiCache supports two open-source in-memory engines: Amazon ElastiCache for Redis, and Amazon ElastiCache for Memcached. AWS recommends using Amazon DynamoDB Accelerator (DAX) for DynamoDB, which is an out-of-box caching solution for DynamoDB.

Amazon CloudFront - Amazon CloudFront is a global content delivery network (CDN) service that accelerates delivery of your websites, APIs, video content or other web assets. It integrates with other Amazon Web Services products to give developers and businesses an easy way to accelerate content to end-users with no minimum usage commitments. It is not a caching solution.

179
Q

A financial services enterprise plans to enable Multi-Factor Authentication (MFA) for its employees. For ease of travel, they prefer not to use any physical devices to implement Multi-Factor Authentication (MFA). Which of the below options is best suited for this use case?

a. Soft Token Multi-Factor Authentication (MFA) device
b. U2F security key
c. Hardware Multi-Factor Authentication (MFA) device
d. Virtual Multi-Factor Authentication (MFA) device

A

d. Virtual Multi-Factor Authentication (MFA) device

A virtual Multi-Factor Authentication (MFA) device is a software app that runs on a phone or other device and emulates a physical device. The device generates a six-digit numeric code based upon a time-synchronized one-time password algorithm. The user must type a valid code from the device on a second webpage during sign-in. Each virtual Multi-Factor Authentication (MFA) device assigned to a user must be unique. A user cannot type a code from another user’s virtual Multi-Factor Authentication (MFA) device to authenticate.
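For illustration, a boto3 sketch of assigning a virtual MFA device to an IAM user; the user and device names are placeholders, and the two codes come from the user's authenticator app after scanning the seed:

import boto3

iam = boto3.client("iam")

device = iam.create_virtual_mfa_device(VirtualMFADeviceName="jdoe-virtual-mfa")
serial_number = device["VirtualMFADevice"]["SerialNumber"]

# Confirm enrollment with two consecutive codes from the authenticator app
iam.enable_mfa_device(
    UserName="jdoe",
    SerialNumber=serial_number,
    AuthenticationCode1="123456",
    AuthenticationCode2="789012",
)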

180
Q

Which of the following AWS Support plans is the MOST cost-effective when getting enhanced technical support by Cloud Support Engineers?

a. AWS Enterprise Support
b. AWS Developer Support
c. AWS Business Support
d. AWS Basic Support

A

c. AWS Business Support

AWS recommends AWS Business Support if you have production workloads on AWS and want 24x7 phone, email, and chat access to technical support and architectural guidance in the context of your specific use cases. You get full access to AWS Trusted Advisor Best Practice Checks. It is also the cheapest support plan that provides enhanced technical support by Cloud Support Engineers.

Exam Alert:

Please review the differences between the AWS Developer Support, AWS Business Support, and AWS Enterprise Support plans as you can expect at least a couple of questions on the exam: https://aws.amazon.com/premiumsupport/plans/

Incorrect options:

AWS Developer Support - AWS recommends AWS Developer Support if you are testing or doing early development on AWS and want the ability to get technical support during business hours as well as general architectural guidance as you build and test. It provides enhanced technical support by Cloud Support Associates.

AWS Basic Support - The AWS Basic Support plan is included for all AWS customers. It does not provide enhanced technical support.

AWS Enterprise Support - AWS Enterprise Support provides customers with concierge-like service where the main focus is helping the customer achieve their outcomes and find success in the cloud. With Enterprise Support, you get 24x7 technical support from high-quality engineers, tools, and technology to automatically manage the health of your environment, consultative architectural guidance delivered in the context of your applications and use cases, and a designated Technical Account Manager (TAM) to coordinate access to proactive/preventative programs and AWS subject matter experts. It provides enhanced technical support by Cloud Support Engineers but is more expensive than the Business support plan.

181
Q

A media company uploads its media (audio and video) files to a centralized Amazon Simple Storage Service (Amazon S3) bucket from geographically dispersed locations. Which of the following solutions can the company use to optimize transfer speeds?

a. Amazon CloudFront
b. Amazon S3 Transfer Acceleration (S3TA)
c. AWS Direct Connect
d. AWS Global Accelerator

A

b. Amazon S3 Transfer Acceleration (S3TA)

Amazon S3 Transfer Acceleration (S3TA) enables fast, easy, and secure transfers of files over long distances between your client and your Amazon S3 bucket. S3 Transfer Acceleration leverages Amazon CloudFront’s globally distributed AWS Edge Locations. As data arrives at an AWS Edge Location, data is routed to your Amazon S3 bucket over an optimized network path. S3 Transfer Acceleration is designed to optimize transfer speeds from across the world into S3 buckets. If you are uploading to a centralized bucket from geographically dispersed locations, or if you regularly transfer GBs or TBs of data across continents, you may save hours or days of data transfer time with S3 Transfer Acceleration.
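For illustration, a boto3 sketch that enables Transfer Acceleration on the central bucket and then uploads through the accelerated endpoint; the bucket and file names are placeholders:

import boto3
from botocore.config import Config

s3 = boto3.client("s3")
s3.put_bucket_accelerate_configuration(
    Bucket="central-media-bucket",
    AccelerateConfiguration={"Status": "Enabled"},
)

# Route the upload through the nearest AWS Edge Location
s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
s3_accel.upload_file("video.mp4", "central-media-bucket", "uploads/video.mp4")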

Amazon S3 Transfer Acceleration (S3TA) benefits: https://aws.amazon.com/s3/transfer-acceleration/

Incorrect options:

Amazon CloudFront - Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds, all within a developer-friendly environment. CloudFront is used for content delivery rather than for data uploads. CloudFront caches data, so a subsequent request for a webpage will not go to the origin server but will be served from the cache. S3 Transfer Acceleration is a better option for the given use-case.

AWS Direct Connect - AWS Direct Connect is a cloud service solution that makes it easy to establish a dedicated network connection from your premises to AWS. You can use AWS Direct Connect to establish a private virtual interface from your on-premises network directly to your Amazon VPC. Establishing this private connection can take a month or more. You cannot use Direct Connect to optimize media uploads into S3.

AWS Global Accelerator - AWS Global Accelerator is a service that improves the availability and performance of your applications with local or global users. It provides static IP addresses that act as a fixed entry point to your application endpoints in a single or multiple AWS Regions, such as your Application Load Balancers, Network Load Balancers or Amazon EC2 instances. Similar to CloudFront, it uses the AWS global network and edge locations for enhanced performance. It is an overall performance enhancer rather than an upload speed accelerator. You cannot use Global Accelerator to optimize media uploads into S3.

182
Q

Which feature/functionality will help you organize your AWS resources, manage and automate tasks on large numbers of resources at a time?

a. AWS Organizations
b. AWS Resource Groups
c. Amazon WorkSpaces
d. Tags

A

b. AWS Resource Groups

In AWS, a resource is an entity that you can work with. Examples include an Amazon EC2 instance, an AWS CloudFormation stack, or an Amazon S3 bucket. If you work with multiple resources, you might find it useful to manage them as a group rather than move from one AWS service to another for each task. If you manage large numbers of related resources, such as EC2 instances that make up an application layer, you likely need to perform bulk actions on these resources at one time.

You can use AWS Resource Groups to organize your AWS resources. Resource groups make it easier to manage and automate tasks on large numbers of resources at a time. Resource Groups feature permissions are at the account level. As long as users who are sharing your account have the correct IAM permissions, they can work with the resource groups that you create.
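For illustration, a boto3 sketch that creates a tag-based resource group; the group name and tag values are placeholders:

import boto3
import json

resource_groups = boto3.client("resource-groups")

resource_groups.create_group(
    Name="prod-web-tier",
    ResourceQuery={
        "Type": "TAG_FILTERS_1_0",
        "Query": json.dumps({
            "ResourceTypeFilters": ["AWS::AllSupported"],
            "TagFilters": [{"Key": "Environment", "Values": ["Production"]}],
        }),
    },
)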

Incorrect options:

AWS Organizations - AWS Organizations helps you centrally manage and govern your environment as you grow and scale your AWS resources. Using AWS Organizations, you can programmatically create new AWS accounts and allocate resources, group accounts to organize your workflows, apply policies to accounts or groups for governance, and simplify billing by using a single payment method for all of your accounts.

Tags - To help you manage your instances, images, and other Amazon EC2 resources, you can assign your own metadata to each resource in the form of tags. Tags enable you to categorize your AWS resources in different ways, for example, by purpose, owner, or environment. Tags are properties of a resource. Resource Groups allow you to easily create, maintain, and view a collection of resources that share common tags.

Amazon WorkSpaces - Amazon WorkSpaces enables you to provision virtual, cloud-based Microsoft Windows or Amazon Linux desktops for your users, known as WorkSpaces. Amazon WorkSpaces eliminates the need to procure and deploy hardware or install complex software. You can quickly add or remove users as your needs change. Users can access their virtual desktops from multiple devices or web browsers.

183
Q

A university provides access to AWS services for its students to submit their research data for analysis. The university is looking at the most cost-effective approach for recovering from disasters and it can tolerate data loss of a few hours.

Which disaster recovery strategy is well-suited for this use case?

a. Backup and restore strategy
b. Warm standby strategy
c. Multi-site active/active strategy
d. Pilot light strategy

A

a. Backup and restore strategy

When selecting your DR strategy, you must weigh the benefits of lower recovery time objective (RTO) and recovery point objective (RPO) vs the costs of implementing and operating a strategy. The Backup and restore strategy offers a good balance of benefits and cost for the current use case. This is the cheapest of all the disaster recovery options available with AWS.

Backup and restore is the most suitable approach for the given use case as the university can tolerate data loss of a few hours. This approach can be used to mitigate against a regional disaster by replicating data to other AWS Regions or to mitigate the lack of redundancy for workloads deployed to a single Availability Zone. In addition to data, you must redeploy the infrastructure, configuration, and application code in the recovery Region.
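For illustration, a boto3 sketch of the backup half of this strategy, copying an RDS snapshot into a second Region; the identifiers are placeholders:

import boto3

# Work from the destination (recovery) Region
rds_dr = boto3.client("rds", region_name="eu-west-1")

rds_dr.copy_db_snapshot(
    SourceDBSnapshotIdentifier=(
        "arn:aws:rds:us-east-1:111122223333:snapshot:research-db-snap"
    ),
    TargetDBSnapshotIdentifier="research-db-snap-dr",
    SourceRegion="us-east-1",
)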

Comparing different disaster recovery strategies: https://aws.amazon.com/blogs/architecture/disaster-recovery-dr-architecture-on-aws-part-iii-pilot-light-and-warm-standby/

Incorrect options:

Pilot light strategy - With the pilot light approach, you replicate your data from one Region to another and provision a copy of your core workload infrastructure. Resources required to support data replication and backup, such as databases and object storage, are always on. Other elements, such as application servers, are loaded with application code and configurations but are switched off and are only used during testing or when disaster recovery failover is invoked. Unlike the backup and restore approach, your core infrastructure is always available and you always have the option to quickly provision a full-scale production environment by switching on and scaling out your application servers. This also implies that the cost incurred is higher than what it is for the backup and restore approach.

Multi-site active/active strategy - You can run your workload simultaneously in multiple AWS Regions as part of a multi-site active/active strategy. Multi-site active/active serves traffic from all regions to which it is deployed. With a multi-site active/active approach, users can access the workload in any of the Regions in which it is deployed. This approach is the most complex and costliest for disaster recovery.

Warm standby strategy - The warm standby approach involves ensuring that there is a scaled-down but fully functional copy of your production environment in another AWS Region. This approach extends the pilot light concept and decreases the time to recovery because your workload is always-on in another Region. This approach also allows you to more easily perform testing or implement continuous testing to increase confidence in your ability to recover from a disaster. This strategy is costly and is used only for business-critical applications.

184
Q

Which AWS services can be used together to send alerts whenever the AWS account root user signs in? (Select two)

a. AWS Step Functions
b. AWS Lambda
c. Amazon Simple Queue Service (Amazon SQS)
d. Amazon Simple Notification Service (Amazon SNS)
e. Amazon CloudWatch

A

d. Amazon Simple Notification Service (Amazon SNS)
e. Amazon CloudWatch

Amazon CloudWatch Events delivers a near real-time stream of system events that describe changes in Amazon Web Services (AWS) resources. Using simple rules that you can quickly set up, you can match events and route them to one or more target functions or streams. Amazon CloudWatch Events becomes aware of operational changes as they occur. Amazon CloudWatch Events responds to these operational changes and takes corrective action as necessary, by sending messages to respond to the environment, activating functions, making changes, and capturing state information.

Amazon Simple Notification Service (Amazon SNS) is a highly available, durable, secure, fully managed pub/sub messaging service that enables you to decouple microservices, distributed systems, and serverless applications. Additionally, Amazon SNS can be used to fan out notifications to end users using mobile push, SMS, and email.

How Amazon SNS Works: https://aws.amazon.com/sns/

To send alerts whenever the AWS account root user signs in, you can create an Amazon Simple Notification Service (Amazon SNS) topic. Then, create an Amazon CloudWatch event rule to monitor userIdentity root logins from the AWS Management Console and send an email via Amazon SNS when the event triggers.
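For illustration, a boto3 sketch of that wiring (the topic and rule names and the email address are placeholders; the SNS topic's access policy must also allow events.amazonaws.com to publish):

import boto3
import json

sns = boto3.client("sns")
topic_arn = sns.create_topic(Name="root-login-alerts")["TopicArn"]
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="secops@example.com")

events = boto3.client("events")
events.put_rule(
    Name="root-console-signin",
    EventPattern=json.dumps({
        "detail-type": ["AWS Console Sign In via CloudTrail"],
        "detail": {"userIdentity": {"type": ["Root"]}},
    }),
)
events.put_targets(
    Rule="root-console-signin",
    Targets=[{"Id": "sns-alert", "Arn": topic_arn}],
)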

Incorrect options:

Amazon Simple Queue Service (Amazon SQS) - Amazon Simple Queue Service (Amazon SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. Using Amazon SQS, you can send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available.

AWS Lambda - AWS Lambda is a compute service that lets you run code without provisioning or managing servers.

AWS Step Functions - AWS Step Functions lets you coordinate multiple AWS services into serverless workflows. You can design and run workflows that stitch together services such as AWS Lambda, AWS Glue and Amazon SageMaker.

185
Q

What is the primary benefit of deploying an Amazon Relational Database Service (Amazon RDS) database in a Read Replica configuration?

a. Read Replica protects the database from a regional failure
b. Read Replica enhances database availability
c. Read Replica improves database scalability
d. Read Replica reduces database usage costs

A

c. Read Replica improves database scalability

Read Replicas primarily help with scaling read traffic; availability is the job of Multi-AZ deployments.

Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. Read Replicas allow you to create read-only copies that are synchronized with your master database. Read Replicas are used for improved read performance. You can also place your read replica in a different AWS Region closer to your users for better performance. Read Replicas are an example of horizontal scaling of resources.

Read Replica Overview: https://aws.amazon.com/rds/features/multi-az/

Exam Alert:

Please review the differences between Amazon RDS Multi-AZ, Multi-Region and Read Replica deployments for RDS: https://aws.amazon.com/rds/features/multi-az/

186
Q

According to the AWS Well-Architected Framework, which of the following statements are recommendations in the Operational Excellence pillar? (Select two)

a. Anticipate failure
b. Use serverless architectures
c. Make frequent, small, reversible changes
d. Automatically recover from failure
e. Enable traceability

A

a. Anticipate failure
c. Make frequent, small, reversible changes

The Operational Excellence pillar includes the ability to run and monitor systems to deliver business value and to continually improve supporting processes and procedures.

Perform “pre-mortem” exercises to identify potential sources of failure so that they can be removed or mitigated. Test your failure scenarios and validate your understanding of their impact. Test your response procedures to ensure that they are effective, and that teams are familiar with their execution. Set up regular game days to test workloads and team responses to simulated events.

Design workloads to allow components to be updated regularly. Make changes in small increments that can be reversed if they fail (without affecting customers when possible).

The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make while building systems on AWS. By using the Framework you will learn architectural best practices for designing and operating reliable, secure, efficient, and cost-effective systems in the cloud. It provides a way for you to consistently measure your architectures against best practices and identify areas for improvement.

The AWS Well-Architected Framework is based on six pillars — Operational Excellence, Security, Reliability, Performance Efficiency, Cost Optimization, and Sustainability.

187
Q

A company is planning to move their traditional CRM application running on MySQL to an AWS database service. Which database service is the right fit for this requirement?

a. Amazon Aurora
b. Amazon DynamoDB
c. Amazon Neptune
d. Amazon ElastiCache

A

a. Amazon Aurora

Amazon Aurora is a relational database engine that combines the speed and reliability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. Amazon Aurora MySQL delivers up to five times the performance of MySQL without requiring any changes to most MySQL applications; similarly, Amazon Aurora PostgreSQL delivers up to three times the performance of PostgreSQL. Amazon RDS manages your Amazon Aurora databases, handling time-consuming tasks such as provisioning, patching, backup, recovery, failure detection, and repair. You pay a simple monthly charge for each Amazon Aurora database instance you use. There are no upfront costs or long-term commitments required.

Amazon Aurora features a distributed, fault-tolerant, self-healing storage system that auto-scales up to 128TB per database instance. It delivers high performance and availability with up to 15 low-latency read replicas, point-in-time recovery, continuous backup to Amazon S3, and replication across three Availability Zones (AZs).

You can use the standard “mysqldump” utility to export data from MySQL and “mysqlimport” utility to import data to Amazon Aurora, and vice-versa. You can also use Amazon RDS’s DB Snapshot migration feature to migrate an RDS MySQL DB Snapshot to Amazon Aurora using the AWS Management Console. Migration completes for most customers in under an hour, though the duration depends on format and data set size.
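
Below is a minimal boto3 sketch of the snapshot migration route (the cluster and snapshot names are hypothetical placeholders, not from the question): it restores an existing RDS MySQL DB snapshot into a new Aurora MySQL cluster through the RDS API.

```python
import boto3

rds = boto3.client("rds")

# Restore an existing RDS MySQL DB snapshot into a new Aurora MySQL cluster.
# Both identifiers below are hypothetical placeholders.
response = rds.restore_db_cluster_from_snapshot(
    DBClusterIdentifier="crm-aurora-cluster",
    SnapshotIdentifier="mysql-crm-snapshot",
    Engine="aurora-mysql",
)
print(response["DBCluster"]["Status"])  # e.g. "creating"
```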

Incorrect options:

Amazon DynamoDB - Amazon DynamoDB is a key-value and document database that delivers single-digit millisecond performance at any scale. It’s a fully managed, multi-region, multi-active, durable database with built-in security, backup and restore, and in-memory caching for internet-scale applications. DynamoDB can handle more than 10 trillion requests per day and can support peaks of more than 20 million requests per second. DynamoDB is not for relational databases.

Amazon Neptune - Amazon Neptune is a fast, reliable, fully-managed graph database service that makes it easy to build and run applications that work with highly connected datasets. The core of Amazon Neptune is a purpose-built, high-performance graph database engine optimized for storing billions of relationships and querying the graph with milliseconds latency. Amazon Neptune is tailor-built for use cases like Knowledge Graphs, Identity Graphs, Fraud Detection, Recommendation Engines, Social Networking, Life Sciences, and so on. Amazon Neptune is not for relational databases.

Amazon ElastiCache - Amazon ElastiCache allows you to seamlessly set up, run, and scale popular open-source compatible in-memory data stores in the cloud. Build data-intensive apps or boost the performance of your existing databases by retrieving data from high throughput and low latency in-memory data stores. ElastiCache however, is not a relational database solution.

188
Q

Which of the following use-cases can be solved using the Amazon Forecast service?

a. Predict the web traffic of a website for the next few weeks
b. To develop and test fully functional machine learning models
c. To recommend personalized products for users based on their previous purchases
d. Document search service that can extract answers from text within documents

A

a. Predict the web traffic of a website for the next few weeks

Amazon Forecast is a fully managed service that uses machine learning to deliver highly accurate forecasts. Based on the same technology used at Amazon.com, Amazon Forecast uses machine learning to combine time series data with additional variables to build forecasts. Amazon Forecast requires no machine learning experience to get started. You only need to provide historical data, plus any additional data that you believe may impact your forecasts.

Amazon Forecast can be used to forecast any time-series data, such as retail demand, manufacturing demand, travel demand, revenue, IT capacity, logistics, and web traffic.

Incorrect options:

To develop and test fully functional machine learning models - Amazon SageMaker is the correct service for this requirement. Amazon SageMaker is a fully managed service that provides every developer and data scientist with the ability to build, train, and deploy machine learning (ML) models quickly. SageMaker removes the heavy lifting from each step of the machine learning process to make it easier to develop high-quality models.

Document search service that can extract answers from text within documents - Amazon Kendra is the best fit for this use case. Amazon Kendra is an intelligent search service powered by machine learning. Kendra reimagines enterprise search for your websites and applications so your employees and customers can easily find the content they are looking for, even when it’s scattered across multiple locations and content repositories within your organization.

To recommend personalized products for users based on their previous purchases - Amazon Personalize is useful in creating recommendations. Amazon Personalize makes it easy for developers to build applications capable of delivering a wide array of personalization experiences, including specific product recommendations, personalized product re-ranking, and customized direct marketing.

189
Q

A corporation would like to simplify access management to multiple AWS accounts as well as facilitate AWS Single Sign-On (AWS SSO) access to its AWS accounts. As a Cloud Practitioner, which AWS service would you use for this task?

a. AWS Identity and Access Management (AWS IAM)
b. AWS Command Line Interface (CLI)
c. AWS Cognito
d. AWS IAM Identity Center

A

d. AWS IAM Identity Center

AWS IAM Identity Center

AWS IAM Identity Center is the successor to AWS Single Sign-On (AWS SSO). It is built on top of AWS Identity and Access Management (IAM) to simplify access management to multiple AWS accounts, AWS applications, and other SAML-enabled cloud applications. In IAM Identity Center, you create or connect your workforce users for use across AWS. You can choose to manage access just to your AWS accounts, just to your cloud applications, or to both.

You can create users directly in IAM Identity Center, or you can bring them from your existing workforce directory. With IAM Identity Center, you get a unified administration experience to define, customize, and assign fine-grained access. Your workforce users get a user portal to access their assigned AWS accounts or cloud applications.

You can use IAM Identity Center to quickly and easily assign and manage your employees’ access to multiple AWS accounts, SAML-enabled cloud applications (such as Salesforce, Microsoft 365, and Box), and custom-built in-house applications, all from a central place.

How AWS IAM Identity Center works: https://aws.amazon.com/iam/identity-center/

Incorrect options:

AWS Cognito - Amazon Cognito lets you add user sign-up, sign-in, and access control to your web and mobile apps quickly and easily. With Amazon Cognito, you also have the option to authenticate users through social identity providers such as Facebook, Twitter, or Amazon, with SAML identity solutions, or by using your own identity system. It is an identity management solution for customers/developers building B2C or B2B apps for their customers.

AWS Identity and Access Management (AWS IAM) - AWS Identity and Access Management (AWS IAM) enables you to securely control access to AWS services and resources for your users. Using AWS IAM, you can create and manage AWS users and groups, and use permissions to allow and deny their access to AWS resources. It is not used to log in but to manage users and roles.

AWS Command Line Interface (CLI) - The AWS Command Line Interface (CLI) is a unified tool to manage your AWS services. With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts. It is not a central user portal.

190
Q

According to the AWS Well-Architected Framework, which of the following action is recommended in the Security pillar?

a. Use AWS CloudFormation to automate security best practices
b. Use Amazon CloudWatch to measure overall efficiency
c. Use AWS Key Management Service (AWS KMS) to encrypt data
d. Use AWS Cost Explorer to view and track your usage in detail

A

c. Use AWS Key Management Service (AWS KMS) to encrypt data

The Security pillar includes the ability to protect information, systems, and assets while delivering business value through risk assessments and mitigation strategies.

Encrypting data is part of the design principle “Protect data in transit and at rest”: Classify your data into sensitivity levels and use mechanisms, such as encryption, tokenization, and access control where appropriate.

AWS Key Management Service (AWS KMS) makes it easy for you to create and control keys used for encryption. It is a key service of the Security pillar.

The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make while building systems on AWS. By using the Framework you will learn architectural best practices for designing and operating reliable, secure, efficient, and cost-effective systems in the cloud. It provides a way for you to consistently measure your architectures against best practices and identify areas for improvement.

The AWS Well-Architected Framework is based on six pillars — Operational Excellence, Security, Reliability, Performance Efficiency, Cost Optimization, and Sustainability.

Overview of the six pillars of the AWS Well-Architected Framework: https://aws.amazon.com/architecture/well-architected/

Incorrect options:

Use AWS Cost Explorer to view and track your usage in detail - AWS Cost Explorer has an easy-to-use interface that lets you visualize, understand, and manage your AWS costs and usage over time. Using Cost Explorer to view and track your usage in detail relates more to the Cost Optimization pillar.

Use Amazon CloudWatch to measure overall efficiency - Amazon CloudWatch is a monitoring and management service built for developers, system operators, site reliability engineers (SRE), and IT managers. Using Amazon CloudWatch to measure overall efficiency relates more to the Reliability pillar.

Use AWS CloudFormation to automate security best practices - AWS CloudFormation provides a common language for you to model and provision AWS and third-party application resources in your cloud environment. It is not used to automate security best practices. If you want to automate security best practices, you should use Amazon Inspector.

191
Q

A company using a hybrid cloud would like to store secondary backup copies of the on-premises data. Which Amazon S3 Storage Class would you use for a cost-optimal yet rapid access solution?

a. Amazon S3 Glacier Deep Archive
b. Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)
c. Amazon S3 Standard-Infrequent Access (S3 Standard-IA)
d. Amazon S3 Standard

A

b. Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)

Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) is for data that is accessed less frequently but requires rapid access when needed. Unlike other S3 Storage Classes which store data in a minimum of three Availability Zones (AZs), S3 One Zone-IA stores data in a single Availability Zone (AZ) and costs 20% less than S3 Standard-IA. S3 One Zone-IA is ideal for customers who want a lower-cost option for infrequently accessed data but do not require the availability and resilience of S3 Standard or S3 Standard-IA. It’s a good choice for storing secondary backup copies of on-premises data or easily re-creatable data. You can also use it as cost-effective storage for data that is replicated from another AWS Region using S3 cross-region replication (S3 CRR).

Exam Alert:

Please review this detailed comparison of S3 Storage Classes as you can expect a few questions on this aspect of S3: https://aws.amazon.com/s3/storage-classes/

Incorrect options:

Amazon S3 Glacier Deep Archive - Amazon S3 Glacier Deep Archive storage class is designed to provide durable and secure long-term storage for large amounts of data at a price that is competitive with off-premises tape archival services. Data is stored across 3 or more AWS Availability Zones (AZs) and can be retrieved in 12 hours or less. You no longer need to deal with expensive and finicky tape drives, arrange for off-premises storage, or worry about migrating data to newer generations of media.

Amazon S3 Standard - Amazon S3 Standard offers high durability, availability, and performance object storage for frequently accessed data. Because it delivers low latency and high throughput, Amazon S3 Standard is appropriate for a wide variety of use cases, including cloud applications, dynamic websites, content distribution, mobile and gaming applications, and big data analytics.

Amazon S3 Standard-Infrequent Access (S3 Standard-IA) - Amazon S3 Standard-Infrequent Access (S3 Standard-IA) is for data that is accessed less frequently but requires rapid access when needed. S3 Standard-IA offers the high durability, high throughput, and low latency of S3 Standard, with a low per GB storage price and per GB retrieval fee. This combination of low cost and high performance makes S3 Standard-IA ideal for long-term storage, backups, and as a data store for disaster recovery files. It can be used for backups, but it is more expensive than S3 One Zone - Infrequent Access. Hence, S3 One Zone - Infrequent Access is a better option for secondary backup copies.

192
Q

By default, which of the following events are logged by AWS CloudTrail?

a. Data events and Insights events
b. Management events
c. AWS CloudTrail Insights events
d. Data events

A

b. Management events

An event in AWS CloudTrail is the record of activity in an AWS account. This activity can be an action taken by a user, role, or service that is monitorable by CloudTrail. CloudTrail events provide a history of both API and non-API account activity made through the AWS Management Console, AWS SDKs, command line tools, and other AWS services.

There are three types of events that can be logged in CloudTrail: management events, data events, and AWS CloudTrail Insights events.

By default, AWS CloudTrail logs all management events and does not include data events or Insights events. Additional charges apply for data and Insights events. All event types use the same CloudTrail JSON log format.

Management events provide information about management operations that are performed on resources in your AWS account. These are also known as control plane operations. Examples include registering devices, configuring rules for routing data, setting up logging etc.

Incorrect options:

Data events - Data events provide information about the resource operations performed on or in a resource. These are also known as data plane operations. Data events are often high-volume activities. The following data types are recorded: Amazon S3 object-level API activity, AWS Lambda function execution activity, Amazon S3 object-level API activity on AWS Outposts.

Data events are not logged by default when you create a trail. To record AWS CloudTrail data events, you must explicitly add to a trail the supported resources or resource types for which you want to collect activity. Additional charges apply for logging data events.
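
As a minimal boto3 sketch (the trail and bucket names are hypothetical), data event logging is switched on by adding an event selector to an existing trail:

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Explicitly enable S3 object-level data events on an existing trail.
cloudtrail.put_event_selectors(
    TrailName="my-trail",  # hypothetical trail name
    EventSelectors=[
        {
            "ReadWriteType": "All",
            "IncludeManagementEvents": True,  # management events stay on by default
            "DataResources": [
                {
                    "Type": "AWS::S3::Object",
                    # Log object-level activity for all objects in this bucket
                    "Values": ["arn:aws:s3:::my-bucket/"],
                }
            ],
        }
    ],
)
```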

AWS CloudTrail Insights events - AWS CloudTrail Insights events capture unusual activity in your AWS account. If you have Insights events enabled, and CloudTrail detects unusual activity, Insights events are logged to a different folder or prefix in the destination S3 bucket for your trail. You can also see the type of insight and the incident time period when you view Insights events on the CloudTrail console.

Insights events are disabled by default when you create a trail. To record AWS CloudTrail Insights events, you must explicitly enable Insights event collection on a new or existing trail. Additional charges apply for logging CloudTrail Insights events.

193
Q

According to the AWS Shared Responsibility Model, which of the following is the responsibility of the customer?

a. Firewall & networking configuration of Amazon Elastic Compute Cloud (Amazon EC2)
b. Managing Amazon DynamoDB
c. Edge locations security
d. Protecting hardware infrastructure

A

a. Firewall & networking configuration of Amazon Elastic Compute Cloud (Amazon EC2)

Customers are responsible for “Security IN the cloud”. This includes the configuration of the guest operating system, as well as the network and firewall settings of their applications.

Exam Alert:

Please review the AWS Shared Responsibility Model in detail as you can expect multiple questions on the shared responsibility model in the exam: https://aws.amazon.com/compliance/shared-responsibility-model/

Incorrect options:

Managing Amazon DynamoDB - Amazon DynamoDB is a fully managed service. AWS operates the infrastructure layer, the operating system, and platforms, and customers access the endpoints to store and retrieve data.

Protecting hardware infrastructure
Edge locations security

AWS is responsible for “Security OF the cloud”. It includes the infrastructure, which is composed of the hardware, software, networking, and facilities that run AWS Cloud services.

194
Q

AWS Trusted Advisor analyzes your AWS environment and provides best practice recommendations for which of the following categories? (Select two)

a. Change Management
b. Service Limits
c. Elasticity
d. Cost Optimization
e. Documentation

A

b. Service Limits
d. Cost Optimization

AWS Trusted Advisor is an online tool that provides real-time guidance to help provision your resources following AWS best practices. Whether establishing new workflows, developing applications, or as part of ongoing improvement, recommendations provided by Trusted Advisor on a regular basis help keep your solutions provisioned optimally. AWS Trusted Advisor analyzes your AWS environment and provides best practice recommendations in five categories: Cost Optimization, Performance, Security, Fault Tolerance, Service Limits.

How AWS Trusted Advisor works and its recommendations: https://aws.amazon.com/premiumsupport/technology/trusted-advisor/
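
For reference, here is a minimal boto3 sketch that lists Trusted Advisor checks grouped by these categories. It assumes a Business or Enterprise Support plan, since the Support API that backs Trusted Advisor requires one and is served from us-east-1.

```python
import boto3

support = boto3.client("support", region_name="us-east-1")

# Each check carries a category such as cost_optimizing, performance,
# security, fault_tolerance, or service_limits.
checks = support.describe_trusted_advisor_checks(language="en")["checks"]
for check in checks:
    print(f'{check["category"]}: {check["name"]}')
```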

195
Q

A company wants to move to AWS cloud and release new features with quick iterations by utilizing relevant AWS services whenever required. Which of the following characteristics of AWS Cloud does it want to leverage?
a. Scalability
b. Agility
c. Reliability
d. Elasticity

A

b. Agility

In the world of cloud computing, “Agility” refers to the ability to rapidly develop, test, and launch software applications that drive business growth. Another way to explain “Agility”: AWS provides a massive global cloud infrastructure that allows you to quickly innovate, experiment, and iterate. Instead of waiting weeks or months for hardware, you can instantly deploy new applications. This ability is called Agility.

196
Q

Which of the following AWS services offers Lifecycle configuration for cost-optimal storage?
a. Amazon Elastic Block Store (Amazon EBS)
b. Amazon Simple Storage Service (Amazon S3)
c. Amazon EC2 Instance Store
d. AWS Storage Gateway

A

b. Amazon Simple Storage Service (Amazon S3)

You can manage your objects on Amazon S3 so that they are stored cost-effectively throughout their lifecycle by configuring their Amazon S3 Lifecycle. An Amazon S3 Lifecycle configuration is a set of rules that define actions that Amazon S3 applies to a group of objects.

There are two types of actions:

Transition actions — Define when objects transition to another storage class. For example, you might choose to transition objects to the Amazon S3 Standard-IA storage class 30 days after you created them, or archive objects to the Amazon S3 Glacier storage class one year after creating them.

Expiration actions — Define when objects expire. Amazon S3 deletes expired objects on your behalf.
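
A minimal boto3 sketch combining both action types (the bucket name and prefix are hypothetical):

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="my-bucket",  # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-then-expire",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                # Transition action: move objects to S3 Standard-IA after 30 days
                "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
                # Expiration action: delete objects one year after creation
                "Expiration": {"Days": 365},
            }
        ]
    },
)
```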

Incorrect options:

Amazon EC2 Instance Store - An Instance Store provides temporary block-level storage for your EC2 instance. This storage is located on disks that are physically attached to the host computer. Instance store is ideal for the temporary storage of information that changes frequently, such as buffers, caches, scratch data, and other temporary content, or for data that is replicated across a fleet of instances, such as a load-balanced pool of web servers. Instance storage is temporary; data is lost if the instance experiences a failure or is terminated. Instance Store does not offer lifecycle configuration.

Amazon Elastic Block Store (Amazon EBS) - Amazon Elastic Block Store (Amazon EBS) is an easy to use, high-performance block storage service designed for use with Amazon Elastic Compute Cloud (EC2) for both throughput and transaction-intensive workloads at any scale. A broad range of workloads, such as relational and non-relational databases, enterprise applications, containerized applications, big data analytics engines, file systems, and media workflows are widely deployed on Amazon EBS. It does not offer lifecycle configuration.

AWS Storage Gateway - AWS Storage Gateway is a hybrid cloud storage service that gives you on-premises access to virtually unlimited cloud storage. All data transferred between the gateway and AWS storage is encrypted using SSL (for all three types of gateways - File, Volume and Tape Gateways). AWS Storage Gateway does not offer lifecycle configuration.

197
Q

Which of the following AWS entities provides the information required to launch an Amazon Elastic Compute Cloud (Amazon EC2) instance?

a. Amazon Elastic Block Store (Amazon EBS)
b. Amazon Machine Image (AMI)
c. AWS Lambda
d. Amazon Elastic File System (Amazon EFS)

A

b. Amazon Machine Image (AMI)

An Amazon Machine Image (AMI) provides the information required to launch an instance. You must specify an Amazon Machine Image (AMI) when you launch an instance.

An Amazon Machine Image (AMI) includes the following:

One or more Amazon Elastic Block Store (Amazon EBS) snapshots, or, for instance-store-backed AMIs, a template for the root volume of the instance (for example, an operating system, an application server, and applications).

Launch permissions that control which AWS accounts can use the Amazon Machine Image (AMI) to launch instances.

A block device mapping that specifies the volumes to attach to the instance when it’s launched.

The following diagram summarizes the Amazon Machine Image (AMI) lifecycle: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html
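
A minimal boto3 sketch of launching an instance from an AMI (the AMI ID is a hypothetical placeholder); the ImageId is the required piece of information:

```python
import boto3

ec2 = boto3.client("ec2")

# The AMI (ImageId) supplies the root volume template, launch permissions,
# and block device mapping described above.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```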

198
Q

The DevOps team at an IT company wants to centrally manage its servers on AWS Cloud as well as on-premise data center so that it can collect software inventory, run commands, configure and patch servers at scale. As a Cloud Practitioner, which AWS service would you recommend for this use-case?

a. AWS Systems Manager
b. AWS CloudFormation
c. AWS OpsWorks
d. AWS Config

A

a. AWS Systems Manager

AWS Systems Manager gives you visibility and control of your infrastructure on AWS. Systems Manager provides a unified user interface so you can view operational data from multiple AWS services and allows you to automate operational tasks such as collecting software inventory, running commands, managing patches, and configuring servers across AWS Cloud as well as on-premises infrastructure.

AWS Systems Manager offers utilities for running commands, patch-management and configuration compliance: https://aws.amazon.com/systems-manager/faq/
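
A minimal boto3 sketch of Run Command, one of those utilities (the instance ID is hypothetical; the target must be a Systems Manager managed node, whether on AWS or on-premises):

```python
import boto3

ssm = boto3.client("ssm")

# Run a shell command across managed instances at scale.
result = ssm.send_command(
    InstanceIds=["i-0123456789abcdef0"],  # hypothetical managed instance
    DocumentName="AWS-RunShellScript",    # AWS-provided command document
    Parameters={"commands": ["uptime", "df -h"]},
)
print(result["Command"]["CommandId"])
```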

Incorrect options:

AWS OpsWorks - AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. AWS OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed and managed across your Amazon EC2 instances or on-premises compute environments. You cannot use AWS OpsWorks for collecting software inventory and viewing operational data from multiple AWS services.

AWS CloudFormation - AWS CloudFormation allows you to use programming languages or a simple text file to model and provision, in an automated and secure manner, all the resources needed for your applications across all Regions and accounts. Think infrastructure as code; think AWS CloudFormation. You cannot use AWS CloudFormation for running commands or managing patches on servers.

AWS Config - AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. AWS Config continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations. You cannot use AWS Config for running commands or managing patches on servers.

199
Q

A company is looking for a guided path to help deploy, configure, and secure its new workloads while ensuring that it is ready for on-going operations in the cloud. Which of the following AWS services/tools can be leveraged for this use case?

a. AWS Shared Responsibility Model
b. Cloud Foundations
c. AWS Config
d. AWS Trusted Advisor

A

b. Cloud Foundations

Cloud Foundations provides a guided path to help customers deploy, configure, and secure their new workloads while ensuring they are ready for on-going operations in the cloud. Cloud Foundations helps customers navigate through the decisions they need to make through curated AWS Services, AWS Solutions, Partner Solutions, and Guidance.

Cloud Foundations: https://aws.amazon.com/architecture/cloud-foundations/

200
Q

AWS Identity and Access Management (AWS IAM) policies are written as JSON documents. Which of the following are mandatory elements of an IAM policy?

a. Action, Condition
b. Effect, Sid
c. Sid, Principal
d. Effect, Action

A

d. Effect, Action

Most policies are stored in AWS as JSON documents. Identity-based policies and policies used to set permissions boundaries are JSON policy documents that you attach to a user or role. Resource-based policies are JSON policy documents that you attach to a resource.

A JSON policy document includes these elements:

  1. Optional policy-wide information at the top of the document
  2. One or more individual statements

Each statement includes information about a single permission. The information in a statement is contained within a series of elements. A minimal example policy is sketched after the element list below.

  1. Version – Specify the version of the policy language that you want to use. As a best practice, use the latest 2012-10-17 version.
  2. Statement – Use this main policy element as a container for the following elements. You can include more than one statement in a policy.

a. Sid (Optional) – Include an optional statement ID to differentiate between your statements.

b. Effect – Use Allow or Deny to indicate whether the policy allows or denies access.

c. Principal (Required in only some circumstances) – If you create a resource-based policy, you must indicate the account, user, role, or federated user to which you would like to allow or deny access. If you are creating an IAM permissions policy to attach to a user or role, you cannot include this element. The principal is implied as that user or role.

d. Action – Include a list of actions that the policy allows or denies.

e. Resource (Required in only some circumstances) – If you create an IAM permissions policy, you must specify a list of resources to which the actions apply. If you create a resource-based policy, this element is optional. If you do not include this element, then the resource to which the action applies is the resource to which the policy is attached.

f. Condition (Optional) – Specify the circumstances under which the policy grants permission.
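
A minimal sketch of a policy document with the mandatory Effect and Action elements, attached here as a customer managed policy via boto3 (the policy name and bucket ARN are hypothetical):

```python
import json

import boto3

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowReadOnBucket",             # optional
            "Effect": "Allow",                      # mandatory
            "Action": ["s3:GetObject"],             # mandatory
            "Resource": "arn:aws:s3:::my-bucket/*", # required for identity-based policies
        }
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="MyReadOnlyBucketPolicy",  # hypothetical name
    PolicyDocument=json.dumps(policy_document),
)
```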

201
Q

Which of the following is an AWS database service?
a. Amazon Redshift
b. AWS Database Migration Service (AWS DMS)
c. AWS Storage Gateway
d. AWS Glue

A

a. Amazon Redshift

Amazon Redshift is a fully-managed petabyte-scale cloud-based data warehouse product designed for large scale data set storage and analysis.

202
Q

A gaming company needs compute and storage services close to edge locations in order to ensure ultra-low latency for end-users and devices that connect through mobile networks. Which AWS service is the best fit for this requirement?

a. AWS Snowmobile
b. AWS Snowball Edge
c. AWS Wavelength
d. AWS Outposts

A

c. AWS Wavelength

AWS Wavelength is an AWS Infrastructure offering optimized for mobile edge computing applications. Wavelength Zones are AWS infrastructure deployments that embed AWS compute and storage services within cloud service provider (CSP) data centers at the edge of the 5G network, so application traffic from 5G devices can reach application servers running in Wavelength Zones without leaving the telecommunications network. This avoids the latency that would result from application traffic having to traverse multiple hops across the Internet to reach their destination, enabling customers to take full advantage of the latency and bandwidth benefits offered by modern 5G networks.

AWS enterprise customers that build applications to serve their own use-cases such as IoT, live media production, and industrial automation can use Wavelength to deliver low-latency solutions. Customers with edge data processing needs such as image and video recognition, inference, data aggregation, and responsive analytics can use Wavelength to perform low-latency operations and processing right where their data is generated, reducing the need to move large amounts of data to be processed in centralized locations.

How AWS Wavelength works: https://aws.amazon.com/wavelength/

Incorrect options:

AWS Outposts - AWS Outposts is designed for workloads that need to remain on-premises due to latency requirements, where customers want that workload to run seamlessly with the rest of their other workloads in AWS. AWS Outposts are fully managed and configurable compute and storage racks built with AWS-designed hardware that allow customers to run compute and storage on-premises, while seamlessly connecting to AWS’s broad array of services in the cloud.

You should also note another service called AWS Local Zones, which is a new type of AWS infrastructure designed to run workloads that require single-digit millisecond latency in more locations, like video rendering and graphics intensive, virtual desktop applications. Not every customer wants to operate their own on-premises data center, while others may be interested in getting rid of their local data center entirely. Local Zones allow customers to gain all the benefits of having compute and storage resources closer to end-users, without the need to own and operate their own data center infrastructure.

AWS Snowball Edge - AWS Snowball Edge is a type of Snowball device with on-board storage and compute power for select AWS capabilities. Snowball Edge can do local processing and edge-computing workloads in addition to transferring data between your local environment and the AWS Cloud. Snowball edge cannot be used to optimize connections through mobile networks.

AWS Snowmobile - AWS Snowmobile is an Exabyte-scale data transfer service used to move extremely large amounts of data to AWS. You can transfer up to 100PB per Snowmobile, a 45-foot long ruggedized shipping container, pulled by a semi-trailer truck. Snowmobile makes it easy to move massive volumes of data to the cloud, including video libraries, image repositories, or even a complete data center migration. Transferring data with Snowmobile is more secure, fast and cost effective. Snowmobile cannot be used to optimize connections through mobile networks.

203
Q

A company would like to move 50 petabytes (PBs) of data from its on-premises data centers to AWS in the MOST cost-effective way. As a Cloud Practitioner, which of the following solutions would you choose?

a. AWS Storage Gateway
b. AWS Snowmobile
c. AWS Snowball
d. AWS Snowball Edge

A

b. AWS Snowmobile

AWS Snowmobile is an Exabyte-scale data transfer service used to move extremely large amounts of data to AWS. You can transfer up to 100PB per Snowmobile, a 45-foot long ruggedized shipping container, pulled by a semi-trailer truck. AWS Snowmobile makes it easy to move massive volumes of data to the cloud, including video libraries, image repositories, or even a complete data center migration. Transferring data with Snowmobile is more secure, fast, and cost-effective.

Incorrect options:

AWS Snowball Edge - AWS Snowball Edge is an edge computing and data transfer device provided by the AWS Snowball service. It has onboard storage and compute power that provides select AWS services for use in edge locations. However, one AWS Snowball Edge only provides up to 100 TB of capacity. Therefore, to transfer 50 PBs, AWS Snowball Edge is not the most cost-effective option.

AWS Snowball - AWS Snowball is a petabyte-scale data transport solution that uses secure appliances to transfer large amounts of data into and out of AWS. The use of Snowball addresses common challenges with large-scale data transfers including high network costs, long transfer times, and security concerns. Transferring data with AWS Snowball is simple, fast, secure, and can be as little as one-fifth the cost of high-speed Internet. However, one Snowball only provides up to 80 TB of capacity. Therefore, to transfer 50 PBs, AWS Snowball is not the most cost-effective option.

AWS Storage Gateway - AWS Storage Gateway is a hybrid storage service that enables your on-premises applications to seamlessly use AWS cloud storage. You can use the service for backup and archiving, disaster recovery, cloud data processing, storage tiering, and migration. However, data transfer through AWS Storage Gateway takes longer even with great bandwidth. Moreover, transferring 50 PBs of data will be more expensive than using AWS Snowmobile.

204
Q

Which of the following Amazon S3 storage classes takes the most time to retrieve data (also known as first byte latency)?
a. Amazon S3 Intelligent-Tiering
b. Amazon S3 Glacier Deep Archive
c. Amazon S3 Standard
d. Amazon S3 Glacier Flexible Retrieval

A

b. Amazon S3 Glacier Deep Archive

Glacier means hard to retrieve but cheap to hold onto, just in case. Amazon S3 Glacier Deep Archive is the lowest-cost S3 storage class, designed for long-term retention of data that may be accessed only once or twice a year; retrievals complete within 12 hours, giving it the longest first byte latency of the listed storage classes.

205
Q

A unicorn startup is building an analytics application with support for a speech-based interface. The application will accept speech-based input from users and then convey results via speech. As a Cloud Practitioner, which solution would you recommend for the given use-case?
a. Use Amazon Polly to convert speech to text for downstream analysis. Then use Amazon Translate to convey the text results via speech
b. Use Amazon Polly to convert speech to text for downstream analysis. Then use Amazon Transcribe to convey the text results via speech
c. Use Amazon Transcribe to convert speech to text for downstream analysis. Then use Amazon Polly to convey the text results via speech
d. Use Amazon Translate to convert speech to text for downstream analysis. Then use Amazon Polly to convey the text results via speech

A

c. Use Amazon Transcribe to convert speech to text for downstream analysis. Then use Amazon Polly to convey the text results via speech

You can use Amazon Transcribe to add speech-to-text capability to your applications. Amazon Transcribe uses a deep learning process called automatic speech recognition (ASR) to convert speech to text quickly and accurately. Amazon Transcribe can be used to transcribe customer service calls, to automate closed captioning and subtitling, and to generate metadata for media assets.

You can use Amazon Polly to turn text into lifelike speech thereby allowing you to create applications that talk. Polly’s Text-to-Speech (TTS) service uses advanced deep learning technologies to synthesize natural sounding human speech.
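
A minimal boto3 sketch of the two halves of this flow (the job name, S3 URI, and voice are hypothetical; Transcribe jobs run asynchronously, so a real application would poll or react to an event before reading the transcript):

```python
import boto3

transcribe = boto3.client("transcribe")
polly = boto3.client("polly")

# Speech to text: start an asynchronous transcription job on audio in S3.
transcribe.start_transcription_job(
    TranscriptionJobName="analytics-query-1",
    Media={"MediaFileUri": "s3://my-bucket/query.mp3"},
    MediaFormat="mp3",
    LanguageCode="en-US",
)

# Text to speech: convey an (already computed) analytics result via speech.
speech = polly.synthesize_speech(
    Text="Sales grew twelve percent last quarter.",
    OutputFormat="mp3",
    VoiceId="Joanna",
)
with open("result.mp3", "wb") as f:
    f.write(speech["AudioStream"].read())
```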

206
Q

A start-up would like to monitor its cost on the AWS Cloud and would like to choose an optimal Savings Plan. As a Cloud Practitioner, which AWS service would you use?

a. AWS Pricing Calculator
b. AWS Cost Explorer
c. AWS Budgets
d. AWS Cost & Usage Report (AWS CUR)

A

b. AWS Cost Explorer

AWS Cost Explorer lets you explore your AWS costs and usage at both a high level and at a detailed level of analysis and empowers you to dive deeper using several filtering dimensions (e.g., AWS Service, AWS Region, Linked Account, etc.). AWS Cost Explorer also gives you access to a set of default reports to help you get started, while also allowing you to create custom reports from scratch.

Customers can receive Savings Plan recommendations at the member (linked) account level in addition to the existing AWS organization-level recommendations in AWS Cost Explorer.
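
A minimal boto3 sketch of pulling a Savings Plans recommendation from the Cost Explorer API (the parameter values shown are just examples):

```python
import boto3

ce = boto3.client("ce")

rec = ce.get_savings_plans_purchase_recommendation(
    SavingsPlansType="COMPUTE_SP",
    TermInYears="ONE_YEAR",
    PaymentOption="NO_UPFRONT",
    LookbackPeriodInDays="THIRTY_DAYS",
)
details = rec["SavingsPlansPurchaseRecommendation"].get(
    "SavingsPlansPurchaseRecommendationDetails", []
)
for d in details:
    print(d.get("EstimatedSavingsAmount"), d.get("HourlyCommitmentToPurchase"))
```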

Incorrect options:

AWS Cost & Usage Report (AWS CUR) - The AWS Cost & Usage Report (AWS CUR) is a single location for accessing comprehensive information about your AWS costs and usage. It does not provide Savings Plan recommendations.

207
Q

A company is using a message broker service on its on-premises application and wants to move this messaging functionality to AWS Cloud. Which of the following AWS services is the right choice to move the existing functionality easily?

a. Amazon Simple Queue Service (Amazon SQS)
b. Amazon MQ
c. Amazon Simple Notification Service (Amazon SNS)
d. Amazon Kinesis Data Streams

A

b. Amazon MQ

Amazon MQ is a managed message broker service for Apache ActiveMQ and RabbitMQ that makes it easy to set up and operate message brokers on AWS. Amazon MQ reduces your operational responsibilities by managing the provisioning, setup, and maintenance of message brokers for you. Because Amazon MQ connects to your current applications with industry-standard APIs and protocols, you can easily migrate to AWS without having to rewrite code.

If you’re using messaging with existing applications, and want to move the messaging functionality to the cloud quickly and easily, AWS recommends you consider Amazon MQ. It supports industry-standard APIs and protocols so you can switch from any standards-based message broker to Amazon MQ without rewriting the messaging code in your applications. If you are building brand new applications in the cloud, AWS recommends you consider Amazon SQS and Amazon SNS.

How Amazon MQ works: https://aws.amazon.com/amazon-mq/

Incorrect options:

Amazon Kinesis Data Streams - Amazon Kinesis Data Streams enables you to build custom applications that process or analyze streaming data for specialized needs. You can continuously add various types of data such as clickstreams, application logs, and social media to an Amazon Kinesis data stream from hundreds of thousands of sources. Within seconds, the data will be available for your Amazon Kinesis Applications to read and process from the stream.

208
Q

Which of the following AWS services can be used to connect a company’s on-premises environment to a VPC without using the public internet?
a. VPC Endpoint
b. Internet Gateway
c. AWS Direct Connect
d. AWS Site-to-Site VPN

A

c. AWS Direct Connect

AWS Direct Connect is a cloud service solution that makes it easy to establish a dedicated network connection from your premises to AWS. You can use AWS Direct Connect to establish a private virtual interface from your on-premises network directly to your Amazon VPC, providing you with a private, high bandwidth network connection between your network and your VPC. This connection is private and does not go over the public internet. It takes at least a month to establish this physical connection.

How Direct Connect Works: https://aws.amazon.com/directconnect/

209
Q

A supply chain company is looking for a database that provides a centrally verifiable history of all changes made to data residing in it. This functionality is critical for the product and needs to be available off the shelf without the need for any customizations.

Which of the following databases is the right choice for this use case?

a. Amazon Quantum Ledger Database (Amazon QLDB)
b. Amazon Managed Blockchain
c. Amazon Neptune
d. Amazon Timestream

A

a. Amazon Quantum Ledger Database (Amazon QLDB)

Amazon Quantum Ledger Database (Amazon QLDB) is a fully managed ledger database that provides a transparent, immutable, and cryptographically verifiable transaction log ‎owned by a central trusted authority. Amazon QLDB can be used to track each and every application data change and maintains a complete and verifiable history of changes over time.

Ledgers are typically used to record a history of economic and financial activity in an organization. Many organizations build applications with ledger-like functionality because they want to maintain an accurate history of their applications’ data, for example, tracking the history of credits and debits in banking transactions, verifying the data lineage of an insurance claim, or tracing the movement of an item in a supply chain network. Ledger applications are often implemented using custom audit tables or audit trails created in relational databases.

Amazon QLDB is a new class of databases that eliminates the need to engage in the complex development effort of building your own ledger-like applications. With QLDB, your data’s change history is immutable – it cannot be altered or deleted – and using cryptography, you can easily verify that there have been no unintended modifications to your application’s data. QLDB uses an immutable transactional log, known as a journal, that tracks each application data change and maintains a complete and verifiable history of changes over time.

QLDB is easy to use because it provides developers with a familiar SQL-like API, a flexible document data model, and full support for transactions. QLDB’s streaming capability provides a near real-time flow of your data stored within QLDB, allowing you to develop event-driven workflows and real-time analytics, and to replicate data to other AWS services to support advanced analytical processing.

QLDB is also serverless, so it automatically scales to support the demands of your application. There are no servers to manage and no read or write limits to configure. With QLDB, you only pay for what you use.

How Amazon Quantum Ledger Database (Amazon QLDB) Works: https://aws.amazon.com/qldb/
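
A minimal boto3 sketch of the control-plane side (the ledger name is hypothetical; create_ledger is asynchronous, so the digest call assumes the ledger has already become active):

```python
import boto3

qldb = boto3.client("qldb")

qldb.create_ledger(Name="supply-chain-ledger", PermissionsMode="STANDARD")

# The digest is a SHA-256 hash covering the ledger's entire journal; it is
# what lets you cryptographically verify that history has not been altered.
digest = qldb.get_digest(Name="supply-chain-ledger")
print(digest["Digest"])
```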

Incorrect options:

Amazon Neptune - Amazon Neptune is a fast, reliable, fully-managed graph database service that makes it easy to build and run applications that work with highly connected datasets. The core of Amazon Neptune is a purpose-built, high-performance graph database engine optimized for storing billions of relationships and querying the graph with milliseconds latency. Amazon Neptune is tailor-built for use cases like Knowledge Graphs, Identity Graphs, Fraud Detection, Recommendation Engines, Social Networking, Life Sciences, and so on.

Amazon Timestream - Amazon Timestream is a fast, scalable, and serverless time series database service for IoT and operational applications that makes it easy to store and analyze trillions of events per day up to 1,000 times faster and at as little as 1/10th the cost of relational databases. Amazon Timestream saves you time and costs in managing the lifecycle of time series data by keeping recent data in memory and moving historical data to a cost-optimized storage tier based upon user-defined policies.

Amazon Managed Blockchain - Amazon Managed Blockchain is a fully managed service that allows you to join public networks or set up and manage scalable private networks using popular open-source frameworks. Amazon Managed Blockchain eliminates the overhead required to create the network or join a public network and automatically scales to meet the demands of thousands of applications running millions of transactions.

While QLDB is a ledger database purpose-built for customers who need to maintain a complete and verifiable history of data changes in an application that they own and manage in a centralized way, QLDB is not a blockchain technology. Instead, blockchain technologies focus on enabling multiple parties to transact and share data securely in a decentralized way; without a trusted, central authority. Every member in a network has an independently verifiable copy of an immutable ledger, and members can create and endorse transactions in the network.

210
Q

A company would like to reserve Amazon Elastic Compute Cloud (Amazon EC2) compute capacity for three years to reduce costs. The company also plans to increase their workloads during this period. As a Cloud Practitioner, which Amazon Elastic Compute Cloud (Amazon EC2) reserved instance (RI) type would you recommend?

a. Convertible reserved instance (RI)
b. Scheduled reserved instance (RI)
c. Standard reserved instance (RI)
d. Adaptable reserved instances (RI)

A

a. Convertible reserved instance (RI)

Purchase convertible reserved instance (RI) if you need additional flexibility, such as the ability to use different instance families, operating systems, or tenancies over the reserved instance (RI) term. Convertible reserved instance (RI) provides you with a significant discount (up to 54%) compared to an on-demand instance and can be purchased for a 1-year or 3-year term.

Convertible reserved instance (RI) can be useful when workloads are likely to change. In this case, a convertible reserved instance (RI) enables you to adapt as needs evolve while still obtaining discounts and capacity reservation.

Amazon EC2 Pricing Options Overview: https://aws.amazon.com/ec2/pricing/

Incorrect options:

Standard reserved instance (RI) - Standard reserved instance (RI) provides you with a significant discount (up to 72%) compared to on-demand instance pricing, and can be purchased for a 1-year or 3-year term. Standard reserved instance (RI) do not offer as much flexibility as convertible reserved instance (RI), such as not being able to change the instance family type; and therefore are not best-suited for this use case.

Review the differences between standard reserved instance (RI) and convertible reserved instance (RI): https://docs.aws.amazon.com/whitepapers/latest/cost-optimization-reservation-models/standard-vs.-convertible-offering-classes.html

Scheduled reserved instance (RI) - AWS no longer supports scheduled reserved instance (RI), so this option is ruled out.

Adaptable reserved instances (RI) - Adaptable reserved instance (RI) is not a valid type of reserved instance (RI). It is a distractor.

211
Q

As per the AWS Shared Responsibility Model, which of the following is a responsibility of the customer from a security and compliance point of view?

a. Managing patches of the guest operating system on Amazon Elastic Compute Cloud (Amazon EC2)
b. Availability Zone (AZ) infrastructure management
c. Patching/fixing flaws within the AWS infrastructure
d. Configuration management for AWS global infrastructure

A

a. Managing patches of the guest operating system on Amazon Elastic Compute Cloud (Amazon EC2)

Security and Compliance is a shared responsibility between AWS and the customer. This shared model can help relieve the customer’s operational burden as AWS operates, manages, and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates.

As per the AWS shared responsibility model, the customer is responsible for security “in” the cloud. Customers that deploy an Amazon EC2 instance are responsible for the management of the guest operating system (including updates and security patches), any application software or utilities installed by the customer on the instances, and the configuration of the AWS-provided firewall (called a security group) on each instance.

Exam Alert:

Please review the Shared Responsibility Model in detail as you can expect multiple questions on the shared responsibility model in the exam: https://aws.amazon.com/compliance/shared-responsibility-model/

212
Q

Which AWS service can be used to view the most comprehensive billing details for the past month?

a. AWS Budgets
b. AWS Cost Explorer
c. AWS Cost & Usage Report (AWS CUR)
d. AWS Pricing Calculator

A

c. AWS Cost & Usage Report (AWS CUR)

The AWS Cost & Usage Report (AWS CUR) contains the most comprehensive set of cost and usage data available. You can use Cost and Usage Reports to publish your AWS billing reports to an Amazon Simple Storage Service (Amazon S3) bucket that you own. You can receive reports that break down your costs by the hour or month, by product or product resource, or by tags that you define yourself.

AWS Cost & Usage Report (AWS CUR) Overview: https://docs.aws.amazon.com/cur/latest/userguide/what-is-cur.html

Incorrect options:

AWS Budgets - AWS Budgets gives the ability to set custom budgets that alert you when your costs or usage exceed (or are forecasted to exceed) your budgeted amount. You can also use AWS Budgets to set reservation utilization or coverage targets and receive alerts when your utilization drops below the threshold you define. Budgets can be created at the monthly, quarterly, or yearly level, and you can customize the start and end dates. You can further refine your budget to track costs associated with multiple dimensions, such as AWS service, linked account, tag, and others. AWS Budgets cannot provide billing details for the past month.

AWS Cost Explorer - AWS Cost Explorer has an easy-to-use interface that lets you visualize, understand, and manage your AWS costs and usage over time. AWS Cost Explorer includes a default report that helps you visualize the costs and usage associated with your top five cost-accruing AWS services and gives you a detailed breakdown of all services in the table view. The reports let you adjust the time range to view historical data going back up to twelve months to gain an understanding of your cost trends. AWS Cost Explorer cannot provide granular billing details for the past month.

AWS Pricing Calculator - AWS Pricing Calculator lets you explore AWS services and create an estimate for the cost of your use cases on AWS. You can model your solutions before building them, explore the price points and calculations behind your estimate, and find the available instance types and contract terms that meet your needs. This enables you to make informed decisions about using AWS. You can plan your AWS costs and usage or price out by setting up a new set of instances and services. AWS Pricing Calculator cannot provide billing details for the past month.

Exam Alert:

Please review the differences between “AWS Cost & Usage Report (AWS CUR)” and “AWS Cost Explorer”. Think of “AWS Cost & Usage Report (AWS CUR)” as a cost management tool providing the most detailed cost and usage data for your AWS account. It can provide reports that break down your costs by the hour into your Amazon Simple Storage Service (Amazon S3) bucket. On the other hand, “AWS Cost Explorer” is more of a high-level cost management tool that helps you visualize the costs and usage associated with your AWS account.

213
Q

Which of the following is correct regarding the AWS Shield Advanced pricing?

a. AWS Shield Advanced is a free service for AWS Business Support plan
b. AWS Shield Advanced is a free service for all AWS Support plans
c. AWS Shield Advanced is a free service for AWS Enterprise Support plan
d. AWS Shield Advanced offers protection against higher fees that could result from a DDoS attack

A

d. AWS Shield Advanced offers protection against higher fees that could result from a DDoS attack

AWS Shield Advanced offers some cost protection against spikes in your AWS bill that could result from a DDoS attack. This cost protection is provided for your Elastic Load Balancing load balancers, Amazon CloudFront distributions, Amazon Route 53 hosted zones, Amazon Elastic Compute Cloud instances, and your AWS Global Accelerator accelerators.

AWS Shield Advanced is a paid service for all customers, irrespective of the Support plan.

214
Q

Which security service of AWS is enabled for all AWS customers, by default, at no additional cost?
a. AWS Secrets Manager
b. AWS Shield Standard
c. AWS Shield Advanced
d. AWS Web Application Firewall (AWS WAF)

A

b. AWS Shield Standard

AWS Shield Standard defends against most common, frequently occurring network and transport layer DDoS attacks that target your website or applications. While AWS Shield Standard helps protect all AWS customers, you get better protection if you are using Amazon CloudFront and Amazon Route 53. All AWS customers benefit from the automatic protections of AWS Shield Standard, at no additional charge.

215
Q

Which of the following statements are CORRECT regarding the Availability Zone (AZ) specific characteristics of Amazon Elastic Block Store (EBS) and Amazon Elastic File System (Amazon EFS) storage types?
a. EBS volume can be attached to one or more instances in multiple Availability Zones (AZ) and EFS file system can be mounted on instances in the same Availability Zone (AZ)
b. EBS volume can be attached to a single instance in the same Availability Zone (AZ) whereas EFS file system can be mounted on instances across multiple Availability Zones (AZ)
c. EBS volume can be attached to one or more instances in multiple Availability Zones (AZ) and EFS file system can be mounted on instances across multiple Availability Zones (AZ)
d. EBS volume can be attached to a single instance in the same Availability Zone (AZ) and EFS file system can only be mounted on instances in the same Availability Zone (AZ)

A

b. EBS volume can be attached to a single instance in the same Availability Zone (AZ) whereas EFS file system can be mounted on instances across multiple Availability Zones (AZ)

Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources. It is built to scale on-demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth.

The service is designed to be highly scalable, highly available, and highly durable. Amazon EFS file systems store data and metadata across multiple Availability Zones (AZ) in an AWS Region. EFS file system can be mounted on instances across multiple Availability Zones (AZ).

Amazon Elastic Block Store (EBS) is an easy to use, high-performance block storage service designed for use with Amazon Elastic Compute Cloud (EC2) for both throughput and transaction-intensive workloads at any scale.

Designed for mission-critical systems, EBS volumes are replicated within an Availability Zone (AZ) and can easily scale to petabytes of data. You can attach an available EBS volume to one instance that is in the same Availability Zone (AZ) as the volume.
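
A minimal boto3 sketch of that constraint (the instance ID is hypothetical; in practice you would also wait for the volume to become available, e.g. with ec2.get_waiter("volume_available")):

```python
import boto3

ec2 = boto3.client("ec2")

# An EBS volume is created in exactly one Availability Zone...
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,        # GiB
    VolumeType="gp3",
)

# ...and can only be attached to an instance in that same AZ.
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",  # must also be in us-east-1a
    Device="/dev/sdf",
)
```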

216
Q

Which of the following statements are CORRECT regarding the AWS VPC service? (Select two)
a. A Security Group can have allow rules only
b. A network access control list (network ACL) can have allow rules only
c. A Security Group can have both allow and deny rules
d. A Network Address Translation instance (NAT instance) is managed by AWS
e. A Network Address Translation gateway (NAT gateway) is managed by AWS

A

a. A Security Group can have allow rules only
e. A Network Address Translation gateway (NAT gateway) is managed by AWS

Security Groups are stateful and support allow rules only; any traffic that is not explicitly allowed is implicitly denied. In contrast, a network access control list (network ACL) supports both allow and deny rules. A NAT gateway is a fully managed AWS service, whereas a NAT instance is an EC2 instance that the customer must provision, patch, and manage.

217
Q

Which of the following are the storage services offered by the AWS Cloud? (Select two)
a. Amazon Elastic File System (Amazon EFS)
b. Amazon Simple Notification Service (SNS)
c. Amazon Elastic Compute Cloud (Amazon EC2)
d. Amazon Simple Queue Service (SQS)
e. Amazon Simple Storage Service (Amazon S3)

A

a. Amazon Elastic File System (Amazon EFS)
e. Amazon Simple Storage Service (Amazon S3)

Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance.

Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources. It is built to scale on-demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth.

218
Q

Which service gives a personalized view of the status of the AWS services that are part of your Cloud architecture so that you can quickly assess the impact on your business when AWS service(s) are experiencing issues?

a. Amazon Inspector
b. Amazon CloudWatch
c. AWS Health - Your Account Health Dashboard
d. AWS Health - Service Health Dashboard

A

c. AWS Health - Your Account Health Dashboard

AWS Health - Your Account Health Dashboard provides alerts and remediation guidance when AWS is experiencing events that may impact you.

With AWS Health - Your Account Health Dashboard, alerts are triggered by changes in the health of your AWS resources, giving you event visibility, and guidance to help quickly diagnose and resolve issues.

You can check on this page https://phd.aws.amazon.com/phd/home to get current status information.
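
The same account-specific events behind the dashboard can also be pulled programmatically. A minimal boto3 sketch, assuming a Business or Enterprise Support plan (the AWS Health API requires one and is served from us-east-1):

```python
import boto3

health = boto3.client("health", region_name="us-east-1")

# List open and upcoming events that may impact your resources.
events = health.describe_events(
    filter={"eventStatusCodes": ["open", "upcoming"]}
)
for event in events["events"]:
    print(event["service"], event["eventTypeCode"], event["statusCode"])
```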

Incorrect options:

AWS Health - Service Health Dashboard - The AWS Health - Service Health Dashboard is the single place to learn about the availability and operations of AWS services. You can view the overall status of AWS services, and you can sign in to view personalized communications about your particular AWS account or organization.

You can check on this page https://health.aws.amazon.com/health/status to get current status information.

Exam Alert:

While the AWS Health - Service Health Dashboard displays the general status of AWS services; the AWS Health - Your Account Health Dashboard gives you a personalized view of the performance and availability of the AWS services underlying your AWS resources.

219
Q

A company is looking for ways to make its desktop applications available to the employees from browsers on their devices/laptops. Which AWS service will help achieve this requirement without having to procure servers or maintain infrastructure?

a. AWS Snowball
b. AWS Outposts
c. Amazon AppStream 2.0
d. Amazon WorkSpaces

A

c. Amazon AppStream 2.0

Amazon AppStream 2.0 is a fully managed non-persistent application and desktop streaming service. You centrally manage your desktop applications on AppStream 2.0 and securely deliver them to any computer. You can easily scale to any number of users across the globe without acquiring, provisioning, and operating hardware or infrastructure. AppStream 2.0 is built on AWS, so you benefit from a data center and network architecture designed for the most security-sensitive organizations. Each end-user has a fluid and responsive experience because your applications run on virtual machines optimized for specific use cases and each streaming session automatically adjusts to network conditions.

Users can access the desktop applications they need at any time. AppStream 2.0 streams your applications from AWS to any computer, including Chromebooks, Macs, and PCs. AppStream 2.0 connects to your Active Directory, network, cloud storage, and file shares. Users access applications using their existing credentials and your existing security policies manage access. Extensive APIs integrate AppStream 2.0 with your IT solutions.

How Amazon AppStream 2.0 Works: https://aws.amazon.com/appstream2/

Incorrect options:

Amazon WorkSpaces - Amazon WorkSpaces is a managed, secure Desktop-as-a-Service (DaaS) solution. You can use Amazon WorkSpaces to provision either Windows or Linux desktops in just a few minutes and quickly scale to provide thousands of desktops to workers across the globe. Amazon WorkSpaces helps you eliminate the complexity in managing hardware inventory, OS versions and patches, and Virtual Desktop Infrastructure (VDI), which helps simplify your desktop delivery strategy. With Amazon WorkSpaces, your users get a fast, responsive desktop of their choice that they can access anywhere, anytime, from any supported device.

While Amazon AppStream 2.0 moves desktop applications to the AWS Cloud so that they can be accessed from anywhere, Amazon WorkSpaces provides the entire desktop environment needed by the workforce.

AWS Outposts - AWS Outposts is a fully managed service that offers the same AWS infrastructure, AWS services, APIs, and tools to virtually any data center, co-location space, or on-premises facility for a truly consistent hybrid experience. AWS Outposts is ideal for workloads that require low latency access to on-premises systems, local data processing, data residency, and migration of applications with local system interdependencies.

AWS Snowball - AWS Snowball, a part of the AWS Snow Family, is an edge computing, data migration, and edge storage device that comes in two options. Snowball Edge Storage Optimized devices provide both block storage and Amazon S3-compatible object storage, and 40 vCPUs. They are well suited for local storage and large scale data transfer.

220
Q

Which characteristic of Cloud Computing imparts the ability to acquire resources as you need and release when you no longer need them?

a. Reliability
b. Resiliency
c. Durability
d. Elasticity

A

d. Elasticity

Elasticity

The ability to acquire resources as you need them and release them when they are no longer needed is termed Elasticity of the Cloud. With cloud computing, you don’t have to over-provision resources upfront to handle peak levels of business activity in the future. Instead, you provision only the resources that you need. You can scale these resources up or down instantly to grow and shrink capacity as your business needs change.

What is Elasticity: https://aws.amazon.com/what-is-cloud-computing/

221
Q

Which of the following entities applies patches to the underlying OS for Amazon Aurora?
a. The AWS customer by SSHing on the instances
b. The AWS customer by using AWS Systems Manager
c. The AWS Support after receiving a request from the customer
d. The AWS Product Team automatically

A

d. The AWS Product Team automatically

Amazon Aurora is a MySQL and PostgreSQL-compatible relational database built for the cloud. Amazon Aurora is fully managed by Amazon Relational Database Service (RDS), which automates time-consuming administration tasks like hardware provisioning, database setup, patching, and backups. The AWS Product team is responsible for applying patches to the underlying OS for Amazon Aurora.

222
Q

As part of log analysis, you have realized that one or more AWS-owned IP addresses are being used for port scanning your on-premises server. Which service/team should you connect to resolve this issue?

a. Reach out to Werner Vogels, the CTO of Amazon, with the details of the incident
b. Use AWS Trusted Advisor to log a complaint with AWS
c. Contact AWS Support
d. Contact AWS Abuse team

A

d. Contact AWS Abuse team

The AWS Abuse team handles cases where AWS resources are used for abusive or illegal purposes, such as port scanning, denial-of-service attacks, intrusion attempts, or spam. If you suspect that AWS resources are being used to attack your systems, report the incident to the AWS Abuse team along with the relevant log details.

223
Q

A startup wants to provision an EC2 instance for the lowest possible cost for a long-term duration but needs to make sure that the instance would never be interrupted. As a Cloud Practitioner, which of the following options would you recommend?
a. EC2 Dedicated Host
b. EC2 Spot Instance
c. EC2 On-Demand Instance
d. EC2 Reserved Instance (RI)

A

d. EC2 Reserved Instance (RI)

An EC2 Reserved Instance (RI) provides you with significant savings (up to 75%) on your Amazon EC2 costs compared to On-Demand Instance pricing. A Reserved Instance (RI) is not a physical instance, but rather a billing discount applied to the use of On-Demand Instances in your account. You can purchase a Reserved Instance (RI) for a one-year or three-year commitment, with the three-year commitment offering a bigger discount. A Reserved Instance (RI) cannot be interrupted, so this is the correct option.

On-Demand Instances cannot be interrupted either. However, they are not as cost-effective as Reserved Instances, so this option is not correct.

EC2 Spot Instances are even more cost-effective than Reserved Instances, with discounts of up to 90% off On-Demand prices. However, a Spot Instance can be interrupted by Amazon EC2 (with a two-minute warning) when EC2 needs the capacity back, so this option is not correct.

224
Q

A development team is looking for a forum where the most frequent questions and requests from AWS customers are listed along with AWS provided solutions.

Which AWS forum/service can be used for troubleshooting an issue or checking for a solution?

a. AWS Knowledge Center
b. AWS Marketplace
c. AWS Support Center
d. AWS Health Dashboard - service health

A

a. AWS Knowledge Center

AWS Knowledge Center lists the most frequently asked questions and requests, along with AWS-provided solutions. It should be the starting point when checking for a solution or troubleshooting an issue with AWS services. The URL for the Knowledge Center is https://aws.amazon.com/premiumsupport/knowledge-center/.

225
Q

Which Amazon Route 53 routing policy would you use when you want to route your traffic in an active-passive configuration?

a. Simple routing
b. Failover routing
c. Latency-based routing
d. Weighted routing

A

b. Failover routing

Failover routing lets you route traffic to a resource when that resource is healthy, and to a different resource when the first one is unhealthy. This is exactly an active-passive configuration: the primary record serves all traffic while its health check passes, and Amazon Route 53 fails over to the secondary record when it does not.
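
As a sketch of what an active-passive setup looks like in practice, the following boto3 call upserts the primary record of a failover pair (the hosted zone ID, domain name, IP address, and health check ID are all hypothetical; a matching record with Failover set to SECONDARY would complete the pair):

import boto3

route53 = boto3.client("route53")

# PRIMARY serves traffic while its health check passes;
# Route 53 fails over to the SECONDARY record when it does not.
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEF",  # hypothetical hosted zone ID
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com",
            "Type": "A",
            "SetIdentifier": "primary",
            "Failover": "PRIMARY",
            "TTL": 60,
            "ResourceRecords": [{"Value": "192.0.2.10"}],
            "HealthCheckId": "11111111-2222-3333-4444-555555555555",  # hypothetical
        },
    }]},
)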

226
Q

A big data analytics company is moving its IT infrastructure from an on-premises data center to AWS Cloud. The company has some server-bound software licenses that it wants to use on AWS. As a Cloud Practitioner, which of the following EC2 instance types would you recommend to the company?
a. On-Demand Instance
b. Reserved Instance (RI)
c. Dedicated Instance
d. Dedicated Host

A

d. Dedicated Host

Amazon EC2 Dedicated Hosts allow you to use your eligible software licenses from vendors such as Microsoft and Oracle on Amazon EC2. An Amazon EC2 Dedicated Host is a physical server fully dedicated for your use, so you can help address corporate compliance requirements.

Exam Alert:

Please review the differences between Dedicated hosts and Dedicated instances: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/dedicated-hosts-overview.html

Dedicated Instance - A Dedicated Instance is an Amazon EC2 instance that runs in a virtual private cloud (VPC) on hardware that’s dedicated to a single customer. Dedicated Instances that belong to different AWS accounts are physically isolated at the hardware level. However, Dedicated Instances may share hardware with other instances from the same AWS account that are not Dedicated Instances. You cannot use Dedicated Instances for using server-bound software licenses.

227
Q

A manufacturing company is looking at a service that can offer AWS infrastructure, AWS services, APIs, and tools to its on-premises data center for running low latency applications.

Which of the following service/tool is the best fit for the given requirement?

a. AWS Local Zones
b. AWS Snow Family
c. AWS Outposts
d. AWS Wavelength

A

c. AWS Outposts

AWS Outposts is a fully managed service that offers the same AWS infrastructure, AWS services, APIs, and tools to virtually any data center, co-location space, or on-premises facility for a truly consistent hybrid experience. AWS Outposts is ideal for workloads that require low latency access to on-premises systems, local data processing, data residency, and migration of applications with local system interdependencies.

AWS compute, storage, database, and other services run locally on Outposts, and you can access the full range of AWS services available in the Region to build, manage, and scale your on-premises applications using familiar AWS services and tools.

You can use AWS Outposts to support your applications that have low latency or local data processing requirements. These applications may need to generate near real-time responses to end-user applications or need to communicate with other on-premises systems or control on-site equipment. These can include workloads running on factory floors for automated operations in manufacturing, real-time patient diagnosis or medical imaging, and content and media streaming. You can use Outposts to securely store and process customer data that needs to remain on-premises or in countries where there is no AWS Region. You can run data-intensive workloads on Outposts and process data locally when transmitting data to the cloud is expensive and wasteful, and when you need better control over data analysis, backup, and restoration.

How AWS Outposts Works: https://aws.amazon.com/outposts/

Incorrect options:

AWS Snow Family - The AWS Snow Family is a collection of physical devices that help migrate large amounts of data into and out of the cloud without depending on networks. This helps you apply the wide variety of AWS services for analytics, file systems, and archives to your data. You can use AWS Snow Family services for data transfer and occasional pre-processing on location. Some large data transfer examples include cloud migration, disaster recovery, data center relocation, and/or remote data collection projects. These projects typically require you to migrate large amounts of data in the shortest, and most cost-effective, amount of time.

AWS Wavelength - AWS Wavelength is an AWS Infrastructure offering optimized for mobile edge computing applications. Wavelength Zones are AWS infrastructure deployments that embed AWS compute and storage services within communications service providers’ (CSP) data centers at the edge of the 5G network, so application traffic from 5G devices can reach application servers running in Wavelength Zones without leaving the telecommunications network.

AWS Local Zones - AWS Local Zones are a type of AWS infrastructure deployment that places AWS compute, storage, database, and other select services close to a large population, industry, and IT centers. With AWS Local Zones, you can easily run applications that need single-digit millisecond latency closer to end-users in a specific geography. AWS Local Zones are ideal for use cases such as media & entertainment content creation, real-time gaming, live video streaming, and machine learning inference.

228
Q

Which AWS service can help you analyze your infrastructure to identify unattached or underutilized Amazon EBS Elastic Volumes?

a. Amazon Inspector
b. AWS Config
c. Amazon CloudWatch
d. AWS Trusted Advisor

A

d. AWS Trusted Advisor

AWS Trusted Advisor is an online tool that provides real-time guidance to help provision your resources following AWS best practices. Whether establishing new workflows, developing applications, or as part of ongoing improvement, recommendations provided by Trusted Advisor regularly help keep your solutions provisioned optimally. AWS Trusted Advisor analyzes your AWS environment and provides best practice recommendations in five categories: Cost Optimization, Performance, Security, Fault Tolerance, Service Limits.

AWS Trusted Advisor can check Amazon Elastic Block Store (Amazon EBS) volume configurations and warns when volumes appear to be underused. Charges begin when a volume is created. If a volume remains unattached or has very low write activity (excluding boot volumes) for a period of time, the volume is probably not being used.

How AWS Trusted Advisor Works: https://aws.amazon.com/premiumsupport/technology/trusted-advisor/
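
While Trusted Advisor surfaces this check for you, a quick way to see what it is detecting is to list volumes whose status is “available” (created but not attached to any instance). A minimal boto3 sketch:

import boto3

ec2 = boto3.client("ec2")

# Volumes in the "available" state exist (and are billed) but are
# not attached to any instance -- the kind Trusted Advisor flags.
response = ec2.describe_volumes(
    Filters=[{"Name": "status", "Values": ["available"]}]
)
for volume in response["Volumes"]:
    print(volume["VolumeId"], volume["Size"], "GiB, unattached")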

Incorrect options:

Amazon Inspector - Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on your Amazon EC2 instances. Amazon Inspector automatically assesses applications for exposure, vulnerabilities, and deviations from best practices. It is a security assessment service, not an infrastructure tracking service.

229
Q

A company would like to separate cost for AWS services by the department for cost allocation. Which of the following is the simplest way to achieve this task?

a. Create different virtual private cloud (VPCs) for different departments
b. Create one account for all departments and share this account
c. Create different accounts for different departments
d. Create tags for each department

A

d. Create tags for each department

You can assign metadata to your AWS resources in the form of tags. Each tag is a label consisting of a user-defined key and value. Tags can help you manage, identify, organize, search for, and filter resources. You can create tags to categorize resources by purpose, owner, environment, or other criteria.

Typically, you use business tags such as cost center/business unit, customer, or project to associate AWS costs with traditional cost-allocation dimensions. But a cost allocation report can include any tag. This lets you associate costs with technical or security dimensions, such as specific applications, environments, or compliance programs.

Example of tagging for cost optimization: https://docs.aws.amazon.com/general/latest/gr/aws_tagging.html
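
A minimal boto3 sketch of tagging a resource by department (the instance ID is hypothetical). Remember that the tag key must also be activated as a cost allocation tag in the Billing console before it appears in cost reports:

import boto3

ec2 = boto3.client("ec2")

# Tag an EC2 instance with its department; once "Department" is
# activated as a cost allocation tag, costs can be grouped by it.
ec2.create_tags(
    Resources=["i-0123456789abcdef0"],  # hypothetical instance ID
    Tags=[{"Key": "Department", "Value": "Finance"}],
)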

Incorrect options:

Create different accounts for different departments - Users can belong to several departments. Therefore, having different accounts for different departments would imply some users having several accounts. This is contrary to the security best practice: one physical user = one account. Also, it is much simpler to set up tags for tracking costs for each department.

Create one account for all departments and share this account - Sharing accounts is not a security best practice, and is not recommended.

Create different virtual private cloud (VPCs) for different departments - Creating different VPCs will not help with separating costs.

230
Q

A company provides you with a completed product that is run and managed by the company itself. As a customer, you only use the product without worrying about maintaining or managing the product.

Which cloud computing model does this kind of product belong to?

a. Infrastructure as a Service (IaaS)
b. Product as a Service (Paas)
c. Software as a Service (SaaS)
d. Platform as a Service (PaaS)

A

c. Software as a Service (SaaS)

Software as a Service provides you with a completed product that is run and managed by the service provider. In most cases, people referring to Software as a Service are referring to end-user applications. With a SaaS offering, you do not have to think about how the service is maintained or how the underlying infrastructure is managed; you only need to think about how you will use that particular piece of software.

A common example of a SaaS application is the web-based email where you can send and receive email without having to manage feature additions to the email product or maintaining the servers and operating systems that the email program is running on.

Incorrect options:

Infrastructure as a Service (IaaS) - Infrastructure as a Service (IaaS) contains the basic building blocks for cloud IT and typically provides access to networking features, computers (virtual or on dedicated hardware), and data storage space. IaaS provides you with the highest level of flexibility and management control over your IT resources and is most similar to the existing IT resources that many IT departments and developers are familiar with today.

Platform as a Service (PaaS) - Platform as a Service (PaaS) removes the need for organizations to manage the underlying infrastructure (usually hardware and operating systems) and allows you to focus on the deployment and management of your applications. This helps you be more efficient as you don’t need to worry about resource procurement, capacity planning, software maintenance, patching, or any of the other undifferentiated heavy lifting involved in running your application.

Product as a Service (Paas) - This is a made-up option, given only as a distractor.

231
Q

The IT infrastructure at a university is deployed on AWS Cloud and it’s experiencing a read-intensive workload. As a Cloud Practitioner, which AWS service would you use to take the load off databases?

a. Amazon EMR
b. AWS Glue
c. Amazon ElastiCache
d. Amazon Relational Database Service (Amazon RDS)

A

c. Amazon ElastiCache

Amazon ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory cache in the cloud. The service improves the performance of web applications by allowing you to retrieve information from fast, managed, in-memory caches, instead of relying entirely on slower disk-based databases.

If Amazon EC2 instances are intensively reading data from a database, ElastiCache can cache some values to take the load off the database.

How Amazon ElastiCache works: https://aws.amazon.com/elasticache/
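
To illustrate how a cache takes read load off a database, here is a sketch of the common cache-aside pattern using the redis-py client against a hypothetical ElastiCache for Redis endpoint (db_lookup stands in for whatever database read you would otherwise perform):

import redis

# ElastiCache for Redis exposes a standard Redis endpoint, so the
# usual redis-py client works against it (the endpoint is hypothetical).
cache = redis.Redis(host="my-cluster.abc123.use1.cache.amazonaws.com", port=6379)

def get_user(user_id, db_lookup):
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:        # cache hit: the database is not touched
        return cached
    value = db_lookup(user_id)    # cache miss: read from the database once
    cache.setex(key, 300, value)  # keep the value for 5 minutes
    return value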

Incorrect options:

AWS Glue - AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for analytics. AWS Glue job is meant to be used for batch ETL data processing. It cannot be used to take the load off the databases.

Amazon EMR - Amazon EMR provides a managed Hadoop framework that makes it easy, fast, and cost-effective to process vast amounts of data across dynamically scalable Amazon EC2 instances. It cannot be used to take the load off the databases.

232
Q

A digital media company wants to convert English language subtitles into Spanish language subtitles. As a Cloud Practitioner, which AWS service would you recommend for this use-case?

a. Amazon Transcribe
b. Amazon Rekognition
c. Amazon Translate
d. Amazon Polly

A

c. Amazon Translate

Amazon Translate is a neural machine translation service that delivers fast, high-quality, and affordable language translation. Amazon Translate allows you to localize content - such as websites and applications - for international users, and to easily translate large volumes of text efficiently.
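
A minimal boto3 sketch of the subtitle use-case, translating a line of English text into Spanish:

import boto3

translate = boto3.client("translate")

# Translate a single English subtitle line into Spanish.
response = translate.translate_text(
    Text="The show starts in five minutes.",
    SourceLanguageCode="en",
    TargetLanguageCode="es",
)
print(response["TranslatedText"])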

Incorrect options:

Amazon Polly - You can use Amazon Polly to turn text into lifelike speech thereby allowing you to create applications that talk. Polly’s Text-to-Speech (TTS) service uses advanced deep learning technologies to synthesize natural sounding human speech.

Amazon Transcribe - You can use Amazon Transcribe to add speech-to-text capability to your applications. Amazon Transcribe uses a deep learning process called automatic speech recognition (ASR) to convert speech to text quickly and accurately. Amazon Transcribe can be used to transcribe customer service calls, to automate closed captioning and subtitling, and to generate metadata for media assets.

Amazon Rekognition - With Amazon Rekognition, you can identify objects, people, text, scenes, and activities in images and videos, as well as to detect any inappropriate content. Amazon Rekognition also provides highly accurate facial analysis and facial search capabilities that you can use to detect, analyze, and compare faces for a wide variety of user verification, people counting, and public safety use cases.

233
Q

Which of the following AWS services has encryption enabled by default?
a. Amazon Elastic File System (Amazon EFS)
b. Amazon Relational Database Service (Amazon RDS)
c. Amazon Elastic Block Store (Amazon EBS)
d. AWS CloudTrail Logs

A

d. AWS CloudTrail Logs

AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. AWS CloudTrail can be used to record AWS API calls and other activity for your AWS account and save the recorded information to log files in an Amazon Simple Storage Service (Amazon S3) bucket that you choose. By default, the log files delivered by CloudTrail to your S3 bucket are encrypted using server-side encryption with Amazon S3 managed keys (SSE-S3).
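
One way to verify this default is to inspect a delivered log object; S3 reports SSE-S3 encryption as “AES256”. A boto3 sketch with a hypothetical bucket and key:

import boto3

s3 = boto3.client("s3")

# CloudTrail delivers log objects encrypted with SSE-S3 by default.
response = s3.head_object(
    Bucket="my-cloudtrail-bucket",  # hypothetical bucket name
    Key="AWSLogs/111122223333/CloudTrail/us-east-1/2024/01/01/example.json.gz",  # hypothetical key
)
print(response.get("ServerSideEncryption"))  # prints "AES256" for SSE-S3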

234
Q

Historically, IT departments had to over-provision for peak demand. IT professionals may bring this legacy mindset to the table when they build their cloud infrastructure leading to over-provisioned resources and unnecessary costs. Right-sizing of resources is necessary to reduce infrastructure costs while still using cloud functionality optimally.

Which feature of the AWS Cloud refers to right-sizing the resources?

a. Elasticity
b. Horizontal scaling
c. Resiliency
d. Reliability

A

a. Elasticity

Most people, when thinking of cloud computing, think of the ease with which they can procure resources when needed. This is only one aspect of elasticity. The other aspect is contracting when you no longer need resources: scale out and scale in, scale up and scale down.

Elasticity is the ability to acquire resources as you need them and to release resources when you no longer need them. In the cloud, you want to do this automatically.

Some AWS services do this as part of their service: Amazon Simple Storage Service (Amazon S3), Amazon Simple Queue Service (Amazon SQS), Amazon Simple Notification Service (Amazon SNS), Amazon Simple Email Service (Amazon SES), Amazon Aurora, etc. Some require vertical scaling, like Amazon Relational Database Service (Amazon RDS). Others integrate with AWS Auto Scaling, like Amazon EC2, Amazon ECS, AWS Fargate, Amazon EKS, and Amazon DynamoDB. Amazon Aurora Serverless and Amazon Athena also qualify as elastic.

Incorrect options:

Reliability - The ability of a workload to perform its intended function correctly and consistently when it’s expected to. This includes the ability to operate and test the workload through its total lifecycle.

Resiliency - The ability of a workload to recover from infrastructure or service disruptions, dynamically acquire computing resources to meet demand, and mitigate disruptions, such as misconfigurations or transient network issues.

Horizontal scaling - A “horizontally scalable” system can increase capacity by adding more computers to the system. This is in contrast to a “vertically scalable” system, which is constrained to running its processes on only one computer; in such systems, the only way to increase performance is to add more resources into one computer in the form of faster (or more) CPUs, memory or storage.

235
Q

A Project Manager, working on AWS for the first time, is confused about how credits are used in AWS. There are two credits available in the manager’s account. Credit one is for $100, expires July 2022, and can be used for either Amazon S3 or Amazon EC2. Credit two is for $50, expires December 2022, and can be used only for Amazon EC2. The manager’s AWS account has incurred two charges: $1000 for Amazon EC2 and $500 for Amazon S3.

What will be the outcome on the overall bill once the credits are used? (Select two)
a. Then, credit two is applied to $500 for Amazon S3 usage
b. Credit one is applied, which expires in July, to the Amazon EC2 charge which leaves you with a $900 Amazon EC2 charge and a $500 Amazon S3 charge
c. Then, credit two is applied to the remaining $900 of Amazon EC2 usage
d. Credit one is applied, which expires in July, to Amazon S3 usage which leaves you with a $1000 Amazon EC2 charge and a $400 Amazon S3 charge
e. Only one credit can be used in one billing cycle and the customer has a choice to choose from the available ones

A

b. Credit one is applied, which expires in July, to the Amazon EC2 charge which leaves you with a $900 Amazon EC2 charge and a $500 Amazon S3 charge

c. Then, credit two is applied to the remaining $900 of Amazon EC2 usage

Credits are applied in the following order:

Soonest expiring

Least number of applicable products

Oldest credit

For the given use case, credit one is applied, which expires in July, to the Amazon EC2 charge which leaves you with a $900 Amazon EC2 charge and a $500 Amazon S3 charge. Then, credit two is applied to the remaining $900 of Amazon EC2 usage. You need to pay $850 for Amazon EC2 and $500 for Amazon S3. All your credits are now exhausted.
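
A small Python sketch that walks through the same arithmetic (to reproduce the outcome above, it assumes a credit covering several services is applied to the largest eligible charge first):

# Credits are applied soonest-expiring first, then by fewest applicable products.
credits = [
    {"amount": 100, "expires": "2022-07", "applies_to": ["EC2", "S3"]},  # credit one
    {"amount": 50,  "expires": "2022-12", "applies_to": ["EC2"]},        # credit two
]
charges = {"EC2": 1000, "S3": 500}

for credit in sorted(credits, key=lambda c: (c["expires"], len(c["applies_to"]))):
    # Assumption: apply each credit to its largest eligible charge first.
    for service in sorted(credit["applies_to"], key=lambda s: -charges[s]):
        used = min(credit["amount"], charges[service])
        charges[service] -= used
        credit["amount"] -= used

print(charges)  # {'EC2': 850, 'S3': 500}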

236
Q

Which of the following AWS storage services can be directly used with on-premises systems?

a. Amazon EC2 Instance Store
b. Amazon Simple Storage Service (Amazon S3)
c. Amazon Elastic Block Store (Amazon EBS)
d. Amazon Elastic File System (Amazon EFS)

A

d. Amazon Elastic File System (Amazon EFS)

Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources.

To access EFS file systems from on-premises, you must have an AWS Direct Connect or AWS VPN connection between your on-premises datacenter and your Amazon VPC. You mount an EFS file system on your on-premises Linux server using the standard Linux mount command for mounting a file system.

How Amazon EFS Works: https://aws.amazon.com/efs/faq/

Incorrect options:

Amazon Elastic Block Store (Amazon EBS) - Amazon Elastic Block Store (EBS) is an easy to use, high-performance block storage service designed for use with Amazon Elastic Compute Cloud (EC2) for both throughput and transaction-intensive workloads at any scale. EBS volumes can only be mounted with Amazon EC2.

Amazon EC2 Instance Store - An instance store provides temporary block-level storage for your Amazon EC2 instance. This storage is located on disks that are physically attached to the host computer. It is not possible to use this storage from on-premises systems.

Amazon Simple Storage Service (Amazon S3) - Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance. Amazon S3 can be accessed from on-premises only via AWS Storage Gateway. It is not possible to access S3 directly from on-premises systems.

237
Q

A financial services company wants to ensure that all customer data uploaded on its data lake on Amazon Simple Storage Service (Amazon S3) always stays private. Which of the following is the MOST efficient solution to address this compliance requirement?

a. Set up a high-level advisory committee to review the privacy settings of each object uploaded into Amazon S3
b. Use Amazon S3 Block Public Access to ensure that all Amazon S3 resources stay private
c. Trigger a lambda function every time an object is uploaded on Amazon S3. The lambda function should change the object settings to make sure it stays private
d. Use Amazon CloudWatch to ensure that all Amazon S3 resources stay private

A

b. Use Amazon S3 Block Public Access to ensure that all Amazon S3 resources stay private

The Amazon S3 Block Public Access feature provides settings for access points, buckets, and accounts to help you manage public access to Amazon S3 resources. By default, new buckets, access points, and objects don’t allow public access. However, users can modify bucket policies, access point policies, or object permissions to allow public access. S3 Block Public Access settings override these policies and permissions so that you can limit public access to these resources.

When Amazon S3 receives a request to access a bucket or an object, it determines whether the bucket or the bucket owner’s account has a block public access setting applied. If the request was made through an access point, Amazon S3 also checks for block public access settings for the access point. If there is an existing block public access setting that prohibits the requested access, Amazon S3 rejects the request.

Amazon S3 Block Public Access Overview: https://docs.aws.amazon.com/AmazonS3/latest/dev/access-control-block-public-access.html
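
A minimal boto3 sketch that turns on all four Block Public Access settings for a hypothetical bucket (the same settings can also be applied at the account level):

import boto3

s3 = boto3.client("s3")

# Enable every Block Public Access setting for one bucket.
s3.put_public_access_block(
    Bucket="my-data-lake-bucket",  # hypothetical bucket name
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)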

Incorrect options:

Trigger a lambda function every time an object is uploaded on Amazon S3. The lambda function should change the object settings to make sure it stays private - Although it is possible to implement this solution, it is more efficient to use the Amazon S3 Block Public Access feature, as it is available off-the-shelf.

Use Amazon CloudWatch to ensure that all Amazon S3 resources stay private - Amazon CloudWatch is a monitoring and observability service built for DevOps engineers, developers, site reliability engineers (SREs), and IT managers. CloudWatch provides data and actionable insights to monitor applications, respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health. This is an excellent service for building Resilient systems. Think resource performance monitoring, events, and alerts; think CloudWatch. CloudWatch cannot be used to ensure data privacy on Amazon S3.

238
Q

Which of the following AWS services comes under the Software as a Service (SaaS) Cloud Computing Type?

a. Elastic Load Balancing (ELB)
b. AWS Elastic Beanstalk
c. Amazon Elastic Compute Cloud (Amazon EC2)
d. Amazon Rekognition

A

d. Amazon Rekognition

Cloud Computing can be broadly divided into three types - Infrastructure as a Service (IaaS), Platform as a Service (PaaS), Software as a Service (SaaS).

IaaS contains the basic building blocks for cloud IT. It typically provides access to networking features, computers (virtual or on dedicated hardware), and data storage space. IaaS gives the highest level of flexibility and management control over IT resources. Examples - Amazon EC2 (on AWS), GCP, Azure, Rackspace, Digital Ocean, Linode.

PaaS removes the need to manage underlying infrastructure (usually hardware and operating systems) and allows you to focus on the deployment and management of your applications. You don’t need to worry about resource procurement, capacity planning, software maintenance, patching, or any of the other undifferentiated heavy lifting involved in running your application. Examples - Elastic Beanstalk (on AWS), Heroku, Google App Engine (GCP), Windows Azure (Microsoft).

SaaS provides you with a complete product that is run and managed by the service provider. With a SaaS offering, you don’t have to think about how the service is maintained or how the underlying infrastructure is managed. You only need to think about how you will use that particular software. Examples - Amazon Rekognition, Google Apps (Gmail), Dropbox, Zoom.

Overview of Cloud Computing Types: https://aws.amazon.com/types-of-cloud-computing/

You can use Amazon Rekognition to add image and video analysis to your applications using proven, highly scalable, deep learning technology that requires no machine learning expertise. With Amazon Rekognition, you can identify objects, people, text, scenes, and activities in images and videos as well as detect any inappropriate content. Amazon Rekognition is an example of the Software as a Service (SaaS) model.
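
A minimal boto3 sketch of the SaaS experience Rekognition provides: one API call against an image in a hypothetical S3 bucket, with no model training or infrastructure to manage:

import boto3

rekognition = boto3.client("rekognition")

# Detect labels (objects, scenes, activities) in an S3-hosted image.
response = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "my-images-bucket", "Name": "photo.jpg"}},  # hypothetical
    MaxLabels=5,
)
for label in response["Labels"]:
    print(label["Name"], round(label["Confidence"], 1))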

239
Q

Which of the following represents the correct scenario where an Auto Scaling group’s (ASG) predictive scaling can be effectively used to maintain the required number of AWS resources?

a. To help configure a scaling policy to keep the average aggregate CPU utilization of your Auto Scaling group at 40 percent
b. To manage a fixed number of resources in the Auto Scaling group
c. To help configure a CloudWatch Amazon Simple Queue Service (Amazon SQS) metric like ApproximateNumberOfMessagesVisible for scaling the group based on the value of the metric
d. To manage a workload that exhibits recurring load patterns that are specific to the day of the week or the time of day

A

d. To manage a workload that exhibits recurring load patterns that are specific to the day of the week or the time of day

Predictive scaling uses machine learning to predict capacity requirements based on historical data from CloudWatch. The machine learning algorithm consumes the available historical data and calculates capacity that best fits the historical load pattern, and then continuously learns based on new data to make future forecasts more accurate.

Predictive scaling is well suited for situations where you have:
1. Cyclical traffic, such as high use of resources during regular business hours and low use of resources during evenings and weekends
2. Recurring on-and-off workload patterns, such as batch processing, testing, or periodic data analysis
3. Applications that take a long time to initialize, causing a noticeable latency impact on application performance during scale-out events
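
For reference, a predictive scaling policy is attached to an Auto Scaling group like any other scaling policy. A boto3 sketch with a hypothetical group name (the configuration shown forecasts on CPU utilization and scales on the forecast):

import boto3

autoscaling = boto3.client("autoscaling")

# Attach a predictive scaling policy that forecasts capacity from
# historical CPU utilization and acts on the forecast.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="my-asg",  # hypothetical ASG name
    PolicyName="weekday-traffic-forecast",
    PolicyType="PredictiveScaling",
    PredictiveScalingConfiguration={
        "MetricSpecifications": [{
            "TargetValue": 40.0,
            "PredefinedMetricPairSpecification": {
                "PredefinedMetricType": "ASGCPUUtilization"
            },
        }],
        "Mode": "ForecastAndScale",
    },
)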

Incorrect options:

To help configure a scaling policy to keep the average aggregate CPU utilization of your Auto Scaling group at 40 percent - Target tracking scaling policy is the best fit for this use case. With target tracking scaling policies, you select a scaling metric and set a target value. Amazon EC2 Auto Scaling creates and manages the CloudWatch alarms that trigger the scaling policy and calculates the scaling adjustment based on the metric and the target value. The scaling policy adds or removes capacity as required to keep the metric at, or close to, the specified target value.

To help configure a CloudWatch Amazon Simple Queue Service (Amazon SQS) metric like ApproximateNumberOfMessagesVisible for scaling the group based on the value of the metric - Target tracking scaling policy with backlog per instance metric is the best fit for this use case. That’s because the number of messages in your SQS queue does not solely define the number of instances needed. The number of instances in your Auto Scaling group can be driven by multiple factors, including how long it takes to process a message and the acceptable amount of latency (queue delay).

To manage a fixed number of resources in the Auto Scaling group - Maintaining current instance levels at all times to a fixed number is a basic way to configure an ASG. Predictive Scaling is not needed to maintain a fixed number of resources.

240
Q

Which of the following statements are correct regarding the AWS Support Plans? (Select two)

a. Contextual guidance based on customer use-case is available only for the AWS Enterprise support plan
b. A designated Technical Account Manager is available only for AWS Enterprise Support plan
c. Infrastructure Event Management is included for free for AWS Business Support and AWS Enterprise Support plans and can be extended to AWS Developer Support plan for an additional fee
d. Both Basic and AWS Developer Support plans have access to the core Trusted Advisor checks only
e. AWS Concierge service is available for the AWS Business Support and AWS Enterprise Support plans

A

b. A designated Technical Account Manager is available only for AWS Enterprise Support plan
d. Both Basic and AWS Developer Support plans have access to the core Trusted Advisor checks only

A designated Technical Account Manager (TAM) is the primary point of contact who provides guidance, architectural review, and ongoing communication to keep the customer informed and well prepared as they plan, deploy, and proactively optimize their AWS solutions. As the cornerstone of the Enterprise Support plan, your TAM serves as your guide and advocate, focused on delivering the right resources to support the success and ongoing operational health of your AWS infrastructure.

AWS Trusted Advisor draws upon best practices learned from the aggregated operational history of serving hundreds of thousands of AWS customers. Trusted Advisor inspects your AWS environment and alerts you to opportunities to save money, improve system availability and performance, or help close security gaps. Customers on the Basic and AWS Developer support plans have access only to the core Trusted Advisor checks, along with guidance to provision resources following best practices to increase performance and improve security.

Incorrect options:

AWS Concierge service is available for the AWS Business Support and AWS Enterprise Support plans - AWS Concierge is a senior customer service agent who is assigned to your account when you subscribe to an Enterprise or qualified Reseller Support plan. This Concierge agent is your primary point of contact for billing or account inquiries; when you don’t know whom to call, they will find the right people to help. In most cases, the AWS Concierge is available during regular business hours in your headquarters’ geography.

Contextual guidance based on customer use-case is available only for the AWS Enterprise support plan - Contextual guidance on how services fit together to meet your specific use-case, workload, or application is part of the Business support plan.

Infrastructure Event Management is included for free for AWS Business Support and AWS Enterprise Support plans and can be extended to AWS Developer Support plan for an additional fee - AWS Infrastructure Event Management is a short-term engagement with AWS Support, available as part of the Enterprise-level Support product offering (also available to the AWS Enterprise On-Ramp Support plan subject to a cap of one per year), and available for additional purchase for AWS Business Support plan users. AWS Infrastructure Event Management partners with your technical and project resources to gain a deep understanding of your use case and provide architectural and scaling guidance for an event. Common use-case examples for AWS Event Management include advertising launches, new product launches, and infrastructure migrations to AWS. Infrastructure Event Management cannot be extended to an AWS Developer Support plan for an additional fee.

241
Q

Which AWS service allows you to connect any number of IoT devices to the cloud without requiring you to provision or manage servers?

a. Amazon Connect
b. AWS Control Tower
c. AWS IoT Gateway
d. AWS IoT Core

A

d. AWS IoT Core

AWS IoT Core lets you connect IoT devices to the AWS cloud without the need to provision or manage servers. AWS IoT Core can support billions of devices and trillions of messages and can process and route those messages to AWS endpoints and to other devices reliably and securely. With AWS IoT Core, your applications can keep track of and communicate with all your devices, all the time, even when they aren’t connected.

AWS IoT Core also makes it easy to use AWS and Amazon services like AWS Lambda, Amazon Kinesis, Amazon S3, Amazon SageMaker, Amazon DynamoDB, Amazon CloudWatch, AWS CloudTrail, Amazon QuickSight, and Alexa Voice Service to build IoT applications that gather, process, analyze and act on data generated by connected devices, without having to manage any infrastructure.

AWS IoT Core lets you select the communication protocol most appropriate for your use case to connect and manage IoT devices. AWS IoT Core supports MQTT (Message Queuing Telemetry Transport), HTTPS (Hypertext Transfer Protocol - Secure), MQTT over WSS (WebSockets Secure), and LoRaWAN (low-power long-range wide-area network).

AWS IoT Core provides automated configuration and authentication upon a device’s first connection to AWS IoT Core, as well as end-to-end encryption throughout all points of connection, so that data is never exchanged between devices and AWS IoT Core without proven identity. In addition, you can secure access to your devices and applications by applying policies with granular permissions.

AWS IoT Core capabilities: https://aws.amazon.com/iot-core/
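
As a sketch of the serverless device-to-cloud path, an application (or a test script) can publish a message to an MQTT topic through the IoT data plane with boto3; the topic name and payload here are hypothetical:

import json
import boto3

# The IoT data plane client publishes to an MQTT topic over HTTPS.
iot_data = boto3.client("iot-data")

iot_data.publish(
    topic="sensors/device42/temperature",  # hypothetical topic
    qos=1,
    payload=json.dumps({"celsius": 21.5}),
)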

Incorrect options:

Amazon Connect - Amazon Connect is an easy to use omnichannel cloud contact center that helps you provide superior customer service at a lower cost. Designed from the ground up to be omnichannel, Amazon Connect provides a seamless experience across voice and chat for your customers and agents. This includes one set of tools for skills-based routing, task management, powerful real-time and historical analytics, and intuitive management tools – all with pay-as-you-go pricing, which means Amazon Connect simplifies contact center operations, improves agent efficiency, and lowers costs.

AWS IoT Gateway - This is a made-up option and has been added as a distractor.

AWS Control Tower - AWS Control Tower provides the easiest way to set up and govern a new, secure, multi-account AWS environment based on best practices established through AWS’ experience working with thousands of enterprises as they move to the cloud. Control Tower provides mandatory and strongly recommended high-level rules, called guardrails, that help enforce your policies using service control policies (SCPs), or detect policy violations using AWS Config rules.

242
Q

A startup is looking for 24x7 phone based technical support for its AWS account. Which of the following is the MOST cost-effective AWS support plan for this use-case?

a. AWS Business Support
b. AWS Enterprise Support
c. AWS Developer Support
d. AWS Enterprise On-Ramp Support

A

a. AWS Business Support

AWS offers four different support plans to cater to each of its customers - AWS Developer Support, AWS Business Support, AWS Enterprise On-Ramp Support and AWS Enterprise Support plans. A basic support plan is included for all AWS customers.

AWS Business Support - You should use the AWS Business Support plan if you have production workloads on AWS and want 24x7 phone, email and chat access to technical support and architectural guidance in the context of your specific use-cases. AWS Business Support plan is the MOST cost-effective option for the given use-case.

Exam Alert:

Please review the differences between the AWS Developer Support, AWS Business Support, AWS Enterprise On-Ramp Support and AWS Enterprise Support plans as you can expect at least a couple of questions on the exam: https://aws.amazon.com/premiumsupport/plans/

243
Q

Which of the following is CORRECT regarding removing an AWS account from AWS Organizations?
a. Raise a support ticket with AWS Support to remove the account
b. The AWS account must be able to operate as a standalone account. Only then it can be removed from AWS organizations
c. The AWS account can be removed from AWS Systems Manager
d. The AWS account must not have any Service Control Policies (SCPs) attached to it. Only then it can be removed from AWS organizations

A

b. The AWS account must be able to operate as a standalone account. Only then it can be removed from AWS organizations

You can remove an account from your organization only if the account has the information that is required for it to operate as a standalone account. For each account that you want to make standalone, you must accept the AWS Customer Agreement, choose a support plan, provide and verify the required contact information, and provide a current payment method. AWS uses the payment method to charge for any billable (not AWS Free Tier) AWS activity that occurs while the account isn’t attached to an organization.
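
For reference, removal is a single API call made from the organization’s management account, and it fails unless the standalone prerequisites above are met. A boto3 sketch with a hypothetical account ID:

import boto3

organizations = boto3.client("organizations")

# Fails unless the member account can operate standalone (agreement
# accepted, support plan chosen, contact info and payment method set).
organizations.remove_account_from_organization(
    AccountId="111122223333"  # hypothetical member account ID
)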

244
Q

Which AWS service can help you create data-driven business cases for transitioning your business from on-premises to AWS Cloud?

a. AWS Budgets
b. AWS Billing and Cost Management
c. AWS Migration Evaluator
d. AWS Trusted Advisor

A

c. AWS Migration Evaluator

AWS Migration Evaluator (formerly TSO Logic) is a complimentary service to create data-driven business cases for AWS Cloud planning and migration.

AWS Migration Evaluator quickly provides a business case to make sound AWS planning and migration decisions. With AWS Migration Evaluator, your organization can build a data-driven business case for AWS, get access to AWS expertise, gain visibility into the costs associated with multiple migration strategies, and receive insights on how reusing existing software licenses can reduce costs further.

Incorrect options:

AWS Budgets - AWS Budgets gives the ability to set custom budgets that alert you when your costs or usage exceed (or are forecasted to exceed) your budgeted amount. You can also use AWS Budgets to set reservation utilization or coverage targets and receive alerts when your utilization drops below the threshold you define. AWS Budgets can be created at the monthly, quarterly, or yearly level, and you can customize the start and end dates. You can further refine your budget to track costs associated with multiple dimensions, such as AWS service, linked account, tag, and others. You cannot use this service to create data-driven business cases for transitioning your business from on-premises to AWS Cloud.

AWS Trusted Advisor - AWS Trusted Advisor is an online tool that provides real-time guidance to help provision your resources following AWS best practices. Whether establishing new workflows, developing applications, or as part of ongoing improvement, recommendations provided by AWS Trusted Advisor regularly help keep your solutions provisioned optimally. AWS Trusted Advisor analyzes your AWS environment and provides best practice recommendations in five categories: Cost Optimization, Performance, Security, Fault Tolerance, Service Limits. You cannot use this service to create data-driven business cases for transitioning your business from on-premises to AWS Cloud.

AWS Billing and Cost Management - AWS Billing and Cost Management is the service that you use to pay your AWS bill, monitor your usage, and analyze and control your costs. It is the billing department for AWS services - with necessary tools and services under its hood. You cannot use this service to create data-driven business cases for transitioning your business from on-premises to AWS Cloud.

245
Q

Which AWS service will you use if you have to move large volumes of on-premises data to AWS Cloud from a remote location with limited bandwidth?

a. AWS Snowball
b. AWS Direct Connect
c. AWS Transit Gateway
d. AWS Virtual Private Network (VPN)

A

a. AWS Snowball

AWS Snowball, a part of the AWS Snow Family, is a data migration and edge computing device. If you have large quantities of data you need to migrate into AWS, offline data transfer with AWS Snowball can overcome the challenge of limited bandwidth, and avoid the need to lease additional bandwidth. AWS Snowball moves terabytes of data in about a week. You can use it to move things like databases, backups, archives, healthcare records, analytics datasets, IoT sensor data and media content, especially when network conditions prevent realistic timelines for transferring large amounts of data both into and out of AWS.

Incorrect options:

AWS Virtual Private Network (VPN) - A VPN connection refers to the connection between your Virtual Private Cloud and your on-premises network. By default, instances that you launch into an Amazon VPC can’t communicate with your own (remote) network. You can enable access to your remote network from your VPC by creating an AWS Site-to-Site VPN connection and configuring routing to pass traffic through the connection. A VPN provides regular connectivity between AWS and your private on-premises network; it is not a data migration solution.

AWS Direct Connect - AWS Direct Connect is a cloud service solution that makes it easy to establish a dedicated network connection from your premises to AWS. You can use AWS Direct Connect to establish a private virtual interface from your on-premises network directly to your Amazon VPC, providing you with a private, high-bandwidth network connection between your network and your VPC. This connection is private and does not go over the public internet. It takes at least a month to establish this physical connection, and it is not feasible to set up AWS Direct Connect in remote locations.

AWS Transit Gateway - AWS Transit Gateway connects VPCs and on-premises networks through a central hub. This simplifies your network and puts an end to complex peering relationships. It acts as a cloud router – each new connection is only made once. VPC peering across large connections is made possible using AWS Transit Gateway without ending up with a complex VPC peering network. AWS Transit Gateway is not a data migration solution.

246
Q

As a Cloud Practitioner, which of the following credentials would you recommend for signing in to the AWS Management Console to meet security best practices? (Select two)

a. IAM Username and password
b. X.509 certificate
c. Multi Factor Authentication (MFA)
d. Access Key ID
e. Secret Access Key

A

a. IAM Username and password
c. Multi Factor Authentication (MFA)

An AWS Identity and Access Management (IAM) user is an entity that you create in AWS. The IAM user represents the human user or workload who uses the IAM user to interact with AWS. A user in AWS consists of a name and credentials. You also need a password that the IAM user can type to sign in to interactive sessions using the AWS Management Console.

AWS multi-factor authentication (MFA) is an AWS Identity and Access Management (IAM) best practice that requires a second authentication factor in addition to user name and password sign-in credentials. You can enable MFA at the AWS account level and for root and IAM users you have created in your account.

Incorrect options:

Secret Access Key
Access Key ID

Access keys are long-term credentials for an IAM user or the AWS account root user. You can use access keys to sign programmatic requests to the AWS CLI or AWS API (directly or using the AWS SDK). Access keys consist of two parts: an access key ID (for example, AKIAIOSFODNN7EXAMPLE) and a secret access key (for example, wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY). You must use both the access key ID and secret access key together to authenticate your requests.

X.509 certificate

X.509 certificates are used by the AWS Certificate Manager (ACM). ACM certificates are X.509 SSL/TLS certificates that bind the identity of your website and the details of your organization to the public key that is contained in the certificate. One of the keys is public and is typically made available in the X.509 certificate. The other key is private and is stored securely. The X.509 certificate binds the identity of a user, computer, or other resource (the certificate subject) to the public key.

247
Q

A cargo shipping company runs its server fleet on Amazon Elastic Compute Cloud (Amazon EC2) instances. Some of these instances host the CRM (Customer Relationship Management) applications that need to be accessible 24*7. These applications are not mission-critical. In case of a disaster, these applications can be managed on a lesser number of instances for some time.

Which disaster recovery strategy is well-suited as well as cost-effective for this requirement?

a. Warm Standby strategy
b. Multi-site active-active strategy
c. Pilot Light strategy
d. Backup & Restore strategy

A

a. Warm Standby strategy

When selecting your disaster recovery (DR) strategy, you must weigh the benefits of lower RTO (recovery time objective) and RPO (recovery point objective) vs the costs of implementing and operating a strategy. The pilot light and warm standby strategies both offer a good balance of benefits and cost.

The warm standby strategy replicates data from the primary Region to data resources in the recovery Region, such as Amazon Relational Database Service (Amazon RDS) DB instances or Amazon DynamoDB tables. These data resources are ready to serve requests. In addition to replication, this strategy requires you to create a continuous backup in the recovery Region. This is because when “human action” type disasters occur, data can be deleted or corrupted, and replication will replicate the bad data. Backups are necessary to enable you to get back to the last known good state.

The warm standby strategy deploys a functional stack, but at reduced capacity. The DR endpoint can handle requests, but cannot handle production levels of traffic. Its capacity may vary, but it is always less than the full production deployment, for cost savings. If the passive stack is deployed to the recovery Region at full capacity, however, then this strategy is known as “hot standby.” Because warm standby deploys a functional stack to the recovery Region, this makes it easier to test Region readiness using synthetic transactions.

Disaster recovery (DR) strategies: https://aws.amazon.com/blogs/architecture/disaster-recovery-dr-architecture-on-aws-part-iii-pilot-light-and-warm-standby/

Incorrect options:

Multi-site active-active strategy - This strategy uses AWS Regions as your active sites, creating a multi-Region active/active architecture. Generally, two Regions are used. Each Region hosts a highly available, multi-Availability Zone (AZ) workload stack. In each Region, data is replicated live between the data stores and also backed up. This protects against disasters that include data deletion or corruption since the data backup can be restored to the last known good state. Each regional stack serves production traffic effectively. But, this strategy is cost involving and should only be used for mission-critical applications.

Pilot Light strategy - Pilot Light, like Warm Standby strategy, replicates data from the primary Region to data resources in the recovery Region, such as Amazon Relational Database Service (Amazon RDS) DB instances or Amazon DynamoDB tables. But, the DR Region in a pilot light strategy (unlike warm standby) cannot serve requests until additional steps are taken. A pilot light in a home furnace does not provide heat to the home. It provides a quick way to light the furnace burners that then provide heat.

Warm standby can handle traffic at reduced levels immediately. Pilot light requires you to first deploy infrastructure and then scale out resources before the workload can handle requests.

Backup & Restore strategy - Backup and Restore is associated with higher RTO (recovery time objective) and RPO (recovery point objective). This results in longer downtimes and greater loss of data between when the disaster event occurs and recovery. However, backup and restore can still be the right strategy for workloads because it is the easiest and least expensive strategy to implement.

248
Q

Which of the following is the MOST cost-effective option to purchase an EC2 Reserved Instance (RI)?
a. All upfront payment option with the standard 1-year term
b. Partial upfront payment option with a standard 3-year term
c. No upfront payment option with a standard 1-year term
d. No upfront payment option with a standard 3-year term

A

b. Partial upfront payment option with a standard 3-year term

Reserved Instance pricing follows two simple rules: a longer term is cheaper than a shorter one, and the more you pay upfront, the bigger the discount. A 3-year term therefore beats any 1-year option (even All upfront), and among the 3-year options listed, Partial upfront is cheaper than No upfront.

249
Q

Which of the following are correct statements regarding the AWS Shared Responsibility Model? (Select two)

a. Configuration Management is the responsibility of the customer
b. For abstracted services like Amazon S3, AWS operates the infrastructure layer, the operating system, and platforms
c. For a service like Amazon EC2, that falls under Infrastructure as a Service (IaaS), AWS is responsible for maintaining guest operating system
d. AWS is responsible for Security ‘of’ the Cloud
e. AWS is responsible for training AWS and customer employees on AWS products and services

A

b. For abstracted services like Amazon S3, AWS operates the infrastructure layer, the operating system, and platforms
d. AWS is responsible for Security ‘of’ the Cloud

Security and Compliance is a shared responsibility between AWS and the customer. This shared model can help relieve the customer’s operational burden as AWS operates, manages and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates.

AWS is responsible for protecting the infrastructure that runs all of the services offered in the AWS Cloud. This infrastructure is composed of the hardware, software, networking, and facilities that run AWS Cloud services.

For abstracted services, such as Amazon S3 and Amazon DynamoDB, AWS operates the infrastructure layer, the operating system, and platforms, and customers access the endpoints to store and retrieve data.

Incorrect options:

For a service like Amazon EC2, which falls under Infrastructure as a Service (IaaS), AWS is responsible for maintaining the guest operating system - A service such as Amazon Elastic Compute Cloud (Amazon EC2) is categorized as Infrastructure as a Service (IaaS) and, as such, requires the customer to perform all of the necessary security configuration and management tasks. Customers are responsible for the management of the guest operating system (including updates and security patches), any application software or utilities installed by the customer on the instances, and the configuration of the AWS-provided firewall (called a security group) on each instance.

250
Q

AWS Support offers five support plans for its customers. Which of the following features are covered as part of the AWS Basic Support Plan? (Select two)

a. Service health checks
b. Infrastructure event management
c. Use-case guidance – What AWS products, features, and services to use for best supporting your specific needs
d. Client-side diagnostic tools
e. One-on-one responses to account and billing questions

A

a. Service health checks
e. One-on-one responses to account and billing questions

AWS Support offers five support plans: Basic support plan, AWS Developer support plan, AWS Business support plan, AWS Enterprise-On-Ramp support plan, and AWS Enterprise Support plan.

The Basic plan offers support for account and billing questions and service quota increases. The other plans offer a number of technical support cases with pay-by-the-month pricing and no long-term contracts. All AWS customers automatically have 24/7 access to these features of the Basic support plan: 1. One-on-one responses to account and billing questions 2. Support forums 3. Service health checks 4. Documentation, technical papers, and best practice guides

Incorrect options:

Client-side diagnostic tools - Customers with any of the Developer, Business, Enterprise-On-Ramp, Enterprise support plans have access to client-side diagnostic tools.

Use-case guidance – What AWS products, features, and services to use for best supporting your specific needs - Customers with any of the Business, Enterprise-On-Ramp, Enterprise support plans have access to use-case guidance.

Infrastructure event management - Customers with AWS Enterprise-On-Ramp or Enterprise support plan have access to infrastructure event management which is a short-term engagement with AWS Support to get a deep understanding of customer use-cases. After analysis, AWS provides architectural and scaling guidance for an event.

251
Q

As an AWS Cloud Practitioner, you have been tasked to find examples of AWS Cloud solution designs.

Which service/feature would you recommend?

a. AWS Marketplace
b. AWS Architecture Center
c. AWS Trusted Advisor
d. APN Consulting Partner

A

b. AWS Architecture Center

The AWS Architecture Center provides reference architecture diagrams, vetted architecture solutions, Well-Architected best practices, patterns, icons, etc.

Incorrect options:

AWS Marketplace - AWS Marketplace is a digital catalog with thousands of software listings from independent software vendors that make it easy to find, test, buy, and deploy software that runs on AWS.

APN Consulting Partner - The AWS Partner Network (APN) is the global partner program for technology and consulting businesses that leverage Amazon Web Services to build solutions and services for customers.

APN Consulting Partners are professional services firms that help customers of all types and sizes design, architect, build, migrate, and manage their workloads and applications on AWS, accelerating their migration to AWS cloud.

AWS Trusted Advisor - AWS Trusted Advisor is an online tool that provides you real-time guidance to help you provision your resources following AWS best practices on cost optimization, security, fault tolerance, service limits, and performance improvement.

252
Q

A social media company wants to have the MOST cost-optimal strategy for deploying Amazon Elastic Compute Cloud (Amazon EC2) instances. As a Cloud Practitioner, which of the following options would you recommend? (Select two)

a. Use On-Demand Instances to run applications with a predictable usage over the next one year
b. Use On-Demand Instances for ad-hoc jobs that can be interrupted
c. Use Spot Instances for ad-hoc jobs that can be interrupted
d. Use Reserved Instances (RI) to run applications with a predictable usage over the next one year
e. Use Reserved Instances (RI) for ad-hoc jobs that can be interrupted

A

c. Use Spot Instances for ad-hoc jobs that can be interrupted
d. Use Reserved Instances (RI) to run applications with a predictable usage over the next one year

A Spot Instance is an unused EC2 instance that is available for less than the On-Demand price. Because Spot Instances enable you to request unused EC2 instances at steep discounts (up to 90%), you can lower your Amazon EC2 costs significantly. Spot Instances are well-suited for data analysis, batch jobs, background processing, and optional tasks. Spot Instances can be terminated at short notice, so they are not suitable for critical workloads that must run at a specific point in time.

Reserved Instances (RI) provide you with significant savings (up to 75%) on your Amazon EC2 costs compared to On-Demand Instance pricing. Reserved Instances (RI) are not physical instances, but rather a billing discount applied to the use of On-Demand Instances in your account. You can purchase a Reserved Instance (RI) for a one-year or three-year commitment, with the three-year commitment offering a bigger discount. Reserved Instances (RI) are a great fit for applications with steady-state usage. Reserved Instances (RI) cannot be interrupted.

Amazon EC2 Pricing Options Overview: https://aws.amazon.com/ec2/pricing/
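A minimal sketch of launching a Spot Instance for an interruptible job (the AMI ID and instance type are placeholder assumptions):

import boto3

ec2 = boto3.client("ec2")

# Request a one-time Spot Instance; if EC2 reclaims the capacity, the
# instance is interrupted after a two-minute warning.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI ID
    InstanceType="c5.large",
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {"SpotInstanceType": "one-time"},
    },
)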

Incorrect options:

Use On-Demand Instances to run applications with a predictable usage over the next one year
Use On-Demand Instances for ad-hoc jobs that can be interrupted

An On-Demand Instance is an instance that you pay for by the second, with no long-term commitment and no upfront payment. You have full control over its lifecycle: you decide when to launch, stop, hibernate, start, reboot, or terminate it. The price per second for running an On-Demand Instance is fixed. On-Demand Instances cannot be interrupted. However, they are not as cost-effective as Spot Instances or Reserved Instances (RI), so neither of these options is correct.

Use Reserved Instances (RI) for ad-hoc jobs that can be interrupted - Spot instances are more cost-effective than Reserved instances (RI) for running ad-hoc jobs that can be interrupted, so this option is not correct.

253
Q

What is the primary benefit of deploying an Amazon RDS Multi-AZ database with one standby?

a. Amazon RDS Multi-AZ improves database performance for read-heavy workloads
b. Amazon RDS Multi-AZ enhances database availability
c. Amazon RDS Multi-AZ protects the database from a regional failure
d. Amazon RDS Multi-AZ reduces database usage costs

A

b. Amazon RDS Multi-AZ enhances database availability

Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Amazon Relational Database Service (Amazon RDS) instances, making them a natural fit for production database workloads. When you provision an Amazon RDS Multi-AZ instance with one standby, Amazon RDS automatically creates a primary DB instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ).

In case of an infrastructure failure, Amazon RDS performs an automatic failover to the standby so that you can resume database operations as soon as the failover is complete.

How Amazon RDS Multi-AZ Works: https://aws.amazon.com/rds/features/multi-az/
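A minimal provisioning sketch with the AWS SDK for Python (the identifiers, engine, and sizes are assumptions); the MultiAZ flag is what tells Amazon RDS to create and synchronously replicate to the standby:

import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="orders-db",       # hypothetical name
    Engine="mysql",
    DBInstanceClass="db.t3.medium",
    AllocatedStorage=100,                   # GiB
    MasterUsername="admin",
    MasterUserPassword="change-me-please",  # use AWS Secrets Manager in practice
    MultiAZ=True,                           # create a synchronous standby in another AZ
)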

254
Q

Which of the following AWS services is essential for implementing security of resources in AWS Cloud?

a. AWS Shield
b. AWS Identity and Access Management (IAM)
c. Amazon CloudWatch
d. AWS Web Application Firewall (AWS WAF)

A

b. AWS Identity and Access Management (IAM)

AWS Identity and Access Management (IAM) enables you to manage access to AWS services and resources securely. Using IAM, you can create and manage AWS users and groups, and use permissions to allow and deny their access to AWS resources. IAM enables security best practices by allowing you to grant unique security credentials to users and groups to specify which AWS service APIs and resources they can access. These features make IAM an important service for the overall security of AWS resources in your account. IAM is secure by default; users have no access to AWS resources until permissions are explicitly granted.
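As an illustrative sketch of the deny-by-default model (the user name is a hypothetical example), a new IAM user can do nothing until a policy grants permissions:

import boto3

iam = boto3.client("iam")

# A newly created user has no access to any AWS resource.
iam.create_user(UserName="analyst")

# Attach an AWS managed policy to grant read-only access to S3.
iam.attach_user_policy(
    UserName="analyst",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)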

255
Q

Which of the following statements are CORRECT about the AWS Auto Scaling group? (Select two)

a. Auto Scaling group scales in and reduces the number of Amazon EC2 instances to match a decrease in demand
b. Auto Scaling group scales down and reduces the number of Amazon EC2 instances to match a decrease in demand
c. Auto Scaling group scales down and downgrades to a less powerful Amazon EC2 instance to match a decrease in demand
d. Auto Scaling group scales out and adds more Amazon EC2 instances to match an increase in demand
e. Auto Scaling group scales up and upgrades to a more powerful Amazon EC2 instance to match an increase in demand

A

a. Auto Scaling group scales in and reduces the number of Amazon EC2 instances to match a decrease in demand
d. Auto Scaling group scales out and adds more Amazon EC2 instances to match an increase in demand

The remaining options describe vertical scaling (scaling up or down, i.e. changing instance size). An Auto Scaling group performs horizontal scaling: it changes the number of Amazon EC2 instances, never their size, as sketched below.
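A minimal sketch of horizontal scaling with an Auto Scaling group (the launch template, subnets, and target value are assumptions):

import boto3

autoscaling = boto3.client("autoscaling")

# Keep between 2 and 10 instances in the group (IDs are hypothetical).
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchTemplate={"LaunchTemplateName": "web-template", "Version": "$Latest"},
    MinSize=2,
    MaxSize=10,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-0abc,subnet-0def",
)

# Target tracking keeps average CPU near 50%: the group scales out
# (adds instances) above the target and scales in (removes instances)
# below it. Instance sizes never change.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="cpu-target-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)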

256
Q

A Cloud Practitioner would like to get operational insights into their resources to quickly identify any issues that might impact applications using those resources. Which AWS service can help with this task?

a. AWS Trusted Advisor
b. AWS Systems Manager
c. Amazon Inspector
d. AWS Health Dashboard - Your Account Health

A

b. AWS Systems Manager

AWS Systems Manager allows you to centralize operational data from multiple AWS services and automate tasks across your AWS resources. You can create logical groups of resources such as applications, different layers of an application stack, or production versus development environments.

With AWS Systems Manager, you can select a resource group and view its recent API activity, resource configuration changes, related notifications, operational alerts, software inventory, and patch compliance status. You can also take action on each resource group depending on your operational needs. AWS Systems Manager provides a central place to view and manage your AWS resources, so you can have complete visibility and control over your operations.

How AWS Systems Manager works: https://aws.amazon.com/systems-manager/

Incorrect options:

AWS Health Dashboard - Your Account Health - AWS Health Dashboard - Your Account Health provides alerts and remediation guidance when AWS is experiencing events that may impact you. It is not used to get operational insights of AWS resources.

AWS Trusted Advisor - AWS Trusted Advisor is an online resource to help you reduce cost, increase performance, and improve security by optimizing your AWS environment. AWS Trusted Advisor provides real-time guidance to help you provision your resources following AWS best practices. It is not used to get operational insights of AWS resources.

257
Q

A social media company wants to protect its web application from common web exploits such as SQL injection and cross-site scripting. Which of the following AWS services can be used to address this use-case?

a. AWS Web Application Firewall (AWS WAF)
b. Amazon GuardDuty
c. AWS CloudWatch
d. Amazon Inspector

A

a. AWS Web Application Firewall (AWS WAF)

AWS Web Application Firewall (AWS WAF) is a web application firewall that helps protect your web applications or APIs against common web exploits that may affect availability, compromise security, or consume excessive resources. AWS Web Application Firewall (AWS WAF) gives you control over how traffic reaches your applications by enabling you to create security rules that block common attack patterns such as SQL injection or cross-site scripting. You can also use rate-based rules to mitigate the Web layer DDoS attack.

How AWS Web Application Firewall (AWS WAF) Works: https://aws.amazon.com/waf/

An SQL injection attack works by exploiting any one of the known SQL vulnerabilities that allow the SQL server to run malicious code. For example, if a SQL server is vulnerable to an injection attack, it may be possible for an attacker to go to a website’s search box and type in code that would force the site’s SQL server to dump all of its stored usernames and passwords for the site.

Similar to an SQL injection attack, a cross-site scripting attack also involves injecting malicious code into a website, but in this case, the website itself is not being attacked. Instead, the malicious code the attacker has injected only runs in the user’s browser when they visit the attacked website, and it goes after the visitor directly, not the website.
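This is not an AWS API example, just a plain-Python sketch of the vulnerability class that AWS WAF SQL-injection rules screen for, using an in-memory SQLite database:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "' OR '1'='1"  # classic injection payload

# Vulnerable: concatenating user input into the SQL string turns the
# WHERE clause into a condition that is always true.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'"
).fetchall()
print("vulnerable query returned:", rows)     # leaks every row

# Safe: a parameterized query treats the input as a literal value.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print("parameterized query returned:", rows)  # returns nothing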

Incorrect options:

Amazon GuardDuty - Amazon GuardDuty is a threat detection service that monitors for malicious activity and unauthorized behavior to protect your AWS account. Amazon GuardDuty analyzes billions of events across your AWS accounts from AWS CloudTrail (AWS user and API activity in your accounts), Amazon VPC Flow Logs (network traffic data), and DNS Logs (name query patterns). Amazon GuardDuty cannot be used to protect against web exploits such as SQL injection and cross-site scripting.

How Amazon GuardDuty Works: https://aws.amazon.com/guardduty/

258
Q

Which of the following AWS services can be used to continuously monitor both malicious activities as well as unauthorized behavior to protect your AWS accounts and workloads?

a. Amazon Inspector
b. AWS Security Hub
c. Amazon Detective
d. Amazon GuardDuty

A

d. Amazon GuardDuty

Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect your AWS accounts, workloads, and data stored in Amazon Simple Storage Service (Amazon S3). With the cloud, the collection and aggregation of account and network activities are simplified, but it can be time-consuming for security teams to continuously analyze event log data for potential threats. With Amazon GuardDuty, you now have an intelligent and cost-effective option for continuous threat detection in AWS.

The service uses machine learning, anomaly detection, and integrated threat intelligence to identify and prioritize potential threats. GuardDuty analyzes tens of billions of events across multiple AWS data sources, such as AWS CloudTrail event logs, Amazon VPC Flow Logs, and DNS logs. With a few clicks in the AWS Management Console, GuardDuty can be enabled with no software or hardware to deploy or maintain.

Amazon GuardDuty makes it easy for you to enable continuous monitoring of your AWS accounts, workloads, and data stored in Amazon S3. It operates completely independently from your resources so there is no risk of performance or availability impacts to your workloads. It’s fully managed with integrated threat intelligence, anomaly detection, and machine learning. Amazon GuardDuty delivers detailed and actionable alerts that are easy to integrate with existing event management and workflow systems. There are no upfront costs and you pay only for the events analyzed, with no additional software to deploy or subscriptions to threat intelligence feeds required.

Incorrect options:

AWS Security Hub - AWS Security Hub gives you a comprehensive view of your security alerts and security posture across your AWS accounts. There is a range of powerful security tools at your disposal, from firewalls and endpoint protection to vulnerability and compliance scanners. With Security Hub, you have a single place that aggregates, organizes, and prioritizes your security alerts, or findings, from multiple AWS services, such as Amazon GuardDuty, Amazon Inspector, and Amazon Macie, as well as from AWS Partner solutions.

Amazon Detective - Amazon Detective simplifies the process of investigating security findings and identifying the root cause. Amazon Detective analyzes trillions of events from multiple data sources such as VPC Flow Logs, AWS CloudTrail logs, and Amazon GuardDuty findings and automatically creates a graph model that provides you with a unified, interactive view of your resources, users, and the interactions between them over time.

Amazon Inspector - Amazon Inspector is an automated security assessment service that helps you test the network accessibility of your Amazon EC2 instances and the security state of your applications running on the instances. Amazon Inspector allows you to automate security vulnerability assessments throughout your development and deployment pipeline or against static production systems. This allows you to make security testing a more regular occurrence as part of the development and IT operations.

259
Q

A company needs a storage solution for a project wherein the data is accessed less frequently but needs rapid access when required. Which S3 storage class is the MOST cost-effective for the given use-case?
a. Amazon S3 Intelligent-Tiering (S3 Intelligent-Tiering)
b. Amazon S3 Standard-Infrequent Access (S3 Standard-IA)
c. Amazon S3 Glacier (S3 Glacier)
d. Amazon S3 Standard

A

b. Amazon S3 Standard-Infrequent Access (S3 Standard-IA)

Amazon S3 Standard-Infrequent Access (S3 Standard-IA) is for data that is accessed less frequently, but requires rapid access when needed. S3 Standard-IA offers the high durability, high throughput, and low latency of S3 Standard, with a low per-GB storage price and a per-GB retrieval fee. This combination of low cost and high performance makes S3 Standard-IA ideal for long-term storage, backups, and as a data store for disaster recovery files.
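A minimal sketch of writing an object directly into S3 Standard-IA (the bucket, key, and payload are hypothetical); the same storage class can also be reached automatically through S3 Lifecycle rules:

import boto3

s3 = boto3.client("s3")

s3.put_object(
    Bucket="my-dr-backups",          # hypothetical bucket
    Key="backups/2024/db-dump.gz",
    Body=b"...backup bytes...",
    StorageClass="STANDARD_IA",      # infrequent access, rapid retrieval
)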

260
Q

According to the AWS Shared Responsibility Model, which of the following are the responsibility of both AWS and the customer? (Select two)

a. Operating system (OS) configuration
b. Disposal of disk drives
c. Data center security
d. Customer data
e. Configuration management

A

a. Operating system (OS) configuration
e. Configuration management

Shared Controls – Controls that apply to both the infrastructure layer and customer layers, but in completely separate contexts or perspectives. In a shared control, AWS provides the requirements for the infrastructure and the customer must provide their own control implementation within their use of AWS services.

Configuration Management – AWS maintains the configuration of its infrastructure devices, but a customer is responsible for configuring their own guest operating systems, databases, and applications.

The customers are responsible for “Security IN the cloud”. It includes customer data, as well as the guest operating system configuration.

Operating system configuration is a shared control, but be careful: the host operating system configuration is the responsibility of AWS, while the guest operating system configuration is the responsibility of the customer.

Exam Alert:

Please review the AWS Shared Responsibility Model in detail as you can expect multiple questions on the shared responsibility model in the exam: https://aws.amazon.com/compliance/shared-responsibility-model/

Incorrect options:

Customer data
Data center security
Disposal of disk drives

AWS is responsible for “Security OF the cloud”. It includes the infrastructure, which is composed of the hardware, software, networking, and facilities that run AWS Cloud services. It includes the disposal and the replacement of disk drives as well as data center security.

261
Q

A Silicon Valley-based healthcare startup stores anonymized patient health data on Amazon S3. The CTO further wants to ensure that any sensitive data on S3 is discovered and identified to prevent sensitive data leaks. As a Cloud Practitioner, which AWS service would you recommend to address this use-case?
a. AWS Glue
b. Amazon Polly
c. Amazon Macie
d. AWS Secrets Manager

A

c. Amazon Macie

Amazon Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and protect your sensitive data in AWS. Macie automatically provides an inventory of Amazon S3 buckets including a list of unencrypted buckets, publicly accessible buckets, and buckets shared with AWS accounts outside those you have defined in AWS Organizations. Then, Macie applies machine learning and pattern matching techniques to the buckets you select to identify and alert you to sensitive data, such as personally identifiable information (PII).

262
Q

Which of the following statements is INCORRECT regarding Amazon EBS Elastic Volumes?

a. Amazon EBS Elastic Volumes are bound to a specific Availability Zone (AZ)
b. Amazon EBS Elastic Volumes can be bound to several Availability Zones (AZs)
c. Amazon EBS Elastic Volumes can be mounted to one instance at a time
d. Amazon EBS Elastic Volumes can persist data after their termination

A

b. Amazon EBS Elastic Volumes can be bound to several Availability Zones (AZs)

An Amazon EBS Elastic Volume is a durable, block-level storage device that you can attach to your instances. After you attach a volume to an instance, you can use it as you would use a physical hard drive.

When using Amazon EBS Elastic Volumes, the volume and the instance must be in the same Availability Zone (AZ).

Incorrect options:

Amazon EBS Elastic Volumes can be mounted to one instance at a time - This statement is correct at the Certified Cloud Practitioner level: an Amazon EBS volume can be mounted to only one instance at a time. It is also possible for an Amazon EBS volume to remain unattached to any instance.

Amazon EBS Elastic Volumes are bound to a specific Availability Zone (AZ) - As mentioned, when using Amazon EBS Elastic Volumes, the volume and the instance must be in the same Availability Zone (AZ).

Amazon EBS Elastic Volumes can persist data after their termination - Unlike an Amazon EC2 instance store, an Amazon EBS Elastic Volume is off-instance storage that can persist independently from the life of an instance.
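A minimal sketch tying these statements together (the AZ, instance ID, and device name are assumptions): the volume is created in one AZ, attached to a single instance there, and outlives the instance unless deleted:

import boto3

ec2 = boto3.client("ec2")

# The volume must live in the same AZ as the instance it will attach to.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,            # GiB
    VolumeType="gp3",
)
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

# Attach to one instance at a time; the data persists even if that
# instance is later terminated (unless delete-on-termination is set).
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",  # hypothetical instance in us-east-1a
    Device="/dev/sdf",
)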

263
Q

Which AWS technology/service helps you to scale your resources to match supply with demand while still keeping your cloud solution cost-effective?

a. AWS CloudFormation
b. AWS Cost Explorer
c. AWS Auto Scaling
d. AWS OpsWorks

A

c. AWS Auto Scaling

AWS Auto Scaling monitors applications and automatically adjusts capacity to maintain steady, predictable performance at the lowest possible cost. Using AWS Auto Scaling, it’s easy to set up application scaling for multiple resources across multiple services in minutes. The service provides a simple, powerful user interface that lets you build scaling plans for resources including Amazon EC2 instances and Spot Fleets, Amazon ECS tasks, Amazon DynamoDB tables and indexes, and Amazon Aurora Replicas. AWS Auto Scaling makes scaling simple with recommendations that allow you to optimize performance, costs, or balance between them.

How AWS Auto Scaling Works: https://aws.amazon.com/autoscaling/

264
Q

Which AWS service can be used as an in-memory database with high-performance and low latency?

a. Amazon ElastiCache
b. Amazon Relational Database Service (Amazon RDS)
c. Amazon Athena
d. Amazon DynamoDB

A

a. Amazon ElastiCache

Amazon ElastiCache allows you to seamlessly set up, run, and scale popular open-source compatible in-memory data stores in the cloud. Build data-intensive apps or boost the performance of your existing databases by retrieving data from high-throughput, low-latency in-memory data stores. Amazon ElastiCache is a popular choice for real-time use cases like caching, session stores, gaming, geospatial services, real-time analytics, and queuing. ElastiCache cannot be used for online analytical processing.

How Amazon ElastiCache Works: https://aws.amazon.com/elasticache/

265
Q

Adding more CPU/RAM to an Amazon Elastic Compute Cloud (Amazon EC2) instance represents which of the following?

a. Horizontal scaling
b. Loose coupling
c. Managing increasing volumes of data
d. Vertical scaling

A

d. Vertical scaling

A vertically scalable system is constrained to running its processes on only one computer; in such systems, the only way to increase performance is to add more resources to that one computer in the form of faster (or more) CPUs, memory, or storage.
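A minimal sketch of vertical scaling on Amazon EC2 (the instance ID and target type are assumptions): the instance is stopped, resized to a larger instance type, and started again:

import boto3

ec2 = boto3.client("ec2")
instance_id = "i-0123456789abcdef0"  # hypothetical

ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

# Change to an instance type with more vCPUs and RAM.
ec2.modify_instance_attribute(
    InstanceId=instance_id,
    InstanceType={"Value": "m5.2xlarge"},
)

ec2.start_instances(InstanceIds=[instance_id])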

Incorrect options:

Horizontal scaling - A horizontally scalable system is one that can increase capacity by adding more computers to the system.

Managing increasing volumes of data - Traditional data storage and analytics tools can no longer provide the agility and flexibility required to deliver relevant business insights. That’s why many organizations are shifting to a data lake architecture. A data lake is an architectural approach that allows you to store massive amounts of data in a central location so that it’s readily available to be categorized, processed, analyzed, and consumed by diverse groups within your organization.

Loose coupling - As application complexity increases, a desirable attribute of an IT system is that it can be broken into smaller, loosely coupled components. This means that IT systems should be designed in a way that reduces interdependencies—a change or a failure in one component should not cascade to other components.

266
Q

Which of the following is best-suited for load-balancing HTTP and HTTPS traffic?

a. AWS Auto Scaling
b. Network Load Balancer
c. Application Load Balancer
d. System Load Balancer

A

c. Application Load Balancer

Elastic Load Balancing (ELB) automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, IP addresses, and Lambda functions. It can handle the varying load of your application traffic in a single Availability Zone or across multiple Availability Zones. Elastic Load Balancing (ELB) offers three types of load balancers that all feature the high availability, automatic scaling, and robust security necessary to make your applications fault-tolerant.

https://aws.amazon.com/elasticloadbalancing/

Application Load Balancer is used for load balancing of HTTP and HTTPS traffic and provides advanced request routing targeted at the delivery of modern application architectures, including microservices and containers.
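A minimal sketch of wiring up an Application Load Balancer for HTTP traffic (the subnet, VPC, and name values are assumptions):

import boto3

elbv2 = boto3.client("elbv2")

# Create an internet-facing ALB spanning two subnets in different AZs.
alb = elbv2.create_load_balancer(
    Name="web-alb",
    Type="application",
    Scheme="internet-facing",
    Subnets=["subnet-0abc", "subnet-0def"],  # hypothetical subnets
)
alb_arn = alb["LoadBalancers"][0]["LoadBalancerArn"]

# Targets such as EC2 instances are registered into a target group.
tg = elbv2.create_target_group(
    Name="web-targets",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123",                        # hypothetical VPC
    TargetType="instance",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# A listener forwards incoming HTTP requests to the target group.
elbv2.create_listener(
    LoadBalancerArn=alb_arn,
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)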

Incorrect options:

Network Load Balancer - Network Load Balancer is best suited for load balancing of Transmission Control Protocol (TCP), User Datagram Protocol (UDP) and Transport Layer Security (TLS) traffic where extreme performance is required.

AWS Auto Scaling - AWS Auto Scaling monitors your applications and automatically adjusts the capacity to maintain steady, predictable performance at the lowest possible cost. Using AWS Auto Scaling, it’s easy to set up application scaling for multiple resources across multiple services in minutes. The service provides a simple, powerful user interface that lets you build scaling plans for resources including Amazon EC2 instances and Spot Fleets, Amazon ECS tasks, Amazon DynamoDB tables and indexes, and Amazon Aurora Replicas. Auto Scaling cannot be used for load-balancing HTTP and HTTPS traffic.

System Load Balancer - This is a made-up option and has been added as a distractor.

267
Q

A cyber-security agency uses AWS Cloud and wants to carry out security assessments on its own AWS infrastructure without any prior approval from AWS. Which of the following describes/facilitates this practice?

a. Network Stress Testing
b. Amazon Inspector
c. AWS Secrets Manager
d. Penetration Testing

A

d. Penetration Testing

AWS customers can carry out security assessments or penetration tests against their own AWS infrastructure, without prior approval, for a list of common AWS services. Customers are not, however, permitted to conduct security assessments of AWS's own infrastructure or of the AWS services themselves.

Incorrect options:

Network Stress Testing - AWS considers “network stress test” to be when a test sends a large volume of legitimate or test traffic to a specific intended target application. The endpoint and infrastructure are expected to be able to handle this traffic.

AWS Secrets Manager - AWS Secrets Manager helps you protect secrets needed to access your applications, services, and IT resources. The service enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. Users and applications retrieve secrets with a call to AWS Secrets Manager APIs, eliminating the need to hardcode sensitive information in plain text.

268
Q

A company’s flagship application runs on a fleet of Amazon Elastic Compute Cloud (Amazon EC2) instances. As per the new policies, the system administrators are looking for the best way to provide secure shell access to Amazon Elastic Compute Cloud (Amazon EC2) instances without opening new ports or using public IP addresses.

Which tool/service will help you achieve this requirement?

a. Amazon Elastic Compute Cloud (Amazon EC2) Instance Connect
b. Amazon Route 53
c. Amazon Inspector
d. AWS Systems Manager Session Manager

A

d. AWS Systems Manager Session Manager

AWS Systems Manager Session Manager is a fully-managed service that provides you with an interactive browser-based shell and CLI experience. It helps provide secure and auditable instance management without the need to open inbound ports, maintain bastion hosts, and manage SSH keys. AWS Systems Manager Session Manager helps to enable compliance with corporate policies that require controlled access to instances, increase security and auditability of access to the instances while providing simplicity and cross-platform instance access to end-users.

Incorrect options:

Amazon Elastic Compute Cloud (Amazon EC2) Instance Connect - Amazon Elastic Compute Cloud (Amazon EC2) Instance Connect provides a simple and secure way to connect to your Linux instances using Secure Shell (SSH). With EC2 Instance Connect, you use AWS Identity and Access Management (IAM) policies and principals to control SSH access to your instances, removing the need to share and manage SSH keys. EC2 Instance Connect will need port 22 to be open for traffic. Therefore, not the correct option here.

269
Q

Which AWS service will you use to provision the same AWS infrastructure across multiple AWS accounts and regions?

a. AWS Systems Manager
b. AWS CodeDeploy
c. AWS OpsWorks
d. AWS CloudFormation

A

d. AWS CloudFormation

AWS CloudFormation allows you to use programming languages or a simple text file to model and provision, in an automated and secure manner, all the resources needed for your applications across all Regions and accounts. A stack is a collection of AWS resources that you can manage as a single unit. In other words, you can create, update, or delete a collection of resources by creating, updating, or deleting stacks.

AWS CloudFormation StackSets extends the functionality of stacks by enabling you to create, update, or delete stacks across multiple accounts and regions with a single operation. Using an administrator account, you define and manage an AWS CloudFormation template, and use the template as the basis for provisioning stacks into selected target accounts across specified regions.

How CloudFormation Works: https://aws.amazon.com/cloudformation/
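A minimal StackSets sketch (the template, account IDs, and Regions are assumptions, and it presumes the StackSets administration and execution roles are already configured):

import boto3

cfn = boto3.client("cloudformation")

# A deliberately tiny template; a real one would model your whole stack.
template = """
Resources:
  LogsBucket:
    Type: AWS::S3::Bucket
"""

cfn.create_stack_set(
    StackSetName="baseline-infra",
    TemplateBody=template,
)

# Provision the same stack into two accounts across two Regions.
cfn.create_stack_instances(
    StackSetName="baseline-infra",
    Accounts=["111111111111", "222222222222"],  # hypothetical account IDs
    Regions=["us-east-1", "eu-west-1"],
)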

Incorrect options:

AWS CodeDeploy - AWS CodeDeploy is a fully managed deployment service that automates software deployments to a variety of compute services such as Amazon EC2, AWS Fargate, AWS Lambda, and your on-premises servers. AWS CodeDeploy makes it easier for you to rapidly release new features, helps you avoid downtime during application deployment, and handles the complexity of updating your applications. You cannot use this service to provision AWS infrastructure.

AWS OpsWorks - AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed and managed across your Amazon EC2 instances or on-premises compute environments. You cannot use OpsWorks for running commands or managing patches on servers. You cannot use this service to provision AWS infrastructure.

AWS Systems Manager - AWS Systems Manager gives you visibility and control of your infrastructure on AWS. Systems Manager provides a unified user interface so you can view operational data from multiple AWS services and allows you to automate operational tasks across your AWS resources. With Systems Manager, you can group resources, like Amazon EC2 instances, Amazon S3 buckets, or Amazon RDS instances, by application, view operational data for monitoring and troubleshooting, and take action on your groups of resources. You cannot use this service to provision AWS infrastructure.

270
Q

Which of the following statements are CORRECT regarding security groups and network access control lists (network ACL)? (Select two)

a. A security group is stateless, that is, the return traffic must be explicitly allowed
b. A security group is stateful, that is, it automatically allows the return traffic
c. A network access control list (network ACL) contains a numbered list of rules and evaluates these rules in the increasing order while deciding whether to allow the traffic
d. A network access control list (network ACL) is stateful, that is, it automatically allows the return traffic
e. A security group contains a numbered list of rules and evaluates these rules in the increasing order while deciding whether to allow the traffic

A

b. A security group is stateful, that is, it automatically allows the return traffic
c. A network access control list (network ACL) contains a numbered list of rules and evaluates these rules in the increasing order while deciding whether to allow the traffic

A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. Security groups act at the instance level, not at the subnet level. Security groups are stateful — if you send a request from your instance, the response traffic for that request is allowed to flow in regardless of inbound security group rules. A security group evaluates all rules before deciding whether to allow traffic.

Security group overview: https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html

A network access control list (network ACL) is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets (i.e. it works at subnet level). A network access control list (network ACL) contains a numbered list of rules. A network access control list (network ACL) evaluates the rules in order, starting with the lowest numbered rule, to determine whether traffic is allowed in or out of any subnet associated with the network ACL. The highest number that you can use for a rule is 32766. AWS recommends that you start by creating rules in increments (for example, increments of 10 or 100) so that you can insert new rules where you need to later on.

Network access control list (network ACL) overview: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html
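A minimal sketch contrasting the two (the group and ACL IDs are assumptions): one inbound rule suffices for the stateful security group, while the stateless network ACL needs explicit, numbered rules in each direction:

import boto3

ec2 = boto3.client("ec2")

# Security group (stateful): allow inbound HTTPS; the response traffic
# is allowed back out automatically, with no matching outbound rule.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123",  # hypothetical
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# Network ACL (stateless): rules are evaluated in RuleNumber order, and
# return traffic must be allowed by a separate egress rule.
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123",  # hypothetical
    RuleNumber=100,
    Protocol="6",             # TCP
    RuleAction="allow",
    Egress=False,             # this is an inbound rule
    CidrBlock="0.0.0.0/0",
    PortRange={"From": 443, "To": 443},
)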

271
Q

Which of the following use-cases is NOT supported by Amazon Rekognition?

a. Detect text in a photo
b. Identify person in a photo
c. Quickly resize photos to create thumbnails
d. Label objects in a photo

A

c. Quickly resize photos to create thumbnails

You cannot use Amazon Rekognition to resize photos to create thumbnails.

With Amazon Rekognition, you can identify objects, people, text, scenes, and activities in images and videos, as well as detect any inappropriate content. Amazon Rekognition also provides highly accurate facial analysis and facial search capabilities that you can use to detect, analyze, and compare faces for a wide variety of user verification, people counting, and public safety use cases.

Amazon Rekognition Use-Cases: https://aws.amazon.com/rekognition/
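A minimal sketch of the supported use-cases (the bucket and object names are assumptions); note there is no resize or thumbnail operation in the Rekognition API:

import boto3

rekognition = boto3.client("rekognition")

image = {"S3Object": {"Bucket": "my-photos", "Name": "beach.jpg"}}  # hypothetical

# Label objects in the photo.
labels = rekognition.detect_labels(Image=image, MaxLabels=10, MinConfidence=80)
for label in labels["Labels"]:
    print(label["Name"], label["Confidence"])

# Detect text in the photo.
text = rekognition.detect_text(Image=image)
for detection in text["TextDetections"]:
    print(detection["DetectedText"])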

272
Q

A startup has just moved its IT infrastructure to AWS Cloud. The CTO would like to receive detailed reports that break down the startup’s AWS costs by the hour in an Amazon Simple Storage Service (Amazon S3) bucket. As a Cloud Practitioner, which AWS service would you recommend for this use-case?

a. AWS Cost & Usage Report (AWS CUR)
b. AWS Budgets
c. AWS Cost Explorer
d. AWS Pricing Calculator

A

a. AWS Cost & Usage Report (AWS CUR)

AWS Cost & Usage Report (AWS CUR) contains the most comprehensive set of cost and usage data available. You can use AWS Cost & Usage Report (AWS CUR) to publish your AWS billing reports to an Amazon Simple Storage Service (Amazon S3) bucket that you own. You can receive reports that break down your costs by the hour or month, by product or product resource, or by tags that you define yourself. AWS updates the report in your bucket once a day in comma-separated value (CSV) format.

AWS Cost & Usage Report (AWS CUR) Overview: https://docs.aws.amazon.com/cur/latest/userguide/what-is-cur.html

Incorrect Options:

AWS Cost Explorer - AWS Cost Explorer has an easy-to-use interface that lets you visualize, understand, and manage your AWS costs and usage over time. AWS Cost Explorer includes a default report that helps you visualize the costs and usage associated with your top five cost-accruing AWS services, and gives you a detailed breakdown of all services in the table view. The reports let you adjust the time range to view historical data going back up to twelve months to gain an understanding of your cost trends. AWS Cost Explorer cannot provide a detailed report of your AWS costs by the hour into an Amazon S3 bucket.

273
Q

Which of the following AWS services offer block-level storage? (Select two)

a. Instance Store
b. Amazon Elastic Container Service (Amazon ECS)
c. Amazon Elastic Block Store (Amazon EBS)
d. Amazon Simple Storage Service (Amazon S3)
e. Amazon Elastic File System (Amazon EFS)

A

a. Instance Store
c. Amazon Elastic Block Store (Amazon EBS)

Amazon Elastic Block Store (Amazon EBS) is an easy to use, high-performance block storage service designed for use with Amazon Elastic Compute Cloud (Amazon EC2) for both throughput and transaction-intensive workloads at any scale. A broad range of workloads, such as relational and non-relational databases, enterprise applications, containerized applications, big data analytics engines, file systems, and media workflows are widely deployed on Amazon EBS.

An instance store provides temporary block-level storage for your EC2 instance. This storage is located on disks that are physically attached to the host computer. Instance store is ideal for the temporary storage of information that changes frequently, such as buffers, caches, scratch data, and other temporary content, or for data that is replicated across a fleet of instances, such as a load-balanced pool of web servers. Instance storage is temporary; data is lost if the instance experiences a failure or is terminated. An Amazon EC2 instance store cannot be used for file sharing between instances.

274
Q

Multi-AZ deployment is an example of which of the following?

a. Performance Efficiency
b. Scale up
c. Scale out
d. High Availability

A

d. High Availability

A system that is available is capable of delivering the designed functionality at a given point in time. Highly available systems are those that can withstand some measure of degradation while still remaining available. On AWS Cloud, you can run instances for an application in a multi-AZ deployment to achieve High Availability.

275
Q

Which AWS service should be used when you want to run container applications, but want to avoid the operational overhead of scaling, patching, securing, and managing servers?

a. Amazon Elastic Container Service (Amazon ECS) - Fargate launch type
b. AWS Lambda
c. Amazon Elastic Compute Cloud (Amazon EC2)
d. Amazon Elastic Container Service (Amazon ECS) - EC2 launch type

A

a. Amazon Elastic Container Service (Amazon ECS) - Fargate launch type

AWS Fargate is a serverless compute engine for containers. It works with both Amazon Elastic Container Service (Amazon ECS) and Amazon Elastic Kubernetes Service (Amazon EKS). AWS Fargate makes it easy for you to focus on building your applications. AWS Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design. AWS Fargate allocates the right amount of compute, eliminating the need to choose instances and scale cluster capacity. You only pay for the resources required to run your containers, so there is no over-provisioning and paying for additional servers. AWS Fargate runs each task or pod in its own kernel, providing the tasks and pods their own isolated compute environment. This enables your application to have workload isolation and improved security by design.

How AWS Fargate Works: https://aws.amazon.com/fargate/

Incorrect options:

Amazon Elastic Container Service (Amazon ECS) - EC2 launch type - Amazon Elastic Container Service (Amazon ECS) is a highly scalable, fast, container management service that makes it easy to run, stop, and manage Docker containers and allows you to easily run applications on a managed cluster of Amazon EC2 instances. Unlike AWS Fargate, this is not a fully managed service and you need to manage the underlying servers yourself.

AWS Lambda - AWS Lambda is a compute service that lets you run code without provisioning or managing servers. AWS Lambda executes your code only when needed and scales automatically, from a few requests per day to thousands per second. Although Lambda functions can be packaged as container images, AWS Lambda is not a container orchestration service and is not designed for running long-lived container applications.

Amazon Elastic Compute Cloud (Amazon EC2) - Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud, per-second billing, and access to the underlying OS. It is designed to make web-scale cloud computing easier for developers. Maintenance of the server and its software has to be done by the customer, so this option is ruled out.

276
Q

Which tool/service will help you access AWS services using programming language-specific APIs?
a. AWS Management Console
b. AWS Software Developer Kit (SDK)
c. Integrated Development Environments (IDE)
d. AWS Command Line Interface (CLI)

A

b. AWS Software Developer Kit (SDK)

SDKs take the complexity out of coding by providing language-specific APIs for AWS services. For example, the AWS SDK for JavaScript simplifies the use of AWS Services by providing a set of libraries that are consistent and familiar for JavaScript developers. It provides support for API lifecycle considerations such as credential management, retries, data marshaling, serialization, and deserialization. AWS SDKs are offered in several programming languages to make it simple for developers working on different programming and scripting languages. So, AWS SDK can help with using AWS services from within an application using language-specific APIs.
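A minimal sketch with the AWS SDK for Python (boto3), which is one such language-specific SDK:

import boto3  # AWS SDK for Python

# The SDK exposes each service as a client with language-native calls;
# credential resolution, request signing, retries, and serialization
# are all handled for you.
s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])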

277
Q

Which of the following are correct statements regarding the AWS Global Infrastructure? (Select two)
a. Each AWS Region consists of a minimum of two Availability Zones (AZ)
b. Each Availability Zone (AZ) consists of two or more discrete data centers
c. Each AWS Region consists of a minimum of three Availability Zones (AZ)
d. Each Availability Zone (AZ) consists of one or more discrete data centers
e. Each AWS Region consists of two or more Edge Locations

A

c. Each AWS Region consists of a minimum of three Availability Zones (AZ)

d. Each Availability Zone (AZ) consists of one or more discrete data centers

AWS has the concept of a Region, which is a physical location around the world where AWS clusters its data centers. AWS calls each group of logical data centers an Availability Zone (AZ). Each AWS Region consists of a minimum of three, isolated, and physically separate AZs within a geographic area. Each AZ has independent power, cooling, and physical security and is connected via redundant, ultra-low-latency networks.

An Availability Zone (AZ) is one or more discrete data centers with redundant power, networking, and connectivity in an AWS Region. All AZs in an AWS Region are interconnected with high-bandwidth, low-latency networking, over fully redundant, dedicated metro fiber providing high-throughput, low-latency networking between AZs.

278
Q

Which of the following entities can be used to connect to an Amazon Elastic Compute Cloud (Amazon EC2) server from a Mac OS, Windows or Linux based computer via a browser-based client?

a. AWS Direct Connect
b. SSH
c. Amazon EC2 Instance Connect
d. Putty

A

c. Amazon EC2 Instance Connect

Amazon EC2 Instance Connect provides a simple and secure way to connect to your instances using Secure Shell (SSH). With Amazon EC2 Instance Connect, you use AWS Identity and Access Management (AWS IAM) policies and principals to control SSH access to your instances, removing the need to share and manage SSH keys. All connection requests using Amazon EC2 Instance Connect are logged to AWS CloudTrail so that you can audit connection requests.

You can use Amazon EC2 Instance Connect to connect to your Linux instances using a browser-based client, the Amazon EC2 Instance Connect CLI, or the SSH client of your choice. Amazon EC2 Instance Connect can be used to connect to an EC2 instance from a Mac OS, Windows or Linux based computer.

Incorrect options:

SSH - SSH can be used from a Mac OS, Windows or Linux based computer, but it’s not a browser-based client.

Putty - PuTTY can be used only from Windows-based computers.

AWS Direct Connect - AWS Direct Connect is a cloud service solution that makes it easy to establish a dedicated network connection from your premises to AWS. You can use AWS Direct Connect to establish a private virtual interface from your on-premises network directly to your Amazon VPC. Provisioning this private connection takes at least one month. AWS Direct Connect cannot be used to connect to an Amazon EC2 instance from a Mac OS, Windows or Linux based computer.

279
Q

AWS Trusted Advisor can provide alerts on which of the following common security misconfigurations? (Select two)

a. When you don’t enable data encryption on Amazon S3 Glacier
b. When you share IAM user credentials with others
c. When you don’t turn on user activity logging (AWS CloudTrail)
d. When you allow public access to Amazon S3 buckets
e. When you don’t tag objects in Amazon S3 buckets

A

c. When you don’t turn on user activity logging (AWS CloudTrail)
d. When you allow public access to Amazon S3 buckets

AWS Trusted Advisor is an online tool that provides real-time guidance to help provision your resources following AWS best practices. Whether establishing new workflows, developing applications, or as part of ongoing improvement, recommendations provided by Trusted Advisor regularly help keep your solutions provisioned optimally. AWS Trusted Advisor analyzes your AWS environment and provides best practice recommendations in five categories: Cost Optimization, Performance, Security, Fault Tolerance, Service Limits.

Trusted Advisor inspects your AWS environment and makes recommendations when opportunities may exist to save money, improve system performance, or close security gaps. It provides alerts on several of the most common security misconfigurations that can occur, including leaving certain ports open that make you vulnerable to hacking and unauthorized access, neglecting to create IAM accounts for your internal users, allowing public access to Amazon S3 buckets, not turning on user activity logging (AWS CloudTrail), or not using MFA on your root AWS Account.

How AWS Trusted Advisor Works: https://aws.amazon.com/premiumsupport/technology/trusted-advisor/

Incorrect options:

When you don’t tag objects in Amazon S3 buckets - Tagging objects (or any resource) in Amazon S3 is not mandatory and it’s not a security threat.

When you share IAM user credentials with others - It is the customer’s responsibility to adhere to the IAM security best practices and never share the IAM user credentials with others. AWS Trusted Advisor cannot send an alert for such use-cases.

When you don’t enable data encryption on Amazon S3 Glacier - By default, data is encrypted on S3 Glacier. So, this option has been added as a distractor.

280
Q

A company is looking at a service/tool to automate and minimize the time spent on keeping the server images up-to-date. These server images are used by Amazon Elastic Compute Cloud (Amazon EC2) instances as well as the on-premises systems.

Which AWS service will help achieve the company’s need?

a. AWS Systems Manager (Amazon Simple Systems Manager (SSM))
b. Amazon EC2 Amazon Machine Image (AMI)
c. AWS CloudFormation templates
d. Amazon EC2 Image Builder

A

d. Amazon EC2 Image Builder

Amazon EC2 Image Builder simplifies the building, testing, and deployment of Virtual Machine and container images for use on AWS or on-premises.

Keeping Virtual Machine (VM) and container images up-to-date can be time-consuming, resource-intensive, and error-prone. Currently, customers either manually update and snapshot VMs or have teams that build automation scripts to maintain images.

Amazon EC2 Image Builder significantly reduces the effort of keeping images up-to-date and secure by providing a simple graphical interface, built-in automation, and AWS-provided security settings. With Image Builder, there are no manual steps for updating an image nor do you have to build your own automation pipeline.

How to use Amazon EC2 Image Builder to automate server image creation: https://aws.amazon.com/image-builder/

Incorrect options:

Amazon EC2 Amazon Machine Image (AMI) - An Amazon Machine Image (AMI) provides the information required to launch an instance. You must specify an AMI when you launch an EC2 instance. An Amazon Machine Image (AMI) is the basic unit of deployment in Amazon EC2 and is one of the types of images you can create with Image Builder.

AWS CloudFormation templates - AWS CloudFormation simplifies provisioning and management on AWS. You can create templates for the service or application architectures you want and have AWS CloudFormation use those templates for quick and reliable provisioning of the services or applications.

AWS Systems Manager (Amazon Simple Systems Manager (SSM)) - AWS Systems Manager (formerly known as SSM) is an AWS service that you can use to view and control your infrastructure on AWS. Using the Systems Manager console, you can view operational data from multiple AWS services and automate operational tasks across your AWS resources.

Instances used to build images and run tests using Image Builder must have access to the Systems Manager service. All build activity is orchestrated by SSM Automation. The SSM Agent will be installed on the source image if it is not already present, and it will be removed before the image is created.

281
Q

A financial services company needs to retain its data for 10 years to meet compliance norms. Which Amazon Simple Storage Service (Amazon S3) storage class is the best fit for this use case considering that the data has to be stored at a minimal cost?

a. Amazon S3 Glacier Flexible Retrieval
b. Amazon S3 Intelligent-Tiering
c. Amazon S3 Glacier Deep Archive
d. Amazon S3 Standard-Infrequent Access (S3 Standard-IA)

A

c. Amazon S3 Glacier Deep Archive

Amazon S3 Glacier Deep Archive is the lowest-cost Amazon S3 storage class. It is designed for the long-term retention of data that is accessed rarely, such as data kept 7-10 years or longer for regulatory compliance, with a default retrieval time of within 12 hours. That makes it the most cost-effective fit for this use case.

282
Q

A multinational company has just moved its infrastructure to AWS Cloud and has employees traveling to different offices around the world. How should the company set up the AWS accounts?

a. Create an IAM user for each user in each AWS region
b. As employees travel, they can use other employees’ accounts
c. There is nothing to do, AWS Identity and Access Management (AWS IAM) is a global service
d. Create global permissions so users can access resources from all around the world

A

c. There is nothing to do, AWS Identity and Access Management (AWS IAM) is a global service

AWS Identity and Access Management (IAM) enables you to manage access to AWS services and resources securely. Using IAM, you can create and manage IAM users and IAM user groups, and use permissions to allow and deny their access to AWS resources.

AWS IAM is a global service. Users created within IAM can access their accounts all around the world, and deploy resources in every region.

Incorrect options:

Create an IAM user for each user in each AWS region - IAM users can access their accounts from different AWS regions.

Create global permissions so users can access resources from all around the world - AWS Identity and Access Management (AWS IAM) is a global service. You can use it globally without implementing anything.

As employees travel, they can use other employees’ accounts - You should never share your IAM user credentials.

283
Q

An e-commerce application sends out messages to a downstream application whenever an order is created. The downstream application processes the messages and updates its own systems. Currently, the two applications directly communicate with each other.

Which service will you use to decouple this architecture, without any communication loss between the two systems?

a. AWS Lambda
b. Amazon Kinesis Data Streams
c. Amazon Simple Notification Service (Amazon SNS)
d. Amazon Simple Queue Service (SQS)

A

d. Amazon Simple Queue Service (SQS)

Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. SQS eliminates the complexity and overhead associated with managing and operating message-oriented middleware and empowers developers to focus on differentiating work. Using SQS, you can send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available. Get started with SQS in minutes using the AWS console, Command Line Interface or SDK of your choice, and three simple commands.

Amazon SQS uses a pull mechanism, i.e. the messages in the queue remain available until a registered process pulls them for processing. This decouples the architecture, since the second application does not need to be available all the time to process messages coming from the first application.
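A minimal sketch of that decoupling (the queue name and message shape are assumptions): the producer returns as soon as the message is stored, and the consumer pulls whenever it is ready:

import json

import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="orders")["QueueUrl"]

# Producer (e-commerce app): publish the order event and move on.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody=json.dumps({"order_id": 42, "status": "created"}),
)

# Consumer (downstream app): pull when ready; a message stays in the
# queue until it is processed and explicitly deleted.
resp = sqs.receive_message(QueueUrl=queue_url, WaitTimeSeconds=10)
for msg in resp.get("Messages", []):
    order = json.loads(msg["Body"])  # update downstream systems here
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])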

Incorrect options:

Amazon Simple Notification Service (Amazon SNS) - Amazon Simple Notification Service (Amazon SNS) is a fully managed messaging service for both application-to-application (A2A) and application-to-person (A2P) communication. The A2A pub/sub functionality provides topics for high-throughput, push-based, many-to-many messaging between distributed systems, microservices, and event-driven serverless applications. Amazon SNS allows applications to send time-critical messages to multiple subscribers through a “push” mechanism, which implies that the receiving applications have to be up and running to receive the messages. There is scope for message loss with SNS, hence SQS is the right choice for this use case.

Amazon Kinesis Data Streams - Amazon Kinesis Data Streams enables you to build custom applications that process or analyze streaming data for specialized needs. You can continuously add various types of data such as clickstreams, application logs, and social media to an Amazon Kinesis data stream from hundreds of thousands of sources. Within seconds, the data will be available for your Amazon Kinesis Applications to read and process from the stream. Kinesis Data streams are overkill for this use-case since Kinesis Data Streams are meant for real-time processing of streaming big data.

AWS Lambda - AWS Lambda is a serverless compute service that lets you run code without provisioning or managing servers, creating workload-aware cluster scaling logic, maintaining event integrations, or managing runtimes. With Lambda, you can run code for virtually any type of application or backend service - all with zero administration. Just upload your code as a ZIP file or container image, and Lambda automatically and precisely allocates compute execution power and runs your code based on the incoming request or event, for any scale of traffic. Lambda functions cannot invoke themselves and must be triggered by an event or another service. Also, Lambda functions cannot store messages for later processing.

284
Q

Which of the following types are free under the Amazon Simple Storage Service (Amazon S3) pricing model? (Select two)

a. Data storage fee for objects stored in S3 Standard
b. Data storage fee for objects stored in S3 Glacier
c. Data transferred in from the internet
d. Data transferred out to an Amazon Elastic Compute Cloud (Amazon EC2) instance in any AWS Region
e. Data transferred out to an Amazon Elastic Compute Cloud (Amazon EC2) instance, when the instance is in the same AWS Region as the S3 bucket

A

c. Data transferred in from the internet
e. Data transferred out to an Amazon Elastic Compute Cloud (Amazon EC2) instance, when the instance is in the same AWS Region as the S3 bucket

There are four cost components to consider for S3 pricing – storage pricing; request and data retrieval pricing; data transfer and transfer acceleration pricing; and data management features pricing. Under “Data Transfer”, you pay for all bandwidth into and out of Amazon S3, except for the following: (1) Data transferred in from the internet, (2) Data transferred out to an Amazon Elastic Compute Cloud (Amazon EC2) instance, when the instance is in the same AWS Region as the S3 bucket, (3) Data transferred out to Amazon CloudFront (CloudFront).

Incorrect options:

Data transferred out to an Amazon Elastic Compute Cloud (Amazon EC2) instance in any AWS Region - This is incorrect. Data transfer charges apply when the instance is not in the same AWS Region as the S3 bucket.

Data storage fee for objects stored in S3 Standard - S3 Standard charges a storage fee for objects.

Data storage fee for objects stored in S3 Glacier - S3 Glacier charges a storage fee for objects.

285
Q

Which of the following can you use to run a bootstrap script while launching an Amazon Elastic Compute Cloud (Amazon EC2) instance?

a. Amazon EC2 instance configuration data
b. Amazon EC2 instance AMI data
c. Amazon EC2 instance user data
d. Amazon EC2 instance metadata

A

c. Amazon EC2 instance user data

Amazon EC2 instance user data is the data that you specified in the form of a bootstrap script or configuration parameters while launching your instance.

Amazon EC2 instance metadata and user data: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html
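As a rough illustration (the AMI ID and script below are hypothetical), you can pass a bootstrap script as user data when launching an instance with boto3; Amazon EC2 runs it on first boot:

import boto3

ec2 = boto3.client("ec2")

# A shell script passed as user data runs on the instance's first boot.
bootstrap_script = """#!/bin/bash
yum update -y
yum install -y httpd
systemctl start httpd
"""

ec2.run_instances(
    ImageId="ami-0abcdef1234567890",  # hypothetical AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    UserData=bootstrap_script,  # boto3 base64-encodes this automatically
)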

Incorrect options:

Amazon EC2 instance metadata - Amazon EC2 instance metadata is data about your instance that you can use to manage the instance. You can get instance items such as ami-id, public-hostname, local-hostname, hostname, public-ipv4, local-ipv4, public-keys, instance-id by using instance metadata. You cannot use Amazon EC2 instance metadata to run a bootstrap script while launching an Amazon EC2 instance. So this option is incorrect.

286
Q

An e-commerce company wants to store data from a recommendation engine in a database. As a Cloud Practitioner, which AWS service would you recommend to provide this functionality with the LEAST operational overhead for any scale?

a. Amazon Neptune
b. Amazon Simple Storage Service (Amazon S3)
c. Amazon Relational Database Service (Amazon RDS)
d. Amazon DynamoDB

A

d. Amazon DynamoDB

Amazon DynamoDB is a key-value and document database that delivers sub-millisecond performance at any scale. Amazon DynamoDB enables customers to offload the administrative burdens of operating and scaling distributed databases to AWS so that they don’t have to worry about hardware provisioning, setup and configuration, throughput capacity planning, replication, software patching, or cluster scaling.

You can use Amazon DynamoDB to store recommendation results with the LEAST operational overhead for any scale.

https://catalog.us-east-1.prod.workshops.aws/workshops/ed82a5d4-6630-41f0-a6a1-9345898fa6ec/en-US/batch/dynamodb
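A minimal sketch of how the engine might write and read recommendation results with boto3; the table name and attributes are hypothetical, and the table is assumed to already exist with user_id as its partition key:

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Recommendations")  # hypothetical table

# Store the recommendation results for a user.
table.put_item(
    Item={
        "user_id": "u-1001",
        "recommended_products": ["p-17", "p-42", "p-99"],
    }
)

# Read them back with a fast key lookup.
item = table.get_item(Key={"user_id": "u-1001"}).get("Item")
print(item)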

287
Q

A medical device company is looking for a durable and cost-effective way of storing their historic data. Due to compliance requirements, the data must be stored for 10 years. Which AWS Storage solution will you suggest?

a. Amazon Elastic File System (Amazon EFS)
b. Amazon S3 Glacier Deep Archive
c. Amazon S3 Glacier Flexible Retrieval
d. AWS Storage Gateway

A

b. Amazon S3 Glacier Deep Archive

Amazon S3 Glacier Deep Archive is Amazon S3’s lowest-cost storage class. It supports long-term retention and digital preservation for data that may be accessed only once or twice a year, and it is designed for customers in highly-regulated industries, such as healthcare and financial services, that retain data sets for 7-10 years or longer to meet regulatory compliance requirements. This makes it a durable and cost-effective fit for the 10-year retention requirement.

288
Q

A blogging company is looking for an easy-to-use solution to host WordPress blogs. The company needs a cost-effective, readily available solution without the need to manage the configurations for servers or the databases.

Which AWS service will help you achieve this functionality?

a. Amazon Lightsail
b. AWS Fargate
c. Amazon Elastic Compute Cloud (EC2) with Amazon S3 for storage
d. Host the application directly on Amazon S3

A

a. Amazon Lightsail

Amazon Lightsail is the easiest way to get started with AWS for developers, small businesses, students, and other users who need a solution to build and host their applications on the cloud. Lightsail provides developers with compute, storage, and networking capacity and capabilities to deploy and manage websites and web applications in the cloud. Lightsail includes everything you need to launch your project quickly – virtual machines, containers, databases, CDN, load balancers, DNS management, etc. – for a low, predictable monthly price.

You can get preconfigured virtual private server (VPS) plans that include everything to easily deploy and manage your application. Amazon Lightsail is best suited to projects that require a few virtual private servers and users who prefer a simple management interface. Common use cases for Lightsail include running websites, web applications, blogs, e-commerce sites, simple software, and more.

Also referred to as a bundle, a Lightsail plan includes a virtual server with a fixed amount of memory (RAM) and compute (vCPUs), SSD-based storage (disks), and a free data transfer allowance. Amazon Lightsail plans also offer static IP addresses (5 per account) and DNS management (3 domain zones per account). Lightsail plans are charged on an hourly, on-demand basis, so you only pay for a plan when you’re using it.

Amazon Lightsail offers a number of preconfigured, one-click-to-launch operating systems, development stacks, and web applications, including Linux and Windows OS, WordPress, LAMP, CentOS, and more.

Incorrect options:

AWS Fargate - AWS Fargate is a serverless compute engine for containers that works with both Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS). Fargate makes it easy for you to focus on building your applications. Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design. Fargate is meant for container applications that you wish to host without having to manage the servers such as EC2 instances.

Amazon Elastic Compute Cloud (EC2) with Amazon S3 for storage - Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers. Amazon EC2’s simple web service interface allows you to obtain and configure capacity with minimal friction. It provides you with complete control of your computing resources and lets you run on Amazon’s proven computing environment.

Amazon EC2 instances need to be managed by the customer, and hence this option is the wrong choice for the given scenario.

Host the application directly on Amazon S3 - Amazon S3 does not support compute capacity to generate dynamic content. Only static web applications can be hosted on Amazon S3.

289
Q

A multi-national company has its business-critical data stored on a fleet of Amazon Elastic Compute Cloud (Amazon EC2) instances, in various countries, configured according to region-specific compliance rules. To demonstrate compliance, the company needs to submit historical configurations on a regular basis. Which AWS service is best suited for this requirement?

a. Amazon Macie
b. AWS Config
c. Amazon GuardDuty
d. AWS CloudTrail

A

b. AWS Config

AWS Config provides a detailed view of the configuration of AWS resources in your AWS account. This includes how the resources are related to one another and how they were configured in the past so that you can see how the configurations and relationships change over time. AWS Config is designed to help you oversee your application resources in the following scenarios: Resource Administration, Auditing and Compliance, Managing and Troubleshooting Configuration Changes, Security Analysis.

How AWS Config Works: https://aws.amazon.com/config/
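As a sketch of how historical configurations can be retrieved programmatically with boto3 (the instance ID is a hypothetical placeholder):

import boto3

config = boto3.client("config")

# Retrieve past configuration states of an EC2 instance for an audit report.
history = config.get_resource_config_history(
    resourceType="AWS::EC2::Instance",
    resourceId="i-0abcdef1234567890",  # hypothetical instance ID
)
for item in history["configurationItems"]:
    print(item["configurationItemCaptureTime"], item["configurationItemStatus"])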

290
Q

What are the different gateway types supported by AWS Storage Gateway service?

a. Tape Gateway, File Gateway and Volume Gateway
b. Tape Gateway, Object Gateway and Volume Gateway
c. Object Gateway, File Gateway and Block Gateway
d. Tape Gateway, File Gateway and Block Gateway

A

a. Tape Gateway, File Gateway and Volume Gateway

AWS Storage Gateway is a hybrid cloud storage service that connects your existing on-premises environments with the AWS Cloud. Customers use Storage Gateway to simplify storage management and reduce costs for key hybrid cloud storage use cases. These include moving tape backups to the cloud, reducing on-premises storage with cloud-backed file shares, providing low latency access to data in AWS for on-premises applications, as well as various migration, archiving, processing, and disaster recovery use cases.

AWS Storage Gateway service provides three different types of gateways – Tape Gateway, File Gateway, and Volume Gateway – that seamlessly connect on-premises applications to cloud storage, caching data locally for low-latency access.

Gateway Storage Types Overview: https://aws.amazon.com/storagegateway/features/

Block Gateway and Object Gateway are made-up terms, so the three options that include them are incorrect.

291
Q

Which service/tool will you use to create and provide trusted users with temporary security credentials that can control access to your AWS resources?

a. AWS IAM Identity Center
b. AWS Web Application Firewall (AWS WAF)
c. AWS Security Token Service (AWS STS)
d. Amazon Cognito

A

c. AWS Security Token Service (AWS STS)

AWS Security Token Service (AWS STS) is a web service that enables you to request temporary, limited-privilege credentials for AWS Identity and Access Management (AWS IAM) users or for users that you authenticate (federated users).

You can use the AWS Security Token Service (AWS STS) to create and provide trusted users with temporary security credentials that can control access to your AWS resources. Temporary security credentials work almost identically to the long-term access key credentials that your IAM users can use, with the following differences:

  1. Temporary security credentials are short-term, as the name implies. They can be configured to last for anywhere from a few minutes to several hours. After the credentials expire, AWS no longer recognizes them or allows any kind of access from API requests made with them.
  2. Temporary security credentials are not stored with the user but are generated dynamically and provided to the user when requested. When (or even before) the temporary security credentials expire, the user can request new credentials, as long as the user requesting them still has permission to do so.

Temporary security credentials are generated by AWS Security Token Service (AWS STS). By default, AWS STS is a global service with a single endpoint at https://sts.amazonaws.com. However, you can also choose to make AWS STS API calls to endpoints in any other supported Region.
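A minimal sketch of requesting temporary credentials with boto3 (the role ARN is a hypothetical placeholder):

import boto3

sts = boto3.client("sts")

# Request temporary credentials by assuming an IAM role.
response = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/ReadOnlyAuditor",  # hypothetical role
    RoleSessionName="audit-session",
    DurationSeconds=3600,  # credentials expire after one hour
)
creds = response["Credentials"]

# Use the short-lived credentials instead of long-term access keys.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)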

Incorrect options:

Amazon Cognito - Amazon Cognito is a higher level of abstraction than AWS Security Token Service (AWS STS). Amazon Cognito supports the same identity providers as AWS STS, and also supports unauthenticated (guest) access, and lets you migrate user data when a user signs in. Amazon Cognito also provides API operations for synchronizing user data so that it is preserved as users move between devices. Amazon Cognito helps create the user database, which is not possible with STS.

AWS IAM Identity Center - AWS IAM Identity Center is the successor to AWS Single Sign-On (AWS SSO). It is built on top of AWS Identity and Access Management (AWS IAM) to simplify access management to multiple AWS accounts, AWS applications, and other SAML-enabled cloud applications. In AWS IAM Identity Center, you create or connect your workforce users for use across AWS. You can choose to manage access just to your AWS accounts, just to your cloud applications, or to both.

AWS Web Application Firewall (AWS WAF) - AWS Web Application Firewall (AWS WAF) is a web application firewall that helps protect your web applications or APIs against common web exploits and bots that may affect availability, compromise security, or consume excessive resources. AWS WAF gives you control over how traffic reaches your applications by enabling you to create security rules that control bot traffic and block common attack patterns, such as SQL injection or cross-site scripting.

292
Q

Which AWS service protects your AWS account by monitoring malicious activity and detecting threats?

a. Amazon CloudWatch
b. AWS CloudTrail
c. AWS Trusted Advisor
d. Amazon GuardDuty

A

d. Amazon GuardDuty

293
Q

A company would like to define a set of rules to manage objects cost-effectively between Amazon Simple Storage Service (Amazon S3) storage classes. As a Cloud Practitioner, which Amazon S3 feature would you use?

a. Amazon Simple Storage Service (Amazon S3) Lifecycle configuration
b. Amazon Simple Storage Service (Amazon S3) Bucket policies
c. Amazon S3 Transfer Acceleration (Amazon S3TA)
d. S3 Cross-Region Replication (S3 CRR)

A

a. Amazon Simple Storage Service (Amazon S3) Lifecycle configuration

To manage your objects so that they are stored cost-effectively throughout their lifecycle, configure their Amazon S3 Lifecycle. An S3 Lifecycle configuration is a set of rules that define actions that Amazon S3 applies to a group of objects. There are two types of actions: transition actions, which define when objects transition to another storage class, and expiration actions, which define when objects expire (Amazon S3 deletes expired objects on your behalf).

In this particular use case, you would use a transition action.
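A minimal sketch of such a lifecycle configuration with boto3; the bucket name, prefix, and day counts are hypothetical:

import boto3

s3 = boto3.client("s3")

# Transition objects to cheaper storage classes as they age, then expire them.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-bucket",  # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-old-logs",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
                "Expiration": {"Days": 365},
            }
        ]
    },
)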

Incorrect options:

Amazon S3 Transfer Acceleration (Amazon S3TA) - Amazon S3 Transfer Acceleration (Amazon S3TA) enables fast, easy, and secure transfers of files over long distances between your client and an Amazon S3 bucket. It is not used to move objects between storage classes.

Amazon Simple Storage Service (Amazon S3) Bucket policies - An S3 bucket policy is a resource-based AWS Identity and Access Management (IAM) policy. You add a bucket policy to a bucket to grant other AWS accounts or IAM users access permissions for the bucket and the objects in it. Object permissions apply only to the objects that the bucket owner creates. It is not used to move objects between storage classes.

S3 Cross-Region Replication (S3 CRR) - S3 Cross-Region Replication (S3 CRR) enables automatic, asynchronous copying of objects across Amazon S3 buckets. Cross-Region replication (CRR) is used to copy objects across Amazon S3 buckets in different AWS Regions. It is not used to move objects between storage classes.

294
Q

Which AWS service publishes up-to-the-minute information on the general status and availability of all AWS services in all the Regions of AWS Cloud?

a. Amazon CloudWatch
b. AWS CloudFormation
c. AWS Health Dashboard - service health
d. AWS Health Dashboard - Your account health

A

c. AWS Health Dashboard - service health

AWS Health Dashboard - service health is the single place to learn about the availability and operations of AWS services. You can view the overall status of AWS services, and you can sign in to view personalized communications about your particular AWS account or organization.

You can check this page to get current status information: https://health.aws.amazon.com/health/status

295
Q

Amazon Elastic Compute Cloud (Amazon EC2) Spot instances are a best-fit for which of the following scenarios?

a. To run batch processes for critical workloads
b. To run scheduled jobs (jobs that run at the same time every day)
c. To install cost-effective Amazon RDS database
d. To run any containerized workload with Amazon Elastic Container Service (Amazon ECS) that can be interrupted

A

d. To run any containerized workload with Amazon Elastic Container Service (Amazon ECS) that can be interrupted

Amazon EC2 Spot Instances let you take advantage of unused EC2 capacity in the AWS cloud. Spot Instances are available at up to a 90% discount compared to On-Demand prices.

Containers are often stateless and fault-tolerant, which makes them a great fit for Spot Instances. Spot Instances can be used with Elastic Container Service (ECS) or Elastic Kubernetes Service (EKS) to run any containerized workload, from distributed parallel test systems to applications that map millions of miles a day. Spot Instances provide the flexibility of ad-hoc provisioning for multiple instance types in different Availability Zones, with an option to hibernate, stop or terminate instances when EC2 needs the capacity back and Spot Instances are reclaimed.

via - https://aws.amazon.com/ec2/spot/containers-for-less/
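As an illustration (the AMI ID is hypothetical), an interruption-tolerant workload can request Spot capacity directly when launching an instance with boto3:

import boto3

ec2 = boto3.client("ec2")

# Launch an interruption-tolerant worker on Spot capacity.
ec2.run_instances(
    ImageId="ami-0abcdef1234567890",  # hypothetical AMI ID
    InstanceType="c5.large",
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {"SpotInstanceType": "one-time"},
    },
)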

Incorrect options:

To install cost-effective Amazon RDS database - Spot instance capacity allocated to you can be reclaimed by AWS at any time (with a two-minute interruption notice) whenever AWS needs the capacity back. Hence, Spot instances can only be used as interruptible, additional compute capacity and not for hosting or installing any software or database.

To run batch processes for critical workloads - Business-critical workloads cannot be run on Spot instances.

To run scheduled jobs (jobs that run at the same time every day) - There is no guarantee that a Spot instance will be available at a specific time every day. For a scheduled requirement, Scheduled Reserved instances should be used.

296
Q

A financial consulting company is looking for automated reference deployments that will speed up the process of deploying its financial solutions on AWS Cloud. The reference deployment should be able to deploy most of the well-known functions of financial services and leave space for customizations, if necessary.

Which AWS service will help achieve this requirement?

a. AWS Elastic Beanstalk
b. AWS Partner Solutions (formerly Quick Starts)
c. Amazon QuickSight
d. AWS CloudFormation

A

b. AWS Partner Solutions (formerly Quick Starts)

AWS Partner Solutions are automated reference deployments for key workloads on the AWS Cloud. Each Partner Solution launches, configures, and runs the AWS compute, network, storage, and other services required to deploy a specific workload on AWS, using AWS best practices for security and availability.

Partner Solutions are accelerators that condense hundreds of manual procedures into just a few steps. They are customizable and designed for production.

Partner Solutions include:
1. A reference architecture for the deployment
2. AWS CloudFormation templates (JSON or YAML scripts) that automate and configure the deployment
3. A deployment guide, which explains the architecture and implementation in detail, and provides instructions for customizing the deployment

Partner Solutions also include integrations that extend the cloud-based contact center functionality provided by Amazon Connect with key services and solutions from APN Partners—for customer relationship management (CRM), workforce optimization (WFO), analytics, unified communications (UC), and other use cases.

Incorrect options:

AWS CloudFormation - AWS CloudFormation gives you an easy way to model a collection of related AWS and third-party resources, provision them quickly and consistently, and manage them throughout their lifecycles, by treating infrastructure as code. A CloudFormation template describes your desired resources and their dependencies so you can launch and configure them together as a stack.

AWS Elastic Beanstalk - AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS. You can simply upload your code and Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, auto-scaling to application health monitoring. At the same time, you retain full control over the AWS resources powering your application and can access the underlying resources at any time.

Amazon QuickSight - Amazon QuickSight is a scalable, serverless, embeddable, machine learning-powered business intelligence (BI) service built for the cloud. QuickSight lets you easily create and publish interactive BI dashboards that include Machine Learning-powered insights. With QuickSight, you can quickly embed interactive dashboards into your applications, websites, and portals.

297
Q

An AWS user is trying to launch an Amazon Elastic Compute Cloud (Amazon EC2) instance in a given region. What is the region-specific constraint that the Amazon Machine Image (AMI) must meet so that it can be used for this Amazon Elastic Compute Cloud (Amazon EC2) instance?

a. You must use an Amazon Machine Image (AMI) from the same region as that of the Amazon EC2 instance. The region of the Amazon Machine Image (AMI) has no bearing on the performance of the Amazon EC2 instance
b. You should use an Amazon Machine Image (AMI) from the same region, as it improves the performance of the Amazon EC2 instance
c. You can use an Amazon Machine Image (AMI) from a different region, but it degrades the performance of the Amazon EC2 instance
d. An Amazon Machine Image (AMI) is a global entity, so the region is not applicable

A

a. You must use an Amazon Machine Image (AMI) from the same region as that of the Amazon EC2 instance. The region of the Amazon Machine Image (AMI) has no bearing on the performance of the Amazon EC2 instance

An Amazon Machine Image (AMI) provides the information required to launch an instance. You must specify an Amazon Machine Image (AMI) when you launch an instance. You can launch multiple instances from a single AMI when you need multiple instances with the same configuration.

The Amazon Machine Image (AMI) must be in the same region as that of the Amazon EC2 instance to be launched. If the Amazon Machine Image (AMI) exists in a different region, you can copy that Amazon Machine Image (AMI) to the region where you want to launch the EC2 instance. The region of Amazon Machine Image (AMI) has no bearing on the performance of the Amazon EC2 instance.

Amazon Machine Images (AMI) Overview: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html
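A minimal sketch of copying an AMI into the Region where you want to launch; the IDs and Regions below are hypothetical, and copy_image is called in the destination Region:

import boto3

# Create the client in the destination Region and pull the AMI from the source.
ec2 = boto3.client("ec2", region_name="us-east-1")

copy = ec2.copy_image(
    Name="web-server-ami-copy",
    SourceImageId="ami-0abcdef1234567890",  # hypothetical AMI in eu-west-1
    SourceRegion="eu-west-1",
)
print(copy["ImageId"])  # new AMI ID usable in us-east-1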

298
Q

Which AWS compute service provides the EASIEST way to access resizable compute capacity in the cloud with support for per-second billing and access to the underlying OS?

a. Amazon Lightsail
b. Amazon Elastic Compute Cloud (Amazon EC2)
c. AWS Lambda
d. Amazon Elastic Container Service (Amazon ECS)

A

b. Amazon Elastic Compute Cloud (Amazon EC2)

Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud with support for per-second billing. It is the easiest way to provision servers on AWS Cloud and access the underlying OS. Amazon EC2 reduces the time required to obtain and boot new server instances to minutes, allowing you to quickly scale capacity, both up and down, as your computing requirements change.

Amazon Elastic Compute Cloud (Amazon EC2) Overview: https://aws.amazon.com/ec2/

Incorrect options:

Amazon Lightsail - Amazon Lightsail is an easy-to-use cloud platform that offers you everything needed to build an application or website, plus a cost-effective, monthly plan. Amazon Lightsail offers several preconfigured, one-click-to-launch operating systems, development stacks, and web applications, including Linux, Windows OS, and WordPress. Amazon Lightsail comes with monthly payment plans and does not support per-second billing, so this option is ruled out.

299
Q

A company uses reserved EC2 instances across multiple units with each unit having its own AWS account. However, some of the units under-utilize their reserved instances while other units need more reserved instances. As a Cloud Practitioner, which of the following would you recommend as the most cost-optimal solution?
a. Use AWS Trusted Advisor to manage AWS accounts of all units and then share the reserved EC2 instances amongst all units
b. Use AWS Organizations to manage AWS accounts of all units and then share the reserved EC2 instances amongst all units
c. Use AWS Cost Explorer to manage AWS accounts of all units and then share the reserved EC2 instances amongst all units
d. Use AWS Systems Manager to manage AWS accounts of all units and then share the reserved EC2 instances amongst all units

A

b. Use AWS Organizations to manage AWS accounts of all units and then share the reserved EC2 instances amongst all units

AWS Organizations helps you to centrally manage billing; control access, compliance, and security; and share resources across your AWS accounts. Using AWS Organizations, you can automate account creation, create groups of accounts to reflect your business needs, and apply policies for these groups for governance. You can also simplify billing by setting up a single payment method for all of your AWS accounts. AWS Organizations is available to all AWS customers at no additional charge.

Key Features of AWS Organizations: https://aws.amazon.com/organizations/

300
Q

Which AWS service would you choose for a data processing project that needs a schemaless database?

a. Amazon Redshift
b. Amazon DynamoDB
c. Amazon Aurora
d. Amazon Relational Database Service (Amazon RDS)

A

b. Amazon DynamoDB

Amazon DynamoDB is a key-value and document database that delivers single-digit millisecond performance at any scale. It’s a fully managed, multi-Region, multi-master, durable database with built-in security, backup and restore, and in-memory caching for internet-scale applications. DynamoDB is schemaless. DynamoDB can manage structured or semistructured data, including JSON documents.

Incorrect options:

Amazon Redshift - Amazon Redshift is a fully managed, petabyte-scale, cloud-based data warehouse product designed for large-scale data set storage and analysis. Amazon Redshift requires a well-defined schema.

Amazon Aurora - Amazon Aurora is an AWS service for relational databases. Aurora requires a well-defined schema.

Amazon Relational Database Service (Amazon RDS) - Amazon Relational Database Service (Amazon RDS) is an AWS service for relational databases. RDS requires a well-defined schema.

301
Q

An intern at an IT company provisioned a Linux-based On-Demand EC2 instance with per-second billing but terminated it within 30 seconds as he wanted to provision another instance type. What is the duration for which the instance would be charged?
a. 600 seconds
b. 30 seconds
c. 300 seconds
d. 60 seconds

A

d. 60 seconds

There is a one-minute minimum charge for Linux-based EC2 instances with per-second billing, so the instance is charged for 60 seconds even though it was terminated after 30 seconds.

302
Q

According to the AWS Shared Responsibility Model, which of the following are responsibilities of AWS? (Select two)
a. Replacing faulty hardware of Amazon EC2 instances
b. Creating IAM role for accessing Amazon EC2 instances
c. Enabling Multi Factor Authentication on AWS accounts in your organization
d. Creating S3 bucket policies for appropriate user access
e. Operating the infrastructure layer, the operating system and the platform for the Amazon S3 service

A

a. Replacing faulty hardware of Amazon EC2 instances

e. Operating the infrastructure layer, the operating system and the platform for the Amazon S3 service


303
Q

Which of the following are examples of Horizontal Scalability (aka Elasticity)? (Select two)

a. Read Replicas in Amazon Relational Database Service (Amazon RDS)
b. Modify a Database instance to higher CPU and RAM
c. Elastic Load Balancing (ELB)
d. Add a bigger CPU to a computer
e. Modify an EC2 instance type from t2.nano to u-12tb1.metal

A

a. Read Replicas in Amazon Relational Database Service (Amazon RDS)
c. Elastic Load Balancing (ELB)

A “horizontally scalable” system is one that can increase capacity by adding more computers to the system. This is in contrast to a “vertically scalable” system, which is constrained to running its processes on only one computer; in such systems, the only way to increase performance is to add more resources into one computer in the form of faster (or more) CPUs, memory or storage. Horizontally scalable systems are oftentimes able to outperform vertically scalable systems by enabling parallel execution of workloads and distributing those across many different computers.

Elastic Load Balancing (ELB)

Elastic Load Balancing (ELB) automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, IP addresses, and Lambda functions. It can handle the varying load of your application traffic in a single Availability Zone (AZ) or across multiple Availability Zones (AZ). This falls under Horizontal Scaling.

Read Replicas in Amazon Relational Database Service (Amazon RDS)

Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. Read replicas allow you to create read-only copies that are synchronized with your master database. You can also place your read replica in a different AWS Region closer to your users for better performance. Read replicas are an example of horizontal scaling of resources.
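A minimal sketch of adding a read replica (an example of horizontal scaling) with boto3; the instance identifiers are hypothetical:

import boto3

rds = boto3.client("rds")

# Add a read-only copy of the source database to scale reads horizontally.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="orders-db-replica-1",   # hypothetical replica name
    SourceDBInstanceIdentifier="orders-db",       # hypothetical source instance
)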

304
Q

Which AWS service can be used to set up billing alarms to monitor estimated charges on your AWS account?

a. AWS Cost Explorer
b. Amazon CloudWatch
c. AWS Organizations
d. AWS CloudTrail

A

b. Amazon CloudWatch

Amazon CloudWatch can be used to create an alarm to monitor your estimated charges. When you enable the monitoring of estimated charges for your AWS account, the estimated charges are calculated and sent several times daily to CloudWatch as metric data. You can choose to receive alerts by email when charges have exceeded a certain threshold. These alerts are triggered by Amazon CloudWatch and messages are sent using Amazon Simple Notification Service (Amazon SNS). Billing metric data is stored in the US East (N. Virginia) Region and reflects worldwide charges.

The alarm triggers when your account billing exceeds the threshold you specify. It triggers only when actual billing exceeds the threshold. It doesn’t use projections based on your usage so far in the month.

Amazon CloudWatch Billing Alarms Overview: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/monitor_estimated_charges_with_cloudwatch.html
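A minimal sketch of creating such a billing alarm with boto3; the SNS topic ARN and threshold are hypothetical, and the client targets us-east-1 because billing metric data is stored there:

import boto3

# Billing metric data lives in us-east-1.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="monthly-charges-over-100-usd",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,            # evaluate every 6 hours
    EvaluationPeriods=1,
    Threshold=100.0,         # hypothetical threshold in USD
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],  # hypothetical topic
)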

Exam Alert:

It is useful to note the difference between Amazon CloudWatch Billing vs AWS Budgets:

Amazon CloudWatch Billing Alarms: Sends an alarm when the actual cost exceeds a certain threshold.

AWS Budgets: Sends an alarm when the actual cost exceeds the budgeted amount or even when the cost forecast exceeds the budgeted amount.

Incorrect options:

AWS CloudTrail - AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With AWS CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. Billing alarms cannot be triggered via AWS CloudTrail.

AWS Organizations - AWS Organizations is an account management service that enables you to consolidate multiple AWS accounts into an organization that you create and centrally manage. Consolidated billing is a feature of AWS Organizations. You can use the master account of your organization to consolidate and pay for all member accounts. Billing alarms cannot, however, be triggered using Consolidated Billing.

AWS Cost Explorer - AWS Cost Explorer has an easy-to-use interface that lets you visualize, understand, and manage your AWS costs and usage over time. AWS Cost Explorer will help analyze your data at a high level or dive deeper into your cost and usage data using various reports (Monthly costs by AWS service, hourly and resource Level cost). Billing alarms cannot be triggered via AWS Cost Explorer.

305
Q

Which of the following statements are true regarding Amazon Simple Storage Service (Amazon S3) (Select two)?

a. Amazon Simple Storage Service (Amazon S3) stores data in a flat non-hierarchical structure
b. Amazon Simple Storage Service (Amazon S3) is a block storage service designed for a broad range of workloads
c. You can install databases on Amazon Simple Storage Service (Amazon S3)
d. Amazon Simple Storage Service (Amazon S3) is a key value based object storage service
e. Amazon Simple Storage Service (Amazon S3) is a fully managed, elastic file system storage service used as database backup

A

a. Amazon Simple Storage Service (Amazon S3) stores data in a flat non-hierarchical structure
d. Amazon Simple Storage Service (Amazon S3) is a key value based object storage service

Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance. Amazon Simple Storage Service (Amazon S3) stores data in a flat non-hierarchical structure. All objects are stored in Amazon S3 buckets and can be organized with shared names called prefixes. You can also append up to 10 key-value pairs called Amazon S3 object tags to each object, which can be created, updated, and deleted throughout an object’s lifecycle.
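A minimal sketch of the key-value model with boto3; the bucket, keys, and tags are hypothetical, and the prefix only looks like a folder in the flat namespace:

import boto3

s3 = boto3.client("s3")

# Each object is addressed by a key; prefixes only look like folders.
s3.put_object(
    Bucket="example-bucket",              # hypothetical bucket
    Key="reports/2024/q1.csv",            # "reports/2024/" is just a prefix
    Body=b"region,revenue\nus-east,100\n",
    Tagging="team=analytics&quarter=q1",  # object tags as key-value pairs
)

# List every object sharing the prefix.
for obj in s3.list_objects_v2(Bucket="example-bucket", Prefix="reports/2024/")["Contents"]:
    print(obj["Key"])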

Incorrect options:

Amazon Simple Storage Service (Amazon S3) is a block storage service designed for a broad range of workloads - Block storage service is provided by Amazon Elastic Block Store (Amazon EBS) to provide persistent block-level storage volumes for use with Amazon EC2 instances. Amazon S3 is an object storage service.

Amazon Simple Storage Service (Amazon S3) is a fully managed, elastic file system storage service used as database backup - Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources. Amazon S3 is an object storage service.

You can install databases on Amazon Simple Storage Service (Amazon S3) - Amazon S3 is an object storage service. You cannot install databases on Amazon S3.

306
Q

Which AWS services/features support High Availability by default? (Select two)

a. Instance Store
b. Amazon Elastic File System (Amazon EFS)
c. Amazon Elastic Block Store (Amazon EBS)
d. Subnet
e. Amazon DynamoDB

A

b. Amazon Elastic File System (Amazon EFS)
e. Amazon DynamoDB

Amazon DynamoDB is a key-value and document database that delivers single-digit millisecond performance at any scale. It’s a fully managed, multi-Region, multi-master, durable database with built-in security, backup and restore, and in-memory caching for internet-scale applications. All of your data is stored on solid-state disks (SSDs) and is automatically replicated across multiple Availability Zones (AZ) in an AWS Region, providing built-in high availability and data durability.

Amazon DynamoDB High Availability: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html

Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources. It is built to scale on-demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth. Amazon EFS is a regional service storing data within and across multiple Availability Zones (AZ) for high availability and durability.

Amazon EFS High Availability: https://aws.amazon.com/efs/faq/

Incorrect options:

Subnet - A subnet is a range of IP addresses in your VPC. You can create AWS resources, such as EC2 instances, in specific subnets. Each subnet must reside entirely within one Availability Zone and cannot span multiple Availability Zones.

Amazon Elastic Block Store (Amazon EBS) - Amazon Elastic Block Store (Amazon EBS) is an easy-to-use, high-performance block storage service designed for use with Amazon Elastic Compute Cloud (EC2) for both throughput and transaction-intensive workloads at any scale. EBS volumes are replicated within an Availability Zone (AZ) and can easily scale to petabytes of data.

Instance Store - As Instance Store volumes are tied to an EC2 instance, they are also single Availability Zone (AZ) entities.

307
Q

Which of the following Cloud Computing models does the Gmail service represent?

a. Software as a service (SaaS)
b. Function as a service (FaaS)
c. Platform as a service (PaaS)
d. Infrastructure as a service (IaaS)

A

a. Software as a service (SaaS)

308
Q

Which of the following AWS Support plans provide programmatic access to AWS Support Center features to create, manage and close your support cases? (Select two)

a. AWS Business Support
b. AWS Developer Support
c. AWS Basic Support
d. AWS Corporate Support
e. AWS Enterprise Support

A

a. AWS Business Support
e. AWS Enterprise Support

AWS offers four different support plans to cater to each of its customers - AWS Developer Support, AWS Business Support, AWS Enterprise On-Ramp Support and AWS Enterprise Support plans. A basic support plan is included for all AWS customers.

AWS Enterprise Support

You should use the AWS Enterprise Support plan to provide customers with concierge-like service where the main focus is helping the customer achieve their outcomes and find success in the cloud. With the AWS Enterprise Support plan, you get 24x7 technical support from high-quality engineers, tools and technology to automatically manage the health of your environment, consultative architectural guidance delivered in the context of your applications and use-cases, and a designated Technical Account Manager (TAM) to coordinate access to proactive/preventative programs and AWS subject matter experts. The AWS Enterprise Support plan provides 24x7 phone, email and chat access to technical support; however, it is costlier than the AWS Business Support plan. You get programmatic access (API Access) to AWS Support Center features to create, manage, and close your support cases, and operationally manage your Trusted Advisor check requests and status. You should note that the AWS Enterprise On-Ramp Support plan also gets programmatic access (API Access) to AWS Support Center.

AWS Business Support

You should use the AWS Business Support plan if you have production workloads on AWS and want 24x7 phone, email and chat access to technical support and architectural guidance in the context of your specific use-cases. You get programmatic access (API Access) to AWS Support Center features to create, manage, and close your support cases, and operationally manage your Trusted Advisor check requests and status.
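A rough sketch of this programmatic case management using the AWS Support API via boto3; the case details below are illustrative placeholders, and the API is only available on the Business, Enterprise On-Ramp, and Enterprise Support plans:

import boto3

# The AWS Support API endpoint is in us-east-1.
support = boto3.client("support", region_name="us-east-1")

# Create a support case programmatically.
case = support.create_case(
    subject="EC2 instance connectivity issue",  # hypothetical case details
    severityCode="low",
    categoryCode="general-guidance",
    communicationBody="Our instance has been unreachable since 09:00 UTC.",
    issueType="technical",
)

# Manage and close the case through the same API.
support.resolve_case(caseId=case["caseId"])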

Exam Alert:

AWS Business Support, AWS Enterprise On-Ramp, and AWS Enterprise Support plans support the full set of checks for AWS Trusted Advisor best practices and also provide support for programmatic case management.

Please review the differences between the AWS Developer Support, AWS Business Support, AWS Enterprise On-Ramp Support and AWS Enterprise Support plans as you can expect at least a couple of questions on the exam: https://aws.amazon.com/premiumsupport/plans/

309
Q

Which of the following improves the availability for a fleet of Amazon Elastic Compute Cloud (Amazon EC2) instances?

a. Deploy the Amazon Elastic Compute Cloud (Amazon EC2) instances in the same Availability Zone (AZ) across two different AWS Regions
b. Deploy the Amazon Elastic Compute Cloud (Amazon EC2) instances across different Availability Zones (AZ) in the same AWS Region
c. Deploy the Amazon Elastic Compute Cloud (Amazon EC2) instances in the same Availability Zone (AZ) of an AWS Region
d. Deploy the Amazon Elastic Compute Cloud (Amazon EC2) instances across different AWS Regions of the same Availability Zone (AZ)

A

b. Deploy the Amazon Elastic Compute Cloud (Amazon EC2) instances across different Availability Zones (AZ) in the same AWS Region

Deploying the instances across different Availability Zones (AZ) in the same AWS Region protects the fleet from the failure of any single AZ. Options (a) and (d) are not possible, because an Availability Zone belongs to exactly one AWS Region. Option (c) does not improve availability, since placing all instances in a single AZ leaves the fleet exposed to an outage of that AZ.

310
Q

A brand-new startup would like to remove its need to manage the underlying infrastructure and focus on the deployment and management of its applications. Which type of cloud computing does this refer to?

a. On-premises
b. Platform as a Service (PaaS)
c. Infrastructure as a Service (IaaS)
d. Software as a Service (SaaS)

A

b. Platform as a Service (PaaS)

Cloud Computing can be broadly divided into three types - Infrastructure as a Service (IaaS), Platform as a Service (PaaS), Software as a Service (SaaS).

Platform as a Service (PaaS) removes the need to manage underlying infrastructure (usually hardware and operating systems) and allows you to focus on the deployment and management of your applications. You don’t need to worry about resource procurement, capacity planning, software maintenance, patching, or any of the other undifferentiated heavy lifting involved in running your application.

Please review this overview of the types of cloud computing: https://aws.amazon.com/types-of-cloud-computing/

Incorrect options:

Infrastructure as a Service (IaaS) - Infrastructure as a Service (IaaS) contains the basic building blocks for cloud IT. It typically provides access to networking features, computers (virtual or on dedicated hardware), and data storage space. Infrastructure as a Service (IaaS) gives the highest level of flexibility and management control over IT resources.

Software as a Service (SaaS) - Software as a Service (SaaS) provides you with a complete product that is run and managed by the service provider. With a Software as a Service (SaaS) offering, you don’t have to think about how the service is maintained or how the underlying infrastructure is managed. You only need to think about how you will use that particular software. Amazon Rekognition is an example of a SaaS service.

On-premises - When an enterprise opts for on-premises, it needs to create, upgrade, and scale the on-premise IT infrastructure by investing in sophisticated hardware, compatible software, and robust services. Also, the business needs to deploy dedicated IT staff to upkeep, scale, and manage the on-premise infrastructure continuously.

311
Q

A company is moving its on-premises application to AWS Cloud. The application uses in-memory caches for running custom workloads. Which Amazon Elastic Compute Cloud (Amazon EC2) instance type is the right choice for the given requirement?

a. Compute Optimized instance types
b. Accelerated computing instance types
c. Storage Optimized instance types
d. Memory Optimized instance types

A

d. Memory Optimized instance types

Memory optimized instances are designed to deliver fast performance for workloads that process large data sets in memory. Memory optimized instances offer large memory size for memory intensive applications including in-memory applications, in-memory databases, in-memory analytics solutions, High Performance Computing (HPC), scientific computing, and other memory-intensive applications.

Amazon EC2 R6g instances are the next-generation of memory-optimized instances powered by Arm-based AWS Graviton2 Processors.

Incorrect options:

Compute Optimized instance types - Compute Optimized instances are designed for applications that benefit from high compute power. These applications include compute-intensive applications like high-performance web servers, high-performance computing (HPC), scientific modelling, distributed analytics, and machine learning inference.

Amazon EC2 C6g instances are the next-generation of compute-optimized instances powered by Arm-based AWS Graviton2 Processors.

Storage Optimized instance types - Dense-storage instances are designed for workloads that require high sequential read and write access to very large data sets, such as Hadoop distributed computing, massively parallel processing data warehousing, and log processing applications. The Dense-storage instances offer the best price/GB-storage and price/disk-throughput across other EC2 instances.

Accelerated computing instance types - Accelerated Computing instance family is a family of instances that use hardware accelerators, or co-processors, to perform some functions, such as floating-point number calculation and graphics processing, more efficiently than is possible in software running on CPUs. Amazon EC2 provides three types of Accelerated Computing instances – GPU compute instances for general-purpose computing, GPU graphics instances for graphics-intensive applications, and FPGA programmable hardware compute instances for advanced scientific workloads.

312
Q

Which of the following AWS services allows a database to have flexible schema and supports document data models?

a. Amazon Aurora
b. Amazon Relational Database Service (Amazon RDS)
c. Amazon DynamoDB
d. Amazon Redshift

A

c. Amazon DynamoDB

Amazon DynamoDB is a NoSQL database that supports key-value and document data models and enables developers to build modern, serverless applications that can start small and scale globally to support petabytes of data and tens of millions of read and write requests per second.

Amazon DynamoDB supports both key-value and document data models. This enables Amazon DynamoDB to have a flexible schema, so each row can have any number of columns at any point in time. This allows you to easily adapt the tables as your business requirements change, without having to redefine the table schema as you would in relational databases.

Incorrect options:

Amazon Aurora - Amazon Aurora is an AWS service for relational databases. Schema change on a relational database is not easy and straight-forward as it is on a NoSQL database. Amazon Aurora does not support flexible schema.

313
Q

A data analytics company stores its data on Amazon Simple Storage Service (Amazon S3) and wants to do SQL based analysis on this data with minimum effort. As a Cloud Practitioner, which of the following AWS services will you suggest for this use case?

a. Amazon DynamoDB
b. Amazon Redshift
c. Amazon Athena
d. Amazon Aurora

A

c. Amazon Athena

Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Amazon Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run.

Key features of Amazon Athena: https://aws.amazon.com/athena/

To use Amazon Athena, simply point to your data in Amazon Simple Storage Service (Amazon S3), define the schema, and start querying using standard SQL. Most results are delivered within seconds. With Amazon Athena, there’s no need for complex ETL jobs to prepare your data for analysis. This makes it easy for anyone with SQL skills to quickly analyze large-scale datasets.
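A minimal sketch of running such a query with boto3; the database, table, and S3 output location are hypothetical:

import time
import boto3

athena = boto3.client("athena")

# Run standard SQL directly against data stored in S3.
query = athena.start_query_execution(
    QueryString="SELECT product_id, COUNT(*) FROM sales GROUP BY product_id",
    QueryExecutionContext={"Database": "analytics_db"},  # hypothetical database
    ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
)
execution_id = query["QueryExecutionId"]

# Poll until the query finishes, then fetch the results.
while True:
    state = athena.get_query_execution(QueryExecutionId=execution_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    results = athena.get_query_results(QueryExecutionId=execution_id)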

Incorrect options:

Amazon Aurora - Amazon Aurora is a MySQL and PostgreSQL-compatible relational database built for the cloud, that combines the performance and availability of traditional enterprise databases with the simplicity and cost-effectiveness of open source databases. You cannot use Amazon Aurora for SQL analysis on S3 based data.

314
Q

Which of the following statements is INCORRECT about AWS Auto Scaling?
a. You can automatically deploy AWS Shield when a DDoS attack is detected
b. You can scale out and add more Amazon Elastic Compute Cloud (Amazon EC2) instances to match an increase in demand as well as scale in and remove Amazon Elastic Compute Cloud (Amazon EC2) instances to match a reduced demand
c. You can automatically register new instances to a load balancer
d. You can automatically remove unhealthy instances

A

a. You can automatically deploy AWS Shield when a DDoS attack is detected

AWS Auto Scaling is helpful during a DDoS attack, as it can scale out resources fast. But it cannot automatically deploy the AWS Shield service onto its group of resources.

315
Q

Which of the following AWS Support plans provide access to guidance, configuration, and troubleshooting of AWS interoperability with third-party software? (Select two)
a. AWS Basic Support
b. AWS Developer Support
c. AWS Corporate Support
d. AWS Business Support
e. AWS Enterprise Support

A

d. AWS Business Support
e. AWS Enterprise Support

Guidance, configuration, and troubleshooting of AWS interoperability with third-party software is included starting with the AWS Business Support plan, so AWS Business Support and AWS Enterprise Support are correct. AWS Basic Support and AWS Developer Support do not include it, and AWS Corporate Support is a made-up option.

316
Q

A company wants to improve the resiliency of its flagship application, so it wants to move from its traditional database system to a managed AWS NoSQL database service that supports an active-active configuration in both the East and West US AWS Regions. An active-active configuration with cross-Region support is the prime criterion for any database solution that the company considers.

Which AWS database service is the right fit for this requirement?
a. Amazon DynamoDB with global tables
b. Amazon Relational Database Service (Amazon RDS) for MYSQL
c. Amazon DynamoDB with DynamoDB Accelerator
d. Amazon Aurora with multi-master clusters

A

a. Amazon DynamoDB with global tables

Amazon DynamoDB is a fully managed, serverless, key-value NoSQL database designed to run high-performance applications at any scale. DynamoDB offers built-in security, continuous backups, automated multi-region replication, in-memory caching, and data export tools.

DynamoDB global tables replicate data automatically across your choice of AWS Regions and automatically scale capacity to accommodate your workloads. With global tables, your globally distributed applications can access data locally in the selected regions to get single-digit millisecond read and write performance. DynamoDB offers active-active cross-region support that is needed for the company.
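As a rough sketch (the table name and Regions are hypothetical), an existing DynamoDB table can be turned into a global table by adding replica Regions; this assumes the table already has DynamoDB Streams enabled, as global tables require:

import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Add a West-Coast replica to create an active-active global table.
dynamodb.update_table(
    TableName="orders",  # hypothetical table with streams enabled
    ReplicaUpdates=[{"Create": {"RegionName": "us-west-2"}}],
)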

Incorrect options:

Amazon DynamoDB with DynamoDB Accelerator - DynamoDB Accelerator (DAX) is an in-memory cache that delivers fast read performance for your tables at scale by enabling you to use a fully managed in-memory cache. Using DAX, you can improve the read performance of your DynamoDB tables by up to 10 times, taking the time required for reads from milliseconds to microseconds, even at millions of requests per second. DAX does not offer active-active cross-Region configuration.

317
Q

A company is planning to implement Chaos Engineering to expose any blind spots that can disrupt the resiliency of the application.

Which AWS service will help implement this requirement with the least effort?

a. Amazon Inspector
b. Amazon GuardDuty
c. AWS Trusted Advisor
d. AWS Fault Injection Simulator (AWS FIS)

A

d. AWS Fault Injection Simulator (AWS FIS)

AWS Fault Injection Simulator (AWS FIS) is a fully managed service for running fault injection experiments on AWS that makes it easier to improve an application’s performance, observability, and resiliency. Fault injection experiments are used in chaos engineering, which is the practice of stressing an application in testing or production environments by creating disruptive events, such as a sudden increase in CPU or memory consumption, observing how the system responds, and implementing improvements. Fault injection experiments help teams create the real-world conditions needed to uncover hidden bugs, monitoring blind spots, and performance bottlenecks that are difficult to find in distributed systems.

AWS Fault Injection Simulator (AWS FIS) simplifies the process of setting up and running controlled fault injection experiments across a range of AWS services so teams can build confidence in their application behavior. With AWS Fault Injection Simulator (AWS FIS), teams can quickly set up experiments using pre-built templates that generate the desired disruptions. AWS Fault Injection Simulator (AWS FIS) provides the controls and guardrails that teams need to run experiments in production, such as automatically rolling back or stopping the experiment if specific conditions are met. With a few clicks in the console, teams can run complex scenarios with common distributed system failures happening in parallel or building sequentially over time, enabling them to create the real-world conditions necessary to find hidden weaknesses.

Incorrect options:

Amazon Inspector - Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. Amazon Inspector automatically assesses applications for exposure, vulnerabilities, and deviations from best practices. After performing an assessment, Amazon Inspector produces a detailed list of security findings prioritized by level of severity.

AWS Trusted Advisor - AWS Trusted Advisor provides recommendations that help you follow AWS best practices. AWS Trusted Advisor evaluates your account by using checks. These checks identify ways to optimize your AWS infrastructure, improve security and performance, reduce costs, and monitor service quotas. You can then follow the check recommendations to optimize your services and resources.

Amazon GuardDuty - Amazon GuardDuty is a threat detection service that continuously monitors your AWS accounts and workloads for malicious activity and delivers detailed security findings for visibility and remediation.

318
Q

An IT company has a hybrid cloud architecture and it wants to centralize the server logs for its Amazon Elastic Compute Cloud (Amazon EC2) instances and on-premises servers. Which of the following is the MOST effective for this use-case?

a. Use Amazon CloudWatch Logs for both the Amazon Elastic Compute Cloud (Amazon EC2) instance and the on-premises servers
b. Use Amazon CloudWatch Logs for the Amazon Elastic Compute Cloud (Amazon EC2) instance and AWS CloudTrail for the on-premises servers
c. Use AWS CloudTrail for the Amazon Elastic Compute Cloud (Amazon EC2) instance and Amazon CloudWatch Logs for the on-premises servers
d. Use AWS Lambda to send log data from Amazon Elastic Compute Cloud (Amazon EC2) instance as well as on-premises servers to Amazon CloudWatch Logs

A

a. Use Amazon CloudWatch Logs for both the Amazon Elastic Compute Cloud (Amazon EC2) instance and the on-premises servers

You can use Amazon CloudWatch Logs to monitor, store, and access your log files from Amazon Elastic Compute Cloud (Amazon EC2) instances, AWS CloudTrail, Route 53, and other sources such as on-premises servers.

Amazon CloudWatch Logs enables you to centralize the logs from all of your systems, applications, and AWS services that you use, in a single, highly scalable service. You can then easily view them, search them for specific error codes or patterns, filter them based on specific fields, or archive them securely for future analysis.
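A minimal sketch of writing log events to a centralized log group with boto3; the names below are hypothetical, and in practice on-premises servers would typically run the CloudWatch agent, which performs the equivalent calls:

import time
import boto3

logs = boto3.client("logs")

# One log group centralizes streams from EC2 and on-premises servers alike.
logs.create_log_group(logGroupName="/hybrid/app-servers")
logs.create_log_stream(
    logGroupName="/hybrid/app-servers",
    logStreamName="onprem-server-01",
)

logs.put_log_events(
    logGroupName="/hybrid/app-servers",
    logStreamName="onprem-server-01",
    logEvents=[
        {"timestamp": int(time.time() * 1000), "message": "service started"},
    ],
)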

319
Q

Which of the following AWS services can be used to prevent Distributed Denial-of-Service (DDoS) attacks? (Select three)

a. AWS Shield
b. Amazon Inspector
c. Amazon CloudFront with Amazon Route 53
d. AWS Web Application Firewall (AWS WAF)
e. AWS Trusted Advisor
f. AWS CloudHSM

A

a. AWS Shield
c. Amazon CloudFront with Amazon Route 53
d. AWS Web Application Firewall (AWS WAF)

AWS Shield - AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards applications running on AWS. AWS Shield provides always-on detection and automatic inline mitigations that minimize application downtime and latency, so there is no need to engage AWS Support to benefit from DDoS protection.

AWS Web Application Firewall (AWS WAF) - By using AWS Web Application Firewall (AWS WAF), you can configure web access control lists (Web ACLs) on your Amazon CloudFront distributions or Application Load Balancers to filter and block requests based on request signatures. Besides, by using the AWS Web Application Firewall (AWS WAF) rate-based rules, you can automatically block the IP addresses of bad actors when requests matching a rule exceed a threshold that you define.

Amazon CloudFront with Amazon Route 53 - AWS hosts Amazon CloudFront and Amazon Route 53 services on a distributed network of proxy servers in data centers throughout the world called edge locations. Using the global Amazon network of edge locations for application delivery and DNS service plays an important part in building a comprehensive defense against DDoS attacks for your dynamic web applications.

How AWS Shield, AWS Web Application Firewall (AWS WAF), and Amazon CloudFront with Amazon Route 53 help mitigate DDoS attacks: https://aws.amazon.com/blogs/security/how-to-protect-dynamic-web-applications-against-ddos-attacks-by-using-amazon-cloudfront-and-amazon-route-53/

320
Q

An IT company is planning to migrate from an on-premises environment to AWS Cloud. Which of the following expense areas would result in cost savings when the company moves to AWS Cloud? (Select two)
a. Developer salary
b. Project manager salary
c. Data center physical security expenditure
d. Data center hardware infrastructure expenditure
e. SaaS application license fee

A

c. Data center physical security expenditure
d. Data center hardware infrastructure expenditure

The company does not need to spend on the computing hardware infrastructure and data center physical security. So these expense areas would result in cost savings. The expenditure on the SaaS application license fee, developer salary, and project manager salary would remain the same.

Exam Alert:

Please check out the following six advantages of Cloud Computing. You would certainly be asked questions on the advantages of Cloud Computing compared to a traditional on-premises setup:

https://docs.aws.amazon.com/whitepapers/latest/aws-overview/six-advantages-of-cloud-computing.html

321
Q

Which of the following AWS services should be used to automatically distribute incoming traffic across multiple targets?
a. AWS Elastic Beanstalk
b. Amazon OpenSearch Service
c. AWS Elastic Load Balancing (ELB)
d. AWS Auto Scaling

A

c. AWS Elastic Load Balancing (ELB)

Elastic Load Balancing (ELB) is used to automatically distribute your incoming application traffic across all the EC2 instances that you are running. You can use Elastic Load Balancing to manage incoming requests by optimally routing traffic so that no one instance is overwhelmed. Your load balancer acts as a single point of contact for all incoming web traffic to your application. When an instance is added, it needs to register with the load balancer or no traffic is routed to it. When an instance is removed, it must deregister from the load balancer or traffic continues to be routed to it.
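
The register/deregister flow can be sketched with boto3 (the target group ARN and instance ID are placeholders):

import boto3

elbv2 = boto3.client("elbv2")

TARGET_GROUP_ARN = "arn:aws:elasticloadbalancing:us-east-1:111111111111:targetgroup/demo/abc123"

# Register a new instance so the load balancer starts routing traffic to it.
elbv2.register_targets(
    TargetGroupArn=TARGET_GROUP_ARN,
    Targets=[{"Id": "i-0123456789abcdef0"}],
)

# Deregister it before removal so traffic stops being routed to it.
elbv2.deregister_targets(
    TargetGroupArn=TARGET_GROUP_ARN,
    Targets=[{"Id": "i-0123456789abcdef0"}],
)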

Incorrect options:

AWS Elastic Beanstalk - AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed in a variety of programming languages. You can simply upload your code and Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, auto-scaling to application health monitoring. You cannot use Beanstalk to distribute incoming traffic across multiple targets.

322
Q

A startup runs its proprietary application on Docker containers. As a Cloud Practitioner, which AWS service would you recommend so that the startup can run containers and still have access to the underlying servers?

a. AWS Fargate
b. AWS Lambda
c. Amazon Elastic Container Registry (Amazon ECR)
d. Amazon Elastic Container Service (Amazon ECS)

A

d. Amazon Elastic Container Service (Amazon ECS)

Amazon Elastic Container Service (Amazon ECS) is a highly scalable, fast, container management service that makes it easy to run, stop, and manage Docker containers on a cluster. With the Amazon ECS EC2 launch type, the containers run on a cluster of EC2 instances that you control, so you retain access to the underlying servers.

Incorrect options:

AWS Fargate - AWS Fargate is a serverless compute engine for containers. It works with both Amazon Elastic Container Service (Amazon ECS) and Amazon Elastic Kubernetes Service (Amazon EKS). AWS Fargate makes it easy for you to focus on building your applications. AWS Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design. With AWS Fargate, you do not have access to the underlying servers, so this option is incorrect.

How AWS Fargate Works: via - https://aws.amazon.com/fargate/

AWS Lambda - AWS Lambda is a compute service that lets you run code without provisioning or managing servers. AWS Lambda executes your code only when needed and scales automatically, from a few requests per day to thousands per second. AWS Lambda does not support running container applications.

Amazon Elastic Container Registry (Amazon ECR) - Amazon Elastic Container Registry (Amazon ECR) can be used to store, manage, and deploy Docker container images. Amazon Elastic Container Registry (Amazon ECR) eliminates the need to operate your container repositories. Amazon Elastic Container Registry (Amazon ECR) does not support running container applications.

323
Q

Which Amazon Route 53 routing policy would you use to improve the performance for your customers by routing the requests to the AWS endpoint that provides the fastest experience?

a. Failover routing
b. Weighted routing
c. Simple routing
d. Latency-based routing

A

d. Latency-based routing

Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service. It is designed to give developers and businesses an extremely reliable and cost-effective way to route end users to Internet applications by translating names like www.example.com into the numeric IP addresses like 192.0.2.1 that computers use to connect to each other.

If your application is hosted in multiple AWS Regions, you can use latency-based routing policy to improve the performance for your users by serving their requests from the AWS Region that provides the lowest latency. To use latency-based routing, you create latency records for your resources in multiple AWS Regions. When Amazon Route 53 receives a DNS query for your domain or subdomain (example.com or acme.example.com), it determines which AWS Regions you’ve created latency records for, determines which region gives the user the lowest latency, and then selects a latency record for that region. Amazon Route 53 responds with the value from the selected record, such as the IP address for a web server.
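
A latency record is an ordinary record plus a Region and a SetIdentifier. As a hedged sketch (the hosted zone ID, domain, and IP are placeholders), one such record per Region could be created like this:

import boto3

route53 = boto3.client("route53")

# Create the us-east-1 latency record; repeat with a different Region and
# SetIdentifier for each additional Region hosting the application.
route53.change_resource_record_sets(
    HostedZoneId="Z0000000000000",          # hypothetical hosted zone
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "www.example.com",
            "Type": "A",
            "SetIdentifier": "us-east-1",   # must be unique among the latency records
            "Region": "us-east-1",          # marks this as a latency record
            "TTL": 60,
            "ResourceRecords": [{"Value": "192.0.2.1"}],
        },
    }]},
)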

Amazon Route 53 Routing Policy Overview: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html

Incorrect options:

Failover routing - This routing policy is used when you want to configure active-passive failover.

Weighted routing - This routing policy is used to route traffic to multiple resources in proportions that you specify.

Simple routing - With simple routing, you typically route traffic to a single resource, for example, to a web server for your website.

324
Q

As per the AWS Shared Responsibility Model, which of the following is a responsibility of AWS from a security and compliance point of view?

a. Identity and Access Management
b. Patching networking infrastructure
c. Patching guest OS and applications
d. Service and Communications Protection

A

b. Patching networking infrastructure

According to the AWS Shared Responsibility Model, AWS is responsible for “Security of the Cloud”. This includes protecting the infrastructure that runs all of the services offered in the AWS Cloud. This infrastructure is composed of the hardware, software, networking, and facilities that run AWS Cloud services. Therefore, patching networking infrastructure is the responsibility of AWS.

Incorrect options:

Service and Communications Protection
Identity and Access Management
Patching guest OS and applications

The customer is responsible for security “in” the cloud. This covers things such as services and communications protection; Identity and Access Management; and patching guest OS and applications. Customers are responsible for managing their data including encryption options and using Identity and Access Management tools for implementing appropriate access control policies as per their organization requirements. Therefore, these three options fall under the responsibility of the customer according to the AWS shared responsibility model.

Exam Alert:

Please review the AWS Shared Responsibility Model in detail as you can expect multiple questions on this topic in the exam: https://aws.amazon.com/compliance/shared-responsibility-model/

325
Q

Which AWS tool/service will help you define your cloud infrastructure using popular programming languages such as Python and JavaScript?

a. AWS CloudFormation
b. AWS CodeBuild
c. AWS Elastic Beanstalk
d. AWS Cloud Development Kit (AWS CDK)

A

d. AWS Cloud Development Kit (AWS CDK)

The AWS Cloud Development Kit (AWS CDK) is an open-source software development framework to define your cloud application resources using familiar programming languages.

AWS Cloud Development Kit (AWS CDK) uses the familiarity and expressive power of programming languages for modeling your applications. It provides you with high-level components called constructs that preconfigure cloud resources with proven defaults, so you can build cloud applications without needing to be an expert. AWS CDK provisions your resources in a safe, repeatable manner through AWS CloudFormation. It also enables you to compose and share your own custom constructs that incorporate your organization’s requirements, helping you start new projects faster.

In short, you use the AWS CDK framework to author AWS CDK projects which are executed to generate AWS CloudFormation templates.
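
As a small illustration (the bucket and stack names are arbitrary), an AWS CDK v2 app in Python that synthesizes to a CloudFormation template with a single S3 bucket might look like this:

import aws_cdk as cdk
from aws_cdk import aws_s3 as s3

class DemoStack(cdk.Stack):
    def __init__(self, scope, construct_id, **kwargs):
        super().__init__(scope, construct_id, **kwargs)
        # One high-level construct; `cdk synth` expands it into a
        # CloudFormation template with sensible defaults filled in.
        s3.Bucket(self, "DemoBucket", versioned=True)

app = cdk.App()
DemoStack(app, "DemoStack")
app.synth()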

How Cloud Development Kit (AWS CDK) works: https://aws.amazon.com/cdk/

Incorrect options:

AWS Elastic Beanstalk - AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, etc. You can simply upload your code in a programming language of your choice and AWS Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, and auto scaling to application health monitoring. Elastic Beanstalk deploys your application code; it is not a framework for defining your cloud infrastructure in a programming language.

AWS CloudFormation - AWS CloudFormation is a service that gives developers and businesses an easy way to create a collection of related AWS and third-party resources, and provision and manage them in an orderly and predictable fashion. AWS CloudFormation is designed to allow resource lifecycles to be managed repeatably, predictably, and safely while allowing for automatic rollbacks, automated state management, and management of resources across accounts and regions. AWS Cloud Development Kit (AWS CDK) helps code the same in higher-level languages and converts them into AWS CloudFormation templates.

AWS CodeBuild - AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages that are ready to deploy. With AWS CodeBuild, you don’t need to provision, manage, and scale your own build servers. AWS CodeBuild scales continuously and processes multiple builds concurrently, so your builds are not left waiting in a queue.

326
Q

A data analytics company is running a proprietary batch analytics application on AWS and wants to use a storage service which would be accessed by hundreds of EC2 instances simultaneously to append data to existing files. As a Cloud Practitioner, which AWS service would you suggest for this use-case?
a. Amazon Simple Storage Service (Amazon S3)
b. Instance Store
c. Amazon Elastic File System (Amazon EFS)
d. Amazon Elastic Block Store (Amazon EBS)

A

c. Amazon Elastic File System (Amazon EFS)

Amazon EFS is a file storage service for use with Amazon EC2. Amazon EFS provides a file system interface, file system access semantics, and concurrently-accessible storage for up to thousands of Amazon EC2 instances. Amazon EFS uses the Network File System protocol.

Incorrect options:

Amazon Elastic Block Store (Amazon EBS) - Amazon Elastic Block Store (EBS) is an easy-to-use, high-performance block storage service designed for use with Amazon Elastic Compute Cloud (EC2) for both throughput and transaction-intensive workloads at any scale. Outside of the limited Multi-Attach feature for Provisioned IOPS volumes, an EBS volume cannot be accessed simultaneously by multiple EC2 instances, let alone hundreds of them, so this option is incorrect.

327
Q

Which AWS service will help you receive alerts when the reservation utilization falls below the defined threshold?
a. AWS CloudTrail
b. AWS Trusted Advisor
c. AWS Pricing Calculator
d. AWS Budgets

A

d. AWS Budgets

AWS Budgets lets you set reservation utilization or coverage targets and receive alerts when your utilization drops below the threshold you define. Reservation alerts are supported for Amazon EC2, Amazon RDS, Amazon Redshift, Amazon ElastiCache, and Amazon OpenSearch Service reservations.
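
For illustration only (the account ID, budget name, threshold, and email are placeholders, and the exact request shape should be checked against the AWS Budgets API reference), a Reserved Instance utilization alert could be created with boto3 roughly like this:

import boto3

budgets = boto3.client("budgets")

# Alert when EC2 Reserved Instance utilization falls below 80% for the month.
budgets.create_budget(
    AccountId="123456789012",
    Budget={
        "BudgetName": "ec2-ri-utilization",
        "BudgetType": "RI_UTILIZATION",
        "TimeUnit": "MONTHLY",
        "BudgetLimit": {"Amount": "80", "Unit": "PERCENTAGE"},
        "CostFilters": {"Service": ["Amazon Elastic Compute Cloud - Compute"]},
    },
    NotificationsWithSubscribers=[{
        "Notification": {
            "NotificationType": "ACTUAL",
            "ComparisonOperator": "LESS_THAN",
            "Threshold": 80.0,
        },
        "Subscribers": [{"SubscriptionType": "EMAIL", "Address": "ops@example.com"}],
    }],
)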

328
Q

Which tool will help you review your workloads against current AWS best practices for cost optimization, security, and performance improvement and then obtain advice to architect them better?

a. AWS Trusted Advisor
b. Amazon CloudWatch
c. Amazon Inspector
d. AWS Cost Explorer

A

a. AWS Trusted Advisor

AWS Trusted Advisor is an online tool that provides you with real-time guidance to help you provision your resources following AWS best practices on cost optimization, security, fault tolerance, service limits, and performance improvement. Whether establishing new workflows, developing applications, or as part of ongoing improvement, recommendations provided by Trusted Advisor regularly help keep your solutions provisioned optimally. All AWS customers get access to the seven core Trusted Advisor checks to help increase the security and performance of the AWS environment.

How Trusted Advisor Works: https://aws.amazon.com/premiumsupport/technology/trusted-advisor/

329
Q

An e-commerce company has deployed an RDS database in a single Availability Zone (AZ). The engineering team wants to ensure that in case of an AZ outage, the database should continue working on the same endpoint without any manual administrative intervention. Which of the following solutions can address this use-case?
a. Deploy the database via AWS Elastic Beanstalk
b. Configure the database in RDS read replica mode with automatic failover to the standby
c. Configure the database in RDS Multi-AZ deployment with automatic failover to the standby
d. Provision the database via AWS CloudFormation

A

c. Configure the database in RDS Multi-AZ deployment with automatic failover to the standby

When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). In case of an infrastructure failure, Amazon RDS performs an automatic failover to the standby (or to a read replica in the case of Amazon Aurora), so that you can resume database operations as soon as the failover is complete. Since the endpoint for your DB Instance remains the same after a failover, your application can resume database operation without the need for manual administrative intervention.
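
Multi-AZ is a single flag at instance creation time. A hedged boto3 sketch (the identifier, credentials, and sizing are placeholders):

import boto3

rds = boto3.client("rds")

# MultiAZ=True tells RDS to keep a synchronous standby in another AZ and
# fail over to it automatically while the endpoint stays the same.
rds.create_db_instance(
    DBInstanceIdentifier="orders-db",      # hypothetical identifier
    Engine="mysql",
    DBInstanceClass="db.t3.micro",
    AllocatedStorage=20,
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",       # placeholder credential
    MultiAZ=True,
)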

Incorrect options:

Configure the database in RDS read replica mode with automatic failover to the standby - For RDS, Read replicas allow you to create read-only copies that are synchronized with your master database. There is no standby available while using read replicas. In case of infrastructure failure, you have to manually promote the read replica to be its own standalone DB Instance, which means that the database endpoint would change. Therefore, this option is incorrect.

330
Q

Which of the following statements is CORRECT regarding the scope of an Amazon Virtual Private Cloud (VPC)?

a. A VPC spans all Availability Zones (AZs) in all AWS regions
b. A VPC spans all AWS regions within an Availability Zone (AZ)
c. Amazon VPC spans all subnets in all AWS regions
d. A VPC spans all Availability Zones (AZs) within an AWS region

A

d. A VPC spans all Availability Zones (AZs) within an AWS region

Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define. You have complete control over your virtual networking environment, including selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways.

A VPC is located within an AWS region.

331
Q

Which of the following AWS entities lists all users in your account and the status of their various account aspects such as passwords, access keys, and MFA devices?

a. Credentials Report
b. Amazon Inspector
c. AWS Cost & Usage Report (AWS CUR)
d. AWS Trusted Advisor

A

a. Credentials Report

Credentials Report

You can generate and download a credentials report that lists all users in your account and the status of their various credentials, including passwords, access keys, and MFA devices. You can use the credentials report to assist in your auditing and compliance efforts. You can use the report to audit the effects of credential lifecycle requirements, such as password and access key rotation. You can provide the report to an external auditor, or grant permissions to an auditor so that he or she can download the report directly.
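
The report can also be generated and downloaded programmatically. A minimal boto3 sketch (a production script would poll until IAM reports the generation as complete before downloading):

import csv
import io
import boto3

iam = boto3.client("iam")

# Kick off report generation, then fetch the CSV content.
iam.generate_credential_report()
report = iam.get_credential_report()

rows = csv.DictReader(io.StringIO(report["Content"].decode("utf-8")))
for row in rows:
    print(row["user"], row["password_enabled"], row["mfa_active"])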

Incorrect options:

AWS Trusted Advisor - AWS Trusted Advisor is an online tool that provides real-time guidance to help provision your resources following AWS best practices. Whether establishing new workflows, developing applications, or as part of ongoing improvement, recommendations provided by Trusted Advisor regularly help keep your solutions provisioned optimally. AWS Trusted Advisor analyzes your AWS environment and provides best practice recommendations in five categories: Cost Optimization, Performance, Security, Fault Tolerance, Service Limits.

AWS Cost & Usage Report (AWS CUR) - The AWS Cost & Usage Report (AWS CUR) contains the most comprehensive set of cost and usage data available. You can use the AWS Cost & Usage Report (AWS CUR) to publish your AWS billing reports to an Amazon Simple Storage Service (Amazon S3) bucket that you own. You can receive reports that break down your costs by the hour or month, by product or product resource, or by tags that you define yourself. The AWS Cost & Usage Report (AWS CUR) reports on costs and usage, not on IAM users or the status of their credentials.

Amazon Inspector - Amazon Inspector is an automated, security assessment service that helps you check for unintended network accessibility of your Amazon EC2 instances and for vulnerabilities on those EC2 instances. Amazon Inspector assessments are offered to you as pre-defined rules packages mapped to common security best practices and vulnerability definitions.

332
Q

A growing start-up has trouble identifying and protecting sensitive data at scale. Which AWS fully managed service can assist with this task?

a. Amazon Macie
b. AWS Secrets Manager
c. AWS Key Management Service (AWS KMS)
d. AWS Artifact

A

a. Amazon Macie

Amazon Macie is a fully managed data security and data privacy service that uses Machine Learning and pattern matching to discover and protect your sensitive data in AWS.

Amazon Macie uses Machine Learning and pattern matching to cost-efficiently discover sensitive data at scale. Amazon Macie automatically detects a large and growing list of sensitive data types, including personally identifiable information (PII) such as names, addresses, and credit card numbers. It also gives you constant visibility of the data security and data privacy of your data stored in Amazon S3.

How Amazon Macie works: https://aws.amazon.com/macie/

Incorrect options:

AWS Artifact - AWS Artifact is your go-to, central resource for compliance-related information that matters to you. It provides on-demand access to AWS' security and compliance reports and select online agreements. It is not used to discover and protect sensitive data in AWS.

AWS Secrets Manager - AWS Secrets Manager helps you protect secrets needed to access your applications, services, and IT resources. The service enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. It is not used to discover and protect sensitive data in AWS.

AWS Key Management Service (AWS KMS) - AWS Key Management Service (AWS KMS) makes it easy for you to create and manage keys and control the use of encryption across a wide range of AWS services and in your applications. It is not used to discover and protect sensitive data in AWS.

333
Q

A startup wants to migrate its data and applications from the on-premises data center to AWS Cloud. Which of the following options can be used by the startup to help with this migration? (Select two)
a. Raise a support ticket with AWS Support for further assistance
b. Consult moderators on AWS Developer Forums
c. Leverage AWS Professional Services to accelerate the infrastructure migration
d. Use AWS Trusted Advisor to automate the infrastructure migration
e. Utilize AWS Partner Network (APN) to build a custom solution for this infrastructure migration

A

c. Leverage AWS Professional Services to accelerate the infrastructure migration

e. Utilize AWS Partner Network (APN) to build a custom solution for this infrastructure migration

Leverage AWS Professional Services to accelerate the infrastructure migration

The AWS Professional Services organization is a global team of experts that can help you realize your desired business outcomes when using the AWS Cloud. AWS Professional Services consultants can supplement your team with specialized skills and experience that can help you achieve quick results. Therefore, leveraging AWS Professional Services can accelerate the infrastructure migration for the startup.

Utilize AWS Partner Network (APN) to build a custom solution for this infrastructure migration

The AWS Partner Network (APN) is the global partner program for technology and consulting businesses that leverage Amazon Web Services to build solutions and services for customers. The startup can work with experts from APN to build a custom solution for this infrastructure migration.

334
Q

A company needs to use a secure online data transfer tool/service that can automate the ongoing transfers from on-premises systems into AWS while providing support for incremental data backups.

Which AWS tool/service is an optimal fit for this requirement?

a. AWS Storage Gateway
b. AWS Snowcone
c. AWS DataSync
d. AWS Snowmobile

A

c. AWS DataSync

AWS DataSync is a secure online data transfer service that simplifies, automates, and accelerates copying terabytes of data to and from AWS storage services. Easily migrate or replicate large data sets without having to build custom solutions or oversee repetitive tasks. DataSync can copy data between Network File System (NFS) shares, or Server Message Block (SMB) shares, self-managed object storage, AWS Snowcone, Amazon Simple Storage Service (Amazon S3) buckets, Amazon Elastic File System (Amazon EFS) file systems, and Amazon FSx for Windows File Server file systems.

You can use AWS DataSync for ongoing transfers from on-premises systems into or out of AWS for processing. DataSync can help speed up your critical hybrid cloud storage workflows in industries that need to move active files into AWS quickly. This includes machine learning in life sciences, video production in media and entertainment, and big data analytics in financial services. AWS DataSync provides timely delivery to ensure dependent processes are not delayed. You can specify exclude filters, include filters, or both, to determine which files, folders, or objects get transferred each time your task runs.

AWS DataSync employs an AWS-designed transfer protocol—decoupled from the storage protocol—to accelerate data movement. The protocol performs optimizations on how, when, and what data is sent over the network. Network optimizations performed by DataSync include incremental transfers, in-line compression, and sparse file detection, as well as in-line data validation and encryption.
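
A hedged sketch of a DataSync task with an exclude filter (both location ARNs are placeholders):

import boto3

datasync = boto3.client("datasync")

# Sync everything except temp and backup files on every task run.
datasync.create_task(
    SourceLocationArn="arn:aws:datasync:us-east-1:111111111111:location/loc-src",
    DestinationLocationArn="arn:aws:datasync:us-east-1:111111111111:location/loc-dst",
    Name="nightly-sync",
    Excludes=[{"FilterType": "SIMPLE_PATTERN", "Value": "/tmp|*.bak"}],
)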

Data Transfer between on-premises and AWS using AWS DataSync: https://aws.amazon.com/datasync/

Incorrect options:

AWS Storage Gateway - AWS Storage Gateway is a set of hybrid cloud services that give you on-premises access to virtually unlimited cloud storage. Customers use AWS Storage Gateway to integrate AWS Cloud storage with existing on-site workloads so they can simplify storage management and reduce costs for key hybrid cloud storage use cases. These include moving backups to the cloud, using on-premises file shares backed by cloud storage, and providing low latency access to data in AWS for on-premises applications.

AWS Snowmobile - AWS Snowmobile is an Exabyte-scale data transfer service used to move extremely large amounts of data to AWS. You can transfer up to 100PB per Snowmobile, a 45-foot-long ruggedized shipping container, pulled by a semi-trailer truck. AWS Snowmobile makes it easy to move massive volumes of data to the cloud, including video libraries, image repositories, or even a complete data center migration.

AWS Snowcone - AWS Snowcone is the smallest member of the AWS Snow Family of edge computing, edge storage, and data transfer devices. Weighing in at 4.5 pounds (2.1 kg), AWS Snowcone is equipped with 8 terabytes of usable storage, while AWS Snowcone Solid State Drive (SSD) supports 14 terabytes of usable storage. Both referred to as AWS Snowcone, the device is ruggedized, secure, and purpose-built for use outside of a traditional data center. Its small form factor makes it a perfect fit for tight spaces or where portability is a necessity and network connectivity is unreliable. You can use AWS Snowcone in backpacks for first responders, or for IoT, vehicular, and drone use cases. You can execute compute applications at the edge, and you can ship the device with data to AWS for offline data transfer, or you can transfer data online with AWS DataSync from edge locations.

335
Q

Compared to the on-demand instance prices, what is the highest possible discount offered for spot instances?

a. 75%
b. 10%
c. 50%
d. 90%

A

d. 90%

Amazon EC2 spot instances let you take advantage of unused EC2 capacity in the AWS cloud. Spot instances are available at up to a 90% discount compared to the on-demand instance prices. You can use spot instances for various stateless, fault-tolerant, or flexible applications such as big data, containerized workloads, CI/CD, web servers, high-performance computing (HPC), and other test & development workloads.
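
Requesting Spot capacity is a launch-time option. A minimal boto3 sketch (the AMI ID is a placeholder):

import boto3

ec2 = boto3.client("ec2")

# Launch one t3.micro on Spot capacity instead of On-Demand.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",       # hypothetical AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {"SpotInstanceType": "one-time"},
    },
)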

336
Q

Which of the following statements are correct regarding the AWS Control Tower and Service Control Policies? (Select two)

a. Service Control Policies (SCPs) can help grant permissions to the accounts in your organization
b. Service Control Policies (SCPs), by default, affect all the users in the AWS Organization. They have to be configured to affect only the member accounts, if needed
c. AWS Control Tower is an AWS native service providing a pre-defined set of blueprints and guardrails to help customers implement a landing zone for new AWS accounts
d. Service Control Policies (SCPs) are a type of organization policy that you can use to manage permissions in your organization
e. AWS Control Tower helps you deploy a multi-account AWS environment and operate it with day-to-day reminders and recommendations

A

c. AWS Control Tower is an AWS native service providing a pre-defined set of blueprints and guardrails to help customers implement a landing zone for new AWS accounts
d. Service Control Policies (SCPs) are a type of organization policy that you can use to manage permissions in your organization

AWS Control Tower is an AWS native service providing a pre-defined set of blueprints and guardrails to help customers implement a landing zone for new AWS accounts.

AWS Control Tower is designed to provide an easy, self-service setup experience and an interactive user interface for ongoing governance with guardrails. While Control Tower automates creation of a new landing zone with pre-configured blueprints (e.g., AWS IAM Identity Center for directory and access), the AWS Landing Zone solution provides a configurable setup of a landing zone with rich customization options through custom add-ons (e.g., Active Directory, Okta Directory) and ongoing modifications through a code deployment and configuration pipeline.

How AWS Control Tower Works: https://aws.amazon.com/controltower/

Service control policies (SCPs) are a type of organization policy that you can use to manage permissions in your organization. Service control policies (SCPs) offer central control over the maximum available permissions for all accounts in your organization. Service control policies (SCPs) help you to ensure your accounts stay within your organization’s access control guidelines. SCPs are available only in an organization that has all features enabled. SCPs aren’t available if your organization has enabled only the consolidated billing features.
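
To make the guardrail idea concrete, here is a hedged boto3 sketch of creating a simple SCP (the policy content is an illustrative deny; the policy still has to be attached to a root, OU, or account to take effect):

import json
import boto3

orgs = boto3.client("organizations")

# An SCP only sets limits; it grants no permissions by itself.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "organizations:LeaveOrganization",
        "Resource": "*",
    }],
}

orgs.create_policy(
    Name="deny-leave-organization",
    Description="Guardrail: member accounts cannot leave the organization",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)
# Attach it afterwards, e.g.:
# orgs.attach_policy(PolicyId="p-examplepolicyid", TargetId="ou-examp-leou1234")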

Incorrect options:

AWS Control Tower helps you deploy a multi-account AWS environment and operate it with day-to-day reminders and recommendations - AWS Control Tower helps you deploy a multi-account AWS environment based on best practices, however, the customer is still responsible for day-to-day operations and checking compliance status. Enterprises that need help operating regulated infrastructure in the cloud should consider a certified MSP partner or AWS Managed Services (AMS).

Service Control Policies (SCPs) can help grant permissions to the accounts in your organization - SCPs alone are not sufficient to grant permissions to the accounts in your organization. No permissions are granted by an SCP. An SCP defines a guardrail, or sets limits, on the actions that the account’s administrator can delegate to the IAM users and roles in the affected accounts. The administrator must still attach identity-based or resource-based policies to IAM users or roles, or to the resources in your accounts to actually grant permissions. The effective permissions are the logical intersection between what is allowed by the SCP and what is allowed by the IAM and resource-based policies.

Service Control Policies (SCPs), by default, affect all the users in the AWS Organization. They have to be configured to affect only the member accounts, if needed - SCPs don't affect users or roles in the management account. They affect only the member accounts in your organization.

337
Q

Which of the following are recommended security best practices for the AWS account root user? (Select two)

a. Share AWS account root user access keys with other administrators
b. Keep your AWS account root user access keys in an encrypted file on Amazon S3
c. Set up an IAM user with administrator permissions and do not use AWS account root user for administrative tasks
d. Enable multi-factor authentication (MFA) for the AWS account root user
e. Disable multi-factor authentication (MFA) for the AWS account root user as it can lock the entire AWS account if the multi-factor authentication (MFA) device is lost

A

c. Set up an IAM user with administrator permissions and do not use AWS account root user for administrative tasks
d. Enable multi-factor authentication (MFA) for the AWS account root user

When you create an AWS account, you create an AWS account root user identity, which you use to sign in to AWS. You can sign in to the AWS Management Console using this root user identity — that is, the email address and password that you provided when creating the account. This combination of your email address and password is also called your root user credentials.

Some of the AWS account root user security best practices are as follows:

Do not use the AWS account root user for any task where it’s not required. Instead, create a new IAM user for each person that requires administrator access. Then make those users administrators by placing the users into an “Administrators” group to which you attach the AdministratorAccess managed policy.

If you don’t already have an access key for your AWS account root user, don’t create one unless you need to. If you do have an access key for your AWS account root user, delete it.

Never share your AWS account root user password or access keys with anyone. Use a strong password to help protect account-level access to the AWS Management Console.

Enable AWS multi-factor authentication (MFA) on your AWS account root user account.

https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#lock-away-credentials

Incorrect options:

Disable multi-factor authentication (MFA) for the AWS account root user as it can lock the entire AWS account if the multi-factor authentication (MFA) device is lost - AWS recommends that you enable AWS multi-factor authentication (MFA) on your AWS account root user account.

Keep your AWS account root user access keys in an encrypted file on Amazon S3 - AWS recommends that if you do have an access key for your AWS account root user, delete it.

Share AWS account root user access keys with other administrators - The access key for your AWS account root user gives full access to all your resources for all AWS services, including your billing information. You cannot reduce the permissions associated with your AWS account root user access key. You should never share these access keys with any other users, not even the administrators.

338
Q

Which AWS support plan provides access to a designated Technical Account Manager (TAM)?

a. AWS Enterprise Support
b. AWS Enterprise On-Ramp Support
c. AWS Business Support
d. AWS Developer Support

A

a. AWS Enterprise Support

AWS offers four different support plans to cater to each of its customers - AWS Developer Support, AWS Business Support, AWS Enterprise On-Ramp Support and AWS Enterprise Support plans. A basic support plan is included for all AWS customers.

AWS Enterprise Support provides customers with concierge-like service where the main focus is helping the customer achieve their outcomes and find success in the cloud. With AWS Enterprise Support, you get 24x7 technical support from high-quality engineers, tools and technology to automatically manage the health of your environment, consultative architectural guidance delivered in the context of your applications and use-cases, and a designated Technical Account Manager (TAM) to coordinate access to proactive/preventative programs and AWS subject matter experts.

Exam Alert:

Please review the differences between the AWS Developer Support, AWS Business Support, AWS Enterprise On-Ramp Support and AWS Enterprise Support plans as you can expect at least a couple of questions on the exam: https://aws.amazon.com/premiumsupport/plans/

339
Q

Which budget types can be created under AWS Budgets (Select three)?

a. Resource budget
b. Reservation budget
c. Hardware budget
d. Software budget
e. Cost budget
f. Usage budget

A

b. Reservation budget
e. Cost budget
f. Usage budget

AWS Budgets enable you to plan your service usage, service costs, and instance reservations. AWS Budgets information is updated up to three times a day. Updates typically occur between 8 and 12 hours after the previous update. Budgets track your unblended costs, subscriptions, refunds, and Reserved Instances (RIs). There are four different budget types you can create under AWS Budgets - Cost budget, Usage budget, Reservation budget and Savings Plans budget.

Cost budget - Helps you plan how much you want to spend on a service.

Usage budget - Helps you plan how much you want to use one or more services.

Reservation budget - This helps you track the usage of your Reserved Instances (RI). Two ways of doing it - Reserved Instance (RI) utilization budgets (This lets you see if your Reserved Instances (RI) are unused or under-utilized), Reserved Instance (RI) coverage budgets (This lets you see how much of your instance usage is covered by a reservation).

340
Q

A Cloud Practitioner would like to deploy identical resources across all AWS regions and accounts using templates while estimating costs. Which AWS service can assist with this task?

a. AWS CodeDeploy
b. AWS Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD)
c. Amazon Lightsail
d. AWS CloudFormation

A

d. AWS CloudFormation

AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion.

You can use the AWS CloudFormation sample templates or create your own templates to describe your AWS resources, and any associated dependencies or runtime parameters, required to run your application. This provides a single source of truth for all your resources and helps you to standardize infrastructure components used across your organization, enabling configuration compliance and faster troubleshooting.

AWS CloudFormation templates allow you to estimate the cost of your resources.
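
As a small illustration (the template and AMI ID are placeholders), the EstimateTemplateCost API returns a link to a cost calculator pre-filled with the template's resources:

import boto3

cfn = boto3.client("cloudformation")

template = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  WebServer:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t3.micro
      ImageId: ami-0123456789abcdef0   # placeholder AMI
"""

# Returns a URL to a calculator populated from the template.
print(cfn.estimate_template_cost(TemplateBody=template)["Url"])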

How AWS CloudFormation works: https://aws.amazon.com/cloudformation/

Incorrect options:

AWS Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD) - AWS Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD), also known as AWS Managed Microsoft AD, enables your directory-aware workloads and AWS resources to use managed Active Directory in the AWS Cloud. It is not used to deploy resources.

Amazon Lightsail - Amazon Lightsail is designed to be the easiest way to launch and manage a virtual private server with AWS. It is not suited to deploying more complex, multi-resource architectures, which AWS CloudFormation handles through templates.

AWS CodeDeploy - AWS CodeDeploy is a service that automates code deployments to any instance, including Amazon EC2 instances and instances running on-premises. Unlike AWS CloudFormation, it does not deal with infrastructure configuration and orchestration.

341
Q

Which of the following will help you control the incoming traffic to an Amazon EC2 instance?

a. Route Table
b. Network access control list (network ACL)
c. Security Group
d. AWS Resource Group

A

c. Security Group

A security group acts as a virtual firewall for your Amazon EC2 instances to control incoming and outgoing traffic. Inbound rules control the incoming traffic to your instance, and outbound rules control the outgoing traffic from your instance. When you launch an instance, you can specify one or more security groups. If you don’t specify a security group, Amazon EC2 uses the default security group. You can add rules to each security group that allows traffic to or from its associated instances. You can modify the rules for a security group at any time. New and modified rules are automatically applied to all instances that are associated with the security group. When Amazon EC2 decides whether to allow traffic to reach an instance, it evaluates all of the rules from all of the security groups that are associated with the instance.

Security is a shared responsibility between AWS and you. AWS provides security groups as one of the tools for securing your instances, and you need to configure them to meet your security needs. If you have requirements that aren’t fully met by security groups, you can maintain your own firewall on any of your instances in addition to using security groups.
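
Adding an inbound rule is a one-call operation. A hedged boto3 sketch (the security group ID is a placeholder):

import boto3

ec2 = boto3.client("ec2")

# Allow inbound HTTPS from anywhere on a hypothetical security group.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTPS from anywhere"}],
    }],
)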

Incorrect options:

AWS Resource Group - You can use AWS Resource Groups to organize your AWS resources. Resource groups make it easier to manage and automate tasks on large numbers of resources at one time. Resource Groups feature permissions are at the account level. As long as users who are sharing your account have the correct IAM permissions, they can work with the resource groups that you create. Resource Groups are for grouping resources for managing the resources. They do not provide access to Amazon EC2 instances.

Network access control list (network ACL) - A Network access control list (network ACL) is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets. You might set up network ACLs with rules similar to your security groups in order to add an additional layer of security to your VPC.

Route Table - A Route table contains a set of rules, called routes, that are used to determine where network traffic from your VPC is directed. You can explicitly associate a subnet with a particular route table. Otherwise, the subnet is implicitly associated with the main route table.

Each route in a route table specifies the range of IP addresses where you want the traffic to go (the destination) and the gateway, network interface, or connection through which to send the traffic (the target).

342
Q

A customer has created a VPC and a subnet within AWS Cloud. Which of the following statements is correct?

a. A subnet spans all of the Availability Zones (AZ) in the Region whereas an Amazon Virtual Private Cloud (Amazon VPC) spans only one Availability Zone (AZ) in the Region
b. An Amazon Virtual Private Cloud (Amazon VPC) spans all of the Availability Zones (AZ) in the Region whereas a subnet spans only one Availability Zone (AZ) in the Region
c. Both the Amazon Virtual Private Cloud (Amazon VPC) and the subnet span all of the Availability Zones (AZ) in the Region
d. Both the Amazon Virtual Private Cloud (Amazon VPC) and the subnet span only one Availability Zone (AZ) in the Region

A

b. An Amazon Virtual Private Cloud (Amazon VPC) spans all of the Availability Zones (AZ) in the Region whereas a subnet spans only one Availability Zone (AZ) in the Region

Amazon Virtual Private Cloud (Amazon VPC) is a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define. You have complete control over your virtual networking environment, including the selection of your IP address range, creation of subnets, and configuration of route tables and network gateways. An Amazon Virtual Private Cloud (Amazon VPC) spans all of the Availability Zones (AZ) in the Region.

A subnet is a range of IP addresses within your Amazon Virtual Private Cloud (Amazon VPC). A subnet spans only one Availability Zone (AZ) in the Region.
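
The scoping difference shows up directly in the API: a VPC is created per Region, while each subnet names one Availability Zone. A minimal boto3 sketch (the CIDR blocks and AZ names are illustrative):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# The VPC is regional; each subnet is pinned to exactly one AZ.
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24", AvailabilityZone="us-east-1a")
ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.2.0/24", AvailabilityZone="us-east-1b")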

Amazon Virtual Private Cloud (Amazon VPC) and Subnet Overview: https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html

343
Q

A healthcare company wants to implement a continuous replication based disaster recovery mechanism and provide fast, reliable recovery of physical, virtual, and cloud-based servers into AWS Cloud. Which of the following represents the best-fit solution for this use case?

a. CloudCover Disaster Recovery
b. AWS Storage Gateway
c. CloudEndure Disaster Recovery
d. AWS Snowball Edge

A

c. CloudEndure Disaster Recovery

CloudEndure Disaster Recovery, available from the AWS Marketplace, continuously replicates server-hosted applications and server-hosted databases from any source into AWS using block-level replication of the underlying server. CloudEndure Disaster Recovery enables you to use AWS Cloud as a disaster recovery Region for an on-premises workload and its environment. It can also be used for disaster recovery of AWS hosted workloads if they consist only of applications and databases hosted on EC2 (i.e. not RDS).

Features of CloudEndure Disaster Recovery:
1. Continuous replication: CloudEndure Disaster Recovery provides continuous, asynchronous, block-level replication of your source machines into a staging area. This allows you to achieve sub-second Recovery Point Objectives (RPOs), since up-to-date applications are always ready to be spun up on AWS if a disaster strikes.
2. Low-cost staging area: Data is continually kept in sync in a lightweight staging area in your target AWS Region. The staging area contains low-cost resources that are automatically provisioned and managed by CloudEndure Disaster Recovery. This eliminates the need for duplicate resources and significantly reduces your disaster recovery total cost of ownership (TCO).
3. Automated machine conversion and orchestration: In the event of a disaster or drill, CloudEndure Disaster Recovery triggers a highly automated machine conversion process and a scalable orchestration engine that quickly spins up thousands of machines in your target AWS Region in parallel. This enables Recovery Time Objectives (RTOs) of minutes. Unlike application-level solutions, CloudEndure Disaster Recovery replicates entire machines, including OS, system state configuration, system disks, databases, applications, and files.
4. Point-in-time recovery: Granular point-in-time recovery allows you to recover applications and IT environments that have been corrupted as a result of accidental system changes, ransomware, or other malicious attacks. In such cases, you can launch applications from a previous consistent point in time rather than launching applications in their most up-to-date state. During the recovery, you can select either the latest state or an earlier state from a list of points in time.
5. Easy, non-disruptive drills: With CloudEndure Disaster Recovery, you can conduct disaster recovery drills without disrupting your source environment or risking data loss. During drills, CloudEndure Disaster Recovery spins up machines in your target AWS Region in complete isolation to avoid network conflicts and performance impact.
6. Wide application and infrastructure support: Because CloudEndure Disaster Recovery replicates data at the block level, you can use it for all applications and databases that run on supported versions of Windows and Linux OS.

CloudEndure Disaster Recovery: https://docs.aws.amazon.com/whitepapers/latest/disaster-recovery-workloads-on-aws/disaster-recovery-options-in-the-cloud.html

Incorrect options:

CloudCover Disaster Recovery - This is a made-up option and has been added as a distractor.

344
Q

Which of the following are components of an AWS Site-to-Site VPN? (Select two)

a. Network Address Translation gateway (NAT gateway)
b. Internet gateway
c. AWS storage gateway
d. Virtual private gateway (VGW)
e. Customer gateway

A

d. Virtual private gateway (VGW)
e. Customer gateway

AWS Site-to-Site VPN enables you to securely connect your on-premises network or branch office site to your Amazon Virtual Private Cloud (Amazon VPC). VPN Connections are a good solution if you have an immediate need, and have low to modest bandwidth requirements. This connection goes over the public internet. Virtual private gateway (VGW) / Transit Gateway and Customer Gateway are the components of an AWS Site-to-Site VPN.

A virtual private gateway (VGW) is the VPN concentrator on the Amazon side of the AWS Site-to-Site VPN connection. A customer gateway is a resource in AWS that provides information to AWS about your Customer gateway device.
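
The two components map directly onto API calls. A hedged boto3 sketch (the on-premises IP and ASN are placeholders; attaching the virtual private gateway to a VPC is omitted):

import boto3

ec2 = boto3.client("ec2")

# Customer gateway: tells AWS about your on-premises VPN device.
cgw = ec2.create_customer_gateway(BgpAsn=65000, PublicIp="203.0.113.12", Type="ipsec.1")

# Virtual private gateway: the VPN concentrator on the Amazon side.
vgw = ec2.create_vpn_gateway(Type="ipsec.1")

# The Site-to-Site VPN connection ties the two components together.
ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId=cgw["CustomerGateway"]["CustomerGatewayId"],
    VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"],
)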

Components of an AWS Site-to-Site VPN: https://docs.aws.amazon.com/vpn/latest/s2svpn/how_it_works.html

Incorrect options:

AWS storage gateway - AWS storage gateway is a hybrid cloud storage service that connects your existing on-premises environments with the AWS Cloud. Customers use storage gateway to simplify storage management and reduce costs for key hybrid cloud storage use cases.

Network Address Translation gateway (NAT gateway) - A Network Address Translation gateway (NAT gateway) or a NAT Instance can be used in a public subnet in your VPC to enable instances in the private subnet to initiate outbound IPv4 traffic to the Internet. Network Address Translation gateway (NAT gateway) is managed by AWS but NAT Instance is managed by you.

Internet gateway - An internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between instances in your VPC and the internet. It, therefore, imposes no availability risks or bandwidth constraints on your network traffic.

345
Q

As part of a flexible pricing model, AWS offers two types of Savings Plans. Which of the following are the Savings Plans from AWS?

a. Reserved Instances (RI) Savings Plans, EC2 Instance Savings Plans
b. Instance Savings Plans, Storage Savings Plans
c. Compute Savings Plans, EC2 Instance Savings Plans
d. Compute Savings Plans, Storage Savings Plans

A

c. Compute Savings Plans, EC2 Instance Savings Plans

Savings Plans is a flexible pricing model that provides savings of up to 72% on your AWS compute usage. This pricing model offers lower prices on Amazon EC2 instances usage, regardless of instance family, size, OS, tenancy or AWS Region, and also applies to AWS Fargate and AWS Lambda usage.

Savings Plans offer significant savings over On-Demand, just like EC2 Reserved Instances, in exchange for a commitment to use a specific amount of compute power (measured in $/hour) for a one or three-year period. You can sign up for Savings Plans for a 1- or 3-year term and easily manage your plans by taking advantage of recommendations, performance reporting and budget alerts in the AWS Cost Explorer.

AWS offers two types of Savings Plans:
1. Compute Savings Plans provide the most flexibility and help to reduce your costs by up to 66%. These plans automatically apply to EC2 instance usage regardless of instance family, size, AZ, region, OS or tenancy, and also apply to Fargate and Lambda usage. For example, with Compute Savings Plans, you can change from C4 to M5 instances, shift a workload from EU (Ireland) to EU (London), or move a workload from EC2 to Fargate or Lambda at any time and automatically continue to pay the Savings Plans price.
2. EC2 Instance Savings Plans provide the lowest prices, offering savings up to 72% in exchange for a commitment to the usage of individual instance families in a region (e.g. M5 usage in N. Virginia). This automatically reduces your cost on the selected instance family in that region regardless of AZ, size, OS or tenancy. EC2 Instance Savings Plans give you the flexibility to change your usage between instances within a family in that region. For example, you can move from c5.xlarge running Windows to c5.2xlarge running Linux and automatically benefit from the Savings Plans prices.

How Savings Plans Work: https://aws.amazon.com/savingsplans/

346
Q

What foundational capability under the operations perspective is part of the AWS Cloud Adoption Framework (AWS CAF)?

a. Performance and capacity management
b. Application portfolio management
c. Vulnerability management
d. Platform engineering

A

a. Performance and capacity management

The AWS Cloud Adoption Framework (AWS CAF) leverages AWS experience and best practices to help you digitally transform and accelerate your business outcomes through innovative use of AWS. Use the AWS CAF to identify and prioritize transformation opportunities, evaluate and improve your cloud readiness, and iteratively evolve your transformation roadmap.

AWS CAF groups its capabilities in six perspectives: Business, People, Governance, Platform, Security, and Operations. Each perspective comprises a set of capabilities that functionally related stakeholders own or manage in your cloud transformation journey.

The Operations perspective helps ensure that your cloud services are delivered at a level that meets the needs of your business. Performance and capacity management under the Operations perspective is part of the AWS Cloud Adoption Framework (AWS CAF).

AWS Cloud Adoption Framework (AWS CAF) - Foundational capabilities: https://docs.aws.amazon.com/whitepapers/latest/overview-aws-cloud-adoption-framework/foundational-capabilities.html

Incorrect options:

Vulnerability management - Vulnerability management is a foundational capability under the Security perspective for the AWS Cloud Adoption Framework (AWS CAF).

Platform engineering - Platform engineering is a foundational capability under the Platform perspective for the AWS Cloud Adoption Framework (AWS CAF).

Application portfolio management - Application Portfolio Management is a foundational capability under the Governance perspective for the AWS Cloud Adoption Framework (AWS CAF).

347
Q

AWS Marketplace facilitates which of the following use-cases? (Select two)

a. Purchase compliance documents from third-party vendors
b. Sell Software as a Service (SaaS) solutions to AWS customers
c. Raise request for purchasing AWS Direct Connect connection
d. AWS customer can buy software that has been bundled into customized Amazon Machine Image (AMIs) by the AWS Marketplace sellers
e. Buy Amazon EC2 Standard Reserved Instances (RI)

A

b. Sell Software as a Service (SaaS) solutions to AWS customers
d. AWS customer can buy software that has been bundled into customized Amazon Machine Image (AMIs) by the AWS Marketplace sellers

AWS Marketplace is a digital catalog with thousands of software listings from independent software vendors that make it easy to find, test, buy, and deploy software that runs on AWS. The AWS Marketplace enables qualified partners to market and sell their software to AWS Customers.

AWS Marketplace offers two ways for sellers to deliver software to customers: Amazon Machine Image (AMI) and Software as a Service (SaaS).

Amazon Machine Image (AMI): Offering an AMI is the preferred option for listing products in AWS Marketplace. Partners have the option for free or paid products. Partners can offer paid products charged by the hour or month. Bring-Your-Own-License (BYOL) is also available and enables customers with existing software licenses to easily migrate to AWS.

Software as a Service (SaaS): If you offer a SaaS solution running on AWS (and are unable to build your product into an AMI) the SaaS listing offers our partners a way to market their software to customers.

Incorrect options:

Purchase compliance documents from third-party vendors - There is no third party vendor for providing compliance documents. AWS Artifact is your go-to, central resource for compliance-related information that matters to you. It provides on-demand access to AWS’ security and compliance reports and select online agreements.

Buy Amazon EC2 Standard Reserved Instances (RI) - Amazon EC2 Standard Reserved Instances (RI) can be bought from the Amazon EC2 console at https://console.aws.amazon.com/ec2/

Raise request for purchasing AWS Direct Connect connection - AWS Direct Connect connection can be raised from the AWS management console at https://console.aws.amazon.com/directconnect/v2/home

348
Q

An engineering team would like to cost-effectively run hundreds of thousands of batch computing workloads on AWS. As a Cloud Practitioner, which AWS service would you use for this task?

a. AWS Fargate
b. Amazon Lightsail
c. AWS Batch
d. AWS Lambda

A

c. AWS Batch

AWS Batch enables developers, scientists, and engineers to easily and efficiently run hundreds of thousands of batch computing jobs on AWS.

You can use AWS Batch to plan, schedule, and execute your batch computing workloads across the full range of AWS compute services. AWS Batch dynamically provisions the optimal quantity and type of compute resources (for example, memory-optimized or CPU-optimized instances) based on the volume and specific resource requirements of the batch jobs submitted, and optimizes job distribution accordingly.

Please review the common use cases for AWS Batch: https://aws.amazon.com/batch/

Incorrect options:

AWS Lambda - AWS Lambda lets you run code without provisioning or managing servers. It can be used to run batch jobs but has a time limit and limited runtimes. It is usually used for smaller batch jobs.

Amazon Lightsail - Amazon Lightsail is designed to be the easiest way to launch and manage a virtual private server (VPS) with AWS. Amazon Lightsail plans include everything you need to jumpstart your project – a virtual machine, SSD- based storage, data transfer, Domain Name System (DNS) management, and a static IP address – for a low, predictable price. It is not used to run batch jobs.

AWS Fargate - AWS Fargate is a compute engine for Amazon Elastic Container Service (Amazon ECS) that allows you to run containers without having to manage servers or clusters. You can run batch jobs on AWS Fargate, but it is more expensive than AWS Batch.

349
Q

Which of the following AWS services can be used to forecast your AWS account usage and costs?

a. AWS Cost & Usage Report (AWS CUR)
b. AWS Cost Explorer
c. AWS Budgets
d. AWS Pricing Calculator

A

b. AWS Cost Explorer

AWS Cost Explorer has an easy-to-use interface that lets you visualize, understand, and manage your AWS costs and usage over time. AWS Cost Explorer includes a default report that helps you visualize the costs and usage associated with your top five cost-accruing AWS services, and gives you a detailed breakdown of all services in the table view. The reports let you adjust the time range to view historical data going back up to twelve months to gain an understanding of your cost trends. AWS Cost Explorer also supports forecasting to get a better idea of what your costs and usage may look like in the future so that you can plan.

AWS Cost Explorer Features: https://aws.amazon.com/aws-cost-management/aws-cost-explorer/

Incorrect options:

AWS Cost & Usage Report (AWS CUR) - The AWS Cost & Usage Report (AWS CUR) contains the most comprehensive set of cost and usage data available. You can use Cost and Usage Reports to publish your AWS billing reports to an Amazon Simple Storage Service (Amazon S3) bucket that you own. You can receive reports that break down your costs by the hour or month, by product or product resource, or by tags that you define yourself. AWS updates the report in your bucket once a day in a comma-separated value (CSV) format. AWS Cost and Usage Reports cannot forecast your AWS account cost and usage.

AWS Budgets - AWS Budgets gives the ability to set custom budgets that alert you when your costs or usage exceed (or are forecasted to exceed) your budgeted amount. You can also use AWS Budgets to set reservation utilization or coverage targets and receive alerts when your utilization drops below the threshold you define. Budgets can be created at the monthly, quarterly, or yearly level, and you can customize the start and end dates. You can further refine your budget to track costs associated with multiple dimensions, such as AWS service, linked account, tag, and others. AWS Budgets cannot forecast your AWS account cost and usage.

AWS Pricing Calculator - AWS Pricing Calculator lets you explore AWS services and create an estimate for the cost of your use cases on AWS. You can model your solutions before building them, explore the price points and calculations behind your estimate, and find the available instance types and contract terms that meet your needs. This enables you to make informed decisions about using AWS. You can plan your AWS costs and usage or price out setting up a new set of instances and services. You cannot use this service to forecast your AWS account cost and usage.

350
Q

Which of the following statements is correct regarding the AWS pricing policy for data transfer charges into or out of an AWS Region?

a. Both inbound data transfer and outbound data transfer are charged
b. Only outbound data transfer is charged
c. Only inbound data transfer is charged
d. Neither inbound nor outbound data transfer are charged

A

b. Only outbound data transfer is charged

One of the main benefits of cloud services is the ability it gives you to optimize costs to match your needs, even as those needs change. AWS services do not have complex dependencies or licensing requirements, so you can get exactly what you need to build innovative, cost-effective solutions using the latest technology.

There are three fundamental drivers of cost with AWS: compute, storage, and outbound data transfer. These characteristics vary somewhat, depending on the AWS product and pricing model you choose. Outbound data to the internet from all AWS regions is billed at region-specific, tiered data transfer rates. Inbound data transfer into all AWS regions from the internet is free.

351
Q

An organization is planning to move its infrastructure from its on-premises data center to AWS Cloud. As a Cloud Practitioner, which options would you recommend so that the organization can identify the right AWS services to build solutions on AWS Cloud? (Select two)

a. AWS CloudTrail
b. Amazon CloudWatch
c. AWS Partner Network (APN)
d. AWS Service Catalog
e. AWS Organizations

A

c. AWS Partner Network (APN)
d. AWS Service Catalog

AWS Service Catalog - AWS Service Catalog allows organizations to create and manage catalogs of IT services that are approved for use on AWS. These IT services can include everything from virtual machine images, servers, software, and databases to complete multi-tier application architectures.

AWS Partner Network (APN) - Organizations can take help from the AWS Partner Network (APN) to identify the right AWS services to build solutions on AWS Cloud. AWS Partner Network (APN) is the global partner program for technology and consulting businesses that leverage Amazon Web Services to build solutions and services for customers.

352
Q

A firm wants to maintain the same data on Amazon Simple Storage Service (Amazon S3) between its production account and multiple test accounts. Which technique should you choose to copy data into multiple test accounts while retaining object metadata?

a. Amazon S3 Replication
b. Amazon S3 Bucket Policy
c. Amazon S3 Storage Classes
d. Amazon S3 Transfer Acceleration (Amazon S3TA)

A

a. Amazon S3 Replication

Replication enables automatic, asynchronous copying of objects across Amazon S3 buckets. Buckets that are configured for object replication can be owned by the same AWS account or by different accounts. You can copy objects between different AWS Regions or within the same Region. You can use replication to make copies of your objects that retain all metadata, such as the original object creation time and version IDs. This capability is important if you need to ensure that your replica is identical to the source object.

Exam Alert:

Amazon S3 supports two types of replication: S3 cross-region replication (S3 CRR) vs S3 same-region replication (S3 SRR). Please review the differences between S3 SRR and S3 CRR: https://docs.aws.amazon.com/AmazonS3/latest/dev/replication.html
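For illustration, replication is configured on the source bucket with a rule pointing at each destination. A minimal boto3 sketch for one cross-account destination; the bucket names, account ID, and IAM role ARN are hypothetical placeholders, and versioning must already be enabled on both buckets:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_replication(
    Bucket="prod-data-bucket",  # source bucket (placeholder name)
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111122223333:role/s3-replication-role",
        "Rules": [{
            "ID": "replicate-to-test-account",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {"Prefix": ""},  # replicate all objects
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {
                "Bucket": "arn:aws:s3:::test-data-bucket",
                "Account": "444455556666",  # test account (placeholder)
                # Make the destination account own the replicas
                "AccessControlTranslation": {"Owner": "Destination"},
            },
        }],
    },
)
```

To copy data into multiple test accounts, you would add one rule per destination bucket, each pointing at a different test account.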

Incorrect options:

Amazon S3 Bucket Policy - A bucket policy is a resource-based AWS Identity and Access Management (IAM) policy. You add a bucket policy to a bucket to grant other AWS accounts or IAM users access permissions for the bucket and the objects in it. Object permissions apply only to the objects that the bucket owner creates. You cannot replicate data using a bucket policy.

Amazon S3 Transfer Acceleration (Amazon S3TA) - Amazon S3 Transfer Acceleration (Amazon S3TA) enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket. Transfer Acceleration takes advantage of Amazon CloudFront’s globally distributed edge locations. It speeds up access between end users and S3; it is not used for replicating data.

Amazon S3 Storage Classes - Amazon S3 offers a range of storage classes designed for different use cases. Each storage class has a defined set of rules for storing and encrypting data at a given price point. Based on the use case, customers can choose the storage class that best suits their business requirements.

These include S3 Standard for general-purpose storage of frequently accessed data; S3 Intelligent-Tiering for data with unknown or changing access patterns; S3 Standard-Infrequent Access (S3 Standard-IA) and S3 One Zone-Infrequent Access (S3 One Zone-IA) for long-lived, but less frequently accessed data; and Amazon S3 Glacier (S3 Glacier) and Amazon S3 Glacier Deep Archive (S3 Glacier Deep Archive) for long-term archive and digital preservation. You cannot replicate data using storage classes.

353
Q

A cyber forensics team has detected that AWS owned IP-addresses are being used to carry out malicious attacks. As this constitutes prohibited use of AWS services, which of the following is the correct solution to address this issue?
a. Write an email to Jeff Bezos, the founder of Amazon, with the details of the incident
b. Contact AWS Support
c. Contact AWS Developer Forum moderators
d. Contact AWS Abuse Team

A

d. Contact AWS Abuse Team

The AWS Abuse team can assist you when AWS resources are used to engage in abusive behavior.

Please see details of the various scenarios that the AWS Abuse team can address: https://aws.amazon.com/premiumsupport/knowledge-center/report-aws-abuse/

354
Q

A production company would like to establish an AWS managed virtual private network (VPN) service between its on-premises network and AWS. Which item needs to be set up on the company’s side?

a. A VPC endpoint interface
b. A customer gateway
c. A security group
d. A virtual private gateway (VGW)

A

b. A customer gateway

A customer gateway device is a physical or software appliance on your side of a Site-to-Site VPN connection. You or your network administrator must configure the device to work with the Site-to-Site VPN connection.

You can enable access to your remote network from your VPC by creating an AWS Site-to-Site VPN (Site-to-Site VPN) connection, and configuring routing to pass traffic through the connection.
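For illustration, registering the on-premises device with AWS is a single API call. A minimal boto3 sketch; the public IP and BGP ASN are placeholders for the values of the company's actual device:

```python
import boto3

ec2 = boto3.client("ec2")

# Tell AWS about the on-premises VPN device (the customer gateway).
cgw = ec2.create_customer_gateway(
    Type="ipsec.1",           # the supported Site-to-Site VPN type
    PublicIp="203.0.113.12",  # static public IP of the on-prem device
    BgpAsn=65000,             # the on-prem network's BGP ASN
)
print(cgw["CustomerGateway"]["CustomerGatewayId"])
```

The virtual private gateway and the Site-to-Site VPN connection itself are then created on the AWS side (for example, with create_vpn_gateway and create_vpn_connection).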

Incorrect options:

A security group - A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. It is not a component of a connection between on-premises network and AWS.

A VPC endpoint interface - An interface VPC endpoint (interface endpoint) enables you to connect to services powered by AWS PrivateLink. It is not a component of a connection between on-premises network and AWS.

A virtual private gateway (VGW) - A virtual private gateway (VGW) is the VPN concentrator on the Amazon side of a Site-to-Site VPN connection. It is set up on the AWS side, not on the company’s side.

355
Q

Which of the following describes an Availability Zone (AZ) in the AWS Cloud?

a. One or more data centers in multiple locations
b. One or more data centers in the same location
c. One or more server racks in multiple locations
d. One or more server racks in the same location

A

b. One or more data centers in the same location

An Availability Zone (AZ) is one or more discrete data centers with redundant power, networking, and connectivity in an AWS Region. An AWS Region refers to a physical location around the world where AWS clusters data centers. AZs give customers the ability to operate production applications and databases that are more highly available, fault-tolerant, and scalable than would be possible from a single data center. All AZs in an AWS Region are interconnected with high-bandwidth, low-latency networking, over fully redundant, dedicated metro fiber providing high-throughput, low-latency networking between AZs.

AWS Regions and Availability Zones (AZs) Explained: https://aws.amazon.com/about-aws/global-infrastructure/regions_az/

356
Q

A media company uses Amazon Simple Storage Service (Amazon S3) for storing all its data. Which storage class should it consider for cost-optimal storage of the data that has random access patterns?

a. Amazon S3 Intelligent-Tiering (S3 Intelligent-Tiering)
b. Amazon S3 Standard (S3 Standard)
c. Amazon S3 Random Access (S3 Random-Access)
d. Amazon S3 Standard-Infrequent Access (S3 Standard-IA)

A

a. Amazon S3 Intelligent-Tiering (S3 Intelligent-Tiering)

S3 Intelligent-Tiering is designed for data with unknown or changing access patterns: it optimizes costs by automatically moving objects between access tiers as access patterns change, making it the cost-optimal choice for data with random access patterns. Note that there is no storage class called “S3 Random Access”.

357
Q

An e-commerce company has its on-premises data storage on an NFS file system that is accessed in parallel by multiple applications. The company is looking at moving the applications and data stores to AWS Cloud.

Which storage service should the company use to move their files to AWS Cloud seamlessly if the application is hosted on Amazon Elastic Compute Cloud (Amazon EC2) instances?

a. AWS Storage Gateway
b. Amazon Simple Storage Service (Amazon S3)
c. Amazon Elastic File System (Amazon EFS)
d. Amazon Elastic Block Store (Amazon EBS)

A

c. Amazon Elastic File System (Amazon EFS)

Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources. It is built to scale on-demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth.

Amazon EFS is designed to provide massively parallel shared access to thousands of Amazon EC2 instances, enabling your applications to achieve high levels of aggregate throughput and input/output operations per second (IOPS) with consistently low latencies.

Amazon EFS is well suited to support a broad spectrum of use cases from home directories to business-critical applications. Customers can use EFS to lift and shift existing enterprise applications to the AWS Cloud. Other use cases include big data analytics, web serving and content management, application development, and testing, media and entertainment workflows, database backups, and container storage.

Amazon EFS offers two storage classes: the Standard storage class, and the Infrequent Access storage class (EFS IA). EFS IA provides price/performance that’s cost-optimized for files not accessed every day. By simply enabling EFS Lifecycle Management on your file system, files not accessed according to the lifecycle policy you choose will be automatically and transparently moved into EFS IA.

How Amazon EFS Works: https://aws.amazon.com/efs/
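For illustration, creating a file system and exposing it to EC2 instances takes two calls. A minimal boto3 sketch; the creation token and subnet ID are placeholders:

```python
import boto3

efs = boto3.client("efs")

# Create the NFS file system; the token makes the call idempotent.
fs = efs.create_file_system(
    CreationToken="ecommerce-shared-fs",  # placeholder token
    PerformanceMode="generalPurpose",
)

# One mount target per Availability Zone lets EC2 instances in that
# subnet mount the file system over NFS. In practice, wait until the
# file system's LifeCycleState is 'available' before this call.
efs.create_mount_target(
    FileSystemId=fs["FileSystemId"],
    SubnetId="subnet-0123456789abcdef0",  # placeholder subnet ID
)
```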

Incorrect options:

Amazon Elastic Block Store (Amazon EBS) - Amazon Elastic Block Store (Amazon EBS) is an easy-to-use, high-performance, block-storage service designed for use with Amazon Elastic Compute Cloud (EC2) for both throughput and transaction-intensive workloads at any scale. A broad range of workloads, such as relational and non-relational databases, enterprise applications, containerized applications, big data analytics engines, file systems, and media workflows are widely deployed on Amazon EBS. EBS is a block storage service and not a file storage service like EFS.

Amazon Simple Storage Service (Amazon S3) - Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance. This means customers of all sizes and industries can use it to store and protect any amount of data for a range of use cases, such as data lakes, websites, mobile applications, backup and restore, archive, enterprise applications, IoT devices, and big data analytics. As an object store, Amazon S3 does not provide NFS file system semantics or parallel file access, so Amazon EFS is the right choice here.

AWS Storage Gateway - AWS Storage Gateway is a hybrid cloud storage service that gives you on-premises access to virtually unlimited cloud storage. Customers use Storage Gateway to simplify storage management and reduce costs for key hybrid cloud storage use cases. These include moving backups to the cloud, using on-premises file shares backed by cloud storage, and providing low latency access to data in AWS for on-premises applications.

358
Q

Which AWS service can be used to host a static website with the LEAST effort?

a. AWS Storage Gateway
b. Amazon Elastic File System (Amazon EFS)
c. Amazon S3 Glacier
d. Amazon Simple Storage Service (Amazon S3)

A

d. Amazon Simple Storage Service (Amazon S3)

Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance. Amazon S3’s flat, non-hierarchical structure and various management features are helping customers of all sizes and industries organize their data in ways that are valuable to their businesses and teams. To host a static website on Amazon S3, you configure an Amazon S3 bucket for website hosting and then upload your website content to the bucket. When you configure a bucket as a static website, you must enable website hosting, set permissions, and create and add an index document.

Hosting a static website on Amazon S3: https://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html
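For illustration, enabling website hosting on an existing bucket is a single API call. A minimal boto3 sketch; the bucket and document names are placeholders, and public-read permissions (for example, a bucket policy) must be configured separately:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_website(
    Bucket="my-static-site-bucket",  # placeholder bucket name
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)
```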

Incorrect options:

AWS Storage Gateway - AWS Storage Gateway is a hybrid cloud storage service that gives you on-premises access to virtually unlimited cloud storage. Customers use Storage Gateway to simplify storage management and reduce costs for key hybrid cloud storage use cases. It helps on-premises applications to access data on AWS Cloud. It cannot be used to host a website.

Amazon Elastic File System (Amazon EFS) - Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources. Amazon EFS cannot directly host a website; it would need to be mounted on Amazon EC2 instances that serve the content.

Amazon S3 Glacier - Amazon S3 Glacier is a secure, durable, and extremely low-cost Amazon S3 cloud storage class for data archiving and long-term backup. Being an archival storage class, it cannot be used for hosting a website.

359
Q

An e-commerce company wants to assess its applications deployed on Amazon Elastic Compute Cloud (Amazon EC2) instances for vulnerabilities and deviations from AWS best practices. Which AWS service can be used to facilitate this?

a. Amazon Inspector
b. AWS CloudHSM
c. AWS Secrets Manager
d. AWS Trusted Advisor

A

a. Amazon Inspector

Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. Amazon Inspector automatically assesses applications for exposure, vulnerabilities, and deviations from best practices.

Overview of Amazon Inspector: https://aws.amazon.com/inspector/

360
Q

A social media analytics company wants to migrate to a serverless stack on AWS. Which of the following scenarios can be handled by AWS Lambda? (Select two)

a. AWS Lambda can be used to store sensitive environment variables
b. You can install Container Services on AWS Lambda
c. AWS Lambda can be used for preprocessing of data before it is stored in Amazon S3 buckets
d. AWS Lambda can be used to execute code in response to events such as updates to DynamoDB tables
e. You can install low latency databases on AWS Lambda

A

c. AWS Lambda can be used for preprocessing of data before it is stored in Amazon S3 buckets
d. AWS Lambda can be used to execute code in response to events such as updates to DynamoDB tables

AWS Lambda lets you run code without provisioning or managing servers (Lambda is serverless). With Lambda, you can run code for virtually any type of application or backend service - all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale your code with high availability. This functionality makes it an extremely useful service capable of being a serverless backend for websites, data preprocessing, real-time data transformations when used with streaming data, etc.

How AWS Lambda Works: https://aws.amazon.com/lambda/

AWS Lambda can be used to execute code in response to events such as updates to DynamoDB tables

AWS Lambda can be configured to execute code in response to events, such as changes to Amazon S3 buckets, updates to an Amazon DynamoDB table, or custom events generated by your applications or devices.

AWS Lambda can be used for preprocessing of data before it is stored in Amazon S3 buckets

AWS Lambda can be used to run preprocessing scripts to filter, sort or transform data before sending it to downstream applications/services.
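To make the event-driven model concrete, here is a minimal, hypothetical Python handler for a DynamoDB Streams trigger. Lambda invokes handler(event, context) with a batch of records; the record layout follows the DynamoDB Streams event format:

```python
def handler(event, context):
    # Each invocation carries a batch of stream records.
    for record in event["Records"]:
        if record["eventName"] == "INSERT":
            # NewImage holds the item as written to the table.
            new_image = record["dynamodb"].get("NewImage", {})
            print("New item inserted:", new_image)
    return {"processed": len(event["Records"])}
```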

Incorrect options:

You can install low latency databases on AWS Lambda - AWS Lambda is serverless, so the underlying hardware and its workings are not exposed to the customer. Installing software is not possible since customers do not have access to the underlying servers on which Lambda executes the code.

You can install Container Services on AWS Lambda - As discussed above, AWS Lambda cannot be used for installing any software, since the underlying hardware/software might change for each request. However, it is possible to bundle the necessary libraries and dependencies with the code that runs on Lambda.

AWS Lambda can be used to store sensitive environment variables - AWS Lambda is not a storage service and does not offer capabilities to store data. However, it is possible to read and decrypt/encrypt data using scripts in Lambda.

361
Q

A multi-national company has just moved its infrastructure from its on-premises data center to AWS Cloud. As part of the shared responsibility model, AWS is responsible for which of the following?
a. Physical and Environmental controls
b. Configuring customer applications
c. Service and Communications Protection or Zone Security
d. Patching guest OS

A

a. Physical and Environmental controls

As part of the shared responsibility model, Physical and Environmental controls are part of the inherited controls and hence these are the responsibility of AWS.

Incorrect options:

Service and Communications Protection or Zone Security - Customers are responsible for Service and Communications Protection or Zone Security, which may require the customers to route or zone data within specific security environments.

362
Q

Which of the following statement is correct for a Security Group and a Network Access Control List (Network ACL)?

a. Security Group acts as a firewall at the instance level whereas Network Access Control List (Network ACL) acts as a firewall at the subnet level
b. Security Group acts as a firewall at the VPC level whereas Network Access Control List (Network ACL) acts as a firewall at the Availability Zone (AZ) level
c. Security Group acts as a firewall at the Availability Zone (AZ) level whereas Network Access Control List (Network ACL) acts as a firewall at the VPC level
d. Security Group acts as a firewall at the subnet level whereas Network Access Control List (Network ACL) acts as a firewall at the instance level

A

a. Security Group acts as a firewall at the instance level whereas Network Access Control List (Network ACL) acts as a firewall at the subnet level

A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. When you launch an instance in a VPC, you can assign up to five security groups to the instance. Security groups act at the instance level, not the subnet level. A network access control list (network ACL) is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets (i.e. it works at subnet level).

Security Group Overview: https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html

Network Access Control List (network NACL) Overview: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html
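To illustrate the different levels the two controls operate at, here is a minimal boto3 sketch; the security group and network ACL IDs are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Instance level: allow inbound HTTPS on a security group.
# Security groups are stateful, so return traffic is allowed
# automatically.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder ID
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# Subnet level: allow the same traffic on a network ACL.
# Network ACLs are stateless, so a separate outbound rule is
# needed for the response traffic.
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",  # placeholder ID
    RuleNumber=100,
    Protocol="6",        # protocol number for TCP
    RuleAction="allow",
    Egress=False,        # this is an inbound rule
    CidrBlock="0.0.0.0/0",
    PortRange={"From": 443, "To": 443},
)
```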

363
Q

Which of the following statements is an AWS best practice when architecting for the Cloud?

a. Security comes last
b. Servers, not services
c. Automation
d. Close coupling

A

c. Automation

Automation should be implemented to improve both your system’s stability and the efficiency of your organization. There are many services to automate application architecture (AWS Elastic Beanstalk, Auto Scaling, AWS Lambda, etc.) to ensure more resiliency, scalability, and performance.

Incorrect options:

Servers, not services - The correct best practice is: “Services, not servers”. AWS recommends developing, managing, and operating applications, especially at scale, using the broad set of compute, storage, database, analytics, applications, and deployment services offered by AWS to move faster and lower IT costs.

Close coupling - The correct best practice is: “Loose coupling”. AWS recommends that, as application complexity increases, IT systems should be designed in a way that reduces interdependencies. Therefore, a change or a failure in one component should not cascade to other components.

Security comes last - Security should never come last; AWS provides many simple ways to improve your security, so you should take advantage of them and implement a high level of security from the start.

364
Q

According to the AWS Shared Responsibility Model, which of the following are responsibilities of the customer for Amazon RDS?

a. Applying patches to the underlying OS
b. Database encryption
c. Applying patches to the Amazon Relational Database Service (Amazon RDS) database
d. Managing the underlying server hardware on which Amazon Relational Database Service (Amazon RDS) runs

A

b. Database encryption

Security and Compliance is a shared responsibility between AWS and the customer. This shared model can help relieve the customer’s operational burden as AWS operates, manages and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates.

Database encryption - Under the AWS Shared Responsibility Model, customers are responsible for managing their data, including data encryption.

AWS Shared Responsibility Model Overview: https://aws.amazon.com/compliance/shared-responsibility-model/

365
Q

Which of the following AWS services support reservations to optimize costs? (Select three)
a. Amazon DocumentDB
b. Amazon Elastic Compute Cloud (Amazon EC2)
c. Amazon Relational Database Service (Amazon RDS)
d. Amazon DynamoDB
e. Amazon Simple Storage Service (Amazon S3)
f. AWS Lambda

A

b. Amazon Elastic Compute Cloud (Amazon EC2)
c. Amazon Relational Database Service (Amazon RDS)
d. Amazon DynamoDB

The following AWS services support reservations to optimize costs:

Amazon EC2 Reserved Instances (RI)

Amazon DynamoDB Reserved Capacity

Amazon ElastiCache Reserved Nodes

Amazon RDS RIs

Amazon Redshift Reserved Nodes

366
Q

Which benefit of Cloud Computing allows AWS to offer lower pay-as-you-go prices as usage from hundreds of thousands of customers is aggregated in the cloud?

a. Go global in minutes
b. Increased speed and agility
c. Trade capital expense for variable expense
d. Massive economies of scale

A

d. Massive economies of scale

Cloud computing is the on-demand delivery of IT resources over the Internet with pay-as-you-go pricing. Instead of buying, owning, and maintaining physical data centers and servers, you can access technology services, such as computing power, storage, and databases, on an as-needed basis.

By using cloud computing, you can achieve a lower variable cost than you can get on your own. Because usage from hundreds of thousands of customers is aggregated in the cloud, providers such as AWS can achieve higher economies of scale, which translates into lower pay-as-you-go prices.

Exam Alert:

Please check out the following six advantages of Cloud Computing. You would certainly be asked questions on the advantages of Cloud Computing compared to a traditional on-premises setup:

Trade fixed expense for variable expense – Instead of having to invest heavily in data centers and servers before you know how you’re going to use them, you can pay only when you consume computing resources, and pay only for how much you consume.

Benefit from massive economies of scale – By using cloud computing, you can achieve a lower variable cost than you can get on your own. Because usage from hundreds of thousands of customers is aggregated in the cloud, providers such as AWS can achieve higher economies of scale, which translates into lower pay as-you-go prices.

Stop guessing capacity – Eliminate guessing on your infrastructure capacity needs. When you make a capacity decision prior to deploying an application, you often end up either sitting on expensive idle resources or dealing with limited capacity. With cloud computing, these problems go away. You can access as much or as little capacity as you need, and scale up and down as required with only a few minutes’ notice.

Increase speed and agility – In a cloud computing environment, new IT resources are only a click away, which means that you reduce the time to make those resources available to your developers from weeks to just minutes. This results in a dramatic increase in agility for the organization, since the cost and time it takes to experiment and develop is significantly lower.

Stop spending money running and maintaining data centers – Focus on projects that differentiate your business, not the infrastructure. Cloud computing lets you focus on your own customers, rather than on the heavy lifting of racking, stacking, and powering servers.

Go global in minutes – Easily deploy your application in multiple regions around the world with just a few clicks. This means you can provide lower latency and a better experience for your customers at minimal cost.

367
Q

Which of the following solutions can you use to connect your on-premises network with AWS Cloud (Select two)?

a. AWS Virtual Private Network (VPN)
b. AWS Direct Connect
c. Amazon Virtual Private Cloud (Amazon VPC)
d. Amazon Route 53
e. Internet Gateway

A

a. AWS Virtual Private Network (VPN)
b. AWS Direct Connect

AWS Direct Connect - AWS Direct Connect is a cloud service solution that makes it easy to establish a dedicated network connection from your premises to AWS. Using AWS Direct Connect, you can establish private connectivity between AWS and your datacenter, office, or colocation environment, which in many cases can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than Internet-based connections.

How AWS Direct Connect Works: https://aws.amazon.com/directconnect/

AWS Virtual Private Network (VPN) - AWS Virtual Private Network (VPN) solutions establish secure connections between on-premises networks, remote offices, client devices, and the AWS global network. AWS VPN is comprised of two services: AWS Site-to-Site VPN and AWS Client VPN. Together, they deliver a highly-available, managed, and elastic cloud VPN solution to protect your network traffic.

How AWS Client VPN Works: https://aws.amazon.com/vpn/

368
Q

Which of the following billing timeframes is applied when running a Windows EC2 on-demand instance?

a. Pay per minute
b. Pay per second
c. Pay per hour
d. Pay per day

A

b. Pay per second

With On-Demand instances, you only pay for the Amazon EC2 instances you use. The use of On-Demand instances frees you from the costs and complexities of planning, purchasing, and maintaining hardware and transforms what are commonly large fixed costs into much smaller variable costs.

When running a Windows EC2 on-demand instance, pay-per-second pricing is applied.

Incorrect options:

Pay per hour - When running a Windows EC2 On-Demand instance, pay-per-second pricing is applied; Windows-based EC2 instances previously followed pay-per-hour pricing.

Pay per minute - Pay per minute pricing is not available for Windows EC2 on-demand instances, or any other type of on-demand EC2 instance.

Pay per day - Pay per day pricing is not available for Windows EC2 on-demand instances, or any other type of on-demand EC2 instance.

369
Q

Which of the following options can be used to access and manage all AWS services (Select three)?

a. AWS Software Development Kit (SDK)
b. AWS Command Line Interface (AWS CLI)
c. AWS Systems Manager
d. AWS Management Console
e. AWS Secrets Manager
f. Amazon API Gateway

A

a. AWS Software Development Kit (SDK)
b. AWS Command Line Interface (AWS CLI)
d. AWS Management Console

AWS services can be accessed in three different ways:

AWS Management Console

This is a simple web interface for accessing AWS services.

AWS Command Line Interface (AWS CLI)

You can access AWS services from the command line and automate service management with scripts.

AWS Software Development Kit (SDK)

You can also access via AWS SDK that provides language-specific abstracted APIs for AWS services.
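For comparison, the same kind of action through the SDK. A minimal Python (boto3) sketch that lists S3 buckets, equivalent to clicking through the S3 console or running aws s3api list-buckets on the CLI:

```python
import boto3

s3 = boto3.client("s3")

# The SDK wraps the AWS API in language-native calls.
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])
```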

370
Q

Which type of cloud computing does Amazon Elastic Compute Cloud (EC2) represent?
a. Software as a Service (SaaS)
b. Network as a Service (NaaS)
c. Infrastructure as a Service (IaaS)
d. Platform as a Service (PaaS)

A

c. Infrastructure as a Service (IaaS)

Cloud Computing can be broadly divided into three types - Infrastructure as a Service (IaaS), Platform as a Service (PaaS), Software as a Service (SaaS).

IaaS contains the basic building blocks for cloud IT. It typically provides access to networking features, computers (virtual or on dedicated hardware), and data storage space. IaaS gives the highest level of flexibility and management control over IT resources.

EC2 gives you full control over the guest operating system, virtual network configuration, storage, data, and applications. So EC2 is an example of an IaaS service.

Please review this overview of the types of Cloud Computing: https://aws.amazon.com/types-of-cloud-computing/

371
Q

Which of the following is a perspective of the AWS Cloud Adoption Framework (AWS CAF)?

a. Business
b. Process
c. Architecture
d. Product

A

a. Business

The AWS Cloud Adoption Framework (AWS CAF) leverages AWS experience and best practices to help you digitally transform and accelerate your business outcomes through innovative use of AWS. AWS CAF identifies specific organizational capabilities that underpin successful cloud transformations.

AWS CAF groups its capabilities in six perspectives: Business, People, Governance, Platform, Security, and Operations.

372
Q

Which Amazon Route 53 routing policy would you use to route traffic to a single resource such as a web server for your website?

a. Failover routing
b. Simple routing
c. Weighted routing
d. Latency-based routing

A

b. Simple routing

Simple routing lets you configure standard DNS records with no special routing logic such as weighted or latency-based routing. Use it when you need to route traffic to a single resource that performs a given function for your domain, such as a web server for your website.

373
Q

Which Amazon Elastic Compute Cloud (Amazon EC2) pricing model is the most cost-effective and flexible with no requirement for a long term resource commitment or upfront payment but still guarantees that instance would not be interrupted?

a. Dedicated Host
b. Reserved Instance (RI)
c. Spot Instance
d. On-demand Instance

A

d. On-demand Instance

An On-Demand Instance is an instance that you use on-demand. You have full control over its lifecycle — you decide when to launch, stop, hibernate, start, reboot, or terminate it. There is no long-term commitment required when you purchase On-Demand Instances. There is no upfront payment and you pay only for the seconds that your On-Demand Instances are running. The price per second for running an On-Demand Instance is fixed. On-demand instances cannot be interrupted.

EC2 Pricing Options Overview: https://aws.amazon.com/ec2/pricing/

374
Q

A start-up would like to quickly deploy a popular technology on AWS. As a Cloud Practitioner, which AWS tool would you use for this task?

a. AWS CodeDeploy
b. AWS Whitepapers
c. AWS Forums
d. AWS Partner Solutions (formerly Quick Starts)

A

d. AWS Partner Solutions (formerly Quick Starts)

AWS Partner Solutions are automated reference deployments built by Amazon Web Services (AWS) solutions architects and AWS Partners. Partner Solutions help you deploy popular technologies to AWS according to AWS best practices. You can reduce hundreds of manual procedures to a few steps and start using your environment within minutes.

AWS Partner Solutions are automated reference deployments for key workloads on the AWS Cloud. Each Partner Solution launches, configures, and runs the AWS compute, network, storage, and other services required to deploy a specific workload on AWS, using AWS best practices for security and availability.

Incorrect options:

AWS Forums - AWS Forums is an AWS community platform where people can help each other. It is not used to deploy technologies on AWS.

AWS CodeDeploy - AWS CodeDeploy is a service that automates code deployments to any instance, including Amazon EC2 instances and instances running on-premises. It is not suited to rapidly deploy popular technologies on AWS ready to be used immediately.

AWS Whitepapers - AWS Whitepapers are technical content authored by AWS and the AWS community to expand your knowledge of the cloud. They include technical whitepapers, technical guides, reference material, and reference architecture diagrams. You can find useful content for your deployment, but it is not a service that will deploy technologies.

375
Q

A team manager needs data about the changes that have taken place for AWS resources in his account during the past two weeks. Which AWS service can help get this data?

a. Amazon Inspector
b. AWS Config
c. Amazon CloudWatch
d. AWS CloudTrail

A

b. AWS Config

AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. Config continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations. With Config, you can review changes in configurations and relationships between AWS resources, dive into detailed resource configuration histories, and determine your overall compliance against the configurations specified in your internal guidelines. This enables you to simplify compliance auditing, security analysis, change management, and operational troubleshooting.

AWS Config helps you answer questions like “What did my AWS resource look like at a point in time?”, whereas AWS CloudTrail helps you answer “Who made an API call to modify this resource?”

Diagrammatic representation of how AWS Config works: https://aws.amazon.com/config/
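For the manager's use case, a minimal boto3 sketch that pulls the two-week configuration history of a single resource; the resource type and ID are placeholders:

```python
from datetime import datetime, timedelta

import boto3

config = boto3.client("config")

# Retrieve recorded configuration changes for one resource over the
# past two weeks.
history = config.get_resource_config_history(
    resourceType="AWS::EC2::SecurityGroup",
    resourceId="sg-0123456789abcdef0",  # placeholder ID
    earlierTime=datetime.utcnow() - timedelta(days=14),
    laterTime=datetime.utcnow(),
)
for item in history["configurationItems"]:
    print(item["configurationItemCaptureTime"], item["configurationItemStatus"])
```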

Incorrect options:

Amazon CloudWatch - You can use Amazon CloudWatch to collect and track metrics, collect and monitor log files, and set alarms. Amazon CloudWatch can monitor AWS resources such as Amazon EC2 instances, Amazon DynamoDB tables, and Amazon RDS DB instances, as well as custom metrics generated by your applications and services, and any log files your applications generate. You can use Amazon CloudWatch to gain system-wide visibility into resource utilization, application performance, and operational health. You can use these insights to react and keep your application running smoothly. However, CloudWatch cannot tell you whether the configuration of a resource has changed or what exactly changed.

376
Q

Which of the following are the advantages of using the AWS Cloud? (Select TWO)

a. Trade operational expense for capital expense
b. AWS is responsible for security in the cloud
c. Limited scaling
d. Increase speed and agility
e. Stop guessing about capacity

A

d. Increase speed and agility
e. Stop guessing about capacity

Exam Alert:

Please check out the following six advantages of Cloud Computing. You would certainly be asked questions on the advantages of Cloud Computing compared to a traditional on-premises setup:

Trade fixed expense for variable expense – Instead of having to invest heavily in data centers and servers before you know how you’re going to use them, you can pay only when you consume computing resources, and pay only for how much you consume.

Benefit from massive economies of scale – By using cloud computing, you can achieve a lower variable cost than you can get on your own. Because usage from hundreds of thousands of customers is aggregated in the cloud, providers such as AWS can achieve higher economies of scale, which translates into lower pay as-you-go prices.

Stop guessing capacity – Eliminate guessing on your infrastructure capacity needs. When you make a capacity decision prior to deploying an application, you often end up either sitting on expensive idle resources or dealing with limited capacity. With cloud computing, these problems go away. You can access as much or as little capacity as you need, and scale up and down as required with only a few minutes’ notice.

Increase speed and agility – In a cloud computing environment, new IT resources are only a click away, which means that you reduce the time to make those resources available to your developers from weeks to just minutes. This results in a dramatic increase in agility for the organization, since the cost and time it takes to experiment and develop is significantly lower.

Stop spending money running and maintaining data centers – Focus on projects that differentiate your business, not the infrastructure. Cloud computing lets you focus on your own customers, rather than on the heavy lifting of racking, stacking, and powering servers.

Go global in minutes – Easily deploy your application in multiple regions around the world with just a few clicks. This means you can provide lower latency and a better experience for your customers at minimal cost.

377
Q

An AWS hardware failure has impacted one of your Amazon Elastic Block Store (Amazon EBS) volumes. Which AWS service will alert you of the affected resources and provide a remedial action?

a. AWS Health Dashboard – Your account health
b. Amazon GuardDuty
c. AWS Trusted Advisor
d. AWS Config

A

a. AWS Health Dashboard – Your account health

The AWS Health Dashboard – Your account health provides alerts and remediation guidance when AWS is experiencing events that may impact your resources, such as a hardware failure affecting an Amazon EBS volume. Unlike the service health page, which shows the general status of AWS services, the account health view gives you a personalized view of events that affect your own AWS resources.

378
Q

Under the AWS Shared Responsibility Model, which of the following is the responsibility of a customer regarding AWS Lambda?

a. Maintain all runtime environments for AWS Lambda functions
b. Patch underlying OS for the AWS Lambda function infrastructure
c. Maintain versions of an AWS Lambda function
d. Configure networking infrastructure for the AWS Lambda functions

A

c. Maintain versions of an AWS Lambda function

Under the AWS Shared Responsibility Model, AWS is responsible for protecting the infrastructure that runs all of the services offered in the AWS Cloud. This infrastructure is composed of the hardware, software, networking, and facilities that run AWS Cloud services.

Under the AWS Shared Responsibility Model, customer’s responsibility is determined by the AWS Cloud services that a customer selects. For abstracted services, such as Amazon S3 and Amazon DynamoDB, AWS operates the infrastructure layer, the operating system, and platforms, and customers access the endpoints to store and retrieve data. Customers are responsible for managing their data (including encryption options), classifying their assets, and using IAM tools to apply the appropriate permissions.

For the given use-case, the customer is responsible for maintaining the versions of an AWS Lambda function.

AWS Shared Responsibility Model Overview: https://aws.amazon.com/compliance/shared-responsibility-model/

379
Q

Due to regulatory and compliance reasons, an organization is supposed to use a hardware device for any data encryption operations in the cloud. Which AWS service can be used to meet this compliance requirement?

a. AWS Secrets Manager
b. AWS Key Management Service (AWS KMS)
c. AWS CloudHSM
d. AWS Trusted Advisor

A

c. AWS CloudHSM

AWS CloudHSM is a cloud-based Hardware Security Module (HSM) that enables you to easily generate and use your encryption keys on the AWS Cloud. With CloudHSM, you can manage your encryption keys using FIPS 140-2 Level 3 validated HSMs. It is a fully-managed service that automates time-consuming administrative tasks for you, such as hardware provisioning, software patching, high-availability, and backups.

Please review this detailed description for CloudHSM: https://aws.amazon.com/cloudhsm/

380
Q

Which pillar of AWS Well-Architected Framework is responsible for making sure that you select the right resource types and sizes based on your workload requirements?

a. Operational Excellence
b. Cost Optimization
c. Performance Efficiency
d. Reliability

A

c. Performance Efficiency

The AWS Well-Architected Framework helps you understand the pros and cons of decisions you make while building systems on AWS. By using the AWS Well-Architected Framework you will learn architectural best practices for designing and operating reliable, secure, efficient, and cost-effective systems in the cloud. It provides a way for you to consistently measure your architectures against best practices and identify areas for improvement.

The AWS Well-Architected Framework is based on six pillars — Operational Excellence, Security, Reliability, Performance Efficiency, Cost Optimization and Sustainability.

Overview of the six pillars of the AWS Well-Architected Framework: https://aws.amazon.com/architecture/well-architected/

Performance Efficiency - The performance efficiency pillar focuses on using IT and computing resources efficiently. Key topics include selecting the right resource types and sizes based on workload requirements, monitoring performance, and making informed decisions to maintain efficiency as business needs evolve.

Incorrect options:

Cost Optimization - Cost Optimization focuses on avoiding unneeded costs. Key topics include understanding and controlling where the money is being spent, selecting the most appropriate and right number of resource types, analyzing spend over time, and scaling to meet business needs without overspending.

Reliability - This refers to the ability of a system to recover from infrastructure or service disruptions, dynamically acquire computing resources to meet demand, and mitigate disruptions such as misconfigurations or transient network issues.

Operational Excellence - The Operational Excellence pillar includes the ability to run and monitor systems to deliver business value and to continually improve supporting processes and procedures. In the cloud, you can apply the same engineering discipline that you use for application code to your entire environment. You can define your entire workload (applications, infrastructure) as code and update it with code. You can implement your operations procedures as code and automate their execution by triggering them in response to events.

381
Q

Which of the following statements are correct about the AWS root user account? (Select two)

a. It is highly recommended to enable Multi-Factor Authentication (MFA) for root user account
b. Root user access credentials are the email address and password used to create the AWS account
c. Root user account password cannot be changed once it is set
d. Root user account gets unrestricted permissions when the account is created, but these can be restricted using IAM policies
e. Root user credentials should only be shared with managers requiring administrative responsibilities to complete their jobs

A

a. It is highly recommended to enable Multi-Factor Authentication (MFA) for root user account
b. Root user access credentials are the email address and password used to create the AWS account

The Email address and the password used for signing up for AWS services are the AWS root user account credentials. Root user account, therefore, has full permissions on all AWS resources under that account. Restricting root user account access is not possible. As a best practice, Multi-Factor Authentication (MFA) should be set on the root user account. The root user account password can be changed after account creation. For all employees performing various administrative jobs, create individual user accounts using AWS IAM, and give administrative permissions as needed.

AWS Root User Account Security Best Practices: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_root-user.html

382
Q

Which of the following statements are true about AWS Elastic Beanstalk? (Select two)

a. AWS Elastic Beanstalk automates the details of capacity provisioning, load balancing, and application deployment, creating an environment that runs a version of your application. However, auto-scaling functionality cannot be automated using AWS Elastic Beanstalk
b. With AWS Elastic Beanstalk, you can quickly deploy and manage applications in the AWS Cloud without having to learn about the infrastructure that runs those applications
c. AWS Elastic Beanstalk supports web applications built on different languages. But, AWS Elastic Beanstalk cannot be used for deploying non-web applications
d. There is no additional charge for AWS Elastic Beanstalk. You pay only for the underlying AWS resources that your application consumes
e. AWS Elastic Beanstalk supports Java, .NET, PHP, but does not support Docker web applications

A

b. With AWS Elastic Beanstalk, you can quickly deploy and manage applications in the AWS Cloud without having to learn about the infrastructure that runs those applications
d. There is no additional charge for AWS Elastic Beanstalk. You pay only for the underlying AWS resources that your application consumes

With AWS Elastic Beanstalk, you can quickly deploy and manage applications in the AWS Cloud without having to learn about the infrastructure that runs those applications. Elastic Beanstalk reduces management complexity without restricting choice or control. You simply upload your application, and AWS Elastic Beanstalk automatically handles the details of capacity provisioning, load balancing, scaling, and application health monitoring.

AWS Elastic Beanstalk supports applications developed in Go, Java, .NET, Node.js, PHP, Python, and Ruby. When you deploy your application, AWS Elastic Beanstalk builds the selected supported platform version and provisions one or more AWS resources, such as Amazon EC2 instances, to run your application.

There is no additional charge for AWS Elastic Beanstalk. You pay only for the underlying AWS resources that your application consumes.

Incorrect options:

AWS Elastic Beanstalk supports Java, .NET, PHP, but does not support Docker web applications - AWS Elastic Beanstalk supports Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker web applications.

AWS Elastic Beanstalk automates the details of capacity provisioning, load balancing, and application deployment, creating an environment that runs a version of your application. However, auto-scaling functionality cannot be automated using AWS Elastic Beanstalk - AWS Elastic Beanstalk automates the details of capacity provisioning, load balancing, auto-scaling, and application deployment, creating an environment that runs a version of your application. You can simply upload your deployable code (e.g., WAR file), and AWS Elastic Beanstalk does the rest.

AWS Elastic Beanstalk supports web applications built on different languages. But, AWS Elastic Beanstalk cannot be used for deploying non-web applications - AWS Elastic Beanstalk supports Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker, and is ideal for web applications. However, due to Elastic Beanstalk’s open architecture, non-web applications can also be deployed using AWS Elastic Beanstalk.

383
Q

Compared to the on-demand instance prices, what is the highest possible discount offered for reserved instances (RI)?

a. 40
b. 90
c. 72
d. 50

A

c. 72

Reserved instances (RI) provide you with significant savings (up to 72%) on your Amazon Elastic Compute Cloud (Amazon EC2) costs compared to on-demand instance pricing. Reserved Instances (RI) are not physical instances, but rather a billing discount applied to the use of on-demand instances in your account. You can purchase a reserved instance (RI) for a one-year or three-year commitment, with the three-year commitment offering a bigger discount.

Amazon Elastic Compute Cloud (Amazon EC2) Pricing Options Overview: https://aws.amazon.com/ec2/pricing/

Note: Spot Instances can offer up to 90% discounts.
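To make the arithmetic concrete, a quick back-of-the-envelope sketch; the $0.10 hourly rate is an invented example, not a real price:

```python
on_demand_hourly = 0.10   # hypothetical On-Demand rate in $/hour
max_ri_discount = 0.72    # up to 72% off with Reserved Instances

effective_hourly = on_demand_hourly * (1 - max_ri_discount)
print(f"${effective_hourly:.3f}/hour")  # $0.028/hour at the maximum discount
```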

384
Q

Bob and Susan each have an AWS account in AWS Organizations. Susan has five Reserved Instances (RIs) of the same type and Bob has none. During one particular hour, Susan uses three instances and Bob uses six for a total of nine instances on the organization’s consolidated bill.

Which of the following statements are correct about consolidated billing in AWS Organizations? (Select two)

a. Bob does not receive any cost-benefit since he hasn’t purchased any Reserved Instance (RI). If his account has even one RI, then the cost-benefit from Susan’s account is also added to his account
b. AWS bills three instances as Reserved Instances (RI), and the remaining six instances as regular instances
c. Bob receives the cost-benefit from Susan’s Reserved Instance (RI) only if he launches his instances in the same AWS Region where Susan purchased her Reserved Instances (RI)
d. Bob receives the cost-benefit from Susan’s Reserved Instances (RI) only if he launches his instances in the same Availability Zone (AZ) where Susan purchased her Reserved Instances
e. AWS bills five instances as Reserved Instances, and the remaining four instances as regular instances

A

d. Bob receives the cost-benefit from Susan’s Reserved Instances (RI) only if he launches his instances in the same Availability Zone (AZ) where Susan purchased her Reserved Instances
e. AWS bills five instances as Reserved Instances, and the remaining four instances as regular instances

Bob receives the cost-benefit from Susan’s Reserved Instances (RI) only if he launches his instances in the same Availability Zone (AZ) where Susan purchased her Reserved Instances. For example, if Susan specifies us-west-2a when she purchases her Reserved Instances, Bob must specify us-west-2a when he launches his instances to get the cost-benefit on the organization’s consolidated bill. However, the actual locations of Availability Zones (AZs) are independent of one account to another. For example, the us-west-2a Availability Zone (AZ) for Bob’s account might be in a different location than the location for Susan’s account.

Since Susan has five Reserved Instances (RIs), AWS bills five instances as Reserved Instances, and the remaining four instances as regular instances.
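Reproducing the billing math from this scenario as a sketch: on a consolidated bill, RI discounts float across accounts but are capped at the number of RIs owned (assuming the instances match the RI attributes and Availability Zone):

```python
reserved_instances = 5      # Susan's Reserved Instances
total_usage = 3 + 6         # Susan's 3 + Bob's 6 running instances

billed_as_reserved = min(reserved_instances, total_usage)  # 5
billed_as_on_demand = total_usage - billed_as_reserved     # 4
print(billed_as_reserved, "reserved,", billed_as_on_demand, "on-demand")
```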

Incorrect options:

AWS bills three instances as Reserved Instances (RI), and the remaining six instances as regular instances - This option contradicts the explanation provided above, so it’s incorrect.

Bob does not receive any cost-benefit since he hasn’t purchased any Reserved Instance (RI). If his account has even one RI, then the cost-benefit from Susan’s account is also added to his account - For billing purposes, the consolidated billing feature of AWS Organizations treats all the accounts in the organization as one account. This means that all accounts in the organization can receive the hourly cost-benefit of Reserved Instance (RI) that are purchased by any other account.

Bob receives the cost-benefit from Susan’s Reserved Instance (RI) only if he launches his instances in the same AWS Region where Susan purchased her Reserved Instances (RI) - As discussed above, this statement is incorrect. Bob receives the cost-benefit from Susan’s Reserved Instances only if he launches his instances in the same Availability Zone (AZ) where Susan purchased her Reserved Instance (RI).

385
Q

Which pillar of the AWS Well-Architected Framework recommends maintaining infrastructure as code (IaC)?

a. Performance Efficiency
b. Cost Optimization
c. Security
d. Operational Excellence

A

d. Operational Excellence

The AWS Well-Architected Framework helps you understand the pros and cons of decisions you make while building systems on AWS. By using the Framework you will learn architectural best practices for designing and operating reliable, secure, efficient, and cost-effective systems in the cloud. It provides a way for you to consistently measure your architectures against best practices and identify areas for improvement.

The AWS Well-Architected Framework is based on six pillars — Operational Excellence, Security, Reliability, Performance Efficiency, Cost Optimization and Sustainability.

The Operational Excellence pillar includes the ability to run and monitor systems to deliver business value and to continually improve supporting processes and procedures. In the cloud, you can apply the same engineering discipline that you use for application code to your entire environment. You can define your entire workload (applications, infrastructure) as code and update it with code. You can implement your operations procedures as code and automate their execution by triggering them in response to events.

386
Q

A financial services company wants to ensure that its AWS account activity meets the governance, compliance and auditing norms. As a Cloud Practitioner, which AWS service would you recommend for this use-case?
a. AWS CloudTrail
b. AWS Config
c. AWS Trusted Advisor
d. Amazon CloudWatch

A

a. AWS CloudTrail

You can use CloudTrail to log, monitor and retain account activity related to actions across your AWS infrastructure. CloudTrail provides an event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command-line tools, and other AWS services.
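For illustration, the 90-day event history can be queried directly. A minimal boto3 sketch that looks up recent console sign-in events; the attribute filter is one of several supported lookup keys:

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

events = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventName", "AttributeValue": "ConsoleLogin"}
    ],
    MaxResults=10,
)
for e in events["Events"]:
    print(e["EventTime"], e.get("Username"), e["EventName"])
```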

387
Q

Which of the following statements are true about AWS Regions and Availability Zones (AZ)? (Select two)

a. An Availability Zone (AZ) is a physical location where AWS clusters the data centers
b. All traffic between Availability Zones (AZ) is encrypted
c. AWS calls each group of logical data centers as AWS Regions
d. Traffic between Availability Zones (AZ) is not encrypted by default, but can be configured from AWS console
e. Each AWS Region consists of multiple, isolated, and physically separate Availability Zones (AZ) within a geographic area

A

b. All traffic between Availability Zones (AZ) is encrypted
e. Each AWS Region consists of multiple, isolated, and physically separate Availability Zones (AZ) within a geographic area

AWS has the concept of a Region, which is a physical location around the world where AWS clusters data centers. AWS calls each group of logical data centers an Availability Zone (AZ). Each AWS Region consists of multiple, isolated, and physically separate Availability Zones (AZ) within a geographic area.

Each Availability Zone (AZ) has independent power, cooling, and physical security and is connected via redundant, ultra-low-latency networks. AWS customers focused on high availability can design their applications to run in multiple AZs to achieve even greater fault tolerance. AWS infrastructure Regions meet the highest levels of security, compliance, and data protection.

All Availability Zones (AZ) in an AWS Region are interconnected with high-bandwidth, low-latency networking, over fully redundant, dedicated metro fiber providing high-throughput, low-latency networking between AZs. All traffic between Availability Zones (AZ) is encrypted.

Incorrect options:

Traffic between Availability Zones (AZ) is not encrypted by default, but can be configured from AWS console - All traffic between Availability Zones (AZ) is encrypted.

An Availability Zone (AZ) is a physical location where AWS clusters the data centers - AWS has the concept of a Region, which is a physical location around the world where AWS clusters the data centers.

AWS calls each group of logical data centers as AWS Regions - AWS has the concept of a Region, which is a physical location around the world where AWS clusters data centers. AWS calls each group of logical data centers an Availability Zone (AZ).

388
Q

A medical research startup wants to understand the compliance of AWS services concerning HIPAA guidelines. Which AWS service can be used to review the HIPAA compliance and governance-related documents on AWS?
a. AWS Secrets Manager
b. AWS Systems Manager
c. AWS Trusted Advisor
d. AWS Artifact

A

d. AWS Artifact

AWS Artifact is your go-to, central resource for compliance-related information that matters to your organization. It provides on-demand access to AWS security and compliance reports and select online agreements. Reports available in AWS Artifact include our Service Organization Control (SOC) reports, Payment Card Industry (PCI) reports, and certifications from accreditation bodies across geographies and compliance verticals that validate the implementation and operating effectiveness of AWS security controls. Different types of agreements are available in AWS Artifact Agreements to address the needs of customers subject to specific regulations. For example, the Business Associate Addendum (BAA) is available for customers that need to comply with the Health Insurance Portability and Accountability Act (HIPAA). AWS Artifact is a no-cost, self-service portal for on-demand access to AWS compliance reports.

Incorrect options:

AWS Secrets Manager - AWS Secrets Manager helps you protect secrets needed to access your applications, services, and IT resources. The service enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. Users and applications retrieve secrets with a call to Secrets Manager APIs, eliminating the need to hardcode sensitive information in plain text.
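To make the contrast with AWS Artifact concrete, here is a minimal boto3 sketch of retrieving a secret at runtime instead of hardcoding it; the secret name prod/db/credentials is a hypothetical placeholder:

import boto3

secrets = boto3.client("secretsmanager", region_name="us-east-1")

# Fetch the secret value at runtime; "prod/db/credentials" is a
# hypothetical secret name used only for illustration.
response = secrets.get_secret_value(SecretId="prod/db/credentials")
database_credentials = response["SecretString"]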

AWS Trusted Advisor - AWS Trusted Advisor is an online tool that inspects your AWS environment and provides real-time recommendations across cost optimization, performance, security, fault tolerance, and service quotas. It does not provide access to compliance or governance documents.

AWS Systems Manager - AWS Systems Manager gives you visibility and control of your infrastructure on AWS. Systems Manager provides a unified user interface so you can view operational data from multiple AWS services and allows you to automate operational tasks across your AWS resources. With Systems Manager, you can group resources, like Amazon EC2 instances, Amazon S3 buckets, or Amazon RDS instances, by application, view operational data for monitoring and troubleshooting, and take action on your groups of resources.
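A small boto3 sketch of that visibility: listing the instances that have registered with Systems Manager (this assumes the SSM Agent is running on the instances and that they have the required IAM instance profile):

import boto3

ssm = boto3.client("ssm", region_name="us-east-1")

# List managed instances whose SSM Agent has checked in.
response = ssm.describe_instance_information()
for info in response.get("InstanceInformationList", []):
    print(info["InstanceId"], info["PingStatus"], info.get("PlatformName"))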

389
Q

The DevOps team at a Big Data consultancy has set up Amazon Elastic Compute Cloud (Amazon EC2) instances across two AWS Regions for its flagship application. Which of the following characterizes this application architecture?

a. Deploying the application across two AWS Regions improves scalability
b. Deploying the application across two AWS Regions improves agility
c. Deploying the application across two AWS Regions improves availability
d. Deploying the application across two AWS Regions improves security

A

c. Deploying the application across two AWS Regions improves availability

Highly available systems are those that can withstand some measure of degradation while remaining available. Each AWS Region is fully isolated and composed of multiple Availability Zones (AZ), which are fully isolated partitions of AWS infrastructure. To better isolate any issues and achieve high availability, you can partition applications across multiple Availability Zones (AZ) in the same AWS Region, or even across multiple AWS Regions.
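As an illustrative sketch of what a two-Region deployment means in practice, creating one boto3 client per Region is all it takes to target each Region; the AMI IDs below are hypothetical placeholders, since AMIs are Region-specific:

import boto3

# AMIs are Region-specific, so each Region needs its own image ID.
# Both IDs below are hypothetical placeholders.
deployments = {
    "us-east-1": "ami-11111111111111111",
    "eu-west-1": "ami-22222222222222222",
}

for region, ami_id in deployments.items():
    ec2 = boto3.client("ec2", region_name=region)
    ec2.run_instances(
        ImageId=ami_id,
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
    )
    print("Launched an instance in", region)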

Key Benefits of AWS Global Infrastructure: https://aws.amazon.com/about-aws/global-infrastructure/

390
Q

AWS Compute Optimizer delivers recommendations for which of the following AWS resources? (Select two)

a. Amazon Elastic Compute Cloud (Amazon EC2) instances, Amazon Elastic File System (Amazon EFS)
b. Amazon Elastic Compute Cloud (Amazon EC2) instances, Amazon EC2 Auto Scaling groups
c. Amazon Elastic File System (Amazon EFS), AWS Lambda functions
d. AWS Lambda functions, Amazon Simple Storage Service (Amazon S3)
e. Amazon Elastic Block Store (Amazon EBS), AWS Lambda functions

A

b. Amazon Elastic Compute Cloud (Amazon EC2) instances, Amazon EC2 Auto Scaling groups
e. Amazon Elastic Block Store (Amazon EBS), AWS Lambda functions

AWS Compute Optimizer helps you identify the optimal AWS resource configurations, such as Amazon EC2 instance types, Amazon EBS volume configurations, and AWS Lambda function memory sizes, using machine learning to analyze historical utilization metrics. AWS Compute Optimizer delivers recommendations for selected types of EC2 instances, EC2 Auto Scaling groups, Amazon EBS volumes, and AWS Lambda functions.

AWS Compute Optimizer calculates an individual performance risk score for each resource dimension of the recommended instance, including CPU, memory, EBS throughput, EBS IOPS, disk throughput, disk IOPS, network throughput, and network packets per second (PPS).

AWS Compute Optimizer provides EC2 instance type and size recommendations for EC2 Auto Scaling groups with a fixed group size, that is, groups whose desired, minimum, and maximum capacity are all set to the same value and that have no scaling policy attached.

AWS Compute Optimizer supports IOPS and throughput recommendations for General Purpose SSD (gp3) volumes and IOPS recommendations for Provisioned IOPS SSD (io1 and io2) volumes.

AWS Compute Optimizer helps you optimize two categories of Lambda functions. The first category includes Lambda functions that may be over-provisioned in memory sizes. The second category includes compute-intensive Lambda functions that may benefit from additional CPU power.
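As a hedged sketch of retrieving these recommendations programmatically (this assumes the account has already opted in to AWS Compute Optimizer and enough metric history has accumulated; field names follow the service's camelCase convention):

import boto3

optimizer = boto3.client("compute-optimizer", region_name="us-east-1")

# Fetch rightsizing recommendations for EC2 instances.
response = optimizer.get_ec2_instance_recommendations()
for rec in response.get("instanceRecommendations", []):
    current = rec.get("currentInstanceType")
    options = [o.get("instanceType") for o in rec.get("recommendationOptions", [])]
    print(current, "-> suggested:", options)

Analogous calls exist for the other supported resource types, such as get_auto_scaling_group_recommendations, get_ebs_volume_recommendations, and get_lambda_function_recommendations.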

391
Q

Which of the following are the advantages of cloud computing? (Select three)
a. Go global in minutes and deploy applications in multiple regions around the world with just a few clicks
b. Trade variable expense for capital expense
c. Allocate a few months of planning for your infrastructure capacity needs
d. Trade capital expense for variable expense
e. Benefit from massive economies of scale
f. Spend money on building and maintaining data centers

A

a. Go global in minutes and deploy applications in multiple regions around the world with just a few clicks
d. Trade capital expense for variable expense
e. Benefit from massive economies of scale

Instead of having to invest heavily in data centers and servers before you know how you will use them, you pay only when you consume computing resources, and pay only for how much you consume - this is trading capital expense for variable expense. Because usage from hundreds of thousands of customers is aggregated in the cloud, a provider such as AWS can achieve higher economies of scale, which translates into lower pay-as-you-go prices. And because AWS infrastructure is available around the world, you can deploy your application in multiple regions with just a few clicks, going global in minutes.

Six Advantages of Cloud Computing: https://docs.aws.amazon.com/whitepapers/latest/aws-overview/six-advantages-of-cloud-computing.html