AWS Cloud Practitioner Practice Exam Questions Flashcards

1
Q

Which of the following statements are CORRECT regarding the AWS VPC service? (Select two)
a. A Security Group can have allow rules only
b. A network access control list (network ACL) can have allow rules only
c. A Security Group can have both allow and deny rules
d. A Network Address Translation instance (NAT instance) is managed by AWS
e. A Network Address Translation gateway (NAT gateway) is managed by AWS

A

a. A Security Group can have allow rules only

e. A Network Address Translation gateway (NAT gateway) is managed by AWS

A security group is stateful and supports allow rules only; any traffic that does not match an allow rule is implicitly denied. A network ACL supports both allow and deny rules. A NAT gateway is a managed service, so AWS handles its availability, bandwidth scaling, and maintenance, whereas a NAT instance is an EC2 instance that you launch, patch, and manage yourself.
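The allow-only vs. allow/deny difference shows up directly in the shape of the underlying EC2 API requests. Below is a minimal sketch using boto3-style parameter dicts; no AWS call is made, and the resource IDs are placeholders:

```python
# Illustrative request shapes only -- no AWS call is made. Parameter names
# follow boto3's EC2 client; all resource IDs are placeholders.

# A security group ingress rule: you can only ever *allow* traffic. The API
# has no "deny" field -- anything not explicitly allowed is implicitly denied.
sg_ingress_rule = {
    "GroupId": "sg-0123456789abcdef0",          # placeholder ID
    "IpPermissions": [{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
}

# A network ACL entry: RuleAction is explicit and can be "allow" OR "deny".
nacl_deny_rule = {
    "NetworkAclId": "acl-0123456789abcdef0",    # placeholder ID
    "RuleNumber": 100,
    "Protocol": "6",                            # 6 = TCP
    "RuleAction": "deny",                       # "allow" is equally valid here
    "Egress": False,
    "CidrBlock": "203.0.113.0/24",
}

# The security group rule has no action field at all; the NACL entry does.
assert "RuleAction" not in sg_ingress_rule["IpPermissions"][0]
assert nacl_deny_rule["RuleAction"] in ("allow", "deny")
```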

2
Q

The DevOps team at an IT company is moving 500 GB of data from an EC2 instance to an S3 bucket in the same region. Which of the following scenario captures the correct charges for this data transfer?
a. The company would only be charged for the inbound data transfer into the S3 bucket
b. The company would only be charged for the outbound data transfer from EC2 instance
c. The company would not be charged for this data transfer
d. The company would be charged for both the outbound data transfer from EC2 instance as well as the inbound data transfer into the S3 bucket

A

c. The company would not be charged for this data transfer

There are three fundamental drivers of cost with AWS: compute, storage, and outbound data transfer. In most cases, there is no charge for inbound data transfer or data transfer between other AWS services within the same region. Outbound data transfer is aggregated across services and then charged at the outbound data transfer rate.

Per AWS pricing, data transfer between S3 and EC2 instances within the same region is not charged, so there would be no data transfer charge for moving 500 GB of data from an EC2 instance to an S3 bucket in the same region.

3
Q

According to the AWS Shared Responsibility Model, which of the following are responsibilities of AWS? (Select two)
a. Replacing faulty hardware of Amazon EC2 instances
b. Creating IAM role for accessing Amazon EC2 instances
c. Enabling Multi Factor Authentication on AWS accounts in your organization
d. Creating S3 bucket policies for appropriate user access
e. Operating the infrastructure layer, the operating system and the platform for the Amazon S3 service

A

a. Replacing faulty hardware of Amazon EC2 instances

e. Operating the infrastructure layer, the operating system and the platform for the Amazon S3 service

Under the AWS Shared Responsibility Model, AWS is responsible for security "of" the cloud: the hardware, software, networking, and facilities that run AWS services. Replacing faulty EC2 hardware and operating the infrastructure layer, operating system, and platform for a managed service such as Amazon S3 both fall to AWS. Creating IAM roles, enabling MFA, and writing S3 bucket policies are customer responsibilities (security "in" the cloud).

4
Q

The engineering team at an IT company wants to monitor the CPU utilization for its fleet of Amazon Elastic Compute Cloud (Amazon EC2) instances and send an email to the administrator if the utilization exceeds 80%. As a Cloud Practitioner, which AWS services would you recommend to build this solution? (Select two)

a. AWS CloudTrail
b. AWS Lambda
c. Amazon CloudWatch
d. Amazon Simple Queue Service (SQS)
e. Amazon Simple Notification Service (SNS)

A

c. Amazon CloudWatch
e. Amazon Simple Notification Service (SNS)

Amazon CloudWatch - Amazon CloudWatch is a monitoring and observability service built for DevOps engineers, developers, site reliability engineers (SREs), and IT managers. CloudWatch provides data and actionable insights to monitor applications, respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health. You can create a CloudWatch alarm that sends an email message using Amazon SNS when the alarm changes state from OK to ALARM. The alarm changes to the ALARM state when the average CPU use of an EC2 instance exceeds a specified threshold for consecutive specified periods.
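The alarm described above maps to a single CloudWatch API call. This is a sketch of the parameters one would pass to boto3's `cloudwatch.put_metric_alarm()`, shown as a plain dict so no AWS call is made; the SNS topic ARN and instance ID are placeholders:

```python
# Parameters for cloudwatch.put_metric_alarm() -- illustration only, no AWS
# call is made. The account ID, topic name, and instance ID are placeholders.
alarm_params = {
    "AlarmName": "high-cpu-alarm",
    "Namespace": "AWS/EC2",
    "MetricName": "CPUUtilization",
    "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    "Statistic": "Average",
    "Period": 300,                      # evaluate in 5-minute windows
    "EvaluationPeriods": 2,             # two consecutive breaching periods
    "Threshold": 80.0,                  # the 80% CPU utilization bar
    "ComparisonOperator": "GreaterThanThreshold",
    "AlarmActions": [
        # SNS topic that emails the administrator when the alarm fires
        "arn:aws:sns:us-east-1:123456789012:admin-email-topic"
    ],
}
assert alarm_params["Threshold"] == 80.0
```

The SNS topic referenced in `AlarmActions` would have the administrator's email address subscribed to it; CloudWatch does the monitoring and SNS does the notifying, which is why the answer needs both services.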

Amazon Simple Notification Service (SNS) - Amazon Simple Notification Service (SNS) is a highly available, durable, secure, fully managed pub/sub messaging service that enables you to decouple microservices, distributed systems, and serverless applications.

How SNS Works: https://aws.amazon.com/sns/

5
Q

Which of the following AWS services support VPC Endpoint Gateway for a private connection from a VPC? (Select two)
a. Amazon Elastic Compute Cloud (Amazon EC2)
b. Amazon Simple Storage Service (Amazon S3)
c. Amazon DynamoDB
d. Amazon Simple Notification Service (SNS)
e. Amazon Simple Queue Service (SQS)

A

b. Amazon Simple Storage Service (Amazon S3)

c. Amazon DynamoDB

A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by AWS PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network.

There are two types of VPC endpoints: interface endpoints and gateway endpoints.

An interface endpoint is an elastic network interface with a private IP address from the IP address range of your subnet that serves as an entry point for traffic destined to a supported service. Interface endpoints are powered by AWS PrivateLink, a technology that enables you to privately access services by using private IP addresses.

A gateway endpoint is a gateway that you specify as a target for a route in your route table for traffic destined to a supported AWS service. The following AWS services are supported:

Amazon Simple Storage Service (Amazon S3)

Amazon DynamoDB

Exam Alert:

You may see a question around this concept in the exam. Just remember that only Amazon S3 and Amazon DynamoDB support VPC gateway endpoint. All other services that support VPC Endpoints use a VPC interface endpoint (note that Amazon S3 supports the VPC interface endpoint as well).
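For concreteness, this is roughly what a gateway endpoint request for S3 looks like via boto3's `ec2.create_vpc_endpoint()`; parameters only, no call is made, and the VPC/route-table IDs are placeholders:

```python
# Parameters for ec2.create_vpc_endpoint() -- illustration only, no AWS call
# is made. IDs are placeholders; VpcEndpointType="Gateway" is valid only for
# Amazon S3 and Amazon DynamoDB.
s3_gateway_endpoint = {
    "VpcEndpointType": "Gateway",
    "VpcId": "vpc-0123456789abcdef0",
    "ServiceName": "com.amazonaws.us-east-1.s3",   # region-qualified service
    "RouteTableIds": ["rtb-0123456789abcdef0"],    # routes to S3 get added here
}
assert s3_gateway_endpoint["VpcEndpointType"] == "Gateway"
```

Note that the request names route tables rather than a subnet: that is the defining trait of a gateway endpoint, in contrast to an interface endpoint, which places an elastic network interface in your subnet.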

6
Q

Which of the following AWS services comes under the Software as a Service (SaaS) Cloud Computing Type?

a. Elastic Load Balancing (ELB)
b. AWS Elastic Beanstalk
c. Amazon Elastic Compute Cloud (Amazon EC2)
d. Amazon Rekognition

A

d. Amazon Rekognition

Cloud Computing can be broadly divided into three types - Infrastructure as a Service (IaaS), Platform as a Service (PaaS), Software as a Service (SaaS).

IaaS contains the basic building blocks for cloud IT. It typically provides access to networking features, computers (virtual or on dedicated hardware), and data storage space. IaaS gives the highest level of flexibility and management control over IT resources. Examples - Amazon EC2 (on AWS), GCP, Azure, Rackspace, Digital Ocean, Linode.

PaaS removes the need to manage underlying infrastructure (usually hardware and operating systems) and allows you to focus on the deployment and management of your applications. You don’t need to worry about resource procurement, capacity planning, software maintenance, patching, or any of the other undifferentiated heavy lifting involved in running your application. Examples - Elastic Beanstalk (on AWS), Heroku, Google App Engine (GCP), Windows Azure (Microsoft).

SaaS provides you with a complete product that is run and managed by the service provider. With a SaaS offering, you don’t have to think about how the service is maintained or how the underlying infrastructure is managed. You only need to think about how you will use that particular software. Examples - Amazon Rekognition, Google Apps (Gmail), Dropbox, Zoom.

Overview of Cloud Computing Types: https://aws.amazon.com/types-of-cloud-computing/

You can use Amazon Rekognition to add image and video analysis to your applications using proven, highly scalable, deep learning technology that requires no machine learning expertise. With Amazon Rekognition, you can identify objects, people, text, scenes, and activities in images and videos as well as detect any inappropriate content. Rekognition is an example of the Software as a Service (SaaS) model.

7
Q

An AWS user is trying to launch an Amazon Elastic Compute Cloud (Amazon EC2) instance in a given region. What is the region-specific constraint that the Amazon Machine Image (AMI) must meet so that it can be used for this Amazon Elastic Compute Cloud (Amazon EC2) instance?

a. You must use an Amazon Machine Image (AMI) from the same region as that of the Amazon EC2 instance. The region of the Amazon Machine Image (AMI) has no bearing on the performance of the Amazon EC2 instance
b. You should use an Amazon Machine Image (AMI) from the same region, as it improves the performance of the Amazon EC2 instance
c. You can use an Amazon Machine Image (AMI) from a different region, but it degrades the performance of the Amazon EC2 instance
d. An Amazon Machine Image (AMI) is a global entity, so the region is not applicable

A

a. You must use an Amazon Machine Image (AMI) from the same region as that of the Amazon EC2 instance. The region of the Amazon Machine Image (AMI) has no bearing on the performance of the Amazon EC2 instance

An Amazon Machine Image (AMI) provides the information required to launch an instance. You must specify an Amazon Machine Image (AMI) when you launch an instance. You can launch multiple instances from a single AMI when you need multiple instances with the same configuration.

The Amazon Machine Image (AMI) must be in the same region as that of the Amazon EC2 instance to be launched. If the Amazon Machine Image (AMI) exists in a different region, you can copy that Amazon Machine Image (AMI) to the region where you want to launch the EC2 instance. The region of Amazon Machine Image (AMI) has no bearing on the performance of the Amazon EC2 instance.

Amazon Machine Images (AMI) Overview: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html

8
Q

A Silicon Valley based healthcare startup stores anonymized patient health data on Amazon S3. The CTO further wants to ensure that any sensitive data on S3 is discovered and identified to prevent any sensitive data leaks. As a Cloud Practitioner, which AWS service would you recommend to address this use-case?
a. AWS Glue
b. Amazon Polly
c. Amazon Macie
d. AWS Secrets Manager

A

c. Amazon Macie

Amazon Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and protect your sensitive data in AWS. Macie automatically provides an inventory of Amazon S3 buckets including a list of unencrypted buckets, publicly accessible buckets, and buckets shared with AWS accounts outside those you have defined in AWS Organizations. Then, Macie applies machine learning and pattern matching techniques to the buckets you select to identify and alert you to sensitive data, such as personally identifiable information (PII).

9
Q

Which of the following AWS services can be used to connect a company’s on-premises environment to a VPC without using the public internet?
a. VPC Endpoint
b. Internet Gateway
c. AWS Direct Connect
d. AWS Site-to-Site VPN

A

c. AWS Direct Connect

AWS Direct Connect is a cloud service solution that makes it easy to establish a dedicated network connection from your premises to AWS. You can use AWS Direct Connect to establish a private virtual interface from your on-premises network directly to your Amazon VPC, providing you with a private, high bandwidth network connection between your network and your VPC. This connection is private and does not go over the public internet. It takes at least a month to establish this physical connection.

How Direct Connect Works: https://aws.amazon.com/directconnect/

10
Q

A cyber-security agency uses AWS Cloud and wants to carry out security assessments on its own AWS infrastructure without any prior approval from AWS. Which of the following describes/facilitates this practice?

a. Network Stress Testing
b. Amazon Inspector
c. AWS Secrets Manager
d. Penetration Testing

A

d. Penetration Testing

AWS customers can carry out security assessments or penetration tests against their AWS infrastructure without prior approval for a few common AWS services. Customers are not permitted to conduct security assessments of the AWS infrastructure itself, or of the AWS services themselves.

Incorrect options:

Network Stress Testing - AWS considers “network stress test” to be when a test sends a large volume of legitimate or test traffic to a specific intended target application. The endpoint and infrastructure are expected to be able to handle this traffic.

AWS Secrets Manager - AWS Secrets Manager helps you protect secrets needed to access your applications, services, and IT resources. The service enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. Users and applications retrieve secrets with a call to AWS Secrets Manager APIs, eliminating the need to hardcode sensitive information in plain text.

11
Q

An e-commerce company wants to store data from a recommendation engine in a database. As a Cloud Practitioner, which AWS service would you recommend to provide this functionality with the LEAST operational overhead for any scale?

a. Amazon Neptune
b. Amazon Simple Storage Service (Amazon S3)
c. Amazon Relational Database Service (Amazon RDS)
d. Amazon DynamoDB

A

d. Amazon DynamoDB

Amazon DynamoDB is a key-value and document database that delivers sub-millisecond performance at any scale. Amazon DynamoDB enables customers to offload the administrative burdens of operating and scaling distributed databases to AWS so that they don’t have to worry about hardware provisioning, setup and configuration, throughput capacity planning, replication, software patching, or cluster scaling.

You can use Amazon DynamoDB to store recommendation results with the LEAST operational overhead for any scale.

https://catalog.us-east-1.prod.workshops.aws/workshops/ed82a5d4-6630-41f0-a6a1-9345898fa6ec/en-US/batch/dynamodb

12
Q

Due to regulatory and compliance reasons, an organization is supposed to use a hardware device for any data encryption operations in the cloud. Which AWS service can be used to meet this compliance requirement?

a. AWS Secrets Manager
b. AWS Key Management Service (AWS KMS)
c. AWS CloudHSM
d. AWS Trusted Advisor

A

c. AWS CloudHSM

AWS CloudHSM is a cloud-based Hardware Security Module (HSM) that enables you to easily generate and use your encryption keys on the AWS Cloud. With CloudHSM, you can manage your encryption keys using FIPS 140-2 Level 3 validated HSMs. It is a fully-managed service that automates time-consuming administrative tasks for you, such as hardware provisioning, software patching, high-availability, and backups.

Please review this detailed description for CloudHSM: https://aws.amazon.com/cloudhsm/

13
Q

AWS Identity and Access Management (AWS IAM) policies are written as JSON documents. Which of the following are mandatory elements of an IAM policy?

a. Action, Condition
b. Effect, Sid
c. Sid, Principal
d. Effect, Action

A

d. Effect, Action

Most policies are stored in AWS as JSON documents. Identity-based policies and policies used to set permissions boundaries are JSON policy documents that you attach to a user or role. Resource-based policies are JSON policy documents that you attach to a resource.

A JSON policy document includes these elements:

  1. Optional policy-wide information at the top of the document
  2. One or more individual statements

Each statement includes information about a single permission. The information in a statement is contained within a series of elements.

  1. Version – Specify the version of the policy language that you want to use. As a best practice, use the latest 2012-10-17 version.
  2. Statement – Use this main policy element as a container for the following elements. You can include more than one statement in a policy.

a. Sid (Optional) – Include an optional statement ID to differentiate between your statements.

b. Effect – Use Allow or Deny to indicate whether the policy allows or denies access.

c. Principal (Required in only some circumstances) – If you create a resource-based policy, you must indicate the account, user, role, or federated user to which you would like to allow or deny access. If you are creating an IAM permissions policy to attach to a user or role, you cannot include this element. The principal is implied as that user or role.

d. Action – Include a list of actions that the policy allows or denies.

e. Resource (Required in only some circumstances) – If you create an IAM permissions policy, you must specify a list of resources to which the actions apply. If you create a resource-based policy, this element is optional. If you do not include this element, then the resource to which the action applies is the resource to which the policy is attached.

f. Condition (Optional) – Specify the circumstances under which the policy grants permission.
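The element list above can be made concrete with a minimal identity-based policy. This sketch (the bucket name is a placeholder) shows Effect and Action as the elements every statement must carry, with a small helper that checks for them:

```python
import json

# A minimal identity-based policy illustrating the elements listed above.
# Effect and Action are mandatory in every statement; Resource is required
# for identity-based policies, while Sid and Condition stay optional.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::example-bucket/*",   # placeholder bucket
    }],
}

def has_mandatory_elements(doc: dict) -> bool:
    """Check that every statement carries both Effect and Action."""
    return all(
        "Effect" in stmt and "Action" in stmt
        for stmt in doc.get("Statement", [])
    )

# Round-trip through JSON, since IAM stores policies as JSON documents.
print(has_mandatory_elements(json.loads(json.dumps(policy))))  # True
```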

14
Q

Which of the following is the best way to protect your data from accidental deletion on Amazon S3?

a. Amazon S3 lifecycle configuration
b. Amazon S3 storage classes
c. Amazon S3 Versioning
d. Amazon S3 Transfer Acceleration (Amazon S3TA)

A

c. Amazon S3 Versioning

Versioning is a means of keeping multiple variants of an object in the same bucket. You can use versioning to preserve, retrieve, and restore every version of every object stored in your Amazon S3 bucket. With versioning, you can easily recover from both unintended user actions and application failures.

Versioning-enabled buckets enable you to recover objects from accidental deletion or overwrite. For example: if you delete an object, instead of removing it permanently, Amazon S3 inserts a delete marker, which becomes the current object version.

Amazon S3 Versioning Overview: https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectVersioning.html
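Two sketches to make this concrete: the first shows the parameters one would pass to boto3's `s3.put_bucket_versioning()` (no AWS call is made; the bucket name is a placeholder), and the second is a toy in-memory model, not the S3 API, of how a delete on a versioned bucket adds a delete marker instead of erasing prior versions:

```python
# 1) Enabling versioning via s3.put_bucket_versioning() -- parameters only,
#    no AWS call is made; the bucket name is a placeholder.
versioning_params = {
    "Bucket": "example-bucket",
    "VersioningConfiguration": {"Status": "Enabled"},
}

# 2) Toy model of versioned-delete semantics (NOT the real S3 API): a DELETE
#    appends a delete marker; the earlier versions remain recoverable.
versions = {"report.csv": [b"v1 bytes", b"v2 bytes"]}

def delete(key):
    versions[key].append("DELETE_MARKER")   # becomes the current "version"

delete("report.csv")
assert versions["report.csv"][-1] == "DELETE_MARKER"
assert b"v1 bytes" in versions["report.csv"]   # old data still recoverable
```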

Incorrect options:

Amazon S3 lifecycle configuration - To manage your Amazon S3 objects so that they are stored cost-effectively throughout their lifecycle, configure their Amazon S3 Lifecycle. With Amazon S3 Lifecycle configuration rules, you can tell Amazon S3 to transition objects to less expensive storage classes, or archive or delete them. Amazon S3 lifecycle configuration will do the hard lifting of moving your data into cost-effective storage classes without user intervention. Amazon S3 Lifecycle configuration is not meant to protect from accidental deletion of data.

Amazon S3 storage classes - Amazon S3 offers a range of storage classes designed for different use cases. These include S3 Standard for general-purpose storage of frequently accessed data; S3 Intelligent-Tiering for data with unknown or changing access patterns; S3 Standard-Infrequent Access (S3 Standard-IA) and S3 One Zone-Infrequent Access (S3 One Zone-IA) for long-lived, but less frequently accessed data; and Amazon S3 Glacier (S3 Glacier) and Amazon S3 Glacier Deep Archive (S3 Glacier Deep Archive) for long-term archive and digital preservation. Storage classes are for different storage pattern needs that customers have, and not a data protection mechanism for S3.

Amazon S3 Transfer Acceleration (Amazon S3TA) - Amazon S3 Transfer Acceleration (Amazon S3TA) enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket. Amazon S3 Transfer Acceleration (Amazon S3TA) takes advantage of Amazon CloudFront’s globally distributed edge locations. As the data arrives at an edge location, data is routed to Amazon S3 over an optimized network path. Amazon S3 Transfer Acceleration (Amazon S3TA) cannot be used to protect from accidental deletion of data.

15
Q

A medical research startup wants to understand the compliance of AWS services concerning HIPAA guidelines. Which AWS service can be used to review the HIPAA compliance and governance-related documents on AWS?
a. AWS Secrets Manager
b. AWS Systems Manager
c. AWS Trusted Advisor
d. AWS Artifact

A

d. AWS Artifact

AWS Artifact is your go-to, central resource for compliance-related information that matters to your organization. It provides on-demand access to AWS security and compliance reports and select online agreements. Reports available in AWS Artifact include our Service Organization Control (SOC) reports, Payment Card Industry (PCI) reports, and certifications from accreditation bodies across geographies and compliance verticals that validate the implementation and operating effectiveness of AWS security controls. Different types of agreements are available in AWS Artifact Agreements to address the needs of customers subject to specific regulations. For example, the Business Associate Addendum (BAA) is available for customers that need to comply with the Health Insurance Portability and Accountability Act (HIPAA). It is not a service, it’s a no-cost, self-service portal for on-demand access to AWS compliance reports.

Incorrect options:

AWS Secrets Manager - AWS Secrets Manager helps you protect secrets needed to access your applications, services, and IT resources. The service enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. Users and applications retrieve secrets with a call to Secrets Manager APIs, eliminating the need to hardcode sensitive information in plain text.

AWS Systems Manager - AWS Systems Manager gives you visibility and control of your infrastructure on AWS. Systems Manager provides a unified user interface so you can view operational data from multiple AWS services and allows you to automate operational tasks across your AWS resources. With Systems Manager, you can group resources, like Amazon EC2 instances, Amazon S3 buckets, or Amazon RDS instances, by application, view operational data for monitoring and troubleshooting, and take action on your groups of resources.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
16
Q

Which option is a common stakeholder role for the AWS Cloud Adoption Framework (AWS CAF) platform perspective? (Select two)
a. Chief Data Officer (CDO)
b. Chief Product Officer (CPO)
c. Engineer
d. Chief Technology Officer (CTO)
e. Chief Information Officer (CIO)

A

c. Engineer

d. Chief Technology Officer (CTO)

The AWS CAF platform perspective focuses on building an enterprise-grade, scalable cloud platform, so its common stakeholder roles are technology-focused: the Chief Technology Officer (CTO), technology leaders, architects, and engineers. The other options (CIO, CDO, CPO) map to other perspectives, primarily business and governance.

17
Q

Which AWS Service can be used to mitigate a Distributed Denial of Service (DDoS) attack?
a. Amazon CloudWatch
b. AWS Systems Manager
c. AWS Shield
d. AWS Key Management Service (AWS KMS)

A

c. AWS Shield

AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards applications running on AWS. AWS Shield provides always-on detection and automatic inline mitigations that minimize application downtime and latency, so there is no need to engage AWS Support to benefit from DDoS protection. There are two tiers of AWS Shield - Standard and Advanced.

All AWS customers benefit from the automatic protections of AWS Shield Standard, at no additional charge. AWS Shield Standard defends against most common, frequently occurring network and transport layer DDoS attacks that target your web site or applications. When you use AWS Shield Standard with Amazon CloudFront and Amazon Route 53, you receive comprehensive availability protection against all known infrastructure (Layer 3 and 4) attacks.

For higher levels of protection against attacks targeting your applications running on Amazon Elastic Compute Cloud (EC2), Elastic Load Balancing (ELB), Amazon CloudFront, AWS Global Accelerator and Amazon Route 53 resources, you can subscribe to AWS Shield Advanced. In addition to the network and transport layer protections that come with Standard, AWS Shield Advanced provides additional detection and mitigation against large and sophisticated DDoS attacks, near real-time visibility into attacks, and integration with AWS WAF, a web application firewall.

18
Q

Which AWS service can be used as an in-memory database with high-performance and low latency?

a. Amazon ElastiCache
b. Amazon Relational Database Service (Amazon RDS)
c. Amazon Athena
d. Amazon DynamoDB

A

a. Amazon ElastiCache

Amazon ElastiCache allows you to seamlessly set up, run, and scale popular open-source compatible in-memory data stores in the cloud. Build data-intensive apps or boost the performance of your existing databases by retrieving data from high throughput and low latency in-memory data stores. Amazon ElastiCache is a popular choice for real-time use cases like Caching, Session Stores, Gaming, Geospatial Services, Real-Time Analytics, and Queuing. ElastiCache cannot be used for online analytical processing.
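The most common ElastiCache usage pattern is cache-aside: check the cache first, and only fall back to the database on a miss. A minimal sketch, with an in-process dict standing in for ElastiCache and a deliberately slow function standing in for the database:

```python
import time

# Cache-aside pattern sketch. The dict stands in for ElastiCache
# (Redis/Memcached); the slow function stands in for a relational database.
cache = {}

def slow_db_lookup(user_id):
    time.sleep(0.01)            # pretend this is a slow database query
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id):
    if user_id in cache:        # cache hit: no database round trip
        return cache[user_id]
    record = slow_db_lookup(user_id)   # cache miss: fetch, then populate
    cache[user_id] = record
    return record

get_user(42)                    # first call misses and hits the "database"
assert 42 in cache
assert get_user(42)["name"] == "user-42"   # second call is served from cache
```

With real ElastiCache the dict operations become GET/SET calls to a Redis or Memcached endpoint, which is what delivers the sub-millisecond latency the card describes.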

How Amazon ElastiCache Works: https://aws.amazon.com/elasticache/

19
Q

Which tool/service will help you access AWS services using programming language-specific APIs?
a. AWS Management Console
b. AWS Software Development Kit (SDK)
c. Integrated Development Environments (IDE)
d. AWS Command Line Interface (CLI)

A

b. AWS Software Development Kit (SDK)

SDKs take the complexity out of coding by providing language-specific APIs for AWS services. For example, the AWS SDK for JavaScript simplifies the use of AWS Services by providing a set of libraries that are consistent and familiar for JavaScript developers. It provides support for API lifecycle considerations such as credential management, retries, data marshaling, serialization, and deserialization. AWS SDKs are offered in several programming languages to make it simple for developers working on different programming and scripting languages. So, AWS SDK can help with using AWS services from within an application using language-specific APIs.

20
Q

AWS Lambda pricing is based on which of the following criteria? (Select two)

a. The size of the deployment package for the AWS Lambda function
b. The language runtime of the AWS Lambda function
c. The time it takes for the AWS Lambda function to execute
d. The number of lines of code for the AWS Lambda function
e. Number of requests for the AWS Lambda function

A

c. The time it takes for the AWS Lambda function to execute
e. Number of requests for the AWS Lambda function

AWS Lambda lets you run code without provisioning or managing servers. With AWS Lambda, you can run code for virtually any type of application or backend service - all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale your code with high availability.

With AWS Lambda, you pay only for what you use. You are charged based on the number of requests for your functions and the duration, the time it takes for your code to execute. AWS Lambda counts a request each time it starts executing in response to an event notification or invoke call, including test invokes from the console. Duration is calculated from the time your code begins executing until it returns or otherwise terminates, rounded up to the nearest 1 ms (billing was previously rounded to 100 ms increments).
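A back-of-envelope calculator makes the two pricing dimensions tangible. The rates below are the published us-east-1 on-demand x86 prices at the time of writing; treat them as an assumption and verify against the current AWS Lambda pricing page:

```python
# Assumed on-demand rates (us-east-1, x86) -- verify on the Lambda pricing
# page: $0.20 per 1M requests, $0.0000166667 per GB-second of duration.
PRICE_PER_REQUEST = 0.20 / 1_000_000
PRICE_PER_GB_SECOND = 0.0000166667

def lambda_cost(requests, avg_duration_ms, memory_mb):
    """Monthly cost = request charge + duration charge (in GB-seconds)."""
    gb_seconds = requests * (avg_duration_ms / 1000) * (memory_mb / 1024)
    return requests * PRICE_PER_REQUEST + gb_seconds * PRICE_PER_GB_SECOND

# Example: 5M invocations/month, 120 ms average duration, 512 MB memory.
cost = lambda_cost(5_000_000, 120, 512)
print(f"${cost:.2f}")   # roughly $6.00/month, before the free tier
```

Note that deployment package size, runtime language, and lines of code (the incorrect options) appear nowhere in the formula.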

21
Q

An e-commerce company wants to assess its applications deployed on Amazon Elastic Compute Cloud (Amazon EC2) instances for vulnerabilities and deviations from AWS best practices. Which AWS service can be used to facilitate this?

a. Amazon Inspector
b. AWS CloudHSM
c. AWS Secrets Manager
d. AWS Trusted Advisor

A

a. Amazon Inspector

Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. Amazon Inspector automatically assesses applications for exposure, vulnerabilities, and deviations from best practices.

Overview of Amazon Inspector: https://aws.amazon.com/inspector/

22
Q

Which of the following are examples of Horizontal Scalability (aka Elasticity)? (Select two)

a. Read Replicas in Amazon Relational Database Service (Amazon RDS)
b. Modify a Database instance to higher CPU and RAM
c. Elastic Load Balancing (ELB)
d. Add a bigger CPU to a computer
e. Modify an EC2 instance type from t2.nano to u-12tb1.metal

A

a. Read Replicas in Amazon Relational Database Service (Amazon RDS)
c. Elastic Load Balancing (ELB)

A “horizontally scalable” system is one that can increase capacity by adding more computers to the system. This is in contrast to a “vertically scalable” system, which is constrained to running its processes on only one computer; in such systems, the only way to increase performance is to add more resources into one computer in the form of faster (or more) CPUs, memory or storage. Horizontally scalable systems are oftentimes able to outperform vertically scalable systems by enabling parallel execution of workloads and distributing those across many different computers.

Elastic Load Balancing (ELB)

Elastic Load Balancing (ELB) automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, IP addresses, and Lambda functions. It can handle the varying load of your application traffic in a single Availability Zone (AZ) or across multiple Availability Zones (AZ). This falls under Horizontal Scaling.

Read Replicas in Amazon Relational Database Service (Amazon RDS)

Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. Read replicas allow you to create read-only copies that are synchronized with your master database. You can also place your read replica in a different AWS Region closer to your users for better performance. Read replicas are an example of horizontal scaling of resources.

23
Q

Which AWS compute service provides the EASIEST way to access resizable compute capacity in the cloud with support for per-second billing and access to the underlying OS?

a. Amazon Lightsail
b. Amazon Elastic Compute Cloud (Amazon EC2)
c. AWS Lambda
d. Amazon Elastic Container Service (Amazon ECS)

A

b. Amazon Elastic Compute Cloud (Amazon EC2)

Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud with support for per-second billing. It is the easiest way to provision servers on AWS Cloud and access the underlying OS. Amazon EC2 reduces the time required to obtain and boot new server instances to minutes, allowing you to quickly scale capacity, both up and down, as your computing requirements change.

Amazon Elastic Compute Cloud (Amazon EC2) Overview: https://aws.amazon.com/ec2/

Incorrect options:

Amazon Lightsail - Amazon Lightsail is an easy-to-use cloud platform that offers you everything needed to build an application or website, plus a cost-effective, monthly plan. Amazon Lightsail offers several preconfigured, one-click-to-launch operating systems, development stacks, and web applications, including Linux, Windows OS, and WordPress. Amazon Lightsail comes with monthly payment plans and does not support per second billing, so this option is ruled out.

24
Q

Which of the following is the MOST cost-effective option to purchase an EC2 Reserved Instance (RI)?
a. All upfront payment option with the standard 1-year term
b. Partial upfront payment option with standard 3-years term
c. No upfront payment option with standard 1-year term
d. No upfront payment option with standard 3-years term

A

b. Partial upfront payment option with standard 3-years term

The general rule: a longer term (3 years vs. 1 year) and more money paid upfront both lower the effective rate. Among the options given, Partial Upfront with a standard 3-year term is the cheapest, since no 3-year All Upfront option is offered.
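The pricing rule can be sketched with made-up effective hourly rates (these are not real AWS prices, purely illustrative of the ordering):

```python
# Hypothetical effective hourly rates (NOT real AWS prices) illustrating
# the RI pricing rule: longer terms and more money upfront both cost less.
ri_options = {
    ("1yr", "no_upfront"):      0.080,
    ("1yr", "partial_upfront"): 0.075,
    ("1yr", "all_upfront"):     0.070,
    ("3yr", "no_upfront"):      0.055,
    ("3yr", "partial_upfront"): 0.050,
    ("3yr", "all_upfront"):     0.045,
}

cheapest = min(ri_options, key=ri_options.get)  # 3-year, All Upfront
```

Note that any 3-year option beats any 1-year option here, which is exactly the ordering the question tests.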

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
25
Q

Which of the following AWS services is essential for implementing security of resources in AWS Cloud?

a. AWS Shield
b. AWS Identity and Access Management (IAM)
c. Amazon CloudWatch
d. AWS Web Application Firewall (AWS WAF)

A

b. AWS Identity and Access Management (IAM)

AWS Identity and Access Management (IAM) enables you to manage access to AWS services and resources securely. Using IAM, you can create and manage AWS users and groups, and use permissions to allow and deny their access to AWS resources. IAM enables security best practices by allowing you to grant unique security credentials to users and groups to specify which AWS service APIs and resources they can access. These features make IAM an important service for the overall security of AWS resources in your account. IAM is secure by default; users have no access to AWS resources until permissions are explicitly granted.
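A minimal identity-based policy document shows the "deny by default, allow explicitly" model. The bucket name below is a placeholder:

```python
import json

# A minimal IAM identity-based policy granting read-only access to one
# S3 bucket ("example-bucket" is a placeholder). IAM denies by default,
# so only the actions listed here are allowed.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*",
            ],
        }
    ],
}

policy_json = json.dumps(policy)  # document you would attach to a user, group, or role
```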

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
26
Q

The AWS Well-Architected Framework provides guidance on building cloud-based applications using AWS best practices. Which of the following options are pillars mentioned in the AWS Well-Architected Framework? (Select two)

a. Reliability
b. Availability
c. Cost Optimization
d. Scalability
e. Elasticity

A

a. Reliability
c. Cost Optimization

The AWS Well-Architected Framework provides guidance on building secure, high-performing, resilient, and efficient infrastructure for cloud-based applications. Based on six pillars (operational excellence, security, reliability, performance efficiency, cost optimization, and sustainability), the Framework provides a consistent approach for customers and partners to evaluate architectures and implement designs that will scale over time.

Incorrect options:

Elasticity - Elasticity is the ability to acquire resources as you need them and release resources when you no longer need them. In the cloud, you want to do this automatically.

Availability - A system that is available is capable of delivering the designed functionality at a given point in time. Highly available systems are those that can withstand some measure of degradation while still remaining available.

Scalability - A measurement of a system’s ability to grow to accommodate an increase in demand.

These three options are not part of the AWS Well-Architected Framework.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
27
Q

Which Amazon Elastic Compute Cloud (Amazon EC2) pricing model is the most cost-effective and flexible with no requirement for a long term resource commitment or upfront payment but still guarantees that instance would not be interrupted?

a. Dedicated Host
b. Reserved Instance (RI)
c. Spot Instance
d. On-demand Instance

A

d. On-demand Instance

With an On-Demand Instance, you pay for compute capacity with no long-term commitment and no upfront payment; you pay only for the seconds that your instances are running, at a fixed price per second. You have full control over the instance lifecycle: you decide when to launch, stop, hibernate, start, reboot, or terminate it. Unlike Spot Instances, On-Demand Instances cannot be interrupted.

EC2 Pricing Options Overview: https://aws.amazon.com/ec2/pricing/

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
28
Q

A multi-national corporation wants to get expert professional advice on migrating to AWS and managing their applications on AWS Cloud. Which of the following entities would you recommend for this engagement?
a. Concierge Support Team
b. AWS Trusted Advisor
c. APN Consulting Partner
d. APN Technology Partner

A

c. APN Consulting Partner

The AWS Partner Network (APN) is the global partner program for technology and consulting businesses that leverage Amazon Web Services to build solutions and services for customers.

APN Consulting Partners are professional services firms that help customers of all types and sizes design, architect, build, migrate, and manage their workloads and applications on AWS, accelerating their migration to AWS cloud.

APN Partner Types Overview: https://aws.amazon.com/partners/

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
29
Q

Which pillar of the AWS Well-Architected Framework recommends maintaining infrastructure as code (IaC)?

a. Performance Efficiency
b. Cost Optimization
c. Security
d. Operational Excellence

A

d. Operational Excellence

The AWS Well-Architected Framework helps you understand the pros and cons of decisions you make while building systems on AWS. By using the Framework you will learn architectural best practices for designing and operating reliable, secure, efficient, and cost-effective systems in the cloud. It provides a way for you to consistently measure your architectures against best practices and identify areas for improvement.

The AWS Well-Architected Framework is based on six pillars — Operational Excellence, Security, Reliability, Performance Efficiency, Cost Optimization and Sustainability.

The Operational Excellence pillar includes the ability to run and monitor systems to deliver business value and to continually improve supporting processes and procedures. In the cloud, you can apply the same engineering discipline that you use for application code to your entire environment. You can define your entire workload (applications, infrastructure) as code and update it with code. You can implement your operations procedures as code and automate their execution by triggering them in response to events.
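"Defining your workload as code" can be as small as a one-resource CloudFormation template. The sketch below builds the JSON form of such a template ("WebsiteBucket" is a placeholder logical name; actually deploying it via the console or CLI is outside this sketch):

```python
import json

# Infrastructure as code in a nutshell: a minimal CloudFormation template
# (JSON form) declaring a single S3 bucket. The logical name
# "WebsiteBucket" is a placeholder for illustration.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Minimal IaC example: one S3 bucket",
    "Resources": {
        "WebsiteBucket": {
            "Type": "AWS::S3::Bucket"
        }
    },
}

template_body = json.dumps(template, indent=2)
```

Because the template is plain text, it can be version-controlled, reviewed, and updated exactly like application code, which is the Operational Excellence point being tested.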

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
30
Q

An IT company has deployed a static website on Amazon Simple Storage Service (Amazon S3), but the website is still inaccessible. As a Cloud Practitioner, which of the following solutions would you suggest to address this issue?

a. Enable Amazon S3 versioning
b. Enable Amazon S3 replication
c. Disable Amazon S3 encryption
d. Fix the Amazon S3 bucket policy

A

d. Fix the Amazon S3 bucket policy

To host a static website on Amazon S3, you configure an Amazon S3 bucket for website hosting and then upload your website content to the bucket. When you configure a bucket as a static website, you must enable website hosting, set permissions, and create and add an index document.

Hosting a static website on Amazon S3: https://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html

If you want to configure an existing bucket as a static website that has public access, you must edit block public access settings for that bucket. You may also have to edit your account-level block public access settings. Amazon S3 applies the most restrictive combination of the bucket-level and account-level block public access settings.

Here is how you can edit Public Access settings for S3 buckets: https://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteAccessPermissionsReqd.html
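The usual fix is a bucket policy that grants public read access to all objects. A sketch of that policy (the bucket name is a placeholder; Block Public Access settings must also permit this policy to take effect):

```python
import json

def public_read_policy(bucket_name: str) -> str:
    """Bucket policy granting public read access to every object in the
    bucket -- the typical fix when an S3 static website returns 403."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "PublicReadGetObject",
                "Effect": "Allow",
                "Principal": "*",                 # anyone on the internet
                "Action": "s3:GetObject",
                "Resource": f"arn:aws:s3:::{bucket_name}/*",
            }
        ],
    }
    return json.dumps(policy)

# e.g. for a hypothetical bucket hosting the site
doc = public_read_policy("example-website-bucket")
```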

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
31
Q

Which of the following statements is INCORRECT about AWS Auto Scaling?
a. You can automatically deploy AWS Shield when a DDoS attack is detected
b. You can scale out and add more Amazon Elastic Compute Cloud (Amazon EC2) instances to match an increase in demand as well as scale in and remove Amazon Elastic Compute Cloud (Amazon EC2) instances to match a reduced demand
c. You can automatically register new instances to a load balancer
d. You can automatically remove unhealthy instances

A

a. You can automatically deploy AWS Shield when a DDoS attack is detected

AWS Auto Scaling is helpful during a DDoS attack, as it can scale out resources fast. But, it cannot automatically deploy AWS Shield service onto its group of resources.
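The scale-out/scale-in behavior can be sketched as a toy decision function (thresholds and sizes are made up; real Auto Scaling reacts to CloudWatch alarms via scaling policies):

```python
# Toy illustration of scale-out/scale-in decisions. Thresholds and group
# sizes are hypothetical; real Auto Scaling uses CloudWatch alarms and
# scaling policies rather than this simple rule.
def desired_capacity(current: int, avg_cpu: float,
                     min_size: int = 1, max_size: int = 10) -> int:
    if avg_cpu > 70:                       # scale out on high load
        return min(current + 1, max_size)
    if avg_cpu < 30:                       # scale in on low load
        return max(current - 1, min_size)
    return current                         # within target range: no change
```

Note that the group never shrinks below `min_size` or grows beyond `max_size`, mirroring the min/max bounds of an Auto Scaling group.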

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
32
Q

Which AWS service can be used to provision resources to run big data workloads on Hadoop clusters?

a. AWS Step Functions
b. AWS Batch
c. Amazon Elastic Compute Cloud (Amazon EC2)
d. Amazon EMR

A

d. Amazon EMR

Amazon EMR is the industry-leading cloud big data platform for processing vast amounts of data using open source tools such as Hadoop, Apache Spark, Apache Hive, Apache HBase, Apache Flink, Apache Hudi, and Presto. Amazon EMR can be used to provision resources to run big data workloads on Hadoop clusters.

Incorrect options:

AWS Step Functions - AWS Step Functions service lets you coordinate multiple AWS services into serverless workflows. You can design and run workflows that stitch together services such as AWS Lambda, AWS Glue and Amazon SageMaker.

AWS Step Functions Overview: https://aws.amazon.com/step-functions/

AWS Batch - You can use AWS Batch to plan, schedule, and execute your batch computing workloads across the full range of AWS compute services. AWS Batch dynamically provisions the optimal quantity and type of compute resources (e.g., CPU- or memory-optimized instances) based on the volume and specific resource requirements of the submitted batch jobs, and optimizes job distribution accordingly.

Please review the common use-cases for AWS Batch: https://aws.amazon.com/batch/

Exam Alert:

Understand the difference between AWS Step Functions and AWS Batch. You may get questions to choose one over the other. AWS Batch runs batch computing workloads by provisioning the compute resources. AWS Step Functions does not provision any resources. AWS Step Functions only orchestrates AWS services required for a given workflow. You cannot use AWS Step Functions to plan, schedule and execute your batch computing workloads by provisioning underlying resources.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
33
Q

A company wants to move to AWS cloud and release new features with quick iterations by utilizing relevant AWS services whenever required. Which of the following characteristics of AWS Cloud does it want to leverage?
a. Scalability
b. Agility
c. Reliability
d. Elasticity

A

b. Agility

In the world of cloud computing, "Agility" refers to the ability to rapidly develop, test, and launch software applications that drive business growth. Put another way: AWS provides a massive global cloud infrastructure that allows you to quickly innovate, experiment, and iterate. Instead of waiting weeks or months for hardware, you can instantly deploy new applications. This ability is called agility.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
34
Q

Which of the following is the correct statement regarding the AWS Storage services?

a. Amazon Simple Storage Service (Amazon S3) is file based storage, Amazon Elastic Block Store (Amazon EBS) is block based storage and Amazon Elastic File System (Amazon EFS) is object based storage
b. Amazon Simple Storage Service (Amazon S3) is block based storage, Amazon Elastic Block Store (Amazon EBS) is object based storage and Amazon Elastic File System (Amazon EFS) is file based storage
c. Amazon Simple Storage Service (Amazon S3) is object based storage, Amazon Elastic Block Store (Amazon EBS) is file based storage and Amazon Elastic File System (Amazon EFS) is block based storage
d. Amazon Simple Storage Service (Amazon S3) is object based storage, Amazon Elastic Block Store (Amazon EBS) is block based storage and Amazon Elastic File System (Amazon EFS) is file based storage

A

d. Amazon Simple Storage Service (Amazon S3) is object based storage, Amazon Elastic Block Store (Amazon EBS) is block based storage and Amazon Elastic File System (Amazon EFS) is file based storage

Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed, elastic NFS file system.

Amazon Elastic Block Store (Amazon EBS) is an easy to use, high-performance block storage service designed for use with Amazon Elastic Compute Cloud (Amazon EC2) for both throughput and transaction-intensive workloads at any scale.

Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
35
Q

A startup has just moved its IT infrastructure to AWS Cloud. The CTO would like to receive detailed reports that break down the startup’s AWS costs by the hour in an Amazon Simple Storage Service (Amazon S3) bucket. As a Cloud Practitioner, which AWS service would you recommend for this use-case?

a. AWS Cost & Usage Report (AWS CUR)
b. AWS Budgets
c. AWS Cost Explorer
d. AWS Pricing Calculator

A

a. AWS Cost & Usage Report (AWS CUR)

AWS Cost & Usage Report (AWS CUR) contains the most comprehensive set of cost and usage data available. You can use AWS Cost & Usage Report (AWS CUR) to publish your AWS billing reports to an Amazon Simple Storage Service (Amazon S3) bucket that you own. You can receive reports that break down your costs by the hour or month, by product or product resource, or by tags that you define yourself. AWS updates the report in your bucket once a day in comma-separated value (CSV) format.

AWS Cost & Usage Report (AWS CUR) Overview: https://docs.aws.amazon.com/cur/latest/userguide/what-is-cur.html
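Since the report lands in S3 as CSV, summing costs by hour is a simple parse. The column names below are a simplified subset for illustration; a real CUR file has many more columns:

```python
import csv
import io
from collections import defaultdict

# Summing hourly costs from a CUR-style CSV. The columns here are a
# simplified, hypothetical subset of a real Cost & Usage Report.
sample = """\
usage_start_hour,service,cost
2024-01-01T00:00:00Z,AmazonEC2,0.12
2024-01-01T00:00:00Z,AmazonS3,0.03
2024-01-01T01:00:00Z,AmazonEC2,0.12
"""

cost_by_hour = defaultdict(float)
for row in csv.DictReader(io.StringIO(sample)):
    cost_by_hour[row["usage_start_hour"]] += float(row["cost"])
```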

Incorrect options:

AWS Cost Explorer - AWS Cost Explorer has an easy-to-use interface that lets you visualize, understand, and manage your AWS costs and usage over time. AWS Cost Explorer includes a default report that helps you visualize the costs and usage associated with your top five cost-accruing AWS services, and gives you a detailed breakdown of all services in the table view. The reports let you adjust the time range to view historical data going back up to twelve months to gain an understanding of your cost trends. AWS Cost Explorer cannot provide a detailed report of your AWS costs by the hour into an Amazon S3 bucket.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
36
Q

Which AWS support plan provides access to a designated Technical Account Manager (TAM)?

a. AWS Enterprise Support
b. AWS Enterprise On-Ramp Support
c. AWS Business Support
d. AWS Developer Support

A

a. AWS Enterprise Support

AWS offers four different support plans to cater to each of its customers - AWS Developer Support, AWS Business Support, AWS Enterprise On-Ramp Support and AWS Enterprise Support plans. A basic support plan is included for all AWS customers.

AWS Enterprise Support provides customers with concierge-like service where the main focus is helping the customer achieve their outcomes and find success in the cloud. With AWS Enterprise Support, you get 24x7 technical support from high-quality engineers, tools and technology to automatically manage the health of your environment, consultative architectural guidance delivered in the context of your applications and use-cases, and a designated Technical Account Manager (TAM) to coordinate access to proactive/preventative programs and AWS subject matter experts.

Exam Alert:

Please review the differences between the AWS Developer Support, AWS Business Support, AWS Enterprise On-Ramp Support and AWS Enterprise Support plans as you can expect at least a couple of questions on the exam: https://aws.amazon.com/premiumsupport/plans/

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
37
Q

A company runs an application on a fleet of EC2 instances. The company wants to automate the traditional maintenance job of running timely assessments and checking for OS vulnerabilities. As a Cloud Practitioner, which service will you suggest for this use case?
a. Amazon GuardDuty
b. AWS Shield
c. Amazon Inspector
d. Amazon Macie

A

c. Amazon Inspector

Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on your Amazon EC2 instances. Amazon Inspector automatically assesses applications for exposure, vulnerabilities, and deviations from best practices. After performing an assessment, Amazon Inspector produces a detailed list of security findings prioritized by level of severity. These findings can be reviewed directly or as part of detailed assessment reports which are available via the Amazon Inspector console or API.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
38
Q

Which of the following statements is correct regarding the AWS pricing policy for data transfer charges into or out of an AWS Region?

a. Both inbound data transfer and outbound data transfer are charged
b. Only outbound data transfer is charged
c. Only inbound data transfer is charged
d. Neither inbound nor outbound data transfer are charged

A

b. Only outbound data transfer is charged

One of the main benefits of cloud services is the ability it gives you to optimize costs to match your needs, even as those needs change. AWS services do not have complex dependencies or licensing requirements, so you can get exactly what you need to build innovative, cost-effective solutions using the latest technology.

There are three fundamental drivers of cost with AWS: compute, storage, and outbound data transfer. These characteristics vary somewhat, depending on the AWS product and pricing model you choose. Outbound data to the internet from all AWS regions is billed at region-specific, tiered data transfer rates. Inbound data transfer into all AWS regions from the internet is free.
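The "free in, tiered out" model can be sketched as follows. All rates and tier boundaries below are hypothetical, purely to show the mechanics of tiered billing; real rates are Region-specific:

```python
# Tiered outbound data-transfer billing (rates and tier sizes are made up
# for illustration; real rates are Region-specific). Inbound transfer
# from the internet is free, so only outbound GB are priced.
TIERS = [                 # (tier size in GB, $ per GB) -- hypothetical
    (10_000, 0.09),       # first 10 TB
    (40_000, 0.085),      # next 40 TB
    (float("inf"), 0.07), # everything beyond that
]

def outbound_cost(gb: float) -> float:
    total, remaining = 0.0, gb
    for size, rate in TIERS:
        used = min(remaining, size)
        total += used * rate
        remaining -= used
        if remaining <= 0:
            break
    return round(total, 2)

def inbound_cost(gb: float) -> float:
    return 0.0            # inbound from the internet is not charged
```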

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
39
Q

Which AWS service should be used when you want to run container applications, but want to avoid the operational overhead of scaling, patching, securing, and managing servers?

a. Amazon Elastic Container Service (Amazon ECS) - Fargate launch type
b. AWS Lambda
c. Amazon Elastic Compute Cloud (Amazon EC2)
d. Amazon Elastic Container Service (Amazon ECS) - EC2 launch type

A

a. Amazon Elastic Container Service (Amazon ECS) - Fargate launch type

AWS Fargate is a serverless compute engine for containers. It works with both Amazon Elastic Container Service (Amazon ECS) and Amazon Elastic Kubernetes Service (Amazon EKS). AWS Fargate lets you focus on building your applications: it removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design. AWS Fargate allocates the right amount of compute, eliminating the need to choose instances and scale cluster capacity. You only pay for the resources required to run your containers, so there is no over-provisioning or paying for additional servers. AWS Fargate runs each task or pod in its own kernel, providing each task or pod with an isolated compute environment for workload isolation and improved security by design.

How AWS Fargate Works: https://aws.amazon.com/fargate/

Incorrect options:

Amazon Elastic Container Service (Amazon ECS) - EC2 launch type - Amazon Elastic Container Service (Amazon ECS) is a highly scalable, fast, container management service that makes it easy to run, stop, and manage Docker containers and allows you to easily run applications on a managed cluster of Amazon EC2 instances. Unlike AWS Fargate, this is not a fully managed service and you need to manage the underlying servers yourself.

AWS Lambda - AWS Lambda is a compute service that lets you run code without provisioning or managing servers. AWS Lambda executes your code only when needed and scales automatically, from a few requests per day to thousands per second. Although Lambda functions can be packaged as container images, Lambda is built for short-lived, event-driven functions rather than long-running container applications, so it is not the right fit here.

Amazon Elastic Compute Cloud (Amazon EC2) - Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud, per-second billing, and access to the underlying OS. It is designed to make web-scale cloud computing easier for developers. Maintenance of the server and its software has to be done by the customer, so this option is ruled out.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
40
Q

Which AWS service can be used to automate code deployment to Amazon Elastic Compute Cloud (Amazon EC2) instances as well as on-premises instances?

a. AWS CodePipeline
b. AWS CloudFormation
c. AWS CodeCommit
d. AWS CodeDeploy

A

d. AWS CodeDeploy

AWS CodeDeploy is a service that automates code deployments to any instance, including Amazon EC2 instances and instances running on-premises. AWS CodeDeploy makes it easier for you to rapidly release new features, helps you avoid downtime during deployment, and handles the complexity of updating your applications. You can use AWS CodeDeploy to automate deployments, eliminating the need for error-prone manual operations, and the service scales with your infrastructure so you can easily deploy to one instance or thousands.

Incorrect options:

AWS CodeCommit - AWS CodeCommit is a fully-managed source control service that hosts secure Git-based repositories. It makes it easy for teams to collaborate on code in a secure and highly scalable ecosystem. CodeCommit eliminates the need to operate your own source control system or worry about scaling its infrastructure. It cannot be used to automate code deployment.

AWS CloudFormation - AWS CloudFormation allows you to use programming languages or a simple text file to model and provision, in an automated and secure manner, all the resources needed for your applications across all regions and accounts. It cannot be used to automate code deployment.

AWS CodePipeline - AWS CodePipeline is a continuous delivery service that enables you to model, visualize, and automate the steps required to release your software. With AWS CodePipeline, you model the full release process for building your code, deploying to pre-production environments, testing your application and releasing it to production.

AWS CodePipeline integrates with AWS services such as AWS CodeCommit, Amazon S3, AWS CodeBuild, AWS CodeDeploy, AWS Elastic Beanstalk, AWS CloudFormation, AWS OpsWorks, Amazon ECS, and AWS Lambda. To clarify: CodePipeline cannot deploy code by itself; it integrates with CodeDeploy for the actual deployment.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
41
Q

Which of the following entities applies patches to the underlying OS for Amazon Aurora?
a. The AWS customer by SSHing on the instances
b. The AWS customer by using AWS Systems Manager
c. The AWS Support after receiving a request from the customer
d. The AWS Product Team automatically

A

d. The AWS Product Team automatically

Amazon Aurora is a MySQL and PostgreSQL-compatible relational database built for the cloud. Amazon Aurora is fully managed by Amazon Relational Database Service (RDS), which automates time-consuming administration tasks like hardware provisioning, database setup, patching, and backups. The AWS Product team is responsible for applying patches to the underlying OS for Amazon Aurora.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
42
Q

Which of the following improves the availability for a fleet of Amazon Elastic Compute Cloud (Amazon EC2) instances?

a. Deploy the Amazon Elastic Compute Cloud (Amazon EC2) instances in the same Availability Zone (AZ) across two different AWS Regions
b. Deploy the Amazon Elastic Compute Cloud (Amazon EC2) instances across different Availability Zones (AZ) in the same AWS Region
c. Deploy the Amazon Elastic Compute Cloud (Amazon EC2) instances in the same Availability Zone (AZ) of an AWS Region
d. Deploy the Amazon Elastic Compute Cloud (Amazon EC2) instances across different AWS Regions of the same Availability Zone (AZ)

A

b. Deploy the Amazon Elastic Compute Cloud (Amazon EC2) instances across different Availability Zones (AZ) in the same AWS Region

An Availability Zone (AZ) belongs to exactly one AWS Region, so the options describing "the same AZ across two Regions" or "different Regions of the same AZ" are not possible. Deploying all instances in a single AZ is possible, but it does not improve availability: an outage of that AZ would take down the whole fleet. Spreading instances across multiple AZs in the same Region lets the fleet survive the failure of any single AZ.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
43
Q

Which of the following is CORRECT regarding removing an AWS account from AWS Organizations?
a. Raise a support ticket with AWS Support to remove the account
b. The AWS account must be able to operate as a standalone account. Only then can it be removed from AWS Organizations
c. The AWS account can be removed from AWS Systems Manager
d. The AWS account must not have any Service Control Policies (SCPs) attached to it. Only then can it be removed from AWS Organizations

A

b. The AWS account must be able to operate as a standalone account. Only then can it be removed from AWS Organizations

You can remove an account from your organization only if the account has the information that is required for it to operate as a standalone account. For each account that you want to make standalone, you must accept the AWS Customer Agreement, choose a support plan, provide and verify the required contact information, and provide a current payment method. AWS uses the payment method to charge for any billable (not AWS Free Tier) AWS activity that occurs while the account isn’t attached to an organization.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
44
Q

A company wants to identify the optimal AWS resource configuration for its workloads so that the company can reduce costs and increase workload performance. Which of the following services can be used to meet this requirement?
a. AWS Cost Explorer
b. AWS Systems Manager
c. AWS Budgets
d. AWS Compute Optimizer

A

d. AWS Compute Optimizer

AWS Compute Optimizer recommends optimal AWS resources for your workloads to reduce costs and improve performance by using machine learning to analyze historical utilization metrics. Over-provisioning resources can lead to unnecessary infrastructure costs, and under-provisioning resources can lead to poor application performance. Compute Optimizer helps you choose optimal configurations for three types of AWS resources: Amazon EC2 instances, Amazon EBS volumes, and AWS Lambda functions, based on your utilization data.

Compute Optimizer recommends up to 3 options from 140+ EC2 instance types, as well as a wide range of EBS volume and Lambda function configuration options, to right-size your workloads. Compute Optimizer also projects what the CPU utilization, memory utilization, and run time of your workload would have been on recommended AWS resource options. This helps you understand how your workload would have performed on the recommended options before implementing the recommendations.

How Compute Optimizer works: https://aws.amazon.com/compute-optimizer/

Incorrect options:

AWS Cost Explorer - AWS Cost Explorer has an easy-to-use interface that lets you visualize, understand, and manage your AWS costs and usage over time. Cost Explorer Resource Rightsizing Recommendations and Compute Optimizer use the same recommendation engine. The Compute Optimizer recommendation engine delivers recommendations to help customers identify optimal EC2 instance types for their workloads. The Cost Explorer console and API surface a subset of these recommendations that may lead to cost savings, and augments them with customer-specific cost and savings information (e.g. billing information, available credits, RI, and Savings Plans) to help Cost Management owners quickly identify savings opportunities through infrastructure rightsizing. Compute Optimizer console and its API delivers all recommendations regardless of the cost implications.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
45
Q

AWS Web Application Firewall (WAF) offers protection from common web exploits at which layer?
a. Layer 3
b. Layer 4 and 7
c. Layer 4
d. Layer 7

A

d. Layer 7

AWS WAF is a web application firewall that lets you monitor the HTTP and HTTPS requests that are forwarded to an Amazon API Gateway API, Amazon CloudFront or an Application Load Balancer. HTTP and HTTPS requests are part of the Application layer, which is layer 7.

Incorrect options:

Layer 3 - Layer 3 is the Network layer, which determines the path data takes across the network. AWS Shield offers protection at this layer. AWS WAF does not offer protection at this layer.

Layer 4 - Layer 4 is the Transport layer, where data is transmitted using the TCP or UDP protocols. AWS Shield offers protection at this layer. AWS WAF does not offer protection at this layer.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
46
Q

Which policy describes prohibited uses of the web services offered by Amazon Web Services?

a. AWS Trusted Advisor
b. AWS Acceptable Use Policy
c. AWS Fair Use Policy
d. AWS Applicable Use Policy

A

b. AWS Acceptable Use Policy

The Acceptable Use Policy describes prohibited uses of the web services offered by Amazon Web Services, Inc. and its affiliates (the "Services") and the website located at http://aws.amazon.com (the "AWS Site"). The policy is published at https://aws.amazon.com/aup/ and is updated by AWS as needed.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
47
Q

Which AWS service can be used for online analytical processing?

a. Amazon DynamoDB
b. Amazon Redshift
c. Amazon Relational Database Service (Amazon RDS)
d. Amazon ElastiCache

A

b. Amazon Redshift

Amazon Redshift is a fast, fully managed cloud data warehouse that makes it simple and cost-effective to analyze all your data using standard SQL and your existing Business Intelligence (BI) tools. It allows you to run complex analytic queries against terabytes to petabytes of structured data, using sophisticated query optimization, columnar storage on high-performance storage, and massively parallel query execution.
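"Complex analytic queries using standard SQL" means OLAP-style aggregation like the query below. Here `sqlite3` stands in purely to make the example runnable; Redshift would execute similar SQL over terabytes using columnar storage and massively parallel query execution:

```python
import sqlite3

# Standard-SQL analytic query of the kind a data warehouse runs.
# sqlite3 is used here only so the example is self-contained.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("us-east", 120.0), ("us-east", 80.0), ("eu-west", 50.0)])

rows = conn.execute(
    "SELECT region, SUM(amount) AS total FROM sales "
    "GROUP BY region ORDER BY total DESC"
).fetchall()
# rows -> [('us-east', 200.0), ('eu-west', 50.0)]
```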

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
48
Q

Which AWS technology/service helps you to scale your resources to match supply with demand while still keeping your cloud solution cost-effective?

a. AWS CloudFormation
b. AWS Cost Explorer
c. AWS Auto Scaling
d. AWS OpsWorks

A

c. AWS Auto Scaling

AWS Auto Scaling monitors applications and automatically adjusts capacity to maintain steady, predictable performance at the lowest possible cost. Using AWS Auto Scaling, it’s easy to set up application scaling for multiple resources across multiple services in minutes. The service provides a simple, powerful user interface that lets you build scaling plans for resources including Amazon EC2 instances and Spot Fleets, Amazon ECS tasks, Amazon DynamoDB tables and indexes, and Amazon Aurora Replicas. AWS Auto Scaling makes scaling simple with recommendations that allow you to optimize performance, costs, or balance between them.

How AWS Auto Scaling Works: https://aws.amazon.com/autoscaling/

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
49
Q

Which AWS service can help you create data-driven business cases for transitioning your business from on-premises to AWS Cloud?

a. AWS Budgets
b. AWS Billing and Cost Management
c. AWS Migration Evaluator
d. AWS Trusted Advisor

A

c. AWS Migration Evaluator

AWS Migration Evaluator (formerly TSO Logic) is a complimentary service to create data-driven business cases for AWS Cloud planning and migration.

AWS Migration Evaluator quickly provides a business case to make sound AWS planning and migration decisions. With AWS Migration Evaluator, your organization can build a data-driven business case for AWS, get access to AWS expertise, gain visibility into the costs associated with multiple migration strategies, and learn how reusing existing software licenses can reduce costs further.

Incorrect options:

AWS Budgets - AWS Budgets gives the ability to set custom budgets that alert you when your costs or usage exceed (or are forecasted to exceed) your budgeted amount. You can also use AWS Budgets to set reservation utilization or coverage targets and receive alerts when your utilization drops below the threshold you define. AWS Budgets can be created at the monthly, quarterly, or yearly level, and you can customize the start and end dates. You can further refine your budget to track costs associated with multiple dimensions, such as AWS service, linked account, tag, and others. You cannot use this service to create data-driven business cases for transitioning your business from on-premises to AWS Cloud.

AWS Trusted Advisor - AWS Trusted Advisor is an online tool that provides real-time guidance to help provision your resources following AWS best practices. Whether establishing new workflows, developing applications, or as part of ongoing improvement, recommendations provided by AWS Trusted Advisor regularly help keep your solutions provisioned optimally. AWS Trusted Advisor analyzes your AWS environment and provides best practice recommendations in five categories: Cost Optimization, Performance, Security, Fault Tolerance, Service Limits. You cannot use this service to create data-driven business cases for transitioning your business from on-premises to AWS Cloud.

AWS Billing and Cost Management - AWS Billing and Cost Management is the service that you use to pay your AWS bill, monitor your usage, and analyze and control your costs. It is the billing department for AWS services - with necessary tools and services under its hood. You cannot use this service to create data-driven business cases for transitioning your business from on-premises to AWS Cloud.

50
Q

Which of the following are correct statements regarding the AWS Shared Responsibility Model? (Select two)

a. Configuration Management is the responsibility of the customer
b. For abstracted services like Amazon S3, AWS operates the infrastructure layer, the operating system, and platforms
c. For a service like Amazon EC2, that falls under Infrastructure as a Service (IaaS), AWS is responsible for maintaining guest operating system
d. AWS is responsible for Security ‘of’ the Cloud
e. AWS is responsible for training AWS and customer employees on AWS products and services

A

b. For abstracted services like Amazon S3, AWS operates the infrastructure layer, the operating system, and platforms
d. AWS is responsible for Security ‘of’ the Cloud

Security and Compliance is a shared responsibility between AWS and the customer. This shared model can help relieve the customer’s operational burden as AWS operates, manages and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates.

AWS is responsible for protecting the infrastructure that runs all of the services offered in the AWS Cloud. This infrastructure is composed of the hardware, software, networking, and facilities that run AWS Cloud services.

For abstracted services, such as Amazon S3 and Amazon DynamoDB, AWS operates the infrastructure layer, the operating system, and platforms, and customers access the endpoints to store and retrieve data.

Incorrect options:

For a service like Amazon EC2, that falls under Infrastructure as a Service (IaaS), AWS is responsible for maintaining guest operating system - A service such as Amazon Elastic Compute Cloud (Amazon EC2) is categorized as Infrastructure as a Service (IaaS) and, as such, requires the customer to perform all of the necessary security configuration and management tasks. Customers are responsible for the management of the guest operating system (including updates and security patches), any application software or utilities installed by the customer on the instances, and the configuration of the AWS-provided firewall (called a security group) on each instance.

51
Q

A development team is looking for a forum where the most frequent questions and requests from AWS customers are listed along with AWS provided solutions.

Which AWS forum/service can be used for troubleshooting an issue or checking for a solution?

a. AWS Knowledge Center
b. AWS Marketplace
c. AWS Support Center
d. AWS Health Dashboard - service health

A

a. AWS Knowledge Center

AWS Knowledge Center contains the most frequently asked questions and requests, along with AWS-provided solutions for each. It should be the starting point when checking for a solution or troubleshooting an issue with AWS services. The URL for the Knowledge Center is https://aws.amazon.com/premiumsupport/knowledge-center/.

52
Q

Which type of cloud computing does Amazon Elastic Compute Cloud (EC2) represent?
a. Software as a Service (SaaS)
b. Network as a Service (NaaS)
c. Infrastructure as a Service (IaaS)
d. Platform as a Service (PaaS)

A

c. Infrastructure as a Service (IaaS)

Cloud Computing can be broadly divided into three types - Infrastructure as a Service (IaaS), Platform as a Service (PaaS), Software as a Service (SaaS).

IaaS contains the basic building blocks for cloud IT. It typically provides access to networking features, computers (virtual or on dedicated hardware), and data storage space. IaaS gives the highest level of flexibility and management control over IT resources.

EC2 gives you full control over managing the underlying OS, virtual network configurations, storage, data and applications. So EC2 is an example of an IaaS service.

Please review this overview of the types of Cloud Computing: https://aws.amazon.com/types-of-cloud-computing/

53
Q

What foundational capability under the operations perspective is part of the AWS Cloud Adoption Framework (AWS CAF)?

a. Performance and capacity management
b. Application portfolio management
c. Vulnerability management
d. Platform engineering

A

a. Performance and capacity management

The AWS Cloud Adoption Framework (AWS CAF) leverages AWS experience and best practices to help you digitally transform and accelerate your business outcomes through innovative use of AWS. Use the AWS CAF to identify and prioritize transformation opportunities, evaluate and improve your cloud readiness, and iteratively evolve your transformation roadmap.

AWS CAF groups its capabilities in six perspectives: Business, People, Governance, Platform, Security, and Operations. Each perspective comprises a set of capabilities that functionally related stakeholders own or manage in your cloud transformation journey.

The Operations perspective helps ensure that your cloud services are delivered at a level that meets the needs of your business. Performance and capacity management under the Operations perspective is part of the AWS Cloud Adoption Framework (AWS CAF).

AWS Cloud Adoption Framework (AWS CAF) - Foundational capabilities: https://docs.aws.amazon.com/whitepapers/latest/overview-aws-cloud-adoption-framework/foundational-capabilities.html

Incorrect options:

Vulnerability management - Vulnerability management is a foundational capability under the Security perspective for the AWS Cloud Adoption Framework (AWS CAF).

Platform engineering - Platform engineering is a foundational capability under the Platform perspective for the AWS Cloud Adoption Framework (AWS CAF).

Application portfolio management - Application Portfolio Management is a foundational capability under the Governance perspective for the AWS Cloud Adoption Framework (AWS CAF).

54
Q

What are the advantages that AWS Cloud offers over a traditional on-premises IT infrastructure? (Select two)
a. Trade capital expense for variable expense
b. Make a capacity decision before deploying an application, to reduce costs
c. Increase speed and agility by keeping servers and other required resources ready before time in your data centers
d. Provide lower latency to applications by maintaining servers on-premises
e. Eliminate guessing on your infrastructure capacity needs

A

a. Trade capital expense for variable expense

e. Eliminate guessing on your infrastructure capacity needs

Trade capital expense for variable expense

In a traditional on-premises environment, you have to invest heavily in data centers and servers before you know how you’re going to use them. With Cloud Computing, you can pay only when you consume computing resources, and pay only for how much you consume.

Eliminate guessing on your infrastructure capacity needs

When you make a capacity decision before deploying an application, you often end up either sitting on expensive idle resources or dealing with limited capacity. With Cloud Computing, these problems go away. You can access as much or as little capacity as you need, and scale up and down as required with only a few minutes’ notice. You can stop guessing your capacity needs.

55
Q

Which AWS services can be used to decouple components of a microservices based application on AWS Cloud? (Select two)
a. Amazon Simple Queue Service (SQS)
b. AWS Lambda
c. AWS Step Functions
d. Amazon Elastic Compute Cloud (Amazon EC2)
e. Amazon Simple Notification Service (SNS)

A

a. Amazon Simple Queue Service (SQS)

e. Amazon Simple Notification Service (SNS)

Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. Using SQS, you can send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available.

Amazon Simple Notification Service (SNS) is a highly available, durable, secure, fully managed pub/sub messaging service that enables you to decouple microservices, distributed systems, and serverless applications. Using Amazon SNS topics, your publisher systems can fan-out messages to a large number of subscriber endpoints for parallel processing, including Amazon SQS queues, AWS Lambda functions, and HTTP/S webhooks. Additionally, SNS can be used to fan out notifications to end users using mobile push, SMS, and email.

Therefore, both SNS and SQS can be used to decouple components of a microservices-based application.
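The decoupling idea can be sketched in a few lines. The in-process queue below is only a stand-in for SQS (the `produce`/`consume` names are hypothetical), but it shows the essential point: producer and consumer share nothing except the queue, so neither depends on the other being available:

```python
from queue import Queue

# Minimal in-process stand-in for an SQS queue.
orders = Queue()

def produce():
    for order_id in (101, 102, 103):
        orders.put({"order_id": order_id})  # analogous to SQS SendMessage

def consume():
    processed = []
    while not orders.empty():
        processed.append(orders.get()["order_id"])  # analogous to ReceiveMessage
    return processed

produce()
result = consume()
print(result)  # → [101, 102, 103]
```

With real SQS the queue also persists messages durably, so the consumer can be down while the producer keeps sending.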

56
Q

According to the AWS Cloud Adoption Framework (AWS CAF), what are two tasks that a company should perform when planning to migrate to the AWS Cloud and aiming to become more responsive to customer inquiries and feedback as part of their organizational transformation? (Select two)
a. Leverage legacy infrastructure for cost efficiencies
b. Leverage agile methods to rapidly iterate and evolve
c. Organize your teams around bureaucratic design principles
d. Organize your teams around products and value streams
e. Create new analytical insights with existing products and services

A

b. Leverage agile methods to rapidly iterate and evolve

d. Organize your teams around products and value streams

The AWS Cloud Adoption Framework (AWS CAF) leverages AWS experience and best practices to help you digitally transform and accelerate your business outcomes through innovative use of AWS. AWS CAF identifies specific organizational capabilities that underpin successful cloud transformations. These capabilities provide best practice guidance that helps you improve your cloud readiness. AWS CAF groups its capabilities in six perspectives: Business, People, Governance, Platform, Security, and Operations.

Organize your teams around products and value streams
Leverage agile methods to rapidly iterate and evolve

Using the AWS Cloud Adoption Framework (AWS CAF), you can reimagine how your business and technology teams create customer value and meet your strategic intent. Organizing your teams around products and value streams while leveraging agile methods to rapidly iterate and evolve will help you become more responsive and customer centric.

57
Q

A company needs a storage solution for a project wherein the data is accessed less frequently but needs rapid access when required. Which S3 storage class is the MOST cost-effective for the given use-case?
a. Amazon S3 Intelligent-Tiering (S3 Intelligent-Tiering)
b. Amazon S3 Standard-Infrequent Access (S3 Standard-IA)
c. Amazon S3 Glacier (S3 Glacier)
d. Amazon S3 Standard

A

b. Amazon S3 Standard-Infrequent Access (S3 Standard-IA)

Amazon S3 Standard-Infrequent Access (S3 Standard-IA) is for data that is accessed less frequently, but requires rapid access when needed. S3 Standard-IA offers the high durability, high throughput, and low latency of S3 Standard, with a low per-GB storage price and a per-GB retrieval fee. This combination of low cost and high performance makes S3 Standard-IA ideal for long-term storage, backups, and as a data store for disaster recovery files.
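A back-of-the-envelope comparison shows why Standard-IA wins when access is infrequent; the per-GB rates below are placeholders for illustration, not current AWS prices:

```python
# Illustrative per-GB prices only (NOT current AWS rates); the point is the
# trade-off: lower storage price, but a retrieval fee per GB accessed.
STANDARD_STORAGE = 0.023   # $/GB-month, hypothetical
IA_STORAGE = 0.0125        # $/GB-month, hypothetical
IA_RETRIEVAL = 0.01        # $/GB retrieved, hypothetical

def monthly_cost(gb_stored, gb_retrieved, storage_rate, retrieval_rate=0.0):
    return gb_stored * storage_rate + gb_retrieved * retrieval_rate

# 1 TB stored, only 50 GB retrieved per month (infrequent access)
standard = round(monthly_cost(1000, 50, STANDARD_STORAGE), 2)
infrequent = round(monthly_cost(1000, 50, IA_STORAGE, IA_RETRIEVAL), 2)
print(standard, infrequent)  # → 23.0 13.0
```

If the retrieved volume were large relative to what is stored, the retrieval fee would erode the advantage, which is why IA suits genuinely infrequent access.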

58
Q

A company is using a message broker service on its on-premises application and wants to move this messaging functionality to AWS Cloud. Which of the following AWS services is the right choice to move the existing functionality easily?

a. Amazon Simple Queue Service (Amazon SQS)
b. Amazon MQ
c. Amazon Simple Notification Service (Amazon SNS)
d. Amazon Kinesis Data Streams

A

b. Amazon MQ

Amazon MQ is a managed message broker service for Apache ActiveMQ and RabbitMQ that makes it easy to set up and operate message brokers on AWS. Amazon MQ reduces your operational responsibilities by managing the provisioning, setup, and maintenance of message brokers for you. Because Amazon MQ connects to your current applications with industry-standard APIs and protocols, you can easily migrate to AWS without having to rewrite code.

If you’re using messaging with existing applications, and want to move the messaging functionality to the cloud quickly and easily, AWS recommends you consider Amazon MQ. It supports industry-standard APIs and protocols so you can switch from any standards-based message broker to Amazon MQ without rewriting the messaging code in your applications. If you are building brand new applications in the cloud, AWS recommends you consider Amazon SQS and Amazon SNS.

How Amazon MQ works: https://aws.amazon.com/amazon-mq/

Incorrect options:

Amazon Kinesis Data Streams - Amazon Kinesis Data Streams enables you to build custom applications that process or analyze streaming data for specialized needs. You can continuously add various types of data such as clickstreams, application logs, and social media to an Amazon Kinesis data stream from hundreds of thousands of sources. Within seconds, the data will be available for your Amazon Kinesis Applications to read and process from the stream.

59
Q

Which of the following AWS services can be used to prevent Distributed Denial-of-Service (DDoS) attack? (Select three)

a. AWS Shield
b. Amazon Inspector
c. Amazon CloudFront with Amazon Route 53
d. AWS Web Application Firewall (AWS WAF)
e. AWS Trusted Advisor
f. AWS CloudHSM

A

a. AWS Shield
c. Amazon CloudFront with Amazon Route 53
d. AWS Web Application Firewall (AWS WAF)

AWS Shield - AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards applications running on AWS. AWS Shield provides always-on detection and automatic inline mitigations that minimize application downtime and latency, so there is no need to engage AWS Support to benefit from DDoS protection.

AWS Web Application Firewall (AWS WAF) - By using AWS Web Application Firewall (AWS WAF), you can configure web access control lists (Web ACLs) on your Amazon CloudFront distributions or Application Load Balancers to filter and block requests based on request signatures. Additionally, by using AWS Web Application Firewall (AWS WAF) rate-based rules, you can automatically block the IP addresses of bad actors when requests matching a rule exceed a threshold that you define.

Amazon CloudFront with Amazon Route 53 - AWS hosts Amazon CloudFront and Amazon Route 53 services on a distributed network of proxy servers in data centers throughout the world called edge locations. Using the global Amazon network of edge locations for application delivery and DNS service plays an important part in building a comprehensive defense against DDoS attacks for your dynamic web applications.

How AWS Shield, AWS Web Application Firewall (AWS WAF), and Amazon CloudFront with Amazon Route 53 help mitigate DDoS attacks: https://aws.amazon.com/blogs/security/how-to-protect-dynamic-web-applications-against-ddos-attacks-by-using-amazon-cloudfront-and-amazon-route-53/

60
Q

Which of the following statements are true about Cost Allocation Tags in AWS Billing? (Select two)

a. Tags help in organizing resources and are a mandatory configuration item to run reports
b. Only user-defined tags need to be activated before they can appear in Cost Explorer or on a cost allocation report
c. For each resource, each tag key must be unique, but can have multiple values
d. For each resource, each tag key must be unique, and each tag key can have only one value
e. You must activate both AWS generated tags and user-defined tags separately before they can appear in Cost Explorer or on a cost allocation report

A

d. For each resource, each tag key must be unique, and each tag key can have only one value
e. You must activate both AWS generated tags and user-defined tags separately before they can appear in Cost Explorer or on a cost allocation report

A Cost Allocation Tag is a label that you or AWS assigns to an AWS resource. Each tag consists of a key and a value. For each resource, each tag key must be unique, and each tag key can have only one value. You can use tags to organize your resources, and cost allocation tags to track your AWS costs on a detailed level.

AWS provides two types of cost allocation tags: AWS generated tags and user-defined tags. AWS defines, creates, and applies the AWS generated tags for you, and you define, create, and apply user-defined tags. You must activate both types of tags separately before they can appear in Cost Explorer or on a cost allocation report.

AWS Cost Allocation Tags Overview: https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html
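The key-uniqueness rule maps naturally onto a dictionary; a minimal sketch (the tag names are made up):

```python
# Cost allocation tags behave like a per-resource dict: each tag key is
# unique, and a key holds exactly one value (re-tagging overwrites it).
resource_tags = {}

def apply_tag(tags, key, value):
    tags[key] = value  # applying the same key again replaces the value
    return tags

apply_tag(resource_tags, "CostCenter", "78925")
apply_tag(resource_tags, "Environment", "prod")
apply_tag(resource_tags, "CostCenter", "11223")  # overwrites, key stays unique
print(resource_tags)  # → {'CostCenter': '11223', 'Environment': 'prod'}
```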

Incorrect options:

Tags help in organizing resources and are a mandatory configuration item to run reports - Tags definitely help organize resources as per an organization’s requirement; they are not mandatory though.

For each resource, each tag key must be unique, but can have multiple values - For each resource, each tag key must be unique, and each tag key can have only one value.

Only user-defined tags need to be activated before they can appear in Cost Explorer or on a cost allocation report - As explained above, both kinds of tags (user-defined and AWS generated) need to be activated separately before they can appear in report generation.

61
Q

Which service gives a personalized view of the status of the AWS services that are part of your Cloud architecture so that you can quickly assess the impact on your business when AWS service(s) are experiencing issues?

a. Amazon Inspector
b. Amazon CloudWatch
c. AWS Health - Your Account Health Dashboard
d. AWS Health - Service Health Dashboard

A

c. AWS Health - Your Account Health Dashboard

AWS Health - Your Account Health Dashboard provides alerts and remediation guidance when AWS is experiencing events that may impact you.

With AWS Health - Your Account Health Dashboard, alerts are triggered by changes in the health of your AWS resources, giving you event visibility, and guidance to help quickly diagnose and resolve issues.

You can check on this page https://phd.aws.amazon.com/phd/home to get current status information.

Incorrect options:

AWS Health - Service Health Dashboard - The AWS Health - Service Health Dashboard is the single place to learn about the availability and operations of AWS services. You can view the overall status of AWS services, and you can sign in to view personalized communications about your particular AWS account or organization.

You can check on this page https://health.aws.amazon.com/health/status to get current status information.

Exam Alert:

While the AWS Health - Service Health Dashboard displays the general status of AWS services; the AWS Health - Your Account Health Dashboard gives you a personalized view of the performance and availability of the AWS services underlying your AWS resources.

62
Q

A company wants to have control over creating and using its own keys for encryption on AWS services. Which of the following can be used for this use-case?
a. customer managed key (CMK)
b. AWS Secrets Manager
c. AWS owned key
d. AWS managed key

A

a. customer managed key (CMK)

An AWS KMS key is a logical representation of a cryptographic key. A KMS key contains metadata, such as the key ID, key spec, key usage, creation date, description, and key state. Most importantly, it contains a reference to the key material that is used when you perform cryptographic operations with the KMS key.

The KMS keys that you create are customer managed keys. Customer managed keys are KMS keys in your AWS account that you create, own, and manage. You have full control over these KMS keys, including establishing and maintaining their key policies, IAM policies, and grants, enabling and disabling them, rotating their cryptographic material, adding tags, creating aliases that refer to the KMS keys, and scheduling the KMS keys for deletion.

63
Q

Which budget types can be created under AWS Budgets (Select three)?

a. Resource budget
b. Reservation budget
c. Hardware budget
d. Software budget
e. Cost budget
f. Usage budget

A

b. Reservation budget
e. Cost budget
f. Usage budget

AWS Budgets enables you to plan your service usage, service costs, and instance reservations. AWS Budgets information is updated up to three times a day; updates typically occur between 8 and 12 hours after the previous update. Budgets track your unblended costs, subscriptions, refunds, and Reserved Instances (RIs). There are four different budget types you can create under AWS Budgets - Cost budget, Usage budget, Reservation budget, and Savings Plans budget.

Cost budget - Helps you plan how much you want to spend on a service.

Usage budget - Helps you plan how much you want to use one or more services.

Reservation budget - This helps you track the usage of your Reserved Instances (RI). Two ways of doing it - Reserved Instance (RI) utilization budgets (This lets you see if your Reserved Instances (RI) are unused or under-utilized), Reserved Instance (RI) coverage budgets (This lets you see how much of your instance usage is covered by a reservation).

64
Q

Which of the following is a part of the AWS Global Infrastructure?

a. Virtual Private Cloud (VPC)
b. Subnet
c. Virtual Private Network (VPN)
d. AWS Region

A

d. AWS Region

AWS Region is a physical location around the world where AWS builds its data centers. Each group of logical data centers is called an Availability Zone (AZ). Each AWS Region consists of multiple, isolated, and physically separate AZ’s within a geographic area.

Please see this illustration for AWS regions in the US: https://aws.amazon.com/about-aws/global-infrastructure/regions_az/

65
Q

Which of the following statements are correct about the AWS root user account? (Select two)

a. It is highly recommended to enable Multi-Factor Authentication (MFA) for root user account
b. Root user access credentials are the email address and password used to create the AWS account
c. Root user account password cannot be changed once it is set
d. Root user account gets unrestricted permissions when the account is created, but these can be restricted using IAM policies
e. Root user credentials should only be shared with managers requiring administrative responsibilities to complete their jobs

A

a. It is highly recommended to enable Multi-Factor Authentication (MFA) for root user account
b. Root user access credentials are the email address and password used to create the AWS account

The Email address and the password used for signing up for AWS services are the AWS root user account credentials. Root user account, therefore, has full permissions on all AWS resources under that account. Restricting root user account access is not possible. As a best practice, Multi-Factor Authentication (MFA) should be set on the root user account. The root user account password can be changed after account creation. For all employees performing various administrative jobs, create individual user accounts using AWS IAM, and give administrative permissions as needed.

AWS Root User Account Security Best Practices: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_root-user.html

66
Q

AWS Trusted Advisor analyzes your AWS environment and provides best practice recommendations for which of the following categories? (Select two)

a. Change Management
b. Service Limits
c. Elasticity
d. Cost Optimization
e. Documentation

A

b. Service Limits
d. Cost Optimization

AWS Trusted Advisor is an online tool that provides real-time guidance to help provision your resources following AWS best practices. Whether establishing new workflows, developing applications, or as part of ongoing improvement, recommendations provided by Trusted Advisor on a regular basis help keep your solutions provisioned optimally. AWS Trusted Advisor analyzes your AWS environment and provides best practice recommendations in five categories: Cost Optimization, Performance, Security, Fault Tolerance, Service Limits.

How Trusted Advisor Works:

AWS Trusted Advisor Recommendations:
https://aws.amazon.com/premiumsupport/technology/trusted-advisor/

67
Q

An organization is planning to move its infrastructure from the on-premises datacenter to AWS Cloud. As a Cloud Practitioner, which options would you recommend so that the organization can identify the right AWS services to build solutions on AWS Cloud? (Select two)

a. AWS CloudTrail
b. Amazon CloudWatch
c. AWS Partner Network (APN)
d. AWS Service Catalog
e. AWS Organizations

A

c. AWS Partner Network (APN)
d. AWS Service Catalog

AWS Service Catalog - AWS Service Catalog allows organizations to create and manage catalogs of IT services that are approved for use on AWS. These IT services can include everything from virtual machine images, servers, software, and databases to complete multi-tier application architectures.

AWS Partner Network (APN) - Organizations can take help from the AWS Partner Network (APN) to identify the right AWS services to build solutions on AWS Cloud. AWS Partner Network (APN) is the global partner program for technology and consulting businesses that leverage Amazon Web Services to build solutions and services for customers.

68
Q

An IT company wants to run a log backup process every Monday at 2 AM. The usual runtime of the process is 5 minutes. As a Cloud Practitioner, which AWS services would you recommend to build a serverless solution for this use-case? (Select two)

a. AWS Step Functions
b. AWS Lambda
c. Amazon Elastic Compute Cloud (Amazon EC2)
d. Amazon EventBridge
e. AWS Systems Manager

A

b. AWS Lambda

d. Amazon EventBridge

Amazon EventBridge - Amazon EventBridge is a service that provides real-time access to changes in data in AWS services, your own applications, and software as a service (SaaS) applications without writing code. Amazon EventBridge Scheduler is a serverless task scheduler that simplifies creating, executing, and managing millions of schedules across AWS services without provisioning or managing underlying infrastructure.

Amazon EventBridge Scheduler: via - https://aws.amazon.com/eventbridge/

AWS Lambda - AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume. A Lambda function has a maximum execution time of 15 minutes, so it can comfortably run this 5-minute log backup process.

To build the solution for the given use-case, you can use Amazon EventBridge Scheduler to trigger on the weekly schedule and set the Lambda function as the target.
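The "every Monday at 2 AM" trigger can be expressed with EventBridge's six-field cron schedule format (minutes, hours, day-of-month, month, day-of-week, year); a minimal sketch that just builds and picks apart the expression (variable names are illustrative):

```python
# "Every Monday at 02:00 UTC" for the weekly log backup job:
schedule_expression = "cron(0 2 ? * MON *)"

# Strip the cron(...) wrapper and split into the six fields.
fields = schedule_expression[len("cron("):-1].split()
minutes, hours, _day_of_month, _month, day_of_week, _year = fields
print(minutes, hours, day_of_week)  # → 0 2 MON
```

The `?` in the day-of-month position is how this cron dialect says "unspecified" when day-of-week is given instead.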

69
Q

A company’s flagship application runs on a fleet of Amazon Elastic Compute Cloud (Amazon EC2) instances. As per the new policies, the system administrators are looking for the best way to provide secure shell access to Amazon Elastic Compute Cloud (Amazon EC2) instances without opening new ports or using public IP addresses.

Which tool/service will help you achieve this requirement?

a. Amazon Elastic Compute Cloud (Amazon EC2) Instance Connect
b. Amazon Inspector
c. Amazon Route 53
d. AWS Systems Manager Session Manager

A

d. AWS Systems Manager Session Manager

AWS Systems Manager Session Manager is a fully-managed service that provides you with an interactive browser-based shell and CLI experience. It helps provide secure and auditable instance management without the need to open inbound ports, maintain bastion hosts, and manage SSH keys. AWS Systems Manager Session Manager helps to enable compliance with corporate policies that require controlled access to instances, increase security and auditability of access to the instances while providing simplicity and cross-platform instance access to end-users.

Incorrect options:

Amazon Elastic Compute Cloud (Amazon EC2) Instance Connect - Amazon Elastic Compute Cloud (Amazon EC2) Instance Connect provides a simple and secure way to connect to your Linux instances using Secure Shell (SSH). With EC2 Instance Connect, you use AWS Identity and Access Management (IAM) policies and principals to control SSH access to your instances, removing the need to share and manage SSH keys. EC2 Instance Connect will need port 22 to be open for traffic. Therefore, not the correct option here.

70
Q

Which AWS service protects your AWS account by monitoring malicious activity and detecting threats?

a. Amazon CloudWatch
b. AWS CloudTrail
c. AWS Trusted Advisor
d. Amazon GuardDuty

A

d. Amazon GuardDuty

Amazon GuardDuty is a threat detection service that continuously monitors your AWS accounts and workloads for malicious activity and delivers detailed security findings for visibility and remediation. It analyzes data sources such as AWS CloudTrail events, VPC Flow Logs, and DNS logs.

71
Q

Compared to the on-demand instance prices, what is the highest possible discount offered for spot instances?

a. 75%
b. 10%
c. 50%
d. 90%

A

d. 90%

Amazon EC2 Spot Instances let you take advantage of unused EC2 capacity in the AWS Cloud. Spot Instances are available at up to a 90% discount compared to On-Demand Instance prices. You can use Spot Instances for various stateless, fault-tolerant, or flexible applications such as big data, containerized workloads, CI/CD, web servers, high-performance computing (HPC), and other test & development workloads.
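The headline number is easy to sanity-check; a quick sketch (the hourly price is hypothetical) of the best-case spot price at the maximum 90% discount:

```python
def spot_price_floor(on_demand_price, max_discount=0.90):
    """Best-case hourly spot price given the maximum advertised discount."""
    return on_demand_price * (1 - max_discount)

# A $0.10/hour on-demand instance could cost as little as ~$0.01/hour on spot.
print(round(spot_price_floor(0.10), 4))  # → 0.01
```

Actual spot prices fluctuate with supply and demand, so 90% is a ceiling on the discount, not a guarantee.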

72
Q

Which Amazon Route 53 routing policy would you use when you want to route your traffic in an active-passive configuration?

a. Simple routing
b. Failover routing
c. Latency-based routing
d. Weighted routing

A

b. Failover routing

73
Q

A startup wants to migrate its data and applications from the on-premises data center to AWS Cloud. Which of the following options can be used by the startup to help with this migration? (Select two)
a. Raise a support ticket with AWS Support for further assistance
b. Consult moderators on AWS Developer Forums
c. Leverage AWS Professional Services to accelerate the infrastructure migration
d. Use AWS Trusted Advisor to automate the infrastructure migration
e. Utilize AWS Partner Network (APN) to build a custom solution for this infrastructure migration

A

c. Leverage AWS Professional Services to accelerate the infrastructure migration

e. Utilize AWS Partner Network (APN) to build a custom solution for this infrastructure migration

Leverage AWS Professional Services to accelerate the infrastructure migration

The AWS Professional Services organization is a global team of experts that can help you realize your desired business outcomes when using the AWS Cloud. AWS Professional Services consultants can supplement your team with specialized skills and experience that can help you achieve quick results. Therefore, leveraging AWS Professional Services can accelerate the infrastructure migration for the startup.

Utilize AWS Partner Network (APN) to build a custom solution for this infrastructure migration

The AWS Partner Network (APN) is the global partner program for technology and consulting businesses that leverage Amazon Web Services to build solutions and services for customers. The startup can work with experts from APN to build a custom solution for this infrastructure migration.

74
Q

Which of the following AWS services are always free to use (Select two)?

a. Amazon Simple Storage Service (Amazon S3)
b. Amazon DynamoDB
c. Amazon Elastic Compute Cloud (Amazon EC2)
d. AWS Identity and Access Management (AWS IAM)
e. AWS Auto Scaling

A

d. AWS Identity and Access Management (AWS IAM)
e. AWS Auto Scaling

AWS Identity and Access Management (AWS IAM) - AWS Identity and Access Management (AWS IAM) enables you to manage access to AWS services and resources securely. Using IAM, you can create and manage AWS users and groups, and use permissions to allow and deny their access to AWS resources. IAM is a feature of your AWS account offered at no additional charge.

AWS Auto Scaling - AWS Auto Scaling monitors your applications and automatically adjusts the capacity to maintain steady, predictable performance at the lowest possible cost. Using AWS Auto Scaling, it’s easy to set up application scaling for multiple resources across multiple services in minutes. AWS Auto Scaling is available at no additional charge. You pay only for the AWS resources needed to run your applications and any Amazon CloudWatch monitoring fees.

75
Q

An IT company is planning to migrate from an on-premises environment to AWS Cloud. Which of the following expense areas would result in cost savings when the company moves to AWS Cloud? (Select two)
a. Developer salary
b. Project manager salary
c. Data center physical security expenditure
d. Data center hardware infrastructure expenditure
e. SaaS application license fee

A

c. Data center physical security expenditure

d. Data center hardware infrastructure expenditure

The company does not need to spend on the computing hardware infrastructure and data center physical security. So these expense areas would result in cost savings. The expenditure on the SaaS application license fee, developer salary, and project manager salary would remain the same.

Exam Alert:

Please check out the following six advantages of Cloud Computing. You would certainly be asked questions on the advantages of Cloud Computing compared to a traditional on-premises setup:

https://docs.aws.amazon.com/whitepapers/latest/aws-overview/six-advantages-of-cloud-computing.html

76
Q

A leading research firm needs to access information available in old patents and documents (such as PDFs, text files, Word documents, etc.) present in its huge knowledge base. The firm is looking for a powerful search tool that can dig into these knowledge resources and return the most relevant files/documents. Which of the following is the correct service to address this requirement?

a. Amazon Personalize
b. Amazon Lex
c. Amazon Kendra
d. Amazon Comprehend

A

c. Amazon Kendra

Amazon Kendra is an intelligent search service powered by machine learning. Kendra reimagines enterprise search for your websites and applications so your employees and customers can easily find the content they are looking for, even when it’s scattered across multiple locations and content repositories within your organization.

Using Amazon Kendra, you can stop searching through troves of unstructured data and discover the right answers to your questions, when you need them. Amazon Kendra is a fully managed service, so there are no servers to provision, and no machine learning models to build, train, or deploy. Amazon Kendra supports unstructured and semi-structured data in .html, MS Office (.doc, .ppt), PDF, and text formats.

Unlike conventional search technology, natural language search capabilities return the answers you’re looking for quickly and accurately, no matter where the information lives within your organization.

Amazon Kendra’s deep learning models come pre-trained across 14 industry domains, allowing it to extract more accurate answers across a wide range of business use cases from the get-go. You can also fine-tune search results by manually adjusting the importance of data sources, authors, freshness, or using custom tags.

Incorrect options:

Amazon Personalize - Amazon Personalize enables developers to build applications with the same machine learning (ML) technology used by Amazon.com for real-time personalized recommendations. Amazon Personalize makes it easy for developers to build applications capable of delivering a wide array of personalization experiences, including specific product recommendations, personalized product re-ranking, and customized direct marketing.

Amazon Comprehend - Amazon Comprehend is a natural-language processing (NLP) service that uses machine learning to uncover information in unstructured data. Instead of combing through documents, the process is simplified and unseen information is easier to understand.

Amazon Kendra provides ML-powered search capabilities for all unstructured data customers store in AWS. Amazon Kendra offers easy-to-use native connectors to popular AWS repository types such as S3 and RDS databases. Other AI services such as Amazon Comprehend, Amazon Transcribe, and Amazon Comprehend Medical can be used to pre-process documents, generate searchable text, extract entities, and enrich their metadata for more specialized search experiences.

Amazon Lex - Amazon Lex is a service for building conversational interfaces into any application using voice and text. Amazon Lex provides the advanced deep learning functionalities of automatic speech recognition (ASR) for converting speech to text, and natural language understanding (NLU) to recognize the intent of the text, to enable you to build applications with highly engaging user experiences and lifelike conversational interactions.

77
Q

A multi-national company has its business-critical data stored on a fleet of Amazon Elastic Compute Cloud (Amazon EC2) instances, in various countries, configured according to region-specific compliance rules. To demonstrate compliance, the company needs to submit historical configurations on a regular basis. Which AWS service is best suited for this requirement?

a. Amazon Macie
b. AWS Config
c. Amazon GuardDuty
d. AWS CloudTrail

A

b. AWS Config

AWS Config provides a detailed view of the configuration of AWS resources in your AWS account. This includes how the resources are related to one another and how they were configured in the past so that you can see how the configurations and relationships change over time. AWS Config is designed to help you oversee your application resources in the following scenarios: Resource Administration, Auditing and Compliance, Managing and Troubleshooting Configuration Changes, Security Analysis.

How AWS Config Works: https://aws.amazon.com/config/

78
Q

A startup wants to provision an EC2 instance for the lowest possible cost for a long-term duration but needs to make sure that the instance would never be interrupted. As a Cloud Practitioner, which of the following options would you recommend?
a. EC2 Dedicated Host
b. EC2 Spot Instance
c. EC2 On-Demand Instance
d. EC2 Reserved Instance (RI)

A

d. EC2 Reserved Instance (RI)

An EC2 Reserved Instance (RI) provides you with significant savings (up to 75%) on your Amazon EC2 costs compared to On-Demand Instance pricing. A Reserved Instance (RI) is not a physical instance, but rather a billing discount applied to the use of On-Demand Instances in your account. You can purchase a Reserved Instance (RI) for a one-year or three-year commitment, with the three-year commitment offering the bigger discount. A Reserved Instance (RI) cannot be interrupted, so this is the correct option.

On-Demand Instances cannot be interrupted either; however, they are not as cost-effective as Reserved Instances, so this option is not correct.

EC2 Spot Instances can be even more cost-effective than Reserved Instances (with discounts of up to 90%); however, a Spot Instance can be interrupted when EC2 reclaims the capacity, so this option is not correct.
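The trade-off in this answer can be sketched as a tiny selection rule. The relative costs below are illustrative placeholders normalized to On-Demand = 1.0, not real AWS prices:

```python
# Pick the cheapest EC2 purchasing option that can never be interrupted.
options = {
    # name: (relative_cost, can_be_interrupted)
    "On-Demand Instance":     (1.00, False),
    "Reserved Instance (RI)": (0.25, False),  # up to ~75% discount
    "Spot Instance":          (0.10, True),   # up to ~90% discount, interruptible
}

def best_uninterruptible(opts):
    """Cheapest option among those that cannot be interrupted."""
    eligible = {name: cost
                for name, (cost, interruptible) in opts.items()
                if not interruptible}
    return min(eligible, key=eligible.get)

print(best_uninterruptible(options))  # Reserved Instance (RI)
```

Spot wins on price alone, but the interruption constraint rules it out, leaving the RI as the answer.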

79
Q

According to the AWS Shared Responsibility Model, which of the following are responsibilities of the customer for Amazon RDS?

a. Applying patches to the underlying OS
b. Database encryption
c. Applying patches to the Amazon Relational Database Service (Amazon RDS) database
d. Managing the underlying server hardware on which Amazon Relational Database Service (Amazon RDS) runs

A

b. Database encryption

Security and Compliance is a shared responsibility between AWS and the customer. This shared model can help relieve the customer’s operational burden as AWS operates, manages and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates.

Database encryption - Under the AWS Shared Responsibility Model, customers are responsible for managing their data, including data encryption.

AWS Shared Responsibility Model Overview: https://aws.amazon.com/compliance/shared-responsibility-model/

80
Q

Which of the following is correct regarding the AWS Shield Advanced pricing?

a. AWS Shield Advanced is a free service for AWS Business Support plan
b. AWS Shield Advanced is a free service for all AWS Support plans
c. AWS Shield Advanced is a free service for AWS Enterprise Support plan
d. AWS Shield Advanced offers protection against higher fees that could result from a DDoS attack

A

d. AWS Shield Advanced offers protection against higher fees that could result from a DDoS attack

AWS Shield Advanced offers some cost protection against spikes in your AWS bill that could result from a DDoS attack. This cost protection is provided for your Elastic Load Balancing load balancers, Amazon CloudFront distributions, Amazon Route 53 hosted zones, Amazon Elastic Compute Cloud instances, and your AWS Global Accelerator accelerators.

AWS Shield Advanced is a paid service for all customers, irrespective of the Support plan.

81
Q

Which of the following use-cases is NOT supported by Amazon Rekognition?

a. Detect text in a photo
b. Identify person in a photo
c. Quickly resize photos to create thumbnails
d. Label objects in a photo

A

c. Quickly resize photos to create thumbnails

You cannot use Amazon Rekognition to resize photos to create thumbnails.

With Amazon Rekognition, you can identify objects, people, text, scenes, and activities in images and videos, as well as detect any inappropriate content. Amazon Rekognition also provides highly accurate facial analysis and facial search capabilities that you can use to detect, analyze, and compare faces for a wide variety of user verification, people counting, and public safety use cases.

Amazon Rekognition Use-Cases: https://aws.amazon.com/rekognition/

82
Q

A social media company wants to protect its web application from common web exploits such as SQL injection and cross-site scripting. Which of the following AWS services can be used to address this use-case?

a. AWS Web Application Firewall (AWS WAF)
b. Amazon GuardDuty
c. AWS CloudWatch
d. Amazon Inspector

A

a. AWS Web Application Firewall (AWS WAF)

AWS Web Application Firewall (AWS WAF) is a web application firewall that helps protect your web applications or APIs against common web exploits that may affect availability, compromise security, or consume excessive resources. AWS Web Application Firewall (AWS WAF) gives you control over how traffic reaches your applications by enabling you to create security rules that block common attack patterns such as SQL injection or cross-site scripting. You can also use rate-based rules to mitigate the Web layer DDoS attack.

How AWS Web Application Firewall (AWS WAF) Works: https://aws.amazon.com/waf/

An SQL injection attack works by exploiting any one of the known SQL vulnerabilities that allow the SQL server to run malicious code. For example, if a SQL server is vulnerable to an injection attack, it may be possible for an attacker to go to a website’s search box and type in code that would force the site’s SQL server to dump all of its stored usernames and passwords for the site.

Similar to an SQL injection attack, a cross-site scripting attack also involves injecting malicious code into a website, but in this case, the website itself is not being attacked. Instead, the malicious code the attacker has injected only runs in the user’s browser when they visit the attacked website, and it goes after the visitor directly, not the website.
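The injection mechanism described above can be demonstrated with Python's built-in sqlite3 module (no AWS service involved); the table, user, and attacker input are all made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

malicious = "nobody' OR '1'='1"

# Vulnerable: the input is concatenated into the SQL text, so the
# attacker's OR clause becomes part of the query.
unsafe = conn.execute(
    f"SELECT name FROM users WHERE name = '{malicious}'"
).fetchall()
print(unsafe)  # [('alice',)] — leaks a row the name doesn't match

# Safe: the input is bound as a parameter and never parsed as SQL.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (malicious,)
).fetchall()
print(safe)    # []
```

Parameterized queries are the application-level fix; AWS WAF rules add a defense-in-depth layer in front of the application.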

Incorrect options:

Amazon GuardDuty is a threat detection service that monitors malicious activity and unauthorized behavior to protect your AWS account. Amazon GuardDuty analyzes billions of events across your AWS accounts from AWS CloudTrail (AWS user and API activity in your accounts), Amazon VPC Flow Logs (network traffic data), and DNS Logs (name query patterns). Amazon GuardDuty cannot be used to protect from web exploits such as SQL injection and cross-site scripting.

How Amazon GuardDuty Works: https://aws.amazon.com/guardduty/

83
Q

A company wants a fully managed, flexible, and scalable file storage system, with low latency access, for its Windows-based applications. Which AWS service is the right choice for the company?

a. Amazon FSx for Lustre
b. Amazon Elastic File System (Amazon EFS)
c. Amazon Elastic Block Storage (Amazon EBS)
d. Amazon FSx for Windows File Server

A

d. Amazon FSx for Windows File Server

Amazon FSx for Windows File Server provides fully managed, highly reliable, and scalable file storage that is accessible over the industry-standard Server Message Block (SMB) protocol. It is built on Windows Server, delivering a wide range of administrative features such as user quotas, end-user file restore, and Microsoft Active Directory (AD) integration.

To support a wide spectrum of workloads, Amazon FSx provides high levels of throughput, IOPS and consistent sub-millisecond latencies. Amazon FSx is accessible from Windows, Linux, and macOS compute instances and devices.

For Windows-based applications, Amazon FSx provides fully managed Windows file servers with features and performance optimized for “lift-and-shift” business-critical application workloads including home directories (user shares), media workflows, and ERP applications. It is accessible from Windows and Linux instances via the SMB protocol.

Incorrect options:

Amazon FSx for Lustre - For compute-intensive and fast processing workloads, like high-performance computing (HPC), machine learning, EDA, and media processing, Amazon FSx for Lustre, provides a file system that’s optimized for performance, with input and output stored on Amazon S3. Amazon FSx for Lustre is only compatible with Linux.

84
Q

Which of the following is a perspective of the AWS Cloud Adoption Framework (AWS CAF)?

a. Business
b. Process
c. Architecture
d. Product

A

a. Business

The AWS Cloud Adoption Framework (AWS CAF) leverages AWS experience and best practices to help you digitally transform and accelerate your business outcomes through innovative use of AWS. AWS CAF identifies specific organizational capabilities that underpin successful cloud transformations.

AWS CAF groups its capabilities in six perspectives: Business, People, Governance, Platform, Security, and Operations.

85
Q

Amazon CloudWatch billing metric data is stored in which AWS Region?

a. In the AWS Region where the AWS resource is provisioned
b. US East (N. Virginia) - us-east-1
c. US West (N. California) - us-west-1
d. In the AWS Region where the AWS account is created

A

b. US East (N. Virginia) - us-east-1

You can monitor your estimated AWS charges by using Amazon CloudWatch. Billing metric data is stored in the US East (N. Virginia) Region and represents worldwide charges. This data includes the estimated charges for every service in AWS that you use, in addition to the estimated overall total of your AWS charges.

86
Q

Which of the following use cases is best suited for Amazon EFS Standard-Infrequent Access (EFS Standard-IA) storage class?

a. Use as boot volume for highly available Amazon Elastic Compute Cloud (Amazon EC2) instances
b. Storing data in a single AWS Availability Zone (AZ)
c. Storing files in an accessible location to satisfy audit requirements
d. Object storage for workloads that need sub-second latency speeds for accessing the data

A

c. Storing files in an accessible location to satisfy audit requirements

The Amazon EFS Standard-Infrequent Access (EFS Standard-IA) storage class reduces storage costs for files that are not accessed every day. It does this without sacrificing the high availability, high durability, elasticity, and POSIX file system access that Amazon Elastic File System (Amazon EFS) provides.

AWS recommends Amazon EFS Standard-Infrequent Access (EFS Standard-IA) storage class if you need your full dataset to be readily accessible and want to automatically save on storage costs for files that are less frequently accessed. Examples include keeping files accessible to satisfy audit requirements, performing historical analysis, or performing backup and recovery. Amazon EFS Standard-Infrequent Access (EFS Standard-IA) storage is compatible with all Amazon EFS features, and is available in all AWS Regions where Amazon EFS is available.

87
Q

As a Cloud Practitioner, which Amazon Simple Storage Service (Amazon S3) storage class would you recommend for data archival?

a. Amazon S3 Standard
b. Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)
c. Amazon S3 Intelligent-Tiering
d. Amazon S3 Glacier Flexible Retrieval

A

d. Amazon S3 Glacier Flexible Retrieval

Amazon S3 Glacier Flexible Retrieval delivers low-cost storage for archive data that is accessed only occasionally, with retrieval options ranging from minutes to hours. Amazon S3 Standard, S3 One Zone-IA, and S3 Intelligent-Tiering are designed for actively accessed data, so they are not suitable for data archival.

88
Q

As per the AWS Shared Responsibility Model, which of the following is a responsibility of the customer from a security and compliance point of view?

a. Managing patches of the guest operating system on Amazon Elastic Compute Cloud (Amazon EC2)
b. Availability Zone (AZ) infrastructure management
c. Patching/fixing flaws within the AWS infrastructure
d. Configuration management for AWS global infrastructure

A

a. Managing patches of the guest operating system on Amazon Elastic Compute Cloud (Amazon EC2)

Security and Compliance is a shared responsibility between AWS and the customer. This shared model can help relieve the customer’s operational burden as AWS operates, manages, and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates.

As per the AWS shared responsibility model, the customer is responsible for security “in” the cloud. Customers that deploy an Amazon EC2 instance are responsible for the management of the guest operating system (including updates and security patches), any application software or utilities installed by the customer on the instances, and the configuration of the AWS-provided firewall (called a security group) on each instance.

Exam Alert:

Please review the Shared Responsibility Model in detail as you can expect multiple questions on the shared responsibility model in the exam: https://aws.amazon.com/compliance/shared-responsibility-model/

89
Q

A photo sharing web application wants to store thumbnails of user-uploaded images on Amazon Simple Storage Service (Amazon S3). The thumbnails are rarely used but need to be immediately accessible from the web application. The thumbnails can be regenerated easily if they are lost. Which is the most cost-effective way to store these thumbnails on Amazon Simple Storage Service (Amazon S3)?

a. Use Amazon S3 Standard to store the thumbnails
b. Use Amazon S3 Glacier Flexible Retrieval to store the thumbnails
c. Use Amazon S3 Standard-Infrequent Access (S3 Standard-IA) to store the thumbnails
d. Use Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) to store the thumbnails

A

d. Use Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) to store the thumbnails

Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) is for data that is accessed less frequently but requires rapid access when needed. Unlike other S3 storage classes, which store data in a minimum of three Availability Zones (AZs), Amazon S3 One Zone-IA stores data in a single Availability Zone (AZ) and costs 20% less than Amazon S3 Standard-Infrequent Access (S3 Standard-IA). Amazon S3 One Zone-IA offers the same high durability, high throughput, and low latency as S3 Standard, with a low per-GB storage price and per-GB retrieval fee. Although Amazon S3 One Zone-IA offers less availability than S3 Standard, that’s not an issue for the given use-case since the thumbnails can be regenerated easily.

As the thumbnails are rarely used but need to be rapidly accessed when required, Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) is the best choice for this use-case.
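To make the 20% figure concrete, a rough sketch with an assumed Standard-IA price (real prices vary by region; only the "~20% less" relationship comes from the text):

```python
# Monthly storage-cost comparison for the thumbnail workload.
standard_ia_gb_month = 0.0125                        # assumed $/GB-month for S3 Standard-IA
one_zone_ia_gb_month = standard_ia_gb_month * 0.80   # ~20% cheaper per the comparison above

def monthly_cost(gb: float, price_per_gb: float) -> float:
    """Storage cost for one month, ignoring request and retrieval fees."""
    return gb * price_per_gb

thumbnails_gb = 200  # assumed size of the thumbnail dataset
print(f"Standard-IA: ${monthly_cost(thumbnails_gb, standard_ia_gb_month):.2f}/month")
print(f"One Zone-IA: ${monthly_cost(thumbnails_gb, one_zone_ia_gb_month):.2f}/month")
```

The saving is proportional, so it holds at any dataset size; the trade-off is the single-AZ availability risk, which regenerable thumbnails can tolerate.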

Exam Alert:

Please review this detailed comparison on S3 Storage Classes as you can expect a few questions on this aspect of S3: https://aws.amazon.com/s3/storage-classes/

90
Q

Which of the following are components of an AWS Site-to-Site VPN? (Select two)

a. Network Address Translation gateway (NAT gateway)
b. Internet gateway
c. AWS storage gateway
d. Virtual private gateway (VGW)
e. Customer gateway

A

d. Virtual private gateway (VGW)
e. Customer gateway

AWS Site-to-Site VPN enables you to securely connect your on-premises network or branch office site to your Amazon Virtual Private Cloud (Amazon VPC). VPN Connections are a good solution if you have an immediate need, and have low to modest bandwidth requirements. This connection goes over the public internet. Virtual private gateway (VGW) / Transit Gateway and Customer Gateway are the components of an AWS Site-to-Site VPN.

A virtual private gateway (VGW) is the VPN concentrator on the Amazon side of the AWS Site-to-Site VPN connection. A customer gateway is a resource in AWS that provides information to AWS about your customer gateway device.

Components of an AWS Site-to-Site VPN: https://docs.aws.amazon.com/vpn/latest/s2svpn/how_it_works.html

Incorrect options:

AWS Storage Gateway - AWS Storage Gateway is a hybrid cloud storage service that connects your existing on-premises environments with the AWS Cloud. Customers use Storage Gateway to simplify storage management and reduce costs for key hybrid cloud storage use cases.

Network Address Translation gateway (NAT gateway) - A Network Address Translation gateway (NAT gateway) or a NAT Instance can be used in a public subnet in your VPC to enable instances in the private subnet to initiate outbound IPv4 traffic to the Internet. Network Address Translation gateway (NAT gateway) is managed by AWS but NAT Instance is managed by you.

Internet gateway - An internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between instances in your VPC and the internet. It, therefore, imposes no availability risks or bandwidth constraints on your network traffic.

91
Q

A data analytics company is running a proprietary batch analytics application on AWS and wants to use a storage service which would be accessed by hundreds of EC2 instances simultaneously to append data to existing files. As a Cloud Practitioner, which AWS service would you suggest for this use-case?
a. Amazon Simple Storage Service (Amazon S3)
b. Instance Store
c. Amazon Elastic File System (Amazon EFS)
d. Amazon Elastic Block Store (Amazon EBS)

A

c. Amazon Elastic File System (Amazon EFS)

Amazon EFS is a file storage service for use with Amazon EC2. Amazon EFS provides a file system interface, file system access semantics, and concurrently-accessible storage for up to thousands of Amazon EC2 instances. Amazon EFS uses the Network File System protocol.

Amazon Elastic Block Store (Amazon EBS) - Amazon Elastic Block Store (EBS) is an easy-to-use, high-performance block storage service designed for use with Amazon Elastic Compute Cloud (EC2) for both throughput- and transaction-intensive workloads at any scale. An EBS volume can, in general, be attached to only one EC2 instance at a time (EBS Multi-Attach is limited to certain Provisioned IOPS volume types), so this option is incorrect.

92
Q

A startup is looking for 24x7 phone-based technical support for its AWS account. Which of the following is the MOST cost-effective AWS Support plan for this use-case?

a. AWS Business Support
b. AWS Enterprise Support
c. AWS Developer Support
d. AWS Enterprise On-Ramp Support

A

a. AWS Business Support

AWS offers four different support plans to cater to each of its customers - AWS Developer Support, AWS Business Support, AWS Enterprise On-Ramp Support and AWS Enterprise Support plans. A basic support plan is included for all AWS customers.

AWS Business Support - You should use the AWS Business Support plan if you have production workloads on AWS and want 24x7 phone, email and chat access to technical support and architectural guidance in the context of your specific use-cases. AWS Business Support plan is the MOST cost-effective option for the given use-case.

Exam Alert:

Please review the differences between the AWS Developer Support, AWS Business Support, AWS Enterprise On-Ramp Support and AWS Enterprise Support plans as you can expect at least a couple of questions on the exam: https://aws.amazon.com/premiumsupport/plans/

93
Q

A Project Manager, working on AWS for the first time, is confused about how credits are used in AWS. There are two credits available in the manager’s account. Credit one is for $100, expires July 2022, and can be used for either Amazon S3 or Amazon EC2. Credit two is for $50, expires December 2022, and can be used only for Amazon EC2. The manager’s AWS account has incurred two charges: $1000 for Amazon EC2 and $500 for Amazon S3.

What will be the outcome on the overall bill once the credits are used? (Select two)
a. Then, credit two is applied to $500 for Amazon S3 usage
b. Credit one is applied, which expires in July, to the Amazon EC2 charge which leaves you with a $900 Amazon EC2 charge and a $500 Amazon S3 charge
c. Then, credit two is applied to the remaining $900 of Amazon EC2 usage
d. Credit one is applied, which expires in July, to Amazon S3 usage which leaves you with a $1000 Amazon EC2 charge and a $400 Amazon S3 charge
e. Only one credit can be used in one billing cycle and the customer has a choice to choose from the available ones

A

b. Credit one is applied, which expires in July, to the Amazon EC2 charge which leaves you with a $900 Amazon EC2 charge and a $500 Amazon S3 charge

c. Then, credit two is applied to the remaining $900 of Amazon EC2 usage

Credits are applied in the following order:

Soonest expiring

Least number of applicable products

Oldest credit

For the given use case, credit one is applied, which expires in July, to the Amazon EC2 charge which leaves you with a $900 Amazon EC2 charge and a $500 Amazon S3 charge. Then, credit two is applied to the remaining $900 of Amazon EC2 usage. You need to pay $850 for Amazon EC2 and $500 for Amazon S3. All your credits are now exhausted.
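The credit-application order above can be sketched in a few lines. The field names, and the alphabetical tie-break used to pick which charge a multi-product credit hits first, are simplifications for illustration:

```python
# Apply credits in order: soonest expiring, then fewest applicable
# products, then oldest. Amounts and dates mirror the worked example.
credits = [
    {"name": "credit one", "amount": 100, "expires": "2022-07",
     "applies_to": {"EC2", "S3"}, "age_rank": 0},
    {"name": "credit two", "amount": 50, "expires": "2022-12",
     "applies_to": {"EC2"}, "age_rank": 1},
]
charges = {"EC2": 1000, "S3": 500}

for credit in sorted(credits,
                     key=lambda c: (c["expires"], len(c["applies_to"]), c["age_rank"])):
    remaining = credit["amount"]
    for service in sorted(credit["applies_to"]):  # simplified tie-break
        applied = min(remaining, charges[service])
        charges[service] -= applied
        remaining -= applied
        if remaining == 0:
            break

print(charges)  # {'EC2': 850, 'S3': 500}
```

Credit one ($100, expires soonest) reduces the EC2 charge to $900; credit two ($50, EC2-only) then brings it to $850, matching the answer.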

94
Q

A gaming company is looking at a technology/service that can deliver a consistent low-latency gameplay to ensure a great user experience for end-users in various locations.

Which AWS technology/service will provide the necessary low-latency access to the end-users?

a. AWS Direct Connect
b. AWS Wavelength
c. AWS Edge Locations
d. AWS Local Zones

A

d. AWS Local Zones

AWS Local Zones allow you to use select AWS services, like compute and storage services, closer to more end-users, providing them very low latency access to the applications running locally. AWS Local Zones are also connected to the parent region via Amazon’s redundant and very high bandwidth private network, giving applications running in AWS Local Zones fast, secure, and seamless access to the rest of AWS services.

You should use AWS Local Zones to deploy workloads closer to your end-users for low-latency requirements. AWS Local Zones have their own connection to the internet and support AWS Direct Connect, so resources created in a Local Zone can serve local end-users with very low-latency communications.

Various AWS services such as Amazon Elastic Compute Cloud (EC2), Amazon Virtual Private Cloud (VPC), Amazon Elastic Block Store (EBS), Amazon FSx, Amazon Elastic Load Balancing, Amazon EMR, Amazon ElastiCache, and Amazon Relational Database Service (RDS) are available locally in the AWS Local Zones. You can also use services that orchestrate or work with local services such as Amazon EC2 Auto Scaling, Amazon EKS clusters, Amazon ECS clusters, Amazon EC2 Systems Manager, Amazon CloudWatch, AWS CloudTrail, and AWS CloudFormation. AWS Local Zones also provide a high-bandwidth, secure connection to the AWS Region, allowing you to seamlessly connect to the full range of services in the AWS Region through the same APIs and toolsets.

AWS Edge Locations - An AWS Edge location is a site that CloudFront uses to cache copies of the content for faster delivery to users at any location.

AWS Wavelength - AWS Wavelength extends the AWS cloud to a global network of 5G edge locations to enable developers to innovate and build a whole new class of applications that require ultra-low latency. Wavelength Zones provide a high-bandwidth, secure connection to the parent AWS Region, allowing developers to seamlessly connect to the full range of services in the AWS Region through the same APIs and toolsets.

95
Q

Which of the following AWS services allows a database to have flexible schema and supports document data models?

a. Amazon Aurora
b. Amazon Relational Database Service (Amazon RDS)
c. Amazon DynamoDB
d. Amazon Redshift

A

c. Amazon DynamoDB

Amazon DynamoDB is a NoSQL database that supports key-value and document data models and enables developers to build modern, serverless applications that can start small and scale globally to support petabytes of data and tens of millions of read and write requests per second.

Amazon DynamoDB supports both key-value and document data models. This enables Amazon DynamoDB to have a flexible schema, so each row can have any number of columns at any point in time. This allows you to easily adapt the tables as your business requirements change, without having to redefine the table schema as you would in relational databases.
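A minimal sketch of what "flexible schema" means in practice, using plain Python dicts in place of the real DynamoDB API so the example stays self-contained (the key names and items are made up):

```python
# Items in the same "table" can carry different attributes; adding a new
# attribute needs no schema migration, unlike a relational table.
table = []  # stands in for a DynamoDB table keyed on "pk"

table.append({"pk": "user#1", "name": "Alice", "email": "alice@example.com"})
table.append({"pk": "user#2", "name": "Bob", "loyalty_tier": "gold"})  # new attribute, no ALTER TABLE

# The union of attributes across items shows each row defines its own columns.
attrs = {key for item in table for key in item}
print(sorted(attrs))  # ['email', 'loyalty_tier', 'name', 'pk']
```

In a relational database every row would need all four columns (with NULLs); here each item stores only what it uses.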

Incorrect options:

Amazon Aurora - Amazon Aurora is an AWS service for relational databases. Schema changes on a relational database are not as easy and straightforward as they are on a NoSQL database. Amazon Aurora does not support a flexible schema.

96
Q

Which AWS service would you use to send alerts when the costs for your AWS account exceed your budgeted amount?

a. AWS Organizations
b. AWS Budgets
c. AWS Cost Explorer
d. AWS Pricing Calculator

A

b. AWS Budgets

97
Q

An e-commerce company has deployed an RDS database in a single Availability Zone (AZ). The engineering team wants to ensure that in case of an AZ outage, the database should continue working on the same endpoint without any manual administrative intervention. Which of the following solutions can address this use-case?
a. Deploy the database via AWS Elastic Beanstalk
b. Configure the database in RDS read replica mode with automatic failover to the standby
c. Configure the database in RDS Multi-AZ deployment with automatic failover to the standby
d. Provision the database via AWS CloudFormation

A

c. Configure the database in RDS Multi-AZ deployment with automatic failover to the standby

When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). In case of an infrastructure failure, Amazon RDS performs an automatic failover to the standby (or to a read replica in the case of Amazon Aurora), so that you can resume database operations as soon as the failover is complete. Since the endpoint for your DB Instance remains the same after a failover, your application can resume database operation without the need for manual administrative intervention.
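Why the endpoint survives a failover can be sketched as a DNS repointing exercise. This is a simplified illustration, not the actual RDS mechanism, and the hostnames are made up:

```python
# The endpoint is a stable DNS name; a Multi-AZ failover repoints
# that name at the standby. Hostnames below are made up.
endpoint = "mydb.example.us-east-1.rds.amazonaws.com"
dns = {endpoint: "primary-in-az1"}  # endpoint currently resolves to primary
standby = "standby-in-az2"          # synchronous replica in another AZ

def failover():
    dns[endpoint] = standby         # repoint the same DNS name

failover()
# The application keeps using the unchanged endpoint string:
print(endpoint, "->", dns[endpoint])
```

The application's connection string never changes; only what the name resolves to does, which is why no manual administrative intervention is needed.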

Configure the database in RDS read replica mode with automatic failover to the standby - For RDS, Read replicas allow you to create read-only copies that are synchronized with your master database. There is no standby available while using read replicas. In case of infrastructure failure, you have to manually promote the read replica to be its own standalone DB Instance, which means that the database endpoint would change. Therefore, this option is incorrect.

98
Q

What are the fundamental drivers of cost with AWS Cloud?

a. Compute, Storage and Inbound Data Transfer
b. Compute, Databases and Inbound Data Transfer
c. Compute, Databases and Outbound Data Transfer
d. Compute, Storage and Outbound Data Transfer

A

d. Compute, Storage and Outbound Data Transfer

There are three fundamental drivers of cost with AWS: compute, storage, and outbound data transfer. In most cases, there is no charge for inbound data transfer or data transfer between other AWS services within the same region. Outbound data transfer is aggregated across services and then charged at the outbound data transfer rate.

AWS Cloud Pricing Fundamentals: https://d0.awsstatic.com/whitepapers/aws_pricing_overview.pdf
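The pricing rule above can be captured in a toy calculation. The $0.09/GB egress rate is a placeholder, not an actual AWS price; the point is that same-region EC2-to-S3 transfer is billed at zero:

```python
# Toy model of the data-transfer cost driver. The egress rate is a
# placeholder, not a real AWS price.
OUTBOUND_RATE_PER_GB = 0.09   # hypothetical internet-egress rate, $/GB
SAME_REGION_RATE = 0.0        # inbound / same-region transfer is free

def transfer_cost(gb, same_region):
    rate = SAME_REGION_RATE if same_region else OUTBOUND_RATE_PER_GB
    return gb * rate

print(transfer_cost(500, same_region=True))   # EC2 -> S3, same region: 0.0
print(transfer_cost(500, same_region=False))  # internet egress: roughly $45
```

This is exactly the scenario from question 2: moving 500 GB from EC2 to S3 in the same region incurs no data transfer charge.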

99
Q

Which characteristic of Cloud Computing imparts the ability to acquire resources as you need and release when you no longer need them?

a. Reliability
b. Resiliency
c. Durability
d. Elasticity

A

d. Elasticity

Elasticity

The ability to acquire resources as you need them and release them when they are no longer needed is termed Elasticity of the Cloud. With cloud computing, you don’t have to over-provision resources upfront to handle peak levels of business activity in the future. Instead, you provision the number of resources that you need. You can scale these resources up or down instantly to grow and shrink capacity as your business needs change.

What is Elasticity: https://aws.amazon.com/what-is-cloud-computing/

100
Q

A research lab wants to optimize the caching capabilities for its scientific computations application running on Amazon Elastic Compute Cloud (Amazon EC2) instances. Which Amazon Elastic Compute Cloud (Amazon EC2) storage option is best suited for this use-case?

a. Instance Store
b. Amazon Elastic Block Store (Amazon EBS)
c. Amazon Elastic File System (Amazon EFS)
d. Amazon Simple Storage Service (Amazon S3)

A

a. Instance Store

An Instance Store provides temporary block-level storage for your Amazon EC2 instance. This storage is located on disks that are physically attached to the host computer. Instance store is ideal for the temporary storage of information that changes frequently, such as buffers, caches, scratch data, and other temporary content, or for data that is replicated across a fleet of instances, such as a load-balanced pool of web servers. Instance store is temporary: data is lost if the instance fails or is terminated.

Instance Store Overview: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html

101
Q

The AWS Cloud Adoption Framework (AWS CAF) recommends four iterative and incremental cloud transformation phases. Which cloud transformation journey phase of the AWS Cloud Adoption Framework (AWS CAF) focuses on demonstrating how the cloud will help accelerate your business outcomes?

a. Launch
b. Envision
c. Align
d. Scale

A

b. Envision

The Envision phase of the AWS Cloud Adoption Framework (AWS CAF) focuses on demonstrating how the cloud will help accelerate your business outcomes.

Incorrect options:

Align - The Align phase of the AWS Cloud Adoption Framework (AWS CAF) focuses on identifying capability gaps across the six AWS CAF perspectives, identifying cross-organizational dependencies, and surfacing stakeholder concerns and challenges.

Launch - The Launch phase of the AWS Cloud Adoption Framework (AWS CAF) focuses on delivering pilot initiatives in production and on demonstrating incremental business value.

Scale - The Scale phase of the AWS Cloud Adoption Framework (AWS CAF) focuses on expanding production pilots and business value to desired scale and ensuring that the business benefits associated with your cloud investments are realized and sustained.

Reference:

https://d1.awsstatic.com/whitepapers/aws-caf-ebook.pdf

102
Q

A developer has written a simple web application in PHP and wants to simply upload the code to AWS Cloud and have AWS handle the deployment automatically, while still retaining access to the underlying operating system for further enhancements. As a Cloud Practitioner, which of the following AWS services would you recommend for this use-case?

a. AWS Elastic Beanstalk
b. Amazon Elastic Compute Cloud (Amazon EC2)
c. Amazon Elastic Container Service (Amazon ECS)
d. AWS CloudFormation

A

a. AWS Elastic Beanstalk

AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS. Simply upload your code and AWS Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, auto-scaling to application health monitoring. At the same time, you retain full control over the AWS resources powering your application and can access the underlying resources at any time. There is no additional charge for AWS Elastic Beanstalk - you pay only for the AWS resources needed to store and run your applications.

Key Benefits of AWS Elastic Beanstalk: https://aws.amazon.com/elasticbeanstalk/

Incorrect options:

AWS CloudFormation - AWS CloudFormation allows you to use programming languages or a simple text file (in YAML or JSON format) to model and provision, in an automated and secure manner, all the resources needed for your applications across all Regions and accounts. Think infrastructure as code; think CloudFormation. This is very different from Beanstalk where you just upload your application code and Beanstalk automatically figures out what resources are required to deploy that application. In AWS CloudFormation, you have to explicitly specify which resources you want to provision.

103
Q

Which of the following Cloud Computing models does the Gmail service represent?

a. Software as a service (SaaS)
b. Function as a service (FaaS)
c. Platform as a service (PaaS)
d. Infrastructure as a service (IaaS)

A

a. Software as a service (SaaS)

104
Q

A multi-national company has just moved its infrastructure from its on-premises data center to AWS Cloud. As part of the shared responsibility model, AWS is responsible for which of the following?
a. Physical and Environmental controls
b. Configuring customer applications
c. Service and Communications Protection or Zone Security
d. Patching guest OS

A

a. Physical and Environmental controls

As part of the shared responsibility model, Physical and Environmental controls are part of the inherited controls and hence these are the responsibility of AWS.

Service and Communications Protection or Zone Security - Customers are responsible for Service and Communications Protection or Zone Security which may require the customers to route or zone data within specific security environments.

105
Q

Which AWS service will help you receive alerts when the reservation utilization falls below the defined threshold?
a. AWS CloudTrail
b. AWS Trusted Advisor
c. AWS Pricing Calculator
d. AWS Budgets

A

d. AWS Budgets

106
Q

Which of the following AWS services has encryption enabled by default?
a. Amazon Elastic File System (Amazon EFS)
b. Amazon Relational Database Service (Amazon RDS)
c. Amazon Elastic Block Store (Amazon EBS)
d. AWS CloudTrail Logs

A

d. AWS CloudTrail Logs

AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. AWS CloudTrail can be used to record AWS API calls and other activity for your AWS account and save the recorded information to log files in an Amazon Simple Storage Service (Amazon S3) bucket that you choose. By default, the log files delivered by CloudTrail to your S3 bucket are encrypted using server-side encryption with Amazon S3 managed keys (SSE-S3).

107
Q

Which of the following statements are CORRECT regarding security groups and network access control lists (network ACL)? (Select two)

a. A security group is stateless, that is, the return traffic must be explicitly allowed
b. A security group is stateful, that is, it automatically allows the return traffic
c. A network access control list (network ACL) contains a numbered list of rules and evaluates these rules in increasing order while deciding whether to allow the traffic
d. A network access control list (network ACL) is stateful, that is, it automatically allows the return traffic
e. A security group contains a numbered list of rules and evaluates these rules in increasing order while deciding whether to allow the traffic

A

b. A security group is stateful, that is, it automatically allows the return traffic
c. A network access control list (network ACL) contains a numbered list of rules and evaluates these rules in increasing order while deciding whether to allow the traffic

A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. Security groups act at the instance level, not at the subnet level. Security groups are stateful — if you send a request from your instance, the response traffic for that request is allowed to flow in regardless of inbound security group rules. A security group evaluates all rules before deciding whether to allow traffic.

Security group overview: https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html

A network access control list (network ACL) is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets (i.e. it works at subnet level). A network access control list (network ACL) contains a numbered list of rules. A network access control list (network ACL) evaluates the rules in order, starting with the lowest numbered rule, to determine whether traffic is allowed in or out of any subnet associated with the network ACL. The highest number that you can use for a rule is 32766. AWS recommends that you start by creating rules in increments (for example, increments of 10 or 100) so that you can insert new rules where you need to later on.

Network access control list (network ACL) overview: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html
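The two evaluation models can be contrasted in a small simulation. This is a deliberately simplified sketch (rules match on port only; real rules also match protocol and CIDR ranges):

```python
# Network ACL: stateless, rules checked in ascending rule number,
# and the first matching rule decides.
nacl = [
    (100, "allow", 443),
    (200, "deny", 443),    # never reached: rule 100 matches first
    (32767, "deny", None), # the implicit catch-all deny
]

def nacl_allows(port):
    for _, action, rule_port in sorted(nacl):
        if rule_port in (None, port):
            return action == "allow"
    return False

# Security group: stateful, allow rules only; replies to tracked
# outbound connections are allowed regardless of inbound rules.
sg_inbound_allow = set()           # no inbound rules at all
tracked_connections = set()

def sg_send_request(port):
    tracked_connections.add(port)  # connection tracking on egress

def sg_allows_inbound(port):
    return port in sg_inbound_allow or port in tracked_connections

sg_send_request(443)
print(nacl_allows(443), nacl_allows(80), sg_allows_inbound(443))  # True False True
```

Note how the return traffic on port 443 gets into the security group purely because the outbound request was tracked, while the network ACL decides everything by rule number alone.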

108
Q

Multi-AZ deployment is an example of which of the following?

a. Performance Efficiency
b. Scale up
c. Scale out
d. High Availability

A

d. High Availability

A system that is available is capable of delivering the designed functionality at a given point in time. Highly available systems are those that can withstand some measure of degradation while still remaining available. On AWS Cloud, you can run instances for an application in a multi-AZ deployment to achieve High Availability.

109
Q

A startup runs its proprietary application on docker containers. As a Cloud Practitioner, which AWS service would you recommend so that the startup can run containers and still have access to the underlying servers?

a. AWS Fargate
b. AWS Lambda
c. Amazon Elastic Container Registry (Amazon ECR)
d. Amazon Elastic Container Service (Amazon ECS)

A

d. Amazon Elastic Container Service (Amazon ECS)

Amazon Elastic Container Service (Amazon ECS) is a highly scalable, fast container management service that makes it easy to run, stop, and manage Docker containers on a cluster. When you run Amazon ECS on EC2 instances (rather than on AWS Fargate), you retain access to the underlying servers and can manage them yourself.

Incorrect options:

AWS Fargate - AWS Fargate is a serverless compute engine for containers. It works with both Amazon Elastic Container Service (Amazon ECS) and Amazon Elastic Kubernetes Service (Amazon EKS). AWS Fargate makes it easy for you to focus on building your applications. AWS Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design. With AWS Fargate, you do not have access to the underlying servers, so this option is incorrect.

How AWS Fargate Works: https://aws.amazon.com/fargate/

AWS Lambda - AWS Lambda is a compute service that lets you run code without provisioning or managing servers. AWS Lambda executes your code only when needed and scales automatically, from a few requests per day to thousands per second. AWS Lambda does not support running container applications.

Amazon Elastic Container Registry (Amazon ECR) - Amazon Elastic Container Registry (Amazon ECR) can be used to store, manage, and deploy Docker container images. Amazon Elastic Container Registry (Amazon ECR) eliminates the need to operate your container repositories. Amazon Elastic Container Registry (Amazon ECR) does not support running container applications.

110
Q

An organization deploys its IT infrastructure in a combination of its on-premises data center along with AWS Cloud. How would you categorize this deployment model?

a. Hybrid deployment
b. Mixed deployment
c. Private deployment
d. Cloud deployment

A

a. Hybrid deployment

A hybrid deployment is a way to connect your on-premises infrastructure to the cloud. The most common method of hybrid deployment is between the cloud and existing on-premises infrastructure to extend an organization’s infrastructure into the cloud while connecting cloud resources to internal systems.

Overview of Cloud Computing Deployment Models: https://aws.amazon.com/types-of-cloud-computing/

111
Q

A retail company has multiple AWS accounts for each of its departments. Which of the following AWS services can be used to set up consolidated billing and a single payment method for these AWS accounts?

a. AWS Organizations
b. AWS Cost Explorer
c. AWS Budgets
d. AWS Secrets Manager

A

a. AWS Organizations

AWS Organizations helps you to centrally manage billing; control access, compliance, and security; and share resources across your AWS accounts. Using AWS Organizations, you can automate account creation, create groups of accounts to reflect your business needs, and apply policies for these groups for governance. You can also simplify billing by setting up a single payment method for all of your AWS accounts. AWS Organizations is available to all AWS customers at no additional charge.

Key Features of AWS Organizations: https://aws.amazon.com/organizations/

112
Q

A research group wants to provision an Amazon Elastic Compute Cloud (Amazon EC2) instance for a flexible application that can be interrupted. As a Cloud Practitioner, which of the following would you recommend as the MOST cost-optimal option?

a. Dedicated Host
b. Spot Instance
c. On-Demand Instance
d. Reserved Instance (RI)

A

b. Spot Instance

A Spot Instance is an unused EC2 instance that is available for less than the On-Demand price. Because Spot Instances enable you to request unused EC2 instances at steep discounts (up to 90%), you can lower your Amazon EC2 costs significantly. Spot Instances are well-suited for data analysis, batch jobs, background processing, and other flexible tasks that can be interrupted. These can be terminated at short notice, so these are not suitable for critical workloads that need to run at a specific point in time.

Amazon EC2 Pricing Options Overview: https://aws.amazon.com/ec2/pricing/
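A back-of-the-envelope comparison using the "up to 90% off On-Demand" figure from the text; the $0.10/hour rate and 730-hour month are assumptions for illustration only:

```python
# Cost comparison for an interruptible workload. All numbers are
# assumptions: a $0.10/hour On-Demand rate, the full 90% discount,
# and a 730-hour month.
on_demand_hourly = 0.10
spot_hourly = on_demand_hourly * (1 - 0.90)

hours = 730
on_demand_monthly = on_demand_hourly * hours
spot_monthly = spot_hourly * hours
print(round(on_demand_monthly, 2), round(spot_monthly, 2))  # 73.0 7.3
```

The tenfold saving is why Spot is the go-to answer whenever the exam scenario says the workload is flexible and can tolerate interruption.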

113
Q

Which AWS Route 53 routing policy would you use to route traffic to multiple resources and also choose how much traffic is routed to each resource?
a. Simple routing
b. Latency-based routing
c. Weighted routing
d. Failover routing

A

c. Weighted routing

Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service. It is designed to give developers and businesses an extremely reliable and cost-effective way to route end users to Internet applications by translating names like www.example.com into the numeric IP addresses like 192.0.2.1 that computers use to connect to each other.

Weighted routing lets you associate multiple resources with a single domain name (example.com) or subdomain name (acme.example.com) and choose how much traffic is routed to each resource. This can be useful for a variety of purposes, including load balancing and testing new versions of software. To configure weighted routing, you create records that have the same name and type for each of your resources. You assign each record a relative weight that corresponds with how much traffic you want to send to each resource. Amazon Route 53 sends traffic to a resource based on the weight that you assign to the record as a proportion of the total weight for all records in the group.

Route 53 Routing Policy Overview: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html
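The "weight as a proportion of the total weight" rule amounts to a simple ratio, sketched here with made-up record names and weights:

```python
# Route 53 sends traffic to each record in proportion to
# weight / (total weight). Record names and weights are made up.
records = {"blue.example.com": 3, "green.example.com": 1}

def traffic_share(name):
    return records[name] / sum(records.values())

print(traffic_share("blue.example.com"))   # 0.75
print(traffic_share("green.example.com"))  # 0.25
```

A 3:1 weighting like this is a common pattern for canary testing: 75% of users hit the current version and 25% hit the new one.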

114
Q

A cyber forensics team has detected that AWS-owned IP addresses are being used to carry out malicious attacks. As this constitutes prohibited use of AWS services, which of the following is the correct solution to address this issue?
a. Write an email to Jeff Bezos, the founder of Amazon, with the details of the incident
b. Contact AWS Support
c. Contact AWS Developer Forum moderators
d. Contact AWS Abuse Team

A

d. Contact AWS Abuse Team

The AWS Abuse team can assist you when AWS resources are used to engage in abusive behavior.

Please see details of the various scenarios that the AWS Abuse team can address: https://aws.amazon.com/premiumsupport/knowledge-center/report-aws-abuse/

115
Q

Which AWS Support plan provides architectural guidance contextual to your specific use-cases?
a. AWS Enterprise On-Ramp Support
b. AWS Developer Support
c. AWS Enterprise Support
d. AWS Business Support

A

d. AWS Business Support

You should use AWS Business Support if you have production workloads on AWS and want 24x7 phone, email and chat access to technical support and architectural guidance in the context of your specific use-cases. You get full access to AWS Trusted Advisor Best Practice Checks. You also get access to Infrastructure Event Management for an additional fee.

AWS Enterprise Support - AWS Enterprise Support provides customers with concierge-like service where the main focus is helping the customer achieve their outcomes and find success in the cloud. With Enterprise Support, you get 24x7 technical support from high-quality engineers, tools and technology to automatically manage the health of your environment, consultative review and guidance based on your applications, and a designated Technical Account Manager (TAM) to coordinate access to proactive/preventative programs and AWS subject matter experts. This plan supports architectural guidance contextual to your application.

I still kind of don’t understand the difference. Business support drills

116
Q

Which of the following are correct statements regarding the AWS Global Infrastructure? (Select two)
a. Each AWS Region consists of a minimum of two Availability Zones (AZ)
b. Each Availability Zone (AZ) consists of two or more discrete data centers
c. Each AWS Region consists of a minimum of three Availability Zones (AZ)
d. Each Availability Zone (AZ) consists of one or more discrete data centers
e. Each AWS Region consists of two or more Edge Locations

A

c. Each AWS Region consists of a minimum of three Availability Zones (AZ)

d. Each Availability Zone (AZ) consists of one or more discrete data centers

AWS has the concept of a Region, which is a physical location around the world where AWS clusters its data centers. AWS calls each group of logical data centers an Availability Zone (AZ). Each AWS Region consists of a minimum of three, isolated, and physically separate AZs within a geographic area. Each AZ has independent power, cooling, and physical security and is connected via redundant, ultra-low-latency networks.

An Availability Zone (AZ) is one or more discrete data centers with redundant power, networking, and connectivity in an AWS Region. All AZs in an AWS Region are interconnected with high-bandwidth, low-latency networking, over fully redundant, dedicated metro fiber providing high-throughput, low-latency networking between AZs.

117
Q

Which of the following is an AWS database service?
a. Amazon Redshift
b. AWS Database Migration Service (AWS DMS)
c. AWS Storage Gateway
d. AWS Glue

A

a. Amazon Redshift

Amazon Redshift is a fully-managed petabyte-scale cloud-based data warehouse product designed for large scale data set storage and analysis.

118
Q

Which of the following statement is correct for a Security Group and a Network Access Control List (Network ACL)?

a. Security Group acts as a firewall at the instance level whereas Network Access Control List (Network ACL) acts as a firewall at the subnet level
b. Security Group acts as a firewall at the VPC level whereas Network Access Control List (Network ACL) acts as a firewall at the Availability Zone (AZ) level
c. Security Group acts as a firewall at the Availability Zone (AZ) level whereas Network Access Control List (Network ACL) acts as a firewall at the VPC level
d. Security Group acts as a firewall at the subnet level whereas Network Access Control List (Network ACL) acts as a firewall at the instance level

A

a. Security Group acts as a firewall at the instance level whereas Network Access Control List (Network ACL) acts as a firewall at the subnet level

A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. When you launch an instance in a VPC, you can assign up to five security groups to the instance. Security groups act at the instance level, not the subnet level. A network access control list (network ACL) is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets (i.e. it works at subnet level).

Security Group Overview: https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html

Network Access Control List (network ACL) Overview: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html

119
Q

Which of the following AWS services are regional in scope? (Select two)

a. Amazon CloudFront
b. AWS Web Application Firewall (AWS WAF)
c. AWS Identity and Access Management (AWS IAM)
d. AWS Lambda
e. Amazon Rekognition

A

d. AWS Lambda
e. Amazon Rekognition

Most of the services that AWS offers are Region specific. But a few services, by definition, need to be global in scope because of the underlying service they offer. AWS Identity and Access Management (AWS IAM), Amazon CloudFront, Amazon Route 53 and AWS Web Application Firewall (AWS WAF) are some of the global services.

120
Q

Which AWS service can be used to store, manage, and deploy Docker container images?

a. AWS Lambda
b. Amazon Elastic Compute Cloud (Amazon EC2)
c. Amazon Elastic Container Registry (Amazon ECR)
d. Amazon Elastic Container Service (Amazon ECS)

A

c. Amazon Elastic Container Registry (Amazon ECR)

Amazon Elastic Container Registry (Amazon ECR) can be used to store, manage, and deploy Docker container images. Amazon Elastic Container Registry (Amazon ECR) eliminates the need to operate your container repositories. You can then pull your docker images from Amazon Elastic Container Registry (Amazon ECR) and run those on Amazon Elastic Container Service (Amazon ECS).

Please see this schematic diagram to understand how Amazon Elastic Container Registry (Amazon ECR) works: https://aws.amazon.com/ecr/

Incorrect options:

Amazon Elastic Container Service (Amazon ECS) - Amazon Elastic Container Service (Amazon ECS) is a highly scalable, fast, container management service that makes it easy to run, stop, and manage Docker containers on a cluster. You cannot use Amazon Elastic Container Service (Amazon ECS) to store and deploy docker container images.

Please see this schematic diagram to understand how Amazon Elastic Container Service (Amazon ECS) works: https://aws.amazon.com/ecs/

121
Q

Which of the following is a serverless AWS service?
a. AWS Lambda
b. AWS Elastic Beanstalk
c. Amazon Elastic Compute Cloud (Amazon EC2)
d. Amazon EMR

A

a. AWS Lambda

Remember: C1 calls serverless Lambda

AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume. With Lambda, you can run code for virtually any type of application or backend service - all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale your code with high availability.

122
Q

Which of the following AWS services support reservations to optimize costs? (Select three)
a. Amazon DocumentDB
b. Amazon Elastic Compute Cloud (Amazon EC2)
c. Amazon Relational Database Service (Amazon RDS)
d. Amazon DynamoDB
e. Amazon Simple Storage Service (Amazon S3)
f. AWS Lambda

A

b. Amazon Elastic Compute Cloud (Amazon EC2)

c. Amazon Relational Database Service (Amazon RDS)

d. Amazon DynamoDB

The following AWS services support reservations to optimize costs:

Amazon EC2 Reserved Instances (RI)

Amazon DynamoDB Reserved Capacity

Amazon ElastiCache Reserved Nodes

Amazon RDS RIs

Amazon Redshift Reserved Nodes

123
Q

A data analytics company stores its data on Amazon Simple Storage Service (Amazon S3) and wants to do SQL-based analysis on this data with minimum effort. As a Cloud Practitioner, which of the following AWS services will you suggest for this use case?

a. Amazon DynamoDB
b. Amazon Redshift
c. Amazon Athena
d. Amazon Aurora

A

c. Amazon Athena

Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Amazon Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run.

Key features of Amazon Athena: https://aws.amazon.com/athena/

To use Amazon Athena, simply point to your data in Amazon Simple Storage Service (Amazon S3), define the schema, and start querying using standard SQL. Most results are delivered within seconds. With Amazon Athena, there’s no need for complex ETL jobs to prepare your data for analysis. This makes it easy for anyone with SQL skills to quickly analyze large-scale datasets.

Incorrect options:

Amazon Aurora - Amazon Aurora is a MySQL and PostgreSQL-compatible relational database built for the cloud, that combines the performance and availability of traditional enterprise databases with the simplicity and cost-effectiveness of open source databases. You cannot use Amazon Aurora for SQL analysis on S3 based data.

124
Q

A financial services company wants to ensure that its AWS account activity meets the governance, compliance and auditing norms. As a Cloud Practitioner, which AWS service would you recommend for this use-case?
a. AWS CloudTrail
b. AWS Config
c. AWS Trusted Advisor
d. Amazon CloudWatch

A

a. AWS CloudTrail

You can use CloudTrail to log, monitor and retain account activity related to actions across your AWS infrastructure. CloudTrail provides an event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command-line tools, and other AWS services.

125
Q

A company is looking for a guided path to help deploy, configure, and secure its new workloads while ensuring that it is ready for ongoing operations in the cloud. Which of the following AWS services/tools can be leveraged for this use case?

a. AWS Shared Responsibility Model
b. Cloud Foundations
c. AWS Config
d. AWS Trusted Advisor

A

b. Cloud Foundations

Cloud Foundations provides a guided path to help customers deploy, configure, and secure their new workloads while ensuring they are ready for ongoing operations in the cloud. Cloud Foundations helps customers navigate through the decisions they need to make through curated AWS Services, AWS Solutions, Partner Solutions, and Guidance.

Cloud Foundations: https://aws.amazon.com/architecture/cloud-foundations/

126
Q

What is the primary benefit of deploying an Amazon Relational Database Service (Amazon RDS) database in a Read Replica configuration?

a. Read Replica protects the database from a regional failure
b. Read Replica enhances database availability
c. Read Replica improves database scalability
d. Read Replica reduces database usage costs

A

c. Read Replica improves database scalability

Read Replicas help with scaling, not availability.

Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. Read Replicas allow you to create read-only copies that are synchronized with your master database. Read Replicas are used for improved read performance. You can also place your read replica in a different AWS Region closer to your users for better performance. Read Replicas are an example of horizontal scaling of resources.

Read Replica Overview: https://aws.amazon.com/rds/features/multi-az/
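The horizontal read scaling that replicas provide can be sketched as a crude read/write split. The endpoint names and round-robin policy below are illustrative assumptions, not an RDS feature:

```python
import itertools

# Writes go to the primary; reads fan out across read-only replica
# endpoints. Hostnames and the round-robin policy are made up.
primary = "mydb.example.rds.amazonaws.com"
replicas = ["mydb-ro-1.example.rds.amazonaws.com",
            "mydb-ro-2.example.rds.amazonaws.com"]
next_replica = itertools.cycle(replicas)

def route(query):
    # crude read/write split, for illustration only
    if query.lstrip().upper().startswith("SELECT"):
        return next(next_replica)
    return primary

targets = [route(q) for q in
           ["SELECT * FROM t", "INSERT INTO t VALUES (1)", "SELECT 1"]]
print(targets)
```

Adding more replicas grows read capacity without touching the primary, which is why replicas are an example of horizontal (scale-out) rather than vertical scaling.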

Exam Alert:

Please review the differences between Amazon RDS Multi-AZ, Multi-Region and Read Replica deployments for RDS: https://aws.amazon.com/rds/features/multi-az/

127
Q

A medical device company is looking for a durable and cost-effective way of storing their historic data. Due to compliance requirements, the data must be stored for 10 years. Which AWS Storage solution will you suggest?

a. Amazon Elastic File System (Amazon EFS)
b. Amazon S3 Glacier Deep Archive
c. Amazon S3 Glacier Flexible Retrieval
d. AWS Storage Gateway

A

b. Amazon S3 Glacier Deep Archive

128
Q

A company uses reserved EC2 instances across multiple units with each unit having its own AWS account. However, some of the units under-utilize their reserved instances while other units need more reserved instances. As a Cloud Practitioner, which of the following would you recommend as the most cost-optimal solution?
a. Use AWS Trusted Advisor to manage AWS accounts of all units and then share the reserved EC2 instances amongst all units
b. Use AWS Organizations to manage AWS accounts of all units and then share the reserved EC2 instances amongst all units
c. Use AWS Cost Explorer to manage AWS accounts of all units and then share the reserved EC2 instances amongst all units
d. Use AWS Systems Manager to manage AWS accounts of all units and then share the reserved EC2 instances amongst all units

A

b. Use AWS Organizations to manage AWS accounts of all units and then share the reserved EC2 instances amongst all units

AWS Organizations helps you to centrally manage billing; control access, compliance, and security; and share resources across your AWS accounts. Using AWS Organizations, you can automate account creation, create groups of accounts to reflect your business needs, and apply policies for these groups for governance. You can also simplify billing by setting up a single payment method for all of your AWS accounts. AWS Organizations is available to all AWS customers at no additional charge.

Key Features of AWS Organizations: https://aws.amazon.com/organizations/

129
Q

The DevOps team at an e-commerce company is trying to debug performance issues for its serverless application built using a microservices architecture. As a Cloud Practitioner, which AWS service would you recommend addressing this use-case?
a. AWS Trusted Advisor
b. AWS X-Ray
c. Amazon Pinpoint
d. AWS CloudFormation

A

b. AWS X-Ray

You can use AWS X-Ray to analyze and debug serverless and distributed applications such as those built using a microservices architecture. With X-Ray, you can understand how your application and its underlying services are performing to identify and troubleshoot the root cause of performance issues and errors.

AWS Trusted Advisor - AWS Trusted Advisor is an online tool that provides you real-time guidance to help you provision your resources following AWS best practices on cost optimization, security, fault tolerance, service limits and performance improvement. Whether establishing new workflows, developing applications, or as part of ongoing improvement, recommendations provided by Trusted Advisor regularly help keep your solutions provisioned optimally. Trusted Advisor cannot be used to debug performance issues for this serverless application built using a microservices architecture.

130
Q

Which feature of AWS Cloud offers the ability to innovate faster and rapidly develop, test and launch software applications?

a. Ability to deploy globally in minutes
b. Cost savings
c. Elasticity
d. Agility

A

d. Agility

Cloud computing is the on-demand delivery of IT resources over the Internet with pay-as-you-go pricing. Instead of buying, owning, and maintaining physical data centers and servers, you can access technology services, such as computing power, storage, and databases, on an as-needed basis from a cloud provider like Amazon Web Services (AWS).

Agility - Agility refers to the ability of the cloud to give you easy access to a broad range of technologies so that you can innovate faster and build nearly anything that you can imagine. You can quickly spin up resources as you need them – from infrastructure services, such as compute, storage, and databases, to Internet of Things, machine learning, data lakes and analytics, and much more.

Incorrect options:

Elasticity - With cloud computing elasticity, you don’t have to over-provision resources upfront to handle peak levels of business activity in the future. Instead, you provision the number of resources that you actually need. You can scale these resources up or down instantly to grow and shrink capacity as your business needs change.

Cost savings - The cloud allows you to trade capital expenses (such as data centers and physical servers) for variable expenses, and only pay for IT as you consume it. Plus, the variable expenses are much lower than what you would pay to do it yourself because of the economies of scale.

Ability to deploy globally in minutes - With the cloud, you can expand to new geographic regions and deploy globally in minutes. For example, AWS has infrastructure all over the world, so you can deploy your application in multiple physical locations with just a few clicks. Putting applications in closer proximity to end users reduces latency and improves their experience.

Exam Alert:

Please review the benefits of Cloud Computing: https://aws.amazon.com/what-is-cloud-computing/

131
Q

Which of the following AWS services offer block-level storage? (Select two)

a. Instance Store
b. Amazon Elastic Container Service (Amazon ECS)
c. Amazon Elastic Block Store (Amazon EBS)
d. Amazon Simple Storage Service (Amazon S3)
e. Amazon Elastic File System (Amazon EFS)

A

a. Instance Store
c. Amazon Elastic Block Store (Amazon EBS)

Amazon Elastic Block Store (Amazon EBS) is an easy to use, high-performance block storage service designed for use with Amazon Elastic Compute Cloud (Amazon EC2) for both throughput and transaction-intensive workloads at any scale. A broad range of workloads, such as relational and non-relational databases, enterprise applications, containerized applications, big data analytics engines, file systems, and media workflows are widely deployed on Amazon EBS.

An instance store provides temporary block-level storage for your EC2 instance. This storage is located on disks that are physically attached to the host computer. Instance store is ideal for the temporary storage of information that changes frequently, such as buffers, caches, scratch data, and other temporary content, or for data that is replicated across a fleet of instances, such as a load-balanced pool of web servers. Instance storage is temporary: data is lost if the instance experiences a failure or is terminated. Amazon EC2 instance store cannot be used for file sharing between instances.

132
Q

Which tool will help you review your workloads against current AWS best practices for cost optimization, security, and performance improvement and then obtain advice to architect them better?

a. AWS Trusted Advisor
b. Amazon CloudWatch
c. Amazon Inspector
d. AWS Cost Explorer

A

a. AWS Trusted Advisor

AWS Trusted Advisor is an online tool that provides you real-time guidance to help you provision your resources following AWS best practices on cost optimization, security, fault tolerance, service limits, and performance improvement. Whether establishing new workflows, developing applications, or as part of ongoing improvement, recommendations provided by Trusted Advisor regularly help keep your solutions provisioned optimally. All AWS customers get access to the seven core Trusted Advisor checks to help increase the security and performance of the AWS environment.

How Trusted Advisor Works: https://aws.amazon.com/premiumsupport/technology/trusted-advisor/

133
Q

An organization has a complex IT architecture involving a lot of system dependencies and it wants to track the history of changes to each resource. Which AWS service will help the organization track the history of configuration changes for all the resources?

a. AWS Service Catalog
b. AWS CloudFormation
c. AWS CloudTrail
d. AWS Config

A

d. AWS Config

AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. Config continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations. Think resource-specific history, audit, and compliance; think Config.

With AWS Config, you can do the following:
1. Evaluate your AWS resource configurations for desired settings.
2. Get a snapshot of the current configurations of the supported resources that are associated with your AWS account.
3. Retrieve configurations of one or more resources that exist in your account.
4. Retrieve historical configurations of one or more resources.
5. Receive a notification whenever a resource is created, modified, or deleted.
6. View relationships between resources. For example, you might want to find all resources that use a particular security group.

AWS CloudTrail - AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides the event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command-line tools, and other AWS services. Think account-specific activity and audit; think CloudTrail. You cannot use CloudTrail to track changes to each resource on AWS.

134
Q

Which of the following AWS services are global in scope? (Select two)

a. Amazon CloudFront
b. Amazon Relational Database Service (Amazon RDS)
c. Amazon Simple Storage Service (Amazon S3)
d. Amazon Elastic Compute Cloud (Amazon EC2)
e. AWS Identity and Access Management (AWS IAM)

A

a. Amazon CloudFront
e. AWS Identity and Access Management (AWS IAM)

Most of the services that AWS offers are Region-specific. But a few services, by definition, need to have a global scope because of the underlying service they offer. AWS Identity and Access Management (AWS IAM), Amazon CloudFront, Amazon Route 53 and AWS Web Application Firewall (AWS WAF) are some of the global services.

AWS Identity and Access Management (AWS IAM) enables you to manage access to AWS services and resources securely. Using AWS Identity and Access Management (AWS IAM), you can create and manage IAM users and IAM user-groups, and use permissions to allow and deny their access to AWS resources.

Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds, all within a developer-friendly environment.

135
Q

Which of the following statements is correct regarding the Amazon Elastic File System (Amazon EFS) storage service?

a. EC2 instances can access files on an Amazon Elastic File System (Amazon EFS) file system across many Availability Zones (AZ) but not across VPCs and Regions
b. EC2 instances can access files on an Amazon Elastic File System (Amazon EFS) file system across many Availability Zones (AZ) and VPCs but not across Regions
c. EC2 instances can access files on an Amazon Elastic File System (Amazon EFS) file system only in one Availability Zone (AZ)
d. EC2 instances can access files on an Amazon Elastic File System (Amazon EFS) file system across many Availability Zones (AZ), Regions and VPCs

A

d. EC2 instances can access files on an Amazon Elastic File System (Amazon EFS) file system across many Availability Zones (AZ), Regions and VPCs

Amazon EFS is a regional service that stores data within and across multiple Availability Zones (AZs) for high availability and durability. Amazon EC2 instances can access your file system across AZs, Regions, and VPCs, while on-premises servers can access it using AWS Direct Connect or AWS VPN.

Amazon EFS Overview: https://aws.amazon.com/efs/

136
Q

Which of the following AWS services specialize in data migration from on-premises to AWS Cloud? (Select two)

a. AWS Transit Gateway
b. AWS Direct Connect
c. AWS Snowball
d. AWS Site-to-Site VPN
e. AWS Database Migration Service (AWS DMS)

A

c. AWS Snowball
e. AWS Database Migration Service (AWS DMS)

AWS Snowball is a data transport solution that accelerates moving terabytes to petabytes of data into and out of AWS services using storage devices designed to be secure for physical transport.

AWS Database Migration Service (AWS DMS) helps you migrate databases from on-premises to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. The AWS Database Migration Service (AWS DMS) can migrate your data to and from the most widely used commercial and open-source databases.

You can do both homogeneous and heterogeneous database migration using AWS Database Migration Service (AWS DMS): https://aws.amazon.com/dms/

Incorrect options:

AWS Site-to-Site VPN - AWS Site-to-Site VPN creates a secure connection between your data center or branch office and your AWS cloud resources. This connection goes over the public internet. AWS Site-to-Site VPN is a connectivity service and it does not specialize in data migration.

AWS Direct Connect - AWS Direct Connect creates a dedicated private connection from a remote network to your VPC. This is a private connection and does not use the public internet. It typically takes at least a month to establish this connection. Direct Connect is a connectivity service and it does not specialize in data migration.

AWS Transit Gateway - AWS Transit Gateway connects VPCs and on-premises networks through a central hub. This simplifies your network and puts an end to complex peering relationships. It acts as a cloud router – each new connection is only made once. As you expand globally, inter-Region peering connects AWS Transit Gateways using the AWS global network. Your data is automatically encrypted and never travels over the public internet. Transit Gateway is a connectivity service and it does not specialize in data migration.

137
Q

A startup wants to set up its IT infrastructure on AWS Cloud. The CTO would like to get an estimate of the monthly AWS bill based on the AWS services that the startup wants to use. As a Cloud Practitioner, which AWS service would you suggest for this use-case?
a. AWS Budgets
b. AWS Cost & Usage Report (AWS CUR)
c. AWS Cost Explorer
d. AWS Pricing Calculator

A

d. AWS Pricing Calculator

The AWS Pricing Calculator is used to estimate costs for planned usage before you deploy (AWS Cost Explorer, by contrast, breaks down money already spent).

138
Q

Which of the following is a recommended way to provide programmatic access to AWS resources?
a. Use AWS Multi-Factor Authentication (AWS MFA) to access AWS resources programmatically
b. Use Access Key ID and Secret Access Key to access AWS resources programmatically
c. Use IAM user group to access AWS resources programmatically
d. Create a new IAM user and share the username and password

A

b. Use Access Key ID and Secret Access Key to access AWS resources programmatically

Access keys are long-term credentials for an IAM user or the AWS account root user. You can use access keys to sign programmatic requests to the AWS CLI or AWS API (directly or using the AWS SDK). Access keys consist of two parts: an access key ID and a secret access key. Like a user name and password, you must use both the access key ID and secret access key together to authenticate your requests. When you create an access key pair, save the access key ID and secret access key in a secure location. The secret access key is available only at the time you create it. If you lose your secret access key, you must delete the access key and create a new one.
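As a sketch, programmatic tools such as the AWS CLI and SDKs typically read this key pair from a shared credentials file; the values below are AWS's own documented example placeholders, not real keys:

```ini
# ~/.aws/credentials (written by `aws configure`)
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```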

139
Q

A fleet of Amazon EC2 instances spread across different Availability Zones (AZ) needs to access, edit and share file-based data stored centrally on a system. As a Cloud Practitioner, which AWS service would you recommend for this use-case?

a. Amazon Simple Storage Service (Amazon S3)
b. EC2 Instance Store
c. Amazon Elastic Block Store (Amazon EBS)
d. Amazon Elastic File System (Amazon EFS)

A

d. Amazon Elastic File System (Amazon EFS)

Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed, elastic NFS file system. It is built to scale on-demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth. Amazon EFS is designed to provide massively parallel shared access to thousands of Amazon EC2 instances, enabling your applications to achieve high levels of aggregate throughput and IOPS with consistent low latencies.

How Amazon Elastic File System (Amazon EFS) Works: https://aws.amazon.com/efs/

140
Q

An AWS hardware failure has impacted one of your Amazon Elastic Block Store (Amazon EBS) volumes. Which AWS service will alert you of the affected resources and provide a remedial action?

a. AWS Health Dashboard – Your account health
b. Amazon GuardDuty
c. AWS Trusted Advisor
d. AWS Config

A

a. AWS Health Dashboard – Your account health

141
Q

Which Amazon Simple Storage Service (Amazon S3) storage class offers the lowest availability?

a. Amazon S3 Glacier Flexible Retrieval
b. Amazon S3 Intelligent-Tiering
c. Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)
d. Amazon S3 Standard

A

c. Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)

Just know that Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) has 99.5% availability, while the others have either 99.9% or 99.99% availability.

142
Q

What are the different gateway types supported by AWS Storage Gateway service?

a. Tape Gateway, File Gateway and Volume Gateway
b. Tape Gateway, Object Gateway and Volume Gateway
c. Object Gateway, File Gateway and Block Gateway
d. Tape Gateway, File Gateway and Block Gateway

A

a. Tape Gateway, File Gateway and Volume Gateway

AWS Storage Gateway is a hybrid cloud storage service that connects your existing on-premises environments with the AWS Cloud. Customers use Storage Gateway to simplify storage management and reduce costs for key hybrid cloud storage use cases. These include moving tape backups to the cloud, reducing on-premises storage with cloud-backed file shares, providing low latency access to data in AWS for on-premises applications, as well as various migration, archiving, processing, and disaster recovery use cases.

AWS Storage Gateway service provides three different types of gateways – Tape Gateway, File Gateway, and Volume Gateway – that seamlessly connect on-premises applications to cloud storage, caching data locally for low-latency access.

Gateway Storage Types Overview: https://aws.amazon.com/storagegateway/features/

"Block Gateway" and "Object Gateway" are made-up terms, so the other three options are incorrect.

143
Q

What is the primary benefit of deploying an Amazon RDS Multi-AZ database with one standby?

a. Amazon RDS Multi-AZ improves database performance for read-heavy workloads
b. Amazon RDS Multi-AZ enhances database availability
c. Amazon RDS Multi-AZ protects the database from a regional failure
d. Amazon RDS Multi-AZ reduces database usage costs

A

b. Amazon RDS Multi-AZ enhances database availability

Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Amazon Relational Database Service (Amazon RDS) instances, making them a natural fit for production database workloads. When you provision an Amazon RDS Multi-AZ Instance with one standby, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ).

In case of an infrastructure failure, Amazon RDS performs an automatic failover to the standby so that you can resume database operations as soon as the failover is complete.

How Amazon RDS Multi-AZ Works: https://aws.amazon.com/rds/features/multi-az/

144
Q

Which of the following AWS services should be used to automatically distribute incoming traffic across multiple targets?
a. AWS Elastic Beanstalk
b. Amazon OpenSearch Service
c. AWS Elastic Load Balancing (ELB)
d. AWS Auto Scaling

A

c. AWS Elastic Load Balancing (ELB)

Elastic Load Balancing (ELB) is used to automatically distribute your incoming application traffic across all the EC2 instances that you are running. You can use Elastic Load Balancing to manage incoming requests by optimally routing traffic so that no one instance is overwhelmed. Your load balancer acts as a single point of contact for all incoming web traffic to your application. When an instance is added, it needs to register with the load balancer or no traffic is routed to it. When an instance is removed, it must deregister from the load balancer or traffic continues to be routed to it.

AWS Elastic Beanstalk - AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed in a variety of programming languages. You can simply upload your code and Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, auto-scaling to application health monitoring. You cannot use Beanstalk to distribute incoming traffic across multiple targets.

145
Q

A web application stores all of its data on Amazon S3 buckets. A client has mandated that data be encrypted before sending it to Amazon S3.

Which of the following is the right technique for encrypting data as needed by the customer?
a. Enable client-side encryption using AWS encryption SDK
b. Encryption is enabled by default for all the objects written to Amazon S3. Additional configuration is not required
c. Enable server-side encryption with Amazon S3 Managed Keys (SSE-S3)
d. Enable server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS)

A

a. Enable client-side encryption using AWS encryption SDK

The act of encrypting data before sending it to Amazon S3 is termed client-side encryption. The AWS Encryption SDK is a client-side encryption library that is separate from the language-specific SDKs. You can use this encryption library to more easily implement encryption best practices in Amazon S3. Unlike the Amazon S3 encryption clients in the language-specific AWS SDKs, the AWS Encryption SDK is not tied to Amazon S3 and can be used to encrypt or decrypt data to be stored anywhere.
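To make the flow concrete, here is a toy sketch of the client-side pattern: the object is encrypted locally, and only ciphertext would ever be uploaded to S3. The XOR "cipher" below is NOT real cryptography; it merely stands in for the AWS Encryption SDK, which actually performs envelope encryption with KMS-protected data keys.

```python
import hashlib
import secrets

# Toy XOR keystream "cipher" for illustration only -- NOT real cryptography.
def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    stream = b""
    counter = 0
    while len(stream) < length:
        stream += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return stream[:length]

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = secrets.token_bytes(16)
    body = bytes(p ^ s for p, s in zip(plaintext, _keystream(key, nonce, len(plaintext))))
    return nonce + body  # nonce travels with the ciphertext

def toy_decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, body = blob[:16], blob[16:]
    return bytes(c ^ s for c, s in zip(body, _keystream(key, nonce, len(body))))

key = secrets.token_bytes(32)
ciphertext = toy_encrypt(key, b"patient record 42")
# Only `ciphertext` would be uploaded to S3; the plaintext and the key
# never leave the client -- which is the point of client-side encryption.
assert toy_decrypt(key, ciphertext) == b"patient record 42"
```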

146
Q

A customer is running a comparative study of pricing models of Amazon EFS and Amazon Elastic Block Store (Amazon EBS) that are used with the Amazon EC2 instances that host the application. Which of the following statements are correct regarding this use-case? (Select two)

a. Amazon Elastic Compute Cloud (Amazon EC2) data transfer charges will apply for all Amazon Elastic Block Store (Amazon EBS) direct APIs for Snapshots
b. Amazon Elastic Block Store (Amazon EBS) Snapshot storage pricing is based on the amount of space your data consumes in Amazon Elastic Block Store (Amazon EBS)
c. You will pay a fee each time you read from or write data stored on the Amazon Elastic File System (Amazon EFS) - Infrequent Access storage class
d. Amazon Elastic Block Store (Amazon EBS) Snapshots are stored incrementally, which means you are billed only for the changed blocks stored
e. With AWS Backup, you pay only for the amount of Amazon Elastic File System (Amazon EFS) backup storage you use in a month, you need not pay for restoring this data

A

c. You will pay a fee each time you read from or write data stored on the Amazon Elastic File System (Amazon EFS) - Infrequent Access storage class
d. Amazon Elastic Block Store (Amazon EBS) Snapshots are stored incrementally, which means you are billed only for the changed blocks stored

Amazon Elastic File System (Amazon EFS) - Infrequent Access storage class is cost-optimized for files accessed less frequently. Data stored on the Amazon Elastic File System (Amazon EFS) - Infrequent Access storage class costs less than Standard and you will pay a fee each time you read from or write to a file.

Amazon EBS Snapshots are a point-in-time copy of your block data. For the first snapshot of a volume, Amazon EBS saves a full copy of your data to Amazon S3. Amazon EBS Snapshots are stored incrementally, which means you are billed only for the changed blocks stored.
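A minimal sketch of how incremental snapshot billing adds up (the volume size and changed-block numbers are illustrative):

```python
def snapshot_billed_gb(volume_gb: float, changed_gb_per_snapshot: list[float]) -> float:
    """First snapshot stores the full volume; each later snapshot
    stores only the blocks that changed since the previous one."""
    return volume_gb + sum(changed_gb_per_snapshot)

# e.g. a 100 GB volume with two later snapshots that each changed 5 GB
# is billed for 110 GB of snapshot storage, not 300 GB:
print(snapshot_billed_gb(100.0, [5.0, 5.0]))  # 110.0
```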

147
Q

An organization maintains separate Amazon Virtual Private Clouds (Amazon VPC) for each of its departments. With expanding business, the organization now wants to connect all Amazon Virtual Private Clouds (Amazon VPC) for better departmental collaboration. Which AWS service will help the organization tackle the issue effectively?

a. VPC peering connection
b. AWS Site-to-Site VPN
c. AWS Direct Connect
d. AWS Transit Gateway

A

d. AWS Transit Gateway

AWS Transit Gateway connects Amazon Virtual Private Clouds (Amazon VPC) and on-premises networks through a central hub. This simplifies your network and puts an end to complex peering relationships. It acts as a cloud router – each new connection is only made once. As you expand globally, inter-Region peering connects AWS Transit Gateways using the AWS global network. Your data is automatically encrypted and never travels over the public internet.

How AWS Transit Gateway can simplify your network: https://aws.amazon.com/transit-gateway/

Incorrect options:

VPC peering connection - A VPC peering connection is a networking connection between two Amazon Virtual Private Clouds (Amazon VPC) that enables you to route traffic between them privately. VPC peering connection is not transitive, a separate VPC peering connection has to be made between two VPCs that need to talk to each other. With growing VPCs, this gets difficult to manage.

Transitive VPC peering connection is not allowed: https://docs.aws.amazon.com/vpc/latest/peering/invalid-peering-configurations.html

AWS Direct Connect - AWS Direct Connect creates a dedicated private connection from a remote network to your VPC. This is a private connection and does not use the public internet. It typically takes at least a month to establish this connection. AWS Direct Connect cannot be used to interconnect VPCs.

AWS Site-to-Site VPN - AWS Site-to-Site VPN creates a secure connection between your data center or branch office and your AWS cloud resources. This connection goes over the public internet. AWS Site-to-Site VPN cannot be used to interconnect VPCs.
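A small sketch of why non-transitive peering becomes hard to manage as the number of VPCs grows, compared with a hub-and-spoke Transit Gateway:

```python
def full_mesh_peerings(n_vpcs: int) -> int:
    # VPC peering is not transitive, so full connectivity needs a
    # peering connection between every pair of VPCs: n * (n - 1) / 2.
    return n_vpcs * (n_vpcs - 1) // 2

def transit_gateway_attachments(n_vpcs: int) -> int:
    # With a Transit Gateway hub, each VPC attaches exactly once.
    return n_vpcs

for n in (4, 10, 50):
    print(n, full_mesh_peerings(n), transit_gateway_attachments(n))
# 4 VPCs: 6 peerings vs 4 attachments; 50 VPCs: 1225 vs 50.
```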

148
Q

Which of the following is an INCORRECT statement about Scaling, a design principle of Reliability pillar of the AWS Well-Architected Framework?
a. A scale out operation implies you scale by adding more instances to your existing pool of resources
b. Fault tolerance is achieved by a scale out operation
c. A scale up operation implies you scale by adding more power (CPU, RAM) to your existing machine/node
d. Fault tolerance is achieved by a scale up operation

A

d. Fault tolerance is achieved by a scale up operation

A scale-up operation is constrained to running its processes on only one computer. In such systems, the only way to increase performance is to add more resources to that one computer in the form of faster CPUs, memory, or storage. Fault tolerance is not possible for such scaling operations, since a single instance is prone to failure.

149
Q

Which of the following are the advantages of cloud computing? (Select three)
a. Go global in minutes and deploy applications in multiple regions around the world with just a few clicks
b. Trade variable expense for capital expense
c. Allocate a few months of planning for your infrastructure capacity needs
d. Trade capital expense for variable expense
e. Benefit from massive economies of scale
f. Spend money on building and maintaining data centers

A

a. Go global in minutes and deploy applications in multiple regions around the world with just a few clicks

d. Trade capital expense for variable expense

e. Benefit from massive economies of scale

https://docs.aws.amazon.com/whitepapers/latest/aws-overview/six-advantages-of-cloud-computing.html

150
Q

Which Amazon Route 53 routing policy would you use to improve the performance for your customers by routing the requests to the AWS endpoint that provides the fastest experience?

a. Failover routing
b. Weighted routing
c. Simple routing
d. Latency-based routing

A

d. Latency-based routing

Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service. It is designed to give developers and businesses an extremely reliable and cost-effective way to route end users to Internet applications by translating names like www.example.com into the numeric IP addresses like 192.0.2.1 that computers use to connect to each other.

If your application is hosted in multiple AWS Regions, you can use latency-based routing policy to improve the performance for your users by serving their requests from the AWS Region that provides the lowest latency. To use latency-based routing, you create latency records for your resources in multiple AWS Regions. When Amazon Route 53 receives a DNS query for your domain or subdomain (example.com or acme.example.com), it determines which AWS Regions you’ve created latency records for, determines which region gives the user the lowest latency, and then selects a latency record for that region. Amazon Route 53 responds with the value from the selected record, such as the IP address for a web server.

Amazon Route 53 Routing Policy Overview: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html

Incorrect options:

Failover routing - This routing policy is used when you want to configure active-passive failover.

Weighted routing - This routing policy is used to route traffic to multiple resources in proportions that you specify.

Simple routing - With simple routing, you typically route traffic to a single resource, for example, to a web server for your website.
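The latency-based selection described above can be sketched as follows (the latency values are made up for illustration):

```python
# Hypothetical measured latencies (ms) from a user to each Region
# that has a latency record -- values are illustrative only.
latency_records = {
    "us-east-1": 182.0,
    "eu-west-1": 34.0,
    "ap-southeast-1": 241.0,
}

def pick_region(records: dict[str, float]) -> str:
    """Route 53 answers with the record of the lowest-latency Region."""
    return min(records, key=records.get)

print(pick_region(latency_records))  # eu-west-1
```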

151
Q

Access Key ID and Secret Access Key are tied to which of the following AWS Identity and Access Management (AWS IAM) entities?

a. IAM Role
b. IAM Policy
c. IAM User
d. IAM User Group

A

c. IAM User

Access keys are long-term credentials for an IAM user or the AWS account root user. You can use access keys to sign programmatic requests to the AWS CLI or AWS API (directly or using the AWS SDK). Access keys consist of two parts: an access key ID (for example, AKIAIOSFODNN7EXAMPLE) and a secret access key (for example, wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY). Like a user name and password, you must use both the access key ID and secret access key together to authenticate your requests. Access Keys are secret, just like a password. You should never share them.

Note to self: Think Github

152
Q

Under the AWS Shared Responsibility Model, which of the following is a shared responsibility of both AWS and the customer?
a. Infrastructure maintenance of Amazon Simple Storage Service (Amazon S3) storage servers
b. Availability Zone (AZ) infrastructure maintenance
c. Guarantee data separation among various AWS customers
d. Configuration Management

A

d. Configuration Management

tl;dr: AWS implements the configurations they support, and the customer sets the configurations they want.

Security and Compliance is a shared responsibility between AWS and the customer. This shared model can help relieve the customer’s operational burden as AWS operates, manages and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates.

Controls that apply to both the infrastructure layer and customer layers, but in completely separate contexts or perspectives are called shared controls. In a shared control, AWS provides the requirements for the infrastructure and the customer must provide their own control implementation within their use of AWS services. Configuration Management forms a part of shared controls - AWS maintains the configuration of its infrastructure devices, but a customer is responsible for configuring their own guest operating systems, databases, and applications.

153
Q

Which of the following AWS Support plans provide access to only core checks from the AWS Trusted Advisor Best Practice Checks? (Select two)
a. AWS Developer Support
b. AWS Basic Support
c. AWS Enterprise Support
d. AWS Business Support
e. AWS Enterprise On-Ramp Support

A

a. AWS Developer Support

b. AWS Basic Support

These are the two entry-level plans. All customers get the core Trusted Advisor checks; the full set of checks requires a Business, Enterprise On-Ramp, or Enterprise Support plan.

154
Q

Which AWS service can be used to execute code triggered by new files being uploaded to Amazon Simple Storage Service (Amazon S3)?

a. Amazon Simple Queue Service (Amazon SQS)
b. Amazon Elastic Container Service (Amazon ECS)
c. Amazon Elastic Compute Cloud (Amazon EC2)
d. AWS Lambda

A

d. AWS Lambda

AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume. With AWS Lambda, you can run code for virtually any type of application or backend service - all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale your code with high availability.

You can use Amazon S3 to trigger AWS Lambda to process data immediately after an upload. For example, you can use AWS Lambda to thumbnail images, transcode videos, index files, process logs, validate content, and aggregate and filter data in real-time.

How AWS Lambda executes code in response to a trigger from S3: https://aws.amazon.com/lambda/
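As a sketch of what such a trigger looks like in practice, here is a minimal handler that pulls the bucket name and object key out of the standard S3 event notification; the processing step itself (thumbnailing, transcoding, indexing, etc.) is left as a placeholder:

```python
import urllib.parse

def handler(event, context):
    """Minimal sketch of a Lambda handler for an S3 "ObjectCreated" event.

    The event shape mirrors the standard S3 notification; what you do with
    each object is up to you (thumbnail, transcode, index, validate...).
    """
    results = []
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        # Object keys arrive URL-encoded in S3 event notifications
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        results.append((bucket, key))
    return results
```

Lambda would invoke the handler automatically on each upload; a hypothetical test event with the same shape S3 sends can be used to exercise it locally.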

155
Q

An intern at an IT company provisioned a Linux based On-demand EC2 instance with per-second billing but terminated it within 30 seconds as he wanted to provision another instance type. What is the duration for which the instance would be charged?
a. 600 seconds
b. 30 seconds
c. 300 seconds
d. 60 seconds

A

d. 60 seconds

There is a one-minute minimum charge for Linux-based EC2 instances billed per second, so even though the instance ran for only 30 seconds, it is billed for 60 seconds.
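The billed duration comes down to simple arithmetic, assuming the one-minute minimum described above:

```python
def billed_seconds(run_seconds, minimum=60):
    """Per-second billing with a one-minute minimum charge
    (Linux-based On-Demand EC2 instances)."""
    return max(run_seconds, minimum)

# An instance terminated after 30 seconds is still billed for 60 seconds;
# one that runs 300 seconds is billed for exactly 300.
```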

156
Q

Which AWS service will you use to privately connect your virtual private cloud (VPC) to Amazon Simple Storage Service (Amazon S3)?

a. VPC Endpoint
b. AWS Direct Connect
c. AWS Transit Gateway
d. Amazon API Gateway

A

a. VPC Endpoint

A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by AWS PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network.

There are two types of VPC endpoints: interface endpoints and gateway endpoints.

An interface endpoint is an elastic network interface with a private IP address from the IP address range of your subnet that serves as an entry point for traffic destined to a supported service. Interface endpoints are powered by AWS PrivateLink, a technology that enables you to privately access services by using private IP addresses.

A gateway endpoint is a gateway that you specify as a target for a route in your route table for traffic destined to a supported AWS service. The following AWS services are supported:

Amazon Simple Storage Service (Amazon S3)

Amazon DynamoDB

Exam Alert:

You may see a question around this concept in the exam. Just remember that only Amazon S3 and Amazon DynamoDB support VPC gateway endpoints. All other services that support VPC endpoints use a VPC interface endpoint (note that Amazon S3 supports the VPC interface endpoint as well).
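The exam rule of thumb can be captured in a tiny lookup (simplified on purpose: it ignores that S3 also supports interface endpoints):

```python
# Only these two services use a VPC *gateway* endpoint; every other
# supported service uses an *interface* endpoint. (Simplification:
# Amazon S3 in fact supports both endpoint types.)
GATEWAY_ENDPOINT_SERVICES = {"s3", "dynamodb"}

def endpoint_type(service):
    return "gateway" if service.lower() in GATEWAY_ENDPOINT_SERVICES else "interface"
```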

Incorrect options:

AWS Direct Connect - AWS Direct Connect is a cloud service solution that makes it easy to establish a dedicated network connection from your premises to AWS. You can use AWS Direct Connect to establish a private virtual interface from your on-premises network directly to your Amazon VPC. Establishing this dedicated connection typically takes at least a month, so it is not the right tool for privately connecting a VPC to Amazon S3.

AWS Transit Gateway - AWS Transit Gateway connects VPCs and on-premises networks through a central hub. This simplifies your network and puts an end to complex peering relationships. It acts as a cloud router – each new connection is only made once. This service is helpful in reducing the complex topology of VPC peering when a lot of systems are involved.

Amazon API Gateway - Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the “front door” for applications to access data, business logic, or functionality from your backend services.

157
Q

A customer has created a VPC and a subnet within AWS Cloud. Which of the following statements is correct?

a. A subnet spans all of the Availability Zones (AZ) in the Region whereas an Amazon Virtual Private Cloud (Amazon VPC) spans only one Availability Zone (AZ) in the Region
b. An Amazon Virtual Private Cloud (Amazon VPC) spans all of the Availability Zones (AZ) in the Region whereas a subnet spans only one Availability Zone (AZ) in the Region
c. Both the Amazon Virtual Private Cloud (Amazon VPC) and the subnet span all of the Availability Zones (AZ) in the Region
d. Both the Amazon Virtual Private Cloud (Amazon VPC) and the subnet span only one Availability Zone (AZ) in the Region

A

b. An Amazon Virtual Private Cloud (Amazon VPC) spans all of the Availability Zones (AZ) in the Region whereas a subnet spans only one Availability Zone (AZ) in the Region

Amazon Virtual Private Cloud (Amazon VPC) is a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define. You have complete control over your virtual networking environment, including the selection of your IP address range, creation of subnets, and configuration of route tables and network gateways. An Amazon Virtual Private Cloud (Amazon VPC) spans all of the Availability Zones (AZ) in the Region.

A subnet is a range of IP addresses within your Amazon Virtual Private Cloud (Amazon VPC). A subnet spans only one Availability Zone (AZ) in the Region.

Amazon Virtual Private Cloud (Amazon VPC) and Subnet Overview: https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html
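The relationship can be illustrated with Python's ipaddress module: a hypothetical /16 VPC CIDR carved into one subnet per Availability Zone (the CIDR block and AZ names below are examples, not defaults):

```python
import ipaddress

# Hypothetical VPC CIDR; the VPC spans every AZ in the Region.
vpc = ipaddress.ip_network("10.0.0.0/16")
azs = ["us-east-1a", "us-east-1b", "us-east-1c"]  # example AZs

# One /20 subnet per AZ -- each subnet lives in exactly one AZ.
subnets = dict(zip(azs, vpc.subnets(new_prefix=20)))

for az, net in subnets.items():
    assert net.subnet_of(vpc)  # every subnet's range sits inside the VPC's range
```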

158
Q

According to the AWS Shared Responsibility Model, which of the following are the responsibilities of the customer? (Select two)

a. Operating system patches and updates of an Amazon Elastic Compute Cloud (Amazon EC2) instance
b. Compliance validation of Cloud infrastructure
c. AWS Global Network Security
d. Ensuring AWS employees cannot access customer data
e. Enabling data encryption of data stored in Amazon Simple Storage Service (Amazon S3) buckets

A

a. Operating system patches and updates of an Amazon Elastic Compute Cloud (Amazon EC2) instance
e. Enabling data encryption of data stored in Amazon Simple Storage Service (Amazon S3) buckets

Under the AWS Shared Responsibility Model, AWS is responsible for protecting the infrastructure that runs all of the services offered in the AWS Cloud. This infrastructure is composed of the hardware, software, networking, and facilities that run AWS Cloud services. Customer’s responsibility is determined by the AWS Cloud services that a customer selects.

Security and Compliance is a shared responsibility between AWS and the customer. This shared model can help relieve the customer’s operational burden as AWS operates, manages and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates.

Security “in” the cloud is the responsibility of the customer. A service such as Amazon Elastic Compute Cloud (Amazon EC2) is categorized as Infrastructure as a Service (IaaS) and, as such, requires the customer to perform all of the necessary security configuration and management tasks.

In the AWS Shared Responsibility Model, customers are responsible for managing their data (including encryption options), classifying their assets, and using IAM tools to apply the appropriate permissions.

AWS Shared Responsibility Model Overview: https://aws.amazon.com/compliance/shared-responsibility-model/

159
Q

Which of the following capabilities does Amazon Rekognition provide as a ready-to-use feature?

a. Convert images into greyscale
b. Identify objects in a photo
c. Human pose detection
d. Resize images quickly

A

b. Identify objects in a photo

160
Q

Which of the following AWS services have data encryption automatically enabled? (Select two)?

a. Amazon Elastic File System (Amazon EFS)
b. Amazon Redshift
c. Amazon Simple Storage Service (Amazon S3)
d. Amazon Elastic Block Store (Amazon EBS)
e. AWS Storage Gateway

A

c. Amazon Simple Storage Service (Amazon S3)
e. AWS Storage Gateway

All Amazon S3 buckets have encryption configured by default, and objects are automatically encrypted by using server-side encryption with Amazon S3 managed keys (SSE-S3). This encryption setting applies to all objects in your Amazon S3 buckets.

AWS Storage Gateway is a hybrid cloud storage service that gives you on-premises access to virtually unlimited cloud storage. All data transferred between the gateway and AWS storage is encrypted using SSL (for all three types of gateways - File, Volume and Tape Gateways).
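For reference, this is the shape of the server-side encryption configuration that boto3's s3.put_bucket_encryption accepts; "AES256" denotes SSE-S3, which is already the default for new buckets, so in practice you would only send a payload like this to change the setting (for example, to SSE-KMS):

```python
# Illustrative ServerSideEncryptionConfiguration payload for SSE-S3.
# New S3 buckets already have this encryption applied by default.
sse_s3_config = {
    "Rules": [
        {
            "ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}
        }
    ]
}
```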

161
Q

Which AWS service enables users to find, buy, and immediately start using software solutions in their AWS environment?

a. AWS Config
b. AWS Systems Manager
c. AWS OpsWorks
d. AWS Marketplace

A

d. AWS Marketplace

AWS Marketplace is a digital catalog with thousands of software listings from independent software vendors that make it easy to find, test, buy, and deploy software that runs on AWS. AWS Marketplace includes thousands of software listings from popular categories such as security, networking, storage, machine learning, IoT, business intelligence, database, and DevOps. You can use AWS Marketplace as a buyer (subscriber) or as a seller (provider), or both. Anyone with an AWS account can use AWS Marketplace as a consumer and can register to become a seller.

Incorrect options:

AWS Config - AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. Config continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations. Think resource-specific history, audit, and compliance; think Config.

AWS OpsWorks - AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed and managed across your Amazon EC2 instances or on-premises compute environments.

AWS Systems Manager - AWS Systems Manager gives you visibility and control of your infrastructure on AWS. Systems Manager provides a unified user interface so you can view operational data from multiple AWS services and allows you to automate operational tasks across your AWS resources. With Systems Manager, you can group resources, like Amazon EC2 instances, Amazon S3 buckets, or Amazon RDS instances, by application, view operational data for monitoring and troubleshooting, and take action on your groups of resources.

162
Q

A big data analytics company is moving its IT infrastructure from an on-premises data center to AWS Cloud. The company has some server-bound software licenses that it wants to use on AWS. As a Cloud Practitioner, which of the following EC2 instance types would you recommend to the company?
a. On-Demand Instance
b. Reserved Instance (RI)
c. Dedicated Instance
d. Dedicated Host

A

d. Dedicated Host

Amazon EC2 Dedicated Hosts allow you to use your eligible software licenses from vendors such as Microsoft and Oracle on Amazon EC2. An Amazon EC2 Dedicated Host is a physical server fully dedicated for your use, so you can help address corporate compliance requirements.

Exam Alert:

Please review the differences between Dedicated hosts and Dedicated instances: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/dedicated-hosts-overview.html

Dedicated Instance - A Dedicated Instance is an Amazon EC2 instance that runs in a virtual private cloud (VPC) on hardware that’s dedicated to a single customer. Dedicated Instances that belong to different AWS accounts are physically isolated at the hardware level. However, Dedicated Instances may share hardware with other instances from the same AWS account that are not Dedicated Instances. You cannot use Dedicated Instances for using server-bound software licenses.

163
Q

A company has a static website hosted on an Amazon Simple Storage Service (Amazon S3) bucket in an AWS Region in Asia. Although most of its users are in Asia, now it wants to drive growth globally. How can it improve the global performance of its static website?

a. Use Amazon CloudFront to improve the performance of your website
b. Use AWS Web Application Firewall (AWS WAF) to improve the performance of your website
c. Use Amazon S3 Transfer Acceleration (Amazon S3TA) to improve the performance of your website
d. Use Amazon CloudFormation to improve the performance of your website

A

a. Use Amazon CloudFront to improve the performance of your website

You can use Amazon CloudFront to improve the performance of your website. Amazon CloudFront makes your website files (such as HTML, images, and video) available from data centers around the world (called edge locations). When a visitor requests a file from your website, Amazon CloudFront automatically redirects the request to a copy of the file at the nearest edge location. This results in faster download times than if the visitor had requested the content from a data center that is located farther away.

Incorrect options:

Use Amazon CloudFormation to improve the performance of your website - AWS CloudFormation allows you to use programming languages or a simple text file to model and provision, in an automated and secure manner, all the resources needed for your applications across all regions and accounts. CloudFormation cannot be used to improve the performance of a static website.

Use AWS Web Application Firewall (AWS WAF) to improve the performance of your website - By using AWS Web Application Firewall (AWS WAF), you can configure web access control lists (Web ACLs) on your CloudFront distributions or Application Load Balancers to filter and block requests based on request signatures. Besides, by using AWS WAF’s rate-based rules, you can automatically block the IP addresses of bad actors when requests matching a rule exceed a threshold that you define. WAF cannot be used to improve the performance of a static website.

Use Amazon S3 Transfer Acceleration (Amazon S3TA) to improve the performance of your website - Amazon S3 Transfer Acceleration (Amazon S3TA) enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket. Amazon S3 Transfer Acceleration (Amazon S3TA) takes advantage of Amazon CloudFront’s globally distributed edge locations. As the data arrives at an edge location, data is routed to Amazon S3 over an optimized network path. Amazon S3 Transfer Acceleration (Amazon S3TA) cannot be used to improve the performance of a static website.

164
Q

An online gaming company wants to block users from certain geographies from accessing its content. Which AWS service can be used to accomplish this task?

a. Amazon CloudWatch
b. AWS Web Application Firewall (AWS WAF)
c. Security group
d. AWS Shield

A

b. AWS Web Application Firewall (AWS WAF)

AWS Web Application Firewall (AWS WAF) is a web application firewall that helps protect web applications from attacks by allowing you to configure rules that allow, block, or monitor (count) web requests based on conditions that you define. These conditions include IP addresses, HTTP headers, HTTP body, URI strings, SQL injection, and cross-site scripting. You can use the geographic (geo) match conditions to block requests originating from specific countries. The accuracy of the IP address to country lookup database varies by Region. Based on recent tests, AWS mentions that the overall accuracy for the IP address to country mapping is 99.8%.

How AWS Web Application Firewall (AWS WAF) Works: https://aws.amazon.com/waf/
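For illustration, an AWS WAFv2 rule that blocks requests from specific countries might look like the following; the rule name, priority, and country codes are hypothetical examples:

```python
# Illustrative AWS WAFv2 rule using a geographic match statement.
# Names, priority, and country codes here are made-up examples.
geo_block_rule = {
    "Name": "BlockListedCountries",
    "Priority": 0,
    "Statement": {"GeoMatchStatement": {"CountryCodes": ["KP", "IR"]}},
    "Action": {"Block": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "BlockListedCountries",
    },
}
```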

Security group - A security group acts as a virtual firewall for your EC2 instances to control incoming and outgoing traffic. Inbound rules control the incoming traffic to your instance, and outbound rules control the outgoing traffic from your instance. Security groups only have “allow” rules. You cannot use the security groups to block users from certain geographies.

165
Q

Which of the following AWS entities provides the information required to launch an Amazon Elastic Compute Cloud (Amazon EC2) instance?

a. Amazon Elastic Block Store (Amazon EBS)
b. Amazon Machine Image (AMI)
c. AWS Lambda
d. Amazon Elastic File System (Amazon EFS)

A

b. Amazon Machine Image (AMI)

An Amazon Machine Image (AMI) provides the information required to launch an instance. You must specify an Amazon Machine Image (AMI) when you launch an instance.

An Amazon Machine Image (AMI) includes the following:

One or more Amazon Elastic Block Store (Amazon EBS) snapshots, or, for instance-store-backed AMIs, a template for the root volume of the instance (for example, an operating system, an application server, and applications).

Launch permissions that control which AWS accounts can use the Amazon Machine Image (AMI) to launch instances.

A block device mapping that specifies the volumes to attach to the instance when it’s launched.

The following diagram summarizes the Amazon Machine Image (AMI) lifecycle: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html
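As an illustration of the third item above, a block device mapping in the shape the EC2 API uses (e.g. boto3's run_instances BlockDeviceMappings parameter); the device name, size, and volume type are examples:

```python
# Illustrative block device mapping; the device name, volume size,
# and volume type are example values, not requirements.
block_device_mappings = [
    {
        "DeviceName": "/dev/xvda",  # root volume
        "Ebs": {
            "VolumeSize": 8,          # GiB
            "VolumeType": "gp3",
            "DeleteOnTermination": True,
        },
    }
]
```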

166
Q

An IT company has a hybrid cloud architecture and it wants to centralize the server logs for its Amazon Elastic Compute Cloud (Amazon EC2) instances and on-premises servers. Which of the following is the MOST effective for this use-case?

a. Use Amazon CloudWatch Logs for both the Amazon Elastic Compute Cloud (Amazon EC2) instance and the on-premises servers
b. Use Amazon CloudWatch Logs for the Amazon Elastic Compute Cloud (Amazon EC2) instance and AWS CloudTrail for the on-premises servers
c. Use AWS CloudTrail for the Amazon Elastic Compute Cloud (Amazon EC2) instance and Amazon CloudWatch Logs for the on-premises servers
d. Use AWS Lambda to send log data from Amazon Elastic Compute Cloud (Amazon EC2) instance as well as on-premises servers to Amazon CloudWatch Logs

A

a. Use Amazon CloudWatch Logs for both the Amazon Elastic Compute Cloud (Amazon EC2) instance and the on-premises servers

You can use Amazon CloudWatch Logs to monitor, store, and access your log files from Amazon Elastic Compute Cloud (Amazon EC2) instances, AWS CloudTrail, Route 53, and other sources such as on-premises servers.

Amazon CloudWatch Logs enables you to centralize the logs from all of your systems, applications, and AWS services that you use, in a single, highly scalable service. You can then easily view them, search them for specific error codes or patterns, filter them based on specific fields, or archive them securely for future analysis.
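The kind of search described above can be sketched in plain Python; note that CloudWatch Logs has its own filter-pattern syntax, so this regex version only illustrates the idea:

```python
import re

def filter_events(events, pattern):
    """Return log lines matching a regex -- the sort of error-code search
    you would run across centralized logs."""
    rx = re.compile(pattern)
    return [line for line in events if rx.search(line)]
```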

167
Q

A unicorn startup is building an analytics application with support for a speech-based interface. The application will accept speech-based input from users and then convey results via speech. As a Cloud Practitioner, which solution would you recommend for the given use-case?
a. Use Amazon Polly to convert speech to text for downstream analysis. Then use Amazon Translate to convey the text results via speech
b. Use Amazon Polly to convert speech to text for downstream analysis. Then use Amazon Transcribe to convey the text results via speech
c. Use Amazon Transcribe to convert speech to text for downstream analysis. Then use Amazon Polly to convey the text results via speech
d. Use Amazon Translate to convert speech to text for downstream analysis. Then use Amazon Polly to convey the text results via speech

A

c. Use Amazon Transcribe to convert speech to text for downstream analysis. Then use Amazon Polly to convey the text results via speech

You can use Amazon Transcribe to add speech-to-text capability to your applications. Amazon Transcribe uses a deep learning process called automatic speech recognition (ASR) to convert speech to text quickly and accurately. Amazon Transcribe can be used to transcribe customer service calls, to automate closed captioning and subtitling, and to generate metadata for media assets.

You can use Amazon Polly to turn text into lifelike speech thereby allowing you to create applications that talk. Polly’s Text-to-Speech (TTS) service uses advanced deep learning technologies to synthesize natural sounding human speech.

168
Q

As per the AWS Shared Responsibility Model, which of the following is a responsibility of AWS from a security and compliance point of view?

a. Edge Location Management
b. Server-side Encryption (SSE)
c. Identity and Access Management
d. Customer Data

A

a. Edge Location Management

Security and Compliance is a shared responsibility between AWS and the customer. The AWS Shared Responsibility Model can help relieve the customer’s operational burden as AWS operates, manages, and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates.

AWS is responsible for security “of” the cloud. This covers their global infrastructure elements including Regions, Availability Zones (AZ), and Edge Locations.

Incorrect options:

Customer Data

Identity and Access Management

Server-side Encryption (SSE)

The customer is responsible for security “in” the cloud. Customers are responsible for managing their data including encryption options and using Identity and Access Management tools for implementing appropriate access control policies as per their organization requirements. For abstracted services, such as Amazon S3 and Amazon DynamoDB, AWS operates the infrastructure layer, the operating system, and platforms, and customers access the endpoints to store and retrieve data. Therefore, these three options fall under the responsibility of the customer according to the AWS shared responsibility model.

Exam Alert:

Please review the AWS Shared Responsibility Model in detail as you can expect multiple questions on this topic in the exam: https://aws.amazon.com/compliance/shared-responsibility-model/

169
Q

A financial services enterprise plans to enable Multi-Factor Authentication (MFA) for its employees. For ease of travel, they prefer not to use any physical devices to implement Multi-Factor Authentication (MFA). Which of the below options is best suited for this use case?

a. Soft Token Multi-Factor Authentication (MFA) device
b. U2F security key
c. Hardware Multi-Factor Authentication (MFA) device
d. Virtual Multi-Factor Authentication (MFA) device

A

d. Virtual Multi-Factor Authentication (MFA) device

A virtual MFA device is a software app that runs on a phone or other device and emulates a physical device. The device generates a six-digit numeric code based upon a time-synchronized one-time password algorithm. The user must type a valid code from the device on a second webpage during sign-in. Each virtual Multi-Factor Authentication (MFA) device assigned to a user must be unique. A user cannot type a code from another user’s virtual Multi-Factor Authentication (MFA) device to authenticate.
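The "time-synchronized one-time password algorithm" is TOTP (RFC 6238). A minimal implementation shows how a virtual MFA app derives the six-digit code from a shared secret and the current time:

```python
import hashlib
import hmac
import struct

def totp(secret, unix_time, digits=6, step=30):
    """RFC 6238 TOTP: the six-digit codes a virtual MFA app displays."""
    counter = struct.pack(">Q", unix_time // step)  # 30-second time window
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

With the RFC test secret b"12345678901234567890" at Unix time 59, this yields 287082, matching the published RFC 6238 test vector (truncated to six digits).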

170
Q

An IT company would like to move its IT resources (including any data and applications) from an AWS Region in the US to another AWS Region in Europe. Which of the following represents the correct solution for this use-case?

a. The company should just start creating new resources in the destination AWS Region and then migrate the relevant data and applications into this new AWS Region
b. The company should use AWS CloudFormation to move the resources (including any data and applications) from source AWS Region to destination AWS Region
c. The company should use AWS Database Migration Service (AWS DMS) to move the resources (including any data and applications) from source AWS Region to destination AWS Region
d. The company should raise a ticket with AWS Support for this resource migration

A

a. The company should just start creating new resources in the destination AWS Region and then migrate the relevant data and applications into this new AWS Region

The company needs to create resources in the new AWS Region and then move the relevant data and applications into the new AWS Region. There is no off-the-shelf solution or service that the company can use to facilitate this transition.

171
Q

Which AWS service will you use if you have to move large volumes of on-premises data to AWS Cloud from a remote location with limited bandwidth?

a. AWS Snowball
b. AWS Direct Connect
c. AWS Transit Gateway
d. AWS Virtual Private Network (VPN)

A

a. AWS Snowball

AWS Snowball, a part of the AWS Snow Family, is a data migration and edge computing device. If you have large quantities of data you need to migrate into AWS, offline data transfer with AWS Snowball can overcome the challenge of limited bandwidth, and avoid the need to lease additional bandwidth. AWS Snowball moves terabytes of data in about a week. You can use it to move things like databases, backups, archives, healthcare records, analytics datasets, IoT sensor data and media content, especially when network conditions prevent realistic timelines for transferring large amounts of data both into and out of AWS.
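A back-of-envelope calculation shows why offline transfer wins on limited bandwidth; the data volume, link speed, and utilization below are illustrative:

```python
def transfer_days(terabytes, link_mbps, utilization=0.8):
    """Days needed to push data over a network link at a given
    sustained utilization (illustrative arithmetic only)."""
    bits = terabytes * 1e12 * 8
    seconds = bits / (link_mbps * 1e6 * utilization)
    return seconds / 86400

# 100 TB over a 100 Mbps link at 80% utilization is roughly 115 days --
# versus about a week end-to-end with an AWS Snowball device.
```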

Incorrect options:

AWS Virtual Private Network (VPN) - A VPN connection refers to the connection between your Virtual Private Cloud and your on-premises network. By default, instances that you launch into an Amazon VPC can’t communicate with your own (remote) network. You can enable access to your remote network from your VPC by creating an AWS Site-to-Site VPN (Site-to-Site VPN) connection, and configuring routing to pass traffic through the connection. A VPN provides ongoing connectivity between AWS and your private on-premises network; it is not a data migration solution.

AWS Direct Connect - AWS Direct Connect is a cloud service solution that makes it easy to establish a dedicated network connection from your premises to AWS. You can use AWS Direct Connect to establish a private virtual interface from your on-premises network directly to your Amazon VPC, providing you with a private, high bandwidth network connection between your network and your VPC. This connection is private and does not go over the public internet. It takes at least a month to establish this physical connection. It is not feasible to set up AWS Direct Connect in remote locations.

AWS Transit Gateway - AWS Transit Gateway connects VPCs and on-premises networks through a central hub. This simplifies your network and puts an end to complex peering relationships. It acts as a cloud router – each new connection is only made once. VPC peering across large connections is made possible using AWS Transit Gateway without ending up with a complex VPC peering network. AWS Transit Gateway is not a data migration solution.

172
Q

An e-commerce company uses AWS Cloud and would like to receive separate invoices for development and production environments. As a Cloud Practitioner, which of the following solutions would you recommend for this use-case?

a. Create separate AWS accounts for development and production environments to receive separate invoices
b. Use AWS Cost Explorer to create separate invoices for development and production environments
c. Use AWS Organizations to create separate invoices for development and production environments
d. Tag all resources in the AWS account as either development or production. Then use the tags to create separate invoices

A

a. Create separate AWS accounts for development and production environments to receive separate invoices

Every AWS account receives its own invoice at the end of the month. You can get separate invoices for development and production environments by setting up separate AWS accounts for each environment.

Incorrect options:

Use AWS Organizations to create separate invoices for development and production environments - AWS Organizations helps you to centrally manage billing; control access, compliance, and security; and share resources across your AWS accounts. Using AWS Organizations, you can automate account creation, create groups of accounts to reflect your business needs, and apply policies for these groups for governance. You can also simplify billing by setting up a single payment method for all of your AWS accounts. AWS Organizations is available to all AWS customers at no additional charge.

AWS Organizations cannot create separate invoices for development and production environments, rather, AWS Organizations helps you to centrally manage billing.

Tag all resources in the AWS account as either development or production. Then use the tags to create separate invoices - You cannot create separate invoices based on tags.

Use AWS Cost Explorer to create separate invoices for development and production environments - AWS Cost Explorer lets you explore your AWS costs and usage at both a high level and at a detailed level of analysis, empowering you to dive deeper using several filtering dimensions (e.g., AWS Service, Region, Linked Account). AWS Cost Explorer cannot create separate invoices for development and production environments.

173
Q

Which of the following Amazon S3 storage classes takes the most time to retrieve data (also known as first byte latency)?
a. Amazon S3 Intelligent-Tiering
b. Amazon S3 Glacier Deep Archive
c. Amazon S3 Standard
d. Amazon S3 Glacier Flexible Retrieval

A

b. Amazon S3 Glacier Deep Archive

Glacier means slow to retrieve but cheap to hold onto, just in case. Amazon S3 Glacier Deep Archive is the lowest-cost storage class and has the longest first byte latency, with a default retrieval time of up to 12 hours.

174
Q

Which of the following statements are CORRECT about the AWS Auto Scaling group? (Select two)

a. Auto Scaling group scales in and reduces the number of Amazon EC2 instances to match a decrease in demand
b. Auto Scaling group scales down and reduces the number of Amazon EC2 instances to match a decrease in demand
c. Auto Scaling group scales down and downgrades to a less powerful Amazon EC2 instance to match a decrease in demand
d. Auto Scaling group scales out and adds more number of Amazon EC2 instances to match an increase in demand
e. Auto Scaling group scales up and upgrades to a more powerful Amazon EC2 instance to match an increase in demand

A

a. Auto Scaling group scales in and reduces the number of Amazon EC2 instances to match a decrease in demand
d. Auto Scaling group scales out and adds more number of Amazon EC2 instances to match an increase in demand

The other options involve scaling up or down, which is vertical scaling (changing to a more or less powerful instance type). An Auto Scaling group performs horizontal scaling: it changes the number of Amazon EC2 instances, not their size.
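The scale-out/scale-in decision can be sketched with a function loosely modeled on target tracking; this is an illustrative simplification, not the actual AWS algorithm:

```python
import math

def desired_capacity(current, metric_per_instance, target, min_size, max_size):
    """Simplified target-tracking sketch: size the fleet so per-instance
    load approaches the target, clamped to the group's min/max size."""
    desired = math.ceil(current * metric_per_instance / target)
    return max(min_size, min(max_size, desired))

# 4 instances at 80% load against a 50% target -> scale out to 7;
# 4 instances at 20% load -> scale in to the group minimum of 2.
```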

175
Q

Which of the following are recommended best practices for AWS Identity and Access Management (AWS IAM) service? (Select two)

a. Create a minimum number of accounts and share these account credentials among employees
b. Rotate credentials regularly
c. Grant maximum privileges to avoid assigning privileges again
d. Enable multi-factor authentication (MFA) for all users
e. Share AWS account root user access keys with other administrators

A

b. Rotate credentials regularly
d. Enable multi-factor authentication (MFA) for all users

AWS IAM security best practices: https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html

176
Q

Under the AWS Shared Responsibility Model, which of the following is the responsibility of a customer regarding AWS Lambda?

a. Maintain all runtime environments for AWS Lambda functions
b. Patch underlying OS for the AWS Lambda function infrastructure
c. Maintain versions of an AWS Lambda function
d. Configure networking infrastructure for the AWS Lambda functions

A

c. Maintain versions of an AWS Lambda function

Under the AWS Shared Responsibility Model, AWS is responsible for protecting the infrastructure that runs all of the services offered in the AWS Cloud. This infrastructure is composed of the hardware, software, networking, and facilities that run AWS Cloud services.

Under the AWS Shared Responsibility Model, customer’s responsibility is determined by the AWS Cloud services that a customer selects. For abstracted services, such as Amazon S3 and Amazon DynamoDB, AWS operates the infrastructure layer, the operating system, and platforms, and customers access the endpoints to store and retrieve data. Customers are responsible for managing their data (including encryption options), classifying their assets, and using IAM tools to apply the appropriate permissions.

For the given use-case, the customer is responsible for maintaining the versions of an AWS Lambda function.

AWS Shared Responsibility Model Overview: https://aws.amazon.com/compliance/shared-responsibility-model/

177
Q

Which AWS services/features support High Availability by default? (Select two)

a. Instance Store
b. Amazon Elastic File System (Amazon EFS)
c. Amazon Elastic Block Store (Amazon EBS)
d. Subnet
e. Amazon DynamoDB

A

b. Amazon Elastic File System (Amazon EFS)
e. Amazon DynamoDB

Amazon DynamoDB is a key-value and document database that delivers single-digit millisecond performance at any scale. It’s a fully managed, multi-Region, multi-master, durable database with built-in security, backup and restore, and in-memory caching for internet-scale applications. All of your data is stored on solid-state disks (SSDs) and is automatically replicated across multiple Availability Zones (AZ) in an AWS Region, providing built-in high availability and data durability.

Amazon DynamoDB High Availability: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html

Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources. It is built to scale on-demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth. Amazon EFS is a regional service storing data within and across multiple Availability Zones (AZ) for high availability and durability.

Amazon EFS High Availability: https://aws.amazon.com/efs/faq/

Incorrect options:

Subnet - A subnet is a range of IP addresses in your VPC. You can create AWS resources, such as EC2 instances, in specific subnets. Each subnet must reside entirely within one Availability Zone and cannot span multiple Availability Zones.

Amazon Elastic Block Store (Amazon EBS) - Amazon Elastic Block Store (Amazon EBS) is an easy to use, high-performance block storage service designed for use with Amazon Elastic Compute Cloud (EC2) for both throughput and transaction-intensive workloads at any scale. EBS volumes are replicated within an Availability Zone (AZ) and can easily scale to petabytes of data.

Instance Store - As Instance Store volumes are tied to an EC2 instance, they are also single Availability Zone (AZ) entities.

178
Q

Which of the following solutions can you use to connect your on-premises network with the AWS Cloud? (Select two)

a. AWS Virtual Private Network (VPN)
b. AWS Direct Connect
c. Amazon Virtual Private Cloud (Amazon VPC)
d. Amazon Route 53
e. Internet Gateway

A

a. AWS Virtual Private Network (VPN)
b. AWS Direct Connect

AWS Direct Connect - AWS Direct Connect is a cloud service solution that makes it easy to establish a dedicated network connection from your premises to AWS. Using AWS Direct Connect, you can establish private connectivity between AWS and your datacenter, office, or colocation environment, which in many cases can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than Internet-based connections.

How AWS Direct Connect Works: https://aws.amazon.com/directconnect/

AWS Virtual Private Network (VPN) - AWS Virtual Private Network (VPN) solutions establish secure connections between on-premises networks, remote offices, client devices, and the AWS global network. AWS VPN comprises two services: AWS Site-to-Site VPN and AWS Client VPN. Together, they deliver a highly-available, managed, and elastic cloud VPN solution to protect your network traffic.

How AWS Client VPN Works: https://aws.amazon.com/vpn/

179
Q

Which security service of AWS is enabled for all AWS customers, by default, at no additional cost?
a. AWS Secrets Manager
b. AWS Shield Standard
c. AWS Shield Advanced
d. AWS Web Application Firewall (AWS WAF)

A

b. AWS Shield Standard

AWS Shield Standard defends against most common, frequently occurring network and transport layer DDoS attacks that target your website or applications. While AWS Shield Standard helps protect all AWS customers, you get better protection if you are using Amazon CloudFront and Amazon Route 53. All AWS customers benefit from the automatic protections of AWS Shield Standard, at no additional charge.

180
Q

Which AWS service helps with global application availability and performance using the AWS global network?

a. Elastic Load Balancing (ELB)
b. Amazon CloudFront
c. Amazon Route 53
d. AWS Global Accelerator

A

d. AWS Global Accelerator

AWS Global Accelerator is a service that improves the availability and performance of your applications with local or global users. It provides static IP addresses that act as a fixed entry point to your application endpoints in a single or multiple AWS Regions, such as your Application Load Balancers, Network Load Balancers, or Amazon EC2 instances. AWS Global Accelerator uses the AWS global network to optimize the path from your users to your applications, improving the performance of your traffic by as much as 60%.

AWS Global Accelerator improves performance for a wide range of applications over TCP or UDP by proxying packets at the edge to applications running in one or more AWS Regions. AWS Global Accelerator is a good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT), or Voice over IP, as well as for HTTP use cases that specifically require static IP addresses or deterministic, fast regional failover.

How AWS Global Accelerator Works: https://aws.amazon.com/global-accelerator/

Exam Alert:

Please review the differences between Amazon CloudFront and AWS Global Accelerator: https://aws.amazon.com/global-accelerator/faqs/

181
Q

Which of the following statements are true about AWS Lambda? (Select two)

a. AWS Lambda lets you run code without provisioning or managing servers
b. You pay for the compute time you consume for AWS Lambda
c. AWS Lambda allows you to orchestrate and manage Docker containers to facilitate complex containerized applications on AWS
d. AWS Lambda allows you to install databases on the underlying serverless Operating System
e. AWS Lambda provides access to the underlying operating system to control its behavior through code

A

a. AWS Lambda lets you run code without provisioning or managing servers
b. You pay for the compute time you consume for AWS Lambda

AWS Lambda is a compute service that lets you run code without provisioning or managing servers. AWS Lambda executes your code only when needed and scales automatically, from a few requests per day to thousands per second. You pay for the compute time and the number of requests for your Lambda function - there is no charge when your code is not running. With AWS Lambda, you can run code for virtually any type of application or backend service - all with zero administration. AWS Lambda runs your code on a high-availability compute infrastructure and performs all of the administration of the compute resources, including server and operating system maintenance, capacity provisioning and automatic scaling, code monitoring and logging.

How AWS Lambda Works: https://aws.amazon.com/lambda/
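To make the "run code without managing servers" point concrete, here is a minimal sketch of a Lambda handler in Python. The event field `name` is hypothetical; Lambda simply invokes the function with whatever event payload triggered it.

```python
# A minimal AWS Lambda handler sketch (Python runtime).
# Lambda invokes this function with the event payload and a context
# object; you never provision or manage the server it runs on, and
# you are billed only for the compute time the invocation consumes.
def lambda_handler(event, context):
    # 'name' is a hypothetical field in the invoking event
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}
```

Deployed behind an API or an event source, this function scales from zero to thousands of concurrent invocations with no capacity planning.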

182
Q

Which AWS service publishes up-to-the-minute information on the general status and availability of all AWS services in all the Regions of AWS Cloud?

a. Amazon CloudWatch
b. AWS CloudFormation
c. AWS Health Dashboard - service health
d. AWS Health Dashboard - Your account health

A

c. AWS Health Dashboard - service health

AWS Health Dashboard - service health is the single place to learn about the availability and operations of AWS services. You can view the overall status of AWS services, and you can sign in to view personalized communications about your particular AWS account or organization.

You can check https://health.aws.amazon.com/health/status for current status information.

183
Q

AWS Shield Advanced provides expanded DDoS attack protection for web applications running on which of the following resources? (Select two)
a. AWS Global Accelerator
b. Amazon Route 53
c. AWS Elastic Beanstalk
d. AWS CloudFormation
e. Amazon API Gateway

A

a. AWS Global Accelerator
b. Amazon Route 53

AWS Shield Standard is activated for all AWS customers by default. For higher levels of protection against attacks, you can subscribe to AWS Shield Advanced. With Shield Advanced, you also have exclusive access to advanced, real-time metrics and reports for extensive visibility into attacks on your AWS resources. With the assistance of the DDoS Response Team (DRT), AWS Shield Advanced provides intelligent DDoS attack detection and mitigation not only for network layer (layer 3) and transport layer (layer 4) attacks but also for application layer (layer 7) attacks.

AWS Shield Advanced provides expanded DDoS attack protection for web applications running on the following resources: Amazon Elastic Compute Cloud, Elastic Load Balancing (ELB), Amazon CloudFront, Amazon Route 53, AWS Global Accelerator.

Probably helps to remember the five above.

184
Q

A financial services company must meet compliance requirements that mandate storing multiple copies of data in geographically distant locations. As the company uses Amazon Simple Storage Service (Amazon S3) as its main storage service, which of the following represents the MOST resource-efficient solution for this use-case?

a. Use S3 cross-region replication (S3 CRR) to replicate data between distant AWS Regions
b. For every new object, trigger an AWS Lambda function to write data into a bucket in another AWS Region
c. Use S3 same-region replication (S3 SRR) to replicate data between distant AWS Regions
d. Run a daily job on an Amazon Elastic Compute Cloud (Amazon EC2) instance to copy objects into another Region

A

a. Use S3 cross-region replication (S3 CRR) to replicate data between distant AWS Regions

Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance.

Replication enables automatic, asynchronous copying of objects across Amazon S3 buckets. Buckets that are configured for object replication can be owned by the same AWS account or by different accounts. You can copy objects between different AWS Regions or within the same Region.

Although Amazon S3 stores your data across multiple geographically distant Availability Zones by default, compliance requirements might dictate that you store data at even greater distances. S3 cross-region replication (S3 CRR) allows you to replicate data between distant AWS Regions to satisfy these requirements.
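For context, S3 CRR is configured as a replication rule on the source bucket. Below is a sketch of the configuration document that would be passed to the S3 `PutBucketReplication` API; the bucket name and IAM role ARN are hypothetical, and versioning must be enabled on both buckets.

```python
# Sketch of an S3 cross-region replication configuration
# (bucket name and role ARN are hypothetical). In practice this
# dict is passed to the PutBucketReplication API, e.g. via boto3's
# put_bucket_replication; the destination bucket lives in a
# distant Region to satisfy the compliance requirement.
replication_config = {
    "Role": "arn:aws:iam::123456789012:role/s3-crr-role",  # assumed role ARN
    "Rules": [
        {
            "ID": "replicate-everything",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {"Prefix": ""},  # empty prefix = replicate all objects
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {
                "Bucket": "arn:aws:s3:::compliance-replica-bucket"
            },
        }
    ],
}
```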

Exam Alert:

Please review the differences between S3 same-region replication (S3 SRR) and S3 cross-region replication (S3 CRR): https://docs.aws.amazon.com/AmazonS3/latest/dev/replication.html

Incorrect options:

For every new object, trigger an AWS Lambda function to write data into a bucket in another AWS Region - Although this solution is feasible, it is not resource-efficient, as AWS Lambda is used to do something that S3 cross-region replication (S3 CRR) provides off-the-shelf.

185
Q

An IT company is on a cost-optimization spree and wants to identify all Amazon Elastic Compute Cloud (Amazon EC2) instances that are under-utilized. Which AWS services can be used off-the-shelf to address this use-case without needing any manual configurations? (Select two)

a. AWS Cost Explorer
b. AWS Trusted Advisor
c. AWS Budgets
d. AWS Cost & Usage Report (AWS CUR)
e. Amazon CloudWatch

A

a. AWS Cost Explorer
b. AWS Trusted Advisor

AWS Trusted Advisor is an online tool that provides real-time guidance to help provision your resources following AWS best practices. Whether establishing new workflows, developing applications, or as part of ongoing improvement, recommendations provided by Trusted Advisor regularly help keep your solutions provisioned optimally. AWS Trusted Advisor analyzes your AWS environment and provides best practice recommendations in five categories: Cost Optimization, Performance, Security, Fault Tolerance, Service Limits.

AWS Trusted Advisor checks the Amazon Elastic Compute Cloud (Amazon EC2) instances that were running at any time during the last 14 days and alerts you if the daily CPU utilization was 10% or less and network I/O was 5 MB or less on 4 or more days.

How AWS Trusted Advisor Works: https://aws.amazon.com/premiumsupport/technology/trusted-advisor/
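The low-utilization rule above can be sketched as a simple check. This is a paraphrase of the documented thresholds, not the actual Trusted Advisor implementation:

```python
# Sketch of Trusted Advisor's "Low Utilization Amazon EC2 Instances"
# check: flag an instance if, on 4 or more of the last 14 days, daily
# CPU utilization was <= 10% AND daily network I/O was <= 5 MB.
def is_underutilized(daily_cpu_pct, daily_network_mb):
    low_days = sum(
        1
        for cpu, net in zip(daily_cpu_pct, daily_network_mb)
        if cpu <= 10 and net <= 5
    )
    return low_days >= 4

# 14 days of metrics for a mostly idle instance (hypothetical data)
cpu = [3, 5, 2, 4, 50, 60, 3, 4, 5, 2, 70, 3, 4, 5]
net = [1, 2, 1, 1, 90, 80, 2, 1, 1, 2, 95, 1, 2, 1]
print(is_underutilized(cpu, net))  # True: 11 low-utilization days
```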

AWS Cost Explorer has an easy-to-use interface that lets you visualize, understand, and manage your AWS costs and usage over time. AWS Cost Explorer includes a default report that helps you visualize the costs and usage associated with your top five cost-accruing AWS services, and gives you a detailed breakdown of all services in the table view. The reports let you adjust the time range to view historical data going back up to twelve months to gain an understanding of your cost trends.

The rightsizing recommendations feature in AWS Cost Explorer helps you identify cost-saving opportunities by downsizing or terminating Amazon EC2 instances. You can see all of your underutilized Amazon EC2 instances across member accounts in a single view to immediately identify how much you can save.

Incorrect options:

AWS Cost & Usage Report (AWS CUR) - The AWS Cost & Usage Report (AWS CUR) contains the most comprehensive set of cost and usage data available. You can use AWS Cost & Usage Report (AWS CUR) to publish your AWS billing reports to an Amazon Simple Storage Service (Amazon S3) bucket that you own. You can receive reports that break down your costs by the hour or month, by product or product resource, or by tags that you define yourself. AWS Cost & Usage Report (AWS CUR) cannot be used to identify under-utilized Amazon EC2 instances.

Amazon CloudWatch - Amazon CloudWatch can be used to create alarm to monitor your estimated charges. When you enable the monitoring of estimated charges for your AWS account, the estimated charges are calculated and sent several times daily to CloudWatch as metric data. You can choose to receive alerts by email when charges have exceeded a certain threshold. Think resource performance monitoring, events, and alerts; think CloudWatch. Amazon CloudWatch cannot be used to identify under-utilized Amazon EC2 instances without manually configuring an alarm with the appropriate threshold to track the Amazon EC2 utilization, so this option is incorrect.
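As a sketch of the billing alarm mentioned above, these are the kind of parameters you would pass to CloudWatch's `PutMetricAlarm` API (the threshold and SNS topic ARN are hypothetical; the `AWS/Billing` metric lives in us-east-1):

```python
# Sketch of a CloudWatch billing alarm definition (hypothetical
# threshold and SNS topic). Estimated charges are published to the
# AWS/Billing namespace several times a day once billing alerts
# are enabled on the account.
billing_alarm = {
    "AlarmName": "monthly-charges-over-100-usd",
    "Namespace": "AWS/Billing",
    "MetricName": "EstimatedCharges",
    "Dimensions": [{"Name": "Currency", "Value": "USD"}],
    "Statistic": "Maximum",
    "Period": 21600,  # 6 hours; estimated charges update a few times daily
    "EvaluationPeriods": 1,
    "Threshold": 100.0,  # hypothetical USD threshold
    "ComparisonOperator": "GreaterThanThreshold",
    "AlarmActions": ["arn:aws:sns:us-east-1:123456789012:billing-alerts"],
}
```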

186
Q

Which of the following AWS Support plans provide access to guidance, configuration, and troubleshooting of AWS interoperability with third-party software? (Select two)
a. AWS Basic Support
b. AWS Developer Support
c. AWS Corporate Support
d. AWS Business Support
e. AWS Enterprise Support

A

d. AWS Business Support
e. AWS Enterprise Support

Third-party software support is included only in these two higher tiers; the Basic and Developer plans do not cover it, and "AWS Corporate Support" is not a real plan.

187
Q

A research group wants to use EC2 instances to run a scientific computation application that has a fault tolerant architecture. The application needs high-performance hardware disks that provide fast I/O performance. As a Cloud Practitioner, which of the following storage options would you recommend as the MOST cost-effective solution?
a. Amazon Simple Storage Service (Amazon S3)
b. Instance Store
c. Amazon Elastic Block Store (EBS)
d. Amazon Elastic File System (Amazon EFS)

A

b. Instance Store

An instance store provides temporary block-level storage for your instance. This storage is located on disks that are physically attached to the host computer. It is a good option when you need storage with very low latency but don't need the data to persist when the instance terminates, or when your architecture is fault tolerant. For this use-case, the computation application itself has a fault tolerant architecture, so it can automatically handle any failure of Instance Store volumes.

Because Instance Store volumes are included in the instance's usage cost at no extra charge, this is the most cost-effective option.

188
Q

Which of the following statements are CORRECT regarding the Availability Zone (AZ) specific characteristics of Amazon Elastic Block Store (EBS) and Amazon Elastic File System (Amazon EFS) storage types?
a. EBS volume can be attached to one or more instances in multiple Availability Zones (AZ) and EFS file system can be mounted on instances in the same Availability Zone (AZ)
b. EBS volume can be attached to a single instance in the same Availability Zone (AZ) whereas EFS file system can be mounted on instances across multiple Availability Zones (AZ)
c. EBS volume can be attached to one or more instances in multiple Availability Zones (AZ) and EFS file system can be mounted on instances across multiple Availability Zones (AZ)
d. EBS volume can be attached to a single instance in the same Availability Zone (AZ) and EFS file system can only be mounted on instances in the same Availability Zone (AZ)

A

b. EBS volume can be attached to a single instance in the same Availability Zone (AZ) whereas EFS file system can be mounted on instances across multiple Availability Zones (AZ)

Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources. It is built to scale on-demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth.

The service is designed to be highly scalable, highly available, and highly durable. Amazon EFS file systems store data and metadata across multiple Availability Zones (AZ) in an AWS Region. EFS file system can be mounted on instances across multiple Availability Zones (AZ).

Amazon Elastic Block Store (EBS) is an easy to use, high-performance block storage service designed for use with Amazon Elastic Compute Cloud (EC2) for both throughput and transaction-intensive workloads at any scale.

Designed for mission-critical systems, EBS volumes are replicated within an Availability Zone (AZ) and can easily scale to petabytes of data. You can attach an available EBS volume to one instance that is in the same Availability Zone (AZ) as the volume.
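The EBS constraint above can be made concrete with a small pre-flight check. This is a hypothetical helper for illustration, not an AWS API:

```python
# Hypothetical pre-flight check illustrating the EBS constraint:
# a volume can only be attached to an instance in the SAME
# Availability Zone. (EFS has no such restriction: mount targets
# can serve instances across multiple AZs in the Region.)
def can_attach(volume_az, instance_az):
    return volume_az == instance_az

print(can_attach("us-east-1a", "us-east-1a"))  # True
print(can_attach("us-east-1a", "us-east-1b"))  # False: snapshot the
# volume and restore it in the other AZ to move the data
```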

189
Q

A company wants to improve the resiliency of its flagship application, so it plans to move from its traditional database system to a managed AWS NoSQL database service that supports an active-active configuration in both the US East and US West AWS Regions. Active-active configuration with cross-Region support is the prime criterion for any database solution the company considers.

Which AWS database service is the right fit for this requirement?
a. Amazon DynamoDB with global tables
b. Amazon Relational Database Service (Amazon RDS) for MySQL
c. Amazon DynamoDB with DynamoDB Accelerator
d. Amazon Aurora with multi-master clusters

A

a. Amazon DynamoDB with global tables

Amazon DynamoDB is a fully managed, serverless, key-value NoSQL database designed to run high-performance applications at any scale. DynamoDB offers built-in security, continuous backups, automated multi-region replication, in-memory caching, and data export tools.

DynamoDB global tables replicate data automatically across your choice of AWS Regions and automatically scale capacity to accommodate your workloads. With global tables, your globally distributed applications can access data locally in the selected regions to get single-digit millisecond read and write performance. DynamoDB offers active-active cross-region support that is needed for the company.
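As a sketch, a global table is created by adding replica Regions to an existing table. Below are the request parameters (table name and Region are hypothetical) that would be passed to the DynamoDB `UpdateTable` API under the current global tables version (2019.11.21):

```python
# Sketch of the UpdateTable request that adds a replica Region to
# an existing DynamoDB table, turning it into a global table
# (table name and Region are hypothetical). DynamoDB then keeps
# the replicas in sync automatically for active-active access.
add_replica_request = {
    "TableName": "orders",
    "ReplicaUpdates": [
        {"Create": {"RegionName": "us-west-2"}}  # add a West-US replica
    ],
}
```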

Amazon DynamoDB with DynamoDB Accelerator - DynamoDB Accelerator (DAX) is an in-memory cache that delivers fast read performance for your tables at scale by enabling you to use a fully managed in-memory cache. Using DAX, you can improve the read performance of your DynamoDB tables by up to 10 times—taking the time required for reads from milliseconds to microseconds, even at millions of requests per second. DAX does not offer active-active cross-Region configuration.

190
Q

Which of the following are the storage services offered by the AWS Cloud? (Select two)
a. Amazon Elastic File System (Amazon EFS)
b. Amazon Simple Notification Service (SNS)
c. Amazon Elastic Compute Cloud (Amazon EC2)
d. Amazon Simple Queue Service (SQS)
e. Amazon Simple Storage Service (Amazon S3)

A

a. Amazon Elastic File System (Amazon EFS)
e. Amazon Simple Storage Service (Amazon S3)

Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance.

Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources. It is built to scale on-demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth.

191
Q

Which of the following is a benefit of using AWS managed services such as Amazon Relational Database Service (Amazon RDS)?
a. The customer needs to manage database backups
b. The customer needs to patch the underlying OS
c. The performance of AWS managed Amazon Relational Database Service (Amazon RDS) instance is better than a customer-managed database instance
d. There is no need to optimize database instance type and size

A

c. The performance of AWS managed Amazon Relational Database Service (Amazon RDS) instance is better than a customer-managed database instance

Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while automating time-consuming administration tasks such as hardware provisioning, database setup, patching, and backups.

Amazon RDS provides a selection of instance types optimized to fit different relational database use cases. Instance types comprise varying combinations of CPU, memory, storage, and networking capacity and give you the flexibility to choose the appropriate mix of resources for your database to optimize the database for your use-case by selecting the correct instance type and size.

Because RDS instances are optimized for memory, performance, or I/O, the performance of an AWS managed Amazon Relational Database Service (Amazon RDS) instance is better than a customer-managed database instance.

192
Q

Which of the following options can be used to access and manage all AWS services? (Select three)

a. AWS Software Development Kit (SDK)
b. AWS Command Line Interface (AWS CLI)
c. AWS Systems Manager
d. AWS Management Console
e. AWS Secrets Manager
f. Amazon API Gateway

A

a. AWS Software Development Kit (SDK)
b. AWS Command Line Interface (AWS CLI)
d. AWS Management Console

AWS services can be accessed in three different ways:

AWS Management Console

This is a simple web interface for accessing AWS services.

AWS Command Line Interface (AWS CLI)

You can access AWS services from the command line and automate service management with scripts.

AWS Software Development Kit (SDK)

You can also access via AWS SDK that provides language-specific abstracted APIs for AWS services.

193
Q

AWS Compute Optimizer delivers recommendations for which of the following AWS resources? (Select two)

a. Amazon Elastic Compute Cloud (Amazon EC2) instances, Amazon Elastic File System (Amazon EFS)
b. Amazon Elastic Compute Cloud (Amazon EC2) instances, Amazon EC2 Auto Scaling groups
c. Amazon Elastic File System (Amazon EFS), AWS Lambda functions
d. AWS Lambda functions, Amazon Simple Storage Service (Amazon S3)
e. Amazon Elastic Block Store (Amazon EBS), AWS Lambda functions

A

b. Amazon Elastic Compute Cloud (Amazon EC2) instances, Amazon EC2 Auto Scaling groups
e. Amazon Elastic Block Store (Amazon EBS), AWS Lambda functions

AWS Compute Optimizer helps you identify the optimal AWS resource configurations, such as Amazon EC2 instance types, Amazon EBS volume configurations, and AWS Lambda function memory sizes, using machine learning to analyze historical utilization metrics. AWS Compute Optimizer delivers recommendations for selected types of EC2 instances, EC2 Auto Scaling groups, Amazon EBS volumes, and AWS Lambda functions.

AWS Compute Optimizer calculates an individual performance risk score for each resource dimension of the recommended instance, including CPU, memory, EBS throughput, EBS IOPS, disk throughput, disk IOPS, network throughput, and network packets per second (PPS).

AWS Compute Optimizer provides EC2 instance type and size recommendations for EC2 Auto Scaling groups with a fixed group size, meaning desired, minimum, and maximum are all set to the same value and have no scaling policy attached.

AWS Compute Optimizer supports IOPS and throughput recommendations for General Purpose (SSD) (gp3) volumes and IOPS recommendations for Provisioned IOPS (io1 and io2) volumes.

AWS Compute Optimizer helps you optimize two categories of Lambda functions. The first category includes Lambda functions that may be over-provisioned in memory sizes. The second category includes compute-intensive Lambda functions that may benefit from additional CPU power.

194
Q

Compared to the on-demand instance prices, what is the highest possible discount offered for reserved instances (RI)?

a. 40%
b. 90%
c. 72%
d. 50%

A

c. 72%

Reserved instances (RI) provide you with significant savings (up to 72%) on your Amazon Elastic Compute Cloud (Amazon EC2) costs compared to on-demand instance pricing. Reserved Instances (RI) are not physical instances, but rather a billing discount applied to the use of on-demand instances in your account. You can purchase a reserved instance (RI) for a one-year or three-year commitment, with the three-year commitment offering a bigger discount.

Amazon Elastic Compute Cloud (Amazon EC2) Pricing Options Overview: https://aws.amazon.com/ec2/pricing/

Note: Spot Instances can offer up to 90% discounts.
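The headline numbers are easy to sanity-check with simple arithmetic; the $1.00/hour on-demand rate below is hypothetical:

```python
# Effective hourly rates at the maximum advertised discounts,
# using a hypothetical $1.00/hour on-demand price.
on_demand = 1.00
ri_rate = on_demand * (1 - 0.72)    # Reserved Instances: up to 72% off
spot_rate = on_demand * (1 - 0.90)  # Spot Instances: up to 90% off
print(f"RI:   ${ri_rate:.2f}/hr")   # RI:   $0.28/hr
print(f"Spot: ${spot_rate:.2f}/hr") # Spot: $0.10/hr
```

The higher Spot discount comes with the caveat that AWS can reclaim Spot capacity, which is why the RI figure is the one to remember for this question.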

195
Q

Which AWS services can be used to facilitate organizational change management, part of the Reliability pillar of AWS Well-Architected Framework? (Select three)
a. Amazon Inspector
b. AWS CloudTrail
c. Amazon CloudWatch
d. Amazon GuardDuty
e. AWS Trusted Advisor
f. AWS Config

A

b. AWS CloudTrail
c. Amazon CloudWatch
f. AWS Config

There are three best practice areas for Reliability in the cloud - Foundations, Change Management, Failure Management. Being aware of how change affects a system (change management) allows you to plan proactively, and monitoring allows you to quickly identify trends that could lead to capacity issues or SLA breaches.

AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. Config continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations.

AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command-line tools, and other AWS services.

Amazon CloudWatch is a monitoring and observability service built for DevOps engineers, developers, site reliability engineers (SREs), and IT managers. CloudWatch provides data and actionable insights to monitor applications, respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health.