Technology Flashcards

1
Q

Which AWS services can be used to facilitate organizational change management, part of the Reliability pillar of the AWS Well-Architected Framework? (Select three)

A. Amazon GuardDuty
B. AWS Trusted Advisor
C. AWS Config
D. Amazon Inspector
E. AWS CloudTrail
F. Amazon CloudWatch

A

C. AWS Config
E. AWS CloudTrail
F. Amazon CloudWatch

Explanation
Correct options:

There are three best practice areas for Reliability in the cloud: Foundations, Change Management, and Failure Management. Being aware of how change affects a system (change management) allows you to plan proactively, and monitoring allows you to quickly identify trends that could lead to capacity issues or SLA breaches.

AWS Config - AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. Config continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations.

How AWS Config Works: via - https://aws.amazon.com/config/

AWS CloudTrail - AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command-line tools, and other AWS services.

How CloudTrail Works: via - https://aws.amazon.com/cloudtrail/

Amazon CloudWatch - Amazon CloudWatch is a monitoring and observability service built for DevOps engineers, developers, site reliability engineers (SREs), and IT managers. CloudWatch provides data and actionable insights to monitor applications, respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health.
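To make the monitoring side of change management concrete, here is a minimal boto3-style sketch that builds the parameters for a CloudWatch alarm on EC2 CPU utilization. The instance ID, SNS topic ARN, and 80% threshold are illustrative assumptions, not details from this card; the live call is shown only as a comment because it requires AWS credentials.

```python
# Sketch: parameters for a CloudWatch alarm that flags a capacity
# trend (sustained high CPU) on an EC2 instance. All identifiers
# below are illustrative placeholders.
def build_cpu_alarm(instance_id: str, sns_topic_arn: str, threshold: float = 80.0) -> dict:
    """Keyword arguments for cloudwatch.put_metric_alarm()."""
    return {
        "AlarmName": f"high-cpu-{instance_id}",
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "Statistic": "Average",
        "Period": 300,              # 5-minute datapoints
        "EvaluationPeriods": 2,     # two consecutive breaches before alarming
        "Threshold": threshold,
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [sns_topic_arn],  # e.g. notify an ops SNS topic
    }

# Usage against AWS (requires credentials):
#   import boto3
#   boto3.client("cloudwatch").put_metric_alarm(
#       **build_cpu_alarm("i-0123456789abcdef0",
#                         "arn:aws:sns:us-east-1:111122223333:ops-alerts"))
```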

Incorrect options:

AWS Trusted Advisor - AWS Trusted Advisor is an online tool that provides you real-time guidance to help you provision your resources following AWS best practices on cost optimization, security, fault tolerance, service limits, and performance improvement.

Amazon Inspector - Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. Amazon Inspector automatically assesses applications for exposure, vulnerabilities, and deviations from best practices.

Amazon GuardDuty - Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect your AWS account. GuardDuty analyzes billions of events across your AWS accounts from AWS CloudTrail (AWS user and API activity in your accounts), Amazon VPC Flow Logs (network traffic data), and DNS Logs (name query patterns). This service operates at the AWS account level, not at the instance level (such as an individual EC2 instance), and it cannot be used to check for OS vulnerabilities.

References:

https://d1.awsstatic.com/whitepapers/architecture/AWS_Well-Architected_Framework.pdf

https://aws.amazon.com/config/

https://aws.amazon.com/cloudtrail/

2
Q

A Silicon Valley based healthcare startup stores anonymized patient health data on Amazon S3. The CTO further wants to ensure that any sensitive data on S3 is discovered and identified to prevent sensitive data leaks. As a Cloud Practitioner, which AWS service would you recommend to address this use-case?

A. AWS Secrets Manager
B. Amazon Polly
C. AWS Glue
D. Amazon Macie

A

D. Amazon Macie

Explanation
Correct option:

Amazon Macie

Amazon Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and protect your sensitive data in AWS. Macie automatically provides an inventory of Amazon S3 buckets including a list of unencrypted buckets, publicly accessible buckets, and buckets shared with AWS accounts outside those you have defined in AWS Organizations. Then, Macie applies machine learning and pattern matching techniques to the buckets you select to identify and alert you to sensitive data, such as personally identifiable information (PII).

How Macie Works: via - https://aws.amazon.com/macie/
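As an illustration (not from the card), a sketch of the request a Cloud Practitioner might build for a one-time Macie sensitive-data discovery job over selected buckets. The account ID, bucket name, and job name are hypothetical placeholders, and the parameter shape follows the Macie2 CreateClassificationJob API as documented; verify field names against the current boto3 reference before relying on them.

```python
# Sketch: parameters for a one-time Macie classification job that
# scans chosen S3 buckets for sensitive data (e.g. PII). All
# identifiers below are illustrative placeholders.
def build_macie_job(account_id: str, buckets: list[str], name: str) -> dict:
    """Keyword arguments for macie2.create_classification_job()."""
    return {
        "jobType": "ONE_TIME",   # run once rather than on a schedule
        "name": name,
        "s3JobDefinition": {
            "bucketDefinitions": [
                {"accountId": account_id, "buckets": buckets}
            ]
        },
    }

# Usage against AWS (requires credentials and Macie enabled):
#   import boto3
#   boto3.client("macie2").create_classification_job(
#       **build_macie_job("111122223333", ["patient-data-bucket"], "pii-scan"))
```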

Incorrect options:

AWS Glue - AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for analytics. AWS Glue job is meant to be used for batch ETL data processing. It cannot be used to discover and protect your sensitive data in AWS.

Amazon Polly - Amazon Polly is a service that turns text into lifelike speech, allowing you to create applications that talk, and build entirely new categories of speech-enabled products. Polly’s Text-to-Speech (TTS) service uses advanced deep learning technologies to synthesize natural sounding human speech. It cannot be used to discover and protect your sensitive data in AWS.

AWS Secrets Manager - AWS Secrets Manager helps you protect secrets needed to access your applications, services, and IT resources. The service enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. Users and applications retrieve secrets with a call to Secrets Manager APIs, eliminating the need to hardcode sensitive information in plain text. It cannot be used to discover and protect your sensitive data in AWS.

Reference:

https://aws.amazon.com/macie/

3
Q

A research group wants to use EC2 instances to run a scientific computation application that has a fault tolerant architecture. The application needs high-performance hardware disks that provide fast I/O performance. As a Cloud Practitioner, which of the following storage options would you recommend as the MOST cost-effective solution?

A. Amazon Elastic Block Store (EBS)
B. Amazon Elastic File System (Amazon EFS)
C. Instance Store
D. Amazon Simple Storage Service (Amazon S3)

A

C. Instance Store

Explanation
Correct option:

Instance Store

An instance store provides temporary block-level storage for your instance. This storage is located on disks that are physically attached to the host computer. This is a good option when you need storage with very low latency, but you don’t need the data to persist when the instance terminates or you can take advantage of fault-tolerant architectures. For this use-case, the computation application itself has a fault tolerant architecture, so it can automatically handle any failures of Instance Store volumes.

Since Instance Store volumes are included as part of the instance's usage cost, this is also the most cost-effective option.

EC2 Instance Store Overview: via - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html

Incorrect options:

Amazon Elastic File System (Amazon EFS) - Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed, elastic NFS file system. EFS is not available as a hardware disk on the instance, so this option is not correct.

Amazon Elastic Block Store (EBS) - Amazon Elastic Block Store (EBS) is an easy to use, high-performance block storage service designed for use with Amazon Elastic Compute Cloud (EC2) for both throughput and transaction-intensive workloads at any scale. EBS is not available as a hardware disk on the instance, so this option is not correct.

Amazon Simple Storage Service (Amazon S3) - Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance. S3 is not available as a hardware disk on the instance, so this option is not correct.

Reference:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html

4
Q

A data analytics company is running a proprietary batch analytics application on AWS and wants to use a storage service which would be accessed by hundreds of EC2 instances simultaneously to append data to existing files. As a Cloud Practitioner, which AWS service would you suggest for this use-case?

A. Instance Store
B. Amazon Elastic Block Storage (Amazon EBS)
C. Amazon Simple Storage Service (Amazon S3)
D. Amazon Elastic File System (Amazon EFS)

A

D. Amazon Elastic File System (Amazon EFS)

Explanation
Correct option:

Amazon Elastic File System (Amazon EFS)

Amazon EFS is a file storage service for use with Amazon EC2. Amazon EFS provides a file system interface, file system access semantics, and concurrently-accessible storage for up to thousands of Amazon EC2 instances. Amazon EFS uses the Network File System protocol.

How EFS works: via - https://aws.amazon.com/efs/
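To sketch the append-from-many-instances pattern in a few lines of Python: the mount point below is an illustrative assumption; each EC2 instance would mount the same EFS file system over NFS and see the same path.

```python
# Sketch: each EC2 instance appends records to a file on a shared
# EFS mount. The path below is a hypothetical placeholder; every
# instance mounting the file system sees the same file.
EFS_LOG = "/mnt/efs/batch/results.log"  # hypothetical shared file

def append_record(path: str, record: str) -> None:
    # Mode "a" opens the file with O_APPEND, so each write lands at
    # the current end of file even with multiple writers.
    with open(path, "a") as f:
        f.write(record + "\n")

def read_records(path: str) -> list[str]:
    with open(path) as f:
        return f.read().splitlines()
```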

Incorrect options:

Amazon Elastic Block Store (Amazon EBS) - Amazon Elastic Block Store (EBS) is an easy to use, high-performance block storage service designed for use with Amazon Elastic Compute Cloud (EC2) for both throughput and transaction-intensive workloads at any scale. EBS volumes cannot be accessed simultaneously by multiple EC2 instances, so this option is incorrect.

Instance Store - An instance store provides temporary block-level storage for your instance. This storage is located on disks that are physically attached to the host computer. Instance Store volumes cannot be accessed simultaneously by multiple EC2 instances, so this option is incorrect.

Amazon Simple Storage Service (Amazon S3) - Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance. S3 is object storage and it does not support file append operations, so this option is incorrect.

Reference:

https://aws.amazon.com/efs/

5
Q

Which of the following statements are CORRECT regarding the Availability Zone (AZ) specific characteristics of Amazon Elastic Block Store (EBS) and Amazon Elastic File System (Amazon EFS) storage types?

A. EBS volumes can be attached to a single instance in the same Availability Zone (AZ), whereas EFS file systems can be mounted on instances across multiple Availability Zones (AZ)
B. EBS volumes can be attached to one or more instances in multiple Availability Zones (AZ), and EFS file systems can be mounted on instances across multiple Availability Zones (AZ)
C. EBS volumes can be attached to one or more instances in multiple Availability Zones (AZ), and EFS file systems can be mounted on instances in the same Availability Zone (AZ)
D. EBS volumes can be attached to a single instance in the same Availability Zone (AZ), and EFS file systems can only be mounted on instances in the same Availability Zone (AZ)

A

A. EBS volumes can be attached to a single instance in the same Availability Zone (AZ), whereas EFS file systems can be mounted on instances across multiple Availability Zones (AZ)

Explanation
Correct options:

EBS volume can be attached to a single instance in the same Availability Zone (AZ) whereas EFS file system can be mounted on instances across multiple Availability Zones (AZ)

Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources. It is built to scale on-demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth.

The service is designed to be highly scalable, highly available, and highly durable. Amazon EFS file systems store data and metadata across multiple Availability Zones (AZ) in an AWS Region. EFS file system can be mounted on instances across multiple Availability Zones (AZ).

Amazon Elastic Block Store (EBS) is an easy to use, high-performance block storage service designed for use with Amazon Elastic Compute Cloud (EC2) for both throughput and transaction-intensive workloads at any scale.

Designed for mission-critical systems, EBS volumes are replicated within an Availability Zone (AZ) and can easily scale to petabytes of data. You can attach an available EBS volume to one instance that is in the same Availability Zone (AZ) as the volume.
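The same-AZ constraint shows up directly in the EC2 API. A minimal sketch (volume ID, instance ID, and device name are illustrative placeholders; the live call is commented out because it needs AWS credentials):

```python
# Sketch: attaching an EBS volume to an EC2 instance. The call
# succeeds only when the volume and the instance are in the same
# Availability Zone; identifiers below are placeholders.
def build_attach_request(volume_id: str, instance_id: str, device: str = "/dev/sdf") -> dict:
    """Keyword arguments for ec2.attach_volume()."""
    return {"VolumeId": volume_id, "InstanceId": instance_id, "Device": device}

# Usage against AWS (requires credentials):
#   import boto3
#   ec2 = boto3.client("ec2")
#   # Fails with an error if the volume was created in a different
#   # AZ than the instance.
#   ec2.attach_volume(**build_attach_request("vol-0abc...", "i-0abc..."))
```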

Incorrect options:

EBS volume can be attached to one or more instances in multiple Availability Zones (AZ) and EFS file system can be mounted on instances in the same Availability Zone (AZ)

EBS volume can be attached to a single instance in the same Availability Zone (AZ) and EFS file system can only be mounted on instances in the same Availability Zone (AZ)

EBS volume can be attached to one or more instances in multiple Availability Zones (AZ) and EFS file system can be mounted on instances across multiple Availability Zones (AZ)

These three options contradict the details provided earlier in the explanation, so these options are incorrect.

References:

https://aws.amazon.com/efs/faq/

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-attaching-volume.html

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volumes-multi.html

6
Q

Which tool/service will help you access AWS services using programming language-specific APIs?

A. AWS Management Console
B. AWS Command Line Interface (CLI)
C. AWS Software Developer Kit (SDK)
D. Integrated Development Environments (IDE)

A

C. AWS Software Developer Kit (SDK)

Explanation
Correct option:

AWS Software Developer Kit (SDK) - SDKs take the complexity out of coding by providing language-specific APIs for AWS services. For example, the AWS SDK for JavaScript simplifies the use of AWS Services by providing a set of libraries that are consistent and familiar for JavaScript developers. It provides support for API lifecycle considerations such as credential management, retries, data marshaling, serialization, and deserialization. AWS SDKs are offered in several programming languages to make it simple for developers working on different programming and scripting languages. So, AWS SDK can help with using AWS services from within an application using language-specific APIs.
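To make "language-specific API" concrete, a minimal sketch using the AWS SDK for Python (boto3): the SDK exposes S3 as ordinary Python calls and dictionaries instead of raw HTTP requests. The helper function name is hypothetical, and the live call is shown only as a comment since it requires AWS credentials.

```python
# Sketch: parsing the response shape returned by the boto3 S3
# client's list_buckets() call.
def bucket_names(list_buckets_response: dict) -> list[str]:
    """Pull bucket names out of an s3.list_buckets() response."""
    return [b["Name"] for b in list_buckets_response.get("Buckets", [])]

# Usage against AWS (requires credentials):
#   import boto3
#   s3 = boto3.client("s3")
#   print(bucket_names(s3.list_buckets()))
```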

Incorrect options:

AWS Management Console - The AWS Management Console is a web application that comprises a broad collection of service consoles for managing Amazon Web Services. When you first sign in, you see the console home page. The home page provides access to each service console as well as an intuitive user interface for exploring AWS and getting helpful tips.

AWS Command Line Interface (CLI) - The AWS Command Line Interface (CLI) is a unified tool to manage your AWS services. With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts. The CLI is a command-line tool; it does not provide language-specific APIs for use inside application code.

Integrated Development Environments (IDE) - An integrated development environment (IDE) provides a set of coding productivity tools such as a source code editor, a debugger, and build tools. Cloud9 IDE is an offering from AWS under IDEs.

References:

https://aws.amazon.com/tools/

https://aws.amazon.com/cli/

7
Q

Which of the following are correct statements regarding the AWS Global Infrastructure? (Select two)

A. Each AWS Region consists of two or more Edge locations
B. Each Availability Zone (AZ) consists of one or more discrete data centers
C. Each AWS Region consists of a minimum of two Availability Zones (AZ)
D. Each AWS Region consists of a minimum of three Availability Zones (AZ)
E. Each Availability Zone (AZ) consists of two or more discrete data centers

A

Correct options:
B. Each Availability Zone (AZ) consists of one or more discrete data centers

D. Each AWS Region consists of a minimum of three Availability Zones (AZ)

Explanation
Each Availability Zone (AZ) consists of one or more discrete data centers

AWS has the concept of a Region, which is a physical location around the world where AWS clusters its data centers. AWS calls each group of logical data centers an Availability Zone (AZ). Each AWS Region consists of a minimum of three, isolated, and physically separate AZs within a geographic area. Each AZ has independent power, cooling, and physical security and is connected via redundant, ultra-low-latency networks.

An Availability Zone (AZ) is one or more discrete data centers with redundant power, networking, and connectivity in an AWS Region. All AZs in an AWS Region are interconnected with high-bandwidth, low-latency networking, over fully redundant, dedicated metro fiber providing high-throughput, low-latency networking between AZs.

AWS Regions and Availability Zones Overview: via - https://aws.amazon.com/about-aws/global-infrastructure/regions_az/

Incorrect options:

Each AWS Region consists of a minimum of two Availability Zones (AZ)

Each Availability Zone (AZ) consists of two or more discrete data centers

Each AWS Region consists of two or more Edge Locations

These three options contradict the details provided earlier in the explanation, so these options are incorrect.

Reference:

https://aws.amazon.com/about-aws/global-infrastructure/regions_az/

8
Q

A unicorn startup is building an analytics application with support for a speech-based interface. The application will accept speech-based input from users and then convey results via speech. As a Cloud Practitioner, which solution would you recommend for the given use-case?

A. Use Amazon Translate to convert speech to text for downstream analysis. Then use Amazon Polly to convey the text results via speech
B. Use Amazon Polly to convert speech to text for downstream analysis. Then use Amazon Translate to convey the results via speech
C. Use Amazon Transcribe to convert speech to text for downstream analysis. Then use Amazon Polly to convey the text results via speech
D. Use Amazon Polly to convert speech to text for downstream analysis. Then use Amazon Transcribe to convey the text results via speech

A

C. Use Amazon Transcribe to convert speech to text for downstream analysis. Then use Amazon Polly to convey the text results via speech

Explanation
Correct option:

Use Amazon Transcribe to convert speech to text for downstream analysis. Then use Amazon Polly to convey the text results via speech

You can use Amazon Transcribe to add speech-to-text capability to your applications. Amazon Transcribe uses a deep learning process called automatic speech recognition (ASR) to convert speech to text quickly and accurately. Amazon Transcribe can be used to transcribe customer service calls, to automate closed captioning and subtitling, and to generate metadata for media assets.

Amazon Transcribe Use-Cases: via - https://aws.amazon.com/transcribe/

You can use Amazon Polly to turn text into lifelike speech thereby allowing you to create applications that talk. Polly’s Text-to-Speech (TTS) service uses advanced deep learning technologies to synthesize natural sounding human speech.

Amazon Polly Benefits: via - https://aws.amazon.com/polly/

Amazon Translate is used for language translation. Amazon Translate uses neural machine translation via deep learning models to deliver more accurate and more natural-sounding translation than traditional statistical and rule-based translation algorithms.
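The two halves of the pipeline can be sketched as boto3 request parameters. Job name, S3 URI, media format, language, and voice are illustrative assumptions, and the live calls are shown only as comments because they need AWS credentials.

```python
# Sketch: speech-in, speech-out pipeline. Transcribe turns the
# user's audio into text; Polly turns the analysis results back
# into speech. All identifiers are placeholders.
def build_transcribe_job(job_name: str, media_uri: str) -> dict:
    """Keyword arguments for transcribe.start_transcription_job()."""
    return {
        "TranscriptionJobName": job_name,
        "Media": {"MediaFileUri": media_uri},  # e.g. an S3 object with the audio
        "MediaFormat": "mp3",
        "LanguageCode": "en-US",
    }

def build_polly_request(text: str) -> dict:
    """Keyword arguments for polly.synthesize_speech()."""
    return {"Text": text, "OutputFormat": "mp3", "VoiceId": "Joanna"}

# Usage against AWS (requires credentials):
#   import boto3
#   boto3.client("transcribe").start_transcription_job(
#       **build_transcribe_job("query-1", "s3://example-bucket/input.mp3"))
#   boto3.client("polly").synthesize_speech(
#       **build_polly_request("Here are your results."))
```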

Incorrect options:

Use Amazon Polly to convert speech to text for downstream analysis. Then use Amazon Transcribe to convey the text results via speech - Amazon Polly cannot be used to convert speech to text, so this option is incorrect.

Use Amazon Translate to convert speech to text for downstream analysis. Then use Amazon Polly to convey the text results via speech - Amazon Translate cannot convert speech to text, so this option is incorrect.

Use Amazon Polly to convert speech to text for downstream analysis. Then use Amazon Translate to convey the text results via speech - Amazon Polly cannot be used to convert speech to text, so this option is incorrect.

References:

https://aws.amazon.com/transcribe/

https://aws.amazon.com/polly/

9
Q

Which of the following entities applies patches to the underlying OS for Amazon Aurora?

A. The AWS Product Team automatically
B. The AWS customer by SSHing on the instances
C. The AWS Support after receiving a request from the customer
D. The AWS customer by using AWS Systems Manager

A

A. The AWS Product Team automatically

Explanation
Correct option:

The AWS Product Team automatically

Amazon Aurora is a MySQL and PostgreSQL-compatible relational database built for the cloud. Amazon Aurora is fully managed by Amazon Relational Database Service (RDS), which automates time-consuming administration tasks like hardware provisioning, database setup, patching, and backups. The AWS Product team is responsible for applying patches to the underlying OS for Amazon Aurora.

Incorrect options:

The AWS customer by using AWS Systems Manager - AWS Systems Manager gives you visibility and control of your infrastructure on AWS. Systems Manager provides a unified user interface so you can view operational data from multiple AWS services and allows you to automate operational tasks such as running commands, managing patches and configuring servers across AWS Cloud as well as on-premises infrastructure. You can only use AWS Systems Manager to apply patches to your EC2 instances or on-premises instances. You cannot use Systems Manager to apply patches to the underlying OS for Amazon Aurora.

The AWS Support after receiving a request from the customer - AWS Support handles support tickets regarding AWS services. AWS Support is not responsible for applying patches to the underlying OS for Amazon Aurora.

The AWS customer by SSHing on the instances - AWS customers are only responsible for patching their own EC2 instances.

Reference:

https://aws.amazon.com/rds/aurora/

10
Q

A company wants to improve the resiliency of its flagship application so it wants to move from its traditional database system to a managed AWS database service to support active-active configuration in both the East and West US AWS regions. The active-active configuration with cross-region support is the prime criteria for any database solution that the company considers.

Which AWS database service is the right fit for this requirement?

A. Amazon Aurora with multi-master clusters
B. Amazon DynamoDB with DynamoDB Accelerator
C. Amazon Relational Database Service (Amazon RDS) for MySQL
D. Amazon DynamoDB with global tables

A

D. Amazon DynamoDB with global tables

Explanation
Correct option: Amazon DynamoDB with global tables

Amazon DynamoDB is a fully managed, serverless, key-value NoSQL database designed to run high-performance applications at any scale. DynamoDB offers built-in security, continuous backups, automated multi-region replication, in-memory caching, and data export tools.

DynamoDB global tables replicate data automatically across your choice of AWS Regions and automatically scale capacity to accommodate your workloads. With global tables, your globally distributed applications can access data locally in the selected regions to get single-digit millisecond read and write performance. DynamoDB offers active-active cross-region support that is needed for the company.
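Under global tables version 2019.11.21, adding a replica Region to an existing table is a single UpdateTable call. A sketch (table name is a placeholder, and the Regions match this card's East/West scenario; the live call is commented out because it needs AWS credentials):

```python
# Sketch: turn an existing us-east-1 table into a global table by
# creating a replica in us-west-2. The table name is illustrative.
def build_replica_update(table_name: str, new_region: str) -> dict:
    """Keyword arguments for dynamodb.update_table()."""
    return {
        "TableName": table_name,
        "ReplicaUpdates": [{"Create": {"RegionName": new_region}}],
    }

# Usage against AWS (requires credentials):
#   import boto3
#   east = boto3.client("dynamodb", region_name="us-east-1")
#   east.update_table(**build_replica_update("flagship-app", "us-west-2"))
#   # Both replicas then accept reads AND writes (active-active).
```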

Incorrect options:

Amazon DynamoDB with DynamoDB Accelerator - DynamoDB Accelerator (DAX) is an in-memory cache that delivers fast read performance for your tables at scale by enabling you to use a fully managed in-memory cache. Using DAX, you can improve the read performance of your DynamoDB tables by up to 10 times—taking the time required for reads from milliseconds to microseconds, even at millions of requests per second. DAX does not offer active-active cross-Region configuration.

Amazon Aurora with multi-master cluster - Amazon Aurora (Aurora) is a fully managed relational database engine that’s compatible with MySQL and PostgreSQL. With some workloads, Aurora can deliver up to five times the throughput of MySQL and up to three times the throughput of PostgreSQL without requiring changes to most of your existing applications. In a multi-master cluster, all DB instances have read/write capability. Currently, all DB instances in a multi-master cluster must be in the same AWS Region. You can’t enable cross-Region replicas from multi-master clusters.

Amazon Relational Database Service (Amazon RDS) for MySQL - Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while automating time-consuming administration tasks such as hardware provisioning, database setup, patching and backups. It frees you to focus on your applications so you can give them the fast performance, high availability, security and compatibility they need. RDS does not support active-active configuration with cross-region support.

References:

https://aws.amazon.com/dynamodb/features/

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-multi-master.html

11
Q

A company is using a message broker service on its on-premises application and wants to move this messaging functionality to AWS Cloud. Which of the following AWS services is the right choice to move the existing functionality easily?

A. Amazon MQ
B. Amazon Simple Queue Service (Amazon SQS)
C. Amazon Kinesis Data Streams
D. Amazon Simple Notification Service

A

A. Amazon MQ

Explanation
Correct option:

Amazon MQ

Amazon MQ is a managed message broker service for Apache ActiveMQ and RabbitMQ that makes it easy to set up and operate message brokers on AWS. Amazon MQ reduces your operational responsibilities by managing the provisioning, setup, and maintenance of message brokers for you. Because Amazon MQ connects to your current applications with industry-standard APIs and protocols, you can easily migrate to AWS without having to rewrite code.

If you’re using messaging with existing applications, and want to move the messaging functionality to the cloud quickly and easily, AWS recommends you consider Amazon MQ. It supports industry-standard APIs and protocols so you can switch from any standards-based message broker to Amazon MQ without rewriting the messaging code in your applications. If you are building brand new applications in the cloud, AWS recommends you consider Amazon SQS and Amazon SNS.

How Amazon MQ works: via - https://aws.amazon.com/amazon-mq/

Incorrect options:

Amazon Simple Queue Service (Amazon SQS) - Amazon Simple Queue Service (Amazon SQS) offers a reliable, highly scalable hosted queue for storing messages as they travel between computers. Amazon SQS lets you easily move data between distributed application components and helps you build applications in which messages are processed independently (with message-level ack/fail semantics), such as automated workflows.

Amazon Simple Notification Service (Amazon SNS) - Amazon Simple Notification Service (Amazon SNS) is a fully managed messaging service for both application-to-application (A2A) and application-to-person (A2P) communication. The A2A pub/sub functionality provides topics for high-throughput, push-based, many-to-many messaging between distributed systems, microservices, and event-driven serverless applications. Amazon SNS allows applications to send time-critical messages to multiple subscribers through a “push” mechanism, which implies that the receiving applications have to be present and running to receive the messages.

Amazon Kinesis Data Streams - Amazon Kinesis Data Streams enables you to build custom applications that process or analyze streaming data for specialized needs. You can continuously add various types of data such as clickstreams, application logs, and social media to an Amazon Kinesis data stream from hundreds of thousands of sources. Within seconds, the data will be available for your Amazon Kinesis Applications to read and process from the stream.

Reference:

https://aws.amazon.com/amazon-mq/faqs/

12
Q

AWS Compute Optimizer delivers recommendations for which of the following AWS resources? (Select two)

A. Amazon Elastic File System (Amazon EFS), AWS Lambda functions
B. Amazon Elastic Compute Cloud (Amazon EC2) instances, Amazon EC2 Auto Scaling groups
C. Amazon Elastic Compute Cloud (Amazon EC2) instances, Amazon Elastic File System (Amazon EFS)
D. AWS Lambda functions, Amazon Simple Storage Service (Amazon S3)
E. Amazon Elastic Block Store (Amazon EBS), AWS Lambda functions

A

B. Amazon Elastic Compute Cloud (Amazon EC2) instances, Amazon EC2 Auto Scaling groups
E. Amazon Elastic Block Store (Amazon EBS), AWS Lambda functions

Explanation
Correct options:

Amazon Elastic Compute Cloud (Amazon EC2) instances, Amazon EC2 Auto Scaling groups

Amazon Elastic Block Store (Amazon EBS), AWS Lambda functions

AWS Compute Optimizer helps you identify the optimal AWS resource configurations, such as Amazon EC2 instance types, Amazon EBS volume configurations, and AWS Lambda function memory sizes, using machine learning to analyze historical utilization metrics. AWS Compute Optimizer delivers recommendations for selected types of EC2 instances, EC2 Auto Scaling groups, Amazon EBS volumes, and AWS Lambda functions.

AWS Compute Optimizer calculates an individual performance risk score for each resource dimension of the recommended instance, including CPU, memory, EBS throughput, EBS IOPS, disk throughput, disk IOPS, network throughput, and network packets per second (PPS).

AWS Compute Optimizer provides EC2 instance type and size recommendations for EC2 Auto Scaling groups with a fixed group size, meaning desired, minimum, and maximum are all set to the same value and have no scaling policy attached.

AWS Compute Optimizer supports IOPS and throughput recommendations for General Purpose (SSD) (gp3) volumes and IOPS recommendations for Provisioned IOPS (io1 and io2) volumes.

AWS Compute Optimizer helps you optimize two categories of Lambda functions. The first category includes Lambda functions that may be over-provisioned in memory sizes. The second category includes compute-intensive Lambda functions that may benefit from additional CPU power.
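As an illustration, a sketch of tallying the findings Compute Optimizer returns for EC2 instances. The `summarize` helper is a hypothetical name, and the response field names (`instanceRecommendations`, `finding` with values like `OVER_PROVISIONED`) follow the Compute Optimizer API as documented; verify against the current boto3 reference. The live call is commented out because it needs AWS credentials.

```python
# Sketch: count findings (e.g. OVER_PROVISIONED vs OPTIMIZED) in
# the list of instance recommendations returned by the
# compute-optimizer GetEC2InstanceRecommendations API.
def summarize(recommendations: list[dict]) -> dict:
    counts: dict = {}
    for rec in recommendations:
        finding = rec.get("finding", "UNKNOWN")
        counts[finding] = counts.get(finding, 0) + 1
    return counts

# Usage against AWS (requires credentials and opt-in):
#   import boto3
#   resp = boto3.client("compute-optimizer").get_ec2_instance_recommendations()
#   print(summarize(resp["instanceRecommendations"]))
```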

Incorrect options:

Amazon Elastic Compute Cloud (Amazon EC2) instances, Amazon Elastic File System (Amazon EFS)

Amazon Elastic File System (Amazon EFS), AWS Lambda functions

AWS Lambda functions, Amazon Simple Storage Service (Amazon S3)

AWS Compute Optimizer does not provide optimization recommendations for S3 and EFS, so these options are incorrect.

Reference:

https://aws.amazon.com/compute-optimizer/faqs/

13
Q

An IT company wants to run a log backup process every Monday at 2 AM. The usual runtime of the process is 5 minutes. As a Cloud Practitioner, which AWS services would you recommend to build a serverless solution for this use-case? (Select two)

A. Amazon Elastic Compute Cloud (EC2)
B. Amazon CloudWatch
C. AWS Lambda
D. AWS Step Functions
E. AWS Systems Manager

A

B. Amazon CloudWatch
C. AWS Lambda

Explanation
Correct option:

Amazon CloudWatch - Amazon CloudWatch is a monitoring and observability service built for DevOps engineers, developers, site reliability engineers (SREs), and IT managers. CloudWatch provides data and actionable insights to monitor applications, respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health.

AWS Lambda - AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume. Lambda functions have a maximum execution time of 15 minutes, which comfortably covers the 5-minute log backup process.

To build the solution for the given use-case, you can create a CloudWatch Events rule that triggers on a schedule via a cron expression. You can then set the Lambda as the target for this rule.
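The rule-plus-target wiring described above can be sketched with boto3. A hedged sketch under stated assumptions: boto3 is installed with credentials configured, the Lambda function already exists, and the rule name is a placeholder (Amazon EventBridge now fronts the same CloudWatch Events API).

```python
# Monday 02:00 UTC in AWS cron syntax: minutes hours day-of-month month day-of-week year
SCHEDULE = "cron(0 2 ? * MON *)"

def schedule_log_backup(lambda_arn, rule_name="weekly-log-backup"):
    """Sketch: trigger the backup Lambda every Monday at 2 AM UTC."""
    import boto3  # imported inside so the module loads without the AWS SDK

    events = boto3.client("events")
    events.put_rule(Name=rule_name, ScheduleExpression=SCHEDULE, State="ENABLED")
    events.put_targets(Rule=rule_name, Targets=[{"Id": "1", "Arn": lambda_arn}])
    # The function also needs a resource policy allowing events.amazonaws.com
    # to invoke it (lambda add-permission), omitted here for brevity.
```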

Incorrect options:

AWS Systems Manager - AWS Systems Manager gives you visibility and control of your infrastructure on AWS. Systems Manager provides a unified user interface so you can view operational data from multiple AWS services and allows you to automate operational tasks across your AWS resources. With Systems Manager, you can group resources, like Amazon EC2 instances, Amazon S3 buckets, or Amazon RDS instances, by application, view operational data for monitoring and troubleshooting, and take action on your groups of resources. Systems Manager cannot be used to run a process on a schedule.

Amazon Elastic Compute Cloud (Amazon EC2) - Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud with support for per-second billing. It is the easiest way to provision servers on AWS Cloud and access the underlying OS. As the company wants a serverless solution, this option is ruled out.

AWS Step Functions - AWS Step Functions lets you coordinate multiple AWS services into serverless workflows. You can design and run workflows that stitch together services such as AWS Lambda, AWS Glue and Amazon SageMaker. Step Functions on its own cannot be used to run a process on a schedule.

Reference:

https://wa.aws.amazon.com/wat.concepts.wa-concepts.en.html

14
Q

A data analytics company stores its data on Amazon Simple Storage Service (Amazon S3) and wants to do SQL based analysis on this data with minimum effort. As a Cloud Practitioner, which of the following AWS services will you suggest for this use case?

A. Amazon Aurora
B. Amazon Athena
C. Amazon DynamoDB
D. Amazon Redshift

A

B. Amazon Athena

Explanation
Correct option:

Amazon Athena

Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Amazon Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run.

Key features of Amazon Athena: via - https://aws.amazon.com/athena/

To use Amazon Athena, simply point to your data in Amazon Simple Storage Service (Amazon S3), define the schema, and start querying using standard SQL. Most results are delivered within seconds. With Amazon Athena, there’s no need for complex ETL jobs to prepare your data for analysis. This makes it easy for anyone with SQL skills to quickly analyze large-scale datasets.
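The point-define-query flow above can be sketched with boto3. A hedged sketch: the database, bucket, and result prefix are placeholders for your own Glue database and S3 result location, and it assumes boto3 plus configured credentials.

```python
def results_location(bucket, prefix="athena-results"):
    """S3 location where Athena writes query results (pure helper)."""
    return f"s3://{bucket}/{prefix}/"

def run_athena_query(sql, database, bucket):
    """Sketch: submit a query to Athena and poll until it finishes."""
    import time
    import boto3  # imported inside so the module loads without the AWS SDK

    athena = boto3.client("athena")
    qid = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": results_location(bucket)},
    )["QueryExecutionId"]
    while True:
        status = athena.get_query_execution(QueryExecutionId=qid)
        state = status["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            return state, qid
        time.sleep(1)  # the query runs serverlessly; you pay per data scanned
```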

Incorrect options:

Amazon Aurora - Amazon Aurora is a MySQL and PostgreSQL-compatible relational database built for the cloud, that combines the performance and availability of traditional enterprise databases with the simplicity and cost-effectiveness of open source databases. You cannot use Amazon Aurora for SQL analysis on S3 based data.

Amazon Redshift - Amazon Redshift is the most popular and fastest cloud data warehouse. Though analytics can be run on Redshift, in the current use case the data resides on S3, and Athena is the right choice since it runs SQL queries directly against data sitting on S3. Using Redshift would require loading the data into the warehouse first, which is far from the minimum-effort solution asked for.

Amazon DynamoDB - Amazon DynamoDB is a key-value and document database that delivers single-digit millisecond performance at any scale. It’s a fully managed, multi-Region, multi-master, durable database with built-in security, backup and restore, and in-memory caching for internet-scale applications. You cannot use Amazon DynamoDB for SQL analysis on S3 based data.

Reference:

https://aws.amazon.com/athena/

15
Q

What is the primary benefit of deploying an Amazon RDS Multi-AZ database with one standby?

A. Amazon RDS Multi-AZ improves database performance for read-heavy workloads
B. Amazon RDS Multi-AZ reduces database usage costs
C. Amazon RDS Multi-AZ protects the database from regional failure
D. Amazon RDS Multi-AZ enhances database availability

A

D. Amazon RDS Multi-AZ enhances database availability

Explanation
Correct option:

Amazon RDS Multi-AZ enhances database availability

Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Amazon Relational Database Service (Amazon RDS) instances, making them a natural fit for production database workloads. When you provision an Amazon RDS Multi-AZ Instance with one standby, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ).

In case of an infrastructure failure, Amazon RDS performs an automatic failover to the standby so that you can resume database operations as soon as the failover is complete.

How Amazon RDS Multi-AZ Works: via - https://aws.amazon.com/rds/features/multi-az/

Exam Alert:

Please review the differences between Multi-AZ, Multi-Region and Read Replica deployments for RDS: via - https://aws.amazon.com/rds/features/multi-az/

Incorrect options:

Amazon RDS Multi-AZ improves database performance for read-heavy workloads - Amazon RDS Multi-AZ with one standby does not allow read operations from the standby. Read Replicas allow you to create read-only copies that are synchronized with your master database. Read Replicas are used for improved read performance. Therefore, this option is incorrect.

Amazon RDS Multi-AZ protects the database from a regional failure - You need to use RDS in Multi-Region deployment configuration to protect from a regional failure. Amazon RDS Multi-AZ cannot protect from a regional failure.

Amazon RDS Multi-AZ reduces database usage costs - Amazon RDS Multi-AZ increases the database costs compared to the standard deployment. So this option is incorrect.

Reference:

https://aws.amazon.com/rds/features/multi-az/

16
Q

What are the fundamental drivers of cost with AWS Cloud?

A. Compute, Storage and Inbound Data Transfer
B. Compute, Databases and Inbound Data Transfer
C. Compute, Storage and Outbound Data Transfer
D. Compute, Databases and Outbound Data Transfer

A

C. Compute, Storage and Outbound Data Transfer

Explanation
Correct options:

Compute, Storage and Outbound Data Transfer

There are three fundamental drivers of cost with AWS: compute, storage, and outbound data transfer. In most cases, there is no charge for inbound data transfer or data transfer between other AWS services within the same region. Outbound data transfer is aggregated across services and then charged at the outbound data transfer rate.

AWS Cloud Pricing Fundamentals: via - https://d0.awsstatic.com/whitepapers/aws_pricing_overview.pdf

Incorrect options:

Compute, Storage and Inbound Data Transfer

Compute, Databases and Outbound Data Transfer

Compute, Databases and Inbound Data Transfer

These three options contradict the details provided earlier in the explanation, so these options are incorrect.

Reference:

https://d0.awsstatic.com/whitepapers/aws_pricing_overview.pdf

17
Q

Which of the following statements is INCORRECT about AWS Auto Scaling?

A. You can automatically register new instances to a load balancer
B. You can automatically remove unhealthy instances
C. You can automatically deploy AWS Shield when a DDoS attack is detected
D. You can scale out and add more Amazon EC2 instances to match an increase in demand as well as scale in and remove Amazon Elastic Compute Cloud (EC2) instances to match a reduced demand

A

C. You can automatically deploy AWS Shield when a DDoS attack is detected

Explanation
Correct option:

You can automatically deploy AWS Shield when a DDoS attack is detected

AWS Auto Scaling is helpful during a DDoS attack, as it can scale out resources fast. But it cannot automatically deploy the AWS Shield service onto its group of resources.

Incorrect options:

AWS Auto Scaling monitors your applications and automatically adjusts the capacity to maintain steady, predictable performance at the lowest possible cost. Using AWS Auto Scaling, it’s easy to set up application scaling for multiple resources across multiple services in minutes. The service provides a simple, powerful user interface that lets you build scaling plans for resources including Amazon EC2 instances and Spot Fleets, Amazon ECS tasks, Amazon DynamoDB tables and indexes, and Amazon Aurora Replicas.

You can scale out and add more Amazon Elastic Compute Cloud (Amazon EC2) instances to match an increase in demand as well as scale in and remove Amazon Elastic Compute Cloud (Amazon EC2) instances to match a reduced demand - As explained above, it can scale out resources on-demand as well as scale in resources to match reduced demand.

You can automatically remove unhealthy instances - Based on health checks, Auto Scaling can remove unhealthy instances.

You can automatically register new instances to a load balancer - During a scale-out process, Auto Scaling can spin up new instances and register them with the load balancer that is attached to the Auto Scaling group.

Reference:

https://aws.amazon.com/autoscaling/

18
Q

Which of the following AWS services allows a database to have flexible schema and supports document data models?

A. Amazon Redshift
B. Amazon Aurora
C. Amazon Relational Database Service (Amazon RDS)
D. Amazon DynamoDB

A

D. Amazon DynamoDB

Explanation
Correct option:

Amazon DynamoDB

Amazon DynamoDB is a NoSQL database that supports key-value and document data models and enables developers to build modern, serverless applications that can start small and scale globally to support petabytes of data and tens of millions of read and write requests per second.

Amazon DynamoDB supports both key-value and document data models. This enables Amazon DynamoDB to have a flexible schema, so each row can have any number of columns at any point in time. This allows you to easily adapt the tables as your business requirements change, without having to redefine the table schema as you would in relational databases.
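The flexible-schema property above is easy to see in code: two items in the same table can carry entirely different attribute sets. A minimal sketch, assuming boto3, configured credentials, and a hypothetical table keyed on "pk"; the item contents are made up for illustration.

```python
# Items in the same table need only share the key attribute ("pk" here);
# all other attributes can vary from item to item.
SAMPLE_ITEMS = [
    {"pk": "user#1", "name": "Ana", "plan": "pro"},
    {"pk": "user#2", "name": "Ben", "tags": ["beta"], "age": 31},
]

def put_sample_items(table_name):
    """Sketch: write both items to a table keyed on "pk"."""
    import boto3  # imported inside so the module loads without the AWS SDK

    table = boto3.resource("dynamodb").Table(table_name)
    for item in SAMPLE_ITEMS:
        table.put_item(Item=item)  # no schema change needed for new attributes
```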

Incorrect options:

Amazon Relational Database Service (Amazon RDS) - Amazon Relational Database Service (Amazon RDS) is an AWS service for relational databases. Schema change on a relational database is not as easy and straightforward as it is on a NoSQL database. Amazon Relational Database Service (Amazon RDS) does not support flexible schema.

Amazon Redshift - Amazon Redshift is a fully-managed petabyte-scale cloud-based data warehouse product designed for large scale data set storage and analysis. Amazon Redshift does not support flexible schema.

Amazon Aurora - Amazon Aurora is an AWS service for relational databases. Schema change on a relational database is not as easy and straightforward as it is on a NoSQL database. Amazon Aurora does not support flexible schema.

Reference:

https://aws.amazon.com/dynamodb/features/

19
Q

Which AWS service enables users to find, buy, and immediately start using software solutions in their AWS environment?

A. AWS Marketplace
B. AWS Config
C. AWS System Manager
D. AWS OpsWorks

A

A. AWS Marketplace

Explanation
Correct option:

AWS Marketplace

AWS Marketplace is a digital catalog with thousands of software listings from independent software vendors that make it easy to find, test, buy, and deploy software that runs on AWS. AWS Marketplace includes thousands of software listings from popular categories such as security, networking, storage, machine learning, IoT, business intelligence, database, and DevOps. You can use AWS Marketplace as a buyer (subscriber) or as a seller (provider), or both. Anyone with an AWS account can use AWS Marketplace as a consumer and can register to become a seller.

Incorrect options:

AWS Config - AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. Config continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations. Think resource-specific history, audit, and compliance; think Config.

AWS OpsWorks - AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed and managed across your Amazon EC2 instances or on-premises compute environments.

AWS Systems Manager - AWS Systems Manager gives you visibility and control of your infrastructure on AWS. Systems Manager provides a unified user interface so you can view operational data from multiple AWS services and allows you to automate operational tasks across your AWS resources. With Systems Manager, you can group resources, like Amazon EC2 instances, Amazon S3 buckets, or Amazon RDS instances, by application, view operational data for monitoring and troubleshooting, and take action on your groups of resources.

Reference:

https://docs.aws.amazon.com/marketplace/latest/buyerguide/what-is-marketplace.html

20
Q

A gaming company is looking at a technology/service that can deliver a consistent low-latency gameplay to ensure a great user experience for end-users in various locations.

Which AWS technology/service will provide the necessary low-latency access to the end-users?

A. AWS Edge Locations
B. AWS Direct Connect
C. AWS Local Zones
D. AWS Wavelength

A

C. AWS Local Zones

Explanation
Correct option:

AWS Local Zones

AWS Local Zones allow you to use select AWS services, like compute and storage services, closer to more end-users, providing them very low latency access to the applications running locally. AWS Local Zones are also connected to the parent region via Amazon’s redundant and very high bandwidth private network, giving applications running in AWS Local Zones fast, secure, and seamless access to the rest of AWS services.

You should use AWS Local Zones to deploy workloads closer to your end-users for low-latency requirements. AWS Local Zones have their own connection to the internet and support AWS Direct Connect, so resources created in the Local Zone can serve local end-users with very low-latency communications.

Various AWS services such as Amazon Elastic Compute Cloud (EC2), Amazon Virtual Private Cloud (VPC), Amazon Elastic Block Store (EBS), Amazon FSx, Amazon Elastic Load Balancing, Amazon EMR, Amazon ElastiCache, and Amazon Relational Database Service (RDS) are available locally in the AWS Local Zones. You can also use services that orchestrate or work with local services such as Amazon EC2 Auto Scaling, Amazon EKS clusters, Amazon ECS clusters, Amazon EC2 Systems Manager, Amazon CloudWatch, AWS CloudTrail, and AWS CloudFormation. AWS Local Zones also provide a high-bandwidth, secure connection to the AWS Region, allowing you to seamlessly connect to the full range of services in the AWS Region through the same APIs and toolsets.

Incorrect options:

AWS Edge Locations - An AWS Edge location is a site that CloudFront uses to cache copies of the content for faster delivery to users at any location.

AWS Wavelength - AWS Wavelength extends the AWS cloud to a global network of 5G edge locations to enable developers to innovate and build a whole new class of applications that require ultra-low latency. Wavelength Zones provide a high-bandwidth, secure connection to the parent AWS Region, allowing developers to seamlessly connect to the full range of services in the AWS Region through the same APIs and toolsets.

AWS Direct Connect - AWS Direct Connect is a cloud service that links your network directly to AWS, bypassing the internet to deliver more consistent, lower-latency performance. When creating a new connection, you can choose a hosted connection provided by an AWS Direct Connect Delivery Partner, or choose a dedicated connection from AWS—and deploy at over 100 AWS Direct Connect locations around the world. AWS Direct Connect provides consistently high bandwidth, low-latency access and it is generally used between on-premises data centers and AWS network. Direct Connect is overkill for the given requirement.

Reference:

https://aws.amazon.com/about-aws/global-infrastructure/localzones/

21
Q

Which of the following options can be used to access and manage all AWS services (Select three)?

A. AWS Software Development Kit (SDK)
B. AWS Command Line Interface (AWS CLI)
C. AWS Management Console
D. Amazon API Gateway
E. AWS Systems Manager
F. AWS Secrets Manager

A

A. AWS Software Development Kit (SDK)
B. AWS Command Line Interface (AWS CLI)
C. AWS Management Console

Explanation
Correct options:

AWS services can be accessed in three different ways:

AWS Management Console - This is a simple web interface for accessing AWS services.

AWS Command Line Interface (AWS CLI) - You can access AWS services from the command line and automate service management with scripts.

AWS Software Development Kit (SDK) - You can also access via AWS SDK that provides language-specific abstracted APIs for AWS services.
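The same operation is reachable through all three paths. As a small illustration, the ListBuckets call below goes through the Python SDK (boto3); the console exposes it as the S3 bucket list and the CLI as `aws s3api list-buckets`. Assumes boto3 is installed and credentials are configured; the helper names are illustrative.

```python
def bucket_names(response):
    """Pull bucket names out of a ListBuckets response (pure helper)."""
    return [bucket["Name"] for bucket in response.get("Buckets", [])]

def list_bucket_names():
    """Sketch: call the S3 ListBuckets API through the SDK.

    CLI equivalent: aws s3api list-buckets
    """
    import boto3  # imported inside so the module loads without the AWS SDK

    return bucket_names(boto3.client("s3").list_buckets())
```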

Incorrect options:

AWS Systems Manager - AWS Systems Manager gives you visibility and control of your infrastructure on AWS. AWS Systems Manager provides a unified user interface so you can view operational data from multiple AWS services and allows you to automate operational tasks across your AWS resources. With AWS Systems Manager, you can group resources, like Amazon EC2 instances, Amazon S3 buckets, or Amazon RDS instances, by application, view operational data for monitoring and troubleshooting, and take action on your groups of resources.

AWS Secrets Manager - AWS Secrets Manager helps you protect secrets needed to access your applications, services, and IT resources. The service enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. Users and applications retrieve secrets with a call to AWS Secrets Manager APIs, eliminating the need to hardcode sensitive information in plain text.

Amazon API Gateway - Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale.

22
Q

The engineering team at an IT company wants to monitor the CPU utilization for its fleet of Amazon Elastic Compute Cloud (Amazon EC2) instances and send an email to the administrator if the utilization exceeds 80%. As a Cloud Practitioner, which AWS services would you recommend to build this solution? (Select two)

A. Amazon Simple Notification Service (SNS)
B. Amazon CloudWatch
C. Amazon Simple Queue Service (SQS)
D. AWS Lambda
E. AWS CloudTrail

A

A. Amazon Simple Notification Service (SNS)
B. Amazon CloudWatch

Explanation
Correct options:

Amazon CloudWatch - Amazon CloudWatch is a monitoring and observability service built for DevOps engineers, developers, site reliability engineers (SREs), and IT managers. CloudWatch provides data and actionable insights to monitor applications, respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health. You can create a CloudWatch alarm that sends an email message using Amazon SNS when the alarm changes state from OK to ALARM. The alarm changes to the ALARM state when the average CPU usage of an EC2 instance exceeds a specified threshold for consecutive specified periods.

Amazon Simple Notification Service (SNS) - Amazon Simple Notification Service (SNS) is a highly available, durable, secure, fully managed pub/sub messaging service that enables you to decouple microservices, distributed systems, and serverless applications.

How SNS Works: via - https://aws.amazon.com/sns/
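The alarm-plus-SNS wiring described above can be sketched with boto3. A hedged sketch: the topic and alarm names are placeholders, the period and evaluation counts are illustrative choices, and it assumes boto3 plus configured credentials (the email subscription must also be confirmed by the recipient).

```python
CPU_THRESHOLD = 80.0  # percent, per the monitoring requirement

def alert_on_high_cpu(instance_id, admin_email):
    """Sketch: email the administrator when average CPU exceeds 80%."""
    import boto3  # imported inside so the module loads without the AWS SDK

    sns = boto3.client("sns")
    topic_arn = sns.create_topic(Name="cpu-alerts")["TopicArn"]
    sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint=admin_email)

    boto3.client("cloudwatch").put_metric_alarm(
        AlarmName=f"high-cpu-{instance_id}",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        Statistic="Average",
        Period=300,                # evaluate 5-minute averages...
        EvaluationPeriods=2,       # ...over two consecutive periods
        Threshold=CPU_THRESHOLD,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[topic_arn],  # publish to SNS on entering ALARM state
    )
```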

Incorrect options:

AWS CloudTrail - AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. Think account-specific activity and audit; think CloudTrail. CloudTrail cannot be used to monitor CPU utilization for EC2 instances or send emails.

AWS Lambda - AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume. Lambda cannot be used to monitor CPU utilization for EC2 instances or send emails.

Amazon Simple Queue Service (SQS) - Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. SQS offers two types of message queues - Standard queues vs FIFO queues. SQS cannot be used to monitor CPU utilization for EC2 instances or send emails.

References:

https://aws.amazon.com/cloudwatch/

https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/US_AlarmAtThresholdEC2.html

23
Q

Which AWS service helps with global application availability and performance using the AWS global network?

A. AWS Global Accelerator
B. AWS CloudFront
C. Amazon Route 53
D. Elastic Load Balancing (ELB)

A

A. AWS Global Accelerator

Explanation
Correct option:

AWS Global Accelerator

AWS Global Accelerator is a service that improves the availability and performance of your applications with local or global users. It provides static IP addresses that act as a fixed entry point to your application endpoints in a single or multiple AWS Regions, such as your Application Load Balancers, Network Load Balancers, or Amazon EC2 instances. AWS Global Accelerator uses the AWS global network to optimize the path from your users to your applications, improving the performance of your traffic by as much as 60%.

AWS Global Accelerator improves performance for a wide range of applications over TCP or UDP by proxying packets at the edge to applications running in one or more AWS Regions. AWS Global Accelerator is a good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT), or Voice over IP, as well as for HTTP use cases that specifically require static IP addresses or deterministic, fast regional failover.

How AWS Global Accelerator Works: via - https://aws.amazon.com/global-accelerator/

Exam Alert:

Please review the differences between Amazon CloudFront and AWS Global Accelerator: via - https://aws.amazon.com/global-accelerator/faqs/

Incorrect options:

Amazon CloudFront - Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency, high transfer speeds, all within a developer-friendly environment. It cannot be used to improve application availability and performance using the AWS global network.

Elastic Load Balancing (ELB) - Elastic Load Balancing (ELB) distributes incoming application or network traffic across multiple targets, such as Amazon EC2 instances, containers, and IP addresses, in multiple Availability Zones. Elastic Load Balancing (ELB) scales your load balancer as traffic to your application changes over time. It can automatically scale to the vast majority of workloads. Elastic Load Balancing (ELB) cannot be used to improve application availability and performance using the AWS global network.

Amazon Route 53 - Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service. It is designed to give developers and businesses an extremely reliable and cost-effective way to route end users to Internet applications by translating names like www.example.com into the numeric IP addresses like 192.0.2.1 that computers use to connect. It cannot be used to improve application availability and performance using the AWS global network.

Reference:

https://aws.amazon.com/global-accelerator/

24
Q

Which of the following AWS services are global in scope? (Select two)

A. Amazon Relational Database Service (Amazon RDS)
B. Amazon Simple Storage Service (Amazon S3)
C. Amazon CloudFront
D. Amazon EC2
E. AWS Identity and Access Management (AWS IAM)

A

C. Amazon CloudFront
E. AWS Identity and Access Management (AWS IAM)

Explanation
Correct options:

AWS Identity and Access Management (AWS IAM)

Amazon CloudFront

Most of the services that AWS offers are Region-specific. But a few services, by definition, need to be in a global scope because of the underlying service they offer. AWS Identity and Access Management (AWS IAM), Amazon CloudFront, Amazon Route 53 and AWS Web Application Firewall (AWS WAF) are some of the global services.

AWS Identity and Access Management (AWS IAM) enables you to manage access to AWS services and resources securely. Using AWS Identity and Access Management (AWS IAM), you can create and manage IAM users and IAM user-groups, and use permissions to allow and deny their access to AWS resources.

Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency, high transfer speeds, all within a developer-friendly environment.

Incorrect options:

Amazon Relational Database Service (Amazon RDS) - Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. This is a regional service.

Amazon Elastic Compute Cloud (Amazon EC2) - Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It comes under Infrastructure as a Service (IaaS) type of Cloud Computing. This is a regional service.

Exam Alert:

Amazon Simple Storage Service (Amazon S3) - Amazon Simple Storage Service (Amazon S3) is a unique service in the sense that it follows a global namespace but the buckets are regional. You specify an AWS Region when you create your Amazon S3 bucket. This is a regional service.

References:

https://aws.amazon.com/iam/faqs/

https://aws.amazon.com/cloudfront/faqs/

25
Q

Which AWS service can be used to store, manage, and deploy Docker container images?

A. AWS Lambda
B. Amazon ECS
C. Amazon Elastic Container Registry (Amazon ECR)
D. Amazon EC2

A

C. Amazon Elastic Container Registry (Amazon ECR)

Explanation
Correct option:

Amazon Elastic Container Registry (Amazon ECR) - Amazon Elastic Container Registry (Amazon ECR) can be used to store, manage, and deploy Docker container images. Amazon Elastic Container Registry (Amazon ECR) eliminates the need to operate your container repositories. You can then pull your Docker images from Amazon Elastic Container Registry (Amazon ECR) and run those on Amazon Elastic Container Service (Amazon ECS).

Please see this schematic diagram to understand how Amazon Elastic Container Registry (Amazon ECR) works: via - https://aws.amazon.com/ecr/
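The create-then-push flow can be sketched with boto3. A hedged sketch: the repository name is a placeholder, the actual push happens through the Docker CLI after authenticating with `aws ecr get-login-password`, and it assumes boto3 plus configured credentials.

```python
def image_uri(repository_uri, tag):
    """Full reference used with `docker tag` / `docker push` (pure helper)."""
    return f"{repository_uri}:{tag}"

def create_repository(name):
    """Sketch: create an ECR repository and return the URI to push images to."""
    import boto3  # imported inside so the module loads without the AWS SDK

    response = boto3.client("ecr").create_repository(repositoryName=name)
    # Authenticate Docker with `aws ecr get-login-password`, then push to this URI.
    return response["repository"]["repositoryUri"]
```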

Incorrect options:

Amazon Elastic Container Service (Amazon ECS) - Amazon Elastic Container Service (Amazon ECS) is a highly scalable, fast, container management service that makes it easy to run, stop, and manage Docker containers on a cluster. You cannot use Amazon Elastic Container Service (Amazon ECS) to store and deploy Docker container images.

Please see this schematic diagram to understand how Amazon Elastic Container Service (Amazon ECS) works: via - https://aws.amazon.com/ecs/

Amazon Elastic Compute Cloud (Amazon EC2) - Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the AWS Cloud. You can use EC2 to provision virtual servers on AWS Cloud. You cannot use EC2 to store and deploy Docker container images.

AWS Lambda - AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume. You cannot use AWS Lambda to store and deploy Docker container images.

References:

https://aws.amazon.com/ecr/

https://aws.amazon.com/ecs/

26
Q

Which AWS service should be used when you want to run container applications, but want to avoid the operational overhead of scaling, patching, securing, and managing servers?

A. Amazon ECS - EC2 Launch type
B. Amazon EC2
C. AWS Lambda
D. Amazon ECS - Fargate launch type

A

D. Amazon ECS - Fargate launch type

Explanation
Correct option:

Amazon Elastic Container Service (Amazon ECS) - Fargate launch type

AWS Fargate is a serverless compute engine for containers. It works with both Amazon Elastic Container Service (Amazon ECS) and Amazon Elastic Kubernetes Service (Amazon EKS). AWS Fargate makes it easy for you to focus on building your applications. AWS Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design. AWS Fargate allocates the right amount of compute, eliminating the need to choose instances and scale cluster capacity. You only pay for the resources required to run your containers, so there is no over-provisioning and paying for additional servers. AWS Fargate runs each task or pod in its own kernel, providing the tasks and pods their own isolated compute environment. This enables your application to have workload isolation and improved security by design.

How AWS Fargate Works: via - https://aws.amazon.com/fargate/
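Launching on Fargate is a one-parameter difference from the EC2 launch type. A hedged sketch under stated assumptions: boto3 with credentials, an existing cluster and task definition, and placeholder subnet IDs.

```python
def run_on_fargate(cluster, task_definition, subnet_ids):
    """Sketch: run one task on Fargate, with no EC2 instances to manage."""
    import boto3  # imported inside so the module loads without the AWS SDK

    return boto3.client("ecs").run_task(
        cluster=cluster,
        taskDefinition=task_definition,
        launchType="FARGATE",  # with "EC2" you would manage the instances yourself
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": subnet_ids,
                "assignPublicIp": "ENABLED",
            }
        },
    )
```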

Incorrect options:

Amazon Elastic Container Service (Amazon ECS) - EC2 launch type - Amazon Elastic Container Service (Amazon ECS) is a highly scalable, fast, container management service that makes it easy to run, stop, and manage Docker containers and allows you to easily run applications on a managed cluster of Amazon EC2 instances. Unlike AWS Fargate, this is not a fully managed service and you need to manage the underlying servers yourself.

AWS Lambda - AWS Lambda is a compute service that lets you run code without provisioning or managing servers. AWS Lambda executes your code only when needed and scales automatically, from a few requests per day to thousands per second. Lambda runs short-lived functions, not long-running container applications, so it does not fit this use case.

Amazon Elastic Compute Cloud (Amazon EC2) - Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud, per-second billing, and access to the underlying OS. It is designed to make web-scale cloud computing easier for developers. Maintenance of the server and its software has to be done by the customer, so this option is ruled out.

Reference:

https://aws.amazon.com/fargate/

27
Q

A fleet of Amazon EC2 instances spread across different Availability Zones (AZ) needs to access, edit and share file-based data stored centrally on a system. As a Cloud Practitioner, which AWS service would you recommend for this use-case?

A. Amazon EBS
B. Amazon S3
C. Amazon EFS
D. EC2 Instance Store

A

C. Amazon Elastic File System (Amazon EFS)

Explanation
Correct option:

Amazon Elastic File System (Amazon EFS)

Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed, elastic NFS file system. It is built to scale on-demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth. Amazon EFS is designed to provide massively parallel shared access to thousands of Amazon EC2 instances, enabling your applications to achieve high levels of aggregate throughput and IOPS with consistent low latencies.

How Amazon Elastic File System (Amazon EFS) Works: via - https://aws.amazon.com/efs/
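The multi-AZ sharing above hinges on creating one mount target per subnet/AZ. A hedged sketch: security groups and NFS mounting on the instances are omitted, the subnet IDs are placeholders, and it assumes boto3 plus configured credentials.

```python
def shared_file_system(subnet_ids):
    """Sketch: one EFS file system plus a mount target per subnet/AZ."""
    import boto3  # imported inside so the module loads without the AWS SDK

    efs = boto3.client("efs")
    fs_id = efs.create_file_system(PerformanceMode="generalPurpose")["FileSystemId"]
    for subnet_id in subnet_ids:
        # one mount target per AZ lets instances in that AZ NFS-mount the system
        efs.create_mount_target(FileSystemId=fs_id, SubnetId=subnet_id)
    return fs_id
```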

Incorrect options:

Amazon Elastic Block Store (Amazon EBS) - Amazon Elastic Block Store (EBS) is an easy-to-use, high-performance block storage service designed for use with Amazon Elastic Compute Cloud (EC2) for both throughput and transaction-intensive workloads at any scale. A broad range of workloads, such as relational and non-relational databases, enterprise applications, containerized applications, big data analytics engines, file systems, and media workflows are widely deployed on Amazon EBS. For the Cloud Practitioner exam, you should consider that an EBS volume can only be mounted to one EC2 instance at a time, so this option is not correct for the given use-case. As a special case, you should note that Amazon EBS Multi-Attach enables you to attach a single Provisioned IOPS SSD (io1 or io2) volume to multiple Nitro-based instances that are in the same Availability Zone (AZ).

EC2 Instance Store - An instance store provides temporary block-level storage for your EC2 instance. This storage is located on disks that are physically attached to the host computer. Instance store is ideal for the temporary storage of information that changes frequently, such as buffers, caches, scratch data, and other temporary content, or for data that is replicated across a fleet of instances, such as a load-balanced pool of web servers. Instance storage is temporary; data is lost if the instance fails or is terminated. EC2 instance store cannot be used for file sharing between instances.

Amazon Simple Storage Service (Amazon S3) - Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance. This means customers of all sizes and industries can use it to store and protect any amount of data for a range of use cases, such as websites, mobile applications, backup and restore, archive, enterprise applications, IoT devices, and big data analytics. Since S3 is object-based storage, it cannot be used for file sharing between instances.

Reference:

https://aws.amazon.com/efs/

28
Q

Which of the following statements is correct for a Security Group and a Network Access Control List (Network ACL)?

A. Security Groups act as a firewall at the VPC level whereas Network Access Control List acts as a firewall at the AZ level
B. Security Groups act as a firewall at the instance level whereas Network Access Control List (network ACL) acts as a firewall at the subnet level
C. Security Groups act as a firewall at the subnet level whereas Network Access Control List (network ACL) acts as a firewall at the instance level
D. Security Groups act as a firewall at the AZ level whereas Network Access Control List acts as a firewall at the VPC level

A

B. Security Groups act as a firewall at the instance level whereas Network Access Control List (network ACL) acts as a firewall at the subnet level

Explanation
Correct option:

Security Group acts as a firewall at the instance level whereas Network Access Control List (Network ACL) acts as a firewall at the subnet level

A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. When you launch an instance in a VPC, you can assign up to five security groups to the instance. Security groups act at the instance level, not the subnet level. A network access control list (network ACL) is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets (i.e. it works at subnet level).

Security Group Overview: via - https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html

Network Access Control List (network ACL) Overview: via - https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html

Incorrect options:

Security Group acts as a firewall at the subnet level whereas Network Access Control List (Network ACL) acts as a firewall at the instance level - As explained above, the security group acts at the instance level and network access control list (network ACL) is at the subnet level.

Security Group acts as a firewall at the VPC level whereas Network Access Control List (Network ACL) acts as a firewall at the Availability Zone (AZ) level - As explained above, the security group acts at the instance level and network access control list (network ACL) is at the subnet level.

Security Group acts as a firewall at the Availability Zone (AZ) level whereas Network Access Control List (Network ACL) acts as a firewall at the VPC level - As explained above, the security group acts at the instance level and network access control list (network ACL) is at the subnet level.

References:

https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html

https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html

29
Q

Which of the following statements are correct about the AWS root user account? (Select two)

A. Root user account gets unrestricted permissions when the account is created, but these can be restricted using IAM policies
B. It is highly recommended to enable Multi-Factor Authentication (MFA) for the root user
C. Root user access credentials are the email address and password used to create the AWS account
D. Root user account password cannot be changed once it is set
E. Root user credentials should only be shared with managers requiring administrative responsibilities to complete their jobs

A

B. It is highly recommended to enable Multi-Factor Authentication (MFA) for the root user
C. Root user access credentials are the email address and password used to create the AWS account

Explanation
Correct options:

Root user access credentials are the email address and password used to create the AWS account

It is highly recommended to enable Multi Factor Authentication (MFA) for root user account

The email address and password used to sign up for AWS services are the AWS root user account credentials. The root user account therefore has full permissions on all AWS resources under that account, and restricting root user account access is not possible. As a best practice, Multi-Factor Authentication (MFA) should be enabled on the root user account. The root user account password can be changed after account creation. For employees performing administrative jobs, create individual users with AWS IAM and grant administrative permissions as needed.

AWS Root User Account Security Best Practices:

via - https://docs.aws.amazon.com/IAM/latest/UserGuide/id_root-user.html

via - https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#lock-away-credentials

Incorrect options:

Root user account password cannot be changed once it is set - This is incorrect. Like any other user credentials, the root user password can be changed after creation.

Root user credentials should only be shared with managers requiring administrative responsibilities to complete their jobs - This is a dangerous practice. Root user credentials should be used only for a few account-specific tasks, and they should never be shared with anyone.

Root user account gets unrestricted permissions when the account is created, but these can be restricted using IAM policies - Root user account permissions cannot be restricted; whoever has access to these credentials can perform any operation for that AWS account. The root user credentials should be kept secure.

References:

https://docs.aws.amazon.com/IAM/latest/UserGuide/id_root-user.html

https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html

30
Q

A company has a static website hosted on an Amazon Simple Storage Service (Amazon S3) bucket in an AWS Region in Asia. Although most of its users are in Asia, now it wants to drive growth globally. How can it improve the global performance of its static website?

A. Use Amazon CloudFront to improve the performance of your website
B. Use Amazon CloudFormation to improve the performance of your website
C. Use Amazon S3 Transfer Acceleration (Amazon S3TA) to improve the performance of your website
D. Use AWS Web Application Firewall (AWS WAF) to improve the performance of your website

A

A. Use Amazon CloudFront to improve the performance of your website

Explanation
Correct option:

Use Amazon CloudFront to improve the performance of your website

You can use Amazon CloudFront to improve the performance of your website. Amazon CloudFront makes your website files (such as HTML, images, and video) available from data centers around the world (called edge locations). When a visitor requests a file from your website, Amazon CloudFront automatically redirects the request to a copy of the file at the nearest edge location. This results in faster download times than if the visitor had requested the content from a data center that is located farther away.

Incorrect options:

Use Amazon CloudFormation to improve the performance of your website - AWS CloudFormation allows you to use programming languages or a simple text file to model and provision, in an automated and secure manner, all the resources needed for your applications across all regions and accounts. CloudFormation cannot be used to improve the performance of a static website.

Use AWS Web Application Firewall (AWS WAF) to improve the performance of your website - By using AWS Web Application Firewall (AWS WAF), you can configure web access control lists (Web ACLs) on your CloudFront distributions or Application Load Balancers to filter and block requests based on request signatures. In addition, by using AWS WAF's rate-based rules, you can automatically block the IP addresses of bad actors when requests matching a rule exceed a threshold that you define. WAF cannot be used to improve the performance of a static website.

Use Amazon S3 Transfer Acceleration (Amazon S3TA) to improve the performance of your website - Amazon S3 Transfer Acceleration (Amazon S3TA) enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket. Amazon S3 Transfer Acceleration (Amazon S3TA) takes advantage of Amazon CloudFront’s globally distributed edge locations. As the data arrives at an edge location, data is routed to Amazon S3 over an optimized network path. Amazon S3 Transfer Acceleration (Amazon S3TA) cannot be used to improve the performance of a static website.

References:

https://docs.aws.amazon.com/AmazonS3/latest/dev/website-hosting-cloudfront-walkthrough.html

https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration.html

31
Q

An AWS user is trying to launch an Amazon Elastic Compute Cloud (Amazon EC2) instance in a given region. What is the region-specific constraint that the Amazon Machine Image (AMI) must meet so that it can be used for this Amazon Elastic Compute Cloud (Amazon EC2) instance?

A. An Amazon Machine Image is a global entity, so the region is not applicable
B. You can use an Amazon Machine Image (AMI) from a different region, but it degrades the performance of the Amazon EC2 instance
C. You must use an Amazon Machine Image (AMI) from the same region as that of the Amazon EC2 instance. The region of the AMI has no bearing on the performance of the Amazon EC2 instance
D. You should use an AMI from the same region, as it improves the performance of the Amazon EC2 instance

A

C. You must use an Amazon Machine Image (AMI) from the same region as that of the Amazon EC2 instance. The region of the AMI has no bearing on the performance of the Amazon EC2 instance

Explanation
Correct option:

You must use an Amazon Machine Image (AMI) from the same region as that of the Amazon EC2 instance. The region of the Amazon Machine Image (AMI) has no bearing on the performance of the Amazon EC2 instance

An Amazon Machine Image (AMI) provides the information required to launch an instance. You must specify an Amazon Machine Image (AMI) when you launch an instance. You can launch multiple instances from a single AMI when you need multiple instances with the same configuration.

The Amazon Machine Image (AMI) must be in the same region as that of the Amazon EC2 instance to be launched. If the Amazon Machine Image (AMI) exists in a different region, you can copy that Amazon Machine Image (AMI) to the region where you want to launch the EC2 instance. The region of Amazon Machine Image (AMI) has no bearing on the performance of the Amazon EC2 instance.

Amazon Machine Images (AMI) Overview: via - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html
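As a rough sketch, the region constraint can be modeled as a simple pre-launch check (this is a conceptual illustration, not an AWS API; the function name and AMI/region IDs are made up):

```python
def select_ami(ami_id: str, ami_region: str, target_region: str) -> str:
    """Return an AMI usable in target_region, flagging when a copy is needed.

    An AMI can only launch instances in its own region; if it lives
    elsewhere, it must first be copied into the target region.
    """
    if ami_region == target_region:
        return ami_id  # usable as-is
    # In practice you would copy the AMI (the EC2 CopyImage operation)
    # and use the new AMI ID it returns for the target region.
    return f"copy-of-{ami_id}-in-{target_region}"

print(select_ami("ami-123", "us-east-1", "us-east-1"))   # ami-123
print(select_ami("ami-123", "us-east-1", "ap-south-1"))  # copy-of-ami-123-in-ap-south-1
```

In practice the copy step maps to the EC2 CopyImage operation, which returns a new AMI ID that is valid in the destination region.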

Incorrect options:

You can use an Amazon Machine Image (AMI) from a different region, but it degrades the performance of the Amazon EC2 instance

You should use an Amazon Machine Image (AMI) from the same region, as it improves the performance of the Amazon EC2 instance

An Amazon Machine Image (AMI) is a global entity, so the region is not applicable

These three options contradict the details provided earlier in the explanation, so these options are incorrect.

Reference:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html

32
Q

Which of the following statements are CORRECT regarding security groups and network access control lists (network ACL)? (Select two)

A. A security group is stateless, that is, the return traffic must be explicitly allowed
B. A security group contains a numbered list of rules and evaluates these rules in the increasing order while deciding whether to allow traffic
C. A network access control list (network ACL) contains a numbered list of rules and evaluates the rules in the increasing order while deciding whether to allow the traffic
D. A security group is stateful, that is, it automatically allows the return traffic
E. A network access control list (network ACL) is stateful, that is, it automatically allows the return traffic

A

C. A network access control list (network ACL) contains a numbered list of rules and evaluates the rules in the increasing order while deciding whether to allow the traffic
D. A security group is stateful, that is, it automatically allows the return traffic

Explanation
Correct options:

A security group is stateful, that is, it automatically allows the return traffic

A network access control list (network ACL) contains a numbered list of rules and evaluates these rules in the increasing order while deciding whether to allow the traffic

A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. Security groups act at the instance level, not at the subnet level. Security groups are stateful — if you send a request from your instance, the response traffic for that request is allowed to flow in regardless of inbound security group rules. A security group evaluates all rules before deciding whether to allow traffic.

Security group overview: via - https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html

A network access control list (network ACL) is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets (i.e. it works at subnet level). A network access control list (network ACL) contains a numbered list of rules. A network access control list (network ACL) evaluates the rules in order, starting with the lowest numbered rule, to determine whether traffic is allowed in or out of any subnet associated with the network ACL. The highest number that you can use for a rule is 32766. AWS recommends that you start by creating rules in increments (for example, increments of 10 or 100) so that you can insert new rules where you need to later on.

Network access control list (network ACL) overview: via - https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html
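The first-match, ascending-rule-number evaluation described above can be sketched in a few lines of Python (a conceptual model using port-based rules only; real network ACL rules also match on protocol and source/destination CIDR):

```python
def evaluate_nacl(rules, packet_port):
    """Evaluate network-ACL-style rules in ascending rule-number order.

    The first rule whose port range matches the packet decides the
    action; if no numbered rule matches, the implicit final rule (*)
    denies the traffic.
    """
    for number, (lo, hi), action in sorted(rules):
        if lo <= packet_port <= hi:
            return action
    return "DENY"  # implicit catch-all deny

rules = [
    (100, (443, 443), "ALLOW"),   # rule 100: allow HTTPS
    (200, (0, 65535), "DENY"),    # rule 200: deny everything else
]
print(evaluate_nacl(rules, 443))  # ALLOW
print(evaluate_nacl(rules, 22))   # DENY
```

A security group, by contrast, has no rule numbers: all of its rules are evaluated before the allow/deny decision, and return traffic is allowed automatically because security groups are stateful.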

Incorrect options:

A security group contains a numbered list of rules and evaluates these rules in the increasing order while deciding whether to allow the traffic

A network access control list (network ACL) is stateful, that is, it automatically allows the return traffic

A security group is stateless, that is, the return traffic must be explicitly allowed

These three options contradict the details provided earlier in the explanation, so these options are incorrect.

References:

https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html

https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html

33
Q

Which of the following statements are CORRECT about the AWS Auto Scaling group? (Select two)

A. Auto Scaling group scales in and reduces the number of Amazon EC2 instances to match a decrease in demand
B. Auto Scaling group scales up and upgrades to a more powerful Amazon EC2 instance to match an increase in demand
C. Auto Scaling group scales out and adds more Amazon EC2 instances to match an increase in demand
D. Auto Scaling group scales down and reduces the number of Amazon EC2 instances to match a decrease in demand
E. Auto Scaling group scales down and downgrades to a less powerful Amazon EC2 instance to match a decrease in demand

A

A. Auto Scaling group scales in and reduces the number of Amazon EC2 instances to match a decrease in demand
C. Auto Scaling group scales out and adds more Amazon EC2 instances to match an increase in demand

Explanation
Correct option:

Auto Scaling group scales out and adds more Amazon EC2 instances to match an increase in demand

Auto Scaling group scales in and reduces the number of Amazon EC2 instances to match a decrease in demand

AWS Auto Scaling monitors your applications and automatically adjusts the capacity to maintain steady, predictable performance at the lowest possible cost. Using AWS Auto Scaling, it is easy to set up application scaling for multiple resources across multiple services in minutes. The service provides a simple, powerful user interface that lets you build scaling plans for resources including Amazon EC2 instances and Spot Fleets, Amazon ECS tasks, Amazon DynamoDB tables and indexes, and Amazon Aurora Replicas.

You can use scaling policies to increase or decrease the number of instances in your group dynamically to meet changing conditions. When the scaling policy is in effect, the Auto Scaling group adjusts the desired capacity of the group, between the minimum and maximum capacity values that you specify, and launches or terminates the instances as needed. You can also scale on a schedule.
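Scale-out and scale-in amount to adjusting the group's desired capacity while keeping it within the configured minimum and maximum. A minimal sketch of that clamping behavior (illustrative, not an AWS API):

```python
def adjust_capacity(desired: int, adjustment: int, min_size: int, max_size: int) -> int:
    """Clamp the new desired capacity to the group's min/max bounds.

    A positive adjustment models a scale-out (add instances);
    a negative one models a scale-in (remove instances).
    """
    return max(min_size, min(max_size, desired + adjustment))

# Scaling within bounds 1..10:
print(adjust_capacity(4, +3, 1, 10))  # 7  (scale out)
print(adjust_capacity(4, -2, 1, 10))  # 2  (scale in)
print(adjust_capacity(9, +5, 1, 10))  # 10 (capped at the maximum)
```

Note that in both directions the instance *count* changes; the instance *type* stays the same, which is why "scale up"/"scale down" (vertical scaling) do not describe an Auto Scaling group.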

Incorrect options:

Auto Scaling group scales down and reduces the number of Amazon EC2 instances to match a decrease in demand - A scale-down refers to downgrading to a less powerful Amazon EC2 instance, not reducing the instance count, so this option is incorrect.

Auto Scaling group scales down and downgrades to a less powerful Amazon EC2 instance to match a decrease in demand

Auto Scaling group scales up and upgrades to a more powerful Amazon EC2 instance to match an increase in demand

An Auto Scaling group does not scale up or scale down, so these two options are incorrect.

Reference:

https://docs.aws.amazon.com/autoscaling/ec2/userguide/AutoScalingGroup.html

34
Q

Which of the following is the best way to protect your data from accidental deletion on Amazon S3?

A. Amazon S3 lifecycle configuration
B. Amazon S3 Transfer Acceleration (Amazon S3TA)
C. Amazon S3 Versioning
D. Amazon S3 storage classes

A

C. Amazon S3 Versioning

Explanation
Correct option:

Amazon S3 Versioning

Versioning is a means of keeping multiple variants of an object in the same bucket. You can use versioning to preserve, retrieve, and restore every version of every object stored in your Amazon S3 bucket. With versioning, you can easily recover from both unintended user actions and application failures.

Versioning-enabled buckets enable you to recover objects from accidental deletion or overwrite. For example, if you delete an object, instead of removing it permanently, Amazon S3 inserts a delete marker, which becomes the current object version.

Amazon S3 Versioning Overview: via - https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectVersioning.html
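The delete-marker behavior can be illustrated with a toy model (purely conceptual; real S3 versioning also assigns a version ID to every version and marker):

```python
class VersionedBucket:
    """Toy model of S3 versioning: a DELETE inserts a marker instead
    of removing data, so earlier versions remain recoverable."""

    def __init__(self):
        self.versions = {}  # key -> list of versions, newest last

    def put(self, key, body):
        self.versions.setdefault(key, []).append(body)

    def delete(self, key):
        # A simple DELETE adds a delete marker as the current version.
        self.versions.setdefault(key, []).append(None)

    def get(self, key):
        stack = self.versions.get(key, [])
        if not stack or stack[-1] is None:
            raise KeyError(key)  # current version is a delete marker
        return stack[-1]

    def undelete(self, key):
        # Removing the delete marker restores the previous version.
        if self.versions.get(key) and self.versions[key][-1] is None:
            self.versions[key].pop()

b = VersionedBucket()
b.put("report.csv", "v1 data")
b.delete("report.csv")       # object now hidden behind a delete marker
b.undelete("report.csv")     # remove the marker to recover it
print(b.get("report.csv"))   # v1 data
```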

Incorrect options:

Amazon S3 lifecycle configuration - To manage your Amazon S3 objects so that they are stored cost-effectively throughout their lifecycle, configure their Amazon S3 Lifecycle. With Amazon S3 Lifecycle configuration rules, you can tell Amazon S3 to transition objects to less expensive storage classes, or archive or delete them. Amazon S3 lifecycle configuration will do the hard lifting of moving your data into cost-effective storage classes without user intervention. Amazon S3 Lifecycle configuration is not meant to protect from accidental deletion of data.

Amazon S3 storage classes - Amazon S3 offers a range of storage classes designed for different use cases. These include S3 Standard for general-purpose storage of frequently accessed data; S3 Intelligent-Tiering for data with unknown or changing access patterns; S3 Standard-Infrequent Access (S3 Standard-IA) and S3 One Zone-Infrequent Access (S3 One Zone-IA) for long-lived, but less frequently accessed data; and Amazon S3 Glacier (S3 Glacier) and Amazon S3 Glacier Deep Archive (S3 Glacier Deep Archive) for long-term archive and digital preservation. Storage classes are for different storage pattern needs that customers have, and not a data protection mechanism for S3.

Amazon S3 Transfer Acceleration (Amazon S3TA) - Amazon S3 Transfer Acceleration (Amazon S3TA) enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket. Amazon S3 Transfer Acceleration (Amazon S3TA) takes advantage of Amazon CloudFront’s globally distributed edge locations. As the data arrives at an edge location, data is routed to Amazon S3 over an optimized network path. Amazon S3 Transfer Acceleration (Amazon S3TA) cannot be used to protect from accidental deletion of data.

Reference:

https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectVersioning.html

35
Q

Which AWS service can be used to execute code triggered by new files being uploaded to Amazon Simple Storage Service (Amazon S3)?

A. Amazon Elastic Container Service (Amazon ECS)
B. AWS Lambda
C. Amazon Simple Queue Service (Amazon SQS)
D. Amazon Elastic Compute Cloud (Amazon EC2)

A

B. AWS Lambda

Explanation
Correct option:

AWS Lambda

AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume. With AWS Lambda, you can run code for virtually any type of application or backend service - all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale your code with high availability.

You can use Amazon S3 to trigger AWS Lambda to process data immediately after an upload. For example, you can use AWS Lambda to thumbnail images, transcode videos, index files, process logs, validate content, and aggregate and filter data in real-time.

How AWS Lambda executes code in response to a trigger from S3: via - https://aws.amazon.com/lambda/
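A minimal Python handler for such an S3 trigger might look like the following; the sample event is trimmed down to just the fields the handler reads (a real S3 notification carries many more):

```python
def lambda_handler(event, context):
    """Minimal handler for an S3 ObjectCreated trigger: collect the
    bucket/key of each newly uploaded object in the event."""
    uploaded = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        uploaded.append(f"s3://{bucket}/{key}")
        # ...process the object here (thumbnail, transcode, index, ...)
    return {"processed": uploaded}

# A trimmed-down S3 event, shaped like what Lambda delivers:
event = {"Records": [{"s3": {"bucket": {"name": "my-bucket"},
                             "object": {"key": "uploads/photo.jpg"}}}]}
print(lambda_handler(event, None))
```

Lambda invokes this function once per notification; there is no server to provision, and the code runs only when an upload occurs.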

Incorrect options:

Amazon Elastic Compute Cloud (Amazon EC2) - Amazon EC2 is a web service that provides secure, resizable compute capacity in the AWS cloud. You can use Amazon EC2 to provision virtual servers on AWS Cloud. Amazon EC2 cannot execute code via a trigger from S3.

Amazon Elastic Container Service (Amazon ECS) - Amazon Elastic Container Service (Amazon ECS) is a highly scalable, fast, container management service that makes it easy to run, stop, and manage Docker containers on a cluster. Amazon ECS cannot execute code via a trigger from S3.

Amazon Simple Queue Service (Amazon SQS) - Amazon Simple Queue Service (Amazon SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. Using Amazon SQS, you can send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available.

Although Amazon SQS can be triggered by an Amazon S3 event, it cannot execute code, as it is a message queuing service.

Reference:

https://aws.amazon.com/lambda/

36
Q

Which of the following are components of an AWS Site-to-Site VPN? (Select two)

A. AWS Storage Gateway
B. Network Address Translation gateway (NAT gateway)
C. Customer gateway
D. Virtual Private gateway
E. Internet gateway

A

C. Customer gateway
D. Virtual Private gateway

Explanation
Correct option:

Virtual private gateway (VGW)

Customer gateway

AWS Site-to-Site VPN enables you to securely connect your on-premises network or branch office site to your Amazon Virtual Private Cloud (Amazon VPC). VPN connections are a good solution if you have an immediate need and have low to modest bandwidth requirements. This connection goes over the public internet. A virtual private gateway (VGW) / transit gateway and a customer gateway are the components of an AWS Site-to-Site VPN connection.

A virtual private gateway (VGW) is the VPN concentrator on the Amazon side of the AWS Site-to-Site VPN connection. A customer gateway is a resource in AWS that provides information to AWS about your Customer gateway device.

Components of an AWS Site-to-Site VPN: via - https://docs.aws.amazon.com/vpn/latest/s2svpn/how_it_works.html

Incorrect options:

AWS Storage Gateway - AWS Storage Gateway is a hybrid cloud storage service that connects your existing on-premises environments with the AWS Cloud. Customers use Storage Gateway to simplify storage management and reduce costs for key hybrid cloud storage use cases.

Network Address Translation gateway (NAT gateway) - A Network Address Translation gateway (NAT gateway) or a NAT Instance can be used in a public subnet in your VPC to enable instances in the private subnet to initiate outbound IPv4 traffic to the Internet. Network Address Translation gateway (NAT gateway) is managed by AWS but NAT Instance is managed by you.

Internet gateway - An internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between instances in your VPC and the internet. It, therefore, imposes no availability risks or bandwidth constraints on your network traffic.

Reference:

https://docs.aws.amazon.com/vpn/latest/s2svpn/how_it_works.html

37
Q

Which of the following capabilities does Amazon Rekognition provide as a ready-to-use feature?

A. Human pose detection
B. Convert images into greyscale
C. Identify objects in a photo
D. Resize images quickly

A

C. Identify objects in a photo

Explanation
Correct option:

Identify objects in a photo

With Amazon Rekognition, you can identify objects, people, text, scenes, and activities in images and videos, as well as detect any inappropriate content. Amazon Rekognition also provides highly accurate facial analysis and facial search capabilities that you can use to detect, analyze, and compare faces for a wide variety of user verification, people counting, and public safety use cases.

Amazon Rekognition Use-Cases:

via - https://aws.amazon.com/rekognition/

via - https://aws.amazon.com/rekognition/

Incorrect options:

Convert images into greyscale

Resize images quickly

Human pose detection

Amazon Rekognition does not do image processing tasks such as converting images to greyscale or resizing images. Human pose detection is not available in Amazon Rekognition.

Reference:

https://aws.amazon.com/rekognition/

38
Q

Which AWS service will you use if you have to move large volumes of on-premises data to AWS Cloud from a remote location with limited bandwidth?

A. AWS Transit Gateway
B. AWS Snowball
C. AWS Direct Connect
D. AWS Virtual Private Network (VPN)

A

B. AWS Snowball

Explanation
Correct option:

AWS Snowball

AWS Snowball, a part of the AWS Snow Family, is a data migration and edge computing device. If you have large quantities of data you need to migrate into AWS, offline data transfer with AWS Snowball can overcome the challenge of limited bandwidth, and avoid the need to lease additional bandwidth. AWS Snowball moves terabytes of data in about a week. You can use it to move things like databases, backups, archives, healthcare records, analytics datasets, IoT sensor data and media content, especially when network conditions prevent realistic timelines for transferring large amounts of data both into and out of AWS.
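A quick back-of-the-envelope calculation shows why offline transfer wins on constrained links (the data size and bandwidth figures are illustrative):

```python
def transfer_days(data_tb: float, bandwidth_mbps: float, utilization: float = 0.8) -> float:
    """Days needed to push `data_tb` terabytes over a link of
    `bandwidth_mbps` megabits/s at the given sustained utilization."""
    bits = data_tb * 1e12 * 8              # terabytes -> bits
    seconds = bits / (bandwidth_mbps * 1e6 * utilization)
    return seconds / 86400

# 100 TB over a fully saturated 100 Mbps link:
print(round(transfer_days(100, 100, utilization=1.0)))  # ~93 days
```

Roughly three months over the wire versus about a week with a Snowball device, and that assumes the link carries nothing else.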

Incorrect options:

AWS Virtual Private Network (VPN) - A VPN connection refers to the connection between your Virtual Private Cloud and your on-premises network. By default, instances that you launch into an Amazon VPC can’t communicate with your own (remote) network. You can enable access to your remote network from your VPC by creating an AWS Site-to-Site VPN connection and configuring routing to pass traffic through the connection. A VPN provides regular connectivity between AWS and your private on-premises network; it is not a data migration solution.

AWS Direct Connect - AWS Direct Connect is a cloud service solution that makes it easy to establish a dedicated network connection from your premises to AWS. You can use AWS Direct Connect to establish a private virtual interface from your on-premises network directly to your Amazon VPC, providing you with a private, high-bandwidth network connection between your network and your VPC. This connection is private and does not go over the public internet. It takes at least a month to establish this physical connection, so it is not feasible to set up AWS Direct Connect in remote locations.

AWS Transit Gateway - AWS Transit Gateway connects VPCs and on-premises networks through a central hub. This simplifies your network and puts an end to complex peering relationships. It acts as a cloud router – each new connection is only made once. VPC peering across large connections is made possible using AWS Transit Gateway without ending up with a complex VPC peering network. AWS Transit Gateway is not a data migration solution.

Reference:

https://aws.amazon.com/snowball/

39
Q

A research lab wants to optimize the caching capabilities for its scientific computations application running on Amazon Elastic Compute Cloud (Amazon EC2) instances. Which Amazon Elastic Compute Cloud (Amazon EC2) storage option is best suited for this use-case?

A. Amazon Simple Storage Service (Amazon S3)
B. Instance Store
C. Amazon Elastic Block Store (Amazon EBS)
D. Amazon Elastic File System (Amazon EFS)

A

B. Instance Store

Explanation
Correct option:

Instance Store

An Instance Store provides temporary block-level storage for your Amazon EC2 instance. This storage is located on disks that are physically attached to the host computer. Instance store is ideal for the temporary storage of information that changes frequently, such as buffers, caches, scratch data, and other temporary content, or for data that is replicated across a fleet of instances, such as a load-balanced pool of web servers. Instance storage is temporary; data is lost if the instance fails or is terminated.

Instance Store Overview: via - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html

Incorrect options:

Amazon Elastic Block Store (Amazon EBS) - Amazon Elastic Block Store (Amazon EBS) is an easy to use, high-performance block storage service designed for use with Amazon Elastic Compute Cloud (Amazon EC2) for both throughput and transaction-intensive workloads at any scale. A broad range of workloads, such as relational and non-relational databases, enterprise applications, containerized applications, big data analytics engines, file systems, and media workflows are widely deployed on Amazon EBS. Amazon EBS is not a good fit for caching information on Amazon EC2 instances.

Amazon Elastic File System (Amazon EFS) - Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed, elastic NFS file system. It is built to scale on-demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth. Amazon EFS is designed to provide massively parallel shared access to thousands of Amazon EC2 instances, enabling your applications to achieve high levels of aggregate throughput and IOPS with consistent low latencies. Amazon EFS is not a good fit for caching information on Amazon EC2 instances.

Amazon Simple Storage Service (Amazon S3) - Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance. Amazon S3 is not a good fit for caching information on Amazon EC2 instances.

Reference:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html

40
Q

An IT company has deployed a static website on Amazon Simple Storage Service (Amazon S3), but the website is still inaccessible. As a Cloud Practitioner, which of the following solutions would you suggest to address this issue?

A. Enable Amazon S3 replication
B. Disable Amazon S3 encryption
C. Fix the Amazon S3 bucket policy
D. Enable Amazon S3 versioning

A

C. Fix the Amazon S3 bucket policy

Explanation
Correct option:

Fix the Amazon S3 bucket policy

To host a static website on Amazon S3, you configure an Amazon S3 bucket for website hosting and then upload your website content to the bucket. When you configure a bucket as a static website, you must enable website hosting, set permissions, and create and add an index document.

Hosting a static website on Amazon S3: via - https://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html

If you want to configure an existing bucket as a static website that has public access, you must edit block public access settings for that bucket. You may also have to edit your account-level block public access settings. Amazon S3 applies the most restrictive combination of the bucket-level and account-level block public access settings.

Here is how you can edit Public Access settings for S3 buckets: via - https://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteAccessPermissionsReqd.html
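
As a concrete illustration, a minimal bucket policy granting public read access to a website bucket's objects looks like the following sketch (the bucket name is a placeholder, not a real bucket):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-website-bucket/*"
    }
  ]
}
```

Note that a policy like this only takes effect once the bucket's (and account's) block public access settings allow public bucket policies.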

Incorrect options:

Disable Amazon S3 encryption

Enable Amazon S3 versioning

Enable Amazon S3 replication

Disabling Amazon S3 encryption, enabling Amazon S3 versioning or enabling Amazon S3 replication have no bearing on deploying a static website on Amazon S3, so these options are not correct.

References:

https://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html

https://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteAccessPermissionsReqd.html

41
Q

An IT company would like to move its IT resources (including any data and applications) from an AWS Region in the US to another AWS Region in Europe. Which of the following represents the correct solution for this use-case?

A. The company should use AWS CloudFormation to move the resources
B. The company should just start creating new resources in the destination AWS Region and then migrate the relevant data and applications to this new AWS Region
C. The company should use AWS Database Migration Service (AWS DMS) to move the resources from the source AWS Region to the destination AWS Region
D. The company should raise a ticket with AWS Support for this resource migration

A

B. The company should just start creating new resources in the destination AWS Region and then migrate the relevant data and applications to this new AWS Region

Explanation
Correct option:

The company should just start creating new resources in the destination AWS Region and then migrate the relevant data and applications into this new AWS Region - The company needs to create resources in the new AWS Region and then move the relevant data and applications into the new AWS Region. There is no off-the-shelf solution or service that the company can use to facilitate this transition.

Incorrect options:

The company should use AWS CloudFormation to move the resources (including any data and applications) from source AWS Region to destination AWS Region - AWS CloudFormation allows you to use programming languages or a simple text file to model and provision, in an automated and secure manner, all the resources needed for your applications across all regions and accounts. AWS CloudFormation cannot help with moving data and applications into another Region.

The company should use AWS Database Migration Service (AWS DMS) to move the resources (including any data and applications) from source AWS Region to destination AWS Region - AWS Database Migration Service (AWS DMS) helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. The AWS Database Migration Service (AWS DMS) can migrate your data to and from the most widely used commercial and open-source databases. AWS Database Migration Service (AWS DMS) cannot help with the entire IT resources migration.

The company should raise a ticket with AWS Support for this resource migration - This option has been added as a distractor. AWS Support cannot help with IT resources migration.

42
Q

Which AWS services support High Availability by default? (Select two)

A. Amazon Elastic Block Store (Amazon EBS)
B. Amazon DynamoDB
C. Amazon Elastic File System (Amazon EFS)
D. Amazon Redshift
E. Instance Store

A

B. Amazon DynamoDB
C. Amazon Elastic File System (Amazon EFS)

Explanation
Correct options:

Amazon DynamoDB

Amazon DynamoDB is a key-value and document database that delivers single-digit millisecond performance at any scale. It’s a fully managed, multi-Region, multi-master, durable database with built-in security, backup and restore, and in-memory caching for internet-scale applications. All of your data is stored on solid-state disks (SSDs) and is automatically replicated across multiple Availability Zones (AZ) in an AWS Region, providing built-in high availability and data durability.

Amazon DynamoDB High Availability: via - https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html

Amazon Elastic File System (Amazon EFS)

Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources. It is built to scale on-demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth. Amazon EFS is a regional service storing data within and across multiple Availability Zones (AZ) for high availability and durability.

Amazon EFS High Availability: via - https://aws.amazon.com/efs/faq/

Incorrect options:

Amazon Redshift - Amazon Redshift is a fast, fully managed cloud data warehouse that makes it simple and cost-effective to analyze all your data using standard SQL and your existing Business Intelligence (BI) tools.

Amazon Redshift only supports single Availability Zone (AZ) deployments: via - https://aws.amazon.com/redshift/faqs/

Amazon Elastic Block Store (Amazon EBS) - Amazon Elastic Block Store (Amazon EBS) is an easy to use, high-performance block storage service designed for use with Amazon Elastic Compute Cloud (EC2) for both throughput and transaction-intensive workloads at any scale. EBS volumes are replicated within an Availability Zone (AZ) and can easily scale to petabytes of data.

Instance Store - As Instance Store volumes are tied to an EC2 instance, they are also single Availability Zone (AZ) entities.

References:

https://aws.amazon.com/efs/faq/

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html

https://aws.amazon.com/redshift/faqs/

https://aws.amazon.com/ebs/

43
Q

A leading research firm needs to access information available in old patents and documents (such as PDFs, text files, Word documents, etc.) present in its huge knowledge base. The firm is looking for a powerful search tool that can dig into these knowledge resources and return the most relevant files/documents. Which of the following is the correct service to address this requirement?

A. Amazon Lex
B. Amazon Personalize
C. Amazon Kendra
D. Amazon Comprehend

A

C. Amazon Kendra

Explanation
Correct option:

Amazon Kendra

Amazon Kendra is an intelligent search service powered by machine learning. Kendra reimagines enterprise search for your websites and applications so your employees and customers can easily find the content they are looking for, even when it’s scattered across multiple locations and content repositories within your organization.

Using Amazon Kendra, you can stop searching through troves of unstructured data and discover the right answers to your questions, when you need them. Amazon Kendra is a fully managed service, so there are no servers to provision, and no machine learning models to build, train, or deploy. Amazon Kendra supports unstructured and semi-structured data in .html, MS Office (.doc, .ppt), PDF, and text formats.

Unlike conventional search technology, natural language search capabilities return the answers you’re looking for quickly and accurately, no matter where the information lives within your organization.

Amazon Kendra’s deep learning models come pre-trained across 14 industry domains, allowing it to extract more accurate answers across a wide range of business use cases from the get-go. You can also fine-tune search results by manually adjusting the importance of data sources, authors, freshness, or using custom tags.

Incorrect options:

Amazon Personalize - Amazon Personalize enables developers to build applications with the same machine learning (ML) technology used by Amazon.com for real-time personalized recommendations. Amazon Personalize makes it easy for developers to build applications capable of delivering a wide array of personalization experiences, including specific product recommendations, personalized product re-ranking, and customized direct marketing.

Amazon Comprehend - Amazon Comprehend is a natural-language processing (NLP) service that uses machine learning to uncover information in unstructured data. Instead of combing through documents, the process is simplified and unseen information is easier to understand.

Amazon Kendra provides ML-powered search capabilities for all unstructured data customers store in AWS. Amazon Kendra offers easy-to-use native connectors to popular AWS repository types such as S3 and RDS databases. Other AI services such as Amazon Comprehend, Amazon Transcribe, and Amazon Comprehend Medical can be used to pre-process documents, generate searchable text, extract entities, and enrich their metadata for more specialized search experiences.

Amazon Lex - Amazon Lex is a service for building conversational interfaces into any application using voice and text. Amazon Lex provides the advanced deep learning functionalities of automatic speech recognition (ASR) for converting speech to text, and natural language understanding (NLU) to recognize the intent of the text, to enable you to build applications with highly engaging user experiences and lifelike conversational interactions.

Reference:

https://aws.amazon.com/kendra/

44
Q

Which of the following use cases is best suited for Amazon EFS Standard-Infrequent Access (EFS Standard-IA) storage class?

A. Object storage for workloads that need sub-second latency speeds for accessing data
B. Use as boot volume for highly available EC2 instances
C. Storing data in a single AWS Availability Zone (AZ)
D. Storing files in an accessible location to satisfy audit requirements

A

D. Storing files in an accessible location to satisfy audit requirements

Explanation
Correct option:

Storing files in an accessible location to satisfy audit requirements

The Amazon EFS Standard-Infrequent Access (EFS Standard-IA) storage class reduces storage costs for files that are not accessed every day. It does this without sacrificing the high availability, high durability, elasticity, and POSIX file system access that Amazon Elastic File System (Amazon EFS) provides.

AWS recommends Amazon EFS Standard-Infrequent Access (EFS Standard-IA) storage class if you need your full dataset to be readily accessible and want to automatically save on storage costs for files that are less frequently accessed. Examples include keeping files accessible to satisfy audit requirements, performing historical analysis, or performing backup and recovery. Amazon EFS Standard-Infrequent Access (EFS Standard-IA) storage is compatible with all Amazon EFS features, and is available in all AWS Regions where Amazon EFS is available.

Incorrect options:

Storing data in a single AWS Availability Zone (AZ) - Amazon EFS One Zone-Infrequent Access (EFS One Zone-IA) storage class is used to store data in a single AWS Availability Zone. Data stored in this storage class may be lost in the event of a disaster or other fault that affects all copies of the data within the Availability Zone (AZ), or in the event of Availability Zone (AZ) destruction.

Object storage for workloads that need sub-second latency speeds for accessing the data - Amazon EFS is a file system service and not an object storage service. You should use Amazon S3 for object storage. So, this option is incorrect.

Use as boot volume for highly available Amazon Elastic Compute Cloud (Amazon EC2) instances - Amazon EFS cannot be used as a boot volume for Amazon Elastic Compute Cloud (Amazon EC2) instances. For boot volumes, Amazon Elastic Block Store (Amazon EBS) volumes are used.

References:

https://docs.aws.amazon.com/efs/latest/ug/storage-classes.html

https://aws.amazon.com/efs/features/infrequent-access/

45
Q

AWS Lambda pricing is based on which of the following criteria? (Select two)

A. The number of lines of codes for the AWS Lambda functions
B. The size of the deployment package for the AWS Lambda function
C. The language runtime of the AWS Lambda function
D. The time it takes for the AWS Lambda function to execute
E. Number of requests for the AWS Lambda function

A

D. The time it takes for the AWS Lambda function to execute
E. Number of requests for the AWS Lambda function

Explanation
Correct options:

Number of requests for the AWS Lambda function

The time it takes for the AWS Lambda function to execute

AWS Lambda lets you run code without provisioning or managing servers. With AWS Lambda, you can run code for virtually any type of application or backend service - all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale your code with high availability.

With AWS Lambda, you pay only for what you use. You are charged based on the number of requests for your functions and the duration, the time it takes for your code to execute. AWS Lambda counts a request each time it starts executing in response to an event notification or invoke call, including test invokes from the console. Duration is calculated from the time your code begins executing until it returns or otherwise terminates, rounded up to the nearest 100ms.
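
The pricing model above lends itself to a quick back-of-the-envelope estimate: duration cost is billed in GB-seconds (memory allocated multiplied by billed execution time). The sketch below uses illustrative placeholder rates, not current published AWS prices:

```python
def lambda_monthly_cost(requests, avg_duration_ms, memory_mb,
                        price_per_million_requests=0.20,    # illustrative rate
                        price_per_gb_second=0.0000166667):  # illustrative rate
    """Estimate monthly AWS Lambda cost from request count and duration."""
    # Round billed duration up to the nearest 100 ms, as described above.
    billed_ms = -(-avg_duration_ms // 100) * 100
    # GB-seconds = requests x billed seconds x allocated memory in GB
    gb_seconds = requests * (billed_ms / 1000) * (memory_mb / 1024)
    request_cost = requests / 1_000_000 * price_per_million_requests
    return request_cost + gb_seconds * price_per_gb_second

# 3M requests/month at 120 ms average with 512 MB memory:
# 120 ms bills as 200 ms -> 300,000 GB-seconds, about $5.60 at these rates.
cost = lambda_monthly_cost(3_000_000, 120, 512)
```

Note that neither language runtime, lines of code, nor package size appears anywhere in the formula, which is exactly why the other answer options are incorrect.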

Incorrect options:

The language runtime of the AWS Lambda function - AWS Lambda supports many programming language runtimes such as NodeJS, Python, Go, C# etc. The pricing for an AWS Lambda function is not dependent on the language runtime of the function.

The number of lines of code for the AWS Lambda function - The pricing for an AWS Lambda function is not dependent on the number of lines of code for the function.

The size of the deployment package for the AWS Lambda function - The pricing for an AWS Lambda function is not dependent on the size of the deployment package for the function.

Reference:

https://aws.amazon.com/lambda/pricing/

46
Q

Which of the following AWS services offer block-level storage? (Select two)

A. Instance Store
B. Amazon Elastic Block Store (Amazon EBS)
C. Amazon Simple Storage Service (Amazon S3)
D. Amazon Elastic File System (Amazon EFS)
E. Amazon Elastic Container Service (Amazon ECS)

A

A. Instance Store
B. Amazon Elastic Block Store (Amazon EBS)

Explanation
Correct options:

Amazon Elastic Block Store (Amazon EBS)

Amazon Elastic Block Store (Amazon EBS) is an easy to use, high-performance block storage service designed for use with Amazon Elastic Compute Cloud (Amazon EC2) for both throughput and transaction-intensive workloads at any scale. A broad range of workloads, such as relational and non-relational databases, enterprise applications, containerized applications, big data analytics engines, file systems, and media workflows are widely deployed on Amazon EBS.

Instance Store

An instance store provides temporary block-level storage for your EC2 instance. This storage is located on disks that are physically attached to the host computer. Instance store is ideal for the temporary storage of information that changes frequently, such as buffers, caches, scratch data, and other temporary content, or for data that is replicated across a fleet of instances, such as a load-balanced pool of web servers. Instance storage is temporary: data is lost if the instance experiences a failure or is terminated. Amazon EC2 instance store cannot be used for file sharing between instances.

Incorrect options:

Amazon Elastic File System (Amazon EFS) - Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed, elastic NFS file system. It is built to scale on-demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth. Amazon EFS is designed to provide massively parallel shared access to thousands of Amazon EC2 instances, enabling your applications to achieve high levels of aggregate throughput and IOPS with consistent low latencies.

Amazon Simple Storage Service (Amazon S3) - Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance. This means customers of all sizes and industries can use it to store and protect any amount of data for a range of use cases, such as websites, mobile applications, backup and restore, archive, enterprise applications, IoT devices, and big data analytics.

Amazon Elastic Container Service (Amazon ECS) - Amazon Elastic Container Service (Amazon ECS) is a highly scalable, high-performance container management service that supports Docker containers and allows you to easily run applications on a managed cluster of Amazon EC2 instances. This is not a storage service and has been added as a distractor.

References:

https://aws.amazon.com/ebs/

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html

47
Q

A startup runs its proprietary application on docker containers. As a Cloud Practitioner, which AWS service would you recommend so that the startup can run containers and still have access to the underlying servers?

A. Amazon Elastic Container Service (Amazon ECS)
B. Amazon Elastic Container Registry (Amazon ECR)
C. AWS Fargate
D. AWS Lambda

A

A. Amazon Elastic Container Service (Amazon ECS)

Explanation
Correct option:

Amazon Elastic Container Service (Amazon ECS)

Amazon Elastic Container Service (Amazon ECS) is a highly scalable, fast container management service that makes it easy to run, stop, and manage Docker containers on a cluster. With the EC2 launch type, the underlying servers are not abstracted away, so you retain access to them and manage them yourself.

Incorrect options:

AWS Fargate - AWS Fargate is a serverless compute engine for containers. It works with both Amazon Elastic Container Service (Amazon ECS) and Amazon Elastic Kubernetes Service (Amazon EKS). AWS Fargate makes it easy for you to focus on building your applications. AWS Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design. With AWS Fargate, you do not have access to the underlying servers, so this option is incorrect.

How AWS Fargate Works: via - https://aws.amazon.com/fargate/

AWS Lambda - AWS Lambda is a compute service that lets you run code without provisioning or managing servers. AWS Lambda executes your code only when needed and scales automatically, from a few requests per day to thousands per second. AWS Lambda does not support running container applications.

Amazon Elastic Container Registry (Amazon ECR) - Amazon Elastic Container Registry (Amazon ECR) can be used to store, manage, and deploy Docker container images. Amazon Elastic Container Registry (Amazon ECR) eliminates the need to operate your container repositories. Amazon Elastic Container Registry (Amazon ECR) does not support running container applications.

Reference:

https://aws.amazon.com/fargate/

47
Q

What is the primary benefit of deploying an Amazon Relational Database Service (Amazon RDS) database in a Read Replica configuration?

A. Read Replica protects the database from regional failure
B. Read Replica improves database scalability
C. Read Replica reduces database usage costs
D. Read Replica enhances database availability

A

B. Read Replica improves database scalability

Explanation
Correct option:

Read Replica improves database scalability

Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. Read Replicas allow you to create read-only copies that are synchronized with your master database. Read Replicas are used for improved read performance. You can also place your read replica in a different AWS Region closer to your users for better performance. Read Replicas are an example of horizontal scaling of resources.

Read Replica Overview: via - https://aws.amazon.com/rds/features/multi-az/
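
A common way applications exploit Read Replicas is a read/write split: writes go to the primary endpoint, while read-only queries are spread across replica endpoints. A minimal routing sketch (the endpoint strings are hypothetical placeholders, not real hosts):

```python
import random

# Hypothetical endpoints for illustration only.
PRIMARY = "mydb.abc123.us-east-1.rds.amazonaws.com"
READ_REPLICAS = [
    "mydb-replica-1.abc123.us-east-1.rds.amazonaws.com",
    "mydb-replica-2.abc123.eu-west-1.rds.amazonaws.com",  # cross-Region replica
]

def endpoint_for(sql: str) -> str:
    """Route read-only statements to a replica, everything else to the primary."""
    if sql.lstrip().lower().startswith("select"):
        return random.choice(READ_REPLICAS)
    return PRIMARY
```

Because each replica absorbs a share of the read traffic, adding replicas scales reads horizontally, which is exactly the "improves database scalability" benefit.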

Exam Alert:

Please review the differences between Amazon RDS Multi-AZ, Multi-Region and Read Replica deployments for RDS: via - https://aws.amazon.com/rds/features/multi-az/

Incorrect options:

Read Replica enhances database availability - Amazon RDS Multi-AZ deployments provide enhanced availability and durability for RDS database (DB) instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Read Replica cannot enhance database availability.

Read Replica protects the database from a regional failure - You need to use RDS in Multi-Region deployment configuration to protect from a regional failure. Read Replica cannot protect from a regional failure.

Read Replica reduces database usage costs - Amazon Relational Database Service (Amazon RDS) with Read Replicas increases the database costs compared to the standard deployment. So this option is incorrect.

Reference:

https://aws.amazon.com/rds/features/multi-az/

48
Q

Which AWS service can be used as an in-memory database with high-performance and low latency?

A. Amazon ElastiCache
B. Amazon Relational Database Service (Amazon RDS)
C. Amazon Athena
D. Amazon DynamoDB

A

A. Amazon ElastiCache

Explanation
Correct option:

Amazon ElastiCache

Amazon ElastiCache allows you to seamlessly set up, run, and scale popular open-source compatible in-memory data stores in the cloud. Build data-intensive apps or boost the performance of your existing databases by retrieving data from high throughput and low latency in-memory data stores. Amazon ElastiCache is a popular choice for real-time use cases like Caching, Session Stores, Gaming, Geospatial Services, Real-Time Analytics, and Queuing. ElastiCache cannot be used for online analytical processing.

How Amazon ElastiCache Works: via - https://aws.amazon.com/elasticache/
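
ElastiCache is most often used via the cache-aside pattern: check the cache first, and fall back to the database only on a miss. A minimal sketch in plain Python, with a dict standing in for the Redis/Memcached client:

```python
import time

class CacheAside:
    """Cache-aside sketch; a dict stands in for an ElastiCache client."""

    def __init__(self, loader, ttl_seconds=60):
        self._store = {}       # key -> (value, expiry timestamp)
        self._loader = loader  # called on a cache miss, e.g. a DB query
        self._ttl = ttl_seconds

    def get(self, key):
        entry = self._store.get(key)
        if entry is not None and entry[1] > time.time():
            return entry[0]                    # cache hit: no DB round trip
        value = self._loader(key)              # cache miss: load from the DB
        self._store[key] = (value, time.time() + self._ttl)
        return value
```

Repeated reads of a hot key hit only the in-memory store, which is where the single-digit-millisecond latency win comes from.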

Incorrect options:

Amazon Relational Database Service (Amazon RDS) - Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while automating time-consuming administration tasks such as hardware provisioning, database setup, patching, and backups. Amazon Relational Database Service (Amazon RDS) cannot be used as an in-memory database.

Amazon DynamoDB - Amazon DynamoDB is a NoSQL database that supports key-value and document data models and enables developers to build modern, serverless applications that can start small and scale globally to support petabytes of data and tens of millions of read and write requests per second. Amazon DynamoDB supports both key-value and document data models. This enables Amazon DynamoDB to have a flexible schema, so each row can have any number of columns at any point in time. This allows you to easily adapt the tables as your business requirements change, without having to redefine the table schema as you would in relational databases. Amazon DynamoDB cannot be used as an in-memory database.

Amazon Athena - Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Amazon Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run. Amazon Athena cannot be used as an in-memory database.

Reference:

https://aws.amazon.com/elasticache/

50
Q

Which AWS service can be used for online analytical processing?

A. Amazon DynamoDB
B. Amazon ElastiCache
C. Amazon Relational Database Service (Amazon RDS)
D. Amazon Redshift

A

D. Amazon Redshift

Explanation
Correct option:

Amazon Redshift

Amazon Redshift is a fast, fully managed cloud data warehouse that makes it simple and cost-effective to analyze all your data using standard SQL and your existing Business Intelligence (BI) tools. It allows you to run complex analytic queries against terabytes to petabytes of structured data, using sophisticated query optimization, columnar storage on high-performance storage, and massively parallel query execution.

Incorrect options:

Amazon Relational Database Service (Amazon RDS) - Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while automating time-consuming administration tasks such as hardware provisioning, database setup, patching, and backups.

Customers use Amazon RDS databases primarily for online-transaction processing (OLTP) workload while Amazon Redshift is used primarily for reporting and analytics.

Amazon DynamoDB - Amazon DynamoDB is a NoSQL database that supports key-value and document data models and enables developers to build modern, serverless applications that can start small and scale globally to support petabytes of data and tens of millions of read and write requests per second. Amazon DynamoDB supports both key-value and document data models. This enables Amazon DynamoDB to have a flexible schema, so each row can have any number of columns at any point in time. This allows you to easily adapt the tables as your business requirements change, without having to redefine the table schema as you would in relational databases. Amazon DynamoDB cannot be used for online analytical processing.

Amazon ElastiCache - Amazon ElastiCache allows you to seamlessly set up, run, and scale popular open-source compatible in-memory data stores in the cloud. Build data-intensive apps or boost the performance of your existing databases by retrieving data from high throughput and low latency in-memory data stores. Amazon ElastiCache is a popular choice for real-time use cases like Caching, Session Stores, Gaming, Geospatial Services, Real-Time Analytics, and Queuing. Amazon ElastiCache cannot be used for online analytical processing.

Reference:

https://aws.amazon.com/redshift/faqs/

51
Q

What is the primary benefit of deploying an Amazon Relational Database Service (Amazon RDS) database in a Read Replica configuration?

A. Read Replica protects the database from regional failure
B. Read Replica improves database scalabilty
C. Read Replica reduces database usage costs
D. Read Replica enhances database availablity

A

B. Read Replica improves database scalabilty

Explanation
Correct option:

Read Replica improves database scalability

Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. Read Replicas allow you to create read-only copies that are synchronized with your master database. Read Replicas are used for improved read performance. You can also place your read replica in a different AWS Region closer to your users for better performance. Read Replicas are an example of horizontal scaling of resources.

Read Replica Overview: via - https://aws.amazon.com/rds/features/multi-az/

Exam Alert:

Please review the differences between Amazon RDS Multi-AZ, Multi-Region and Read Replica deployments for RDS: via - https://aws.amazon.com/rds/features/multi-az/

Incorrect options:

Read Replica enhances database availability - Amazon RDS Multi-AZ deployments provide enhanced availability and durability for RDS database (DB) instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Read Replica cannot enhance database availability.

Read Replica protects the database from a regional failure - You need to use RDS in Multi-Region deployment configuration to protect from a regional failure. Read Replica cannot protect from a regional failure.

Read Replica reduces database usage costs - Amazon Relational Database Service (Amazon RDS) with Read Replicas increases the database costs compared to the standard deployment. So this option is incorrect.

Reference:

https://aws.amazon.com/rds/features/multi-az/

52
Q

What is the primary benefit of deploying an Amazon Relational Database Service (Amazon RDS) database in a Read Replica configuration?

A. Read Replica protects the database from regional failure
B. Read Replica improves database scalability
C. Read Replica reduces database usage costs
D. Read Replica enhances database availability

A

B. Read Replica improves database scalability

Explanation
Correct option:

Read Replica improves database scalability

Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. Read Replicas allow you to create read-only copies that are synchronized with your primary (source) database. Read Replicas are used to improve read performance. You can also place a Read Replica in a different AWS Region, closer to your users, for better performance. Read Replicas are an example of horizontal scaling of resources.

Read Replica Overview: via - https://aws.amazon.com/rds/features/multi-az/
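The read-scaling idea above can be sketched as application-side routing: writes always go to the primary endpoint, while read queries are spread across the replica endpoints. This is a minimal illustration only — the endpoint names are hypothetical, and in practice a database driver or proxy layer usually handles this routing.

```python
import itertools

# Hypothetical endpoints: one primary (handles writes) and two Read Replicas
# (handle reads). Real RDS endpoint names would come from the AWS console/CLI.
PRIMARY = "mydb-primary.example.us-east-1.rds.amazonaws.com"
REPLICAS = [
    "mydb-replica-1.example.us-east-1.rds.amazonaws.com",
    "mydb-replica-2.example.us-east-1.rds.amazonaws.com",
]

# Round-robin iterator over the replica endpoints.
_replica_cycle = itertools.cycle(REPLICAS)

def endpoint_for(query: str) -> str:
    """Route read-only queries to a replica; everything else to the primary."""
    is_read = query.lstrip().upper().startswith("SELECT")
    return next(_replica_cycle) if is_read else PRIMARY

print(endpoint_for("INSERT INTO users VALUES (1)"))  # primary endpoint
print(endpoint_for("SELECT * FROM users"))           # replica 1
print(endpoint_for("SELECT * FROM users"))           # replica 2
```

Adding more replicas scales read throughput horizontally without touching the write path — which is exactly why "improves database scalability" is the correct answer.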

Exam Alert:

Please review the differences between Amazon RDS Multi-AZ, Multi-Region and Read Replica deployments for RDS: via - https://aws.amazon.com/rds/features/multi-az/

Incorrect options:

Read Replica enhances database availability - Amazon RDS Multi-AZ deployments provide enhanced availability and durability for RDS database (DB) instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB instance, Amazon RDS automatically creates a primary DB instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). A Read Replica, by itself, does not enhance database availability.

Read Replica protects the database from a regional failure - You need to use Amazon RDS in a Multi-Region deployment configuration to protect against a regional failure. A Read Replica alone cannot protect against a regional failure.

Read Replica reduces database usage costs - Amazon RDS with Read Replicas increases database costs compared to a standard deployment, so this option is incorrect.

Reference:

https://aws.amazon.com/rds/features/multi-az/
