Practice Test 2: AWS Certified Cloud Practitioner Practice Exam (2) Flashcards

1
Q

Which of the following aspects of security are managed by AWS? (Choose TWO)

  • Encryption of EBS volumes
  • Access permissions
  • Hardware patching
  • VPC security
  • Securing global physical infrastructure
A

Hardware patching,
Securing global physical infrastructure

Explanation

AWS is continuously innovating the design and systems of its data centers to protect them from man-made and natural risks. For example, at the first layer of security, AWS provides a number of security features depending on the location, such as security guards, fencing, security feeds, intrusion detection technology, and other security measures.

According to the Shared Responsibility Model, patching of the underlying hardware is AWS's responsibility. AWS is responsible for patching and fixing flaws within the infrastructure, but customers are responsible for patching their guest OS and applications.

The other options are incorrect:

“VPC security” is incorrect. The configuration and security of the VPC are the customer’s responsibility.

“Encryption of EBS volumes” is incorrect. The customer is responsible for encrypting their data on EBS either on the client side or on the server side.

“Access permissions” is incorrect. The customer is responsible for managing the IAM permissions.

Additional information:

IAM permissions let the customer specify access to AWS resources. Permissions are granted to IAM entities (users, user groups, and roles) and by default these entities start with no permissions. In other words, IAM entities can do nothing in AWS until you grant them your desired permissions. To give entities permissions, you can attach a policy that specifies the type of access, the actions that can be performed, and the resources on which the actions can be performed.
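For illustration only, here is a minimal sketch using the AWS SDK for Python (boto3, assumed installed with credentials configured). It creates a customer-managed policy granting read-only access to a hypothetical S3 bucket and attaches it to a hypothetical IAM user; all names are placeholders.

```python
# Minimal sketch: create a customer-managed policy and attach it to a user.
# Bucket, policy, and user names are hypothetical placeholders.
import json
import boto3

iam = boto3.client("iam")

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",                                   # type of access
            "Action": ["s3:GetObject", "s3:ListBucket"],         # allowed actions
            "Resource": [                                        # resources the actions apply to
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*",
            ],
        }
    ],
}

response = iam.create_policy(
    PolicyName="ExampleS3ReadOnly",
    PolicyDocument=json.dumps(policy_document),
)

# IAM entities start with no permissions until a policy like this is attached.
iam.attach_user_policy(
    UserName="example-user",
    PolicyArn=response["Policy"]["Arn"],
)
```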

2
Q

Where can you store files in AWS? (Choose TWO)

  • Amazon EBS
  • Amazon ECS
  • Amazon EFS
  • Amazon SNS
  • Amazon EMR
A

Amazon EBS
Amazon EFS

Explanation

Amazon Elastic File System (Amazon EFS) provides simple, scalable, elastic file storage for use with AWS Cloud services and on-premises resources. It is easy to use and offers a simple interface that allows you to create and configure file systems quickly and easily. Amazon EFS is built to elastically scale on demand without disrupting applications, growing and shrinking automatically as you add and remove files, so your applications have the storage they need, when they need it. It is designed to provide massively parallel shared access to thousands of Amazon EC2 instances, enabling your applications to achieve high levels of aggregate throughput and IOPS that scale as a file system grows, with consistent low latencies. As a regional service, Amazon EFS is designed for high availability and durability storing data redundantly across multiple Availability Zones.

Amazon Elastic Block Store (Amazon EBS) provides persistent block storage volumes for use with Amazon EC2 and Amazon RDS instances. AWS recommends Amazon EBS for data that must be quickly accessible and requires long-term persistence. EBS volumes are particularly well-suited for use as the primary storage for operating systems, databases, or for any applications that require fine granular updates and access to raw, unformatted, block-level storage. Each Amazon EBS volume is automatically replicated within its Availability Zone to protect you from component failure, offering high availability and durability.
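For illustration, a minimal boto3 sketch (boto3 assumed installed and configured; IDs are placeholders) that creates a gp3 EBS volume in one Availability Zone and attaches it to a hypothetical EC2 instance in the same zone:

```python
# Minimal sketch: create a 100 GiB gp3 EBS volume and attach it to an instance.
# An EBS volume can only be attached to instances in the same Availability Zone.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,              # GiB
    VolumeType="gp3",
)

# Wait until the volume is ready before attaching it.
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",   # hypothetical instance ID
    Device="/dev/sdf",
)
```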

The other options are incorrect:

“Amazon SNS” is incorrect. Amazon Simple Notification Service (SNS) is a publish / subscribe (pub/sub) messaging service.

“Amazon ECS” is incorrect. Amazon Elastic Container Service (ECS) is a compute service that is used to run containerized applications on AWS.

“Amazon EMR” is incorrect. Amazon Elastic MapReduce (EMR) is a web service that enables businesses, researchers, data analysts, and developers to easily and cost-effectively process vast amounts of data across dynamically scalable Amazon EC2 instances.

3
Q

What are the Amazon RDS features that can be used to improve the availability of your database? (Choose TWO)

  • AWS Regions
  • Edge Locations
  • Automatic patching
  • Read Replicas
  • Multi-AZ Deployment
A

Read Replicas,
Multi-AZ Deployment

Explanation

In a Multi-AZ deployment, Amazon RDS automatically provisions and maintains a synchronous standby replica in a different Availability Zone. The primary DB instance is synchronously replicated across Availability Zones to a standby replica to provide data redundancy, eliminate I/O freezes, and minimize latency spikes during system backups. Running a DB instance with high availability can enhance availability during planned system maintenance, and help protect your databases against DB instance failure and Availability Zone disruption.

Amazon RDS Read Replicas provide enhanced performance and durability for database (DB) instances. This feature makes it easy to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads. You can create one or more replicas of a given source DB Instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput.

Read replicas provide a complementary availability mechanism to Amazon RDS Multi-AZ Deployments. You can promote a read replica if the source DB instance fails. You can also replicate DB instances across AWS Regions as part of your disaster recovery strategy. This functionality complements the synchronous replication, automatic failure detection, and failover provided with Multi-AZ deployments.
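For illustration, a minimal boto3 sketch (hypothetical identifiers and credentials assumed) that uses both features: a Multi-AZ primary instance plus a read replica:

```python
# Minimal sketch: a Multi-AZ MySQL primary and a read replica to offload reads.
# All identifiers and the password are placeholders.
import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="example-primary",
    DBInstanceClass="db.t3.medium",
    Engine="mysql",
    MasterUsername="admin",
    MasterUserPassword="change-me-please",
    AllocatedStorage=100,          # GiB
    MultiAZ=True,                  # synchronous standby in another AZ
)

rds.create_db_instance_read_replica(
    DBInstanceIdentifier="example-replica",
    SourceDBInstanceIdentifier="example-primary",   # asynchronous replication
)
```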

The other options are incorrect:

“Edge Locations” is incorrect. Edge Locations are not a feature of Amazon RDS. Edge locations are used by the CloudFront service to distribute content globally.

“Automatic patching” is incorrect. The purpose of patching is to resolve functionality issues, improve security, or add new features; it does not, by itself, improve the availability of your database.

“AWS Regions” is incorrect. AWS Regions are not a feature of Amazon RDS. AWS Regions are separate geographic areas around the world that AWS uses to provide its Cloud Services, including Regions in North America, South America, Europe, Asia Pacific, and the Middle East. Choosing a specific AWS Region depends on its proximity to end-users, data sovereignty, and costs.

4
Q

A company has business critical workloads hosted on AWS and they are unwilling to accept any downtime. Which of the following is a recommended best practice to protect their workloads in the event of an unexpected natural disaster?

  • Replicate data across multiple Edge Locations worldwide and use Amazon CloudFront to perform automatic failover in the event of an outage
  • Deploy AWS resources across multiple Availability Zones within the same AWS Region
  • Deploy AWS resources to another AWS Region and implement an Active-Active disaster recovery strategy
  • Create point-in-time backups in another subnet and recover this data when a disaster occurs
A

Deploy AWS resources to another AWS Region and implement an Active-Active disaster recovery strategy

Explanation

Disaster recovery is about preparing for and recovering from events that have a negative impact on your business continuity or finances. This could be a natural disaster, hardware or software failure, a network outage, a power outage, physical damage to a building like fire or flooding, or some other significant disaster.

In AWS, customers have the flexibility to choose the disaster recovery approach that fits their budget. The approaches range from a simple backup and restore in another AWS Region to a full-scale multi-Region Active-Active solution.

With the multi-region Active-Active solution, your workload is deployed to, and actively serving traffic from, multiple AWS Regions. If an entire Region goes down because of a natural disaster or any other reason, the other Regions will still be available and able to serve user requests.

The other options are incorrect:

“Deploy AWS resources across multiple Availability Zones within the same AWS Region” is incorrect. A natural disaster may affect an entire Region, including all Availability Zones within that Region.

“Replicate data across multiple Edge Locations worldwide and use Amazon CloudFront to perform automatic failover in the event of an outage” is incorrect. Edge locations are not used for disaster recovery. Edge locations are used by CloudFront to cache and distribute content from a geographical location close to users.

“Create point-in-time backups in another subnet and recover this data when a disaster occurs” is incorrect. A subnet is a range of IP addresses within a VPC. Backing up to another subnet in the same Region does not protect the workload from a disaster that affects the entire Region.

5
Q

Which of the following services will help businesses ensure compliance in AWS?

  • CloudFront
  • CloudTrail
  • CloudEndure Migration
  • CloudWatch
A

CloudTrail

Explanation

AWS CloudTrail is designed to log all actions taken in your AWS account. This provides a great resource for governance, compliance, and risk auditing.
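For illustration, a minimal boto3 sketch (hypothetical trail and bucket names; the bucket policy must already allow CloudTrail delivery) that creates a trail, starts logging, and looks up recent events for auditing:

```python
# Minimal sketch: create a multi-Region trail, start logging, and review events.
import boto3

cloudtrail = boto3.client("cloudtrail")

cloudtrail.create_trail(
    Name="example-audit-trail",
    S3BucketName="example-cloudtrail-logs",   # existing bucket that allows CloudTrail
    IsMultiRegionTrail=True,                   # capture activity in all Regions
)

cloudtrail.start_logging(Name="example-audit-trail")

# Later, auditors can query recent management events:
events = cloudtrail.lookup_events(MaxResults=5)
for event in events["Events"]:
    print(event["EventName"], event.get("Username"))
```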

The other options are incorrect:

“CloudFront” is incorrect. Amazon CloudFront is a content delivery network (CDN) service.

“CloudEndure Migration” is incorrect. CloudEndure Migration simplifies the process of migrating applications from physical, virtual, and cloud-based infrastructure, ensuring that they are fully operational in any AWS Region without compatibility issues.

CloudEndure Migration will be discontinued in all AWS Regions except for the AWS GovCloud (US), Mainland China (Beijing), and Mainland China (Ningxia) Regions on December 30, 2022. AWS Application Migration Service is the primary migration service recommended for lift-and-shift migrations to the AWS Cloud. Customers who currently use CloudEndure Migration are encouraged to switch to AWS Application Migration Service for future migrations.

“CloudWatch” is incorrect. Amazon CloudWatch is used to monitor the utilization of AWS resources. CloudWatch provides you with data and actionable insights to monitor your applications, understand and respond to system-wide performance changes, and get a unified view of operational health.

6
Q

A company is migrating its on-premises database to Amazon RDS. What should the company do to ensure Amazon RDS costs are kept to a minimum?

  • Combine On-demand Capacity Reservations with Savings Plans
  • Use a Multi-Region Active-Passive architecture
  • Use a Multi-Region Active-Active architecture
  • Right size before and after migration
A

Right size before and after migration

Explanation

Right-sizing is the process of matching instance types and sizes to your workload performance and capacity requirements at the lowest possible cost. By right-sizing before migration, you can significantly reduce your infrastructure costs. If you skip right-sizing to save time, your migration speed might be faster, but you will end up with higher cloud infrastructure spend for a potentially long time.

Because your resource needs are always changing, right-sizing must become an ongoing process to continually achieve cost optimization. It’s important to right-size when you first consider moving to the cloud and calculate the total cost of ownership. However, it’s equally important to right-size periodically once you’re in the cloud to ensure ongoing cost-performance optimization.

Picking an Amazon RDS instance for a given workload means finding the instance family that most closely matches the CPU, disk I/O, and memory needs of your workload. Amazon RDS provides a wide selection of instances, which gives you lots of flexibility to right-size your resources to match capacity needs at the lowest cost.
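For illustration, a minimal boto3 sketch (hypothetical identifier) of right-sizing an existing RDS instance to a smaller instance class once monitoring shows it is over-provisioned:

```python
# Minimal sketch: move an over-provisioned RDS instance to a smaller class.
import boto3

rds = boto3.client("rds")

rds.modify_db_instance(
    DBInstanceIdentifier="example-primary",
    DBInstanceClass="db.t3.small",   # smaller class matched to observed load
    ApplyImmediately=False,          # apply during the next maintenance window
)
```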

The other options are incorrect:

“Use a Multi-Region Active-Active architecture” is incorrect. With the Multi-Region Active-Active solution, your workload is deployed to, and actively serving traffic from, multiple AWS Regions. AWS Customers use this approach to reduce latency for global users and achieve the highest level of availability. Using a Multi-Region Active-Active architecture will increase infrastructure costs, including Amazon RDS costs.

“Use a Multi-Region Active-Passive architecture” is incorrect. With Multi-Region Active-Passive architecture, your workload is deployed to two AWS Regions (a primary Region and a standby Region). In this architecture, user requests are served from the primary Region only. If the primary Region goes down because of a natural disaster or any other reason, the other Region will still be available and serve user requests. AWS customers use this approach for disaster recovery purposes. Using a Multi-Region Active-Passive architecture will increase infrastructure costs, including Amazon RDS costs.

“Combine On-demand Capacity Reservations with Savings Plans” is incorrect. When you combine On-Demand Capacity Reservations with Savings Plans, you can reduce costs significantly. However, On-Demand Capacity Reservations are available only for Amazon EC2. For more information about On-Demand Capacity Reservations, check this link: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-purchasing-options.html

7
Q

A company is developing a new application using a microservices framework. The new application is having performance and latency issues. Which AWS Service should be used to troubleshoot these issues?

  • AWS X-Ray
  • AWS CodePipeline
  • AWS Inspector
  • AWS CloudTrail
A

AWS X-Ray

Explanation

AWS X-Ray helps developers analyze and debug distributed applications in production or under development, such as those built using microservice architecture. With X-Ray, you can understand how your application and its underlying services are performing so you can identify and troubleshoot the root cause of performance issues and errors. X-Ray provides an end-to-end view of requests as they travel through your application, and shows a map of your application’s underlying components. You can use X-Ray to analyze both applications in development and in production, from simple three-tier applications to complex microservices applications consisting of thousands of services.
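For illustration, a minimal sketch using the X-Ray SDK for Python (the aws-xray-sdk package, assumed installed, with the X-Ray daemon or an equivalent runtime integration available); the service and subsegment names are hypothetical:

```python
# Minimal sketch: instrument downstream calls so they appear in the X-Ray
# service map, and record a custom subsegment around one unit of work.
import boto3
from aws_xray_sdk.core import xray_recorder, patch_all

patch_all()  # trace calls made through supported libraries such as boto3

# In Lambda/ECS with X-Ray enabled, segments are opened by the platform;
# here one is opened manually purely for illustration.
xray_recorder.begin_segment("checkout-service")
try:
    with xray_recorder.in_subsegment("load-catalog"):
        boto3.client("s3").list_buckets()   # example traced downstream call
finally:
    xray_recorder.end_segment()
```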

The other options are incorrect:

“AWS CodePipeline” is incorrect. AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates.

“AWS Inspector” is incorrect. Amazon Inspector helps you identify security vulnerabilities as well as deviations from security best practices in applications; it is not used for troubleshooting performance issues.

“AWS CloudTrail” is incorrect. CloudTrail is a service that allows you to track all users’ actions that are taken in your AWS account.

8
Q

Your application has recently experienced significant global growth, and international users are complaining of high latency. What is the AWS characteristic that can help improve your international users’ experience?

  • Elasticity
  • High Availability
  • Data Durability
  • Global Reach
A

Global Reach

Explanation

With AWS, you can deploy your application in multiple Regions around the world. Users can be routed to the Region that provides the lowest possible latency and the highest performance (for example, using Amazon Route 53 latency-based routing). You can also use the CloudFront service, which uses edge locations (located in most major cities across the world) to deliver content to your global users with low latency and high performance.

The other options are incorrect:

“High availability” is incorrect. High Availability can be achieved by deploying your application in multiple Availability Zones within a single Region. If one Availability Zone goes down, the others can handle user requests. This may not reduce latency to your international users. In other words, the application will be available for them all the time, but with high latency.

“Elasticity” is incorrect. Elasticity refers to the ability of a system to scale the underlying resources up when demand increases (to maintain performance), or scale down when demand decreases (to reduce costs). This option does not indicate whether your resources will be deployed in a single Region or multiple Regions.

“Data durability” is incorrect. Durability refers to the ability of a system to ensure that data, once stored, remains intact and consistent as long as it is not changed by legitimate access. This means that data should not become corrupted or disappear due to a system malfunction. Durability is used to measure the likelihood of data loss. For example, assume you have confidential data stored on your laptop. If you make a copy of it and store it in a secure place, you have just improved the durability of that data: it is much less likely that all copies will be destroyed simultaneously.

Data durability can be achieved by replicating data across multiple Availability Zones within a single Region. For example, the S3 Standard tier is designed for 99.999999999% durability. This means that if you store 100 billion objects in S3, you can expect to lose, on average, only a single object per year.

9
Q

Which of the following are important design principles you should adopt when designing systems on AWS? (Choose TWO)

  • Remove single point of failure
  • Treat servers as fixed resources
  • Always use Global services in your architecture rather than Regional services
  • Always choose to pay as you go
  • Automate wherever possible
A

Remove single point of failure
Automate wherever possible

Explanation

A single point of failure (SPOF) is a part of a system that, if it fails, will stop the entire system from working. You can remove single points of failure by assuming everything will fail and designing your architecture to automatically detect and react to failures. For example, configuring and deploying an auto-scaling group of EC2 instances will ensure that if one or more of the instances crashes, Auto-scaling will automatically replace them with new instances. You should also introduce redundancy to remove single points of failure, by deploying your application across multiple Availability Zones. If one Availability Zone goes down for any reason, the other Availability Zones can serve requests.

AWS helps you use automation so you can build faster and more efficiently. Using AWS services, you can automate manual tasks or processes such as deployments, development & test workflows, container management, and configuration management.
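For illustration, a minimal boto3 sketch (hypothetical launch template and subnet IDs) of an Auto Scaling group spread across two Availability Zones, combining redundancy with automated replacement of failed instances:

```python
# Minimal sketch: an Auto Scaling group across two AZs, so a crashed instance
# or a failed AZ no longer takes the whole application down.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="example-web-asg",
    LaunchTemplate={"LaunchTemplateName": "example-web-template", "Version": "$Latest"},
    MinSize=2,
    MaxSize=6,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",  # subnets in two AZs
    HealthCheckType="EC2",          # automatically replace unhealthy instances
)
```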

The other options are incorrect:

“Always choose to pay as you go” is incorrect. AWS has other payment models that can save you more money depending on your use case. For example, if your application has steady-state usage, you can use reservations for Amazon RDS and Amazon EC2 instances to reduce your overall costs significantly.

“Treat servers as fixed resources” is incorrect. AWS enables you to treat your servers as disposable resources, not fixed resources. This means that if any issue occurs with a server, you can simply replace it with a new one (rather than trying to fix it).

“Always use Global services in your architecture rather than Regional services” is incorrect. AWS services/resources are either global, Regional, or specific to an Availability Zone. Among all the services/resources that AWS offers, only a few are considered global services. Examples of AWS global services include Amazon CloudFront, AWS Identity and Access Management, Amazon Route 53, and AWS WAF. There is no way to build your AWS environment without using Regional services such as Amazon VPC, Amazon RDS, AWS Lambda, and Amazon EFS, or zonal resources (specific to an Availability Zone) such as Amazon EC2 instances and Amazon EBS volumes.

10
Q

Which of the following EC2 instance purchasing options supports the Bring Your Own License (BYOL) model for almost every BYOL scenario?

  • Dedicated Hosts
  • On-demand Instances
  • Dedicated Instances
  • Reserved Instances
A

Dedicated Hosts

Explanation

You have a variety of options for using new and existing Microsoft software licenses on the AWS Cloud. By purchasing Amazon Elastic Compute Cloud (Amazon EC2) or Amazon Relational Database Service (Amazon RDS) license-included instances, you get new, fully compliant Windows Server and SQL Server licenses from AWS. The BYOL model enables AWS customers to use their existing server-bound software licenses, including Windows Server, SQL Server, and SUSE Linux Enterprise Server.

Your existing licenses may be used on AWS with Amazon EC2 Dedicated Hosts, Amazon EC2 Dedicated Instances or EC2 instances with default tenancy using Microsoft License Mobility through Software Assurance.

Dedicated Hosts provide additional control over your instances and visibility into Host level resources and tooling that allows you to manage software that consumes licenses on a per-core or per-socket basis, such as Windows Server and SQL Server. This is why most BYOL scenarios are supported through the use of Dedicated Hosts, while only certain scenarios are supported by Dedicated Instances.
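For illustration, a minimal boto3 sketch (hypothetical AMI ID; instance type and Availability Zone are illustrative) that allocates a Dedicated Host and launches a BYOL instance onto it with host tenancy:

```python
# Minimal sketch: allocate a Dedicated Host and launch an instance onto it,
# so existing per-socket/per-core licenses can be used on that host.
import boto3

ec2 = boto3.client("ec2")

hosts = ec2.allocate_hosts(
    InstanceType="m5.large",
    AvailabilityZone="us-east-1a",
    Quantity=1,
)

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",     # hypothetical BYOL Windows AMI
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    Placement={
        "Tenancy": "host",
        "HostId": hosts["HostIds"][0],
    },
)
```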

The other options are incorrect:

“Dedicated Instances” is incorrect. Dedicated Hosts is recommended for most BYOL scenarios for the reasons we mentioned above.

“On-demand Instances” and “Reserved Instances” are incorrect. On-demand instances and Reserved instances don’t support the Bring Your Own License (BYOL) model.

11
Q

What is the AWS service that enables you to manage all of your AWS accounts from a single master account?

  • AWS Trusted Advisor
  • AWS Organizations
  • Amazon Config
  • AWS WAF
A

AWS Organizations

Explanation

AWS Organizations is an account management service that enables you to consolidate multiple AWS accounts into an organization that you create and centrally manage.

AWS Organizations enables the following capabilities:

1- Automate AWS account creation and management (see the sketch after this list)
2- Consolidate billing across multiple AWS accounts
3- Govern access to AWS services, resources, and regions
4- Centrally manage access policies across multiple AWS accounts
5- Configure AWS services across multiple accounts
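For illustration, a minimal boto3 sketch of the first capability (hypothetical email and account name; this must run from the management account):

```python
# Minimal sketch: create an organization, then programmatically create a
# member account from the management (master) account.
import boto3

org = boto3.client("organizations")

# One-time setup: turn the current account into the management account.
org.create_organization(FeatureSet="ALL")

# Capability 1: automate account creation.
response = org.create_account(
    Email="dev-team@example.com",
    AccountName="example-dev-account",
)
print(response["CreateAccountStatus"]["State"])   # e.g. IN_PROGRESS
```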

The other options are incorrect:

“AWS Trusted Advisor” is incorrect. AWS Trusted Advisor offers a rich set of best practice checks and recommendations across five categories: cost optimization; security; fault tolerance; performance; and service limits (also referred to as Service quotas).

“Amazon Config” is incorrect. AWS Config (referred to here as “Amazon Config”) is used to record and evaluate the configurations of your AWS resources.

“AWS WAF” is incorrect. AWS WAF is a web application firewall that helps protect your web applications from common web exploits.

12
Q

Which of the following should be considered when performing a TCO analysis to compare the costs of running an application on AWS instead of on-premises?

  • Application development
  • Physical hardware
  • Business analysis
  • Market research
A

Physical hardware

Explanation

Weighing the financial considerations of owning and operating a data center facility versus employing a cloud infrastructure requires detailed and careful analysis. The Total Cost of Ownership (TCO) is often the financial metric used to estimate and compare the costs of a product or a service. When comparing AWS with on-premises TCO, customers should consider all costs of owning and operating a data center. Examples of these costs include facilities, physical servers, storage devices, networking equipment, cooling and power consumption, data center space, and IT labor costs.

The other options are incorrect.

“Application development” is incorrect. Application development is the process of creating a program or a set of programs to perform the different tasks that a business requires. Application development is a separate process that customers need to perform regardless of whether they will be using AWS or an on-premises data center. Application development is not part of the total cost of owning and operating a data center (TCO), and thus is an incorrect answer.

“Market Research” is incorrect. Market research is an organized effort to gather information about target audience and customers to determine how viable a product or service might be. Market research is a separate process that customers need to perform regardless of whether they will be using AWS or an on-premises data center.

“Business analysis” is incorrect. Business analysis is a multistage process aimed at identifying business needs and determining solutions to business problems. Business analysis is a separate process that customers need to perform regardless of whether they will be using AWS or an on-premises data center.

References:

https://aws.amazon.com/blogs/publicsector/cloud-economics-value-tco-assessment/

13
Q

Under the Shared Responsibility Model, which of the following controls do customers fully inherit from AWS? (Choose TWO)

  • Patch management controls
  • Database controls
  • Awareness & Training
  • Environmental controls
  • Physical controls
A

Environmental controls
Physical controls

Explanation

AWS is responsible for physical controls and environmental controls. Customers inherit these controls from AWS.

As mentioned in the AWS Shared Responsibility Model page, Inherited Controls are controls which a customer fully inherits from AWS such as physical controls and environmental controls.

As a customer deploying an application on AWS infrastructure, you inherit security controls pertaining to the AWS physical, environmental and media protection, and no longer need to provide a detailed description of how you comply with these control families.

For example: Let’s say you have built an application in AWS for customers to securely store their data. But your customers are concerned about the security of the data and ensuring compliance requirements are met. To address this, you assure your customer that “our company does not host customer data in its corporate or remote offices, but rather in AWS data centers that have been certified to meet industry security standards.” That includes physical and environmental controls to secure the data, which is the responsibility of Amazon. Companies do not have physical access to the AWS data centers, and as such, they fully inherit the physical and environmental security controls from AWS.

You can read more about AWS’ data center controls here:

https://aws.amazon.com/compliance/data-center/controls/

The other options are incorrect:

“Patch management controls” is incorrect. Patch Management belongs to the shared controls. AWS is responsible for patching the underlying hosts and fixing flaws within the infrastructure, but customers are responsible for patching their guest OS and applications.

“Database controls” is incorrect. Database controls belong to the shared controls. AWS maintains the configuration of its infrastructure devices that run the database, but customers are responsible for configuring their own databases and applications.

“Awareness & Training” is incorrect. Awareness & Training belongs to the shared controls. AWS trains AWS employees, but customers must train their own employees.

References:

https://aws.amazon.com/compliance/shared-responsibility-model/

14
Q

Which AWS services can be used to improve the performance of a global application and reduce latency for its users? (Choose TWO)

  • AWS Direct Connect
  • AWS Global Accelerator
  • Amazon CloudFront
  • AWS KMS
  • AWS Glue
A

AWS Global Accelerator
Amazon CloudFront

Explanation

AWS Global Accelerator and CloudFront are two separate services that use the AWS global network and its edge locations around the world. Amazon CloudFront improves performance for global applications by caching content at the closest Edge Location to end-users. AWS Global Accelerator improves performance for global applications by routing end-user requests to the closest AWS Region. Amazon CloudFront improves performance for both cacheable (e.g., images and videos) and dynamic content (e.g. dynamic site delivery). Global Accelerator is a good fit for specific use cases, such as gaming, IoT or Voice over IP.

Note: AWS Global Accelerator does not cache content at edge locations like Amazon CloudFront. AWS Global Accelerator uses the AWS edge locations to receive end-user requests and then routes these requests to the closest AWS Region over the AWS global network.

The other options are incorrect:

“AWS KMS” is incorrect. AWS KMS is a key management service that makes it easy for you to create and manage encryption keys and control their use across a wide range of AWS services and in your applications.

“AWS Direct Connect” is incorrect. AWS Direct Connect is a cloud service solution that is used to establish a dedicated network connection from your premises to AWS.

“AWS Glue” is incorrect. AWS Glue is a fully-managed, Extract, Transform, and Load (ETL) service that automates the time-consuming steps of data preparation for analytics.

Extract, Transform, and Load (ETL) is the process of extracting (collecting) data from various sources (different databases, for example), transforming the data according to business rules and needs (preparing it for analytics and decision making), and loading the data into a destination database, often a data warehouse.

References:

https://aws.amazon.com/cloudfront/
https://aws.amazon.com/global-accelerator/features/

15
Q

Which of the below is a best-practice when building applications on AWS?

  • Strengthen physical security by applying the principle of least privilege
  • Use IAM policies to maintain performance
  • Ensure that the application runs on hardware from trusted vendors
  • Decouple the components of the application so that they run independently
A

Decouple the components of the application so that they run independently

Explanation

An application should be designed in a way that reduces interdependencies between its components. A change or a failure in one component should not cascade to other components. If the components of an application are tightly-coupled (interconnected) and one component fails, the entire application will also fail. Amazon SQS and Amazon SNS are powerful tools that help you build loosely-coupled applications. SQS and SNS can be integrated together to decouple application components so that they run independently, increasing the overall fault tolerance of the application.

Understanding how SQS and SNS work in detail is not required at the Cloud Practitioner level, but let's take a simple example. Say you have two components in your application, component A and component B. Component A sends messages (jobs) to component B to process. Now, what happens if component A sends a large number of messages at the same time? Component B will fail, and the entire application will fail. SQS acts as a middleman: it receives and stores messages from component A, and component B pulls and processes messages at its own pace. This way, both components run independently of each other. A minimal sketch of this pattern is shown below.
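A minimal boto3 sketch of that example (hypothetical queue name; not required knowledge for the exam):

```python
# Minimal sketch: component A sends messages to an SQS queue; component B
# polls the queue and processes messages at its own pace.
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="example-jobs")["QueueUrl"]

# Component A: producer
sqs.send_message(QueueUrl=queue_url, MessageBody="process-order-1234")

# Component B: consumer, pulling messages independently
messages = sqs.receive_message(
    QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=10
)
for message in messages.get("Messages", []):
    print("processing:", message["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```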

The other options are incorrect:

“Ensure that the application runs on hardware from trusted vendors” is incorrect. Choosing a specific hardware vendor is not available in AWS.

“Use IAM policies to maintain performance” is incorrect. There is no relation between IAM policies and performance. IAM policies are used to grant users permission to perform specific actions on AWS.

“Strengthen physical security by applying the principle of least privilege” is incorrect. Physical security is the sole responsibility of AWS.

Additional information:

AWS provides physical data center access only to approved employees. All employees who need data center access must first apply for access and provide a valid business justification. These requests are granted based on the principle of least privilege, where requests must specify to which layer of the data center the individual needs access, and are time-bound. Requests are reviewed and approved by authorized personnel, and access is revoked after the requested time expires.

The principle of least privilege does not only apply to control physical access. AWS customers should also apply this principle when granting permissions to IAM users. In other words, AWS customers should grant IAM users only the permissions they need to perform a task and nothing more.

References:

https://aws.amazon.com/microservices/
https://aws.amazon.com/sns/
https://aws.amazon.com/sqs/

16
Q

An organization runs many systems and uses many AWS products. Which of the following services enables them to control how each developer interacts with these products?

  • Network Access Control Lists
  • Amazon RDS
  • AWS Identity and Access Management
  • Amazon EMR
A

AWS Identity and Access Management

Explanation

AWS Identity and Access Management (IAM) is a web service for securely controlling access to AWS services. With IAM, you can centrally manage users, security credentials such as access keys, and permissions that control which AWS resources users and applications can access.

The other options are incorrect:

“Amazon RDS” is incorrect. Amazon RDS is a managed relational database service; it is not used to control developers’ access to AWS products.

“Network Access Control Lists” is incorrect. Network Access Control Lists (NACLs) are a VPC feature that allows you to control traffic at the subnet level.

“Amazon EMR” is incorrect. Amazon EMR is used to run and scale Apache Spark, Hadoop, HBase, Presto, Hive, and other big data frameworks.

References:
https://aws.amazon.com/iam/

17
Q

Using Amazon EC2 falls under which of the following cloud computing models?

  • SaaS
  • PaaS
  • IaaS
  • IaaS & SaaS
A

IaaS

Explanation

Infrastructure as a Service (IaaS) contains the basic building blocks for cloud IT and typically provides access to networking features, computers (virtual or on dedicated hardware), and data storage space. Infrastructure as a Service gives you the highest level of flexibility and management control over your IT resources and is most similar to the existing IT resources that many IT departments and developers are familiar with today.

For example, a service such as Amazon Elastic Compute Cloud (Amazon EC2) is categorized as Infrastructure as a Service (IaaS) and requires the customer to perform all of the configuration and management tasks. Customers that deploy an Amazon EC2 instance are responsible for management of the guest operating system (including updates and security patches), any application software or utilities installed by the customer on the instances, and the configuration of the AWS-provided firewall (called a security group) on each instance.
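For illustration, a minimal boto3 sketch (hypothetical VPC ID) of one of those customer-side IaaS tasks, configuring the security group to allow only inbound HTTPS:

```python
# Minimal sketch: the customer configures the instance-level firewall
# (security group) themselves under the IaaS model.
import boto3

ec2 = boto3.client("ec2")

sg = ec2.create_security_group(
    GroupName="example-web-sg",
    Description="Allow inbound HTTPS only",
    VpcId="vpc-0123456789abcdef0",
)

ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],   # open HTTPS to the internet
    }],
)
```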

The other options are incorrect:

1- Platform as a Service (PaaS) removes the need for your organization to manage the underlying infrastructure (usually hardware and operating systems) and allows you to focus on the deployment and management of your applications. This helps you be more efficient as you don’t need to worry about resource procurement, capacity planning, software maintenance, patching, or any of the other undifferentiated heavy lifting involved in running your application. A common example of a PaaS platform is the AWS Elastic Beanstalk service. Developers simply upload their application, and Elastic Beanstalk automatically handles the deployment details of capacity provisioning, load balancing, auto-scaling, and application health monitoring.

2- Software as a Service (SaaS) provides you with a completed product that is run and managed by the service provider. In most cases, people referring to Software as a Service mean end-user applications. With a SaaS offering, you do not have to think about how the service is maintained or how the underlying infrastructure is managed; you only need to think about how you will use that particular piece of software. A common example of a SaaS application is web-based email, where you can send and receive email without having to manage feature additions to the email product or maintain the servers and operating systems that the email program is running on.

References:
https://d1.awsstatic.com/whitepapers/aws-overview.pdf

18
Q

Savings Plans are available for which of the following AWS compute services? (Choose TWO)

  • AWS Outposts
  • Amazon Lightsail
  • Amazon EC2
  • AWS Batch
  • AWS Lambda
A

Amazon EC2
AWS Lambda

Explanation

Savings Plans are a flexible pricing model that offers low prices on EC2, Lambda, and Fargate usage, in exchange for a commitment to a consistent amount of usage (measured in $/hour) for a 1 or 3 year term. When you sign up for Savings Plans, you will be charged the discounted Savings Plans price for your usage up to your commitment. For example, if you commit to $10 of compute usage an hour, you will get the Savings Plans prices on that usage up to $10 and any usage beyond the commitment will be charged On Demand rates.

Additional information:

What is the difference between Amazon EC2 Savings Plans and Amazon EC2 Reserved instances?

Reserved Instances are a billing discount applied to the use of On-Demand Compute Instances in your account. These On-Demand Instances must match certain attributes, such as instance type and Region to benefit from the billing discount.

For example, let's say you have a t2.medium instance running as an On-Demand Instance and you purchase a Reserved Instance that matches the configuration of this particular t2.medium instance. At the time of purchase, the billing mode for the existing instance changes to the discounted Reserved Instance rate. The existing t2.medium instance doesn't need to be replaced or migrated to get the discount.

After the reservation expires, the instance is charged as an On-Demand Instance. You can repurchase the Reserved Instance to continue the discounted rate on your instance. Reserved Instances act as an automatic discount on new or existing On-Demand Instances in your account.

Savings Plans also offer significant savings on your Amazon EC2 costs compared to On-Demand Instance pricing. With Savings Plans, you make a commitment to a consistent usage amount, measured in USD per hour. This provides you with the flexibility to use the instance configurations that best meet your needs, instead of making a commitment to a specific instance configuration (as is the case with reserved instances). For example, with Compute Savings Plans, if you commit to $10 of compute usage an hour, you can use as many instances as you need (of any type) and you will get the Savings Plans prices on that usage up to $10 and any usage beyond the commitment will be charged On Demand rates.

The other options are incorrect:

“AWS Batch” is incorrect. Savings Plans are not available for AWS Batch.

AWS Batch is a compute service that allows you to run hundreds of thousands of batch computing jobs on AWS. AWS Batch dynamically provisions the optimal quantity and type of compute resources (e.g., CPU or memory optimized instances) based on the volume and specific resource requirements of the batch jobs submitted.

“AWS Outposts” is incorrect. Savings Plans are not available for AWS Outposts.

AWS Outposts is an AWS service that delivers the same AWS infrastructure, native AWS services, APIs, and tools to virtually any customer on-premises facility. With AWS Outposts, customers can run AWS services locally on their Outpost, including EC2, EBS, ECS, EKS, and RDS, and also have full access to services available in the Region.

Customers can use AWS Outposts to securely store and process data that needs to remain on-premises or in countries where there is no AWS region. AWS Outposts is ideal for applications that have low latency or local data processing requirements, such as financial services, healthcare, etc.

“Amazon Lightsail” is incorrect. Savings Plans are not available for Amazon Lightsail.

Amazon Lightsail provides a low-cost Virtual Private Server (VPS) in the cloud.

References:
https://aws.amazon.com/savingsplans/

19
Q

Which of the following describes the payment model that AWS makes available for customers who consistently use Amazon EC2 over a 3-year term to reduce their total computing costs?

  • Pay less by using more
  • Pay less as AWS grows
  • Save when you commit
  • Pay as you go
A

Save when you commit

Explanation

For customers who can commit to using EC2 over a one- or three-year term, it is better to use Amazon EC2 Reserved Instances or AWS Savings Plans. Reserved Instances and AWS Savings Plans provide a significant discount (up to 72%) compared to On-Demand Instance pricing.

The other options are incorrect:

“Pay as you go” is incorrect. Reserved Instances and AWS Savings Plans provide a significant discount (up to 72%) compared to On-Demand (pay-as-you-go) instance pricing.

“Pay less as AWS grows” is incorrect. Pay less as AWS grows refers to the discounts that you get over time as AWS grows. This is sometimes called “AWS economies of scale”. For example, AWS has reduced the per-GB storage price of S3 by 80% since the service was first introduced in 2006.

“Pay less by using more” is incorrect. “Pay less by using more” means that you get volume-based discounts as your usage increases. For services such as S3, pricing is tiered, meaning the more you use, the less you pay per GB. A worked example of tiered pricing is sketched below.
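The sketch below works through tiered pricing in Python; the tier sizes and prices are purely illustrative, not actual AWS rates:

```python
# Worked example of tiered ("pay less by using more") pricing.
# Tier sizes and per-GB prices are illustrative only.
TIERS = [
    (50_000, 0.023),        # first 50 TB (in GB) at $0.023/GB
    (450_000, 0.022),       # next 450 TB at $0.022/GB
    (float("inf"), 0.021),  # everything above 500 TB at $0.021/GB
]

def monthly_storage_cost(total_gb: float) -> float:
    """Apply each tier's price only to the usage that falls within that tier."""
    cost, remaining = 0.0, total_gb
    for tier_size, price_per_gb in TIERS:
        used = min(remaining, tier_size)
        cost += used * price_per_gb
        remaining -= used
        if remaining <= 0:
            break
    return cost

# The average price per GB falls as usage grows:
for gb in (10_000, 100_000, 1_000_000):
    print(gb, round(monthly_storage_cost(gb) / gb, 5))
```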

References:
https://aws.amazon.com/pricing/

20
Q

Which of the following AWS services are designed with native Multi-AZ fault tolerance in mind? (Choose TWO)

  • Amazon DynamoDB
  • Amazon EBS
  • Amazon Redshift
  • Amazon Simple Storage Service
  • AWS Snowball
A

Amazon DynamoDB
Amazon Simple Storage Service

Explanation

The Multi-AZ principle involves deploying an AWS resource in multiple Availability Zones to achieve high availability for that resource.

DynamoDB automatically spreads the data and traffic for your tables over a sufficient number of servers to handle your throughput and storage requirements, while maintaining consistent and fast performance. All of your data is stored on solid-state disks (SSDs) and is automatically replicated across multiple Availability Zones in an AWS Region, providing built-in fault tolerance in the event of a server failure or Availability Zone outage.

Amazon S3 provides durable infrastructure to store important data and is designed for durability of 99.999999999% of objects. Data in all Amazon S3 storage classes is redundantly stored across multiple Availability Zones (except S3 One Zone-IA).

The other options are incorrect:

“Amazon Redshift” is incorrect. Currently, Amazon Redshift only supports Single-AZ deployments.

“AWS Snowball” is incorrect. AWS Snowball is a data transport solution that accelerates moving terabytes to petabytes of data into and out of AWS using storage devices designed to be secure for physical transport.

“Amazon EBS” is incorrect. Amazon EBS volume data is replicated across multiple servers within the same Availability Zone.

Note:

Amazon EFS data is redundantly stored across multiple Availability Zones providing better durability compared to EBS volumes.

References:
https://aws.amazon.com/dynamodb/
https://aws.amazon.com/s3/storage-classes/

21
Q

An organization needs to analyze and process a large number of data sets. Which AWS service should they use?

  • Amazon EMR
  • Amazon SQS
  • Amazon MQ
  • Amazon SNS
A

Amazon EMR

Explanation

Amazon EMR helps you analyze and process vast amounts of data by distributing the computational work across a cluster of virtual servers running in the AWS Cloud. The cluster is managed using an open-source framework called Hadoop. Amazon EMR lets you focus on crunching or analyzing your data without having to worry about time-consuming setup, management, and tuning of Hadoop clusters or the compute capacity they rely on.
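For illustration, a minimal boto3 sketch (hypothetical names; the default EMR service roles are assumed to already exist in the account) that launches a small Hadoop/Spark cluster:

```python
# Minimal sketch: launch a small managed Hadoop/Spark cluster for data processing.
import boto3

emr = boto3.client("emr")

cluster = emr.run_job_flow(
    Name="example-analytics-cluster",
    ReleaseLabel="emr-6.10.0",                 # illustrative EMR release
    Applications=[{"Name": "Hadoop"}, {"Name": "Spark"}],
    Instances={
        "InstanceGroups": [
            {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"InstanceRole": "CORE", "InstanceType": "m5.xlarge", "InstanceCount": 2},
        ],
        "KeepJobFlowAliveWhenNoSteps": True,
    },
    JobFlowRole="EMR_EC2_DefaultRole",          # assumed pre-existing default roles
    ServiceRole="EMR_DefaultRole",
)
print(cluster["JobFlowId"])
```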

All other options are AWS messaging services.

References:
https://aws.amazon.com/emr/

22
Q

A company needs to host a database in Amazon RDS for at least three years. Which of the following options would be the most cost-effective solution?

  • Spot Instances
  • Reserved Instances - no upfront
  • Reserved Instances - partial upfront
  • On-Demand Instances
A

Reserved Instances - partial upfront

Explanation

Since the database server will be hosted for a period of at least three years, then it is better to use the RDS Reserved Instances as it provides you with a significant discount compared to the On-Demand Instance pricing for the DB instance.

With the Partial Upfront option, you make a low upfront payment and are then charged a discounted hourly rate for the instance for the duration of the Reserved Instance term. The Partial Upfront option is more cost-effective than the No Upfront option (the more you spend upfront, the more you save).

The other options are incorrect:

“Spot Instances” is incorrect. Spot Instances is an option for EC2; there is no Spot option for RDS.

“Reserved instances - No Upfront” is incorrect. The No Upfront option does not require any upfront payment and provides a discounted hourly rate for the duration of the term. The Partial Upfront option provides more discounts than the No Upfront option because you spend more upfront.

“On-Demand instances” is incorrect. On-Demand is not a cost-effective solution.

References:
https://aws.amazon.com/rds/reserved-instances/

23
Q

Which of the following is equivalent to a user name and password and is used to authenticate your programmatic access to AWS services and APIs?

  • Instance password
  • Key pairs
  • MFA
  • Access keys
A

Access keys

Explanation

Access keys consist of two parts: an access key ID and a secret access key. You must provide your AWS access keys to make programmatic requests to AWS or to use the AWS Command Line Interface or AWS Tools for PowerShell. Like a user name and password, you must use both the access key ID and secret access key together to authenticate your requests.
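For illustration, a minimal boto3 sketch showing how an access key pair signs programmatic requests; the key values are placeholders, and in practice keys are normally loaded from environment variables or ~/.aws/credentials rather than hard-coded:

```python
# Minimal sketch: access keys authenticate programmatic requests, just as a
# user name and password authenticate console sign-in. Key values are placeholders.
import boto3

session = boto3.Session(
    aws_access_key_id="AKIAEXAMPLEEXAMPLE",             # access key ID (placeholder)
    aws_secret_access_key="example-secret-access-key",  # secret access key (placeholder)
    region_name="us-east-1",
)

s3 = session.client("s3")
print([b["Name"] for b in s3.list_buckets()["Buckets"]])  # request signed with the keys
```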

The other options are incorrect:

“MFA” is incorrect. MFA is an additional security layer that can be used to secure your AWS console. MFA can also be used to control access to AWS service APIs.
“Instance Password” is incorrect. An instance’s operating system password is not used to authenticate programmatic access to AWS services and APIs.
“Key pairs” is incorrect. The AWS key pair is used to securely connect to your Amazon EC2 instances.

References:
https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html

24
Q

Using Amazon RDS falls under the shared responsibility model. Which of the following are customer responsibilities? (Choose TWO)

  • Managing the database settings
  • Installing the database software
  • Building the relational database schema
  • Patching the database software
  • Performing backups

A

Managing the database settings
Building the relational database schema

Explanation

Amazon RDS manages the work involved in setting up a relational database, from provisioning the infrastructure capacity you request to installing the database software. Once your database is up and running, Amazon RDS automates common administrative tasks such as performing backups and patching the software that powers your database. With optional Multi-AZ deployments, Amazon RDS also manages synchronous data replication across Availability Zones with automatic failover. Since Amazon RDS provides native database access, you interact with the relational database software as you normally would. This means you’re still responsible for managing the database settings that are specific to your application. You’ll need to build the relational schema that best fits your use case and are responsible for any performance tuning to optimize your database for your application’s workflow.

The other options are incorrect:

“Installing the database software” is incorrect. Installing the database software is AWS’ responsibility.
“Performing backups” is incorrect. Performing backups is AWS’ responsibility.
“Patching the database software” is incorrect. Patching the database software is AWS’ responsibility.

References:

https://aws.amazon.com/rds/faqs/
https://aws.amazon.com/compliance/shared-responsibility-model/

25
Q

Which of the following will impact the price paid for an EC2 instance? (Choose TWO)

  • The Availability Zone where the instance is provisioned
  • Load balancing
  • Number of buckets
  • Number of private IPs
  • Instance type

A

Load balancing
Instance type

Explanation

EC2 instance pricing varies depending on many variables:

  • The buying option (On-demand, Savings Plans, Reserved, Spot, Dedicated)
  • Selected instance type
  • Selected Region
  • Number of instances
  • Load balancing
  • Allocated Elastic IP Addresses

Load balancing: The number of hours the Elastic Load Balancer runs and the amount of data it processes contribute to the EC2 monthly cost.

Instance type: Amazon EC2 provides a wide selection of instance types optimized to fit different use cases. Instance types comprise varying combinations of CPU, memory, storage, and networking capacity.

The other options are incorrect:

“The Availability Zone where the instance is provisioned” is incorrect. Prices of the Amazon EC2 instances may vary depending on the Region where the instances are provisioned. Amazon EC2 instances provisioned in different Availability Zones within the same Region have the same price.

“Number of private IPs” is incorrect. There is no charge for private IPs.

Additional information:

The number of allocated Elastic IPs is the factor that may affect Amazon EC2 charges. To ensure efficient use of Elastic IP addresses, AWS imposes a small hourly charge if an Elastic IP address is not associated with a running instance, or if it is associated with a stopped instance. While the instance is running, you are not charged for one Elastic IP address associated with the instance, but additional Elastic IPs are not free.

“Number of buckets” is incorrect. A bucket is an Amazon S3 resource, not an Amazon EC2 resource.

Additional information:

To upload your data (photos, videos, documents, etc.) to Amazon S3, you must first create an S3 bucket (which is like a file folder) in one of the AWS Regions. You can then upload any number of objects to the bucket. The customer is charged based on the total size of the objects (in GB) stored in their S3 bucket, not for the bucket itself.

References:
https://docs.aws.amazon.com/whitepapers/latest/how-aws-pricing-works/how-aws-pricing-works.pdf

26
Q

A company has created a solution that helps AWS customers improve their architectures on AWS. Which AWS program may support this company?

  • AWS TAM
  • APN Consulting Partners
  • APN Technology Partners
  • AWS Professional Services

A

APN Consulting Partners

Explanation

APN Consulting Partners are professional services firms that help customers design, architect, build, migrate, and manage their workloads and applications on AWS. Consulting Partners include System Integrators, Strategic Consultancies, Agencies, Managed Service Providers, and Value-Added Resellers. AWS supports the APN Consulting Partners by providing a wide range of resources and training to support their customers.

The other options are incorrect:

“APN Technology Partners” is incorrect. APN Technology Partners provide software solutions that are either hosted on, or integrated with, the AWS platform. APN Technology Partners include Independent Software Vendors (ISVs), SaaS, PaaS, Developer Tools, Management and Security Vendors.

“AWS Professional Services” is incorrect. AWS Professional Services shares a collection of offerings to help you achieve specific outcomes related to enterprise cloud adoption. AWS Professional Services also trains your team with specialized skills and provides global specialty practices to support your efforts in focused areas of enterprise cloud computing.

“AWS TAM” is incorrect. A Technical Account Manager (TAM) is your designated technical point of contact who provides advocacy and guidance to help plan and build solutions using best practices and proactively keep your AWS environment operationally healthy. TAM is available only for two AWS support plans: Enterprise On-Ramp and Enterprise.

References:
https://d1.awsstatic.com/whitepapers/aws-partners-customers-work-together-website.pdf

27
Q

A company has deployed a new web application on multiple Amazon EC2 instances. Which of the following should they use to ensure that the incoming HTTP traffic is distributed evenly across the instances?

  • AWS Application Load Balancer
  • AWS Network Load Balancer
  • AWS Gateway Load Balancer
  • AWS Auto Scaling
A

AWS Application Load Balancer

Explanation

Elastic Load Balancing automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, IP addresses, and Lambda functions. Elastic Load Balancing supports four types of load balancers (Application Load Balancer, Network Load Balancer, Gateway Load Balancer, and Classic Load Balancer). You can select the appropriate load balancer based on your application needs.

1- If you need to load balance HTTP/HTTPS requests, AWS recommends using the AWS Application Load Balancer.
2- For network/transport protocol (layer 4 – TCP, UDP) load balancing and for extreme performance/low latency applications, AWS recommends using the AWS Network Load Balancer.
3- To manage and distribute traffic across multiple third-party virtual appliances, AWS recommends using the AWS Gateway Load Balancer.
4- If you have an existing application built within the EC2-Classic network, you should use the AWS Classic Load Balancer.

Application Load Balancer is best suited for load balancing of HTTP and HTTPS traffic. In our case, the application receives HTTP traffic. Hence, the Application Load Balancer is the correct answer here.
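For illustration, a minimal boto3 sketch (hypothetical subnet, VPC, and instance IDs) of an internet-facing Application Load Balancer forwarding HTTP traffic to a target group of EC2 instances:

```python
# Minimal sketch: an internet-facing ALB spreading HTTP traffic across
# EC2 instances registered in a target group.
import boto3

elbv2 = boto3.client("elbv2")

lb = elbv2.create_load_balancer(
    Name="example-web-alb",
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],   # subnets in two AZs
    Scheme="internet-facing",
    Type="application",
)

tg = elbv2.create_target_group(
    Name="example-web-targets",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",
    TargetType="instance",
)

elbv2.register_targets(
    TargetGroupArn=tg["TargetGroups"][0]["TargetGroupArn"],
    Targets=[{"Id": "i-0aaa1111"}, {"Id": "i-0bbb2222"}],
)

elbv2.create_listener(
    LoadBalancerArn=lb["LoadBalancers"][0]["LoadBalancerArn"],
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": tg["TargetGroups"][0]["TargetGroupArn"],
    }],
)
```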

The other options are incorrect:

“AWS Network Load Balancer” is incorrect. The traffic comes to the instances through HTTP. Network Load Balancer is best suited for load balancing of TCP and UDP traffic.

“AWS Gateway Load Balancer” is incorrect. AWS Gateway Load Balancer is used to manage and distribute traffic across multiple third-party virtual appliances.

Gateway Load Balancer helps you easily deploy, scale, and manage third-party virtual appliances such as firewalls, Anti-malware, deep packet inspection systems, and intrusion detection and prevention systems.

Gateway Load Balancer gives you one gateway for distributing traffic across multiple virtual appliances while scaling them up or down, based on demand. This decreases potential points of failure in your network and increases availability.

Additional information:

What is a virtual appliance?

A virtual appliance sits in line with network traffic and inspects incoming and outgoing traffic flows. These can be firewalls, deep packet inspection systems, inline analytics, or other inline functions that have traditionally been referred to as network appliances. AWS Customers can find, test, and buy virtual appliances from third-party vendors directly in AWS Marketplace. This integrated experience streamlines the deployment process, so customers see value from their virtual appliances more quickly.

“AWS Auto Scaling” is incorrect. AWS Auto Scaling is not for distributing traffic. AWS Auto Scaling monitors your applications and automatically adjusts capacity (up or down) to maintain steady, predictable performance at the lowest possible cost.

References:
https://d1.awsstatic.com/whitepapers/aws-overview.pdf

28
Q

What are the default security credentials that are required to access the AWS management console for an IAM user account?

  • Access keys
  • MFA
  • A username and password
  • Security tokens
A

A username and password

Explanation

The AWS Management Console allows you to access and manage Amazon Web Services through a simple and intuitive web-based user interface. You can only access the AWS management console if you have a valid user name and password.

The other options are incorrect:

“MFA” is incorrect. MFA is an additional layer of security (i.e. not required).

Although MFA is not required to access IAM user accounts, it is recommended to set it up for all of your IAM users. With MFA enabled, when a user signs in to an AWS Management Console, they will be prompted for their user name and password (the first factor—what they know), as well as for an authentication code from their AWS MFA device (the second factor—what they have). Taken together, these multiple factors provide increased security for your AWS account settings and resources.

It is also recommended that you set an IAM Account Password Policy on your AWS account to specify complexity requirements and mandatory rotation periods for your IAM users’ passwords.

You can configure an IAM Account Password Policy to do these things:

1- Set a minimum password length.
2- Require specific character types, including uppercase letters, lowercase letters, numbers, and non-alphanumeric characters. Be sure to remind your users that passwords are case sensitive.
3- Allow all IAM users to change their own passwords.
4- Require IAM users to change their password after a specified period of time (enable password expiration).
5- Prevent IAM users from reusing previous passwords.
6- Force IAM users to contact an account administrator when the user has allowed his or her password to expire.

Important Note: The IAM Account Password Policy is an account-level setting that applies to all IAM users, excluding the root user. You cannot apply a password policy to a single IAM user.
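
As an illustration, the password policy settings listed above map onto a single API call. A minimal sketch assuming boto3 (the AWS SDK for Python); the values shown are illustrative, not AWS defaults:

    import boto3

    iam = boto3.client("iam")

    # Account-level password policy for all IAM users (values are illustrative)
    iam.update_account_password_policy(
        MinimumPasswordLength=14,
        RequireUppercaseCharacters=True,
        RequireLowercaseCharacters=True,
        RequireNumbers=True,
        RequireSymbols=True,
        AllowUsersToChangePassword=True,
        MaxPasswordAge=90,            # enable password expiration after 90 days
        PasswordReusePrevention=5,    # prevent reuse of the last 5 passwords
        HardExpiry=False,             # if True, users must contact an admin after expiry
    )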

“Access keys” is incorrect. Access keys are long-term credentials that can be used to sign programmatic requests to AWS.

“Security tokens” is incorrect. Security tokens are temporary credentials that can also be used to interact with AWS resources programmatically.

References:
https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html

29
Q

In your on-premises environment, you can create as many virtual servers as you need from a single template. What can you use to perform the same in AWS?

  • AMI
  • EBS Snapshot
  • IAM
  • An internet gateway
A

AMI

Explanation

An Amazon Machine Image (AMI) is a template that contains a software configuration (for example, an operating system, an application server, and applications). This pre-configured template saves time and avoids errors when configuring settings to create new instances. You specify an AMI when you launch an instance, and you can launch as many instances from the AMI as you need. You can also launch instances from as many different AMIs as you need.
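
As an illustration of launching several identical instances from one template, here is a minimal sketch assuming boto3 (the AWS SDK for Python); the AMI ID is a placeholder:

    import boto3

    ec2 = boto3.client("ec2")

    # Launch three identical instances from a single AMI (the AMI ID is a placeholder)
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",
        InstanceType="t3.micro",
        MinCount=3,
        MaxCount=3,
    )

    for instance in response["Instances"]:
        print(instance["InstanceId"], instance["State"]["Name"])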

The other options are incorrect:

“IAM” is incorrect. IAM refers to the AWS Identity and Access Management.

“EBS Snapshot” is incorrect. An EBS snapshot is a point-in-time copy of your Amazon EBS volume.

“An internet gateway” is incorrect. An internet gateway is a VPC component that allows communication between instances in your VPC and the internet.

References:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html

30
Q

Jessica is managing an e-commerce web application in AWS. The application is hosted on six EC2 instances. One day, three of the instances crashed, but none of her customers were affected. What has Jessica done correctly in this scenario?

  • She has properly built an elastic system
  • She has properly built a scalable system
  • She has properly built an encrypted system
  • She has properly built a fault-tolerant system
A

She has properly built a fault-tolerant system

Explanation

Fault tolerance is the property that enables a system to continue operating properly in the event of the failure of some (one or more faults within) of its components. Visitors to a website expect the website to be available irrespective of when they visit. For example, when someone wants to visit Jessica’s website to purchase a product, whether it is at 9:00 AM on a Monday or 3:00 PM on a holiday, they expect that the website will be available and ready to accept their purchase. Failing to meet these expectations can cause loss of business and contribute to the development of a negative reputation for the website owner, resulting in lost revenue.

The other options are incorrect:

“She has properly built an elastic system” is incorrect. Elasticity is the ability of a system to dynamically scale the resources needed to cope with load: when the load increases, you scale out by adding more resources, and when demand wanes, you shrink back and remove the unneeded resources.

“She has properly built a scalable system” is incorrect. Scalability is the ability of a system to accommodate larger loads just by adding resources, either making hardware larger (scaling vertically) or adding additional nodes (scaling horizontally).

“She has properly built an encrypted system” is incorrect. Encryption is much more related to data protection, not fault-tolerance.

31
Q

Which of the following AWS offerings is a MySQL-compatible relational database service that can scale capacity automatically based on demand?

  • Amazon RDS PostgreSQL
  • Amazon Aurora
  • Amazon Neptune
  • Amazon RDS for SQL Server
A

Amazon Aurora

Explanation

Amazon Aurora is a MySQL and PostgreSQL compatible relational database built for the cloud, that combines the performance and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. Aurora is up to five times faster than standard MySQL databases and three times faster than standard PostgreSQL databases. It provides the security, availability, and reliability of commercial-grade databases at 1/10th the cost. Aurora is fully managed by Amazon Relational Database Service (RDS), which automates time-consuming administration tasks like hardware provisioning, database setup, patching, and backups.

Amazon Aurora features “Amazon Aurora Serverless” which is an on-demand, auto-scaling configuration for Amazon Aurora (MySQL-compatible and PostgreSQL-compatible editions), where the database will automatically start up, shut down, and scale capacity up or down based on your application’s needs.
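
As an illustration of the Aurora Serverless configuration described above, here is a hedged sketch assuming boto3 (the AWS SDK for Python); the cluster identifier, credentials, and scaling values are placeholders, and the exact engine versions and serverless modes available depend on your Region:

    import boto3

    rds = boto3.client("rds")

    # MySQL-compatible Aurora cluster in serverless mode that scales capacity on demand
    rds.create_db_cluster(
        DBClusterIdentifier="demo-aurora-serverless",      # placeholder name
        Engine="aurora-mysql",
        EngineMode="serverless",
        MasterUsername="admin",
        MasterUserPassword="REPLACE_WITH_A_STRONG_PASSWORD",
        ScalingConfiguration={
            "MinCapacity": 1,
            "MaxCapacity": 8,
            "AutoPause": True,              # pause the cluster when it is idle
            "SecondsUntilAutoPause": 300,
        },
    )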

The other options are incorrect:

“Amazon RDS PostgreSQL” is incorrect. Amazon RDS PostgreSQL is used to run PostgreSQL databases NOT MySQL databases.

“Amazon RDS for SQL Server” is incorrect. Amazon RDS for SQL Server is used to run Microsoft SQL Server databases NOT MySQL databases.

“Amazon Neptune” is incorrect. Amazon Neptune is a graph database service NOT a MySQL database. Amazon Neptune can be used to build and run applications that work with highly connected datasets, such as social networking, recommendation engines, and knowledge graphs.

References:
https://aws.amazon.com/rds/aurora/

32
Q

Sarah has deployed an application in the Northern California (us-west-1) region. After examining the application’s traffic, she notices that about 30% of the traffic is coming from Asia. What can she do to reduce latency for the users in Asia?

  • Recreate the website content
  • Replicate the current resources across multiple Availability Zones within the same region
  • Create a CDN using CloudFront, so that content is cached at Edge Locations close to and in Asia
  • Migrate the application to a hosting provider in Asia
A

Create a CDN using CloudFront, so that content is cached at Edge Locations close to and in Asia

Explanation

CloudFront is AWS’s content delivery network (CDN) service. Amazon CloudFront employs a global network of edge locations and regional edge caches that cache copies of your content close to your end-users. Amazon CloudFront ensures that end-user requests are served by the closest edge location. As a result, end-user requests travel a short distance, reducing latency and improving the overall performance.

The other options are incorrect:

“Migrate the application to a hosting provider in Asia” is incorrect. There is no need to move off AWS; latency for users in Asia can be reduced within AWS itself, for example by using Amazon CloudFront or by deploying resources in a Region closer to those users.

“Recreate the website content” is incorrect. There is no relation between the website content and the traffic that comes to the web application.

“Replicate the current resources across multiple Availability Zones within the same region” is incorrect. This will only help if the replication is done in a region located in or close to Asia.

References:
https://aws.amazon.com/cloudfront/

33
Q

What is the AWS serverless service that allows you to run your applications without any administrative burden?

  • Amazon Lightsail
  • AWS Lambda
  • Amazon EC2 instances
  • Amazon RDS instances

A

AWS Lambda

Explanation

AWS Lambda is an AWS-managed compute service. It lets you run code without provisioning or managing servers. With Lambda, you can run code for virtually any type of application or backend service - all with zero administration. Just upload your code, and Lambda takes care of everything required to run and scale your code with high availability. You pay only for the compute time you consume - there is no charge when your code is not running.
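
As an illustration of the “just upload your code” model, a complete Lambda function can be as small as the handler below. This is a minimal sketch; the function name and event field are illustrative:

    import json

    # Minimal Lambda handler: AWS runs this on demand, with no servers to manage
    def lambda_handler(event, context):
        name = event.get("name", "world")
        return {
            "statusCode": 200,
            "body": json.dumps({"message": f"Hello, {name}!"}),
        }

    # The deployed function could then be invoked programmatically, for example:
    # boto3.client("lambda").invoke(FunctionName="hello-fn", Payload=b'{"name": "Jess"}')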

The other options are incorrect:

“Amazon EC2 instances” is incorrect. Amazon Elastic Compute Cloud (Amazon EC2) is a server-based compute service. Amazon EC2 is categorized as Infrastructure as a Service (IaaS) and, as such, requires the customer to perform all of the necessary configurations and management tasks.

“Amazon Lightsail” is incorrect. Amazon Lightsail provides a low-cost Virtual Private Server (VPS) in the cloud. Lightsail plans include everything you need to jumpstart your project – virtual machines, containers, databases, CDN, load balancers, SSD-based storage, DNS management, etc. – for a low, predictable monthly price.

“Amazon RDS instances” is incorrect. Amazon RDS is a server-based database service that makes it easy to run a relational database in the cloud.

References:
https://aws.amazon.com/lambda/

34
Q

Amazon S3 Glacier Flexible Retrieval is an Amazon S3 storage class that is suitable for storing () and (). (Choose TWO)

  • Dynamic website’s assets
  • Active archives
  • Cached data
  • Active database
  • Long-term analytic data
A

Active archives,
Long-term analytic data

Explanation

S3 Glacier Flexible Retrieval (Formerly S3 Glacier) delivers the most flexible retrieval options that balance cost with access times ranging from minutes to hours and with free bulk retrievals. Amazon S3 Glacier Flexible Retrieval provides three retrieval options to fit your use case. Expedited retrievals typically return data in 1-5 minutes, and are best used for Active Archive use cases. Standard retrievals typically complete between 3-5 hours, and work well for less time-sensitive needs like backup data, media editing, or long-term analytics. Bulk retrievals are the free retrieval option, returning large amounts of data within 5-12 hours.
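
As an illustration of archiving to and retrieving from this storage class, here is a minimal sketch assuming boto3 (the AWS SDK for Python); the bucket, key, and file names are placeholders, and GLACIER is the storage-class value the S3 API uses for Glacier Flexible Retrieval:

    import boto3

    s3 = boto3.client("s3")

    # Archive an object directly into S3 Glacier Flexible Retrieval
    with open("results.csv", "rb") as f:
        s3.put_object(
            Bucket="example-archive-bucket",        # placeholder bucket name
            Key="analytics/2023/results.csv",
            Body=f,
            StorageClass="GLACIER",
        )

    # Later, request a temporary restored copy using the Expedited retrieval option
    s3.restore_object(
        Bucket="example-archive-bucket",
        Key="analytics/2023/results.csv",
        RestoreRequest={
            "Days": 2,                              # keep the restored copy for 2 days
            "GlacierJobParameters": {"Tier": "Expedited"},
        },
    )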

The other options are incorrect:

“Active databases” is incorrect. Active databases require consistent and low-latency storage performance. For example, DB instances for Amazon RDS for MySQL, MariaDB, PostgreSQL, Oracle, and Microsoft SQL Server use Amazon Elastic Block Store (Amazon EBS) volumes for database and log storage. S3 Glacier Flexible Retrieval is generally used for data archiving and backup, not for live databases.

“Cached data” is incorrect. A cache is a high-speed data storage layer which stores a subset of data, typically transient in nature, so that future requests for that data are served up faster than is possible by accessing the data’s primary storage location. Caching allows you to efficiently reuse previously retrieved or computed data. The data in a cache is generally stored in fast access hardware such as RAM (Random-access memory) and may also be used in correlation with a software component. A cache’s primary purpose is to increase data retrieval performance by reducing the need to access the underlying slower storage layer.

“Dynamic websites’ assets” is incorrect. Dynamic websites usually require immediate retrieval, which is not available in S3 Glacier Flexible Retrieval.

References:
https://aws.amazon.com/glacier/

35
Q

What is the AWS service that provides you the highest level of control over the underlying virtual infrastructure?

  • Amazon RDS
  • Amazon Redshift
  • Amazon EC2
  • Amazon DynamoDB
A

Amazon EC2

Explanation

Amazon EC2 provides you the highest level of control over your virtual instances, including root access and the ability to interact with them as you would any machine.

The other options are incorrect:

Amazon DynamoDB, Amazon RDS, and Amazon Redshift belong to the AWS-managed services. The AWS-managed services automate time-consuming administration tasks such as hardware provisioning, software setup, patching and backups. It frees you to focus on your applications so you can give them the fast performance, high availability, security and compatibility they need.

References:
https://aws.amazon.com/ec2/faqs/

36
Q

What is the AWS tool that enables you to use scripts to manage all AWS services and resources?

  • AWS CLI
  • AWS Service Catalog
  • AWS Console
  • AWS OpsWorks
A

AWS CLI

Explanation

The AWS Command Line Interface (CLI) is a unified tool to manage your AWS services. With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts.

The other options are incorrect:

“AWS Service Catalog” is incorrect. AWS Service Catalog allows organizations to create and manage catalogs of IT services that are approved for use on AWS.

“AWS OpsWorks” is incorrect. AWS OpsWorks automates configuration management for Amazon EC2 instances and on-premises servers, not all AWS services and resources. AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers. OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed, and managed across your Amazon EC2 instances or on-premises compute environments.

“AWS Console” is incorrect. AWS Console lets you access and manage Amazon Web Services through a web-based user interface.

References:
https://aws.amazon.com/cli/

37
Q

How are AWS customers billed for Linux-based Amazon EC2 usage?

  • EC2 instances will be billed on one second increments, with a minimum of one minute
  • EC2 instances will be billed on one hour increments, with a minimum of one day
  • EC2 instances will be billed on one day increments, with a minimum of one month
  • EC2 instances will be billed on one minute increments, with a minimum of one hour
A

EC2 instances will be billed on one second increments, with a minimum of one minute

Explanation

Pricing is per instance-hour consumed for each instance, from the time an instance is launched until it is terminated or stopped. Each partial instance-hour consumed will be billed per-second (minimum of 1 minute) for Linux, Windows, or Ubuntu instances, and as a full hour for instances running other operating systems.

Examples for Linux, Windows, or Ubuntu based instances:

1- If you run a Linux instance for 4 seconds or 20 seconds or 59 seconds, you will be charged for one minute. (this is what we mean by minimum of 1 minute)
2- If you run a Linux instance for 1 minute and 3 seconds, you will be charged for 1 minute and 3 seconds.
3- If you run a Linux instance for 3 hours, 25 minutes and 7 seconds, you will be charged for 3 hours, 25 minutes and 7 seconds.

Examples for instances launched in other operating systems such as Red Hat, Kali, or CentOS:

1- If you run an instance for 4 seconds or 20 seconds or 59 seconds, you will be charged for one hour.
2- If you run an instance for 1 minute and 3 seconds, you will be charged for one hour.
3- If you run an instance for 3 hours, 25 minutes and 7 seconds, you will be charged for 4 hours.
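
As a quick arithmetic check of example 3 above, assuming a purely hypothetical hourly rate of $0.10:

    # Per-second billing check for 3 hours, 25 minutes, and 7 seconds of usage
    hourly_rate = 0.10                      # USD per instance-hour (hypothetical rate)
    seconds_used = 3 * 3600 + 25 * 60 + 7   # 12,307 seconds

    # Linux/Windows/Ubuntu: billed per second (the 1-minute minimum does not apply here)
    linux_charge = hourly_rate * seconds_used / 3600
    # Other operating systems: rounded up to 4 full hours
    other_os_charge = hourly_rate * 4

    print(round(linux_charge, 4))   # 0.3419
    print(other_os_charge)          # 0.4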

Per-second billing is available for instances launched in:

  • On-Demand, Reserved and Spot forms
  • All regions and Availability Zones
  • Amazon Linux, Windows, and Ubuntu

References:
https://aws.amazon.com/ec2/pricing/

38
Q

What does Amazon ElastiCache provide?

  • An Ehcache compatible in-memory data store
  • In-memory caching for read-heavy applications
  • An online software store that allows customers to launch pre-configured software with just a few clicks
  • A domain name system in the cloud
A

In-memory caching for read-heavy applications

Explanation

ElastiCache is a web service that makes it easy to set up, manage, and scale a distributed in-memory data store or cache environment in the cloud. It provides a high-performance, scalable, and cost-effective caching solution, while removing the complexity associated with deploying and managing a distributed cache environment. The in-memory caching provided by Amazon ElastiCache can be used to significantly improve latency and throughput for many read-heavy applications (such as social networking, gaming, media sharing and Q&A portals) or compute-intensive workloads (such as a recommendation engine).

In-memory caching improves application performance by storing critical pieces of data in memory for low-latency access. Cached information may include the results of common database queries or the results of computationally-intensive calculations.

Additional information:

The primary purpose of an in-memory data store is to provide ultrafast (submillisecond latency) and inexpensive access to copies of data. Querying a database is always slower and more expensive than locating a copy of that data in a cache. Some database queries are especially expensive to perform. An example is queries that involve joins across multiple tables or queries with intensive calculations. By caching (storing) such query results, you pay the price of the query only once. Then you can quickly retrieve the data multiple times without having to re-execute the query.
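
As an illustration of this cache-aside pattern, here is a minimal sketch assuming a Redis-compatible ElastiCache endpoint and the open-source redis Python client; the endpoint hostname and the run_expensive_database_query helper are placeholders:

    import json
    import redis

    # Redis-compatible ElastiCache endpoint (hostname is a placeholder)
    cache = redis.Redis(host="my-cluster.xxxxxx.use1.cache.amazonaws.com",
                        port=6379, decode_responses=True)

    def get_product(product_id):
        key = f"product:{product_id}"
        cached = cache.get(key)
        if cached is not None:                   # cache hit: skip the database entirely
            return json.loads(cached)

        # Cache miss: pay the price of the query once, then cache the result
        product = run_expensive_database_query(product_id)   # hypothetical helper
        cache.set(key, json.dumps(product), ex=300)           # keep for 5 minutes
        return product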

The other options are incorrect:

“An online software store that allows Customers to launch pre-configured software with just few clicks”is incorrect. AWS Marketplace is the service that provides an online software store that helps customers find, buy, and immediately start using the software and services that run on AWS.

“A domain name system in the cloud” is incorrect. Route53 is the service that provides DNS in the cloud.

“An Ehcache compatible in-memory data store” is incorrect. ElastiCache supports only two cache engines: Redis and Memcached.

References:
https://aws.amazon.com/elasticache/

39
Q

Which AWS service can be used to store and reliably deliver messages across distributed systems?

  • Amazon Simple Email Service
  • AWS Storage Gateway
  • Amazon Simple Storage Service
  • Amazon Simple Queue Service
A

Amazon Simple Queue Service

Explanation

Amazon SQS is a highly reliable, scalable message queuing service that enables asynchronous message-based communication between distributed components of an application. Using SQS, you can send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available.
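
As an illustration of sending and receiving messages between decoupled components, here is a minimal sketch assuming boto3 (the AWS SDK for Python); the queue URL is a placeholder:

    import boto3

    sqs = boto3.client("sqs")
    queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/orders"  # placeholder

    # Producer: send a message
    sqs.send_message(QueueUrl=queue_url, MessageBody='{"order_id": 42}')

    # Consumer: receive, process, then delete the message
    messages = sqs.receive_message(QueueUrl=queue_url,
                                   MaxNumberOfMessages=1,
                                   WaitTimeSeconds=10)
    for msg in messages.get("Messages", []):
        print("processing:", msg["Body"])
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])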

The other options are incorrect:

“Amazon Simple Storage Service” is incorrect. Amazon Simple Storage Service (Amazon S3) is an object storage service.

“Amazon Simple Email Service” is incorrect. Amazon Simple Email Service (Amazon SES) is a cloud-based email sending service designed to help digital marketers and application developers send marketing, notification, and transactional emails.

“AWS Storage Gateway” is incorrect. AWS Storage Gateway is a hybrid storage service that enables your on-premises applications to seamlessly use AWS cloud storage. The gateway connects to AWS storage services - such as Amazon S3 and Amazon EBS - and provides storage for files, volumes, snapshots, and virtual tapes in AWS.

References:
https://aws.amazon.com/sqs/

40
Q

You are working on two projects that require completely different network configurations. Which AWS service or feature will allow you to isolate resources and network configurations?

  • Security Groups
  • Internet gateways
  • Virtual Private Cloud
  • Amazon CloudFront
A

Virtual Private Cloud

Explanation

Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define. You have complete control over your virtual networking environment, including selection of the IP address range, creation of subnets, and configuration of route tables and network gateways.

The other options are incorrect:

“Security Groups” is incorrect. Security Groups act as virtual firewalls that control inbound and outbound traffic at the instance level; they do not isolate resources or define separate network configurations.

“Internet gateways” is incorrect. An internet gateway is a VPC component that allows communication between your VPC and the internet.

“Amazon CloudFront” is incorrect. Amazon CloudFront is a Content Delivery Network.

References:
https://aws.amazon.com/vpc/

41
Q

Which of the following AWS security features is associated with an EC2 instance and functions to filter incoming traffic requests?

  • VPC Flow Logs
  • Security Groups
  • AWS Systems Manager Session Manager
  • Network ACL
A

Security Groups

Explanation

Security Groups act as a firewall for associated Amazon EC2 instances, controlling both inbound and outbound traffic at the instance level.
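
As an illustration, adding an inbound rule to a security group is a single API call. A minimal sketch assuming boto3 (the AWS SDK for Python); the security group ID is a placeholder:

    import boto3

    ec2 = boto3.client("ec2")

    # Allow inbound HTTP (port 80) from anywhere on the security group
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",          # placeholder security group ID
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 80,
            "ToPort": 80,
            "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "public HTTP"}],
        }],
    )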

The other options are incorrect:

“Network ACL” is incorrect. A network access control list (Network ACL) is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets.

Note: Network ACLs act at the subnet level, but security groups act at the instance level.

“AWS Systems Manager Session Manager” is incorrect. AWS Systems Manager Session Manager does not filter traffic. AWS Systems Manager Session Manager is an AWS Systems Manager capability that allows users to connect to an EC2 instance with just one click from the browser (or AWS CLI) without having to provide SSH Key Pairs. Session Manager helps you improve your security posture by letting you close SSH inbound ports, freeing you from managing SSH keys, and bastion hosts.

“VPC Flow logs” is incorrect. The VPC Flow logs feature does not filter traffic. You can use security groups to filter traffic at the instance level and Network ACLs to filter traffic at the subnet level. VPC Flow logs only capture information about the IP traffic going to and from network interfaces in your VPC. This information can help you monitor the traffic that is reaching your instances and diagnose overly restrictive or overly permissive security group and network ACL rules.

AWS customers use VPC Flow logs to troubleshoot connectivity and security issues and make sure that network access rules are working as expected.

Security Groups, Network ACLs, and VPC Flow logs are advanced topics, but they are required for the Cloud Practitioner exam! If you understand what we’ve mentioned above, you should be able to answer any questions related to these topics.

References:
https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Security.html
https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html

42
Q

What are the connectivity options that can be used to build hybrid cloud architectures? (Choose TWO)

  • AWS Cloud9
  • AWS CloudTrail
  • AWS VPN
  • AWS Artifact
  • AWS Direct Connect
A

AWS VPN,
AWS Direct Connect

Explanation

In cloud computing, hybrid cloud refers to the use of both on-premises resources in addition to public cloud resources. A hybrid cloud enables an organization to migrate applications and data to the cloud, extend their datacenter capacity, utilize new cloud-native capabilities, move applications closer to customers, and create a backup and disaster recovery solution with cost-effective high availability. By working closely with enterprises, AWS has developed the industry’s broadest set of hybrid capabilities across storage, networking, security, application deployment, and management tools to make it easy for you to integrate the cloud as a seamless and secure extension of your existing investments.

AWS Virtual Private Network solutions establish secure connections between your on-premises networks, remote offices, client devices, and the AWS global network. AWS VPN is comprised of two services: AWS Site-to-Site VPN and AWS Client VPN. AWS Site-to-Site VPN enables you to securely connect your on-premises network or branch office site to AWS. AWS Client VPN enables you to securely connect users (from any location) to AWS or on-premises networks. VPN Connections can be configured in minutes and are a good solution if you have an immediate need, have low to modest bandwidth requirements, and can tolerate the inherent variability in Internet-based connectivity.

AWS Direct Connect does not involve the Internet; instead, it uses dedicated, private network connections between your on-premises network or branch office site and Amazon VPC. AWS Direct Connect is a network service that provides an alternative to using the Internet to connect a customer’s on-premises sites to AWS. Using AWS Direct Connect, data that would have previously been transported over the Internet can now be delivered through a private network connection between AWS and your datacenter or corporate network. Companies of all sizes use AWS Direct Connect to establish private connectivity between AWS and datacenters, offices, or colocation environments. Compared to AWS VPN (Internet-based connection), AWS Direct Connect can reduce network costs, increase bandwidth throughput, and provide a more consistent network experience.
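
As an illustration of the Site-to-Site VPN building blocks described above, here is a minimal sketch assuming boto3 (the AWS SDK for Python); the public IP, ASN, and VPC ID are placeholders:

    import boto3

    ec2 = boto3.client("ec2")

    # Customer gateway: describes the on-premises VPN device (IP and ASN are placeholders)
    cgw = ec2.create_customer_gateway(Type="ipsec.1", PublicIp="203.0.113.12", BgpAsn=65000)

    # Virtual private gateway on the AWS side, attached to the VPC
    vgw = ec2.create_vpn_gateway(Type="ipsec.1")
    ec2.attach_vpn_gateway(VpcId="vpc-0123456789abcdef0",
                           VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"])

    # The Site-to-Site VPN connection ties the two together over the Internet
    ec2.create_vpn_connection(
        Type="ipsec.1",
        CustomerGatewayId=cgw["CustomerGateway"]["CustomerGatewayId"],
        VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"],
    )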

Additional information:

Besides the connectivity options that AWS provides, AWS provides many features to support building more efficient hybrid cloud architectures. For example, AWS Identity and Access Management (IAM) can grant your employees and applications access to the AWS Management Console and AWS service APIs using your existing corporate identity systems. AWS IAM supports federation from corporate systems like Microsoft Active Directory, as well as external Web Identity Providers like Google and Facebook.

The other options are incorrect:

“AWS Cloud9” is incorrect. AWS Cloud9 is a cloud-based integrated development environment (IDE) that lets you write, run, and debug your code with just a browser. It includes a code editor, debugger, and terminal. Cloud9 comes prepackaged with essential tools for popular programming languages, including JavaScript, Python, PHP, and more, so you don’t need to install files or configure your development machine to start new projects.

“AWS Artifact” is incorrect. AWS Artifact provides on-demand access to AWS’ compliance reports.

“AWS CloudTrail” is incorrect. AWS CloudTrail is a web service that tracks and records all user interactions with AWS services.

References:
https://aws.amazon.com/directconnect/
https://aws.amazon.com/vpn/

43
Q

Which statement is correct with regards to AWS service limits? (Choose TWO)

  • There are no service limits on AWS
  • You can use the AWS Trusted Advisor to monitor your service limits
  • Each IAM user has the same service limits
  • You can contact AWS support to increase the service limits
  • The Amazon Simple Email Service is responsible for sending email notifications when usage approaches a service limit
A

You can use the AWS Trusted Advisor to monitor your service limits
You can contact AWS support to increase the service limits

Explanation

Service limits, also referred to as Service quotas, are the maximum number of service resources or operations that apply to an AWS account. Understanding your service limits (and how close you are to them) is an important part of managing your AWS deployments – continuous monitoring allows you to request limit increases or shut down resources before the limit is reached. One of the easiest ways to do this is via AWS Trusted Advisor’s Service Limit Dashboard.

AWS maintains service limits for each account to help guarantee the availability of AWS resources, as well as to minimize billing risks for new customers. Some service limits are raised automatically over time as you use AWS, though most AWS services require that you request limit increases manually. Most service limit increases can be requested through the AWS Support Center by choosing Create Case and then choosing Service Limit Increase.
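
Besides Trusted Advisor and the Support Center, the Service Quotas service exposes the same limits programmatically. A minimal sketch assuming boto3 (the AWS SDK for Python); the quota code shown is illustrative, and available codes can be discovered with list_service_quotas:

    import boto3

    quotas = boto3.client("service-quotas")

    # Check the current value of an EC2 quota (quota code is illustrative)
    quota = quotas.get_service_quota(ServiceCode="ec2", QuotaCode="L-1216C47A")
    print(quota["Quota"]["QuotaName"], quota["Quota"]["Value"])

    # Request an increase instead of opening a support case manually
    quotas.request_service_quota_increase(
        ServiceCode="ec2", QuotaCode="L-1216C47A", DesiredValue=64.0
    )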

The other options are incorrect:

“There are no service limits on AWS” is incorrect. Each AWS account has default limits, for each AWS service.

“The Amazon Simple Email Service is responsible for sending email notifications when usage approaches a service limit” is incorrect. Amazon Simple Email Service (Amazon SES) is a cloud-based email sending service designed to help digital marketers and application developers send marketing, notification, and transactional emails.

Additional information:

You can configure the AWS Limit Monitor to send email notification when usage approaches a service limit.

“Each IAM user has the same service limits” is incorrect. Service limits are applied at the AWS account level by aggregating usage from all users in the account.

Note: “service limits” and “service quotas” are the exact same thing. Please note that you may encounter both terms being used interchangeably.

References:

https://aws.amazon.com/premiumsupport/technology/trusted-advisor/
https://aws.amazon.com/answers/account-management/limit-monitor/
https://docs.aws.amazon.com/servicequotas/latest/userguide/intro.html

44
Q

Which statement best describes the operational excellence pillar of the AWS Well-Architected Framework?

  • The efficient use of computing resources to meet requirements
  • The ability of a system to recover gracefully from failure
  • The ability to monitor systems and improve supporting processes and procedures
  • The ability to manage datacenter operations more efficiently
A

The ability to monitor systems and improve supporting processes and procedures

Explanation

The 6 Pillars of the AWS Well-Architected Framework:

1- Operational Excellence: The operational excellence pillar includes the ability to run and monitor systems to deliver business value and to continually improve supporting processes and procedures.
2- Security: The security pillar includes the ability to protect information, systems, and assets while delivering business value through risk assessments and mitigation strategies.
3- Reliability: The reliability pillar includes the ability of a system to recover from infrastructure or service disruptions, dynamically acquire computing resources to meet demand, and mitigate disruptions such as misconfigurations or transient network issues.
4- Performance Efficiency: The performance efficiency pillar includes the ability to use computing resources efficiently to meet system requirements. Key topics include selecting the right resource types and sizes based on workload requirements, monitoring performance, and making informed decisions to maintain efficiency as business needs evolve.
5- Cost Optimization: The cost optimization pillar includes the ability to avoid or eliminate unneeded cost or sub-optimal resources.
6- Sustainability: The discipline of sustainability addresses the long-term environmental, economic, and societal impact of your business activities. Your business or organization can have negative environmental impacts like direct or indirect carbon emissions, unrecyclable waste, and damage to shared resources like clean water. When building cloud workloads, the practice of sustainability is understanding the impacts of the services used, quantifying impacts through the entire workload lifecycle, and applying design principles and best practices to reduce these impacts.

Additional information:

Creating a software system is a lot like constructing a building. If the foundation is not solid, structural problems can undermine the integrity and function of the building. When architecting technology solutions on Amazon Web Services (AWS), if you neglect the six pillars of operational excellence, security, reliability, performance efficiency, cost optimization, and sustainability, it can become challenging to build a system that delivers on your expectations and requirements. Incorporating these pillars into your architecture helps produce stable and efficient systems. This allows you to focus on the other aspects of design, such as functional requirements. The AWS Well-Architected Framework helps cloud architects build the most secure, high-performing, resilient, and efficient infrastructure possible for their applications.

The other options are incorrect:

“The ability of a system to recover gracefully from failure” is incorrect. This statement is much more related to the Reliability pillar.

“The efficient use of computing resources to meet requirements” is incorrect. This statement is much more related to the Performance Efficiency pillar.

“The ability to manage datacenter operations more efficiently” is incorrect. Managing datacenter operations is not related to any pillar. It is something that AWS is responsible for, NOT the customer.

References:
https://aws.amazon.com/blogs/apn/the-5-pillars-of-the-aws-well-architected-framework/

45
Q

A company has a large amount of structured data stored in their on-premises data center. They are planning to migrate all the data to AWS, what is the most appropriate AWS database option?

  • Amazon RDS
  • Amazon DynamoDB
  • Amazon SNS
  • Amazon ElastiCache
A

Amazon RDS

Explanation

Since the data is structured, then it is best to use a relational database service such as Amazon RDS.

The other options are incorrect:

“Amazon ElastiCache” is incorrect. ElastiCache is an in-memory data store and cache service.

“Amazon DynamoDB” is incorrect. DynamoDB is a NoSQL database service. NoSQL is designed for unstructured data.

“Amazon SNS” is incorrect. Amazon Simple Notification Service (SNS) is not a database service. Amazon SNS is a highly available, durable, secure, fully managed pub/sub messaging service that enables you to decouple microservices, distributed systems, and serverless applications.

References:
https://aws.amazon.com/rds/

46
Q

What are the change management tools that help AWS customers audit and monitor all resource changes in their AWS environment? (Choose TWO)

  • AWS Transit Gateway
  • AWS Config
  • AWS X-Ray
  • Amazon Comprehend
  • AWS CloudTrail
A

AWS Config,
AWS CloudTrail

Explanation

Change management is defined as “the Process responsible for controlling the Lifecycle of all Changes. The primary objective of Change Management is to enable beneficial changes to be made, with minimum disruption to IT Services.”

Despite all of the investments in software and hardware, an erroneous configuration or misstep in a process can frequently undo these efforts and lead to failure.

AWS Config and AWS CloudTrail are change management tools that help AWS customers audit and monitor all resource and configuration changes in their AWS environment.

Customers can use AWS Config to answer “What did my AWS resource look like?” at a point in time. Customers can use AWS CloudTrail to answer “Who made an API call to modify this resource?” For example, a customer can use the AWS Management Console for AWS Config to detect that the security group “Production-DB” was incorrectly configured in the past. Using the integrated AWS CloudTrail information, they can pinpoint which user misconfigured the “Production-DB” security group. In brief, AWS Config provides information about the changes made to a resource, and AWS CloudTrail provides information about who made those changes. These capabilities enable customers to discover any misconfigurations, fix them, and protect their workloads from failures.
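
As an illustration of the “Production-DB” example above, both services can be queried programmatically. A minimal sketch assuming boto3 (the AWS SDK for Python); the security group ID is a placeholder:

    import boto3

    # "Who made an API call to modify this resource?" -- CloudTrail event history
    cloudtrail = boto3.client("cloudtrail")
    events = cloudtrail.lookup_events(
        LookupAttributes=[{"AttributeKey": "ResourceName",
                           "AttributeValue": "Production-DB"}]
    )
    for event in events["Events"]:
        print(event["EventName"], event.get("Username"))

    # "What did my AWS resource look like at a point in time?" -- AWS Config history
    config = boto3.client("config")
    history = config.get_resource_config_history(
        resourceType="AWS::EC2::SecurityGroup",
        resourceId="sg-0123456789abcdef0",      # placeholder security group ID
    )
    for item in history["configurationItems"]:
        print(item["configurationItemCaptureTime"], item["configurationItemStatus"])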

The other options are incorrect:

“AWS Transit Gateway” is incorrect. AWS Transit Gateway is a network transit hub that customers can use to interconnect their virtual private clouds (VPCs) and their on-premises networks. AWS transit gateway simplifies how customers interconnect all of their VPCs, across thousands of AWS accounts and into their on-premises networks.

“AWS X-Ray” is incorrect. AWS X-Ray is a debugging service that helps developers understand how their application and its underlying services are performing to identify and troubleshoot the root cause of performance issues and errors.

“Amazon Comprehend” is incorrect. Amazon Comprehend is a Natural Language Processing (NLP) service that uses machine learning to find meaning and insights in text. Customers can use Amazon Comprehend to identify the language of the text, extract key phrases, places, people, brands, or events, understand sentiment about products or services, and identify the main topics from a library of documents. The source of this text could be web pages, social media feeds, emails, or articles. Amazon Comprehend is fully managed, so there are no servers to provision, and no machine learning models to build, train, or deploy.

Note: Natural language processing (NLP) is an artificial intelligence technology that helps computers identify, understand, and manipulate human language.

References:
https://d1.awsstatic.com/whitepapers/aws-overview.pdf

47
Q

Based on the AWS Shared Responsibility Model, which of the following are the sole responsibility of AWS? (Choose TWO)

  • Creating hypervisors
  • Configuring Access Control Lists (ACLs)
  • Monitoring network performance
  • Hardware maintenance
  • Installing software on EC2 instances
A

Creating hypervisors
Hardware maintenance

Explanation

AWS is responsible for items such as the physical security of its data centers, creating hypervisors, replacement of old disk drives, and patch management of the infrastructure.

The customers are responsible for items such as building application schema, analyzing network performance, configuring security groups and network ACLs and encrypting their data.

References:
https://aws.amazon.com/compliance/shared-responsibility-model/

48
Q

Which of the following services can help protect your web applications from SQL injection and other vulnerabilities in your application code?

  • AWS WAF
  • Amazon Aurora
  • Amazon Cognito
  • AWS IAM
A

AWS WAF

Explanation

AWS WAF (Web Application Firewall) helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources. You can use AWS WAF to create custom rules that block common attack patterns, such as SQL injection or cross-site scripting, and rules that are designed for your specific application.

The other options are incorrect:

“Amazon Aurora” is incorrect. Amazon Aurora is a database service.

“AWS IAM” is incorrect. AWS IAM refers to the AWS Identity and Access Management.

“Amazon Cognito” is incorrect. Amazon Cognito lets you add user sign-up, sign-in, and access control to your web and mobile apps.

References:
https://d1.awsstatic.com/whitepapers/aws-overview.pdf

49
Q

Which of the following activities may help reduce your AWS monthly costs? (Choose TWO)

  • Using the AWS Network Load Balancer (NLB) to load balance the incoming HTTP requests
  • Enabling Amazon EC2 Auto Scaling for all your workloads
  • Removing all of your Cost Allocation Tags
  • Deploying your AWS resources across multiple Availability Zones
  • Creating a lifecycle policy to move infrequently accessed data to less expensive storage tiers
A

Enabling Amazon EC2 Auto Scaling for all your workloads
Creating a lifecycle policy to move infrequently accessed data to less expensive storage tiers

Explanation

Amazon EC2 Auto Scaling monitors your applications and automatically adjusts capacity (up or down) to maintain steady, predictable performance at the lowest possible cost. When demand drops, Amazon EC2 Auto Scaling will automatically remove any excess capacity so you avoid overspending. When demand increases, Amazon EC2 Auto Scaling will automatically add capacity to maintain performance.

For Amazon S3 and Amazon EFS, you can create a lifecycle policy to automatically move infrequently accessed data to less expensive storage tiers. In order to reduce your Amazon S3 costs, you should create a lifecycle policy to automatically move old (or infrequently accessed) files to less expensive storage tiers such as Amazon Glacier, or to automatically delete them after a specified duration. Similarly, you can create an Amazon EFS lifecycle policy to automatically move less frequently accessed data to less expensive storage tiers such as Amazon EFS Standard-Infrequent Access (EFS Standard-IA) and Amazon EFS One Zone-Infrequent Access (EFS One Zone-IA). Amazon EFS Infrequent Access storage classes provide price/performance that is cost-optimized for files not accessed every day, with storage prices up to 92% lower compared to Amazon EFS Standard (EFS Standard) and Amazon EFS One Zone (EFS One Zone) storage classes respectively.
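
As an illustration of such a lifecycle policy on Amazon S3, here is a minimal sketch assuming boto3 (the AWS SDK for Python); the bucket name and prefix are placeholders:

    import boto3

    s3 = boto3.client("s3")

    # Move objects to Glacier Flexible Retrieval after 90 days, delete them after a year
    s3.put_bucket_lifecycle_configuration(
        Bucket="example-logs-bucket",            # placeholder bucket name
        LifecycleConfiguration={
            "Rules": [{
                "ID": "archive-then-expire",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},   # placeholder prefix
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 365},
            }]
        },
    )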

The other options are incorrect:

“Removing all of your Cost Allocation Tags” is incorrect. A tag is a label that you or AWS assigns to an AWS resource. Each tag consists of a key and a value. For each resource, each tag key must be unique, and each tag key can have only one value. You can use tags to organize your resources (by project, team, …etc.), and cost allocation tags to track your AWS costs on a detailed level. After you activate cost allocation tags, AWS uses the cost allocation tags to organize your resource costs on your cost allocation report, to make it easier for you to categorize and track your AWS costs. Removing all of your Cost Allocation Tags will not help reduce your AWS monthly costs.

“Deploying your AWS resources across multiple Availability Zones” is incorrect. Deploying your AWS resources across multiple Availability Zones can help increase your application’s availability and fault-tolerance.

“Using the AWS Network Load Balancer (NLB) to load balance the incoming HTTP requests” is incorrect. This option is incorrect for two reasons:

1st: Load balancing does not reduce costs. Elastic Load Balancing automatically distributes incoming application traffic evenly across multiple targets, such as Amazon EC2 instances, containers, and Lambda functions, and helps you gain more consistent application performance.
2nd: The recommended Load Balancer for HTTP traffic is the AWS Application Load Balancer, NOT the AWS Network Load Balancer.

Additional information:

Elastic Load Balancing supports four types of load balancers (Application Load Balancer, Network Load Balancer, Gateway Load Balancer, and Classic Load Balancer). You can select the appropriate load balancer based on your application needs.

1- If you need to load balance HTTP/HTTPS requests, AWS recommends using the Application Load Balancer.
2- For network/transport protocol (layer 4 – TCP, UDP) load balancing, and for extreme performance/low latency applications, AWS recommends using the Network Load Balancer.
3- To manage and distribute traffic across multiple third-party virtual appliances, AWS recommends using the Gateway Load Balancer.
4- If you have an existing application built within the EC2-Classic network, you should use a Classic Load Balancer.

References:
https://aws.amazon.com/ec2/autoscaling/
https://aws.amazon.com/efs/features/infrequent-access/
https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lifecycle-mgmt.html

50
Q

Your company is designing a new application that will store and retrieve photos and videos. Which of the following services should you recommend as the underlying storage mechanism?

  • Amazon SQS
  • Amazon Instance Store
  • Amazon S3
  • Amazon EBS
A

Amazon S3

Explanation

Amazon S3 is object storage built to store and retrieve any amount of data from anywhere on the Internet. It is a storage service that offers an extremely durable, highly available, and infinitely scalable data storage infrastructure at very low costs.

Common use cases of Amazon S3 include:

Media Hosting – Build a redundant, scalable, and highly available infrastructure that hosts video, photo, or music uploads and downloads.
Backup and Storage – Provide data backup and storage services for others.
Hosting static websites – Host and manage static websites quickly and easily.
Deliver content globally - Use S3 in conjunction with CloudFront to distribute content globally with low latency.
Hybrid cloud storage - Create a seamless connection between on-premises applications and Amazon S3 with AWS Storage Gateway in order to reduce your data center footprint, and leverage the scale, reliability, and durability of AWS.
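
As an illustration of the media-hosting use case above, here is a minimal sketch assuming boto3 (the AWS SDK for Python); the bucket, key, and file names are placeholders:

    import boto3

    s3 = boto3.client("s3")

    # Upload a video to S3 (bucket, key, and local file names are placeholders)
    s3.upload_file("holiday.mp4", "example-media-bucket", "videos/holiday.mp4")

    # Hand out a time-limited download link instead of making the object public
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "example-media-bucket", "Key": "videos/holiday.mp4"},
        ExpiresIn=3600,   # link valid for one hour
    )
    print(url)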

The other options are incorrect:

“Amazon SQS” is incorrect. Amazon SQS is not a storage service. It is a messaging queuing service that can be used to send messages between application components. SQS enables you to decouple and scale microservices, distributed systems, and serverless applications.

“Amazon Instance store” is incorrect. Amazon EC2 Instance Store provides temporary block-level storage for your instance. Instance store is ideal for temporary storage of information that changes frequently, such as buffers, caches, scratch data, and other temporary content, or for data that is replicated across a fleet of instances, such as a load-balanced pool of web servers.

“Amazon EBS” is incorrect. Amazon EBS is not a cost-effective solution for storing images or videos (compared to Amazon S3). Amazon EBS is a block level storage that can be used as a disk drive for Amazon EC2 or Amazon RDS instances. Amazon EBS is designed for application workloads that benefit from fine tuning for performance and capacity. Typical use cases of Amazon EBS include Big Data analytics engines (like the Hadoop/HDFS ecosystem and Amazon EMR clusters), relational and NoSQL databases (like Microsoft SQL Server and MySQL or Cassandra and MongoDB), stream and log processing applications (like Kafka and Splunk), and data warehousing applications (like Vertica and Teradata).

References:
https://aws.amazon.com/s3/

51
Q

Which of the following services allows you to run containerized applications on a cluster of EC2 instances? (Choose TWO)

  • Amazon Elastic Kubernetes Service
  • Amazon ECS
  • AWS Data Pipeline
  • AWS Cloud9
  • AWS Health Dashboard
A

Amazon Elastic Kubernetes Service
Amazon ECS

Explanation

Amazon Elastic Container Service (Amazon ECS) is a highly scalable, high-performance container orchestration service that supports Docker containers and allows you to easily run and scale containerized applications on AWS. Amazon ECS eliminates the need for you to install and operate your own container orchestration software, manage and scale a cluster of virtual machines, or schedule containers on those virtual machines.

Amazon Elastic Kubernetes Service (Amazon EKS) is a managed service that allows you to use Kubernetes to run and scale containerized applications in the cloud or on-premises.

Kubernetes is an open-source container orchestration system that allows you to deploy and manage containerized applications at scale.

AWS handles provisioning, scaling, and managing the Kubernetes instances in a highly available and secure configuration. This removes a significant operational burden and allows you to focus on building applications instead of managing AWS infrastructure.

On both Amazon EKS and Amazon ECS, you have the option of running your containers on the following compute options:

AWS Fargate — a “serverless” container compute engine where you only pay for the resources required to run your containers. Suited for customers who do not want to worry about managing servers, handling capacity planning, or figuring out how to isolate container workloads for security.

EC2 instances — offers the widest choice of instance types, including processor, storage, and networking. Ideal for customers who want to manage or customize the underlying compute environment and host operating system.

AWS Outposts — run your containers using AWS infrastructure on premises for a consistent hybrid experience. Suited for customers who require local data processing, data residency, and hybrid use cases.

AWS Local Zones — an extension of an AWS Region. Suited for customers who need the ability to place resources in multiple locations closer to end users.

AWS Wavelength — ultra-low-latency mobile edge computing. Suited for 5G applications, interactive and immersive experiences, and connected vehicles.
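
As an illustration of the ECS-on-Fargate option above, here is a minimal sketch assuming boto3 (the AWS SDK for Python); the cluster name, task family, and container image are placeholders:

    import boto3

    ecs = boto3.client("ecs")

    # A cluster is a logical grouping of tasks and services
    ecs.create_cluster(clusterName="web-cluster")

    # Task definition for one container; Fargate runs it without managing EC2 hosts
    ecs.register_task_definition(
        family="web-app",
        requiresCompatibilities=["FARGATE"],
        networkMode="awsvpc",
        cpu="256",
        memory="512",
        containerDefinitions=[{
            "name": "web",
            "image": "public.ecr.aws/nginx/nginx:latest",   # placeholder image
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
            "essential": True,
        }],
    )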

The other options are incorrect:

“AWS Data Pipeline” is incorrect. AWS Data Pipeline is a web service that makes it easy to schedule regular data movement and data processing activities in the AWS cloud.

“AWS Cloud9” is incorrect. AWS Cloud9 is a cloud-based integrated development environment (IDE) that lets you write, run, and debug your code with just a browser. It includes a code editor, debugger, and terminal.

“AWS Health Dashboard” is incorrect. The AWS Health Dashboard is the single place to learn about the availability and operations of AWS services. You can view the overall status of all AWS services, and you can sign in to access a personalized view of the health of the specific services that are powering your workloads and applications. AWS Health Dashboard proactively notifies you when AWS experiences any events that may affect you, helping provide quick visibility and guidance to minimize the impact of events in progress and plan for any scheduled changes, such as AWS hardware maintenance. With AWS Health Dashboard, alerts are triggered by changes in the health of AWS resources, giving you event visibility and guidance to help quickly diagnose and resolve issues.

References:

https://aws.amazon.com/containers/
https://aws.amazon.com/ecs/
https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html
https://aws.amazon.com/blogs/containers/amazon-ecs-vs-amazon-eks-making-sense-of-aws-container-services/

52
Q

Which of the following is one of the benefits of moving infrastructure from an on-premises data center to AWS?

  • AWS holds responsibility for managing customer applications
  • Free support for all enterprise customers
  • Reduced Capital Expenditure (CapEx)
  • Automatic data protection
A

Reduced Capital Expenditure (CapEx)

Explanation

Capital expenditures (CapEx) are a company’s major, long-term expenses, while operating expenses (OpEx) are a company’s day-to-day expenses. Examples of CapEx include physical assets such as buildings, equipment, and machinery. Examples of OpEx include employee salaries, rent, utilities, and property taxes.

AWS enables businesses to leverage high-end technologies and infrastructure needs with low CapEx and low OpEx. The AWS pay-as-you-go model reduces investments in large capital expenditures. In addition, you can reduce the operating expense (OpEx) costs involved with the management and maintenance of data. This frees up budget, allowing you to quickly act on innovative initiatives that can’t be easily pursued when managing physical data centers.

The other options are incorrect:

“Free support for all enterprise customers” is incorrect. Enterprise customers require access to technical support and other AWS support features. These support features are available only for paid support plans.

“Automatic data protection” is incorrect. Data protection is a customer responsibility. AWS customers have to decide which data should be public or private, set up how their data will be accessed, and decide whether this data will be encrypted or not and so on.

“AWS holds responsibility for managing customer applications” is incorrect. AWS customers are responsible for building, deploying, and managing their applications.

References:
https://d1.awsstatic.com/whitepapers/aws-overview.pdf

53
Q

What does Amazon Elastic Beanstalk provide?

A