AWS Certified Cloud Practitioner Practice Exam (2) Flashcards

1
Q

Which of the following EC2 instance purchasing options supports the Bring Your Own License (BYOL) model for almost every BYOL scenario?

1) Dedicated Instances
2) On-demand Instances
3) Reserved Instances
4) Dedicated Hosts

A

Dedicated Hosts

You have a variety of options for using new and existing Microsoft software licenses on the AWS Cloud. By purchasing Amazon Elastic Compute Cloud (Amazon EC2) or Amazon Relational Database Service (Amazon RDS) license-included instances, you get new, fully compliant Windows Server and SQL Server licenses from AWS. The BYOL model enables AWS customers to use their existing server-bound software licenses, including Windows Server, SQL Server, and SUSE Linux Enterprise Server.

Your existing licenses may be used on AWS with Amazon EC2 Dedicated Hosts, Amazon EC2 Dedicated Instances, or EC2 instances with default tenancy using Microsoft License Mobility through Software Assurance.

Dedicated Hosts provide additional control over your instances and visibility into host-level resources, along with tooling that allows you to manage software licensed on a per-core or per-socket basis, such as Windows Server and SQL Server. This is why most BYOL scenarios are supported through Dedicated Hosts, while only certain scenarios are supported by Dedicated Instances.
2
Q

Your company is designing a new application that will store and retrieve photos and videos. Which of the following services should you recommend as the underlying storage mechanism?

1) Amazon S3
2) Amazon SQS
3) Amazon Instance store
4) Amazon EBS

A

Amazon S3

Amazon S3 is object storage built to store and retrieve any amount of data from anywhere on the Internet. It is a storage service that offers an extremely durable, highly available, and infinitely scalable data storage infrastructure at very low costs.

Common use cases of Amazon S3 include:

Media Hosting – Build a redundant, scalable, and highly available infrastructure that hosts video, photo, or music uploads and downloads.

Backup and Storage – Provide data backup and storage services for others.

Hosting static websites – Host and manage static websites quickly and easily.

Deliver content globally - Use S3 in conjunction with CloudFront to distribute content globally with low latency.

Hybrid cloud storage - Create a seamless connection between on-premises applications and Amazon S3 with AWS Storage Gateway in order to reduce your data center footprint, and leverage the scale, reliability, and durability of AWS.

3
Q

Your application has recently experienced significant global growth, and international users are complaining of high latency. What is the AWS characteristic that can help improve your international users’ experience?

1) Elasticity
2) High availability
3) Data durability
4) Global reach

A

Global reach

With AWS, you can deploy your application in multiple Regions around the world. Users can be routed to the Region that provides the lowest latency and the highest performance. You can also use CloudFront, which uses edge locations (located in most major cities across the world) to deliver content with low latency and high performance to your global users.

4
Q

Which of the following are important design principles you should adopt when designing systems on AWS? (Choose TWO)

1) Remove single points of failure
2) Always choose to pay as you go
3) Automate wherever possible
4) Always use Global Services in your architecture rather than Regional Services
5) Treat servers as fixed resources

A

1) Remove single points of failure
3) Automate wherever possible

A single point of failure (SPOF) is a part of a system that, if it fails, will stop the entire system from working. You can remove single points of failure by assuming everything will fail and designing your architecture to automatically detect and react to failures. For example, deploying an Auto Scaling group of EC2 instances ensures that if one or more of the instances crashes, Auto Scaling will automatically replace them with new instances. You should also introduce redundancy by deploying your application across multiple Availability Zones; if one Availability Zone goes down for any reason, the others can continue serving requests.

AWS helps you use automation so you can build faster and more efficiently. Using AWS services, you can automate manual tasks or processes such as deployments, development & test workflows, container management, and configuration management.
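The detect-and-replace behavior described above can be sketched as a toy model. This is not AWS code — the Availability Zone names and desired capacity below are illustrative assumptions — but it shows the reconciliation loop an Auto Scaling group performs: remove failed instances and launch replacements, spread across Availability Zones.

```python
# Toy model of an Auto Scaling group's reconcile loop (not an AWS API).

DESIRED_CAPACITY = 4
AZS = ["us-east-1a", "us-east-1b"]  # hypothetical Availability Zones

def reconcile(fleet):
    """Drop crashed instances and launch replacements,
    placing each new instance in the least-populated AZ."""
    healthy = [i for i in fleet if i["state"] == "running"]
    while len(healthy) < DESIRED_CAPACITY:
        counts = {az: sum(1 for i in healthy if i["az"] == az) for az in AZS}
        target_az = min(AZS, key=lambda az: counts[az])
        healthy.append({"az": target_az, "state": "running"})
    return healthy

# Two of four instances crash; reconcile restores desired capacity.
fleet = reconcile([
    {"az": "us-east-1a", "state": "running"},
    {"az": "us-east-1a", "state": "crashed"},
    {"az": "us-east-1b", "state": "running"},
    {"az": "us-east-1b", "state": "crashed"},
])
print(sorted(i["az"] for i in fleet))
# → ['us-east-1a', 'us-east-1a', 'us-east-1b', 'us-east-1b']
```

The fleet ends up back at the desired capacity, balanced across both zones — removing both the single-instance and the single-AZ points of failure.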
5
Q

AWS has created a large number of Edge Locations as part of its Global Infrastructure. Which of the following is NOT a benefit of using Edge Locations?

1) Edge locations are used by CloudFront to distribute content to global users with low latency
2) Edge locations are used by CloudFront to cache the most recent responses
3) Edge locations are used by CloudFront to improve your end users’ experience when uploading files
4) Edge locations are used by CloudFront to distribute traffic across multiple instances to reduce latency

A

Edge locations are used by CloudFront to distribute traffic across multiple instances to reduce latency

AWS Edge Locations are not used to distribute traffic. Edge Locations are used in conjunction with the CloudFront service to cache common responses and deliver content to end-users with low latency.

With Amazon CloudFront, your users can also benefit from accelerated content uploads. As the data arrives at an edge location, data is routed to AWS storage services over an optimized network path.

The AWS service used to distribute load across instances is Elastic Load Balancing (ELB).

6
Q

Using Amazon RDS falls under the shared responsibility model. Which of the following are customer responsibilities? (Choose TWO)

1) Building the relational database schema
2) Managing the database settings
3) Installing the database software
4) Performing backups
5) Patching the database software

A

1) Building the relational database schema
2) Managing the database settings

Amazon RDS manages the work involved in setting up a relational database, from provisioning the infrastructure capacity you request to installing the database software. Once your database is up and running, Amazon RDS automates common administrative tasks such as performing backups and patching the software that powers your database. With optional Multi-AZ deployments, Amazon RDS also manages synchronous data replication across Availability Zones with automatic failover. Since Amazon RDS provides native database access, you interact with the relational database software as you normally would. This means you’re still responsible for managing the database settings that are specific to your application. You’ll need to build the relational schema that best fits your use case and are responsible for any performance tuning to optimize your database for your application’s workflow.

7
Q

What are the connectivity options that can be used to build hybrid cloud architectures? (Choose TWO)

1) AWS Cloud9
2) AWS VPN
3) AWS CloudTrail
4) AWS Artifact
5) AWS Direct Connect

A

2) AWS VPN
5) AWS Direct Connect

In cloud computing, hybrid cloud refers to the use of both on-premises resources and public cloud resources. A hybrid cloud enables an organization to migrate applications and data to the cloud, extend their datacenter capacity, utilize new cloud-native capabilities, move applications closer to customers, and create a backup and disaster recovery solution with cost-effective high availability. By working closely with enterprises, AWS has developed the industry’s broadest set of hybrid capabilities across storage, networking, security, application deployment, and management tools to make it easy for you to integrate the cloud as a seamless and secure extension of your existing investments.

AWS Virtual Private Network solutions establish secure connections between your on-premises networks, remote offices, client devices, and the AWS global network. AWS VPN comprises two services: AWS Site-to-Site VPN and AWS Client VPN. AWS Site-to-Site VPN enables you to securely connect your on-premises network or branch office site to AWS. AWS Client VPN enables you to securely connect users (from any location) to AWS or on-premises networks. VPN connections can be configured in minutes and are a good solution if you have an immediate need, have low to modest bandwidth requirements, and can tolerate the inherent variability of Internet-based connectivity.

AWS Direct Connect does not involve the Internet; instead, it uses dedicated, private network connections between your on-premises network or branch office site and Amazon VPC. AWS Direct Connect is a network service that provides an alternative to using the Internet to connect customers' on-premises sites to AWS. Using AWS Direct Connect, data that would have previously been transported over the Internet can now be delivered through a private network connection between AWS and your datacenter or corporate network. Companies of all sizes use AWS Direct Connect to establish private connectivity between AWS and datacenters, offices, or colocation environments. Compared to AWS VPN (an Internet-based connection), AWS Direct Connect can reduce network costs, increase bandwidth throughput, and provide a more consistent network experience.

Additional information:

Besides the connectivity options that AWS provides, AWS provides many features to support building more efficient hybrid cloud architectures. For example, AWS Identity and Access Management (IAM) can grant your employees and applications access to the AWS Management Console and AWS service APIs using your existing corporate identity systems. AWS IAM supports federation from corporate systems like Microsoft Active Directory, as well as external Web Identity Providers like Google and Facebook.
8
Q

Which of the following AWS services is designed with native Multi-AZ fault tolerance in mind? (Choose TWO)

1) Amazon Simple Storage Service
2) Amazon EBS
3) Amazon EC2
4) Amazon DynamoDB
5) AWS Snowball

A

1) Amazon Simple Storage Service
4) Amazon DynamoDB

The Multi-AZ principle involves deploying an AWS resource in multiple Availability Zones to achieve high availability for that resource.

DynamoDB automatically spreads the data and traffic for your tables over a sufficient number of servers to handle your throughput and storage requirements, while maintaining consistent and fast performance. All of your data is stored on solid-state disks (SSDs) and is automatically replicated across multiple Availability Zones in an AWS Region, providing built-in fault tolerance in the event of a server failure or Availability Zone outage.

Amazon S3 provides durable infrastructure to store important data and is designed for 99.999999999% (11 nines) durability of objects. Data in all Amazon S3 storage classes is redundantly stored across multiple Availability Zones (except S3 One Zone-IA and S3 Express One Zone, which store data in a single Availability Zone).
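The 11-nines figure can be made concrete with a back-of-the-envelope calculation (this mirrors AWS's own published illustration of what the design target implies):

```python
# What "eleven nines" of annual durability means in practice.

annual_durability = 0.99999999999
annual_loss_prob = 1 - annual_durability   # ~1e-11 per object per year
objects = 10_000_000

expected_losses_per_year = objects * annual_loss_prob  # ~1e-4
years_per_single_loss = 1 / expected_losses_per_year   # ~10,000

print(f"Storing {objects:,} objects, expect to lose one object "
      f"about every {years_per_single_loss:,.0f} years on average.")
# → Storing 10,000,000 objects, expect to lose one object about every 10,000 years on average.
```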

9
Q

Jessica is managing an e-commerce web application in AWS. The application is hosted on six EC2 instances. One day, three of the instances crashed; but none of her customers were affected. What has Jessica done correctly in this scenario?

1) She has properly built a scalable system
2) She has properly built an encrypted system
3) She has properly built an elastic system
4) She has properly built a fault tolerant system

A

She has properly built a fault tolerant system

Fault tolerance is the property that enables a system to continue operating properly in the event of the failure of one or more of its components. Visitors to a website expect it to be available whenever they visit. For example, when someone visits Jessica’s website to purchase a product, whether at 9:00 AM on a Monday or 3:00 PM on a holiday, they expect the website to be available and ready to accept their purchase. Failing to meet these expectations can cause loss of business and contribute to a negative reputation for the website owner, resulting in lost revenue.
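Jessica's scenario can be modeled in a few lines. This is a deliberately minimal sketch (the instance names are made up): six instances behind a load balancer, and requests keep succeeding as long as at least one healthy instance remains.

```python
# Minimal model of a fault-tolerant fleet: requests are served
# by any healthy instance, as a load balancer would route them.

def serve_request(instances):
    """Return a healthy instance to handle the request;
    fail only if every instance is down."""
    healthy = [name for name, up in instances.items() if up]
    if not healthy:
        raise RuntimeError("total outage: no healthy instances left")
    return healthy[0]

fleet = {f"i-{n}": True for n in range(6)}   # six instances, all healthy
for crashed in ("i-0", "i-1", "i-2"):        # three of them crash
    fleet[crashed] = False

print(serve_request(fleet))  # → i-3  (customers are still served)
```

Half the fleet is gone, yet `serve_request` still succeeds — which is exactly why none of Jessica's customers noticed the failures.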

10
Q

What is the AWS service that provides you the highest level of control over the underlying virtual infrastructure?

1) Amazon RDS
2) Amazon EC2
3) Amazon DynamoDB
4) Amazon Redshift

A

Amazon EC2

Amazon EC2 provides you the highest level of control over your virtual instances, including root access and the ability to interact with them as you would any machine.

11
Q

Amazon S3 Glacier Flexible Retrieval is an Amazon S3 storage class that is suitable for storing ____________ & ______________. (Choose TWO)

1) Long-term analytic data
2) Cached data
3) Active archives
4) Active databases
5) Dynamic websites’ assets

A

1) Long-term analytic data
3) Active archives

12
Q

Which AWS Service can be used to establish a dedicated, private network connection between AWS and your datacenter?

1) AWS Snowball
2) AWS Direct Connect
3) Amazon CloudFront
4) Amazon Route 53

A

AWS Direct Connect

AWS Direct Connect is used to establish a dedicated network connection from your premises to AWS. Using AWS Direct Connect, you can establish private connectivity between AWS and your data center, office, or co-location environment, which in many cases can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than Internet-based connections.

13
Q

Which of the following should be considered when performing a TCO analysis to compare the costs of running an application on AWS instead of on-premises?

1) Physical hardware
2) Application development
3) Market research
4) Business analysis

A

Physical hardware

Weighing the financial considerations of owning and operating a data center facility versus employing a cloud infrastructure requires detailed and careful analysis. The Total Cost of Ownership (TCO) is often the financial metric used to estimate and compare the costs of a product or a service. When comparing AWS with on-premises TCO, customers should consider all costs of owning and operating a data center. Examples of these costs include facilities, physical servers, storage devices, networking equipment, cooling and power consumption, data center space, and IT labor costs.
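A TCO comparison is ultimately just itemized arithmetic over those cost categories. The sketch below uses entirely hypothetical 3-year figures — every number is an illustrative assumption, not an AWS price quote — to show the shape of the analysis:

```python
# Hypothetical 3-year TCO comparison; all dollar figures are
# made-up illustrations, not real prices.

on_prem = {
    "physical servers": 120_000,
    "storage devices": 40_000,
    "networking equipment": 25_000,
    "power and cooling": 30_000,
    "data center space": 45_000,
    "IT labor": 150_000,
}
aws = {
    "EC2 instances (reserved)": 90_000,
    "EBS / S3 storage": 30_000,
    "data transfer": 15_000,
    "reduced IT labor": 60_000,
}

on_prem_total = sum(on_prem.values())
aws_total = sum(aws.values())
savings_pct = 100 * (on_prem_total - aws_total) / on_prem_total
print(f"On-premises: ${on_prem_total:,}  AWS: ${aws_total:,}  "
      f"({savings_pct:.0f}% lower)")
# → On-premises: $410,000  AWS: $195,000  (52% lower)
```

Note which categories appear only on the on-premises side (facilities, hardware, power and cooling) — those are exactly the costs the exam question is probing for.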

14
Q

Which statement best describes the operational excellence pillar of the AWS Well-Architected Framework?

1) The ability to monitor systems and improve supporting processes and procedures
2) The efficient use of computing resources to meet requirements
3) The ability to manage datacenter operations more efficiently
4) The ability of a system to recover gracefully from failure

A

The ability to monitor systems and improve supporting processes and procedures

The 6 Pillars of the AWS Well-Architected Framework:

1- Operational Excellence: The operational excellence pillar includes the ability to run and monitor systems to deliver business value and to continually improve supporting processes and procedures.

2- Security: The security pillar includes the ability to protect information, systems, and assets while delivering business value through risk assessments and mitigation strategies.

3- Reliability: The reliability pillar includes the ability of a system to recover from infrastructure or service disruptions, dynamically acquire computing resources to meet demand, and mitigate disruptions such as misconfigurations or transient network issues.

4- Performance Efficiency: The performance efficiency pillar includes the ability to use computing resources efficiently to meet system requirements. Key topics include selecting the right resource types and sizes based on workload requirements, monitoring performance, and making informed decisions to maintain efficiency as business needs evolve.

5- Cost Optimization: The cost optimization pillar includes the ability to avoid or eliminate unneeded cost or sub-optimal resources.

6- Sustainability: The discipline of sustainability addresses the long-term environmental, economic, and societal impact of your business activities. Your business or organization can have negative environmental impacts like direct or indirect carbon emissions, unrecyclable waste, and damage to shared resources like clean water. When building cloud workloads, the practice of sustainability is understanding the impacts of the services used, quantifying impacts through the entire workload lifecycle, and applying design principles and best practices to reduce these impacts.

Additional information:

Creating a software system is a lot like constructing a building. If the foundation is not solid, structural problems can undermine the integrity and function of the building. When architecting technology solutions on Amazon Web Services (AWS), if you neglect the six pillars of operational excellence, security, reliability, performance efficiency, cost optimization, and sustainability, it can become challenging to build a system that delivers on your expectations and requirements. Incorporating these pillars into your architecture helps produce stable and efficient systems. This allows you to focus on the other aspects of design, such as functional requirements. The AWS Well-Architected Framework helps cloud architects build the most secure, high-performing, resilient, and efficient infrastructure possible for their applications.

15
Q

A company is developing a new application using a microservices framework. The new application is having performance and latency issues. Which AWS Service should be used to troubleshoot these issues?

1) AWS CloudTrail
2) Amazon Inspector
3) AWS CodePipeline
4) AWS X-Ray

A

AWS X-Ray

AWS X-Ray helps developers analyze and debug distributed applications in production or under development, such as those built using microservice architecture. With X-Ray, you can understand how your application and its underlying services are performing so you can identify and troubleshoot the root cause of performance issues and errors. X-Ray provides an end-to-end view of requests as they travel through your application, and shows a map of your application’s underlying components. You can use X-Ray to analyze both applications in development and in production, from simple three-tier applications to complex microservices applications consisting of thousands of services.

16
Q

In your on-premises environment, you can create as many virtual servers as you need from a single template. What can you use to perform the same in AWS?

1) IAM
2) EBS Snapshot
3) AMI
4) An internet gateway

A

AMI

An Amazon Machine Image (AMI) is a template that contains a software configuration (for example, an operating system, an application server, and applications). This pre-configured template saves time and avoids errors when configuring new instances. You specify an AMI when you launch an instance, and you can launch as many instances from that AMI as you need. You can also launch instances from as many different AMIs as you need.

17
Q

Which statements are correct regarding AWS service limits? (Choose TWO)

1) You can contact AWS support to increase the service limits
2) The Amazon Simple Email Service is responsible for sending email notifications when usage approaches a service limit
3) There are no service limits on AWS
4) Each IAM user has the same service limits
5) You can use the AWS Trusted Advisor to monitor your service limits

A

1) You can contact AWS support to increase the service limits
5) You can use the AWS Trusted Advisor to monitor your service limits

Service limits, also referred to as Service quotas, are the maximum number of service resources or operations that apply to an AWS account. Understanding your service limits (and how close you are to them) is an important part of managing your AWS deployments – continuous monitoring allows you to request limit increases or shut down resources before the limit is reached. One of the easiest ways to do this is via AWS Trusted Advisor’s Service Limit Dashboard.

AWS maintains service limits (quotas) for each account to help guarantee the availability of AWS resources, as well as to minimize billing risks for new customers. Some service quotas are raised automatically over time as you use AWS, though most AWS services require that you request quota increases manually. You can request a quota increase using the Service Quotas console or the AWS CLI. AWS Support might approve, deny, or partially approve your requests.

18
Q

Which of the following are perspectives of the AWS Cloud Adoption Framework (AWS CAF)? (Choose TWO)

1) Sustainability
2) Governance
3) Operational Excellence
4) People
5) Performance Efficiency

A

2) Governance
4) People

The AWS Cloud Adoption Framework (AWS CAF) leverages AWS experience and best practices to help you digitally transform and accelerate your business outcomes through innovative use of AWS. AWS CAF identifies specific organizational capabilities that underpin successful cloud transformations. These capabilities provide best practice guidance that helps you improve your cloud readiness. AWS CAF groups its capabilities in six perspectives: Business, People, Governance, Platform, Security, and Operations. Each perspective comprises a set of capabilities that functionally related stakeholders own or manage in the cloud transformation journey.

AWS CAF perspectives: (IMPORTANT)

Business perspective helps ensure that your cloud investments accelerate your digital transformation ambitions and business outcomes.

People perspective serves as a bridge between technology and business, accelerating the cloud journey to help organizations more rapidly evolve to a culture of continuous growth, learning, and where change becomes business-as-normal, with focus on culture, organizational structure, leadership, and workforce.

Governance perspective helps you orchestrate your cloud initiatives while maximizing organizational benefits and minimizing transformation-related risks.

Platform perspective helps you build an enterprise-grade, scalable, hybrid cloud platform, modernize existing workloads, and implement new cloud-native solutions.

Security perspective helps you achieve the confidentiality, integrity, and availability of your data and cloud workloads.

Operations perspective helps ensure that your cloud services are delivered at a level that meets the needs of your business.

19
Q

What is the AWS tool that enables you to use scripts to manage all AWS services and resources?

1) AWS Service Catalog
2) AWS Console
3) Amazon FSx
4) AWS CLI

A

AWS CLI

The AWS Command Line Interface (CLI) is a unified tool to manage your AWS services. With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts.

20
Q

An organization runs many systems and uses many AWS products. Which of the following services enables them to control how each developer interacts with these products?

1) AWS Identity and Access Management
2) Amazon EMR
3) Network Access Control Lists
4) Amazon RDS

A

AWS Identity and Access Management

AWS Identity and Access Management (IAM) is a web service for securely controlling access to AWS services. With IAM, you can centrally manage users, security credentials such as access keys, and permissions that control which AWS resources users and applications can access.

21
Q

An organization needs to analyze and process a large number of data sets. Which AWS service should they use?

1) Amazon SNS
2) Amazon SQS
3) Amazon MQ
4) Amazon EMR

A

Amazon EMR

Amazon EMR (Amazon Elastic MapReduce) is a managed service that helps you analyze and process large volumes of data by distributing computational tasks across a cluster of virtual servers in the AWS Cloud. Amazon EMR supports a range of big data frameworks, including Apache Spark, Apache Hive, and Presto, enabling you to perform large-scale data processing, analytics, and machine learning. Amazon EMR is designed to minimize the complexity of setup, management, and tuning for these frameworks, allowing you to focus on data analysis rather than infrastructure.

22
Q

Which of the following activities may help reduce your AWS monthly costs? (Choose TWO)

1) Removing all of your Cost Allocation Tags
2) Creating a lifecycle policy to move infrequently accessed data to less expensive storage tiers
3) Deploying your AWS resources across multiple Availability Zones
4) Enabling Amazon EC2 Auto Scaling for all of your workloads
5) Using the AWS Network Load Balancer (NLB) to load balance the incoming HTTP requests

A

2) Creating a lifecycle policy to move infrequently accessed data to less expensive storage tiers
4) Enabling Amazon EC2 Auto Scaling for all of your workloads

Amazon EC2 Auto Scaling monitors your applications and automatically adjusts capacity (up or down) to maintain steady, predictable performance at the lowest possible cost. When demand drops, Amazon EC2 Auto Scaling will automatically remove any excess capacity so you avoid overspending. When demand increases, Amazon EC2 Auto Scaling will automatically add capacity to maintain performance.

For Amazon S3 and Amazon EFS, you can create a lifecycle policy to automatically move infrequently accessed data to less expensive storage tiers. In order to reduce your Amazon S3 costs, you should create a lifecycle policy to automatically move old (or infrequently accessed) files to less expensive storage tiers such as Amazon Glacier, or to automatically delete them after a specified duration. Similarly, you can create an Amazon EFS lifecycle policy to automatically move less frequently accessed data to less expensive storage tiers such as Amazon EFS Standard-Infrequent Access (EFS Standard-IA) and Amazon EFS One Zone-Infrequent Access (EFS One Zone-IA). Amazon EFS Infrequent Access storage classes provide price/performance that is cost-optimized for files not accessed every day, with storage prices up to 92% lower compared to Amazon EFS Standard (EFS Standard) and Amazon EFS One Zone (EFS One Zone) storage classes respectively.
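The savings from such a lifecycle policy come down to the per-GB price difference between tiers. The sketch below uses per-GB-month prices that are illustrative assumptions (roughly in line with published us-east-1 list prices, but not quotes):

```python
# Rough arithmetic behind an S3 lifecycle transition; the prices
# below are illustrative assumptions, not AWS price quotes.

PRICE_PER_GB_MONTH = {
    "S3 Standard": 0.023,           # assumed $/GB-month
    "S3 Glacier Flexible": 0.0036,  # assumed $/GB-month
}

def monthly_cost(gb, storage_class):
    return gb * PRICE_PER_GB_MONTH[storage_class]

archive_gb = 5_000  # infrequently accessed data to transition
before = monthly_cost(archive_gb, "S3 Standard")
after = monthly_cost(archive_gb, "S3 Glacier Flexible")
print(f"${before:.2f}/month -> ${after:.2f}/month "
      f"({100 * (1 - after / before):.0f}% lower)")
# → $115.00/month -> $18.00/month (84% lower)
```

The trade-off is retrieval time and retrieval fees, which is why lifecycle policies target data you rarely need to read back.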

23
Q

Savings Plans are available for which of the following AWS compute services? (Choose TWO)

1) AWS Lambda
2) Amazon EC2
3) AWS Outposts
4) AWS Batch
5) Amazon Lightsail

A

1) AWS Lambda
2) Amazon EC2

Savings Plans are a flexible pricing model that offers low prices on Amazon EC2, Lambda, Fargate, and Amazon SageMaker usage, in exchange for a commitment to a consistent amount of usage (measured in $/hour) for a 1 or 3 year term. When you sign up for Savings Plans, you will be charged the discounted Savings Plans price for your usage up to your commitment. For example, if you commit to $10 of compute usage an hour, you will get the Savings Plans prices on that usage up to $10 and any usage beyond the commitment will be charged On Demand rates.

Additional information:

What is the difference between Amazon EC2 Savings Plans and Amazon EC2 Reserved instances?

Reserved Instances are a billing discount applied to the use of On-Demand Compute Instances in your account. These On-Demand Instances must match certain attributes, such as instance type and Region to benefit from the billing discount.

For example, let's say you have a t2.medium instance running as an On-Demand Instance, and you purchase a Reserved Instance that matches the configuration of this particular t2.medium instance. At the time of purchase, the billing mode for the existing instance changes to the Reserved Instance discounted rate. The existing t2.medium instance doesn’t need replacing or migrating to get the discount.

After the reservation expires, the instance is charged as an On-Demand Instance. You can repurchase the Reserved Instance to continue the discounted rate on your instance. Reserved Instances act as an automatic discount on new or existing On-Demand Instances in your account.

Savings Plans also offer significant savings on your Amazon EC2 costs compared to On-Demand Instance pricing. With Savings Plans, you make a commitment to a consistent usage amount, measured in USD per hour. This provides you with the flexibility to use the instance configurations that best meet your needs, instead of making a commitment to a specific instance configuration (as is the case with reserved instances). For example, with Compute Savings Plans, if you commit to $10 of compute usage an hour, you can use as many instances as you need (of any type and in any Region) and you will get the Savings Plans prices on that usage up to $10 and any usage beyond the commitment will be charged On Demand rates.
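The $10/hour commitment example above can be worked through numerically. The billing rule is the one described in the text; the 30% discount rate is an illustrative assumption, not a quoted AWS rate:

```python
# The $10/hour Savings Plans commitment example, worked through.
# DISCOUNT is an assumed illustrative rate, not an AWS price quote.

COMMITMENT = 10.00   # committed Savings Plans spend, $/hour
DISCOUNT = 0.30      # assumed Savings Plans discount vs On-Demand

def hourly_charge(on_demand_usage):
    """Bill one hour of usage (measured in On-Demand dollars):
    discounted Savings Plans rates apply up to the commitment,
    and On-Demand rates apply to anything beyond it."""
    sp_price = on_demand_usage * (1 - DISCOUNT)
    if sp_price <= COMMITMENT:
        return sp_price                         # fully covered
    covered_on_demand = COMMITMENT / (1 - DISCOUNT)
    return COMMITMENT + (on_demand_usage - covered_on_demand)

print(f"${hourly_charge(8):.2f}")   # light hour, fully covered  → $5.60
print(f"${hourly_charge(20):.2f}")  # heavy hour, with overage   → $15.71
```

In the heavy hour, the first ~$14.29 of On-Demand usage is covered by the $10 commitment at the discounted rate, and the remaining usage is billed at On-Demand rates.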

24
Q

What is the primary storage service used by Amazon RDS database instances?

1) Amazon S3
2) AWS Storage Gateway
3) Amazon EBS
4) Amazon FSx

A

Amazon EBS

DB instances for Amazon RDS for MySQL, MariaDB, PostgreSQL, Oracle, IBM Db2, and Microsoft SQL Server use Amazon Elastic Block Store (Amazon EBS) volumes for database and log storage.

Additional information:

EBS volumes are performant for your most demanding workloads, including mission-critical applications such as SAP, Oracle, and Microsoft products. Amazon EBS scales with your performance needs, whether you are supporting millions of gaming customers or billions of e-commerce transactions. A broad range of workloads, such as relational databases (including Amazon RDS databases) and non-relational databases (including Cassandra and MongoDB), enterprise applications, containerized applications, big data analytics engines, file systems, and media workflows are widely deployed on Amazon EBS.

25
Q

Using Amazon EC2 falls under which of the following cloud computing models?

1) IaaS
2) IaaS & SaaS
3) SaaS
4) PaaS

A

IaaS

Infrastructure as a Service (IaaS) contains the basic building blocks for cloud IT and typically provides access to networking features, computers (virtual or on dedicated hardware), and data storage space. IaaS gives you the highest level of flexibility and management control over your IT resources and is most similar to the existing IT resources that many IT departments and developers are familiar with today.

For example, a service such as Amazon Elastic Compute Cloud (Amazon EC2) is categorized as Infrastructure as a Service (IaaS) and requires the customer to perform all of the configuration and management tasks. Customers that deploy an Amazon EC2 instance are responsible for management of the guest operating system (including updates and security patches), any application software or utilities installed by the customer on the instances, and the configuration of the AWS-provided firewall (called a security group) on each instance.

26
Q

Which of the following is a best practice when building applications on AWS?

1) Strengthen physical security by applying the principle of least privilege
2) Use IAM policies to maintain performance
3) Decouple the components of the application so that they run independently
4) Ensure that the application runs on hardware from trusted vendors

A

Decouple the components of the application so that they run independently

An application should be designed in a way that reduces interdependencies between its components. A change or a failure in one component should not cascade to other components. If the components of an application are tightly-coupled (interconnected) and one component fails, the entire application will also fail. Amazon SQS and Amazon SNS are powerful tools that help you build loosely-coupled applications. SQS and SNS can be integrated together to decouple application components so that they run independently, increasing the overall fault tolerance of the application.

Understanding how the SQS and SNS services work is not required at the Cloud Practitioner level, but consider a simple example: say you have two components in your application, Component A and Component B. Component A sends messages (jobs) to Component B to process. Now, what happens if Component A sends a large number of messages at the same time? Component B will fail, and the entire application will fail. SQS acts as a middleman: it receives and stores messages from Component A, and Component B pulls and processes messages at its own pace. This way, both components run independently of each other.
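The decoupling idea can be sketched locally. This is only an illustration — Python's standard-library `queue.Queue` stands in for SQS (in AWS you would use boto3's `send_message`/`receive_message`), and the component names are hypothetical:

```python
import queue

# Local stand-in for SQS: the queue buffers messages so the producer and
# consumer never have to run at the same pace.
buffer = queue.Queue()

def component_a(jobs):
    """Producer: sends a burst of messages without waiting for B."""
    for job in jobs:
        buffer.put(job)

def component_b(batch_size=2):
    """Consumer: pulls and processes messages at its own pace."""
    processed = []
    while not buffer.empty() and len(processed) < batch_size:
        processed.append(buffer.get().upper())  # "processing" = uppercasing
    return processed

component_a(["resize img1", "resize img2", "resize img3"])
first_batch = component_b()   # B handles only what it can; the rest waits
```

Even though A sent three jobs at once, B processed only two and the third remains safely buffered — neither component fails.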

27
Q

Which of the following can help protect your EC2 instances from DDoS attacks? (Choose TWO)

1) Security Groups
2) AWS IAM
3) AWS Batch
4) AWS CloudHSM
5) Network Access Control Lists (Network ACLs)

A

1) Security Groups
5) Network Access Control Lists (Network ACLs)

Malicious actors sometimes use distributed denial of service (DDoS) attacks to flood a network, system, or application with more traffic, connections, or requests than it can handle.

When dealing with DDoS attacks, it is important to minimize the opportunities an attacker has to target your applications. This means restricting the type of traffic that can reach your applications. Configuring security groups and network ACLs in Amazon VPC is an effective way to filter traffic and reduce the attack surface of your applications.

Security groups allow you to control inbound and outbound traffic to your Amazon EC2 instances by specifically allowing communication only on the ports and protocols required for your applications. Access to any other port or protocol is automatically denied.

Network ACLs provide an additional layer of defense for your VPC by allowing you to create allow and deny rules that are processed in numeric order, much like a traditional firewall. This is useful for allowing or denying traffic at a subnet level, as opposed to security groups that filter traffic at an EC2 instance level. For example, if you have identified Internet IP addresses or ranges that are unwanted or potentially abusive, you can block them from reaching your application with a Network ACL deny rule.
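As a rough sketch of what such rules look like, here are two rule definitions written as Python dicts in the shape boto3 expects (`ec2.authorize_security_group_ingress` and `ec2.create_network_acl_entry`). The CIDR ranges and descriptions are hypothetical, and nothing is sent to AWS:

```python
# Security group: allow ONLY HTTPS in; every other port/protocol is
# implicitly denied. (Shape of boto3's IpPermissions parameter.)
https_only_ingress = [{
    "IpProtocol": "tcp",
    "FromPort": 443,
    "ToPort": 443,
    "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTPS from anywhere"}],
}]

# Network ACL: explicitly DENY a known-abusive range at the subnet level.
# Rules are evaluated in RuleNumber order, like a traditional firewall.
deny_abusive_range = {
    "RuleNumber": 90,                # evaluated before higher-numbered allows
    "Protocol": "-1",                # all protocols
    "RuleAction": "deny",
    "Egress": False,                 # inbound rule
    "CidrBlock": "198.51.100.0/24",  # example (documentation) range
}
```

Note the contrast the answer describes: the security group only *allows* (no deny rules exist), while the network ACL can hold an explicit *deny*.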

Additional information:

AWS does not configure security groups or Network ACLs to protect you from DDoS attacks. It is the responsibility of the customer to set the appropriate Network ACL and security group rules to protect from these attacks and secure their network.

In addition to Security Groups and Network ACLs, AWS provides flexible infrastructure and services that help customers implement strong DDoS mitigations and create highly available application architectures that follow AWS Best Practices for DDoS Resiliency. These include services such as Amazon Route 53, Amazon CloudFront, Elastic Load Balancing, and AWS WAF to control and absorb traffic, and deflect unwanted requests. These services integrate with AWS Shield, a managed DDoS protection service that provides always-on detection and automatic inline mitigations to safeguard web applications running on AWS.

28
Q

Sarah has deployed an application in the Northern California (us-west-1) region. After examining the application’s traffic, she notices that about 30% of the traffic is coming from Asia. What can she do to reduce latency for the users in Asia?

1) Recreate the website content
2) Replicate the current resources across multiple Availability Zones within the same region
3) Migrate the application to a hosting provider in Asia
4) Create a CDN using CloudFront, so that content is cached at Edge Locations close to and in Asia

A

Create a CDN using CloudFront, so that content is cached at Edge Locations close to and in Asia

CloudFront is AWS’s content delivery network (CDN) service. Amazon CloudFront employs a global network of edge locations and regional edge caches that cache copies of your content close to your end-users. Amazon CloudFront ensures that end-user requests are served by the closest edge location. As a result, end-user requests travel a short distance, reducing latency and improving the overall performance.

29
Q

What is the AWS service that enables you to manage all of your AWS accounts from a single management account?

1) AWS Trusted Advisor
2) AWS Config
3) AWS Organizations
4) AWS WAF

A

AWS Organizations

AWS Organizations is an account management service that enables you to consolidate multiple AWS accounts into an organization that you create and centrally manage.

AWS Organizations enables the following capabilities:

1- Automate AWS account creation and management

2- Consolidate billing across multiple AWS accounts

3- Govern access to AWS services, resources, and regions

4- Centrally manage access policies across multiple AWS accounts

5- Configure AWS services across multiple accounts

AWS Organizations is offered at no additional charge.

30
Q

Which of the following is one of the benefits of moving infrastructure from an on-premises data center to AWS?

1) AWS holds responsibility for managing customer applications
2) Free support for all enterprise customers
3) Automatic data protection
4) Reduced Capital Expenditure (CapEx)

A

Reduced Capital Expenditure (CapEx)

31
Q

What are two advantages of using Cloud Computing over using traditional data centers? (Choose TWO)

1) Distributed infrastructure
2) Reserved Compute capacity
3) Virtualized compute resources
4) Dedicated hosting
5) Eliminating Single Points of Failure (SPOFs)

A

1) Distributed infrastructure
5) Eliminating Single Points of Failure (SPOFs)

These are things that traditional web hosting cannot provide:

**High availability (eliminating single points of failure): A system is highly available when it can withstand the failure of an individual component or multiple components, such as hard disks, servers, and network links. The best way to understand and avoid single points of failure is to begin by listing all the major points of your architecture, breaking them down, and understanding each one further. Then review each point and consider what would happen if it failed. AWS gives you the opportunity to automate recovery and reduce disruption at every layer of your architecture.

Additionally, AWS provides fully managed services that enable customers to offload the administrative burdens of operating and scaling the infrastructure to AWS so that they don’t have to worry about high availability or Single Point of Failures. For example, AWS Lambda and DynamoDB are serverless services; there are no servers to provision, patch, or manage and no software to install, maintain, or operate. Availability and fault tolerance are built-in, eliminating the need to architect your applications for these capabilities.

**Distributed infrastructure: The AWS Cloud operates in over 75 Availability Zones within over 20 geographic Regions around the world, with announced plans for more Availability Zones and Regions, allowing you to reduce latency to users from all around the world.

**On-demand infrastructure for scaling applications or tasks: AWS allows you to provision the required resources for your application in minutes and also allows you to stop them when you don’t need them.

**Cost savings: You don’t have to run your own data center for internal or private servers, so your IT department doesn’t have to make bulk purchases of servers which may never get used, or may be inadequate. The “pay as you go” model from AWS allows you to pay only for what you use and the ability to scale down to avoid over-spending. With AWS you don’t have to pay an entire IT department to maintain that hardware – you don’t even have to pay an accountant to figure out how much hardware you can afford or how much you need to purchase.

32
Q

What are the Amazon RDS features that can be used to improve the availability of your database? (Choose TWO)

1) Multi-AZ Deployment
2) Automatic patching
3) Edge Locations
4) Read Replicas
5) AWS Regions

A

1) Multi-AZ Deployment
4) Read Replicas

In a Multi-AZ deployment, Amazon RDS automatically provisions and maintains a synchronous standby replica in a different Availability Zone. The primary DB instance is synchronously replicated across Availability Zones to a standby replica to provide data redundancy, eliminate I/O freezes, and minimize latency spikes during system backups. Running a DB instance with high availability can enhance availability during planned system maintenance, and help protect your databases against DB instance failure and Availability Zone disruption.

Amazon RDS Read Replicas provide enhanced performance and durability for database (DB) instances. This feature makes it easy to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads. You can create one or more replicas of a given source DB Instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput.

Read replicas provide a complementary availability mechanism to Amazon RDS Multi-AZ Deployments. You can promote a read replica if the source DB instance fails. You can also replicate DB instances across AWS Regions as part of your disaster recovery strategy. This functionality complements the synchronous replication, automatic failure detection, and failover provided with Multi-AZ deployments.
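A sketch of how the two features are requested — parameter dicts in the shape boto3's `rds.create_db_instance` and `rds.create_db_instance_read_replica` accept. The identifiers and instance class are hypothetical, and the calls are not executed here:

```python
# Multi-AZ primary: RDS keeps a synchronous standby in another AZ.
primary = {
    "DBInstanceIdentifier": "orders-db",      # hypothetical name
    "Engine": "mysql",
    "DBInstanceClass": "db.m5.large",
    "AllocatedStorage": 100,
    "MultiAZ": True,   # synchronous standby replica for availability
}

# Read replica: asynchronous copy that serves read traffic and can be
# promoted if the source DB instance fails.
replica = {
    "DBInstanceIdentifier": "orders-db-replica-1",
    "SourceDBInstanceIdentifier": "orders-db",
}
```

The key distinction for the exam: `MultiAZ` gives a synchronous standby for failover, while the replica is an asynchronous copy that scales reads and can be promoted in a disaster.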

33
Q

A company has deployed a new web application on multiple Amazon EC2 instances. Which of the following should they use to ensure that the incoming HTTP traffic is distributed evenly across the instances?

1) AWS Network Load Balancer
2) AWS Application Load Balancer
3) AWS Auto Scaling
4) AWS Gateway Load Balancer

A

AWS Application Load Balancer

Elastic Load Balancing automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, IP addresses, and Lambda functions. Elastic Load Balancing supports four types of load balancers (Application Load Balancer, Network Load Balancer, Gateway Load Balancer, and Classic Load Balancer). You can select the appropriate load balancer based on your application needs.

1- If you need to load balance HTTP/HTTPS requests, AWS recommends using the AWS Application Load Balancer.

2- For network/transport protocol (layer 4 – TCP, UDP) load balancing and for extreme performance/low latency applications, AWS recommends using the AWS Network Load Balancer.

3- To manage and distribute traffic across multiple third-party virtual appliances, AWS recommends using the AWS Gateway Load Balancer.

4- If you have an existing application built within the EC2-Classic network, you should use the AWS Classic Load Balancer.

Application Load Balancer is best suited for load balancing of HTTP and HTTPS traffic. In our case, the application receives HTTP traffic. Hence, the Application Load Balancer is the correct answer here.
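"Distributed evenly" can be pictured with a tiny simulation of round robin, the default routing algorithm for ALB target groups. The instance IDs are hypothetical and this is only a model of the behavior, not the service itself:

```python
from itertools import cycle
from collections import Counter

targets = ["i-0aaa", "i-0bbb", "i-0ccc"]   # hypothetical EC2 instance IDs
rotation = cycle(targets)                   # round robin over healthy targets

# Route 90 incoming HTTP requests and count where each one lands.
hits = Counter(next(rotation) for _ in range(90))
```

Each target receives exactly a third of the traffic — the even distribution the question asks for.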

34
Q

You are working on two projects that require completely different network configurations. Which AWS service or feature will allow you to isolate resources and network configurations?

1) Virtual Private Cloud
2) Security Groups
3) Amazon CloudFront
4) Internet gateways

A

Virtual Private Cloud

Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define. You have complete control over your virtual networking environment, including selection of the IP address range, creation of subnets, and configuration of route tables and network gateways.
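The "isolated network with your own IP range and subnets" idea can be sketched with the standard-library `ipaddress` module. The CIDR choices are hypothetical — one non-overlapping /16 per project, with subnets carved out of each:

```python
import ipaddress

# Two projects, two isolated address spaces (hypothetical CIDR choices).
vpc_a = ipaddress.ip_network("10.0.0.0/16")
vpc_b = ipaddress.ip_network("10.1.0.0/16")

# Carve VPC A into /24 subnets, e.g. one per Availability Zone.
subnets_a = list(vpc_a.subnets(new_prefix=24))

isolated = not vpc_a.overlaps(vpc_b)   # the ranges don't overlap
```

Because the two ranges never overlap, resources in one project's VPC cannot collide with — and by default cannot reach — the other's.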

“Security Groups” is incorrect. Security Groups are used to control traffic.

“Internet gateways” is incorrect. An internet gateway is a VPC component that allows communication between your VPC and the internet.

“Amazon CloudFront” is incorrect. Amazon CloudFront is a Content Delivery Network.

35
Q

What is the AWS serverless service that allows you to run your applications without any administrative burden?

1) Amazon RDS instances
2) Amazon Lightsail
3) AWS Lambda
4) Amazon EC2 instances

A

AWS Lambda

AWS Lambda is an AWS-managed compute service. It lets you run code without provisioning or managing servers. With Lambda, you can run code for virtually any type of application or backend service - all with zero administration. Just upload your code, and Lambda takes care of everything required to run and scale your code with high availability. You pay only for the compute time you consume - there is no charge when your code is not running.
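A minimal sketch of the programming model: you write a handler, and Lambda invokes it per event. Here we call the handler directly to show there is nothing else to manage; the event shape and names are hypothetical:

```python
import json

def lambda_handler(event, context):
    """Minimal Lambda-style handler: AWS invokes this per event; you never
    provision or patch the server it runs on."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Locally we can invoke the handler directly; in AWS, the service does this
# in response to events (API calls, S3 uploads, queue messages, ...).
response = lambda_handler({"name": "Sarah"}, context=None)
```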

36
Q

Which of the following AWS services can be used as a compute resource? (Choose TWO)

1) AWS Lambda
2) Amazon VPC
3) Amazon CloudWatch
4) Amazon EC2
5) Amazon S3

A

1) AWS Lambda
4) Amazon EC2

AWS Lambda is a Serverless computing service. Serverless computing allows you to build and run applications and services without thinking about servers. With serverless computing, your application still runs on servers, but all the server management is done by AWS.

Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. Unlike AWS Lambda, Amazon EC2 is a server-based computing service; the customer is responsible for performing all server configuration and management tasks.

37
Q

A company has business critical workloads hosted on AWS and they are unwilling to accept any downtime. Which of the following is a recommended best practice to protect their workloads in the event of an unexpected natural disaster?

1) Replicate data across multiple Edge Locations worldwide and use Amazon CloudFront to perform automatic failover in the event of an outage
2) Create point-in-time backups in another subnet and recover this data when a disaster occurs
3) Deploy AWS resources to another AWS Region and implement an Active-Active disaster recovery strategy
4) Deploy AWS resources across multiple Availability Zones within the same AWS Region

A

Deploy AWS resources to another AWS Region and implement an Active-Active disaster recovery strategy

Disaster recovery is about preparing for and recovering from events that have a negative impact on your business continuity or finances. This could be a natural disaster, hardware or software failure, a network outage, a power outage, physical damage to a building like fire or flooding, or some other significant disaster.

In AWS, customers have the flexibility to choose the disaster recovery approach that fits their budget. The approaches range from a simple backup and restore from another AWS Region to a full-scale multi-Region Active-Active solution.

With the multi-region Active-Active solution, your workload is deployed to, and actively serving traffic from, multiple AWS Regions. If an entire Region goes down because of a natural disaster or any other reason, the other Regions will still be available and able to serve user requests.

38
Q

Which of the following AWS offerings is a MySQL-compatible relational database service that can scale capacity automatically based on demand?

1) Amazon Neptune
2) Amazon Aurora
3) Amazon RDS for PostgreSQL
4) Amazon RDS for SQL Server

A

Amazon Aurora

Amazon Aurora is a MySQL and PostgreSQL compatible relational database built for the cloud, that combines the performance and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. Aurora is up to five times faster than standard MySQL databases and three times faster than standard PostgreSQL databases. It provides the security, availability, and reliability of commercial-grade databases at 1/10th the cost. Aurora is fully managed by Amazon Relational Database Service (RDS), which automates time-consuming administration tasks like hardware provisioning, database setup, patching, and backups.

Amazon Aurora features “Amazon Aurora Serverless” which is an on-demand, auto-scaling configuration for Amazon Aurora (MySQL-compatible and PostgreSQL-compatible editions), where the database will automatically start up, shut down, and scale capacity up or down based on your application’s needs.

39
Q

What does Amazon Elastic Beanstalk provide?

1) A PaaS solution to automate application deployment
2) A compute engine for Amazon ECS
3) A NoSQL database service
4) A scalable file storage solution for use with AWS and on-premises servers

A

A PaaS solution to automate application deployment

AWS Elastic Beanstalk is a Platform as a Service (PaaS) offering that runs on top of Amazon Web Services. Elastic Beanstalk makes it easy for developers to quickly deploy and manage applications in the AWS Cloud. Developers simply upload their application code, and Elastic Beanstalk automatically handles the deployment details of capacity provisioning, load balancing, auto-scaling, and application health monitoring.

40
Q

Which of the following is equivalent to a user name and password and is used to authenticate your programmatic access to AWS services and APIs?

1) MFA
2) Instance Password
3) Access Keys
4) Key pairs

A

Access Keys

Access keys consist of two parts: an access key ID and a secret access key. You must provide your AWS access keys to make programmatic requests to AWS or to use the AWS Command Line Interface or AWS Tools for PowerShell. Like a user name and password, you must use both the access key ID and secret access key together to authenticate your requests.
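The reason both halves are needed can be illustrated with a simplified signing sketch using the standard library. This is only the core idea — real AWS Signature Version 4 signing is more involved (derived keys, a canonical request, date scoping) — and the key values are the standard AWS documentation examples, not real credentials:

```python
import hmac
import hashlib

access_key_id = "AKIAIOSFODNN7EXAMPLE"          # public half, sent with the request
secret_access_key = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"  # private half, never sent

request = "GET /my-bucket/photo.jpg"            # hypothetical request to sign

# The secret key signs the request. AWS looks up the secret it has on file
# for this access key ID, recomputes the same HMAC, and compares — so only
# the holder of BOTH parts can produce a valid signature.
signature = hmac.new(secret_access_key.encode(),
                     request.encode(),
                     hashlib.sha256).hexdigest()
```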

41
Q

Which of the following procedures will help reduce your Amazon S3 costs?

1) Use the Import/Export feature to move old files automatically to Amazon Glacier
2) Move all the data stored in S3 standard to EBS
3) Use the right combination of storage classes based on different use cases
4) Pick the right Availability Zone for your S3 bucket

A

Use the right combination of storage classes based on different use cases

Amazon S3 offers a range of storage classes designed for different use cases. These include S3 Standard for general-purpose storage of frequently accessed data; S3 Intelligent-Tiering for data with unknown or changing access patterns; S3 Standard-Infrequent Access (S3 Standard-IA) and S3 One Zone-Infrequent Access (S3 One Zone-IA) for long-lived, but less frequently accessed data; and Amazon S3 Glacier Instant Retrieval, Amazon S3 Glacier Flexible Retrieval, and Amazon S3 Glacier Deep Archive for long-term archive and digital preservation.
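Mixing storage classes is typically automated with a lifecycle configuration. As an illustrative sketch, here is one written as a Python dict in the shape boto3's `s3.put_bucket_lifecycle_configuration` accepts — the rule ID, prefix, and day counts are hypothetical, and nothing is sent to AWS:

```python
# Transition objects to cheaper storage classes as they age.
lifecycle = {
    "Rules": [{
        "ID": "archive-old-media",            # hypothetical rule name
        "Filter": {"Prefix": "media/"},       # applies only to this prefix
        "Status": "Enabled",
        "Transitions": [
            {"Days": 30,  "StorageClass": "STANDARD_IA"},   # infrequent access
            {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},  # long-term archive
        ],
    }]
}
```

After 30 days objects move to Standard-IA, and after a year to Glacier Deep Archive — each step cheaper per GB than the last.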

42
Q

Which of the following aspects of security are managed by AWS? (Choose TWO)

1) Access permissions
2) Encryption of EBS volumes
3) Hardware patching
4) VPC security
5) Securing global physical infrastructure

A

3) Hardware patching
5) Securing global physical infrastructure

AWS is continuously innovating the design and systems of its data centers to protect them from man-made and natural risks. For example, at the first layer of security, AWS provides a number of security features depending on the location, such as security guards, fencing, security feeds, intrusion detection technology, and other security measures.

According to the Shared Responsibility Model, patching of the underlying hardware is AWS's responsibility. AWS is responsible for patching and fixing flaws within the infrastructure, but customers are responsible for patching their guest OS and applications.

43
Q

What is the AWS data warehouse service that supports a high level of query performance on large amounts of datasets?

1) Amazon DynamoDB
2) Amazon RDS
3) Amazon Kinesis
4) Amazon Redshift

A

Amazon Redshift

Amazon Redshift is a fully managed, petabyte-scale data warehouse service in the cloud. It allows you to run complex analytic queries against petabytes of structured and semi-structured data. You can start with just a few hundred gigabytes of data and scale to a petabyte or more. Amazon Redshift manages the work needed to set up, operate, and scale a data warehouse, from provisioning the infrastructure capacity to automating ongoing administrative tasks such as backups, and patching.

Amazon Redshift offers two main options: provisioned and serverless.

Amazon Redshift Serverless: With this option, you can quickly set up and scale your data warehouse in seconds. There’s no need to configure or maintain the infrastructure. Simply create a Redshift Serverless endpoint, load your data, and start querying to gain insights.

Provisioned Redshift: This option is suitable for predictable workloads that benefit from dedicated infrastructure. It offers more control over resource allocation and can be optimized for stable, long-term use.

Amazon Redshift’s flexibility allows you to select the option that best matches your needs, making it easier to run analytics efficiently, whether for variable or consistent workloads.

44
Q

What is the AWS service that performs automated network assessments of Amazon EC2 instances to check for vulnerabilities?

1) Amazon Kinesis
2) AWS Network Access Control Lists
3) Security groups
4) Amazon Inspector

A

Amazon Inspector

Amazon Inspector is an automated vulnerability management service that continually scans Amazon Elastic Compute Cloud (EC2) instances, AWS Lambda functions, and container images in Amazon ECR and within continuous integration and continuous delivery (CI/CD) tools, in near-real time for software vulnerabilities and unintended network exposure.

45
Q

AWS Compute Optimizer provides rightsizing recommendations for which of the following AWS resources?

1) AWS STS credentials
2) Amazon Lightsail virtual private servers
3) Amazon Elastic Compute Cloud (EC2) instances
4) Amazon S3 buckets

A

Amazon Elastic Compute Cloud (EC2) instances

Right-sizing is the process of matching compute resources to your workload performance and capacity requirements at the lowest possible cost. AWS Compute Optimizer recommends the optimal AWS compute resources for your workloads to reduce costs and improve performance. AWS Compute Optimizer helps you identify the optimal AWS resource configurations, such as Amazon Elastic Compute Cloud (EC2) instance types, Amazon Elastic Block Store (EBS) volume configurations, task sizes of Amazon Elastic Container Service (ECS) services on AWS Fargate, and AWS Lambda function memory sizes, using machine learning to analyze historical utilization metrics.

Picking an Amazon EC2 instance for a given workload means finding the instance family that most closely matches the CPU, disk I/O, and memory needs of your workload. Amazon EC2 provides a wide selection of instances, which gives you lots of flexibility to right-size your resources to match capacity needs at the lowest cost.

AWS Compute Optimizer helps avoid over-provisioning and under-provisioning four types of AWS resources:

1- Amazon Elastic Compute Cloud (EC2) instance types

2- Amazon Elastic Block Store (EBS) volumes

3- Amazon Elastic Container Service (ECS) services on AWS Fargate

4- AWS Lambda functions

Additional information:

AWS Cost Explorer can also be used to find Amazon EC2 rightsizing recommendations. AWS Cost Explorer and AWS Compute Optimizer use the same rightsizing recommendation engine.

46
Q

Under the Shared Responsibility Model, which of the following controls do customers fully inherit from AWS? (Choose TWO)

1) Patch management controls
2) Environmental controls
3) Awareness & Training
4) Database controls
5) Physical controls

A

2) Environmental controls
5) Physical controls

AWS is responsible for physical controls and environmental controls. Customers inherit these controls from AWS.

As mentioned in the AWS Shared Responsibility Model page, Inherited Controls are controls which a customer fully inherits from AWS such as physical controls and environmental controls.

As a customer deploying an application on AWS infrastructure, you inherit security controls pertaining to the AWS physical, environmental and media protection, and no longer need to provide a detailed description of how you comply with these control families.

For example: Let’s say you have built an application in AWS for customers to securely store their data. But your customers are concerned about the security of the data and ensuring compliance requirements are met. To address this, you assure your customer that “our company does not host customer data in its corporate or remote offices, but rather in AWS data centers that have been certified to meet industry security standards.” That includes physical and environmental controls to secure the data, which is the responsibility of Amazon. Companies do not have physical access to the AWS data centers, and as such, they fully inherit the physical and environmental security controls from AWS.

You can read more about AWS’ data center controls here:

https://aws.amazon.com/compliance/data-center/controls/

47
Q

A company is migrating its on-premises database to Amazon RDS. What should the company do to ensure Amazon RDS costs are kept to a minimum?

1) Use a Multi-Region Active-Passive architecture
2) Combine On-demand Capacity Reservations with Saving Plans
3) Right-size before and after migration
4) Use a Multi-Region Active-Active architecture

A

Right-size before and after migration

Right-sizing is the process of matching instance types and sizes to your workload performance and capacity requirements at the lowest possible cost. By right-sizing before migration, you can significantly reduce your infrastructure costs. If you skip right-sizing to save time, your migration speed might be faster, but you will end up with higher cloud infrastructure spend for a potentially long time.

Because your resource needs are always changing, right-sizing must become an ongoing process to continually achieve cost optimization. It’s important to right-size when you first consider moving to the cloud and calculate the total cost of ownership. However, it’s equally important to right-size periodically once you’re in the cloud to ensure ongoing cost-performance optimization.

Picking an Amazon RDS instance for a given workload means finding the instance family that most closely matches the CPU, disk I/O, and memory needs of your workload. Amazon RDS provides a wide selection of instances, which gives you lots of flexibility to right-size your resources to match capacity needs at the lowest cost.

48
Q

Which of the following services will help businesses ensure compliance in AWS?

1) CloudWatch
2) CloudFront
3) AWS Application Migration Service
4) CloudTrail

A

CloudTrail

AWS CloudTrail is designed to log all actions taken in your AWS account. This provides a great resource for governance, compliance, and risk auditing.

49
Q

A company has created a solution that helps AWS customers improve their architectures on AWS. Which AWS Partner Path may support this company?

1) APN Services Path
2) APN Training Path
3) APN Distribution Path
4) APN Hardware Path

A

APN Services Path

The AWS Partner Network (APN) is a global community of partners that leverages programs, expertise, and resources to build, market, and sell customer offerings. This diverse network features 100,000 partners from more than 150 countries. As an AWS Partner, you are uniquely positioned to help customers take full advantage of all that AWS has to offer and accelerate their journey to the cloud.

Together, partners and AWS can provide innovative solutions, solve technical challenges, win deals, and deliver value to our mutual customers.

AWS Partner Paths provide a flexible way to accelerate your engagement with AWS. Easily navigate through resources, benefits, and programs relevant to your business. For example, if you provide consulting services (like helping AWS customers improve their architectures, improve performance, or reduce costs), you should enroll in the Services Path. The Services Path is for organizations that deliver consulting, professional, managed, and value-added resale services.

AWS Partner Paths:

1- Software Path

The Software Path is for organizations that develop software that runs on or is integrated with AWS.

2- Hardware Path

The Hardware Path is for organizations that develop hardware devices that work with AWS.

3- Services Path

The Services Path is for organizations that deliver consulting, professional, managed, and value-added resale services.

4- Training Path

The Training Path is for organizations that sell, deliver, or incorporate AWS training.

5- Distribution Path

The Distribution Path is for organizations that recruit, onboard, and support their partners to resell and develop AWS solutions.

50
Q

Which of the following services can help protect your web applications from SQL injection and other vulnerabilities in your application code?

1) Amazon Aurora
2) AWS IAM
3) Amazon Cognito
4) AWS WAF

A

AWS WAF

AWS WAF (Web Application Firewall) helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources. You can use AWS WAF to create custom rules that block common attack patterns, such as SQL injection or cross-site scripting, and rules that are designed for your specific application.
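As a toy illustration of the kind of pattern matching such a rule performs — real AWS WAF SQL-injection detection is far more sophisticated than this, and the pattern here is deliberately naive — requests are inspected *before* they reach the application:

```python
import re

# Toy rule: flag classic SQL-injection fragments in a query-string value.
SQLI_PATTERN = re.compile(r"('|--|;|\b(OR|UNION|DROP|SELECT)\b)", re.IGNORECASE)

def inspect(query_value: str) -> str:
    """Return a WAF-style verdict for one request parameter."""
    return "BLOCK" if SQLI_PATTERN.search(query_value) else "ALLOW"

verdicts = [inspect("user42"), inspect("1' OR '1'='1")]
```

The benign value passes through, while the classic `' OR '1'='1` probe is blocked before the application's (possibly vulnerable) code ever sees it.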

51
Q

According to the AWS Shared responsibility model, which of the following are the responsibility of the customer? (Choose TWO)

1) Controlling physical access to AWS Regions
2) Managing environmental events of AWS data centers
3) Ensuring that the underlying EC2 host is configured properly
4) Protecting the confidentiality of data in transit in Amazon S3
5) Patching applications installed on Amazon EC2

A

4) Protecting the confidentiality of data in transit in Amazon S3
5) Patching applications installed on Amazon EC2

Data protection refers to protecting data while in-transit (as it travels to and from Amazon S3) and at rest (while it is stored on disks in AWS data centers). AWS customers are responsible for protecting data in-transit using Secure Socket Layer/Transport Layer Security (SSL/TLS) or client-side encryption.

Patch management is a shared control between AWS and the customer. AWS is responsible for patching the underlying hosts, updating the firmware, and fixing flaws within the infrastructure, but customers are responsible for patching their guest operating system and applications.

52
Q

What does Amazon ElastiCache provide?

1) In-memory caching for read-heavy applications
2) An online software store that allows Customers to launch pre-configured software with just few clicks
3) A domain name system in the cloud
4) An Ehcache compatible in-memory data store

A

In-memory caching for read-heavy applications

ElastiCache is a web service that makes it easy to set up, manage, and scale a distributed in-memory data store or cache environment in the cloud. It provides a high-performance, scalable, and cost-effective caching solution, while removing the complexity associated with deploying and managing a distributed cache environment. The in-memory caching provided by Amazon ElastiCache can be used to significantly improve latency and throughput for many read-heavy applications (such as social networking, gaming, media sharing and Q&A portals) or compute-intensive workloads (such as a recommendation engine).

In-memory caching improves application performance by storing critical pieces of data in memory for low-latency access. Cached information may include the results of common database queries or the results of computationally-intensive calculations.

Additional information:

The primary purpose of an in-memory data store is to provide ultrafast (submillisecond latency) and inexpensive access to copies of data. Querying a database is always slower and more expensive than locating a copy of that data in a cache. Some database queries are especially expensive to perform. An example is queries that involve joins across multiple tables or queries with intensive calculations. By caching (storing) such query results, you pay the price of the query only once. Then you can quickly retrieve the data multiple times without having to re-execute the query.
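The cache-aside pattern described above can be sketched in plain Python (a minimal sketch: a dict stands in for the ElastiCache node, and a deliberately slow function stands in for the database; all names are illustrative, and a real deployment would use a Redis or Memcached client against an ElastiCache endpoint):

```python
import time

# Illustrative stand-in for an expensive database query.
def run_expensive_query(key):
    time.sleep(0.01)  # simulate query latency
    return f"result-for-{key}"

cache = {}  # stands in for an in-memory cache node

def get_with_cache(key):
    # Cache-aside: check the cache first, fall back to the "database",
    # then store the result so later reads skip the query entirely.
    if key in cache:
        return cache[key]
    value = run_expensive_query(key)
    cache[key] = value
    return value

print(get_with_cache("user:42"))  # miss: pays the query cost once
print(get_with_cache("user:42"))  # hit: served from memory
```

You pay the price of the query on the first lookup only; every subsequent read is a sub-millisecond memory access.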

53
Q

A company has a large amount of structured data stored in their on-premises data center. They are planning to migrate all the data to AWS. What is the most appropriate AWS database option?

1) Amazon DynamoDB
2) Amazon RDS
3) Amazon ElastiCache
4) Amazon SNS

A

Amazon RDS

Since the data is structured, it is best to use a relational database service such as Amazon RDS.

54
Q

Which of the following describes the payment model that AWS makes available for customers who consistently use Amazon EC2 over a 3-year term to reduce their total computing costs?

1) Save when you commit
2) Pay as you go
3) Pay less as AWS grows
4) Pay less by using more

A

Save when you commit

For customers who can commit to using EC2 over a one- or three-year term, it is better to use Amazon EC2 Reserved Instances or AWS Savings Plans. Reserved Instances and AWS Savings Plans provide a significant discount (up to 72%) compared to On-Demand Instance pricing.

55
Q

What are the AWS services/features that can help you maintain a highly available and fault-tolerant architecture in AWS? (Choose TWO)

1) AWS Direct Connect
2) Amazon EC2 Auto Scaling
3) Elastic Load Balancer
4) CloudFormation
5) Network ACLs

A

2) Amazon EC2 Auto Scaling
3) Elastic Load Balancer

Amazon EC2 Auto Scaling is a fully managed service designed to launch or terminate Amazon EC2 instances automatically to help ensure you have the correct number of Amazon EC2 instances available to handle the load for your application. Amazon EC2 Auto Scaling helps you maintain application availability and fault tolerance through fleet management for EC2 instances, which detects and replaces unhealthy instances, and by scaling your Amazon EC2 capacity automatically according to conditions you define. You can use Amazon EC2 Auto Scaling to automatically increase the number of Amazon EC2 instances during demand spikes to maintain performance and decrease capacity during lulls to reduce costs.

Elastic Load Balancing provides an effective way to increase the availability and fault tolerance of a system. To discover the availability of your EC2 instances, the load balancer periodically sends pings, attempts connections, or sends requests to test them. These tests are called health checks. The load balancer routes user requests only to the healthy instances. When the load balancer determines that an instance is unhealthy, it stops routing requests to that instance. The load balancer resumes routing requests to the instance when it has been restored to a healthy state.
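This health-check behavior can be sketched with a toy model (the instance IDs and fleet structure are invented for illustration; real ELB health checks and routing happen at the network layer, not in your application code):

```python
# Toy model of ELB target selection: route requests only to targets
# that the health checks have marked healthy.
instances = {"i-aaa": True, "i-bbb": False, "i-ccc": True}  # id -> healthy?

def healthy_targets(fleet):
    # Only instances passing their health checks receive traffic.
    return sorted(iid for iid, ok in fleet.items() if ok)

def route_request(fleet, n):
    # Simple round-robin across the healthy targets.
    targets = healthy_targets(fleet)
    return targets[n % len(targets)]

print(route_request(instances, 0))  # i-aaa
print(route_request(instances, 1))  # i-ccc (i-bbb is skipped while unhealthy)

# When i-bbb passes its health checks again, routing to it resumes:
instances["i-bbb"] = True
print(healthy_targets(instances))
```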

56
Q

What are the default security credentials that are required to access the AWS management console for an IAM user account?

1) Security tokens
2) MFA
3) Access keys
4) A user name and password

A

A user name and password

The AWS Management Console allows you to access and manage Amazon Web Services through a simple and intuitive web-based user interface. You can only access the AWS Management Console if you have a valid user name and password.

57
Q

Based on the AWS Shared Responsibility Model, which of the following are the sole responsibility of AWS? (Choose TWO)

1) Installing software on EC2 instances
2) Hardware maintenance
3) Monitoring network performance
4) Creating hypervisors
5) Configuring Access Control Lists (ACLs)

A

2) Hardware maintenance
4) Creating hypervisors

AWS is responsible for items such as the physical security of its data centers, creating hypervisors, replacement of old disk drives, and patch management of the infrastructure.

The customers are responsible for items such as building application schema, analyzing network performance, configuring security groups and network ACLs and encrypting their data.

58
Q

Which of the following AWS security features is associated with an EC2 instance and functions to filter incoming traffic requests?

1) Network ACL
2) AWS Systems Manager Session Manager
3) Security Groups
4) VPC Flow logs

A

Security Groups

Security Groups act as a firewall for associated Amazon EC2 instances, controlling both inbound and outbound traffic at the instance level.
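This allow-list behavior can be illustrated with a small simulation (the rules and helper function are hypothetical; real security groups are enforced by the EC2 network layer and are also stateful, which this sketch ignores):

```python
import ipaddress

# Hypothetical inbound rules: allow HTTPS from anywhere, SSH only
# from inside a 10.0.0.0/16 VPC CIDR.
inbound_rules = [
    {"protocol": "tcp", "port": 443, "source": "0.0.0.0/0"},
    {"protocol": "tcp", "port": 22,  "source": "10.0.0.0/16"},
]

def is_allowed(protocol, port, source_ip):
    # A packet is admitted only if some rule matches its protocol,
    # port, and source CIDR; anything not explicitly allowed is denied.
    src = ipaddress.ip_address(source_ip)
    for rule in inbound_rules:
        if (rule["protocol"] == protocol and rule["port"] == port
                and src in ipaddress.ip_network(rule["source"])):
            return True
    return False

print(is_allowed("tcp", 443, "203.0.113.9"))  # True
print(is_allowed("tcp", 22, "203.0.113.9"))   # False: SSH only from the VPC
```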

59
Q

What are the change management tools that help AWS customers audit and monitor all resource changes in their AWS environment? (Choose TWO)

1) AWS Config
2) AWS Transit Gateway
3) AWS X-Ray
4) Amazon Comprehend
5) AWS CloudTrail

A

1) AWS Config
5) AWS CloudTrail

Change management is defined as “the Process responsible for controlling the Lifecycle of all Changes. The primary objective of Change Management is to enable beneficial changes to be made, with minimum disruption to IT Services.”

Despite all of the investments in software and hardware, an erroneous configuration or misstep in a process can frequently undo these efforts and lead to failure.

AWS Config and AWS CloudTrail are change management tools that help AWS customers audit and monitor all resource and configuration changes in their AWS environment.

Customers can use AWS Config to answer “What did my AWS resource look like?” at a point in time. Customers can use AWS CloudTrail to answer “Who made an API call to modify this resource?” For example, a customer can use the AWS Management Console for AWS Config to detect that the security group “Production-DB” was incorrectly configured in the past. Using the integrated AWS CloudTrail information, they can pinpoint which user misconfigured the “Production-DB” security group. In brief, AWS Config provides information about the changes made to a resource, and AWS CloudTrail provides information about who made those changes. These capabilities enable customers to discover any misconfigurations, fix them, and protect their workloads from failures.

60
Q

How are AWS customers billed for Linux-based Amazon EC2 usage?

1) EC2 instances will be billed on one hour increments, with a minimum of one day
2) EC2 instances will be billed on one day increments, with a minimum of one month
3) EC2 instances will be billed on one minute increments, with a minimum of one hour
4) EC2 instances will be billed on one second increments, with a minimum of one minute

A

EC2 instances will be billed on one second increments, with a minimum of one minute

Pricing is per instance-hour consumed for each instance, from the time an instance is launched until it is terminated or stopped. Each partial instance-hour consumed will be billed per-second (minimum of 1 minute) for Amazon Linux, Windows, Red Hat Enterprise Linux, Ubuntu, and Ubuntu Pro Instances and as a full hour for all other instance types.

Examples for per-second billing:

1- If you run an Amazon Linux instance for 4 seconds or 20 seconds or 59 seconds, you will be charged for one minute. (this is what we mean by minimum of 1 minute)

2- If you run an Amazon Linux instance for 1 minute and 3 seconds, you will be charged for 1 minute and 3 seconds.

3- If you run an Amazon Linux instance for 3 hours, 25 minutes and 7 seconds, you will be charged for 3 hours, 25 minutes and 7 seconds.

Examples for instances launched with other operating systems, such as Kali or CentOS:

1- If you run an instance for 4 seconds or 20 seconds or 59 seconds, you will be charged for one hour.

2- If you run an instance for 1 minute and 3 seconds, you will be charged for one hour.

3- If you run an instance for 3 hours, 25 minutes and 7 seconds, you will be charged for 4 hours.

Per-second billing is available for instances launched in:

  • All EC2 purchase options (On-Demand, Reserved, Savings Plans, and Spot)
  • All regions and Availability Zones
  • Amazon Linux, Windows, Red Hat Enterprise Linux, Ubuntu, and Ubuntu Pro instances
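The billing examples above reduce to simple arithmetic (a sketch; the thresholds come from the rules described here, not from any AWS pricing API):

```python
import math

def billed_seconds_linux(run_seconds):
    # Per-second billing with a 1-minute minimum (Amazon Linux,
    # Windows, RHEL, Ubuntu, Ubuntu Pro).
    return max(run_seconds, 60)

def billed_seconds_hourly(run_seconds):
    # Full-hour billing for other OSes (e.g., Kali, CentOS):
    # every partial hour rounds up to a whole hour.
    return math.ceil(run_seconds / 3600) * 3600

# The worked examples above, verified:
assert billed_seconds_linux(59) == 60                      # 59 s -> 1 minute
assert billed_seconds_linux(63) == 63                      # 1 min 3 s as-is
assert billed_seconds_linux(3*3600 + 25*60 + 7) == 12307   # exact duration
assert billed_seconds_hourly(63) == 3600                   # -> 1 full hour
assert billed_seconds_hourly(3*3600 + 25*60 + 7) == 4*3600 # -> 4 full hours
```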

61
Q

Which AWS services can be used to improve the performance of a global application and reduce latency for its users? (Choose TWO)

1) AWS Glue
2) AWS Direct Connect
3) AWS KMS
4) Amazon CloudFront
5) AWS Global Accelerator

A

4) Amazon CloudFront
5) AWS Global Accelerator

AWS Global Accelerator and CloudFront are two separate services that use the AWS global network and its edge locations around the world. Amazon CloudFront improves performance for global applications by caching content at the closest Edge Location to end-users. AWS Global Accelerator improves performance for global applications by routing end-user requests to the closest AWS Region. Amazon CloudFront improves performance for both cacheable (e.g., images and videos) and dynamic content (e.g. dynamic site delivery). Global Accelerator is a good fit for specific use cases, such as gaming, IoT or Voice over IP.

Note: AWS Global Accelerator does not cache content at edge locations like Amazon CloudFront. Instead, it uses the AWS edge locations to receive end-user requests and then routes these requests to the closest AWS Region over the AWS global network.

62
Q

Which of the following services allow you to run containerized applications on a cluster of EC2 instances? (Choose TWO)

1) Amazon SageMaker Autopilot
2) AWS Health
3) Amazon Elastic Kubernetes Service
4) Amazon ECS
5) AWS Cloud9

A

3) Amazon Elastic Kubernetes Service
4) Amazon ECS

Amazon Elastic Container Service (Amazon ECS) is a highly scalable, high-performance container orchestration service that supports Docker containers and allows you to easily run and scale containerized applications on AWS. Amazon ECS eliminates the need for you to install and operate your own container orchestration software, manage and scale a cluster of virtual machines, or schedule containers on those virtual machines.

Amazon Elastic Kubernetes Service (Amazon EKS) is a managed service that allows you to use Kubernetes to run and scale containerized applications in the cloud or on-premises.

Kubernetes is an open-source container orchestration system that allows you to deploy and manage containerized applications at scale.

AWS handles provisioning, scaling, and managing the Kubernetes instances in a highly available and secure configuration. This removes a significant operational burden and allows you to focus on building applications instead of managing AWS infrastructure.

On both Amazon EKS and Amazon ECS, you have the option of running your containers on the following compute options:

AWS Fargate — a “serverless” container compute engine where you only pay for the resources required to run your containers. Suited for customers who do not want to worry about managing servers, handling capacity planning, or figuring out how to isolate container workloads for security.

EC2 instances — offers the widest choice of instance types, including processor, storage, and networking. Ideal for customers who want to manage or customize the underlying compute environment and host operating system.

On-premises virtual machines (VM) or servers — Amazon ECS Anywhere provides support for registering an external instance such as an on-premises server or virtual machine (VM), to your Amazon ECS cluster.

63
Q

Where can you store files in AWS? (Choose TWO)

1) Amazon ECS
2) Amazon SNS
3) Amazon EBS
4) Amazon EFS
5) Amazon EMR

A

3) Amazon EBS
4) Amazon EFS

Amazon Elastic File System (Amazon EFS) provides simple, scalable, elastic file storage for use with AWS Cloud services and on-premises resources. It is easy to use and offers a simple interface that allows you to create and configure file systems quickly and easily. Amazon EFS is built to elastically scale on demand without disrupting applications, growing and shrinking automatically as you add and remove files, so your applications have the storage they need, when they need it. It is designed to provide massively parallel shared access to thousands of Amazon EC2 instances, enabling your applications to achieve high levels of aggregate throughput and IOPS that scale as a file system grows, with consistent low latencies. As a regional service, Amazon EFS is designed for high availability and durability, storing data redundantly across multiple Availability Zones.

Amazon Elastic Block Store (Amazon EBS) provides persistent block storage volumes for use with Amazon EC2 and Amazon RDS instances. AWS recommends Amazon EBS for data that must be quickly accessible and requires long-term persistence. EBS volumes are particularly well-suited for use as the primary storage for operating systems, databases, or any applications that require fine-grained updates and access to raw, unformatted, block-level storage. Each Amazon EBS volume is automatically replicated within its Availability Zone to protect you from component failure, offering high availability and durability.

64
Q

What is the AWS service/feature that takes advantage of Amazon CloudFront’s globally distributed edge locations to transfer files to S3 with higher upload speeds?

1) AWS Snowball
2) AWS WAF
3) S3 Transfer Acceleration
4) AWS CloudShell

A

S3 Transfer Acceleration

Amazon S3 Transfer Acceleration enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket. Transfer Acceleration takes advantage of Amazon CloudFront’s globally distributed edge locations. As the data arrives at an edge location, data is routed to Amazon S3 over an optimized network path.
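Acceleration is a per-bucket setting; a sketch of what enabling it looks like with boto3 is shown as comments (not executed here), along with the accelerate endpoint that clients then upload through:

```python
# Enabling Transfer Acceleration on a bucket with boto3 would look
# roughly like this (requires AWS credentials, so not executed here):
#
#   import boto3
#   s3 = boto3.client("s3")
#   s3.put_bucket_accelerate_configuration(
#       Bucket="my-bucket",
#       AccelerateConfiguration={"Status": "Enabled"},
#   )

def accelerate_endpoint(bucket):
    # Accelerated transfers use the s3-accelerate endpoint, which routes
    # uploads to S3 through the nearest CloudFront edge location.
    return f"https://{bucket}.s3-accelerate.amazonaws.com"

print(accelerate_endpoint("my-bucket"))
# https://my-bucket.s3-accelerate.amazonaws.com
```

The bucket name "my-bucket" is a placeholder; the SDK and CLI switch to this endpoint automatically when you enable accelerated transfers on the client side.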

65
Q

A company needs to host a database in Amazon RDS for at least 12 months. Which of the following options would be the most cost-effective solution?

1) Reserved instances - Partial Upfront
2) On-Demand instances
3) Spot Instances
4) Reserved instances - No Upfront

A

Reserved instances - Partial Upfront

Since the database will be hosted for a period of at least 12 months, it is better to use RDS Reserved Instances, as they provide a significant discount compared to the On-Demand Instance pricing for the DB instance.

With the Partial Upfront option, you make a low upfront payment and are then charged a discounted hourly rate for the instance for the duration of the Reserved Instance term. The Partial Upfront option is more cost-effective than the No upfront option (The more you spend upfront the more you save).
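The trade-off can be illustrated with hypothetical rates (the $0.10/hr On-Demand rate, $300 upfront payment, and $0.04/hr discounted rate are assumptions for illustration only; real RDS pricing varies by engine, instance class, and Region):

```python
HOURS_PER_YEAR = 8760

def on_demand_cost(hourly_rate, hours=HOURS_PER_YEAR):
    # On-Demand: pay the full rate for every hour used.
    return hourly_rate * hours

def partial_upfront_cost(upfront, discounted_hourly, hours=HOURS_PER_YEAR):
    # Partial Upfront: one upfront payment plus a discounted hourly
    # rate charged for every hour of the term.
    return upfront + discounted_hourly * hours

od = on_demand_cost(0.10)             # hypothetical $0.10/hr On-Demand
ri = partial_upfront_cost(300, 0.04)  # hypothetical $300 + $0.04/hr RI
print(f"On-Demand: ${od:.2f}, Partial Upfront RI: ${ri:.2f}")
```

Under these assumed rates the Reserved Instance comes out well ahead over the year, and paying more upfront would lower the effective rate further.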