Practice Test 2: AWS Certified Cloud Practitioner Practice Exam (2) Flashcards
Which of the following aspects of security are managed by AWS? (Choose TWO)
- Encryption of EBS volumes
- Access permissions
- Hardware patching
- VPC security
- Securing global physical infrastructure
Hardware patching,
Securing global physical infrastructure
Explanation
AWS is continuously innovating the design and systems of its data centers to protect them from man-made and natural risks. For example, at the first layer of security, AWS provides a number of security features depending on the location, such as security guards, fencing, security feeds, intrusion detection technology, and other security measures.
According to the Shared Responsibility Model, patching of the underlying hardware is AWS' responsibility. AWS is responsible for patching and fixing flaws within the infrastructure, but customers are responsible for patching their guest OS and applications.
The other options are incorrect:
“VPC security” is incorrect. The configuration and security of the VPC are the customer's responsibility.
“Encryption of EBS volumes” is incorrect. The customer is responsible for encrypting their data on EBS either on the client side or on the server side.
“Access permissions” is incorrect. The customer is responsible for managing the IAM permissions.
Additional information:
IAM permissions let the customer specify access to AWS resources. Permissions are granted to IAM entities (users, user groups, and roles) and by default these entities start with no permissions. In other words, IAM entities can do nothing in AWS until you grant them your desired permissions. To give entities permissions, you can attach a policy that specifies the type of access, the actions that can be performed, and the resources on which the actions can be performed.
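As a sketch of the structure described above, an identity-based policy granting read-only access to a single S3 bucket might look like the JSON document below, built here in Python. The bucket name is hypothetical; only the `Version`/`Statement`/`Effect`/`Action`/`Resource` layout reflects the actual IAM policy grammar.

```python
import json

# A minimal IAM identity-based policy (hypothetical bucket name).
# It grants read-only access to a single S3 bucket and nothing else,
# following the principle of starting from zero permissions.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",                             # type of access
            "Action": ["s3:GetObject", "s3:ListBucket"],   # actions that can be performed
            "Resource": [                                  # resources the actions apply to
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*",
            ],
        }
    ],
}

print(json.dumps(policy, indent=2))
```

Attaching a policy like this to a user, group, or role is what moves the entity from "can do nothing" to exactly the access listed.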
Where can you store files in AWS? (Choose TWO)
- Amazon EBS
- Amazon ECS
- Amazon EFS
- Amazon SNS
- Amazon EMR
Amazon EBS
Amazon EFS
Explanation
Amazon Elastic File System (Amazon EFS) provides simple, scalable, elastic file storage for use with AWS Cloud services and on-premises resources. It is easy to use and offers a simple interface that allows you to create and configure file systems quickly and easily. Amazon EFS is built to elastically scale on demand without disrupting applications, growing and shrinking automatically as you add and remove files, so your applications have the storage they need, when they need it. It is designed to provide massively parallel shared access to thousands of Amazon EC2 instances, enabling your applications to achieve high levels of aggregate throughput and IOPS that scale as a file system grows, with consistent low latencies. As a regional service, Amazon EFS is designed for high availability and durability storing data redundantly across multiple Availability Zones.
Amazon Elastic Block Store (Amazon EBS) provides persistent block storage volumes for use with Amazon EC2 instances. AWS recommends Amazon EBS for data that must be quickly accessible and requires long-term persistence. EBS volumes are particularly well-suited for use as the primary storage for operating systems, databases, or for any applications that require fine-grained updates and access to raw, unformatted, block-level storage. Each Amazon EBS volume is automatically replicated within its Availability Zone to protect you from component failure, offering high availability and durability.
The other options are incorrect:
“Amazon SNS” is incorrect. Amazon Simple Notification Service (SNS) is a publish/subscribe (pub/sub) messaging service.
“Amazon ECS” is incorrect. Amazon Elastic Container Service (ECS) is a compute service that is used to run containerized applications on AWS.
“Amazon EMR” is incorrect. Amazon Elastic MapReduce (EMR) is a web service that enables businesses, researchers, data analysts, and developers to easily and cost-effectively process vast amounts of data across dynamically scalable Amazon EC2 instances.
What are the Amazon RDS features that can be used to improve the availability of your database? (Choose TWO)
- AWS Regions
- Edge Locations
- Automatic patching
- Read Replicas
- Multi-AZ Deployment
Read Replicas,
Multi-AZ Deployment
Explanation
In a Multi-AZ deployment, Amazon RDS automatically provisions and maintains a synchronous standby replica in a different Availability Zone. The primary DB instance is synchronously replicated across Availability Zones to a standby replica to provide data redundancy, eliminate I/O freezes, and minimize latency spikes during system backups. Running a DB instance with high availability can enhance availability during planned system maintenance, and help protect your databases against DB instance failure and Availability Zone disruption.
Amazon RDS Read Replicas provide enhanced performance and durability for database (DB) instances. This feature makes it easy to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads. You can create one or more replicas of a given source DB Instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput.
Read replicas provide a complementary availability mechanism to Amazon RDS Multi-AZ Deployments. You can promote a read replica if the source DB instance fails. You can also replicate DB instances across AWS Regions as part of your disaster recovery strategy. This functionality complements the synchronous replication, automatic failure detection, and failover provided with Multi-AZ deployments.
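The read-scaling side of this can be sketched in plain Python: writes always go to the primary endpoint, while reads rotate across replica endpoints. The endpoint names are hypothetical; real applications would use the endpoints Amazon RDS assigns.

```python
import itertools

# Sketch of application-side read/write splitting across an RDS primary
# and its read replicas. Endpoint names are made up for illustration.
PRIMARY = "mydb-primary.example.rds.amazonaws.com"
REPLICAS = [
    "mydb-replica-1.example.rds.amazonaws.com",
    "mydb-replica-2.example.rds.amazonaws.com",
]
_replica_cycle = itertools.cycle(REPLICAS)

def pick_endpoint(is_write: bool) -> str:
    """Writes always go to the primary; reads round-robin across replicas."""
    return PRIMARY if is_write else next(_replica_cycle)

print(pick_endpoint(True))    # the primary handles the write
print(pick_endpoint(False))   # a replica absorbs the read traffic
```

Spreading reads this way is what "increasing aggregate read throughput" means in practice: each replica serves a share of the high-volume read traffic.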
The other options are incorrect:
“Edge Locations” is incorrect. Edge Locations are not a feature of Amazon RDS. Edge locations are used by the CloudFront service to distribute content globally.
“Automatic patching” is incorrect. The purpose of patching is to resolve functionality issues, improve security, or add new features; it is not a feature used to improve the availability of a database.
“AWS Regions” is incorrect. AWS Regions are not a feature of Amazon RDS. AWS Regions are separate geographic areas around the world that AWS uses to provide its Cloud Services, including Regions in North America, South America, Europe, Asia Pacific, and the Middle East. Choosing a specific AWS Region depends on its proximity to end-users, data sovereignty, and costs.
A company has business critical workloads hosted on AWS and they are unwilling to accept any downtime. Which of the following is a recommended best practice to protect their workloads in the event of an unexpected natural disaster?
- Replicate data across multiple Edge Locations worldwide and use Amazon CloudFront to perform automatic failover in the event of an outage
- Deploy AWS resources across multiple Availability Zones within the same AWS Region
- Deploy AWS resources to another AWS Region and implement an Active-Active disaster recovery strategy
- Create point-in-time backups in another subnet and recover this data when a disaster occurs
Deploy AWS resources to another AWS Region and implement an Active-Active disaster recovery strategy
Explanation
Disaster recovery is about preparing for and recovering from events that have a negative impact on your business continuity or finances. This could be a natural disaster, hardware or software failure, a network outage, a power outage, physical damage to a building like fire or flooding, or some other significant disaster.
In AWS, customers have the flexibility to choose the disaster recovery approach that fits their budget. The approaches range from a simple backup and restore in another AWS Region to a full-scale multi-Region Active-Active solution.
With the multi-region Active-Active solution, your workload is deployed to, and actively serving traffic from, multiple AWS Regions. If an entire Region goes down because of a natural disaster or any other reason, the other Regions will still be available and able to serve user requests.
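The routing behavior described above can be simulated in a few lines: requests go to the lowest-latency healthy Region, and when a Region fails its health check, traffic shifts to the remaining Regions. Region names are real, but the health flags and latency figures are invented for the sketch.

```python
# Toy model of Active-Active routing across Regions. All latency numbers
# and health states are hypothetical.
regions = {
    "us-east-1": {"healthy": True, "latency_ms": 20},
    "eu-west-1": {"healthy": True, "latency_ms": 85},
    "ap-southeast-1": {"healthy": True, "latency_ms": 190},
}

def route_request() -> str:
    """Send the request to the lowest-latency Region that is still healthy."""
    healthy = {r: m for r, m in regions.items() if m["healthy"]}
    if not healthy:
        raise RuntimeError("no healthy Region available")
    return min(healthy, key=lambda r: healthy[r]["latency_ms"])

print(route_request())                       # normally the closest Region
regions["us-east-1"]["healthy"] = False      # simulate a Regional disaster
print(route_request())                       # traffic fails over automatically
```

Because every Region is already serving traffic, failover here is just "stop sending requests to the failed Region", which is why Active-Active yields the least downtime.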
The other options are incorrect:
“Deploy AWS resources across multiple Availability Zones within the same AWS Region” is incorrect. A natural disaster may affect an entire Region, including all Availability Zones within that Region.
“Replicate data across multiple Edge Locations worldwide and use Amazon CloudFront to perform automatic failover in the event of an outage” is incorrect. Edge locations are not used for disaster recovery. Edge locations are used by CloudFront to cache and distribute content from a geographical location close to users.
“Create point-in-time backups in another subnet and recover this data when a disaster occurs” is incorrect. A subnet is a range of IP addresses within a VPC, and each subnet resides in a single Availability Zone within a single Region; backups stored in another subnet would therefore not survive a disaster affecting that Region.
Which of the following services will help businesses ensure compliance in AWS?
- CloudFront
- CloudTrail
- CloudEndure Migration
- CloudWatch
CloudTrail
Explanation
AWS CloudTrail is designed to log all actions taken in your AWS account. This provides a great resource for governance, compliance, and risk auditing.
The other options are incorrect:
“CloudFront” is incorrect. Amazon CloudFront is a content delivery network (CDN) service.
“CloudEndure Migration” is incorrect. CloudEndure Migration simplifies the process of migrating applications from physical, virtual, and cloud-based infrastructure, ensuring that they are fully operational in any AWS Region without compatibility issues.
CloudEndure Migration will be discontinued in all AWS Regions except for the AWS GovCloud (US), Mainland China (Beijing), and Mainland China (Ningxia) Regions on December 30, 2022. AWS Application Migration Service is the primary migration service recommended for lift-and-shift migrations to the AWS Cloud. Customers who currently use CloudEndure Migration are encouraged to switch to AWS Application Migration Service for future migrations.
“CloudWatch” is incorrect. Amazon CloudWatch is used to monitor the utilization of AWS resources. CloudWatch provides you with data and actionable insights to monitor your applications, understand and respond to system-wide performance changes, and get a unified view of operational health.
A company is migrating its on-premises database to Amazon RDS. What should the company do to ensure Amazon RDS costs are kept to a minimum?
- Combine On-demand Capacity Reservations with Savings Plans
- Use a Multi-Region Active-Passive architecture
- Use a Multi-Region Active-Active architecture
- Right size before and after migration
Right size before and after migration
Explanation
Right-sizing is the process of matching instance types and sizes to your workload performance and capacity requirements at the lowest possible cost. By right-sizing before migration, you can significantly reduce your infrastructure costs. If you skip right-sizing to save time, your migration speed might be faster, but you will end up with higher cloud infrastructure spend for a potentially long time.
Because your resource needs are always changing, right-sizing must become an ongoing process to continually achieve cost optimization. It’s important to right-size when you first consider moving to the cloud and calculate the total cost of ownership. However, it’s equally important to right-size periodically once you’re in the cloud to ensure ongoing cost-performance optimization.
Picking an Amazon RDS instance for a given workload means finding the instance family that most closely matches the CPU, disk I/O, and memory needs of your workload. Amazon RDS provides a wide selection of instances, which gives you lots of flexibility to right-size your resources to match capacity needs at the lowest cost.
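The selection step described above can be sketched as a search: among candidate instance classes, pick the cheapest one that still meets the workload's CPU and memory requirements. The class names, sizes, and hourly prices below are illustrative placeholders, not current AWS pricing.

```python
# Sketch of right-sizing: choose the cheapest instance class (hypothetical
# sizes and prices) that satisfies the workload's vCPU and memory needs.
instance_classes = [
    {"name": "db.t3.medium",  "vcpu": 2, "mem_gib": 4,  "usd_hr": 0.068},
    {"name": "db.m5.large",   "vcpu": 2, "mem_gib": 8,  "usd_hr": 0.171},
    {"name": "db.m5.xlarge",  "vcpu": 4, "mem_gib": 16, "usd_hr": 0.342},
    {"name": "db.r5.2xlarge", "vcpu": 8, "mem_gib": 64, "usd_hr": 1.008},
]

def right_size(need_vcpu: int, need_mem_gib: int) -> str:
    """Return the lowest-cost class that meets both requirements."""
    fits = [i for i in instance_classes
            if i["vcpu"] >= need_vcpu and i["mem_gib"] >= need_mem_gib]
    return min(fits, key=lambda i: i["usd_hr"])["name"]

print(right_size(2, 8))    # smallest class meeting 2 vCPU / 8 GiB
```

Re-running this check periodically against measured utilization is the "ongoing process" the explanation refers to: as needs shrink or grow, the cheapest fitting class changes.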
The other options are incorrect:
“Use a Multi-Region Active-Active architecture” is incorrect. With the Multi-Region Active-Active solution, your workload is deployed to, and actively serving traffic from, multiple AWS Regions. AWS Customers use this approach to reduce latency for global users and achieve the highest level of availability. Using a Multi-Region Active-Active architecture will increase infrastructure costs, including Amazon RDS costs.
“Use a Multi-Region Active-Passive architecture” is incorrect. With Multi-Region Active-Passive architecture, your workload is deployed to two AWS Regions (a primary Region and a standby Region). In this architecture, user requests are served from the primary Region only. If the primary Region goes down because of a natural disaster or any other reason, the other Region will still be available and serve user requests. AWS customers use this approach for disaster recovery purposes. Using a Multi-Region Active-Passive architecture will increase infrastructure costs, including Amazon RDS costs.
“Combine On-demand Capacity Reservations with Savings Plans” is incorrect. When you combine On-demand Capacity Reservations with Savings Plans, you can reduce costs significantly. However, On-demand Capacity Reservations are available only for Amazon EC2, not Amazon RDS. For more information about On-demand Capacity Reservations, check this link: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-purchasing-options.html
A company is developing a new application using a microservices framework. The new application is having performance and latency issues. Which AWS Service should be used to troubleshoot these issues?
- AWS X-Ray
- AWS CodePipeline
- AWS Inspector
- AWS CloudTrail
AWS X-Ray
Explanation
AWS X-Ray helps developers analyze and debug distributed applications in production or under development, such as those built using microservice architecture. With X-Ray, you can understand how your application and its underlying services are performing so you can identify and troubleshoot the root cause of performance issues and errors. X-Ray provides an end-to-end view of requests as they travel through your application, and shows a map of your application’s underlying components. You can use X-Ray to analyze both applications in development and in production, from simple three-tier applications to complex microservices applications consisting of thousands of services.
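The core idea behind this kind of tracing can be illustrated without X-Ray itself: time each service call, aggregate the timings, and the latency culprit stands out. The service names and sleep durations below are invented stand-ins for real instrumented microservices.

```python
import time
from collections import defaultdict

# Toy stand-in for distributed tracing: record how long each "service"
# spends handling a request, then find the slowest component.
timings = defaultdict(float)

def traced(name):
    """Decorator that accumulates wall-clock time per service name."""
    def decorator(fn):
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                timings[name] += time.perf_counter() - start
        return wrapper
    return decorator

@traced("auth-service")
def authenticate():
    time.sleep(0.01)            # fast component

@traced("catalog-service")
def fetch_catalog():
    time.sleep(0.05)            # the latency culprit in this toy example

authenticate()
fetch_catalog()
slowest = max(timings, key=timings.get)
print(slowest)
```

X-Ray does this end to end across real services and draws the resulting service map for you; the sketch only shows why per-component timing makes root-cause analysis possible.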
The other options are incorrect:
“AWS CodePipeline” is incorrect. AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates.
“AWS Inspector” is incorrect. Amazon Inspector helps you to identify security vulnerabilities, as well as deviations from security best practices, in applications; it is not used for troubleshooting performance issues.
“AWS CloudTrail” is incorrect. CloudTrail is a service that allows you to track all users’ actions that are taken in your AWS account.
Your application has recently experienced significant global growth, and international users are complaining of high latency. What is the AWS characteristic that can help improve your international users’ experience?
- Elasticity
- High Availability
- Data Durability
- Global Reach
Global Reach
Explanation
With AWS, you can deploy your application in multiple Regions around the world. Users are redirected to the Region that provides the lowest possible latency and the highest performance. You can also use Amazon CloudFront, which uses edge locations (located in most major cities across the world) to deliver content with low latency and high performance to your global users.
The other options are incorrect:
“High Availability” is incorrect. High Availability can be achieved by deploying your application in multiple Availability Zones within a single Region. If one Availability Zone goes down, the others can handle user requests. This may not reduce latency for your international users. In other words, the application will be available to them all the time, but with high latency.
“Elasticity” is incorrect. Elasticity refers to the ability of a system to scale the underlying resources up when demand increases (to maintain performance), or scale down when demand decreases (to reduce costs). This option does not indicate whether your resources will be deployed in a single Region or multiple Regions.
“Data durability” is incorrect. Durability refers to the ability of a system to ensure that stored data remains intact and consistent as long as it is not changed by legitimate access. This means that data should not become corrupted or disappear due to a system malfunction. Durability measures the likelihood of data loss. For example, assume you have confidential data stored on your laptop. If you make a copy of it and store it in a secure place, you have just improved the durability of that data: it is much less likely that all copies will be destroyed simultaneously.
Data durability can be achieved by replicating data across multiple Availability Zones within a single Region. For example, the S3 Standard tier is designed for 99.999999999% (11 nines) durability. This means that if you store 100 billion objects in S3, you can expect to lose, on average, a single object per year.
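The 11-nines figure turns into that expected-loss estimate with one line of arithmetic: the per-object annual loss probability times the number of objects stored.

```python
# Back-of-the-envelope durability math. S3 Standard is designed for
# 99.999999999% (11 nines) durability of objects over a given year.
annual_loss_probability = 1 - 0.99999999999    # = 1e-11 per object per year
objects_stored = 100_000_000_000               # 100 billion objects

expected_losses_per_year = objects_stored * annual_loss_probability
print(round(expected_losses_per_year, 2))      # on average ~1 object per year
```

Note this is a statistical expectation, not a guarantee of "at most one" loss; it simply says losses at this durability level are extraordinarily rare relative to the amount stored.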
Which of the following are important design principles you should adopt when designing systems on AWS? (Choose TWO)
- Remove single point of failure
- Treat servers as fixed resources
- Always use Global services in your architecture rather than Regional services
- Always choose to pay as you go
- Automate wherever possible
Remove single point of failure
Automate wherever possible
Explanation
A single point of failure (SPOF) is a part of a system that, if it fails, will stop the entire system from working. You can remove single points of failure by assuming everything will fail and designing your architecture to automatically detect and react to failures. For example, configuring and deploying an auto-scaling group of EC2 instances will ensure that if one or more of the instances crashes, Auto-scaling will automatically replace them with new instances. You should also introduce redundancy to remove single points of failure, by deploying your application across multiple Availability Zones. If one Availability Zone goes down for any reason, the other Availability Zones can serve requests.
AWS helps you use automation so you can build faster and more efficiently. Using AWS services, you can automate manual tasks or processes such as deployments, development & test workflows, container management, and configuration management.
The other options are incorrect:
“Always choose to pay as you go” is incorrect. AWS has other payment models that can save you more, depending on your use case. For example, if your application has steady-state usage, you can use reservations for Amazon RDS and Amazon EC2 instances to reduce your overall costs significantly.
“Treat servers as fixed resources” is incorrect. AWS enables you to treat your servers as disposable resources, not fixed resources. This means that if any issue occurs with a server, you can simply replace it with a new one (rather than trying to fix it).
“Always use Global services in your architecture rather than Regional services” is incorrect. AWS services/resources are either Global, Regional, or specific to an Availability Zone. Among all the services/resources that AWS offers, only a few are considered global services. Examples of AWS global services include Amazon CloudFront, AWS Identity and Access Management, Amazon Route 53, and AWS WAF. There is no way to build your AWS environment without using Regional services such as Amazon VPC, Amazon RDS, AWS Lambda, and Amazon EFS, or Zonal resources (specific to an Availability Zone) such as Amazon EC2 instances or Amazon EBS volumes.
Which of the following EC2 instance purchasing options supports the Bring Your Own License (BYOL) model for almost every BYOL scenario?
- Dedicated Hosts
- On-demand Instances
- Dedicated Instances
- Reserved Instances
Dedicated Hosts
Explanation
You have a variety of options for using new and existing Microsoft software licenses on the AWS Cloud. By purchasing Amazon Elastic Compute Cloud (Amazon EC2) or Amazon Relational Database Service (Amazon RDS) license-included instances, you get new, fully compliant Windows Server and SQL Server licenses from AWS. The BYOL model enables AWS customers to use their existing server-bound software licenses, including Windows Server, SQL Server, and SUSE Linux Enterprise Server.
Your existing licenses may be used on AWS with Amazon EC2 Dedicated Hosts, Amazon EC2 Dedicated Instances or EC2 instances with default tenancy using Microsoft License Mobility through Software Assurance.
Dedicated Hosts provide additional control over your instances and visibility into Host level resources and tooling that allows you to manage software that consumes licenses on a per-core or per-socket basis, such as Windows Server and SQL Server. This is why most BYOL scenarios are supported through the use of Dedicated Hosts, while only certain scenarios are supported by Dedicated Instances.
The other options are incorrect:
“Dedicated Instances” is incorrect. Dedicated Hosts are recommended for most BYOL scenarios for the reasons mentioned above.
“On-demand Instances” and “Reserved Instances” are incorrect. These are billing options rather than tenancy options; on default (shared) tenancy, only licenses eligible for License Mobility can be brought, so these options do not support most BYOL scenarios.
What is the AWS service that enables you to manage all of your AWS accounts from a single master account?
- AWS Trusted Advisor
- AWS Organizations
- Amazon Config
- AWS WAF
AWS Organizations
Explanation
AWS Organizations is an account management service that enables you to consolidate multiple AWS accounts into an organization that you create and centrally manage.
AWS Organizations enables the following capabilities:
1- Automate AWS account creation and management
2- Consolidate billing across multiple AWS accounts
3- Govern access to AWS services, resources, and regions
4- Centrally manage access policies across multiple AWS accounts
5- Configure AWS services across multiple accounts
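One concrete way Organizations "governs access to services, resources, and Regions" is through service control policies (SCPs). As a sketch, an SCP denying member accounts the use of any Region other than `eu-west-1` might look like the document below; the policy content is illustrative (real SCPs of this kind typically also exempt global services such as IAM), though `aws:RequestedRegion` is a genuine condition key.

```python
import json

# Illustrative service control policy (SCP): deny all actions requested
# outside eu-west-1. Real-world SCPs usually add exemptions for global
# services; this sketch omits them for brevity.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {"aws:RequestedRegion": "eu-west-1"}
            },
        }
    ],
}

print(json.dumps(scp, indent=2))
```

Attached from the management account, a policy like this caps what every member account can do, regardless of the IAM permissions granted inside those accounts.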
The other options are incorrect:
“AWS Trusted Advisor” is incorrect. AWS Trusted Advisor offers a rich set of best practice checks and recommendations across five categories: cost optimization; security; fault tolerance; performance; and service limits (also referred to as Service quotas).
“Amazon Config” is incorrect. AWS Config (the service's actual name) is used to record and evaluate configurations of your AWS resources.
“AWS WAF” is incorrect. AWS WAF is a web application firewall that helps protect your web applications.
Which of the following should be considered when performing a TCO analysis to compare the costs of running an application on AWS instead of on-premises?
- Application development
- Physical hardware
- Business analysis
- Market research
Physical hardware
Explanation
Weighing the financial considerations of owning and operating a data center facility versus employing a cloud infrastructure requires detailed and careful analysis. The Total Cost of Ownership (TCO) is often the financial metric used to estimate and compare costs of a product or a service. When comparing AWS with on-premises TCO, customers should consider all costs of owning and operating a data center. Examples of these costs include facilities, physical servers, storage devices, networking equipment, cooling and power consumption, data center space, and IT labor costs.
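A toy version of such a comparison can make the cost categories concrete. Every figure below is hypothetical; the point is only which categories appear on each side of the ledger.

```python
# Toy 3-year TCO comparison (all figures hypothetical). On-premises
# carries the hardware and facility costs listed above; cloud replaces
# them with a recurring usage bill.
YEARS = 3
on_prem = {
    "servers_and_storage": 120_000,          # upfront hardware purchase
    "networking_equipment": 15_000,
    "power_cooling_space_per_year": 18_000,  # facilities
    "it_labor_per_year": 60_000,             # staff to run the data center
}
cloud_monthly_bill = 7_500

on_prem_tco = (on_prem["servers_and_storage"]
               + on_prem["networking_equipment"]
               + YEARS * (on_prem["power_cooling_space_per_year"]
                          + on_prem["it_labor_per_year"]))
cloud_tco = YEARS * 12 * cloud_monthly_bill

print(on_prem_tco, cloud_tco)
```

Notice what is absent from both sides: application development, market research, and business analysis, since those costs are incurred either way and so do not belong in the TCO comparison.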
The other options are incorrect:
“Application development” is incorrect. Application development is the process of creating a program or a set of programs to perform the different tasks that a business requires. Application development is a separate process that customers need to perform regardless of whether they will be using AWS or an on-premises data center. Application development is not part of the total cost of owning and operating a data center (TCO), and thus is an incorrect answer.
“Market Research” is incorrect. Market research is an organized effort to gather information about target audience and customers to determine how viable a product or service might be. Market research is a separate process that customers need to perform regardless of whether they will be using AWS or an on-premises data center.
“Business analysis” is incorrect. Business analysis is a multistage process aimed at identifying business needs and determining solutions to business problems. Business analysis is a separate process that customers need to perform regardless of whether they will be using AWS or an on-premises data center.
References:
https://aws.amazon.com/blogs/publicsector/cloud-economics-value-tco-assessment/
Under the Shared Responsibility Model, which of the following controls do customers fully inherit from AWS? (Choose TWO)
- Patch management controls
- Database controls
- Awareness & Training
- Environmental controls
- Physical controls
Environmental controls
Physical controls
Explanation
AWS is responsible for physical controls and environmental controls. Customers inherit these controls from AWS.
As mentioned in the AWS Shared Responsibility Model page, Inherited Controls are controls which a customer fully inherits from AWS such as physical controls and environmental controls.
As a customer deploying an application on AWS infrastructure, you inherit security controls pertaining to the AWS physical, environmental and media protection, and no longer need to provide a detailed description of how you comply with these control families.
For example: Let’s say you have built an application in AWS for customers to securely store their data. But your customers are concerned about the security of the data and ensuring compliance requirements are met. To address this, you assure your customer that “our company does not host customer data in its corporate or remote offices, but rather in AWS data centers that have been certified to meet industry security standards.” That includes physical and environmental controls to secure the data, which is the responsibility of Amazon. Companies do not have physical access to the AWS data centers, and as such, they fully inherit the physical and environmental security controls from AWS.
You can read more about AWS’ data center controls here:
https://aws.amazon.com/compliance/data-center/controls/
The other options are incorrect:
“Patch management controls” is incorrect. Patch Management belongs to the shared controls. AWS is responsible for patching the underlying hosts and fixing flaws within the infrastructure, but customers are responsible for patching their guest OS and applications.
“Database controls” is incorrect. Database controls belong to the shared controls. AWS maintains the configuration of its infrastructure devices that run the database, but customers are responsible for configuring their own databases and applications.
“Awareness & Training” is incorrect. Awareness & Training belongs to the shared controls. AWS trains AWS employees, but customers must train their own employees.
References:
https://aws.amazon.com/compliance/shared-responsibility-model/
Which AWS services can be used to improve the performance of a global application and reduce latency for its users? (Choose TWO)
- AWS Direct Connect
- AWS Global Accelerator
- Amazon CloudFront
- AWS KMS
- AWS Glue
AWS Global Accelerator
Amazon CloudFront
Explanation
AWS Global Accelerator and CloudFront are two separate services that use the AWS global network and its edge locations around the world. Amazon CloudFront improves performance for global applications by caching content at the Edge Location closest to end users. AWS Global Accelerator improves performance for global applications by routing end-user requests to the closest AWS Region. Amazon CloudFront improves performance for both cacheable content (e.g., images and videos) and dynamic content (e.g., dynamic site delivery). Global Accelerator is a good fit for specific use cases, such as gaming, IoT, or Voice over IP.
Note: AWS Global Accelerator does not cache content at edge locations like Amazon CloudFront. AWS Global Accelerator uses the AWS edge locations to receive end-user requests and then routes these requests to the closest AWS Region over the AWS global network.
The other options are incorrect:
“AWS KMS” is incorrect. AWS KMS is a key management service that makes it easy for you to create and manage encryption keys and control their use across a wide range of AWS services and in your applications.
“AWS Direct Connect” is incorrect. AWS Direct Connect is a cloud service solution that is used to establish a dedicated network connection from your premises to AWS.
“AWS Glue” is incorrect. AWS Glue is a fully-managed, Extract, Transform, and Load (ETL) service that automates the time-consuming steps of data preparation for analytics.
Extract, Transform, and Load (ETL) is the process of extracting (collecting) data from various sources (for example, from different databases), transforming the data according to business rules and needs (this step helps prepare the data for analytics and decision making), and loading the data into a destination database, often a data warehouse.
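The three ETL steps can be sketched with plain Python and in-memory "sources"; AWS Glue automates the same pipeline at scale against real data stores. The record layout and business rule below are invented for illustration.

```python
# Minimal extract-transform-load sketch using in-memory data sources.
source_a = [{"id": 1, "amount": "19.99"}, {"id": 2, "amount": "5.00"}]
source_b = [{"id": 3, "amount": "42.50"}]

def extract():
    """Collect records from multiple sources."""
    return source_a + source_b

def transform(records):
    """Apply a business rule: parse amounts into numbers for analytics."""
    return [{"id": r["id"], "amount": float(r["amount"])} for r in records]

warehouse = []  # destination "data warehouse"

def load(records):
    """Write the prepared records to the destination."""
    warehouse.extend(records)

load(transform(extract()))
print(len(warehouse), sum(r["amount"] for r in warehouse))
```

Glue's value is automating the tedious parts of exactly these steps: discovering source schemas, generating the transform code, and scheduling the load.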
References:
https://aws.amazon.com/cloudfront/
https://aws.amazon.com/global-accelerator/features/
Which of the below is a best-practice when building applications on AWS?
- Strengthen physical security by applying the principle of least privilege
- Use IAM policies to maintain performance
- Ensure that the application runs on hardware from trusted vendors
- Decouple the components of the application so that they run independently
Decouple the components of the application so that they run independently
Explanation
An application should be designed in a way that reduces interdependencies between its components. A change or a failure in one component should not cascade to other components. If the components of an application are tightly-coupled (interconnected) and one component fails, the entire application will also fail. Amazon SQS and Amazon SNS are powerful tools that help you build loosely-coupled applications. SQS and SNS can be integrated together to decouple application components so that they run independently, increasing the overall fault tolerance of the application.
Understanding how SQS and SNS work is not required at the Cloud Practitioner level, but let's take a simple example. Say you have two components in your application, component A and component B. Component A sends messages (jobs) to component B to process. Now, what happens if component A sends a large number of messages at the same time? Component B will fail, and the entire application will fail with it. SQS acts as a middleman: it receives and stores messages from component A, and component B pulls and processes messages at its own pace. This way, both components run independently of each other.
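That middleman pattern can be simulated with Python's standard-library queue: component A bursts messages into the buffer without waiting, and component B drains them at whatever rate it can sustain. The component and job names are invented for the sketch.

```python
from queue import Queue

# Simulation of the SQS decoupling pattern: A bursts messages into a
# queue; B consumes them at its own pace, so neither blocks the other.
buffer = Queue()

def component_a_send(jobs):
    for job in jobs:
        buffer.put(job)             # A never waits on B

def component_b_poll(batch_size=2):
    """B pulls up to batch_size messages per polling cycle."""
    processed = []
    for _ in range(batch_size):
        if buffer.empty():
            break
        processed.append(buffer.get())
    return processed

component_a_send([f"job-{i}" for i in range(5)])   # burst of 5 messages
print(component_b_poll())    # B handles 2 at a time
print(buffer.qsize())        # the rest wait safely in the queue
```

A burst from A no longer overwhelms B; the excess simply sits in the queue, which is exactly the fault-tolerance gain the explanation describes.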
The other options are incorrect:
“Ensure that the application runs on hardware from trusted vendors” is incorrect. Choosing a specific hardware vendor is not available in AWS.
“Use IAM policies to maintain performance” is incorrect. There is no relation between IAM policies and performance. IAM policies are used to grant users permission to perform specific actions on AWS.
“Strengthen physical security by applying the principle of least privilege” is incorrect. Physical security is the sole responsibility of AWS.
Additional information:
AWS provides physical data center access only to approved employees. All employees who need data center access must first apply for access and provide a valid business justification. These requests are granted based on the principle of least privilege, where requests must specify to which layer of the data center the individual needs access, and are time-bound. Requests are reviewed and approved by authorized personnel, and access is revoked after the requested time expires.
The principle of least privilege does not only apply to control physical access. AWS customers should also apply this principle when granting permissions to IAM users. In other words, AWS customers should grant IAM users only the permissions they need to perform a task and nothing more.
References:
https://aws.amazon.com/microservices/
https://aws.amazon.com/sns/
https://aws.amazon.com/sqs/
An organization runs many systems and uses many AWS products. Which of the following services enables them to control how each developer interacts with these products?
- Network Access Control Lists
- Amazon RDS
- AWS Identity and Access Management
- Amazon EMR
AWS Identity and Access Management
Explanation
AWS Identity and Access Management (IAM) is a web service for securely controlling access to AWS services. With IAM, you can centrally manage users, security credentials such as access keys, and permissions that control which AWS resources users and applications can access.
The other options are incorrect:
“Amazon RDS” is incorrect. Amazon RDS is a managed relational database service.
“Network Access Control Lists” is incorrect. Network Access Control Lists (NACLs) are a VPC feature that allows you to control traffic at the subnet level.
“Amazon EMR” is incorrect. Amazon EMR is used to run and scale Apache Spark, Hadoop, HBase, Presto, Hive, and other big data frameworks.
References:
https://aws.amazon.com/iam/
Using Amazon EC2 falls under which of the following cloud computing models?
- SaaS
- PaaS
- IaaS
- IaaS & SaaS
IaaS
Explanation
Infrastructure as a Service (IaaS) contains the basic building blocks for cloud IT and typically provides access to networking features, computers (virtual or on dedicated hardware), and data storage space. IaaS gives you the highest level of flexibility and management control over your IT resources and is most similar to the existing IT resources that many IT departments and developers are familiar with today.
For example, a service such as Amazon Elastic Compute Cloud (Amazon EC2) is categorized as Infrastructure as a Service (IaaS) and requires the customer to perform all of the configuration and management tasks. Customers that deploy an Amazon EC2 instance are responsible for management of the guest operating system (including updates and security patches), any application software or utilities installed by the customer on the instances, and the configuration of the AWS-provided firewall (called a security group) on each instance.
The other options are incorrect:
1- Platform as a Service (PaaS) removes the need for your organization to manage the underlying infrastructure (usually hardware and operating systems) and allows you to focus on the deployment and management of your applications. This helps you be more efficient as you don’t need to worry about resource procurement, capacity planning, software maintenance, patching, or any of the other undifferentiated heavy lifting involved in running your application. A common example of a PaaS platform is the AWS Elastic Beanstalk service. Developers simply upload their application, and Elastic Beanstalk automatically handles the deployment details of capacity provisioning, load balancing, auto-scaling, and application health monitoring.
2- Software as a Service (SaaS) provides you with a complete product that is run and managed by the service provider. In most cases, people referring to Software as a Service mean end-user applications. With a SaaS offering, you do not have to think about how the service is maintained or how the underlying infrastructure is managed; you only need to think about how you will use that particular piece of software. A common example of a SaaS application is web-based email, where you can send and receive email without having to manage feature additions to the email product or maintain the servers and operating systems that the email program runs on.
References:
https://d1.awsstatic.com/whitepapers/aws-overview.pdf
Savings Plans are available for which of the following AWS compute services? (Choose TWO)
- AWS Outposts
- Amazon Lightsail
- Amazon EC2
- AWS Batch
- AWS Lambda
Amazon EC2
AWS Lambda
Explanation
Savings Plans are a flexible pricing model that offers low prices on EC2, Lambda, and Fargate usage, in exchange for a commitment to a consistent amount of usage (measured in $/hour) for a 1 or 3 year term. When you sign up for Savings Plans, you will be charged the discounted Savings Plans price for your usage up to your commitment. For example, if you commit to $10 of compute usage an hour, you will get the Savings Plans prices on that usage up to $10 and any usage beyond the commitment will be charged On Demand rates.
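The commitment arithmetic in the example above can be sketched as a short calculation. The 30% discount rate here is illustrative only; real Savings Plans rates vary by service, instance family, Region, and term:

```python
def hourly_charge(on_demand_usage, commitment, sp_discount=0.30):
    """Estimate one hour's compute bill under a Savings Plan.

    on_demand_usage -- what the hour's compute would cost at On-Demand rates
    commitment      -- the hourly commitment, in dollars at Savings Plans rates
    sp_discount     -- illustrative 30% discount, not a real AWS rate
    """
    # Usage is billed at the discounted Savings Plans rate up to the
    # commitment; anything beyond that is billed at On-Demand rates.
    sp_rate_usage = on_demand_usage * (1 - sp_discount)
    if sp_rate_usage <= commitment:
        return sp_rate_usage
    overflow_at_on_demand = (sp_rate_usage - commitment) / (1 - sp_discount)
    return commitment + overflow_at_on_demand

# $20 of On-Demand usage against a $10/hour commitment: the $10 covers
# the first ~$14.29 of On-Demand usage at the discounted rate, and the
# remaining ~$5.71 is billed at On-Demand rates, for ~$15.71 in total.
print(round(hourly_charge(20.0, 10.0), 2))
```

The point of the sketch is the split: committed usage gets the discounted rate, and only the overflow reverts to On-Demand pricing.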
Additional information:
What is the difference between Amazon EC2 Savings Plans and Amazon EC2 Reserved instances?
Reserved Instances are a billing discount applied to the use of On-Demand Compute Instances in your account. These On-Demand Instances must match certain attributes, such as instance type and Region to benefit from the billing discount.
For example, let's say you have a t2.medium instance running as an On-Demand Instance, and you purchase a Reserved Instance that matches the configuration of this particular t2.medium instance. At the time of purchase, the billing mode for the existing instance changes to the discounted Reserved Instance rate. The existing t2.medium instance doesn’t need replacing or migrating to get the discount.
After the reservation expires, the instance is charged as an On-Demand Instance. You can repurchase the Reserved Instance to continue the discounted rate on your instance. Reserved Instances act as an automatic discount on new or existing On-Demand Instances in your account.
Savings Plans also offer significant savings on your Amazon EC2 costs compared to On-Demand Instance pricing. With Savings Plans, you make a commitment to a consistent usage amount, measured in USD per hour. This provides you with the flexibility to use the instance configurations that best meet your needs, instead of committing to a specific instance configuration (as is the case with Reserved Instances). For example, with Compute Savings Plans, if you commit to $10 of compute usage an hour, you can use as many instances as you need (of any type) and you will get the Savings Plans prices on that usage up to $10, and any usage beyond the commitment will be charged at On-Demand rates.
The other options are incorrect:
“AWS Batch” is incorrect. Savings Plans are not available for AWS Batch.
AWS Batch is a compute service that allows you to run hundreds of thousands of batch computing jobs on AWS. AWS Batch dynamically provisions the optimal quantity and type of compute resources (e.g., CPU or memory optimized instances) based on the volume and specific resource requirements of the batch jobs submitted.
“AWS Outposts” is incorrect. Savings Plans are not available for AWS Outposts.
AWS Outposts is an AWS service that delivers the same AWS infrastructure, native AWS services, APIs, and tools to virtually any customer on-premises facility. With AWS Outposts, customers can run AWS services locally on their Outpost, including EC2, EBS, ECS, EKS, and RDS, and also have full access to services available in the Region.
Customers can use AWS Outposts to securely store and process data that needs to remain on-premises or in countries where there is no AWS region. AWS Outposts is ideal for applications that have low latency or local data processing requirements, such as financial services, healthcare, etc.
“Amazon Lightsail” is incorrect. Savings Plans are not available for Amazon Lightsail.
Amazon Lightsail provides a low-cost Virtual Private Server (VPS) in the cloud.
References:
https://aws.amazon.com/savingsplans/
Which of the following describes the payment model that AWS makes available for customers who consistently use Amazon EC2 over a 3-year term to reduce their total computing costs?
- Pay less by using more
- Pay less as AWS grows
- Save when you commit
- Pay as you go
Save when you commit
Explanation
For customers who can commit to using EC2 over a one- or three-year term, Amazon EC2 Reserved Instances or AWS Savings Plans are the better choice. Reserved Instances and Savings Plans provide a significant discount (up to 72%) compared to On-Demand Instance pricing.
The other options are incorrect:
“Pay as you go” is incorrect. Reserved Instances and AWS Savings Plans provide a significant discount (up to 72%) compared to On-Demand (pay-as-you-go) instance pricing.
“Pay less as AWS grows” is incorrect. Pay less as AWS grows refers to the discounts you get over time as AWS grows; this is sometimes called “AWS economies of scale”. For example, AWS has reduced the per-GB storage price of S3 by 80% since the service was first introduced in 2006.
“Pay less by using more” is incorrect. “Pay less by using more” means that you get volume-based discounts as your usage increases. For services such as S3, pricing is tiered, meaning the more you use, the less you pay per GB.
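Tiered pricing can be sketched as follows; the tier boundaries and $/GB rates are illustrative examples modeled on how S3 Standard storage pricing is structured, not current AWS prices:

```python
# Illustrative tiered price schedule: each tier's rate applies only to
# the GBs that fall inside that tier. Rates are examples, not AWS prices.
TIERS = [
    (51_200, 0.023),        # first 50 TB per month, $/GB
    (460_800, 0.022),       # next 450 TB per month
    (float("inf"), 0.021),  # everything over 500 TB per month
]

def monthly_storage_cost(gb):
    """Walk the tiers, charging each tier's rate on that tier's GBs only."""
    cost, remaining = 0.0, gb
    for tier_size, rate in TIERS:
        in_tier = min(remaining, tier_size)
        cost += in_tier * rate
        remaining -= in_tier
        if remaining <= 0:
            break
    return cost

# The blended $/GB rate falls as usage grows: "pay less by using more".
small = monthly_storage_cost(10_000) / 10_000    # first-tier rate only
large = monthly_storage_cost(600_000) / 600_000  # blended, lower rate
print(small, large)
```

At 10,000 GB everything falls in the first tier, while at 600,000 GB the cheaper tiers pull the blended per-GB rate below the first-tier rate.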
References:
https://aws.amazon.com/pricing/
Which of the following AWS services are designed with native Multi-AZ fault tolerance in mind? (Choose TWO)
- Amazon DynamoDB
- Amazon EBS
- Amazon Redshift
- Amazon Simple Storage Service
- AWS Snowball
Amazon DynamoDB
Amazon Simple Storage Service
Explanation
The Multi-AZ principle involves deploying an AWS resource in multiple Availability Zones to achieve high availability for that resource.
DynamoDB automatically spreads the data and traffic for your tables over a sufficient number of servers to handle your throughput and storage requirements, while maintaining consistent and fast performance. All of your data is stored on solid-state disks (SSDs) and is automatically replicated across multiple Availability Zones in an AWS Region, providing built-in fault tolerance in the event of a server failure or Availability Zone outage.
Amazon S3 provides durable infrastructure to store important data and is designed for durability of 99.999999999% of objects. Data in all Amazon S3 storage classes is redundantly stored across multiple Availability Zones (except S3 One Zone-IA).
The other options are incorrect:
“Amazon Redshift” is incorrect. Currently, Amazon Redshift only supports Single-AZ deployments.
“AWS Snowball” is incorrect. AWS Snowball is a data transport solution that accelerates moving terabytes to petabytes of data into and out of AWS using storage devices designed to be secure for physical transport.
“Amazon EBS” is incorrect. Amazon EBS volume data is replicated across multiple servers within the same Availability Zone.
Note:
Amazon EFS data is redundantly stored across multiple Availability Zones providing better durability compared to EBS volumes.
References:
https://aws.amazon.com/dynamodb/
https://aws.amazon.com/s3/storage-classes/
An organization needs to analyze and process a large number of data sets. Which AWS service should they use?
- Amazon EMR
- Amazon SQS
- Amazon MQ
- Amazon SNS
Amazon EMR
Explanation
Amazon EMR helps you analyze and process vast amounts of data by distributing the computational work across a cluster of virtual servers running in the AWS Cloud. The cluster is managed using an open-source framework called Hadoop. Amazon EMR lets you focus on crunching or analyzing your data without having to worry about time-consuming setup, management, and tuning of Hadoop clusters or the compute capacity they rely on.
All other options are AWS messaging services.
References:
https://aws.amazon.com/emr/