AWS Cloud Practitioner Practice Exam Questions Flashcards
Which of the following statements are CORRECT regarding the AWS VPC service? (Select two)
a. A Security Group can have allow rules only
b. A network access control list (network ACL) can have allow rules only
c. A Security Group can have both allow and deny rules
d. A Network Address Translation instance (NAT instance) is managed by AWS
e. A Network Address Translation gateway (NAT gateway) is managed by AWS
a. A Security Group can have allow rules only
e. A Network Address Translation gateway (NAT gateway) is managed by AWS
Security Groups support allow rules only; any traffic that is not explicitly allowed is implicitly denied. Network ACLs, by contrast, support both allow and deny rules. A NAT gateway is managed by AWS, whereas a NAT instance runs on an EC2 instance that the customer must provision, patch, and manage.
The DevOps team at an IT company is moving 500 GB of data from an EC2 instance to an S3 bucket in the same region. Which of the following scenario captures the correct charges for this data transfer?
a. The company would only be charged for the inbound data transfer into the S3 bucket
b. The company would only be charged for the outbound data transfer from EC2 instance
c. The company would not be charged for this data transfer
d. The company would be charged for both the outbound data transfer from EC2 instance as well as the inbound data transfer into the S3 bucket
c. The company would not be charged for this data transfer
There are three fundamental drivers of cost with AWS: compute, storage, and outbound data transfer. In most cases, there is no charge for inbound data transfer or data transfer between other AWS services within the same region. Outbound data transfer is aggregated across services and then charged at the outbound data transfer rate.
Per AWS pricing, data transfer between S3 and EC2 instances within the same region is not charged, so there would be no data transfer charge for moving 500 GB of data from an EC2 instance to an S3 bucket in the same region.
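The pricing logic can be sketched in a few lines; note that the outbound rate below is a placeholder for illustration, not a current AWS price:

```python
# Simplified model of AWS data transfer charges (illustrative only;
# the outbound rate is a placeholder, not a current AWS price).
OUTBOUND_RATE_PER_GB = 0.09  # hypothetical internet-egress rate in USD

def transfer_cost(gb, direction, same_region=False):
    """Return the data transfer charge in USD for a simple scenario."""
    if direction == "inbound":
        return 0.0                    # inbound data transfer is free
    if direction == "outbound" and same_region:
        return 0.0                    # EC2 -> S3 in the same region is free
    return gb * OUTBOUND_RATE_PER_GB  # outbound to the internet is billed

# Moving 500 GB from EC2 to S3 in the same region costs nothing:
print(transfer_cost(500, "outbound", same_region=True))  # 0.0
```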
According to the AWS Shared Responsibility Model, which of the following are responsibilities of AWS? (Select two)
a. Replacing faulty hardware of Amazon EC2 instances
b. Creating IAM role for accessing Amazon EC2 instances
c. Enabling Multi Factor Authentication on AWS accounts in your organization
d. Creating S3 bucket policies for appropriate user access
e. Operating the infrastructure layer, the operating system and the platform for the Amazon S3 service
a. Replacing faulty hardware of Amazon EC2 instances
e. Operating the infrastructure layer, the operating system and the platform for the Amazon S3 service
Under the Shared Responsibility Model, AWS is responsible for security OF the cloud (physical hardware, facilities, and the infrastructure, operating system, and platform behind managed services such as Amazon S3), while customers are responsible for security IN the cloud (IAM roles, MFA, bucket policies, and other access controls).
The engineering team at an IT company wants to monitor the CPU utilization for its fleet of Amazon Elastic Compute Cloud (Amazon EC2) instances and send an email to the administrator if the utilization exceeds 80%. As a Cloud Practitioner, which AWS services would you recommend to build this solution? (Select two)
a. AWS CloudTrail
b. AWS Lambda
c. Amazon CloudWatch
d. Amazon Simple Queue Service (SQS)
e. Amazon Simple Notification Service (SNS)
c. Amazon CloudWatch
e. Amazon Simple Notification Service (SNS)
Amazon CloudWatch - Amazon CloudWatch is a monitoring and observability service built for DevOps engineers, developers, site reliability engineers (SREs), and IT managers. CloudWatch provides data and actionable insights to monitor applications, respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health. You can create a CloudWatch alarm that sends an email message using Amazon SNS when the alarm changes state from OK to ALARM. The alarm changes to the ALARM state when the average CPU utilization of an EC2 instance exceeds a specified threshold for a specified number of consecutive periods.
Amazon Simple Notification Service (SNS) - Amazon Simple Notification Service (SNS) is a highly available, durable, secure, fully managed pub/sub messaging service that enables you to decouple microservices, distributed systems, and serverless applications.
How SNS Works: https://aws.amazon.com/sns/
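The alarm behavior described above (the state flips to ALARM only after the metric breaches the threshold for several consecutive evaluation periods) can be modeled with a short sketch; this is a simplified stand-in for CloudWatch, not its actual implementation:

```python
# Toy model of a CloudWatch-style alarm: the state changes to ALARM only
# when the metric exceeds the threshold for N consecutive evaluation periods.
def evaluate_alarm(cpu_samples, threshold=80.0, periods=3):
    state, breaches, states = "OK", 0, []
    for cpu in cpu_samples:
        breaches = breaches + 1 if cpu > threshold else 0
        state = "ALARM" if breaches >= periods else "OK"
        states.append(state)
    return states

# Two high samples are not enough; the third consecutive breach triggers ALARM
# (in real CloudWatch, that transition would then publish to an SNS topic).
print(evaluate_alarm([50, 85, 90, 95, 60]))
# ['OK', 'OK', 'OK', 'ALARM', 'OK']
```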
Which of the following AWS services support VPC Endpoint Gateway for a private connection from a VPC? (Select two)
a. Amazon Elastic Compute Cloud (Amazon EC2)
b. Amazon Simple Storage Service (Amazon S3)
c. Amazon DynamoDB
d. Amazon Simple Notification Service (SNS)
e. Amazon Simple Queue Service (SQS)
b. Amazon Simple Storage Service (Amazon S3)
c. Amazon DynamoDB
A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by AWS PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network.
There are two types of VPC endpoints: interface endpoints and gateway endpoints.
An interface endpoint is an elastic network interface with a private IP address from the IP address range of your subnet that serves as an entry point for traffic destined to a supported service. Interface endpoints are powered by AWS PrivateLink, a technology that enables you to privately access services by using private IP addresses.
A gateway endpoint is a gateway that you specify as a target for a route in your route table for traffic destined to a supported AWS service. The following AWS services are supported:
Amazon Simple Storage Service (Amazon S3)
Amazon DynamoDB
Exam Alert:
You may see a question around this concept in the exam. Just remember that only Amazon S3 and Amazon DynamoDB support VPC gateway endpoint. All other services that support VPC Endpoints use a VPC interface endpoint (note that Amazon S3 supports the VPC interface endpoint as well).
Which of the following AWS services comes under the Software as a Service (SaaS) Cloud Computing Type?
a. Elastic Load Balancing (ELB)
b. AWS Elastic Beanstalk
c. Amazon Elastic Compute Cloud (Amazon EC2)
d. Amazon Rekognition
d. Amazon Rekognition
Cloud Computing can be broadly divided into three types - Infrastructure as a Service (IaaS), Platform as a Service (PaaS), Software as a Service (SaaS).
IaaS contains the basic building blocks for cloud IT. It typically provides access to networking features, computers (virtual or on dedicated hardware), and data storage space. IaaS gives the highest level of flexibility and management control over IT resources. Examples - Amazon EC2 (on AWS), GCP, Azure, Rackspace, Digital Ocean, Linode.
PaaS removes the need to manage underlying infrastructure (usually hardware and operating systems) and allows you to focus on the deployment and management of your applications. You don’t need to worry about resource procurement, capacity planning, software maintenance, patching, or any of the other undifferentiated heavy lifting involved in running your application. Examples - Elastic Beanstalk (on AWS), Heroku, Google App Engine (GCP), Windows Azure (Microsoft).
SaaS provides you with a complete product that is run and managed by the service provider. With a SaaS offering, you don’t have to think about how the service is maintained or how the underlying infrastructure is managed. You only need to think about how you will use that particular software. Examples - Amazon Rekognition, Google Apps (Gmail), Dropbox, Zoom.
Overview of Cloud Computing Types: https://aws.amazon.com/types-of-cloud-computing/
You can use Amazon Rekognition to add image and video analysis to your applications using proven, highly scalable, deep learning technology that requires no machine learning expertise. With Amazon Rekognition, you can identify objects, people, text, scenes, and activities in images and videos as well as detect any inappropriate content. Rekognition is an example of the Software as a Service (SaaS) model.
An AWS user is trying to launch an Amazon Elastic Compute Cloud (Amazon EC2) instance in a given region. What is the region-specific constraint that the Amazon Machine Image (AMI) must meet so that it can be used for this Amazon Elastic Compute Cloud (Amazon EC2) instance?
a. You must use an Amazon Machine Image (AMI) from the same region as that of the Amazon EC2 instance. The region of the Amazon Machine Image (AMI) has no bearing on the performance of the Amazon EC2 instance
b. You should use an Amazon Machine Image (AMI) from the same region, as it improves the performance of the Amazon EC2 instance
c. You can use an Amazon Machine Image (AMI) from a different region, but it degrades the performance of the Amazon EC2 instance
d. An Amazon Machine Image (AMI) is a global entity, so the region is not applicable
a. You must use an Amazon Machine Image (AMI) from the same region as that of the Amazon EC2 instance. The region of the Amazon Machine Image (AMI) has no bearing on the performance of the Amazon EC2 instance
An Amazon Machine Image (AMI) provides the information required to launch an instance. You must specify an Amazon Machine Image (AMI) when you launch an instance. You can launch multiple instances from a single AMI when you need multiple instances with the same configuration.
The Amazon Machine Image (AMI) must be in the same region as that of the Amazon EC2 instance to be launched. If the Amazon Machine Image (AMI) exists in a different region, you can copy that Amazon Machine Image (AMI) to the region where you want to launch the EC2 instance. The region of Amazon Machine Image (AMI) has no bearing on the performance of the Amazon EC2 instance.
Amazon Machine Images (AMI) Overview: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html
A Silicon Valley based healthcare startup stores anonymized patient health data on Amazon S3. The CTO further wants to ensure that any sensitive data on S3 is discovered and identified to prevent any sensitive data leaks. As a Cloud Practitioner, which AWS service would you recommend to address this use case?
a. AWS Glue
b. Amazon Polly
c. Amazon Macie
d. AWS Secrets Manager
c. Amazon Macie
Amazon Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and protect your sensitive data in AWS. Macie automatically provides an inventory of Amazon S3 buckets including a list of unencrypted buckets, publicly accessible buckets, and buckets shared with AWS accounts outside those you have defined in AWS Organizations. Then, Macie applies machine learning and pattern matching techniques to the buckets you select to identify and alert you to sensitive data, such as personally identifiable information (PII).
Which of the following AWS services can be used to connect a company’s on-premises environment to a VPC without using the public internet?
a. VPC Endpoint
b. Internet Gateway
c. AWS Direct Connect
d. AWS Site-to-Site VPN
c. AWS Direct Connect
AWS Direct Connect is a cloud service solution that makes it easy to establish a dedicated network connection from your premises to AWS. You can use AWS Direct Connect to establish a private virtual interface from your on-premises network directly to your Amazon VPC, providing you with a private, high-bandwidth network connection between your network and your VPC. This connection is private and does not go over the public internet. It takes at least a month to establish this physical connection.
How Direct Connect Works: https://aws.amazon.com/directconnect/
A cyber-security agency uses AWS Cloud and wants to carry out security assessments on its own AWS infrastructure without any prior approval from AWS. Which of the following describes/facilitates this practice?
a. Network Stress Testing
b. Amazon Inspector
c. AWS Secrets Manager
d. Penetration Testing
d. Penetration Testing
AWS customers can carry out security assessments or penetration tests against their own AWS infrastructure without prior approval for a list of common AWS services. Customers are not permitted to conduct security assessments of AWS's underlying infrastructure or of the AWS services themselves.
Incorrect options:
Network Stress Testing - AWS considers “network stress test” to be when a test sends a large volume of legitimate or test traffic to a specific intended target application. The endpoint and infrastructure are expected to be able to handle this traffic.
AWS Secrets Manager - AWS Secrets Manager helps you protect secrets needed to access your applications, services, and IT resources. The service enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. Users and applications retrieve secrets with a call to AWS Secrets Manager APIs, eliminating the need to hardcode sensitive information in plain text.
An e-commerce company wants to store data from a recommendation engine in a database. As a Cloud Practitioner, which AWS service would you recommend to provide this functionality with the LEAST operational overhead for any scale?
a. Amazon Neptune
b. Amazon Simple Storage Service (Amazon S3)
c. Amazon Relational Database Service (Amazon RDS)
d. Amazon DynamoDB
d. Amazon DynamoDB
Amazon DynamoDB is a key-value and document database that delivers sub-millisecond performance at any scale. Amazon DynamoDB enables customers to offload the administrative burdens of operating and scaling distributed databases to AWS so that they don’t have to worry about hardware provisioning, setup and configuration, throughput capacity planning, replication, software patching, or cluster scaling.
You can use Amazon DynamoDB to store recommendation results with the LEAST operational overhead for any scale.
Amazon DynamoDB Workshop Example: https://catalog.us-east-1.prod.workshops.aws/workshops/ed82a5d4-6630-41f0-a6a1-9345898fa6ec/en-US/batch/dynamodb
Due to regulatory and compliance reasons, an organization is supposed to use a hardware device for any data encryption operations in the cloud. Which AWS service can be used to meet this compliance requirement?
a. AWS Secrets Manager
b. AWS Key Management Service (AWS KMS)
c. AWS CloudHSM
d. AWS Trusted Advisor
c. AWS CloudHSM
AWS CloudHSM is a cloud-based Hardware Security Module (HSM) that enables you to easily generate and use your encryption keys on the AWS Cloud. With CloudHSM, you can manage your encryption keys using FIPS 140-2 Level 3 validated HSMs. It is a fully-managed service that automates time-consuming administrative tasks for you, such as hardware provisioning, software patching, high-availability, and backups.
Please review this detailed description for CloudHSM: https://aws.amazon.com/cloudhsm/
AWS Identity and Access Management (AWS IAM) policies are written as JSON documents. Which of the following are mandatory elements of an IAM policy?
a. Action, Condition
b. Effect, Sid
c. Sid, Principal
d. Effect, Action
d. Effect, Action
Most policies are stored in AWS as JSON documents. Identity-based policies and policies used to set permissions boundaries are JSON policy documents that you attach to a user or role. Resource-based policies are JSON policy documents that you attach to a resource.
A JSON policy document includes these elements:
- Optional policy-wide information at the top of the document
- One or more individual statements
Each statement includes information about a single permission. The information in a statement is contained within a series of elements.
- Version – Specify the version of the policy language that you want to use. As a best practice, use the latest 2012-10-17 version.
- Statement – Use this main policy element as a container for the following elements. You can include more than one statement in a policy.
a. Sid (Optional) – Include an optional statement ID to differentiate between your statements.
b. Effect – Use Allow or Deny to indicate whether the policy allows or denies access.
c. Principal (Required in only some circumstances) – If you create a resource-based policy, you must indicate the account, user, role, or federated user to which you would like to allow or deny access. If you are creating an IAM permissions policy to attach to a user or role, you cannot include this element. The principal is implied as that user or role.
d. Action – Include a list of actions that the policy allows or denies.
e. Resource (Required in only some circumstances) – If you create an IAM permissions policy, you must specify a list of resources to which the actions apply. If you create a resource-based policy, this element is optional. If you do not include this element, then the resource to which the action applies is the resource to which the policy is attached.
f. Condition (Optional) – Specify the circumstances under which the policy grants permission.
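As an example, the hypothetical identity-based policy below contains the two mandatory elements, Effect and Action (Version, Sid, and Resource are included as standard practice); the small check that follows only verifies their presence and is not an AWS validator:

```python
import json

# A hypothetical identity-based policy with the mandatory Effect and Action
# elements (plus Version, Sid, and Resource, which are standard in practice).
policy_document = """
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowObjectReads",
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}
"""

policy = json.loads(policy_document)
for statement in policy["Statement"]:
    # Effect and Action are mandatory in every statement; Sid is optional.
    assert "Effect" in statement and "Action" in statement
    print(statement["Effect"], statement["Action"])  # Allow ['s3:GetObject']
```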
Which of the following is the best way to protect your data from accidental deletion on Amazon S3?
a. Amazon S3 lifecycle configuration
b. Amazon S3 storage classes
c. Amazon S3 Versioning
d. Amazon S3 Transfer Acceleration (Amazon S3TA)
c. Amazon S3 Versioning
Versioning is a means of keeping multiple variants of an object in the same bucket. You can use versioning to preserve, retrieve, and restore every version of every object stored in your Amazon S3 bucket. With versioning, you can easily recover from both unintended user actions and application failures.
Versioning-enabled buckets enable you to recover objects from accidental deletion or overwrite. For example: if you delete an object, instead of removing it permanently, Amazon S3 inserts a delete marker, which becomes the current object version.
Amazon S3 Versioning Overview: https://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectVersioning.html
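The delete-marker behavior can be illustrated with a toy in-memory model; this stands in for S3's versioning semantics and is not the S3 API:

```python
# Toy model of S3 versioning: a DELETE on a versioned bucket does not remove
# data; it pushes a delete marker that becomes the current version.
DELETE_MARKER = object()

class VersionedBucket:
    def __init__(self):
        self.versions = {}  # key -> list of versions, newest last

    def put(self, key, body):
        self.versions.setdefault(key, []).append(body)

    def delete(self, key):
        self.versions.setdefault(key, []).append(DELETE_MARKER)

    def get(self, key):
        stack = self.versions.get(key, [])
        if not stack or stack[-1] is DELETE_MARKER:
            return None  # current version is a delete marker: object appears gone
        return stack[-1]

    def undelete(self, key):
        # Recovery is simply removing the delete marker on top of the stack.
        if self.versions.get(key) and self.versions[key][-1] is DELETE_MARKER:
            self.versions[key].pop()

bucket = VersionedBucket()
bucket.put("report.csv", b"v1 data")
bucket.delete("report.csv")
print(bucket.get("report.csv"))  # None: the delete marker hides the object
bucket.undelete("report.csv")
print(bucket.get("report.csv"))  # b'v1 data': the old version was preserved
```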
Incorrect options:
Amazon S3 lifecycle configuration - To manage your Amazon S3 objects so that they are stored cost-effectively throughout their lifecycle, configure their Amazon S3 Lifecycle. With Amazon S3 Lifecycle configuration rules, you can tell Amazon S3 to transition objects to less expensive storage classes, or archive or delete them. Amazon S3 Lifecycle configuration does the heavy lifting of moving your data into cost-effective storage classes without user intervention. Amazon S3 Lifecycle configuration is not meant to protect from accidental deletion of data.
Amazon S3 storage classes - Amazon S3 offers a range of storage classes designed for different use cases. These include S3 Standard for general-purpose storage of frequently accessed data; S3 Intelligent-Tiering for data with unknown or changing access patterns; S3 Standard-Infrequent Access (S3 Standard-IA) and S3 One Zone-Infrequent Access (S3 One Zone-IA) for long-lived, but less frequently accessed data; and Amazon S3 Glacier (S3 Glacier) and Amazon S3 Glacier Deep Archive (S3 Glacier Deep Archive) for long-term archive and digital preservation. Storage classes are for different storage pattern needs that customers have, and not a data protection mechanism for S3.
Amazon S3 Transfer Acceleration (Amazon S3TA) - Amazon S3 Transfer Acceleration (Amazon S3TA) enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket. Amazon S3 Transfer Acceleration (Amazon S3TA) takes advantage of Amazon CloudFront’s globally distributed edge locations. As the data arrives at an edge location, data is routed to Amazon S3 over an optimized network path. Amazon S3 Transfer Acceleration (Amazon S3TA) cannot be used to protect from accidental deletion of data.
A medical research startup wants to understand the compliance of AWS services concerning HIPAA guidelines. Which AWS service can be used to review the HIPAA compliance and governance-related documents on AWS?
a. AWS Secrets Manager
b. AWS Systems Manager
c. AWS Trusted Advisor
d. AWS Artifact
d. AWS Artifact
AWS Artifact is your go-to, central resource for compliance-related information that matters to your organization. It provides on-demand access to AWS security and compliance reports and select online agreements. Reports available in AWS Artifact include our Service Organization Control (SOC) reports, Payment Card Industry (PCI) reports, and certifications from accreditation bodies across geographies and compliance verticals that validate the implementation and operating effectiveness of AWS security controls. Different types of agreements are available in AWS Artifact Agreements to address the needs of customers subject to specific regulations. For example, the Business Associate Addendum (BAA) is available for customers that need to comply with the Health Insurance Portability and Accountability Act (HIPAA). Strictly speaking, it is not a service; it is a no-cost, self-service portal for on-demand access to AWS compliance reports.
Incorrect options:
AWS Secrets Manager - AWS Secrets Manager helps you protect secrets needed to access your applications, services, and IT resources. The service enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. Users and applications retrieve secrets with a call to Secrets Manager APIs, eliminating the need to hardcode sensitive information in plain text.
AWS Systems Manager - AWS Systems Manager gives you visibility and control of your infrastructure on AWS. Systems Manager provides a unified user interface so you can view operational data from multiple AWS services and allows you to automate operational tasks across your AWS resources. With Systems Manager, you can group resources, like Amazon EC2 instances, Amazon S3 buckets, or Amazon RDS instances, by application, view operational data for monitoring and troubleshooting, and take action on your groups of resources.
Which options are common stakeholder roles for the AWS Cloud Adoption Framework (AWS CAF) platform perspective? (Select two)
a. Chief Data Officer (CDO)
b. Chief Product Officer (CPO)
c. Engineer
d. Chief Technology Officer (CTO)
e. Chief Information Officer (CIO)
c. Engineer
d. Chief Technology Officer (CTO)
The platform perspective of the AWS CAF focuses on technology: its common stakeholder roles include the CTO, technology leaders, architects, and engineers. The CDO and CIO map to the data and business/people perspectives instead.
Which AWS Service can be used to mitigate a Distributed Denial of Service (DDoS) attack?
a. Amazon CloudWatch
b. AWS Systems Manager
c. AWS Shield
d. AWS Key Management Service (AWS KMS)
c. AWS Shield
AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards applications running on AWS. AWS Shield provides always-on detection and automatic inline mitigations that minimize application downtime and latency, so there is no need to engage AWS Support to benefit from DDoS protection. There are two tiers of AWS Shield - Standard and Advanced.
All AWS customers benefit from the automatic protections of AWS Shield Standard, at no additional charge. AWS Shield Standard defends against most common, frequently occurring network and transport layer DDoS attacks that target your web site or applications. When you use AWS Shield Standard with Amazon CloudFront and Amazon Route 53, you receive comprehensive availability protection against all known infrastructure (Layer 3 and 4) attacks.
For higher levels of protection against attacks targeting your applications running on Amazon Elastic Compute Cloud (EC2), Elastic Load Balancing (ELB), Amazon CloudFront, AWS Global Accelerator and Amazon Route 53 resources, you can subscribe to AWS Shield Advanced. In addition to the network and transport layer protections that come with Standard, AWS Shield Advanced provides additional detection and mitigation against large and sophisticated DDoS attacks, near real-time visibility into attacks, and integration with AWS WAF, a web application firewall.
Which AWS service can be used as an in-memory database with high-performance and low latency?
a. Amazon ElastiCache
b. Amazon Relational Database Service (Amazon RDS)
c. Amazon Athena
d. Amazon DynamoDB
a. Amazon ElastiCache
Amazon ElastiCache allows you to seamlessly set up, run, and scale popular open-source compatible in-memory data stores in the cloud. Build data-intensive apps or boost the performance of your existing databases by retrieving data from high-throughput and low-latency in-memory data stores. Amazon ElastiCache is a popular choice for real-time use cases like caching, session stores, gaming, geospatial services, real-time analytics, and queuing. ElastiCache cannot be used for online analytical processing.
How Amazon ElastiCache Works: https://aws.amazon.com/elasticache/
Which tool/service will help you access AWS services using programming language-specific APIs?
a. AWS Management Console
b. AWS Software Development Kit (SDK)
c. Integrated Development Environments (IDE)
d. AWS Command Line Interface (CLI)
b. AWS Software Development Kit (SDK)
SDKs take the complexity out of coding by providing language-specific APIs for AWS services. For example, the AWS SDK for JavaScript simplifies the use of AWS Services by providing a set of libraries that are consistent and familiar for JavaScript developers. It provides support for API lifecycle considerations such as credential management, retries, data marshaling, serialization, and deserialization. AWS SDKs are offered in several programming languages to make it simple for developers working on different programming and scripting languages. So, AWS SDK can help with using AWS services from within an application using language-specific APIs.
AWS Lambda pricing is based on which of the following criteria? (Select two)
a. The size of the deployment package for the AWS Lambda function
b. The language runtime of the AWS Lambda function
c. The time it takes for the AWS Lambda function to execute
d. The number of lines of code for the AWS Lambda function
e. Number of requests for the AWS Lambda function
c. The time it takes for the AWS Lambda function to execute
e. Number of requests for the AWS Lambda function
AWS Lambda lets you run code without provisioning or managing servers. With AWS Lambda, you can run code for virtually any type of application or backend service - all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale your code with high availability.
With AWS Lambda, you pay only for what you use. You are charged based on the number of requests for your functions and the duration (the time it takes for your code to execute). AWS Lambda counts a request each time it starts executing in response to an event notification or invoke call, including test invokes from the console. Duration is calculated from the time your code begins executing until it returns or otherwise terminates, rounded up to the nearest 100ms.
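The two billing dimensions can be combined in a back-of-the-envelope calculation; the rates below are illustrative placeholders, so check the AWS pricing page for current numbers:

```python
import math

# Illustrative Lambda cost model; both rates are placeholder values,
# not current AWS prices.
PRICE_PER_MILLION_REQUESTS = 0.20   # assumed request rate (USD)
PRICE_PER_GB_SECOND = 0.0000166667  # assumed duration rate (USD)

def lambda_monthly_cost(requests, avg_ms, memory_mb):
    # Duration is rounded up to the nearest 100 ms per invocation.
    billed_seconds = math.ceil(avg_ms / 100) * 100 / 1000
    gb_seconds = requests * billed_seconds * (memory_mb / 1024)
    request_charge = requests / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    duration_charge = gb_seconds * PRICE_PER_GB_SECOND
    return request_charge + duration_charge

# 3M requests/month, 120 ms average, 512 MB memory: billed at 200 ms each.
print(round(lambda_monthly_cost(3_000_000, 120, 512), 2))  # 5.6
```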
An e-commerce company wants to assess its applications deployed on Amazon Elastic Compute Cloud (Amazon EC2) instances for vulnerabilities and deviations from AWS best practices. Which AWS service can be used to facilitate this?
a. Amazon Inspector
b. AWS CloudHSM
c. AWS Secrets Manager
d. AWS Trusted Advisor
a. Amazon Inspector
Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. Amazon Inspector automatically assesses applications for exposure, vulnerabilities, and deviations from best practices.
Overview of Amazon Inspector: https://aws.amazon.com/inspector/
Which of the following are examples of Horizontal Scalability (aka Elasticity)? (Select two)
a. Read Replicas in Amazon Relational Database Service (Amazon RDS)
b. Modify a Database instance to higher CPU and RAM
c. Elastic Load Balancing (ELB)
d. Add a bigger CPU to a computer
e. Modify an EC2 instance type from t2.nano to u-12tb1.metal
a. Read Replicas in Amazon Relational Database Service (Amazon RDS)
c. Elastic Load Balancing (ELB)
A “horizontally scalable” system is one that can increase capacity by adding more computers to the system. This is in contrast to a “vertically scalable” system, which is constrained to running its processes on only one computer; in such systems, the only way to increase performance is to add more resources into one computer in the form of faster (or more) CPUs, memory or storage. Horizontally scalable systems are oftentimes able to outperform vertically scalable systems by enabling parallel execution of workloads and distributing those across many different computers.
Elastic Load Balancing (ELB)
Elastic Load Balancing (ELB) automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, IP addresses, and Lambda functions. It can handle the varying load of your application traffic in a single Availability Zone (AZ) or across multiple Availability Zones (AZ). This falls under Horizontal Scaling.
Read Replicas in Amazon Relational Database Service (Amazon RDS)
Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. Read replicas allow you to create read-only copies that are synchronized with your master database. You can also place your read replica in a different AWS Region closer to your users for better performance. Read replicas are an example of horizontal scaling of resources.
Which AWS compute service provides the EASIEST way to access resizable compute capacity in the cloud with support for per-second billing and access to the underlying OS?
a. Amazon Lightsail
b. Amazon Elastic Compute Cloud (Amazon EC2)
c. AWS Lambda
d. Amazon Elastic Container Service (Amazon ECS)
b. Amazon Elastic Compute Cloud (Amazon EC2)
Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud with support for per-second billing. It is the easiest way to provision servers on AWS Cloud and access the underlying OS. Amazon EC2 reduces the time required to obtain and boot new server instances to minutes, allowing you to quickly scale capacity, both up and down, as your computing requirements change.
Amazon Elastic Compute Cloud (Amazon EC2) Overview: https://aws.amazon.com/ec2/
Incorrect options:
Amazon Lightsail - Amazon Lightsail is an easy-to-use cloud platform that offers you everything needed to build an application or website, plus a cost-effective, monthly plan. Amazon Lightsail offers several preconfigured, one-click-to-launch operating systems, development stacks, and web applications, including Linux, Windows OS, and WordPress. Amazon Lightsail comes with monthly payment plans and does not support per second billing, so this option is ruled out.
Which of the following is the MOST cost-effective option to purchase an EC2 Reserved Instance (RI)?
a. All upfront payment option with the standard 1-year term
b. Partial upfront payment option with standard 3-years term
c. No upfront payment option with standard 1-year term
d. No upfront payment option with standard 3-years term
b. Partial upfront payment option with standard 3-years term
The full payment chart is lengthy, but the key takeaways are: a 3-year standard term is cheaper than a 1-year term, and the more you pay upfront, the lower the effective rate. Among the options listed, partial upfront with a 3-year standard term is therefore the most cost-effective.
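With hypothetical numbers, the ordering can be checked directly; the figures below are invented to show the shape of the tradeoff, not taken from AWS pricing:

```python
# Hypothetical Reserved Instance options (invented numbers, not AWS prices):
# (upfront USD, hourly USD, term in years).
HOURS_PER_YEAR = 8760
options = {
    "all upfront, 1-year standard":     (480.0, 0.000, 1),
    "no upfront, 1-year standard":      (0.0,   0.062, 1),
    "no upfront, 3-year standard":      (0.0,   0.044, 3),
    "partial upfront, 3-year standard": (300.0, 0.027, 3),
}

def effective_hourly(upfront, hourly, years):
    # Spread the upfront payment over the whole term to compare like for like.
    hours = years * HOURS_PER_YEAR
    return (upfront + hourly * hours) / hours

ranked = sorted(options, key=lambda name: effective_hourly(*options[name]))
print(ranked[0])  # partial upfront, 3-year standard
```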
Which of the following AWS services is essential for implementing security of resources in AWS Cloud?
a. AWS Shield
b. AWS Identity and Access Management (IAM)
c. Amazon CloudWatch
d. AWS Web Application Firewall (AWS WAF)
b. AWS Identity and Access Management (IAM)
AWS Identity and Access Management (IAM) enables you to manage access to AWS services and resources securely. Using IAM, you can create and manage AWS users and groups, and use permissions to allow and deny their access to AWS resources. IAM enables security best practices by allowing you to grant unique security credentials to users and groups to specify which AWS service APIs and resources they can access. These features make IAM an important service for the overall security of AWS resources in your account. IAM is secure by default; users have no access to AWS resources until permissions are explicitly granted.
The AWS Well-Architected Framework provides guidance on building cloud-based applications using AWS best practices. Which of the following options are the pillars mentioned in the AWS Well-Architected Framework? (Select two)
a. Reliability
b. Availability
c. Cost Optimization
d. Scalability
e. Elasticity
a. Reliability
c. Cost Optimization
The AWS Well-Architected Framework provides guidance on building secure, high-performing, resilient, and efficient infrastructure for cloud-based applications. Based on six pillars — operational excellence, security, reliability, performance efficiency, cost optimization and sustainability — the Framework provides a consistent approach for customers and partners to evaluate architectures, and implement designs that will scale over time.
Incorrect options:
Elasticity - Elasticity is the ability to acquire resources as you need them and release resources when you no longer need them. In the cloud, you want to do this automatically.
Availability - A system that is available is capable of delivering the designed functionality at a given point in time. Highly available systems are those that can withstand some measure of degradation while still remaining available.
Scalability - A measurement of a system’s ability to grow to accommodate an increase in demand.
These three options are not part of the AWS Well-Architected Framework.
Which Amazon Elastic Compute Cloud (Amazon EC2) pricing model is the most cost-effective and flexible, with no requirement for a long-term resource commitment or upfront payment, while still guaranteeing that the instance will not be interrupted?
a. Dedicated Host
b. Reserved Instance (RI)
c. Spot Instance
d. On-demand Instance
d. On-demand Instance
An On-Demand Instance is an instance that you use on-demand. You have full control over its lifecycle — you decide when to launch, stop, hibernate, start, reboot, or terminate it. There is no long-term commitment required when you purchase On-Demand Instances. There is no upfront payment and you pay only for the seconds that your On-Demand Instances are running. The price per second for running an On-Demand Instance is fixed. On-demand instances cannot be interrupted.
EC2 Pricing Options Overview: https://aws.amazon.com/ec2/pricing/
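A quick back-of-the-envelope on per-second billing (the $0.10/hour rate below is hypothetical, not a real price; Linux On-Demand instances bill per second with a 60-second minimum):

```python
# Illustrative On-Demand cost calculation. The hourly rate is a made-up
# number; real prices vary by instance type and region.
hourly_rate = 0.10                  # hypothetical USD per hour
seconds_run = 45 * 60               # instance ran for 45 minutes
billed_seconds = max(seconds_run, 60)  # 60-second minimum charge
cost = hourly_rate / 3600 * billed_seconds
print(f"${cost:.4f}")               # 45 minutes at $0.10/hr -> $0.0750
```

The point of the exercise: you pay only for the seconds the instance runs, with no upfront payment and no commitment.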
A multi-national corporation wants to get expert professional advice on migrating to AWS and managing their applications on AWS Cloud. Which of the following entities would you recommend for this engagement?
a. Concierge Support Team
b. AWS Trusted Advisor
c. APN Consulting Partner
d. APN Technology Partner
c. APN Consulting Partner
The AWS Partner Network (APN) is the global partner program for technology and consulting businesses that leverage Amazon Web Services to build solutions and services for customers.
APN Consulting Partners are professional services firms that help customers of all types and sizes design, architect, build, migrate, and manage their workloads and applications on AWS, accelerating their migration to AWS cloud.
APN Partner Types Overview: https://aws.amazon.com/partners/
Which pillar of the AWS Well-Architected Framework recommends maintaining infrastructure as code (IaC)?
a. Performance Efficiency
b. Cost Optimization
c. Security
d. Operational Excellence
d. Operational Excellence
The AWS Well-Architected Framework helps you understand the pros and cons of decisions you make while building systems on AWS. By using the Framework you will learn architectural best practices for designing and operating reliable, secure, efficient, and cost-effective systems in the cloud. It provides a way for you to consistently measure your architectures against best practices and identify areas for improvement.
The AWS Well-Architected Framework is based on six pillars — Operational Excellence, Security, Reliability, Performance Efficiency, Cost Optimization and Sustainability.
The Operational Excellence pillar includes the ability to run and monitor systems to deliver business value and to continually improve supporting processes and procedures. In the cloud, you can apply the same engineering discipline that you use for application code to your entire environment. You can define your entire workload (applications, infrastructure) as code and update it with code. You can implement your operations procedures as code and automate their execution by triggering them in response to events.
An IT company has deployed a static website on Amazon Simple Storage Service (Amazon S3), but the website is still inaccessible. As a Cloud Practitioner, which of the following solutions would you suggest to address this issue?
a. Enable Amazon S3 versioning
b. Enable Amazon S3 replication
c. Disable Amazon S3 encryption
d. Fix the Amazon S3 bucket policy
d. Fix the Amazon S3 bucket policy
To host a static website on Amazon S3, you configure an Amazon S3 bucket for website hosting and then upload your website content to the bucket. When you configure a bucket as a static website, you must enable website hosting, set permissions, and create and add an index document.
Hosting a static website on Amazon S3: https://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html
If you want to configure an existing bucket as a static website that has public access, you must edit block public access settings for that bucket. You may also have to edit your account-level block public access settings. Amazon S3 applies the most restrictive combination of the bucket-level and account-level block public access settings.
Here is how you can edit Public Access settings for S3 buckets: https://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteAccessPermissionsReqd.html
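A bucket-policy fix along these lines typically grants public read access to the objects. A minimal sketch (with a placeholder bucket name):

```python
import json

# Typical bucket policy for a public static website (illustrative sketch;
# replace "example-bucket" with your bucket name). It lets everyone read
# the objects, which is what a public website needs.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example-bucket/*",
        }
    ],
}
print(json.dumps(bucket_policy, indent=2))
```

Remember that block public access settings must also allow this policy to take effect, as noted above.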
Which of the following statements is INCORRECT about AWS Auto Scaling?
a. You can automatically deploy AWS Shield when a DDoS attack is detected
b. You can scale out and add more Amazon Elastic Compute Cloud (Amazon EC2) instances to match an increase in demand as well as scale in and remove Amazon Elastic Compute Cloud (Amazon EC2) instances to match a reduced demand
c. You can automatically register new instances to a load balancer
d. You can automatically remove unhealthy instances
a. You can automatically deploy AWS Shield when a DDoS attack is detected
AWS Auto Scaling is helpful during a DDoS attack, as it can scale out resources fast. But it cannot automatically deploy the AWS Shield service onto its group of resources.
Which AWS service can be used to provision resources to run big data workloads on Hadoop clusters?
a. AWS Step Functions
b. AWS Batch
c. Amazon Elastic Compute Cloud (Amazon EC2)
d. Amazon EMR
d. Amazon EMR
Amazon EMR is the industry-leading cloud big data platform for processing vast amounts of data using open source tools such as Hadoop, Apache Spark, Apache Hive, Apache HBase, Apache Flink, Apache Hudi, and Presto. Amazon EMR can be used to provision resources to run big data workloads on Hadoop clusters.
Incorrect options:
AWS Step Functions - AWS Step Functions service lets you coordinate multiple AWS services into serverless workflows. You can design and run workflows that stitch together services such as AWS Lambda, AWS Glue and Amazon SageMaker.
AWS Step Functions Overview: https://aws.amazon.com/step-functions/
AWS Batch - You can use AWS Batch to plan, schedule and execute your batch computing workloads across the full range of AWS compute services. AWS Batch dynamically provisions the optimal quantity and type of compute resources (e.g., CPU or memory optimized instances) based on the volume and specific resource requirements of the batch jobs submitted, and optimizes the job distribution accordingly.
Please review the common use-cases for AWS Batch: https://aws.amazon.com/batch/
Exam Alert:
Understand the difference between AWS Step Functions and AWS Batch. You may get questions to choose one over the other. AWS Batch runs batch computing workloads by provisioning the compute resources. AWS Step Functions does not provision any resources. AWS Step Functions only orchestrates AWS services required for a given workflow. You cannot use AWS Step Functions to plan, schedule and execute your batch computing workloads by provisioning underlying resources.
A company wants to move to AWS cloud and release new features with quick iterations by utilizing relevant AWS services whenever required. Which of the following characteristics of AWS Cloud does it want to leverage?
a. Scalability
b. Agility
c. Reliability
d. Elasticity
b. Agility
In the world of cloud computing, “Agility” refers to the ability to rapidly develop, test and launch software applications that drive business growth. Put another way: AWS provides a massive global cloud infrastructure that allows you to quickly innovate, experiment and iterate. Instead of waiting weeks or months for hardware, you can instantly deploy new applications. This ability is called Agility.
Which of the following is the correct statement regarding the AWS Storage services?
a. Amazon Simple Storage Service (Amazon S3) is file based storage, Amazon Elastic Block Store (Amazon EBS) is block based storage and Amazon Elastic File System (Amazon EFS) is object based storage
b. Amazon Simple Storage Service (Amazon S3) is block based storage, Amazon Elastic Block Store (Amazon EBS) is object based storage and Amazon Elastic File System (Amazon EFS) is file based storage
c. Amazon Simple Storage Service (Amazon S3) is object based storage, Amazon Elastic Block Store (Amazon EBS) is file based storage and Amazon Elastic File System (Amazon EFS) is block based storage
d. Amazon Simple Storage Service (Amazon S3) is object based storage, Amazon Elastic Block Store (Amazon EBS) is block based storage and Amazon Elastic File System (Amazon EFS) is file based storage
d. Amazon Simple Storage Service (Amazon S3) is object based storage, Amazon Elastic Block Store (Amazon EBS) is block based storage and Amazon Elastic File System (Amazon EFS) is file based storage
Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed, elastic NFS file system.
Amazon Elastic Block Store (Amazon EBS) is an easy to use, high-performance block storage service designed for use with Amazon Elastic Compute Cloud (Amazon EC2) for both throughput and transaction-intensive workloads at any scale.
Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance.
A startup has just moved its IT infrastructure to AWS Cloud. The CTO would like to receive detailed reports that break down the startup’s AWS costs by the hour in an Amazon Simple Storage Service (Amazon S3) bucket. As a Cloud Practitioner, which AWS service would you recommend for this use-case?
a. AWS Cost & Usage Report (AWS CUR)
b. AWS Budgets
c. AWS Cost Explorer
d. AWS Pricing Calculator
a. AWS Cost & Usage Report (AWS CUR)
AWS Cost & Usage Report (AWS CUR) contains the most comprehensive set of cost and usage data available. You can use AWS Cost & Usage Report (AWS CUR) to publish your AWS billing reports to an Amazon Simple Storage Service (Amazon S3) bucket that you own. You can receive reports that break down your costs by the hour or month, by product or product resource, or by tags that you define yourself. AWS updates the report in your bucket once a day in comma-separated value (CSV) format.
AWS Cost & Usage Report (AWS CUR) Overview: https://docs.aws.amazon.com/cur/latest/userguide/what-is-cur.html
Incorrect Options:
AWS Cost Explorer - AWS Cost Explorer has an easy-to-use interface that lets you visualize, understand, and manage your AWS costs and usage over time. AWS Cost Explorer includes a default report that helps you visualize the costs and usage associated with your top five cost-accruing AWS services, and gives you a detailed breakdown of all services in the table view. The reports let you adjust the time range to view historical data going back up to twelve months to gain an understanding of your cost trends. AWS Cost Explorer cannot provide a detailed report of your AWS costs by the hour into an Amazon S3 bucket.
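To illustrate what such an hourly report enables, here is a small sketch that aggregates per-hour cost from CUR-style CSV rows (the column names follow the CUR naming scheme, but the rows are fabricated sample data, not real billing output):

```python
import csv
import io
from collections import defaultdict

# Fabricated sample of a CUR-style CSV file delivered to S3.
sample = """\
lineItem/UsageStartDate,lineItem/ProductCode,lineItem/UnblendedCost
2024-01-01T00:00:00Z,AmazonEC2,0.096
2024-01-01T00:00:00Z,AmazonS3,0.002
2024-01-01T01:00:00Z,AmazonEC2,0.096
"""

# Sum the unblended cost for each hour across all services.
cost_by_hour = defaultdict(float)
for row in csv.DictReader(io.StringIO(sample)):
    cost_by_hour[row["lineItem/UsageStartDate"]] += float(row["lineItem/UnblendedCost"])

for hour, cost in sorted(cost_by_hour.items()):
    print(hour, f"${cost:.3f}")
```

This kind of hour-by-hour breakdown, delivered as CSV into your own S3 bucket, is exactly what distinguishes AWS CUR from the dashboard-oriented Cost Explorer.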
Which AWS support plan provides access to a designated Technical Account Manager (TAM)?
a. AWS Enterprise Support
b. AWS Enterprise On-Ramp Support
c. AWS Business Support
d. AWS Developer Support
a. AWS Enterprise Support
AWS offers four different support plans to cater to each of its customers - AWS Developer Support, AWS Business Support, AWS Enterprise On-Ramp Support and AWS Enterprise Support plans. A basic support plan is included for all AWS customers.
AWS Enterprise Support provides customers with concierge-like service where the main focus is helping the customer achieve their outcomes and find success in the cloud. With AWS Enterprise Support, you get 24x7 technical support from high-quality engineers, tools and technology to automatically manage the health of your environment, consultative architectural guidance delivered in the context of your applications and use-cases, and a designated Technical Account Manager (TAM) to coordinate access to proactive/preventative programs and AWS subject matter experts.
Exam Alert:
Please review the differences between the AWS Developer Support, AWS Business Support, AWS Enterprise On-Ramp Support and AWS Enterprise Support plans as you can expect at least a couple of questions on the exam: https://aws.amazon.com/premiumsupport/plans/
A company runs an application on a fleet of EC2 instances. The company wants to automate the traditional maintenance job of running timely assessments and checking for OS vulnerabilities. As a Cloud Practitioner, which service will you suggest for this use case?
a. Amazon GuardDuty
b. AWS Shield
c. Amazon Inspector
d. Amazon Macie
c. Amazon Inspector
Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on your Amazon EC2 instances. Amazon Inspector automatically assesses applications for exposure, vulnerabilities, and deviations from best practices. After performing an assessment, Amazon Inspector produces a detailed list of security findings prioritized by level of severity. These findings can be reviewed directly or as part of detailed assessment reports which are available via the Amazon Inspector console or API.
Which of the following statements is correct regarding the AWS pricing policy for data transfer charges into or out of an AWS Region?
a. Both inbound data transfer and outbound data transfer are charged
b. Only outbound data transfer is charged
c. Only inbound data transfer is charged
d. Neither inbound nor outbound data transfer are charged
b. Only outbound data transfer is charged
One of the main benefits of cloud services is the ability it gives you to optimize costs to match your needs, even as those needs change. AWS services do not have complex dependencies or licensing requirements, so you can get exactly what you need to build innovative, cost-effective solutions using the latest technology.
There are three fundamental drivers of cost with AWS: compute, storage, and outbound data transfer. These characteristics vary somewhat, depending on the AWS product and pricing model you choose. Outbound data to the internet from all AWS regions is billed at region-specific, tiered data transfer rates. Inbound data transfer into all AWS regions from the internet is free.
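The tiered outbound pricing can be illustrated with a small calculation (the tier boundaries and per-GB rates below are made-up numbers, not actual AWS prices):

```python
# Illustrative tiered outbound data-transfer cost. Inbound transfer is
# free, so only outbound GB are priced. Tier sizes and rates are
# hypothetical examples, not real AWS pricing.
tiers = [
    (10_240, 0.09),         # first ~10 TB at a higher rate
    (40_960, 0.085),        # next ~40 TB at a lower rate
    (float("inf"), 0.07),   # everything beyond that
]

def outbound_cost(gb: float) -> float:
    cost, remaining = 0.0, gb
    for size, rate in tiers:
        used = min(remaining, size)
        cost += used * rate
        remaining -= used
        if remaining <= 0:
            break
    return cost

print(f"${outbound_cost(500):.2f}")  # 500 GB falls entirely in the first tier
```

Note how larger transfers blend down to cheaper marginal rates, while inbound data never appears in the calculation at all.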
Which AWS service should be used when you want to run container applications, but want to avoid the operational overhead of scaling, patching, securing, and managing servers?
a. Amazon Elastic Container Service (Amazon ECS) - Fargate launch type
b. AWS Lambda
c. Amazon Elastic Compute Cloud (Amazon EC2)
d. Amazon Elastic Container Service (Amazon ECS) - EC2 launch type
a. Amazon Elastic Container Service (Amazon ECS) - Fargate launch type
AWS Fargate is a serverless compute engine for containers. It works with both Amazon Elastic Container Service (Amazon ECS) and Amazon Elastic Kubernetes Service (Amazon EKS). AWS Fargate makes it easy for you to focus on building your applications. AWS Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design. AWS Fargate allocates the right amount of compute, eliminating the need to choose instances and scale cluster capacity. You only pay for the resources required to run your containers, so there is no over-provisioning and paying for additional servers. AWS Fargate runs each task or pod in its own kernel, providing the tasks and pods their own isolated compute environment. This enables your application to have workload isolation and improved security by design.
How AWS Fargate Works: https://aws.amazon.com/fargate/
Incorrect options:
Amazon Elastic Container Service (Amazon ECS) - EC2 launch type - Amazon Elastic Container Service (Amazon ECS) is a highly scalable, fast, container management service that makes it easy to run, stop, and manage Docker containers and allows you to easily run applications on a managed cluster of Amazon EC2 instances. Unlike AWS Fargate, this is not a fully managed service and you need to manage the underlying servers yourself.
AWS Lambda - AWS Lambda is a compute service that lets you run code without provisioning or managing servers. AWS Lambda executes your code only when needed and scales automatically, from a few requests per day to thousands per second. Although AWS Lambda can package functions as container images, it is not designed for running general, long-lived container applications.
Amazon Elastic Compute Cloud (Amazon EC2) - Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud, per-second billing, and access to the underlying OS. It is designed to make web-scale cloud computing easier for developers. Maintenance of the server and its software has to be done by the customer, so this option is ruled out.
Which AWS service can be used to automate code deployment to Amazon Elastic Compute Cloud (Amazon EC2) instances as well as on-premises instances?
a. AWS CodePipeline
b. AWS CloudFormation
c. AWS CodeCommit
d. AWS CodeDeploy
d. AWS CodeDeploy
AWS CodeDeploy is a service that automates code deployments to any instance, including Amazon EC2 instances and instances running on-premises. AWS CodeDeploy makes it easier for you to rapidly release new features, helps you avoid downtime during deployment, and handles the complexity of updating your applications. You can use AWS CodeDeploy to automate deployments, eliminating the need for error-prone manual operations, and the service scales with your infrastructure so you can easily deploy to one instance or thousands.
Incorrect options:
AWS CodeCommit - AWS CodeCommit is a fully-managed source control service that hosts secure Git-based repositories. It makes it easy for teams to collaborate on code in a secure and highly scalable ecosystem. CodeCommit eliminates the need to operate your own source control system or worry about scaling its infrastructure. It cannot be used to automate code deployment.
AWS CloudFormation - AWS CloudFormation allows you to use programming languages or a simple text file to model and provision, in an automated and secure manner, all the resources needed for your applications across all regions and accounts. It cannot be used to automate code deployment.
AWS CodePipeline - AWS CodePipeline is a continuous delivery service that enables you to model, visualize, and automate the steps required to release your software. With AWS CodePipeline, you model the full release process for building your code, deploying to pre-production environments, testing your application and releasing it to production.
AWS CodePipeline integrates with AWS services such as AWS CodeCommit, Amazon S3, AWS CodeBuild, AWS CodeDeploy, AWS Elastic Beanstalk, AWS CloudFormation, AWS OpsWorks, Amazon ECS, and AWS Lambda. To further elucidate, CodePipeline cannot by itself deploy the code, it can integrate with CodeDeploy for the actual deployment.
Which of the following entities applies patches to the underlying OS for Amazon Aurora?
a. The AWS customer by SSHing on the instances
b. The AWS customer by using AWS Systems Manager
c. The AWS Support after receiving a request from the customer
d. The AWS Product Team automatically
d. The AWS Product Team automatically
Amazon Aurora is a MySQL and PostgreSQL-compatible relational database built for the cloud. Amazon Aurora is fully managed by Amazon Relational Database Service (RDS), which automates time-consuming administration tasks like hardware provisioning, database setup, patching, and backups. The AWS Product team is responsible for applying patches to the underlying OS for Amazon Aurora.
Which of the following improves the availability for a fleet of Amazon Elastic Compute Cloud (Amazon EC2) instances?
a. Deploy the Amazon Elastic Compute Cloud (Amazon EC2) instances in the same Availability Zone (AZ) across two different AWS Regions
b. Deploy the Amazon Elastic Compute Cloud (Amazon EC2) instances across different Availability Zones (AZ) in the same AWS Region
c. Deploy the Amazon Elastic Compute Cloud (Amazon EC2) instances in the same Availability Zone (AZ) of an AWS Region
d. Deploy the Amazon Elastic Compute Cloud (Amazon EC2) instances across different AWS Regions of the same Availability Zone (AZ)
b. Deploy the Amazon Elastic Compute Cloud (Amazon EC2) instances across different Availability Zones (AZ) in the same AWS Region
Two of the other options describe configurations that are not possible (an Availability Zone belongs to exactly one Region), and the remaining option (a single Availability Zone) does not improve availability.
Which of the following is CORRECT regarding removing an AWS account from AWS Organizations?
a. Raise a support ticket with AWS Support to remove the account
b. The AWS account must be able to operate as a standalone account. Only then can it be removed from AWS Organizations
c. The AWS account can be removed from AWS Systems Manager
d. The AWS account must not have any Service Control Policies (SCPs) attached to it. Only then can it be removed from AWS Organizations
b. The AWS account must be able to operate as a standalone account. Only then can it be removed from AWS Organizations
You can remove an account from your organization only if the account has the information that is required for it to operate as a standalone account. For each account that you want to make standalone, you must accept the AWS Customer Agreement, choose a support plan, provide and verify the required contact information, and provide a current payment method. AWS uses the payment method to charge for any billable (not AWS Free Tier) AWS activity that occurs while the account isn’t attached to an organization.
A company wants to identify the optimal AWS resource configuration for its workloads so that the company can reduce costs and increase workload performance. Which of the following services can be used to meet this requirement?
a. AWS Cost Explorer
b. AWS Systems Manager
c. AWS Budgets
d. AWS Compute Optimizer
d. AWS Compute Optimizer
AWS Compute Optimizer recommends optimal AWS resources for your workloads to reduce costs and improve performance by using machine learning to analyze historical utilization metrics. Over-provisioning resources can lead to unnecessary infrastructure costs, and under-provisioning resources can lead to poor application performance. Compute Optimizer helps you choose optimal configurations for three types of AWS resources: Amazon EC2 instances, Amazon EBS volumes, and AWS Lambda functions, based on your utilization data.
Compute Optimizer recommends up to 3 options from 140+ EC2 instance types, as well as a wide range of EBS volume and Lambda function configuration options, to right-size your workloads. Compute Optimizer also projects what the CPU utilization, memory utilization, and run time of your workload would have been on recommended AWS resource options. This helps you understand how your workload would have performed on the recommended options before implementing the recommendations.
How Compute Optimizer works: https://aws.amazon.com/compute-optimizer/
Incorrect options:
AWS Cost Explorer - AWS Cost Explorer has an easy-to-use interface that lets you visualize, understand, and manage your AWS costs and usage over time. Cost Explorer Resource Rightsizing Recommendations and Compute Optimizer use the same recommendation engine. The Compute Optimizer recommendation engine delivers recommendations to help customers identify optimal EC2 instance types for their workloads. The Cost Explorer console and API surface a subset of these recommendations that may lead to cost savings, and augments them with customer-specific cost and savings information (e.g. billing information, available credits, RI, and Savings Plans) to help Cost Management owners quickly identify savings opportunities through infrastructure rightsizing. Compute Optimizer console and its API delivers all recommendations regardless of the cost implications.
AWS Web Application Firewall (WAF) offers protection from common web exploits at which layer?
a. Layer 3
b. Layer 4 and 7
c. Layer 4
d. Layer 7
d. Layer 7
AWS WAF is a web application firewall that lets you monitor the HTTP and HTTPS requests that are forwarded to an Amazon API Gateway API, an Amazon CloudFront distribution, or an Application Load Balancer. HTTP and HTTPS requests are part of the Application layer, which is Layer 7.
Incorrect options:
Layer 3 - Layer 3 is the Network layer, which decides the physical path data takes as it moves across the network. AWS Shield offers protection at this layer; AWS WAF does not.
Layer 4 - Layer 4 is the Transport layer, where data transmission occurs using the TCP or UDP protocols. AWS Shield offers protection at this layer; AWS WAF does not.
Which policy describes prohibited uses of the web services offered by Amazon Web Services?
a. AWS Trusted Advisor
b. AWS Acceptable Use Policy
c. AWS Fair Use Policy
d. AWS Applicable Use Policy
b. AWS Acceptable Use Policy
The Acceptable Use Policy describes prohibited uses of the web services offered by Amazon Web Services, Inc. and its affiliates (the “Services”) and the website located at http://aws.amazon.com (the “AWS Site”). This policy is present at https://aws.amazon.com/aup/ and is updated on a need basis by AWS.
Which AWS service can be used for online analytical processing?
a. Amazon DynamoDB
b. Amazon Redshift
c. Amazon Relational Database Service (Amazon RDS)
d. Amazon ElastiCache
b. Amazon Redshift
Amazon Redshift is a fast, fully managed cloud data warehouse that makes it simple and cost-effective to analyze all your data using standard SQL and your existing Business Intelligence (BI) tools. It allows you to run complex analytic queries against terabytes to petabytes of structured data, using sophisticated query optimization, columnar storage on high-performance storage, and massively parallel query execution.
Which AWS technology/service helps you to scale your resources to match supply with demand while still keeping your cloud solution cost-effective?
a. AWS CloudFormation
b. AWS Cost Explorer
c. AWS Auto Scaling
d. AWS OpsWorks
c. AWS Auto Scaling
AWS Auto Scaling monitors applications and automatically adjusts capacity to maintain steady, predictable performance at the lowest possible cost. Using AWS Auto Scaling, it’s easy to setup application scaling for multiple resources across multiple services in minutes. The service provides a simple, powerful user interface that lets you build scaling plans for resources including Amazon EC2 instances and Spot Fleets, Amazon ECS tasks, Amazon DynamoDB tables and indexes, and Amazon Aurora Replicas. AWS Auto Scaling makes scaling simple with recommendations that allow you to optimize performance, costs, or balance between them.
How AWS Auto Scaling Works: https://aws.amazon.com/autoscaling/
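As an illustration of "scaling to match demand", here is the shape of a target-tracking scaling policy configuration for an EC2 Auto Scaling group (the 50% CPU target is an arbitrary example value):

```python
import json

# Sketch of a target-tracking scaling policy configuration — the kind of
# document you would pass when creating a scaling policy. Auto Scaling adds
# or removes instances to keep the metric near the target. The 50% target
# is an arbitrary example, not a recommended value.
scaling_config = {
    "TargetValue": 50.0,  # keep average CPU utilization around 50%
    "PredefinedMetricSpecification": {
        "PredefinedMetricType": "ASGAverageCPUUtilization"
    },
}
print(json.dumps(scaling_config, indent=2))
```

Target tracking is the "thermostat" style of scaling: you pick the metric and the target, and the service handles when to scale out or in.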
Which AWS service can help you create data-driven business cases for transitioning your business from on-premises to AWS Cloud?
a. AWS Budgets
b. AWS Billing and Cost Management
c. AWS Migration Evaluator
d. AWS Trusted Advisor
c. AWS Migration Evaluator
AWS Migration Evaluator (formerly TSO Logic) is a complimentary service to create data-driven business cases for AWS Cloud planning and migration.
AWS Migration Evaluator quickly provides a business case to make sound AWS planning and migration decisions. With AWS Migration Evaluator, your organization can build a data-driven business case for AWS, get access to AWS expertise, gain visibility into the costs associated with multiple migration strategies, and receive insights on how reusing existing software licenses can reduce costs further.
Incorrect options:
AWS Budgets - AWS Budgets gives the ability to set custom budgets that alert you when your costs or usage exceed (or are forecasted to exceed) your budgeted amount. You can also use AWS Budgets to set reservation utilization or coverage targets and receive alerts when your utilization drops below the threshold you define. AWS Budgets can be created at the monthly, quarterly, or yearly level, and you can customize the start and end dates. You can further refine your budget to track costs associated with multiple dimensions, such as AWS service, linked account, tag, and others. You cannot use this service to create data-driven business cases for transitioning your business from on-premises to AWS Cloud.
AWS Trusted Advisor - AWS Trusted Advisor is an online tool that provides real-time guidance to help provision your resources following AWS best practices. Whether establishing new workflows, developing applications, or as part of ongoing improvement, recommendations provided by AWS Trusted Advisor regularly help keep your solutions provisioned optimally. AWS Trusted Advisor analyzes your AWS environment and provides best practice recommendations in five categories: Cost Optimization, Performance, Security, Fault Tolerance, Service Limits. You cannot use this service to create data-driven business cases for transitioning your business from on-premises to AWS Cloud.
AWS Billing and Cost Management - AWS Billing and Cost Management is the service that you use to pay your AWS bill, monitor your usage, and analyze and control your costs. It is the billing department for AWS services - with necessary tools and services under its hood. You cannot use this service to create data-driven business cases for transitioning your business from on-premises to AWS Cloud.
Which of the following are correct statements regarding the AWS Shared Responsibility Model? (Select two)
a. Configuration Management is the responsibility of the customer
b. For abstracted services like Amazon S3, AWS operates the infrastructure layer, the operating system, and platforms
c. For a service like Amazon EC2, that falls under Infrastructure as a Service (IaaS), AWS is responsible for maintaining guest operating system
d. AWS is responsible for Security ‘of’ the Cloud
e. AWS is responsible for training AWS and customer employees on AWS products and services
b. For abstracted services like Amazon S3, AWS operates the infrastructure layer, the operating system, and platforms
d. AWS is responsible for Security ‘of’ the Cloud
Security and Compliance is a shared responsibility between AWS and the customer. This shared model can help relieve the customer’s operational burden as AWS operates, manages and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates.
AWS is responsible for protecting the infrastructure that runs all of the services offered in the AWS Cloud. This infrastructure is composed of the hardware, software, networking, and facilities that run AWS Cloud services.
For abstracted services, such as Amazon S3 and Amazon DynamoDB, AWS operates the infrastructure layer, the operating system, and platforms, and customers access the endpoints to store and retrieve data.
Incorrect options:
For a service like Amazon EC2, that falls under Infrastructure as a Service (IaaS), AWS is responsible for maintaining guest operating system - A service such as Amazon Elastic Compute Cloud (Amazon EC2) is categorized as Infrastructure as a Service (IaaS) and, as such, requires the customer to perform all of the necessary security configuration and management tasks. Customers are responsible for the management of the guest operating system (including updates and security patches), any application software or utilities installed by the customer on the instances, and the configuration of the AWS-provided firewall (called a security group) on each instance.
A development team is looking for a forum where the most frequent questions and requests from AWS customers are listed along with AWS provided solutions.
Which AWS forum/service can be used for troubleshooting an issue or checking for a solution?
a. AWS Knowledge Center
b. AWS Marketplace
c. AWS Support Center
d. AWS Health Dashboard - service health
a. AWS Knowledge Center
AWS Knowledge Center contains the most frequent questions and requests from AWS customers, along with AWS-provided solutions. This should be the starting point for checking for a solution or troubleshooting an issue with AWS services. The URL for the Knowledge Center is https://aws.amazon.com/premiumsupport/knowledge-center/.
Which type of cloud computing does Amazon Elastic Compute Cloud (EC2) represent?
a. Software as a Service (SaaS)
b. Network as a Service (NaaS)
c. Infrastructure as a Service (IaaS)
d. Platform as a Service (PaaS)
c. Infrastructure as a Service (IaaS)
Cloud Computing can be broadly divided into three types - Infrastructure as a Service (IaaS), Platform as a Service (PaaS), Software as a Service (SaaS).
IaaS contains the basic building blocks for cloud IT. It typically provides access to networking features, computers (virtual or on dedicated hardware), and data storage space. IaaS gives the highest level of flexibility and management control over IT resources.
EC2 gives you full control over managing the underlying OS, virtual network configurations, storage, data and applications. So EC2 is an example of an IaaS service.
Please review this overview of the types of Cloud Computing: https://aws.amazon.com/types-of-cloud-computing/
What foundational capability under the operations perspective is part of the AWS Cloud Adoption Framework (AWS CAF)?
a. Performance and capacity management
b. Application portfolio management
c. Vulnerability management
d. Platform engineering
a. Performance and capacity management
The AWS Cloud Adoption Framework (AWS CAF) leverages AWS experience and best practices to help you digitally transform and accelerate your business outcomes through innovative use of AWS. Use the AWS CAF to identify and prioritize transformation opportunities, evaluate and improve your cloud readiness, and iteratively evolve your transformation roadmap.
AWS CAF groups its capabilities in six perspectives: Business, People, Governance, Platform, Security, and Operations. Each perspective comprises a set of capabilities that functionally related stakeholders own or manage in your cloud transformation journey.
The Operations perspective helps ensure that your cloud services are delivered at a level that meets the needs of your business. Performance and capacity management under the Operations perspective is a foundational capability of the AWS Cloud Adoption Framework (AWS CAF).
AWS Cloud Adoption Framework (AWS CAF) - Foundational capabilities: https://docs.aws.amazon.com/whitepapers/latest/overview-aws-cloud-adoption-framework/foundational-capabilities.html
Incorrect options:
Vulnerability management - Vulnerability management is a foundational capability under the Security perspective for the AWS Cloud Adoption Framework (AWS CAF).
Platform engineering - Platform engineering is a foundational capability under the Platform perspective for the AWS Cloud Adoption Framework (AWS CAF).
Application portfolio management - Application Portfolio Management is a foundational capability under the Governance perspective for the AWS Cloud Adoption Framework (AWS CAF).
What are the advantages that AWS Cloud offers over a traditional on-premises IT infrastructure? (Select two)
a. Trade capital expense for variable expense
b. Make a capacity decision before deploying an application, to reduce costs
c. Increase speed and agility by keeping servers and other required resources ready before time in your data centers
d. Provide lower latency to applications by maintaining servers on-premises
e. Eliminate guessing on your infrastructure capacity needs
a. Trade capital expense for variable expense
e. Eliminate guessing on your infrastructure capacity needs
Trade capital expense for variable expense
In a traditional on-premises environment, you have to invest heavily in data centers and servers before you know how you’re going to use them. With Cloud Computing, you can pay only when you consume computing resources, and pay only for how much you consume.
Eliminate guessing on your infrastructure capacity needs
When you make a capacity decision before deploying an application, you often end up either sitting on expensive idle resources or dealing with limited capacity. With Cloud Computing, these problems go away: you can access as much or as little capacity as you need, and scale up and down as required with only a few minutes' notice. You can stop guessing capacity.
Which AWS services can be used to decouple components of a microservices based application on AWS Cloud? (Select two)
a. Amazon Simple Queue Service (SQS)
b. AWS Lambda
c. AWS Step Functions
d. Amazon Elastic Compute Cloud (Amazon EC2)
e. Amazon Simple Notification Service (SNS)
a. Amazon Simple Queue Service (SQS)
e. Amazon Simple Notification Service (SNS)
Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. Using SQS, you can send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available.
Amazon Simple Notification Service (SNS) is a highly available, durable, secure, fully managed pub/sub messaging service that enables you to decouple microservices, distributed systems, and serverless applications. Using Amazon SNS topics, your publisher systems can fan-out messages to a large number of subscriber endpoints for parallel processing, including Amazon SQS queues, AWS Lambda functions, and HTTP/S webhooks. Additionally, SNS can be used to fan out notifications to end users using mobile push, SMS, and email.
Therefore, both SNS and SQS can be used to decouple components of a microservices-based application.
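The fan-out pattern described above can be sketched in plain Python. This is a minimal in-memory illustration of the decoupling SNS and SQS provide as managed services (the `Topic` class and queue names are purely illustrative, not an AWS API): the publisher only knows the topic, and each subscriber consumes from its own queue at its own pace.

```python
from collections import deque

# In-memory stand-in for SNS topic + subscribed SQS queues.
class Topic:
    def __init__(self):
        self.queues = []          # subscribed queues (stand-ins for SQS queues)

    def subscribe(self, queue):
        self.queues.append(queue)

    def publish(self, message):
        for q in self.queues:     # fan-out: one message, delivered to every queue
            q.append(message)

orders = Topic()
billing, shipping = deque(), deque()
orders.subscribe(billing)
orders.subscribe(shipping)

# The publisher never references billing or shipping directly: decoupled.
orders.publish({"order_id": 42})

print(billing.popleft())
print(shipping.popleft())
```

Because the producer and consumers share nothing but the topic, either side can be scaled, replaced, or taken offline without changing the other, which is exactly the property that makes SQS and SNS suitable for decoupling microservices.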
According to the AWS Cloud Adoption Framework (AWS CAF), what are two tasks that a company should perform when planning to migrate to the AWS Cloud and aiming to become more responsive to customer inquiries and feedback as part of their organizational transformation? (Select two)
a. Leverage legacy infrastructure for cost efficiencies
b. Leverage agile methods to rapidly iterate and evolve
c. Organize your teams around bureaucratic design principles
d. Organize your teams around products and value streams
e. Create new analytical insights with existing products and services
b. Leverage agile methods to rapidly iterate and evolve
d. Organize your teams around products and value streams
The AWS Cloud Adoption Framework (AWS CAF) leverages AWS experience and best practices to help you digitally transform and accelerate your business outcomes through innovative use of AWS. AWS CAF identifies specific organizational capabilities that underpin successful cloud transformations. These capabilities provide best practice guidance that helps you improve your cloud readiness. AWS CAF groups its capabilities in six perspectives: Business, People, Governance, Platform, Security, and Operations.
Organize your teams around products and value streams
Leverage agile methods to rapidly iterate and evolve
Using the AWS Cloud Adoption Framework (AWS CAF), you can reimagine how your business and technology teams create customer value and meet your strategic intent. Organizing your teams around products and value streams while leveraging agile methods to rapidly iterate and evolve will help you become more responsive and customer centric.
A company needs a storage solution for a project wherein the data is accessed less frequently but needs rapid access when required. Which S3 storage class is the MOST cost-effective for the given use-case?
a. Amazon S3 Intelligent-Tiering (S3 Intelligent-Tiering)
b. Amazon S3 Standard-Infrequent Access (S3 Standard-IA)
c. Amazon S3 Glacier (S3 Glacier)
d. Amazon S3 Standard
b. Amazon S3 Standard-Infrequent Access (S3 Standard-IA)
Amazon S3 Standard-Infrequent Access (S3 Standard-IA) is for data that is accessed less frequently, but requires rapid access when needed. S3 Standard-IA offers the high durability, high throughput, and low latency of S3 Standard, with a low per-GB storage price and a per-GB retrieval fee. This combination of low cost and high performance makes S3 Standard-IA ideal for long-term storage, backups, and as a data store for disaster recovery files.
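Selecting Standard-IA is just a matter of setting the `StorageClass` on upload. A hedged sketch of the request parameters you would pass to boto3's `put_object` (bucket and key names are placeholders; the actual call is shown in a comment so the snippet runs without credentials):

```python
# Build the put_object request for an S3 Standard-IA upload.
def put_object_params(bucket, key, body):
    return {
        "Bucket": bucket,
        "Key": key,
        "Body": body,
        "StorageClass": "STANDARD_IA",  # infrequent access, rapid retrieval
    }

params = put_object_params("my-backup-bucket", "reports/archive.pdf", b"...")
# With credentials configured: boto3.client("s3").put_object(**params)
```

Objects already in S3 Standard can also be transitioned to Standard-IA automatically via a lifecycle rule rather than at upload time.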
A company is using a message broker service on its on-premises application and wants to move this messaging functionality to AWS Cloud. Which of the following AWS services is the right choice to move the existing functionality easily?
a. Amazon Simple Queue Service (Amazon SQS)
b. Amazon MQ
c. Amazon Simple Notification Service (Amazon SNS)
d. Amazon Kinesis Data Streams
b. Amazon MQ
Amazon MQ is a managed message broker service for Apache ActiveMQ and RabbitMQ that makes it easy to set up and operate message brokers on AWS. Amazon MQ reduces your operational responsibilities by managing the provisioning, setup, and maintenance of message brokers for you. Because Amazon MQ connects to your current applications with industry-standard APIs and protocols, you can easily migrate to AWS without having to rewrite code.
If you’re using messaging with existing applications, and want to move the messaging functionality to the cloud quickly and easily, AWS recommends you consider Amazon MQ. It supports industry-standard APIs and protocols so you can switch from any standards-based message broker to Amazon MQ without rewriting the messaging code in your applications. If you are building brand new applications in the cloud, AWS recommends you consider Amazon SQS and Amazon SNS.
How Amazon MQ works: https://aws.amazon.com/amazon-mq/
Incorrect options:
Amazon Kinesis Data Streams - Amazon Kinesis Data Streams enables you to build custom applications that process or analyze streaming data for specialized needs. You can continuously add various types of data such as clickstreams, application logs, and social media to an Amazon Kinesis data stream from hundreds of thousands of sources. Within seconds, the data will be available for your Amazon Kinesis Applications to read and process from the stream.
Which of the following AWS services can be used to prevent Distributed Denial-of-Service (DDoS) attack? (Select three)
a. AWS Shield
b. Amazon Inspector
c. Amazon CloudFront with Amazon Route 53
d. AWS Web Application Firewall (AWS WAF)
e. AWS Trusted Advisor
f. AWS CloudHSM
a. AWS Shield
c. Amazon CloudFront with Amazon Route 53
d. AWS Web Application Firewall (AWS WAF)
AWS Shield - AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards applications running on AWS. AWS Shield provides always-on detection and automatic inline mitigations that minimize application downtime and latency, so there is no need to engage AWS Support to benefit from DDoS protection.
AWS Web Application Firewall (AWS WAF) - By using AWS Web Application Firewall (AWS WAF), you can configure web access control lists (Web ACLs) on your Amazon CloudFront distributions or Application Load Balancers to filter and block requests based on request signatures. In addition, by using AWS Web Application Firewall (AWS WAF) rate-based rules, you can automatically block the IP addresses of bad actors when requests matching a rule exceed a threshold that you define.
Amazon CloudFront with Amazon Route 53 - AWS hosts Amazon CloudFront and Amazon Route 53 services on a distributed network of proxy servers in data centers throughout the world called edge locations. Using the global Amazon network of edge locations for application delivery and DNS service plays an important part in building a comprehensive defense against DDoS attacks for your dynamic web applications.
How AWS Shield, AWS Web Application Firewall (AWS WAF), and Amazon CloudFront with Amazon Route 53 help mitigate DDoS attacks: https://aws.amazon.com/blogs/security/how-to-protect-dynamic-web-applications-against-ddos-attacks-by-using-amazon-cloudfront-and-amazon-route-53/
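The rate-based rule mechanism mentioned above can be made concrete. This is a hedged sketch of a WAFv2 rule definition that blocks source IPs exceeding 2,000 requests in the evaluation window; the rule and metric names are placeholders, and the rule would be included in the `Rules` list of a `create_web_acl`/`update_web_acl` call:

```python
# WAFv2 rate-based rule: block IPs that exceed the request limit.
rate_rule = {
    "Name": "rate-limit-per-ip",
    "Priority": 0,
    "Statement": {
        # Aggregate request counts per source IP address.
        "RateBasedStatement": {"Limit": 2000, "AggregateKeyType": "IP"},
    },
    "Action": {"Block": {}},   # block once the limit is exceeded
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "rate-limit-per-ip",
    },
}
# Hedged: pass [rate_rule] as the Rules list when creating a Web ACL with
# boto3.client("wafv2").create_web_acl(...).
```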
Which of the following statements are true about Cost Allocation Tags in AWS Billing? (Select two)
a. Tags help in organizing resources and are a mandatory configuration item to run reports
b. Only user-defined tags need to be activated before they can appear in Cost Explorer or on a cost allocation report
c. For each resource, each tag key must be unique, but can have multiple values
d. For each resource, each tag key must be unique, and each tag key can have only one value
e. You must activate both AWS generated tags and user-defined tags separately before they can appear in Cost Explorer or on a cost allocation report
d. For each resource, each tag key must be unique, and each tag key can have only one value
e. You must activate both AWS generated tags and user-defined tags separately before they can appear in Cost Explorer or on a cost allocation report
A Cost Allocation Tag is a label that you or AWS assigns to an AWS resource. Each tag consists of a key and a value. For each resource, each tag key must be unique, and each tag key can have only one value. You can use tags to organize your resources, and cost allocation tags to track your AWS costs on a detailed level.
AWS provides two types of cost allocation tags: AWS-generated tags and user-defined tags. AWS defines, creates, and applies the AWS-generated tags for you, and you define, create, and apply user-defined tags. You must activate both types of tags separately before they can appear in Cost Explorer or on a cost allocation report.
AWS Cost Allocation Tags Overview: https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html
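The "unique key, one value" rule maps naturally onto a Python dict, and the tag list shape below is a hedged sketch of what boto3's EC2 `create_tags` expects (the instance ID and tag values are placeholders):

```python
# A dict mirrors the cost allocation tag rules: per resource, each tag
# key is unique and holds exactly one value.
tags = {"CostCenter": "1234", "Project": "migration"}
tags["CostCenter"] = "5678"        # re-tagging overwrites; keys never duplicate

# Hedged sketch of the boto3 request shape for tagging an EC2 instance.
create_tags_params = {
    "Resources": ["i-0123456789abcdef0"],
    "Tags": [{"Key": k, "Value": v} for k, v in tags.items()],
}
# With credentials: boto3.client("ec2").create_tags(**create_tags_params)
```

Once such user-defined tags are activated in the Billing console, costs can be grouped by `CostCenter` or `Project` in Cost Explorer.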
Incorrect options:
Tags help in organizing resources and are a mandatory configuration item to run reports - Tags certainly help organize resources per an organization's requirements, but they are not mandatory.
For each resource, each tag key must be unique, but can have multiple values - For each resource, each tag key must be unique, and each tag key can have only one value.
Only user-defined tags need to be activated before they can appear in Cost Explorer or on a cost allocation report - As explained above, both kinds of tags (user-defined and AWS generated) need to be activated separately before they can appear in report generation.
Which service gives a personalized view of the status of the AWS services that are part of your Cloud architecture so that you can quickly assess the impact on your business when AWS service(s) are experiencing issues?
a. Amazon Inspector
b. Amazon CloudWatch
c. AWS Health - Your Account Health Dashboard
d. AWS Health - Service Health Dashboard
c. AWS Health - Your Account Health Dashboard
AWS Health - Your Account Health Dashboard provides alerts and remediation guidance when AWS is experiencing events that may impact you.
With AWS Health - Your Account Health Dashboard, alerts are triggered by changes in the health of your AWS resources, giving you event visibility, and guidance to help quickly diagnose and resolve issues.
You can check on this page https://phd.aws.amazon.com/phd/home to get current status information.
Incorrect options:
AWS Health - Service Health Dashboard - The AWS Health - Service Health Dashboard is the single place to learn about the availability and operations of AWS services. You can view the overall status of AWS services, and you can sign in to view personalized communications about your particular AWS account or organization.
You can check on this page https://health.aws.amazon.com/health/status to get current status information.
Exam Alert:
While the AWS Health - Service Health Dashboard displays the general status of AWS services, the AWS Health - Your Account Health Dashboard gives you a personalized view of the performance and availability of the AWS services underlying your AWS resources.
A company wants to have control over creating and using its own keys for encryption on AWS services. Which of the following can be used for this use-case?
a. customer managed key (CMK)
b. AWS Secrets Manager
c. AWS owned key
d. AWS managed key
a. customer managed key (CMK)
An AWS KMS key is a logical representation of a cryptographic key. A KMS key contains metadata, such as the key ID, key spec, key usage, creation date, description, and key state. Most importantly, it contains a reference to the key material that is used when you perform cryptographic operations with the KMS key.
The KMS keys that you create are customer managed keys. Customer managed keys are KMS keys in your AWS account that you create, own, and manage. You have full control over these KMS keys, including establishing and maintaining their key policies, IAM policies, and grants, enabling and disabling them, rotating their cryptographic material, adding tags, creating aliases that refer to the KMS keys, and scheduling the KMS keys for deletion.
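Creating a customer managed key is a `create_key` call followed (typically) by `create_alias`. This is a hedged sketch of the request parameters for boto3's KMS client; the description and alias name are placeholders, and the calls themselves are shown in comments:

```python
# Request shape for creating a symmetric customer managed key (CMK).
create_key_params = {
    "Description": "CMK for application data encryption",
    "KeyUsage": "ENCRYPT_DECRYPT",     # symmetric encryption key
    "Origin": "AWS_KMS",               # KMS generates the key material
}
# Aliases are friendly names for keys; they must start with "alias/".
create_alias_params = {
    "AliasName": "alias/app-data-key",
    "TargetKeyId": "<KeyId returned by create_key>",
}
# kms = boto3.client("kms")
# key = kms.create_key(**create_key_params)
# kms.create_alias(AliasName=create_alias_params["AliasName"],
#                  TargetKeyId=key["KeyMetadata"]["KeyId"])
```

Because you created the key, you can attach your own key policy, enable or disable it, and schedule it for deletion, none of which is possible with AWS owned or AWS managed keys.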
Which budget types can be created under AWS Budgets? (Select three)
a. Resource budget
b. Reservation budget
c. Hardware budget
d. Software budget
e. Cost budget
f. Usage budget
b. Reservation budget
e. Cost budget
f. Usage budget
AWS Budgets enables you to plan your service usage, service costs, and instance reservations. AWS Budgets information is updated up to three times a day; updates typically occur 8 to 12 hours after the previous update. Budgets track your unblended costs, subscriptions, refunds, and Reserved Instances (RIs). There are four different budget types you can create under AWS Budgets: Cost budget, Usage budget, Reservation budget, and Savings Plans budget.
Cost budget - Helps you plan how much you want to spend on a service.
Usage budget - Helps you plan how much you want to use one or more services.
Reservation budget - This helps you track the usage of your Reserved Instances (RIs). There are two ways of doing this: Reserved Instance (RI) utilization budgets (these let you see if your Reserved Instances are unused or under-utilized) and Reserved Instance (RI) coverage budgets (these let you see how much of your instance usage is covered by a reservation).
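A cost budget, for example, boils down to one structure passed to the Budgets API. This is a hedged sketch of the `Budget` argument for boto3's `budgets.create_budget`; the budget name, amount, and account ID are placeholders:

```python
# Monthly cost budget definition for the AWS Budgets API.
cost_budget = {
    "BudgetName": "monthly-cost-budget",
    "BudgetLimit": {"Amount": "500.0", "Unit": "USD"},
    "TimeUnit": "MONTHLY",
    "BudgetType": "COST",   # other types include USAGE and RI_UTILIZATION
}
# With credentials:
# boto3.client("budgets").create_budget(AccountId="123456789012",
#                                       Budget=cost_budget)
```

Swapping `BudgetType` (and the corresponding limit unit) is how the same call produces usage or reservation budgets.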
Which of the following is a part of the AWS Global Infrastructure?
a. Virtual Private Cloud (VPC)
b. Subnet
c. Virtual Private Network (VPN)
d. AWS Region
d. AWS Region
An AWS Region is a physical location around the world where AWS builds its data centers. Each group of logical data centers is called an Availability Zone (AZ). Each AWS Region consists of multiple, isolated, and physically separate AZs within a geographic area.
Please see this illustration for AWS regions in the US: https://aws.amazon.com/about-aws/global-infrastructure/regions_az/
Which of the following statements are correct about the AWS root user account? (Select two)
a. It is highly recommended to enable Multi-Factor Authentication (MFA) for root user account
b. Root user access credentials are the email address and password used to create the AWS account
c. Root user account password cannot be changed once it is set
d. Root user account gets unrestricted permissions when the account is created, but these can be restricted using IAM policies
e. Root user credentials should only be shared with managers requiring administrative responsibilities to complete their jobs
a. It is highly recommended to enable Multi-Factor Authentication (MFA) for root user account
b. Root user access credentials are the email address and password used to create the AWS account
The email address and password used for signing up for AWS services are the AWS root user account credentials. The root user account, therefore, has full permissions on all AWS resources under that account. Restricting root user account access is not possible. As a best practice, Multi-Factor Authentication (MFA) should be set on the root user account. The root user account password can be changed after account creation. For all employees performing administrative jobs, create individual user accounts using AWS IAM and grant administrative permissions as needed.
AWS Root User Account Security Best Practices: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_root-user.html
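Following the best practice above, administrative work goes to an individual IAM user rather than root. A hedged sketch of the boto3 IAM requests involved; the user name is a placeholder, while `AdministratorAccess` is a real AWS managed policy:

```python
# AWS managed policy granting full administrative permissions.
ADMIN_POLICY_ARN = "arn:aws:iam::aws:policy/AdministratorAccess"

# Create a dedicated IAM user and attach the admin policy to it,
# instead of sharing root credentials.
create_user_params = {"UserName": "ops-admin"}
attach_policy_params = {
    "UserName": "ops-admin",
    "PolicyArn": ADMIN_POLICY_ARN,
}
# iam = boto3.client("iam")
# iam.create_user(**create_user_params)
# iam.attach_user_policy(**attach_policy_params)
```

The root credentials then stay locked away (with MFA enabled) and are used only for the few tasks that genuinely require root.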
AWS Trusted Advisor analyzes your AWS environment and provides best practice recommendations for which of the following categories? (Select two)
a. Change Management
b. Service Limits
c. Elasticity
d. Cost Optimization
e. Documentation
b. Service Limits
d. Cost Optimization
AWS Trusted Advisor is an online tool that provides real-time guidance to help provision your resources following AWS best practices. Whether establishing new workflows, developing applications, or as part of ongoing improvement, recommendations provided by Trusted Advisor on a regular basis help keep your solutions provisioned optimally. AWS Trusted Advisor analyzes your AWS environment and provides best practice recommendations in five categories: Cost Optimization, Performance, Security, Fault Tolerance, Service Limits.
AWS Trusted Advisor Recommendations: https://aws.amazon.com/premiumsupport/technology/trusted-advisor/
An organization is planning to move its infrastructure from the on-premises datacenter to AWS Cloud. As a Cloud Practitioner, which options would you recommend so that the organization can identify the right AWS services to build solutions on AWS Cloud? (Select two)
a. AWS CloudTrail
b. Amazon CloudWatch
c. AWS Partner Network (APN)
d. AWS Service Catalog
e. AWS Organizations
c. AWS Partner Network (APN)
d. AWS Service Catalog
AWS Service Catalog - AWS Service Catalog allows organizations to create and manage catalogs of IT services that are approved for use on AWS. These IT services can include everything from virtual machine images, servers, software, and databases to complete multi-tier application architectures.
AWS Partner Network (APN) - Organizations can take help from the AWS Partner Network (APN) to identify the right AWS services to build solutions on AWS Cloud. AWS Partner Network (APN) is the global partner program for technology and consulting businesses that leverage Amazon Web Services to build solutions and services for customers.
An IT company wants to run a log backup process every Monday at 2 AM. The usual runtime of the process is 5 minutes. As a Cloud Practitioner, which AWS services would you recommend to build a serverless solution for this use-case? (Select two)
a. AWS Step Functions
b. AWS Lambda
c. Amazon Elastic Compute Cloud (Amazon EC2)
d. Amazon EventBridge
e. AWS Systems Manager
b. AWS Lambda
d. Amazon EventBridge
Amazon EventBridge - Amazon EventBridge is a service that provides real-time access to changes in data in AWS services, your own applications, and software as a service (SaaS) applications without writing code. Amazon EventBridge Scheduler is a serverless task scheduler that simplifies creating, executing, and managing millions of schedules across AWS services without provisioning or managing underlying infrastructure.
Amazon EventBridge Scheduler overview: https://aws.amazon.com/eventbridge/
AWS Lambda - AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume. A Lambda function has a maximum execution time of 15 minutes, so it can comfortably run this 5-minute log backup process.
To build the solution for the given use-case, you can leverage Amazon EventBridge Scheduler to trigger every Monday at 2 AM, and set the Lambda function as the target of that schedule.
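Concretely, the schedule is one API call. This is a hedged sketch of the request for boto3's EventBridge Scheduler `create_schedule`; the schedule name and both ARNs are placeholders. EventBridge cron fields are minutes, hours, day-of-month, month, day-of-week, year, so `cron(0 2 ? * MON *)` fires every Monday at 02:00:

```python
# EventBridge Scheduler schedule: every Monday at 2 AM, invoking a
# Lambda function (the log backup process).
schedule_params = {
    "Name": "weekly-log-backup",
    "ScheduleExpression": "cron(0 2 ? * MON *)",   # Mondays, 02:00
    "FlexibleTimeWindow": {"Mode": "OFF"},         # fire exactly on schedule
    "Target": {
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:log-backup",
        "RoleArn": "arn:aws:iam::123456789012:role/scheduler-invoke-role",
    },
}
# With credentials: boto3.client("scheduler").create_schedule(**schedule_params)
```

The `RoleArn` is an IAM role that EventBridge Scheduler assumes to invoke the target Lambda function.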
A company’s flagship application runs on a fleet of Amazon Elastic Compute Cloud (Amazon EC2) instances. As per the new policies, the system administrators are looking for the best way to provide secure shell access to Amazon Elastic Compute Cloud (Amazon EC2) instances without opening new ports or using public IP addresses.
Which tool/service will help you achieve this requirement?
a. Amazon Elastic Compute Cloud (Amazon EC2) Instance Connect
b. Amazon Inspector
c. Amazon Route 53
d. AWS Systems Manager Session Manager
d. AWS Systems Manager Session Manager
AWS Systems Manager Session Manager is a fully-managed service that provides you with an interactive browser-based shell and CLI experience. It helps provide secure and auditable instance management without the need to open inbound ports, maintain bastion hosts, and manage SSH keys. AWS Systems Manager Session Manager helps to enable compliance with corporate policies that require controlled access to instances, increase security and auditability of access to the instances while providing simplicity and cross-platform instance access to end-users.
Incorrect options:
Amazon Elastic Compute Cloud (Amazon EC2) Instance Connect - Amazon Elastic Compute Cloud (Amazon EC2) Instance Connect provides a simple and secure way to connect to your Linux instances using Secure Shell (SSH). With EC2 Instance Connect, you use AWS Identity and Access Management (IAM) policies and principals to control SSH access to your instances, removing the need to share and manage SSH keys. EC2 Instance Connect will need port 22 to be open for traffic. Therefore, not the correct option here.
Which AWS service protects your AWS account by monitoring malicious activity and detecting threats?
a. Amazon CloudWatch
b. AWS CloudTrail
c. AWS Trusted Advisor
d. Amazon GuardDuty
d. Amazon GuardDuty
Compared to the on-demand instance prices, what is the highest possible discount offered for spot instances?
a. 75%
b. 10%
c. 50%
d. 90%
d. 90%
Amazon EC2 spot instances let you take advantage of unused EC2 capacity in the AWS Cloud. Spot instances are available at up to a 90% discount compared to on-demand instance prices. You can use spot instances for various stateless, fault-tolerant, or flexible applications such as big data, containerized workloads, CI/CD, web servers, high-performance computing (HPC), and other test and development workloads.
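Requesting a spot instance is a small variation on a normal `run_instances` call. A hedged sketch of the boto3 request parameters; the AMI ID and instance type are placeholders, and leaving out `MaxPrice` caps the bid at the on-demand price:

```python
# run_instances request for a one-time spot instance.
run_instances_params = {
    "ImageId": "ami-0123456789abcdef0",
    "InstanceType": "t3.micro",
    "MinCount": 1,
    "MaxCount": 1,
    "InstanceMarketOptions": {
        "MarketType": "spot",                        # spot instead of on-demand
        "SpotOptions": {"SpotInstanceType": "one-time"},
    },
}
# With credentials: boto3.client("ec2").run_instances(**run_instances_params)
```

Because spot capacity can be reclaimed by AWS with short notice, the workload should be stateless or fault-tolerant, as the explanation above notes.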
Which Amazon Route 53 routing policy would you use when you want to route your traffic in an active-passive configuration?
a. Simple routing
b. Failover routing
c. Latency-based routing
d. Weighted routing
b. Failover routing
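Failover routing serves a primary record while its health check passes and fails over to a secondary record otherwise, which is exactly the active-passive setup the question describes. A hedged sketch of the Route 53 `ChangeBatch` involved; the record name, IPs, hosted zone, and health check ID are placeholders:

```python
# Build one record of an active-passive failover pair.
def failover_record(ip, role, set_id, health_check=None):
    record = {
        "Name": "app.example.com",
        "Type": "A",
        "TTL": 60,
        "SetIdentifier": set_id,          # distinguishes records sharing a name
        "Failover": role,                 # "PRIMARY" or "SECONDARY"
        "ResourceRecords": [{"Value": ip}],
    }
    if health_check:
        record["HealthCheckId"] = health_check   # gates the primary record
    return record

change_batch = {
    "Changes": [
        {"Action": "UPSERT", "ResourceRecordSet":
            failover_record("192.0.2.10", "PRIMARY", "primary", "hc-placeholder")},
        {"Action": "UPSERT", "ResourceRecordSet":
            failover_record("192.0.2.20", "SECONDARY", "secondary")},
    ]
}
# With credentials: boto3.client("route53").change_resource_record_sets(
#     HostedZoneId="<zone id>", ChangeBatch=change_batch)
```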
A startup wants to migrate its data and applications from the on-premises data center to AWS Cloud. Which of the following options can be used by the startup to help with this migration? (Select two)
a. Raise a support ticket with AWS Support for further assistance
b. Consult moderators on AWS Developer Forums
c. Leverage AWS Professional Services to accelerate the infrastructure migration
d. Use AWS Trusted Advisor to automate the infrastructure migration
e. Utilize AWS Partner Network (APN) to build a custom solution for this infrastructure migration
c. Leverage AWS Professional Services to accelerate the infrastructure migration
e. Utilize AWS Partner Network (APN) to build a custom solution for this infrastructure migration
Leverage AWS Professional Services to accelerate the infrastructure migration
The AWS Professional Services organization is a global team of experts that can help you realize your desired business outcomes when using the AWS Cloud. AWS Professional Services consultants can supplement your team with specialized skills and experience that can help you achieve quick results. Therefore, leveraging AWS Professional Services can accelerate the infrastructure migration for the startup.
Utilize AWS Partner Network (APN) to build a custom solution for this infrastructure migration
The AWS Partner Network (APN) is the global partner program for technology and consulting businesses that leverage Amazon Web Services to build solutions and services for customers. The startup can work with experts from APN to build a custom solution for this infrastructure migration.
Which of the following AWS services are always free to use (Select two)?
a. Amazon Simple Storage Service (Amazon S3)
b. Amazon DynamoDB
c. Amazon Elastic Compute Cloud (Amazon EC2)
d. AWS Identity and Access Management (AWS IAM)
e. AWS Auto Scaling
d. AWS Identity and Access Management (AWS IAM)
e. AWS Auto Scaling
AWS Identity and Access Management (AWS IAM) - AWS Identity and Access Management (AWS IAM) enables you to manage access to AWS services and resources securely. Using IAM, you can create and manage AWS users and groups, and use permissions to allow and deny their access to AWS resources. IAM is a feature of your AWS account offered at no additional charge.
AWS Auto Scaling - AWS Auto Scaling monitors your applications and automatically adjusts the capacity to maintain steady, predictable performance at the lowest possible cost. Using AWS Auto Scaling, it’s easy to setup application scaling for multiple resources across multiple services in minutes. AWS Auto Scaling is available at no additional charge. You pay only for the AWS resources needed to run your applications and Amazon CloudWatch monitoring fees.
An IT company is planning to migrate from an on-premises environment to AWS Cloud. Which of the following expense areas would result in cost savings when the company moves to AWS Cloud? (Select two)
a. Developer salary
b. Project manager salary
c. Data center physical security expenditure
d. Data center hardware infrastructure expenditure
e. SaaS application license fee
c. Data center physical security expenditure
d. Data center hardware infrastructure expenditure
The company does not need to spend on the computing hardware infrastructure and data center physical security. So these expense areas would result in cost savings. The expenditure on the SaaS application license fee, developer salary, and project manager salary would remain the same.
Exam Alert:
Please check out the following six advantages of Cloud Computing. You would certainly be asked questions on the advantages of Cloud Computing compared to a traditional on-premises setup:
https://docs.aws.amazon.com/whitepapers/latest/aws-overview/six-advantages-of-cloud-computing.html
A leading research firm needs to access information available in old patents and documents (such as PDFs, Text Files, Word documents, etc) present in its huge knowledge base. The firm is looking for a powerful search tool that can dig into these knowledge resources and return the most relevant files/documents. Which of the following is the correct service to address this requirement?
a. Amazon Personalize
b. Amazon Lex
c. Amazon Kendra
d. Amazon Comprehend
c. Amazon Kendra
Amazon Kendra is an intelligent search service powered by machine learning. Kendra reimagines enterprise search for your websites and applications so your employees and customers can easily find the content they are looking for, even when it’s scattered across multiple locations and content repositories within your organization.
Using Amazon Kendra, you can stop searching through troves of unstructured data and discover the right answers to your questions, when you need them. Amazon Kendra is a fully managed service, so there are no servers to provision, and no machine learning models to build, train, or deploy. Amazon Kendra supports unstructured and semi-structured data in .html, MS Office (.doc, .ppt), PDF, and text formats.
Unlike conventional search technology, natural language search capabilities return the answers you’re looking for quickly and accurately, no matter where the information lives within your organization.
Amazon Kendra’s deep learning models come pre-trained across 14 industry domains, allowing it to extract more accurate answers across a wide range of business use cases from the get-go. You can also fine-tune search results by manually adjusting the importance of data sources, authors, freshness, or using custom tags.
Incorrect options:
Amazon Personalize - Amazon Personalize enables developers to build applications with the same machine learning (ML) technology used by Amazon.com for real-time personalized recommendations. Amazon Personalize makes it easy for developers to build applications capable of delivering a wide array of personalization experiences, including specific product recommendations, personalized product re-ranking, and customized direct marketing.
Amazon Comprehend - Amazon Comprehend is a natural-language processing (NLP) service that uses machine learning to uncover information in unstructured data. Instead of combing through documents, the process is simplified and unseen information is easier to understand.
Amazon Kendra provides ML-powered search capabilities for all unstructured data customers store in AWS. Amazon Kendra offers easy-to-use native connectors to popular AWS repository types such as S3 and RDS databases. Other AI services such as Amazon Comprehend, Amazon Transcribe, and Amazon Comprehend Medical can be used to pre-process documents, generate searchable text, extract entities, and enrich their metadata for more specialized search experiences.
Amazon Lex - Amazon Lex is a service for building conversational interfaces into any application using voice and text. Amazon Lex provides the advanced deep learning functionalities of automatic speech recognition (ASR) for converting speech to text, and natural language understanding (NLU) to recognize the intent of the text, to enable you to build applications with highly engaging user experiences and lifelike conversational interactions.
A multi-national company has its business-critical data stored on a fleet of Amazon Elastic Compute Cloud (Amazon EC2) instances, in various countries, configured according to region-specific compliance rules. To demonstrate compliance, the company needs to submit historical configurations on a regular basis. Which AWS service is best suited for this requirement?
a. Amazon Macie
b. AWS Config
c. Amazon GuardDuty
d. AWS CloudTrail
b. AWS Config
AWS Config provides a detailed view of the configuration of AWS resources in your AWS account. This includes how the resources are related to one another and how they were configured in the past so that you can see how the configurations and relationships change over time. AWS Config is designed to help you oversee your application resources in the following scenarios: Resource Administration, Auditing and Compliance, Managing and Troubleshooting Configuration Changes, Security Analysis.
How AWS Config Works: https://aws.amazon.com/config/
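To make the "submit historical configurations" use case concrete, the sketch below retrieves a resource's configuration history via the real AWS Config API `get_resource_config_history` (real parameters: `resourceType`, `resourceId`, `earlierTime`, `laterTime`). The resource id and the `items_in_window` filtering helper are assumptions for illustration; the actual API call needs boto3 and AWS credentials, so only the offline helper is exercised here.

```python
# Sketch: pulling historical configurations from AWS Config for a
# periodic compliance report. resource_id is a placeholder.
from datetime import datetime, timezone

def items_in_window(config_items, start, end):
    """Keep only configuration snapshots captured inside the reporting window."""
    return [
        ci for ci in config_items
        if start <= ci["configurationItemCaptureTime"] <= end
    ]

def fetch_config_history(resource_id, start, end):
    """Fetch an EC2 instance's configuration history from AWS Config.

    Requires boto3 and AWS credentials; shown as a sketch only.
    """
    import boto3  # lazy import so the pure helper above runs offline
    config = boto3.client("config")
    resp = config.get_resource_config_history(
        resourceType="AWS::EC2::Instance",
        resourceId=resource_id,
        earlierTime=start,
        laterTime=end,
    )
    return resp["configurationItems"]

# Offline demonstration of the filtering helper with fabricated capture times:
history = [
    {"configurationItemCaptureTime": datetime(2023, 1, 5, tzinfo=timezone.utc)},
    {"configurationItemCaptureTime": datetime(2023, 3, 9, tzinfo=timezone.utc)},
]
q1 = items_in_window(
    history,
    datetime(2023, 1, 1, tzinfo=timezone.utc),
    datetime(2023, 2, 1, tzinfo=timezone.utc),
)
print(len(q1))  # → 1
```

This is exactly the capability the other options lack: CloudTrail records API calls, GuardDuty detects threats, and Macie finds sensitive data, but only AWS Config tracks what a resource's configuration looked like at points in the past.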
A startup wants to provision an EC2 instance for the lowest possible cost for a long-term duration but needs to make sure that the instance would never be interrupted. As a Cloud Practitioner, which of the following options would you recommend?
a. EC2 Dedicated Host
b. EC2 Spot Instance
c. EC2 On-Demand Instance
d. EC2 Reserved Instance (RI)
d. EC2 Reserved Instance (RI)
An EC2 Reserved Instance (RI) provides you with significant savings (up to 75%) on your Amazon EC2 costs compared to On-Demand Instance pricing. A Reserved Instance (RI) is not a physical instance, but rather a billing discount applied to the use of On-Demand Instances in your account. You can purchase a Reserved Instance (RI) for a one-year or three-year commitment, with the three-year commitment offering a bigger discount. A Reserved Instance (RI) cannot be interrupted, so this is the correct option.
On-Demand Instances cannot be interrupted either; however, they are not as cost-effective as Reserved Instances for a long-term workload, so this option is not correct.
EC2 Spot Instances are typically even more cost-effective than Reserved Instances; however, AWS can interrupt a Spot Instance when it needs the capacity back, so this option is not correct.
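A back-of-the-envelope comparison makes the trade-off above concrete. The hourly rates below are ASSUMED illustrative numbers, not real AWS prices; only the relative ordering (Spot < Reserved < On-Demand) reflects the pricing model described in the explanation.

```python
# Rough yearly cost comparison of the three purchase options.
# All rates are assumed for illustration -- check the AWS pricing pages
# for real numbers for your instance type and region.
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

on_demand_rate = 0.10  # $/hour, assumed
ri_rate = 0.06         # $/hour effective with a 1-year commitment, assumed
spot_rate = 0.03       # $/hour, assumed -- cheapest, but interruptible

def yearly_cost(rate):
    """Cost of running one instance continuously for a year at a flat rate."""
    return rate * HOURS_PER_YEAR

print(f"On-Demand: ${yearly_cost(on_demand_rate):,.2f}")  # no commitment, no interruption
print(f"Reserved:  ${yearly_cost(ri_rate):,.2f}")         # commitment, no interruption
print(f"Spot:      ${yearly_cost(spot_rate):,.2f}")       # cheapest, can be interrupted
```

For a long-running workload that must never be interrupted, the Reserved Instance is the cheapest of the options that rule out interruption, which is why it is the correct answer.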