Practice Test #1 - AWS Certified Cloud Practitioner - Results (Stephen) Flashcards
Which tool/service will help you access AWS services using programming language-specific APIs?
A. AWS Software Development Kit (SDK)
B. AWS Command Line Interface (CLI)
C. AWS Management Console
D. Language Specific Integrated Development Environments (IDE)
A. AWS Software Development Kit (SDK)
Explanation:
AWS Software Development Kit (SDK) - SDKs take the complexity out of coding by providing language-specific APIs for AWS services. For example, the AWS SDK for JavaScript simplifies the use of AWS services by providing a set of libraries that are consistent and familiar for JavaScript developers. It provides support for API lifecycle considerations such as credential management, retries, data marshaling, serialization, and deserialization. AWS SDKs are offered in several programming languages to make it simple for developers working with different programming and scripting languages. So, the AWS SDK can help with using AWS services from within an application through language-specific APIs.
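As an illustrative sketch (not part of the original answer), here is what a language-specific API looks like with the AWS SDK for Python (boto3); the bucket name is a placeholder and the calls assume AWS credentials are already configured:

import boto3  # AWS SDK for Python

# The SDK exposes AWS APIs as ordinary Python calls and handles credential
# management, retries, and (de)serialization for you.
s3 = boto3.client("s3")

# List the buckets in the account
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])

# Upload an object to a hypothetical bucket
s3.put_object(Bucket="example-bucket", Key="hello.txt", Body=b"Hello from the SDK")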
Incorrect options:
AWS Management Console - The AWS Management Console is a web application that comprises and refers to a broad collection of service consoles for managing Amazon Web Services. When you first sign in, you see the console home page. The home page provides access to each service console as well as an intuitive user interface for exploring AWS and getting helpful tips.
AWS Command Line Interface (CLI) - The AWS Command Line Interface (CLI) is a unified tool to manage your AWS services. With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts. The CLI is a command-line tool; it does not provide language-specific APIs.
Language-specific Integrated Development Environments (IDE) - An integrated development environment (IDE) provides a set of coding productivity tools such as a source code editor, a debugger, and build tools. Cloud9 IDE is an offering from AWS under IDEs.
A data analytics company is running a proprietary batch analytics application on AWS and wants to use a storage service which would be accessed by hundreds of EC2 instances simultaneously to append data to existing files. As a Cloud Practitioner, which AWS service would you suggest for this use-case?
A. Instance Store
B. EBS
C. S3
D. EFS
D. EFS
Explanation:
“EFS” - Amazon EFS is a file storage service for use with Amazon EC2. Amazon EFS provides a file system interface, file system access semantics, and concurrently-accessible storage for up to thousands of Amazon EC2 instances. Amazon EFS uses the Network File System protocol.
How EFS works: via - https://aws.amazon.com/efs/
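To make the append use-case concrete, here is a minimal sketch (assuming the EFS file system is already mounted at a placeholder path on each instance); appending to a shared file is plain file I/O, which object storage such as S3 does not offer:

# Hypothetical mount point for the EFS file system on every EC2 instance
EFS_MOUNT = "/mnt/efs"

def append_record(record: str) -> None:
    # Standard POSIX append; hundreds of instances can write to the same shared file system
    with open(f"{EFS_MOUNT}/batch-results.log", "a") as f:
        f.write(record + "\n")

append_record("job-42 finished")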
Incorrect options:
EBS - Amazon Elastic Block Store (EBS) is an easy to use, high-performance block storage service designed for use with Amazon Elastic Compute Cloud (EC2) for both throughput and transaction-intensive workloads at any scale. EBS volumes cannot be accessed simultaneously by multiple EC2 instances, so this option is incorrect.
Instance Store - An instance store provides temporary block-level storage for your instance. This storage is located on disks that are physically attached to the host computer. Instance Store volumes cannot be accessed simultaneously by multiple EC2 instances, so this option is incorrect.
S3 - Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance. S3 is object storage and it does not support file append operations, so this option is incorrect.
A financial services company wants to ensure that its AWS account activity meets the governance, compliance and auditing norms. As a Cloud Practitioner, which AWS service would you recommend for this use-case?
A. Trusted Advisor
B. CloudTrail
C. CloudWatch
D. Config
B. CloudTrail
Explanation:
You can use CloudTrail to log, monitor and retain account activity related to actions across your AWS infrastructure. CloudTrail provides an event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command-line tools, and other AWS services.
How CloudTrail Works: via - https://aws.amazon.com/cloudtrail/
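As a hedged sketch (not part of the original explanation), the CloudTrail event history can also be queried with the AWS SDK for Python; the event name filter below is just an example:

import boto3

cloudtrail = boto3.client("cloudtrail")

# Look up recent console sign-in events from the account's event history
events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "ConsoleLogin"}],
    MaxResults=10,
)
for event in events["Events"]:
    print(event["EventTime"], event["EventName"], event.get("Username"))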
Incorrect options:
Config - AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. Config continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations.
CloudWatch - Amazon CloudWatch is a monitoring and observability service built for DevOps engineers, developers, site reliability engineers (SREs), and IT managers. CloudWatch provides data and actionable insights to monitor applications, respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health. This is an excellent service for building Resilient systems.
Trusted Advisor - AWS Trusted Advisor is an online tool that provides you real-time guidance to help you provision your resources following AWS best practices on cost optimization, security, fault tolerance, service limits and performance improvement.
Exam Alert:
You may see use-cases asking you to select one of CloudWatch vs CloudTrail vs Config. Just remember this thumb rule -
Think resource performance monitoring, events, and alerts; think CloudWatch.
Think account-specific activity and audit; think CloudTrail.
Think resource-specific change history, audit, and compliance; think Config.
Which AWS Service can be used to mitigate a Distributed Denial of Service (DDoS) attack?
A. AWS Systems Manager
B. AWS KMS
C. Amazon CloudWatch
D. AWS Shield
D. AWS Shield
Explanation:
AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards applications running on AWS. AWS Shield provides always-on detection and automatic inline mitigations that minimize application downtime and latency, so there is no need to engage AWS Support to benefit from DDoS protection. There are two tiers of AWS Shield - Standard and Advanced.
All AWS customers benefit from the automatic protections of AWS Shield Standard, at no additional charge. AWS Shield Standard defends against most common, frequently occurring network and transport layer DDoS attacks that target your web site or applications. When you use AWS Shield Standard with Amazon CloudFront and Amazon Route 53, you receive comprehensive availability protection against all known infrastructure (Layer 3 and 4) attacks.
For higher levels of protection against attacks targeting your applications running on Amazon Elastic Compute Cloud (EC2), Elastic Load Balancing (ELB), Amazon CloudFront, AWS Global Accelerator and Amazon Route 53 resources, you can subscribe to AWS Shield Advanced. In addition to the network and transport layer protections that come with Standard, AWS Shield Advanced provides additional detection and mitigation against large and sophisticated DDoS attacks, near real-time visibility into attacks, and integration with AWS WAF, a web application firewall.
Incorrect options:
Amazon CloudWatch - Amazon CloudWatch is a monitoring and observability service built for DevOps engineers, developers, site reliability engineers (SREs), and IT managers. CloudWatch provides data and actionable insights to monitor applications, respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health. This is an excellent service for building Resilient systems.
AWS Systems Manager - AWS Systems Manager gives you visibility and control of your infrastructure on AWS. Systems Manager provides a unified user interface so you can view operational data from multiple AWS services and allows you to automate operational tasks across your AWS resources. With Systems Manager, you can group resources, like Amazon EC2 instances, Amazon S3 buckets, or Amazon RDS instances, by application, view operational data for monitoring and troubleshooting, and take action on your groups of resources.
AWS KMS - AWS Key Management Service (KMS) makes it easy for you to create and manage cryptographic keys and control their use across a wide range of AWS services and in your applications. AWS KMS is a secure and resilient service that uses hardware security modules that have been validated under FIPS 140-2, or are in the process of being validated, to protect your keys.
An e-commerce company has deployed an RDS database in a single Availability Zone. The engineering team wants to ensure that in case of an AZ outage, the database should continue working on the same endpoint without any manual administrative intervention. Which of the following solutions can address this use-case?
A. Configure the database in RDS Multi-AZ deployment with automatic failover to standby
B. Deploy the database via Elastic Beanstalk
C. Provision the database via CloudFormation
D. Configure the database in RDS read replica mode with automatic failover to the standby
A. Configure the database in RDS Multi-AZ deployment with automatic failover to standby
Explanation:
When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). In case of an infrastructure failure, Amazon RDS performs an automatic failover to the standby (or to a read replica in the case of Amazon Aurora), so that you can resume database operations as soon as the failover is complete. Since the endpoint for your DB Instance remains the same after a failover, your application can resume database operation without the need for manual administrative intervention.
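A minimal sketch of enabling this with the AWS SDK for Python (boto3); the identifier, engine, and credentials below are placeholders, not values from the question:

import boto3

rds = boto3.client("rds")

# MultiAZ=True makes RDS create a synchronous standby in another AZ;
# failover is automatic and the database endpoint stays the same.
rds.create_db_instance(
    DBInstanceIdentifier="orders-db",        # placeholder name
    Engine="mysql",
    DBInstanceClass="db.m5.large",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="change-me-please",   # placeholder secret
    MultiAZ=True,
)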
Incorrect options:
Deploy the database via Elastic Beanstalk - You cannot deploy only a database via Elastic Beanstalk, as it's meant for automatic application deployment: you upload your code, and Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, and auto-scaling to application health monitoring. Hence this option is incorrect.
Configure the database in RDS read replica mode with automatic failover to the standby - For RDS, Read replicas allow you to create read-only copies that are synchronized with your master database. There is no standby available while using read replicas. In case of infrastructure failure, you have to manually promote the read replica to be its own standalone DB Instance, which means that the database endpoint would change. Therefore, this option is incorrect.
Provision the database via CloudFormation - You can certainly provision the database via CloudFormation, however, it does not provide any automatic recovery in case of an AZ outage.
A company wants to identify the optimal AWS resource configuration for its workloads so that the company can reduce costs and increase workload performance. Which of the following services can be used to meet this requirement?
A. AWS Cost Explorer
B. AWS Budgets
C. AWS Compute Optimizer
D. AWS Systems Manager
C. AWS Compute Optimizer
Explanation:
Correct option: AWS Compute Optimizer - AWS Compute Optimizer recommends optimal AWS resources for your workloads to reduce costs and improve performance by using machine learning to analyze historical utilization metrics. Over-provisioning resources can lead to unnecessary infrastructure costs, and under-provisioning resources can lead to poor application performance. Compute Optimizer helps you choose optimal configurations for three types of AWS resources: Amazon EC2 instances, Amazon EBS volumes, and AWS Lambda functions, based on your utilization data.
Compute Optimizer recommends up to 3 options from 140+ EC2 instance types, as well as a wide range of EBS volume and Lambda function configuration options, to right-size your workloads. Compute Optimizer also projects what the CPU utilization, memory utilization, and run time of your workload would have been on recommended AWS resource options. This helps you understand how your workload would have performed on the recommended options before implementing the recommendations.
How Compute Optimizer works: via - https://aws.amazon.com/compute-optimizer/
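For illustration only (a sketch, assuming Compute Optimizer is already opted in for the account), the recommendations can be pulled programmatically with the AWS SDK for Python:

import boto3

optimizer = boto3.client("compute-optimizer")

# Fetch EC2 right-sizing recommendations for the account
response = optimizer.get_ec2_instance_recommendations()
for rec in response["instanceRecommendations"]:
    # "finding" indicates whether the instance is over-provisioned, under-provisioned, or optimized
    print(rec["instanceArn"], rec["finding"])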
Incorrect options:
AWS Systems Manager - AWS Systems Manager is the operations hub for AWS. Systems Manager provides a unified user interface so you can track and resolve operational issues across your AWS applications and resources from a central place. With Systems Manager, you can automate operational tasks for Amazon EC2 instances or Amazon RDS instances. You can also group resources by application, view operational data for monitoring and troubleshooting, implement pre-approved change workflows, and audit operational changes for your groups of resources. Systems Manager simplifies resource and application management, shortens the time to detect and resolve operational problems, and makes it easier to operate and manage your infrastructure at scale. Systems Manager cannot be used to identify the optimal resource configuration for workloads running on AWS.
AWS Budgets - AWS Budgets allows you to set custom budgets to track your cost and usage from the simplest to the most complex use cases. With AWS Budgets, you can choose to be alerted by email or SNS notification when actual or forecasted cost and usage exceed your budget threshold, or when your actual RI and Savings Plans’ utilization or coverage drops below your desired threshold. With AWS Budget Actions, you can also configure specific actions to respond to cost and usage status in your accounts, so that if your cost or usage exceeds or is forecasted to exceed your threshold, actions can be executed automatically or with your approval to reduce unintentional over-spending.
AWS Cost Explorer - AWS Cost Explorer has an easy-to-use interface that lets you visualize, understand, and manage your AWS costs and usage over time. Cost Explorer Resource Rightsizing Recommendations and Compute Optimizer use the same recommendation engine. The Compute Optimizer recommendation engine delivers recommendations to help customers identify optimal EC2 instance types for their workloads. The Cost Explorer console and API surface a subset of these recommendations that may lead to cost savings, and augments them with customer-specific cost and savings information (e.g. billing information, available credits, RI, and Savings Plans) to help Cost Management owners quickly identify savings opportunities through infrastructure rightsizing. Compute Optimizer console and its API delivers all recommendations regardless of the cost implications.
Which of the following is a benefit of using AWS managed services such as Amazon RDS?
A. The performance of an AWS managed RDS instance is better than a customer-managed database instance
B. The customer needs to manage database backups
C. There is no need to optimize database instance type and size
D. The customer needs to patch the underlying OS
A. The performance of an AWS managed RDS instance is better than a customer-managed database instance
Explanation:
The performance of an AWS managed RDS instance is better than a customer-managed database instance
Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while automating time-consuming administration tasks such as hardware provisioning, database setup, patching, and backups.
Amazon RDS provides a selection of instance types optimized to fit different relational database use cases. Instance types comprise varying combinations of CPU, memory, storage, and networking capacity and give you the flexibility to choose the appropriate mix of resources for your database to optimize the database for your use-case by selecting the correct instance type and size.
Since RDS instances are optimized for memory, performance, or I/O, the performance of an AWS managed RDS instance is better than that of a customer-managed database instance.
Incorrect options:
The customer needs to patch the underlying OS
The customer needs to manage database backups
There is no need to optimize database instance type and size
These three options contradict the details provided earlier in the explanation, so these options are incorrect.
A company wants to improve the resiliency of its flagship application so it wants to move from its traditional database system to a managed AWS database service to support active-active configuration in both the East and West US AWS regions. The active-active configuration with cross-region support is the prime criteria for any database solution that the company considers.
Which AWS database service is the right fit for this requirement?
A. Amazon DynamoDB with DynamoDB Accelerator
B. Amazon Aurora with multi-master clusters
C. Amazon DynamoDB with global tables
D. Amazon Relational Database Service (Amazon RDS) for MySQL
C. Amazon DynamoDB with global tables
Explanation:
Amazon DynamoDB is a fully managed, serverless, key-value NoSQL database designed to run high-performance applications at any scale. DynamoDB offers built-in security, continuous backups, automated multi-region replication, in-memory caching, and data export tools.
DynamoDB global tables replicate data automatically across your choice of AWS Regions and automatically scale capacity to accommodate your workloads. With global tables, your globally distributed applications can access data locally in the selected regions to get single-digit millisecond read and write performance. DynamoDB offers active-active cross-region support that is needed for the company.
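A hedged sketch using the AWS SDK for Python and the original global tables API version; it assumes identical tables named "orders" with DynamoDB Streams enabled already exist in both placeholder Regions:

import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Link the per-Region tables into one global table with active-active replication
dynamodb.create_global_table(
    GlobalTableName="orders",
    ReplicationGroup=[
        {"RegionName": "us-east-1"},
        {"RegionName": "us-west-2"},
    ],
)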
Incorrect options:
Amazon DynamoDB with DynamoDB Accelerator - DynamoDB Accelerator (DAX) is an in-memory cache that delivers fast read performance for your tables at scale by enabling you to use a fully managed in-memory cache. Using DAX, you can improve the read performance of your DynamoDB tables by up to 10 times—taking the time required for reads from milliseconds to microseconds, even at millions of requests per second. DAX does not offer active-active cross-Region configuration.
Amazon Aurora with multi-master cluster - Amazon Aurora (Aurora) is a fully managed relational database engine that’s compatible with MySQL and PostgreSQL. With some workloads, Aurora can deliver up to five times the throughput of MySQL and up to three times the throughput of PostgreSQL without requiring changes to most of your existing applications. In a multi-master cluster, all DB instances have read/write capability. Currently, all DB instances in a multi-master cluster must be in the same AWS Region. You can’t enable cross-Region replicas from multi-master clusters.
Amazon Relational Database Service (Amazon RDS) for MySQL - Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while automating time-consuming administration tasks such as hardware provisioning, database setup, patching, and backups. It frees you to focus on your applications so you can give them the fast performance, high availability, security, and compatibility they need. RDS does not support an active-active configuration with cross-region support.
Which AWS Route 53 routing policy would you use to route traffic to multiple resources and also choose how much traffic is routed to each resource?
A. Latency routing policy
B. Failover routing policy
C. Weighted Routing Policy
D. Simple Routing Policy
C. Weighted Routing Policy
Explanation:
Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service. It is designed to give developers and businesses an extremely reliable and cost-effective way to route end users to Internet applications by translating names like www.example.com into the numeric IP addresses like 192.0.2.1 that computers use to connect to each other.
Weighted routing lets you associate multiple resources with a single domain name (example.com) or subdomain name (acme.example.com) and choose how much traffic is routed to each resource. This can be useful for a variety of purposes, including load balancing and testing new versions of software. To configure weighted routing, you create records that have the same name and type for each of your resources. You assign each record a relative weight that corresponds with how much traffic you want to send to each resource. Amazon Route 53 sends traffic to a resource based on the weight that you assign to the record as a proportion of the total weight for all records in the group.
Route 53 Routing Policy Overview: via - https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html
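As a sketch (the hosted zone ID, record name, and IP addresses are placeholders), two weighted records that split traffic roughly 70/30 could be created with the AWS SDK for Python like this:

import boto3

route53 = boto3.client("route53")

def weighted_record(identifier, ip, weight):
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com",
            "Type": "A",
            "SetIdentifier": identifier,  # distinguishes records sharing the same name and type
            "Weight": weight,             # traffic share = weight / sum of all weights
            "TTL": 60,
            "ResourceRecords": [{"Value": ip}],
        },
    }

route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",       # placeholder hosted zone ID
    ChangeBatch={"Changes": [
        weighted_record("primary", "192.0.2.10", 70),
        weighted_record("canary", "192.0.2.20", 30),
    ]},
)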
Incorrect options:
Failover routing policy - This routing policy is used when you want to configure active-passive failover.
Simple routing policy - With simple routing, you typically route traffic to a single resource, for example, to a web server for your website.
Latency routing policy - This routing policy is used when you have resources in multiple AWS Regions and you want to route traffic to the region that provides the best latency.
Which of the following is an AWS database service?
A. Database Migration Service
B. Storage Gateway
C. Redshift
D. Glue
C. Redshift
Explanation:
Redshift - Amazon Redshift is a fully-managed petabyte-scale cloud-based data warehouse product designed for large scale data set storage and analysis.
Incorrect options:
Glue - AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for analytics.
Storage Gateway - AWS Storage Gateway is a hybrid cloud storage service that connects your existing on-premises environments with the AWS Cloud. Customers use Storage Gateway to simplify storage management and reduce costs for key hybrid cloud storage use cases.
Database Migration Service - AWS Database Migration Service helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. The AWS Database Migration Service can migrate your data to and from the most widely used commercial and open-source databases.
An IT company is planning to migrate from an on-premises environment to AWS Cloud. Which of the following expense areas would result in cost savings when the company moves to AWS Cloud? (Select two)
A. Developer salary
B. Data center physical security expenditure
C. Computing hardware infrastructure expenditure
D. Project manager salary
E. SaaS application license fee
B. Data center physical security expenditure
C. Computing hardware infrastructure expenditure
Explanation:
Computing hardware infrastructure expenditure
Data center physical security expenditure
The company does not need to spend on the computing hardware infrastructure and data center physical security. So these expense areas would result in cost savings. The expenditure on the SaaS application license fee, developer salary, and project manager salary would remain the same.
Exam Alert:
Please check out the following six advantages of Cloud Computing. You would certainly be asked questions on the advantages of Cloud Computing compared to a traditional on-premises setup: via - https://docs.aws.amazon.com/whitepapers/latest/aws-overview/six-advantages-of-cloud-computing.html
Incorrect options:
SaaS application license fee
Developer salary
Project manager salary
As explained earlier, the expenditure on the SaaS application license fee, developer salary, and project manager salary would remain the same, so these options are incorrect.
An intern at an IT company provisioned a Linux based On-demand EC2 instance with per-second billing but terminated it within 30 seconds as he wanted to provision another instance type. What is the duration for which the instance would be charged?
A. 300 seconds
B. 60 seconds
C. 30 seconds
D. 600 seconds
B. 60 seconds
Explanation:
60 seconds - There is a one-minute minimum charge for Linux based EC2 instances, so this is the correct option.
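A tiny worked example of the billing rule (the hourly rate below is made up for illustration):

# Linux On-Demand instances bill per second with a 60-second minimum
hourly_rate = 0.10                      # example rate in USD, not a real price
run_seconds = 30                        # the instance was terminated after 30 seconds
billed_seconds = max(run_seconds, 60)   # the one-minute minimum applies
cost = hourly_rate * billed_seconds / 3600
print(billed_seconds, round(cost, 6))   # 60 seconds billed, roughly $0.001667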
Incorrect options:
30 seconds
300 seconds
600 seconds
Which of the following is an INCORRECT statement about Scaling, a design principle of the Reliability pillar of the AWS Well-Architected Framework?
A. Fault tolerance is achieved by Vertical Scaling
B. Vertical Scaling implies you scale by adding more power (CPU, RAM) to your existing machine/node
C. Fault tolerance is achieved by Horizontal scaling
D. Horizontal Scaling implies you scale by adding more instances to your existing pool of resources
A. Fault tolerance is achieved by Vertical Scaling
Explanation:
A “vertically scalable” system is constrained to running its processes on only one computer. In such systems, the only way to increase performance is to add more resources to that one computer in the form of faster (or more) CPUs, memory, or storage. Fault tolerance is not possible on vertically scalable systems since a single instance is prone to failure.
Incorrect options:
Vertical Scaling implies you scale by adding more power (CPU, RAM) to your existing machine/node - A “vertically scalable” system runs on a single instance. Adding power is only possible through the addition of resources in the form of CPU, RAM, or storage to enhance performance.
Horizontal Scaling implies you scale by adding more instances to your existing pool of resources - A “horizontally scalable” system is one that can increase capacity by adding more computers to the system. Horizontally scalable systems are oftentimes able to outperform vertically scalable systems by enabling parallel execution of workloads and distributing those across many different computers.
Fault tolerance is achieved by Horizontal scaling - Horizontal scaling adds more instances to its existing pool to scale. This implies, there is no single point of failure. If an instance is down, the workload is taken up by other healthy instances. Distributed systems are an example of horizontal scaling.
Which of the following are correct statements regarding the AWS Global Infrastructure? (Select two)
A. Each AWS Region consists of two or more Edge Locations
B. Each Availability Zone (AZ) consists of two or more discrete data centers
C. Each AWS Region consists of a minimum of two Availability Zones
D. Each Availability Zone (AZ) consists of one or more discrete data centers
E. Each AWS Region consists of a minimum of three Availability Zones
D. Each Availability Zone (AZ) consists of one or more discrete data centers
E. Each AWS Region consists of a minimum of three Availability Zones
Explanation:
Each AWS Region consists of a minimum of three Availability Zones
Each Availability Zone (AZ) consists of one or more discrete data centers
AWS has the concept of a Region, which is a physical location around the world where AWS clusters its data centers. AWS calls each group of logical data centers an Availability Zone. Each AWS Region consists of a minimum of three, isolated, and physically separate AZs within a geographic area. Each AZ has independent power, cooling, and physical security and is connected via redundant, ultra-low-latency networks.
An Availability Zone (AZ) is one or more discrete data centers with redundant power, networking, and connectivity in an AWS Region. All AZs in an AWS Region are interconnected with high-bandwidth, low-latency networking, over fully redundant, dedicated metro fiber providing high-throughput, low-latency networking between AZs.
AWS Regions and Availability Zones Overview: via - https://aws.amazon.com/about-aws/global-infrastructure/regions_az/
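For reference, a quick sketch with the AWS SDK for Python (assuming default credentials and Region) that lists the Availability Zones of the current Region:

import boto3

ec2 = boto3.client("ec2")  # uses the Region from your AWS configuration

# Each Region exposes its Availability Zones; every AZ maps to one or more data centers
for az in ec2.describe_availability_zones()["AvailabilityZones"]:
    print(az["ZoneName"], az["State"])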
Incorrect options:
Each AWS Region consists of a minimum of two Availability Zones
Each Availability Zone (AZ) consists of two or more discrete data centers
Each AWS Region consists of two or more Edge Locations
These three options contradict the details provided earlier in the explanation, so these options are incorrect.
Which of the following S3 storage classes takes the most time to retrieve data (also known as first byte latency)?
A. S3 Glacier
B. S3 Glacier Deep Archive
C. S3 Standard
D. S3 Intelligent Tiering
B. S3 Glacier Deep Archive
Explanation:
“S3 Glacier Deep Archive” - S3 Glacier Deep Archive is Amazon S3’s lowest-cost storage class and supports long-term retention and digital preservation for data that may be accessed once or twice in a year. It is designed for customers — particularly those in highly-regulated industries, such as the Financial Services, Healthcare, and Public Sectors — that retain data sets for 7-10 years or longer to meet regulatory compliance requirements. S3 Glacier Deep Archive can also be used for backup and disaster recovery use cases. It has a retrieval time (first byte latency) of 12 to 48 hours.
Please review this illustration for S3 Storage Classes data retrieval times. You don’t need to memorize the actual numbers, just remember that S3 Glacier Deep Archive takes the most time to retrieve data: via - https://aws.amazon.com/s3/storage-classes/
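As an aside (a sketch with a placeholder bucket and key), objects in the archive classes must first be restored before they can be read, which is where the hours-long first byte latency shows up:

import boto3

s3 = boto3.client("s3")

# Ask S3 to make an archived object available for 2 days; the retrieval itself can
# take hours for Glacier and up to 48 hours for Glacier Deep Archive (Bulk tier).
s3.restore_object(
    Bucket="example-archive-bucket",   # placeholder
    Key="reports/2015/annual.pdf",     # placeholder
    RestoreRequest={"Days": 2, "GlacierJobParameters": {"Tier": "Bulk"}},
)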
Incorrect options:
S3 Standard - S3 Standard offers high durability, availability, and performance object storage for frequently accessed data. S3 Standard has a retrieval time (first byte latency) of milliseconds.
S3 Intelligent-Tiering - The S3 Intelligent-Tiering storage class is designed to optimize costs by automatically moving data to the most cost-effective access tier, without performance impact or operational overhead. It works by storing objects in two access tiers: one tier that is optimized for frequent access and another lower-cost tier that is optimized for infrequent access. S3 Intelligent-Tiering has a retrieval time (first byte latency) of milliseconds.
S3 Glacier - Amazon S3 Glacier is a secure, durable, and extremely low-cost Amazon S3 cloud storage class for data archiving and long-term backup. It is designed to deliver 99.999999999% durability, and provide comprehensive security and compliance capabilities that can help meet even the most stringent regulatory requirements. S3 Glacier has a retrieval time (first byte latency) of minutes or a few hours.
A startup wants to migrate its data and applications from the on-premises data center to AWS Cloud. Which of the following options can be used by the startup to help with this migration? (Select two)
A. Raise a support ticket with AWS Support for further assistance
B. Utilize AWS Partner Network (APN) to build a custom solution of this infrastructure migration
C. Use AWS Trusted Advisor to automate the infrastructure migration
D. Consult moderators on AWS Developer Forums
E. Leverage AWS Professional Services to accelerate the infrastructure migration
B. Utilize AWS Partner Network (APN) to build a custom solution of this infrastructure migration
E. Leverage AWS Professional Services to accelerate the infrastructure migration
Explanation:
Leverage AWS Professional Services to accelerate the infrastructure migration
The AWS Professional Services organization is a global team of experts that can help you realize your desired business outcomes when using the AWS Cloud. AWS Professional Services consultants can supplement your team with specialized skills and experience that can help you achieve quick results. Therefore, leveraging AWS Professional Services can accelerate the infrastructure migration for the startup.
Utilize AWS Partner Network (APN) to build a custom solution for this infrastructure migration
The AWS Partner Network (APN) is the global partner program for technology and consulting businesses that leverage Amazon Web Services to build solutions and services for customers. The startup can work with experts from APN to build a custom solution for this infrastructure migration.
Incorrect options:
Raise a support ticket with AWS Support for further assistance - AWS Support cannot help with complex infrastructure migration of this nature. Hence this option is incorrect.
Consult moderators on AWS Developer Forums - This is a made-up option and has been added as a distractor.
Use AWS Trusted Advisor to automate the infrastructure migration - AWS Trusted Advisor is an online tool that provides you real-time guidance to help you provision your resources following AWS best practices on cost optimization, security, fault tolerance, service limits, and performance improvement. Trusted Advisor cannot automate the infrastructure migration.
A silicon valley based healthcare startup stores anonymized patient health data on Amazon S3. The CTO further wants to ensure that any sensitive data on S3 is discovered and identified to prevent any sensitive data leaks. As a Cloud Practitioner, which AWS service would you recommend addressing this use-case?
A. Amazon Macie
B. Amazon Polly
C. AWS Glue
D. AWS Secrets Manager
A. Amazon Macie
Explanation:
Amazon Macie - Amazon Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and protect your sensitive data in AWS. Macie automatically provides an inventory of Amazon S3 buckets including a list of unencrypted buckets, publicly accessible buckets, and buckets shared with AWS accounts outside those you have defined in AWS Organizations. Then, Macie applies machine learning and pattern matching techniques to the buckets you select to identify and alert you to sensitive data, such as personally identifiable information (PII).
How Macie Works: via - https://aws.amazon.com/macie/
Incorrect options:
AWS Glue - AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for analytics. AWS Glue job is meant to be used for batch ETL data processing. It cannot be used to discover and protect your sensitive data in AWS.
Amazon Polly - Amazon Polly is a service that turns text into lifelike speech, allowing you to create applications that talk, and build entirely new categories of speech-enabled products. Polly’s Text-to-Speech (TTS) service uses advanced deep learning technologies to synthesize natural sounding human speech. It cannot be used to discover and protect your sensitive data in AWS.
AWS Secrets Manager - AWS Secrets Manager helps you protect secrets needed to access your applications, services, and IT resources. The service enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. Users and applications retrieve secrets with a call to Secrets Manager APIs, eliminating the need to hardcode sensitive information in plain text. It cannot be used to discover and protect your sensitive data in AWS.
A cyber forensics team has detected that AWS owned IP-addresses are being used to carry out malicious attacks. As this constitutes prohibited use of AWS services, which of the following is the correct solution to address this issue?
A. Write an email to Jeff Bezos, the CEO of Amazon, with the details of this incident
B. Contact AWS Support
C. Contact AWS Developer Forum Moderators
D. Contact AWS Abuse Team
D. Contact AWS Abuse Team
Explanation:
The AWS Abuse team can assist you when AWS resources are used to engage in abusive behavior.
Please see details of the various scenarios that the AWS Abuse team can address: via - https://aws.amazon.com/premiumsupport/knowledge-center/report-aws-abuse/
Incorrect options:
Contact AWS Support - You need to contact the AWS Abuse team for prohibited use of AWS services.
Contact AWS Developer Forum moderators - You need to contact the AWS Abuse team for prohibited use of AWS services.
Write an email to Jeff Bezos, the CEO of Amazon, with the details of the incident - This has been added as a distractor. For the record, please let us know if you do get a reply from Mr. Bezos.
Compared to On-Demand prices, what is the highest possible discount offered for Spot Instances?
A. 75%
B. 50%
C. 90%
D. 10%
C. 90%
Explanation:
Amazon EC2 Spot Instances let you take advantage of unused EC2 capacity in the AWS cloud. Spot Instances are available at up to a 90% discount compared to On-Demand prices. You can use Spot Instances for various stateless, fault-tolerant, or flexible applications such as big data, containerized workloads, CI/CD, web servers, high-performance computing (HPC), and other test & development workloads.
EC2 Pricing Options Overview: via - https://aws.amazon.com/ec2/pricing/
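A quick worked example of what an up-to-90% discount means (the On-Demand rate below is made up):

on_demand_hourly = 0.10                  # example On-Demand rate in USD
max_discount = 0.90                      # Spot can be up to 90% cheaper
spot_hourly_floor = on_demand_hourly * (1 - max_discount)
print(round(spot_hourly_floor, 4))       # 0.01 USD per hour at the maximum discount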
Incorrect options:
75%
10%
50%
These three options contradict the details provided earlier in the explanation, so these options are incorrect.
Which of the following statements are CORRECT regarding the AWS VPC service? (Select two)
A. A Security Group can have allow rules only
B. A NACL can have allow rules only
C. A NAT Instance is managed by AWS
D. A NAT Gateway is managed by AWS
E. A Security Group can have both allow and deny rules
A. A Security Group can have allow rules only
D. A NAT Gateway is managed by AWS
Explanation:
A Security Group can have allow rules only
A NAT Gateway is managed by AWS
A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. Security groups act at the instance level, not at the subnet level. You can specify allow rules, but not deny rules. You can specify separate rules for inbound and outbound traffic.
Security Group Overview: via - https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html
A Network Access Control List (NACL) is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets (i.e. it works at subnet level). A network ACL has separate inbound and outbound rules, and each rule can either allow or deny traffic.
Network Access Control List (NACL) Overview: via - https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html
You can use a network address translation (NAT) gateway or a NAT Instance to enable instances in a private subnet to connect to the internet or other AWS services, but prevent the internet from initiating a connection with those instances. NAT Gateway is managed by AWS but NAT Instance is managed by you.
Please see this comparison table for differences between NAT Gateway and NAT Instance: via - https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-comparison.html
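A short sketch with the AWS SDK for Python (security group ID, NACL ID, and CIDR ranges are placeholders) showing the difference in practice: security group rules can only allow traffic, while a network ACL entry is explicitly an allow or a deny:

import boto3

ec2 = boto3.client("ec2")

# Security groups: only allow rules can be added (here, allow HTTPS in)
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",        # placeholder security group ID
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# Network ACLs: each numbered rule carries an explicit allow or deny action
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",  # placeholder NACL ID
    RuleNumber=100,
    Protocol="6",                          # TCP
    RuleAction="deny",                     # deny inbound SSH from this range
    Egress=False,
    CidrBlock="198.51.100.0/24",
    PortRange={"From": 22, "To": 22},
)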
Incorrect options:
A Security Group can have both allow and deny rules
A NAT Instance is managed by AWS
A NACL can have allow rules only
These three options contradict the details provided earlier in the explanation, so these options are incorrect.
Which of the following AWS Support plans provide access to guidance, configuration, and troubleshooting of AWS interoperability with third-party software? (Select two)
A. Enterprise
B. Corporate
C. Basic
D. Business
E. Developer
A. Enterprise
D. Business
Explanation:
Enterprise - AWS Enterprise Support provides customers with concierge-like service where the main focus is helping the customer achieve their outcomes and find success in the cloud. With Enterprise Support, you get 24x7 technical support from high-quality engineers, tools and technology to automatically manage the health of your environment, consultative architectural guidance delivered in the context of your applications and use-cases, and a designated Technical Account Manager (TAM) to coordinate access to proactive/preventative programs and AWS subject matter experts. You get access to guidance, configuration, and troubleshooting of AWS interoperability with many common operating systems, platforms, and application stack components.
Business - AWS recommends Business Support if you have production workloads on AWS and want 24x7 phone, email and chat access to technical support and architectural guidance in the context of your specific use-cases. You get full access to AWS Trusted Advisor Best Practice Checks. You get access to guidance, configuration, and troubleshooting of AWS interoperability with many common operating systems, platforms, and application stack components.
Exam Alert:
Please review the differences between the Developer, Business, and Enterprise support plans as you can expect at least a couple of questions on the exam:
via - https://aws.amazon.com/premiumsupport/plans/
Incorrect options:
Basic - The basic plan only provides access to the following:
Customer Service & Communities - 24x7 access to customer service, documentation, whitepapers, and support forums.
AWS Trusted Advisor - Access to the 7 core Trusted Advisor checks and guidance to provision your resources following best practices to increase performance and improve security.
AWS Personal Health Dashboard - A personalized view of the health of AWS services, and alerts when your resources are impacted.
Developer - AWS recommends Developer Support plan if you are testing or doing early development on AWS and want the ability to get email-based technical support during business hours. This plan also supports general guidance on how services can be used for various use cases, workloads, or applications. You do not get access to Infrastructure Event Management with this plan.
Neither of these plans provides access to guidance, configuration, and troubleshooting of AWS interoperability with third-party software.
Corporate - This is a made-up option and has been added as a distractor.
Which of the following is the MOST cost-effective option to purchase an EC2 Reserved Instance?
A. No upfront payment option with standard 3-year term
B. All upfront payment option with standard 1-year term
C. No upfront payment option with standard 1-year term
D. Partial upfront payment option with standard 3-year term
D. Partial upfront payment option with standard 3-year term
Explanation:
Partial upfront payment option with standard 3-year term
You can use Amazon EC2 Reserved Instances to reserve capacity and receive a discount on your instance usage compared to running On-Demand instances. The discounted usage price is reserved for the duration of your contract, allowing you to predict compute costs over the term of the Reserved Instance.
Please review this pricing comparison for EC2 Reserved Instances: via - https://d0.awsstatic.com/whitepapers/aws_pricing_overview.pdf
So the percentage savings for each option is as follows:
“No upfront payment option with the standard 1-year term” - 36%
“All upfront payment option with the standard 1-year term” - 40%
“No upfront payment option with the standard 3-year term” - 56%
“Partial upfront payment option with the standard 3-year term” - 59%
Exam Alert:
For the exam, there is no need to memorize these savings numbers. All you need to remember is that a 3-year term would always be more cost-effective than a 1-year term. Then within a term, “all upfront” is better than “partial upfront”, which in turn is better than “no upfront” from a cost savings perspective.
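A small worked comparison using the percentages above (the On-Demand rate is made up; only the relative ordering matters):

on_demand_hourly = 0.10   # example rate in USD, not a real price

savings = {
    "No upfront, 1-year": 0.36,
    "All upfront, 1-year": 0.40,
    "No upfront, 3-year": 0.56,
    "Partial upfront, 3-year": 0.59,
}

# Effective hourly cost after the Reserved Instance discount
for option, pct in savings.items():
    print(f"{option}: ~${on_demand_hourly * (1 - pct):.3f}/hour")
# The 3-year partial upfront option comes out cheapest, matching the answer above.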
Incorrect options:
No upfront payment option with standard 3-year term
All upfront payment option with standard 1-year term
No upfront payment option with standard 1-year term
These three options contradict the details provided earlier in the explanation, so these options are incorrect.
Which of the following are the advantages of Cloud Computing? (Select three)
A. Trade variable expense for capital expense
B. Benefit from massive economies of scale
C. Spend money on building and maintaining data centers
D. Allocate a few months of planning for your infrastructure capacity needs
E. Go global in minutes and deploy applications in multiple regions around the world with just a few clicks
F. Trade capital expense for variable expense
B. Benefit from massive economies of scale
E. Go global in minutes and deploy applications in multiple regions around the world with just a few clicks
F. Trade capital expense for variable expense
Explanation:
Benefit from massive economies of scale
Trade capital expense for variable expense
Go global in minutes and deploy applications in multiple regions around the world with just a few clicks
Exam Alert:
Please check out the following six advantages of Cloud Computing. You would certainly be asked questions on the advantages of Cloud Computing compared to a traditional on-premises setup: via - https://docs.aws.amazon.com/whitepapers/latest/aws-overview/six-advantages-of-cloud-computing.html
Incorrect options:
Spend money on building and maintaining data centers - With Cloud Computing, you can focus on projects that differentiate your business, not the infrastructure. You don’t need to spend money on building and maintaining data centers as the Cloud provider takes care of that.
Allocate a few months of planning for your infrastructure capacity needs - With Cloud Computing, you don’t need to guess on your infrastructure capacity needs. You can access as much or as little capacity as you need, and scale up and down as required with only a few minutes’ notice. There is no need to allocate a few months of infrastructure planning.
Trade variable expense for capital expense - With Cloud Computing, you actually trade capital expense for variable expense.
Which of the following AWS Support plans provides access to online training with self-paced labs?
A. Business
B. Enterprise
C. Developer
D. Basic
B. Enterprise
Explanation:
Enterprise
AWS offers three different support plans to cater to each of its customers - Developer, Business, and Enterprise Support plans. A basic support plan is included for all AWS customers.
AWS Enterprise Support provides customers with concierge-like service where the main focus is on helping the customer achieve their outcomes and find success in the cloud. With Enterprise Support, you get access to online training with self-paced labs, 24x7 technical support from high-quality engineers, tools and technology to automatically manage the health of your environment, consultative architectural guidance, a designated Technical Account Manager (TAM) to coordinate access to proactive/preventative programs and AWS subject matter experts.
AWS Enterprise Support Plan Offerings: via - https://aws.amazon.com/premiumsupport/plans/enterprise/
Incorrect options:
Developer - AWS recommends Developer Support if you are testing or doing early development on AWS and want the ability to get technical support during business hours as well as general architectural guidance as you build and test.
Business - AWS recommends Business Support if you have production workloads on AWS and want 24x7 access to technical support and architectural guidance in the context of your specific use-cases.
Basic - A basic support plan is included for all AWS customers.
None of these three support plans provide access to online training with self-paced labs.
A company uses reserved EC2 instances across multiple units with each unit having its own AWS account. However, some of the units under-utilize their reserved instances while other units need more reserved instances. As a Cloud Practitioner, which of the following would you recommend as the most cost-optimal solution?
A. Use AWS Systems Manager to manage AWS accounts of all units and then share the reserved EC2 instances amongst all units
B. Use AWS Organizations to manage AWS accounts of all units and then share the reserved EC2 instances amongst all units
C. Use AWS Trusted Advisor to manage AWS accounts of all units and then share the reserved EC2 instances amongst all units
D. Use AWS Cost Explorer to manage AWS accounts of all units and then share the reserved EC2 instances amongst all units
B. Use AWS Organizations to manage AWS accounts of all units and then share the reserved EC2 instances amongst all units
Explanation:
Use AWS Organizations to manage AWS accounts of all units and then share the reserved EC2 instances amongst all units
AWS Organizations helps you to centrally manage billing; control access, compliance, and security; and share resources across your AWS accounts. Using AWS Organizations, you can automate account creation, create groups of accounts to reflect your business needs, and apply policies for these groups for governance. You can also simplify billing by setting up a single payment method for all of your AWS accounts. AWS Organizations is available to all AWS customers at no additional charge.
Key Features of AWS Organizations: via - https://aws.amazon.com/organizations/
Incorrect options:
Use AWS Trusted Advisor to manage AWS accounts of all units and then share the reserved EC2 instances amongst all units - AWS Trusted Advisor is an online tool that provides you real-time guidance to help you provision your resources following AWS best practices on cost optimization, security, fault tolerance, service limits, and performance improvement. You cannot use Trusted Advisor to share the reserved EC2 instances amongst multiple AWS accounts.
How Trusted Advisor Works: via - https://aws.amazon.com/premiumsupport/technology/trusted-advisor/
Use AWS Cost Explorer to manage AWS accounts of all units and then share the reserved EC2 instances amongst all units - AWS Cost Explorer lets you explore your AWS costs and usage at both a high level and at a detailed level of analysis, and empowers you to dive deeper using several filtering dimensions (e.g., AWS Service, Region, Linked Account). You cannot use Cost Explorer to share the reserved EC2 instances amongst multiple AWS accounts.
Use AWS Systems Manager to manage AWS accounts of all units and then share the reserved EC2 instances amongst all units - Systems Manager provides a unified user interface so you can view operational data from multiple AWS services and allows you to automate operational tasks across your AWS resources. With Systems Manager, you can group resources, like Amazon EC2 instances, Amazon S3 buckets, or Amazon RDS instances, by application, view operational data for monitoring and troubleshooting, and take action on your groups of resources. You cannot use Systems Manager to share the reserved EC2 instances amongst multiple AWS accounts.
Which AWS services can be used to facilitate organizational change management, part of the Reliability pillar of AWS Well-Architected Framework? (Select three)
A. Amazon GuardDuty
B. Amazon CloudWatch
C. Amazon Inspector
D. AWS CloudTrail
E. AWS Trusted Advisor
F. AWS Config
B. Amazon CloudWatch
D. AWS CloudTrail
F. AWS Config
Explanation:
There are three best practice areas for Reliability in the cloud - Foundations, Change Management, Failure Management. Being aware of how change affects a system (change management) allows you to plan proactively, and monitoring allows you to quickly identify trends that could lead to capacity issues or SLA breaches.
AWS Config - AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. Config continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations.
How AWS Config Works: via - https://aws.amazon.com/config/
AWS CloudTrail - AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command-line tools, and other AWS services.
How CloudTrail Works: via - https://aws.amazon.com/cloudtrail/
Amazon CloudWatch - Amazon CloudWatch is a monitoring and observability service built for DevOps engineers, developers, site reliability engineers (SREs), and IT managers. CloudWatch provides data and actionable insights to monitor applications, respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health.
Incorrect options:
AWS Trusted Advisor - AWS Trusted Advisor is an online tool that provides you real-time guidance to help you provision your resources following AWS best practices on cost optimization, security, fault tolerance, service limits, and performance improvement.
Amazon Inspector - Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. Amazon Inspector automatically assesses applications for exposure, vulnerabilities, and deviations from best practices.
Amazon GuardDuty - Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect your AWS account. GuardDuty analyzes billions of events across your AWS accounts from AWS CloudTrail (AWS user and API activity in your accounts), Amazon VPC Flow Logs (network traffic data), and DNS Logs (name query patterns). GuardDuty works at the AWS account level rather than managing individual instances such as EC2, and it cannot be used to check for OS vulnerabilities.